Generative AI is changing the cybersecurity game

Powerful AI tools are changing the world of cybersecurity. But while businesses should embrace these innovations, it is critical that they do so responsibly

From privacy breaches to ransomware attacks, cybersecurity threats are a continuous challenge facing businesses across the globe.

Cybercriminals are constantly innovating their tactics, exploiting vulnerabilities, and breaching defences with alarming precision. Consequently, businesses must adopt a proactive approach – one that not only responds to attacks, but also anticipates and thwarts them before they materialise.

To help security teams contend with these relentless attacks, AI is becoming an essential ally.

AI's ability to analyse vast amounts of data, identify patterns and make intelligent decisions in real time has positioned it as a game-changer in the ongoing battle against cyber threats. And as foundation models continue to evolve, AI is a solution businesses should seriously consider for combatting these risks.

Firstly, as Hitesh Bansal, Country Head (UK & Ireland) – Cybersecurity & Risk Services at Wipro, explains, the best defence is a good offence: “Advanced AI now leverages existing protection technologies to build a logical layer within models to proactively protect data. For example, this can take the form of blocking traffic at the firewall level, before the threats compromise the boundaries of an organisation.” 
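
To make that idea concrete, here is a minimal sketch of how such a logical layer might sit in front of a host firewall: a model scores each inbound connection and high-risk sources are dropped before they cross the boundary. The scoring heuristic, threshold and iptables rule are illustrative assumptions, not Wipro's implementation.

```python
import subprocess

BLOCK_THRESHOLD = 0.9  # tune to trade false positives against missed threats

def score_connection(features: dict) -> float:
    """Stand-in for a trained model; returns a threat probability in [0, 1]."""
    # Illustrative heuristic only - a real deployment would call a trained model.
    return 0.95 if features.get("failed_logins", 0) > 10 else 0.1

def maybe_block(src_ip: str, features: dict) -> None:
    """Drop traffic from a high-risk source at the firewall (Linux, needs root)."""
    if score_connection(features) >= BLOCK_THRESHOLD:
        subprocess.run(
            ["iptables", "-I", "INPUT", "-s", src_ip, "-j", "DROP"],
            check=True,
        )

# maybe_block("203.0.113.7", {"failed_logins": 25})  # example call; requires root
```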

Next, Bansal explains, businesses must be able to accurately detect and identify the threats and risks they face. “By processing and analysing large data sets, AI can provide useful intelligence upfront to recognise potential threats; for example, spotting anomalies in correlated network activity, which helps engineers identify payloads in malicious code and malware.

“Finally, managing the response mechanisms in the event of cybersecurity threats is essential. More sophisticated AI- and machine learning-enabled playbooks and strategies are now being refined and developed. While SOAR (Security Orchestration, Automation and Response) as a concept is not new, enhanced AI now gives access to more data, analytics and, most importantly, the behavioural context that helps IT teams manage response mechanisms for each threat the business faces.”
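
The anomaly-spotting Bansal describes can be sketched in a few lines: an isolation forest is fitted to features of normal connections and flags traffic that deviates sharply, such as a sudden large outbound transfer. The features and synthetic data below are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" traffic. Columns: bytes sent, bytes received, duration (s)
normal_traffic = rng.normal(loc=[500.0, 2000.0, 30.0],
                            scale=[100.0, 400.0, 10.0],
                            size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A connection sending far more data than usual is scored as anomalous -
# the kind of correlated signal an engineer would then investigate
suspect = np.array([[50_000.0, 200.0, 600.0]])
print(model.predict(suspect))  # [-1] marks an outlier; [1] would be normal
```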
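Likewise, the AI-enriched playbooks he points to can be pictured as a simple routing step: an alert carries a model-supplied behavioural score, and the playbook chooses a response based on that context. The field names and actions in this sketch are hypothetical, not any specific SOAR product's API.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    threat_type: str        # e.g. "ransomware", "phishing"
    behaviour_score: float  # model-derived likelihood the activity is hostile
    host: str

def run_playbook(alert: Alert) -> str:
    """Route an alert to a response action using AI-supplied behavioural context."""
    if alert.behaviour_score < 0.5:
        return f"log only: low-confidence alert on {alert.host}"
    if alert.threat_type == "ransomware":
        return f"isolate {alert.host} from the network and snapshot its disks"
    if alert.threat_type == "phishing":
        return f"quarantine the messages and reset credentials used on {alert.host}"
    return f"escalate {alert.threat_type} on {alert.host} to the SOC"

print(run_playbook(Alert("ransomware", 0.92, "srv-db-01")))
```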

Using AI to enhance cybersecurity

AI can boost cybersecurity in a number of ways, comments Kunal Purohit, Chief Digital Services Officer at Tech Mahindra: enhancing threat detection, malware analysis, vulnerability assessment, automated response, data augmentation, adaptive defence strategies and user behaviour analysis.

“AI enables more proactive and effective defence measures against evolving cyber threats,” he explains. “And, with the advent of Generative AI, it is changing the game for cybersecurity, analysing massive quantities of risk data to speed response times and augment under-resourced security operations. It has massive potential to transform cybersecurity, including cloud, device, and even home security systems. By creating predictive models, generating simulated environments, and analysing large volumes of data, generative AI can help identify and respond to threats before they cause harm.”
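
One of the generative techniques Purohit mentions, simulated data for training detectors before a real attack lands, can be sketched as follows. Simple random perturbation stands in for a true generative model here, and the traffic features are illustrative assumptions.

```python
import random

random.seed(0)

# A known attack pattern observed in the wild (illustrative values)
base_attack = {"bytes_out": 50_000, "duration_s": 600, "dst_port": 443}

def synthesise_variants(template: dict, n: int = 5) -> list[dict]:
    """Jitter a known attack pattern to broaden a detector's training set."""
    return [{
        "bytes_out": int(template["bytes_out"] * random.uniform(0.5, 2.0)),
        "duration_s": int(template["duration_s"] * random.uniform(0.5, 2.0)),
        "dst_port": random.choice([443, 8443, 53]),
    } for _ in range(n)]

for variant in synthesise_variants(base_attack):
    print(variant)
```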

As Damien Duff, Senior Machine Learning Consultant at Daemon, explains, AI can also enhance cybersecurity through behaviour analytics. “AI can monitor user behaviour and identify any unusual or suspicious activities. By detecting these activities early on, AI can help prevent potential threats before they do any harm.

“On the other hand, AI algorithms can sometimes reflect the biases of their developers or the data they are trained on, leading to discriminatory outcomes. This can have serious consequences in the context of cybersecurity, resulting in false positives or false negatives.”
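
A minimal sketch of the behaviour analytics Duff describes might flag a login that deviates sharply from a user's history. The single feature and z-score threshold are simplifying assumptions, and, as his caveat about false positives suggests, a naive rule like this would need careful tuning in practice.

```python
from statistics import mean, stdev

# A user's recent login hours (24h clock) - their "normal" behaviour
login_hours = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]

def is_suspicious(hour: int, history: list[int], threshold: float = 3.0) -> bool:
    """Flag a login more than `threshold` standard deviations from the user's
    historical mean. Too tight a threshold floods analysts with false positives;
    too loose misses genuine intrusions."""
    mu, sigma = mean(history), stdev(history)
    return abs(hour - mu) / sigma > threshold

print(is_suspicious(3, login_hours))   # True: a 3am login is far outside the norm
print(is_suspicious(10, login_hours))  # False: within typical behaviour
```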

As Bansal adds, while businesses should certainly embrace AI innovation and the new tools it brings, they must do so responsibly. “There are some known challenges with the use of AI, including the technical, social and organisational complexity of AI applications, bias in data sets and the cost of integrating AI algorithms,” he notes.

“All of these risks and challenges must be fully evaluated when AI models are built. The National Institute of Standards and Technology’s (NIST) AI Risk Management Framework is a good example of how organisations can keep track of risks and put in place a plan to mitigate them. If these ideas are followed, then enterprises will benefit from the development of more trustworthy AI.”

Ensuring the ethical use of AI in security

The integration of AI into cybersecurity practices, however, raises important considerations regarding privacy, ethics, and the potential for adversarial exploitation. It is crucial to strike a delicate balance between leveraging AI's immense potential and ensuring responsible implementation to protect user privacy while maintaining ethical boundaries.

Ivana Bartoletti, Global Privacy Officer at Wipro, explains that, first of all, companies need to have the right governance in place to assess the development and deployment of AI systems: “Whether it is by creating a separate structure or building ethical AI into an existing governance structure, using AI presents risks that need to be identified and managed.” 

Developing these tools requires controls, training on how to label data and prepare the right datasheets, monitoring, and defining what fairness means, Bartoletti states: “When purchasing a tool from a provider, it is essential to assess that the product has followed due diligence and that its development is aligned with the corporate values of the business.

“Bias is never an easy thing; it can create harm as well as financial damage. Bias can emerge at many different points of the AI system life cycle, and data may not be the only reason. Aggregation, evaluation, and measurement bias are all dangerous, and these start with people. But it is also important for companies to protect their data. For example, secure data storage is important to avoid manipulation and tampering, which could lead to severe discriminatory issues in AI outputs later on.” 

As Purohit adds, ensuring the ethical use of AI in security is critical to maintaining the trust and confidence of customers and stakeholders. “Businesses should develop ethical principles that guide the use of AI in security. These principles and guidelines should align with the organisation's values and be based on internationally recognised ethical frameworks.

“Businesses must take a proactive approach to the ethical use of AI in security,” he concludes. “By developing ethical principles and guidelines, assessing and mitigating bias, and ensuring data privacy and security, businesses can use AI for security in a responsible and ethical manner.”
