Jun 19, 2020

AI in cybersecurity – should we believe the hype?

Cybersecurity
AI
Machine Learning
Martin Mackay
4 min
Is AI really the panacea that many in the industry are holding it up to be, or just another tool in an already broad arsenal?

Artificial Intelligence (AI) and in particular the field of Machine Learning (ML) have been causing a buzz in the cybersecurity community for some time now. In recent years, however, talk about the game-changing potential of the technology has reached fever pitch, and people are now questioning whether it is really the panacea that many in the industry are holding it up to be, or just another tool in an already broad arsenal.

Last year, Gartner highlighted AI as one of its Top 10 Data and Analytics Technology Trends for 2019, while earlier this year Forbes hailed the technology as the “Future of Cybersecurity”.

Such beliefs are fast gaining traction on the ground among cybersecurity professionals too. A Capgemini Research Institute study of over 850 senior executives in IT information security, cybersecurity and IT operations found that:

  • Nearly two-thirds of execs don’t believe they can identify critical threats without AI
  • Three in five organisations say AI improves the accuracy and efficiency of cyber analysts
  • Around three-quarters of organisations are testing AI use cases

Clearly AI has its place in a robust cybersecurity defence. But are we overhyping its potential?

What should we expect from AI and ML?

AI and its associated fields of ML, Natural Language Processing and Robotic Process Automation may be modern industry buzzwords, but they are certainly not new in the world of cybersecurity.

The original spam filter is the earliest common example of machine learning for this purpose, dating back to the early 2000s. Over the years, the level of analysis undertaken by such tools has grown from filtering certain words to scanning URLs, domains, attachments and more.
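
To make that evolution concrete, here is a minimal sketch of the word-frequency filtering those early tools relied on, expressed as a naive Bayes text classifier. The scikit-learn tooling and the tiny corpus are illustrative assumptions, not anything specific to the products the industry uses today.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; a real filter trains on many thousands of labelled messages.
emails = [
    "win a free prize now, click here",
    "limited offer, claim your reward today",
    "meeting moved to 3pm, agenda attached",
    "quarterly report draft for your review",
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features feeding a naive Bayes classifier – the classic spam-filter recipe.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["click here to claim your free reward"]))  # -> ['spam']
```

The same idea scales to the richer signals mentioned above: URLs, domains and attachment metadata simply become additional features.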

But it is the latest developments in AI that are catching the industry’s attention. And with good reason.

AI is making great strides, aiding in the defence of a range of threat vectors, with fraud detection, malware detection, intrusion detection, risk scoring and user/machine behavioural analysis being the top five use cases.
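
As a hedged illustration of the behavioural-analysis and risk-scoring use cases in that list, the sketch below fits an anomaly detector to hypothetical per-user activity features and scores new sessions against them. The feature choices and the IsolationForest model are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-user features: logins per day, MB downloaded, distinct source countries.
baseline_activity = np.array([
    [8, 120, 1],
    [10, 150, 1],
    [9, 130, 1],
    [7, 110, 1],
    [11, 160, 2],
])

# Fit an anomaly detector on "known good" behaviour, then score new sessions against it.
detector = IsolationForest(contamination=0.1, random_state=0).fit(baseline_activity)

new_sessions = np.array([
    [9, 140, 1],     # looks routine
    [60, 5000, 4],   # bulk download from unusual locations
])
print(detector.decision_function(new_sessions))  # lower score = more anomalous
print(detector.predict(new_sessions))            # 1 = normal, -1 = flag for review
```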

And such uses are more common than you may think. Capgemini research found that over half of enterprises have already implemented at least five high-impact use cases.

All of which goes to show that when we ask whether we should believe the hype, we are not questioning AI or ML’s worth as a tool in cybersecurity defence. Rather, we are questioning whether treating it as a silver bullet could do more harm than good. After all, if the boardroom discussion revolves around deploying AI for enhanced protection, there is a risk that complacency about new threat vectors sets in.

For all its merits, AI does not offer a catch-all solution. AI may be able to carry out deeper analysis in much faster timescales than humans, but we are a long way from it becoming the first, last and only line of defence.

It’s important that we see AI as a tool to assist cybersecurity teams in their work, not as a replacement for human intervention – it is when human and machine techniques are applied together that cyber defences are most robust.

A recent study from the Massachusetts Institute of Technology (MIT) found that a combination of human expertise and machine learning systems – what it calls “supervised machine learning” – is much more effective than humans or ML alone. The supervised model performed 10 times better than the ML-only equivalent.
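
A rough sketch of that analyst-in-the-loop pattern is shown below: an unsupervised score surfaces the most unusual events for human review, and the analyst’s verdicts become labels that train a supervised model for future triage. The data, features and models here are hypothetical, and the MIT system itself is far more sophisticated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature vectors for security events (e.g. bytes out, failed logins, hour of day).
rng = np.random.default_rng(0)
events = rng.normal(size=(200, 3))

# Stage 1: an unsupervised anomaly score surfaces the most unusual events for review.
anomaly_score = np.abs(events).sum(axis=1)
review_queue = np.argsort(anomaly_score)[-30:]

# Stage 2: an analyst's verdicts on that small queue (simulated here) become training labels.
analyst_labels = (events[review_queue, 1] > 0).astype(int)  # stand-in for "confirmed malicious"

# Stage 3: a supervised model learns from those verdicts and re-ranks future events,
# so each round of analyst feedback sharpens the automated triage.
clf = LogisticRegression().fit(events[review_queue], analyst_labels)
print(clf.predict_proba(events[:5])[:, 1])  # revised maliciousness estimates
```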

Man and machine: working alongside AI

The MIT study cuts to the heart of how AI technology fits into cyber defence. It is a powerful tool when it comes to spotting and stopping a range of cyberattacks, but it alone is not enough.

AI has great potential when it comes to identifying common threats but can only effectively defend against the modern threat landscape with the aid of human assistance. For example, an ML system may be able to identify and nullify a threat contained in a malicious link or attachment, but it is much less effective at protecting against social engineering attacks such as Business Email Compromise (BEC).

For all its advancements, ML is still poorly suited to analysing nuance and the idiosyncrasies of human behaviour – which can result in missed threats as well as a high rate of false positives.

Why does this matter? Because today’s cyber-threat actors have switched their attacks from infrastructure and networks to people: unwitting employees remain the enterprise’s point of vulnerability, and a people-centric approach to security is critical.

And just as AI and ML should not be considered a replacement for human expertise, nor should we expect either to supersede current cybersecurity technologies. Outside of ML, techniques such as static analysis, dynamic behavioural analysis and protocol analysis will continue to have their place.
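
As a sketch of how those layers might sit together, the example below runs a few deterministic static checks alongside an ML-derived probability, with neither layer deciding alone. The rules, thresholds and function names are hypothetical and chosen only to show the shape of a layered verdict.

```python
SUSPICIOUS_TLDS = (".zip", ".xyz", ".top")           # illustrative watch-list
LOOKALIKE_DOMAINS = {"paypa1.com", "micros0ft.com"}  # illustrative lookalike domains

def static_checks(sender_domain, urls, attachment=None):
    """Cheap deterministic rules of the kind static analysis still provides."""
    findings = []
    if attachment and attachment.lower().endswith((".exe", ".js", ".vbs")):
        findings.append("executable attachment")
    if any(url.lower().endswith(SUSPICIOUS_TLDS) for url in urls):
        findings.append("suspicious link domain")
    if sender_domain.lower() in LOOKALIKE_DOMAINS:
        findings.append("lookalike sender domain")
    return findings

def verdict(ml_score, findings):
    """Combine an ML probability with rule hits; neither layer decides on its own."""
    if findings and ml_score > 0.5:
        return "quarantine"
    if findings or ml_score > 0.9:
        return "flag for analyst review"
    return "deliver"

print(verdict(0.72, static_checks("paypa1.com", ["http://update.xyz"], "invoice.exe")))
# -> quarantine
```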

A good cyber defence must be as broad as it is deep. This means creating a security-first culture through training and education and arming your teams with robust defence techniques alongside the best possible protection.

So, should we believe the hype? As far as AI being a powerful tool that can bolster our cyber defences – yes. But as a single cure for all that ails us? Absolutely not.

By Martin Mackay, SVP, EMEA at Proofpoint


May 7, 2021

AI Shows its Value; Governments Must Unleash its Potential

AI
Technology
digitisation
Digital
His Excellency Omar bin Sultan Al Olama
4 min
His Excellency Omar bin Sultan Al Olama talks us through artificial intelligence's progress and potential for practical deployment in the workplace.

2020 revealed just how far AI technology has come, as it achieved fresh milestones in the fight against Covid-19. Google’s DeepMind helped predict the protein structure of the virus; AI-driven infectious disease tracker BlueDot spotted the novel coronavirus nine days before the World Health Organisation (WHO) first sounded the alarm. Just a decade ago, these feats were unfathomable.

Yet, we have only just scratched the surface of AI’s full potential. And it can’t be left to develop on its own. Governments must do more to put structures in place to advance the responsible growth of AI. They have a dual responsibility: fostering environments that enable innovation while ensuring the wider ethical and social implications are considered.

It is this balance that we are trying to achieve in the United Arab Emirates (UAE) to ensure government accelerates, rather than hinders, the development of AI. Just as every economy is transitioning at the moment, we see innovation as vital to realising our vision for a post-oil economy. Our work in this space has highlighted three barriers in the government approach when it comes to realising AI’s potential.

First, addressing the issue of ignorance 

While much time is dedicated to talking about the importance of AI, there simply isn’t enough understanding of where it’s useful and where it isn’t. There are a lot of challenges to rolling out AI technologies, both practically and ethically. However, those enacting the policies too often don’t fully understand the technology and its implications. 

The Emirates is not exempt from this ignorance, but it is an issue we have been trying to address. Over the last few years, we have been running an AI diploma in partnership with Oxford University, teaching government officials the ethical implications of AI deployment. Our ambition is for every government ministry to have a diploma graduate, as it is essential to ensure policy decision-making is informed. 

Second, moving away from the theoretical

While this grounding in the moral implications of AI is critical, it is important to go beyond the theoretical. It is vital that experimentation in AI is allowed to happen for its own sake, rather than letting ethical concerns stymie innovations that don’t yet exist. Indeed, many of these concerns – while well-founded – are only borne out in the practical deployment of these end-use cases and can’t be meaningfully discussed on paper.

If you take facial recognition as an example, looking at the issue in the abstract quickly leads to discussions over privacy concerns, with potential surveillance and intrusion by private companies or the authorities.

But what about other applications of computer vision? Although they are part of the same field, the same moral quandaries do not arise, and the technology is already bearing fruit. In 2018, we developed an algorithmic solution that can be used in the detection and diagnosis of tuberculosis from chest X-rays. You can upload any image of a chest X-ray, and the system will identify whether a person has the disease. Laws and regulations must be tailored to the unique use cases of AI, rather than lumping disparate fields together.
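
To give a sense of what such a computer-vision system involves, here is a minimal transfer-learning sketch for classifying chest X-ray images with a pretrained ResNet in PyTorch. It is purely illustrative: the model, labels and preprocessing are assumptions, and this is not the code behind the system described above.

```python
import torch
from torch import nn
from torchvision import models, transforms

# Pretrained image backbone with its final layer swapped for two classes (TB vs. clear).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()  # inference mode; in practice the new head would first be fine-tuned on labelled X-rays

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def predict(xray):
    """Return class probabilities for a PIL chest X-ray image."""
    batch = preprocess(xray.convert("RGB")).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    return {"tb": probs[0, 1].item(), "clear": probs[0, 0].item()}
```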

To create this culture of experimentation, we launched the RegLab. It provides a safe and flexible legislative ecosystem that supports the utilisation of future technologies. This means we can actually see AI in practice before determining appropriate regulation, not the other way around. Regulation is vital to cap any unintended negative consequences of AI, but it should never come at the expense of innovation.

Finally, understanding the knock-on effects of AI

There needs to be a deeper, more nuanced understanding of AI’s wider impact. It is too easy to think the economic benefits and efficiency gains of AI must also come with negative social implications, particularly concern over job loss. 

But with the right long-term government planning, it’s possible to have one without the other; to maximise the benefits and mitigate potential downsides. If people are appropriately trained in how to use or understand AI, the result is a future workforce capable of working alongside these technologies for the better – just as computers complement most people’s work today.

We have started this training as early as possible in the Emirates. Through our Ministry of Education, we have rolled out an education programme to start teaching children about AI from as young as five years old. This includes coding skills and ethics, and we are carrying this right through to higher education with the Mohamed bin Zayed University of Artificial Intelligence set to welcome its first cohort in January. We hope to create future generations of talent that can work in harmony with AI for the betterment of society, not to its detriment.

AI will inevitably become more pervasive in society, digitisation will continue in the wake of the pandemic, and in time we will see AI’s prominence grow. But governments have a responsibility to society to ensure that this growth is matched with the appropriate understanding of AI’s impacts. We must separate the hype from the practical solutions, and we must rigorously interrogate AI deployment to ensure that it is used to enhance our existence. If governments can overcome these challenges and create the environments for AI to flourish, then we have a very exciting future ahead of us.
