Jun 19, 2020

AI in cybersecurity – should we believe the hype?

Cybersecurity
AI
Machine Learning
Martin Mackay
4 min
Is AI really the panacea that many in the industry are holding it up to be, or just another tool in an already broad arsenal?

Artificial Intelligence (AI), and in particular the field of Machine Learning (ML), has been causing a buzz in the cybersecurity community for some time now. In recent years, however, talk about the game-changing potential of the technology has reached fever pitch, and people are now questioning whether it is really the panacea that many in the industry hold it up to be, or just another tool in an already broad arsenal.

Last year, Gartner highlighted AI as one of its Top 10 Data and Analytics Technology Trends for 2019, while earlier this year Forbes hailed the technology as the “Future of Cybersecurity”.

Such beliefs are fast gaining traction on the ground among cybersecurity professionals too. A Capgemini Research Institute study of over 850 senior executives in IT info security, cybersecurity and IT operations found that:

  • Nearly two-thirds of execs don’t believe they can identify critical threats without AI
  • Three in five organisations say AI improves the accuracy and efficiency of cyber analysts
  • Around three-quarters of organisations are testing AI use cases

Clearly AI has its place in a robust cybersecurity defence. But are we overhyping its potential?

What should we expect from AI and ML?

AI and its associated fields of ML, Natural Language Processing and Robotic Process Automation may be modern industry buzzwords, but they are certainly not new in the world of cybersecurity.

The original spam filter is the earliest common example of machine learning for this purpose, dating back to the early 2000s. Over the years, the level of analysis undertaken by such tools has grown from filtering certain words to scanning URLs, domains, attachments and more.
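
To make that evolution concrete, here is a minimal, purely illustrative sketch of how such a rule-based filter might score a message on both suspicious keywords and the domains of embedded URLs. The word list, domain, weights and threshold below are invented for illustration and are not taken from any real product.

```python
import re

# Hypothetical keyword and domain weights -- illustrative values only.
SUSPICIOUS_WORDS = {"winner": 2.0, "free": 1.0, "urgent": 1.5, "password": 2.5}
SUSPICIOUS_DOMAINS = {"example-phish.test": 3.0}

URL_PATTERN = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def spam_score(message: str) -> float:
    """Sum the weights of flagged words and of domains found in embedded URLs."""
    text = message.lower()
    score = sum(weight for word, weight in SUSPICIOUS_WORDS.items() if word in text)
    for domain in URL_PATTERN.findall(message):
        score += SUSPICIOUS_DOMAINS.get(domain.lower(), 0.0)
    return score

if __name__ == "__main__":
    msg = "URGENT: you are a winner! Claim at http://example-phish.test/claim"
    print(spam_score(msg))        # 6.5
    print(spam_score(msg) > 3.0)  # True -> flag as spam
```

Modern filters replace hand-set weights like these with weights learned from millions of labelled messages, but the basic idea of scoring many weak signals and thresholding the total is the same.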

But it is the latest developments in AI that are catching the industry’s attention. And with good reason.

AI is making great strides, aiding defence against a range of threat vectors, with fraud detection, malware detection, intrusion detection, risk scoring and user/machine behavioural analysis being the top five use cases.

And such uses are more common than you may think. Capgemini research found that over half of enterprises have already implemented at least five high-impact use cases.

All of which goes to show that when we ask whether we should believe the hype, we are not questioning AI or ML’s worth as a tool in cybersecurity defence. Rather, we are questioning whether treating it as a silver bullet could do more harm than good. After all, if the discussion in the boardroom revolves around the deployment of AI for enhanced protection, there is a risk that complacency sets in around protection against new threat vectors.

For all its merits, AI does not offer a catch-all solution. AI may be able to carry out deeper analysis in much faster timescales than humans, but we are a long way from it becoming the first, last and only line of defence.

It’s important that we see AI as a tool to assist cybersecurity teams in their work and not as a replacement for human intervention, as it is when human and machine techniques are applied together that cyber defences are most robust.

A recent study from the Massachusetts Institute of Technology (MIT) found that a combination of human expertise and machine learning systems – what it calls “supervised machine learning” – is much more effective than humans or ML alone. The supervised model performed 10 times better than the ML-only equivalent.
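
As a rough illustration of that human-in-the-loop idea (not a reconstruction of MIT’s actual system), the sketch below trains a simple classifier on synthetic event data, has a simulated analyst label the highest-scoring unreviewed events each round, and feeds those labels back into the next round of training. All data, feature counts and numbers here are made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "security events": 500 events with 5 numeric features each.
# Hidden ground-truth labels stand in for real attacks.
events = rng.normal(size=(500, 5))
true_labels = (events[:, 0] + events[:, 1] > 1.5).astype(int)

# Seed the system with a small set of analyst-labelled events (both classes).
labelled = list(np.where(true_labels == 1)[0][:10]) + list(np.where(true_labels == 0)[0][:10])

model = LogisticRegression()
for day in range(5):
    # Retrain on everything the analyst has confirmed so far.
    model.fit(events[labelled], true_labels[labelled])
    scores = model.predict_proba(events)[:, 1]

    # Surface the top-scoring unreviewed events; the "analyst" (here, the
    # ground truth) confirms or rejects them, growing the labelled set.
    unreviewed = [i for i in np.argsort(-scores) if i not in labelled]
    labelled.extend(unreviewed[:10])

print(f"Events reviewed by the analyst after 5 rounds: {len(labelled)}")
```

The point is the division of labour: the model ranks a flood of events, the human judges only the handful it flags most strongly, and every judgement makes the next round of ranking better.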

Man and machine: working alongside AI

The MIT study cuts to the heart of how AI technology fits into cyber defence. It is a powerful tool when it comes to spotting and stopping a range of cyberattacks, but it alone is not enough.

AI has great potential when it comes to identifying common threats, but it can only defend effectively against the modern threat landscape with human assistance. For example, an ML system may be able to identify and nullify a threat contained in a malicious link or attachment, but it is much less effective at protecting against social engineering attacks such as Business Email Compromise (BEC).

For all its advancements, ML still struggles to analyse nuance and the idiosyncrasies of human behaviour, which can result in missed threats as well as a high rate of false positives.

Why does this matter? Because today’s cyber-threat actors have switched their attacks from infrastructure and networks to people: employees, often unwittingly, remain the enterprise’s main point of vulnerability, and a people-centric approach to security is critical.

And just as AI and ML should not be considered a replacement for human expertise, nor should we expect either to supersede current cybersecurity technologies. Outside of ML, techniques such as static analysis, dynamic behavioural analysis and protocol analysis will continue to have their place.

A good cyber defence must be as broad as it is deep. This means creating a security-first culture through training and education and arming your teams with robust defence techniques alongside the best possible protection.

So, should we believe the hype? As far as AI being a powerful tool that can bolster our cyber defences – yes. But as a single cure for all that ails us? Absolutely not.

By Martin Mackay, SVP, EMEA at Proofpoint


Jun 11, 2021

Google AI Designs Next-Gen Chips In Under 6 Hours

Google
AI
Manufacturing
semiconductor
3 min
Google AI’s deep reinforcement learning algorithms can optimise chip floor plans dramatically faster than their human counterparts

In a Google paper published in Nature on Wednesday, the company announced that AI can design chips in less than six hours, a task that currently takes human engineers months of designing and laying out the intricate chip wiring. Although the tech giant has been working on the technology in silence for years, this is the first time that AI-optimised chips have hit the mainstream, and the first time the company will sell the result as a commercial product.

“Our method has been used in production to design the next generation of Google TPU (Tensor Processing Unit) chips”, the paper’s authors, Azalia Mirhoseini and Anna Goldie, wrote. The TPU v4 chips are the fastest Google system ever launched. “If you’re trying to train a large AI/ML system, and you’re using Google’s TensorFlow, this will be a big deal”, said Jack Gold, President and Principal Analyst at J.Gold Associates.

Training the Algorithm 

In a process called reinforcement learning, Google engineers used a set of 10,000 chip floor plans to train the AI. Each example chip was assigned a score of sorts based on its efficiency and power usage, which the algorithm then used to distinguish between “good” and “bad” layouts. The more layouts it examines, the better it becomes at generating versions of its own.
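
The snippet below is a deliberately tiny illustration of that scoring idea, not of Google’s actual method: a “floor plan” is just a set of block positions on a grid, its score is the total wire length between connected blocks, and a crude random search keeps any change that improves the score. Google’s system instead learns a placement policy with deep reinforcement learning from thousands of scored examples; the netlist, grid size and iteration count here are invented.

```python
import random

# Toy "floor plan": one (x, y) grid position per chip block.
# Toy netlist: pairs of block indices that must be wired together.
NETLIST = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
GRID = 8          # blocks are placed on an 8x8 grid
N_BLOCKS = 4

def wirelength(plan):
    """Total Manhattan distance of all connections: a stand-in for the
    efficiency/power score described above (lower is better)."""
    return sum(abs(plan[a][0] - plan[b][0]) + abs(plan[a][1] - plan[b][1])
               for a, b in NETLIST)

def random_plan():
    return [(random.randrange(GRID), random.randrange(GRID)) for _ in range(N_BLOCKS)]

# Crude improvement loop: propose a small change, keep it if the score improves.
plan = random_plan()
best = wirelength(plan)
for _ in range(2000):
    candidate = list(plan)
    i = random.randrange(N_BLOCKS)
    candidate[i] = (random.randrange(GRID), random.randrange(GRID))
    if wirelength(candidate) < best:
        plan, best = candidate, wirelength(candidate)

print("best layout:", plan, "total wirelength:", best)
```

What a learned agent adds over a blind search like this is generalisation: having seen thousands of scored layouts, it can place blocks well on a chip it has never encountered, rather than starting every design from scratch.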

Designing floor plans, or the optimal layouts for a chip’s sub-systems, takes intense human effort. Yet floorplanning is similar to an elaborate game. It has rules, patterns, and logic. In fact, just like chess or Go, it’s the ideal task for machine learning. Machines, after all, don’t follow the same constraints or in-built conditions that humans do; they follow logic, not preconceptions of what a chip should look like. And this has allowed AI to optimise the latest chips in a way we never could.

As a result, AI-generated layouts look quite different to what a human would design. Instead of being neat and ordered, they look slightly more haphazard. Blurred photos of the carefully guarded chip designs show a slightly more chaotic wiring layout—but no one is questioning its efficiency. In fact, Google is starting to evaluate how it could use AI in architecture exploration and other cognitively intense tasks. 

Major Implications for the Semiconductor Sector 

Part of what’s impressive about Google’s breakthrough is that it could throw Moore’s Law, the axiom that the number of transistors on a chip doubles roughly every two years, out the window. The physical difficulty of squeezing more CPUs, GPUs, and memory onto a tiny silicon die will still exist, but AI optimisation may help speed up chip performance.

Any chance that AI can help speed up current chip production is welcome news. Though the U.S. Senate recently passed a US$52bn bill to supercharge domestic semiconductor supply chains, the country’s largest tech firms remain far behind. According to Holger Mueller, principal analyst at Constellation Research, “the faster and cheaper AI will win in business and government, including with the military”.

All in all, AI chip optimisation could allow Google to pull ahead of its competitors such as AWS and Microsoft. And if we can speed up workflows, design better chips, and use humans to solve more complex, fluid, wicked problems, that’s a win—for the tech world and for society. 
