Aug 7, 2020

Cybersecurity: do AI and Machine Learning make a difference?

Cybersecurity
Webroot
AI
Machine Learning
Matt Aldridge
3 min
Despite the confusion around AI and ML, most respondents planned to continue increasing spending on these technologies throughout 2020

We’ve recently marked the three-year anniversary of “WannaCry”, a powerful ransomware cyberattack which infected over 200,000 computers in 150 countries over the course of just a few days. It worked by first infecting a Windows computer, then encrypting files on the PC's hard drive, making them impossible for users to access, and demanding a ransom payment in bitcoin to decrypt them. WannaCry affected everyone from individuals to large organisations such as the NHS, Spanish telecoms giant Telefónica and FedEx, with losses estimated at up to $4 billion.

Although few are as successful or as devastating as “WannaCry”, there are still a huge number of cyberattacks generated by criminals each year. In 2019 alone, there were 9.9 billion malware attacks. That's simply too much volume for humans to handle.

Fortunately, technologies such as artificial intelligence (AI) and machine learning (ML) are picking up some of the slack. 

Machine learning is a subset of artificial intelligence that uses algorithms trained on previous datasets and statistical analysis to make assumptions about patterns of behaviour. The computer can then adjust its actions and perform functions for which it hasn’t been explicitly programmed.

With its ability to sort through millions of files and identify potentially hazardous ones, machine learning is a godsend for cybersecurity. It’s essential for uncovering threats and automatically squashing them before they can wreak havoc.
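To make the idea concrete, here is a deliberately simplified sketch of ML-based file triage. Everything in it is invented for illustration — the features (byte entropy, count of suspicious API imports), the sample values and the nearest-centroid approach are not how any particular vendor's engine works; real products use far richer feature sets and models:

```python
import math

# Invented per-file features for illustration: [byte entropy, suspicious API import count].
# High entropy often suggests a packed or encrypted payload.
benign = [[3.1, 0], [2.8, 1], [3.5, 0]]
malicious = [[7.9, 12], [7.5, 9], [7.8, 15]]

def centroid(samples):
    # Mean of each feature column across the labelled samples
    return [sum(col) / len(samples) for col in zip(*samples)]

def classify(features):
    """Nearest-centroid triage: label a file by whichever class centre it sits closer to."""
    d_benign = math.dist(features, centroid(benign))
    d_malicious = math.dist(features, centroid(malicious))
    return "malicious" if d_malicious < d_benign else "benign"

# A new, unseen file with high entropy and many suspicious imports
print(classify([7.7, 11]))  # -> malicious
```

The point of the sketch is the workflow, not the algorithm: the model generalises from labelled examples to files it has never seen, which is what lets automated triage keep pace with billions of attacks where manual review cannot.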

The rise of AI/ML in cybersecurity

In 2017, around the same time as the WannaCry attack, we surveyed IT decision makers across the United States and Japan on their use of AI and ML in cybersecurity, and discovered that approximately 74% of businesses in both regions were already using some form of AI or ML to protect their organisations from cyber threats.

And over the last several years, adoption has grown steadily among businesses. When we checked in again with both regions at the end of 2018, 73% of the respondents we surveyed reported that they planned to use even more AI/ML tools in the following year.

Fast forward to our most recent report, published this year, which surveyed 800 IT professionals with cybersecurity decision-making power across the US, UK, Japan and Australia/New Zealand: 96% of respondents now use AI/ML tools in their cybersecurity programs.

However, there were some findings that left us surprised.

A lack of understanding

Despite the increase in adoption rates for these technologies, our survey found that more than half of IT decision makers admitted they do not fully understand the benefits of these tools. Even more jarring was that nearly three quarters (74%) of IT decision makers worldwide really don’t care whether they’re using AI or ML, as long as the tools they use are effective in preventing attacks. 

This highlights the continued confusion and lack of knowledge regarding the use cases and capabilities of AI and machine learning-based cybersecurity tools, as well as a general distrust in their capabilities, based on how such tools are advertised by vendors.

Scepticism across geographies

Despite a small regional variance, the overall results of our survey also indicated a relatively consistent level of uncertainty across all geographies with respect to how much benefit AI/ML brings. 

This highlights that continued education and increased awareness of the benefits these technologies bring are crucial to ensuring businesses around the world become more resilient against cyberattacks and other IT challenges.

Preparing for the future

Despite the confusion around AI and ML, most respondents planned to continue increasing spending on these technologies throughout 2020.

For these organisations, it’s crucial that they improve their understanding in order to realise maximum value. 

By vetting and partnering with cybersecurity vendors who have long-standing experience using and developing AI/ML, and who can provide expert guidance, we expect businesses will be more likely to achieve the highest levels of cyber resilience, whilst efficiently maximising the capabilities of the human analysts on their teams.

By Matt Aldridge, Principal Solutions Architect, Webroot, an OpenText company 


Jun 21, 2021

ICO warns of privacy concerns on the use of LFR technology

Technology
ICO
LFR
cameras
3 min
Organisations need to justify that their use of live facial recognition (LFR) is fair, necessary and proportionate, says the Information Commissioner’s Office

Live facial recognition (LFR) technology should not be used simply because it is available and must be used for a specific purpose, the Information Commissioner’s Office (ICO) has warned.

“I am deeply concerned about the potential for live facial recognition (LFR) technology to be used inappropriately, excessively, or even recklessly. When sensitive personal data is collected on a mass scale without people’s knowledge, choice or control, the impacts could be significant,” said Elizabeth Denham, the UK’s Information Commissioner.

Denham explained that with any new technology, building public trust and confidence in the way people’s information is used is crucial so the benefits derived from the technology can be fully realised.

“It is not my role to endorse or ban a technology but, while this technology is developing and not widely deployed, we have an opportunity to ensure it does not expand without due regard for data protection,” Denham added.

The Information Commissioner’s Office has said it will work with organisations to ensure that the use of LFR is lawful, and that a fair balance is struck between their own purposes and the interests and rights of the public. It will also engage with government, regulators and industry, as well as international colleagues, to make sure data protection and innovation can continue to work hand in hand.

What is live facial recognition? 

Facial recognition is the process by which a person can be identified or recognised from a digital facial image. Cameras are used to capture these images, and FRT software measures and analyses facial features to produce a biometric template. This typically enables the user to identify, verify or categorise individuals.

Live facial recognition (LFR) is a type of FRT that allows this process to take place automatically and in real-time. LFR is typically deployed in a similar way to traditional CCTV in that it is directed towards everyone in a particular area rather than specific individuals. It can capture the biometric data of all individuals passing within range of the camera indiscriminately, as opposed to more targeted “one-to-one” data processing. This can involve the collection of biometric data on a mass scale and there is often a lack of awareness, choice or control for the individual in this process. 
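As a rough illustration of what matching against a biometric template involves, the sketch below compares a probe template against a small gallery using cosine similarity. The four-dimensional vectors, the names and the threshold are all invented for demonstration; real systems derive high-dimensional embeddings from trained face-recognition models, and the "one-to-many" search over everyone in camera range is precisely what distinguishes LFR from targeted one-to-one verification:

```python
import math

# Hypothetical biometric templates (invented 4-dimensional vectors for illustration)
gallery = {
    "person_a": [0.9, 0.1, 0.3, 0.2],
    "person_b": [0.1, 0.8, 0.5, 0.4],
}

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def identify(probe, threshold=0.95):
    """One-to-many search: return the best gallery match above a similarity threshold,
    or None if no stored template is close enough."""
    name, score = max(((n, cosine_similarity(probe, t)) for n, t in gallery.items()),
                      key=lambda pair: pair[1])
    return name if score >= threshold else None

# Template extracted from a face captured by the live camera feed
print(identify([0.88, 0.12, 0.31, 0.19]))  # -> person_a
```

In an LFR deployment this comparison runs automatically, in real time, against every face passing the camera — which is why the ICO stresses the scale and indiscriminate nature of the data collection involved.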


Why is biometric data particularly sensitive?

Biometrics are physical or behavioural human characteristics that can be used to digitally identify a person to grant access to systems, devices, or data. Biometric data extracted from a facial image can be used to uniquely identify an individual in a range of different contexts. It can also be used to estimate or infer other characteristics, such as their age, sex, gender, or ethnicity.

The security of biometric authentication data is vitally important — even more so than that of passwords, since passwords can easily be changed if they are exposed. A fingerprint or retinal scan, however, is immutable.

The UK courts have concluded that “like fingerprints and DNA [a facial biometric template] is information of an “intrinsically private” character.” LFR can collect this data without any direct engagement with the individual. Given that LFR relies on the use of sensitive personal data, the public must have confidence that its use is lawful, fair, transparent, and meets the other standards set out in data protection legislation.
