AI is not a replacement for cybersecurity practices
For the UK public sector, artificial intelligence (AI) can be used to reduce costs, improve efficiency, and complement traditional cybersecurity practices. However, it is important that AI not be viewed or deployed as a replacement for cybersecurity. The UK government is working hard to convey this message to public sector organisations in the interest of a stronger overall cybersecurity posture.
There are many reasons for this emergent interest in AI. Public sector bodies are dealing with enormous amounts of data and network traffic from many different sources, including on-premises and hosted infrastructures, and in some cases a combination of both. The sheer volume and variety of data sources make it near impossible for humans to sift through the masses of information, so managing security is not a task that can be handled exclusively by hand.
AI alleviates many of these challenges. Machines have the ability to automatically comb through copious amounts of information and detect suspicious behaviour. The more data these machines analyse, the more intelligent they become, and the better they are at noticing, or predicting, potential security breaches. This allows public sector IT managers to focus on other mission-critical tasks, and new and innovative technologies that will help advance their organisation’s agendas.
However, while AI offers a number of great benefits, it should not be considered as a replacement for human intervention, or a 1:1 replacement of existing network monitoring tools. Instead, it should be used to support the people and tools that organisations are already using to keep their networks safe.
The human factor is crucial
The cyberthreat landscape continues to change rapidly, and some aspects require human management now more than ever before. According to our US-based Federal Cybersecurity Survey, respondents indicated concern over a wide range of threat sources, ranging from foreign governments to hackers, terrorists, and beyond.
The biggest perceived threat comes from careless or untrained workers, with 54% of respondents listing this as their top concern. This emphasises why people still very much matter when it comes to cybersecurity. According to the report “The Cyber Threat to UK Business” by the UK National Crime Agency, people are a crucial component in cybersecurity and can be the strongest link. By giving staff, an organisation’s most powerful defence, the right training and tools, organisations stand a far better chance of preventing future compromises.
Even though machines and systems can be highly effective at detecting suspicious behaviour, businesses must continue to rely on their security managers to train employees on everything from potential attack techniques to the simple daily habits that help protect their networks. This is particularly important for public sector bodies that host extremely sensitive data.
AI can aid in preventing malicious or careless insiders from doing damage, but it is only one piece of a larger security plan. The automatic detection of suspicious activity and immediate alerts can help public sector IT managers to respond quickly to potential threats. It can also be used to fill in gaps resulting from a lack of human resources or security training, and significantly decrease the time it takes to analyse data against known Indicators of Compromise (IOCs). As such, AI can reduce Mean Time to Resolve (MTTR) from days to hours or even minutes.
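At its simplest, checking data against known IOCs means comparing observed network events to a list of known-bad indicators. The sketch below illustrates the idea only: the indicator values and log lines are invented for this example (the IP sits in an RFC 5737 documentation range) and do not come from any real threat feed or product.

```python
# Illustrative sketch of automated IOC matching. The indicators and log
# entries below are made up for demonstration purposes.
KNOWN_IOCS = {
    "198.51.100.23",                     # example "malicious" IP (documentation range)
    "badactor.example",                  # example "malicious" domain
    "44d88612fea8a8f36de82e1278abb02f",  # example file hash
}

def find_ioc_hits(log_entries):
    """Return (indicator, entry) pairs for entries containing a known IOC."""
    hits = []
    for entry in log_entries:
        for ioc in KNOWN_IOCS:
            if ioc in entry:
                hits.append((ioc, entry))
    return hits

logs = [
    "10:01 allow 10.0.0.5 -> 203.0.113.9:443",
    "10:02 allow 10.0.0.7 -> 198.51.100.23:80",
    "10:03 dns query badactor.example from 10.0.0.7",
]

for ioc, entry in find_ioc_hits(logs):
    print(f"ALERT: {ioc} seen in: {entry}")
```

The time saved comes from running comparisons like this continuously and at scale, rather than having an analyst review each log line by hand.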
Although AI can help in certain areas, humans will still be required to react to and implement those responses. It cannot be stressed enough that the human factor remains a critical piece of the cybersecurity puzzle.
Don’t underestimate traditional monitoring tools
Just as humans will continue to play an important role in network security in the age of AI, tools such as security information and event management (SIEM) systems, network configuration management, and user device monitoring programmes should remain a foundational element of any organisation’s security initiatives. These solutions supplement AI by extracting information from the constant noise, allowing public sector IT managers to focus on truly critical issues and pinpoint security threats.
Similar to AI tools, traditional network monitoring programmes have the ability to analyse and correlate huge amounts of data. They complement this ability with continuous monitoring of user activity and network devices, and provide automated threat intelligence alerts along with contextual information to help public sector IT managers act on that information. Indeed, our US-based survey indicated that these types of tools will continue to play a significant role in protecting networks. For example, 44% of respondents using some form of device protection solution reported being able to detect suspicious devices within minutes.
To summarise, AI is an extremely powerful, tool but it should not be used in isolation; rather, it should be one component of an organisation’s broader security strategy. While the fight against cyber threats is continuous, it can be strengthened by combining tried and tested solutions with human intelligence and AI.
Paul Parker, Chief Technologist of Federal & National Government at SolarWinds
ICO warns of privacy concerns on the use of LFR technology
“I am deeply concerned about the potential for live facial recognition (LFR) technology to be used inappropriately, excessively, or even recklessly. When sensitive personal data is collected on a mass scale without people’s knowledge, choice or control, the impacts could be significant,” said Elizabeth Denham, the UK’s Information Commissioner.
Denham explained that with any new technology, building public trust and confidence in the way people’s information is used is crucial so the benefits derived from the technology can be fully realised.
“It is not my role to endorse or ban a technology but, while this technology is developing and not widely deployed, we have an opportunity to ensure it does not expand without due regard for data protection,” Denham added.
The Information Commissioner’s Office has said it will work with organisations to ensure that the use of LFR is lawful, and that a fair balance is struck between those organisations’ own purposes and the interests and rights of the public. The ICO will also engage with Government, regulators and industry, as well as international colleagues, to make sure data protection and innovation can continue to work hand in hand.
What is live facial recognition?
Facial recognition is the process by which a person can be identified or recognised from a digital facial image. Cameras are used to capture these images, and facial recognition technology (FRT) software measures and analyses facial features to produce a biometric template. This typically enables the user to identify, authenticate or verify, or categorise individuals.
Live facial recognition (LFR) is a type of FRT that allows this process to take place automatically and in real-time. LFR is typically deployed in a similar way to traditional CCTV in that it is directed towards everyone in a particular area rather than specific individuals. It can capture the biometric data of all individuals passing within range of the camera indiscriminately, as opposed to more targeted “one-to-one” data processing. This can involve the collection of biometric data on a mass scale and there is often a lack of awareness, choice or control for the individual in this process.
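The “one-to-many” matching described above can be sketched as comparing one captured biometric template against every entry on a watchlist and accepting the closest match only if it is within a threshold. Everything in this sketch is an assumption for illustration: real systems derive templates from deep neural networks, whereas here a “template” is just a short list of numbers, and the watchlist names and threshold are invented.

```python
import math

# Hypothetical watchlist of reference templates (values are invented).
WATCHLIST = {
    "subject_a": [0.1, 0.9, 0.3],
    "subject_b": [0.8, 0.2, 0.5],
}
MATCH_THRESHOLD = 0.25  # assumed maximum distance to count as a match

def euclidean(a, b):
    """Euclidean distance between two equal-length templates."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_face(template, watchlist=WATCHLIST, threshold=MATCH_THRESHOLD):
    """One-to-many search: return the closest watchlist name, or None."""
    best_name, best_dist = None, float("inf")
    for name, reference in watchlist.items():
        dist = euclidean(template, reference)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

print(match_face([0.12, 0.88, 0.31]))  # a template close to subject_a
```

The privacy concern follows directly from this design: every passer-by’s template is computed and compared, whether or not they are on the watchlist.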
Why is biometric data particularly sensitive?
Biometrics are physical or behavioural human characteristics that can be used to digitally identify a person to grant access to systems, devices, or data. Biometric data extracted from a facial image can be used to uniquely identify an individual in a range of different contexts. It can also be used to estimate or infer other characteristics, such as their age, sex, gender, or ethnicity.
The security of biometric authentication data is vitally important, even more so than the security of passwords, since passwords can easily be changed if they are exposed. A fingerprint or retinal scan, however, is immutable.
The UK courts have concluded that “like fingerprints and DNA [a facial biometric template] is information of an ‘intrinsically private’ character.” LFR can collect this data without any direct engagement with the individual. Given that LFR relies on the use of sensitive personal data, the public must have confidence that its use is lawful, fair, transparent, and meets the other standards set out in data protection legislation.