May 17, 2020

97% of risk pros believe unsecured IoT could facilitate cyber attacks

Ponemon Institute
IoT
Shared Assessments Program
Gartner
Jonathan Dyble
2 min

A collaborative report from the Ponemon Institute and the Shared Assessments Program has revealed that 97% of risk professionals are worried that unsecured IoT devices could result in a significant cyber attack and pose a threat to many businesses.

Based on a survey of 605 professionals, the report found that the number of IoT devices in the workplace is expected to increase from 15,874 devices in 2017 to 24,762 this year.


However, the growing uptake of IoT devices is likely to lead to a further increase in the number of IoT attacks: a separate report from Gartner revealed that 20% of organizations have been subject to one or more IoT-based attacks within the past three years.

“The rapid adoption of IoT devices and applications is not slowing down and organizations need to have a clear understanding of the risks these devices pose both inside their own and outside their extended networks,” said Charlie Miller, Senior Vice President of the Shared Assessments Program.

With the aim of keeping devices secure and curbing these threats, Gartner predicts that IoT security spending will reach $1.5bn this year, up from $1.2bn in 2017, with regulatory compliance acting as a key driver.

“In IoT initiatives, organizations often don't have control over the source and nature of the software and hardware being utilized by smart connected devices,” said Ruggero Contu, Research Director at Gartner.

“We expect to see demand for tools and services aimed at improving discovery and asset management, software and hardware security assessment, and penetration testing.”


Jun 21, 2021

ICO warns of privacy concerns on the use of LFR technology

Technology
ICO
LFR
cameras
3 min
Organisations need to justify that their use of live facial recognition (LFR) is fair, necessary, and proportionate, says the Information Commissioner’s Office

Live facial recognition (LFR) technology should not be used simply because it is available and must be used for a specific purpose, the Information Commissioner’s Office (ICO) has warned.

“I am deeply concerned about the potential for live facial recognition (LFR) technology to be used inappropriately, excessively, or even recklessly. When sensitive personal data is collected on a mass scale without people’s knowledge, choice or control, the impacts could be significant,” said Elizabeth Denham, the UK’s Information Commissioner.

Denham explained that with any new technology, building public trust and confidence in the way people’s information is used is crucial so the benefits derived from the technology can be fully realised.

“It is not my role to endorse or ban a technology but, while this technology is developing and not widely deployed, we have an opportunity to ensure it does not expand without due regard for data protection,” Denham added.

The Information Commissioner’s Office has said it will work with organisations to ensure that their use of LFR is lawful and that a fair balance is struck between their own purposes and the interests and rights of the public. It will also engage with Government, regulators and industry, as well as international colleagues, to make sure data protection and innovation can continue to work hand in hand.
 

What is live facial recognition? 

Facial recognition is the process by which a person can be identified or recognised from a digital facial image. Cameras are used to capture these images and facial recognition technology (FRT) software measures and analyses facial features to produce a biometric template. This typically enables the user to identify, authenticate or verify, or categorise individuals.
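For illustration only, the sketch below shows the template-and-match idea in its simplest form. The names extract_template and similarity, and the 0.6 threshold, are assumptions made here for the example; they are not part of any specific FRT product or of the ICO's guidance.

```python
# Minimal sketch of a biometric template pipeline (assumed names, not a real product).
import numpy as np

def extract_template(face_image: np.ndarray) -> np.ndarray:
    """Stand-in for a real FRT model that maps a face image to a biometric template."""
    raise NotImplementedError("replace with a real feature extractor")

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two templates; higher means more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe_image: np.ndarray, enrolled_template: np.ndarray,
           threshold: float = 0.6) -> bool:
    """One-to-one check: does this face match a single claimed identity?"""
    return similarity(extract_template(probe_image), enrolled_template) >= threshold
```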

Live facial recognition (LFR) is a type of FRT that allows this process to take place automatically and in real-time. LFR is typically deployed in a similar way to traditional CCTV in that it is directed towards everyone in a particular area rather than specific individuals. It can capture the biometric data of all individuals passing within range of the camera indiscriminately, as opposed to more targeted “one-to-one” data processing. This can involve the collection of biometric data on a mass scale and there is often a lack of awareness, choice or control for the individual in this process. 
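To contrast with the one-to-one verify() check above, a rough one-to-many sketch compares every face a camera sees against every enrolled template, which is what makes LFR indiscriminate by design. The watchlist structure and the reuse of the helpers from the previous sketch are assumptions for illustration, not a description of any deployed system.

```python
# Illustrative one-to-many matching: every passer-by is compared against every
# watchlist entry. Reuses extract_template() and similarity() from the sketch above.
from typing import Dict, Iterable, List
import numpy as np

def identify(faces_in_frame: Iterable[np.ndarray],
             watchlist: Dict[str, np.ndarray],
             threshold: float = 0.6) -> List[str]:
    """Return the names of any watchlist entries matched by faces in one camera frame."""
    matches: List[str] = []
    for face in faces_in_frame:                    # everyone in range, not chosen subjects
        probe = extract_template(face)
        for name, enrolled in watchlist.items():   # one-to-many comparison
            if similarity(probe, enrolled) >= threshold:
                matches.append(name)
    return matches
```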

 

Why is biometric data particularly sensitive?

Biometrics are physical or behavioural human characteristics that can be used to digitally identify a person to grant access to systems, devices, or data. Biometric data extracted from a facial image can be used to uniquely identify an individual in a range of different contexts. It can also be used to estimate or infer other characteristics, such as their age, sex, gender, or ethnicity.

The security of biometric authentication data is vitally important, even more so than the security of passwords, since passwords can easily be changed if they are exposed. A fingerprint or retinal scan, however, is immutable.

The UK courts have concluded that “like fingerprints and DNA [a facial biometric template] is information of an ‘intrinsically private’ character.” LFR can collect this data without any direct engagement with the individual. Given that LFR relies on the use of sensitive personal data, the public must have confidence that its use is lawful, fair, transparent, and meets the other standards set out in data protection legislation.
