Keeping eyes on the road: the role of computer vision
Computer vision is pretty much exactly what you think: a field of artificial intelligence (AI) that gives computers the ability to see, observe and understand, enabling computers and systems to derive meaningful information from digital images, videos and other visual inputs.
“Computer vision tries to understand, from a physiological sense, how our brains are able to perceive our visual world. Among the most popular and effective glues connecting these two fields are machine learning techniques, which encode the act of learning – and eventually understanding – into computer algorithms,” explains Appu Shaji, Mobius Labs CEO and Chief Scientist.
“Computer vision technology has a role to play in nearly every imaginable walk of life. In the media sector, the technology can not only detect the content of an image but also grade the style and quality of the visuals. The aesthetic score can be determined in a couple of seconds, helping marketing, advertising or editorial departments to select the most pleasing photographs. The technology can also scrutinise thousands of video clips to provide relevant recommendations, plus flag and/or block inappropriate content, and it can be trained to match influencers with brands to grow new client bases.”
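As a rough illustration of the first capability Shaji mentions – labelling the content of an image – the sketch below runs a general-purpose pretrained classifier from torchvision over a single photo. It is not Mobius Labs' system, the file name is hypothetical, and aesthetic scoring would require a separately trained model.

```python
# Minimal sketch: labelling the content of a single image with a
# general-purpose pretrained classifier (torchvision's ResNet-50 on ImageNet).
import torch
from torchvision.io import read_image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

image = read_image("photo.jpg")          # hypothetical input image
batch = preprocess(image).unsqueeze(0)   # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top = probs.argmax().item()
print(f"{weights.meta['categories'][top]}: {probs[top].item():.1%}")
```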
In the automotive industry, computer vision is proving its worth as manufacturers develop the autonomous technology behind the next generation of self-driving cars. Many companies, including Tesla, Uber, Baidu and Waymo, have launched ambitious initiatives and even started testing autonomous vehicles on select public roads.
The push for autonomous vehicles is only accelerating: according to CB Insights, funding for autonomous vehicle (AV) companies surpassed US$12bn in 2021, a more than 50% increase on 2020.
From self-driving cars to autonomous vehicles: the benefits of computer vision
Introduced by General Motors in 1939, the first concept of a self-driving vehicle was a radio-controlled electric vehicle. Since then, self-driving vehicles have undergone a complete transformation and have now become autonomous, too.
Operating through a combination of cameras, radar and other sensors whose data is interpreted by artificial intelligence, autonomous vehicles need no human intervention. In some cases, these AI-powered vehicles are still in the development stage, with many developers looking to computer vision as a means of making them more reliable.
“When driving a car, it’s essential that we accurately see and interpret the information around us. Over many years, computer vision has developed techniques that enable the capture and processing of video to automate tasks. This technology has created the foundations for AI-based decision making in self-driving cars,” says Gilberto Rodriguez, Director of Product Management at Imagination Technologies.
As a core technology for autonomous vehicles, computer vision pairs object detection algorithms with advanced cameras and sensors so a car can analyse its surroundings in real time, recognising pedestrians, road signs, barriers and other vehicles in order to navigate the road safely.
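As a rough illustration of the kind of object detection step described above – not the specific stack used by any of the manufacturers mentioned – the sketch below runs a general-purpose pretrained detector from torchvision over a single camera frame and keeps only the high-confidence detections. The input file name and the 0.8 confidence threshold are assumptions for the example.

```python
# Minimal sketch: object detection on one camera frame with a general-purpose
# pretrained model (torchvision's Faster R-CNN trained on COCO). Production AV
# stacks use far more specialised models and additional sensor inputs.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

frame = read_image("camera_frame.jpg")   # hypothetical input image
batch = [preprocess(frame)]

with torch.no_grad():
    detections = model(batch)[0]         # dict of boxes, labels, scores

for box, label, score in zip(
    detections["boxes"], detections["labels"], detections["scores"]
):
    if score >= 0.8:                     # keep confident detections only
        name = weights.meta["categories"][int(label)]
        print(f"{name}: {score:.2f} at {box.tolist()}")
```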
With these capabilities, Rodriguez believes that the technology will be critical to the development of such vehicles in the future, noting that “computer vision will continue to be a key technology in enabling autonomous vehicles, with the camera remaining the preferred sensor for data collection”.
He adds: “As software evolves, some of the algorithms used to process the data will become machine learning-based rather than classical computer vision analysis, meaning more efficient processing of large amounts of information and more advanced ADAS (advanced driver-assistance system) deployments.”
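For context on the “classical computer vision analysis” Rodriguez contrasts with machine learning, a traditional, hand-tuned pipeline might look something like the OpenCV sketch below, here detecting lane-marking line segments in a road image. The file names and threshold values are illustrative assumptions; a learned detector like the one sketched earlier would replace stages tuned by hand in this way.

```python
# Minimal sketch of a classical (non-learned) vision pipeline: detecting
# lane-marking line segments with hand-tuned OpenCV stages.
import cv2
import numpy as np

frame = cv2.imread("road_frame.jpg")            # hypothetical input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)     # suppress sensor noise
edges = cv2.Canny(blurred, 50, 150)             # hand-tuned edge thresholds

# Probabilistic Hough transform: fit straight line segments to the edge map.
lines = cv2.HoughLinesP(
    edges, rho=1, theta=np.pi / 180, threshold=50,
    minLineLength=40, maxLineGap=20,
)

if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        cv2.line(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)

cv2.imwrite("road_frame_lanes.jpg", frame)      # save the annotated frame
```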
It is important that AI and computer vision work together in autonomous vehicles to ensure safety and reliability. Commenting on this, Rodriguez says: “AI has proven that certain tasks are easier to train than to program, which is why the technology is replacing some of the more complex tasks needed in autonomous driving. Understanding how computer vision works will, however, enable more efficient solutions, making it a much-needed complement to AI. Using computer vision to interpret the visual world while implementing AI to predict and improve driving outcomes will offer the best solution in terms of efficiency and safety.”
He continues: “The technology allows us to get the basics right, which is vital. Computer vision has amassed many years of analysis and deployment, which are now being enhanced with AI capabilities. As systems are becoming more complex to program, we need that layer of AI which offers more efficient algorithms through machine learning.”
Ensuring challenges are overcome to keep passengers safe
This is a challenging area for technologists: autonomous vehicles and the computer vision systems they rely on must go through rigorous training and development to ensure they are safe for people to use.
Computer vision can tell the difference between a car and a human, or a tree and a building, but that doesn't mean the technology has the perceptive skills of a human driver. On top of this, machine vision is limited by what camera sensors can capture and by the software that enables safe self-driving features.
“When it comes to autonomy, there is a need for better camera sensors, with more depth, faster framerate, better resolution etc., in addition to radar and lidar sensors. Manufacturers would need to leverage advanced software to manage and process the data from these sensors, while also ensuring the self-driving features in their vehicles are safe. This is underpinned by hardware, for example, the GPU or AI accelerator, meeting the safety requirements for autonomous vehicle deployment,” explains Rodriguez.
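As a purely illustrative sketch of the data-management problem Rodriguez describes – the structures, field names and timing threshold here are hypothetical, not Imagination's implementation – the snippet below pairs each camera detection with the nearest radar return in time before handing the fused measurement to downstream code.

```python
# Hypothetical sketch: pairing timestamped camera detections with the nearest
# radar return so downstream code sees one fused measurement per object.
from dataclasses import dataclass

@dataclass
class CameraDetection:
    timestamp: float      # seconds
    label: str            # e.g. "pedestrian", "vehicle"
    bearing_deg: float    # direction relative to vehicle heading

@dataclass
class RadarReturn:
    timestamp: float
    range_m: float        # distance to object
    velocity_mps: float   # closing speed

@dataclass
class FusedTrack:
    label: str
    bearing_deg: float
    range_m: float
    velocity_mps: float

def fuse(cameras: list[CameraDetection],
         radars: list[RadarReturn],
         max_dt: float = 0.05) -> list[FusedTrack]:
    """Match each camera detection to the closest-in-time radar return."""
    tracks = []
    for cam in cameras:
        nearest = min(radars, key=lambda r: abs(r.timestamp - cam.timestamp),
                      default=None)
        if nearest and abs(nearest.timestamp - cam.timestamp) <= max_dt:
            tracks.append(FusedTrack(cam.label, cam.bearing_deg,
                                     nearest.range_m, nearest.velocity_mps))
    return tracks
```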
Concluding – and despite acknowledging the benefits of this technology – Rodriguez shares a word of warning: it is important that technologists recognise the challenges involved so that high safety levels are maintained.
“There are multiple challenges for the successful implementation of machine vision. First, manufacturers need to have the right sensor technology (different cameras with different dynamic ranges, frame rates and light sensitivity). This is followed by the challenge of managing and using the sensor data. And, finally, we have the most complex part, which is how the car behaves and drives.”
“In the automotive industry, it takes up to five years to get silicon into production, so a flexible hardware architecture is needed to enable support for continuous software development. At Imagination, we enable this software-defined evolution through our innovative GPU, NNA and EPP IP, which offers the compute and connectivity flexibility required for these implementations.”