Developments within Neural Networks and Deep Learning Evolutions
Risto Miikkulainen, Head of Research at Sentient Technologies
To gain a better appreciation of the current state of play within the developing field of deep learning, Gigabit approached Risto Miikkulainen at Sentient Technologies. This company is at the forefront of utilising AI to improve a number of complex digital operations, including generating customer engagement and conversions. We asked Miikkulainen to expand on the latest advancements in neural networks:
“One of the most important developments has been the ability to run multiple evolutions within a single space. Whilst in the past we focused an evolution on a single solution and progressed it along a learning gradient, developments from companies like Uber Technologies have changed our approach.
“We can now run multiple evolutions within a single solution space, which allows multiple candidate solutions to compete without putting all our eggs in one basket, only to discover later that there are unseen constraints on that approach. In these cases, we seed alternative evolutions, which reduces development time because we can arrive at a desirable solution faster.”
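The island-style approach Miikkulainen describes, seeding several independent evolutions across the same solution space and keeping the best result, can be sketched as below. This is a minimal toy illustration, not Sentient's actual system; the objective function and all parameters are hypothetical.

```python
import math
import random

def fitness(x):
    """Hypothetical objective with several local optima, standing in
    for a real evaluation such as a page-conversion test."""
    return math.sin(5 * x) + math.cos(3 * x) - (x - 0.5) ** 2

def evolve_population(pop, generations=50, sigma=0.05):
    """Evolve one population with mutation plus elitist selection:
    parents and mutated children compete, and the fittest survive."""
    for _ in range(generations):
        children = [min(max(p + random.gauss(0, sigma), 0.0), 1.0) for p in pop]
        merged = sorted(pop + children, key=fitness, reverse=True)
        pop = merged[:len(pop)]
    return pop

# Seed several independent populations across the same [0, 1] solution
# space, so no single evolutionary run is a single point of failure.
random.seed(0)
populations = [[random.random() for _ in range(10)] for _ in range(4)]
results = [evolve_population(pop) for pop in populations]
best = max((ind for pop in results for ind in pop), key=fitness)
print(f"best solution ~ {best:.3f}, fitness ~ {fitness(best):.3f}")
```

Because each population starts from a different random seeding, the runs explore different regions, and a constraint that traps one population does not trap them all.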
What roadblocks does Miikkulainen believe AI and deep learning must overcome to advance in the near future?
“One of the most intriguing challenges facing AI is the human bottleneck. Over the last decade we have gained experience in creating neural networks and learning evolutions in the academic space, but we lack commercial application. We are still looking for the industrialisation of AI, and great minds that understand the possibilities of this medium and can put it to work in real world situations.
“For instance, we perform a significant amount of work within the digital marketing space. We use specific algorithms to optimise pages for conversions. All too often the best-performing candidate is the page that ultimately got lucky – it received a distribution of successes that sits somewhere on the outer edge of the bell curve. When companies put all their resources into this candidate, conversions drop. We solve this problem by looking at candidates in the neighbourhood of the most successful candidate, but ultimately we would like to better understand the space to create evolutions that produce the most accurate result.”
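The "lucky candidate" problem can be made concrete with a small simulation: when conversion measurements are noisy, the raw top performer is often an outlier, whereas averaging over a candidate's neighbourhood gives a more robust pick. All names, rates, and sample sizes below are illustrative assumptions, not Sentient's method.

```python
import random

def observed_rate(true_rate, visitors=200):
    """Noisy A/B-style measurement: conversions out of a finite sample."""
    conversions = sum(random.random() < true_rate for _ in range(visitors))
    return conversions / visitors

def neighbourhood_score(rates, i, radius=2):
    """Average observed performance of a candidate and its neighbours,
    assuming adjacent candidates are similar page designs."""
    lo, hi = max(0, i - radius), min(len(rates), i + radius + 1)
    window = rates[lo:hi]
    return sum(window) / len(window)

random.seed(42)
# Hypothetical true conversion rates for 20 similar page variants.
true_rates = [0.05 + 0.002 * i for i in range(20)]
observed = [observed_rate(r) for r in true_rates]

# Raw argmax can land on a variant that merely got lucky; the
# neighbourhood average smooths out single-sample flukes.
lucky_pick = max(range(20), key=lambda i: observed[i])
robust_pick = max(range(20), key=lambda i: neighbourhood_score(observed, i))
print("raw best:", lucky_pick, "neighbourhood best:", robust_pick)
```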
Focusing specifically on Sentient Technologies, Miikkulainen describes the innovation that’s taking place in the company’s research labs to progress the use of deep learning algorithms:
“Some of the most interesting work has been combining the massive leaps forward in computer modelling with AI. One of the major limitations within the AI industry is when there is a dollar value attached to each of the iterative tests that form the basis of the evolutions within deep learning algorithms. However, we can solve this issue by using advanced mathematical modelling to build an environment within which the evolutions occur.
“One such project is our cyber-agriculture program that we are running with MIT. We found that AI struggled to work with crop yields and care schedules in the real world, so we created a synthetic environment where we could grow different plants. This allows us to formulate systems of crop management for all kinds of flora regardless of growing times. Pecan plants take 200 years to grow in the real world and we would therefore require millennia to train AIs to manage these crops, but we can grow thousands of trees every minute in a modelled environment.
“Our focus was on growing basil. We found out how to grow it for the maximum yield and the best flavour profile according to mass spectrometry. But the really interesting thing was that the computer could tell us something we didn’t know. We had assumed that basil needs six hours of sleep time with no light to maximise growth, but the AI had the basil growing in 24 hours of sustained light for the largest yields. This takes us full circle back to where we started, with an AI working on multiple strategies that may not be obvious to people, in order to find the optimal solution.”
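The simulation-based approach Miikkulainen describes, evolving growing strategies inside a cheap modelled environment rather than waiting a real growing season per trial, can be sketched as follows. The growth model here is a deliberately simple hypothetical stand-in, not the actual MIT cyber-agriculture simulator.

```python
import random

def simulated_yield(light_hours):
    """Hypothetical growth model: yield rises with light hours and, unlike
    the human prior, assumes no required dark period."""
    return light_hours ** 1.2 if 0 <= light_hours <= 24 else 0.0

def evolve_recipe(generations=200, sigma=1.0):
    """Hill-climb a single 'recipe' parameter inside the simulator, where
    each trial is effectively free instead of costing a growing season."""
    recipe = 12.0  # start from the human prior of a day/night cycle
    for _ in range(generations):
        candidate = min(24.0, max(0.0, recipe + random.gauss(0, sigma)))
        if simulated_yield(candidate) >= simulated_yield(recipe):
            recipe = candidate
    return recipe

random.seed(0)
best = evolve_recipe()
print(f"evolved light schedule: {best:.1f} hours/day")
```

Under this toy model the evolution drifts away from the 18-hour human assumption toward continuous light, echoing the counter-intuitive result the article describes.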
The Application of Algorithms
Charis Doidge, Research Scientist at Ordnance Survey
How are deep learning developments being practically utilised by organisations throughout the UK? At Ordnance Survey, Charis Doidge is heading up a team working in conjunction with Microsoft, training systems to recognise features on aerial photographs for several practical purposes. In the initial phases of this project, the AI focused on identifying roofs. Gigabit asked Charis for more information about this project:
“We are currently investing heavily in new sensed data and extraction techniques. Our ambition is to provide data for a future that is connected and autonomous, expanding our remit as we continue collaborating on ground-breaking smart cities, IoT, connected and autonomous vehicles (CAV) and 5G projects.
“We have the building footprints of all structures within GB, and we investigated the prospect of automatically extracting roof type and adding this to our data. This builds upon the work previously conducted for roof type classification using Digital Surface Models with shallow machine learning networks and deriving building heights automatically for 3D city modelling.
“This included testing deeper machine learning networks and cloud-based infrastructure, working with a Microsoft team to set up an end-to-end process. We developed the techniques further and looked at deeper networks using different toolsets. We utilised three systems that allowed us to improve the overall accuracy of the process, creating a final deep learning system that was 90% accurate.”
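The classification task Doidge describes, predicting a roof type from features derived from a Digital Surface Model and measuring accuracy against labelled examples, can be illustrated with a deliberately tiny sketch. The features, labels, and the nearest-neighbour rule below are hypothetical stand-ins for OS's deep networks.

```python
import math

# Hypothetical training examples: (mean height gradient, height variance)
# features derived from a Digital Surface Model, labelled with a roof type.
TRAIN = [
    ((0.0, 0.1), "flat"),
    ((0.1, 0.2), "flat"),
    ((0.8, 2.5), "gabled"),
    ((0.9, 2.8), "gabled"),
    ((0.5, 1.2), "hipped"),
    ((0.6, 1.4), "hipped"),
]

def classify_roof(features):
    """1-nearest-neighbour stand-in for the deep network: assign the
    label of the closest known example in feature space."""
    return min(TRAIN, key=lambda ex: math.dist(ex[0], features))[1]

def accuracy(examples):
    """Fraction of examples whose predicted label matches the truth,
    the same metric behind the quoted 90% figure."""
    correct = sum(classify_roof(f) == label for f, label in examples)
    return correct / len(examples)

test_set = [((0.05, 0.15), "flat"), ((0.85, 2.6), "gabled"), ((0.55, 1.3), "hipped")]
print(f"accuracy: {accuracy(test_set):.0%}")
```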
Doidge went on to detail some of the issues involved in the progress of the project:
“Our biggest issue for the hack was learning how to create an end-to-end pipeline on the cloud, where we could store data, process it, classify it, and then utilise the output in a meaningful way. The Microsoft advisors and our OS experts successfully created a unique solution for the roof hack.
“We also had an issue with ensuring our algorithms worked in a general sense, as geography is deceptively changeable and complex across GB. Since the roof hack, where we identified this problem of roof generalisation, we have solved it by curating more geographically diverse data sets for our current deep learning training runs.”
Finally, Doidge describes other functions within the OS that are also utilising deep learning to achieve the required project outcomes:
“We have various streams of deep learning at Ordnance Survey, as well as some traditional computer vision techniques and rule-based classification. These are all geared towards providing more for our customers and enhancing our offering to GB. Some of our upcoming projects include:
- ImageLearn: Our deep learning programme in which we are training a model on our RGB imagery and using a MasterMap topography layer as a highly detailed labelling method for the landscape. We hypothesise that we can decode the signatures of processes that have shaped the landscape, or the underlying explanatory factors hidden in the observed data to extract descriptors of GB.
- Rules-Based Classification: We have developed a rules-based classification system using a detailed ruleset curated by our research team, which can classify RGB imagery and height models to assess the landcover within. The current system has supplied the entirety of England to the Rural Payments Agency for them to accurately assess the hedge cover of rural land to correctly distribute EU payments.
- Wider Research: The ImageLearn project is running in conjunction with the University of Southampton and the University of Lancaster. One candidate is investigating the use of deep learning techniques for extracting previously undiscovered archaeological sites in GB from aerial imagery, LIDAR and height models.”
Jonathan Wilkins, Head of Marketing at EU Automation
It is easy to fall into the idea that AI strictly works in a controlled shadow world of processes that sits outside our own. One of the places where AI meets the practical and very physical real world is in automation. Gigabit asked Jonathan Wilkins at EU Automation about how the manufacturing industry is benefitting from the latest developments:
“Traditional robots are unable to respond appropriately when unexpected circumstances arise as they do not have the ability to forecast issues and achieve solutions. Machine learning technology is allowing robots to make decisions, based on experience. For example, a robot collects data on a system’s activity and uses it to make decisions that improve the efficiency of its work. This reduces the resources spent during manufacturing.
“When an alteration is made to an automated manufacturing line, somebody must manually program the changes. Machine learning enables the machinery to adapt to alterations in the process without the need for human input. Predictive maintenance systems notify workers before a fault occurs. This is accomplished by sensors that detect abnormalities indicating a fault. With machine learning technology, however, predictive maintenance becomes more effective, because machines take note of all experiences that coincide with system faults and apply this information in future situations. It is even possible for a machine to analyse the data for each individual situation and decide the next action for itself. In some cases, the system may perform its own corrective function; otherwise, it will alert an engineer. If the situation is particularly dangerous, it may shut down the system.”
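The escalation logic Wilkins outlines, continue normally, alert an engineer on an abnormal reading, or shut down on a dangerous one, can be sketched as a simple anomaly check on sensor history. The thresholds and readings below are illustrative assumptions, not industry values.

```python
from statistics import mean, stdev

def check_reading(history, reading, warn_z=3.0, shutdown_z=6.0):
    """Compare a new sensor reading against its recent history and decide
    whether to continue, alert an engineer, or shut the machine down."""
    mu, sigma = mean(history), stdev(history)
    z = abs(reading - mu) / sigma if sigma else 0.0
    if z >= shutdown_z:
        return "shutdown"  # dangerously far from normal behaviour
    if z >= warn_z:
        return "alert"     # abnormal: a human should investigate
    return "ok"

# Hypothetical vibration readings from a motor bearing.
history = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.1, 0.9]
print(check_reading(history, 1.02))  # within normal variation
print(check_reading(history, 1.4))   # abnormal reading
print(check_reading(history, 2.5))   # extreme reading
```

A real predictive-maintenance system would learn these thresholds from fault history rather than fix them by hand, which is exactly the improvement machine learning brings here.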
What does Wilkins believe is the greatest barrier impacting the marriage of manufacturing processes and deep learning?
“One of the biggest challenges in the adoption of machine learning is data handling. Machine learning algorithms collect and analyse data, but irrelevant information can interfere with this process. To ensure machine learning algorithms function in a way that benefits the business, manufacturers must understand their data and the exact functions they want machine learning to fulfil.”
Finally, Wilkins explains how he believes deep learning will be harnessed by his industry in the future:
“Artificial intelligence is already being used to solve simple problems, such as AGVs overcoming obstacles, on the factory floor and along the supply chain. As the technology develops, we will see it being used to solve more and more complex problems.
“Soon, this could lead to the development of collaborative robots that work alongside humans. They would participate in business meetings and adapt to changing circumstances. These robots would benefit businesses by being able to interpret and analyse larger amounts of data than the human brain, making informed decisions about predicted outcomes from so-called soft interactions.”
ICO warns of privacy concerns on the use of LFR technology
“I am deeply concerned about the potential for live facial recognition (LFR) technology to be used inappropriately, excessively, or even recklessly. When sensitive personal data is collected on a mass scale without people’s knowledge, choice or control, the impacts could be significant,” said Elizabeth Denham, the UK’s Information Commissioner.
Denham explained that with any new technology, building public trust and confidence in the way people’s information is used is crucial so the benefits derived from the technology can be fully realised.
“It is not my role to endorse or ban a technology but, while this technology is developing and not widely deployed, we have an opportunity to ensure it does not expand without due regard for data protection,” Denham added.
The Information Commissioner’s Office has said it will work with organisations to ensure that the use of LFR is lawful, and that a fair balance is struck between organisations’ own purposes and the interests and rights of the public. It will also engage with Government, regulators and industry, as well as international colleagues, to make sure data protection and innovation continue to work hand in hand.
What is live facial recognition?
Facial recognition is the process by which a person can be identified or recognised from a digital facial image. Cameras are used to capture these images and FRT software measures and analyses facial features to produce a biometric template. This typically enables the user to identify, authenticate or verify, or categorise individuals.
Live facial recognition (LFR) is a type of FRT that allows this process to take place automatically and in real-time. LFR is typically deployed in a similar way to traditional CCTV in that it is directed towards everyone in a particular area rather than specific individuals. It can capture the biometric data of all individuals passing within range of the camera indiscriminately, as opposed to more targeted “one-to-one” data processing. This can involve the collection of biometric data on a mass scale and there is often a lack of awareness, choice or control for the individual in this process.
Why is biometric data particularly sensitive?
Biometrics are physical or behavioural human characteristics that can be used to digitally identify a person to grant access to systems, devices, or data. Biometric data extracted from a facial image can be used to uniquely identify an individual in a range of different contexts. It can also be used to estimate or infer other characteristics, such as their age, sex, gender, or ethnicity.
The security of biometric authentication data is vitally important, even more so than the security of passwords: a password can easily be changed if it is exposed, but a fingerprint or retinal scan is immutable.
The UK courts have concluded that “like fingerprints and DNA [a facial biometric template] is information of an “intrinsically private” character.” LFR can collect this data without any direct engagement with the individual. Given that LFR relies on the use of sensitive personal data, the public must have confidence that its use is lawful, fair, transparent, and meets the other standards set out in data protection legislation.