Company Profile: Who Is DeepMind?
We take a closer look at the artificial intelligence giant DeepMind and how it has become so successful in its industry.
DeepMind Technologies is a UK artificial intelligence company founded in September 2010, and acquired by Google in 2014. The company is based in London, with research centres in Canada, France, and the United States. In 2015, it became a wholly-owned subsidiary of Alphabet Inc.
The company has created a neural network that learns to play video games in a fashion similar to humans, as well as a Neural Turing Machine: a neural network that can access an external memory in the manner of a conventional Turing machine, resulting in a computer that mimics the short-term memory of the human brain.
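The core idea behind that external memory can be sketched in a few lines: a controller network emits a "key" vector, attention weights are computed from the similarity between the key and each memory row, and the read-out is a weighted sum of memory rows. The sketch below is illustrative only; the sizes, key values, and sharpness parameter are made up, and the real architecture adds write heads, location-based addressing, and a learned controller.

```python
import numpy as np

def content_read(memory: np.ndarray, key: np.ndarray, beta: float = 5.0) -> np.ndarray:
    """Content-based read: softmax attention over memory rows by key similarity."""
    # cosine similarity between the key and each row of memory
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    weights = np.exp(beta * sims)
    weights /= weights.sum()          # softmax attention weights (sum to 1)
    return weights @ memory           # differentiable weighted "read" of memory

# Toy 3-slot memory; each slot holds a 3-dimensional vector.
memory = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])

# A key close to the first slot retrieves mostly that slot's contents.
read = content_read(memory, key=np.array([0.9, 0.1, 0.0]))
print(read)
```

Because the read is a smooth weighted sum rather than a hard lookup, the whole mechanism is differentiable and can be trained end-to-end by gradient descent, which is what distinguishes it from a conventional computer memory.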
DeepMind Technologies' goal is to "solve intelligence", which they are trying to achieve by combining "the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms". They are trying to formalise intelligence not only to implement it in machines, but also to understand the human brain.
The company first came to public attention in 2016, when DeepMind’s AlphaGo won a best-of-five match against Go world champion Lee Sedol. Though chess-playing supercomputers such as IBM’s Deep Blue have existed since the 1990s, DeepMind approached the massively more complex game of Go with machine learning techniques including neural networks, reinforcement learning, and imitation learning.
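Reinforcement learning, one of the techniques named above, can be illustrated with a toy far simpler than AlphaGo (which combined deep neural networks with Monte Carlo tree search at vast scale): tabular Q-learning on a five-cell corridor, where reaching the right end earns a reward. All parameters here are illustrative.

```python
import random

# Environment: cells 0..4 in a row; reaching cell 4 ends the episode with reward 1.
N_STATES = 5
ACTIONS = [1, -1]                       # move right / move left
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

# Q-table: estimated long-term value of taking each action in each state.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

# The learned greedy policy: from every non-terminal cell, head right.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

The agent is never told the rules; it discovers the reward-maximising behaviour purely by trial, error, and the value-update rule. AlphaGo applied the same learn-from-reward principle, but with the Q-table replaced by deep neural networks evaluating board positions.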
Its impact so far:
Over 100 million people are affected by diabetic retinopathy or age-related macular degeneration, conditions that can cause permanent sight loss unless they are treated quickly. In results published in Nature Medicine, DeepMind showed that its AI system could recommend patient referrals as accurately as world-leading expert doctors for over 50 sight-threatening eye diseases. More recently, the company showed that the system can predict whether a patient will develop a more severe form of age-related macular degeneration months before it happens, paving the way for future research in sight-loss prevention.
Knowing how proteins fold to create different shapes could help scientists understand a protein’s role within the body. Such understanding might help treat diseases believed to involve misfolded proteins, such as Parkinson’s, Huntington’s, and cystic fibrosis. Predicting the shape of proteins is a major unsolved challenge in science, and the company has already seen early signs that its AI systems could accelerate progress in this field.
DeepMind's teams working on technical safety, ethics, and public engagement aim to address the questions this technology raises. They help to anticipate short- and long-term risks, explore ways to prevent those risks from materialising, and find ways to address them if they do.
They believe this approach also means ruling out the use of AI technology in certain fields. For example, they have signed public pledges against using their technologies for lethal autonomous weapons, alongside many others from the AI community.
These issues go well beyond any one organisation. DeepMind's ethics team works with many brilliant non-profits, academics, and other companies, and creates forums for the public to explore some of the toughest issues. The safety team also collaborates with other leading research labs, including teams at Google, OpenAI, the Alan Turing Institute, and elsewhere.
It’s also important that the people building AI reflect the broader society. They are working with universities on scholarships for people from underrepresented backgrounds, and support community efforts such as Women in Machine Learning and the African Deep Learning Indaba.
DeepMind and coronavirus:
One of its latest projects involves turning its technology to the study of the coronavirus. The company’s AlphaFold system predicts protein structure and how proteins fold.
In a blog post, the company explained the application of its system to the virus. “AlphaFold, our recently published deep learning system, focuses on predicting protein structure accurately when no structures of similar proteins are available, called “free modelling”. We’ve continued to improve these methods since that publication and want to provide the most useful predictions, so we’re sharing predicted structures for some of the proteins in SARS-CoV-2 generated using our newly-developed methods.”
AI Shows its Value; Governments Must Unleash its Potential
2020 has revealed just how far AI technology has come, as it achieves fresh milestones in the fight against Covid-19. Google’s DeepMind helped predict protein structures of the virus; the AI-driven infectious disease tracker BlueDot spotted the novel coronavirus nine days before the World Health Organisation (WHO) first sounded the alarm. Just a decade ago, these feats were unfathomable.
Yet, we have only just scratched the surface of AI’s full potential. And it can’t be left to develop on its own. Governments must do more to put structures in place to advance the responsible growth of AI. They have a dual responsibility: fostering environments that enable innovation while ensuring the wider ethical and social implications are considered.
It is this balance that we are trying to achieve in the United Arab Emirates (UAE) to ensure government accelerates, rather than hinders, the development of AI. Like every economy in transition at the moment, we see innovation as vital to realising our vision for a post-oil economy. Our work in this space has highlighted three barriers in the government approach to realising AI’s potential.
First, addressing the issue of ignorance
While much time is dedicated to talking about the importance of AI, there simply isn’t enough understanding of where it’s useful and where it isn’t. There are a lot of challenges to rolling out AI technologies, both practically and ethically. However, those enacting the policies too often don’t fully understand the technology and its implications.
The Emirates is not exempt from this ignorance, but it is an issue we have been trying to address. Over the last few years, we have been running an AI diploma in partnership with Oxford University, teaching government officials the ethical implications of AI deployment. Our ambition is for every government ministry to have a diploma graduate, as it is essential to ensure policy decision-making is informed.
Second, moving away from the theoretical
While this grounding in the moral implications of AI is critical, it is important to go beyond the theoretical. It is vital that experimentation in AI is allowed to happen for its own sake, rather than letting ethical problems stymie innovations that don’t yet exist. Indeed, many of these concerns, while well-founded, only become concrete in the practical deployment of end-use cases and cannot be meaningfully discussed on paper.
If you take facial recognition as an example, looking at the issue in the abstract quickly leads to discussions over privacy concerns about potential surveillance and intrusion by private companies or authoritarian regimes.
But what about more specific applications of computer vision? Although part of the same field, they do not raise the same moral quandaries, and the technology is already bearing fruit. In 2018, we developed an algorithmic solution for the detection and diagnosis of tuberculosis from chest X-rays: upload an image of a chest X-ray, and the system will identify whether the person shows signs of the disease. Laws and regulations must be tailored to unique use-cases of AI, rather than lumping disparate fields together.
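The inference step of such a screening system can be sketched as follows. This is emphatically not the UAE's actual system, whose model and weights are not public; a tiny hand-rolled logistic scorer stands in for a trained deep network so the pipeline (preprocess, score, threshold for review) is visible.

```python
import numpy as np

def preprocess(xray: np.ndarray) -> np.ndarray:
    """Normalise pixel intensities to [0, 1] and flatten into a feature vector."""
    x = xray.astype(np.float32)
    x = (x - x.min()) / (x.max() - x.min() + 1e-8)
    return x.ravel()

def predict_tb_probability(xray: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """Return an estimated P(tuberculosis); a logistic stand-in for a trained CNN."""
    score = float(preprocess(xray) @ weights + bias)
    return 1.0 / (1.0 + np.exp(-score))

# Placeholder inputs: a random "X-ray" and random "trained" weights,
# purely so the sketch runs end to end.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64))
weights = rng.normal(0.0, 0.01, size=64 * 64)

prob = predict_tb_probability(image, weights, bias=0.0)
print(f"P(TB) = {prob:.3f}")
if prob > 0.5:                      # threshold chosen for illustration only
    print("Flag scan for clinician review")
```

In a real deployment the logistic scorer would be a convolutional network trained on labelled scans, and the decision threshold would be set against clinical sensitivity and specificity requirements rather than a fixed 0.5.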
To create a culture that encourages experimentation, we launched the RegLab. It provides a safe and flexible legislative ecosystem that supports the use of future technologies. This means we can actually see AI in practice before determining appropriate regulation, not the other way around. Regulation is vital to cap any unintended negative consequences of AI, but it should never come at the expense of innovation.
Finally, understanding the knock-on effects of AI
There needs to be a deeper, more nuanced understanding of AI’s wider impact. It is too easy to assume that the economic benefits and efficiency gains of AI must come with negative social implications, particularly concerns over job losses.
But with the right long-term government planning, it’s possible to have one without the other; to maximise the benefits and mitigate potential downsides. If people are appropriately trained in how to use or understand AI, the result is a future workforce capable of working alongside these technologies for the better – just as computers complement most people’s work today.
We are starting this training as soon as possible in the Emirates. Through our Ministry of Education, we have rolled out an education programme to begin teaching children about AI from as young as five years old. This includes coding skills and ethics, and we are carrying it right through to higher education with the Mohamed bin Zayed University of Artificial Intelligence set to welcome its first cohort in January. We hope to create future generations of talent that can work in harmony with AI for the betterment of society, not the detriment.
AI will inevitably become more pervasive in society, digitisation will continue in the wake of the pandemic, and in time we will see AI’s prominence grow. But governments have a responsibility to society to ensure that this growth is matched with the appropriate understanding of AI’s impacts. We must separate the hype from the practical solutions, and we must rigorously interrogate AI deployment and ensure that it is used to enhance our existence. If governments can overcome these challenges and create the environments for AI to flourish, then we have a very exciting future ahead of us.