Jun 5, 2020

Company Profile: Who Is DeepMind?

AI
Kayleigh Shooter
4 min

We take a closer look into the artificial intelligence giant, DeepMind, and how it has come to be so successful in its industry.

Business Overview:

DeepMind Technologies is a UK artificial intelligence company founded in September 2010 and acquired by Google in 2014. The company is based in London, with research centres in Canada, France, and the United States. In 2015, it became a wholly owned subsidiary of Alphabet Inc., Google's parent company.

The company has created a neural network that learns how to play video games in a fashion similar to that of humans, as well as a Neural Turing Machine: a neural network coupled to an external memory, much as a conventional Turing machine reads from and writes to a tape, resulting in a computer that mimics the short-term memory of the human brain.
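
To make the Neural Turing Machine idea slightly more concrete, here is a minimal Python sketch (an illustration only, not DeepMind's implementation) of content-based reading from an external memory: the network emits a key vector, the key is compared with every memory row, and the value read back is a similarity-weighted blend of those rows, so the whole operation remains differentiable and trainable. The memory size, key, and sharpness value are invented for the example.

    import numpy as np

    def cosine_similarity(key, memory):
        # Similarity between the query key and every row of memory.
        norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
        return memory @ key / norms

    def read(memory, key, sharpness=10.0):
        # Soft "addressing": a softmax over similarities gives weights that
        # sum to one, and the read vector is a weighted blend of memory rows.
        scores = sharpness * cosine_similarity(key, memory)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ memory

    memory = np.random.randn(128, 20)              # 128 slots, 20 numbers each
    key = memory[42] + 0.1 * np.random.randn(20)   # noisy query for slot 42
    print(read(memory, key))                       # roughly recovers slot 42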

DeepMind Technologies' goal is to "solve intelligence", which they are trying to achieve by combining "the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms". They are trying to formalize intelligence in order to not only implement it into machines, but also understand the human brain.

The company first came to public attention in 2016, when DeepMind’s AlphaGo won a best-of-five series against Go world champion Lee Sedol. Though chess-playing supercomputers such as IBM’s Deep Blue have been around since the 1990s, DeepMind approached the massively more complex game of Go with machine learning techniques, including neural networks trained by reinforcement and imitation learning.
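
As a toy illustration of the reinforcement-learning ingredient only (AlphaGo's real training pipeline combined deep neural networks, supervised learning from human games, and large-scale self-play), the sketch below applies a REINFORCE-style update to a softmax policy over three made-up actions, gradually shifting probability toward whichever action earns the most reward. All numbers here are arbitrary stand-ins.

    import numpy as np

    rng = np.random.default_rng(0)
    logits = np.zeros(3)                      # preferences over 3 toy actions
    true_rewards = np.array([0.1, 0.5, 0.9])  # hidden payoff of each action

    for _ in range(2000):
        probs = np.exp(logits) / np.exp(logits).sum()
        action = rng.choice(3, p=probs)
        reward = true_rewards[action] + 0.1 * rng.standard_normal()
        # REINFORCE: push up the log-probability of the chosen action
        # in proportion to the reward it earned.
        grad = -probs
        grad[action] += 1.0
        logits += 0.1 * reward * grad

    print(np.round(probs, 3))  # probability mass concentrates on the best action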

Its impact so far:

Over 100 million people are affected by diabetic retinopathy or age-related macular degeneration, conditions that can cause permanent sight loss unless they are treated quickly. DeepMind's results in this area, published in Nature Medicine, showed that its AI system could recommend patient referrals as accurately as world-leading expert doctors for over 50 sight-threatening eye diseases. More recently, the company showed that the system can predict whether a patient will develop a more severe form of age-related macular degeneration months before it happens, paving the way for future research in sight-loss prevention.

Knowing how proteins fold to create different shapes could help scientists understand a protein's role within the body, and might eventually help treat diseases believed to involve misfolded proteins, such as Parkinson's, Huntington's and cystic fibrosis. Predicting the shape of proteins remains a major unsolved challenge in science, and the company has already seen early signs that its AI systems could accelerate progress in this field.

Its approach:

DeepMind's teams working on technical safety, ethics, and public engagement aim to address the risks that come with building increasingly capable AI. They help to anticipate short- and long-term risks, explore ways to prevent those risks from materialising, and find ways to address them if they do.

They believe this approach also means ruling out the use of AI technology in certain fields. For example, they have signed public pledges against using their technologies for lethal autonomous weapons, alongside many others from the AI community.

These issues go well beyond any one organisation. DeepMind's ethics team works with many brilliant non-profits, academics, and other companies, and creates forums for the public to explore some of the toughest issues. The safety team also collaborates with other leading research labs, including colleagues at Google, OpenAI, the Alan Turing Institute, and elsewhere.

It’s also important that the people building AI reflect broader society. DeepMind works with universities on scholarships for people from underrepresented backgrounds, and supports community efforts such as Women in Machine Learning and the African Deep Learning Indaba.

DeepMind and coronavirus:

One of its latest projects involves turning its technology to the study of the coronavirus. The company’s AlphaFold system analyses protein structure and folding.

In a blog post, the company explained the application of its system to the virus. “AlphaFold, our recently published deep learning system, focuses on predicting protein structure accurately when no structures of similar proteins are available, called ‘free modelling’. We’ve continued to improve these methods since that publication and want to provide the most useful predictions, so we’re sharing predicted structures for some of the proteins in SARS-CoV-2 generated using our newly-developed methods.”

Find out more about the company here.


Jul 14, 2021

Discord buys Sentropy to fight against hate and abuse online

Technology
Discord
Sentropy
AI
2 min
Sentropy is joining Discord to continue fighting against hate and abuse on the internet

Discord, a popular chat app, has acquired the software company Sentropy to bolster its efforts to combat online abuse and harassment. Sentropy monitors online networks for abuse and harassment, then offers users a way to block problematic people and filter out messages they don’t want to see.
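
At a very high level, that kind of workflow (score each incoming message, then hide anything from a blocked account or above an abuse threshold) can be sketched as follows. The names, word list, and threshold are hypothetical placeholders; Sentropy's real products rely on trained machine-learning classifiers rather than keyword matching, and this is not its or Discord's actual API.

    from dataclasses import dataclass

    @dataclass
    class Message:
        author: str
        text: str

    BLOCKED_AUTHORS = {"spammer42"}       # accounts this reader has blocked
    ABUSIVE_TERMS = {"idiot", "loser"}    # crude stand-in for a trained classifier

    def abuse_score(text: str) -> float:
        # Fraction of words flagged as abusive, a rough proxy for a model score.
        words = text.lower().split()
        hits = sum(1 for w in words if w in ABUSIVE_TERMS)
        return hits / max(len(words), 1)

    def should_hide(msg: Message, threshold: float = 0.2) -> bool:
        return msg.author in BLOCKED_AUTHORS or abuse_score(msg.text) >= threshold

    inbox = [Message("friend", "nice game last night"),
             Message("spammer42", "buy my coins"),
             Message("rando", "you absolute idiot")]
    print([m.text for m in inbox if not should_hide(m)])   # only the first survives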

First launched in 2015 and currently boasting 150 million monthly active users, Discord plans to integrate Sentropy’s products into its existing toolkit and will also bring the smaller company’s leadership team aboard. Discord currently uses a “multilevel” approach to moderation, and its Trust and Safety (T&S) team, dedicated to protecting users and shaping content moderation policies, comprised 15% of Discord’s workforce as of May 2020.

“T&S tech and processes should not be used as a competitive advantage,” Sentropy CEO John Redgrave said in a blog post on the announcement. “We all deserve digital and physical safety, and moderators deserve better tooling to help them do one of the hardest jobs online more effectively and with fewer harmful impacts.”

Cleanse platforms of online harassment and abuse

Redgrave elaborated on the company’s natural connection with Discord: “Discord represents the next generation of social companies — a generation where users are not the product to be sold, but the engine of connectivity, creativity, and growth. In this model, user privacy and user safety are essential product features, not an afterthought. The success of this model depends upon building next-generation Trust and Safety into every product. We don’t take this responsibility lightly and are humbled to work at the scale of Discord and with Discord’s resources to increase the depth of our impact.”

Sentropy launched out of stealth last summer with an AI system designed to detect, track and cleanse platforms of online harassment and abuse. The company emerged then with $13 million in funding from notable backers including Reddit co-founder Alexis Ohanian and his VC firm Initialized Capital, King River Capital, Horizons Ventures and Playground Global.

“We are excited to help Discord decide how we can most effectively share with the rest of the Internet the best practices, technology, and tools that we’ve developed to protect our own communities,” Redgrave said.
