Startup Spotlight: Darktrace’s AI security immune system
Cybersecurity startup Darktrace models its offering on the human immune system, with autonomous response technology reacting to threats at machine speed.
The Cambridge- and San Francisco-based company was founded in 2013 by mathematicians from the University of Cambridge and cybersecurity specialists. The company’s premier offering, the Enterprise Immune System, brings AI to cybersecurity and allows for rapid response to incoming threats.
The company offers industry solutions spanning financial services, manufacturing, healthcare, transportation and the media. Customers include the likes of eBay, Ocado, Peugeot, T-Mobile and BT, as well as motor racing team McLaren.
The latter recently announced that it had appointed Darktrace as its “AI Cyber Security Partner” to protect the team from cyber attacks. With a multitude of endpoints, such as the IoT sensors on cars and cloud-based operational software, McLaren plans to use the AI-enabled immune response to react at a speed beyond human capabilities.
In a press release, Poppy Gustafsson, CEO of Darktrace, said: “We are excited to be partnering with McLaren, a company with innovation at its core. Cyber-attacks that seek to cause disruption to global events, as well as attacks that subtly steal coveted IP, are on the rise. We are proud that our technology is being trusted to automatically protect the McLaren team, enabling them to race to the finish line in the knowledge that their systems are secured by world-leading Cyber AI.”
The company has raised a total of $230.5mn across a number of funding rounds, the latest being a $50mn Series E round led by British private equity firm Vitruvian Partners. In 2019, Darktrace was named “Artificial Intelligence Business of the Year” at the Lloyds Bank National Business Awards.
Discord buys Sentropy to fight against hate and abuse online
Discord, a popular chat app, has acquired the software company Sentropy to bolster its efforts to combat online abuse and harassment. Sentropy monitors online networks for abuse and harassment, then offers users a way to block problematic people and filter out messages they don’t want to see.
First launched in 2015 and currently boasting 150 million monthly active users, Discord plans to integrate Sentropy’s products into its existing toolkit, and it will also bring the smaller company’s leadership team aboard. Discord currently takes a “multilevel” approach to moderation, and its Trust and Safety (T&S) team, dedicated to protecting users and shaping content moderation policies, comprised 15% of Discord’s workforce as of May 2020.
“T&S tech and processes should not be used as a competitive advantage,” Sentropy CEO John Redgrave said in a blog post on the announcement. “We all deserve digital and physical safety, and moderators deserve better tooling to help them do one of the hardest jobs online more effectively and with fewer harmful impacts.”
Cleanse platforms of online harassment and abuse
Redgrave elaborated on the company’s natural connection with Discord: “Discord represents the next generation of social companies — a generation where users are not the product to be sold, but the engine of connectivity, creativity, and growth. In this model, user privacy and user safety are essential product features, not an afterthought. The success of this model depends upon building next-generation Trust and Safety into every product. We don’t take this responsibility lightly and are humbled to work at the scale of Discord and with Discord’s resources to increase the depth of our impact.”
Sentropy launched out of stealth last summer with an AI system designed to detect, track and cleanse platforms of online harassment and abuse. The company emerged with $13 million in funding from notable backers including Reddit co-founder Alexis Ohanian and his VC firm Initialized Capital, as well as King River Capital, Horizons Ventures and Playground Global.
“We are excited to help Discord decide how we can most effectively share with the rest of the Internet the best practices, technology, and tools that we’ve developed to protect our own communities,” Redgrave said.