Governments, enterprises move forward on AI regulation
Ever since the science-fiction author Isaac Asimov invented his ‘Three Laws of Robotics’ in 1942, the public consciousness has been gripped by the quandary of how we might stop artificial intelligence from turning on us. Asimov’s original laws were:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Once the preserve of science-fiction authors, AI regulation is now firmly on government agendas. In the US, President Trump last year signed an executive order announcing a national strategy on artificial intelligence, while the EU just released a report recommending approaches to the “digital future”.
Private enterprise is getting in on the action too. German manufacturing giant Bosch just announced an AI ethics policy, reflecting the company’s ambition that, by 2025, all of its products will either contain AI or have been developed with the assistance of the technology.
Bosch’s guidelines read:
- All Bosch AI products reflect our “Invented for life” ethos, which combines a quest for innovation with a sense of social responsibility.
- AI decisions that affect people must not be made without a human arbiter. Instead, AI should be a tool for people.
- We develop safe, robust, and explainable AI products.
- Trust is one of our company’s fundamental values. We want to develop trustworthy AI products.
- When developing AI products, we observe legal requirements and orient to ethical principles.
Michael Bolle, Bosch’s CDO and CTO, explained the motivation behind the guidelines: “If AI is a black box, then people won’t trust it. In a connected world, however, trust will be essential.”
Discord buys Sentropy to fight hate and abuse online
Discord, a popular chat app, has acquired the software company Sentropy to bolster its efforts to combat online abuse and harassment. Sentropy’s software monitors online networks for abuse and harassment, then offers users a way to block problematic people and filter out messages they don’t want to see.
Discord, first launched in 2015 and currently boasting 150 million monthly active users, plans to integrate Sentropy’s products into its existing moderation toolkit and will also bring the smaller company’s leadership team aboard. Discord currently takes a “multilevel” approach to moderation; its Trust and Safety (T&S) team, dedicated to protecting users and shaping content moderation policies, comprised 15% of Discord’s workforce as of May 2020.
“T&S tech and processes should not be used as a competitive advantage,” Sentropy CEO John Redgrave said in a blog post on the announcement. “We all deserve digital and physical safety, and moderators deserve better tooling to help them do one of the hardest jobs online more effectively and with fewer harmful impacts.”
Cleanse platforms of online harassment and abuse
Redgrave elaborated on the company’s natural connection with Discord: “Discord represents the next generation of social companies — a generation where users are not the product to be sold, but the engine of connectivity, creativity, and growth. In this model, user privacy and user safety are essential product features, not an afterthought. The success of this model depends upon building next-generation Trust and Safety into every product. We don’t take this responsibility lightly and are humbled to work at the scale of Discord and with Discord’s resources to increase the depth of our impact.”
Sentropy launched out of stealth last summer with an AI system designed to detect, track and cleanse platforms of online harassment and abuse. The company emerged then with $13 million in funding from notable backers including Reddit co-founder Alexis Ohanian and his VC firm Initialized Capital, King River Capital, Horizons Ventures and Playground Global.
“We are excited to help Discord decide how we can most effectively share with the rest of the Internet the best practices, technology, and tools that we’ve developed to protect our own communities,” Redgrave said.