What role will AI play for enterprise in 2020?
As AI becomes ever more present in the commercial sphere in the form of digital assistants, it’s worth considering how well enterprise is keeping up.
Enthusiasm for implementing artificial intelligence in the workplace may abate in the year ahead, according to a report from professional services firm PwC. The third in the firm’s annual series of AI predictions reports, it finds that only 4% of more than 1,000 surveyed executives plan to deploy AI across the enterprise, down from 20% the year before. That’s due in part, PwC says, to a revision of expectations: companies are focusing on fundamental tasks rather than total overhauls.
The US government is another highly interested party. As reported by Vox, governmental organisations are beginning to set out their stalls for the implementation and regulation of what has previously been something of an AI wild west (Westworld, anyone?). A White House memo details the balance between regulation and innovation the US hopes to strike, and also includes some telling passages on the technology’s potential risks, reading: “When humans delegate decision-making and other functions to AI applications, there is a risk that AI’s pursuit of its defined goals may diverge from the underlying or original human intent and cause unintended consequences—including those that negatively impact privacy, civil rights, civil liberties, confidentiality, security, and safety.”
On a happier note, AI has also been making less controversial waves at the ongoing Consumer Electronics Show (CES) 2020 in Las Vegas. Potentially monetizable uses of the technology include advertising, as tech firm Mirriad has been demonstrating with the smart insertion of content into online videos through machine learning. The technology identifies blank spaces in a scene and adds advertising after the fact, making it appear to be part of the original video.
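Mirriad’s actual pipeline is proprietary computer vision; purely as a toy illustration of the idea, the sketch below treats a video frame as a 2D character grid in which “.” marks empty background, finds a run of blank cells, and composites an “ad” into it. All names here (`find_blank_run`, `insert_ad`) are hypothetical, not part of any real product.

```python
# Toy sketch of "blank space" ad insertion -- NOT Mirriad's actual system.
# A frame is a 2D grid of characters; "." marks empty background where
# an ad could be composited without covering scene content.

def find_blank_run(row, width):
    """Return the start index of the first run of `width` empty cells, or None."""
    run = 0
    for i, cell in enumerate(row):
        run = run + 1 if cell == "." else 0
        if run == width:
            return i - width + 1
    return None

def insert_ad(frame, ad):
    """Composite `ad` into the first row with enough contiguous blank space."""
    for r, row in enumerate(frame):
        start = find_blank_run(row, len(ad))
        if start is not None:
            frame[r] = row[:start] + list(ad) + row[start + len(ad):]
            return True
    return False

frame = [list("XX......XX"),   # "X" cells are scene content, "." is background
         list("XXXXXXXXXX")]
inserted = insert_ad(frame, "AD!")
```

A real system would, of course, work on pixel data and track the region across frames so the inserted ad stays anchored to the scene.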
Discord buys Sentropy to fight against hate and abuse online
Discord, a popular chat app, has acquired the software company Sentropy to bolster its efforts to combat online abuse and harassment. Sentropy monitors online networks for abuse and harassment, then offers users a way to block problematic people and filter out messages they don’t want to see.
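Sentropy’s detection is done with machine-learning models whose details aren’t public; as a minimal sketch of just the user-facing behavior described above (blocking senders and filtering unwanted messages), one could imagine something like the following, where the function name and the simple blocklist/keyword approach are assumptions for illustration only:

```python
# Toy sketch of user-side filtering -- not Sentropy's actual ML-based detection.
# Messages from blocked senders, or containing flagged terms, are hidden.

def visible_messages(messages, blocked_users, flagged_terms):
    """Return only the (sender, text) pairs the user has not filtered out."""
    kept = []
    for sender, text in messages:
        if sender in blocked_users:
            continue  # sender is blocked outright
        lowered = text.lower()
        if any(term in lowered for term in flagged_terms):
            continue  # message matches a filtered term
        kept.append((sender, text))
    return kept

messages = [("alice", "hey, nice stream!"),
            ("troll42", "you are awful"),
            ("bob", "spam spam buy now")]
shown = visible_messages(messages, blocked_users={"troll42"},
                         flagged_terms={"buy now"})
```

In practice a learned classifier replaces the keyword check, which is what makes detection robust to misspellings and evasion.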
First launched in 2015 and currently boasting 150 million monthly active users, Discord plans to integrate Sentropy’s products into its existing toolkit and will also bring the smaller company’s leadership team aboard. Discord currently uses a “multilevel” approach to moderation, and a Trust and Safety (T&S) team dedicated to protecting users and shaping content moderation policies comprised 15% of Discord’s workforce as of May 2020.
“T&S tech and processes should not be used as a competitive advantage,” Sentropy CEO John Redgrave said in a blog post on the announcement. “We all deserve digital and physical safety, and moderators deserve better tooling to help them do one of the hardest jobs online more effectively and with fewer harmful impacts.”
Cleanse platforms of online harassment and abuse
Redgrave elaborated on the company’s natural connection with Discord: “Discord represents the next generation of social companies — a generation where users are not the product to be sold, but the engine of connectivity, creativity, and growth. In this model, user privacy and user safety are essential product features, not an afterthought. The success of this model depends upon building next-generation Trust and Safety into every product. We don’t take this responsibility lightly and are humbled to work at the scale of Discord and with Discord’s resources to increase the depth of our impact.”
Sentropy launched out of stealth last summer with an AI system designed to detect, track and cleanse platforms of online harassment and abuse. The company emerged then with $13 million in funding from notable backers including Reddit co-founder Alexis Ohanian and his VC firm Initialized Capital, King River Capital, Horizons Ventures and Playground Global.
“We are excited to help Discord decide how we can most effectively share with the rest of the Internet the best practices, technology, and tools that we’ve developed to protect our own communities,” Redgrave said.