May 17, 2020

Industry 4.0 to facilitate four-hour working days by 2050

Jonathan Dyble

Yell has released a new report revealing its outlook on what businesses will look like by 2050 as a result of emerging technologies such as artificial intelligence (AI), augmented reality (AR) and the internet of things (IoT).

Partnering with James Wallman, a renowned futurist and journalist, Yell identified a number of key trends that it expects to come to fruition with the growth of Industry 4.0.

With innovators placing greater emphasis on AI, Yell predicts that businesses will soon be collaborating heavily with robots – something the report says should be embraced.

“Instead of worrying about robots, we should be excited about ‘cobots’ — robots that are colleagues, designed to work collaboratively alongside humans,” Yell said.

“Machines will conduct more of the repetitive, routine tasks — from chopping and fetching to searching through libraries of information and making complex calculations. As a result, ‘cobots’ will make us more productive, and happier at work.”

In line with this, Yell also predicts that increasing automation in business operations will ultimately lead to shorter working days. The report cites research suggesting that four-hour work days are more effective than eight-hour days – a shift Yell believes the rise of AI and automation will make possible.

These are just two of the seven trends identified; the full report offers further insight into the rest.

Jul 14, 2021

Discord buys Sentropy to fight against hate and abuse online

Sentropy is joining Discord to continue fighting against hate and abuse on the internet

Discord, a popular chat app, has acquired the software company Sentropy to bolster its efforts to combat online abuse and harassment. Sentropy monitors online networks for abuse and harassment, then offers users a way to block problematic people and filter out messages they don’t want to see.
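To make that idea concrete, the sketch below shows, in simplified form, what user-level blocking and keyword filtering of this kind can look like. It is purely illustrative: the class, method names and logic are hypothetical and do not reflect Sentropy’s or Discord’s actual tooling.

```python
# Illustrative sketch only: a toy filter showing the general idea of blocking
# users and muting terms. All names here are hypothetical; this is not
# Sentropy's or Discord's actual code or API.

from dataclasses import dataclass, field

@dataclass
class UserFilter:
    blocked_users: set = field(default_factory=set)  # IDs of people the reader has blocked
    muted_terms: set = field(default_factory=set)    # phrases the reader never wants to see

    def block(self, user_id: str) -> None:
        self.blocked_users.add(user_id)

    def mute(self, term: str) -> None:
        self.muted_terms.add(term.lower())

    def should_hide(self, author_id: str, text: str) -> bool:
        """Hide a message if its author is blocked or it contains a muted term."""
        if author_id in self.blocked_users:
            return True
        lowered = text.lower()
        return any(term in lowered for term in self.muted_terms)

# Example: block one account, mute one phrase, then check two incoming messages.
f = UserFilter()
f.block("user_123")
f.mute("unwanted phrase")
print(f.should_hide("user_123", "hello"))          # True: author is blocked
print(f.should_hide("user_456", "nice stream!"))   # False: nothing matches
```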

First launched in 2015 and currently boasting 150 million monthly active users, Discord plans to integrate Sentropy’s products into its existing toolkit and will also bring the smaller company’s leadership team aboard. Discord currently takes a “multilevel” approach to moderation; as of May 2020, its Trust and Safety (T&S) team, dedicated to protecting users and shaping content moderation policies, made up 15% of the company’s workforce.

“T&S tech and processes should not be used as a competitive advantage,” Sentropy CEO John Redgrave said in a blog post on the announcement. “We all deserve digital and physical safety, and moderators deserve better tooling to help them do one of the hardest jobs online more effectively and with fewer harmful impacts.”

Cleanse platforms of online harassment and abuse

Redgrave elaborated on the company’s natural connection with Discord: “Discord represents the next generation of social companies — a generation where users are not the product to be sold, but the engine of connectivity, creativity, and growth. In this model, user privacy and user safety are essential product features, not an afterthought. The success of this model depends upon building next-generation Trust and Safety into every product. We don’t take this responsibility lightly and are humbled to work at the scale of Discord and with Discord’s resources to increase the depth of our impact.”

Sentropy launched out of stealth last summer with an AI system designed to detect, track and cleanse platforms of online harassment and abuse. The company emerged with $13 million in funding from notable backers including Reddit co-founder Alexis Ohanian and his VC firm Initialized Capital, King River Capital, Horizons Ventures and Playground Global.
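For readers curious what “detecting” harassment with AI involves at the most basic level, the hedged sketch below trains a tiny text classifier and scores new messages. The handful of example messages, the scikit-learn pipeline and the scoring function are assumptions made purely for illustration; Sentropy’s actual models and data are far more sophisticated and are not public.

```python
# Minimal, illustrative text-classification sketch (not Sentropy's model).
# The toy labelled examples and every name below are hypothetical; real
# abuse-detection systems use large curated datasets and richer models.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["you are worthless", "get lost, idiot", "great game tonight", "thanks for the help"]
labels = [1, 1, 0, 0]  # 1 = abusive, 0 = benign

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
model = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

def abuse_score(message: str) -> float:
    """Estimated probability that a message is abusive, according to the toy model."""
    return float(model.predict_proba(vectorizer.transform([message]))[0, 1])

# Messages scoring above some chosen threshold could be flagged for a human moderator.
print(abuse_score("you are an idiot"))
print(abuse_score("thanks, great game"))
```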

“We are excited to help Discord decide how we can most effectively share with the rest of the Internet the best practices, technology, and tools that we’ve developed to protect our own communities,” Redgrave said.
