Vodafone and co use AI to identify disease-beating foods
It’s not every day a telecommunications giant plays a part in releasing a cookbook, rarer still one “inspired” by AI.
The project involved devising recipes using ingredients understood to have “disease-beating properties”, based on findings from Imperial College’s DRUGS project. DRUGS used AI and machine learning to identify the molecules in foods such as carrots, celery, oranges, grapes, coriander, cabbage, turmeric and dill that help fight diseases.
Dr Kirill Veselkov, lead computational scientist at Imperial College London, said: “We are seeing a continuous growth in chronic conditions, such as cancer, neurological diseases and heart disorders. A key contributing factor is poor diet; studies suggest that unhealthy diets are responsible for a fifth of deaths globally and it’s estimated that almost half of all cancers could be prevented by good dietary and lifestyle choices.”
The research made use of Vodafone’s DreamLab app, which leverages mobile phone processing power to speed up scientific research. While participating phones are charging, the app pools their idle processing power to create, in effect, a cloud supercomputer.
Helen Lamprell, Vodafone Foundation Trustee and General Counsel & External Affairs Director, Vodafone UK, said: “Technologies such as AI have the potential to create a smarter healthcare system and improve outcomes for patients. We are hugely proud that DreamLab has played such an important role in Imperial’s research project and in the development of this collection of recipes.”
While AI and food may not seem the most natural of bedfellows, a number of companies are nevertheless looking to bring AI to bear on the subject. One startup, for instance, uses AI to create new recipes from the ingredients you have immediately to hand, learning from flavour combinations.
Discord buys Sentropy to fight against hate and abuse online
Discord, a popular chat app, has acquired the software company Sentropy to bolster its efforts to combat online abuse and harassment. Sentropy monitors online networks for abuse and harassment, then offers users a way to block problematic people and filter out messages they don’t want to see.
Discord, which first launched in 2015 and currently boasts 150 million monthly active users, plans to integrate Sentropy’s products into its existing moderation toolkit and will also bring the smaller company’s leadership group aboard. Discord currently takes a “multilevel” approach to moderation, and its Trust and Safety (T&S) team, dedicated to protecting users and shaping content moderation policies, comprised 15% of Discord’s workforce as of May 2020.
“T&S tech and processes should not be used as a competitive advantage,” Sentropy CEO John Redgrave said in a blog post on the announcement. “We all deserve digital and physical safety, and moderators deserve better tooling to help them do one of the hardest jobs online more effectively and with fewer harmful impacts.”
Cleanse platforms of online harassment and abuse
Redgrave elaborated on the company’s natural connection with Discord: “Discord represents the next generation of social companies — a generation where users are not the product to be sold, but the engine of connectivity, creativity, and growth. In this model, user privacy and user safety are essential product features, not an afterthought. The success of this model depends upon building next-generation Trust and Safety into every product. We don’t take this responsibility lightly and are humbled to work at the scale of Discord and with Discord’s resources to increase the depth of our impact.”
Sentropy launched out of stealth last summer with an AI system designed to detect, track and cleanse platforms of online harassment and abuse. The company emerged then with $13 million in funding from notable backers including Reddit co-founder Alexis Ohanian and his VC firm Initialized Capital, King River Capital, Horizons Ventures and Playground Global.
“We are excited to help Discord decide how we can most effectively share with the rest of the Internet the best practices, technology, and tools that we’ve developed to protect our own communities,” Redgrave said.