May 17, 2020

Making the Most of AI and Automation: Steps to success

Neeti Mehta, SVP and Co-Founder
How can we navigate conflicts arising from the use of digital workers, and how will they serve society as a whole?

Of all the changes that technology has brought to our lives in recent years, the complete transformation in the way we work is perhaps the most striking. It’s predicted that within the next five years, robotic process automation (RPA) will have achieved almost universal adoption, automating the repetitive tasks currently carried out by human workers. This makes RPA the fastest-growing sector for enterprise software in the world – soon, software ‘bots’ or ‘digital workers’ will be a standard part of our teams, working alongside us all.

As we prepare ourselves for this ‘future of work’ where humanity and technology overlap more closely, there are corporate ethical issues to address. How can we navigate conflicts arising from the use of digital workers, and how will they serve society as a whole?

Here are three recommendations that business leaders should keep in mind to ensure an ethical deployment of artificial intelligence (AI) and automation, and to achieve a positive impact on the organisation from these technologies. 

Prioritise diversity

Digital workers are changing the paradigm of modern work, bringing with them a monumental shift in trust. Not only is this technology new, but it promises to disrupt nearly every existing industry – delivering a set of trust issues unique to automation and AI.

Diversity is the single most important factor in how automation and AI will transform society. These technologies will affect not only the people creating and using them but also everyone subject to the algorithms they generate – so the gender gap that the AI industry faces must be closed, fast.

The World Economic Forum reports that just 22 per cent of AI professionals are women, and according to research commissioned by Automation Anywhere, women face a higher risk of being negatively impacted by AI and automation technology. A lack of diversity allows biases to take root and narrows the range of challenges that new technologies are applied to. By contrast, diverse thinking at the highest levels of the industry can promote innovation and ensure that these technologies benefit a wide range of groups and demographics.

My advice to business leaders? Hire more women. Hire people from diverse backgrounds. Hire people with perspectives that don’t match your own. By listening to what others have to say, we all have an opportunity to gain valuable insight and capabilities. 


Put human values first

Bots are made by humans and are only as moral and ethical as the humans behind the strategic decision making. Bots don’t have thoughts, feelings or empathy of their own. So how do you teach a bot to behave?

Ultimately, it comes down to the human behind the algorithm. Businesses must understand the irreplaceable value of human workers, their empathy, kindness, joy, respect, creativity and passion. This must be at the forefront of strategic decision making on the ethics of the future of work. 

The economic benefit of digital workers is undeniable: increased productivity, fewer errors and lower costs from a digitally augmented workforce. It is by sustaining our human values that we determine where workers will put the time they save – towards ventures that can never be automated, such as applying human ingenuity, tenacity and creativity to economic growth and society’s most pressing issues.

Educate for the digital workforce

As with many challenges humanity faces, tackling the future of work begins with education. According to the World Economic Forum, by 2022 new technology will create 133 million new jobs – compared with 75 million declining roles. These jobs will prize skills such as analytical thinking, creativity and complex problem-solving over abilities like manual dexterity, memory and reading comprehension.

The task of preparing the world for the future workforce doesn’t just fall to academic institutions. Businesses must help today’s employees adapt and thrive in this new economic structure by investing in reskilling programmes that equip them for this new world. For example, at Automation Anywhere University we have trained more than 350,000 developers, business analysts, partners and students in RPA. Imagine the impact if every major company launched similar programmes – a workforce well-positioned for the 133 million new jobs of tomorrow.

We must hold ourselves accountable for the ethical deployment of AI and automation, prioritising people over profit and diversity over the status quo, and always striving for the greatest benefit for the greatest number of workers. If we can hold true to these values, I believe the legacy of these technologies will be one of prosperity and human progress.

By Neeti Mehta, SVP and Co-Founder of Automation Anywhere


Jul 14, 2021

Discord buys Sentropy to fight against hate and abuse online

Sentropy is joining Discord to continue fighting against hate and abuse on the internet

Discord, a popular chat app, has acquired the software company Sentropy to bolster its efforts to combat online abuse and harassment. Sentropy monitors online networks for abuse and harassment, then offers users a way to block problematic people and filter out messages they don’t want to see.

First launched in 2015 and currently boasting 150 million monthly active users, Discord plans to integrate Sentropy’s products into its existing toolkit and will also bring the smaller company’s leadership team aboard. Discord currently takes a “multilevel” approach to moderation; its Trust and Safety (T&S) team, dedicated to protecting users and shaping content moderation policies, comprised 15% of the company’s workforce as of May 2020.

“T&S tech and processes should not be used as a competitive advantage,” Sentropy CEO John Redgrave said in a blog post on the announcement. “We all deserve digital and physical safety, and moderators deserve better tooling to help them do one of the hardest jobs online more effectively and with fewer harmful impacts.”


Cleanse platforms of online harassment and abuse


Redgrave elaborated on the company’s natural connection with Discord: “Discord represents the next generation of social companies — a generation where users are not the product to be sold, but the engine of connectivity, creativity, and growth. In this model, user privacy and user safety are essential product features, not an afterthought. The success of this model depends upon building next-generation Trust and Safety into every product. We don’t take this responsibility lightly and are humbled to work at the scale of Discord and with Discord’s resources to increase the depth of our impact.”

Sentropy launched out of stealth last summer with an AI system designed to detect, track and cleanse platforms of online harassment and abuse. The company emerged then with $13 million in funding from notable backers including Reddit co-founder Alexis Ohanian and his VC firm Initialized Capital, King River Capital, Horizons Ventures and Playground Global.

“We are excited to help Discord decide how we can most effectively share with the rest of the Internet the best practices, technology, and tools that we’ve developed to protect our own communities,” Redgrave said.

