Hate tweets and deepfake pee-tapes: the AI arms race

By Harry Menear

The adoption of artificial intelligence (AI) by law enforcement agencies around the world has had a mixed reception, as has the use of any new technology that carries the potential for suppression and control as well as for safety.

In the UK, an AI-powered analytics tool called The Online Hate Speech Dashboard is being used to trawl hundreds of thousands of Brexit-related tweets per day and analyse them for speech that is Islamophobic or anti-Semitic, or that targets people from certain countries, people with disabilities, or LGBT+ groups, according to a New Scientist report. The tool, developed by researchers at Cardiff University, aims to use the established correlation between spikes in hate speech on Twitter and corresponding increases in crimes against minorities on London streets to give police advance warning in the lead-up to the October 31 Brexit deadline.
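The report doesn't detail the dashboard's internals, so the sketch below is only a generic illustration of how such a tweet classifier is typically built; the category labels, toy training data and model choice are all assumptions rather than details of the Cardiff tool.

```python
# Minimal, purely illustrative sketch of a hate-speech tweet classifier.
# Real systems train on large annotated corpora; this toy data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "we should welcome refugees",
    "send them all back where they came from",
    "lovely weather in London today",
]
labels = ["none", "xenophobic", "none"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

# Score an incoming stream and count flagged tweets over time; a sustained
# spike in any category is the kind of signal that would trigger a warning.
incoming = ["go back to your own country"]
print(model.predict(incoming))  # e.g. ['xenophobic'] on this toy data
```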

In the US, Guardian Alliance Technologies today announced a new partnership with Fama Technologies to provide law enforcement agencies with cutting-edge, AI-powered social media screening services.

"When hiring for positions of public trust, such as law enforcement personnel, it is critical that the hiring agency knows all that they can possibly know about the applicant prior to offering them a job," said Ryan Layne, Guardian's CEO. "However, it's impossible for investigators to manually review the complete online identity of every candidate and accurately determine whether or not there are risks associated with their behaviors. Not knowing what sort of social media activity an applicant may be engaged in represents a huge blind spot for law enforcement all across the country, and this powerful new technology eliminates that blind spot efficiently and cost-effectively.

"With Guardian's new digital screening service, which utilizes Fama's patented AI technology, agencies can now objectively evaluate social media behaviors and uncover racist, bigoted and/or other toxic behaviors before they're a problem for the agency and the communities they serve. It is our belief that the adoption of this type of technology will become a standard practice as law enforcement agencies continue to do all they can to maintain and protect the public trust. Fama was the clear choice for us, as their focus on AI and technology to solve these issues was far better than anything we were seeing in the market." 

AI used by law enforcement to monitor and combat racism and bigotry, protecting society's most vulnerable demographics. Sounds good. 

But...

Using AI to generate predictive insights relating to criminal behaviour, and allowing law enforcement agencies to act on this information, has some serious drawbacks. According to a report released by the Royal United Services Institute for Defence and Security Studies, “the use of data analytics and algorithms for policing has numerous potential benefits, but also carries significant risks, including those relating to bias. This could include unfair discrimination on the grounds of protected characteristics, real or apparent skewing of the decision-making process, or outcomes and processes which are systematically less fair to individuals within a particular group.” In short, inherent racial, class-based and other forms of subjective bias are baked into the algorithms at the point of design, and can end up reinforcing existing injustices. 
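To see how that reinforcement can happen without any malicious intent, consider a toy simulation (an illustration of the feedback loop the RUSI report warns about, not a model of any real system): two districts share an identical true crime rate, but one starts with more recorded arrests, and patrols are allocated according to those records.

```python
# Toy simulation of a predictive-policing feedback loop (illustrative only).
import random

random.seed(0)
TRUE_CRIME_RATE = 0.05          # identical in both districts
arrests = {"A": 120, "B": 60}   # historical records already skewed 2:1

for year in range(5):
    total = sum(arrests.values())
    for district in arrests:
        # Allocate 1,000 patrols in proportion to past recorded arrests.
        patrols = round(1000 * arrests[district] / total)
        # More patrols -> more crimes observed, even at the same true rate.
        arrests[district] += sum(
            random.random() < TRUE_CRIME_RATE for _ in range(patrols)
        )
    print(year, arrests)  # the initial 2:1 skew persists and compounds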

In a country where black people make up 33% of police killing victims, but just 13% of the overall population, the application of inherently prejudiced analytics could only compound the issue. 

On the other hand, though, advanced AI capabilities may be a necessary tool to combat a growing wave of criminal activity on the new frontiers of law enforcement. 


The days of the masked bandit clutching a sackful of stolen cash, pursued by a lawman on horseback, are well and truly over. Released earlier this year, a report by the UK's Office for National Statistics found that 1.83% of adults experienced a computer misuse crime, making it more common than violence (1.75%), theft (0.8%) or robbery (0.3%). 

AI-driven crime-nouveau

At the end of August, the Wall Street Journal reported that fraudsters used AI to mimic the voice of the chief executive of a large German corporation, then called the CEO of the firm's subsidiary, a UK energy business (which remains nameless), demanding he transfer $243,000 to a company in Hungary. Because "the UK CEO recognised his boss' slight German accent and the melody of his voice on the phone," he paid, and the first reported AI deepfake-enabled example of "vishing" was a success. Earlier this month, the Verge noted that "cybersecurity firm Symantec says it has come across at least three cases of deepfake voice fraud used to trick companies into sending money to a fraudulent account." The trend is growing. 

Deepfakes - AI-generated, scarily realistic edits to text, video and audio - are quickly being recognised as an especially dangerous tool in the hands of criminals. In a socio-political environment in which the truth is called into question more often than President Trump (who has his own deepfake version of the infamous "pee tape") advocates for the destruction of the free press, deepfakes have the potential to do serious damage to the credibility of almost any source.

Want to be really freaked out? Check out this site, which generates a new deepfake face using AI every time you refresh the page. 
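In miniature, such sites work by sampling a random latent vector and passing it through a trained generator network, so every refresh yields a brand-new face. The untrained toy generator below is purely illustrative; production sites use large pretrained models (such as StyleGAN) trained on face photographs.

```python
# Sketch of the generator pipeline behind "refresh for a new face" sites:
# random latent code in, image out. Untrained and for illustration only.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64 * 3), nn.Tanh(),  # pixel values in [-1, 1]
)

z = torch.randn(1, 128)                  # a fresh random latent code per request
fake_image = generator(z).view(3, 64, 64)
print(fake_image.shape)                  # torch.Size([3, 64, 64])
```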

Last year, researchers at the University of California, Berkeley released a highly convincing video and accompanying paper demonstrating their work on "full body" deepfakes. Until now, one of the easiest tells has been the close-cropped picture of a fake person's face, since AIs have trouble rendering backgrounds and bodies properly. 

Dutch journalist Tom Van de Weghe, who ran afoul of the Chinese government's propaganda machine a few years ago, warns that "it's not the big shots, the big politicians, and the big famous guys who are the most threatened. It's the normal people - people like you, me, female journalists, and sort of marginalized groups that could become or are already becoming the victims of deepfakes."

Van de Weghe also believes full body deepfakes could completely alter the impact of events like the protests in Hong Kong in troubling new ways: such content could make it seem as if protesters are acting violently, or portray law enforcement's crackdown in a positive light.

Taking on the fakes

While AI is a powerful tool in the creation of deepfakes (in addition to encryption cracking and numerous other unsavoury applications), the technology is also being harnessed to fight against this new form of virtual fraud. 

Earlier this month, Google, in partnership with Jigsaw and several thousand consenting actors, released a huge dataset of deepfake videos to help forensics departments benchmark their own deepfake detection software. 

Google’s blog on the topic said: “To make this dataset, over the past year we worked with paid and consenting actors to record hundreds of videos. Using publicly available deepfake generation methods, we then created thousands of deepfakes from these videos. The resulting videos, real and fake, comprise our contribution, which we created to directly support deepfake detection efforts. As part of the FaceForensics benchmark, this dataset is now available, free to the research community, for use in developing synthetic video detection methods.”
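As a hedged sketch of how a forensics team might use such a labelled corpus, the snippet below runs a detector over real and fake videos and scores its accuracy. The directory layout and the predict_is_fake function are hypothetical stand-ins, not part of Google's or FaceForensics' actual tooling.

```python
# Illustrative benchmarking loop for a deepfake detector. Paths and the
# detector function are hypothetical; swap in the system under test.
from pathlib import Path

def predict_is_fake(video_path: Path) -> bool:
    """Placeholder for the detector under test."""
    return True  # a trivial 'always fake' baseline

correct = total = 0
for label, folder in [("real", Path("dataset/real")), ("fake", Path("dataset/fake"))]:
    for video in folder.glob("*.mp4"):
        total += 1
        if predict_is_fake(video) == (label == "fake"):
            correct += 1

print(f"accuracy: {correct / total:.2%}" if total else "no videos found")
```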

Cybersecurity firms the world over are racing to improve their ability to detect and prevent deepfake scamming, which is "just the beginning of what could become a major threat for organisations in the future, regardless of business size. As machine learning escalates, social engineering attacks will also increase in complexity and the damage that businesses could face will be immeasurable," according to Lee Johnson, Chief Information Security Officer at Air Sec.
