May 17, 2020

Hate tweets and deepfake pee-tapes: the AI arms race

AI
Machine Learning
Harry Menear
6 min
Gigabit Magazine takes a deep dive into the evolving relationship between AI and law enforcement, and explores the deepfake arms race heralding the next step into the "post-truth" future

The adoption of artificial intelligence (AI) by law enforcement agencies around the world, like that of any new technology with the potential for suppression and control as well as safety, has had a mixed reception. 

In the UK, an AI-powered analytics tool called The Online Hate Speech Dashboard is being used to trawl hundreds of thousands of Brexit-related tweets per day, analysing them for speech that is Islamophobic, anti-Semitic, or directed against people from certain countries, people with disabilities, or LGBT+ groups, according to a New Scientist report. The tool, developed by researchers at Cardiff University, draws on the established correlation between spikes in hate speech on Twitter and a corresponding rise in crimes against minorities on London streets, with the aim of giving police advance warning in the lead-up to the October 31 Brexit deadline.
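The Cardiff team's actual models are not published alongside the report, but as a rough sketch of what tweet-level screening of this kind involves, a minimal classifier pipeline might look like the following (the training examples, labels and scoring step here are purely illustrative assumptions, not the Dashboard's real implementation):

```python
# Illustrative sketch only: the Online Hate Speech Dashboard's real models and
# training data are not public, so this toy pipeline just shows the general
# shape of scoring a stream of tweets for likely hate speech.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-labelled examples (1 = hateful, 0 = benign).
train_texts = [
    "send them all back, they don't belong in this country",
    "those people are vermin and should be locked up",
    "great turnout at the community centre this morning",
    "looking forward to the debate on the deal tonight",
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Score incoming tweets; anything above a chosen threshold would be flagged for review.
incoming = ["they don't belong here", "who's watching the match tonight?"]
for tweet, score in zip(incoming, model.predict_proba(incoming)[:, 1]):
    print(f"{score:.2f}  {tweet}")
```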

In the US, Guardian Alliance Technologies today announced a new partnership with Fama Technologies to provide law enforcement agencies with cutting-edge, AI-powered social media screening services.

"When hiring for positions of public trust, such as law enforcement personnel, it is critical that the hiring agency knows all that they can possibly know about the applicant prior to offering them a job," said Ryan Layne, Guardian's CEO. "However, it's impossible for investigators to manually review the complete online identity of every candidate and accurately determine whether or not there are risks associated with their behaviors. Not knowing what sort of social media activity an applicant may be engaged in represents a huge blind spot for law enforcement all across the country, and this powerful new technology eliminates that blind spot efficiently and cost-effectively.

"With Guardian's new digital screening service, which utilizes Fama's patented AI technology, agencies can now objectively evaluate social media behaviors and uncover racist, bigoted and/or other toxic behaviors before they're a problem for the agency and the communities they serve. It is our belief that the adoption of this type of technology will become a standard practice as law enforcement agencies continue to do all they can to maintain and protect the public trust. Fama was the clear choice for us, as their focus on AI and technology to solve these issues was far better than anything we were seeing in the market." 

AI used by law enforcement to monitor and combat racism and bigotry, protecting society's most vulnerable groups. Sounds good. 

But...

Using AI to generate predictive insights relating to criminal behaviour, and allowing law enforcement agencies to act on this information, has some serious drawbacks. According to a report released by the Royal United Services Institute for Defence and Security Studies, “the use of data analytics and algorithms for policing has numerous potential benefits, but also carries significant risks, including those relating to bias. This could include unfair discrimination on the grounds of protected characteristics, real or apparent skewing of the decision-making process, or outcomes and processes which are systematically less fair to individuals within a particular group.” In short, inherent racial, class-based and other forms of subjective bias are baked into the algorithms at the point of design, and can end up reinforcing existing injustices. 
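A toy illustration of that feedback loop, using entirely made-up numbers: if one neighbourhood has historically been patrolled twice as heavily, its offences are recorded twice as often, and a naive model trained on those records will direct yet more attention its way even when the underlying behaviour is identical.

```python
# Entirely synthetic example of bias feedback: two neighbourhoods offend at the
# same true rate, but one has historically been patrolled twice as heavily, so
# its offences are recorded twice as often. A naive "predictive" model trained
# on those records then allocates it even more future patrols.
import random

random.seed(0)
TRUE_OFFENCE_RATE = {"A": 0.05, "B": 0.05}   # identical underlying behaviour
PATROL_INTENSITY = {"A": 2.0, "B": 1.0}      # A is watched twice as hard

records = {
    n: sum(random.random() < TRUE_OFFENCE_RATE[n] * PATROL_INTENSITY[n]
           for _ in range(10_000))
    for n in ("A", "B")
}

total = sum(records.values())
for n, count in records.items():
    print(f"Neighbourhood {n}: {count} recorded incidents "
          f"-> {count / total:.0%} of future patrols")
```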

In a country where black people make up 33% of police killing victims, but just 13% of the overall population, the application of inherently prejudiced analytics could only compound the issue. 

On the other hand, advanced AI capabilities may be a necessary tool for combating a growing wave of criminal activity on new frontiers of law enforcement. 

The days of the masked bandit clutching a sackful of stolen cash, pursued by a lawman on horseback, are well and truly over. Released earlier this year, a report by the UK’s Office for National Statistics found that 1.83% of adults experienced a computer misuse crime, making it more common than violence (1.75%), theft (0.8%) or robbery (0.3%). 

AI-driven crime-nouveau

At the end of August, the Wall Street Journal reported that fraudsters used AI to mimic the voice of the chief executive of a large German corporation and called the CEO of the firm’s UK subsidiary, an energy business that remains nameless, demanding he transfer $243,000 to a company in Hungary. Because “the UK CEO recognised his boss’ slight German accent and the melody of his voice on the phone,” he paid, and the first example of AI deepfake-enabled “vishing” was a success. Earlier this month, the Verge noted that “cybersecurity firm Symantec says it has come across at least three cases of deepfake voice fraud used to trick companies into sending money to a fraudulent account.” The trend is growing. 

Deepfakes - AI-generated, scarily realistic edits to text, video and audio - are quickly being recognised as an especially dangerous tool in the hands of criminals. In a socio-political environment in which the truth is called into question more often than President Trump (who has his own deepfake version of the infamous “pee tape”) advocates for the destruction of the free press, deepfakes have the potential to do serious damage to the credibility of almost any source.

Want to be really freaked out? Check out this site, which generates a new deepfake face using AI every time you refresh the screen. 

Last year, researchers at the University of California, Berkeley released a highly convincing video and accompanying paper demonstrating their work on “full body” deepfakes. Until now, one of the easiest ways to spot a fake person has been that the footage is cropped closely around the face, as AIs have trouble rendering backgrounds and bodies properly. 

“It’s not the big shots, the big politicians, and the big famous guys who are the most threatened,” says Dutch journalist Tom Van de Weghe, who ran afoul of the Chinese government’s propaganda machine a few years ago. “It’s the normal people—people like you, me, female journalists, and sort of marginalized groups that could become or are already becoming the victims of deepfakes.”

The article continues, noting that Van de Weghe “thinks full body deepfakes could completely alter the impact of events like the protests in Hong Kong in troubling new ways—such content could make it seem as if protesters are acting violently or portray law enforcement’s crackdown in a positive light.”

Taking on the fakes

While AI is a powerful tool in the creation of deepfakes (in addition to encryption cracking and numerous other unsavoury applications), the technology is also being harnessed to fight against this new form of virtual fraud. 

Earlier this month, Google, in partnership with Jigsaw and a group of paid, consenting actors, released a large dataset of deepfake videos to help forensics teams benchmark their own deepfake detection software. 

Google’s blog on the topic said: “To make this dataset, over the past year we worked with paid and consenting actors to record hundreds of videos. Using publicly available deepfake generation methods, we then created thousands of deepfakes from these videos. The resulting videos, real and fake, comprise our contribution, which we created to directly support deepfake detection efforts. As part of the FaceForensics benchmark, this dataset is now available, free to the research community, for use in developing synthetic video detection methods.”
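Benchmarking against such a dataset is conceptually simple: run a detector over labelled real and fake clips and measure how well its scores separate the two classes. Here is a minimal sketch, in which the detect() stub, the directory layout and the choice of AUC as the metric are all assumptions rather than anything specified by the FaceForensics release:

```python
# Illustrative benchmarking harness: the detect() stub, the real/fake directory
# layout and the AUC metric are assumptions for the sake of the sketch.
from pathlib import Path
from sklearn.metrics import roc_auc_score

def detect(video_path: Path) -> float:
    """Plug in your own detector: return the estimated probability of a deepfake."""
    raise NotImplementedError

def benchmark(dataset_root: str) -> float:
    labels, scores = [], []
    for folder, label in (("real", 0), ("fake", 1)):
        for clip in sorted(Path(dataset_root, folder).glob("*.mp4")):
            labels.append(label)
            scores.append(detect(clip))
    # Area under the ROC curve: 1.0 = perfect separation, 0.5 = random guessing.
    return roc_auc_score(labels, scores)

# print(benchmark("deepfake_benchmark/"))
```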

Cybersecurity firms the world over are racing to improve their ability to detect and prevent deepfake scamming, which is “just the beginning of what could become a major threat for organisations in the future, regardless of business size. As machine learning escalates, social engineering attacks will also increase in complexity and the damage that businesses could face will be immeasurable,” according to Lee Johnson, Chief Information Security Officer at Air Sec.


Jun 11, 2021

Google AI Designs Next-Gen Chips In Under 6 Hours

Google
AI
Manufacturing
semiconductor
3 min
Google AI’s deep reinforcement learning algorithms can optimise chip floor plans exponentially faster than their human counterparts

In a paper published in Nature on Wednesday, the company announced that its AI can design chip floor plans in under six hours; human engineers currently take months to design and lay out the intricate chip wiring. Although the tech giant has been working on the technology in silence for years, this is the first time that AI-optimised chips have hit the mainstream—and the first time the company will sell the result as a commercial product. 

 

“Our method has been used in production to design the next generation of Google TPU (tensor processing unit) chips”, wrote the paper’s authors, Azalia Mirhoseini and Anna Goldie. The TPU v4 chips are the fastest Google system ever launched. “If you’re trying to train a large AI/ML system, and you’re using Google’s TensorFlow, this will be a big deal”, said Jack Gold, President and Principal Analyst at J.Gold Associates.

 

Training the Algorithm 

In a process called reinforcement learning, Google engineers used a set of 10,000 chip floor plans to train the AI. Each example chip was assigned a score based on its efficiency and power usage, which the algorithm then used to distinguish between “good” and “bad” layouts. The more layouts it examines, the better it becomes at generating its own. 
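Google's actual system learns a placement policy with deep reinforcement learning over a graph representation of the chip's netlist, which is well beyond a few lines of code. As a heavily simplified stand-in for the core idea of scoring layouts and keeping the better ones, the toy placer below shuffles made-up blocks on a grid and accepts changes that reduce total wirelength (the blocks, nets and wirelength-only score are illustrative assumptions, not the paper's method):

```python
# Toy stand-in for scoring chip floor plans: place blocks on a grid and keep
# random changes that reduce total wirelength. Google's real approach uses deep
# reinforcement learning; this sketch only illustrates the "score a layout,
# prefer better layouts" loop with made-up blocks and nets.
import random

random.seed(1)
GRID = 10
BLOCKS = ["cpu", "cache", "io", "dsp", "mem"]
NETS = [("cpu", "cache"), ("cpu", "mem"), ("io", "dsp"), ("dsp", "mem")]

def wirelength(placement):
    # Manhattan distance between connected blocks: lower is better.
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in NETS)

placement = {b: (random.randrange(GRID), random.randrange(GRID)) for b in BLOCKS}
best = wirelength(placement)

for _ in range(5_000):
    block = random.choice(BLOCKS)
    old = placement[block]
    placement[block] = (random.randrange(GRID), random.randrange(GRID))
    new = wirelength(placement)
    if new <= best:
        best = new              # keep the improved layout
    else:
        placement[block] = old  # revert the worse one

print("final wirelength:", best, placement)
```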

 

Designing floor plans, or the optimal layouts for a chip’s sub-systems, takes intense human effort. Yet floorplanning is similar to an elaborate game: it has rules, patterns, and logic. In fact, just like chess or Go, it is an ideal task for machine learning. Machines, after all, don’t carry the same constraints or in-built assumptions that humans do; they follow logic, not preconceptions of what a chip should look like. And this has allowed AI to optimise the latest chips in a way we never could. 

 

As a result, AI-generated layouts look quite different to what a human would design. Instead of being neat and ordered, they look slightly more haphazard. Blurred photos of the carefully guarded chip designs show a slightly more chaotic wiring layout—but no one is questioning its efficiency. In fact, Google is starting to evaluate how it could use AI in architecture exploration and other cognitively intense tasks. 

 

Major Implications for the Semiconductor Sector 

Part of what’s impressive about Google’s breakthrough is that it could throw Moore’s Law, the axiom that the number of transistors on a chip doubles roughly every two years, out the window. The physical difficulty of squeezing more CPUs, GPUs, and memory onto a tiny silicon die will still exist, but AI optimisation may help speed up chip performance.

 

Any chance that AI can help speed up current chip production is welcome news. Though the U.S. Senate recently passed a US$52bn bill to supercharge domestic semiconductor supply chains, its largest tech firms remain far behind. According to Holger Mueller, principal analyst at Constellation Research, “the faster and cheaper AI will win in business and government, including with the military”. 

 

All in all, AI chip optimisation could allow Google to pull ahead of its competitors such as AWS and Microsoft. And if we can speed up workflows, design better chips, and use humans to solve more complex, fluid, wicked problems, that’s a win—for the tech world and for society. 

 

 
