May 17, 2020

Wimbledon's 2017 video output to be served up by IBM's AI

AI
Romily Broad
3 min

Wimbledon, the world's oldest tennis tournament, will serve up a cutting-edge media experience powered by artificial intelligence in 2017, courtesy of its partnership with IBM.

The US tech giant will use its third year as a sponsor of the tournament to further embed its Watson AI platform into operations at the All England Lawn Tennis Club (AELTC) in July. As well as enhancing its provision of data insights into matters on and off the court, 2017 will see the addition of automatic video editing to Watson's task list.

With three matches per day across six courts, your average Wimbledon-employed human is faced with hundreds of hours of footage from which to curate the best highlights for distribution to fans and broadcasters. Doing that in real time, as today's always-on audience demands, is a costly challenge.

Enter IBM, which says its Watson scientists have built a system utilising "analysis of crowd noise, players’ movements and match data" to automatically compile highlights for the use of Wimbledon's editorial team. The machine has learned how to do it by ingesting thousands of hours of court action, and IBM says it will even monitor the tenor of social media activity to inform its choices.
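
IBM has not published the internals of that system, but as a rough illustration of the idea, the Python sketch below blends a few per-point signals of the kind the company describes (crowd noise, player movement, match context) into a single "excitement" score and ranks clips by it. The class, field names and weights here are all hypothetical, not IBM's actual Watson pipeline.

```python
# Illustrative sketch only, not IBM's actual Watson pipeline: blend a few
# per-point signals into a single "excitement" score and surface the top clips.
from dataclasses import dataclass

@dataclass
class PointClip:
    start_s: float        # clip start time in the court feed (seconds)
    crowd_noise: float    # normalised crowd-noise level, 0..1
    player_motion: float  # normalised player-movement intensity, 0..1
    match_weight: float   # importance of the point (e.g. break point), 0..1

def excitement(clip: PointClip) -> float:
    """Weighted blend of the signals the article describes; weights are guesses."""
    return 0.4 * clip.crowd_noise + 0.3 * clip.player_motion + 0.3 * clip.match_weight

def pick_highlights(clips: list[PointClip], top_n: int = 5) -> list[PointClip]:
    """Return the highest-scoring clips for a human editor to review."""
    return sorted(clips, key=excitement, reverse=True)[:top_n]

if __name__ == "__main__":
    demo = [
        PointClip(12.0, 0.9, 0.8, 1.0),   # break point with a roaring crowd
        PointClip(340.5, 0.3, 0.4, 0.2),  # routine hold
        PointClip(910.2, 0.7, 0.9, 0.6),  # long rally
    ]
    for clip in pick_highlights(demo, top_n=2):
        print(f"clip at {clip.start_s:.1f}s scores {excitement(clip):.2f}")
```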

The match data component itself is another string to the Watson bow. Separately, its APIs will tap a baffling array of real-time metrics, from radar guns to court-side statisticians, to illustrate and present real-time insight to fans via a service dubbed IBM SlamTracker.

It's all part of a Wimbledon drive in recent years to embrace digital innovation, not just as a means of shedding a stuffy image, but to seize control of its commercial destiny longer term.

Next year's tournament will be the first in which the AELTC itself becomes the lead broadcaster, eschewing an 80-year arrangement with the BBC, and the club sees this year's showpiece as an opportunity to test the scope of the technologies it can deploy as it takes control.

Where TV infrastructure has historically been determined by the requirements of an external lead broadcaster, it will now be decided by the AELTC itself. It's a move aimed at diversifying the coverage options available to international audiences and broadcasting rights holders, and in so doing unlocking more lucrative commercial opportunities around the world.

“We are excited for this year’s developments, yet again improving and developing our digital strategy for fans to make the most of their experience year-on-year," said Alexandra Willis, Head of Communications, Content & Digital at the AELTC.

"In an increasingly competitive sporting landscape, IBM’s technology innovations are critical to continuing our journey towards a great digital experience that ensures we connect with our fans across the globe – wherever they may be watching and from whatever device that may be.”

For IBM, it's another opportunity to demonstrate the ability of its AI technologies to translate vast amounts of often unstructured data into meaningful commercial results.

"Cognitive computing is the next revolution in sports technology and working with us, Wimbledon is exposed to the foremost frontier of what technology can do, as we work together to achieve the best possible outcome for the brand and the event," added Sam Seddon, who leads IBM's work with Wimbledon.

"Cognitive is now pervasive from driving the fan experience, to providing efficiency for digital editors to IT operations.”


Jun 11, 2021

Google AI Designs Next-Gen Chips In Under 6 Hours

Google
AI
Manufacturing
semiconductor
3 min
Google AI’s deep reinforcement learning algorithms can optimise chip floor plans far faster than their human counterparts

In a paper published in Nature on Wednesday, Google announced that its AI can design chips in less than six hours; human engineers currently take months to design and lay out the intricate chip wiring. Although the tech giant has been working quietly on the technology for years, this is the first time that AI-optimised chips have hit the mainstream, and the first time the company will sell the result as a commercial product.

 

“Our method has been used in production to design the next generation of Google TPU (Tensor Processing Unit) chips,” wrote the paper’s authors, Azalia Mirhoseini and Anna Goldie. The TPU v4 chips are the fastest Google system ever launched. “If you’re trying to train a large AI/ML system, and you’re using Google’s TensorFlow, this will be a big deal,” said Jack Gold, President and Principal Analyst at J.Gold Associates.

 

Training the Algorithm 

In a process called reinforcement learning, Google engineers used a set of 10,000 chip floor plans to train the AI. Each example chip was assigned a score of sorts based on its efficiency and power usage, which the algorithm then used to distinguish between “good” and “bad” layouts. The more layouts it examined, the better it became at generating new layouts of its own.
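
Google's actual system is far more sophisticated than this, but the toy Python sketch below shows the general reward-driven idea: a simple REINFORCE-style policy-gradient loop that learns to place a handful of invented blocks on a small grid, scoring each sampled layout by its total wire length. Every name, size and constant here is made up for illustration.

```python
# Toy sketch only, not Google's production method: a REINFORCE-style loop that
# learns to place a few hypothetical blocks on a small grid, rewarding layouts
# with shorter total (Manhattan) wire length between connected blocks.
import numpy as np

rng = np.random.default_rng(0)

GRID = 8                                           # 8x8 placement grid (toy scale)
MACROS = 5                                         # number of blocks to place
NETS = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]    # invented connectivity

logits = np.zeros((MACROS, GRID * GRID))           # policy: softmax over cells per block

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sample_layout():
    """Sample one (block -> grid cell) placement from the current policy."""
    return [rng.choice(GRID * GRID, p=softmax(logits[m])) for m in range(MACROS)]

def reward(layout):
    """Negative total wire length: shorter wiring earns a higher score."""
    xy = [(c // GRID, c % GRID) for c in layout]
    wire = sum(abs(xy[a][0] - xy[b][0]) + abs(xy[a][1] - xy[b][1]) for a, b in NETS)
    return -float(wire)

baseline, lr = 0.0, 0.1
for step in range(2000):
    layout = sample_layout()
    r = reward(layout)
    baseline = 0.95 * baseline + 0.05 * r          # running average as a baseline
    advantage = r - baseline
    for m, cell in enumerate(layout):              # REINFORCE: gradient of log softmax
        p = softmax(logits[m])
        grad = -p
        grad[cell] += 1.0
        logits[m] += lr * advantage * grad

final = sample_layout()
print("sampled layout:", final, "reward:", reward(final))
```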

 

Designing floor plans, or the optimal layouts for a chip’s sub-systems, takes intense human effort. Yet floorplanning is similar to an elaborate game. It has rules, patterns, and logic. In fact, just like chess or Go, it’s the ideal task for machine learning. Machines, after all, don’t follow the same constraints or in-built conditions that humans do; they follow logic, not preconception of what a chip should look like. And this has allowed AI to optimise the latest chips in a way we never could. 

 

As a result, AI-generated layouts look quite different to what a human would design. Instead of being neat and ordered, they look slightly more haphazard. Blurred photos of the carefully guarded chip designs show a slightly more chaotic wiring layout—but no one is questioning its efficiency. In fact, Google is starting to evaluate how it could use AI in architecture exploration and other cognitively intense tasks. 

 

Major Implications for the Semiconductor Sector 

Part of what’s impressive about Google’s breakthrough is that it could throw Moore’s Law, the axiom that the number of transistors on a chip doubles roughly every two years, out the window. The physical difficulty of squeezing more CPUs, GPUs, and memory onto a tiny silicon die will still exist, but AI optimisation may help speed up chip performance.

 

Any chance that AI can help speed up current chip production is welcome news. Though the U.S. Senate recently passed a US$52bn bill to supercharge domestic semiconductor supply chains, the country's largest tech firms remain far behind. According to Holger Mueller, principal analyst at Constellation Research, “the faster and cheaper AI will win in business and government, including with the military”.

 

All in all, AI chip optimisation could allow Google to pull ahead of its competitors such as AWS and Microsoft. And if we can speed up workflows, design better chips, and use humans to solve more complex, fluid, wicked problems, that’s a win—for the tech world and for society. 

 

 
