Jan 22, 2021

AI is powering better recommendations on streaming services

AI and machine learning can gain a deep understanding of content and serve more relevant and intuitive recommendations to audiences

In 2021, for the first time in history, more people will pay for online streaming services than for traditional pay-TV. But the streaming market is now more crowded and competitive than ever before. In recent years, high-quality original programming has been the primary way streaming providers enhance and differentiate their services. Netflix is estimated to house around 1,500 TV series and 4,000 films, Amazon Prime Video is home to almost 20,000 titles, and a subscription to Disney+ adds around 7,000 more TV episodes and 500 films for viewers to choose from. 

However, high-quality programming alone is not enough to keep consumers subscribed to a service. One of the most common problems today’s audiences face is finding something they want to watch. As recently as 2017, viewers were spending almost an hour a day searching for content – a daily dilemma that often ends in endless scrolling before the consumer simply settles for something that vaguely interests them, rather than waste more time searching for something truly compelling. The reality is that offering a superior user experience is the key to a video streaming provider breaking away from the competition and becoming the go-to service. The only way a streaming service can achieve this is by using AI and machine learning to gain a deep understanding of its content and serve more relevant and intuitive recommendations to audiences.

Currently, many streaming services are using content discovery systems which often provide simplistic and inaccurate recommendations. Many content discovery systems rely on basic metadata, which broadly labels content based on data points such as genre, the actors starring in it, or even just keywords in content titles. Think of it like this: how likely is it that after watching Marley & Me, the family comedy starring Owen Wilson and Jennifer Aniston, the viewer will want to watch Marley, the biographical documentary about reggae icon Bob Marley?

The power of content

The output of recommendations will only be as good as the input. So when streaming platforms don’t know enough about their content, their recommendations will be poor. To take recommendation systems to the next level, streaming providers need to harness AI and machine learning technologies to gain a deep understanding of the content in a scalable way by analysing the audio and video file itself. 

Content analysis based on AI and machine learning can employ multiple neural networks to identify patterns in colour, audio, pace, stress levels, positive and negative emotions, camera movements and many other characteristics. It can then evaluate how similar each asset is to every other asset and combine this information with an AI engine that analyses a household’s watchlist, building a more advanced and nuanced understanding of each content asset and its relevance at any particular time.
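As an illustration of the similarity step described above, here is a minimal sketch in Python. It assumes the neural networks have already reduced each title to a small vector of feature scores (colour, pace, tension, emotion); the titles, vectors and dimensions are invented for illustration, and titles are ranked by cosine similarity:

```python
import math

# Hypothetical per-title feature vectors (colour, pace, tension, emotion),
# as might be produced by the content-analysis networks described above.
features = {
    "Marley & Me": [0.8, 0.3, 0.9, 0.1],
    "Marley":      [0.2, 0.4, 0.5, 0.3],
    "Old Dogs":    [0.7, 0.35, 0.85, 0.15],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def most_similar(title, catalogue, k=2):
    """Rank every other title in the catalogue by similarity to `title`."""
    scores = {
        other: cosine_similarity(catalogue[title], vec)
        for other, vec in catalogue.items()
        if other != title
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

In a real system the vectors would have far more dimensions and would be compared with approximate nearest-neighbour search rather than a full pairwise scan, but the principle is the same: titles close in feature space, not in metadata, get recommended together.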

A user that watches a disturbing horror film on a Friday night may well want something more light-hearted immediately after, and a recommendation system that is being fed this type of detailed content data can offer this level of intuition. Over time, it can analyse each viewer’s consumption patterns and data points – not just each device, but each individual user profile – and perfectly tailor recommendations for their watch preferences, suggesting the right content, at the right time.  

There’s a mood (category) for that

Understanding the content itself goes beyond measuring similarity; it opens the door to a whole range of new use-cases that traditional metadata cannot support. With the emotional data of the content coming from the audio/video file itself, we can automatically curate entire mood categories and channels for viewers. One of the easiest ways streaming providers can reduce the amount of time viewers spend looking for content is to categorise by mood. The type of content we want to watch is often strongly related to how we feel at that particular moment, so grouping content by mood makes the user experience more intuitive. An advanced AI engine can analyse the intrinsic emotional profile of each content asset to create nuanced categories. For example, moods can be categorised as “tense, fast-paced horror” or “light-hearted escapism”. Therefore, someone who has just got home from work after a stressful day will know to avoid content in the first category if they want to watch something to unwind. Additionally, in group settings when there’s a lot of debate over what to watch, it’s much easier to find something that interests everyone by asking, “what is everyone in the mood for?” and then finding the appropriate category.
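To make the mood-shelf idea concrete, here is a toy sketch of how emotional scores extracted from the audio/video file might be bucketed into categories like those named above. The thresholds, scores and titles are all invented for illustration:

```python
# Hypothetical emotional profile per title: tension and pace scores in [0, 1],
# as might be extracted from the audio/video file itself.
def mood_category(tension, pace):
    """Assign a title to a coarse mood shelf based on its emotional profile."""
    if tension > 0.7 and pace > 0.7:
        return "tense, fast-paced horror"
    if tension < 0.3 and pace < 0.5:
        return "light-hearted escapism"
    return "general drama"

catalogue = {
    "Midnight Manor": (0.9, 0.8),   # high tension, high pace
    "Seaside Picnic": (0.1, 0.2),   # low tension, low pace
}

shelves = {title: mood_category(*scores) for title, scores in catalogue.items()}
```

A production system would learn these boundaries from data rather than hand-tune thresholds, but the output is the same kind of artefact: a browsable shelf per mood, built from the content itself rather than from editorial tags.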

Right now, there’s a great number of streaming platforms available to consumers. Forward-thinking players that want to stand out from the crowd and build brand loyalty among consumers need to offer an enhanced user experience that is differentiated. The only way video streaming providers can achieve this is by using AI and machine learning to gain a deep understanding of their content, so they can better understand their customers and provide them with the best viewing experience possible. 

By Marcus Bergström, CEO of Vionlabs  


May 7, 2021

AI Shows its Value; Governments Must Unleash its Potential

His Excellency Omar bin Sultan Al Olama talks us through artificial intelligence's progress and potential for practical deployment in the workplace.

2020 has revealed just how far AI technology has come as it achieves fresh milestones in the fight against Covid-19. Google’s DeepMind helped predict the protein structure of the virus; AI-driven infectious disease tracker BlueDot spotted the novel coronavirus nine days before the World Health Organisation (WHO) first sounded the alarm. Just a decade ago, these feats were unfathomable.

Yet, we have only just scratched the surface of AI’s full potential. And it can’t be left to develop on its own. Governments must do more to put structures in place to advance the responsible growth of AI. They have a dual responsibility: fostering environments that enable innovation while ensuring the wider ethical and social implications are considered.

It is this balance that we are trying to achieve in the United Arab Emirates (UAE) to ensure government accelerates, rather than hinders, the development of AI. Just as every economy is transitioning at the moment, we see innovation as being vital to realising our vision for a post-oil economy. Our work in this space has highlighted three barriers in the government approach when it comes to realising AI’s potential.

First, addressing the issue of ignorance 

While much time is dedicated to talking about the importance of AI, there simply isn’t enough understanding of where it’s useful and where it isn’t. There are a lot of challenges to rolling out AI technologies, both practically and ethically. However, those enacting the policies too often don’t fully understand the technology and its implications. 

The Emirates is not exempt from this ignorance, but it is an issue we have been trying to address. Over the last few years, we have been running an AI diploma in partnership with Oxford University, teaching government officials the ethical implications of AI deployment. Our ambition is for every government ministry to have a diploma graduate, as it is essential to ensure policy decision-making is informed. 

Second, moving away from the theoretical

While this grounding in the moral implications of AI is critical, it is important to go beyond the theoretical. It is vital that experimentation in AI is allowed to happen for its own sake, and that ethical concerns are not allowed to stymie innovations that don’t yet exist. Indeed, many of these concerns – while well-founded – are only borne out in the practical deployment of these end-use cases and can’t be meaningfully discussed on paper.

If you take facial recognition as an example, looking at the issue in the abstract quickly leads to discussions over privacy concerns, with potential surveillance and intrusion by private companies or state authorities.

But what about the more specific issue of computer vision? Although part of the same field, the same moral quandaries do not arise, and the technology is already bearing fruit. In 2018, we developed an algorithmic solution that can be used in the detection and diagnosis of tuberculosis from chest X-rays. You can upload any image of a chest X-ray, and the system will identify if a person has the disease. Laws and regulations must be tailored to unique use-cases of AI, rather than lumping disparate fields together.

To create this culture that encourages experimentation, we launched the RegLab. It provides a safe and flexible legislative ecosystem that supports the utilisation of future technologies. This means we can actually see AI in practice before determining appropriate regulation, not the other way around. Regulation is vital to cap any unintended negative consequences of AI, but it should never be at the expense of innovation.

Finally, understanding the knock-on effects of AI

There needs to be a deeper, more nuanced understanding of AI’s wider impact. It is too easy to think the economic benefits and efficiency gains of AI must also come with negative social implications, particularly concern over job loss. 

But with the right long-term government planning, it’s possible to have one without the other; to maximise the benefits and mitigate potential downsides. If people are appropriately trained in how to use or understand AI, the result is a future workforce capable of working alongside these technologies for the better – just as computers complement most people’s work today.

We have to start this training as soon as possible in the Emirates. Through our Ministry of Education, we have rolled out an education programme to start teaching children about AI as young as five years old. This includes coding skills and ethics, and we are carrying this right through to higher education with the Mohamed bin Zayed University of Artificial Intelligence set to welcome its first cohort in January. We hope to create future generations of talent that can work in harmony with AI for the betterment of society, not the detriment.

AI will inevitably become more pervasive in society, digitisation will continue in the wake of the pandemic, and in time we will see AI’s prominence grow. But governments have a responsibility to society to ensure that this growth is matched with the appropriate understanding of AI’s impacts. We must separate the hype from the practical solutions, and we must rigorously interrogate AI deployment to ensure that it is used to enhance our existence. If governments can overcome these challenges and create the environments for AI to flourish, then we have a very exciting future ahead of us.
