Jul 2, 2020

OpenAI - Accelerating Artificial General Intelligence

Technology
AI
Kayleigh Shooter
3 min
How is OpenAI helping to accelerate safe artificial general intelligence in the technology industry? We find out below...

Business Overview:

OpenAI is an artificial intelligence research laboratory consisting of OpenAI LP and its parent organization, the non-profit OpenAI Inc. The company, a competitor to technology giant DeepMind, conducts research in the field of artificial intelligence (AI) with the stated goal of promoting and developing friendly AI in a way that benefits humanity as a whole. The organization was founded in San Francisco in late 2015 by Elon Musk, Sam Altman, and others, who collectively pledged US$1 billion. Musk resigned from the board in February 2018 but remained a donor. In 2019, OpenAI LP received a US$1 billion investment from Microsoft. In June 2020, OpenAI announced GPT-3, a language model trained on hundreds of billions of words from the Internet, and announced that an associated API, named simply "the API", would form the heart of its first commercial product. GPT-3 is aimed at natural-language question answering, but it can also translate between languages and coherently generate improvised text.
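To give a feel for what "the API" exposes, here is a minimal sketch of a question-answering call using the openai Python bindings as they worked around the API's 2020 launch (before the library's later rewrite); the engine name, prompt, and parameters are illustrative assumptions, not prescribed usage.

```python
# Minimal sketch of querying a completion-style language-model API.
# Assumes the `openai` Python package as released alongside the 2020 API;
# the engine name and parameters here are illustrative, not prescriptive.
import openai

openai.api_key = "YOUR_API_KEY"  # obtained from your OpenAI account

response = openai.Completion.create(
    engine="davinci",          # a GPT-3 model exposed by the API
    prompt="Q: What is the capital of France?\nA:",
    max_tokens=16,             # cap the length of the generated answer
    temperature=0.0,           # near-deterministic output for Q&A
    stop=["\n"],               # stop at the end of the answer line
)

print(response.choices[0].text.strip())  # e.g. "Paris"
```

The same completion interface covers the other uses mentioned above: changing the prompt to "Translate to French: ..." turns the call into translation, and a longer, open-ended prompt yields improvised text.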

Jukebox:

Jukebox is a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artistic styles. OpenAI is releasing the model weights and code, along with a tool to explore the generated samples. Automatic music generation dates back more than half a century. A prominent approach is to generate music symbolically in the form of a piano roll, which specifies the timing, pitch, velocity, and instrument of each note to be played. This has led to impressive results such as Bach chorales, polyphonic music with multiple instruments, and minute-long musical pieces.
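As a toy illustration of that symbolic representation (an assumption made for exposition, not Jukebox's or MuseNet's actual code), a piano-roll-style score can be modelled as a list of note events:

```python
# Toy piano-roll-style representation: each note is an event carrying the
# timing, pitch, velocity, and instrument described above. Illustrative only.
from dataclasses import dataclass

@dataclass
class NoteEvent:
    start_s: float      # when the note begins, in seconds
    duration_s: float   # how long it sounds
    pitch: int          # MIDI pitch number (60 = middle C)
    velocity: int       # loudness, 0-127
    instrument: str     # which instrument plays it

# A short toy melody in this symbolic form:
score = [
    NoteEvent(0.0, 0.5, 60, 90, "piano"),
    NoteEvent(0.5, 0.5, 64, 90, "piano"),
    NoteEvent(1.0, 1.0, 67, 100, "piano"),
]
```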

But symbolic generators have limitations: they cannot capture human voices or many of the subtler timbres, dynamics, and expressivity that are essential to music. A different approach is to model music directly as raw audio. Generating music at the audio level is challenging because the sequences are very long. A typical 4-minute song at CD quality (44 kHz, 16-bit) has over 10 million timesteps. For comparison, GPT-2 worked with contexts of about 1,000 timesteps, and OpenAI Five took tens of thousands of timesteps per game. Thus, to learn the high-level semantics of music, a model has to deal with extremely long-range dependencies.
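The arithmetic behind that figure is straightforward:

```python
# Back-of-envelope: how many timesteps is a 4-minute song at CD quality?
sample_rate_hz = 44_000      # the article's figure; CD audio is 44.1 kHz
duration_s = 4 * 60          # 4 minutes

timesteps = sample_rate_hz * duration_s
print(f"{timesteps:,}")      # 10,560,000 -> "over 10 million timesteps"
```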

MuseNet:

MuseNet is a deep neural network that can generate 4-minute musical compositions with 10 different instruments, and can combine styles from country to Mozart to the Beatles. MuseNet was not explicitly programmed with an understanding of music; instead, it discovered patterns of harmony, rhythm, and style by learning to predict the next token in hundreds of thousands of MIDI files. MuseNet uses the same general-purpose unsupervised technology as GPT-2, a large-scale transformer model trained to predict the next token in a sequence, whether audio or text. It uses the recompute and optimized kernels of the Sparse Transformer to train a 72-layer network with 24 attention heads, with full attention over a context of 4,096 tokens. This long context may be one reason it is able to remember long-term structure in a piece.
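To make that next-token setup concrete, here is a deliberately miniature sketch of the decoder-only prediction-and-sampling loop that models like GPT-2 and MuseNet are built around. Everything here is an assumption for illustration: the toy vocabulary of MIDI-event tokens, the tiny dimensions, and dense (not sparse) attention; MuseNet itself is the 72-layer, 24-head Sparse Transformer with a 4,096-token context described above.

```python
# Toy sketch of decoder-only next-token prediction and autoregressive
# sampling. Hyperparameters are illustrative, not MuseNet's.
import torch
import torch.nn as nn

VOCAB = 512        # toy vocabulary of MIDI-event tokens (assumed)
CONTEXT = 128      # toy context length (MuseNet: 4096)
D_MODEL, HEADS, LAYERS = 64, 4, 2

class TinyMusicLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, D_MODEL)
        self.pos = nn.Embedding(CONTEXT, D_MODEL)
        layer = nn.TransformerEncoderLayer(
            d_model=D_MODEL, nhead=HEADS, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=LAYERS)
        self.head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, ids):
        n = ids.size(1)
        x = self.tok(ids) + self.pos(torch.arange(n, device=ids.device))
        mask = nn.Transformer.generate_square_subsequent_mask(n)
        h = self.blocks(x, mask=mask)  # causal: each step sees only its past
        return self.head(h)            # logits over the next token

model = TinyMusicLM().eval()
seq = torch.randint(0, VOCAB, (1, 16))       # a stand-in "prompt" of tokens
with torch.no_grad():
    for _ in range(8):                       # greedy autoregressive sampling
        logits = model(seq[:, -CONTEXT:])
        nxt = logits[:, -1].argmax(-1, keepdim=True)
        seq = torch.cat([seq, nxt], dim=1)
print(seq.tolist())
```

The causal mask is what makes this next-token prediction: position t can attend only to positions at or before t, so sampling one token at a time extends the sequence coherently.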

Its mission:

OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which they mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. The company will attempt to directly build safe and beneficial AGI, but will also consider their mission fulfilled if their work aids others to achieve this outcome.


May 7, 2021

AI Shows its Value; Governments Must Unleash its Potential

AI
Technology
digitisation
Digital
His Excellency Omar bin Sultan Al Olama
4 min
His Excellency Omar bin Sultan Al Olama talks us through artificial intelligence's progress and potential for practical deployment in the workplace.

2020 has revealed just how far AI technology has come as it achieves fresh milestones in the fight against Covid-19. Google’s DeepMind helped predict the protein structure of the virus; the AI-driven infectious disease tracker BlueDot spotted the novel coronavirus nine days before the World Health Organisation (WHO) first sounded the alarm. Just a decade ago, these feats were unfathomable.

Yet, we have only just scratched the surface of AI’s full potential. And it can’t be left to develop on its own. Governments must do more to put structures in place to advance the responsible growth of AI. They have a dual responsibility: fostering environments that enable innovation while ensuring the wider ethical and social implications are considered.

It is this balance that we are trying to achieve in the United Arab Emirates (UAE) to ensure government accelerates, rather than hinders, the development of AI. Like every economy in transition at the moment, we see innovation as vital to realising our vision for a post-oil economy. Our work in this space has highlighted three barriers in the government approach to realising AI’s potential.

First, addressing the issue of ignorance 

While much time is dedicated to talking about the importance of AI, there simply isn’t enough understanding of where it’s useful and where it isn’t. There are a lot of challenges to rolling out AI technologies, both practically and ethically. However, those enacting the policies too often don’t fully understand the technology and its implications. 

The Emirates is not exempt from this ignorance, but it is an issue we have been trying to address. Over the last few years, we have been running an AI diploma in partnership with Oxford University, teaching government officials the ethical implications of AI deployment. Our ambition is for every government ministry to have a diploma graduate, as it is essential to ensure policy decision-making is informed. 

Second, moving away from the theoretical

While this grounding in the moral implications of AI is critical, it is important to go beyond the theoretical. It is vital that experimentation in AI is allowed to happen for its own sake, and that ethical concerns are not allowed to stymie innovations that don’t yet exist. Indeed, many of these concerns – while well-founded – are only borne out in the practical deployment of end-use cases and can’t be meaningfully discussed on paper.

If you take facial recognition as an example, looking at this issue in the abstract quickly leads to discussions of privacy concerns around potential surveillance and intrusion by private companies or authoritarian regimes.

But what about the more specific issue of computer vision? Although it is part of the same field, the same moral quandaries do not arise, and the technology is already bearing fruit. In 2018, we developed an algorithmic solution for the detection and diagnosis of tuberculosis from chest X-rays: you can upload any image of a chest X-ray, and the system will identify whether a person has the disease. Laws and regulations must be tailored to specific use cases of AI, rather than lumping disparate fields together.
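For illustration only, here is a sketch of how such a chest-X-ray classifier is typically put together, by fine-tuning a pretrained convolutional network. This is an assumption about the general technique, not the ministry's actual system; the model choice, labels, and file names are hypothetical.

```python
# Illustrative sketch of a binary chest-X-ray classifier built by
# fine-tuning a pretrained CNN. NOT the UAE system's implementation;
# model choice, labels, and paths are assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Reuse ImageNet features; replace the classifier head with 2 classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # [normal, tuberculosis]
model.eval()  # assumes fine-tuning on labelled X-rays has been done (not shown)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict(path: str) -> str:
    """Take any chest X-ray image file and return a label."""
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    return ["normal", "tuberculosis"][logits.argmax(1).item()]

print(predict("chest_xray.png"))  # hypothetical input file
```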

To create a culture that encourages experimentation, we launched the RegLab. It provides a safe and flexible legislative ecosystem that supports the use of future technologies. This means we can actually see AI in practice before determining appropriate regulation, not the other way around. Regulation is vital to cap any unintended negative consequences of AI, but it should never come at the expense of innovation.

Finally, understanding the knock-on effects of AI

There needs to be a deeper, more nuanced understanding of AI’s wider impact. It is too easy to assume the economic benefits and efficiency gains of AI must come with negative social implications, particularly concerns over job losses.

But with the right long-term government planning, it’s possible to have one without the other; to maximise the benefits and mitigate potential downsides. If people are appropriately trained in how to use or understand AI, the result is a future workforce capable of working alongside these technologies for the better – just as computers complement most people’s work today.

In the Emirates, we have set out to start this training as soon as possible. Through our Ministry of Education, we have rolled out an education programme that begins teaching children about AI as young as five years old. This includes coding skills and ethics, and we are carrying this right through to higher education, with the Mohamed bin Zayed University of Artificial Intelligence set to welcome its first cohort in January. We hope to create future generations of talent that can work in harmony with AI for the betterment of society, not its detriment.

AI will inevitably become more pervasive in society: digitisation will continue in the wake of the pandemic, and in time we will see AI’s prominence grow. But governments have a responsibility to society to ensure that this growth is matched with an appropriate understanding of AI’s impacts. We must separate the hype from the practical solutions, and we must rigorously interrogate AI deployment to ensure that it is used to enhance our existence. If governments can overcome these challenges and create the environments for AI to flourish, then we have a very exciting future ahead of us.
