OpenAI releases preparedness framework to improve AI safety

OpenAI has released details of its new preparedness framework that aims to mitigate AI risks and prioritise safe and responsible model development

OpenAI has this week (18th December 2023) released an initial version of its preparedness framework to support the safe and responsible development of its AI models.

As part of the company's expanded safety processes, a new safety advisory group has been put in place to make recommendations to leadership. Most notably, the board will retain veto power and can block the release of an AI model even if leadership declares it safe.

This news comes at the end of what has been a very eventful year for OpenAI. In addition to experiencing fast-paced development, the company has also seen turbulence in its executive board, with Sam Altman having been ousted and then reinstated as the company CEO in the space of one week in November 2023.

Advancing the study into AI risk

The ChatGPT developer says in its framework: “The study of frontier AI risks has fallen far short of what is possible and where we need to be. To address this gap and systematise our safety thinking, we are adopting the initial version of our Preparedness Framework.”

The framework describes OpenAI’s processes to track, evaluate, forecast and protect against risks posed by increasingly powerful AI models.

“By catastrophic risk, we mean any risk that could result in hundreds of billions of dollars in economic damage or lead to the severe harm or death of many individuals - this includes, but is not limited to, existential risk,” the company says.

As reported by The Washington Post, Sam Altman says that regulation designed to prevent harmful impacts of AI shouldn’t make it harder for smaller companies to compete. The report also highlighted that, at the same time, Altman has pushed the company to commercialise its technology to drive faster growth.

OpenAI’s decision to publicise its framework highlights how every company developing AI needs to hold itself to account - balancing business growth with responsibility. Given the immense popularity that ChatGPT has seen in just one year, the company clearly recognises the significance of ensuring its AI is developed safely.

Eliminating bias and mitigating global concerns

Its framework will focus on mitigating the misuse of current AI models and products like ChatGPT. The preparedness team will be led by Professor Aleksander Madry and will hire AI researchers, computer scientists, national security experts and policy professionals to monitor the technology, continually test it and warn the company if it believes any of its AI capabilities are becoming dangerous.

The Preparedness team will also map out the emerging risks of frontier models, with the company investing in capability evaluations and forecasting to better detect emerging risks. In particular, the company wishes to go beyond the hypothetical and work with data-driven predictions. 

In addition, the company has said that it will run evaluations and continually update ‘scorecards’ for its models. It will evaluate all of its frontier models to help the team assess their risks and develop protocols for added safety and outside accountability. This will include, for instance, preventing racial bias, to ensure that its AI systems do not develop to the point of causing harm.

Previously, the company helped form the Frontier Model Forum with Google, Anthropic and Microsoft, with the goal of ensuring AI is developed and harnessed responsibly.

The forum aims to help advance research into AI safety, identify safety best practices for frontier models and share knowledge with policymakers and academics to advance responsible AI development and leverage AI to address social challenges.
