Frontier Model Forum: How will responsible AI body work?
As discussions around the regulation of AI continue to take place, last week four of the leading players - Anthropic, Google, Microsoft, and OpenAI - announced they would be founding members of an industry body aimed at overseeing the technology’s safe development.
As reported by AI Magazine, the Frontier Model Forum aims to advance research into AI safety; identify safety best practices for frontier models - large-scale machine-learning models that exceed the capabilities of the most advanced existing models; and share knowledge with policymakers and academics to advance responsible AI development and leverage AI to address social challenges.
With increasing numbers of workers using generative AI, Technology Magazine looks at how the forum will work and what its founding members hope it will achieve.
Industry body to advance AI safety research and identify best practices for responsible development
As the world embraces the transformative potential of AI, businesses across industries have doubled down on their commitment to harness its capabilities.
But as some of the world’s biggest companies invest huge sums in the technology - and generative AI becomes an increasingly common boardroom topic - work is needed to address the social, ethical and security risks it poses.
“We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation,” commented Kent Walker, President of Global Affairs at Google & Alphabet. “We're all going to need to work together to make sure AI benefits everyone.”
Brad Smith, Vice Chair & President at Microsoft added: “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
In a blog post the organisations set out the core objectives for the forum:
- Advancing AI safety research to promote responsible development of frontier models, minimise risks, and enable independent, standardised evaluations of capabilities and safety.
- Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
- Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks.
- Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.
“Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance,” said Anna Makanju, Vice President of Global Affairs at OpenAI. “It is vital that AI companies–especially those working on the most powerful models–align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible.
“This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.”
“Anthropic believes that AI has the potential to fundamentally change how the world works,” described Dario Amodei, CEO at Anthropic. “We are excited to collaborate with industry, civil society, government, and academia to promote safe and responsible development of the technology. The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety.”
How the Frontier Model Forum will work
According to the founding members, the forum will establish an Advisory Board in the coming months to help guide its strategy and priorities, representing a diversity of backgrounds and perspectives.
The founding companies will also establish key institutional arrangements - including a charter, governance and funding - with a working group and executive board to lead these efforts. The companies plan to consult with civil society and governments in the coming weeks on the design of the Forum and on meaningful ways to collaborate.
“The Frontier Model Forum welcomes the opportunity to help support and feed into existing government and multilateral initiatives such as the G7 Hiroshima process, the OECD’s work on AI risks, standards, and social impact, and the US-EU Trade and Technology Council,” the companies said.
“The Forum will also seek to build on the valuable work of existing industry, civil society and research efforts across each of its workstreams. Initiatives such as the Partnership on AI and MLCommons continue to make important contributions across the AI community, and the Forum will explore ways to collaborate with and support these and other valuable multi-stakeholder efforts.”