The AI safety debate: Proposed EU AI Act guidelines released
The EU has reached a landmark provisional agreement on the EU AI Act.
The draft regulations aim to ensure that AI systems are safe and respect people's fundamental rights, and set out fines for those who break the new rules.
The agreement comes after 36 hours of talks and negotiations over rules for AI systems like ChatGPT. The European Parliament will vote on the proposals next year, with legislation not expected to take effect before 2025.
According to the BBC, the US, UK and China are now working to publish their own guidelines.
A starting point to help enterprises consider their use of AI
President of the European Commission Ursula von der Leyen released a statement saying: “I very much welcome today's political agreement by the European Parliament and the Council on the Artificial Intelligence Act.
“AI is already changing our everyday lives. And this is just the beginning. Used wisely and widely, AI promises huge benefits to our economy and society. [The] agreement focuses regulation on identifiable risks, provides legal certainty and opens the way for innovation in trustworthy AI.”
She continues: “By guaranteeing the safety and fundamental rights of people and businesses, the Act will support the human-centric, transparent and responsible development, deployment and take-up of AI in the EU.”
The European Parliament defines AI as software that can “generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.” This includes AI systems like ChatGPT and DALL-E, for example.
Under the current proposals, the EU AI Act would introduce rules on high-impact general-purpose AI models that could pose systemic risk, as well as on high-risk AI systems.
The EU also proposes a revised system of governance with some enforcement powers at EU level, along with the possibility of law enforcement using “remote biometric identification” in public spaces, subject to safeguards.
Ultimately, the proposed regulations are designed to offer better protection of rights by obliging those who deploy high-risk AI systems to undergo fundamental rights impact assessments before putting those systems to use.
They would also take into account situations where AI systems can be used for many different purposes, or where general-purpose AI technology is integrated into another high-risk system. Specific rules have also been proposed for foundation models, which would have to comply with “specific transparency obligations” before being placed on the market.
Proposed penalties for those who ‘break the rules’
The EU has proposed fines for violations of the AI Act, set as either a percentage of the company’s annual turnover in the previous financial year or a predetermined amount, whichever is higher. For violations involving banned AI applications, for instance, this would mean €35m (US$37.6m) or 7% of turnover.
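The “whichever is higher” rule can be sketched as a simple calculation. This is purely an illustration: the function name and parameters are hypothetical, and the figures are the €35m / 7% amounts reported for banned-application violations.

```python
def proposed_fine(annual_turnover_eur: float,
                  fixed_amount_eur: float = 35_000_000,
                  turnover_pct: float = 0.07) -> float:
    """Illustrative sketch of the proposed penalty formula:
    the higher of a fixed amount or a percentage of the
    company's turnover in the previous financial year."""
    return max(fixed_amount_eur, annual_turnover_eur * turnover_pct)

# For a company with €1bn annual turnover, 7% (€70m) exceeds the €35m floor,
# so the percentage-based figure applies.
print(proposed_fine(1_000_000_000))
```

For smaller companies, where 7% of turnover falls below €35m, the fixed amount would apply instead.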
In addition, the proposed agreement states that any natural or legal person may make a complaint to the relevant market surveillance authority concerning non-compliance with the AI Act.
It will certainly be interesting to see how these proposed regulations are received over the next year. By regulating AI systems that could cause bias, or that are unsafe, these proposals could significantly transform the AI ethics landscape.
More globally, tech giants have already been discussing how AI can be developed in a more regulated way to promote safe use. IBM and Meta, for example, announced an AI Alliance in December 2023, just before the EU AI Act agreement, to advocate for more open-source AI.
Bernd Greifeneder, Founder and CTO of Dynatrace, offers insight into what the EU may need to consider as it formalises the specific aspects of the regulations.
He says: “The EU’s provisional agreement is a promising first step on what is likely to be a long road ahead. There is no doubt that global cooperation between both governments and technologists will be a cornerstone for the future of AI-led innovation. Alongside the implications for the use of the technology in law enforcement, much of the focus is on the regulation of general-purpose AI models, such as ChatGPT.”
He continues: “As the finer points of the regulations are hammered out over the coming weeks, the EU will need to acknowledge that not all AI is created equal. The regulatory framework will therefore need to establish internationally-defined trust, rules, and risk profiles for each class of AI to govern the ways they can be used.
“The EU AI Act will get off to a great start if it can provide clarity around these key differences between AI models.”