The EU AI Act may be put on hold: regulation of foundation models is reportedly the key sticking point

According to sources cited by the media, the EU Artificial Intelligence (AI) Act is at risk of being shelved because negotiators are struggling to reach a consensus on how to regulate systems like ChatGPT.

After two years of negotiations, the European Parliament passed the draft AI Act in June this year. The act is now in three-way negotiations among the member states, the Parliament, and the European Commission, with the aim of finalizing its text. If formally approved, it would become the world’s first comprehensive regulation of AI.

Negotiators are scheduled to meet on Friday local time for crucial discussions ahead of the final talks on December 6th. Thierry Breton, Commissioner for the Internal Market, and Dragoș Tudorache, both negotiators on the AI Act, have expressed hope that it will be approved by the end of this year.

However, sources say that the regulation of foundation models has become a major obstacle in the AI Act negotiations.

Some experts and legislators have proposed a tiered approach to regulating foundation models, which the EU defines as models with over 45 million users. Chatbots such as ChatGPT would be classed as highly capable foundation models and would carry additional obligations, such as regular audits to identify potential vulnerabilities.

However, some lawmakers argue that smaller-scale models may pose similar risks.

France, Germany, and Italy are reportedly the main obstacle to an agreement: they favor letting the companies that develop generative AI models self-regulate rather than imposing strict rules on them.

Sources say that at a meeting in Rome on October 30th, France persuaded Italy and Germany to back this position. Until then, negotiations had progressed smoothly, with compromises reached on other contentious areas such as the regulation of high-risk AI systems.

Breton, Members of the European Parliament (MEPs), and dozens of AI researchers oppose corporate self-regulation. This week, Geoffrey Hinton and other researchers published an open letter warning that self-regulation may fall far short of the safety standards required for foundation models.

Sources also say that other unresolved issues in the negotiations include the definition of AI, fundamental-rights impact assessments, law enforcement exceptions, and national security exceptions.

With the next European Parliament elections scheduled for next year, the act may be shelved if lawmakers fail to reach a consensus by the end of this year.

Mark Brakel, Policy Director at the Future of Life Institute, said: “If you had asked me six or seven weeks ago, I would have said we were seeing compromises on all key issues, but now it’s become much more difficult.”
