
15 December 2023

EU tentative political agreement on proposed AI regulation (AI Act)

In April 2021, the European Commission published a proposal for an AI regulation. This was the world's first proposal for comprehensive horizontal regulation of AI. On 8 December 2023, the EU (European Parliament, Member States and European Commission) agreed on the content of the European AI Act.
What does the AI Act mean?
 
The AI Act offers opportunities for developers and entrepreneurs as well as safeguards for European citizens. The agreement sets out basic rules for how AI may be used in products and services, requirements for potentially risky applications, and support for developers to build products and services within these frameworks. This gives AI in Europe and the Netherlands an important boost, aimed at economic opportunities as well as public values.

The AI Act sets different requirements for different categories of AI. A total of five categories have been identified, each with its own requirements. The agreement defines what counts as a high-risk application and what requirements such applications must meet. In addition, some applications will soon be prohibited, such as manipulative techniques aimed at vulnerable groups (e.g. children). It must also be clear, for example, when you are communicating with an AI application such as a chatbot. Low-risk AI systems face no additional requirements. For the really big (system) AI models, there will be separate requirements and European supervision.
 
Who does the AI Act affect?
 
From the perspective of citizens, it means that AI systems offered in Europe can be trusted and that risky products with embedded AI become safer to use. In addition, parties applying AI will be regulated to prevent risks.

For businesses, the Act brings clarity. Once the AI Act enters into force, companies deploying AI in high-risk categories will be able to assume that the application is responsible and of high quality, because it has been tested against European standards. For companies developing high-risk AI, it will be clear which requirements (standards) the AI system must meet. Standardisation organisations have been asked to produce a concrete technical elaboration of these standards, so that reliable AI products can be sold throughout the European market. Supervision should ensure that AI systems that do not meet the requirements are taken off the market, and providers can be fined for non-compliance.
 
Deployment of regulatory sandboxes
 
A so-called AI regulatory sandbox will be deployed: a controlled environment in which developers can not only experiment with innovative AI products and services, but also learn which regulatory frameworks apply and how they should be applied. The regulator is closely involved and will provide feedback to the developer, who can then make the AI application comply with the standards already in the development phase. This promotes the development of innovative European AI applications, which is crucial for boosting the competitive position of entrepreneurs and organisations in the Netherlands and Europe and for addressing societal challenges with AI.

Support from the Dutch AI Coalition
 
Kees van der Klauw, Coalition manager NL AIC: "AI is developing rapidly as a system technology and has major social and economic implications. As NL AIC, we support the EU agreement and see the AI Act not only as a protection against possible irresponsible applications of AI, but also as an opportunity for the Netherlands and Europe to come up with strong AI applications that give substance to responsible and human-centred AI. But to do so, we have to work quickly and in cooperation."
 
"Sandboxes here are the means to develop AI applications that comply with legal frameworks by design and can be scaled up quickly. These sandboxes will then have to be low-threshold and practically usable for companies and especially startups/scale-ups. The NL AIC is happy to help think about the practical frameworks for an AI regulatory sandbox," said Kees van der Klauw.

As an entrepreneur or organisation, how can you prepare for the AI Act? The NL AIC believes it is important for businesses and other organisations to prepare properly for this complex topic. The rapid development of AI, combined with the evolving state of the AI regulation, raises many questions.

The NL AIC will guide its participating organisations through the European AI Act and related laws and regulations and their application. This means sharing best practices, experiences and insights, not only with regard to the AI Act, but also to other European digitalisation regulations. The AI Act cannot be separated from the GDPR (in the Netherlands, the AVG), the Data Act, the Data Governance Act and the Cyber Resilience Act.

What does the follow-up look like?
 
The agreement has yet to be formally approved by the European Parliament. The law will enter into force two years after its publication; some parts will apply a year earlier.
 
More information
 
As the Dutch AI Coalition, we would like to keep you informed about developments on the AI Act in the future. Read the news release from the central government or the news release from the EU for more information. Consult the European Q&A overview or visit the Working Group on Human-centred AI page.

This article was published on the NL AIC website (in Dutch).
© NL AIC
 
 


Similar news items

 CuspAI Introduces Itself at LAB42

6 September 2024


On September 5, 2024, Max Welling and Chad Edwards, founders of CuspAI, presented their innovative company during the IvI coffee & cake gathering. 


 Advanced AI for Surveillance Robots: A Collaborative Project by Dutch Institutions

5 September 2024


A consortium of leading Dutch research institutions and government agencies, including TU Delft, the University of Amsterdam, TNO, and the Royal Netherlands Marechaussee, has launched an ambitious project aimed at developing advanced artificial intelligence (AI) for surveillance robots. Officially initiated on September 4, the OpenBots consortium focuses on creating AI systems designed to assist human security officers in various security settings.


NeuroAI: Charlotte Frenkel explores the future of AI inspired by the human brain

5 September 2024


With the award of an AiNed Fellowship grant, Dr. Charlotte Frenkel from TU Delft delves into neuromorphic computing, pioneering research aimed at creating energy-efficient and powerful AI systems inspired by the human brain. This research bridges AI and neuroscience to develop faster, more energy-efficient, and smarter computing systems.
