15 December 2023
EU tentative political agreement on proposed AI regulation (AI Act)
In April 2021, the European Commission published a proposal for an AI regulation. This was the world's first proposal for comprehensive horizontal regulation of AI. On 8 December 2023, the EU (European Parliament, Member States and European Commission) agreed on the content of the European AI Act.
What does the AI Act mean?
The AI Act offers opportunities for developers and entrepreneurs as well as safeguards for European citizens. The agreement lays down basic rules on how AI may be used in products and services, requirements for potentially risky applications, and support for developers building products and services within these frameworks. This gives AI in Europe and the Netherlands an important boost, aimed at economic opportunities as well as public values.
The AI Act sets different requirements for different categories of AI. A total of five categories have been identified, each with its own requirements. The agreement specifies what counts as a high-risk application and what requirements such applications must meet. In addition, some applications will soon be prohibited, such as manipulative techniques aimed at vulnerable groups (e.g. children). It must also be made clear, for example, when you are communicating with an AI application such as a chatbot. Low-risk AI systems face no additional requirements. For the largest general-purpose (system) AI models, there will be separate requirements and European supervision.
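The tiered structure described above can be sketched as a simple lookup. The tier names and obligations below are an illustrative simplification drawn from this description only, not from the Act's legal text:

```python
# Illustrative sketch of the AI Act's risk tiers as described in this
# article. Tier names and obligations are a simplification, not legal text.
RISK_TIERS = {
    "prohibited": "banned outright (e.g. manipulative techniques aimed at vulnerable groups)",
    "high_risk": "must meet requirements tested against European standards",
    "limited_risk": "transparency duty (e.g. disclose that a chatbot is AI)",
    "low_risk": "no additional requirements",
    "general_purpose": "separate requirements and European supervision",
}

def obligations_for(tier: str) -> str:
    """Return the obligation attached to a risk tier, if known."""
    try:
        return RISK_TIERS[tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier}")
```

The point of the tiered design is that obligations scale with risk: most AI systems fall in the low-risk tier and face no new requirements at all.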
Who does the AI Act affect?
For citizens, it means that AI systems offered in Europe can be trusted and that risky products in which AI is embedded become safer to use. In addition, parties applying AI will be regulated in order to prevent risks.
For businesses, the Act brings clarity. Once the AI Act enters into force, companies deploying AI in high-risk categories will be able to assume that the application is responsible and of good quality, because it has been tested against European standards. For companies developing high-risk AI, it will be clear what requirements (standards) the AI system must meet. Standardisation organisations have been asked to translate these standards into concrete technical specifications, so that reliable AI products can be sold throughout the European market. Supervision should ensure that AI systems that do not meet the requirements are taken off the market, and that fines can be imposed on the providers concerned.
Deployment of regulatory sandboxes
A so-called AI regulatory sandbox is being deployed: a controlled environment in which developers can experiment with innovative AI products and services while learning which regulatory frameworks apply and how to apply them. The regulator is closely involved and must provide feedback to the developer, who can then bring the AI application into line with the standards already during the development phase. This promotes the development of innovative European AI applications and is crucial for strengthening the competitive position of entrepreneurs and organisations in the Netherlands and Europe, and for addressing societal challenges with AI.
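The feedback loop described above can be illustrated with a toy sketch: a developer iterates on an AI application, and a stand-in "regulator" reviews each version and returns notes until the application meets the checks. Every name here (regulator_review, sandbox_iterate, the check criteria) is hypothetical, invented purely to illustrate the iterate-with-feedback idea:

```python
# Hypothetical sketch of a regulatory-sandbox feedback loop. The checks
# below are invented examples, not actual AI Act requirements.
from dataclasses import dataclass, field

@dataclass
class Feedback:
    compliant: bool
    notes: list = field(default_factory=list)

def regulator_review(app: dict) -> Feedback:
    """Toy stand-in for the regulator's review of one version of the app."""
    notes = []
    if not app.get("discloses_ai_use"):
        notes.append("disclose that users are interacting with AI")
    if not app.get("risk_assessment_done"):
        notes.append("document a risk assessment")
    return Feedback(compliant=not notes, notes=notes)

def sandbox_iterate(app: dict, max_rounds: int = 5) -> dict:
    """Address the regulator's feedback each round until the app passes."""
    for _ in range(max_rounds):
        fb = regulator_review(app)
        if fb.compliant:
            break
        # The developer fixes each noted issue in the next version.
        if "disclose that users are interacting with AI" in fb.notes:
            app["discloses_ai_use"] = True
        if "document a risk assessment" in fb.notes:
            app["risk_assessment_done"] = True
    return app
```

The design point of a sandbox is exactly this loop: compliance issues surface during development, with regulator feedback, rather than after the product is already on the market.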
Support from the Dutch AI Coalition
Kees van der Klauw, Coalition manager NL AIC: "AI is developing rapidly as a system technology and has major social and economic implications. As NL AIC, we support the EU agreement and see the AI Act not only as a protection against possible irresponsible applications of AI, but also as an opportunity for the Netherlands and Europe to come up with strong AI applications that give substance to responsible and human-centred AI. But to do so, we have to work quickly and in cooperation".
"Sandboxes here are the means to develop AI applications that comply with legal frameworks by design and can be scaled up quickly. These sandboxes will then have to be low-threshold and practically usable for companies and especially startups/scale-ups. The NL AIC is happy to help think about the practical frameworks for an AI regulatory sandbox," said Kees van der Klauw.
As an entrepreneur or organisation, how can you prepare for this AI Act? The NL AIC believes it is important for businesses and other organisations to prepare properly for this complex topic. The rapid development of AI combined with the state of the AI regulation raises many questions.
The NL AIC will guide its participating organisations through the European AI Act and related laws and regulations and their application. This means sharing best practices, experiences and insights, not only with regard to the AI Act but also to other European digitisation regulations. The AI Act cannot be seen separately from the GDPR (AVG), the Data Act, the Data Governance Act and the Cyber Resilience Act.
What does the follow-up look like?
The agreement has yet to be formally approved by the European Parliament. The law will enter into force two years after its publication; some parts will apply a year earlier.
More information
As the Dutch AI Coalition, we would like to keep you informed about developments on the AI Act in the future. Read the news release from the central government or the news release from the EU for more information. Consult the European Q&A overview or visit the Working Group on Human-centred AI page.
Similar news items
14 November 2024
The Amsterdam Vision on AI: A Realistic View on Artificial Intelligence
In its new policy document, The Amsterdam Vision on AI, the city outlines how artificial intelligence (AI) should be integrated into urban life and how, according to its residents, it should influence the city. This vision was developed through months of conversations and dialogues with a wide range of Amsterdammers, from festival-goers to schoolchildren and from experts to novices, who shared their thoughts on the future role of AI in Amsterdam.
14 November 2024
Interview: KPN Responsible AI Lab with Gianluigi Bardelloni and Eric Postma
In ICAI's latest interview, Gianluigi Bardelloni and Eric Postma talk about the developments in their ICAI Lab.
14 November
AI pilots TLC Science: generative AI in academic education
The University of Amsterdam has launched a new project through its Teaching & Learning Centre Science, exploring how Generative AI, like ChatGPT, can enhance academic education. This pilot program at the Faculty of Science tests and evaluates various applications of GenAI in higher education.