29 December 2023
Supervisor concerns about AI risks, little visibility into incidents
The risks of using artificial intelligence (AI) have increased sharply this year, warns the Personal Data Authority in its first annual report on AI.
In particular, the concerns relate to the rise of generative AI: technology that generates text and images in response to a prompt. A well-known example is ChatGPT.
"What algorithms do is they predict. They classify people into groups or they profile them," Sven Stevenson, programme director for AI at the Personal Data Authority, told NOS. "And that carries the risk that the outcome is unfair to you. That you are sometimes discriminated against. Or that you even face physical risks."
Asked for an example of AI going wrong, Stevenson points to scan cars that drive through municipalities and check, based on license plates, whether people have paid for parking. "That sometimes goes completely wrong. People get hundreds of parking fines from a single incident because that scan car keeps driving by."
Major rise in AI incidents worldwide
In the report, the regulator cites figures from the Organisation for Economic Co-operation and Development on the number of AI incidents recorded worldwide, which has increased tenfold. The suspicion is that these figures are just the tip of the iceberg.
There are no Dutch figures. "The honest answer is that we simply have very little insight," says Stevenson. That is worrying, he acknowledges. "It's something we're going to work on. But it's also a call to citizens and businesses: if something happens that makes you think, 'hey, this isn't right, this is strange', ask questions or check whether an algorithm is involved."
Delta Plan 2030
In the report, the Personal Data Authority calls for a 2030 Delta Plan to get algorithms and AI under control. "It means first of all that people who work with AI are going to be supported, for example police officers, teachers and doctors. They need to have a proper understanding of how algorithms and AI work, what the strengths and what the weaknesses are," says Stevenson.
And perhaps most importantly, according to the regulator, citizens must soon be able to understand how an AI system reached a decision. "For example, such a letter would state: 'an AI system played a determining role in the preparation of this decision'," says Stevenson.
The watchdog calls the arrival of the European AI Act an important milestone. But the fact that supervision in the Netherlands is still in its infancy can be seen from the size of the coordinating AI team within the Personal Data Authority: seven people, including Stevenson. Next year that number should double. The authority stresses that this is 'pioneering work', as the team is one of the first of its kind in Europe.
This article was published by NOS (in Dutch).
Similar news items
14 November 2024
Interview: KPN Responsible AI Lab with Gianluigi Bardelloni and Eric Postma
This edition of ICAI's interview series features Gianluigi Bardelloni and Eric Postma, who talk about the developments in their ICAI Lab.
read more >
13 November
Did you know that 13 AI pilots were at the FNWI this year?
The rise of Generative Artificial Intelligence (GenAI) has a significant impact on academic education. That’s why the Teaching & Learning Centre Science launched a project integrating GenAI, specifically ChatGPT, into the teaching process.
read more >
13 November
AI in the Hospital: Physician or Assistant?
The Amsterdam Economic Board’s website recently published an article about the Zorg2025 meeting, where experts discussed AI's role in healthcare, its potential, and the ethical challenges it brings to the field.
read more >