29 December 2023
Regulator concerned about AI risks, little visibility into incidents
The risks of using artificial intelligence (AI) have increased sharply this year, warns the Personal Data Authority in its first annual report on AI.
The concerns relate in particular to the rise of generative AI: technology that generates text or images in response to a prompt. A well-known example is ChatGPT.
"What algorithms do is they predict. They classify people into groups or they profile them," Sven Stevenson, programme director for AI at the Personal Data Authority, told NOS. "And that carries the risk that the outcome is unfair to you. That you are sometimes discriminated against. Or that you even have physical risks."
Asked for an example of AI going wrong, Stevenson points to the scan cars that drive around municipalities and check, based on license plates, whether people have paid for parking. "That sometimes goes completely wrong. People get hundreds of parking fines from a single incident because that scan car keeps driving past."
Major rise in AI incidents worldwide
In the report, the regulator cites figures from the Organisation for Economic Co-operation and Development (OECD) on recorded AI incidents, which have increased tenfold worldwide. The suspicion is that these are just the tip of the iceberg.
There are no Dutch figures. "The honest answer is that we simply have very little visibility," says Stevenson. That is worrying, he acknowledges. "That is something we are all going to work on. But it is also really a call to citizens and businesses: if something is going on that makes someone feel 'hey, this isn't right, this is strange', ask the question or check whether an algorithm is involved."
Delta Plan 2030
In the report, the Personal Data Authority calls for a 2030 Delta Plan to get algorithms and AI under control. "It means first of all that people who work with AI, for example police officers, teachers and doctors, will be supported. They need a proper understanding of how algorithms and AI work, and of what their strengths and weaknesses are," says Stevenson.
Perhaps most importantly, it means, according to the regulator, that citizens must soon be able to understand how an AI system's decision was made. "For example, that a decision letter will say 'an AI system played a determining role in the preparation of this decision'," says Stevenson.
The watchdog calls the advent of the European AI Act an important milestone. But the fact that supervision in the Netherlands is still in its infancy can be seen from the size of the coordinating AI team within the Personal Data Authority: seven people, including Stevenson. Next year that number should double. The authority stresses that this is 'pioneering work', as the team is one of the first of its kind in Europe.
This article was published by NOS (in Dutch).
Similar news items
14 November 2024
The Amsterdam Vision on AI: A Realistic View on Artificial Intelligence
In its new policy, The Amsterdam Vision on AI, the city outlines how artificial intelligence (AI) should be integrated into urban life and how it should influence the city according to its residents. This vision was developed through months of conversations and dialogues with a wide range of Amsterdammers, from festival-goers to schoolchildren, experts to novices, who shared their thoughts on the future role of AI in Amsterdam.
read more >
14 November 2024
Interview: KPN Responsible AI Lab with Gianluigi Bardelloni and Eric Postma
This time, ICAI's interview features Gianluigi Bardelloni and Eric Postma, who talk about the developments in their ICAI Lab.
read more >
14 November 2024
AI pilots TLC Science: generative AI in academic education
The University of Amsterdam has launched a new project through its Teaching & Learning Centre Science, exploring how Generative AI, like ChatGPT, can enhance academic education. This pilot program at the Faculty of Science tests and evaluates various applications of GenAI in higher education.
read more >