
29 December 2023

Regulator concerned about AI risks, little visibility into incidents

The risks of using artificial intelligence (AI) have increased sharply this year, warns the Personal Data Authority in its first annual report on AI.
In particular, the concerns relate to the rise of generative AI: technology that generates text or images based on a prompt. A well-known example is ChatGPT.

"What algorithms do is they predict. They classify people into groups or they profile them," Sven Stevenson, programme director for AI at the Personal Data Authority, told NOS. "And that carries the risk that the outcome is unfair to you. That you are sometimes discriminated against. Or that you even have physical risks."

Asked for an example of where things go wrong with AI, Stevenson points to scan cars that drive around municipalities and check, based on license plates, whether people have paid for parking. "That sometimes goes completely wrong. People get hundreds of parking fines from a single incident because that scan car keeps driving by."

Major rise in AI incidents worldwide
 
In the report, the regulator cites figures on the number of recorded AI incidents, which has increased tenfold. These are global figures from the Organisation for Economic Co-operation and Development (OECD), and the suspicion is that they are only the tip of the iceberg.

There are no Dutch figures. "The honest answer is that we simply have very little insight into that," says Stevenson. That is worrying, he acknowledges. "It's something we're all going to work on, though. But it's also really a call to citizens and businesses: if there's something going on where someone feels 'hey, this isn't right, this is strange', ask the question or check whether an algorithm is involved."

Delta Plan 2030
 
In the report, the Personal Data Authority calls for a Delta Plan 2030 to get algorithms and AI under control. "It means, first of all, that people who work with AI, such as police officers, teachers and doctors, are going to be supported. They need a proper understanding of how algorithms and AI work, and what their strengths and weaknesses are," says Stevenson.
 
And perhaps most importantly, it means, according to the regulator, that citizens must soon be able to understand how a decision made by an AI system came about. "That, for example, such a letter will state 'the AI system was a determining factor in the preparation of this decision'," says Stevenson.

The watchdog calls the advent of the European AI Act an important milestone. But the fact that supervision in the Netherlands is still in its infancy can be seen from the size of the coordinating AI team within the Personal Data Authority: seven people work on it, including Stevenson. Next year that number should double. The authority does stress that this is 'pioneering work', the team being one of the first of its kind in Europe.
 


Similar news items

 CuspAI Introduces Itself at LAB42

6 September 2024


On September 5, 2024, Max Welling and Chad Edwards, founders of CuspAI, presented their innovative company during the IvI coffee & cake gathering. 


 Advanced AI for Surveillance Robots: A Collaborative Project by Dutch Institutions

5 September 2024


A consortium of leading Dutch research institutions and government agencies, including TU Delft, the University of Amsterdam, TNO, and the Royal Netherlands Marechaussee, has launched an ambitious project aimed at developing advanced artificial intelligence (AI) for surveillance robots. Officially initiated on September 4, the OpenBots consortium focuses on creating AI systems designed to assist human security officers in various security settings.


NeuroAI: Charlotte Frenkel explores the future of AI inspired by the human brain

5 September 2024


With the award of an AiNed Fellowship grant, Dr. Charlotte Frenkel from TU Delft delves into neuromorphic computing: pioneering research that bridges AI and neuroscience to create faster, more energy-efficient and smarter computing systems inspired by the human brain.
