14 August 2023
AI legislation at least as far-reaching as European privacy rules
Jacintha Walters recently graduated from the HvA master's programme Applied Artificial Intelligence on a topical subject: the European AI Act, which comes into force at the end of 2023. She examined the extent to which a number of large and small companies are prepared for these regulations. On several points, that preparation still falls short, she concludes in her thesis and in a forthcoming paper. "Better preparation is badly needed, because this is going to have as much impact as the GDPR," she says.
Walters' extensive analysis of 15 large and small companies, together with an additional literature review, shows that many organisations' preparation for the AI Act is far from optimal. Training on the risks of 'bias' in data collection and AI models is often still lacking. Many organisations also have no guidelines yet for technical documentation, while the AI Act sets out specific requirements that such documentation must meet.
Still much unclear
Walters observes that it is not surprising that companies are not yet sufficiently prepared, as much is still unclear about the European regulations, which are worded quite generally. "Companies have been able to work on this for two years, as the basis for the AI Act was already established in 2021. On certain points the Act is very specific, especially when it comes to high-risk AI, such as models that determine what benefits someone is entitled to. But on other points the Act is quite unclear, for instance the rule that AI incorporated into your 'critical infrastructure' counts as high risk. On those points, it is still ambiguous what the law will soon mean in practice."
More impact than the GDPR
Nevertheless, preparation is essential because the regulations are going to have far-reaching consequences, Walters believes. "The impact for companies is going to be at least as big as with the GDPR. Most companies are already using AI by now; sometimes they don't even know where or how it is used. If you run a webshop, for example, you might not even realise that AI is embedded in your recommendation system."
Bigger task
What also makes the impact of this European legislation so big is that privacy measures are something companies can still add afterwards; with the AI Act, that is much harder. "The tricky thing about the AI Act is that you have to carry out risk analyses in terms of rights and discrimination, and be able to demonstrate that you have thought this through. For this, you also have to use certain methodologies, and companies do not yet know which ones. The GDPR is much more concrete, as it mostly deals with anonymising or pseudonymising personal data. So this becomes a much bigger task for organisations."
Bottlenecks
For her research, Walters distilled the relevant parts of the AI Act into 90 survey questions. A mix of organisations participated, after which Walters assigned scores based on how often a company or organisation performs a particular required action (rarely, sometimes, regularly or always). "The average score was 58 per cent, so there is definitely room for improvement."
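To make that scoring concrete: below is a minimal sketch, assuming the frequency answers are mapped to weights and averaged into a percentage. The answer-to-weight mapping is an illustrative assumption, not the actual methodology from the thesis.

```python
# Hypothetical sketch: aggregating survey answers into a compliance score.
# The weights below are assumptions for illustration; the thesis defines
# its own scoring methodology.

WEIGHTS = {"rarely": 0.25, "sometimes": 0.5, "regularly": 0.75, "always": 1.0}

def compliance_score(answers: list[str]) -> float:
    """Average the answer weights and express the result as a percentage."""
    return 100 * sum(WEIGHTS[a] for a in answers) / len(answers)

# A mix of partial-compliance answers lands near the reported 58% average:
print(compliance_score(["regularly", "sometimes", "always", "rarely", "sometimes"]))  # 60.0
```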
Technical documentation in particular appears to be a point where companies score low. "Companies do not yet have protocols for writing technical documentation, while the AI Act requires very specific things in this area," Jacintha says.
Knowledge of the risks of internal AI models and data also still falls short. "Most companies surveyed reported that they had not encountered any risks with their datasets in two years. That is unlikely, as all datasets and models carry risk. More training in this area is therefore needed." Incidentally, the companies did score well on keeping their AI models up to date.
Surprised
"What really surprised me during this master's is that there are SO many risks involved in deploying AI," says Jacintha. "Bias is a really big thing-there are so many groups you have to take into account. But the more variations you add, the less well a model works. You therefore have to ask yourself as an organisation every time whether it is better to automate things, or whether there are such risks involved that it is better to refrain from doing so. There will be many dilemmas about this in the near future, also among municipalities in the Netherlands that are going to deploy A.I. more."
More information
Jacintha completed the master's degree in Applied AI through the Centre for Market Insights, led by researcher Diptish Dey and lecturer Jesse Weltevreden. She wrote the forthcoming scientific paper ('Complying with the EU AI Act') together with HvA researchers Diptish Dey, Debarati Bhaumik and Sophie Horsman.
Following her insights, Jacintha decided to start her own consultancy, Babelfish, to assist organisations in the responsible deployment of AI models and to support preparations for AI regulations.
Similar news items
14 November 2024
The Amsterdam Vision on AI: A Realistic View on Artificial Intelligence
In its new policy, The Amsterdam Vision on AI, the city outlines how artificial intelligence (AI) should be integrated into urban life and how it should influence the city according to its residents. This vision was developed through months of conversations and dialogues with a wide range of Amsterdammers—from festival-goers to schoolchildren, experts to novices—who shared their thoughts on the future role of AI in Amsterdam.
14 November 2024
Interview: KPN Responsible AI Lab with Gianluigi Bardelloni and Eric Postma
This time, ICAI's interview features Gianluigi Bardelloni and Eric Postma, who talk about the developments in their ICAI Lab.
14 November 2024
AI pilots TLC Science: generative AI in academic education
The University of Amsterdam has launched a new project through its Teaching & Learning Centre Science, exploring how Generative AI, like ChatGPT, can enhance academic education. This pilot program at the Faculty of Science tests and evaluates various applications of GenAI in higher education.