
9 October 2023

Ethical AI applications: this remains human work

Set up a well-thought-out architecture for an AI system once and assume it will remain a solid foundation for years to come? Forget it.
To offer an ethical AI application, it must be possible to monitor and adjust that system at any time. And yes, that remains human work. How best to approach this? The Ethics Team of the Human-centred AI Working Group has clear ideas on the matter.
AI ethics has been a topic of discussion for some time, though mostly in philosophical terms. Those explorations certainly helped to identify important moral values and principles, but the debate generally remained rather abstract. When the generative AI hype erupted in early 2023, it quickly became clear that good ethical guardrails were not yet in place. AI chatbots, for instance, have a marked tendency to 'hallucinate', revealing that the underlying language models are not yet good at distinguishing fact from fiction.
 
Social unrest
 
The general public is now experiencing the power and benefits of AI applications first-hand. At the same time, it is clear to everyone that the technology is not yet completely reliable. There are also concerns that advanced AI applications are available to anyone, including people with malicious intentions. That causes social unrest, especially as developments in AI are accelerating. European regulation to minimise the potential risks of AI is in the pipeline, but it will likely be a few years before that law actually takes effect at national level.
 
Lively debate
 
One advantage of recent developments, however, is that the ethical application of AI is now high on the agenda everywhere. The Human-centred AI Working Group of the Dutch AI Coalition notices this too. The success of the AI Parade, organised by the working group together with partners, is a good example: this initiative, which uses public libraries to reach the general public and involve them in the discussion about the social impact of AI, is proving an effective way to share concerns and clear up misunderstandings about the technology.
 
Team Ethics
 
With new generative AI applications appearing in rapid succession, calls for ethical principles are ringing louder than ever. The Ethics Team of the Human-centred AI Working Group realises this all too well. The team consists of three experts who have been deeply engaged with the ethical side of AI for many years: Jeroen van den Hoven (professor of Technology and Ethics at TU Delft), Sophie Kuijt (Data & AI Ethics community leader at IBM) and Kolja Verhage (manager of Digital Ethics at Deloitte). The approach the team is now developing further focuses on the practical side of achieving ethical AI applications.
 
Systems perspective
 
"When it comes to ethical application of AI, everyone can very quickly write down all kinds of nice principles. But that is rather non-committal," Jeroen van den Hoven stresses. "What does it mean concretely? And what exactly needs to happen to actually make those fine words a reality? In doing so, it is good to realise that the focus should not only be on the algorithm and the quality of the available data. With digital ethics, it is important to take a systems perspective."
 
Read the full article on the Dutch AI Coalition's website (in Dutch).
 


Similar news items

 CuspAI Introduces Itself at LAB42

6 September 2024


On September 5, 2024, Max Welling and Chad Edwards, founders of CuspAI, presented their innovative company during the IvI coffee & cake gathering. 


 Advanced AI for Surveillance Robots: A Collaborative Project by Dutch Institutions

5 September 2024


A consortium of leading Dutch research institutions and government agencies, including TU Delft, the University of Amsterdam, TNO, and the Royal Netherlands Marechaussee, has launched an ambitious project aimed at developing advanced artificial intelligence (AI) for surveillance robots. Officially initiated on September 4, the OpenBots consortium focuses on creating AI systems designed to assist human security officers in various security settings.


NeuroAI: Charlotte Frenkel explores the future of AI inspired by the human brain

5 September 2024


With the award of an AiNed Fellowship grant, Dr. Charlotte Frenkel from TU Delft delves into neuromorphic computing, pioneering research aimed at creating energy-efficient and powerful AI systems inspired by the human brain. This research bridges AI and neuroscience to develop faster, more energy-efficient, and smarter computing systems.
