5 October 2023

The vital importance of human-centred AI, a renewed call

Artificial Intelligence (AI) is undoubtedly the most influential system technology of our time, and it is having a major impact on the way people interact with digital systems and services.
How should our society relate to this digital transformation? How do we strike a balance between freedom of action and the functional added value of AI? How do we safeguard public values, our fundamental rights and our democratic freedoms? And how do we ensure that everyone benefits from the positive effects of AI without suffering its negative effects?
 
Impact of AI on people and society
 
AI offers many opportunities and can make a positive contribution to society and the economy. Consider, for instance, the accelerated development of medicines for rare diseases, support for sustainability efforts, and assistance for professionals that frees up more time at the bedside and in the classroom. However, AI also has a downside.
 
It is extremely important that AI applications are developed without bias, to avoid large-scale exclusion or negative assessment of people. Moreover, can people still oversee how AI systems work and take responsibility for the decisions and actions they base on AI? The degree to which citizens are involved in the implementation of AI in our society will determine how successfully we can live with this technology.
 
Responsible deployment of AI
 
In the manifesto Human-centred AI, a renewed call for meaningful and responsible applications, the Dutch AI Coalition argues for a full commitment to developing human-centred AI through a learning approach.
 
"It is important to ensure that people working with AI technology have a good understanding of the points of interest and limitations. This means not only having good ethical and legal frameworks, but also supporting them with instructions, courses and training," said Irvette Tempelman, chair of the Human-centred AI working group. "As Europe, we are lagging behind the US and China in terms of AI development. That is why it is also so important to realise that we only have a limited option to go full throttle as is sometimes suggested. However, we can fully commit to responsible, human-centred AI."
 
Development and application of AI
 
How can we design and use AI systems so that we reap their benefits responsibly? How can we determine whether these systems use data responsibly? And how can we embed and use them properly in society? The importance of these questions is widely recognised. The huge positive and negative impact of AI on people and society brings with it a special responsibility, both for the designers and developers of AI systems and for those who deploy and use them.
 
ELSA concept
 
The ELSA concept provides a good basis for the development and application of human-centred AI solutions. ELSA stands for Ethical, Legal and Societal Aspects. It means developing human-centred AI in a European context using clear ethical and legal frameworks and helpful regulations, while actively involving stakeholders. The result is socio-economic impacts of AI that remain manageable, and trust in how AI works.
 
This pragmatic approach provides a solid basis not only for talking about AI ethically, but also for shaping AI ethically. This concerns not just the algorithms, but also how, where and by whom they are applied.
 
Interested?
 
The Dutch AI Coalition is developing several initiatives for the responsible and meaningful development of human-centred AI. Interested in learning more about the coalition's vision or the ELSA concept? Then visit the Working Group on Human-centred AI page or contact Náhani Oosterwijk for more information.

Similar news items

View all news items >
 CuspAI Introduces Itself at LAB42

6 September 2024

On September 5, 2024, Max Welling and Chad Edwards, founders of CuspAI, presented their innovative company during the IvI coffee & cake gathering. 

 Advanced AI for Surveillance Robots: A Collaborative Project by Dutch Institutions

5 September 2024

A consortium of leading Dutch research institutions and government agencies, including TU Delft, the University of Amsterdam, TNO, and the Royal Netherlands Marechaussee, has launched an ambitious project aimed at developing advanced artificial intelligence (AI) for surveillance robots. Officially initiated on September 4, the OpenBots consortium focuses on creating AI systems designed to assist human security officers in various security settings.

NeuroAI: Charlotte Frenkel explores the future of AI inspired by the human brain

5 September 2024

With the award of an AiNed Fellowship grant, Dr. Charlotte Frenkel from TU Delft delves into neuromorphic computing, pioneering research aimed at creating energy-efficient and powerful AI systems inspired by the human brain. This research bridges AI and neuroscience to develop faster, more energy-efficient, and smarter computing systems.
