
6 December 2023

How do you humanise talking machines (but without the bad parts)?

This year, AI systems that can write almost human-like texts have made a breakthrough worldwide. However, many academic questions on how exactly these systems work remain unanswered. Three UvA researchers are trying to make the underlying language models more transparent, reliable and human.
The launch of ChatGPT by OpenAI on 30 November 2022 was a game changer for artificial intelligence. All of a sudden the public became aware of the power of writing machines. Some two months later, ChatGPT already had 100 million users.
Now, students are using it to write essays, programmers are using it to generate code, and companies are automating everyday writing tasks. At the same time, there are significant concerns about the unreliable nature of automatically generated text, and about the reproduction of stereotypes and discrimination present in the training data.
 
The media across the world quickly jumped on ChatGPT, with stories flying around on the good, the bad and everything in between. ‘Before the launch of ChatGPT, I hadn’t heard a peep from the media about this topic for a long time,’ says UvA researcher Jelle Zuidema, ‘while my colleagues and I had tried multiple times over the years to tell them that important developments were on the horizon.’
 
Zuidema is an associate professor of Natural Language Processing, Explainable AI and Cognitive Modelling at the Institute for Logic, Language and Computation (ILLC). He is advocating for a measured discussion on the use of large language models, which is the kind of model that forms the basis for ChatGPT (see Box 1). Zuidema: ‘Downplaying or acting outraged about this development, saying things like “it’s all just plagiarism”, is pointless. Students use it, scientists use it, programmers use it, and many other groups in society are going to be dealing with it. Instead, we should be asking questions like: What consequences will language models have? What jobs will change? What will happen to the self-worth of copywriters?’
 
Read more here.
 
This article was published by the UvA.
The image was generated by the University of Amsterdam using Adobe Firefly (keywords: shallow brain architecture).


Similar news items

 CuspAI Introduces Itself at LAB42

6 September 2024


On September 5, 2024, Max Welling and Chad Edwards, founders of CuspAI, presented their innovative company during the IvI coffee & cake gathering. 


 Advanced AI for Surveillance Robots: A Collaborative Project by Dutch Institutions

5 September 2024


A consortium of leading Dutch research institutions and government agencies, including TU Delft, the University of Amsterdam, TNO, and the Royal Netherlands Marechaussee, has launched an ambitious project aimed at developing advanced artificial intelligence (AI) for surveillance robots. Officially initiated on September 4, the OpenBots consortium focuses on creating AI systems designed to assist human security officers in various security settings.


NeuroAI: Charlotte Frenkel explores the future of AI inspired by the human brain

5 September 2024


With the award of an AiNed Fellowship grant, Dr. Charlotte Frenkel from TU Delft delves into neuromorphic computing, pioneering research aimed at creating energy-efficient and powerful AI systems inspired by the human brain. This research bridges AI and neuroscience to develop faster, more energy-efficient, and smarter computing systems.
