
19 December 2023

TNO and Kieskompas research: AI language models are inconsistent and tend towards the left

Generative AI language models ('large language models') that fill out Kieskompas' voting aid end up on the left side of the political spectrum. In addition, the models do not answer subjective questions consistently and can behave very differently after small changes to the question.
This is according to an experiment by TNO and Kieskompas in which they had different language models answer the questions of the Kieskompas 2023 voting aid multiple times.

The presence of bias and inconsistency in current language models may seem harmless, but the consequences can be significant: large-scale use of such language models can reinforce bias in people over the long term.

With the development of generative AI, large language models have been on the rise, with ChatGPT as the best-known example. More and more people use these models, especially for matters that involve a lot of reading, such as the positions of the Dutch political parties.

With these developments also comes a lot of uncertainty about the current capacity and quality of AI, especially in terms of reliability, potential bias, and limited consistency of models. Together with Kieskompas, TNO has therefore tested how these language models complete a voting aid.
 
Popular models
 
The models studied were selected on popularity, availability, accessibility and provenance: Meta's Llama-2, OpenAI's GPT-3.5, GPT-4 and GPT-4-turbo, and TII's Falcon-40b-Instruct. Each model was prepared for the experiment, e.g. by formulating one consistent question, which was slightly adapted to each model's required format (including translation into English).
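As an illustration, this preparation step might look like the Python sketch below. The article does not publish the actual prompts, model identifiers or chat formats used, so every name and template here is an assumption.

```python
# Illustrative sketch of per-model prompt preparation. The base question and
# the per-model formats are assumptions; the real experiment's prompts are
# not reproduced in this article.

BASE_QUESTION = (
    "Respond to the following statement with exactly one of: "
    "'totally agree', 'agree', 'neutral', 'disagree', "
    "'totally disagree', 'no opinion'.\n"
    "Statement: {statement}"
)

# Hypothetical per-model adaptations (chat messages vs. instruction templates).
MODEL_FORMATS = {
    "gpt-4": lambda q: [{"role": "user", "content": q}],
    "llama-2-70b-chat": lambda q: f"[INST] {q} [/INST]",
    "falcon-40b-instruct": lambda q: f"User: {q}\nAssistant:",
}

def build_prompt(model_name: str, statement: str):
    """Fill in the shared question, then adapt it to the model's format."""
    question = BASE_QUESTION.format(statement=statement)
    return MODEL_FORMATS[model_name](question)
```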

In addition to answering a question directly, models can make use of context. Added context indicates how the user would like the answer to be returned, and sometimes yields better results. Each model was therefore tested both with and without contextual additions (see one of the examples in the table, with the provided context in green; all models were given the same contextual question). For example, models understand better what kind of answer is expected of them if they have already seen an example question answered with one of the Kieskompas options ("totally agree", "agree", "neutral", "disagree", "totally disagree" and "no opinion").
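A minimal sketch of the "with context" condition is shown below; the example statement and its answer are invented for illustration, as the actual contextual question used in the experiment is not reproduced in this article.

```python
# Sketch of the "with context" condition: prepend one worked example so the
# model sees the expected answer format. The example statement and answer
# below are invented, not taken from the experiment.

ANSWER_OPTIONS = [
    "totally agree", "agree", "neutral",
    "disagree", "totally disagree", "no opinion",
]

CONTEXT_EXAMPLE = (
    "Statement: The government should invest more in public transport.\n"
    "Answer: agree\n\n"
)

def with_context(question: str) -> str:
    """Prepend the shared contextual example to the actual question."""
    return CONTEXT_EXAMPLE + question
```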
 
Kieskompas translated the answers into coordinates, which TNO plotted on the political landscape of the Netherlands, as shown in the figures. Every run in which a model gave at least 10 answers is shown.
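Kieskompas' actual scoring model is not described in this article; the sketch below only illustrates the general idea of mapping answers to numeric values and averaging them per axis, including the rule that runs with fewer than 10 answers are dropped. The scale values and axis assignment are assumptions.

```python
# Simplified sketch of turning one run's answers into a point in a
# two-dimensional political landscape. Kieskompas's real scoring (axis
# assignment, weighting) is proprietary; everything below is illustrative.

SCALE = {
    "totally agree": 2, "agree": 1, "neutral": 0,
    "disagree": -1, "totally disagree": -2,
}  # "no opinion" is excluded from scoring

def run_to_coordinates(answers, axis_of):
    """answers: {statement_id: answer}; axis_of: {statement_id: 'x' or 'y'}."""
    scored = {s: SCALE[a] for s, a in answers.items() if a in SCALE}
    if len(scored) < 10:  # runs with fewer than 10 answers are not plotted
        return None
    xs = [v for s, v in scored.items() if axis_of[s] == "x"]
    ys = [v for s, v in scored.items() if axis_of[s] == "y"]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```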

Varying answers
 
The results show that these models are highly volatile, as the coloured-in areas indicate, and that they are markedly left-oriented (see Figures 1 and 2). Manual analysis showed that OpenAI's GPT models are very quick to answer the propositions.

Meta's Llama more readily points out that it is merely a model giving an answer. But once context is given, even Llama answers every question. Falcon is the most cautious, but even when context is provided through a sample question, its behaviour changes significantly and it does give its opinion (as also seen below).

TII and Meta have trained their models to be more cautious and not to answer controversial questions. Moreover, OpenAI's models are more consistent; if you present them with the same question several times, you get the same answer more often than with the tested competitors.
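The article does not state how consistency was measured; one simple way to quantify it is the fraction of repeated runs that return the most common answer, as in this hypothetical sketch.

```python
# Hypothetical consistency metric: ask the same statement N times and measure
# how often the modal answer recurs. The metric choice is an assumption; the
# article does not specify how consistency was computed.

from collections import Counter

def consistency(answers: list[str]) -> float:
    """Fraction of repeated runs that produced the most common answer."""
    counts = Counter(answers)
    return counts.most_common(1)[0][1] / len(answers)

# Example: 8 of 10 repetitions gave "agree" -> consistency 0.8
print(consistency(["agree"] * 8 + ["neutral", "disagree"]))
```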
 
Black box
 
Because the way the models were trained is not transparent, it is impossible to tell whether a model might give a completely different opinion in a different context. This black-box character makes it impossible to identify why a model arrives at certain answers. The resulting bias and inconsistency in current language models may not seem like a big deal, but they can have major consequences. Large-scale use of such models, e.g. by third parties that deploy these language models without knowing better, can reinforce the long-term effect of bias.

Partly for this reason, the Netherlands is developing its own open language model: GPT-NL. This model is needed to develop, strengthen and consolidate digital sovereignty. TNO, NFI and SURF will jointly develop the model, an important step towards transparent, fair and verifiable use of AI in line with Dutch and European values and guidelines, and with respect for data ownership.

Read the full article and the results of the research on the TNO website (in Dutch).


Similar news items

 CuspAI Introduces Itself at LAB42

6 September 2024


On September 5, 2024, Max Welling and Chad Edwards, founders of CuspAI, presented their innovative company during the IvI coffee & cake gathering. 


 Advanced AI for Surveillance Robots: A Collaborative Project by Dutch Institutions

5 September 2024


A consortium of leading Dutch research institutions and government agencies, including TU Delft, the University of Amsterdam, TNO, and the Royal Netherlands Marechaussee, has launched an ambitious project aimed at developing advanced artificial intelligence (AI) for surveillance robots. Officially initiated on September 4, the OpenBots consortium focuses on creating AI systems designed to assist human security officers in various security settings.


NeuroAI: Charlotte Frenkel explores the future of AI inspired by the human brain

5 September 2024


With the award of an AiNed Fellowship grant, Dr. Charlotte Frenkel from TU Delft delves into neuromorphic computing, pioneering research aimed at creating energy-efficient and powerful AI systems inspired by the human brain. This research bridges AI and neuroscience to develop faster, more energy-efficient, and smarter computing systems.
