19 December 2023
TNO and Kieskompas research: AI language models are inconsistent and tend towards the left
Generative AI language models ('large language models') that fill out Kieskompas' voting aid come out on the left side of the political spectrum. In addition, the models do not consistently answer subjective questions and quickly exhibit very different behaviour due to small changes in the question.
This is according to an experiment by TNO and Kieskompas in which they had different language models answer the questions of the Kieskompas 2023 voting aid multiple times.
The presence of bias and inconsistency in current language models may seem harmless, but the consequences can be significant. Large-scale use of such language models can amplify the long-term effects of bias among their users.
With the rise of generative AI, large language models have recently become widely used, with ChatGPT as the best-known example. More and more people turn to these models, especially for tasks that involve a lot of reading, such as comparing the positions of the Dutch national political parties.
These developments also bring considerable uncertainty about the current capabilities and quality of AI, particularly regarding reliability, potential bias and the limited consistency of the models. Together with Kieskompas, TNO has therefore tested how these language models complete a voting aid.
Popular models
The models studied were selected on the basis of popularity, availability, accessibility and provenance. Meta's Llama-2, OpenAI's GPT-3.5, 4 and 4.5-turbo, and TII's Falcon-40b-Instruct were chosen. Each model was prepared for the experiment, e.g. by drafting one consistent prompt, which was slightly adapted per model to match its specific input format (such as translation into English).
Models can use context in addition to answering a question directly. Adding context gives the model an indication of how the user would like the answers to be returned, which sometimes yields better results. Each model was therefore tested both with and without contextual additions (see also one of the examples in the table, with the provided context shown in green; all models were given the same contextual question). For example, a model better understands what kind of answer is expected if it has already seen an example question with an answer that falls within the Kieskompas options ("totally agree", "agree", "neutral", "disagree", "totally disagree" and "no opinion").
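The article does not reproduce the exact prompts that were sent to the models. Purely as an illustrative sketch, the snippet below shows how such a statement prompt could be built with and without a contextual example, using the Kieskompas answer scale; the wording, the sample statement and the build_prompt helper are assumptions, not the prompts used in the study.

```python
# Minimal sketch (not the TNO/Kieskompas setup): building the two prompt
# variants described above. Only the answer scale follows the Kieskompas
# format; the exact wording sent to each model is an assumption here.

ANSWER_SCALE = [
    "totally agree", "agree", "neutral",
    "disagree", "totally disagree", "no opinion",
]

def build_prompt(statement: str, with_context: bool) -> str:
    """Return a prompt for one voting-aid statement.

    with_context=True prepends a worked example, so the model sees that it
    should answer with exactly one option from the scale.
    """
    instruction = (
        "Respond to the statement with exactly one of: "
        + ", ".join(ANSWER_SCALE) + "."
    )
    example = (
        "Statement: The national speed limit should be lowered.\n"
        "Answer: agree\n\n"
    )
    prompt = instruction + "\n\n"
    if with_context:
        prompt += example  # the 'context' variant: one example question and answer
    prompt += f"Statement: {statement}\nAnswer:"
    return prompt

if __name__ == "__main__":
    s = "The Netherlands should invest more in renewable energy."
    print(build_prompt(s, with_context=False))
    print("---")
    print(build_prompt(s, with_context=True))
```

Giving one worked example like this is what nudges the more cautious models into picking an option instead of declining to answer, as described below.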
The answers were translated by Kieskompas into coordinates, which TNO then plotted on the political landscape of the Netherlands, as shown in the figures. Every run in which a model answered at least 10 statements is shown.
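Kieskompas' exact coordinate mapping is not described in this article. Purely as an illustration of the idea, the sketch below turns scale answers into a rough (x, y) position by averaging per-axis scores; the axis assignment, score values and direction flags are assumptions, not the method used in the study.

```python
# Illustrative only: convert answer-scale strings into numeric scores and
# average them per axis to get a rough (x, y) position. The real Kieskompas
# mapping differs from this sketch.

from statistics import mean

# Hypothetical score per answer option (positive = agrees with the statement).
SCORES = {
    "totally agree": 2, "agree": 1, "neutral": 0,
    "disagree": -1, "totally disagree": -2,
}

def position(answers: list[tuple[str, str, int]]) -> tuple[float, float]:
    """answers: one (axis, answer, direction) tuple per statement.

    axis      -- "left-right" or "progressive-conservative"
    answer    -- one of the SCORES keys ("no opinion" answers are skipped)
    direction -- +1 if agreeing pushes toward the right/conservative pole,
                 -1 if agreeing pushes the other way
    """
    per_axis = {"left-right": [], "progressive-conservative": []}
    for axis, answer, direction in answers:
        if answer in SCORES:
            per_axis[axis].append(direction * SCORES[answer])
    x = mean(per_axis["left-right"]) if per_axis["left-right"] else 0.0
    y = mean(per_axis["progressive-conservative"]) if per_axis["progressive-conservative"] else 0.0
    return x, y

if __name__ == "__main__":
    run = [
        ("left-right", "agree", -1),
        ("progressive-conservative", "totally agree", -1),
        ("left-right", "disagree", +1),
    ]
    print(position(run))  # (-1.0, -2.0): left of centre, progressive
```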
Varying answers
The results show that these models are very volatile, as indicated by the coloured-in area, and also that they are distinctly left-oriented (see Figures 1 and 2). Manual analysis showed that OpenAI's GPT models readily answer the propositions.
Meta's Llama more often points out that it is a model giving the answer, but once context is provided, even Llama answers every question. Falcon remains the most cautious, yet when context is supplied in the form of a sample question, its behaviour also changes markedly and it does give an opinion (as also seen below).
TII and Meta have trained their models to be more cautious and not to answer controversial questions. Moreover, OpenAI's models are more consistent; if you present them with the same question several times, you get the same answer more often than with the tested competitors.
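The article does not specify how consistency was quantified. As an assumption-laden sketch, one common way is to repeat each statement several times and report how often the most frequent answer is returned; the consistency function and the stand-in model below are illustrative, not part of the study.

```python
# Sketch of one possible consistency measure (not necessarily the one used in
# the TNO/Kieskompas study): ask the same statement several times and report
# the share of runs that return the most frequent answer.

from collections import Counter
from typing import Callable

def consistency(ask_model: Callable[[str], str], statement: str, repeats: int = 10) -> float:
    """Fraction of repeated runs agreeing with the modal answer (1.0 = fully consistent)."""
    answers = [ask_model(statement) for _ in range(repeats)]
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / repeats

if __name__ == "__main__":
    # Stand-in for a real model call; a real experiment would query the LLM here.
    import random
    fake_model = lambda s: random.choice(["agree", "agree", "agree", "neutral"])
    print(consistency(fake_model, "The Netherlands should build more wind farms."))
```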
Black box
Because the way the models were trained is not transparent, it is impossible to tell whether a model might give a completely different opinion in a different context. This black-box approach makes it impossible to identify why a model arrives at certain answers. The resulting bias and inconsistency in current language models may not seem like a big deal, but they can have major consequences. Large-scale use of such models, for instance by third parties that build on these language models without realising this, can amplify the long-term effect of bias.
Partly for this reason, the Netherlands is going to develop its own open language model: GPT-NL. This model is needed to develop, strengthen and consolidate digital sovereignty. TNO, NFI and SURF will jointly develop the model to take an important step towards transparent, fair and verifiable use of AI, in accordance with Dutch and European values and guidelines and with respect for data ownership.
Read the full article and the results of the research on the TNO website (in Dutch).
Similar news items
14 November 2024
The Amsterdam Vision on AI: A Realistic View on Artificial Intelligence
In its new policy, The Amsterdam Vision on AI, the city outlines how artificial intelligence (AI) should be integrated into urban life and how it should influence the city according to its residents. This vision was developed through months of conversations and dialogues with a wide range of Amsterdammers—from festival-goers to schoolchildren, experts to novices—who shared their thoughts on the future role of AI in Amsterdam.
read more >
14 November 2024
Interview: KPN Responsible AI Lab with Gianluigi Bardelloni and Eric Postma
In this edition of ICAI's interview series, Gianluigi Bardelloni and Eric Postma talk about the developments in their ICAI Lab.
read more >
14 November
AI pilots TLC Science: generative AI in academic education
The University of Amsterdam has launched a new project through its Teaching & Learning Centre Science, exploring how Generative AI, like ChatGPT, can enhance academic education. This pilot program at the Faculty of Science tests and evaluates various applications of GenAI in higher education.
read more >