19 October 2023
Scientific oversight as a requirement for responsible AI
AMSTERDAM - Interdisciplinary experts from Amsterdam UMC and the University of Amsterdam, two institutions within the Amsterdam AI ecosystem, today published their 'living guidelines' for the responsible use of generative AI in Nature.
Lead author Claudi Bockting, professor of Clinical Psychology in Psychiatry at Amsterdam UMC and co-director of the Centre for Urban Mental Health, believes that 'AI tools could flood the internet with misinformation and deepfakes that are indistinguishable from real individuals. Over time, this could erode trust between people and in politicians, institutions, and science. Independent scientists must take the lead in testing, proving, and improving the safety and security of generative AI. However, most scientists lack access to the facilities or public funding needed to develop or evaluate generative AI tools.'
The guidelines were crafted after two international summits with members of international organizations such as the International Science Council, the University-Based Institutes for Advanced Study, and the European Academy of Sciences and Arts, as well as members of global institutions such as UNESCO and the United Nations. The initiative emerges from a pressing need to ensure scientific and societal oversight in a swiftly evolving sector.
In the authors' view, oversight should be modeled on a scientific institute: one that focuses on quantitative measurement of real-world impacts, both positive and potentially detrimental, and applies the scientific method in its evaluations. By maintaining distance from dominant commercial interests, the consortium prioritizes public welfare and the authenticity of scientific research. The initiative is a proactive response to gaps in current governance, offering a balanced perspective amid the slow pace of governmental regulation, fragmented guideline development, and the unpredictability of self-regulation by major tech entities.
The 'living guidelines' revolve around three key principles:
- Accountability: Advocating a human-augmented approach, the consortium believes that while generative AI can assist with low-risk tasks, essential activities such as preparing scientific manuscripts and conducting peer review should retain human oversight.
- Transparency: Clear disclosure of generative AI use is imperative. This will allow the broader scientific community to assess the implications of generative AI on research quality and decision-making. Furthermore, the consortium urges AI tool developers to be transparent about their methodologies, enabling comprehensive evaluations.
- Independent Oversight: Given the vast financial stakes in the generative AI sector, relying solely on self-regulation is not feasible. External, independent, and objective audits are crucial to ensure the ethical and high-quality use of AI tools. The proposed scientific body must have sufficient computing power to run full-scale models, and enough information about source data to judge how AI tools were trained, even before they are released. Effective guidelines will require international funding and broad legal endorsement, and they will only work in collaboration with tech industry leaders while safeguarding the body's independence. The authors underscore the urgent need for this proposed scientific body, which could also address emergent or unresolved issues in the domain.
In essence, the consortium emphasizes the need for focused investments in an expert committee and oversight body. This ensures that generative AI progresses responsibly, striking a balance between innovation and societal well-being.
About the Consortium
The consortium comprises AI experts, computer scientists, and specialists in the psychological and social impacts of AI from Amsterdam UMC, the IAS, and the Faculty of Science of the UvA, all part of the Amsterdam AI ecosystem, together with Indiana University (USA). This joint effort, supported by members of global and scientific institutions, seeks to navigate a future for generative AI that is both innovative and ethically conscious.
Similar news items
14 November 2024
The Amsterdam Vision on AI: A Realistic View on Artificial Intelligence
In its new policy, The Amsterdam Vision on AI, the city outlines how artificial intelligence (AI) should be integrated into urban life and how it should influence the city according to its residents. This vision was developed through months of conversations and dialogues with a wide range of Amsterdammers—from festival-goers to schoolchildren, experts to novices—who shared their thoughts on the future role of AI in Amsterdam.
14 November 2024
Interview: KPN Responsible AI Lab with Gianluigi Bardelloni and Eric Postma
In this edition of ICAI's interview series, Gianluigi Bardelloni and Eric Postma talk about the developments in their ICAI Lab.
14 November 2024
AI pilots TLC Science: generative AI in academic education
The University of Amsterdam has launched a new project through its Teaching & Learning Centre Science, exploring how Generative AI, like ChatGPT, can enhance academic education. This pilot program at the Faculty of Science tests and evaluates various applications of GenAI in higher education.