
19 October 2023

Scientific oversight as a requirement for responsible AI

AMSTERDAM - Interdisciplinary experts from Amsterdam UMC and the University of Amsterdam, two institutions within the Amsterdam AI ecosystem, have today published their 'living guidelines' for the responsible use of generative AI in Nature.

Lead author Claudi Bockting, professor of Clinical Psychology of Psychiatry at Amsterdam UMC and co-director of the Centre for Urban Mental Health, believes that 'AI tools could flood the internet with misinformation and "deep fakes" that can be indistinguishable from real individuals. Over time, this could erode trust between people and in politicians, institutions, and science. Independent scientists must take the lead in testing, proving, and improving the safety and security of generative AI. However, most scientists don't have access to the facilities or public funding to develop or evaluate generative AI tools.'

The guidelines were crafted after two international summits with members of international organizations such as the International Science Council, the University-Based Institutes for Advanced Study, and the European Academy of Sciences and Arts, as well as members of global institutions such as UNESCO and the United Nations. This initiative emerges from a pressing need to ensure scientific and societal oversight in a swiftly evolving sector.
In the authors' view, oversight should be modeled on a scientific institute: it should focus on quantitative measurement of real-world impacts, both positive and potentially detrimental, and apply the scientific method in its evaluations. By maintaining a distance from dominant commercial interests, the consortium prioritizes public welfare and the authenticity of scientific research. This initiative is a proactive response to potential gaps in current governance, offering a balanced perspective amidst the slow pace of governmental regulation, the fragmentation of guideline development, and the unpredictability of self-regulation by major tech entities.
 
The 'living guidelines' revolve around three key principles:
  • Accountability: Advocating for a human-augmented approach, the consortium believes that while generative AI can assist in low-risk tasks, essential endeavors such as preparing scientific manuscripts or conducting peer review should retain human oversight.

  • Transparency: Clear disclosure of generative AI use is imperative. This will allow the broader scientific community to assess the implications of generative AI on research quality and decision-making. Furthermore, the consortium urges AI tool developers to be transparent about their methodologies, enabling comprehensive evaluations.

  • Independent Oversight: Given the vast financial interests at stake in the generative AI sector, relying solely on self-regulation is not feasible. External, independent, and objective audits are crucial to ensure the ethical and high-quality use of AI tools. The proposed scientific body must have sufficient computing power to run full-scale models, and enough information on source data, to judge how AI tools were trained, even before they are released. Effective guidelines will require international funding and broad legal endorsement, and will only work in collaboration with tech industry leaders while safeguarding the body's independence. The authors underscore the urgent need for this proposed scientific body, which can also address emergent or unresolved issues in the domain.

In essence, the consortium emphasizes the need for focused investments in an expert committee and oversight body. This ensures that generative AI progresses responsibly, striking a balance between innovation and societal well-being.
 
About the Consortium
The consortium comprises AI experts, computer scientists, and specialists in the psychological and social impacts of AI from Amsterdam UMC, the IAS, and the Faculty of Science of the UvA, all part of the Amsterdam AI ecosystem, as well as Indiana University (USA). This joint effort, supported by members of global and scientific institutions, seeks to navigate a future for generative AI that is both innovative and ethically conscious.


Similar news items

 CuspAI Introduces Itself at LAB42

6 September 2024


On September 5, 2024, Max Welling and Chad Edwards, founders of CuspAI, presented their innovative company during the IvI coffee & cake gathering. 


 Advanced AI for Surveillance Robots: A Collaborative Project by Dutch Institutions

5 September 2024


A consortium of leading Dutch research institutions and government agencies, including TU Delft, the University of Amsterdam, TNO, and the Royal Netherlands Marechaussee, has launched an ambitious project aimed at developing advanced artificial intelligence (AI) for surveillance robots. Officially initiated on September 4, the OpenBots consortium focuses on creating AI systems designed to assist human security officers in various security settings.


NeuroAI: Charlotte Frenkel explores the future of AI inspired by the human brain

5 September 2024


With the award of an AiNed Fellowship grant, Dr. Charlotte Frenkel from TU Delft delves into neuromorphic computing, pioneering research aimed at creating energy-efficient and powerful AI systems inspired by the human brain. This research bridges AI and neuroscience to develop faster, more energy-efficient, and smarter computing systems.
