20 February 2024
Between Innovation and Ethics: AI-Driven Weapons Under Scrutiny
At a time when artificial intelligence (AI) is gaining traction, we are at a critical crossroads where technological advances are challenging our traditional views on warfare.
The recent research of Jonathan Kwik, a lawyer specializing in the law of war, presented in an article by Het Parool, shines a spotlight on this complex issue, focusing specifically on the implications of autonomous weapon systems (AWS) and the need for human control.
With examples such as the large-scale deployment of smart drones by Ukraine and the autonomous detection of missiles by Israel's Iron Dome system, we see a rapidly evolving landscape in which AI plays an increasingly important role on the battlefield. These developments bring new challenges, where traditional rules of war and international law may no longer be sufficient.
Kwik's research highlights the complexity of AWS and raises important questions about how these systems fit within the existing legal framework of warfare. The autonomy of these systems, their ability to make independent decisions, and the unpredictability of their actions all contribute to the challenge of applying traditional principles of responsibility.
This issue requires in-depth discussion and awareness, not only within military and legal circles, but also among the broader public. As we move into the future of warfare, we must continue the dialogue on leveraging AI's benefits without losing sight of our ethical principles and responsibilities.
The full article in Het Parool provides an in-depth analysis of these issues and is a must-read for anyone interested in the intersection of technology, ethics and warfare. It invites us to think about the future we want to shape and how we can navigate the challenges ahead with wisdom and prudence.
Read more here.
Published by Het Parool.
Similar news items
14 November 2024
The Amsterdam Vision on AI: A Realistic View on Artificial Intelligence
In its new policy, The Amsterdam Vision on AI, the city outlines how artificial intelligence (AI) should be integrated into urban life and how it should influence the city according to its residents. This vision was developed through months of conversations and dialogues with a wide range of Amsterdammers—from festival-goers to schoolchildren, experts to novices—who shared their thoughts on the future role of AI in Amsterdam.
read more >
14 November 2024
Interview: KPN Responsible AI Lab with Gianluigi Bardelloni and Eric Postma
This edition of ICAI's interview series features Gianluigi Bardelloni and Eric Postma, who talk about the developments in their ICAI Lab.
read more >
14 November
AI pilots TLC Science: generative AI in academic education
The University of Amsterdam has launched a new project through its Teaching & Learning Centre Science, exploring how Generative AI, like ChatGPT, can enhance academic education. This pilot program at the Faculty of Science tests and evaluates various applications of GenAI in higher education.
read more >