About REAIM 2023
The Netherlands believes that the responsible development, deployment and use of artificial intelligence (AI) in the military domain must be given a higher place on the international agenda. It therefore hosted the first summit on Responsible AI in the Military Domain – REAIM 2023.
REAIM 2023 took place at the World Forum, The Hague, on 15 and 16 February 2023. The summit provided a platform for all stakeholders (governments, industry, civil society, academia and think tanks) to forge a common understanding of the opportunities, dilemmas and vulnerabilities associated with military AI.
Wopke Hoekstra, Minister of Foreign Affairs: 'The rise of AI is one of the greatest future challenges in international security and arms control.'
Opportunities and concerns
Artificial intelligence is bringing about fundamental changes to our world, including in the military domain. While the integration of AI technologies creates unprecedented opportunities to boost human capabilities, especially in terms of decision-making, it also raises significant legal, security-related and ethical concerns in areas like transparency, reliability, predictability, accountability and bias. These concerns are amplified in the high-risk military context.
Purpose of REAIM 2023
REAIM 2023 aimed to:
- put the topic of responsible AI in the military domain higher on the political agenda;
- mobilise and activate a wide group of stakeholders to contribute to concrete next steps;
- foster and increase knowledge by sharing experiences, best practices and solutions.
Themes of REAIM 2023
REAIM 2023 was organised along the following themes:
- Mythbusting AI: breaking down the characteristics of AI – what do we need to know about the technical aspects of AI to understand how it can be applied responsibly in a military context?
- Responsible deployment and use of AI – what do military applications of AI mean in practice? What are the main benefits and vulnerabilities?
- Governance frameworks – which frameworks exist to ensure AI is applied responsibly in the military domain? What additional instruments and tools could strengthen governance frameworks, and how can stakeholders contribute?