dc.description.abstract | This thesis examines the profound challenges autonomous weapon systems (AWS) pose to the accountability mechanisms of international humanitarian law (IHL), arguing that their machine-driven agency fundamentally disrupts legal frameworks designed for human actors. Through a legal and interdisciplinary analysis, the thesis explores how AWS undermine IHL’s core principles of intent, foreseeability, causation, and control, creating conceptual and practical accountability gaps. The thesis first demonstrates how AWS, exemplified by cases such as the reported 2020 Kargu-2 strike in Libya, evade traditional liability due to absent human intent, diffused responsibility, and inadequate weapons review processes. It then identifies structural barriers to enforcement, including the opacity of AI systems, battlefield complexity, jurisdictional voids, and states’ exploitation of legal ambiguity, which together render prosecutions practically impossible. Drawing on judicial precedents, academic literature, and real-world incidents, the analysis contrasts IHL’s effectiveness in human-driven cases, such as Prosecutor v. Tadić, with its failure to address AWS violations. The findings highlight an accountability vacuum that risks normalizing civilian harm and eroding IHL’s deterrent effect. To address this, the thesis proposes legal, technical, and policy reforms, including strengthened Article 36 reviews, new treaties, and soft-law governance mechanisms to adapt IHL to autonomous warfare. By exposing the incompatibility of current frameworks with AWS and advocating actionable solutions, this thesis contributes to ongoing debates on regulating emerging technologies in armed conflict, emphasizing the urgency of preserving IHL’s protective mandate in an era of machine agency. | en_US |