The Ethics of Autonomous Weapons Systems
Autonomous Weapons Systems (AWS) are defined by the U.S. Department of Defense as “a weapon system(s) that, once activated, can select and engage targets without further intervention by a human operator.” Since the crucial distinguishing mark of human reasoning is the capacity to set ends and goals, AWS suggest for the first time the possibility of eliminating the human operator from the battlefield. The development of AWS technology on a broad scale therefore represents the potential for a transformation in the structure of war that is qualitatively different from previous military technological innovations.
The idea of fully autonomous weapons systems raises a host of intersecting philosophical and psychological issues, as well as unique legal challenges. For example, it sharply raises the question of whether moral decision-making by human beings involves an intuitive, non-algorithmic capacity that is unlikely to be captured by even the most sophisticated of computers. Is this intuitive moral perceptiveness on the part of human beings ethically desirable? Should the legitimate exercise of deadly force always require “meaningful human control”? Should the very definition of AWS focus on the system’s capabilities for autonomous target selection and engagement, or on the human operator’s use of such capabilities? Who, if anyone, should bear legal liability for decisions an AWS makes? The purpose of this conference is to address such questions by bringing together distinguished scholars and practitioners from various fields to engage in constructive discussion and exploration of the moral and legal challenges posed by Autonomous Weapons Systems.