CENTER FOR ETHICS AND THE RULE OF LAW

Ethical and Legal Dilemmas of Autonomous Weapons in War and National Security

April 11–13, 2024

Co-sponsored By: The Annenberg Public Policy Center, General Robotics, Automation, Sensing & Perception (GRASP) Lab, and MITRE


The Conference

The expanded use of semi-autonomous weaponry and the fast-paced development of fully autonomous weapons present both an opportunity and an ethical and legal challenge. Autonomous weapons hold promise for enhanced warfighting, the advantages of which are already clear from the benefits presented by semi-autonomous technologies in use in the conflicts in Ukraine and Israel. Yet autonomous technologies threaten to revolutionize the nature of warfare in ways scholars of war and policy makers can only begin to predict, with potentially disastrous results if this technology is not deployed in a manner consistent with the laws of war. To date, there are no legal constraints on the development and use of such weapons and little ethical guidance. The only formal U.S. guidance on the use of autonomous weapons systems (AWS) appears in DoD Directive 3000.09, but even that document is open-ended and subject to interpretation.

Global conflicts involving autocratic regimes, such as Putin’s Russia, suggest that democratic nations need all the military help they can get in the face of dramatic threats to free and democratic governance. But insisting on ethical constraints in the conduct of international armed conflict is a critical part of ensuring that warfighting remains guided by the rule of law and is subject to international diplomatic control. National security and law of war scholars and practitioners must consider in detail the advantages autonomous weapons may provide on the battlefield. These may include not only enhanced warfighting but a heightened ability to minimize civilian casualties. How might recent international conflicts have been different had fully autonomous weapons been in use?

Notwithstanding the complex mix of opportunities and challenges associated with the development of increasingly autonomous weapons—and the diversity of views regarding their deployment—there is at least a long legal tradition and an internationally well-established code of military conduct that supports the consideration of new obligations and constraints to be placed on new technologies of war. In contrast, neither existing international humanitarian law nor broadly applicable civil codes hold comparable sway over the circumstances and ethical obligations of those who might develop or use weaponized robots in non-military settings. Yet advanced mobile robot technologies are already penetrating civilian spaces, and it seems clear that they will eventually become as ubiquitous as today’s smartphones. Accordingly, this conference will place a parallel focus on exploring the legal, ethical, and technical prospects for protecting civilian life from weaponized autonomous systems. The top priority will be to examine the issues and potential pitfalls involved in articulating responsibilities for developers and guiding policy for legislators to address some of the most immediate gaps. How might the mix of roboticists and experts in the social spheres we aim to convene at this conference collaborate to imagine the promulgation of more durable guidelines and comprehensive best practices?

This conference will bring together distinguished scholars and practitioners from the academy, civil society, government service, industry, and the military for a three-day agenda to discuss the ethical and legal dilemmas of autonomous weapons with the aim of identifying a series of ethical principles to apply to autonomous weaponry at many different levels. The conference will begin on Thursday evening with a public keynote address, followed by a cocktail reception and dinner for all participants. On Friday, closed workshop sessions will consider Lethal Autonomous Weapon Systems (LAWS) generally, including the possible use of LAWS in connection with nuclear weapons launch—a prospect that many regard as particularly dangerous and ill-conceived. The closed workshop sessions will continue on Saturday, addressing Embodied AWS, namely armed physical mechanisms whose advanced mobility and Artificial Intelligence (AI) can in theory identify and physically engage targets without human intervention—or potentially cause unanticipated harm through accidents arising from their close physical proximity to the general public.

Keynote

Thursday, April 11, 2024


This program has been approved for a total of 1.5 Ethics CLE credits for Pennsylvania lawyers. CLE credit may be available in other jurisdictions as well. Attendees seeking CLE credit can make a payment via cash or check made payable to “The Trustees of the University of Pennsylvania” on the day of the event in the amount of $60.00 ($30.00 public interest/non-profit attorneys). In order to receive the appropriate amount of credit, evaluation forms must be completed.

Penn Carey Law Alumni receive CLE credits free through The W.P. Carey Foundation’s generous commitment to Lifelong Learning.

Keynote
4:30 – 6:00 pm (Open to Public)

The Promises and Perils of Artificial Intelligence and Autonomous Weaponry

Join the Center for Ethics and the Rule of Law, the Annenberg Public Policy Center, Penn Engineering’s GRASP Lab, together with MITRE, for a moderated public keynote panel on the ethical and legal dilemmas of autonomous weapons and artificial intelligence (AI) in war and national security, featuring experts in national security, the military, legal ethics, and AI. What are the promises and perils of emerging technology on the battlefield? How can the United States and other actors ensure AI is used responsibly and in compliance with existing normative and legal frameworks? What kinds of ethical principles should guide the development and deployment of AI and autonomous weaponry?

A public reception will follow.

Speaker
GEN. (RET.) JAMES “HOSS” CARTWRIGHT
Former Commander of U.S. Strategic Command; CERL Executive Board Member

Speaker
MS. DAWN MEYERRIECKS
MITRE Senior Visiting Fellow; Former Deputy Director of the CIA for Science and Technology

Moderator
PROF. CLAIRE FINKELSTEIN
Algernon Biddle Professor of Law and Professor of Philosophy; CERL Faculty Director, University of Pennsylvania

Moderator
CRAIG WIENER, PH.D.
Technical Fellow, MITRE


Workshop Schedule

Friday, April 12, 2024

This program has been approved for a total of 4.0 (1.5 Substantive and 2.5 Ethics) CLE credits for Pennsylvania lawyers. CLE credit may be available in other jurisdictions as well. Attendees seeking CLE credit can make a payment via cash or check made payable to “The Trustees of the University of Pennsylvania” on the day of the event in the amount of $160.00 ($80.00 public interest/non-profit attorneys). In order to receive the appropriate amount of credit, evaluation forms must be completed.

Penn Carey Law Alumni receive CLE credits free through The W.P. Carey Foundation’s generous commitment to Lifelong Learning.


9:00 – 9:30 am Breakfast and Registration


9:30 – 10:45 am
Welcome and Session I (Invitation Only)

AI, National Security, and Defense: Balancing the Benefits and Dangers of Emerging Technology

Moderator: Professor Claire Finkelstein
Discussion Leader: Principal Deputy Assistant Secretary Paul Dean

Artificial intelligence (AI) is a transformative technology that has profound implications for national security and defense. In this session, participants will reflect on AI as a tool which can enhance the capabilities and performance of military and intelligence systems. This will touch on intelligence gathering and analysis, military decision making, the use of semi-autonomous and autonomous vehicles, weapons systems, and command and control. Participants will also discuss how AI poses significant challenges and risks, such as the management of bias, other ethical, legal, and moral dilemmas, technical vulnerabilities, and global competition, as well as the balance between commercial and government funding for the development of AI. How can the United States and its allies harness the benefits of AI while mitigating its dangers? What are the best practices and policies for developing and deploying AI in a responsible and effective manner? How can the United States and its allies cooperate and coordinate on AI issues to ensure interoperability, trust, and the alignment of interests?


10:45 – 11:15 am Break


11:15 am – 12:30 pm
Session II (Invitation Only)

AI on the Battlefield: The Opportunities and Challenges of Autonomous Weapon Systems for Armed Conflict

Moderator: Dean Kevin Govern
Discussion Leader: Director Jonathan Elliott

Autonomous Weapon Systems (AWS) are machines that can select and engage targets without human intervention, using AI and machine learning algorithms. Participants in this session will discuss the potential of AWS to enhance military capabilities, efficiency, and performance in armed conflict, as well as to reduce human casualties and errors in fighting “cleaner” wars. This session will explore how AWS, particularly in connection with nuclear weapons, pose significant legal, ethical, and humanitarian challenges. These include ensuring compliance with the law of armed conflict (LOAC), maintaining human control and responsibility, preventing escalation and instability, and addressing moral and social concerns. How can states and other actors develop and use AWS in a way that respects LOAC, acknowledges the varying capabilities between states, and protects civilians in armed conflict? What are the best practices and policies for ensuring accountability, transparency, and the reliability of AWS? How can states and other actors cooperate and coordinate on AWS issues to prevent arms races, foster trust, and promote peace and security? With respect to nuclear weapons, how can the international community ensure that AI is used in a responsible and effective manner to enhance nuclear governance and stability? What are the best practices and policies for developing and deploying AI in compliance with the existing nuclear norms and regimes—including those codified in President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the DoD’s Ethical Principles for Artificial Intelligence and Responsible Artificial Intelligence Strategy and Implementation Pathway, and the State Department’s Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy—and in adherence with such conventions as the principle of distinction under international law? How can states forestall the extreme risk of the conversion of their nuclear arsenals into AI-governed systems?


12:30 – 2:00 pm Lunch


2:00 – 3:15 pm
Session III (Invitation Only)

AI for Civilian Protection: How Artificial Intelligence Can Help Mitigate Civilian Harm in Military Operations

Moderator: Professor Michael Meier
Discussion Leader: Mr. Dan Stigall

The Department of Defense (DoD) has recently released its Civilian Harm Mitigation and Response Action Plan (CHMR-AP), which lays out a series of major actions DoD will implement to mitigate and respond to civilian harm resulting from U.S. military operations. The plan recognizes that protecting civilians from harm is not only a moral imperative, but also a strategic necessity, as civilian harm can undermine operational effectiveness, create grievances, and erode legitimacy. One of the key elements of the CHMR-AP is to leverage emerging technologies such as artificial intelligence (AI) to enhance DoD’s ability to prevent, assess, and respond to civilian harm. This session will explore how AI can help DoD achieve its CHMR objectives, and will address the following questions: How can AI improve DoD’s understanding of the civilian environment and the potential sources and pathways of civilian harm? How can AI enhance DoD’s decision-making and targeting processes to minimize the risk of civilian harm? How can AI assist DoD in conducting timely and thorough assessments and investigations of civilian harm incidents? How can AI facilitate DoD’s response and remediation efforts when civilian harm occurs, such as providing ex gratia payments, condolence messages, or apologies? What are the best practices and policies for developing and deploying AI for civilian protection in a responsible and effective manner? What are the ethical, legal, and moral challenges and dilemmas of using AI for civilian protection? How can DoD cooperate and coordinate with allies, partners, and civil society on AI and civilian protection issues?


3:15 – 3:45 pm Break


3:45 – 5:00 pm
Session IV (Invitation Only)

Challenges and Responsibilities of Developing Advanced Mobile Robots: The View from Industry

Moderator: Dr. Jean-Luc Cambier
Discussion Leader: Dr. Daniel Koditschek

The embodiment of increasingly autonomous capabilities in small, advanced mobile robots promises to deliver huge benefits. These same prospects also raise concerns about the possibility of unethical or illegal harm across the spectrum of their use cases, ranging from the well-discussed but still perplexing challenges of military applications, through their potential deployment by police forces, to their eventual ubiquitous penetration into civilian spaces. Some companies at the forefront of these technologies have begun to develop a range of official policies in response to the novel mix of clear benefits and potential harms. For example, Boston Dynamics’ public Statement of Ethical Principles explicitly eschews weaponization of its products. A group of similarly prominent small, advanced mobile robot manufacturers (such as Agility Robotics, ANYbotics, and Unitree) have joined Boston Dynamics in signing an open letter pledging not to weaponize their hardware or software products. This letter acknowledges the legitimacy of existing technologies applied in the interests of upholding laws and in national defense. Indeed, the U.S. Department of Defense has been a huge investor in—and is arguably largely responsible for—the development of many of these new as well as existing technologies. The specific combination of capabilities that triggers recent concerns can be difficult to identify, and the responsibilities of commercial developers to address them remain very unclear. What, if anything, in today’s advanced platforms is so different from robots that have been developed for and deployed in defense applications for decades? What are the best practices for the advanced mobile robotics industry in relation to this emerging spectrum of opportunities and challenges? What assistance and collaboration from robotics researchers and experts in policy and ethics would be helpful in better understanding the nature of possible harms, as well as the obligations and opportunities for industry to mitigate them?


Saturday, April 13, 2024

This program has been approved for a total of 3.5 Ethics CLE credits for Pennsylvania lawyers. CLE credit may be available in other jurisdictions as well. Attendees seeking CLE credit can make a payment via cash or check made payable to “The Trustees of the University of Pennsylvania” on the day of the event in the amount of $140.00 ($70.00 public interest/non-profit attorneys). In order to receive the appropriate amount of credit, evaluation forms must be completed.

Penn Carey Law Alumni receive CLE credits free through The W.P. Carey Foundation’s generous commitment to Lifelong Learning.


9:00 – 9:30 am Breakfast


9:30 – 10:45 am
Session V (Invitation Only)

Aligning Robotics Research and Development with Human Goals: The View from Academia

Moderator: Professor Peter Asaro
Discussion Leader: Dr. Lisa Titus

Given the mix of great benefits and potential harms from the rapidly improving capabilities of advanced mobile robots, how can designers and developers ensure that their work is aligned with the greatest human good? A great majority of U.S. (and many international) robotics researchers enjoy substantial support from the military. It seems fair to assert that many of the impressive technologies we now see emerging from academic and corporate laboratories would not have been possible without that support. Scientists and engineers have had to balance their need for research funding with their responsibility to promote social good for many decades. What are the similarities or differences, for example, between the situation of physicists at the dawn of the atomic age and today’s AI and robotics researchers whose fundamental research seems to have produced technologies that may be crossing some qualitatively new threshold of capabilities? What are these capabilities, and how can their creators ensure they line up with human needs and goals? What are the opportunities for collaborating with industry and policy makers to better define and implement this alignment?  What responsibilities and possible obligations are incumbent upon researchers, teachers, and their students for helping develop such principles and putting them into practice? How can ethical thinkers and legal scholars aid in addressing these questions?


10:45 – 11:15 am Break


11:15 am – 12:30 pm
Session VI and Conclusion (Invitation Only)

Articulating the Social Responsibilities of Roboticists: The View of Ethicists and Policy Makers

Moderator: Ms. Jessica Rajkowski
Discussion Leader: Professor Claire Finkelstein

How do well-established international frameworks, such as LOAC and IHL, impact the work of robot designers and developers whose technologies may end up in military applications? What, if anything, is new about the nature of their fundamental research and corporate products that might require further development of such frameworks? How can the insights and expertise of roboticists be most usefully recruited to help advance progress in conceptualizing the new frameworks that may be needed? How can roboticists support international negotiations aimed at ensuring that militarized robots fit within the bounds of existing or suitably revised international frameworks? What principles exist—and what new ethical principles or laws may be required—to articulate guidelines and obligations bearing on the design and sale of robots to mitigate potential harm upon their entry into the civilian sphere? How can roboticists contribute usefully to the development and application of such principles? Given the increasingly ubiquitous nature of these machines, what best practices should be immediately adopted by researchers and developers to align robot capabilities and behaviors with human needs and goals in civilian spaces?


12:30 – 2:00 pm Keynote Lunch Event Featuring Congressman Ted W. Lieu (Invitation Only), Moderated by Jules Zacher, Esq.


Participants

Professor Ronald Arkin

Professor Emeritus, Georgia Institute of Technology

Professor Peter Asaro

Associate Professor of Media Studies, The New School

Dr. Ruzena Bajcsy

University of Pennsylvania

Professor Gary Brown

Bush School of Government & Public Service, Texas A&M University

Major Kyle Brown

Visiting Fellow, Center for Ethics and the Rule of Law

William W. Burke-White

Professor of Law, University of Pennsylvania

Dr. Jean-Luc Cambier

Program Director, Office of the Under Secretary of Defense for Research and Engineering/Basic Research

General (Ret.) James Cartwright

Executive Board Member, Center for Ethics and the Rule of Law; Former Vice Chairman of the Joint Chiefs of Staff

Ms. Yin Chen

U.S. Army DEVCOM Armaments Center

Dr. Thompson Chengeta

Commissioner, Global Commission on Responsible Artificial Intelligence in the Military Domain; Board Member, UN Secretary-General’s Advisory Board on Disarmament Matters

Colonel Vincent Ciuccoli

Commanding Officer, Philadelphia NROTC Consortium

Ms. Jennifer Cohen

Director of Engagement, Center for Ethics and the Rule of Law

Ms. Ariel Conn

Mag10 Consulting

Dr. Jared Culbertson

Research Mathematician, Autonomous Capabilities Team (ACT3), Air Force Research Laboratory

Dr. Kate Darling

Ethics & Society Research Lead, AI Institute, Boston Dynamics

Dr. Jeremy Davis

Assistant Professor of Philosophy, University of Georgia

Dr. Woody Davis

MITRE Center for Policy and Strategic Competition

Mr. Paul Dean

Principal Deputy Assistant Secretary of State, Bureau of Arms Control, Deterrence and Stability

Mr. Gary A. Deutsch

Executive Board Member, Center for Ethics and the Rule of Law; Managing Chief Counsel, PNC Bank

Mr. Clever Earth

University of Pennsylvania

Director Jonathan Elliott

Director of Assessment & Assurance, Chief Digital and Artificial Intelligence Office, Department of Defense

Ms. Arlene Fickler

Counsel, Dilworth Paxson LLP

Professor Claire Finkelstein

Faculty Director, Center for Ethics and the Rule of Law; Algernon Biddle Professor of Law and Professor of Philosophy, University of Pennsylvania

Ms. Rayanne Fujimoto

Office of Emerging Security Challenges, Bureau of Arms Control, Deterrence, and Stability

Professor Denise Garcia

Professor and Founding Faculty, Experiential Robotics Institute, Northeastern University; Commissioner, Global Commission on Responsible AI in the Military Domain

Mr. Stuart Gerson

Executive Board Member, Center for Ethics and the Rule of Law; Member of the Firm, Epstein Becker Green

Captain David Glinbizzi

Philosophy Instructor, United States Military Academy

Ms. Melissa Goldate

Thraxos

Mr. Walker Gosrich

Ph.D. Candidate, University of Pennsylvania

Mr. Julian Gould

University of Pennsylvania

Dean Kevin Govern

Executive Board Member, Center for Ethics and the Rule of Law; Associate Dean for Academic Affairs & Professor of Law, Ave Maria School of Law

Dr. Laura Grego

Senior Scientist and Research Director, Global Security Program, Union of Concerned Scientists

Mr. Jesse Hamilton

University of Pennsylvania

Dr. Blake Hereth

University of Pennsylvania Perelman School of Medicine

Mr. Mark Hoffman

Senior Program Manager, Lockheed Martin

Dr. Robert “Riddle” Houston

Chief of Test & Evaluation, Chief Digital and Artificial Intelligence Office, Department of Defense

Professor Ani Hsieh

Deputy Director, GRASP Lab; Graduate Program Chair, ROBO; Associate Professor, MEAM, University of Pennsylvania

Brigadier General (Ret.) Patrick Huston

U.S. Army

Ms. Ramatoulie Isatou Jallow

CERL-APPC Mary Frances Berry Postdoctoral Fellow, University of Pennsylvania

Mr. David Joanson

Executive Director, Center for Ethics and the Rule of Law

Dr. Gavin Kenneally

CEO, Ghost Robotics

Dr. Daniel Koditschek  

Electrical Systems & Engineering, School of Engineering and Applied Sciences, University of Pennsylvania

Colonel (Ret.) Christopher Korpela

Johns Hopkins Applied Physics Laboratory

Professor Benjamin Kuipers

Computer Science & Engineering, University of Michigan

Major General (Ret.) Robert Latiff, PhD

University of Notre Dame

Dr. Richard Love

Professor, National Defense University

Professor Duncan MacIntosh

Philosophy Department, Dalhousie University

Dr. Robert Mandelbaum

Lockheed Martin Corporation

Mr. Pratheek Manjunath

United States Military Academy

Professor Michael Meier

Emory University School of Law

Lieutenant Joseph Miller

Naval Science Instructor, University of Pennsylvania Naval Reserve Officers Training Corps

Dr. David Montgomery

Director, Social Science, OUSD (Research & Engineering); Director, Minerva Research Initiative, U.S. Department of Defense

Ms. Jessica Rajkowski

Department Manager for the Robotics and Autonomous Systems Department, MITRE Labs

Dr. Alfred Rizzi

Chief Technology Officer, The AI Institute, Boston Dynamics

Professor Sonia Roberts

Math & Computer Science, Wesleyan University

Dr. Heather Roff

Center for Naval Analyses

Dr. Steven ‘Cap’ Rogers

Senior Scientist, Air Force Research Laboratory, U.S. Air Force

Dr. Nicholas Roy

Professor of Aeronautics and Astronautics, Director of Engineering, MIT Quest for Intelligence, Massachusetts Institute of Technology

Professor Ilya Rudyak

McGeorge School of Law; Non-Resident Fellow, Center for Ethics and the Rule of Law

Lieutenant Colonel Kevin Schieman

United States Military Academy

Dr. Damion Shelton

Co-Founder, Chief Executive Officer, Agility Robotics

Mr. Shawn Steene

Sr. Policy Advisor for Emerging Capabilities, OSD Policy, U.S. Dept. of Defense

Mr. Dan Stigall

Director, Civilian Harm Mitigation and Response Policy, Department of Defense

Colonel Jeffrey Thurnher

U.S. Army JAG Corps

Dr. Lisa Titus

Associate Professor of Philosophy, University of Denver

Dr. Tobias Vestner

Director of the Research and Policy Advice Department, Geneva Center for Security Policy (GCSP)

Mr. Michael Wagner

Chief Executive Officer and Co-Founder, Edge Case Research

Dr. Craig Wiener

Technical Fellow, MITRE

Mrs. Jennifer Arbittier Williams  

Executive Board Member, Center for Ethics and the Rule of Law; Partner at Freeh, Sporkin & Sullivan LLP; former U.S. Attorney for the Eastern District of Pennsylvania

Brigadier General (Ret.) Stephen N. Xenakis, MD

Executive Board Member, Center for Ethics and the Rule of Law; Advisor for Physicians for Human Rights and Center for Victims of Torture

Dr. Mark Yim

Director, GRASP Lab; Faculty Director, Design Studio (Venture Labs); Asa Whitney Professor, MEAM

Jules Zacher, Esq.

Executive Board Member, Center for Ethics and the Rule of Law; Council for a Livable World, Lawyers Committee on Nuclear Policy

Mr. James Zhu

Ph.D. Student, Carnegie Mellon University

Readings for CLE Credit

Session I: AI, National Security, and Defense: Balancing the Benefits and Dangers of Emerging Technology


Session II: AI on the Battlefield: The Opportunities and Challenges of Autonomous Weapon Systems for Armed Conflict


Session III: AI for Civilian Protection: How Artificial Intelligence Can Help Mitigate Civilian Harm in Military Operations


Session IV: Challenges and Responsibilities of Developing Advanced Mobile Robots: The View from Industry


Conference readings

Conference registrants may access required readings. To gain access, please enter the password provided to you by the CERL conference team. If you have any trouble accessing the materials, please contact CERL (cerl@appc.upenn.edu). 

Contact us

For any questions regarding the conference or registration, please contact: CERL at cerl@appc.upenn.edu
