Saturday, July 27, 2024

When AI makes a fatal mistake, who's to blame? Air Force Secretary weighs morality and reality - Breaking Defense

breakingdefense.com

Michael Marrow

My Summary

This article discusses Air Force Secretary Frank Kendall's views on the ethical and accountability challenges associated with autonomous weapon systems, particularly in the context of the Air Force's Collaborative Combat Aircraft (CCA) program. Key points include:

1. Kendall supports the Air Force's drone wingman efforts but believes more work is needed to establish clear liability standards for autonomous systems that might violate laws of armed conflict.

2. The main challenge is determining who should be held accountable if autonomous weapons make mistakes - the user, designer, tester, or someone in the chain of command.

3. The Air Force is working with vendors like General Atomics and Anduril on CCA platforms, with classified autonomy vendors also involved.

4. Kendall emphasizes the policy of maintaining "meaningful human control" over the use of force, but acknowledges there's a "gray space" in determining threats and avoiding collateral damage.

5. The article notes that DoD policy has been revised to streamline approval for highly automated weapons, and that some systems already have fully automated modes.

6. Kendall has previously stated that having a human in the decision loop could be a disadvantage against fully AI-controlled threats.

7. There are concerns that adversaries like China may not adhere to the same ethical constraints in developing autonomous weapons.

8. The Biden Administration is focusing on building international norms for "responsible" military AI use with allies rather than pursuing binding arms control agreements with adversaries.

Secretary of the Air Force Frank Kendall, center, listens as Scott Meredith, technical director of the Arnold Engineering Development Complex 716th Test Squadron, left, discusses the aerodynamic test capabilities of the 716TS. (U.S. Air Force photo by Keith Thornburgh)

FARNBOROUGH 2024 — Air Force Secretary Frank Kendall is a strong proponent of the Air Force’s rapidly accelerating drone wingman effort, but the service’s top civilian also believes that more work is needed to establish a clear standard for liability if unmanned systems violate the laws of armed conflict. 

“It’s obviously something we’re very concerned about,” Kendall remarked about fusing autonomy and lethal weapon systems in a wide-ranging interview with Breaking Defense over the weekend, which was frequently interrupted by the roar of jet engines at the Royal International Air Tattoo (RIAT).

Kendall, who previously worked as a human rights lawyer, is acutely aware of the ethical conundrums wrapped up in the Air Force’s Collaborative Combat Aircraft (CCA) program. And as the head of the Air Force, he is arguably at the forefront of the Pentagon’s efforts to confront, and ultimately incorporate, more aspects of autonomy and artificial intelligence in weapons as CCA rapidly progress.

“Whatever weapon systems we employ have to be consistent with the laws of armed conflict. The problem isn’t that. We know what those rules are and I think we know how to impose them on our systems,” he said.

The more vexing issue, Kendall said, is how to seek accountability when things go awry.

“It’s who do you hold accountable,” he continued. “And I think we’ve got to think through: Is it the person who used the weapon? Is it the designer? Is it the tester? Is it somebody in the chain of command? I think there needs to be a discussion about the mechanism by which people are held responsible for whatever weapons do when they do something that’s not allowed.”

The Air Force currently has two vendors under contract — General Atomics and Anduril — to provide the physical platform for the CCA program’s first round of drone production, or “increment.” The service is also working with several autonomy vendors that will plug into the drones, though Air Force spokesperson Ann Stefanek told Breaking Defense that the autonomy vendor pool is classified. CCA are expected to be operational before the end of the decade. 

“Our policy is to have meaningful human control of the application of force, and we’re gonna keep that. But that leaves a lot of gray space in terms of how certain are you, what’s the degree of certainty you have that that’s a threat before you commit a weapon, and what degree of competency you want to have that you’re not going to impose collateral damage and kill civilians unnecessarily,” he observed.

That “gray space” is wider than commentators often assume. The Pentagon’s official policy on autonomous weapons, DoD Directive 3000.09 [PDF], was revised in 2023 to streamline the approval process to field more highly automated weapons. Even before this change, anti-aircraft and missile defense systems like Patriot and Aegis have long had fully automated modes for threats too fast or numerous for humans to respond in time. Despite what is often assumed, DoD policy has never actually required a “human in the loop” for the decision to use lethal force.

Kendall himself has worried aloud for years that a human operator might make decisions too slowly to survive against a fully computer-controlled threat, and has personally witnessed the capabilities of an AI-controlled jet. At the Reagan Forum last December, he declared: “If the human being is in the loop, you will lose. You can have human supervision, you can watch over what the AI is doing. If you try to intervene, you’re going to lose.”

In this latest interview with Breaking Defense, Kendall noted, “I think there are a lot of details to be worked out, but I think the principles are there, and I think we’re going to be compliant.”

Beyond the accountability problem, Kendall also raised concerns he’s voiced repeatedly: that America’s adversaries, namely China, won’t abide by the same ethical constraints.

The Biden Administration has recently gotten Beijing to agree to broad, non-binding discussions of “AI risk.” But its focus has been on building international norms for “responsible” military AI with its allies, while putting little faith or effort into binding AI arms control with adversaries.

“The risk we’re running is that our adversaries won’t be bothered by this at all,” Kendall said. “They will field systems which are clearly about their operational effectiveness, without regard to collateral damage or inappropriate engagements. And the more stressing the operational situation is, the more inclined they’ll be to relax their constraints.” 

Sydney J. Freedberg in Washington and Valerie Insinna in London contributed to this report.

 
