The Department of Defense now has broad principles outlining ethical use of artificial intelligence by the military.

DoD Chief Information Officer Dana Deasy announced Feb. 24 that he had been directed by Secretary of Defense Mark Esper to formally adopt five AI principles recommended by the Defense Innovation Board.

The announcement “lays the foundation for the ethical design, development, deployment and the use of AI by the Department of Defense,” Deasy said at a Feb. 24 press conference at the Pentagon.

Lt. Gen. Jack Shanahan, the director of the Joint Artificial Intelligence Center (JAIC), said the decision to adopt the principles separates the United States and its allies from adversaries whose use of AI is concerning.

“My conversations with our allies and partners in Europe reveal that we have much in common regarding principles relating to the ethical and safe use of AI-enabled capabilities in military operations,” said Shanahan. “This runs in stark contrast to Russia and China, whose use of AI technology for military purposes raises serious concerns about human rights, ethics and international norms.”

The five principles apply to both the combat and non-combat use of AI technologies, said Deasy.

The five principles are as follows:

  1. Responsible. DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
  2. Equitable. The department will take deliberate steps to minimize unintended bias in AI capabilities.
  3. Traceable. The department’s AI capabilities will be developed and deployed so that relevant personnel have an appropriate understanding of the technology, development processes, and operational methods that apply to AI. This includes transparent and auditable methodologies, data sources, and design procedure and documentation.
  4. Reliable. The department’s AI capabilities will have explicit, well-defined uses, and the safety, security and effectiveness of such capabilities will be subject to testing.
  5. Governable. The department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

The principles stem from recommendations the Defense Innovation Board delivered to Esper in October after a 15-month process in which the board consulted AI experts from industry, government and academia.

Shanahan described most of the differences in language between the DIB’s recommendations and the DoD’s final version as changes made by lawyers to make sure the language was appropriate for the department, but he maintained that the final language kept “the spirit and intent” of the DIB’s recommendations.

Some of these changes could be contentious for those concerned about the development of military AI.

For example, one debate in the board’s formulation of the “Governable” principle was whether to include an explicit requirement for AI systems to have a way for humans to deactivate or disengage them. The DIB’s ultimate recommendations included a compromise, calling “for human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior.” The final DoD language dropped the reference to human or automated intervention and to escalatory behavior, requiring only that AI systems have “the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”

Shanahan, however, emphasized ways in which the Pentagon’s final language went further than the board’s recommendations. He pointed to the “Traceable” principle, where the adopted wording applies to all “relevant personnel,” which he said is broader than the “technical experts” language used by the board.

In a statement, Eric Schmidt, the board’s chair and former head of Google, praised the move.

“Secretary Esper’s leadership on AI and his decision to issue AI principles for the department demonstrates not only to DoD, but to countries around the world, that the U.S. and DoD are committed to ethics and will play a leadership role in ensuring democracies adopt emerging technology responsibly,” he said.

The JAIC is expected to lead the effort to implement the principles. Shanahan said that he had followed through on his earlier promise to hire an AI ethicist within the JAIC, and she and other JAIC staff would bring in AI leaders from across the department to hash out implementation.

“This will be a rigorous process aimed at creating a continuous feedback loop to ensure the department remains current on the emerging technologies and innovations in AI. Our teams will also be developing procurement guidance, technological safeguards, organizational controls, risk mitigation strategies and training measures,” said Shanahan.

Nathan Strout covers space, unmanned and intelligence systems for C4ISRNET.
