The Department of Defense issues ethical guidelines for artificial intelligence to its tech contractors

The purpose of the guidelines is to ensure that tech contractors adhere to DoD’s existing ethical principles for AI, Goodman says. DoD announced these principles last year, following a two-year study commissioned by the Defense Innovation Board, an advisory panel of prominent technology researchers and businesspeople set up in 2016 to bring the spark of Silicon Valley to the U.S. military. The board was chaired by former Google chief Eric Schmidt until September 2020, and its current members include Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Lab.

Still, some critics question whether the work promises any meaningful reform.

During its study, the board consulted a range of experts, including vocal critics of the military’s use of artificial intelligence, such as members of the Campaign to Stop Killer Robots and Meredith Whittaker, a former Google researcher who helped organize the Project Maven protests.

Whittaker, now faculty director at New York University’s AI Now Institute, was not available for comment. But according to Courtney Holsworth, a spokeswoman for the institute, she attended one meeting, where she discussed the direction the board was taking with senior members, including Schmidt. “She was never meaningfully consulted,” Holsworth says. “To claim that she was could be read as a form of ethics-washing, in which the presence of dissenting voices during a small part of a long process is used to claim that a given result has broad support from relevant stakeholders.”

If DoD does not have broad buy-in, can its guidelines still help build trust? “There will be people who will never be happy with a set of ethical guidelines that DoD produces, because they find the idea paradoxical,” Goodman says. “It’s important to be realistic about what guidelines can and cannot do.”

The guidelines, for example, say nothing about the use of lethal autonomous weapons, a technology that some campaigners argue should be banned. But Goodman points out that the rules governing such technology are decided higher up the chain. The purpose of the guidelines is to make it easier to build artificial intelligence that meets those rules. And part of that process is to make explicit any concerns that third-party developers have. “A valid application of these guidelines is to decide not to pursue a particular system,” says Jared Dunnmon of the Defense Innovation Unit (DIU), who co-authored them. “You may decide it’s not a good idea.”
