The Pentagon is looking for an AI ethicist

After a rash of tech employee protests, the Defense Department wants to hire an artificial intelligence ethicist.

"We are going to bring on someone who has a deep background in ethics," tag-teaming with DOD lawyers to make sure AI can be "baked in," Lt. Gen. Jack Shanahan, who leads the Joint Artificial Intelligence Center, told reporters during an Aug. 30 media briefing.

The AI ethics adviser would sit within the JAIC, the Pentagon's strategic nexus for AI projects and plans, and help shape the organization's approach to incorporating AI capabilities in the future. The announcement follows protests by Google and Microsoft employees who were concerned about how the technology would be used -- particularly in lethal systems -- and who questioned whether major tech companies should do business with DOD.

The JAIC is building an AI ethics process with input from the Defense Innovation Board, the National Security Council and the Office of the Secretary of Defense for Policy to address AI ethics policy concerns and offer recommendations to the defense secretary.

The JAIC, which is only about a year old, is still "trying to fill a lot of gaps," Shanahan said, but installing an ethicist is a top priority.

"One of the positions we are going to fill will be someone who's not just looking at technical standards, but who is an ethicist," Shanahan said. "In Maven, these questions really did not really rise to the surface every day because it was really still humans looking at object detection classification and tracking -- there were no weapons involved in that."

Project Maven is an image and data tagging initiative that started with a DOD-Google partnership. Google passed on renewing its contract in 2018, which raised concerns that other tech contracts -- such as the DOD's $10 billion JEDI cloud contract -- could fall through.

Shanahan said there was no "backlash" and that Google not renewing its contract with Maven was not linked to the need for an ethicist. But he admitted that DOD probably should be more involved in the ethical-AI conversation.

Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University, told FCW that private-sector employee protests are part of a changing dynamic in which workers feel empowered to lobby for or against their company's business dealings -- especially deals involving autonomous vehicles, weapons systems and facial recognition technology that could prove lethal or eventually trickle down to law enforcement agencies.

"I think with these protests there's much more balance. Now employees have more voice, they have more weight and they have more influence on the decisions made by capital," Lin said. However, a "healthier balance doesn't mean it's necessarily good for business, right?" he asked. "Business folks want to move fast, they want to break things, they don’t want to ask 10,000 other workers what they think about a specific project."

Winning the public trust means having a "two-way conversation on every level," not just with company heads, Lin said.

“There’s so much distrust in the current administration that even if [the DOD] built the technology right -- it's to spec, it's ethical -- [the public's] not sure how it's going to be used,” Lin said. "There's really just no guarantee. Trust is a big issue."

Lin said corporate leadership also must do "some accounting of why they accepted a project a good chunk of their employees might have questions about," with DOD and DARPA, the Defense Department's research arm, actively taking part in the debate.

Shanahan acknowledged DOD's shortcomings on that front.

"There are always concerns in any workforce about what is this technology going to be used for," Shanahan said. As the Department of Defense, "it's incumbent upon us, I think we have to do a better job, quite honestly, to provide a little bit more clarity and transparency about what we're doing with artificial intelligence without having to delve into deep operational details."

Lin and Shanahan agree that developing and sharing international norms and standards will be key going forward as AI becomes more deeply integrated in everyday life -- and on the battlefield.

Shanahan told reporters he was "strongly in favor" of international discussions on AI norms and a DOD-State Department partnership to "understand what the future should be in terms of this question of norms and behavior" with AI, but he doesn't think there should be "outright bans" yet, as the technology is still so immature.

But Lin said it's important to remember that careful and deliberate consideration of a technology's legal, moral and ethical consequences doesn't mean it won't be used. It may, however, at least help pre-empt surprises later.

"Just because there are ethical issues, doesn't mean it's a deal breaker," Lin said. "Not necessarily a fatal blow, but there are some issues to think through, some red lines you have to map out."

"Any communication, any information … is better than what we have now," including DOD being open about the red lines it won't cross, Lin said. "It won't be perfect."

Shanahan said he hopes more transparency going forward will reassure companies and the public, especially as the JAIC expects to produce more AI capabilities in 2020.

Since standing up, the JAIC has focused on five initiatives: predictive maintenance for the H-60 helicopter, humanitarian assistance and disaster relief (e.g., wildfires and flooding), cyber sense-making (e.g., event detection, user activity monitoring and network mapping), information operations and intelligent business operations.

The biggest effort for fiscal 2020 will be AI for "maneuver and fires," which will focus on products that target warfighting operations, such as operations-intelligence fusion, joint all-domain command and control, accelerated sensor-to-shooter timelines, autonomous and swarming systems, target development and operations-centered workflows. The JAIC is also teaming with the Defense Innovation Unit and the armed services on a predictive health project that includes health record analysis, medical imagery classification, post-traumatic stress disorder mitigation and suicide prevention.

The maneuver and fires effort will take Project Maven's metadata, fuse it with other intelligence data and overlay operating and sensor information in a common experience. The JAIC will work the operations and command-and-control side of the problem using call-for-fire data from Iraq, curating the data in hopes of speeding up the fire support coordination decision process.

JAIC is also working on its Joint Common Foundation, an enterprise cloud-based platform for access to data, tools, libraries and other platforms that will help facilitate rapid software development and deployment.

Shanahan said that two years ago, he couldn't have conceived of hiring an ethicist, or emphasizing the unintended moral and ethical consequences of AI technology. Now it's one of the JAIC's chief concerns.

"Humans are fallible; in combat, humans are particularly fallible. And mistakes will happen. AI can help mitigate the chances of those mistakes -- not eliminate, but reduce," Shanahan said. "Maybe we have a lower incidence of civilian casualties because we’re using artificial intelligence."

This article first appeared on FCW, a partner site to Defense Systems. 


About the Author

Lauren C. Williams is a staff writer at FCW covering defense and cybersecurity.

Prior to joining FCW, Williams was the tech reporter for ThinkProgress, where she covered everything from internet culture to national security issues. In past positions, Williams covered health care, politics and crime for various publications, including The Seattle Times.

Williams graduated with a master's in journalism from the University of Maryland, College Park and a bachelor's in dietetics from the University of Delaware. She can be contacted at lwilliams@fcw.com, or follow her on Twitter @lalaurenista.


