Press "Enter" to skip to content

How the Pentagon’s AI Center Aims to Advance ‘Responsible AI Literacy’ in 2021

In 2021, the Pentagon’s Joint Artificial Intelligence Center intends to advance foundational guidance and projects that promote responsible AI use and strengthen awareness across the entire Defense enterprise—and the National Defense Authorization Act contains a few mandates that could support those fresh efforts.

“The passing of the 2021 NDAA on Jan. 1, 2021 was an important milestone for the JAIC,” according to Megan Lamberth, a research associate for the Center for a New American Security’s Technology and National Security Program. 

Lamberth and the JAIC’s Chief of Ethics Alka Patel each separately shared context with Nextgov on what the next year of the center’s trustworthy AI-driving work might bring, and elements of the NDAA that could play a role. 

Coming in 2021

Established in 2018 to strategize and accelerate AI adoption across U.S. military branches and the rest of the Defense Department’s vast organization, the JAIC assumes a wide range of evolving responsibilities. One is to help steer the coordination of oversight and implementation of DOD’s Ethical Principles for Artificial Intelligence in warfare that were instituted in early 2020. Around that time, a senior JAIC official revealed Patel had been hired to help Defense operationalize those principles at scale, which the center aims to do through what it refers to as an enterprisewide “Responsible AI” framework.

A blog post published on the JAIC’s site last week highlights some of the efforts initiated over the last year to help ultimately ensure Pentagon-centered, AI-enabled work adheres to ethical standards, and that coming capabilities are embedded with them. The publication notes that the center produced “a model card for traceability and documentation of a trained model” in one notable use case, stood up the “DOD-wide Responsible AI Subcommittee” that’s since met monthly to discuss policy and governance topics, and started working to modify procurement processes so that they better incorporate the established principles.

Among several other initiatives listed, the post notes that the center also created a Responsible AI Champions pilot program, “which convened 15 cross-functional individuals … through an experiential learning journey to understand the AI ethics principles, identify tactics for operationalization,” and seed a network of what it called Responsible AI ambassadors.

“We have to start educating our workforce who will be designing, developing, procuring and using AI capabilities about the DOD AI Ethics Principles. Part of the effort is to increase awareness of these Principles and in tandem, ensure that there is an understanding of what they are and how they are relevant to and applicable to individuals in their respective roles,” Patel told Nextgov.

“We are trying to create muscle memory,” she added.

And in 2021, that pursuit will blossom. This spring, Patel plans to launch a Responsible AI Champions pilot beyond the JAIC and into the broader department. While the substantive goals of this new pilot will be similar to the previous one, the latest will vary in both format and structure.  

“Additionally, we want to create a DOD-wide network of Responsible AI Champions from across components who can be a resource within their respective areas as well as with each other,” Patel explained. “They will also be extremely important creating a feedback loop to help us understand and inform future policies and guidance.”

The center’s post on its 2021 “ethics journey” further confirms plans to develop and refine a flexible Responsible AI Strategy and Implementation document that’ll provide Defense insiders with more centralized direction on each principle as they move forward. Its intent, according to Patel, is to “provide guidance on a Responsible AI strategy at the enterprise-level, recognizing that when it comes to implementation, the Components will need their autonomy to execute based on their respective resources, workflows, and structures.”

In the JAIC’s publication, Patel notes that these moves could extend understanding that’s needed ahead of the release of new policies. Regarding when any more concrete policies might be introduced, she told Nextgov that it “would be premature for us to discuss details of the ongoing policy development, but there will be more to follow in the coming months on these initiatives.” 

“However, we can say that our primary focus is on creating awareness of the DOD AI Ethics Principles and increasing awareness of Responsible AI literacy across the enterprise,” she said.

Missing from the blog post was any indication as to whether the JAIC’s ethics policy-focused team will grow in 2021. 

“We do not comment on any ongoing hiring or human resourcing in these forums,” Patel said.

Boosted by the NDAA 

After a long legislative slog that culminated in lawmakers overriding a veto from President Trump, the 2021 NDAA passed—and in it were provisions of note for the center’s work in this realm. 

The legislation “has a number of new requirements for the JAIC and/or the DoD that can inform and support our efforts in operationalizing the DOD AI Ethics Principles and more broadly advancing Responsible AI,” Patel said. 

She pointed specifically to Section 233, which orders the Defense secretary to establish a board of advisers for the JAIC that—among multiple other duties—will “evaluate and advise [leadership] on ethical matters relating to the development and use of artificial intelligence by the Department.” Patel also mentioned Section 235 of the document, which homes in on the “acquisition of ethically and responsibly developed artificial intelligence technology” and calls for an assessment of the Pentagon’s ability to procure ethically made AI technology.

“The NDAA also grants the JAIC its own acquisition authority, which JAIC leaders have been requesting for some time,” Lamberth, a CNAS research associate focused on U.S. emerging technology-rooted topics, told Nextgov. “This will impact the JAIC’s speed and flexibility, and also allows them to focus on their own specific priorities.”

Lamberth, who previously published an analysis on the JAIC’s early-stage work on AI ethics, noted that going forward, the center could confront some issues finding the right balance between acquiring and fielding AI-enabled technologies speedily and at scale—while also ensuring those systems are safe, reliable and secure.

“The JAIC’s prioritization of AI ethics and reliability should not be seen as a constraint, however,” Lamberth said. “On the contrary, the DOD has a real opportunity to work and lead among its allies and partners to establish policies and standards around the use of these technologies.” 

On top of other new requirements, the NDAA also provides the JAIC with that board of advisers, and elevates it within the department to report directly to the deputy secretary of Defense, Lamberth noted. 

“Those are important steps,” she said. “By moving the JAIC up in the chain-of-command, this gives it more visibility and momentum within the department.”

Though heaps of work remain, Lamberth said she views the center’s expressed plans for 2021 as a “good start,” and “certainly a step in the right direction.”

“I’m interested to see how the JAIC approaches some of the institutional and bureaucratic resistance that will almost certainly dampen wide-scale adoption. Some of this resistance is rooted in the fact that the DOD is a massive, massive bureaucracy,” Lamberth added. “But resistance may also be the result of a lack of familiarity or literacy on AI in the department.”

source: NextGov