Experts on artificial intelligence and government procurement urged lawmakers to articulate guardrails and guidelines for how the government purchases AI, set up more training for government buyers on AI and more during a Thursday hearing of the Homeland Security and Governmental Affairs Committee.
The committee’s chair, Sen. Gary Peters, D-Mich., agreed on the need for action. And as Capitol Hill considers how to regulate AI more broadly, Peters has already zeroed in on AI in the government context specifically, although it’s unclear exactly what, if any, additional legislative proposals might emerge on how the government purchases AI.
“Now is the time to ensure that the algorithmic systems that the government buys do not have unintended or harmful consequences,” Peters said Thursday. “Guardrails are more important than ever. Federal agencies are inundated with sales pitches and technology demos promising the next big thing, and while the federal government must be forward thinking, we also have to be cautious in procuring these new tools.”
More than half of the federal government’s AI is bought from commercial vendors, Peters said.
Rayid Ghani, a professor at Carnegie Mellon University who focuses on the intersection of AI and public policy, told lawmakers that there’s been a lack of attention to scoping, the early stage where decisions are made that can cause or prevent harms later on.
There’s also too much attention on the mechanics of systems, instead of what they’re meant to do and whether they’re accomplishing that goal, he said.
“Government agencies often go on the market to buy AI without understanding and defining and scoping the problem they want to tackle, without assessing whether AI is even the right tool and without including individuals and communities that will be affected,” he said.
William Roberts, director of emerging technology at ASI Government and former acquisition director at the Defense Department’s Joint AI Center, told lawmakers that his team at JAIC had to “rethink the way we do procurement,” bringing in ethics professionals, policy people and end users at the beginning. “What we need is a team that works in an iterative fashion, which is hard right now,” he said.
He suggested that lawmakers give other parts of the government more of the types of contracting authorities he had at DOD.
Roberts also said that training will be critical to “completely reskill” the acquisition workforce with not only technical knowledge about AI, but also AI business acumen and know-how about intellectual property issues, ethics and agile contracting.
“Right now, [contracting officers] are set up for failure,” he said.
Sen. Maggie Hassan, D-N.H., floated the idea of a program for AI modeled after the FedRAMP cloud security assessment program.
Devaki Raj, former CEO and co-founder of CrowdAI, told lawmakers that contracting officials would benefit from “testing and evaluation datasets” to put AI applications through their paces. Even after systems are purchased, responsible AI requires ongoing testing and monitoring, as well as access to government-curated data to train and update models, she said.
Sen. James Lankford, R-Okla., asked witnesses about licensing and intellectual property for AI.
“How do we handle who owns it once actually the federal government is the user,” he asked.
Roberts said that part of buying AI is negotiating “intellectual property terms, which require knowledge of the various components of AI — the data rights, the cloud, the platforms, the infrastructure, the trained and untrained model — all of which could have a separate intellectual property strategy, which could make or break the project.”
“First and foremost,” he said, “the diligent AI acquisition official must realize that AI is a means to the end, and the end is always the mission. So we’re never really buying AI, we’re buying an enhancement to our mission.”