Press "Enter" to skip to content

New Biden AI Framework a ‘Blueprint’ for Future Regulations

The Biden administration unveiled a blueprint Tuesday for a new national framework governing the design and use of artificial intelligence, a landmark federal initiative emphasizing the need for civil rights protections and greater accountability in AI, bolstered by complementary work from other federal agencies.

Announced by the White House Office of Science and Technology Policy, the framework, dubbed the AI Bill of Rights, is composed of five guiding principles to be considered when developing AI technologies: safe and effective systems, data privacy, algorithmic discrimination protections, notice and explanation, and human alternatives.

Officials confirmed that while the framework sets fresh standards for AI developers and users, it is only a guidebook, not enforceable legislation. The White House hopes it will lay the groundwork for current and future bills in local governments and on Capitol Hill alike.

“It’s a technical plan to bring principles into practice,” a White House spokesperson told reporters on a Monday call. “This blueprint provides examples and concrete steps for technologists, companies, governments, civil society and communities to take in order to build these key civil rights protections into policy practice or technological design.”

Private and public organizations, as well as individual users, are asked to abide by the guidelines at their own discretion.

“We’re calling on technologists to integrate the safeguards called for in the blueprint into teams’ design plans and product launch checklists, to commit publicly to take the action to align with these principles and protect the public,” the spokesperson said. “We’re calling on policymakers to use the practical steps described to inform future laws and requirements. We’re calling on researchers and innovators to lead the way by implementing these ideals and creating new ways to protect the public. And we’re calling on advocates, civil society and the public to ask the hard questions and to hold governments and industry accountable.”

This framework marks the Biden administration’s latest initiative to bring more accountability to Big Tech and apply regulations to the emerging technology space.

The five pillars comprising the blueprint were developed in partnership with industry experts to determine the areas of AI that demand more regulation. Other federal agencies also contributed to the framework and are set to announce their own individual initiatives designed to promote trust in AI technologies. 

Preventing harmful biases in AI algorithms is a chief concern addressed in the framework. Citing examples of algorithms that use data to discriminate on the basis of characteristics such as race or sex, administration officials said that testing systems and consulting with impacted communities are among the ways to mitigate algorithmic discrimination.

“Unfortunately, we’ve repeatedly seen instances where the use of automated systems leads to discriminatory outcomes,” a spokesperson said. “Just like systems need to be tested to see if they work, they need to be assessed to see if they lead to disparities in cases where one demographic group receives worse outcomes.”

Implementing a human touch is one of the solutions to algorithmic biases offered by the framework. Other federal agencies, namely the National Institute of Standards and Technology, have advocated a similar socio-technical approach to developing AI systems.

“No matter how much consultation, and testing, and refinement is done, there are going to be systems that just don’t work for some people or some situations,” a spokesperson said. “In order to make use of the benefits of technology in more settings, we also have to be realistic about its limits.”

Some of the key points in the AI framework also appear in recently proposed legislation. Multiple lawmakers have introduced bills seeking to strengthen user data privacy protections against third-party data brokers, while others specifically address challenges associated with AI technology.

source: NextGov