Algorithmic Accountability Act: What tech leaders need to know and do now

The Algorithmic Accountability Act is likely to be passed. Are companies ready for it?

In April 2022, the Algorithmic Accountability Act was reintroduced in both the House and Senate after undergoing modifications.

What is the Algorithmic Accountability Act?

“Houses that you never know are for sale, job opportunities that never present themselves and financing that you never become aware of — all due to biased algorithms,” said Sen. Cory Booker, a bill sponsor. “This bill requires companies to regularly evaluate their tools for accuracy, fairness, bias and discrimination. It’s a key step toward ensuring more accountability from the entities using software to make decisions that can change lives.”

If the Algorithmic Accountability Act passes, it is likely to trigger audits of artificial intelligence systems at the vendor level, as well as within the companies that use AI in their decision-making.

SEE: Artificial Intelligence Ethics Policy (TechRepublic Premium)

“The Act would require all companies utilizing AI to conduct critical impact assessments of the automated systems they use and sell in accordance with regulations set forth by the Federal Trade Commission,” said Siobhan Hanna, managing director of global AI systems for TELUS International. “Compelling tech firms to self-audit and report is a first step, but moving towards the implementation of strategies and processes to mitigate bias more proactively will also be key in helping to address discrimination earlier in the AI value chain.”

Are companies ready for the challenge?

As many as 188 different human biases that can influence AI have been identified. Many of these biases are deeply embedded in our culture and our data. If AI training models are based on this data, bias can filter in. While it’s possible that companies and their AI developers can deliberately include bias in their algorithms, bias is more likely to develop from data that is incomplete, skewed or not from a sufficiently diverse set of data sources.

“The Algorithmic Accountability Act would present the most significant challenges for businesses that have yet to establish any systems or processes to detect and mitigate algorithmic bias,” said Hanna. “Entities that develop, acquire and utilize AI must be cognizant of the potential for biased decision making and outcomes resulting from its use.”

If the bill becomes law, the FTC would have the authority to conduct AI bias impact assessments within two years of passage. Healthcare, banking, housing, employment and education would likely be high-profile targets for examination.

“Specifically, any person, partnership or corporation that is subject to federal jurisdiction and makes more than $50 million per year, possesses or controls personal information on at least one million people or devices, or primarily acts as a data broker that buys and sells consumer data will be subject to assessment,” said Hanna.

What companies can do now

Bias is inherent in society, and there is really no way that a totally “zero bias” environment can be achieved. But this doesn’t excuse companies from making best efforts to ensure that data and the AI algorithms that operate on it are as objective as possible.

Steps companies can take to facilitate this are:

  • Use diverse AI teams that bring in many different views and perspectives on AI and data.
  • Develop internal methodologies for auditing the AI for bias.
  • Require bias assessment results from the third-party AI system and data vendors they purchase services from.
  • Place a heavy emphasis on data quality and preparation in their daily AI work.
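As a concrete illustration of the second step, an internal bias audit can start with something as simple as comparing a model's selection rates across demographic groups. The sketch below is hypothetical and not part of the Act or any FTC guidance; the data, group labels and the 0.8 "four-fifths rule" threshold are illustrative assumptions commonly used in employment-discrimination analysis.

```python
# Illustrative internal bias audit: compute the disparate-impact ratio
# (lowest group selection rate / highest group selection rate) on a set
# of automated approval decisions. All data here is made up.

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> {group: approval rate}."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group's selection rate to the highest group's.
    Values below 0.8 are a common (non-statutory) red flag."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical loan-approval decisions labeled by demographic group:
# group A approved 60% of the time, group B only 40%.
sample = [("A", True)] * 60 + [("A", False)] * 40 \
       + [("B", True)] * 40 + [("B", False)] * 60

ratio = disparate_impact_ratio(sample)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.60 ≈ 0.67
if ratio < 0.8:
    print("flag for human review")
```

A real audit would go further, controlling for legitimate predictive factors and testing statistical significance, but even a coarse check like this surfaces skewed outcomes before a regulator does.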

Source: TechRepublic