Press "Enter" to skip to content

Gary Gensler calls for US to regulate AI risks to financial stability

The rapid adoption of artificial intelligence (AI) in the financial sector could pose significant risks to financial stability and lead to another crisis within the next decade, the top U.S. securities regulator has warned.

In a recent interview with the Financial Times, Gary Gensler, chair of the U.S. Securities and Exchange Commission (SEC), said the concentration of power in a few dominant AI platforms is creating dangerous systemic risks that could trigger a crisis as early as the late 2020s.

According to Gensler, it is “nearly unavoidable” that AI will trigger a financial crisis within a decade. What concerns regulators is the potential for herd behavior if many firms rely on the same underlying AI models and data aggregators.

For example, mortgage lenders might all use an AI system from one tech company to assess creditworthiness. If that model has flaws, it could lead to a surge in defaults and threaten the housing market.

“I do think we will in the future have a financial crisis … [and] in the after action reports people will say ‘Aha! There was either one data aggregator or one model … we’ve relied on.’ Maybe it’s in the mortgage market. Maybe it’s in some sector of the equity market,” he said.

Gensler added that the “economics of networks” makes this scenario likely: the more companies use an AI system, the more data it accumulates and the better its predictions become. This creates a winner-takes-all dynamic in which one or two AI models come to dominate an industry.

The problem of regulating ‘horizontal’ AI risks

Much of current financial regulation focuses on individual companies and sectors, which makes it difficult to oversee AI risks that cut across markets. As Gensler put it:

“It’s frankly a hard challenge […] It’s a hard financial stability issue to address because most of our regulation is about individual institutions, individual banks, individual money market funds, individual brokers; it’s just in the nature of what we do. And this is about a horizontal [matter whereby] many institutions might be relying on the same underlying base model or underlying data aggregator.”

The SEC has proposed requiring broker-dealers and investment advisers to disclose potential conflicts of interest in their use of predictive analytics. But Gensler said this “still doesn’t get to this horizontal issue” of interconnected AI dependencies.

U.S. regulators are now exploring cross-agency coordination and new oversight frameworks to monitor systemic AI risks, though progress has been slow. Gensler has raised the issue at international bodies like the Financial Stability Board.

The concentration risks of AI “as a service”

Another concern is the consolidation of AI supply among Big Tech firms. Companies like Google, Amazon, and Microsoft have robust cloud infrastructure to host complex AI models and sell them “as a service” to financial institutions.

“How many cloud providers [which tend to offer AI as a service] do we have in this country?” Gensler asked.

This concentration creates single points of failure. If an AI model hosted on Amazon’s servers malfunctions, the failure could cascade to the many banks, insurers, and trading firms that rely on it.

Europe leads in AI governance

While U.S. regulators are still studying AI risks, Europe has moved more decisively. This year, the EU is set to pass its AI Act, legislation that imposes strict requirements around transparency, data privacy, and bias reduction in AI systems.

The SEC chair has an ambitious regulatory agenda targeting issues like climate change disclosures, cryptocurrency oversight, and private equity rules – all of which face legal challenges. Ensuring AI stability may be one of the most complex tests for U.S. regulators in the years ahead.

Featured Image Credit: Tara Winstead; Pexels

Source: ReadWriteWeb