Assessing Google CEO Sundar Pichai’s call for fair AI regulations

Google CEO Sundar Pichai and other AI researchers say limited regulation is needed to protect people from irresponsible use cases.

Google CEO Sundar Pichai and other executives working on artificial intelligence are now calling for limited government regulation as the European Union mulls a potential five-year ban on facial recognition software.

Pichai called for governments to take a bigger role in regulating the use of artificial intelligence (AI), publishing his views in a Financial Times editorial and speaking out on the topic in appearances around the world.

This stands in stark opposition to comments he made in an interview with the Financial Times in September, which called for caution regarding any potential government intrusion into how tech companies deploy AI.

“There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition. While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone,” Pichai wrote in The Financial Times.

“The EU and the US are already starting to develop regulatory proposals. International alignment will be critical to making global standards work. To get there, we need agreement on core values. Companies such as ours cannot just build promising new technology and let market forces decide how it will be used. It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone,” he said in the editorial.

SEE: 60 ways to get the most value from your big data initiatives (free PDF) (TechRepublic)

Facial recognition concerns

Since Pichai made his comments, regulators, analysts, scientists, and developers have backed his statement and called for targeted rules governing very specific AI use cases, particularly facial recognition programs. Although Google has become the biggest AI company on earth, it has pledged not to use AI in applications related to weapons or to surveillance that violates international norms, while other major corporations like Amazon and Microsoft have made billions selling riskier AI tools for more nefarious purposes.

But some working in the facial recognition software space have questioned whether the backlash to facial recognition is about the software itself or more about data collection, which has more to do with the proliferation of cameras in society. Jon Gacek, head of government, legal and compliance at Veritone, cited dozens of use cases for facial recognition that he believed would benefit society, like helping police scan through thousands of hours of Ring camera footage or casinos using AI to sort through images coming from dozens of cameras.

“If it’s something already being done by humans but can be done more effectively or efficiently with AI, I think it would be a mistake to put regulations in place that limited that advancement. It’s important that whatever the legislation is, it’s very specific to whatever the problem is that is trying to be solved,” Gacek said.

“There are a lot of uses of AI that the public will be big endorsers of and would be super useful in improving society. We shouldn’t limit our growth and our improvements of our lives because we haven’t thought through the regulation, so doing something quick and all-encompassing seems like a mistake to me.”
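To make the footage-scanning scenario Gacek describes more concrete, here is a minimal sketch of how a single frame could be checked against a small watchlist using the open-source face_recognition library. The file paths and tolerance value are assumptions for illustration, and nothing here is confirmed to be the tooling Veritone or any police force actually uses.

```python
# Illustrative watchlist-matching sketch using the open-source
# face_recognition library. File paths and the tolerance value are
# assumptions for the example, not any vendor's actual pipeline.
import face_recognition

# Encode the faces on a small watchlist (one reference photo per person).
watchlist_paths = ["suspect_1.jpg", "suspect_2.jpg"]  # hypothetical files
watchlist_encodings = [
    # Assumes each reference photo contains exactly one clear face.
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in watchlist_paths
]

# Check every face found in a single frame pulled from camera footage.
frame = face_recognition.load_image_file("frame_0001.jpg")  # hypothetical frame
for face in face_recognition.face_encodings(frame):
    # Lower tolerance means stricter matching; 0.6 is the library default.
    hits = face_recognition.compare_faces(watchlist_encodings, face, tolerance=0.6)
    if any(hits):
        print("Possible watchlist match; flag this frame for human review.")
```

In practice, a flagged frame like this would still go to a human reviewer, which is exactly where the accuracy problems discussed below become important.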

Pichai’s comments about AI regulation came just two days after leaks out of the European Commission suggested it was considering a temporary ban on facial recognition technologies used by both public and private actors. The leak caused shockwaves in Europe because countries including Germany and France had plans to roll out facial recognition software in train stations, stadiums, and airports.

“Use of facial recognition technology by private or public actors in public spaces would be prohibited for a definite period (e.g. three–five years) during which a sound methodology for assessing the impacts of this technology and possible risk management measures could be identified and developed,” the leaked draft white paper on AI said.

Facial recognition software has been a particular sore spot for the backers of AI and continues to have accuracy problems despite widening deployment. In 2018, police officials in South Wales faced a mountain of criticism for how they deployed facial recognition software during the June 2017 UEFA Champions League Final in Cardiff.

They were forced to admit that the program had a 92% false positive rate, meaning only 8% of the people it “identified” were actual matches to names and faces in the criminal database.
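For readers who want the arithmetic behind that figure, the short sketch below shows how a false positive rate reduces to the 8% figure. The counts are placeholders chosen to reproduce a 92% rate, not the official South Wales numbers.

```python
# Illustrative arithmetic only: these counts are placeholders chosen to
# reproduce a 92% false positive rate, not the official South Wales figures.
total_alerts = 2_500   # people the system flagged as watchlist matches
false_alerts = 2_300   # flags that turned out to be wrong on human review

true_matches = total_alerts - false_alerts          # 200
false_positive_rate = false_alerts / total_alerts   # 0.92
precision = true_matches / total_alerts             # 0.08

print(f"False positive rate: {false_positive_rate:.0%}")           # 92%
print(f"Share of flags that were real matches: {precision:.0%}")   # 8%
```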

“What Google said is right on. AI is an emerging class of technologies that is certainly very powerful. As with any new powerful tech, we need to be careful about how we use it, and how we don’t use it. There’s a lot of benefits to AI tech in the world that are being deployed all the time, but there is overstepping that occurs and companies that are probably pushing things in the wrong direction,” said Eric Sydell, PhD, executive vice president of innovation at Modern Hire. The company’s hiring platform uses technology to predict performance and improve hiring results.

“One of the issues with AI is that the hype engine of a lot of these companies got about a decade ahead of the science engines of the companies. It’s very very powerful but right now, there’s probably more harm than benefit to trying to use facial recognition. It feels invasive and the validity or utility of it is unproven,” he added.

SEE: EU General Data Protection Regulation (GDPR) policy (TechRepublic Premium)

Legislation for AI

Pichai said regulations for AI don’t have to start from scratch and can build on the success of the GDPR while still promoting innovation. Both Pichai and Sydell said the goal is always to balance protection against the need to support innovation.

In his editorial, Pichai said any legislation governing AI had to consider “safety, explainability, fairness and accountability” when regulating how tech companies use their systems and where they deploy them.

Juan José López Murphy, AI and Big Data Tech Director at Globant, said the sea change against AI emerged as more reports came out about facial recognition’s wild inaccuracy and terrifying deepfakes. His company has created its own AI manifesto and ethical guidelines for how it uses AI.

“The whole community of AI researchers think that there’s large potential for misuse and for social problems. The attitude has been changing over the last few years from ‘This is just a tool, so it’s not our problem’ toward ‘We need to be careful.’ These calls for AI-specific regulation are very healthy,” Murphy said.

“There has been enough evidence that we need to take a serious look at the impact of AI and how we use it and the reasons why we use it. People are agreeing that it’s necessary to think about guidelines we should follow.”

He cited dozens of cases where AI systems inadvertently discriminated against a group, often through no fault of the people who created the system. One specific case he mentioned that made news in October involved an unnamed US hospital that was found to have used an AI algorithm that was systematically discriminating against African American patients. The program essentially pushed white patients to the top of the list for care programs and screened out other races.
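As a rough illustration of how that kind of skew can be surfaced, the hypothetical audit below compares care-program referral rates across patient groups. The records, group labels, and threshold are invented for the example and are not drawn from the hospital case Murphy described.

```python
# Hypothetical audit sketch: measure how often an algorithm's risk score
# pushes each patient group above the care-program referral threshold.
# All records, group labels, and the threshold are invented for illustration.
from collections import defaultdict

REFERRAL_THRESHOLD = 0.7  # assumed cutoff for entry into the care program

patients = [
    {"group": "A", "risk_score": 0.82},
    {"group": "A", "risk_score": 0.75},
    {"group": "A", "risk_score": 0.64},
    {"group": "B", "risk_score": 0.55},
    {"group": "B", "risk_score": 0.68},
    {"group": "B", "risk_score": 0.71},
]

referred, totals = defaultdict(int), defaultdict(int)
for patient in patients:
    totals[patient["group"]] += 1
    if patient["risk_score"] >= REFERRAL_THRESHOLD:
        referred[patient["group"]] += 1

for group in sorted(totals):
    print(f"Group {group}: {referred[group] / totals[group]:.0%} referred")
# A large gap between groups with similar underlying health needs is the
# kind of signal that prompted scrutiny of the hospital's algorithm.
```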

In spite of these cases, Murphy said outright bans would not be effective because these programs are already in use. But there was a need, he said, to protect people from the biased data sets behind AI systems, and regulators on both sides of the Atlantic had to take a thorough look at what could be done.

“In the EU, the main concern is about the individual and in the US it’s about the business. They’re coming from two opposite ends. The EU wants to protect the privacy of the individual, their safety and how it might be applied to them. In the US they’re trying to foster innovation,” Murphy added. 

US mulls over legislation

Right now, the United States, the EU, and Australia are all mulling legislation, each taking a different approach.

Officials in the US have repeatedly pushed the EU to avoid bans or onerous rules. Earlier this month, the White House released a memo cautioning states and federal legislators against any regulations that would be “inconsistent, burdensome, and duplicative.”

“Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth. Where permitted by law, when deciding whether and how to regulate in an area that may affect AI applications, agencies should assess the effect of the potential regulation on AI innovation and growth,” wrote Acting Director of the Office of Management and Budget Russell Vought.

Other government bodies in the US have urged the EU to steer clear of anything too heavy-handed regarding how AI is used or deployed. 

Pichai, on the other hand, said regulations could provide tech companies with broad guidance as well as industry-specific rules governing things like self-driving vehicles and facial recognition. Throughout his editorial, the Google CEO said he wanted the company to work with regulators to find a happy medium that both protects the public and spreads the benefits of AI.

Sydell said it was generally harder for legislators to keep up with evolving technology, but that it was incumbent on people working in the field to use AI responsibly and avoid problems that would scare or harm the public.

“The rules, guidelines and laws are all slower to be created than the technology itself. So they have some catching up to do. Thoughtful people working on AI want to help that happen and help it be used in the correct manner,” Sydell added. “Otherwise, there will be more blowback to it and society won’t benefit from the good part of it, from the things that AI can do for us that help us live and lead better lives.”

Image: Facial Recognition System concept. (Getty Images/iStockphoto)

Source: TechRepublic