Press "Enter" to skip to content

AI regulation is critical, say 54% of tech executives

The risk posed by the abuse of artificial intelligence in facial recognition and the creation of “deepfakes” could erode public trust, according to an Edelman survey.

In the midst of a political climate already fraught with distrust, the potential for artificial intelligence (AI) to be weaponized is giving pause to tech executives, over half of whom state that regulation of AI is “critical for its safe development,” according to the 2019 Edelman Artificial Intelligence survey, conducted in coordination with the World Economic Forum (WEF).

The survey found that 54% of tech executives and 60% of the general population believe regulation is necessary. The report cites cases in which AI is used to make consequential judgments about people’s lives: “Loan analyses including credit card applications are now often performed using AI algorithms. Yet, how can an algorithm be held accountable if a customer feels that a decision about their credit card application was wrong? Many argue that people have a right to know how decisions that affect them are being made.” Likewise, the report cites the need for transparency to ensure that AI systems are not developed with inherent bias.
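To make the transparency argument concrete, consider a deliberately simplified, hypothetical credit-scoring model. The features, weights, and cutoff in the sketch below are invented for illustration, not drawn from any real lender. Because every factor’s contribution to the score is visible, a declined applicant can be told exactly which inputs drove the decision, which is precisely what opaque AI systems make difficult.

    # Illustrative only: a toy, interpretable credit-scoring model with
    # invented features and weights, showing how per-feature contributions
    # can be reported back to an applicant.
    FEATURE_WEIGHTS = {
        "income_to_debt_ratio": 2.0,     # hypothetical weight
        "years_of_credit_history": 0.5,  # hypothetical weight
        "recent_missed_payments": -3.0,  # hypothetical weight
    }
    APPROVAL_THRESHOLD = 4.0  # hypothetical cutoff

    def score_application(applicant):
        """Return (approved, per-feature contributions) so the decision
        can be explained rather than presented as a black box."""
        contributions = {
            name: weight * applicant[name]
            for name, weight in FEATURE_WEIGHTS.items()
        }
        return sum(contributions.values()) >= APPROVAL_THRESHOLD, contributions

    approved, reasons = score_application({
        "income_to_debt_ratio": 1.8,
        "years_of_credit_history": 6,
        "recent_missed_payments": 1,
    })
    print("approved" if approved else "declined")  # declined: score 3.6 < 4.0
    for feature, value in sorted(reasons.items(), key=lambda kv: kv[1]):
        print(f"  {feature}: {value:+.1f}")

Real lending models are far more complex than this, which is why the report’s call for accountability turns on whether comparable explanations can be produced for them.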

SEE: Malicious AI: A guide for IT leaders (Tech Pro Research)

The use of AI in law enforcement is a source of controversy. Amazon drew criticism last year for tailoring its AWS Rekognition service to law enforcement and marketing it to police agencies, going so far as to tout its compatibility with police body camera systems; mentions of that capability were scrubbed from the AWS website after complaints from the ACLU. In its own test of the service, the ACLU found that Rekognition incorrectly matched 28 members of Congress with criminal mugshots. Following that controversy, Microsoft called for regulation of AI-powered facial recognition to prevent abuse, and Google published a set of AI ethics principles.
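For context on how such a test works mechanically, the sketch below shows the kind of one-to-one comparison involved, using the AWS SDK for Python (boto3) and its compare_faces call. The image file names are placeholders, and valid AWS credentials are assumed. The SimilarityThreshold parameter matters because Rekognition silently drops candidate matches that score below it; the ACLU’s test reportedly used the service’s default of 80%, a setting Amazon has said is too low for law enforcement use.

    # A minimal sketch of a one-to-one face comparison with Amazon
    # Rekognition via boto3. Assumes configured AWS credentials;
    # "probe_photo.jpg" and "mugshot.jpg" are placeholder file names.
    import boto3

    rekognition = boto3.client("rekognition")

    with open("probe_photo.jpg", "rb") as probe, open("mugshot.jpg", "rb") as mugshot:
        response = rekognition.compare_faces(
            SourceImage={"Bytes": probe.read()},
            TargetImage={"Bytes": mugshot.read()},
            # The service default; raising it trades recall for fewer false matches.
            SimilarityThreshold=80,
        )

    for match in response["FaceMatches"]:
        print(f"Possible match at {match['Similarity']:.1f}% similarity")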

Deepfakes, video or audio recordings altered to depict events that never occurred, are also causing consternation. Among tech executives, 45% indicated that “deepfakes could mean that no information is believable and is highly corrosive to public trust,” and 33% indicated that the weaponization of deepfakes “could lead to an information war that in turn might lead to a shooting war.” The corresponding figures for the general public were 51% and 30%, respectively.

For more on the dangers posed by the abuse of AI, check out TechRepublic’s coverage of 3 ways state actors target businesses in cyber warfare, and how to protect yourself, as well as Facial recognition’s failings: Coping with uncertainty in the age of machine learning.

Image: Jirsak, Getty Images/iStockphoto

Source: TechRepublic