Press "Enter" to skip to content

AI and ethics: One-third of executives are not aware of potential AI bias

The majority of consumers expect companies to be accountable for their AI systems, yet about half of companies do not have a dedicated leader overseeing ethical AI implementation.

Image: iStock/metamorworks

In the age of digital transformation, more companies are tapping artificial intelligence (AI) systems to enhance workflows, streamline operations, and more. In recent months, however, these technologies have come under increased scrutiny due to the biases that can be baked into their underlying data and models. On Wednesday, Capgemini, a technology services consulting company, released a global report assessing consumer and executive sentiment on AI and its ethical implementation.

“Given its potential, the ethical use of AI should of course ensure no harm to humans, and full human responsibility and accountability for when things go wrong. But beyond that there is a very real opportunity for a proactive pursuit of environmental good and social welfare,” said Anne-Laure Thieullent, AI and analytics group offer leader at Capgemini, in a press release.

SEE: TechRepublic Premium editorial calendar: IT policies, checklists, toolkits, and research for download (TechRepublic Premium)

The Capgemini Research Institute report, titled "AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust," is based on a global survey, conducted in April and May of this year, of 2,900 consumers in six countries and 884 executives in 10 countries.

Overall, 65% of executives said "they were aware of the issue of discriminatory bias" with these systems, and a number of respondents said their company had been negatively impacted by its AI systems. For example, six in 10 organizations had "attracted legal scrutiny," and nearly one-quarter (22%) had experienced consumer backlash within the last three years due to "decisions reached by AI systems."

Despite the backlash, legal scrutiny, and awareness of potential bias, not all companies have an employee responsible for the ethical implementation of AI systems. Only about half (53%) of respondents said they had a dedicated leader overseeing AI ethics. Similarly, about half of organizations have an "ombudsman" or a confidential hotline through which employees and customers can "raise ethical issues with AI systems," per Capgemini.


The report also details high consumer expectations when it comes to AI and organizational accountability. Nearly seven in 10 consumers expect a company's AI models to be "fair and free of prejudice and bias against me or any other person or group." Additionally, 67% of consumers said they expect a company to "take ownership of their AI algorithms" when these systems "go wrong."

A portion of the report juxtaposes the responses of IT and data professionals with those of marketing and sales executives. While four in 10 IT and data professionals said they had "detailed knowledge of how and why our systems produce the output that they do," only about one-quarter (27%) of marketing and sales executives said the same. Conversely, about half (51%) of marketing and sales executives said they realized their "AI systems sometimes make decisions which are incompatible with our corporate values," compared with only 40% of IT and data professionals.

The report provides a series of tips companies can follow to "build an ethically robust AI system." These include outlining an AI system's purpose and potential impact, embedding principles of inclusivity and diversity "proactively throughout the lifecycle of AI systems," using tools to increase transparency, and providing human oversight of AI systems, among other steps.
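The report itself stays at the level of high-level guidance, but to make the "tools to increase transparency" point concrete: a common first step when auditing an AI system for discriminatory bias is to compare its decision rates across demographic groups. The minimal Python sketch below illustrates two standard fairness metrics, the demographic parity difference and the disparate impact ratio; it is not taken from the Capgemini report, and the sample data and 0.8 threshold are illustrative assumptions.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction (selection) rate for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred  # pred is 0 or 1
    return {g: positives[g] / totals[g] for g in totals}

def parity_report(predictions, groups):
    """Demographic parity difference and disparate impact ratio."""
    rates = selection_rates(predictions, groups)
    high, low = max(rates.values()), min(rates.values())
    return {
        "selection_rates": rates,
        "parity_difference": high - low,                  # 0.0 means equal selection rates
        "disparate_impact": low / high if high else 1.0,  # the "80% rule" flags values < 0.8
    }

# Illustrative data only: model decisions (1 = approved) and a protected attribute.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

report = parity_report(preds, groups)
print(report)
if report["disparate_impact"] < 0.8:  # common regulatory heuristic, not a Capgemini threshold
    print("Potential disparate impact -- review the model and its training data.")
```

A check like this only surfaces a disparity; deciding whether it reflects unacceptable bias still requires the human oversight and documented purpose the report recommends.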

SEE: Natural language processing: A cheat sheet (TechRepublic)

“AI is a transformational technology with the power to bring about far-reaching developments across the business, as well as society and the environment. Instead of fearing the impacts of AI on humans and society, it is absolutely possible to direct AI towards actively fighting bias against minorities, even correcting human bias existing in our societies today,” Thieullent said.

“This means governmental and non-governmental organizations that possess the AI capabilities, wealth of data, and a purpose to work for the welfare of society and environment must take greater responsibility in tackling these issues to benefit societies now and in the future, all while respecting transparency and their own accountability in the process,” Thieullent continued.


Source: TechRepublic