Press "Enter" to skip to content

IDC: Ethical AI is a team sport that requires smart and strong referees

New research finds that the lack of responsible artificial intelligence guidelines is one of the top three barriers to wider adoption.

IDC analysts recommend that companies develop comprehensive guidelines for ethical artificial intelligence and an ongoing review process.

Image: IDC

Companies using artificial intelligence should start thinking about ethical AI as make-or-break, not a nice-to-have, according to IDC research. In a webinar on Thursday, March 4, analysts explained why the lack of guidelines for AI is holding back implementation, as well as how companies can address this problem. Analysts Bjoern Stengel, Ritu Jyoti and Jennifer Hamel shared new research at the session, “Increasing Trust and Accountability Through Responsible AI and Digital Ethics.”

Hamel, a research manager of analytics and intelligent automation services, said that ethical AI is a team sport. This means AI teams should include data scientists, governance experts and service providers. 

“There’s a lot of drive to roll out AI at scale, but at the same time those solutions need to be built responsibly at the beginning; otherwise, problems will expand across the organization,” Hamel said.

SEE: Natural language processing: A cheat sheet (TechRepublic)

Stengel, a senior research analyst in business consulting services and sustainability/environmental, social and governance services, said that one thing companies can do is to use an ESG lens when thinking about AI projects. This helps define the comprehensive set of stakeholders that can be affected by AI.

“Customer experience is a major concern around the ethical use of AI and the brand aspect is definitely an important one, too,” Stengel said.

Employees are another group to consider from a social impact perspective, especially when it comes to hiring practices, he said. 

“There are many organizations that still have this impression that spending more is better and will get us ahead of the competition without thinking about what’s the business case and what are the risks,” Stengel said.

Companies that use an ESG approach to AI will find it easier to measure progress and benchmark performance, he said. 

“If companies manage these topics properly, there’s enough research that shows companies can benefit from integrating ESG into their business, including lower risk profiles, greater financial and operational performance and better employee experience,” he said.

Stengel said that one contradiction he found in recent survey results was that the maturity levels for AI are low, but at the same time companies feel confident about their ability to deploy AI in an ethical manner.

“I expect concerns to grow over time as users start to develop a more mature understanding of the risks associated with AI,” he said.

Lack of responsible AI guidelines is a barrier to adoption

In a recent survey of companies buying AI platforms, IDC analysts found that a lack of responsible AI guidelines is one of the top three barriers to deploying the technology into production:

  1. Cost: 55%
  2. Lack of machine learning ops: 52%
  3. Lack of responsible AI: 49%

Jyoti sees a lot of concern about explainability and due diligence around AI.  

“A lot of organizations are terrified of the negative consequences if they don’t do the right due diligence with AI,” Jyoti said. 

Jyoti described these five foundational elements of responsible AI:

  • Fairness – Algorithms are not neutral and they can reflect societal biases.
  • Explainability – This should be a priority for everyone from the data scientists developing the algorithms to the business analysts reviewing the results.
  • Robustness – AI algorithms should incorporate societal norms and be tested for safety, security and privacy against multiple use cases.
  • Lineage – AI teams must document the development, deployment and maintenance of algorithms so they can be audited throughout the lifecycle.
  • Transparency – AI teams must describe all the ingredients that went into an algorithm as well as the process of building and deploying it.

Jyoti’s other recommendation for companies developing AI products is to create a comprehensive corporate governance plan that covers all phases of the product lifecycle.

She said that many organizations think AI should be siloed in one department, but that is not the case. “Everyone who is involved in the whole lifecycle needs to be involved,” she said.

Also, governance is not a one-off activity but a repeatable process that operates through the entire lifecycle, she said.

Jyoti recommended that companies develop corporate governance structures for AI, create a thought leadership plan relevant to the industry, and build user personas. These activities will create a complete picture of the potential impacts of AI as well as the stakeholders who could be affected.


Source: TechRepublic