
AI ethics governance: 7 key factors to consider

Tech companies that create AI systems will encounter ethical challenges. When you form an AI ethics committee, think about these seven critical factors.

Leaders at companies and government organizations around the world recognize that artificial intelligence (AI) raises serious ethical concerns. In June 2018, Google CEO Sundar Pichai posted seven objectives for AI applications, including the statements that “we believe AI should avoid creating or reinforcing unfair bias” and “we believe AI should be accountable to people.” The company also published a set of Responsible AI Practices at the time.

Several companies, including Microsoft, Amazon, Intel, and Apple, have joined the Partnership on AI. The MIT Media Lab and the Harvard Berkman Klein Center for Internet & Society launched the Ethics and Governance of Artificial Intelligence Initiative in 2017. Facebook backed a new Institute for Ethics in Artificial Intelligence at the Technical University of Munich in early 2019.


Companies sometimes struggle with self-governance. Google, for example, had disbanded at least two AI ethics councils as of early 2019: the Advanced Technology External Advisory Council, shut down after public criticism, and a separate review panel associated with DeepMind.

A private, internal review of AI ethics, no matter how rigorous the analysis, is unlikely to build as much community trust as public, external oversight. Any company that seeks to form an AI ethics governance council should follow these steps to help ensure credible oversight.

1. Invite ethics experts who reflect the diversity of the world

The goal should be to include people who represent the diversity of the world that your company wishes to create, support, and serve. This might mean creating a larger committee. For example, the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) included 52 people representing “academia, civil society, as well as industry.” It also might mean a systematic rotation of members or a well-considered sub-committee structure.

2. Include people who might be negatively impacted by AI

Including people who could be negatively impacted by AI systems helps the group make decisions with people, instead of for people. Examples of people who are potentially vulnerable to AI systems include: people whose faces AI systems don’t consistently recognize, people whose voices current speech recognition systems don’t understand, and people who belong to any group that might face oppression, persecution, or other harm as a result of AI systems.

3. Get board-level involvement

A formal board committee has more clout than any advisory board, and an AI oversight system with board-member involvement signals attention beyond mere compliance. One or more board members could be designated to attend committee meetings to listen, then inform subsequent board discussions and decisions.

4. Recruit an employee representative

Much like board involvement, one or more specific employees could be designated to attend, listen, and represent employees’ perspectives, expertise, and concerns.

5. Select an external leader

An AI governance committee with a systematically selected external leader signals strength and independence. A committee with no formally recognized leader increases the likelihood of inaction or indecision.

6. Schedule enough time to meet and deliberate

The time and attention needed for a new group of people to learn enough about each other to listen, debate, and then create effective policies will vary. Quarterly meetings are likely insufficient for in-depth, informed debate of difficult issues. I’ve seen board committees meet daily, weekly, monthly, or every other month. A variable schedule that includes significant time together in the first year or so, with the option to meet at a different frequency in later years, might make sense.

7. Commit to transparency

Let the committee discuss and debate in private, but then publicly share the recommended actions, along with any details the group wishes to disclose. Require a formal and public response from the company to any committee recommendations.

Your thoughts?

Effective AI ethics committee members must understand both technology (i.e., what is currently or potentially possible) and human morality (i.e., what is good, desired, right, or proper). That’s a sufficient challenge in and of itself. When any company announces an AI ethics effort, the presence or absence of each of the above items will signal the strength of the company’s commitment to effective AI governance.

If your company develops AI systems, how does your organization address ethical concerns? Do you rely on staff, on internal review boards, or external oversight? Let me know in the comments below or on Twitter (@awolber).

[Illustration: a balance with "AI" on both sides, the left side labeled "Wrong" and the right labeled "Right." Image: TechRepublic/Andy Wolber]
