
Ethical questions in AI use cannot be solved by STEM grads alone

Humanity is facing a future of AI-guided decision making. Multidisciplinary councils should give everyone a seat at the table when developing and deploying AI.

Practical adoption of artificial intelligence (AI) faces a variety of roadblocks. Splashy, high-profile deployments have not been received well: Microsoft’s “Tay” bot on Twitter parroted anti-Semitic vitriol just 16 hours after launch, Amazon’s AI-powered hiring process displayed bias against women, and the company marketed unreliable facial recognition technology to municipal law enforcement. AI often reflects the biases of its designers, including and especially their unconscious biases, which makes Facebook’s attempt to build an AI with an “ethical compass” a concerning prospect given the multitude of other problems the social network has experienced.

This is a problem that necessarily requires diversity of thought, according to Northeastern University’s Ethics Institute and professional services firm Accenture, which jointly published a guide to building data and AI ethics committees. Such committees cannot, by definition, be assembled by pooling people of similar backgrounds to debate the merits of AI design. The authors advocate for the inclusion of technical, ethical, legal, and subject matter experts, as well as citizen participants to “raise potential or actual community concerns and views, and that can take a civic or community-oriented perspective.”

SEE: Artificial intelligence: A business leader’s guide (free PDF) (TechRepublic)

“[It’s] a field that’s analogous in some ways to bioethics, that can provide resources to organizations to help identify and address preemptively these issues that we’re seeing,” said Ron Sandler, professor of philosophy and director of the Ethics Institute at Northeastern. “That’s going to be a highly interdisciplinary field that integrates technical expertise with business, organizational, humanities, social science, and legal expertise, just like it has in other robust fields of ethics.”

The importance of legal and social guidance for the creation and adoption of AI is paramount, as legal codes are largely influenced by social customs. “One of the books that has helped lull [me] to sleep at night a number of times is Custom as a Source of Law. The idea is that, if we can establish that something is a custom for a community… we can actually draw, over time, a direct line from behaviors and philosophies within a community to what our laws become,” said Steven Tiell, senior principal for responsible innovation at Accenture Labs. “We can see where, in the future, this becomes a standard practice that might even become a best practice. If it becomes a best practice, then we can say that might be a duty.”

Adherence to social norms demands concern and consideration in any deployment, though this concern should be heightened when an AI is developed in one country and exported to another. “When we design an algorithmic system, it’s going to have certain kinds of values built into it because [designers] make certain design choices, and those design choices reflect values,” Sandler said. “Do those same value assumptions work in those other contexts? Whether it’s a different culture or a different field, it could be moving from financial services to social services. Part of the idea behind ethics councils [is] being able to have the capacity within an organization, to have people who are responsible for not only identifying, but [addressing] those types of questions.”

The classic trolley problem is an overused, though useful, means of framing the issue, according to Tiell, who points to Mercedes-Benz’s plans for autonomous car technology, which will prioritize protecting drivers over pedestrians. “The philosophical underpinnings of the German state have a heavier weight or burden or care for the individual. If you cross the border into Belgium, for instance, their care is really about society as a whole. You might get to a point where, when you cross a border in an autonomous car, it makes a different decision because the customs on one side say ‘save the driver’ and the customs on the other side say ‘protect the most number of people.’ If you’re developing technology based on a mindset in one region or country and that’s being exported elsewhere, you really do need some kind of control or feedback loops so that it’s appropriately applied within that country.”

For more on AI, check out “Pharma researchers test AI for predicting vision loss” and “Why artificial intelligence leads to job growth” at TechRepublic.

