Press "Enter" to skip to content

Gartner IT Symposium/Xpo 2019: Gartner Fellow discusses ethics in artificial intelligence

Gartner Fellow Frank Buytendijk said it’s important to get ahead of AI projects with ethics training

TechRepublic’s Associate Managing Editor Teena Maddox talked with Frank Buytendijk, distinguished VP and Gartner Fellow in Gartner’s Data Analytics Group, at the Gartner IT Symposium/Xpo 2019 about ethics in tech and artificial intelligence. The following is an edited transcript of the conversation. 


Frank Buytendijk: Whenever a new technology develops so fast that we, as people, as organizations, and even as a society, cannot put our arms around it and really understand what it does, the question of ethics comes up. Is this good? What is happening? Does this need some kind of parental oversight, or can it develop in an autonomous way? And AI is certainly the most impactful technology that raises these questions. So how do we make sure that AI behaves in the right ways, and how do we keep it from behaving in the wrong ways?

SEE: The ethical challenges of AI: A leader’s guide (free PDF) (TechRepublic)

There’s a big debate in AI, and the big future vision is that we would want ethical rules built into the AI itself. This idea was pioneered by the science fiction writer Isaac Asimov when he introduced his Three Laws of Robotics. The irony is that he used those laws as a literary device and wrote really cool stories about how they never worked, and that is still the case today. The idea that you can build ethical rules into AI remains a future vision.

The state of play in the market today is that developers of AI need to take responsibility for the behavior of AI, even when the algorithms, once running in production, learn in unanticipated and unintended ways. That developer responsibility must be maintained even after systems go live.

There are tons of ways you can use AI ethically and also unethically. One example that is typically cited is using attributes of people that shouldn’t be used, for instance, when granting somebody a mortgage, granting access to something, or making other decisions. Racial profiling is typically mentioned as an example. So you need to be mindful of which attributes are being used to make decisions and of how the algorithms learn. Another form of abuse of AI is autonomous killer drones: would we allow algorithms to decide who gets bombed by a drone and who does not? Most people seem to agree that autonomous killer drones are not a very good idea.
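To make that concrete, here is a minimal, hypothetical sketch of what being mindful of attributes can look like in code: an explicit allowlist of features for a credit-decision model, with protected attributes excluded before training. The column names, the PROTECTED set, and the pandas usage are all illustrative assumptions, not anything described in the interview.

```python
# Hypothetical sketch: enforce an explicit feature allowlist so
# protected attributes never reach a credit-decision model.
# Column names are illustrative, not from the interview.
import pandas as pd

PROTECTED = {"race", "gender", "religion", "zip_code"}  # zip_code can proxy for race
ALLOWED = ["income", "debt_ratio", "payment_history_months"]

applications = pd.DataFrame({
    "income": [52_000, 71_000],
    "debt_ratio": [0.31, 0.18],
    "payment_history_months": [48, 120],
    "race": ["A", "B"],        # may be collected, but must not drive the decision
    "gender": ["F", "M"],
    "zip_code": ["10001", "94103"],
})

def training_features(df: pd.DataFrame) -> pd.DataFrame:
    """Return only allowlisted columns, failing loudly on any overlap."""
    leaked = set(ALLOWED) & PROTECTED
    if leaked:
        raise ValueError(f"Protected attributes in allowlist: {leaked}")
    return df[ALLOWED]

X = training_features(applications)
print(X.columns.tolist())  # ['income', 'debt_ratio', 'payment_history_months']
```

An allowlist, rather than a blocklist, is the safer default here: a new column added to the data later stays out of the model until someone consciously decides it belongs there.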

The most important thing a developer can do to create ethical AI is to not think of it as technology, but as an exercise in self-reflection. Developers have certain biases and certain characteristics themselves. For instance, developers are keen to search for the optimal solution to a problem; it is built into their brains. But ethics is a very pluralistic thing. Different people have different ideas, and there is no single optimal answer to what is good and bad. First and foremost, developers should be aware of their own ethical biases about what they think is good and bad, and create an environment of diversity where they test those assumptions and test their results. The developer brain isn’t the only brain, or type of brain, out there, to say the least.
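As a hypothetical illustration of what testing those results might look like, rather than anything Buytendijk prescribes, the following sketch compares a model’s approval rates across groups, a simple demographic-parity check. The group labels and decision data are made up for the example.

```python
# Hypothetical sketch: a simple demographic-parity check that compares
# approval rates across groups. The data here is made up; in practice
# the pairs would come from a model's decisions on a holdout set.
from collections import defaultdict

decisions = [  # (group, approved) pairs
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1

rates = {g: approvals[g] / totals[g] for g in totals}
print(rates)  # e.g. {'group_a': 0.666..., 'group_b': 0.333...}

# A large gap doesn't prove unfairness by itself, but it flags an
# assumption worth examining with a team more diverse than one brain.
gap = max(rates.values()) - min(rates.values())
print(f"approval-rate gap: {gap:.2f}")
```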

So, AI and ethics is really a story of hope. For the very first time, the discussion of ethics is taking place before widespread implementation, unlike previous rounds, where the ethical considerations came after the fact. For instance, with big data and privacy, the discussion took place when it was too little, too late. The impact of smartphones on human communications and relationships kind of happened to us; there never was that discussion up front. So with AI, the discussion is taking place up front. And what we see, and this is a really good thing, is that organizations that take AI seriously typically also invest in ethics training. Ethics is becoming part of AI training in universities, part of data science training in universities, and the companies that bet on AI are also taking care of ethics training. This is really becoming a best practice.


Source: TechRepublic