Press "Enter" to skip to content

How AI can be used to remove bias in business

Salesforce’s Architect for Ethical AI Practices sat down with Dan Patterson to discuss how artificial intelligence can be used to enhance business processes and reduce bias.

CNET and CBS News Senior Producer Dan Patterson spoke with Salesforce’s Architect for Ethical AI Practices Kathy Baxter about how businesses can use artificial intelligence (AI) to identify bias and better understand their own processes. The following is an edited transcript of the interview.

Kathy Baxter: Some of the things I foresee coming are businesses being able to use AI to identify bias in their processes. If they take a look at their training data, they can see a pattern: for example, that African-Americans are much more likely to be denied loans than Caucasians are. Before you go and start editing the training data and editing your model to try to strip away that imbalance, that lack of fairness, first take a step back and look at the business processes. Why is this happening? Is it perhaps isolated to a particular region, office, or set of sales or loan managers? How can you fix that?
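To make the idea concrete, here is a minimal sketch, not from the interview, of how a team might look for that kind of pattern in its own loan-decision data before changing any model. The column names ("race", "approved") and the tiny example table are assumptions for illustration only.

```python
# Illustrative sketch (assumption, not from the interview): checking a
# loan-decision dataset for group-level disparities before touching the model.
import pandas as pd

# Hypothetical decision records; 1 = approved, 0 = denied.
loans = pd.DataFrame({
    "race":     ["African-American", "African-American", "Caucasian", "Caucasian",
                 "African-American", "Caucasian", "Caucasian", "African-American"],
    "approved": [0, 1, 1, 1, 0, 1, 0, 0],
})

# Approval rate per group.
rates = loans.groupby("race")["approved"].mean()
print(rates)

# Disparate impact ratio: the lowest group approval rate divided by the
# highest. A common rule of thumb flags values below 0.8 as a prompt to
# examine the underlying business process, not just the training data.
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")
```

In practice the same summary could be broken down further by region, office, or loan manager, which is exactly the kind of "where is this happening?" question Baxter raises before anyone edits the data or the model.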

SEE: Managing AI and ML in the enterprise 2019: Tech leaders expect more difficulty than previous IT projects (Tech Pro Research)

One of the things I foresee is businesses using AI to really better understand their own business processes and make more fair and just decisions with them. It can also be used to address the inequities in our society; we know that they exist. Take government policies: if the government is using AI systems, it tends to have a lot of data about the people who use those services the most, and that's minorities and the poor. When you have more data about one group and less data about another, such as the most affluent in our society, who are less likely to interact with those government services, you're going to have an imbalance, and you're going to make unfair decisions.
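A simple way to surface that kind of imbalance is to look at how much data each group contributes before trusting the model's decisions about them. The sketch below is an assumption for illustration; the "group" column, the counts, and the 10% threshold are all hypothetical.

```python
# Illustrative sketch (assumption, not from the interview): flagging groups
# that are underrepresented in the data behind a decision-making system.
import pandas as pd

# Hypothetical records: far more data about heavy users of a service.
records = pd.DataFrame({
    "group": ["low-income"] * 800 + ["affluent"] * 50,
})

# Share of the dataset contributed by each group.
counts = records["group"].value_counts()
share = counts / counts.sum()
print(share)

# Groups with a very small share of the data; decisions about them rest on
# thin evidence even if overall model accuracy looks fine.
underrepresented = share[share < 0.10]
print("Underrepresented groups:", list(underrepresented.index))
```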

So, how do we think about our government policies as a whole when we deal with people? How do we evaluate what makes them most likely to recidivate, or less likely to be approved for bail or parole? There are historical factors that are unfairly impacting those decisions. Again, it can't just be about fixing the data or fixing the model; it needs to be about fixing our processes.

A lot of time is spent talking about the singularity and how AI is going to come and kill us. That really distracts from what we need to be talking about today, and that is a lack of fairness. We don't have enough diversity in the people who are creating these systems. We don't have a good enough understanding of how to measure the impact of AI and its decisions on society.

How do we evaluate and know whether we are doing better today than yesterday? It's very hard to measure a negative: how do you know that, because of the ethical practices you've put in place, you have avoided some harm? These are very difficult issues to tackle, but they are things we have to address today. Being concerned about the singularity and the idea that robots are coming to kill us really does distract from the conversation we should be having.


Image: Kathy Baxter. Source: TechRepublic