Press "Enter" to skip to content

Dreamforce 2019: Salesforce wants ethics to be baked into its business

Salesforce is integrating ethics and values into its use of AI, its day-to-day decision making, and the way it connects with customers.

At Dreamforce 2019 in San Francisco, TechRepublic's Bill Detwiler spoke with Salesforce Chief Ethical and Humane Use Officer Paula Goldman and Salesforce Architect of Ethical AI Kathy Baxter about the methods Salesforce is using to build values and ethics into technology use and everyday business practices. The following is an edited transcript of the interview.

Bill Detwiler: A big part of what companies are trying to do today to attract top talent and engage with customers, and what those groups expect, is incorporating ethics and values into their business practices on a daily basis. I'm delighted to be here with two individuals to talk about how Salesforce is doing that, and I'll let them introduce themselves.

Paula Goldman: I’m Paula Goldman and I am the chief ethical and humane use officer at Salesforce. This is, we believe, a first-of-its-kind position, and it is really about looking across all of our technology products and making sure that we are baking ethics into the way that they’re designed and the way that they live in the world. 

We have a number of really exciting processes that we've developed, recognizing that we're at a critical time in the tech industry, that the technology industry is increasingly under fire, and that people understand the impact of technology on everyday lives. So we work across our tech and product teams to integrate ethics into day-to-day decision making.

We also have an Ethical Use Advisory Council, which includes employees, executives, and external experts. Issues get brought to that council about how our technology is used in the world, and we have a policy-making process to ensure that our technology is being used for the greatest possible benefit.

SEE: The ethical challenges of AI: A leader’s guide (free PDF) (TechRepublic)

Bill Detwiler: And Kathy, we were talking a little bit before the interview started about some of the technology, and some of the practices, that go into doing that, and you were giving me some examples. First introduce yourself to the viewers and then tell us a little bit about that technology.

Kathy Baxter: I am Kathy Baxter and I'm architect of ethical AI. There are a lot of different things that we are doing when we're thinking about baking in ethics. In addition to the Ethics Advisory Council, we have a data science review board, so as teams are working on models, they can come in and get feedback on those models, not only from a quality standpoint, but also from an ethical-considerations standpoint. We have specific features in our Einstein products that help empower our customers to identify: Are the training and evaluation data representative? Are the models biased? So it gives them the tools to help them use our Einstein AI responsibly. And I think one of the examples of how we've really baked ethical considerations into a specific product is our AI for Good product called SharkEye.

It's a really amazing project with the Benioff Ocean Initiative and UC San Diego for studying sharks. Great white sharks are coming to the North American coast in greater numbers and for longer periods of time, and we need to study them and find out what the impact of climate change is on them. How do we create tools to help humans and sharks share the ocean safely?

So we combined our Field Service Lightning product with our Einstein Vision product to track the sharks in real time and notify lifeguards when they need to close the beach. I think a lot of people might assume that these kinds of AI for Good projects don't have ethical risks, but they actually do. Any kind of technology can have unintended consequences.

We spent a lot of time thinking about what some of those might be. We wanted to protect the privacy of people out on the beach. We use drone technology to track the sharks, and recording doesn't begin until the drone is over the ocean. We've never trained Einstein Vision on humans, so it's only trained to track the great white sharks.

In addition, we ensure that only certified drone operators employed by the Benioff Ocean Initiative fly the drones, so we don't have to worry about a whole bunch of people coming out onto the beach with drones and interrupting the experience, or using the information to hunt or otherwise harm the sharks. A lot of considerations went into making sure that what we created would protect both the sharks and the public.

Bill Detwiler: Paula, that's so important when you consider those unintended consequences, because for a lot of technology companies the approach was: collect all the data we can, and then we'll figure out how to protect it, or what to do with it, later.

What that's done is bred a lot of animosity, or maybe distrust, at the customer level and at the individual level. How do you address that, Paula? From a corporate perspective, how do you go back to customers and say, 'We know you have these concerns, but here's how we're going to address them'?

Paula Goldman: I think the key question that you asked is, how do we think about unanticipated consequences? And that is the key question for the tech industry at this moment in time. It's a simple question that can have profound implications for the products that we design. I will say we've recently been experimenting with a number of methodologies with our tech and product teams on that. We've run some workshops called Consequence Scanning, where we sit down with a cross-functional product team and say, 'Okay, you're about to build this feature. What are the anticipated consequences and what are the unanticipated consequences?'

It sounds super simple, but you start to see the light bulbs go on, and people think, 'Oh, wow, I hadn't thought about that possible direction. Let's build this type of feature to help our products be used responsibly and to help our customers use them responsibly.' I think that's what you heard Kathy talking about. There are a number of features coming out within our AI products that help people understand why they're getting the predictions they're getting. How can they make sure that when they want to, for example, not use a particular field as a factor in decision making, they can easily make that choice? Those are the types of differentiators that I think start to set the Salesforce product apart, because we're asking ourselves these questions early in the process.

Bill Detwiler: So Kathy, from a technical perspective, you talked about the teams that you put together when you're designing a new product or a new service to help think through some of those unconscious biases or the unintended consequences that could come out of it. You were talking about that with Einstein and AI, and the decision you made not to train Einstein on people and just have it focus on sharks. What else can we do from a technical perspective to help solve that problem?

Kathy Baxter: We have a brilliant research scientist named Nazneen, whose expertise is in explainable AI and identifying bias in models. We know that trusted AI needs to be responsible, accountable, transparent, empowering, and inclusive. Explainability of what a model is doing touches on so many of those things, because you are able to see why the AI is making the recommendation that it's making. What are the factors that are being included? Who are the individuals being excluded and who's being included? Is someone getting an advantage over somebody else?

Being able to understand whether the AI is making inclusive decisions, because it's transparent, is what lets you hold it accountable. These things are all interrelated, and so that's where it's really important to think from the very beginning: from the conception of the feature, from the conception of the model, from the collection of the data.
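To make that idea concrete, here is a minimal sketch of one common explainability technique, permutation importance, which surfaces the factors a trained model actually relies on. The data set and model below are generic stand-ins, not Salesforce's Einstein implementation:

    # Minimal sketch of permutation importance: shuffle each input factor in
    # turn and measure how much a trained model's accuracy drops. A big drop
    # means the model leans heavily on that factor. The data set and model are
    # placeholders, not Einstein's explainability feature.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Score on held-out data with each feature shuffled 10 times.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

    # Print the five factors the model depends on most.
    ranked = sorted(zip(X.columns, result.importances_mean),
                    key=lambda pair: pair[1], reverse=True)
    for name, importance in ranked[:5]:
        print(f"{name}: {importance:.3f}")

The same kind of ranking, applied to a business model, is what lets a user ask whether a sensitive or unexpected factor is quietly driving predictions.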

If you got a chance to see the research keynote on Tuesday, there was a demonstration of our voice assistant, and we're really proud of it, because we put a tremendous amount of effort into getting representative training data across different genders, as well as English spoken with a dozen different accents: English with a German accent, with a French accent, and so on, so that it's as inclusive as possible and works for as many people as possible. These are the kinds of decisions, at every point along the way, that we need to make sure we're investing in.

Paula Goldman: I think equality is a core value for Salesforce, and I think it's the core question for tech ethics. At the end of the day, is technology making the world a more equal place or a less equal place? We ask ourselves that question every day. We ask ourselves: Who's at the table making these decisions? Whether it's the training data or a policy decision, how are we making sure that we're inclusive in that decision?

Bill Detwiler: And this is a little tougher question, and we saw Marc [Benioff] deal with a little bit of this at the keynote the other day. How do you as a company in general, not just Salesforce specifically, deal with the myriad ethical positions that different customers or different people in society actually have?

We all have different expectations, experiences, and opinions on things. As a complete sideline, I actually teach as an adjunct professor, and one of the classes I teach is ethics for criminal justice professionals. One of the things that I always talk about is building those ethical frameworks. My students are usually criminal justice and public safety majors, so for them it has real life-and-death consequences.

But you haven't seen that as much in the corporate world. So I'm really interested in how you address the fact that what one person considers ethical, another person does not. How do you address that within Salesforce, and how do you think companies should address it more broadly?

Paula Goldman: That's a really important question, and I will say we approach this with a spirit of humility. We don't pretend to have all the answers, but one thing that's very important is that we are very actively and intentionally listening. I'm trained as an anthropologist. I have a PhD in anthropology, and the core competence there is really being able to understand different worldviews and integrate them.

That's why we have the council, and that's why we are constantly talking to civil society members, activists, and government employees, creating as many channels as possible for people to feel comfortable expressing a concern or a view.

The other thing I will say is that it's really important that this is values-driven. We have a set of values as a company, and from those we derived a set of principles. Actually, we surveyed all of our employees and asked, 'What should be the most important ethical use principles?' And we incorporated their rank order into our decision-making process. So for companies it's really about broad listening and values-based decision making.

Bill Detwiler: Anything you want to add to that, Kathy?

Kathy Baxter: Yeah, whenever I engage with customers and they ask, 'If we wanted to create an ethical AI process, how do we begin?' we always say, 'Begin with your values.' I would be surprised to find a company or an organization that doesn't have some kind of mission statement, some kind of values that it was founded upon. That's the framework that you build on, and from there you can begin creating an outline or framework of priorities. What are the things we want to make sure we focus on, whether that's giving back to society or building something? Those are the things we want to focus our efforts on.

And what are the things that are either red lines, things we're not going to do, or just lower priority? By having that, you can really focus decision making, so that when a team comes up with a great idea like, 'Oh, let's do a feature that does X,' or a customer asks, 'Can you build a feature that does Y?' you have that framework to make the decision and say, 'Yes, this totally fits within our values. This is one of our priorities,' or, 'This isn't one of our priorities. We've decided we're not going to invest our resources in this area, so we're just not going to go into it.' It really helps to have that documented so that everybody is clear about it.

Bill Detwiler: Is that really the challenge? When you have a large organization with tens of thousands or hundreds of thousands of employees, how do you address the pushback that you may get internally: 'Hey, with this new feature, this new product, we think there's a really large market for us. We think this is a profit center. We think this is a great revenue opportunity.' And as Marc has said, we need a new form of capitalism to some degree. We need to take into consideration factors besides pure profit. How do you address that pushback within a private commercial enterprise?

Paula Goldman: I think it's a fair question, and I don't want to minimize the complexity of it, but what I will say is this is all about trust. At the end of the day, we really believe that the more we integrate ethics into our product development, the more it benefits the trust between us and our customers, and the trust between our customers and their customers. Everyone is worried about where technology is going, and if we can help people with appropriate guardrails, I think it actually becomes a win-win. And that's why I'm so optimistic about what we're doing.

Kathy Baxter: Because we're a platform, there's so much flexibility. With our tools, for example, we can raise flags. With Einstein Discovery, we allow you to mark, or what we call protect, certain fields that you don't want used in a model: age, race, gender. Then we find other fields in your data set that are highly correlated with those.

So we'll say, 'Oh, ZIP code is highly correlated with race. Do you want to protect that as well?' We're not going to do it for them; we give the customers the control. It's all about giving customers the tools to make educated decisions for themselves, so that they are using our tools in a way that matches their values as well.
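As an illustration of the kind of check Baxter describes, here is a minimal sketch of flagging fields that are highly correlated with a protected attribute. The file name, column names, and the 0.8 threshold are hypothetical assumptions, and this is the generic technique, not Einstein Discovery's actual implementation:

    # Minimal sketch: flag fields that could act as proxies for a protected
    # attribute by measuring how strongly each one correlates with it.
    # The file name, column names, and threshold are illustrative assumptions.
    import pandas as pd

    def flag_proxy_fields(df: pd.DataFrame, protected: str, threshold: float = 0.8) -> pd.Series:
        """Return columns whose absolute correlation with the protected field exceeds the threshold."""
        # Encode non-numeric columns as integer category codes so correlations can be computed.
        encoded = df.apply(lambda col: col.astype("category").cat.codes
                           if col.dtype == "object" else col)
        correlations = encoded.corr()[protected].drop(protected).abs()
        return correlations[correlations > threshold].sort_values(ascending=False)

    if __name__ == "__main__":
        data = pd.read_csv("applicants.csv")  # hypothetical data set
        for field, corr in flag_proxy_fields(data, protected="race").items():
            # Mirror the prompt described in the interview.
            print(f"'{field}' is highly correlated with 'race' (r = {corr:.2f}). "
                  "Do you want to protect it as well?")

A production system would use association measures suited to each column type, but the idea is the same: surface likely proxies and leave the final decision to the customer.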


[Image: Kathy Baxter, Paula Goldman, and Bill Detwiler. Source: TechRepublic]