When we were asked to help promote the “Resident Evil” film franchise for Sony Pictures a couple of years ago, we came up with the idea of turning the films’ fictional artificial intelligence character, the Red Queen, into a real AI character with which fans could interact. It was a fun concept that proved quite successful, but it also created some serious challenges and reminded us how hard it is to build truly meaningful AI.
Creating AI assistants, including smart speakers like Alexa and smartphone assistants like Siri, is challenging. These devices offer helpful utility and are good for amusement, but they are created and trained by humans, which can introduce biases and a power dynamic that should be addressed.
The Red Queen AI
Engagement was what we were aiming for when we started on the Red Queen AI. We began by collecting all the scripts that had been created by the writers of the films in the series. We trained the AI to learn the character using natural language processing techniques and then generated new dialogue written entirely by the AI to see how it would work.
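The article doesn’t specify which natural language processing techniques the team used, so as a toy illustration only, here is a minimal sketch of the general idea of learning a character’s voice from script lines and generating new dialogue. It uses a simple word-bigram Markov chain as a stand-in for a real language model, and the handful of sample lines is a hypothetical stand-in for the full film scripts:

```python
import random
from collections import defaultdict

# Hypothetical miniature "script corpus"; the real project trained on the
# full film scripts, which are not reproduced here.
script_lines = [
    "you're all going to die down here",
    "i'm not sure i'm done playing with you yet",
    "you must not be allowed to leave",
    "the facility is going into lockdown",
]

def train_markov(lines):
    """Build a word-bigram table: each word maps to the words seen after it."""
    table = defaultdict(list)
    for line in lines:
        words = line.split()
        for a, b in zip(words, words[1:]):
            table[a].append(b)
    return table

def generate(table, start, max_words=10, seed=0):
    """Walk the bigram table from a start word to produce a new line."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_words - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

table = train_markov(script_lines)
print(generate(table, "you're"))
```

With so little data, the output recombines fragments of the input lines, which mirrors the problem described below: a model trained only on a villain’s dialogue can only produce villainous dialogue.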
The first few AI outputs were a nightmare. There wasn’t enough training data in the model, so the new AI version of the character was overly aggressive. We needed more data to soften the harsh villain character and enable it to work for a wider audience.
The film character’s catchphrase was “You’re all going to die down here,” but the first version of the AI couldn’t quite get it right. It gave us some pretty funny results, including “You must die” and “Your death is here.” As you might imagine, it could be a bit heavy out of context and could have hindered our ability to reach a new audience that hadn’t seen the previous films.
To add more training data and to make the AI smarter, we decided to tap into literature by authors like Charles Dickens and Shakespeare so the AI could learn from the more gentle communication styles of classic villains. Then, we added real conversations from police engagements with criminals to provide more realism and modern communication, as well as examples of people on psychoactive drugs recounting the things they saw, which ended up providing some rather creative dialogue.
We trained and retrained, and finally settled on the AI’s output: “I’m not sure I’m done playing with you yet.” This line reads as playful rather than murderous, and it suited the context of the engagement, inviting people back into the game.
Everyone was happy with the end result, and the game was a hit. But it’s important to note that our decisions about which training data to use had biases. The decisions of the writers as to what made a good villain had biases. All of those biased slants can be OK when the aim is entertainment — but they should be approached with caution for more serious conversations managed by voice assistants, such as for healthcare, finances, and nutrition.
The Challenges of AI Assistants
The creators of AI assistants are often a small, homogeneous group of people behind the curtain who decide what answers are true (or the most accurate) for billions of people. These arbitrary judgments create a distorted view of reality that users of AI assistants might take as gospel.
For instance, more than a year ago, Alexa was accused of a liberal bias. And last January, a video went viral when someone asked Google Home who Jesus was and the device couldn’t answer but could tell users who Buddha was. The company explained that it didn’t allow the device to answer the question because some answers come from the web, which might prompt a response a Christian would find disrespectful.
As the use of smart speakers continues to climb, so do expectations. The number of smart speakers in U.S. homes increased 78% from December 2017 to December 2018 to a whopping 118.5 million, according to “The Smart Audio Report.” But users need to be mindful of the way the AI platforms work.
Digital assistants have the potential to limit the scope of what products and platforms we use.
After all, when one device (and, therefore, one company) owns the road to external knowledge, that company can act unethically in its own interest.
For example, if I ask Siri to play a song by The Beatles, the device might automatically play the song from Apple Music instead of my Spotify library. Or I might ask Alexa to order AA batteries, and Alexa could happily order Amazon’s own brand.
Combating the Limited Scope of AI Devices
In free markets, where competition is supposed to benefit consumers, these flaws can present significant obstacles. The companies that own the speakers could conceivably gain even more control over commerce than they already have.
To combat this, users should be as transparent as possible with their requests to AI devices. “Play The Beatles on Spotify” or “Order the cheapest AA batteries,” for instance, are more thorough instructions. The more aware users are of how companies engage with them, the more they can enjoy the benefits of AI assistants while maintaining control of their environment.
You can also ask an AI device to communicate with a specific company when you are buying items. For instance, Best Buy offers exclusive deals that you can only get when ordering through your smart speaker. You can also get updates on your orders, help with customer service needs, and updates on new releases.
Users should remember that AI assistants are tools, and a good experience requires thinking about how to manage them.
And users should report responses that make them feel uncomfortable so the makers of these devices and skills can improve the experience for everyone. Natural language processing deserves careful attention: its potential benefits are just as significant as the liabilities when things go wrong.
As for our natural language processing and the Red Queen, we discovered that some users were signing off at night with “Good night, Red Queen,” which means she clearly wasn’t too aggressive in the end.