Far from just being an internet diversion, the Mozilla-funded project makes users question whether bots can distort truth, erode privacy, and deceive the public.
Bot or Not, winner of a Mozilla Creative Media Award, is inviting everyone on the internet to give it a playthrough to see if they can distinguish between a bot and another human being.
The Bot or Not web app is simple: a minimal interface and a straightforward premise. Chat with someone for three minutes and try to determine whether they’re a bot or another human using the app.
A single game of Bot or Not is divided into four rounds: a free chat where you can talk about anything you want, followed by alternating question-and-answer rounds in which one user asks a truth-or-dare-style question of the other to see how they respond. At the end of the four rounds, each user is prompted to decide whether the person they were chatting with is real or not; both parties make the call.
Bot or Not was created as part of the Mozilla Creative Media Awards, which funded eight different projects in 2019 for release in May 2020. The theme of the awards reveals that the purpose of the project is greater than a few minutes of interrogating strangers or robots on the internet: It raises questions of “how AI, like machine learning, impacts our understanding of the truth.”
Gary Zhexi Zhang of Foreign Objects, the organization behind Bot or Not, said the app should make people look at bots with a critical eye: “Bot or Not seeks to give people a critical perspective over these non-human conversational agents, both in terms of how they work and how the proliferation of increasingly advanced human-like bots is likely to complicate how we navigate our highly-mediated society.”
Once users finish playing a round of Bot or Not, they’re directed to learn more about the project through Bot or Not’s Guide for the Bot Curious, where the authors argue that public-facing bots, at least in their current form, come with some problems.
The problem with human-like bots
Foreign Objects’ co-director Agnes Cameron said that the proliferation of bots (in 2016, Gartner predicted that the average American would have more interactions online with bots than with their spouses by 2020) spells trouble for personal privacy, both online and off.
“The increasing presence of bots in both the domestic sphere and in the workplace presents a huge risk to privacy, so long as personal data remains the primary business model for most major tech platforms,” Cameron said.
Along with the risk to privacy posed by bots like digital assistants that constantly listen, or customer service bots that record their interactions, Bot or Not’s authors make the argument that bots have deception baked into their very design.
“Even with legal disclosure, human-like bots are built by corporations to elicit users’ emotional responses in order to make sales and gather data,” Foreign Objects said in its Bot Curious post. It bolsters the deception argument by pointing to examples like the eerily human Google Duplex, which closely mimics a real human voice on phone calls.
The Bot Curious article also cites a study that found bots were four times better at making a sale than inexperienced humans, at least when customers didn’t know they were bots; the moment customers found out, the bots’ success rate dropped by 80%.
“Put simply, people make emotional connections and give one another the benefit of the doubt, even if it’s just a stranger over the phone, which is precisely the emotional response that Duplex and others seek to exploit,” Foreign Objects said.
If you think you’re talking to a bot but aren’t sure, there are ways to figure it out: Just use sarcasm or be unpredictable, Cameron said.
“Even the most advanced chatbots today have a hard time responding to highly context-based questions that humans would have no problems with, like a reference back to something you discussed earlier, sarcasm, or a joke.”
She added that bots also tend to use repetitive and formulaic syntax, and generally rely on prompting users instead of forming more than the most basic sentences.
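Cameron's repetitiveness cue can be roughly quantified. The sketch below is a hypothetical illustration, not part of the Bot or Not project: it scores a set of chat messages by the ratio of unique word trigrams to total trigrams, on the assumption that formulaic bot replies reuse the same phrasing. Real bot detection is far more involved than any single heuristic.

```python
# Hypothetical heuristic: how repetitive is a chat participant's phrasing?
# A low unique-trigram ratio is one rough signal of the formulaic syntax
# Cameron describes. Illustrative sketch only.

def trigram_ratio(messages):
    """Return unique trigrams / total trigrams across all messages (1.0 if none)."""
    trigrams = []
    for msg in messages:
        words = msg.lower().split()
        trigrams.extend(tuple(words[i:i + 3]) for i in range(len(words) - 2))
    if not trigrams:
        return 1.0
    return len(set(trigrams)) / len(trigrams)

# Invented sample messages for illustration:
bot_like = [
    "I can help you with that. What would you like to do?",
    "I can help you with that. What would you like to know?",
    "I can help you with that. What would you like to order?",
]
human_like = [
    "ha, no idea honestly",
    "wait, you mean the thing from earlier?",
    "ok that's actually a fair point",
]

# The formulaic set scores lower (more repeated phrasing) than the varied one.
print(trigram_ratio(bot_like), trigram_ratio(human_like))
```

Of course, a sufficiently chatty bot or a terse human can fool a crude measure like this, which is exactly why Cameron recommends context-based probes such as sarcasm or callbacks to earlier conversation.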
Can businesses use bots responsibly?
Not all uses of bots are bad. Foreign Objects points out that they can be a great help to people with health conditions and limited mobility who need digital assistants to perform tasks and simplify life. Bots can also be a positive force in basic customer interaction on platforms like Facebook, handling the many simple questions that would otherwise have to be answered by a human.
“With this project we’re not interested in saying ‘bots are bad’ or ‘bots are good,’ necessarily… bots are here, they’re not human, and the reasons why they’re pretending to be human normally represent another agenda, and it’s vitally important that we recognise that rather than just taking their pseudo-humanness for granted,” Cameron said.
Can businesses use bots in a responsible way? Absolutely, Cameron said, but with a few qualifiers: Be sure you’re telling people they’re interacting with a bot, and be clear about what data is being stored and how it’s going to be used afterward.
“Chatbots can definitely make intuitive and easy-to-navigate interfaces, and we’re definitely not advocating for them not to be used,” Cameron said.
Should your bot try to be more human? Maybe not, she added. “Someone wants you to trust this thing and treat it like a person: Why? Is it a good idea for us to start talking to these bots, day-to-day, if we know that a company like Amazon or Google is using everything we say to track us and sell us stuff?”