The government is preparing its antitrust strategies and has begun the battle to “break up Big Tech.” The problem is that it has been studying the wrong maps, has brought the wrong weapons and isn’t even on the right battlefield.
The uncomfortable truth is that the government is the underdog in this fight. The big four—Amazon, Apple, Facebook and Google—have thousands of lawyers, lobbyists and corporate communications staffers who are better equipped and more motivated to win. For the government to regulate effectively and enact change, it needs to focus on neutralizing Big Tech’s behavioral weapons.
To win the future, look to the past. Technology has given the government headaches before, none bigger than the rise of unsolicited spam when email took off in the early 2000s. These unrelenting digital Trojan horses carried viruses, crashed servers and were quickly diluting the value of electronic communication. The government didn’t go after AOL, Yahoo or MSN in hopes of breaking up the gatekeepers; it went after the behaviors. Congress passed the CAN-SPAM Act of 2003, which doled out harsh penalties of up to $43,000 in fines per email violation.
Today’s predicament is more dire. Big Tech has permeated beyond our inboxes: it’s at our doorsteps (Amazon), it hosts our relationships with family and friends (Facebook), and it even tucks us into bed at night with a dim blue glow (Apple/Google). To stop the addictive flywheel of technology from consuming future generations, the government should focus on regulating two areas: dark patterns and engagement-driven algorithms.
A dark pattern is a user interface designed to trick users into performing an action the platform desires. The practice is common: Robinhood gamified trading in a way that has been linked to a 20-year-old’s suicide, and Uber has exploited drivers through faux rewards. The Big Four all utilize dark patterns, so if legislators can properly identify and define these patterns, regulation can be applied.
While the four most prominent dark patterns are temporal, social, monetary and psychological, it’s the latter two that offer legislators a clear path of attack: They are the most harmful and the easiest to identify. As with the CAN-SPAM Act, penalties assessed per instance have the potential to scale quickly across, say, Facebook’s 2.26 billion daily active users.
The past four years have proven that enragement is engagement. With advertising-driven business models, the endorphin-inducing algorithms of Facebook, YouTube and TikTok provide a firehose of curated content to hold our attention. The current Band-Aid fixes of banning, moderating or assigning blame are reactive.
To fix this problem, legislators need to take a proactive, transparent approach. Fighting for transparency requires understanding how these algorithms work: the signals they ingest and the content recommendations those signals produce. This is the first step in lowering the social temperature. Will this hurt advertising revenue? Yes. Will it cause irreparable damage to Facebook’s business model? No.
Is There Hope?
While the past few years have been filled with embarrassing Senate and House hearings in which lawmakers weren’t even sure how Big Tech’s business models worked, the future is much brighter. A new wave of congressional lawmakers, digitally savvy and practiced in using these tech giants’ platforms in their own campaigns, offers hope that these regulators will step into antitrust battles “knowing thy enemy.” To succeed, they need not attack Big Tech for unintended consequences but instead shine a light on the people, processes and algorithms that got us here in the first place.
Matt Maher is the founder of M7 Innovations.