The Internet’s Titans Make a Power Grab

The ordinary laws no longer govern. Every day, new rules are being written to deal with the crisis. Freedoms are curtailed. Enforcement is heavy-handed. Usual civil-liberties protections, such as rights of appeal, are suspended. By act, if not by word, a state of emergency has been declared. This is not a description of the United States, or even Hungary. It’s the internet during the coronavirus pandemic. We are living under an emergency constitution invoked by Facebook, Google, and other major tech platforms. In normal times, these companies are loath to pass judgment about what’s true and what’s false. But lately they have been taking unusually bold steps to keep misinformation about COVID-19 from circulating.

As a matter of public health, these moves are entirely prudent. But as a matter of free speech, the platforms’ unconstrained power to change the rules virtually overnight is deeply disconcerting.

As this crisis unfolds, Facebook says it is “limiting misinformation and harmful content” at an unprecedented level. In March alone, it displayed fact-checking warnings on 40 million posts related to the pandemic and took down hundreds of thousands of posts that it said could lead to physical harm. On Thursday, the company went so far as to announce that it will individually alert users who have commented upon, liked, or otherwise reacted to debunked myths about COVID-19. Meanwhile, Twitter’s blog post about how it is “broadening [its] definition of harm” contains a long and growing list of types of tweets that the platform is removing. Google says it has “taken down thousands of videos” to protect people from misinformation. These are all important steps, so why not just applaud and move on?

Mark Zuckerberg has justified Facebook's newly hands-on approach on the grounds that "you can't yell fire in a crowded theatre." But what constitutes yelling "fire," and how far platforms should go in their willingness to intervene, is a matter of significant controversy.

The urgent need for platforms to do something should not stop us from asking questions about the powers these and other titans of the online world have assumed under a state of emergency. The platforms, which operate across national borders, are private companies that generally have the right to pick and choose what they allow on their services. But the awesome nature of this power (the ability to decide what is hate speech, permissible political campaigning, or too much bare skin) has led to calls for more accountability in the way platforms exercise it. Time and again, Facebook and other platforms have disavowed any role as "arbiters of truth": they have laid out ostensibly neutral rules for deciding what to take down, tried to set up processes for enforcing those rules nonarbitrarily, and offered limited rights of appeal to users found to have violated them. These efforts were evolving into a silent constitution that bound users and the platforms themselves. But now it's been rewritten.

In constitutional-law scholarship, an emergency constitution describes an exceptional state of governance that operates during a crisis and enhances the powers of those in charge. About 90 percent of countries have explicit provisions for how to deal with states of emergency. Their form varies widely, but typically certain rights and liberties are curtailed, and checks and balances removed, to allow for a more decisive and forceful response to whatever disaster threatens the constitutional order. The ongoing pandemic has, unsurprisingly, triggered the invocation of many emergency constitutions around the world, so that governments can take exceptional measures to deal with the unfolding public-health crisis. People's privacy, freedom of movement, and freedom of speech are being restricted in ways that would not be accepted in ordinary times.

Major tech companies, too, have responded to the pandemic in ways that expose just how much power they can exercise when they decide to do so.

First, many platforms—not just Facebook, Twitter, and Google—have adopted new rules specifically addressing coronavirus-related content. Amazon has quietly removed dozens of books containing conspiracy theories or medical misinformation. Medium is aggressively taking down viral posts under a new policy on COVID-19 content, despite the site's mission to be a platform for "whatever you have to say." Reddit has added warning messages to two subreddits known for boosting misinformation. Pinterest is limiting all search results about the coronavirus to those from "internationally-recognized health organizations." Internet companies, in short, are trying to impose guardrails.

Second, enforcement during the state of emergency is swift and blunt. With most human content moderators at home and unable to work remotely for logistical reasons, the major platforms have to rely on their automated tools more than normal. Facebook, Twitter, and YouTube all acknowledged that they would make more mistakes as a result. In other words, they would remove speech that should stay up. This speech becomes collateral damage in the mobilization around the pandemic, and a concession to the exigencies of the moment. With misinformation a potential matter of life and death, and simply no way of having humans review every post, the choice between blunt tools and no moderation at all is simple.

Even the usually sacred principle that platforms will not interfere with the speech of political figures has been abandoned. After Twitter removed tweets in which Brazilian President Jair Bolsonaro spread false or misleading information about COVID-19 cures, in violation of its policies, Facebook and YouTube quickly followed. For a tech platform to suppress statements by a democratically elected leader is a truly remarkable step, and potentially one that makes it harder for voters to hold their representatives accountable in the future.

Third, even as they impose sweeping new rules and blunter enforcement, platforms have suspended their usual due-process protections. Being muted by an algorithm on Facebook or YouTube may carry no legal consequence, unlike, say, being silenced by police in the public square. Yet the former is a far greater hindrance than the latter to a person's ability to reach an audience, especially at a moment of social distancing. Even so, without as many human content moderators on deck, the major platforms have all scaled back their appeals processes for people who feel their posts were taken down incorrectly.

The platforms are revealing their far-reaching power in other ways. For some time before the pandemic, members of Congress and regulators around the world had been attacking major internet companies over their data-collection and data-sharing practices. Yet in recent weeks, Facebook and Google have presented their troves of hyper-detailed data as a boon to disease researchers and have unveiled new products that employ user information to help document the pandemic’s spread and organize response efforts. As the tech journalist Casey Newton wrote recently, “Big tech companies, which have spent the past three years on the defensive over their data collection practices, are now promoting them.”

If ever an emergency justified a clampdown on misinformation and other extraordinary measures, the coronavirus pandemic is surely it. The tech companies' swift action in the current crisis has been widely praised, and rightly so. But it still leaves very real questions. Unlike most countries' emergency constitutions, those of the major platforms have no checks or constraints. Are these emergency powers temporary? Will there be any oversight to ensure they are exercised proportionately and even-handedly? Are data being collected to assess the effectiveness of these measures or their cost to society, and will those data be available to independent researchers? Some are already asking whether things should ever go back to "normal," or whether this more iron-fisted rule is what the internet needed all along. The favorable news coverage the platforms are receiving will no doubt make similar heavy-handedness more tempting in the future, and in circumstances far less dire than a global pandemic.

Users have no way of forcing platforms to answer any of these concerns. Indeed, the state of emergency throws into sharp relief what is always true of most regulation of online speech: the powers of rule-making, enforcement, and review are all concentrated in the same hands. What is happening during the pandemic is just an accentuated version of the norm, and it shows that even the most seemingly entrenched rules can be instantly overthrown. Right now, that may be helpful. But what happens once the worst of this crisis subsides?

Evelyn Douek, a doctoral student at Harvard Law School, is an affiliate at Harvard’s Berkman Klein Center for Internet and Society.

source: NextGov