Press "Enter" to skip to content

No, the Internet Is Not Good Again

The weekend before bars and restaurants closed in New York, I sat on the ground in the park with three friends in a creepy circle—six feet away, six feet away, six feet away. We were just going around saying things we were newly shocked by, really. “I don’t care about anything I cared about a week ago,” I said. “I love Instagram now! And it’s my job not to love Instagram.” I’m supposed to be somewhat critical of basically everything about the internet, but I didn’t think I could do it anymore, given that it was the only thing tethering me to my loved ones, none of whom live in my tiny apartment with me. I said, “Elbow bump! See you later!” to my former roommate as we left the park, and I next saw him on FaceTime.

Everybody loves the internet now. As traditional public life has shut down for much of the population, we’re moving online to stay connected to people we miss, and to raise money for people who need it, and to coordinate all kinds of collective action that can no longer happen in physical places. Since stay-at-home orders began in the United States, use of online platforms has ballooned to the point of absurdity: In a recent blog post, the Zoom CEO, Eric Yuan, said that the service’s number of daily meeting participants had gone from 10 million in December to 200 million in March. Daily usage of Google’s videoconferencing platform is 25 times higher now than it was in January. According to Facebook, messaging across its services was up 50 percent at the end of March in the countries hit hardest by the pandemic, and video calling had more than doubled on Facebook Messenger and WhatsApp in areas with major COVID-19 outbreaks.

The early promise of the web—that it would be a place for ingenuity and shared knowledge—has glimmered back into view. Though just months ago we were a couple of solid years into a big-tech backlash, each day bringing new questions about the surreal powers of companies such as Facebook and Google and Apple, today we feel grateful to have them, and blessed to use their products for most of our waking hours.

“The coronavirus crisis is showing us how to live online,” The New York Times’ Kevin Roose argued, as states directed residents not to leave their homes. “After spending years using technologies that mostly seemed to push us apart, the coronavirus crisis is showing us that the internet is still capable of pulling us together,” he wrote. “Has coronavirus made the internet better?” The New York Times’ Jenna Wortham asked a couple of weeks later, concluding that it had.

It’s a tempting thought, but a premature one. Major platforms are struggling to adapt to enormous amounts of additional activity and strange new use cases. Moderation decisions that were difficult under the best of circumstances, when humans were responsible for them, are now being made by artificial intelligence. Platforms that had big user bases now have huge user bases, making the exploitation of security flaws far more worthwhile. Companies that were hoovering up our personal data when we spent eight hours a day on our phones are now in touch with our most intimate anxieties and desires around the clock. The internet feels better only because it’s all we have—and all the pressure we’re putting on it may, ultimately, make things worse.

As stay-at-home orders rolled out across the country, Facebook announced that it would send workers home, including content moderators, explaining that many of them would be unable to do their jobs remotely for various reasons: The data they look at are sensitive and shouldn’t be pulled up on a home network; the jobs they perform are emotionally taxing and require on-site resources; and so on. Some human moderators are still working, but Facebook, along with other major internet platforms such as YouTube and Twitter, announced that it would rely far more on artificial intelligence than before, which it acknowledged would lead to mistakes.

AI content moderation has a lot of limitations. It’s a blunt instrument solving a problem that has endless permutations, and it can produce both false negatives and false positives. A computer can deduce a lot about a video or a sentence: how many people have seen it, which IP addresses are sharing it, what it’s been tagged as, how many times it’s been reported, whether it matches with already-known illegal content. “What it’s not doing is looking, and itself making a decision,” says Sarah T. Roberts, an internet-governance researcher at UCLA. “That’s what a human can do.” As a result, moderation algorithms are likely to “over-police in some contexts, over-moderate in some contexts, and leave some other areas virtually uncovered,” Roberts told me. “All of the benefit of having the human moderation team, their cognitive ability, their sense-making ability, their ability to deal with a whole host of types of content, not just the ones for which they were expressly designed, and so on, get lost.”

“Actually, it turns out low-paid, low-status work that’s often precarious and contract-based might be some of the most important work on the internet,” she continued. “And when we lose it, we notice.”
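To make the distinction Roberts draws concrete, here is a minimal sketch, in Python, of the kind of mechanical decision-making she describes. Everything in it is illustrative rather than any platform’s actual pipeline: the hash set, the report threshold, and the function name are hypothetical, and real systems use perceptual hashes (such as PhotoDNA) rather than exact ones, so that near-duplicates still match.

```python
import hashlib

# Illustrative stand-in for a database of content already confirmed as
# violating policy. A real system would use perceptual hashing so that
# near-duplicates match; exact SHA-256 keeps the sketch simple.
KNOWN_VIOLATING_HASHES = {
    hashlib.sha256(b"previously banned upload").hexdigest(),
}

REPORT_THRESHOLD = 5  # hypothetical: auto-act once this many users report


def automated_decision(content: bytes, report_count: int) -> str:
    """Moderate a post using only mechanical signals.

    Nothing here looks at the content and judges it in context;
    that sense-making step is the part only humans supply.
    """
    if hashlib.sha256(content).hexdigest() in KNOWN_VIOLATING_HASHES:
        return "remove"  # exact re-upload of known-bad content: caught
    if report_count >= REPORT_THRESHOLD:
        return "remove"  # blunt heuristic: mass reporting can be abused
    return "allow"       # anything novel slips through


print(automated_decision(b"previously banned upload", 0))    # remove
print(automated_decision(b"harmless but brigaded post", 9))  # remove (false positive)
print(automated_decision(b"brand-new violating post", 1))    # allow  (false negative)
```

Both failure modes in the sketch, the brigaded false positive and the undetected false negative, are the losses Roberts is describing, and they matter more than ever right now.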

Especially because people who shouldn’t be spending more time online—because they’re already susceptible to conspiratorial thinking, because they’re prone to anxiety spirals, or because they have a substantial platform that they use in dumb ways—are spending a lot more time online. When I spoke with Sarah Pollack, a communications manager for Facebook’s policy team, Facebook had just finished wiping a slew of new pages created for the Northwest Front, a white-supremacist organization based in Washington State that has been podcasting about how white people might come out ahead after the “Chinese coronavirus crisis” subsides.

Nothing has gone horribly wrong on Facebook yet. But some cracks are showing: Though the company promised that for the average person, the site experience wouldn’t be different, the dialog box that pops up when a user tries to report content reads, “Please note we have fewer reviewers available right now.” Last week, Facebook’s automated systems sent warnings to people who were posting about making hand-sewn masks, threatening to ban them—the artificial intelligence mistook the posts for medical-mask ads, which have been banned because of price gouging and supply shortages.

Facebook was having enough trouble moderating the content on its platform before any of this happened, and faced near-constant criticism for decisions made even by human moderators. Not to mention the fact that even a “small number” of mistakes, for a platform with 2.5 billion users, is going to be a lot of mistakes. Potentially millions of mistakes!

That scale is precisely what should have us worried, Roberts said. Facebook was already huge; now it’s even huger. “This situation is an opportunity for us to collectively acknowledge and then consider the way that we have put pretty much the entirety of our digital landscape into the hands of a really small and select group of firms that are now, in essence, our first source for many major parts of human life,” Roberts said. “Reliance upon these systems has only deepened. And in that deepening, there’s an opportunity to note where they actually fail us.”

With the switch to more AI moderation, Facebook has chosen to de-prioritize some things. The biggest is spam. And though extremism is one of the site’s biggest concerns right now, there is a new triage within that. According to Pollack, the company’s 350-person team dedicated to dangerous individuals and organized hate is working full-time from home to combat extremist rhetoric during the crisis, but it will also have to make some concessions—focusing more on immediate threats and bad actors who have tried to get back on the platform after they’ve been banned, and less on individuals who express support for these groups and ideas on personal pages.

The screeching track-switch to predominantly automated moderation is possible for only the biggest tech companies, which have the best engineering resources and have been developing these tools internally for years. Smaller and newer platforms, which are currently drowning in new sign-ups and seeing their sites used in ways they never have been before, will have a harder time.

Discord’s CEO, Jason Citron, told me that the overall sign-up rate for his platform is 200 percent higher than it was before the crisis, and users are averaging four hours a day on the site. The chat platform is less than five years old. It has spent the past several years dealing with entry-level platform problems—from April to December 2019, it banned 5.2 million accounts, mostly for spam or “exploitative content,” including revenge porn and child-porn grooming. (The latter was investigated by the FBI.) It is well known as an organizing space for white supremacists and other extremist groups.

When I spoke with Citron, he talked about the technical challenges of scaling up to serve a much larger user base with a more diverse range of use cases—Bible groups and Boy Scout meetings have been coming online alongside meme groups and gamer chats. The company’s first major change was upping the number of people who could simultaneously participate in a live-stream from 10 to 50. “We’ve just been scaling our infrastructure and we’ve been pressing pretty hard to make sure Discord is functioning so that people have a place to hang out with their friends,” Citron said. (Discord depends, like Reddit, on community-level moderation. It has a customer-safety team that moderators can turn to when they need help, and Citron said those people are still working full-time from home. “My guess is that it’s stable there,” Citron said.)

Unruly growth has been its own problem already. The breakout hit of coronavirus internet so far has clearly been Zoom, the videoconferencing software designed for office environments and now used for everything including school and dating and seders. It’s also been a case study in what happens when a limited-use platform is flooded with new users.

Security vulnerabilities that made dialing into random calls easy resulted in high-profile companies such as Google and SpaceX banning the use of Zoom by their employees. New York City prohibited the platform’s use in its public schools’ remote-learning classes. Last week, the company was sued by a shareholder. The FBI issued a warning about videoconferencing security, specifically citing Zoom.

“Zoombombing” pranks, in which a rogue actor takes over a call and screen-shares something disruptive or horrifying, have escalated from the infamous “Two Girls, One Cup” shock video to pornography to hate speech. An investigation by The New York Times found bad actors organizing on Instagram, Twitter, Reddit, 4chan, and especially Discord to infiltrate Zoom Alcoholics Anonymous meetings, church congregations, trans and nonbinary youth groups, and at least one middle school. (In the past week, Discord has removed about 500 servers. But a spokesperson says these kinds of groups are difficult to chase, as they move quickly across platforms to reconnect.)

Zoom has made a number of changes in the past week, including turning on passwords for meetings by default, and taking Facebook’s software-development kit out of its iOS app—a pretty significant move that stopped Facebook from collecting information about Zoom users.

The company announced a 90-day freeze on new features so that engineering resources could focus on privacy and security problems, and permanently removed a feature that reported to a meeting’s host whether participants were paying attention during a call by tracking how long they spent clicked into other windows. All of these changes have come quickly, in response to the crisis.

“The systems were constantly being improvised before this, but the degree of improvisation and the need to act quickly is just stronger,” James Grimmelmann, an internet-policy expert at Cornell Law School, told me. “It’s increased volume. It’s doing their work with less ability to have teams in place. Huge volumes of content on topics they didn’t have a lot of institutional depth on before.”

Gamergate was the industry’s introduction to targeted, coordinated harassment, and it produced policies and features and institutional knowledge around those things, Grimmelmann argued. The 2016 election was its crash course in political disinformation and coordinated inauthentic activity. The live-streaming of a mass shooting in Christchurch last year was a pivotal lesson in reacting quickly to violent content.

“I think we’re getting a similar scaling up in terms of the dissemination of public-health information,” Grimmelmann said. “Some platforms are familiar with these challenges; some of them are learning.”

There’s plenty to love about the internet right now, for all its flaws. Important information is circulating. Resources are being pooled and redistributed by networks of individuals, while the government is lagging behind. Digital projections of people we can’t see in person are available at any time.

The worst events in internet history have also tended to lead to the biggest changes: Gamergate was social media’s first big existential crisis. Some signs indicate that the coronavirus could be another. “The pandemic has helped to foreground how contestable—and, we argue, utterly frail—platform governance is,” the researchers João Carlos Magalhães and Christian Katzenbach wrote in a paper published in Internet Policy Review late last month. They note that Twitter’s community guidelines have changed 300 times since 2009, often as a consequence of specific cataclysmic events. Tiny changes are happening again now, under pressure, and specific to the moment. Just a few days ago, Facebook announced that it would make forwarding messages in WhatsApp more difficult, a change presumably intended to add friction to the process of spreading misinformation. Now you can forward some bogus link to one person at a time, but not hundreds at once.
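The mechanics of that friction are simple enough to sketch. The snippet below is a hypothetical illustration in Python, not WhatsApp’s actual implementation; the five-forward threshold matches what the company announced publicly, but the function and constant names are invented for the example.

```python
# A hypothetical sketch of a forward limit, not WhatsApp's real code.
# The five-forward threshold matches the company's public announcement;
# everything else here is invented for illustration.

FREQUENTLY_FORWARDED_AFTER = 5  # forwards before a message counts as "highly forwarded"
MAX_CHATS_NORMAL = 5            # ordinary messages: several chats per action
MAX_CHATS_VIRAL = 1             # highly forwarded messages: one chat at a time


def max_forward_targets(times_forwarded: int) -> int:
    """How many chats a message may be forwarded to in a single action."""
    if times_forwarded >= FREQUENTLY_FORWARDED_AFTER:
        return MAX_CHATS_VIRAL
    return MAX_CHATS_NORMAL


print(max_forward_targets(0))   # 5: a fresh message spreads freely
print(max_forward_targets(12))  # 1: a viral chain slows to one hop at a time
```

The design blocks nothing outright; it just slows the hop rate of viral chains, and that extra tap of friction compounds across an entire forwarding tree.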

We could see hundreds more incremental changes in the months to come, many of which could make the internet a better place long after the crisis is over. The coronavirus has not revealed the internet’s potential to become a utopia through the sheer force of collective tenderness and goodwill. It’s not all dance crazes and donations to strangers’ Cash App accounts. But the crisis has revealed an opportunity to be more aggressively critical of the companies that wanted to be our whole world, and got their wish. Now they’ll have to learn what that responsibility really means.
