Press "Enter" to skip to content

So Much for the Decentralized Internet

Kanye West, Elon Musk, Bill Gates, and Barack Obama were all feeling generous on the evening of July 16, according to their Twitter accounts, which offered to double any payments sent to them in bitcoin. Not really, of course; they’d been hacked. Or, rather, Twitter itself had been hacked, and for apparently stupid reasons: The perpetrators stole and resold Twitter accounts and impersonated high-follower users to try to scam people out of cryptocurrency.

“The attack was not the work of a single country like Russia,” Nathaniel Popper and Kate Conger reported at The New York Times. “Instead, it was done by a group of young people … who got to know one another because of their obsession with owning early or unusual screen names.” The hackers gained access to Twitter’s tools and network via a “coordinated social engineering attack,” as Twitter’s customer-support account called it—a fancy way of admitting that its employees got played. All told, 130 accounts were compromised. “We feel terrible about the security incident,” Twitter CEO Jack Dorsey said last week, in prepared remarks on an earnings call.

The hack makes Twitter look incompetent, and at a bad time; its advertising revenues are falling, and the company is scrambling to respond. It also underscores the impoverished cybersecurity at tech firms, which provide some employees with nearly limitless control over user accounts and data—as many as 1,000 Twitter employees reportedly had access to the internal tools that were compromised. But the stakes are higher, too. Though much smaller than Facebook in terms of its sheer number of users, Twitter is where real-time information gets published online, especially on news and politics, from a small number of power users. That makes the service’s vulnerability particularly worrisome; it has become an infrastructure for live information. The information itself had already become weaponized; now it’s clear how easily the actual accounts publishing that information can be compromised too. That’s a terrifying prospect, especially in the lead-up to the November U.S. presidential election featuring an incumbent who uses Twitter obsessively, and dangerously. It should sound the internet equivalent of civil-defense sirens.

Like many “verified” Twitter users who compose its obsessive elite, I was briefly unable to tweet as the hack played out, Twitter having taken extreme measures to try to quell the chaos. I updated my password, a seemingly reasonable thing to do amid a security breach. Panicked, Twitter would end up locking accounts that had attempted to change their password in the past 30 days. A handful of my Atlantic colleagues had done the same and were similarly frozen out. We didn’t know that at the time, however, and the ambiguity brought delusions of grandeur (Am I worthy of hacking?) and persecution (My Twitterrrrrr!). After less than a day, most of us got our accounts back, albeit not without the help of one of our editors, who contacted Twitter on our behalf.

The whole situation underscores how centralized the internet has become: According to the Times report, one hacker secured entry into a Slack channel. There, they found credentials to access Twitter’s internal tools, which they used to hijack and resell accounts with desirable usernames, before posting messages on high-follower accounts in an attempt to defraud bystanders. At The Atlantic, those of us caught in the crossfire were able to quickly regain access to the service only because we work for a big media company with a direct line to Twitter personnel. The internet was once an agora for the many, but those days are long gone, even if everyone can tweet whatever they want all the time.

It’s ironic that centralization would overtake online services, because the internet was invented to decentralize communications networks—specifically to allow such infrastructure to survive nuclear attack.

In the early 1960s, the commercial telephone network and the military command-and-control network were at risk. Both used central switching facilities that routed communications to their destinations, kind of like airport hubs. If one or two of those facilities were to be lost to enemy attack, the whole system would collapse. In 1962, Paul Baran, a researcher at RAND, had imagined a possible solution: a network of many automated nodes that would replace the central switches, distributing their responsibility throughout the network.
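
Baran’s insight is easy to see in a toy sketch. The short Python snippet below is illustrative only; the topology and node names are invented for this example rather than drawn from any real network. It compares a hub-and-spoke layout, where every connection runs through one central switch, with a small mesh in which each node has several neighbors: knock out the hub and the first network shatters, but remove any single node from the mesh and the survivors can still reach one another.

```python
from collections import deque

def reachable(adjacency, start, removed):
    """Breadth-first search from `start`, ignoring any nodes in `removed`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency.get(node, ()):
            if neighbor not in removed and neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# Hub-and-spoke: every endpoint routes through one central switch.
hub = {"switch": ["A", "B", "C", "D"],
       "A": ["switch"], "B": ["switch"], "C": ["switch"], "D": ["switch"]}

# Distributed mesh: each node has multiple neighbors, no center.
mesh = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}

# Remove one node from each network and see who can still reach A.
print(reachable(hub, "A", removed={"switch"}))   # {'A'} — everyone else is cut off
print(reachable(mesh, "A", removed={"B"}))       # {'A', 'C', 'D'} — traffic routes around the loss
```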

The following year, J. C. R. Licklider, a computer scientist at the Pentagon’s Advanced Research Projects Agency (ARPA), conceived of an Intergalactic Computer Network that might allow all computers, and thereby all the people using them, to connect as one. By 1969, Licklider’s successors had built an operational network after Baran’s conceptual design. Originally called the ARPANET, it would evolve into the internet, the now-humdrum infrastructure you are using to read this article.

Over the years, the internet’s decentralized design became a metaphor for its social and political ethos: Anyone could publish information of any kind, to anyone in the world, without the assent of central gatekeepers such as publishers and media networks. Tim Berners-Lee’s World Wide Web became the most successful interpretation of this ethos. Whether a goth-rock zine, a sex-toy business, a Rainbow Brite fan community, or anything else, you could publish it to the world on the web.

For a time, the infrastructural decentralization of the web matched that of its content and operations. Many people published to servers hosted at local providers; most folks still dialed up back then, and local phone calls were free. But as e-commerce and brochureware evolved into blogs, a problem arose: distributed publishing still required a lot of specialized expertise. You had to know how to connect to servers, upload files, write markup and maybe some code, and so on. Those capacities were always rarefied.

So the centralization began. Blogs, which once required installing software on your own server, became household services such as Blogger, Typepad, WordPress, and Tumblr. Social-media services—Facebook, Twitter, Instagram, Snapchat, TikTok, and the rest—vied for user acquisition, mostly to build advertising-sales businesses to target their massive audiences. The services began designing for compulsion, because more attention makes advertising more valuable. Connections, such as friends on Facebook, followers on Twitter, colleagues on LinkedIn, all became attached to those platforms rather than owned by individuals, and earlier efforts to decentralize those relationships effectively vanished. Even email, once local to ISPs or employers, became centralized in services such as Hotmail and Gmail.

In the process, people and the materials they create became the pawns of technology giants. Some of the blog platforms were acquired by larger tech companies (Blogger by Google, Tumblr by Yahoo), and with those roll-ups came more of the gatekeeping (especially for sexually explicit material) that decentralization had supposedly erased. One of the most urgent questions in today’s information wars surrounds how—not whether—Facebook should act as a gatekeeper of content across its massive, centralized global network. “Content,” in fact, might be the most instructive fossil of this age, a term that now describes anything people make online, including the how-to videos of amateur crafters, the articles that journalists write, and policy pronouncements by world leaders. Whereas one might once have been a writer or a photographer or even a pornographer, now publishing is a process of filling the empty, preformed vessels provided by giant corporations. A thousand flowers still bloom on this global network, but all of them rely on, and return spoils to, a handful of nodes, just as communications systems did before the ARPANET.

Nuclear war is less of a threat than it was in 1963, but the risks of centralized communications systems have persisted and even worsened. Centralized online services such as Twitter and Facebook have become vectors for disinformation and conspiracy, conditions that have altered democracy, perhaps forever. A global, decentralized network promised to connect everyone, as its proponents had dreamed it would. But those connections made the network itself dangerous, in a way Licklider and others hadn’t anticipated.

The cybersecurity implications of this sort of centralization are deeply unnerving. With titans of business, popular celebrities, and world leaders all using (and sometimes abusing) Twitter and other services, the ability to centrally attack and control any or all of those accounts—Taylor Swift’s or Donald Trump’s—could wreak far more havoc than a day of bitcoin fraud. What if businesses or elections were targeted instead?

The fact that the Twitter hack wasn’t consequential only makes it harder for the public to grasp the risks of centralized information infrastructure. Most Twitter users probably didn’t even notice the drama. As far as we know, the few who were hacked suffered limited ill effects. And the low-grade power users, like me, who were caught in the crossfire either got their accounts back and carried on as before or didn’t (yet) and amount to uncounted casualties of centralized communications.

One of those casualties is another of my Atlantic colleagues, Ellen Cushing. She had been on the outs with Twitter for some time, she told me, and just decided not to bother regaining control of her account. Instead, she’s rekindled an interest in media-outlet homepages. But already, Cushing has realized what she’s missing: the view of what’s happening now that Twitter uniquely offers. Twitter got its wish of becoming the place people go for real-time updates on news. But that also means that when it fails, as it did during this screen-name hack, part of our communications infrastructure also fails. Twitter isn’t just a place for memes or news, or even presidential press releases meted out in little chunks. It’s where the weather service and the bank and your kid’s school go to share moment-to-moment updates. Though seemingly inessential, it has braided itself into contemporary life in a way that also makes it vital.

That leaves us with a worst-of-all-worlds situation. The physical and logical infrastructure that helped communications networks avoid catastrophic failure has devolved back into a command-and-control model. At the same time, public reliance on those networks has deepened and intensified. Facebook (which owns Instagram and WhatsApp), Google (which includes YouTube), Twitter, and Snap are worth a combined $1.7 trillion. In China, where Western services are banned, WeChat, Weibo, and Qzone count roughly 2 billion users among them. The drive to “scale” technology businesses has consolidated their wealth and power, but it has also increased their exposure. Bad actors target Facebook and Twitter for disinformation precisely because those services can facilitate its widespread dissemination. In retrospect, Licklider’s Intergalactic Computer Network should have sounded like an alien threat from the start.

But even after years of breaches and threats on Facebook, Twitter, YouTube, and beyond, tech can’t shake its obsession with centralization. The Facebook and Palantir Technologies investor Peter Thiel has exalted monopoly as the apotheosis of business. Content-creation start-ups enter the market not to break the grip of Google or Facebook, but in hopes of being acquired and rolled up into them. New social networks, such as TikTok, capture novel attention among younger audiences who find Facebook unappealing, but they still operate from the same playbook: Capture as many users as possible on a single platform in order to monetize their attention.

In the process, new platforms introduce new risks too. Because it is based in China, U.S. officials fear that TikTok could be a national-security threat, used to spread disinformation or amass American citizens’ personal information. But rather than treating the service as a risk in its own right, as each of its predecessors has proved to be, some critics are tripping over themselves to celebrate TikTok as a charming new cultural trend. Writing at The Washington Post, Geoffrey A. Fowler compared national-security concerns about the service to xenophobia. During the Cold War, American policy responded to potential threats, even remote ones, with huge investments of money and manpower for disaster preparedness. The internet itself rose from that paranoid fog. Now we download first and ask questions later, if ever, until something goes terribly wrong—and maybe not even then.

The risks to online services are more numerous and varied than ever, especially now that they cater to billions of people all over the world. Somehow, the imagined consequences still seem minor—virtual real-estate or cryptocurrency grifts—even as the actual stakes are as fraught as, or worse than, they were half a century ago. The internet was invented to anticipate the aftermath of nuclear war, which thankfully never happened. But the information war that its technological progeny ignited happens every day, even if you can’t log in to Twitter to see it.

This article was originally published in The Atlantic.

source: NextGov