The icons of social media apps, including Facebook, Instagram, YouTube and WhatsApp, displayed on a mobile-phone screen. ‘With very few exceptions, “good” information is always slower to spread than bad, because it is rarely engineered with emotional hooks and outrage triggers.’ Photograph: Yui Mok/PA

No more going viral: why not apply social distancing to social media?


By limiting the number of people a user can share posts with, Facebook et al could help flatten the curve of misinformation

The dawn of the current era of weaponised misinformation can be dated with precision to September 2012. That was when Facebook started favouring posts by publishers and sending them enormous amounts of traffic. Over the next 12 months, Facebook referrals to Time magazine jumped 208%, to BuzzFeed 855% and to Bleacher Report more than 1,000%. Websites such as Upworthy, BuzzFeed, Mic and their imitators grew massive by figuring out how to appeal to the algorithm. Soon, so did others: dodgy conservative news sites from publishers such as Liftable Media were in the ascendant by 2015.

It took until late 2016 and Donald Trump’s election victory for the world to wake up to the awesome power of the social network’s algorithms. Another US presidential election looms and still the debates triggered by the last one are unresolved: should Facebook and other social networks have the power to decide what constitutes acceptable speech? Should political ads be allowed on social networks? What can be done about lies from politicians?

These are the wrong questions.

The spread of misinformation is enabled by the structures of social networks. These structures reduce friction in sharing. They speed up flows of information and incentivise users to post things that will earn likes, replies and shares. The same incentives are weaponised by malicious actors, who rely on regular people to amplify their message.

Without a change in this design, nothing else can change. Moderation is impractical when you have 3 billion users speaking hundreds of languages in dozens of political cultures. AI is hopeless at nuance. And asking society to change itself – by telling people to be more cautious about what they read and repost or adding fact-checks to posts – is like replacing plastic straws to ameliorate environmental catastrophe. It makes for good PR, but the effects are so small as to be inconsequential.

The answer, then, is to change the networks themselves. But in what way? The language of epidemiology, so familiar in the midst of a pandemic, suggests a solution. Just as information is “viral”, so the antidote to misinformation ought to be reducing its virality.

To do this, social networks should reintroduce friction into their sharing mechanisms. Think of it as the digital equivalent of social distancing. In the absence of a vaccine or cure for Covid-19, the approach to dealing with the deadly virus has been to introduce barriers to transmission: physical-distancing rules, stay-at-home measures, barriers such as screens and face masks.

Now apply that to social networks. Two years ago, WhatsApp limited the number of people to whom an individual can forward a single message at one time. The result was, according to an MIT study, that “80% of messages died within two days”. WhatsApp says forwards overall declined 25% as a result of the change. The messaging app followed up this year by imposing restrictions on sending messages that had already been forwarded many times. In less than a month, that caused “a 70% reduction in the number of highly forwarded messages sent on WhatsApp”.
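
To see how this kind of friction operates, consider a minimal sketch, in Python, of a forwarding cap. The constants mirror the limits WhatsApp has described publicly (five chats per ordinary forward, one chat once a message has been widely forwarded), but the names and structure are illustrative assumptions, not its actual code.

    # Illustrative sketch of forwarding friction; constants and names are assumptions.
    HIGHLY_FORWARDED_THRESHOLD = 5   # forwards before a message counts as "highly forwarded"
    NORMAL_FORWARD_CAP = 5           # chats a single forward action may target
    VIRAL_FORWARD_CAP = 1            # chats allowed once a message is highly forwarded

    def allowed_chats(times_forwarded: int) -> int:
        """How many chats one forward action may reach, given the message's history."""
        if times_forwarded >= HIGHLY_FORWARDED_THRESHOLD:
            return VIRAL_FORWARD_CAP
        return NORMAL_FORWARD_CAP

    def forward(times_forwarded: int, requested_chats: int) -> int:
        """Cap the number of chats actually reached; the excess simply is not sent."""
        return min(requested_chats, allowed_chats(times_forwarded))

    # A message already forwarded a dozen times can now reach only one chat at a time.
    print(forward(times_forwarded=12, requested_chats=20))  # -> 1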

WhatsApp’s example shows that changes to the structure of a network can have enormous effects. But it is more messaging app than social network. How would the same idea work on other networks?

Start with Facebook. Its algorithm should deprioritise items that are being shared abnormally quickly or widely, and give messages with emotional content less weight. It should move share buttons to more inconvenient locations. It can insert pop-ups asking if people are sure they want to share something. Twitter should adopt some of the same measures, as well as making the retweet button harder to find and abolishing “trending topics”. Instagram’s main feed has no native sharing function, and so remains mostly shielded from virality. But Instagram Stories introduced easy sharing and has, in fact, become a conduit for politics and misinformation. Removing the native share function from Stories would help. YouTube should ensure it includes more diversity of content in its recommendations. At the moment it simply offers up more – and more extreme versions – of whatever you’re looking at. The level of virality each network decides on need not be punitively low or set in stone. But it must be regulated by more than the demands of quarterly results.
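
As an illustration of what “deprioritise items that are being shared abnormally quickly” could mean in practice, here is a small Python sketch of a ranking score dampened by share velocity. It is not any platform’s real ranking code; every name, constant and formula is an assumption made for the example.

    import math

    def dampened_score(base_score: float, shares_last_hour: int,
                       typical_shares_per_hour: float = 50.0) -> float:
        """Lower an item's feed-ranking score as its share velocity rises above normal."""
        velocity_ratio = shares_last_hour / typical_shares_per_hour
        # log1p leaves ordinary items untouched but increasingly penalises viral spikes
        penalty = 1.0 + math.log1p(max(velocity_ratio - 1.0, 0.0))
        return base_score / penalty

    print(dampened_score(base_score=10.0, shares_last_hour=40))    # ordinary item: score unchanged
    print(dampened_score(base_score=10.0, shares_last_hour=5000))  # viral spike: heavily dampened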

An obvious counterargument to these suggestions is that valuable information would also be slow to spread. But that is not much different from the current situation. With very few exceptions, “good” information is always slower to spread than bad, because it is rarely engineered with emotional hooks and outrage triggers.

Such structural changes have the virtue of being fair to all types of information: a bunch of thirtysomething white men in Silicon Valley do not decide who says what to whom, or what is permissible speech. They do not prevent the flow of information; they only slow it down.

This prescription does not address what to do about Trump’s tweets, because it is not meant to; 2,999,999,999 other people in the world use US social networks, too. And it is better for social networks to institute this change than to have awkward and incompatible legislation thrust upon them in each of the countries in which they offer their services.

Nor do these changes mean that social networks should abandon their moderation efforts. Instead, the digital-distancing measures can help flatten the curve of misinformation, helping keep dodgy stuff within the capacity of moderation teams. How those moderators work is a separate debate.

It is true that WhatsApp has no ads and makes no money, and can therefore limit sharing without hurting itself. That is not the case for Facebook, Instagram, YouTube or Twitter. To be sure, such changes will cause engagement to decline and will slow revenue growth. But none of these tweaks are fatal to the business models of these networks. And here again we can learn from our current situation. Lockdowns and distancing carry enormous economic costs. We, as societies, accepted that the price was worth paying. For the good of the societies they serve, and indeed for their own long-term health, social networks should do the same.

  • Leo Mirani is a correspondent with the Economist
