Two articles in a recent issue of Science examine the phenomenon of fake news on social media. Vosoughi et al. (2018) demonstrate that false claims are retweeted faster, farther, and for longer than true claims on Twitter. Lazer et al. (2018) discuss the spread of fake news online and mull over some means of controlling it, or at least of limiting the damage it does.

There seem to be a few important pieces missing from Vosoughi et al. (2018) before we can declare that false information spreads more quickly than true information, and even more pieces missing before we can endorse some of the more draconian solutions proposed by Lazer et al. (2018).

Active vs passive spread

From what I understand of their supplemental material, Vosoughi et al. actually show that an impression (i.e. a sighting) of a false claim on Twitter is more likely to result in a retweet than an impression of a true claim. However, they don’t report whether a false claim creates more overall impressions than a true claim.

In other words, they only look at active spreading through retweeting, and not at passive spread through people simply seeing the tweet, making note of the information, and moving on. Neither do they look at what happens when a true claim becomes ‘fixed’, i.e. when it is added to a static information resource such as a news website, where it can then be seen by a very large number of people and enters a passive spreading phase.

If one considers that cultural systems are, at least in part, distributed information evaluation and vetting systems, it makes sense that false information is more actively spread than true information. When true information starts to spread in a system, it is rapidly vetted because it can be tested against reality. On a social network like Twitter, that means it can be seen by many, and once it is determined to be reliable by a critical mass of users, it stops being surprising and noteworthy, and stops being retweeted. It might, however, pop up on static sources. In the terms used by Vosoughi et al., the cascade stops. That doesn’t mean the information stops spreading through the system. It merely stops being actively disseminated and starts spreading more passively through impressions on static sources.

It is entirely possible that the rapid spread of false claims is driven by the need to allow more agents in the system to evaluate the information. If the truth status of a claim is unknown, the logical thing is to submit it to more evaluation through active spreading. That is not to say that this is a conscious strategy on the part of users. I think it may simply be an evolved feature of human cultural systems. The algorithm might go something like this: if you see an obviously true claim, make note of it. If you see an obviously false claim, ignore it. If you see a claim whose status is uncertain, actively spread it and listen for feedback. Claims that are surprising or novel but not obviously false might get the most attention.
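To make that hypothesized algorithm concrete, here is a toy sketch in Python. The category labels and the decision rule are my own illustrative assumptions, not anything measured or proposed by either paper:

```python
# A toy sketch of the hypothesized sharing heuristic.
# The three claim categories and the responses are illustrative assumptions.

def sharing_decision(claim_status: str) -> str:
    """Return the hypothesized response of a user to a claim."""
    if claim_status == "obviously_true":
        return "note"  # absorb the information quietly, no retweet
    if claim_status == "obviously_false":
        return "ignore"  # discard, no retweet
    # Surprising or novel claims of uncertain status get broadcast
    # so that more agents in the system can evaluate them.
    return "retweet_and_listen"

for status in ("obviously_true", "obviously_false", "uncertain"):
    print(status, "->", sharing_decision(status))
```

On this sketch, only the uncertain middle category generates active spread, which is exactly where one would expect the largest cascades to come from.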

One might very much want to know, for example, whether Mark Hamill really is dead (again). One might retweet the news, actively looking for vetting through broadcasting. This creates a cascade. Once Mark Hamill tweets a denial, it might very quickly get picked up by a static source and be seen by many more people than participated in the original false information cascade.

How do we find out?
As a first approximation, I would be interested in seeing whether false claims generate more impressions (i.e. are seen by more people) in addition to generating larger cascades of retweets. According to their supplemental materials, Vosoughi et al. have this information. A more thorough test would involve tracking rumours from their Twitter cascade stage to their fixation on a range of static sources and their transition to a passive spread phase, and seeing how many impressions they generate overall. If I am right, I would expect true claims to eventually generate more impressions even though they have smaller Twitter cascades (or other active spread phases).

Does true information become fixed more rapidly than false information? This would be consistent with the rapid attenuation of true information cascades on Twitter. If false information does not rapidly become fixed on static sources, it would also explain the need to further actively spread the rumour. Does true information generate more overall impressions than false information once it becomes fixed? This again would be consistent with a model in which active spreading serves to vet information, whereas the main means of dissemination is exposure to static sources.
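The arithmetic behind this prediction can be sketched with a toy model: cascade impressions plus passive impressions accumulated after fixation. Every number below is invented for illustration; nothing is calibrated to the Vosoughi et al. data:

```python
# Toy model: total impressions = cascade impressions + passive impressions
# accumulated on a static source after the claim becomes 'fixed'.
# All parameters are invented for illustration.

def total_impressions(cascade_impressions, fixation_day, daily_static_views,
                      horizon=30):
    """Impressions over a fixed horizon (in days) for one claim."""
    if fixation_day is None or fixation_day >= horizon:
        return cascade_impressions  # never fixed within the horizon
    passive_days = horizon - fixation_day
    return cascade_impressions + passive_days * daily_static_views

# A true claim: small cascade, fixed on day 2 on a widely read static source.
true_total = total_impressions(cascade_impressions=1_000, fixation_day=2,
                               daily_static_views=5_000)
# A false claim: cascade twenty times larger, but never fixed.
false_total = total_impressions(cascade_impressions=20_000, fixation_day=None,
                                daily_static_views=5_000)

print(true_total, false_total)  # 141000 20000
```

Under these made-up parameters the true claim ends up with seven times the impressions of the false one despite a cascade a twentieth the size, which is the pattern the passive-spread hypothesis predicts.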

Should we do anything about fake news?

If fake news and false claims are spread more actively because they are being more aggressively vetted than true claims, and because they are less present on static sources, we should let the process take its course. The fact that Vosoughi et al. (2018) were able to use a number of vetting websites and organizations as their source for false claims would suggest that the natural process is working.

Both Vosoughi et al. (2018) and Lazer et al. (2018) propose two kinds of measures. They propose the active restriction of the spread of false claims by the use of bots and greater corporate responsibility. They also propose a more passive kind of measure in the form of educating the public and preparing them to better discriminate between true and false claims. I think there is a real possibility that the public are already quite capable of doing this. Their main tool for doing so is the rapid spread of uncertain claims.

As a university professor (and an academic blogger), I obviously have no problem with the idea of preparing and equipping people to evaluate claims. That’s an important part of my job every day. However, before we start thinking of active measures to restrict the flow of information online because of some estimate (whose?) of its reliability, I think we need at least to know with greater certainty that false information and fake news are actually spreading faster than the rest.


Lazer DM, Baum MA, Benkler Y, Berinsky AJ, Greenhill KM, Menczer F, Metzger MJ, Nyhan B, Pennycook G, Rothschild D, Schudson M, Sloman SA, Sunstein CR, Thorson EA, Watts DJ, Zittrain JL (2018). The science of fake news. Science 359:1094–1096.

Vosoughi S, Roy D, Aral S (2018). The spread of true and false news online. Science 359:1146–1151.
