A Model of Online Misinformation

Daron Acemoglu, Massachusetts Institute of Technology, Asuman Ozdaglar, Massachusetts Institute of Technology, and James Siderius, Dartmouth College

We present a model of online content sharing where agents sequentially observe an article and decide whether to share it with others. This content may or may not contain misinformation. Each agent starts with an ideological bias and gains utility from positive social media interactions but does not want to be called out for propagating misinformation. We characterize the Bayesian Nash equilibria of this social media game and establish that it exhibits strategic complementarities. Within this framework, we study how a platform interested in maximizing engagement would design its algorithm. Our main result establishes that when the relevant articles have low reliability, and are thus likely to contain misinformation, the engagement-maximizing algorithm takes the form of a "filter bubble," creating an echo chamber of like-minded users. Moreover, filter bubbles become more likely when there is greater polarization in society and content is more divisive. Finally, we discuss various regulatory solutions to such platform-manufactured misinformation.
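The abstract does not report the paper's functional forms, but a toy simulation can illustrate the qualitative mechanism it describes: agents trade off an engagement benefit from sharing ideologically agreeable content against an expected penalty for being called out on misinformation, and a "filter bubble" feed that shows an article only to like-minded users raises the share rate per impression, especially for low-reliability content. Everything in the sketch below (the linear agreement payoff, the penalty and call-out parameters, the bubble width of 0.2) is an illustrative assumption, not the paper's model.

```python
"""Toy sketch of the sharing game from the abstract.
All functional forms and parameters are illustrative assumptions."""

import random


def share_decision(bias, slant, reliability,
                   benefit=1.0, penalty=2.0, callout_prob=0.5):
    """Agent shares iff the expected utility of sharing is positive.

    Assumed utility: an engagement benefit scaled by ideological agreement
    with the article, minus an expected penalty for propagating
    misinformation (probability the article is false times the chance of
    being called out).
    """
    agreement = 1.0 - abs(bias - slant)   # in [0, 1] for bias, slant in [0, 1]
    expected_gain = benefit * agreement
    expected_loss = penalty * (1.0 - reliability) * callout_prob
    return expected_gain > expected_loss


def share_rate(reliability, slant, n_agents=10_000,
               filter_bubble=False, seed=0):
    """Shares per impression under a uniform feed vs. a 'filter bubble'
    feed that routes the article only to agents near its slant."""
    rng = random.Random(seed)
    shares = impressions = 0
    for _ in range(n_agents):
        bias = rng.random()               # ideological bias in [0, 1]
        if filter_bubble and abs(bias - slant) > 0.2:
            continue                      # algorithm withholds the article
        impressions += 1
        if share_decision(bias, slant, reliability):
            shares += 1
    return shares / impressions if impressions else 0.0


if __name__ == "__main__":
    for rel in (0.9, 0.4):                # high- vs. low-reliability article
        uniform = share_rate(rel, slant=0.8)
        bubble = share_rate(rel, slant=0.8, filter_bubble=True)
        print(f"reliability={rel}: uniform feed rate={uniform:.2f}, "
              f"filter-bubble feed rate={bubble:.2f}")
```

Under these assumed parameters, the high-reliability article is shared by nearly everyone who sees it under either feed, while for the low-reliability article the filter bubble roughly restores a share rate near one by screening out dissonant users who would otherwise decline to share. This reproduces, in a stylized way, the abstract's claim that engagement maximization pushes toward filter bubbles precisely when content is likely to contain misinformation.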