Cailin O’Connor and James Owen Weatherall, The Misinformation Age: How False Beliefs Spread
One of the first appearances of the word “misinformation” is in “Poverties Patience,” a lengthy poem Arthur Warren wrote while in debtors’ prison in 1604. Warren cautions his readers about shipwrecks caused by the “sweete” sonnets sung by “Syrens,” imparting “misinformation” to unsuspecting sailors’ “minde.” And yet, the beguiling, destructive charms of misinformation have only increased in the last 400 years—so much so that in 2018, Dictionary.com declared “misinformation” its word of the year. To understand the reasons for the enduring power of the phenomenon, we may turn to The Misinformation Age: How False Beliefs Spread, the work of two philosophers of science, Cailin O’Connor and James O. Weatherall.
The authors begin their journey even earlier, in the fourteenth century, when travelogues and eyewitness accounts perpetuated the story of “the Vegetable Lamb of Tartary,” a “gourdlike fruit, within which could be found tiny lambs.” Almost three centuries passed before the myth was debunked by a Swedish naturalist in 1683. Some may consider the belief in the vegetable lamb or in the “horned hare” of the American Southwest—a creature with a penchant for whiskey—as evidence of our gullible ancestors’ ignorance. Yet, as O’Connor and Weatherall point out, we can easily find similar incidents of false belief—with more frightening consequences—in our so-called enlightened age: sorry tales woven on conspirators’ looms and bought and sold in the internet marketplace.
The task the authors set for themselves is to answer questions we all ponder: “Why are false beliefs so intransigent, even in the face of overwhelming evidence to the contrary? And…what can we do to change them?” The authors begin by dismissing the idea that human beings are rational creatures capable of making well-informed, logical decisions. This is a distorted view of humanity, they argue, one that fails to recognize that people are, first and foremost, social animals, with abilities that have evolved to keep them safe in groups. More troubling are the findings revealing how “individually rational agents can form groups that are not rational at all.”
Using scientific models and studies, the authors explore the “social character of belief.” Since most knowledge is socially produced, disseminated, and absorbed, we acquire our beliefs—including false ones—from social groups with which we identify. And the intense desire to belong to one’s preferred group is what often shapes our so-called “personal” opinions. Thanks to the “conformity bias” in human psychology, we tend to agree with the members of our clique—increasingly our preferred online social networks—to the point that we may even find others’ judgments more trustworthy than our own.
Among these judgments are those emanating from propagandists, whose methods for producing and propagating misinformation have “exploded in the past century.” These pernicious strategies are often subtle enough to go unnoticed. Aware that fraudulent results are an ineffective means of persuading consumers, sophisticated propagandists instead exploit what philosophers of science call “the Problem of Induction,” an argument first articulated in the eighteenth century by David Hume. Nothing in the natural world can be known for certain, even if a phenomenon occurs with perfect regularity. (The sun rises every day, but “tomorrow could be the day it explodes.”) The lack of absolute certainty about scientific discoveries allows the creation of doubt in the public mind, regarding climate change, for example, or the dangers of smoking tobacco. Exploring the long and sad history of the tobacco industry’s claims about the safety of its products, O’Connor and Weatherall highlight the industry’s strategy not to suppress science but to “fight science with more science”: funding and publicizing studies (in a process called “selective sharing”) that would create confusion by stressing the “uncertainty” in scientific conclusions linking smoking to cancer. In the words of an industry executive, “Doubt is…the best means of competing with the ‘body of fact’ that exists in the mind of the public.” Once the issue is labeled “controversial,” the propagandists have succeeded. Witness today’s “climate deniers,” whose efforts parallel those of the tobacco industry.
Focusing on scientific information, the authors examine the complex social and political elements in its production and dissemination, in which debates and conflicts among scientists often result in bitter polarization. Using various statistical models and figures, the authors reveal the difficulties scientists encounter when trying to reach consensus on the treatment of certain diseases. For example, a “decades-long battle” is still raging over the treatment of chronic Lyme disease: whether or not patients should receive antibiotics on a long-term basis. In such volatile situations, an “influencer”—even without expertise in the subject—can sway public opinion. Through “the weaponization of reputation,” such a person (or social group) can exert tremendous power over the actions of large segments of the population. The “real threat” to science, the authors argue, is “from those people who manipulate scientific knowledge for their own interests or obscure it from the policy makers who might act on it.”
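The flavor of the models the authors rely on can be made concrete with a small simulation. The sketch below is my own toy illustration in the spirit of such network-epistemology models, not the authors’ actual code or parameters: a community of doctors must decide between an established treatment with a known 0.5 success rate and a new one whose true success rate is 0.6, though each doctor is initially uncertain whether the new treatment is better (0.6) or worse (0.4). Only doctors who currently favor the new treatment try it, but everyone updates on the shared results; this is one way a community can lock itself into a consensus, correct or not.

```python
import random

P_GOOD, P_BAD, N_TRIALS = 0.6, 0.4, 10  # illustrative parameters, my own choice

def likelihood(successes, p):
    # binomial likelihood of the observed successes, dropping the
    # binomial coefficient (it cancels in Bayes' rule below)
    return p ** successes * (1 - p) ** (N_TRIALS - successes)

def simulate(n_agents=10, n_rounds=50, seed=1):
    rng = random.Random(seed)
    # each agent's credence that the new treatment is the good one
    credences = [rng.random() for _ in range(n_agents)]
    for _ in range(n_rounds):
        # only agents who already favor the new treatment run trials of it
        results = [sum(rng.random() < P_GOOD for _ in range(N_TRIALS))
                   for c in credences if c > 0.5]
        if not results:
            # no one experiments, so no new evidence ever arrives: the
            # community is stuck with the old treatment, rightly or wrongly
            break
        # everyone in this (fully connected) community sees every result
        # and updates by Bayes' rule
        for s in results:
            credences = [
                c * likelihood(s, P_GOOD)
                / (c * likelihood(s, P_GOOD) + (1 - c) * likelihood(s, P_BAD))
                for c in credences
            ]
    return credences

print(sorted(round(c, 3) for c in simulate()))
```

Even this stripped-down sketch exhibits the behavior the authors describe: because evidence only flows from those who already believe, an early run of unlucky trials can silence experimentation entirely and freeze the group in a false consensus.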
One solution the authors offer is to “abandon industry funding for scientific research.” Given the exorbitant cost of conducting research, industries can skew results by being selective about which projects they fund and by controlling the publication of findings. Yet if research is instead to be publicly funded, as the authors argue it should be, they remain silent on who would shoulder the cost and how such a shift would be implemented. They also propose that scientists wait to reach consensus before publishing the results of their experiments, since “dissent often sows confusion.” Furthermore, journalists should be careful in their efforts at fairness to “both sides” of a scientific debate, particularly in cases such as climate change, where the numbers and solid evidence offered by those holding one position far outweigh those of the opposing one.
The authors further recommend that we follow European countries that have created “task forces” designed to combat the spread of disinformation, in part by holding media companies responsible for content deemed “unlawful” (such as hate speech). Anticipating objections on “free speech” grounds, the authors point out that the goal is not to limit speech but to prevent it from “posing as something it is not.” The question one may ask, however, is how the agencies in charge of such task forces would be chosen and what tools they could implement.
The book was published in 2019, in pre-pandemic times and before recent political upheavals; the authors’ proposal that we “re-imagine” our democracy thus seems prescient. In such a society, “a well-ordered science…free of the corrupting forces of self-interest, ignorance, and manipulation” can operate. The vision may seem utopian, they admit, but it is an ideal to which we can aspire.
Unlike many treatises by philosophers, O’Connor and Weatherall’s study remains accessible to the general reader. The prose is free of jargon, with choice anecdotes and examples from a shared cultural history (the saga of the tobacco industry, among others) supporting what might otherwise seem like dry scientific data and models. Readers interested in the topic will also find the extensive notes and substantial bibliography valuable.

However, given the authors’ emphasis on the production and distribution of knowledge as a social phenomenon, I expected at least a basic analysis of the dynamics of online interactions that generate misinformation: how human emotions are manipulated on social media, for example, to spread falsehoods. I was also hoping the authors would take on the issue of trust and trust-building, not only between the general public and scientists but also between communities with conflicting viewpoints. More fundamental to the topic at hand—and perhaps beyond the scope of the authors’ discussion—is a rigorous study of the role of language, the crucial means by which misinformation is conveyed. Information literacy, after all, depends a great deal on developing the kind of deciphering skills that allow us to recognize logical fallacies. As the authors state, while algorithmic solutions are useful, “determining what is true is a difficult, time-consuming, and human-labor-intensive process.” That labor falls on all our shoulders, especially on educators, leaders who, like Odysseus, can steer the crew clear of the false seductions of the sirens’ songs.