What’s the Harm in Medical Misinformation?
Earlier this week I asked, “What should be done about medical misinformation, if anything? Why?” I noted that one faction wants to take action against it while another wants institutions to stay viewpoint-neutral and allow all perspectives to be aired.
Carol argues that the stakes are high:
Medical misinformation is contributing to America’s growing death toll, now past 900,000. It’s a matter of life and death. By the time many learn they’ve been lied to, they’re suffocating, and it’s too late to get a vaccine. Opinions can kill in this case, and those forcefully defending their right to their own opinion are risking not only their own lives but the lives of those they’re closest to. This is a case where the truth can save your life.
But Sebastian is skeptical that what constitutes medical misinformation can be rigorously defined and warns against the state infringing on free speech in a country where power regularly changes hands:
What is medical misinformation? It seems that truly fact-based remarks draw much less attention and ire than remarks like, “We should not get vaccinated!” But that remark is not medical misinformation. The “should” in the sentence signals that it is a normative claim (i.e., it refers to what someone ought to do). A normative claim cannot be “misinformation,” as it is removed from the black-and-white world of truth (insofar as such a world even exists). Perhaps some swath of the population disagrees with the statement, but having the state label normative claims—opinions—with which it disagrees as “misinformation” is a dangerous move, as it opens an avenue for the state to curtail basic liberties.
We need to consider whose opinions are being censored by whom. Today that may align with your beliefs. But power swings. As such, censoring today is a short-term, politically misguided move. While misinformation is “bad,” allowing its dissemination is a much lesser evil than deviating from an equilibrium in which every person’s right to free speech is equally guaranteed.
The fact that we even have the term “misinformation” is, in some sense, evidence that the “public square,” or the “marketplace of ideas,” is effectively self-policing. It seems much more sensible to let people duke it out, working their way to the truth—even if we incur some costs along the way—than have the state step in and define the truth for us.
In contrast, Lisa believes that speech restrictions to safeguard public health are reasonable even in a free society:
I think a complex country of freedoms, civic responsibilities, and laws can tolerate the notion that medical disinformation should not be allowed. It is dangerous, and it is disseminated not as a differing view on the outcomes of research but for purposes other than informing the public about the latest rigorously reviewed information. Healthy disputes over what medical research reveals and how to interpret it can prove synergistic, leading to advancements through more nuanced research and the asking of new questions. But this is very different from the unregulated spouting of disinformation by entities with no background equipping them to make these claims.
If we’ve adapted to laws that require us to wear seatbelts, to not smoke in indoor public spaces, to not sell alcohol to kids under 21, and to keep your hate-speak to yourself, I certainly think we can handle federal regulation around spouting equally dangerous medical disinformation. And it does not seem beyond a fact-checking committee to assess whether any legitimate research exists to support outrageous claims.
For context, there is no hate-speech exception to the First Amendment: a law requiring Americans to “keep your hate-speak to yourself” would be struck down as unconstitutional.
Varun argues that “instead of attempting to silence medical misinformation,” society might be better off amplifying “voices of reason,” which he sees as especially important for the innumerate:
For example, the current debate on vaccine misinformation misses the mark by framing it as group A (<1% of doctors) versus group B (>99% of doctors). That framing is technically not incorrect. But it also gerrymanders the >99% of doctors into a single group, despite their overwhelming numbers. Presenting the two as equally weighted groups unfortunately gives the <1% a significantly greater voice than their size warrants.
For every one fringe doctor’s face, we don’t see the faces of 99 reasonable doctors (we instead see one or two representatives of that group). In my opinion, the audience needs to literally count the overwhelming majority of doctors in favor of the vaccine. The gravity of 99 people, expressed in statistics and percentiles, is not fully processed by some people, especially those who aren’t fluent in basic math.
Not everyone pays attention in school, but just about anyone can count. I believe that people need and deserve to see 99 faces for the vaccine for every 1 face against it. When millions of people are refusing something that’s in their best interest, we as a society need to reevaluate our methods of messaging. Others may not process the same complexities that we can, and our messaging needs to be inclusive of them as well.
Dane wants to impose large monetary fines on any platform that streams Joe Rogan’s podcast:
What harms does medical misinformation create? Who is harmed? And who should bear responsibility for the harms? The Joe Rogan controversy highlights why these questions are so important: a man with a national following larger than the CDC’s freely asserts that young adults don’t need the vaccine, all manner of nonsense about ivermectin, and more. It’s not difficult to see the harm done (the correlation between this misinformation and critically ill ICU patients who avoided the vaccine or mistakenly believed that ivermectin would cure them), even if proving direct causation is nearly impossible.
How can Spotify justify paying one man $100 million while shifting all of the costs of what this foolish man says onto the rest of us? What should we do? One idea: assess steep fines for publishing medical misinformation. Fine Spotify, not Joe Rogan. Make the fines large enough to be financially painful on an earnings call for every podcast containing misinformation. If Rogan bolts to YouTube? Fine YouTube, or Amazon Web Services, or whichever platform recklessly shifts the cost of medical misinformation onto the public.
It should be straightforward enough to differentiate between free speech and outright misinformation. Rogan’s podcast with Robert Malone wasn’t an exercise in carefully representing what credible studies have concluded and then offering his own skeptical opinion (which would be his right). Over three hours it recycled nonsense without any reference to the evidence on vaccine efficacy. A steep fine doesn’t negate Rogan’s right to promulgate harmful misinformation; rather, it seeks compensation from Rogan’s publisher for the harms it created. Perhaps if Spotify paid enough fines, it would push Rogan to draw cleaner lines between exercising free speech and causing harm that costs everyone.
Leslie believes that the question of which approach to take to combat COVID-19 is so high stakes that it demands debate:
The history of medicine over the centuries is replete with egos, weird theories, deliberate harm, and true progress in understanding disease. Given the vast profits being made by pharmaceutical companies, and their less-than-stellar histories, what could possibly go wrong in giving them legal carte blanche? The vaccines are indeed saving the lives of countless people and should be available to countries worldwide. And at first, public health guidelines were voluntary and logical given what was known …
We do agree to limit much individual autonomy in this country for the public good, as in traffic rules and taxes and regulations on many aspects of life … though, a person has always had the right to choose or forgo any medical procedure involving that person's own body, with informed consent the gold standard. If we waive that right and standard or transfer them to any other body, I feel that we embark on a very slippery slope. A different viewpoint, prioritizing the public good over individual rights, would see these actions as necessary in a global crisis. We need to respect these divergent views, and we deeply need public debate on these issues.
Tamara speculates as to “why even smart, scientifically-literate people sometimes fall for pseudoscience”:
We live in a world of ever-expanding technical and medical mastery. It’s hard to overstate what a blessing this is for most suffering people. It poses moral problems, though, for ill people left behind, for whom medicine still cannot offer ways of “improving themselves” out of illness. In a prosperity-gospel-inflected meritocratic culture like ours, we treat overcoming illness as a self-improvement project. If you’re not well yet, the way you show you’re not malingering is by investing in “wellness,” and having run out of scientifically plausible ways to get well isn’t really treated as an excuse.
Being ill means having medical reasons for not being your “best self,” for needing others’ help and patience. In a meritocracy, it means proving you’re not malingering, showing you’re upholding your end of the social bargain by doing whatever it takes to get better and not burden others. “Whatever it takes” means pursuing scientifically implausible treatments once you’ve run out of scientifically plausible treatments. Even exceptionally intelligent, scientifically-literate people find themselves pursuing pseudoscientific treatment if mainstream medicine hasn’t helped them because pressure to do “whatever it takes” to improve yourself trumps pressure to “stick to the science.”
Cynthia thinks that misinformation is a problem caused by willful liars:
We’re very clear that free speech is a cornerstone of democracy and must be safeguarded. We’ve also clearly identified its boundary: when someone deliberately puts others in danger by falsely crying out “fire” in a crowded theater. We have been skating the edges between sharing ideas to expand perspective and ideas to evoke chaos for quite some time now.
The challenge is that there are a few truly malicious people for whom self-interest trumps every other value, and that spreading misinformation is a fast route to power and money. The people who actively promote misinformation for gain can be identified. Transparency around who benefits could become the equivalent of an “FBI Most Wanted” list for criminal misinformation. I would vote for funding some kind of journalistic task force.
My instinct is that most people who spread misinformation are earnestly mistaken.
Finally, two correspondents have advice for the government. Jack’s counsel: Be farsighted. “Avoid arguments that can be played as ‘flip flopping,’” he writes. “On Day One of the pandemic the message should have been that things would change as our body of knowledge evolved.”
Errol’s counsel: When you’re wrong, just say so.
The way to battle misinformation is to encourage people to acknowledge mistakes and to admit them publicly. The more honest people in medicine are about having been wrong in the past, the more inclined the public will be to listen to them in the present. Honesty is the best policy, and that includes admitting that they didn’t think masks worked when the pandemic started but later received information that changed that opinion.
Thanks for your contributions. I read every one that you send. See you next week.