In 2009, former Governor Sarah Palin took a provision of the Affordable Care Act (ACA) that allows Medicare to pay doctors to take time to discuss end-of-life issues with patients and spun it into the claim that Obamacare had “death panels.” It was like pouring gasoline on an already intense partisan battle over the law. Republican politicians began repeating the myth, and death panels were soon in the news. As part of a perfectly well-intentioned effort to inform the American people, the news media tried to provide facts, including debunking the lies about death panels. Pundits, reporters, and experts on TV news and in newspaper articles told the truth: there are no death panels in the law. Some of the facts may have reached the public, but what the public mainly “heard” was the repetition of death panels. In the end, the lie was amplified rather than debunked, and the news agenda was hijacked by the misinformation it was trying to correct.
For Palin’s misinformation ploy to succeed, it had to tap into something real (there is usually an underlying belief or nugget of fact that people can latch onto). In this case, it was far-right dislike of President Obama and the federal government. And it was true that doctors were supposed to have end-of-life conversations with older patients, something nearly everyone in the medical profession wanted; but those conversations were about respecting patients’ wishes, and notably, the provision allowing Medicare payment was not included in the final law.
One way to see the spread of the death panels myth is through our polls (the kind we now replicate in polls that track health misinformation). In 2010, a staggering 41% of the public said they believed the ACA had death panels. And the lie didn’t die away: in 2014, the same share said the law had death panels. In 2019, that number was 38%. By 2023, as the law grew more popular and Obama faded from the spotlight, the share of people who believed the lie had dropped to 8%. But the myth still had staying power: 70% said they didn’t know whether the ACA had death panels.
The Palin death panels example is eerily familiar by now. Most health misinformation today is initially generated by a few actors and, despite the impression of its ubiquity, is viewed by a relatively small number of people on social media. An even smaller number actively engage with it by posting about it or sharing it with others. When misinformation is then mixed into and amplified by political and news media coverage, it can spread, reach a significant number of people, and have a larger national impact.
A large, if somewhat unique, example of this multiplier effect is the COVID-19 vaccine. Former President Trump, Republican governors, and conservative media sharply divided the country along partisan lines over the COVID vaccine, making it a symbol of resistance to heavy-handed federal government measures and making not getting vaccinated an affirmation of personal freedom (“Understanding America’s Failure Against Coronavirus—An Essay by Drew Altman,” The BMJ). As a result, in our monthly Vaccine Monitor surveys throughout the pandemic, party affiliation was the strongest predictor of nearly every position we asked about on COVID. But the vast majority of health misinformation is not fueled by the president, nor does it capture the attention of the entire country. Vaccines are also a somewhat unique case, with a long history of well-organized anti-vaccination movements.
Another example is the “Meet Baby Olivia” video. Baby Olivia is a video about fetal development posted to Facebook in 2021 by the anti-abortion group Live Action. The American College of Obstetricians and Gynecologists said the video was “designed to manipulate viewers’ emotions rather than share evidence-based information about embryo and fetal development.” At its peak, in June 2022, the Baby Olivia video attracted 4,700 comments on Facebook, and estimates based on similar social media posts suggest that overall engagement in the form of likes, shares, and comments may have been three to four times that amount. That’s a large number for a town hall or campaign rally, but a tiny number on Facebook, both in terms of daily traffic and in terms of impact on the general public. But then Baby Olivia became a political issue, in a smaller version of the death panels dynamic. A bill was introduced in North Dakota, and then in nine more states, to require schools to show Baby Olivia or similar videos to students. The controversy surrounding the bills received extensive media coverage, and Baby Olivia grew from its initial niche presence on Facebook into a much larger phenomenon.
We may be overstating the impact that a lot of sensational, false, and ideologically motivated misinformation has on social media itself. One reason is that, however egregious it is, such misinformation often reaches only a small number of already like-minded people who seek it out. Far more important is that misinformation can spread from social media into politics, find prominent political agents, and attract the attention of the general media. Then it reaches a much larger number of people who are unsure of what is true and what is not, and who may be persuaded by it. Media fragmentation along partisan lines and the pursuit of clicks create perverse incentives that can amplify misinformation even further. The more outrageously sensational the misinformation is, the more attention it and its purveyors are likely to attract. Amplification by political and news media leads to new social media attention, creating a vicious cycle of misinformation.
The best solution is to prevent misinformation, and those who spread it, from gaining a foothold on social media in the first place. However, policing misinformation on social media is primarily the job of the platform companies, which have retreated from self-regulation under the cover of the tech recession. In the United States, the government does not have the authority to regulate misinformation on platforms, but a recent Supreme Court decision has allowed the government to continue communicating with platform companies about misinformation, at least for now.
News media will and should report compelling and timely health policy news, such as anti-Obamacare rallies or state laws mandating the viewing of anti-abortion videos in schools. As the death panels experience shows, it can be hard to report such stories without inadvertently elevating misinformation and those who spread it. But reporters’ and editors’ primary focus is their own beat and news stories, not confronting misinformation. Fact-checking in news organizations is organized as a separate function and product, and its purpose is narrower: primarily to hold candidates and officeholders accountable for false figures and statements. A few journalists make misinformation a beat or regular focus, but very few in the health field (and we have a few in our newsroom).
While this isn’t health misinformation, the aftermath of the assassination attempt on former President Trump showed the media struggling between denouncing extreme misinformation on social media and elevating it.
More fundamentally, the news media typically sees its job as reporting the news, not educating the public or addressing public knowledge gaps. How to navigate the minefield of misinformation is one of the issues we plan to address in collaboration with the national journalism community in our new Health Misinformation and Trust Initiative.
See all of Drew’s Beyond the Data columns