The scientific process, and how to handle misinformation

After Donald Trump was elected, in 2016, misinformation—and its more toxic cousin, disinformation—began to feel like an increasingly urgent social and political emergency. Concerns about Russian trolls meddling in American elections were soon joined by hoaxes and conspiracy theories involving covid-19. Even those who could agree on how to define mis- and disinformation, however, debated what to do about the information itself: Should Facebook and Twitter remove “fake news” and disinformation, especially about something as critical as a pandemic? Should they “deplatform” repeated disinfo spreaders such as Trump and his ilk, so as not to infect others with their dangerous delusions? Should federal regulations require the platforms to take such steps?

After coming under pressure, both from the general public and from President Biden and members of Congress, Facebook and Twitter—and, to a lesser extent, YouTube—started actively removing such content. They began by banning the accounts of people such as Trump and Alex Jones, and later started blocking or “down-ranking” covid-related misinformation that appeared to be deliberately harmful. Is this the best way to handle the problem of misinformation? Some argue that it is, and that “deplatforming” people like Trump—or even blocking entire platforms, such as the right-wing Twitter clone Parler—works, in the sense that it quiets serial disinformers and removes misleading material. But not everyone agrees.

The Royal Society, a scientific organization based in the United Kingdom, recently released a report on the online information environment which states that “censoring or removing inaccurate, misleading and false content, whether it’s shared unwittingly or deliberately, is not a silver bullet and may undermine the scientific process and public trust.” Frank Kelly, a professor at the University of Cambridge who chaired the report, wrote that uncertainty is part of the nature of science, especially when it is grappling with an unprecedented medical crisis like the pandemic. “In the early days of the pandemic, science was too often painted as absolute and somehow not to be trusted when it corrects itself,” Kelly wrote, “but that prodding and testing of received wisdom is integral to the advancement of science, and society.”



Early last year, Facebook and other social platforms said they would remove any content suggesting that the virus that causes covid-19 came from a laboratory, since this was judged to be harmful misinformation. Later, however, a number of reputable scientists said the possibility couldn’t be ruled out, and the platforms were forced to reverse their initial policies. Blocking or removing content that is outside the scientific consensus may seem like a wise strategy, but it can “hamper the scientific process and force genuinely malicious content underground,” Kelly wrote, in a blog post published in conjunction with the Royal Society report.

The report notes that, while misinformation is commonplace, “the extent of its impact is questionable.” After surveying the British public, the Royal Society concludes that “the vast majority of respondents believe the covid-19 vaccines are safe, that human activity is responsible for climate change, and that 5G technology is not harmful.” In addition, the report states that the existence of echo chambers “is less widespread than may be commonly assumed, and there is little evidence to support the filter bubble hypothesis (where algorithms cause people to only encounter information that reinforces their own beliefs).”

What should platforms like Facebook do instead of removing misinformation? The report suggests that a more effective approach is to allow it to remain on social platforms with “mitigations to manage its impact,” including demonetizing the content (by disabling ads, for instance) or reducing distribution by preventing misleading content from being recommended by algorithms. The report also suggests that adding fact-checking labels could be helpful, something that both Facebook and Twitter have implemented, although there is still some debate in research circles about whether fact-checks can actually stop people from believing misinformation they find on social media.


Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.