Why a Storm of Misinformation May Loom in 2024

From a Washington Post analysis by Cristiano Lima headlined “Why a ‘perfect storm’ of misinformation may loom in 2024”:

A majority of researchers expect global misinformation to worsen in 2024, with politicians and social media posing the most serious threats, according to a new survey.

The poll, which surveyed almost 300 researchers across 50 countries, found that only a small fraction — 12 percent — think the information environment in their countries will improve next year, while 54 percent said it will deteriorate.

The International Panel on the Information Environment (IPIE), the Swiss-based nongovernmental organization behind the survey, said the results demonstrate “significant pessimism” among experts.

The findings arrive as platforms including Meta and Elon Musk’s X roll back policies and scale back teams dedicated to combating misinformation, ahead of major elections in the United States, Europe and India.

The expected dynamic could put tech companies’ defenses under new levels of strain as some of the world’s biggest democracies enter critical election periods.

“This has the potential to create a perfect storm next year,” Philip Howard, a professor at the University of Oxford and the chair of the IPIE, said.

Howard said that recent staffing cuts to tech companies’ moderation forces pose a huge risk.

“Those teams are the ones that hold the line against the most ridiculous misinformation, the worst of the content, and they’re just not there anymore in a consistent way,” he said.

New global content regulations, such as the European Union’s landmark Digital Services Act, seemingly haven’t quelled researchers’ fears, he said.

“Even though there are some reasonable policies being implemented by the [European] Commission and a couple of other governments around the world … it’s not clear that the social media firms will comply,” Howard said.

According to the survey, two-thirds of researchers said that the “inability to hold social media companies accountable” over botched content moderation efforts posed a significant hurdle, while 55 percent cited poorly executed automated moderation as another.

While a third of researchers globally flagged social media companies as one of the most serious threats to healthy discourse, those who specialize in regions with democratic governments said politicians themselves posed a bigger risk.

Howard said the prevailing concern is over politicians picking up communications tactics from authoritarian regimes and using them to try to gain an edge in democratic elections.

“The concern is that politicians themselves, while they’re running for office, will use some of these techniques because they’re just so eager to get elected,” he said.

One area where there was broad agreement among researchers was on the need for greater access to data from platforms about their content moderation efforts.

Seventy percent of those polled said that lack of access to research data posed “the major barrier to improving our understanding of the global information environment.”

“Without that kind of validation, policymakers really don’t have the evidence they need to know … whether policy needs to be more aggressive or more invasive or whether the firms can act responsibly on their own,” Howard said.
