January 4, 2024

The Pope in a Balenciaga puffer coat. An explosion at the Pentagon. Donald Trump fleeing the police.

None of these things happened last year. But, thanks to generative artificial intelligence, realistic images of them went viral.

Generative AI has added a new threat to the information ecosystem. Although the technology, easily accessible to anyone for the first time in 2023, did not have an outsized impact on misinformation last year, I worry about where it will take us.

In 2024, billions of people in dozens of nations will head to the polls for the biggest election year in history. Gearing up for that — and the inevitable falsehoods that come with every election — here is a rundown of the biggest trends in misinformation in 2023, and what to expect this year.

Generative AI: Threat, overblown panic or solution to the misinformation problem?

In early December, I attended the Google News Initiative’s Trusted Media Summit APAC, where I had two conversations dozens of times.

The first: Generative AI is going to supercharge the spread of misinformation, the tools to identify it are inadequate and fact-checkers are under-resourced to fight it at scale. The second: The AI panic is overblown, and fact-checkers should start using the technology themselves (Full Fact, Chequeado, Aos Fatos and others have AI tools in place now).

I think we’re somewhere in between. The technology underpinning generative AI is advancing quickly, with Google recently announcing Gemini, a new AI model that allegedly outperforms the current standard-bearer, OpenAI’s GPT-4, the model behind ChatGPT.

With the technology evolving so fast, we’re bound to see more AI misinformation this year, especially with misinformers looking to swing elections. Bad actors won’t necessarily be trying to fool anyone with AI-generated images or videos; they’ll aim to create enough poor-quality content — such as these images of a pro-Israel rally featuring 3- and 4-armed people on balconies — to make social media users skeptical of anything they see.

It’s similar to the “flood the zone” tactic touted by U.S. political operatives in past elections.

By the end of this year, people might not trust anything they see — or hear. Audio deepfakes will be the main source of generative AI misinformation, since they’re the easiest kind to make and have already proven impactful in some countries.

But I don’t see generative AI overtaking cheap fakes, dumb memes or political statements as the primary vectors for misinformation. And I wholeheartedly agree with my colleagues at the Trusted Media Summit that fact-checkers and newsrooms should suspend their fear of generative AI and experiment with tools like ChatGPT, Bard, Bing or Claude.

X inches toward death, as crowdsourced fact-checking gets attention

X was already a misinformation disaster before Elon Musk restored Alex Jones’ account in mid-December.

Musk increased the incentive to lie for clout on the platform when he elevated posts from anyone who pays for a blue checkmark — and promised them advertising revenue on viral tweets. The results have been disastrous: The amount of misinformation about the Israel-Hamas war on the platform is the worst I’ve seen for any topic.

Musk spent much of 2023 touting crowdsourced fact-checking as a solution to the problem. It’s not.

Community Notes allows roughly 270,000 users to add “context notes” to misleading posts. It has the potential to be a powerful new tool to help trust and safety teams flag misinformation on social platforms. I liked it before it was cool.

But it doesn’t scale. And it cannot supplant the work of trust and safety teams.

Notes have appeared on about 35,000 tweets since 2021 (compare that with the millions of tweets posted every day). That’s because, for notes to become public, users from diverse ideological backgrounds must agree they’re useful, accurate and contain high-quality sources, among other criteria. That’s no easy feat in this polarized environment, which is why fewer than 9% of notes have gone public.
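
For the technically curious, here’s a minimal sketch, in Python, of the bridging idea behind that requirement: a note goes public only when raters who usually disagree with each other both find it helpful. The thresholds and the viewpoint score here are made up for illustration; X’s actual open-source scorer estimates rater viewpoints by factoring the full rating matrix.

```python
# Simplified illustration of bridging-based consensus -- the idea behind
# Community Notes publication. NOT X's actual algorithm: the real scorer
# uses matrix factorization over all ratings; this sketch only checks
# for agreement across a toy viewpoint score.

from dataclasses import dataclass

@dataclass
class Rating:
    rater_viewpoint: float  # -1.0 to 1.0, estimated from past rating behavior
    helpful: bool           # did this rater mark the note "helpful"?

def note_goes_public(ratings: list[Rating],
                     min_per_side: int = 5,
                     threshold: float = 0.66) -> bool:
    """Publish a note only if raters on BOTH sides of the viewpoint
    spectrum independently rate it helpful at a high rate."""
    left = [r for r in ratings if r.rater_viewpoint < 0]
    right = [r for r in ratings if r.rater_viewpoint >= 0]
    if len(left) < min_per_side or len(right) < min_per_side:
        return False  # not enough cross-viewpoint raters yet
    left_rate = sum(r.helpful for r in left) / len(left)
    right_rate = sum(r.helpful for r in right) / len(right)
    # Approval must hold within each camp, not just on average:
    # a note one side loves and the other shrugs at stays hidden.
    return left_rate >= threshold and right_rate >= threshold
```

Requiring approval inside each camp, rather than a simple overall majority, is exactly what keeps the published-note rate so low in a polarized environment.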

Further, Musk himself has claimed state actors are able to manipulate note status, elevating questionable notes to public view or unpublishing legitimate ones, and I’ve heard similar claims from fact-checkers in Brazil.

I’ve long advocated for human intervention that could quickly surface legitimate notes regardless of ideological agreement (facts are facts regardless).

After Jones’ reinstatement, I’m fairly certain X will go the way of Myspace before 2024’s end. And with it, so will go an incredibly interesting — yet woefully underutilized — crowdsourced fact-checking experiment.

Israel-Hamas war proves to be a misinformation nightmare

Speaking of X, the platform perpetuated some of the most egregious misinformation coming out of the Israel-Hamas war. And it was a lot.

My colleague Angie Drobnic Holan, director of the International Fact-Checking Network, has a great write-up of how fact-checkers have responded to the flood of misinformation around the topic. And I wrote up some tips for anyone to navigate social media factually on their own.

The war served as a microcosm of the biggest trends in misinformation last year:

  • We’ve seen cheap fakes outshine images and videos created with generative AI, although a few examples of the latter have gone viral.
  • The false narrative that those injured or killed in the war are “faking it” has been promoted endlessly.
  • X has relied on Community Notes to address misinformation — but it has not functioned at scale and has itself surfaced falsehoods.
  • Video game footage was commonly packaged as real war footage.
  • State actors have latched onto the conflict to push false narratives about other conflicts, like the Russia-Ukraine war.

It’s worth watching the misinformation surrounding the Israel-Hamas war as we move into 2024, because it will likely foreshadow what fact-checkers will be fighting across the world.

Disinformation researchers under threat

Clearly, 2024 is shaping up to be a scary year for misinformation. Unfortunately, lawmakers are using subpoenas, records requests and the accusation of “censorship” to stifle academics who research falsehoods and the information ecosystem.

U.S. Rep. Jim Jordan led the charge. This, along with social media platforms’ cuts to trust and safety staff and to collaboration with researchers, means fact-checkers and journalists will have fewer sources to help fight misinformation in 2024.

And I was shocked when The Harvard Crimson broke the news that disinformation expert Joan Donovan was being forced out of the Shorenstein Center on Media, Politics and Public Policy. Donovan, now at Boston University, recently accused Harvard University of pushing her out at the behest of donors with ties to Meta.

Still, I’m hopeful knowing how determined academics are to keep up their work in 2024.

“It’s clear to me that researchers and their institutions won’t be deterred by conspiracy theorists and those seeking to smear and silence this line of research for entirely political reasons,” Kate Starbird, co-founder of the University of Washington Center for an Informed Public, told The Washington Post.
