Factually is a newsletter about fact-checking and accountability journalism, from Poynter’s International Fact-Checking Network & the American Press Institute’s Accountability Project. Sign up here.

Reflecting on fact-checking in 2019

For fact-checkers around the world, 2019 was a big year.

In October, the Duke Reporters’ Lab counted more than 200 fact-checking projects around the world. Facebook continued to grow its partnership with such organizations, hosting its first fact-checking summit at the company’s Menlo Park, California, headquarters. And misinformation continued to grow as a global problem.

Each year, the IFCN makes a series of predictions for how fact-checking and misinformation will change. As we say goodbye to 2019, we wanted to check on how the predictions we made last year held up.

Below are the five predictions we made in December 2018, followed by reflections on the news of the year.

  1. We’ll see more credibility scores deployed — and possibly misfiring

Maybe it’s because we kind of stopped paying attention to these tools, but this didn’t seem to be as big a trend in 2019 as we thought.

It’s true that projects like NewsGuard, which rates the reputability of websites with its trademark nutrition labels, have grown over the past year. Newer projects, such as the Global Disinformation Index, are working on a similar approach. And there were some misfires; the IFCN even had its own snafu when it tried to compile a database of misinforming websites.

But overall, it seems that the industry has somewhat backed off a labeling approach to anti-misinformation efforts this year. And that may be for the best.

  2. More platforms will take active measures to reduce the reach of misinforming content

This was certainly one of the biggest fact-checking stories of 2019 — but it comes with some caveats.

The biggest moves platforms made to combat misinformation came in the form of targeting specific kinds of content. In March, Facebook announced that it would remove groups and pages that share anti-vaccine misinformation from its recommendations — a move that came after similar actions from Pinterest and YouTube. Two months later, Twitter followed suit.

There were other, broader efforts to limit the spread of misinformation on social media. Facebook expanded its fact-checking partnership to Instagram, and YouTube started surfacing fact checks in search. More recently, Facebook announced that a team of part-time contractors will help expedite its fact-checking process. (Disclosure: Being a signatory of the IFCN’s code of principles is a necessary condition for joining the project.)

But at the same time, our explicit prediction — that both Twitter and WhatsApp would follow in Facebook and Google’s footsteps — did not really come to pass.

Aside from Twitter’s action on anti-vaccine misinformation, neither it nor WhatsApp did much to tackle misinformation head-on in 2019. WhatsApp’s efforts, namely limiting the number of accounts to which users can forward messages, may actually be ineffective. And while Twitter banned political ads in October, the company has still not worked with fact-checkers in a way that’s comparable to Facebook.

  3. Misinformers will continue to retreat to smaller groups and platforms where it’s harder to measure content

If platforms taking more action against misinformation was the biggest story of 2019, this was a close second.

While hoaxes have long thrived worldwide on platforms like WhatsApp, this year we saw misinformation spreading on more apps in the United States. In the weeks after back-to-back shootings in El Paso, Texas, and Dayton, Ohio, false shooting rumors spread on platforms like Snapchat and iMessage. Facebook CEO Mark Zuckerberg said he wants to make the platform more ephemeral, which experts say could give misinformers the cover they need to spread their false messages.

Aside from the proliferation of misinformation in smaller groups on mainstream platforms, some misinformers have been kicked off those platforms altogether. In June, YouTube banned white supremacist content, removing thousands of channels in the process. Twitter has continued to suspend accounts around the world for manipulating its platform.

Those moves, while welcome for many misinformation experts, have also resulted in the growth of fringe platforms like Gab and BitChute. Misinformers don’t have as much reach on those fringe platforms, but only time will tell if they will have any sort of impact on mainstream discourse.

  4. The EU will take center stage in the battle against online misinformation

The EU has certainly made some moves to combat the spread of misinformation on the continent. But it’s still unclear what result those moves will have in the long run.

As it promised last year, the EU did set up an action plan to combat disinformation, which requires monthly reports from the platforms and an early alert system for member states. The governing body has also regularly pressured tech companies to live up to its voluntary code of practice, which lays out certain self-regulatory steps the companies can take to limit their role in the dissemination of misinformation.

However, by the EU’s own admission, there is still more work to be done. Meanwhile, other government attempts to regulate online misinformation have been called into question for their use against journalists and activists.

  5. Videos will become an even more fraught source of evidence

This was an understatement.

While alarm over the potential havoc of deepfake videos continued into 2019, we saw the very real rise of two strains of misleading videos: “cheapfakes” and “dumbfakes.”

In an example of the former, an edited video of U.S. House Speaker Nancy Pelosi (D-Calif.) went viral on Facebook in May. It wasn’t particularly sophisticated (someone just slowed down Pelosi’s speech to make it look like she was intoxicated), but the video set off an entire news cycle about visual misinformation.

More recently, a pair of comedians created a video that purportedly showed staffers for Democratic presidential candidate Michael Bloomberg dancing to a Maroon 5 song. The video was satirical, but it fooled pundits across the political spectrum and showed how easy it is to trick people with a bogus Twitter bio.

In short, our prediction that the deepfake threat would continue to be mostly theoretical was correct. But there were still plenty of misleading videos to be had in 2019 — and the same will likely hold true for 2020.

. . . technology

  • Lead Stories wrote about how fake profiles of people who do not exist are being used on Facebook to influence the 2020 U.S. elections.
    • Can you easily tell who is real and who is fake on the internet just by looking at profile pictures? Play WhichFaceisReal.com, a project developed by the University of Washington.

. . . politics

  • A political consultant has opened shop in Washington to help campaigns fight disinformation, The New York Times reported. Examples of how candidates are vulnerable to misinformation abound, the story said.
    • “Still, few politicians or their staffs are prepared to quickly notice and combat incorrect stories about them, according to dozens of campaign staff members and researchers who study online disinformation,” wrote Davey Alba.
  • Did the U.K. election show public resistance to fact-checking? Jen Birks, an assistant professor in Media and Political Communication at the University of Nottingham, is researching this and, based on preliminary results, says that fact-checking still faces challenges in popular reception. In 2017, she analyzed fact-checkers’ activity and engagement on Twitter during the campaign period and wrote a book about it.

. . . the future of news

  • Starting this week, Facebook will have a team of community reviewers working in the United States. The group, currently being assembled by Appen, will analyze “obvious online hoaxes” but is not supposed to flag false information on the platform. The final call about the veracity of a piece of content will still be made by professional fact-checkers, Facebook said.
  • The fight against fake news is inextricably tied to the need to save local news, Emily Bell of Columbia University’s Tow Center wrote in the Guardian, citing how a U.K. newspaper editor’s thoughtful response to a reader also came packaged with a “heap of dubious advertising and problematic content.”
    • “For a dwindling number of journalists to be paid to dispel the social media ‘dust cloud of nonsense,’ as Barack Obama once called it, their publications have to rely on the services of companies such as Facebook and Taboola, who make money from having disgracefully low or non-existent editorial standards themselves.”
  • “Whenever I scroll through my mum’s Facebook newsfeed — full of viral hoaxes — a part of me dies inside,” wrote Anna Levy for ABC Life in Australia. Perhaps reading her guide to social media isn’t such a bad idea for people who will encounter parents, grandparents and others during this holiday break.

Since 2015, an image of a man holding a decapitated head has circulated on French social media along with a photo of the same individual wearing jeans and a hoodie in a busy shopping center.

The caption under the two images claims that the first photo shows a Syrian member of ISIS somewhere in the Middle East and that the second was taken in Europe, after the terrorist became a refugee.

AFP Factual’s team used a reverse image search, found the man in the images, went through many of his Facebook posts, matched locations using Google Earth imagery, and interviewed his lawyers to finally debunk the story.

The individual shown in both photos is not a former ISIS member. He was actually a member of an Iraqi militia who fought against ISIS.

He is also not a refugee in Europe. In 2015 he was arrested in Finland and ultimately given a 16-month suspended sentence after being convicted of committing a war crime by desecrating the body of an ISIS combatant and posting the image on Facebook.

What we liked: AFP’s fact-check was produced by students in their first year of the Sciences Po journalism school, under the supervision of three professional fact-checkers. It is another indication that media literacy programs work and should be added to university curricula.

  1. The Washington Post’s Fact Checker has updated its database of President Trump’s falsehoods. It counted his false or misleading claims at 15,413 as of Dec. 10. Also in year-end roundups, PolitiFact identified its “lie of the year.” And The Post identified its “biggest Pinocchios.”
  2. The Los Angeles Times profiled the Taiwan FactCheck Center and its fight against a disinformation campaign by China to stoke division and undermine democracy.
  3. Accounts linked to Russia have been posting and promoting fake stories on the community section of BuzzFeed and other sites, the BBC reported.
  4. Newsrooms will formalize practices around combating disinformation, news philanthropist and Craigslist founder Craig Newmark said in his 2020 Nieman Lab prediction.
  5. A new study says “fake news” is costing the global economy $78 billion a year, ZDNet reported.
  6. Top state election officials across the United States have launched a nationwide campaign to fight misinformation around elections.
  7. Billionaire Facebook investor Peter Thiel is at the center of divisions within the company over political issues, The Wall Street Journal reported, including its decisions not to fact-check political advertisements.
  8. Last week we wrote about how anti-vaxx groups are using new strategies to dodge social media policies. Truthmeter, in Macedonia, found connections between these groups and the Russian Orthodox Church.
  9. Eighty-one projects are now being considered for 10 grants as part of the Fact-Checking Innovation Initiative.
  10. Baybars Örsek, the IFCN’s director, wrote an end-of-year report and talked about 2020. You might want to take a look at this article.

That’s it for this week — and for this year! Feel free to send feedback and suggestions to factually@poynter.org.

The Factually team is going to take a year-end break, but we’ll be back in your inboxes Jan. 9 — perhaps with some fact-checking predictions for 2020. Happy holidays!

Daniel, Susan and Cristina
