Factually is a newsletter about fact-checking and accountability journalism, from Poynter’s International Fact-Checking Network & the American Press Institute’s Accountability Project. Sign up here.

The Factually forecast for 2020

Happy New Year! With the start of 2020, we’ve been thinking about what the year might bring to the misinformation and fact-checking world.

Of course, hoaxers, manipulators and propagandists will come up with new techniques. And we also expect a new intensity of misinformation, especially around the U.S. election. There will still not be enough fact-checkers in the world to debunk it all. But beyond that, what can we expect for the coming year?

Here’s our Factually Forecast for 2020 – one prediction from each of us.

Cristina: Anti-misinformation regulations will grow – and in some places show themselves for what they really are. 

In late 2019, Thailand and India launched their national anti-fake news centers. Singapore approved its Protection from Online Falsehoods and Manipulation Act. In the meantime, governments in other parts of the world have been quietly observing the impact of these regulations, some with an eye toward replicating them.

Groups that advocate freedom of speech are watching too – more nervously.

So far, none of these measures has been shown to measurably reduce the level of falsehoods – and maybe that is not even the goal. In 2020, we might learn what the real motive of these efforts is: to rein in so-called “fake news,” or to inhibit free speech in the name of reining it in.

Early indications are worrisome. In November, a person was arrested in Thailand for having shared links to “obscene websites that came with advertisements for diet supplement products.” The content that spread through groups on Line (a WhatsApp-like app) was considered “fake news.” In Singapore, the government told Reuters it was just “a coincidence” that the first few cases brought under the nation’s new anti-fake news law were against political figures and parties.

The IFCN maintains a guide to regulation attempts in more than 50 countries and will continue to monitor them closely in 2020.

Daniel: Contact with tech companies will become the norm — not an anomaly — for fact-checkers.

If 2019 taught us anything, it’s that technology companies like Facebook are becoming increasingly reliant on the efforts of fact-checking organizations to weed out misinformation on their platforms. I expect this trend will continue in 2020, normalizing the relationship between the two industries.

Last year, Facebook expanded its partnerships with fact-checkers worldwide and held its first-ever Fact-Checking Partner Summit. When the project was announced in late 2016, it was major news — Facebook was letting fact-checking outlets sift through potentially false posts, then applying a penalty to pages that repeatedly shared misinformation.

Now the collaboration seems like business as usual for the more than 50 organizations that are a part of the partnership. And in 2020, more companies will try to match or beat Facebook’s effort. (Disclosure: Being a signatory of the International Fact-Checking Network code of principles is a necessary condition for joining the project.)

Take TikTok, for example: It recently announced that it would prohibit the distribution of misinformation about elections or other civic processes. Twitter made waves in October when it announced that it would ban all political ads, seemingly in response to Facebook’s decision not to apply fact-checking penalties to false ads from politicians. Spotify recently made a similar move. These efforts weren’t a direct result of partnerships with fact-checkers, but they were almost certainly informed by their work against misinformation.

Expect more of this in 2020.

Susan: The fight against misinformation will become more commercialized.

Smart business people never saw a crisis they couldn’t turn into an opportunity. So it goes with misinformation. Much of the effort toward battling misinformation has been housed in nonprofit organizations and academia. Of course, some for-profit news organizations like The Washington Post have fact-checkers. But this year, I think more commercial entities will join the fight.

We’ll see more companies like the social media intelligence and news agency Storyful (owned by News Corp.) that help other organizations detect and call out fake content. Public relations shops will create special units to guide the targets of misinformation. The New York Times last month profiled a political consultant who is doing just that.

We can expect to see more people making a business out of battling misinformation, especially because the opposite is also true – there is more “disinformation for hire,” as BuzzFeed recently reported.

We promise that next year, the Factually team will assess how our predictions panned out, just as we did this year. Meanwhile, send us your own predictions (in less than 150 words) at factually@poynter.org. We might publish some of them here.

. . . technology

  • Facebook announced that it is banning the publication of deepfake videos ahead of the 2020 election in the United States. The policy covers videos that are “edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person,” according to the tech company.
    • After Facebook’s announcement, Witness, a nonprofit that focuses on the use of video to advocate for human rights, published a list of pros and cons of the policy. Among the criticisms: “Although deepfakes are an emerging threat, there is currently a much bigger problem with ‘shallowfakes,’ media manipulation done with more simplistic techniques like editing or mislabelling or misrepresenting a video or photo.”
  • Computer-generated “people” will enter debates on political issues, according to a piece in The Atlantic from Bruce Schneier, a fellow and lecturer at Harvard’s Kennedy School.
    •  “AI-driven personas will be able to write personalized letters to newspapers and elected officials, submit individual comments to public rule-making processes, and intelligently debate political issues on social media,” he wrote.

. . . politics

  • U.S. fact-checkers are already catching mis- and disinformation about Iran and the fallout from the new conflict. FactCheck.org dissected tweets from Vice President Mike Pence saying that General Qassem Soleimani helped 9/11 hijackers travel through Iran to Afghanistan, noting that the 9/11 Commission report does not implicate Iran in the hijackers’ travel. For PolitiFact, Daniel explained how conscription works and rated False a post claiming that gay people and felons wouldn’t be eligible for a U.S. draft, if one were enacted. Lead Stories and Snopes reported that an image showing former President Barack Obama shaking hands with Iranian President Hassan Rouhani was digitally manipulated; the picture has been circulating online since at least 2013.
  • A video edited to make Joe Biden look racist went viral on Twitter. PolitiFact debunked it, and Washington Post opinion writer Greg Sargent wrote about the episode as a cautionary tale of what we can expect in the coming year.
    • “It is simply incredible that anyone in the business of informing people would circulate a video like this before verifying the full context,” he wrote. “Have we really learned nothing in the past few years? One hopes this episode will be taken as a cautionary tale of what’s coming.”

. . . the future of news

  • As we’ve noted so often here, crises often bring a new onslaught of misinformation. On Poynter.org, Cristina offered tips on how people can use their phones to spot and debunk false images surrounding the U.S. conflict with Iran.
    • To prove the point once again, The Australian reported how bogus images about the bushfires on the continent are going viral on social media.

Speaking of predictions, the coming year will see “a smarter conversation taking shape about how facts and fact-checking matter in democratic politics,” wrote Lucas Graves, a researcher who teaches journalism at the University of Wisconsin, in Nieman Lab.

On Sunday, The New York Times tweeted that “a crowd of people stretching over 30 kilometers, or almost 20 miles, poured out onto the streets of Ahvaz,” in Iran, to mourn the death of General Qassem Soleimani. Iranians who seemed to know the area took to social media to challenge the claim, calling it an exaggeration.

“The total radius of Ahvaz city is not 30 km,” wrote one of them. “The entire city of Ahvaz is not 30 km in any direction!” added another.

When there is doubt, fact-checkers act. So Factnameh.com, a fact-checking organization launched by Iranians living in Canada, dug into the story to see who was right.

First, the fact-checking team got a map of Ahvaz, a city of 1.2 million people (according to the latest census) in southwestern Iran, and overlaid all the traffic restrictions the government had imposed for Soleimani’s funeral. That told the fact-checkers exactly where the crowd would be.

Then, the fact-checking team measured each avenue and street using Mapchecking, an online crowd-size estimation tool. It concluded that the crowd could not have stretched more than 3 kilometers (about 1.9 miles) in Ahvaz because there was simply no room for it in the city.

To double-check its work, Factnameh’s team took another step: It compared aerial images from the actual event to computer simulations generated by the Crowd Safety and Risk Analysis site. From that, it concluded that between 58,000 and 87,000 people (allowing a margin of plus or minus 20%) mourned Soleimani in Ahvaz on Sunday – far too few to fill 30 kilometers of streets the way the pictures supposedly showed.
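For readers curious about the arithmetic behind this kind of estimate, here is a minimal sketch in Python. It is not Factnameh’s actual method or data – the route dimensions and crowd density below are hypothetical placeholders – but it shows how an area-times-density calculation with a plus-or-minus 20% margin, the kind of approach tools like Mapchecking rely on, produces a bounded crowd estimate.

```python
# Back-of-the-envelope crowd estimate in the spirit of tools like Mapchecking:
# multiply the occupied area by an assumed crowd density, then apply the same
# plus-or-minus 20% margin Factnameh allowed. The route dimensions and density
# below are hypothetical placeholders, not Factnameh's actual measurements.

def crowd_estimate(route_length_m: float, route_width_m: float,
                   density_per_m2: float, margin: float = 0.20):
    """Return (low, central, high) estimates for a route filled with people."""
    area_m2 = route_length_m * route_width_m   # usable area in square meters
    central = area_m2 * density_per_m2         # people at the assumed density
    return central * (1 - margin), central, central * (1 + margin)

# Hypothetical example: a 3 km route, 12 m wide, at 2 people per square meter.
low, central, high = crowd_estimate(3_000, 12, 2.0)
print(f"Estimated crowd: {low:,.0f} to {high:,.0f} (central estimate {central:,.0f})")
```

With plausible inputs like these, a route a few kilometers long holds a crowd in the tens of thousands – the same order of magnitude Factnameh reported, and nowhere near enough people to line 30 kilometers of streets.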

The final fact-check was published in Farsi, with pictures, maps and the crowd simulations. A tweet was posted in English, referencing The New York Times’ claim.

What we liked: Factnameh’s eyes were sharp, and its team was ready to doubt information tweeted by one of the most important newspapers in the world. This fact-check shows that journalists should know how to use tools for measuring large crowds. The newspaper had not issued a correction as of Wednesday.

  1. The tireless debunkers at BuzzFeed News rounded up some of the top hoaxes that spread on social media after Iran’s missile strike on two U.S. air bases in Iraq.
  2. Liberal group Media Matters compiled a list of QAnon conspiracy supporters who are running for Congress in the United States this year.
  3. Fact-checkers are struggling with misinformation on TikTok. MediaWise did good work debunking falsehoods about a military draft for World War III, posting short explanatory videos on the platform.
  4. Tumblr announced a new initiative to teach users about misinformation and other suspicious web activity.
  5. U.S. Chief Justice John Roberts, in his year-end address on the state of the federal judiciary, issued a warning about the threat of misinformation.
  6. Russian trolls are targeting American veterans, The Washington Post reported.
  7. There is growing concern about misinformation from China leading up to Taiwan’s election this week, The New York Times reported, as did TechCrunch.
  8. Facebook removed some ads about the potential side effects of the HIV prevention medication Truvada, after dozens of LGBTQ advocacy groups called the ads a harm to public health, NBC News reported in late December.
  9. Bellingcat published a must-read guide on how to use advanced reverse image searches for digital investigations.
  10. In late December, BuzzFeed’s Jane Lytvynenko profiled an American veteran who has been tracking online scams and fake accounts targeting other veterans.

That’s it for this week! Feel free to send feedback and suggestions to factually@poynter.org.

Daniel, Susan and Cristina


