Factually is a newsletter about fact-checking and misinformation from Poynter’s International Fact-Checking Network. Sign up here to receive it in your inbox every Thursday.
Less than half of Birdwatch users include sources, and many reveal partisan bent
In January, Twitter announced Birdwatch, its experimental crowdsourced fact-checking platform, as a way to fight mis/disinformation. The project allows users to mark tweets as misleading and add context within “notes.” It also lets other users rate those notes based on helpfulness and sourcing.
Over the past week, I analyzed more than 2,600 notes written by Birdwatchers and reviewed 8,200 ratings of those notes released by the platform. I also tested the public algorithm the company uses to rank notes by their “helpfulness.”
The results so far aren’t encouraging, as I found blatant misinformation receiving “not misleading” notes, context that reveals political bias and a small number of voices — with dubious Twitter feeds of their own — dominating Birdwatch activity.
I’m skeptical of farming out fact-checking to every Twitter user and an algorithm, as no automated system can compete with human fact-checkers in the search for truth. But, if Twitter is committed to improving its tool in order to fight against mis/disinformation, here are a few steps the social media platform should take:
About sources:
- Twitter should require Birdwatch users to cite at least one source within their notes. More than half of the content I reviewed in the program did not include a single source.
- Rely on human fact-checkers to vet Birdwatch notes. Wikipedia, another crowdsourced portal, was one of the most-cited sources in the program, yet fact-checkers do not consider Wikipedia a reliable source.
- Promote notes that offer fact-checked articles. URLs from professional fact-checking organizations were rare among the notes I analyzed, even though some of the topics had been largely covered by fact-checkers.
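The source requirement above could be enforced with a check as simple as “does the note contain a URL?” Here is a minimal sketch of that idea; the `has_source` helper and the regex are hypothetical illustrations, not Twitter’s actual code, and a real system would also need to judge source quality, not just presence:

```python
import re

# Hypothetical helper: treat "cites a source" as "contains at least one URL."
# The pattern is deliberately simple; it is an illustration, not production code.
URL_PATTERN = re.compile(r"https?://\S+")

def has_source(note_text: str) -> bool:
    """Return True if the note appears to cite at least one URL."""
    return bool(URL_PATTERN.search(note_text))

notes = [
    "This claim has been debunked, see https://www.politifact.com/example-check",
    "This is obviously wrong, everyone knows it.",
]
# Only the first note would pass a minimum-sourcing gate.
sourced = [has_source(n) for n in notes]
```

A gate like this would not stop someone from citing a low-quality link, but it would have screened out the unsourced majority of notes described above.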
About the algorithm:
- Stop promoting Birdwatch notes that carry misinformation or lack sources.
- Review the vetting process so that prolific users who have shared misinformation on their own timelines are not allowed to become Birdwatch users.
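For context, the note-ranking logic Twitter published during the pilot boils down to a ratio of “helpful” ratings, with a minimum number of ratings required before a note is promoted. The sketch below is a simplified illustration of that idea; the `MIN_RATINGS` and `HELPFUL_THRESHOLD` values are assumptions for demonstration, and the real algorithm may weight raters differently:

```python
from dataclasses import dataclass

@dataclass
class Note:
    text: str
    helpful: int      # "helpful" ratings received
    not_helpful: int  # "not helpful" ratings received

# Illustrative thresholds, not Twitter's exact values.
MIN_RATINGS = 5
HELPFUL_THRESHOLD = 0.84

def helpfulness_ratio(note: Note) -> float:
    """Fraction of ratings that marked the note helpful (0.0 if unrated)."""
    total = note.helpful + note.not_helpful
    return note.helpful / total if total else 0.0

def currently_rated_helpful(note: Note) -> bool:
    """A note is promoted only after enough ratings and a high enough ratio."""
    total = note.helpful + note.not_helpful
    return total >= MIN_RATINGS and helpfulness_ratio(note) >= HELPFUL_THRESHOLD

def rank_notes(notes):
    """Promoted notes first, then order by helpfulness ratio."""
    return sorted(
        notes,
        key=lambda n: (currently_rated_helpful(n), helpfulness_ratio(n)),
        reverse=True,
    )
```

The weakness this column identifies follows directly from the design: the ratio measures agreement among raters, not accuracy, so a well-rated note can still carry misinformation.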
About language and profile:
- Limit notes per Birdwatch user based on quality rather than an arbitrary cap. The five most active Birdwatchers account for more than 10% of Birdwatch’s notes; restricting users whose notes are consistently rated unhelpful (perhaps down to zero) would rein in prolific bad actors without penalizing writers who sustain both volume and quality.
- Employ natural language processing to weed out bias in notes. The most prolific Birdwatchers frequently rely on common partisan language.
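A first pass at that kind of screen could be as crude as a keyword lexicon. The sketch below is purely illustrative: the term list is a made-up placeholder, and a production system would use a trained classifier rather than string matching:

```python
# Hypothetical placeholder lexicon of partisan slurs and slogans.
# A real system would use a curated list or, better, a trained classifier.
PARTISAN_TERMS = {"libtard", "maga", "snowflake", "sheeple"}

def flag_partisan(note_text: str) -> bool:
    """Flag a note if any word (case-insensitive, punctuation stripped)
    matches the partisan lexicon."""
    words = {w.strip(".,!?").lower() for w in note_text.split()}
    return bool(words & PARTISAN_TERMS)
```

Even a blunt instrument like this would surface notes that read as political attacks rather than context, which could then be routed to human review.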
About the community:
- Onboard Birdwatch users with basic media literacy training. Research from the Stanford Social Media Lab has shown that an hour-long online course increased participants’ ability to identify false news headlines by more than 20 percentage points.
- Offer regular fact-checking lessons to users in the form of an email newsletter, or even regular direct messages from the Birdwatch account.
I am aware that this is just a pilot program, which will likely change considerably before it is rolled out to the rest of the U.S. and beyond. But the problems I found echo what the fact-checking community feared when Birdwatch was announced.
Still, Twitter’s transparency and commitment to improving the tool as data comes in give me hope that we could one day see a somewhat successful model.
Alex Mahadevan
Senior multimedia reporter, MediaWise
@AlexMahadevan
Interesting fact-checks
- Faktograf: “Red Bull didn’t test positive for COVID-19.” Croatian fact-checkers debunked a video that supposedly shows a man trying to test an energy drink for COVID-19. In December, the same team flagged an Austrian politician attempting the same test with Coca-Cola. In both cases, the tests were incorrectly handled and their results can’t be considered valid.
- Full Fact: “A Bangladesh study doesn’t show that depression follows Covid-19.” On Feb. 15, the Telegraph published a headline that said: “Half of covid victims go on to suffer depression, says study.” But the study, which surveyed 1,002 people in Bangladesh, explicitly said that its findings didn’t necessarily reflect the impact of COVID-19.
Quick hits
From the news:
- “The Value of News on Facebook,” from Facebook. The social media giant announced Thursday it would “restrict the availability of news on Facebook in Australia,” in response to a proposed law that would make tech companies pay news publishers for their content. This comes on the same day Google announced a deal with News Corp. to begin paying the media giant to display its news content.
- RMIT ABC Fact Check, one of IFCN’s verified signatories in Australia, is no longer available on Facebook. The staff is using Twitter to direct followers to its app.
- The Bureau of Meteorology’s page was temporarily blocked from posting on Facebook, but had its access restored just after midday local time.
- According to The Verge, the Department of Fire and Emergency Services Western Australia, and Queensland Health had no content available on their Facebook pages either.
- And the move drew the ire of journalists in the Philippines. Rappler’s CEO, Maria Ressa, criticized Facebook on Twitter, saying this ban will have an “impact on facts and democracy.” Rappler’s page has been taken down too.
- “On social media, vaccine misinformation mixes with extreme faith,” from The Washington Post. Even though Pope Francis is urging people to get shots, some Christian leaders and experts believe the religious movement against vaccines is growing — and fast.
- “The role of cable television news in amplifying Trump’s tweets about election integrity,” from First Draft. This research will make TV journalists, editors and producers think about their role in amplifying falsehoods generated on social media. Between Jan. 1, 2020 and Jan. 19, 2021, MSNBC, Fox News and CNN displayed 1,954 tweets from Trump on screen for a total of 32 hours.
- “Anatomy of a conspiracy: With COVID, China took leading role,” from The Associated Press. A joint investigation from the AP and the Atlantic Council’s Digital Forensics Lab traced the origin and the evolution of the COVID-19 bioweapon hoax. A companion piece highlights some of the key players propagating this conspiracy.
From/for the community:
- The Washington Post launched “#DIYFactCheck,” a series of tools to verify videos and debunk misinformation. Available via the Post’s Instagram, the guide takes readers step-by-step as it explores key questions such as how to find the original video, who posted the video, and where and when the video was filmed. It builds on the paper’s 2019 infographic — “The Fact Checker’s Guide to Manipulated Video”.
- Institutions and governments, content creators, journalists, teachers and students in Latin America should take a look at PortalCheck. The website, launched by UNESCO, Chequeado and LatamChequea, with the European Union’s support, offers not only useful resources to fight mis/disinformation but also a list of events and activities taking place in the region. PortalCheck is available in English, Spanish and Portuguese.
- Here’s an idea: In an attempt to promote vaccination across India, NewsMobile has recorded and posted videos on social media featuring first-person accounts of Indians who have received the COVID-19 vaccine. Many reacted positively, saying they felt great and had no unexpected reactions. Recordings took place in Delhi, Patna, Bhopal and other cities.
Events and training
- Feb. 18 (Today): “Fact-check images using your cell phone.” Offered in Portuguese, by Agência Lupa, in Brazil. In this 90-minute virtual course, participants will learn how useful their mobiles can be in the battle against visual disinformation.
If you are a fact-checker and you’d like your work/projects/achievements highlighted in the next edition, send us an email at factually@poynter.org by next Tuesday.
Any corrections? Tips? We’d love to hear from you: factually@poynter.org.
Thanks for reading Factually, and a special thank you to Alex for joining us this week!