By Angela Fu
June 27, 2024

SARAJEVO, Bosnia and Herzegovina — Though the use of artificial intelligence in journalism has attracted skeptics, some journalists say discerning judgment and a collaborative approach can allow fact-checkers to avoid the pitfalls of the technology and become more efficient.

The key, said International Center for Journalists Knight fellow Nikita Roy, is to restrict how generative AI is used in journalism. Fact-checkers should use the tools for “language tasks” like drafting headlines or translating stories, not “knowledge tasks” like answering Google-style questions, which rely on the AI model’s training data. She and a panel of fact-checkers from around the world offered examples of such usage Thursday at GlobalFact 11, an annual fact-checking summit hosted by Poynter’s International Fact-Checking Network.

“We really owe it to our audience to be one of the most informed citizens on AI because this is having a profound impact on every single industry, as well as the information ecosystem,” said Roy, one of the conference’s keynote speakers. “If we use it responsibly and ethically, it has the potential to streamline workflows and enhance productivity. … Every single minute misinformation is spread online and we delay in getting our fact checks out, that’s another second that the information landscape is being polluted.”

Fact-checkers are fighting a changing information landscape, one that is driven both by emerging technologies like generative AI and the whims of major tech companies. Though the low barrier of entry to using generative AI means bad actors can easily disseminate mis- and disinformation, it also means fact-checkers can build tools without acquiring specialized skills.

Two years ago, building a claim detection system would have required the help of a data scientist, said Newtral chief technology officer Rubén Míguez Pérez. Today, anyone can build one by writing a prompt for ChatGPT.
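A prompt-based claim detector of the kind Míguez Pérez describes might look like the minimal sketch below. The prompt wording and function names are illustrative assumptions, not Newtral’s actual system; the generated prompt would then be sent to a chat model.

```python
# Illustrative sketch of a prompt-based claim detector.
# The prompt text and helper names are hypothetical, not Newtral's system.

CLAIM_DETECTION_PROMPT = (
    "You are helping a fact-checking team triage content.\n"
    "For each numbered sentence below, answer YES if it contains a "
    "checkable factual claim and NO otherwise.\n\n"
    "Sentences:\n{sentences}\n"
)

def build_claim_detection_prompt(sentences):
    """Number each sentence and slot the list into the prompt template."""
    numbered = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(sentences))
    return CLAIM_DETECTION_PROMPT.format(sentences=numbered)

# The finished prompt would be passed to a chat model, for example:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user",
#                "content": build_claim_detection_prompt(sentences)}],
# )
```

The point of the example is how low the barrier is: the “system” is a paragraph of instructions plus a formatting helper, with no machine learning expertise required.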

“The power of generative AI is democratizing the access that people — regular people, not data scientists, not even programmers — are going to have (to) this kind of technologies,” Míguez Pérez said.

In a 2023 survey of IFCN signatories that saw responses from 137 fact-checking organizations, more than half said they use generative AI to support early research. AI can help outlets identify important, harder-to-find claims to fact-check, said Andrew Dudfield, the head of AI at Full Fact.

But AI can’t fact-check for journalists, Dudfield said. In reviewing past fact checks from his own organization, Dudfield found that the vast majority involved “brand new information.” Fact-checkers had to consult experts and cross-reference sources to produce those stories. AI tools, which draw upon existing knowledge sources, can’t do that.

Other language tasks fact-checkers can use AI for include summarizing PDFs, extracting information from videos and photos, converting articles to videos and repurposing long videos to shorter ones, Roy said. AI can also make content more accessible for audiences through tasks like generating alt text for images.

One of the biggest concerns journalists have about generative AI tools is “hallucinations.” Because these tools are probabilistic, predicting likely text from the data they were trained on, they can generate nonsensical or false information.

For fact-checkers interested in using AI for knowledge tasks, like building chatbots, Factly Media & Research founder and CEO Rakesh Dubbudu advised curating the datasets used by those tools. For example, his team built a large language model that ran off a database comprising press releases from the Indian government. Limiting the pool of knowledge that the AI tool drew from largely solved the problem of hallucinations, Dubbudu said.
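The curated-dataset approach Dubbudu describes is essentially retrieval-grounded answering: the system only responds from a fixed document set and refuses anything outside it. Below is a deliberately simple sketch using word-overlap retrieval; the documents, scoring rule, and threshold are hypothetical stand-ins for a real system’s vector search.

```python
# Minimal sketch of grounding answers in a curated document set.
# Documents and the overlap threshold are hypothetical; a production
# system would use embedding-based retrieval instead of word overlap.

def tokenize(text):
    """Lowercase, whitespace-split word set (crude but self-contained)."""
    return set(text.lower().split())

def retrieve(query, documents, min_overlap=2):
    """Return the document sharing the most words with the query,
    or None if nothing overlaps enough — the 'refuse to answer' case."""
    best, best_score = None, 0
    query_words = tokenize(query)
    for doc in documents:
        score = len(query_words & tokenize(doc))
        if score > best_score:
            best, best_score = doc, score
    return best if best_score >= min_overlap else None

docs = [
    "The ministry announced a new rural broadband scheme in March.",
    "The census bureau reported population figures for 2021.",
]

context = retrieve("When was the rural broadband scheme announced?", docs)
```

Passing only the retrieved `context` (rather than the model’s open-ended training knowledge) to a language model is what limits the pool of knowledge and, as Dubbudu notes, largely contains hallucinations.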

Generative AI tools that don’t use curated datasets can also pose issues if they regurgitate copyrighted materials. To avoid this problem, Dubbudu suggested that fact-checking organizations use their own materials as their database when creating their first AI tool.

“A lot of news agencies today, they have hundreds of thousands of articles,” Dubbudu said. “A lot of us could start by converting those knowledge sources into chatbots because that is your own proprietary data. There is no question of hallucination. There is no question of copyright citations.”

Tech companies and other organizations will continue to develop AI tools, whether or not fact-checkers participate. The results can be problematic, Lupa founder Cristina Tardáguila warned. She found, for example, a fact-checking chatbot likely run by a programming expert in Russia, “a country that punishes journalists.”

Citing past fact-checking collaborations like the Coronavirus Facts Alliance, Tardáguila urged fact-checkers to work together to build their own tools.

“When dealing with this new, and yet to grow, devil of AI, we need to be together,” Tardáguila said. “We have to have a real group, a tiny community, a representative group that is leading the conversation with tech companies.”

Update, June 28: The second paragraph of this article was amended to clarify the difference between a knowledge task and a language task.

Angela Fu is a reporter for Poynter. She can be reached at afu@poynter.org or on Twitter @angelanfu.
