March 12, 2024

When I worked as a wire service reporter more than 40 years ago, we kept a cardboard box full of file folders alphabetically labeled with our most important follow-up stories. With that file, we provided around-the-clock coverage, 365 days a year, for our clients.

If someone was arrested for a crime, we wrote about the booking and later the arraignment, the charge, the jury selection, and so forth until the verdict. We also did this on top stories that linger to this day, including clean energy, pollution, the environment, voting and abortion rights, campaign coverage, state/federal elections, political scandals, racism, sexism, health care crises, and medical/scientific discoveries.

We did relatively few follow-up reports about the great technological advances of email (1971), electronic gaming (1972), disc storage (1972), video recorders (1972), cellphones (1973), digital cameras (1975), Apple computers (1976), GPS (1978) and portable music players (1979).

As these came on the market, we featured them with tired themes of convenience, consumerism and entertainment.

The same phenomenon is happening today, only the stakes are much higher, involving machines rapidly gaining a facsimile of consciousness without conscience, human oversight or government regulation. Artificial intelligence not only has become a part of our lives but also soon may be a part of our bodies, promising panacean benefits that few reporters are able to fact-check, let alone fathom.

As the Tow Center for Digital Journalism reports, “Despite growing interest, the effects of AI on the news industry and our information environment — the public arena — remain poorly understood. Insufficient attention has also been paid to the implications of the news industry’s dependence on technology companies for AI.”

I explored some of those implications with Jeffrey Cole, director of the Center for the Digital Future at the University of Southern California, Annenberg. The intent here is to urge the news media to cover artificial intelligence with the dogged persistence of the futures file — a shared future that awaits all of us, for better or worse, whether we want it or not.

Broken news

If you do a Google search of “futures file” and “journalism,” you will find only a few articles: one by me in the Nieman Watchdog and a 1966 Time magazine piece, “Wire Services: The Rewards of Routine,” about how The Associated Press scooped New York media using such a file.

How would society have evolved had we in the 1970s and beyond kept an AI futures file?

The kind of file I mean not only informs the public about each discovery as reported via corporate news release, shareholder report, product launch and publicity stunt, but also prophesies — post by post, podcast by podcast, article by article — where the algorithms might lead us. That requires a modicum of acumen in computer and data science as well as in the history of technology and social change.

“I would rate the performance of the news media on AI as a B-minus,” Cole said, noting the industry’s penchant for elevating chief executive officers to celebrity and mythic status.

“We didn’t know in America very many CEOs until the digital era,” he said. “We knew John Rockefeller was in oil or that Lee Iacocca sold Chryslers. The first tech CEO we really got to know was Bill Gates mostly for being the richest man in the world. But Gates wasn’t very inspiring, exciting or dynamic. He was followed by Steve Jobs, who was from Central Casting. There was nobody like him.”

Cole remembered Jobs’ product launches — “the hottest ticket in the country. Normally you couldn’t get journalists to cover a product launch. Jobs used to stand on stage at the Moscone Center once a year in his blue jeans and black turtleneck sweater, and then, coming to the end of his presentation, he’d say, ‘Oh, and one more thing.’ It’s almost as if he forgot. Then he’d introduce a product that changed the world.”

After Jobs the media focused on Mark Zuckerberg, Cole mused, “the poster child for the evil CEO who’s counting his money. Then came the most amazing CEO — and this sounds funny now — Elon Musk doing his genius thing, working 20 hours a day until he turned to the dark side.”

Cole said this is the world that Sam Altman, CEO of OpenAI, has inherited from the news media. “He will be the face of AI and ChatGPT for the whole industry.”

Rather than create prophetic content, the news media have created corporate caricatures.

High-tech views

In the 1990s, at the dawn of the internet, news organizations were decidedly optimistic. Cole noted that society at last had access to all the information in the world. “But as consumers began to use the web, they encountered hate speech, bullying, misinformation, and scams.” Today’s editors and producers remember that. “AI started exactly the opposite with all the focus on fears and dangers” associated with the launch of ChatGPT.

The news media was taken by surprise in late 2022 when OpenAI announced, “We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”

There are allusions in that announcement to a 1950 journal article by A.M. Turing, “Computing Machinery and Intelligence,” marking the birth of AI. Turing asked, “Can machines think?” (Mind, 59, 433-460). The question, he wrote, was better suited to a Gallup poll than a journal article. Instead, he devised an “imitation game” in which a computer and a person answer questions typed into a terminal by a human judge. If the judge believed the computer’s answers came from a person, the computer won the game.

In some ways, today’s chatbots continue to play the imitation game.
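To make Turing’s protocol concrete, here is a minimal Python sketch of the imitation game. The respondent functions, prompts and canned machine reply are hypothetical stand-ins; the machine’s answer marks where a real system — today, a chatbot — would generate text.

```python
import random

def machine_respondent(question: str) -> str:
    # Stand-in for the machine contestant; a real system (a chatbot)
    # would generate an answer here.
    return "Let me think about that for a moment before answering."

def human_respondent(question: str) -> str:
    # Stand-in for the human contestant, answering at a terminal.
    return input(f"(human) {question} > ")

def play_imitation_game(num_questions: int = 3) -> bool:
    """Run one game; return True if the machine fools the judge."""
    # Hide the contestants behind the labels A and B, shuffled so the
    # judge cannot rely on position.
    labels = ["A", "B"]
    random.shuffle(labels)
    respondents = {labels[0]: machine_respondent, labels[1]: human_respondent}

    for _ in range(num_questions):
        question = input("(judge) ask a question > ")
        for label in ("A", "B"):
            print(f"{label}: {respondents[label](question)}")

    guess = input("(judge) which one is the machine, A or B? > ").strip().upper()
    while guess not in respondents:
        guess = input("please answer A or B > ").strip().upper()

    # Turing's winning condition: the machine wins when the judge
    # mistakes the human for the machine.
    return respondents[guess] is human_respondent

if __name__ == "__main__":
    print("The machine wins." if play_imitation_game() else "The judge wins.")
```

The whole test hinges on that last comparison: the machine never has to think, only to be indistinguishable from someone who does.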

In that watershed article, Turing made an astute observation about the difference between machine and human intelligence. In defining “machine,” he eliminated from consideration “men born in the usual manner.” He acknowledged that it may be possible to “rear a complete individual from a single cell of skin,” a biological feat deserving of high praise, but this would not be regarded as “constructing a thinking machine.” He defined “machine” as an electronic or digital device.

And yet at the moment, AI researchers are developing hybrid bionic systems, including the recent report that Neuralink, the brain-interface company founded by Musk, has developed a product called Telepathy that allows a person to control a computer merely by thinking of an action. A small implant is placed in the region of the brain that controls movement. One patient already has received such a device.

Major news organizations must follow up on and investigate this groundbreaking experiment without waiting for Musk to inform us.

Logic versus pathologic machines

Artificial intelligence lacks a conscience. In essence, chatbots and robots currently being trained via digital scraping are psychopaths in the technical sense: beings with little or no conscience that can follow social conventions when it suits their needs and that have a “limited, albeit weak, ability to feel empathy and remorse.”

“Neurology is not one of my fields of expertise,” Cole said, “but I think under the technical definition, AI is pathological without a conscience.” He emphasized that he does not want AI to have much more of a conscience than it currently has. “That doesn’t mean I want it to be devoid of ethics but that I’d like it to stay literal and not get into deep thinking.”

Cole referenced an article by New York Times technology columnist Kevin Roose, who spent two hours with a Bing chatbot that identified itself as Sydney. “It started asking him about his marriage.”

Here’s how Roose described it:

“As we got to know each other, Sydney told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human. At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.”

Cole noted that the public has yet to decipher what Sundar Pichai, CEO of Google, maker of the chatbot Gemini, thinks about the hallucinations AI fabricates unprompted. “This is the terrifying stuff — how little the experts know.”

Nevertheless, scientists and engineers are developing Theory of Mind AI (able to decipher human emotions) and Self-Aware AI (able to attain enhanced awareness), creating algorithms that can only feign or mimic emotions without understanding the millennia of biology that created human intelligence.

This is an important topic as tech companies experiment with brain-computer interfaces, chips in the brain and ingestible AI nanobots, essentially making bionic people who can move through, control or engage with physical environments merely by using their thoughts.

Facing the interface

Cole expressed interest in a microprocessor that one of his friends had implanted in her finger. “The process takes 10 seconds and one out of 10 times you have a little trickle of blood.” He compared it to a blood test where nurses put a Band-Aid on you. “It’s totally harmless. Within minutes, she no longer needed keys to the office or security badges. If we all had chips planted in our fingers, we could get rid of passports, credit cards — all the IDs we use. It could eliminate much fraud if not close to all fraud.”

Such a microchip can also contain a person’s entire medical history; paramedics would know within seconds what a person’s medical issues are. “I think this is just a fascinating topic because what we’re doing actually is defining what it means to interface with a computer system.”
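To make the mechanics concrete, here is a minimal Python sketch, with a hypothetical registry, reader function and serial numbers, of how such a chip might stand in for keys and badges. It is an illustration of the idea, not any vendor’s implementation.

```python
# A minimal sketch (names and UIDs hypothetical) of door access keyed to an
# implanted chip's serial number, the way office locks or badges might use it.
# Real RFID/NFC deployments layer cryptographic challenge-response on top;
# checking a bare serial number, as here, is what makes a naive chip clonable.

AUTHORIZED_CHIPS = {
    "04:A3:2B:9F": {"name": "J. Doe", "zones": {"office", "lab"}},
}

def read_chip_uid() -> str:
    # Stand-in for an NFC/RFID reader; a real system would poll hardware here.
    return "04:A3:2B:9F"

def unlock_door(zone: str) -> bool:
    uid = read_chip_uid()
    record = AUTHORIZED_CHIPS.get(uid)
    if record and zone in record["zones"]:
        print(f"Access granted to {record['name']} for the {zone}.")
        return True
    print("Access denied.")
    return False

if __name__ == "__main__":
    unlock_door("office")
```

Note the weakness built into the design: anyone who clones that serial number passes the check, which is exactly the hackability Cole raises next.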

I asked Cole about the consequences if a bad actor hacked into such a chip. “I don’t think there’s a chip that you can’t hack into,” he noted, “because the minute you tell people it’s not hackable, they will prove to you that it is.”

The consequences, risks and, yes, profits from this and similar brain-computer interface devices may alter society beyond recognition. The public deserves to know that before tech companies establish norms and rules without government oversight or regulation, as they did with social media, hiring lobbyists to mitigate liability only after the risks became apparent.

“If you talk to the average citizen today, they will tell you, ‘I’m frightened of AI. I may lose my job,’” Cole said. Parents are worried about their children no longer having critical learning skills — “all the way to the most extreme danger, nuclear war, which may mean the end of humankind.”

This is why Cole is concerned about the public’s lack of technical knowledge. “Most experts do not really know if AI is capable of being sentient. They’re working in the blind.” Journalists have been learning along with the rest of us, but they can’t extrapolate risks into the future. They just write what their fears are now without understanding the benefits. “It’s ‘Terminator,’ ” he added, “although I happen to think the fears about employment are real.”

Some estimates foresee as many as 85 million current jobs being replaced by AI by 2025. That displacement would create new cohorts of privilege and disenfranchisement.

Silicon cyborgs

How will brain-computer interfaces change society in five, 10 or 25 years? Will society be more like the oppressive, omnipresent autocracy of George Orwell’s “1984” or the socially engineered society of Aldous Huxley’s “Brave New World”? Will it devolve into the drug-filled, criminal tyranny of Anthony Burgess’ “A Clockwork Orange”? Will social debate revolve around who gets the SAT chip so that privileged high school students can compete for admission to Ivy League schools, a bionic version of “Operation Varsity Blues”? And what are the implications for social class and marginalized groups?

“Those are all negative views of course,” Cole said, “and the future is going to have elements of that. We already have disinformation and misinformation and clearly, it’s some people’s agenda to throw everything into doubt so that you believe nothing.”

Cole added that wealth, privilege and access to innovation are facts of American life. “Think back to the fall of 2020 when Donald Trump as president got COVID. Evidently, he had the kind of COVID that would have killed him, and he was overweight and older. But he got access to medication that was purely experimental at that time.”

Cole mentioned “the lasting image” of “A Clockwork Orange,” Stanley Kubrick’s 1971 film: the protagonist Alex seated with his eyes clamped open, forced to watch blood and gore as the authorities try to turn him from a violent person into a nonviolent one. “That I don’t see,” he said, “so I think we’re probably closer to ‘1984’ and ‘Brave New World.’”

The news media has a role in determining that future, tracking AI advances doggedly and explaining the boons, banes and boondoggles that CEOs feed an unsuspecting public.

“I don’t mean necessarily that they’re phonies or charlatans,” Cole said. “I think the most interesting moral question for journalists to ask is just because we can do something doesn’t mean we should do something.”

In other words, hold AI accountable.

Michael Bugeja, a regular contributor at Poynter, is author of "Interpersonal Divide in the Age of the Machine" (Oxford Univ. Press) and "Living Media Ethics"…
