The Metaversial Century
I spent the last week or so reporting on what impact artificial intelligence (AI) systems such as ChatGPT will have on journalism. While I only scratched the surface, I concluded that journalists face a promising but potentially terrifying technological tsunami.
A range of computer systems can now produce sentences or even long stories that read as if a journalist wrote them. Complex networks that devour huge troves of data enable these high-tech systems to put one word after another, often better than humans can. I played around with one of them, ChatGPT, a mind-boggling experience.
AI systems unquestionably carry profound implications for journalism, but they are even more sweeping for the world at large. In one form or another, many individuals and businesses already use artificial intelligence techniques for tasks that once employed humans, in fields ranging from finance to health care. And more change is coming, faster than most people realize.
The technology impresarios in Silicon Valley have big plans for the future of AI. Sam Altman, the CEO of OpenAI, the company that created ChatGPT, foresees a world that resembles utopia, where benevolent capitalists finance a welfare system for the labor they destroy. But as Ezra Klein, who interviewed Altman on his podcast, pointed out, the world Altman envisions could just as easily be a dystopia.
In journalism, technology has been upending the craft since the printing press replaced the monks who served as scribes. Another wave of technological innovation eliminated scores of jobs once performed by men with ink on their fingers. In recent years, newsroom technology has turned photographers into reporters with an iPhone. The New York Times reports that roughly a third of the content published by Bloomberg uses some form of automated technology; that the Associated Press uses “robot reporters” to produce articles on minor league baseball; and that the Washington Post uses them for high-school football stories. For journalists, the future is already here.
Advocates of these techniques argue that newsrooms employ AI to eliminate mundane tasks, freeing reporters to do more substantial work, such as investigative reporting. That would be great. But I’ve experienced a different story: news executives in organizations starved for profit use AI techniques to replace reporters or to produce the superficial, even questionable, content they need to justify advertising adjacencies.
Dire warnings about the detrimental impact of AI proliferate from credible sources within the AI community. They warn that big technology companies such as Google, Meta and OpenAI are rushing ahead with seismic innovations that lack any independent oversight.
“I will absolutely frankly say that it is a profoundly scary time to be a professional creative,” Adrian Tchaikovsky, a highly acclaimed British science fiction author, said on an Ezra Klein podcast discussing AI. “I am watching people come for the visual artists today and they will be coming for the wordsmiths tomorrow.”
But other technology executives, such as Richard Boyd, founder and head of several innovative high-tech companies based in Chapel Hill, North Carolina, have a different take.
In a recent open letter to Meta CEO Mark Zuckerberg, Boyd congratulated the company for investing $100 billion in the “Metaverse,” which many in the computer industry view as the next iteration of the Internet. But he told Zuckerberg: “you are doing the Metaverse wrong.”
Boyd, a seasoned tech entrepreneur and AI expert, has a more positive view of the tools of the Metaverse. He says techniques such as machine learning, augmented and virtual reality, and huge libraries of data can create a simulated world where ideas, words, or strategies can be tested and then applied to solve real-world problems.
“The last century was about the moving image,” he wrote. “It was the first time in human history when we could review and critique major events by looking at recorded footage. The 21st century, I believe, will be the simulation century.”
“This Metaversial century,” he continued, “promises to give us every medium ever used before in human history: images, sound, text, 3D interactive objects, 3D avatars of real and simulated humans, vast simulated virtual environments or augmented real-world overlays. Because of this upper medium set of attributes, it can serve as the vessel and testing environment for the future of human and machine collaboration to help us achieve the right balance between humans and machines to optimize the future.”
The key to alleviating the concerns voiced by others, Boyd says, is to achieve the right balance of machine and human. How would journalism be affected by all of this?
Some jobs will undoubtedly go away, as they have with most technological innovations. But the next chapter of AI could also help journalism by providing cutting-edge insights into the needs, desires and wishes of its audiences. Technologists could also build huge public-record databases that could eliminate much of the drudgery of investigative reporting. And individual journalists and authors could more easily find paying audiences for their work, eliminating the need for middlemen like publishers and agents.
But journalists will also have to grapple with the wave of misinformation that systems like ChatGPT can generate. The chatbot is built on GPT-3.5, the latest and best version of a language model first released in 2020.
“Something incredible is happening in AI right now, and it’s not entirely to the good,” says Gary Marcus, a New York University professor, entrepreneur and AI expert who is sounding alarms about the way the science behind the systems is developing.
Among other faults, Marcus says, “systems like these pose a real and imminent threat to the fabric of society. They can easily be automated to generate misinformation on an unprecedented scale. They cost almost nothing to operate: Russian troll farms spent more than a million dollars a month during the 2016 elections; nowadays you can get your own custom-trained large language model, for keeps, for less than $500,000. Soon the price will drop further.”
At this juncture, no one really knows how and when all of this will play out. OpenAI made ChatGPT public partly to let people test it and partly to discover glitches in the system. I, for example, asked the bot about a historical figure and got an inaccurate answer. When I wrote back that the system got it wrong, the erroneous information was corrected the next day.
Many of the other systems under development, however, are being hatched in secrecy by technology giants such as Google, Meta and Amazon. Those interested in the projects have little of the access that OpenAI provided. Journalists will be left to sort out a tidal wave of misinformation at a time when most of the companies they work for are under incredible financial stress. The danger is that the misinformation flowing into news feeds will overwhelm legitimate journalism, leaving the public unable to discern fact from fiction. That poses a threat to quality journalism, but journalists can’t hide from it; their role has never been more important. Already, many scientists and journalists are striving to develop systems that can snuff out misinformation. But others are working on systems to evade such scrutiny.
It’s impossible to cover all the implications of the technology in a post such as this; I will examine more of them in the future. One thing is clear, though. As Boyd says: “Those who become adept at modeling and simulation in the future will prevail over those who don’t.”
—James O’Shea
James O’Shea is a longtime Chicago author and journalist who now lives in North Carolina. He is the author of several books and is the former editor of the Los Angeles Times and managing editor of the Chicago Tribune. Follow Jim’s Five W’s Substack here.