Shadowboxing

 

News coverage of the drama over Sam Altman’s ouster as CEO of OpenAI overshadows the most important question we should be asking about artificial intelligence: How can we oversee a technology expanding at the speed of light while the public’s still in the dark?   

Photo by Thao Lee

Ever since the board of OpenAI fired Altman on November 17, media coverage has focused on the corporate soap opera surrounding the decision to fire and then rehire him. Speculation dominated coverage of the company and its board until the New York Times recently published a definitive piece on Altman, exposing the backstabbing and deceit that prompted the board to act.

As a former top editor at two major newspapers, I can’t blame anyone for playing up the story about Altman. He’s the poster boy for AI. What editor could pass up a juicy story about scheming, ambition and hypocrisy in Silicon Valley’s most significant company? Unfortunately, though, the Times piece is not really what the upheaval at OpenAI is all about.

As most anyone following the saga knows, a high-powered group of Silicon Valley power brokers, including Altman, founded OpenAI as a non-profit dedicated to developing this powerful new technology to help society instead of causing harm. Comparisons of AI to the creation of fire, light and the iPhone are no exaggeration. AI has the potential to change how we live, work, play, think and even survive.

As the Altman drama showed, though, OpenAI evolved from a do-gooder non-profit into a company with a for-profit arm that can generate profits to benefit employees and make it easier to raise the capital needed to power ChatGPT. Altman, who remains CEO but lost his board seat, clearly sees huge profits as a way to propel the company forward faster than the slow-poke path preferred by an altruistic non-profit.

Many questions surrounding the Altman drama still have not been answered. The new board, sans Altman, will determine whether it can trust him. But the real question is whether the public can trust OpenAI to develop a brand of generative AI technology that could become either a monster or a messiah.

We already have some clues about the impact of the technology OpenAI champions. There’s no doubt that it will eliminate many jobs now done by humans. News publishers are already suffering badly from the technology, and bad actors will use generative AI for disinformation campaigns in the upcoming 2024 elections. Voters will be misled and misinformed in the digital disruption already underway. Lies and deceit will proliferate. How this technological bull can be corralled is anyone’s guess.

But AI also promises huge societal benefits. The basic technology has been around for a while. Drug companies now use it to accelerate the development of new medicines for a wide range of diseases, including cancer. I recently had surgery performed by a physician aided by an AI-enabled robot; he said the computer executed certain tasks better than he could. Cybersecurity firms use AI to detect crime. Manufacturers use it to untangle the supply chain snarls that propelled inflation. The benefits that AI promises are enormous, and they all have one thing in common: a human being is implementing the technology.

Normally, we deal with such fateful questions by adopting regulations imposed by a government agency to protect the public interest. Sometimes we rely on self-regulation, letting capitalists police themselves, which is probably not a good idea with technology this powerful. Indeed, even AI boosters such as Elon Musk say the evolving technology needs some form of oversight. At this juncture, though, we don’t really know what we are dealing with. AI is developing so fast that regulations published today will address yesterday’s problems. Regulating AI resembles shadowboxing, maybe with a meme.

Technology executives want some controls on a technology destined to become even more powerful. Leaders from Google, Microsoft and OpenAI recently signed a statement declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” But no one really knows what to do, despite the many ideas proliferating on social media and online journalism platforms.

Arguments for and against immediate AI regulation range from the European Union’s recently enacted top-down rules that prohibit uses of AI posing unacceptable risks, whatever that means, to China’s dictum that algorithms must be reviewed in advance by the state and adhere to core socialist values, whatever that means. For a good overview, read this piece by Bill Whyman, an expert at the Center for Strategic and International Studies. The Biden administration has come up with a thoughtful agreement, signed by many AI leaders, pledging the responsible development of AI.

Marc Andreessen, a leading Silicon Valley venture capitalist and a public intellectual on artificial intelligence, argues that fears about AI’s impact are overblown, irrational and amount to fearmongering by large companies that want to use regulation to thwart competitors. The result, he says, will be less innovation.

“The development and proliferation of AI is far from a risk that we should fear,” Andreessen argues.

He says developing AI is a moral obligation we must pursue without the roadblocks erected by regulators who don’t fully understand the underlying technology.

“Big AI companies should not be allowed to establish a government-protected cartel insulated from market competition,” he says. “To offset the risk of bad people doing bad things with AI, government, working in partnership with the private sector, should use AI to maximize society’s defensive capabilities.”

I think Andreessen has a good point. We live in a politically polarized society with a Congress that can barely keep the government running, much less agree on regulating a technology most legislators don’t understand. AI presents international challenges, too. Even if all the nations of the world agree on a solution or create an international norm, outliers always break the rules.

Since 1968, for instance, 191 nations have signed the Treaty on the Non-Proliferation of Nuclear Weapons (NPT), which addresses surely as great a threat to humanity as artificial intelligence. Yet three nations that didn’t sign the treaty – India, Pakistan and Israel – are believed to have added nuclear weapons to their arsenals. North Korea signed the treaty, then withdrew in 2003 and later detonated a nuclear weapon. International inspectors found that Iran, which signed the NPT, has violated the terms of the pact in its pursuit of a nuclear capability. The same thing would probably happen if nations around the globe forged a treaty or regulation on AI.

Agreement exists among all interested parties about basic AI principles, such as safety, reliability, transparency, privacy, accountability and fairness, say Gary Marcus, an outspoken New York University professor, entrepreneur and AI skeptic, and Anka Reuel, a Stanford University computer science PhD student and founding member of KIRA, a think tank that promotes responsible AI.

In an Economist essay, they called for the “immediate development of a global, neutral non-profit international agency for AI.” Called the IAAI, the agency “would rely on government, large technology companies, non-profits, academia and society at large, aimed at collaboratively finding government and technical solutions to promote safe, secure and peaceful AI technologies.”

The bones of such an organization existed in the non-profit arm of OpenAI. Despite its faults and shortcomings, OpenAI’s original mission represented a potentially powerful advocate for responsible generative AI, formed by market-driven experts who could blow the whistle on nations, such as China and Russia, that will surely develop a brand of AI to serve their nationalistic needs.

What will evolve from the controversy over Altman remains to be seen. A committee is currently reviewing the circumstances surrounding Altman’s dismissal and return. Altman says he remains committed to caution over cash, but he also recently told an AI conference in Vietnam that the company is rethinking its hybrid for-profit and non-profit corporate structure: “The structure clearly has some bugs in it, and our new board is thinking really carefully about what the best corporate structure for our mission should be.”

Undiminished is the need for the original aspirations of OpenAI’s non-profit arm, particularly as the 2024 presidential election looms and undisciplined AI becomes ubiquitous. At a Senate hearing earlier this year, Marcus opened his testimony with a “news article” written by GPT-4, OpenAI’s top model. “It convincingly alleged that parts of Congress were secretly manipulated by extraterrestrial entities.”

 —James O’Shea

James O’Shea is a longtime Chicago author and journalist who now lives in North Carolina. He is the author of several books and is the former editor of the Los Angeles Times and managing editor of the Chicago Tribune. Follow Jim’s Five W’s Substack here.  

 

 