AI’s Mad Dash

Artificial Intelligence companies are barreling down the information superhighway

 

As everyone wrings their hands about the potential dangers of Artificial Intelligence, companies worldwide are moving at lightning speed to integrate sophisticated AI technology into their businesses.

photo by Florian Steciuk

Less than six months ago, more than a thousand technology leaders and researchers, including industry giants such as Elon Musk, signed a letter urging a moratorium on the development of the most powerful AI systems. They said, “We’re locked in an out-of-control race to develop and deploy even more powerful digital minds that no one – not even their creators – can understand, predict, or control.”

Yet the desire to capitalize on the promise of AI clearly outweighs any potential perils. A recent report by the global consulting firm McKinsey & Company found that an astonishing sixty-five percent of the companies it surveyed in 2024 regularly use generative AI, nearly double the percentage from just ten months earlier.

“If 2023 was the year the world discovered generative AI,” the McKinsey report said, “2024 is the year organizations truly began using – and deriving business value from – the new technology.” Generative AI refers to AI systems that can generate new and original content, such as text, images, video, and audio, from the vast stores of data they tap.

Not all the applications McKinsey cited in its report fall into the AI categories that carry the greatest potential for harm, such as misinformation, threats to the power grid, or cyberattacks. But companies are rapidly adopting the technology in a regulatory vacuum, which could easily lead to abuse; the business stakes are simply too high to sit out. Musk himself has integrated AI into X, formerly known as Twitter, where users routinely generate misinformation and propaganda.

Indeed, just last week, a group of current and former employees of AI companies released a letter alleging that OpenAI, a major force in the spread of Artificial Intelligence, had a culture of recklessness and secrecy in its drive to dominate the sophisticated technology market. The group that released the letter included eleven current and former OpenAI employees and two from Google DeepMind.

“We believe in the potential of AI technology to deliver unprecedented benefits to humanity,” the letter said. “We also understand the serious risks posed by these technologies.” The risks, it said, range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction. The group called for greater oversight, transparency in the development of models, and greater protection for whistleblowers.

In a statement responding to the letter, OpenAI expressed pride in its track record of providing the most capable and safest AI systems and said it believes in its scientific approach to addressing risk. Google declined to comment.

The letter called for greater scrutiny and regulation of Artificial Intelligence by the scientific community, government officials, and the public. “However, AI companies have strong financial incentives to avoid effective oversight,” it warned, “and we do not believe bespoke structures of corporate governance are sufficient to change this.”

To a long-time observer of government regulation and deregulation, the flimsy oversight of AI brings to mind the line Yogi Berra made famous: “It’s déjà vu all over again.” Besides covering deregulation as a reporter in the Washington bureaus of the Des Moines Register and Chicago Tribune, I wrote a book, The Daisy Chain, about the poor oversight that led to the collapse of the savings and loan industry in the 1980s. Time and again, government regulators and lawmakers have failed to keep pace with profit-driven companies determined to dominate an industry in the throes of change.

President Joe Biden met with industry leaders about the promises and perils of Artificial Intelligence and issued an executive order late last year, “to ensure that America leads the way in seizing the promise and managing the risks of Artificial Intelligence.” 

The order, which has the force of law, is pretty good. It establishes safeguards to ensure responsible AI development by companies and government officials. It directs government agencies to tackle a broad range of AI safety and security risks, including those related to dangerous biological materials, critical infrastructure, software vulnerabilities, and the potential threat to the nation’s electric grid. The twenty-two inaugural Safety and Security Board members that the Biden order created included leaders from various sectors with a stake in the future of AI.

However, the nation needs more than an executive order to deal with transformative technology developing at lightning speed. Executive orders can be revoked by a president's successor, and violations of the orders often involve lengthy court fights.  

The trouble is that market developments usually outpace government regulators, particularly with fast-moving new technologies such as AI. The rules regulators write can be fine one day and obsolete the next, as shifts in a competitive marketplace change the landscape before the rules even take effect.

Undoubtedly, corporate interest in AI is deep, sweeping, and blazing. The current stock market revival is driven largely by AI companies that have caught fire with investors. Exhibit A is Nvidia, which makes computer chips vital to AI. It just surpassed Microsoft as the world’s most valuable company.   

The McKinsey report also provides clues to the voracious corporate appetite for AI. “For the past six years, AI adoption (defined as using AI in a core part of the organization’s business) has hovered at around fifty percent. This year, the survey finds that adoption has jumped to seventy-two percent. And the interest is truly global in scope. The survey finds upticks in generative AI use across all regions, with the largest increases in Asia-Pacific and greater China.” AI advocates argue with much credibility that AI regulation must be global so that rules of the road apply to both China and America.

The lack of sensible AI regulation could prove costly if the past is any indication, and it usually is. The regulatory vacuum will probably generate an economic calamity of some sort that will dramatically slow things down. The deregulation of the savings and loan industry in the 1980s created a competitive race for deposits that led to the failure of hundreds of home loan banks and a bailout that cost American taxpayers $123.8 billion.

Regulatory sclerosis in the subprime mortgage market led to a $499 billion bailout and a recession that reduced U.S. household net worth by $13 trillion. The government's failure to exact retribution from the bankers behind the subprime crisis no doubt contributed to the widespread cynicism that now plagues American politics.

On a more hopeful note, AI could run out of steam. The technology relies on huge data centers that voraciously consume power. At some point soon, the nation’s utility industry may not be able to supply the energy AI needs with its existing power networks. Also, there’s only so much data in the world, and the AI data crunchers could eventually run out of new material.

Then there’s the nature of the industry itself. In a recent interview with Ben Thompson of Stratechery, an online site that covers the strategy and business of technology and media, Alexandr Wang, the CEO of the data company Scale AI, suggested that AI is an industry that constantly disrupts itself. Who knows? Perhaps the industry will hit a curve in the road that slows everything down.

For now, though, AI companies are in a mad dash, barreling down the information superhighway, ignoring the speed limits. 

James O’Shea

James O’Shea is a longtime Chicago author and journalist who now lives in North Carolina. He is the author of several books and is the former editor of the Los Angeles Times and managing editor of the Chicago Tribune. Follow Jim’s Five W’s Substack here. 
