
Why creating an international body for AI is a bad idea

Former Google CEO Eric Schmidt recently re-upped his calls for a global body, akin to the Intergovernmental Panel on Climate Change (IPCC), to advise member nations on regulating artificial intelligence (AI). 

Schmidt first made his case for an “International Panel on AI Safety” – an “IPCC for AI,” if you will – in an October 2023 op-ed in the Financial Times. He writes of the AI panel’s potential to be “an independent, expert-led body empowered to objectively inform governments about the current state of AI capabilities and make evidence-based predictions.” 

He claims that AI policymakers “are looking for impartial, technically reliable and timely assessments about its speed of progress and impact.”

Sunak delivers speech on AI

Britain’s Prime Minister Rishi Sunak hosted an international summit on AI safety in November that drew about 100 officials from 28 countries, including Vice President Kamala Harris and executives from key U.S. artificial intelligence companies.  (Peter Nicholls/Pool via AP, File)

Increased understanding of the industry would doubtlessly help lawmakers navigate whatever challenges AI will bring, but Schmidt’s use of the IPCC as a template for achieving that is flawed. One need only look at the structure and record of the actual IPCC to see its failings.


The IPCC was established by the World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP) in 1988 with the same lofty goals for climate change policy advising that Schmidt invokes for AI. 

But since its inception, the body and the reports it produces have come under fire for an opaque process of selecting authors, failing to include a diverse range of scientific views, intellectual conflicts of interest, and deficiencies in its peer review process. By allowing political representatives of member countries to craft its final reports, the IPCC inherently politicizes what was meant to be objective scientific information. 

Perhaps most importantly for the comparison, from its inception the IPCC process of creating a “summary for policymakers” has been credibly accused of sensationalizing the science to promote alarmist messages. 

Extreme scenarios have become baked into projections of what might happen, leaving policymakers around the world with a skewed view of what is actually happening. The same dynamic could be devastating to the development of AI.

While certainly an expert on AI technology itself and presumably acting with good intentions, Schmidt also demonstrates a naivety about today’s geopolitical environment. He writes that U.S. regulations would be insufficient because of “AI’s inherently global nature.” He is correct about the worldwide development of AI, but that fact might advise against global coordination, not recommend it.


In suggesting that this power be assigned to an international body, he erroneously assumes good faith on the part of other member nations. Far from the cooperative global utopia Schmidt imagines, the U.S. is in fact engaged in a competitive race with China for AI supremacy. 

China announced earlier this fall plans to increase its computing power by 50% by 2025 in an effort to keep pace with U.S. AI and supercomputing capabilities. And if climate policy is any indication of future AI policy, China’s recent permitting of more coal plants than at any other time in the last seven years suggests it’s unlikely to put its own interests aside and cooperate with an international body keeping “tabs on both technical and policy solutions” in AI, as Schmidt proposes. China may prove to be just as much “China first” in its AI policy as it is in its energy policy.

Europe, although often a U.S. political ally, doesn’t fare much better on the tech competition front. The continent has spent the last decade using U.S. tech companies as regulatory ATMs. 


The EU fined Amazon $888 million in 2021; fined Alphabet’s Google $2.7 billion in 2017 and another $5.1 billion in 2018; and fined Meta a cool $1.3 billion last May. 

It’s hard to imagine the EU or China acting impartially toward U.S. AI companies if given seats at the global advising table. In AI, just as in climate matters, it makes little sense to cede influence to foreign governments that have the best interests of their own firms and economies at heart.

Eric Schmidt speaks during a National Security Commission on Artificial Intelligence (NSCAI) conference on Nov. 5, 2019, in Washington, D.C. (Alex Wong/Getty Images)

Schmidt also assumes too much about the ability of politicians and regulators to predict the future and act in ways that avoid all downsides without sacrificing too many benefits. This is not because they don’t have the proper information, but because “the knowledge problem” is an inherent weakness of regulation. 


No one person, or relatively small group of people, can comprehend, prioritize and plan around all the knowledge dispersed among engineers, users and entrepreneurs. The progress of AI must, as famed economist George Gilder puts it, “navigate a world that everywhere… undergoes combinatorial explosions of novelty.” Expert panels and committees – the IPCC model – are no answer to this problem.

Market forces are much better at handling that kind of promising dynamism than state planners will ever be. We needn’t replicate the failings of international climate governance; like AI itself, we can learn from our mistakes.
