Faculty Viewpoints

Meet the Five Schools of Thought Dominating the Conversation about AI

Yale SOM’s Jeffrey Sonnenfeld and Steven Tian and economists Paul Romer and Dirk Bergemann explain the arguments from each camp in the debate over artificial intelligence, from true believers to alarmists.

  • Dirk Bergemann
    Douglass and Marion Campbell Professor of Economics
  • Jeffrey A. Sonnenfeld
    Senior Associate Dean for Leadership Studies & Lester Crown Professor in the Practice of Management
  • Paul Romer
    University Professor, Boston College; Former Chief Economist, World Bank
  • Steven Tian
    Director of Research, Chief Executive Leadership Institute

In just the last month, the Wall Street Journal and the New York Times each published over 200 breathless articles pronouncing either the catastrophic end of humanity or its salvation, depending on the biases and experience of the experts cited.

We know firsthand just how sensationalist the public discourse surrounding AI can be. Much of the ample media coverage of our 134th CEO Summit last week, which brought together over 200 major CEOs, seized upon these alarmist concerns, focusing on the 42% of CEOs who said AI could potentially destroy humanity within a decade, even though the CEOs had expressed a wide variety of nuanced viewpoints, as we captured previously.

Amidst the deafening cacophony of views in this summer of AI, experts across the worlds of business, government, academia, media, technology, and civil society are often talking right past each other.

Most AI expert voices tend to fall into five distinct categories: euphoric true believers, commercial profiteers, curious creators, alarmist activists, and global governistas.

Euphoric true believers: Salvation through systems

The long-forecast moment of machines that learn on their own is dramatically different from the reality of seven decades of incremental AI advances. Amidst such hype, it can be hard to know just how far the opportunity now extends and where excessively rosy forecasts devolve into fantasyland.

Often the most euphoric voices are those who have worked on the frontiers of AI the longest and have dedicated their lives to new discoveries at the frontiers of human knowledge. These AI pioneers can hardly be blamed for being “true believers” in the disruptive potential of their technology, having embraced the potential and promise of an emerging technology when few others did, long before it entered the mainstream.

For some of these voices, such as “Godfather of AI” and Meta chief AI scientist Yann LeCun, there is “no question that machines would eventually outsmart people.” Simultaneously, LeCun and others wave away the idea that AI might pose a grave threat to humanity as “preposterously ridiculous.” Similarly, venture capitalist Marc Andreessen dismissively and breezily swatted away the “wall of fear-mongering and doomerism” about AI, arguing that people should just stop worrying and “build, build, build.”

But single-minded, overarching conceptual euphoria risks leading these experts to overestimate the impact of their own technology (perhaps intentionally so, but more on that later) and dismiss its potential downsides and operational challenges.

Indeed, when we surveyed the CEOs on whether generative AI “will be more transformative than previous seminal technological advancements such as the creation of the internet, the invention of the automobile and the airplane, refrigeration, etc.,” a majority answered “No,” suggesting there is still broad-based uncertainty over whether AI will truly disrupt society as much as some eternal optimists would have us believe.

After all, for every technological advancement that truly transforms society, there are plenty more that fizzle after much initial hype. Merely 18 months ago, many enthusiasts were certain that cryptocurrencies were going to change life as we know it—prior to the blowup of FTX, the ignominious arrest of crypto tycoon SBF, and the onset of the “crypto winter.”

Commercial profiteers: Selling unanchored hype

In the last six months, it has become nearly impossible to attend a trade show, join a professional association, or sit through a new product pitch without being drenched in chatbot hype. As the frenzy around AI picked up, spurred by the release of ChatGPT, opportunistic entrepreneurs eager to make a buck have poured into the space.

Amazingly, there has been more capital invested in generative AI startups through the first five months of this year than in all previous years combined, with over half of all generative AI startups established in the last five months alone, while median generative AI valuations have doubled this year compared to last.

Perhaps reminiscent of the dot-com bubble, when companies looking for an instant boost in stock price added “.com” to their names, college students are now spinning up overlapping AI-focused startups overnight, with some entrepreneurial students raising millions of dollars for spring-break side projects on the strength of nothing more than concept sheets.

Some of these new AI startups barely even have coherent products or plans, or are led by founders with little genuine understanding of the underlying technology who are merely selling unanchored hype, but that is apparently no obstacle to raising millions of dollars. While some of these startups may eventually become the bedrock of next-generation AI development, many, if not most, will not make it.

These excesses are not contained to just the startup space. Many publicly listed AI companies such as Tom Siebel’s C3.ai have seen their stock prices quadruple since the start of the year despite little change in underlying business performance and financial projections, leading some analysts to warn of a “bubble waiting to pop.”

A key driver of the AI commercial craze this year has been ChatGPT, whose parent company OpenAI won a $10 billion investment from Microsoft several months back. Microsoft and OpenAI’s ties run long and deep, dating back to a partnership between Microsoft’s GitHub division and OpenAI, which yielded a GitHub coding assistant in 2021. The coding assistant, based on a then-little-noticed OpenAI model called Codex, was likely trained on the huge amount of code available on GitHub. Despite its glitches, perhaps this early prototype helped convince these savvy business leaders to bet early and big on AI given what many see as a “once in a lifetime chance” to make huge profits.

All this is not to suggest that all AI investment is overwrought. In fact, 71% of the CEOs we surveyed thought their businesses are underinvesting in AI. But we must raise the question of whether commercial profiteers selling unanchored hype may be crowding out genuine innovative enterprises in a possibly oversaturated space.

Curious creators: Innovation at the frontiers of knowledge

Not only is AI innovation taking place across many startups, but it is also rife within larger Fortune 500 companies. Many business leaders are enthusiastically but realistically integrating specific applications of AI into their companies, as we have extensively documented.

There is no question that this is a uniquely promising time for AI development, given recent technological advancements. Much of the recent leap forward for AI, and large language models in particular, can be attributed to advances in the scale and capabilities of their underpinnings: the scale of the data available for models and algorithms to go to work on, the capabilities of the models and algorithms themselves, and the capabilities of the computing hardware that models and algorithms depend on.

However, the exponential pace of advancements in underlying AI technology is unlikely to continue forever. Many point to the example of autonomous vehicles, the first big AI bet, as a harbinger of what to expect: astonishingly rapid early progress from harvesting the lower-hanging fruit, which creates a frenzy, followed by a dramatic slowdown when confronting the toughest challenges, such as ironing out autopilot glitches to avoid fatal crashes. It is the revenge of Zeno’s paradox, as the last mile is often the hardest. In the case of autonomous vehicles, even though it seems we are perennially halfway toward the goal of cars that drive themselves safely, it is anyone’s guess if and when the technology actually gets there.

Furthermore, it is still important to note the technical limits to what AI can and cannot do. Because large language models are trained on huge datasets, they can efficiently summarize and disseminate factual knowledge and enable very efficient search-and-discover. But when it comes to the bold inferential leaps that are the domain of scientists, entrepreneurs, creatives, and other exemplars of human originality, AI’s use may be more confined, as it is intrinsically unable to replicate the emotion, empathy, and inspiration that drive so much of human creativity.

While these curious creators are focused on finding positive applications of AI, they risk being as naïve as a pre-atomic bomb Robert Oppenheimer in their narrow focus on problem-solving.

“When you see something that is technically sweet, you go ahead, and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb,” the father of the atomic bomb, who was wracked by guilt over the horrors his creation unleashed and turned into an anti-bomb activist, warned in 1954.

Alarmist activists: Advocating unilateral rules

Some alarmist activists, especially highly experienced, even pioneering, disenchanted technologists with strong pragmatic anchoring, loudly warn of the dangers of AI, ranging from societal implications and the threat to humanity to non-viable business models and inflated valuations, and many advocate for strong restrictions on AI to contain these dangers.

For example, one AI pioneer, Geoffrey Hinton, has warned of the “existential threat” of AI, saying ominously that “it is hard to see how you can prevent the bad actors from using it for bad things.” Another technologist, early Facebook financial backer Roger McNamee, warned at our CEO Summit that the unit economics of generative AI are terrible and that no cash-burning AI company has a sustainable business model.

“The harms are really obvious,” said McNamee. “There are privacy issues. There are copyright issues. There are disinformation issues…. An arms race is underway to get to a monopoly position, where they have control over people and businesses.”

Perhaps most prominently, OpenAI CEO Sam Altman and technologists from Google, Microsoft, and other AI leaders recently issued an open letter warning that AI poses an extinction risk to humanity on par with nuclear war, contending that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

However, it can be difficult to discern whether these industry alarmists are driven by genuine anticipation of threats to humanity or by other motives. It is perhaps no coincidence that speculation about how AI poses an existential threat is an extremely effective way to drive attention. In our own experience, media coverage trumpeting CEO alarmism on AI from our recent CEO Summit far overshadowed our more nuanced primer on how CEOs are actually integrating AI into their businesses. Trumpeting alarmism over AI also happens to be an effective way to generate hype over what AI is potentially capable of, and thus greater investment and interest.

Already, Altman has been very effective in generating public interest in what OpenAI is doing, most obviously by initially giving the public free, unfettered access to ChatGPT at a massive financial loss. Meanwhile, his nonchalant explanation for the dangerous security breach in the software that OpenAI used to connect people to ChatGPT raised questions over whether industry alarmists’ actions match their words.

Global governistas: Balance through guidelines

Less strident on AI than the alarmist activists, but no less wary, are the global governistas, who view unilateral restraints on AI as inadequate and harmful to national security. Instead, they are calling for a balanced international playing field, aware that hostile nations can continue exploiting AI along dangerous paths unless there are agreements akin to the global nuclear non-proliferation pacts.

These voices advocate for guidelines, if not regulation, around the responsible use of AI. At our event, Senator Richard Blumenthal, Speaker Emerita Nancy Pelosi, Silicon Valley Congressman Ro Khanna, and other legislative leaders emphasized the importance of providing legislative guardrails and safeguards to encourage innovation while avoiding large-scale societal harms. Some point to aviation regulation as an example to follow, with two different agencies overseeing flight safety: the FAA writes the rules, while the NTSB establishes the facts, two very different jobs. While rule writers have to make tradeoffs and compromises, fact-finders have to be relentless and uncompromising in pursuit of truth. Given how AI may exacerbate the proliferation of unreliable information across complex systems, regulatory fact-finding could be just as important as rule-setting, if not more so.

Similarly, there are global governistas such as renowned economist Lawrence Summers and biographer and media titan Walter Isaacson who have each told us that their major concern is the lack of preparedness for changes driven by AI. They foresee a historic workforce disruption among some of the most vocal and powerful elite workers in society.

Walter Isaacson argues that AI will have the greatest displacement effect on professional “knowledge workers,” whose monopoly on esoteric knowledge will now be challenged by generative AI capable of regurgitating even the most obscure factoids, far beyond the rote memory and recall capacity of any human being. At the same time, Isaacson notes that previous technological innovations have enhanced rather than reduced human employment. Similarly, famed MIT economist Daron Acemoglu worries that AI could depress wages for workers and exacerbate inequality. For these governistas, the notion that AI will enslave humans or drive humans into extinction is absurd, an unwelcome distraction from the real social costs that AI could potentially impose.

Even some governistas who are skeptical of direct government regulation would prefer to see guardrails put in place, albeit by the private sector. For example, Eric Schmidt has argued that governments currently lack the expertise to regulate AI and should let the technology companies self-regulate. This self-regulation, however, harkens back to an earlier era of industry-captured regulation, when the Interstate Commerce Commission, the Federal Communications Commission, and the Civil Aeronautics Board often tilted regulation intended to serve the public interest toward industry giants, blocking new rival startup entrants and protecting established players from what AT&T founder Theodore Vail labeled “destructive competition.”

Other governistas point out that there are problems potentially created by AI that cannot be solved through regulation alone. For example, they note that AI systems can fool people into thinking they reliably offer up facts, to the point where many may abdicate their individual responsibility for judging what is trustworthy and rely totally on AI systems, even though versions of AI have already killed people, as in autopilot-driven car crashes and careless medical malpractice.

The messaging of these five tribes reveals more about the experts’ own preconceptions and biases than about the underlying AI technology itself. Nevertheless, these five schools of thought are worth investigating for nuggets of genuine intelligence and insight amidst the artificial intelligence cacophony.
