Management in Practice

Is AI a Savior or a Peril—or Both?

With applications of artificial intelligence spreading from the realm of data science to the apps at your fingertips, a day-long conference at the Yale School of Management considered how to unlock the technology’s positive potential while containing possibilities for misuse, misinformation, and labor-market mayhem.

The wizardry of ChatGPT captured the world’s attention when it was released in 2022, making it one of the fastest-growing consumer applications in history. Since then, AI companies and startups have attracted billions of dollars in new investment, and the technology has continued to dazzle. Consumer-facing AI programs can now compose love ballads, or fabricate realistic video of cowboys riding unicorns and shooting water pistols, from simple text prompts.

There are countless reasons to be curious about, intrigued by, or wary of AI. But Rob Thomas, senior vice president of software and chief commercial officer at IBM and the lead speaker at Yale SOM’s Responsible AI in Global Business conference on March 1, emphasized that the primary reason to pay attention is that AI may be the best hope for future economic growth, and for the human flourishing that accompanies such growth. He presented a simple formula to back his argument:

GDP growth = population growth + productivity growth + debt growth

With population growth and debt growth under tight constraint, the remaining lever is productivity growth per person, and that’s where AI, with its potential to augment human capability, comes in.
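To make the arithmetic concrete, here is a minimal sketch using purely hypothetical numbers (they are illustrative, not figures from Thomas’s talk): if population growth is modest and debt is held flat, nearly all of a 3% growth target has to come from productivity.

```python
# Illustrative only: hypothetical figures, not data from the conference.
# Thomas's decomposition:
#   GDP growth = population growth + productivity growth + debt growth
population_growth = 0.005   # assume 0.5% annual population growth
debt_growth = 0.000         # assume the debt contribution is held flat
target_gdp_growth = 0.030   # assume a 3% GDP growth target

# With population and debt constrained, productivity must supply the rest.
required_productivity_growth = target_gdp_growth - population_growth - debt_growth
print(f"Productivity growth required: {required_productivity_growth:.1%}")  # -> 2.5%
```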

“We don’t have a choice,” Thomas said. “If your focus is growth—and growth is the one thing that has brought improvements to the world…we have to do AI in the right, responsible way while accepting the disruption that it may present.”

Thomas argued that “navigating the paradox” of responsibility and disruption will require leadership, new skills, and transparency about the sources and uses of AI. “The idea of navigating the paradox is to not be scared by the risks, to understand the risks, but to act anyway. You can already see this happening in companies around the world as they start to differentiate on financial performance. The companies that are leaning in are delivering better growth.”

Throughout the day, leaders from academia, business, and government discussed what this paradox means in practice.

Competitive imperative

The word “hallucination,” with its connotations of a deeply subjective experience, used to be distinctly human. Over the last two years, as more people have interacted with generative AI (genAI) models and experienced their tendency to fabricate incorrect answers with confidence, the word has been pressed into service to describe this novel phenomenon. In one recent study, for example, researchers queried three of the biggest models for legal information and found high error rates in the responses, with the models offering made-up precedents and mixing up the authors of opinions. Companies competing to lead the way in AI are venturing into uncharted terrain, trying to account for unprecedented challenges as they go.

“LLMs are tools for creativity and productivity,” said Samuel Payne, head of creative and content at Google and a participant in the AI & the End User panel discussion. The underlying technology is essentially guessing, Payne said, so even with extensive testing, errors are likely as teams continue to build these products; it’s impossible to know in advance how they will behave in every situation. “That means the guardrails we put in place are really important, but we also need to be ready to see how people use these products and adapt to use cases we’d not expected.”
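Payne’s point that the underlying technology is “essentially guessing” can be pictured with a toy sketch (purely illustrative, and not how any company’s actual product works): a language model assigns probabilities to candidate next words and samples one, so a fluent but wrong answer can emerge with the same apparent confidence as a correct one.

```python
# Toy illustration of probabilistic next-word "guessing" -- not any real model.
import random

# Hypothetical probabilities for the word after "The capital of Australia is"
next_word_probs = {
    "Canberra": 0.55,   # correct
    "Sydney": 0.30,     # plausible but wrong
    "Melbourne": 0.10,  # plausible but wrong
    "Perth": 0.05,      # plausible but wrong
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Pick one word at random, weighted by its probability."""
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

# Most samples are right, but wrong answers come out with the same fluent confidence.
for _ in range(5):
    print("The capital of Australia is", sample_next_word(next_word_probs))
```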

It can’t be AI for AI’s sake. The purpose of business ought to be to solve the problems of people and the planet profitably.

The panel also included representatives from Salesforce and Adobe, two companies shaping the way AI is being integrated into a range of functions, from games and search to visual design to the data structures undergirding the operations of many companies.

Sabastian Niles, the president and chief legal officer of Salesforce, described how the company approaches AI development by focusing on how to enable their customers to harness their own data and connect with their stakeholders. This leads to a focus on meaningful problems. “It can’t be AI for AI’s sake,” he said. “The purpose of business ought to be to solve the problems of people and the planet profitably—hopefully in a way that we’re learning and listening, and, yes, we’re leading. But it’s not business doing things in ways that exploit problems or exacerbate problems.”

Google’s Samuel Payne

AI miscreancy

A slew of posts with AI-generated images claimed that the 2023 Maui wildfires were started by the government. In Slovakia, a “deepfake” audio clip of a presidential candidate popped up on social media just two days before the election. An AI-generated selfie of the protester who stood in front of a column of tanks in Tiananmen Square in 1989 briefly topped search results, displacing the genuine historical image.

Beth Goldberg ’18, the head of research and development at Jigsaw, a Google unit that explores threats to open societies, explained how two capabilities combine to accelerate the production of misinformation by malevolent actors: the ability to create fake images and videos that are nearly indistinguishable from reality, and the ability to automate processes in ways that vastly extend the reach of false stories. Goldberg emphasized that the effects of misinformation can’t be ignored. “False and misleading information slowly erodes trust in institutions. We’re seeing it all across the world, and it’s especially damaging to confidence in democracies.”

AI is, most of the time, making problems we already had worse or solutions we already had better. The difference is in terms of how much more intense the solution or the problem has become.

Ziad Reslan, senior product policy manager for multimodal genAI at Google, is often the person helping to draw policy lines to combat misinformation and other forms of harmful content. He emphasized that Google takes the safety of the content its models generate very seriously, with multiple rounds of “red teaming” and testing done before any genAI product is launched. He added that as genAI takes hold more broadly across the world, the challenge will be thinking through how different countries and cultures will end up using (and misusing) the technology. He recalled how, when he first began working in content moderation, he learned a great deal from observing his own mother in Beirut as she used WhatsApp to share medical advice, and likely misinformation, with her friends and neighbors. With that in mind, he cautioned, “We need to slow down and think through how genAI will be used in each cultural context.”

Throughout the day, speakers discussed other potential risks associated with AI, including threats to privacy and data security, bias, and copyright infringement. The pace of development and the unprecedented nature of the technology make it hard to predict where the most serious harms will come from, and thus hard for governments to know how to respond. Luciano Floridi, the founding director of the Digital Ethics Center at Yale, summed up the quandary in the closing keynote session: “AI is, most of the time, making problems we already had worse…or solutions we already had better,” he said. “The difference is in terms of how much more intense the solution or the problem has become [with AI]. It’s like putting everything at double speed when it comes to bias, when it comes to privacy, or when it comes to efficacy, efficiency.”

Who’s the boss?

In October 2023, the Biden Administration issued its executive order on the safe, secure, and trustworthy development and use of artificial intelligence, which establishes new standards for AI safety and security and aims to protect Americans from the risks of AI systems. Two months later, the EU reached a provisional agreement on the AI Act, which promises to be the first comprehensive law regulating AI. China has its own set of regulations and strategies governing the development of AI. So do Japan, the UAE, and Peru, among others. As Luciano Floridi pointed out, the various approaches cover much of the same territory, but each authority takes a slightly different angle.

This leaves open many questions about how AI will ultimately be regulated—even as companies rush ahead with new initiatives.

David Kappos, a partner at Cravath, Swaine & Moore LLP and former director of the U.S. Patent and Trademark Office, argued that existing legal frameworks will be able to adapt to cover many of the emergent questions around AI. He pointed to the lawsuits brought by the New York Times and other media companies against OpenAI for using copyrighted material to train AI models, which hinge on the concept of fair use. “The legal issues, I believe, are going to begin sorting themselves fairly quickly…and we’re going to see that our existing legal doctrines cover most of these issues,” he said.

JoAnn Stonier, the Mastercard Fellow of Data & AI, described how Mastercard already operates in more than 210 jurisdictions and is familiar with the challenge of meeting disparate regulatory demands. She said the company’s approach is to use the “highest common denominator” and provide consumers everywhere with stringent protections. “We are going to have to look at what are we trying to achieve,” she said, “not just the output of generative AI but what’s the outcome we’re trying to achieve and what is the implication on human beings and how severe is it if we get it wrong.” She added that companies need to assess the level of potential risk in their products and increase the level of review accordingly.

Jeff Alstott, the former director for technology and national security at the National Security Council, argued that AI is too broad a category to regulate with a single set of rules; appropriate regulation should weigh the risk of any given application and consider how humans need to be involved in decisions. He suggested that the FAA’s approach to regulation, which applies stricter scrutiny where the risks are greater, could be a model for AI. However, he also said that it’s not yet clear how some issues should be handled. “We still don’t know what we want as a society,” he said. “There are certain tensions between different concerns that we’ve talked about today such that it’s not obvious what is the right thing for individual liberty or rights or growth.”

One thread throughout the day’s conversations was whether AI systems will become fully autonomous and function without human direction. This is the source of some of the direst fears around AI—think Skynet in the Terminator movies or the hypothetical paperclip-producing AI that decides to eliminate humans to maximize its paperclip output—and some of the brightest hopes, like sitting back and reading a book while your car deals with traffic, construction, and detours all on its own.

While AI is radically new, it is also the latest development in a long relationship between humans and technology, which stretches back to the first flaked stone tools. Manuela Veloso, the head of AI research at JPMorgan Chase, argued that humans will always be a part of the equation.

“Business is a collaboration between the knowledge of humans and these machines,” she said. “There will be a moment when we will transition and we trust AI.” She described how people have come to trust earlier technologies, including washing machines and GPS directions. “How many of us follow Waze blindly—turn left, turn right, go straight?… The science of AI and engineering is to provide the best algorithms in integration with the humans that are enabled to incorporate feedback. The goal is for the human not to be checking everything the AI does. But that is the result of a journey.”

AI & your job

Whether everyone realizes it or not, we’re already well into the AI era in the workplace. AI is screening job applicants and creating training materials once people are hired. No doubt, ChatGPT is augmenting many a job applicant’s cover-letter-writing abilities. AI is embedded in productivity software as well as the programs that route delivery drivers, and it’s being used in legal research and drafting. In other words, it is affecting all kinds of workers.

“Most businesses are preoccupied, and rightly so, in finding talent and ensuring that they are able to be competitive in finding the best talent out there, so automated hiring systems are quite popular,” said Ifeoma Ajunwa, the Asa Griggs Candler Professor of Law at Emory University School of Law and author of The Quantified Worker. She added some caveats: Not everything that is labeled “AI” justifies the hype that is attached to it. And AI applications can both magnify and obscure existing bias when the underlying source of data isn’t treated with care.

Emory University’s Ifeoma Ajunwa

Liz Grennan, a partner and the global co-leader of digital trust at McKinsey, urged audience members to think about how their own work can leverage AI’s strengths, which she described as the four Cs: coding, concision, customer service, and creation. “If you’re looking for near-term impact in the workplace, these are the high-value use cases,” she said. “If you extrapolate, you say, am I in any of these areas?… How can I use AI as my copilot?”

Ajunwa and Grennan also discussed the trend of companies using wearable technology, phones, and other systems to track the behaviors and performance of workers in greater and greater detail. “We all have data profiles that are being built as we speak,” said Grennan. “Sitting here, right now, the data is being recalibrated on each one of us. I think that has both a wonderful implication, in terms of personalization journeys, and an alarming implication if everything you do, including microexpressions, gets measured and tracked.”

Ajunwa pointed out the irony that workers are being used to generate the data sets that train the AI systems that may replace them. “One of my suggestions is, if we are using humans as the data subjects for automation, I call it ‘captured capital.’ We’re capturing all this data from workers in the workplace. We need to reinvest that capital back into workers. And we do that through upskilling, allowing workers to gain skills that will enable them to be humans at the helm, and reskilling, encouraging workers to learn other skills that are still very difficult for machines to do.”

McKinsey’s Liz Grennan

