Management in Practice

For Companies Eyeing AI, the Question Is ‘When, Not If’

Generative AI has so much potential that firms in every sector should be paying close attention, said two Yale SOM alums in the field. But each company will need to decide whether to move quickly or wait until issues are ironed out.

As computers evolved from room-sized contraptions wrangled by specialists to machines that displaced typewriters on more and more desks, the change was met with excitement, anxiety, hype, and skepticism. Workers feared being replaced by machines; managers, despite the eye-watering expense of equipping everyone with a computer, weren’t sure whether the devices actually made workers more productive.

Today, businesses face a similar choice. A nascent, probably transformative technology is emerging. Generative AI is reshaping how we access and interact with information. Every sector is either feeling the impact now or will soon. The release of ChatGPT in November 2022 unleashed phenomenal buzz and a torrent of money into the tech sector. But how soon this technology will be broadly useful remains an open question.

One threshold has been passed, according to James Lin ’15, head of AI/ML innovation at Experian; the question facing companies is “when, not if” they will incorporate AI.

Those who move now will be part of figuring out how best to use a technology with enormous but undefined potential. “It’s very unexplored,” said Jonas Dahl ’14, a senior product manager with Microsoft’s customer data platform. “There’s a lot of excitement.”

Dahl and Lin spoke during a Yale SOM discussion on September 22. The event was moderated by Alex Burnap, assistant professor of marketing, whose research focuses on using generative AI and other forms of machine learning to improve product management and design.

Microsoft has invested billions in its partnership with OpenAI, ChatGPT’s developer. AIs built around a large language model (LLM), Dahl noted, allow non-technical people to access data easily. “Before you had to write SQL or use a clunky query builder,” he said. “LLMs provide a completely different interface. It’s natural language. I can say, ‘How many people aged between 25 and 35 bought a handbag in California last week?’”
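
To make the contrast concrete, here is a minimal sketch, in Python, of the kind of interface Dahl describes. The table and column names, the schema, and the generate_sql helper are hypothetical stand-ins for illustration, not Microsoft’s actual platform.

    # Hypothetical sketch of an LLM-backed query interface. The schema,
    # column names, and generate_sql helper are invented for illustration.
    SCHEMA = """
    purchases(customer_id, customer_age, state, product_category, purchase_date)
    """

    QUESTION = ("How many people aged between 25 and 35 bought a handbag "
                "in California last week?")

    # Roughly the SQL a non-technical user would otherwise have had to write:
    EQUIVALENT_SQL = """
    SELECT COUNT(DISTINCT customer_id)
    FROM purchases
    WHERE product_category = 'handbag'
      AND customer_age BETWEEN 25 AND 35
      AND state = 'CA'
      AND purchase_date >= CURRENT_DATE - INTERVAL '7 days';
    """

    def answer(question: str, generate_sql) -> str:
        """generate_sql stands in for whatever LLM call the platform uses:
        it receives the schema plus the plain-English question and returns SQL."""
        prompt = f"Given this schema:\n{SCHEMA}\nWrite SQL for: {question}"
        return generate_sql(prompt)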

Drawing on the data, the AI can then generate marketing emails. And it can deliver a 360-degree view of customers synthesized from data currently walled off in sales, call center, and transaction silos. That means, when properly trained and constrained, a generative AI can improve productivity, analysis, and insight. But both Lin and Dahl acknowledged that knowing whether an AI is properly trained and constrained isn’t straightforward.

Burnap offered an example of the challenge of training an AI. If a company like Yelp wanted to use an LLM-based AI to offer an overall assessment drawn from all the user-submitted reviews of a restaurant, the AI would have to be fine-tuned to understand which reviews were helpful and which were not, which were real and which were spoofs. That requires tedious, time-intensive labeling of reviews by humans, followed by testing to confirm that the AI learned the right lessons.
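
A minimal sketch of that labeling-and-testing step, assuming a hypothetical set of hand-labeled reviews (the example texts and label names are invented, not Yelp’s actual data):

    import random

    # Hypothetical human-labeled reviews of the kind Burnap describes.
    labeled_reviews = [
        {"text": "Great pasta, but service is slow on weekends.", "label": "helpful"},
        {"text": "BEST PLACE EVER!!! click www.spam-link.example", "label": "spam"},
        {"text": "Never actually been, but I heard it's bad.", "label": "not_helpful"},
        # ...thousands more, each judged by a person
    ]

    # Hold out a test set so you can check that the fine-tuned model actually
    # "learned the right lessons" rather than memorizing the training labels.
    random.seed(0)
    random.shuffle(labeled_reviews)
    split = int(0.8 * len(labeled_reviews))
    train_set, test_set = labeled_reviews[:split], labeled_reviews[split:]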

“It’s a lot of work that companies have to account for,” Lin said. There are automated ways to fine-tune an AI, but, he said, Experian has found it worthwhile to use an instruction-based format even though it requires human intervention, often from people with domain-specific knowledge like managers, product specialists, and business analysts. “The way that we’re looking at it is that the data that we’re enriching is our core asset,” he said.
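
What an instruction-based record might look like is sketched below; the field names and the credit-dispute example are hypothetical, chosen only to suggest how a domain expert’s knowledge ends up in the training data.

    import json

    # Hypothetical instruction-style fine-tuning record. A domain expert (say,
    # a credit analyst or product specialist) writes the instruction and the
    # desired output by hand; the field names are illustrative.
    record = {
        "instruction": "Classify the reason for this customer's dispute.",
        "input": "My statement shows a charge from a merchant I never visited.",
        "output": "unauthorized_transaction",
    }

    # Records are typically collected one per line (JSONL) for fine-tuning.
    with open("finetune_examples.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")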

Prioritizing quality data over a specific AI platform is wise when things are changing so quickly, Lin added: “Strategically the most important thing is to remain flexible, because nobody knows what’s around the corner.”

Burnap pointed out that while big players like OpenAI and Google get much of the attention, there are already over 150,000 LLMs. When companies decide to move forward with AI, one of the key early choices is whether to use an open-source AI, which offers flexibility in fine-tuning for a specific capability, or a commercial product, which doesn’t require much expertise but is a black box.
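
As a rough illustration of the open-source path, the sketch below pulls a small publicly available model with the Hugging Face transformers library; “gpt2” is only a stand-in for whichever open model fits the use case, and the commercial alternative would replace all of this with a vendor’s hosted API call.

    # Sketch of the open-source route: download weights you can inspect,
    # fine-tune, and run yourself. Assumes the `transformers` package is
    # installed; "gpt2" is just a small placeholder model.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    print(generator("Our customers in California tend to", max_new_tokens=20))

    # The commercial route swaps this for a hosted API: far less expertise
    # and infrastructure required, but the model stays a black box.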

“It really depends on the use case,” Dahl said. “For some applications you want to look under the hood and understand everything that’s going on. For some, it’s fine to use a black box.”

Burnap underscored that bias, misinformation, and hallucinations all continue to be significant issues with LLMs. While it might be less efficient, Dahl said, he would review marketing emails that an AI generates rather than letting them be sent automatically. A human review could prevent brand-damaging problems if there were something offensive in the email.

It’s also prudent to limit an AI to answering only questions within its area of knowledge, he noted. That way it will respond “I don’t know” rather than make up an answer. “It’s very important to understand that as a product manager you can really influence the risk you’re taking,” Dahl said.
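
A minimal sketch of both guardrails Dahl mentions, with hypothetical names (the system prompt wording and the generate callback are assumptions, not Microsoft’s actual product):

    # Two guardrails sketched with hypothetical names: a scope-limiting
    # system prompt, and a review queue so generated marketing emails are
    # approved by a person instead of being sent automatically.
    SYSTEM_PROMPT = (
        "Answer only questions about our customer data. If a question falls "
        "outside that scope or the data is missing, reply exactly: I don't know."
    )

    review_queue = []  # drafts wait here for a human to approve or edit

    def draft_marketing_email(generate, audience: str) -> None:
        """`generate` stands in for whatever LLM call the platform exposes.
        Drafts are queued for human review rather than sent automatically."""
        prompt = f"Write a short promotional email for {audience}."
        review_queue.append(generate(SYSTEM_PROMPT, prompt))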

However, sensible guardrails can’t eliminate all bias or hallucinations. Lin pointed out that it’s currently impossible to predict where hallucinations will pop up: “That’s the crux of the problem.” For Experian, which operates in the regulated financial sector, that means, Lin said, “If there is a key financial decision, there has to be a human in the loop.” He expects that policy to continue for the foreseeable future.

So when should companies bring AI on board? The question, Lin said, is whether “you want to be an early adopter and face the problems that we’re talking about—or do you come in later when some of these things are solved.”
