The Right Approach to State Regulation of AI
Yale SOM’s Jeffrey Sonnenfeld and co-author Stephen Henriques write that Connecticut should begin by applying its existing consumer protection, civil rights, and data protection laws to artificial intelligence, rather than scrambling to create new laws that could hamper innovation in the state.
This commentary originally appeared in the Hartford Courant.
In the last week, financial markets and the tech sector experienced a convulsive shock akin to the Russian Sputnik surprise of 1957. While recent advancements—honestly achieved or not—by DeepSeek have leaders reconsidering their approach to the development of artificial intelligence, it is also a reminder of how much America must supercharge its efforts in the space to remain globally competitive. Just as the U.S. cannot now risk falling behind other nations, Connecticut cannot risk falling behind other states.
In developing a policy stance on AI, Connecticut has twin objectives which need not be thought of as incompatible. The first is positioning the state to be at the forefront of this transformational technology. The other is ensuring the appropriate protections are in place for Connecticut residents. Accomplishing this two-pronged approach requires a mixture of applying existing legal guidelines and implementing a few very targeted new ones. In short, there are “low-hanging fruit” issues that the states should address, alongside far more ambitious objectives that are too consequential to be left to solitary state intervention.
Connecticut narrowly escaped the harmful effects of a well-intentioned but ill-conceived bill in the 2024 legislative session that would have addressed some “low-hanging fruit” issues but also placed burdensome regulatory responsibilities on companies developing and deploying generalized AI technology. Fortunately, the bill, which had been passed by the Senate, was ultimately blocked by the speaker of the House with the support of the governor. Both leaders rightly feared the legislation would have hampered innovation and hindered adoption of AI.
Our proprietary research found that story to be a familiar one in state assemblies across the U.S. Since 2023, state lawmakers have been frantically debating how to approach the risks inherent in AI. In 2024, nearly 700 AI-related bills were introduced in state assemblies, and a little over 100 were passed or enacted. The glut of legislation and other state legal actions seeking to regulate AI provides ample guidance for Connecticut legislators to draw on for the 2025 session.
While two states, Colorado and Utah, did successfully pass generalized AI legislation, they only did so “with reservations” and specifications that either delay the law’s effective date by multiple years or automatically repeal the law in one year, in recognition of the high burden such disclosures, audits, and other reporting requirements place on businesses and tech innovation. These parameters call into question the effectiveness, if not the purpose, of such legislation.
Most other states chose to focus only on “low-hanging fruit” issues specifically posed by AI, such as fraudulent AI-generated political advertisements that intentionally misinform voters about their voting rights or misrepresent candidates, and AI-generated sexually explicit images of children that would be classified as child sexual exploitation material.
Perhaps of most interest for Connecticut lawmakers to consider, however, is the approach taken in Massachusetts by Attorney General Campbell. Campbell aptly recognized that AI is simply a technology and therefore most uses and applications fall under existing consumer protection, civil rights, and data protection laws. Utah, Texas, and California soon followed with their respective actions. From our conversations with investors, technologists, and executives, we learned that they often direct their companies to operate as if existing legal guidelines apply. One well-known investor and technologist told us, “The no-new-legislation angle…in my view, is the appropriate approach…in most cases, you could replace the word AI with ‘technology,’ and it would make more sense.”
So as the 2025 legislative session in Connecticut enters its most active period, it is of paramount importance that state leaders recognize AI for what it really is—technology that has long been embedded in other applications less apparent to everyday consumers. Most people have been engaging with AI daily—when they order an Uber, shop Amazon’s recommendations, or explore Netflix’s suggested “what to watch next”—with little question as to whether existing legal guidelines apply. Nothing changes with the large language models, such as ChatGPT, Grok, or Claude.
Under such an approach, Connecticut has an advantage over most states, benefiting from one of the most expansive consumer protection laws, known as the Connecticut Unfair Trade Practices Act, or CUTPA. CUTPA offers the state an opportunity to provide every type of business—from frontier AI companies to mom-and-pop shops deploying basic AI technology—clear guidance under a system familiar to all. One senior expert at a leading AI startup told us that such clear and familiar regulatory guidance is a major attraction and could be a competitive advantage for Connecticut.
The path forward is evident. State leaders should request that Attorney General William Tong issue an advisory on the application of existing consumer protection, intellectual property, data privacy, civil rights, and other guidelines to AI. Where obvious gaps remain, such as child sexual abuse material or the distribution of other intimate images, targeted legislation should be adopted.
Thinking longer-term, a task force of top AI executives, entrepreneurs, investors, policymakers, and experts from academia and economic development organizations should be established to develop sector-specific guidelines and proactively address emerging spill-over threats as AI technology matures, wherever existing protections do not apply or should be amended.
To promote AI-related economic development, a non-governmental organization should launch and manage a statewide AI testing “sandbox” for those higher-risk uses of AI as part of a broader AI-support hub for key state industries, such as life sciences, aerospace and defense, or advanced manufacturing. Additional incentives targeting segments of the AI ecosystem, such as data center attraction and alternative energy production, should be deployed and actively adapted to respond to the latest advancements in AI, as we have seen the recent emergence of DeepSeek call existing energy demand assumptions into question.
The focus of state lawmaking should remain on the achievable rather than the aspirational. The achievable are those “low-hanging fruit” issues that require an immediate state response where no protections against the clear misuse or abuse of AI exist. The aspirational are those matters best served by federal regulators or a highly coordinated multi-state consortium, such as countering the potentially harmful effects of AI on the nation’s workforce, implementing a national standard for cybersecurity measures, formulating the appropriate rights of personhood, or establishing legal definitions related to AI’s responsible use.
The consequences of getting AI regulation wrong are significant. For instance, the Lamont administration is planning to make significant investments alongside Yale and UConn in quantum computing and AI applications, with a core component dedicated to workforce training. However, no such investment will pay off if the regulatory environment pushes our state-trained programmers to move to lighter-touch regimes like Georgia’s.
As the recent DeepSeek revelations have shown us, AI is still too nascent to fully understand or appreciate its potential. Unnecessary disclosures, impact audits, and other regulatory reporting would only place the state at a disadvantage against other states fiercely competing for the vast opportunities associated with AI. Moreover, it would hamper what innovation Connecticut can contribute to America’s “AI race” with other countries, not to mention undermine the state’s competitive posture relative to 49 other states. An economic environment where innovation is encouraged and consumers are protected is possible, but a new sweeping regulatory program is not the answer.