Can Connecticut Break Its AI Regulation Deadlock?

With two years of failed attempts to pass comprehensive artificial intelligence legislation, Connecticut finds itself at a critical juncture. The debate pits lawmakers championing robust consumer protections against an administration and business community wary of stifling a technological revolution. To unravel this complex policy stalemate, we spoke with Donald Gainsborough, head of Government Curated and a leading expert in state-level technology policy and legislation. He sheds light on the core disagreements over algorithmic bias and data privacy, explores the lessons from other states’ regulatory struggles, and maps out the potential path forward for Connecticut in the rapidly evolving world of AI.

After comprehensive AI legislation failed for a second year, what do you see as the primary points of friction between pro-regulation lawmakers and the governor’s administration, and what practical compromises might finally lead to a successful bill in 2026?

The fundamental disconnect boils down to a classic tension between caution and ambition. On one side, you have legislative leaders like Senator Maroney who see the rapid, unregulated expansion of AI and feel a palpable urgency to erect guardrails. They’re hearing from constituents who are deeply concerned about their privacy and the potential for automated systems to make life-altering decisions without transparency. They tried to thread this needle with Senate Bill 2, which by the end had already been scaled back to focus largely on disclosure. The frustration was immense when even that version, which had bipartisan support in the Senate, couldn’t get a vote in the House.

On the other side, the Lamont administration is looking at the bigger economic picture. They’re promoting Connecticut as a hub for innovation, a “second industrial revolution,” and they fear that being the first mover on stringent regulations could scare away the very tech investment they’re trying to attract. The administration’s stance is essentially, “Let’s not create a fractured landscape of state rules; let another state take the lead.”

For a bill to pass in 2026, the compromise will have to be tangible. It likely means moving away from a single, all-encompassing bill and focusing on targeted, incremental wins. Perhaps they codify the “regulatory sandbox” idea from S.B. 2 to give businesses a safe space to innovate, while simultaneously passing a narrower bill focused specifically on something like biometric data privacy. This allows both sides to claim a victory—one for fostering innovation, the other for protecting consumers.

Business groups have voiced strong fears that regulations targeting algorithmic bias could saddle companies with immense compliance costs. Could you walk us through what a balanced set of guardrails might look like—one that addresses discrimination without creating a hostile environment for innovation, especially for smaller businesses?

This is the thorniest part of the debate, and the business community’s concerns are not unfounded. Their primary fear, as voiced by the Connecticut Business and Industry Association, is that the state will create a system where every business is presumed to be discriminating until it can prove otherwise. That feels like a guilty-until-proven-innocent framework, and the paperwork alone could be crushing.

A balanced approach avoids this. Instead of mandating exhaustive, continuous impact assessments for every AI tool a business uses, the legislation could implement a tiered, risk-based system. For “high-risk” applications—like AI used in hiring, lending, or criminal justice—the compliance burden would be higher. A company using such a system would be required to conduct and document a thorough impact assessment before deployment, outlining potential biases and the steps taken to mitigate them. For lower-risk applications, like an AI tool that optimizes warehouse inventory, the requirements could be as simple as a disclosure to employees that the system is in use.

A successful compliance regime would also involve creating safe harbors. For example, if a small business uses an AI hiring tool from a vendor that has already been certified as compliant with state fairness standards, that business could be shielded from primary liability. This encourages vendors to build fairness in from the ground up and relieves the small business owner of the burden of becoming an AI ethics expert overnight. It’s about focusing scrutiny where the potential for human harm is greatest.

The proposed “protect, promote, and empower” framework sounds compelling, but it can also seem abstract. Could you give us a concrete policy example for each of those three pillars and suggest how you would measure its success?

Certainly. It’s a useful framework because it moves the conversation beyond just restriction. For the “protect” pillar, a concrete policy would be the proposed ban on facial recognition software in retail stores. This is a direct response to consumer fears about the collection and storage of their biometric data without their consent, as seen with chains like ShopRite and Wegmans. Success here is straightforward to measure: You would track the number of retailers deploying this technology, aiming for zero. You’d also monitor consumer complaints related to biometric data collection and conduct public polling to see if residents feel their privacy in public commercial spaces is more secure.

For “promote,” the policy could be the creation of the “regulatory sandbox” that was initially part of Senate Bill 2. This would establish a program, likely under the Department of Economic and Community Development, where startups and established companies could test new AI products in a live but controlled environment with relaxed regulatory obligations for a set period. Success would be measured by the number of companies participating in the sandbox, the amount of venture capital invested in those companies, and the number of new AI-driven products that successfully launch into the broader market after graduating from the program.

Finally, for “empower,” a great example would be funding and deploying AI-powered systems within state agencies to improve constituent services. Imagine an AI chatbot on the Department of Labor website that can instantly and accurately answer complex unemployment insurance questions 24/7, reducing call wait times and freeing up human agents for more complex cases. Key performance indicators for this would be clear: a measurable decrease in call center wait times, an increase in user satisfaction ratings for the agency’s website, and an analysis of cost savings from improved efficiency.

The use of facial recognition in grocery stores has triggered a push for an outright ban. Looking beyond a simple ban, what specific data privacy measures could effectively govern the collection and storage of biometric data by businesses, and what are the biggest hurdles to enforcing such rules?

While a ban is a clear and decisive action, a more nuanced regulatory approach could provide strong protections while allowing for potential future uses that might have public benefit. A robust governance framework would be built on a few key principles. First is mandatory, explicit, and informed consent. This isn’t a line buried in a 50-page terms-of-service agreement; it’s an active, clear opt-in before any biometric data is collected. Second are strict data minimization and purpose limitation rules. A business would only be allowed to collect the data necessary for a specific, disclosed purpose and could not repurpose it for marketing or share it with third parties without separate consent. Third, there must be clear data retention and deletion policies, requiring companies to securely destroy biometric data after its intended use is complete. Finally, granting consumers a “right to be forgotten”—the ability to demand the deletion of their biometric data at any time—is crucial.

The biggest enforcement challenge is verification. How does a state agency audit a company’s internal data storage systems to ensure they’ve actually deleted the data? It’s not like a physical inspection. This would require creating a new class of certified third-party auditors who can technically vet these systems, which adds complexity and cost. Another hurdle is the sheer proliferation of the technology; enforcement becomes a massive game of whack-a-mole as more and more businesses, from large chains to small boutiques, start using off-the-shelf security cameras with built-in facial recognition capabilities.

Colorado’s experience with its comprehensive AI law has been a bit of a cautionary tale, with implementation challenges and high costs leading to potential revisions. What are the most critical lessons Connecticut lawmakers should draw from Colorado’s experience as they draft their own legislation?

Colorado’s experience is an invaluable, real-time case study for Connecticut. The first, and perhaps most important, lesson is to define everything with extreme precision. Vague terms like “algorithmic discrimination” or “meaningful human oversight” can sound good in a press release but become nightmares to implement when businesses and regulators can’t agree on what they actually mean in practice. Connecticut needs to write its definitions with technical and legal clarity from day one. A second lesson is to build the financial and administrative framework before the law goes into effect. Colorado is grappling with higher-than-expected implementation costs; Connecticut should conduct a thorough fiscal analysis and pre-allocate resources for hiring regulators with the necessary technical expertise. You can’t regulate complex algorithms with a team that doesn’t understand them.

What to avoid is just as important. Avoid writing the law in a way that is technologically rigid. Instead of banning a specific type of algorithm, regulate the outcome or the risk. This makes the law more future-proof. Also, avoid creating a single, monolithic compliance pathway. The needs of a global bank using AI for fraud detection are vastly different from a local retailer using it for inventory management. The law must have the flexibility to accommodate that diversity.

With the federal government signaling its preference for a single national standard and even threatening to withhold funds, what are the primary risks and benefits for Connecticut if it decides to enact AI regulations that are significantly stricter than its neighbors?

The risks are very real and are exactly what the governor’s office and business groups are worried about. The primary risk is competitive disadvantage. If Connecticut imposes significant compliance costs and legal liabilities that don’t exist in New York or Massachusetts, you could see a “regulatory flight” of tech startups and investment capital. A company might choose to headquarter across the border to avoid the headache, even if it still serves Connecticut customers. This creates a patchwork of regulations that is a nightmare for any business operating regionally or nationally, leading to higher legal fees and operational complexity. I remember a similar situation years ago as state-level data breach notification laws proliferated; companies had to navigate dozens of different notification timelines and requirements, and it was chaos.

However, the benefits could be substantial. By enacting strong, clear, and fair AI regulations, Connecticut could brand itself as the “gold standard” for responsible AI. This can become a competitive advantage in itself, attracting consumers who value privacy and businesses that want to build trust with their customers. It also gives Connecticut a powerful seat at the table in shaping any future national standard. Federal lawmakers often look to successful state-level experiments as models. By getting it right, Connecticut could define the terms of the national debate rather than just reacting to them.

What is your forecast for AI legislation in Connecticut?

Given the gridlock of the past two years, I believe we will see a strategic shift away from a single, comprehensive “big bang” bill in 2026. The political will just isn’t there to overcome the deep divisions between the pro-regulation legislature and the more cautious administration. Instead, my forecast is for a more targeted, piecemeal approach. I expect lawmakers to focus on proposals that have clear, tangible consumer benefits and are easier to rally public support around. The ban on facial recognition in retail stores is a perfect example of this—it’s an issue that people can easily understand and feel strongly about. We will likely see a package of smaller bills addressing specific areas like biometric data privacy, deepfake criminalization, and perhaps new funding for AI workforce training.

The larger, more contentious issue of regulating algorithmic bias in business practices will likely be pushed to a future session, perhaps after seeing how the revisions in Colorado play out and whether a federal framework begins to take shape. This incremental approach allows lawmakers to claim progress and deliver protections for their constituents, while the administration can avoid the kind of broad, sweeping regulation that it fears will harm the state’s business climate. It’s a compromise, but it’s also the most probable path forward.
