Can States Regulate AI When Congress Won’t?

A profound chasm is widening between the breakneck speed of artificial intelligence innovation and the deliberate, often gridlocked, pace of federal governance, creating a volatile landscape where individual states are now drawing the front lines in the battle to regulate this transformative technology. As algorithms reshape industries and daily life, the debate over who holds the authority to set the rules—a silent Congress or a collection of proactive states—has escalated from a theoretical discussion into a high-stakes political conflict with immediate, tangible consequences for public safety and economic fairness. This growing friction highlights a fundamental question of American federalism in the digital age: when the central government fails to lead, who is permitted to step in?

The Silence from the Capitol: When Technological Leaps Outpace Legislation

The struggle to craft comprehensive AI policy at the federal level is not an isolated event but rather the latest chapter in a long history of legislative inaction on critical technology issues. For years, Congress has been unable to pass a national data privacy law, leaving a void that states have filled with their own distinct regulations. Similarly, debates over net neutrality and the protection of minors on social media platforms have yielded more discussion than decisive action, creating a consistent pattern of federal paralysis. This governance vacuum leaves emerging technologies like AI to evolve in an unregulated space, forcing state legislatures to confront complex societal risks without a national framework for guidance.

In response to this federal inertia, states have increasingly assumed the role of legislative laboratories. This trend has resulted in a complex and sometimes conflicting patchwork of laws governing everything from consumer data to algorithmic transparency. California, a long-standing leader in tech policy, has often set the pace, but other states are following suit with their own initiatives. This decentralized approach, while responsive to local concerns, raises significant challenges for companies navigating a maze of different compliance standards and complicates the goal of establishing a unified national strategy for technological governance.

The Flashpoint: A Ham-Handed Push for Federal Control

The tension between state and federal authority reached a flashpoint with a series of federal proposals aimed at preempting, or blocking, state-level AI laws. California State Senator Scott Wiener, a key architect of his state’s technology legislation, has sharply criticized these efforts as a “ham-handed” strategy. The core of the preemption problem lies in its breadth; rather than creating a thoughtful federal standard, the proposals seek to nullify state action altogether. This approach is viewed by critics not as a constructive attempt to create a unified policy but as a blunt instrument designed to halt regulatory momentum and protect industry interests from oversight.

The potential collateral damage of such broad preemption is significant. Senator Wiener has pointed out that early drafts of federal legislation were so expansive they would have unintentionally invalidated critical state laws targeting malicious uses of AI, such as the creation of nonconsensual deepfake pornography. This oversight demonstrates a critical failure to distinguish between fostering innovation and preventing demonstrable harm, putting public safety at risk. Supporters of state action argue that while AI holds immense promise, it is naive to ignore its capacity for misuse, a reality that state laws are often designed to address with greater agility than cumbersome federal processes.

Further complicating the issue are what Senator Wiener has described as “shameful” federal tactics designed to discourage state regulation through leverage. These include attempts to use executive orders to tie the allocation of federal broadband funding to state compliance with federal preferences on AI policy. This strategy effectively pressures states to abandon their own legislative efforts by holding essential infrastructure funding hostage, a move critics contend prioritizes a deregulatory agenda over both public protection and the principles of federalism.

California’s Crucible: A Case Study in State-Led AI Governance

California’s journey toward AI regulation provides a compelling case study in the challenges and compromises inherent in state-led governance. The state’s initial, ambitious legislative effort, which would have established a new state board to oversee powerful AI models, passed the legislature but was ultimately vetoed by Governor Gavin Newsom. The governor’s veto message cited concerns that the framework was premature and might create a “false sense of security” in a rapidly evolving field. This setback, however, did not end the conversation but instead forced lawmakers to pursue a more focused and politically viable path forward.

Following the veto, a state task force recalibrated the approach, leading to a new law that concentrates on transparency and accountability. The current legislation, signed by Governor Newsom and now in effect, mandates that large-scale AI developers publicly disclose their safety and security protocols. This requirement shifts the focus from direct government oversight to public and expert scrutiny, creating a different form of accountability. The law compels companies to be open about their risk mitigation efforts, empowering external parties to evaluate their claims and apply pressure for stronger standards.

Central to California’s new rulebook is the establishment of “CalCompute,” a public cloud computing resource. This initiative aims to democratize innovation by providing startups, academic researchers, and smaller entities with access to the immense computational power required to develop advanced AI models. By leveling the playing field, the state hopes to prevent the AI landscape from being exclusively dominated by a handful of tech giants. Furthermore, the law includes robust whistleblower protections, creating secure channels for employees to report safety concerns without fear of retaliation, adding another crucial layer of internal oversight.

A Legislator’s Stand: Senator Scott Wiener on the Imperative of State Action

Senator Wiener has forcefully articulated the “absurdity” of the federal government’s position on AI, which he characterizes as, “We’re not going to do anything about it, but you’re not allowed to do anything about it either.” He argues that this contradictory stance—combining inaction with preemption—effectively abdicates the government’s responsibility to protect its citizens from the potential harms of new technologies. In his view, this approach suggests a greater concern for the corporate interests of large technology companies than for the public good, leaving states with no choice but to act in the interest of their constituents.

In defense of California’s transparency-focused law, Wiener refutes the criticism that it allows companies to simply “grade their own homework.” He contends that mandatory public disclosure is a powerful tool for external accountability. By forcing companies to reveal their safety protocols, the law enables experts, journalists, competitors, and the public to scrutinize and evaluate them. “It’s not grading our own work,” Wiener explains, “because the public and experts and everyone else gets to grade it.” This transparency creates market and reputational pressure for companies to adopt responsible practices, fostering a competitive environment where safety and security become key differentiators.

Ultimately, Wiener advocates for the vital role of states as “legislative laboratories,” especially when Congress is paralyzed. He argues that in a field as dynamic as AI, allowing different states to experiment with various regulatory models is not a flaw but a feature of the federalist system. This approach enables the development of innovative and effective policies that can be tested on a smaller scale before being considered for national adoption. When Washington will not or cannot lead, states have a right and a responsibility to pioneer solutions to protect their populations.

The Path Forward: Navigating a Fractured Regulatory Landscape

Although California’s AI law is still in its early stages, it is already generating tremors in Silicon Valley. Major industry players like OpenAI and Anthropic have begun issuing compliance frameworks, signaling a grudging engagement with the new state-level requirements. The law’s potential enforcement teeth have also become apparent, with reports emerging of companies having to publicly deny claims of violating the new statute. These initial developments suggest that even without a federal mandate, state action can compel the industry to begin institutionalizing safety and transparency.

While some see California’s law as a potential “good national standard,” its architects view it as a starting point, not the final word. The legislation is designed to be a model that can inform a national conversation, but it is not intended to be a one-size-fits-all mandate for the entire country. The understanding is that AI technology is not “frozen in amber” but is constantly evolving. This necessitates a flexible and adaptive regulatory posture, where laws are revisited and revised as the technology and its societal impacts mature.

Looking toward the future, a primary concern among policymakers is AI’s potential to exacerbate economic inequality. The immense wealth generated by this technology risks being concentrated in the hands of a small technological elite, widening the gap between the affluent and the rest of society. As such, the next frontier of AI legislation may need to confront not just safety and transparency but also the equitable distribution of the technology’s benefits. The challenge is to ensure that AI’s transformative power uplifts society as a whole, rather than creating a future of unprecedented disparity.
