Louisiana Cuts AI Regulation to Save Federal Broadband Funding

Donald Gainsborough is a central figure in the intricate world of state-level policy, currently serving at the helm of Government Curated. With an extensive background in legislative strategy and a reputation as a political savant, Gainsborough has spent decades navigating the friction between local governance and federal mandates. His expertise is particularly relevant today as states like Louisiana grapple with the rapidly evolving landscape of artificial intelligence. In this conversation, we explore the complexities of balancing technological innovation with consumer protection, the shifting dynamics of federal funding, and the legislative hurdles that arise when national executive orders collide with local bipartisan efforts to regulate the digital frontier.

Federal policy now links broadband infrastructure funding to state-level AI oversight. How do you weigh the urgent need for rural internet access against the desire to set local digital guardrails, and what specific metrics determine if a state law has “crossed the line” into a federal violation?

This is an agonizing trade-off for many lawmakers because rural Louisiana and similar regions are desperate for the connectivity that federal broadband money provides. When a legislator like Vincent Cox says he won’t jeopardize that funding, he is weighing the concrete, immediate benefit of internet access for his constituents against the abstract, future risks of unregulated AI. The metric for “crossing the line” is currently defined by an executive order that demands a single, nationwide policy structure rather than a patchwork of state laws. While the specific legal boundaries remain murky, any state law deemed “burdensome” by the Department of Justice or in direct conflict with federal priorities puts that essential infrastructure funding at risk. It creates a high-stakes environment where the fear of losing millions of dollars in grants effectively silences local debate on digital safety.

Lawmakers are currently advancing AI bills focused on child protection while shelving those related to healthcare insurance and employment. Why are sectors like child safety considered legally permissible under federal scrutiny, and what are the immediate risks to citizens if insurance and medical AI remain unregulated?

Child safety is the one narrow area where the current federal administration has signaled it will not interfere, viewing the protection of minors and the prevention of abuse as a clear state prerogative. This is why we see bills like House Bill 119, which targets AI-generated images of nude children, passing unanimously while broader regulations are abandoned. The risk in leaving insurance and medical AI unregulated is that we lose oversight of how life-altering decisions are made. If we shelve bills that limit AI in health insurance coverage or medical treatments, we are essentially allowing algorithms to determine a patient’s care without a human safety net or transparency. Citizens face a future where an opaque piece of software could deny a claim or mismanage a diagnosis, and without state guardrails, there is little recourse for the individual.

Critics argue that state-level AI restrictions could slow domestic innovation and give a competitive edge to international rivals. How can states foster a high-speed tech environment without sacrificing consumer protections, and what specific steps should be taken to ensure local businesses aren’t unfairly burdened?

The tension lies in the belief held by groups like the Pelican Institute that regulation is a handbrake on progress, potentially handing an advantage to global competitors like China. To foster innovation without sacrificing safety, states need to focus on transparency rather than prohibition, such as requiring disclosures when AI is used in automated calls or employment decisions. By implementing clear rules of the road—like prohibiting the use of surveillance data to set different prices for the same product—states can protect consumers while still allowing the tech to evolve. The goal should be a predictable regulatory environment where local businesses aren’t guessing at the rules, but are instead empowered by a framework that builds public trust in their tools. It’s about creating a “race to the top” for ethical AI, rather than a race to the bottom by removing all guardrails.

Executive offices often influence the legislative process when federal grants are at stake. When a governor asks a lawmaker to pull a bill to protect state funding, what does that negotiation process look like, and how does this affect the long-term bipartisan effort to manage emerging technologies?

The negotiation is often quiet but incredibly persuasive, involving direct appeals from the governor’s staff to legislative sponsors about the “greater good” of the state’s budget. We’ve seen this play out with Governor Landry’s office asking members to pull bills to avoid clashing with federal priorities, a move that forces lawmakers to choose between their policy goals and the state’s financial health. This process can be demoralizing for a bipartisan group that has worked hard to address constituent concerns about privacy and intellectual property. Long-term, it risks creating a chilling effect where legislators stop trying to innovate in the policy space because they fear their work will be undone by the threat of a funding cut. It shifts the power dynamic significantly toward the executive branch and federal oversight, often at the expense of local legislative initiative.

Several states are currently weakening their data-center and AI regulations in response to federal pressure. Is this trend leading toward a unified national policy, or is it creating a “regulatory vacuum” where neither the state nor the federal government provides adequate oversight for private industry?

What we are witnessing in states like Florida, Utah, and Louisiana is the creation of a dangerous regulatory vacuum rather than a cohesive national policy. While the federal government wants a unified approach, it hasn’t yet established a comprehensive statutory framework to replace the state laws being scrapped. When states back off under pressure, it leaves private industry to operate in a “wild west” environment where there are no clear rules for high-stakes applications like AI in hiring or healthcare. This vacuum benefits large tech companies and donors who prefer zero oversight, but it leaves the public vulnerable to algorithmic bias and privacy invasions. Until Congress passes a robust federal law, the retreat of the states means that for many citizens, there is simply no protection at all.

Proposals to prevent AI from recreating an individual’s identity or artistic material without permission are being sidelined in various legislatures. What are the legal implications for creators when these protections are removed, and how can artists protect their intellectual property if state-level guardrails are no longer available?

The removal of these protections is a devastating blow to the creative class, as it leaves them with almost no legal standing to prevent their likeness or voice from being stolen and monetized by AI. When bills that prohibit AI from recreating someone’s identity are shelved, we are effectively telling artists that their very persona is up for grabs in the digital commons. Without state-level guardrails, creators are forced to rely on outdated copyright laws that were never designed to handle the generative capabilities of modern technology. They find themselves in a David-versus-Goliath struggle, trying to protect their intellectual property against massive tech firms with far more legal resources. It’s a situation that fosters deep resentment among constituents who feel their livelihoods are being sacrificed for the sake of unfettered industrial growth.

What is your forecast for AI regulation in the United States?

I expect we will see a period of intense legal and political volatility in which the “child safety exception” becomes the only viable path for state lawmakers for the next several years. Because the federal government has successfully used the “power of the purse”—specifically broadband funding—to pressure state legislatures, we will likely see a stagnation of consumer protection laws at the local level. However, the pressure from constituents is not going away; as more people experience the downsides of unregulated AI, from insurance denials to deepfake fraud, the bipartisan demand for action will eventually force Congress’s hand. My forecast is that we are heading toward a massive federal legislative showdown, but until that happens, we will live in a fractured landscape where your digital rights depend entirely on how much your state government fears losing its next federal check.
