Minnesota Bans AI Software That Creates Deepfake Nudes

A political savant and leader in policy and legislation, Donald Gainsborough stands at the helm of Government Curated, where he navigates the complex intersection of digital rights and emerging technology. With a career dedicated to crafting frameworks that protect citizens from the darker sides of innovation, he has become a pivotal voice in the movement to regulate generative artificial intelligence. In this conversation, we explore the landmark legislative shifts occurring in states like Minnesota, the technical challenges of distinguishing malicious tools from creative software, and the growing urgency to address digital abuse at the point of creation rather than just dissemination.

Many legal frameworks focus on the dissemination of explicit images rather than their creation. How does shifting liability to the developers of AI “undressing” tools change the legal landscape for survivors, and what specific hurdles remain when harmful images are stored privately on a computer without being shared online?

The shift toward targeting the point of creation is a monumental change because, traditionally, the law required an image to be shared publicly or sent to others before it was considered a crime. This left survivors like Molly Kelley in a devastating legal vacuum; she discovered she was one of 80 women in Minnesota victimized by a single perpetrator who kept the images on a private hard drive, a fact pattern that made existing “revenge porn” and dissemination laws completely inapplicable. By focusing on the developers of these “nudification” services, we are finally acknowledging that the harm begins the moment a machine learning model is used to violate a person’s bodily autonomy. However, a significant hurdle remains in discovery and enforcement, as private storage makes it nearly impossible for victims to even know an image exists unless they stumble upon it. Even with new laws, proving the use of a specific third-party tool on a private device requires a level of digital forensics that many local law enforcement agencies are not yet equipped to handle.

Legislative efforts often exempt professional software like Photoshop while targeting “one-click” apps. What specific technical metrics should be used to differentiate a standard editing tool from a malicious nudification service, and how can regulators ensure these definitions don’t create loopholes for sophisticated developers?

The distinction lies primarily in the “technical skill of a user” required to achieve a specific result, a metric that groups like RAINN have worked hard to define within the Minnesota legislation. A standard editing tool provides a broad palette of functions—color correction, cropping, or manual layering—whereas a malicious nudification app is built for a singular, automated purpose that requires zero expertise. We look for “one-click” functionality where the generative AI is pre-trained specifically to identify clothing and replace it with sexualized imagery without human artistic intervention. To prevent loopholes, regulators must focus on the “intended use” and the marketing of the product; for instance, if a tool like Grok is advertised as being “willing to answer spicy questions” that others reject, it signals a lack of industry-standard precautions. We have to ensure that “general purpose” labels aren’t used as a shield for platforms that intentionally facilitate the creation of over 1.8 million sexualized images in just nine days, as we saw in recent mass episodes of digital violence.
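
To make that test concrete, here is a minimal sketch of how a reviewer might encode the “technical skill of a user” criterion in code. Every field name, the one-click threshold, and the decision rule itself are hypothetical illustrations for triage purposes, not language from the Minnesota statute.

```python
# Hypothetical rubric for triaging a tool under a "technical skill of
# a user" criterion. The fields and decision rule are illustrative
# stand-ins, not the statutory test.
from dataclasses import dataclass

@dataclass
class ToolProfile:
    steps_to_sexualized_output: int       # clicks from upload to result
    requires_manual_editing: bool         # layering, masking, retouching
    single_purpose_model: bool            # trained only to strip clothing
    marketed_for_nonconsensual_use: bool  # "undress anyone" style ads

def flags_as_nudification_service(tool: ToolProfile) -> bool:
    """A general editor demands skill and serves many purposes; a
    nudification service automates one harmful purpose end to end."""
    automated = (tool.steps_to_sexualized_output <= 1
                 and not tool.requires_manual_editing)
    return automated and (tool.single_purpose_model
                          or tool.marketed_for_nonconsensual_use)

# Example: a one-click app advertised as "undress any photo" is flagged;
# a multi-purpose editor that requires manual compositing is not.
```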

Heavy financial penalties and civil lawsuits are being introduced to deter developers. How effective are $500,000 fines against offshore or decentralized platforms, and what practical steps can a state attorney general take to successfully collect these damages from entities that operate outside traditional jurisdictions?

A $500,000 fine per violation is a staggering figure designed to make the business model of digital abuse economically unviable, but its effectiveness is admittedly tied to the reach of our financial systems. For offshore entities, the state attorney general must look toward “following the money” by targeting the payment processors, app stores, and advertising networks that allow these platforms to monetize their services. If Meta continues to allow these apps to advertise on Facebook and Instagram, or if they remain accessible despite bans on the Google and Apple stores, those intermediaries become the pressure points for enforcement. Practical steps include seeking injunctions to freeze domestic assets or working through international treaties to enforce civil judgments, though we must be honest that decentralized platforms remain a “whack-a-mole” challenge. The goal is to raise the cost of doing business so high that legitimate infrastructure providers refuse to host or process payments for these bad actors.

AI chatbots and social media ads have made generating nonconsensual imagery accessible to anyone, including minors. Beyond basic guardrails, what industry-standard precautions should tech companies be required to implement, and how can school administrators better handle the rising trend of digital abuse among students?

Industry standards must move beyond simple keyword filters, which are easily bypassed, and toward robust “safety by design” that includes mandatory hashing of known harmful outputs and strict identity verification for high-risk generative features. We are seeing a terrifying trend where 1 in 10 women have experienced tech-facilitated sexual abuse in the last year alone, and the accessibility of these tools means that students are increasingly becoming the perpetrators. School administrators are currently on the front lines with very little support; they need clear protocols that treat the creation of a deepfake with the same gravity as physical sexual harassment. We’ve seen at least 23 cases of deepfake abuse targeting school communities since 2023, and administrators must move away from viewing this as a “prank” and toward a framework of restorative justice and digital literacy. This includes educating parents and students that even if an image isn’t “real” or “shared,” the act of creating it using a third-party model is a violation of the victim’s rights and, in states like Minnesota, a basis for a lawsuit.
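
To ground the “hashing of known harmful outputs” point: platforms typically rely on perceptual hashes, which fingerprint an image so that near-duplicates of known abusive content can be caught even after re-encoding or small edits. Below is a minimal average-hash sketch in Python, assuming the Pillow imaging library is installed; the blocklist, threshold, and function names are illustrative, not any vendor’s actual API.

```python
# Minimal sketch of perceptual-hash matching against a blocklist of
# known harmful outputs. The 64-bit average hash and the Hamming
# threshold are illustrative choices, not a production standard.
from PIL import Image

HASH_SIZE = 8  # 8x8 grid -> 64-bit hash

def average_hash(path: str) -> int:
    """Downscale to grayscale 8x8, then set a bit for each pixel
    brighter than the mean. Near-duplicate images yield nearby hashes."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def matches_blocklist(path: str, blocklist: set[int], threshold: int = 5) -> bool:
    """Flag an image whose hash is within `threshold` bits of any
    known-harmful hash, catching re-encodes and minor edits."""
    candidate = average_hash(path)
    return any(hamming_distance(candidate, known) <= threshold
               for known in blocklist)
```

A cryptographic hash would not work here, because changing a single pixel produces an entirely different digest; perceptual hashing tolerates minor alterations, which is why it underpins industry matching systems such as PhotoDNA.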

While some states are passing bans, federal legislation has stalled and potential preemption risks remain. What are the practical implications for survivors if a patchwork of state laws is eventually nullified by federal policy, and how can local advocates maintain momentum in such an uncertain legal environment?

The risk of federal preemption is a dark cloud hanging over these state-level victories, especially if a future administration prioritizes a unified, and perhaps weaker, national AI standard over localized protections. If state laws like Minnesota’s are nullified, survivors could lose their only path to civil restitution, returning them to a position where they have no private right of action to sue for damages. Local advocates can maintain momentum by framing these protections as fundamental privacy and civil rights that should form the floor, not the ceiling, of federal policy. We must continue to point to the data, like the fact that 1 in 3 women will experience this kind of abuse in their lifetime, to prove that the status quo is a public health crisis. By building a “patchwork” now, we are actually providing the blueprint for what a successful federal law should look like, forcing Congress to look at the DEFIANCE Act or the Take It Down Act as necessary evolutions of current law.

What is your forecast for the regulation of AI-generated intimate imagery?

I believe we are entering an era of “technological accountability” where the immunity currently enjoyed by many AI developers will rapidly erode as the human cost becomes impossible to ignore. In the next three to five years, I expect to see a federal mandate for “digital watermarking” and “provenance tracking” on all generative models, making it much easier to trace a harmful image back to the specific tool used to create it. We will likely see a significant Supreme Court challenge regarding the First Amendment implications of these bans, but the overwhelming public outcry and the sheer volume of victims will drive a shift toward viewing nonconsensual deepfakes as a form of “conduct” rather than “speech.” Ultimately, the industry will be forced to adopt a “zero-trust” architecture for sexualized content, where the burden of proof for consent lies with the creator and the platform, rather than the survivor having to chase ghosts across the internet.
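
As a sketch of what “provenance tracking” could mean at the code level, the example below binds a generated image, the tool that produced it, and a timestamp together under a keyed signature, so a harmful image can be traced back to its generating tool. The record fields, the HMAC scheme, and the key handling are hypothetical simplifications of certificate-based standards such as C2PA, not a mandated format.

```python
# Minimal sketch of a signed provenance record for a generated image.
# Real provenance standards (e.g., C2PA) use certificate-based
# signatures and embed the manifest in the media file itself.
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"tool-vendor-secret"  # hypothetical per-vendor key

def make_provenance_record(image_bytes: bytes, tool_id: str) -> dict:
    """Bind the image contents, the generating tool, and a timestamp
    together under a keyed signature."""
    record = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "tool_id": tool_id,
        "created_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(image_bytes: bytes, record: dict) -> bool:
    """Check that the signature is valid and that the record actually
    describes this image, so the trace back to the tool can be trusted."""
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["sha256"] == hashlib.sha256(image_bytes).hexdigest())
```

The design point this illustrates is the shifted burden of proof: an image that arrives without a verifiable record is treated as unattributed by default, rather than the survivor having to prove where it came from.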
