California Faces Scrutiny Over AI Safety in Schools Following Scandal

California Faces Scrutiny Over AI Safety in Schools Following Scandal

The rapid integration of Artificial Intelligence into the classroom has sparked a national debate over safety, ethics, and the role of technology in early development. While proponents argue that AI is an inevitable part of the future, recent incidents—ranging from sexualized imagery in elementary assignments to failed AI tutoring bots—have left parents and educators questioning the adequacy of current safeguards. Donald Gainsborough, a leading figure in policy and legislation at Government Curated, offers an expert perspective on the friction between tech marketing and classroom reality. This conversation explores the technical and systemic failures that put students at risk, the hidden inequities within AI usage, and the urgent need for clear “opt-out” policies to protect the youngest learners.

When an elementary school assignment for a children’s book report results in sexualized AI imagery, what specific technical failures occur in education-specific software? How can districts bridge the gap between marketing claims and the actual risks children face on school-issued devices? Please elaborate with step-by-step vetting details.

The incident at Delevan Drive Elementary, where a search for “Pippi Longstocking” returned images of women in lingerie, highlights a catastrophic failure in the “safety filters” marketed by tech giants. Technically, these failures occur because generative models are often trained on vast, uncurated datasets from the open web, and the narrow guardrails intended for “education editions” fail to account for the nuanced ways children describe characters. To bridge the gap between marketing and reality, districts must move beyond taking a vendor’s word and implement a rigorous local vetting process. This starts with “red-teaming” the software—having a diverse group of 50 or more administrators and teachers intentionally test the tool for edge cases before it reaches a single student’s Chromebook. Finally, districts must demand transparency on how these tools were tested for child safety, rather than accepting 24-hour patches after a harm has already occurred.

Since young Black and Latino students often use generative AI more frequently than their white peers, how does this usage gap impact educational equity? What specific steps should administrators take to ensure these tools don’t reinforce racial stereotypes or provide lower-quality outcomes for these student populations?

The usage gap is a double-edged sword; while it shows an eagerness to adopt new tools, it also means Black and Latino students are disproportionately exposed to the “hallucinations” and biases embedded in AI. If a tool is more likely to generate stereotypical or sexualized imagery of women of color, these students face a higher risk of psychological harm and lower-quality educational outputs. Administrators must prioritize “equity-centered design,” ensuring that the 50-member state working groups and local boards include voices from these specific communities to evaluate tools for cultural competence. We must move away from the idea that technology is a neutral “leveler” and acknowledge that without proactive intervention, AI will intensify historical disparities in computer science education. It is not enough to simply provide the tool; we must provide the critical thinking framework that allows students to recognize and challenge the stereotypes the AI might present to them.

State guidelines often emphasize critical thinking but lack specific policy guardrails for classroom AI use. How can schools realistically vet these tools without the necessary staff or resources, and what should a comprehensive “opt-out” policy look like for families who reject AI involvement in their child’s education?

The current guidelines are often criticized for being too vague, leaving overworked teachers to navigate complex ethical dilemmas without a roadmap. Realistically, schools cannot vet these tools in a vacuum, which is why the state must provide specific policy recommendations and localized support rather than just “suggestions.” A comprehensive “opt-out” policy is essential; it should allow parents to decline AI interaction for their children without any academic penalty, mirroring the privacy protections we see in other sectors. We are seeing a push for this in the Legislature because families deserve a choice in whether their kindergartners become “proficient AI users” before they have even mastered foundational reasoning. This policy should be transparent, easily accessible, and presented to families at the start of the school year, ensuring that “opt-out” isn’t just a theoretical right but a functional reality.

Some argue that AI is ubiquitous and children must be prepared for it, yet high-profile failures in AI tutoring and grading persist. How do you balance this sense of inevitability with the documented risks of narrowed student perspectives and deficiencies in critical reasoning? Please provide relevant metrics.

The narrative of “inevitability” often serves as a smokescreen for skipping the necessary safety checks, as seen when Los Angeles Unified pulled its “world-class” AI tutor just weeks after launch. While polls show a majority of teachers and students are already using AI, we cannot ignore the Brookings Institution study involving 50 countries which concluded that the risks currently outweigh the benefits for foundational development. We are seeing cases where school boards sign contracts for AI grading tools without even realizing it, which directly threatens the integrity of student feedback. To balance this, we must shift our focus from “how much AI can we use” to “where should we draw the line,” particularly in elementary education where the risk of narrowed perspectives is highest. We need to treat AI proficiency not as a replacement for critical thinking, but as a secondary skill that is only introduced once a student’s logic and reasoning skills are firmly established.

What is your forecast for AI in schools?

I anticipate a period of significant correction where the initial “gold rush” of AI adoption is replaced by a much more defensive and regulated environment. While leaders like Governor Newsom emphasize that we cannot prepare youth for a ubiquitous AI future by banning it, the sheer volume of “Pippigate” style incidents will force the state to move from vague guidance to hard, enforceable requirements. We will likely see more states adopting computer science and AI literacy as graduation requirements, but these will be coupled with strict moratoriums on specific high-risk tools like AI grading and companion chatbots. Ultimately, the “narrow window” to set these norms is closing, and the next two years will determine whether AI becomes a helpful assistant or a permanent source of distraction and bias in the American classroom. For our readers, the best approach is to remain “critically engaged”—do not wait for the district to tell you a tool is safe; ask for the vetting data, demand opt-out options, and ensure that technology never replaces the essential human connection between a teacher and a student.

Subscribe to our weekly news digest.

Join now and become a part of our fast-growing community.

Invalid Email Address
Thanks for Subscribing!
We'll be sending you our best soon!
Something went wrong, please try again later