A political savant and leader in policy and legislation, Donald Gainsborough is at the helm of Government Curated. With extensive experience navigating the intersection of public interest and private innovation, he provides a unique vantage point on the shifting tides of educational standards. This conversation explores the rapid integration of artificial intelligence in American classrooms, focusing on the massive training initiatives currently being deployed to bridge the digital divide. Gainsborough discusses the logistical complexities of large-scale educator training, the nuances of corporate partnerships in public schools, and the fundamental shift toward AI literacy in a world where technology often outpaces policy.
Large-scale initiatives aim to train millions of educators across K-12 and higher education within a three-year window. What logistical hurdles arise when deploying such massive programs, and how can organizations ensure that digital badges and certificates translate into actual classroom proficiency rather than just surface-level participation?
The primary logistical hurdle is the sheer scale of the mission, as we are looking at reaching all six million K-12 and higher education faculty members in the U.S. within a very tight timeframe. Moving from the 23% of districts currently training on AI to the projected 75% by 2025 requires a massive coordination of resources and a departure from the “one-size-fits-all” professional development of the past. To ensure these programs result in true proficiency, we must move beyond the “land-grab” of simply issuing digital badges and focus on hands-on experience with tools like Gemini and NotebookLM. True proficiency is documented when teachers use these tools to solve real-world pedagogical problems, such as using an educator-focused chatbot to access curated, high-quality research. Success isn’t measured by a certificate on a LinkedIn profile; it’s measured by whether a teacher feels prepared for the realities of an AI-driven world.
Private tech giants are increasingly funding the professional development of public school teachers through multi-million dollar investments. How do you balance the need for cutting-edge tools with concerns about corporate “land-grabs,” and what steps can schools take to remain vendor-neutral while utilizing these specific platforms?
It is a delicate balance because while Google is making a “sizable investment” and Microsoft is launching platforms like Elevate for Educators, critics rightly worry about these becoming customer acquisition campaigns. We mitigate this by ensuring the training focuses on enduring AI skills that transcend any single platform, as the tools themselves will inevitably change over time. Schools can remain vendor-neutral by partnering with non-profits like ISTE+ASCD or aiEDU, which act as a buffer to ensure the focus remains on “best practices for learning” rather than just “how to use a specific product.” We must acknowledge that these companies have products to sell, but the $25 million in philanthropic commitments currently circulating can be used to empower teachers to remain in control of their classrooms. District leaders should treat these tools as options in a “super-generous free tier” while maintaining the pedagogical independence to switch vendors as the market evolves.
Some computer scientists argue that even experts cannot fully explain why large language models generate specific outputs. Given this complexity, what fundamental AI literacy skills should teachers prioritize, and how can they distinguish between teaching students how to use a specific tool versus understanding the underlying technology?
Even if the best computer scientists cannot fully explain why an LLM generates a specific output, educators must prioritize the skill of critical evaluation and “safe, appropriate use.” We need to teach students to see AI as a collaborator that requires human oversight, rather than an infallible source of truth. AI literacy should focus on “AI readiness”—understanding the logic of how these models are trained on curated databases—rather than just clicking buttons in a specific interface. Teachers can distinguish these by focusing on the “why” behind a prompt’s result; for example, if a student uses a chatbot for research, the lesson shouldn’t be about the chatbot itself, but about verifying the research against known high-quality sources. This ensures that even if the tool changes, the student’s ability to navigate an AI-augmented world remains intact.
Critics suggest that the rapid push for classroom automation could undermine the traditional human relationships essential for student development. In what ways can generative tools be configured to enhance rather than replace interpersonal dynamics, and what specific classroom scenarios demonstrate a successful balance between human-led instruction and AI assistance?
We have to remember that human beings have evolved to learn from each other within the context of relationships; that is our “superpower” as a species. Generative tools should be configured to handle the administrative and repetitive tasks—like drafting lesson plans or organizing research—so that teachers have more time for direct, one-on-one mentorship. A successful scenario involves using a Guided Learning app that allows a student to self-pace through a difficult concept, which frees the teacher to move around the room and provide emotional and cognitive support to those who are struggling. AI can actually make those human-to-human learning experiences much better by providing the data needed to understand exactly where a student is falling behind. When the technology handles the “mechanics” of delivery, the teacher can focus entirely on the “humanity” of instruction.
With certain federal technology offices closing and a lack of centralized national guidance, nonprofit organizations and private companies are taking the lead in setting standards. How does this decentralized approach impact equitable access to technology, and what should district leaders do to ensure underserved communities aren’t left behind?
The closure of the Office of Educational Technology has created a vacuum that requires an “all-hands-on-deck” approach from the private and non-profit sectors. This decentralized model risks creating a “zip code lottery” where wealthy districts get the best training while underserved communities lag behind. To combat this, we are seeing targeted investments, such as $25 million specifically aimed at reaching underserved communities through groups like 4-H and the Computer Science Teachers Association. District leaders must be proactive in seeking out these free tiers and partnership opportunities to ensure their staff and students aren’t excluded. Without a federal anchor, the responsibility falls on local leadership to advocate for “equitable access” as a core requirement of any tech partnership they sign.
Early data suggests that AI-driven tools may allow students to self-pace and spend more time on challenging concepts to deepen their understanding. Beyond tracking the time spent on a task, what long-term metrics are necessary to prove these tools improve proficiency, and how should educators document these outcomes?
We need to move beyond simple engagement metrics and look at how we can “bend the curves on proficiency” in core subjects like literacy and math. Long-term metrics should include the ability of a student to apply a concept learned via AI to a completely different, non-digital context, demonstrating true mastery. We should also track “depth of learning” by measuring whether students who spend more time on topics they struggle with eventually reach the same proficiency levels as their peers without requiring constant intervention. Educators can document these outcomes through portfolios that show the evolution of a student’s work, comparing early AI-assisted drafts with final, independent assessments. Research is ongoing, and we expect to share more public studies soon that connect these digital interactions to tangible learning gains.
What is your forecast for the role of AI in the American classroom over the next five years?
In the next five years, I expect AI to transition from a “novelty tool” to an invisible, foundational layer of the educational infrastructure, similar to how high-speed internet is viewed today. We will likely see a significant shift where the majority of administrative burdens on teachers are automated, allowing for a “renaissance of mentorship” where the human relationship is once again the centerpiece of the classroom. However, this future depends entirely on our ability to train the remaining 50% of educators who haven’t yet touched this technology. If we succeed, we will see a much more personalized education system where no student is left to struggle in silence because the technology will flag their needs in real-time. My forecast is that AI will not replace the teacher, but the teacher who uses AI will eventually replace the teacher who does not.
