As the defense landscape shifts toward algorithmic warfare, few voices carry as much weight in the halls of power as Donald Gainsborough. With the fiscal year 2026 National Defense Authorization Act on the horizon, the tension between Silicon Valley’s ethical boundaries and the Pentagon’s operational requirements has reached a boiling point. Gainsborough, a seasoned leader in policy and legislation at Government Curated, offers a sharp perspective on the evolving rules of engagement. In this discussion, we explore the legislative efforts to integrate artificial intelligence into military operations, the necessity of autonomous systems in high-speed combat, and the strategic importance of maintaining ethical standards that set the United States apart from its adversaries.
Current legal frameworks are being reassessed for the upcoming National Defense Authorization Act. How should governance provisions be structured so that AI systems operate safely alongside warfighters, and what specific benchmarks are necessary to track the effectiveness of these legislative updates?
The governance provisions in the fiscal year 2026 NDAA must be built on the reality that our rules of engagement are not static; they are living documents that evolve with the technology. Senator Mark Kelly has been leading the charge on this, examining how we can safely integrate advanced AI alongside our warfighters without compromising the mission or the individual. To track effectiveness, we need benchmarks that measure how reliably AI systems make decisions under the stress of live combat, ensuring they act as an extension of the soldier’s intent. We are looking at a framework in which safety standards are codified early in the procurement process to avoid the kind of friction we’ve seen in recent months. This legislative update is about creating a predictable environment where the Department of Defense and tech innovators can operate with a shared understanding of what “safety” looks like on a digital battlefield.
High-speed combat operations sometimes require exceptions to the traditional human-in-the-loop requirement for autonomous offensive systems. Under what specific operational conditions should these exceptions be triggered, and how can the military maintain accountability when machines make split-second lethal decisions?
When you are staring down a threat moving at hypersonic speed, the luxury of a human in the loop can become a fatal liability. We have to acknowledge that for autonomous offensive systems, there are moments when the sheer demand for flexibility and speed requires exceptions to traditional oversight. Those exceptions should be triggered in “denied environment” scenarios, where communications are jammed, or when the incoming threat volume exceeds a human’s cognitive capacity to respond within milliseconds. Accountability doesn’t vanish in these moments; it shifts to the pre-mission stage, where commanders strictly define the parameters of the AI’s “lethal intent.” It’s a gut-wrenching shift for many, but as we’ve seen in closed briefings, the alternative is losing our competitive edge in a split-second engagement.
Tensions can arise when defense contractors’ safety standards regarding surveillance or the use of lethal weapons conflict with agency objectives. How should “up front” discussions between contractors and the government be managed to prevent legal retaliation, and what clauses are vital to protect both safety ethics and mission requirements?
The recent fallout between Anthropic and the Department of Defense is a sobering reminder of what happens when expectations are not aligned from day one. We need to mandate “up front” discussions that act as a clear-eyed audit of what a contractor’s technology will and will not do, specifically in sensitive areas like the surveillance of citizens or autonomous lethal strikes. To prevent the kind of chaos that leads to lawsuits and claims of illegal retaliation, contracts should include “ethical compatibility” clauses that provide off-ramps if a mission’s scope shifts beyond a company’s stated safety standards. It is reasonable to expect a contractor to have boundaries, and the government must respect those boundaries to avoid the messy litigation we are seeing now. By getting these difficult conversations out of the way before anything is signed, we protect both the mission and the integrity of the private sector partners we rely on.
Maintaining higher ethical standards than adversaries like Russia or China is often viewed as a strategic strength for the U.S. and its allies. How do these rigorous AI standards specifically enhance military effectiveness during joint operations, and what metrics prove that ethical constraints do not compromise tactical speed?
There is a pervasive myth that ethics slow us down, but in reality, our standards are what make us a more effective and cohesive fighting force. We are not Russia, North Korea, or China; we operate with a level of transparency and morality that serves as the “glue” for our international alliances. When our allies know that our AI systems are governed by rigorous ethical frameworks, it facilitates seamless data sharing and joint operations that our adversaries simply cannot replicate. The metrics of success aren’t just in the speed of a single strike, but in the longevity of our partnerships and the reduction of collateral errors that often haunt less-regulated militaries. At the end of the day, having a clear standard of conduct ensures that our technological advantages are backed by a moral authority that strengthens our global position.
What is your forecast for AI integration in military operations?
My forecast is that we are entering an era of “hybrid accountability,” where the line between human decision-making and machine execution becomes increasingly blurred in practice even as it becomes more legally codified. Over the next few years, the NDAA will likely transition from broad guidance to highly specific technical requirements, forcing a consolidation in the defense-tech market. We will see a shift where only the firms that can balance “mission-lethal” capabilities with “safety-first” architectures survive the procurement gauntlet. Despite the current legal friction and the lawsuits alleging retaliation, the momentum toward autonomous systems is irreversible. The military of 2030 will rely on AI not just as a tool, but as a fundamental component of the command structure, requiring a complete reimagining of our rules of engagement.
