Donald Gainsborough is a prominent figure in the landscape of American public policy, serving as a leading voice at Government Curated. With an extensive background in legislative strategy and the digital transformation of state agencies, Gainsborough has spent decades navigating the complex intersection of social welfare and emerging technology. His expertise is particularly vital as states grapple with federal mandates like H.R. 1, which link funding to the precision of benefit disbursements. In this conversation, we explore the nuances of Michigan’s recent deployment of AI tools for SNAP applications, the lingering shadows of past technological failures like the MiDAS system, and the essential safeguards needed to protect the rights of vulnerable citizens in an increasingly automated world.
The following discussion examines the shift toward algorithmic case management, the mechanics of automated income verification, and the critical importance of human oversight and constitutional due process.
States are facing increased financial pressure under federal requirements to reduce payment error rates in SNAP benefits. How does integrating AI case-reading tools help agencies target high-risk households for review, and what specific metrics determine which cases have the highest likelihood of resulting in a payment error?
The integration of tools like Google Vertex AI allows agencies to move beyond the limitations of manual reviews, which historically could only cover a small fraction of the total caseload. By using these tools to scan every single case line by line before payments are issued, the department can identify specific patterns that human eyes might miss during a routine check. Our data shows that the highest frequency of errors typically occurs in single- and dual-person households, whereas the most significant financial discrepancies, the "large dollar" errors, are concentrated in larger households with more individuals. By targeting these specific demographics, the AI helps officials focus their limited resources on the applications most likely to impact the state's overall error rate. It creates a proactive posture where we can catch discrepancies in what we call a "perfect environment," meaning before the funds are actually disbursed, which is critical under the new federal pressures of H.R. 1.
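To make that targeting logic concrete, here is a minimal sketch of such a prioritization heuristic in Python. The field names, thresholds, and weights are illustrative assumptions for explanation, not Michigan's actual model or any Vertex AI configuration.

```python
from dataclasses import dataclass

# Illustrative sketch only: the field names, thresholds, and weights are
# assumptions made for explanation, not the department's actual model.

@dataclass
class SnapCase:
    case_id: str
    household_size: int
    monthly_benefit: float  # dollars scheduled for disbursement this cycle

def review_priority(case: SnapCase) -> float:
    """Score a case for pre-payment review using the two patterns above:
    small households err most often; large households err in large dollars."""
    score = 0.0
    if case.household_size <= 2:
        score += 1.0  # frequency-of-error signal
    if case.household_size >= 5:
        score += case.monthly_benefit / 500.0  # large-dollar-error signal
    return score

def select_for_review(cases: list[SnapCase], capacity: int) -> list[SnapCase]:
    """Spend limited reviewer capacity on the highest-scoring cases."""
    return sorted(cases, key=review_priority, reverse=True)[:capacity]
```

The key design point is that the score only orders the review queue; it never approves or denies anything on its own.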
Optical character recognition is often used to scan pay stubs and input income data to minimize manual entry mistakes. What specific safeguards prevent staff from deferring too much to these automated scans, and how do you ensure that minor data discrepancies are not automatically flagged as fraud?
The primary safeguard is a strict policy under which eligibility staff remain the ultimate decision-makers, using the optical character recognition (OCR) tools merely as a support mechanism to flag potential issues. We've seen in the past that systems that automatically equate any minor data discrepancy with intentional misrepresentation lead to disastrous results, so the goal here is to keep the "human in the loop" for verification on the back end. We are very conscious of the fact that fraud rates in public benefits are actually quite low, so the software must be tuned to avoid the "overcorrection" that plagued previous systems. Instead of making a final determination, the system serves as an assistant that highlights areas of interest, such as income documentation, for a trained professional to review manually. This ensures that a simple clerical error or a slightly blurry scan of a pay stub doesn't escalate into a life-altering fraud accusation before a human has looked at the actual context.
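A minimal sketch of that back-end posture, assuming a hypothetical tolerance threshold and routing labels: the OCR comparison can only route a case to a person, never make a fraud determination itself.

```python
# Sketch of a human-in-the-loop check on OCR-extracted income. The 5%
# tolerance and the routing labels are illustrative assumptions.

OCR_TOLERANCE = 0.05  # ignore sub-5% gaps, e.g. noise from a blurry scan

def route_income_check(reported: float, ocr_extracted: float) -> str:
    """Return a routing decision; deliberately never a fraud determination."""
    if reported <= 0 or ocr_extracted <= 0:
        return "route_to_caseworker"  # unreadable scan or missing data
    gap = abs(reported - ocr_extracted) / max(reported, ocr_extracted)
    if gap <= OCR_TOLERANCE:
        return "no_action"  # minor clerical or scanning noise
    return "route_to_caseworker"  # a trained person reviews the context

# Note there is no "flag_as_fraud" branch: in this design only an
# eligibility worker, looking at the actual documents, can make that call.
```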
Past attempts to automate benefit systems have occasionally resulted in high rates of false positives and wrongful accusations. What specific testing and vetting protocols must be implemented before a new system goes live, and how can agencies prevent applicants from becoming “guinea pigs” during the rollout?
To avoid repeating the mistakes of the MiDAS era, when 93% of the 22,000 cases flagged as fraudulent turned out to be legitimate, we must implement rigorous, transparent testing phases using historical data before any live "go/no-go" decision. This involves vetting the algorithms against the AI Risk Management Framework from the National Institute of Standards and Technology to ensure they don't produce a flood of false positives that overwhelms both the agency and the recipients. Agencies must resist the urge to rush rollouts for the sake of "efficiency" and instead prioritize the stability of the system to prevent turning 40,000 citizens into experimental subjects. We advocate for a "shadow" period in which the AI runs alongside human workers without affecting actual benefits, allowing us to iron out bugs and confirm, for example, that the software is examining specific, individual paychecks rather than simply averaging an applicant's income across pay periods. Only after the system proves it can distinguish between a technical error and actual fraud should it be fully integrated into the workflow.
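A rough sketch of what such a shadow-mode evaluation might look like, with the interfaces and the acceptance metric assumed purely for illustration:

```python
# Sketch of a shadow-mode evaluation: the model scores cases but its output
# never touches benefits; we only log agreement with human caseworkers.
# The interfaces and the acceptance metric are illustrative assumptions.

def shadow_evaluate(cases: list[dict], model_flag, human_outcome: dict) -> dict:
    """Compare model flags against caseworker outcomes without acting on them.

    `model_flag(case)` is the candidate model; `human_outcome` maps case IDs
    to the caseworker's final, independently reached decision.
    """
    flagged = confirmed = 0
    for case in cases:
        if model_flag(case):
            flagged += 1
            if human_outcome.get(case["id"]) == "error_confirmed":
                confirmed += 1
    fp_rate = 1.0 - confirmed / flagged if flagged else 0.0
    return {"flagged": flagged, "confirmed": confirmed,
            "false_positive_rate": fp_rate}

# A "go" decision might require a false-positive rate far below the 93%
# seen under MiDAS before the tool ever affects a live payment.
```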
Algorithmic determinations can sometimes create a “black box” where decisions are difficult for applicants to explain or challenge. How do you maintain constitutional due process when an automated system flags a case, and what steps ensure that human reviewers provide meaningful oversight rather than just rubber-stamping results?
Maintaining due process requires that every automated flag be accompanied by a clear, understandable reason that a human reviewer can explain to the applicant, effectively opening the “black box.” If a system flags a case but cannot provide the underlying logic, the subsequent human review becomes a meaningless rubber stamp, which is a direct violation of the constitutional right to a fair hearing. We must train our reviewers to be skeptics of the technology, teaching them about the inherent limits of AI so they don’t simply defer to the computer’s judgment out of convenience. Meaningful oversight means that an applicant shouldn’t have to wait until they are standing before an administrative law judge to have a human being actually look at their file for the first time. The process must remain transparent, providing applicants with the opportunity to correct data matching errors before any adverse action is taken against their benefits.
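One way to make "no reason, no flag" structural rather than aspirational is to build the explanation into the flag record itself. The sketch below is illustrative; the reason codes and fields are assumptions, not any agency's actual schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Sketch of an "explainable flag" record: a flag cannot exist without a
# plain-language reason a reviewer can relay to the applicant.

REASON_TEXT = {
    "INCOME_MISMATCH": "Reported income differs from the pay stubs on file.",
    "MISSING_DOCUMENT": "A required verification document was not found.",
}

@dataclass
class CaseFlag:
    case_id: str
    reason_code: str
    evidence: list[str] = field(default_factory=list)  # e.g., document IDs
    flagged_on: date = field(default_factory=date.today)

    def __post_init__(self) -> None:
        # Structurally refuse to create an unexplainable flag.
        if self.reason_code not in REASON_TEXT:
            raise ValueError(f"no explainable reason: {self.reason_code!r}")

    def applicant_notice(self) -> str:
        """Plain-language text for the notice sent before any adverse action."""
        return REASON_TEXT[self.reason_code]
```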
Many benefit recipients face significant barriers, such as limited internet access or the inability to meet tight administrative deadlines. How does the shift toward AI-driven reviews affect these vulnerable populations, and what specific resources are necessary to help them navigate an increasingly automated application and appeal process?
The shift toward automation can be incredibly daunting for individuals who are already in dire financial straits and may not have reliable access to a computer or the internet. When a system flags a case and triggers a 10-day response window, it places an immense burden on people who may struggle with administrative literacy or lack the hardware necessary to respond. To support these populations, we need to maintain robust, accessible legal assistance and physical service centers where people can get help from a person, not a chatbot. We must also ensure that the timelines for appeals are realistic and that the notification process doesn't rely solely on digital portals that a recipient might not check daily. Automation should be used to speed up approvals for those in need, not to create a high-tech obstacle course that effectively freezes out the very people the program is designed to serve.
Private companies are frequently contracted to design and implement the software used for public benefit determinations. How can government agencies maintain ultimate accountability when a vendor’s system fails, and what specific guardrails should be included in these contracts to protect the civil rights of the applicants?
Ultimate accountability must always rest with the state agency; it is never acceptable for officials to shift the blame to a vendor when an algorithm goes haywire and harms thousands of citizens. Contracts with private software providers must include explicit guardrails regarding civil rights and data privacy, ensuring that the company isn’t essentially writing public policy through its code. These agreements should mandate regular third-party audits of the software and require the vendor to be transparent about how their “black box” logic reaches specific eligibility determinations. We also need “kill switch” clauses that allow the agency to halt the use of an automated tool immediately if it shows a spike in false positives or discriminatory outcomes. Government functions are public trusts, and when we outsource the “how” of benefit delivery, we cannot outsource the “why” or the responsibility for the results.
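Such a kill-switch clause could be expressed operationally as a runtime monitor. The 10% ceiling, 500-case window, and interface below are illustrative assumptions, not language from any actual contract:

```python
from collections import deque

# Sketch of a contractual "kill switch" expressed as a runtime monitor:
# if the confirmed false-positive rate of the tool's flags exceeds an
# agreed ceiling, automated flagging halts and cases revert to manual review.

class KillSwitchMonitor:
    def __init__(self, max_fp_rate: float = 0.10, window: int = 500):
        self.max_fp_rate = max_fp_rate
        self.outcomes = deque(maxlen=window)  # rolling window of recent flags
        self.halted = False

    def record_flag_outcome(self, was_false_positive: bool) -> None:
        """Log the human-reviewed outcome of each automated flag."""
        self.outcomes.append(was_false_positive)
        if len(self.outcomes) == self.outcomes.maxlen:
            fp_rate = sum(self.outcomes) / len(self.outcomes)
            if fp_rate > self.max_fp_rate:
                self.halted = True  # the clause triggers: stop the tool

    def automation_allowed(self) -> bool:
        return not self.halted  # once halted, every case goes to a human
```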
What is your forecast for the use of AI in social safety net programs?
I anticipate a period of high volatility where the drive for cost-savings and efficiency frequently clashes with the fundamental rights of low-income Americans. While tools like Google Vertex AI offer the potential for much-needed accuracy in a high-pressure federal environment, the erosion of federal guardrails under different administrations could lead to more “black box” disasters if states are not vigilant. We will likely see a widening gap between states that prioritize human-centered oversight and those that fully automate their systems to save money, potentially leading to a new wave of massive class-action lawsuits over wrongful benefit denials. My hope is that the lessons learned from past failures will lead to a more balanced approach, where technology acts as a supportive tool for caseworkers rather than a replacement for human judgment and empathy.
