As a political savant and the leader of Government Curated, Donald Gainsborough is at the forefront of the complex intersection between policy, legislation, and the digital lives of families. With tech platforms increasingly shaping childhood, his insights are crucial for understanding the battles being waged over parental rights and corporate responsibility in the online world. This interview explores the profound implications of policies that grant young teens digital autonomy, examining the risks to their safety and development, the legal challenges these practices create, and the potential for a more responsible path forward that prioritizes the well-being of the next generation.
Google’s policy allows children to remove parental controls at 13. How does this practice reframe the parent-child relationship in the digital world, and what are the specific developmental risks for a young teen suddenly gaining this autonomy? Please provide some real-world examples.
This policy does something incredibly insidious: it reframes parents as a temporary inconvenience to be outgrown, positioning a corporate platform as the default authority in a child’s life. When Google sends an email to a 12-year-old essentially counting down the days until they can escape supervision, it’s not just code—it’s a powerful message. It tells the child that their parents are an obstacle, not a guide. For a teen at such a critical stage of development, this sudden, unguided autonomy is a clear breach of the duty of care. For example, a child who was previously protected from certain content on YouTube can suddenly find themselves navigating deeply inappropriate material, while the platform that enabled it simultaneously works to advise advertisers on how to best target that same vulnerable teen. It creates a deeply predatory environment where the guardrails are removed by the very entity that stands to profit from the child’s unrestricted engagement.
When a child turns 13, they can enable payment methods on Google Pay and disable location sharing without consent. What are the immediate financial and physical safety concerns this creates, and what practical steps can a parent take to mitigate these new risks?
The immediate concerns are both stark and terrifying. On the financial side, allowing a 13-year-old to add payment methods without parental consent opens the door to unmonitored in-app purchases and potential financial exploitation. This directly challenges the spirit of regulations like the 2014 FTC Consent Decree, which was established precisely to prevent children from running up huge bills without parental knowledge. The physical safety risk is even more chilling. A parent’s ability to see their child’s location is a modern safety essential. If a child is late, misses a check-in, or is in an unfamiliar area, that tool is a lifeline. Having a platform encourage a child to disable it unilaterally is profoundly irresponsible. To mitigate this, parents are forced into a defensive crouch. They must have difficult, proactive conversations before the 13th birthday, explaining these dangers and establishing trust-based agreements outside of the app, because the technology itself has chosen to work against the family’s safety structure.
A complaint to the FTC alleges violations of child privacy laws and a decree on in-app purchases. Can you walk me through how allowing a 13-year-old to add payment methods could breach regulations that typically focus on children under 13?
This is a critical point that gets to the heart of regulatory loopholes. While the Children’s Online Privacy Protection Act (COPPA) explicitly protects children under 13, the 50-page complaint filed with the FTC argues that this policy creates a seamless, dangerous transition the moment that legal protection expires. The argument is that Google is failing its duty of care by engineering a system that encourages a child, who was legally protected just one day earlier, to immediately engage in risky behaviors like unmonitored spending. The violation of the 2014 FTC Consent Decree on in-app purchases is even more direct. That decree is built on the principle of parental permission for purchases. By allowing a 13-year-old to add a payment method and block parental oversight, the platform effectively circumvents the entire consent mechanism that the FTC fought to establish. It’s a deliberate design choice that neuters a key consumer protection.
Critics say that notifying a 13-year-old they can remove supervision frames parents as obstacles. What is the long-term impact of this corporate messaging on a teen’s trust in their parents, and how can families proactively navigate that conversation?
The long-term impact is the erosion of the family unit as the primary source of guidance and safety. When a powerful corporation sends the message that parents are simply “barriers to freedom,” it validates a teen’s natural impulse to push boundaries but does so without any of the wisdom, context, or love that a parent provides. It accelerates a push for tech independence without building any of the emotional readiness or safety knowledge needed to handle it. This can create a deep and lasting rift, positioning the parent as an antagonist in the teen’s digital life. To navigate this, families have to reframe the conversation entirely. It can’t be about control; it has to be about partnership. Parents need to explain that these digital guardrails are like learning to drive with an instructor—they are there to help you learn to navigate a dangerous environment safely, not to keep you from ever reaching your destination.
Advocates argue that continued parental supervision should be the default setting. From a policy standpoint, what would a more responsible corporate framework look like, and how could it better prepare teens for digital independence instead of just granting it at an arbitrary age?
A responsible framework would treat digital independence as a gradual learning process, not a switch that gets flipped on a birthday. The default should absolutely be continued supervision, as advocates like Tracy Parolin have stated. From there, the system should be designed to foster dialogue. Instead of a child unilaterally removing controls, perhaps a 13-year-old could request to disable a specific feature, which would then send a notification to the parent to discuss and approve it. This turns a moment of corporate-driven conflict into a teaching moment for the family. A truly responsible corporation would also build educational modules into this process, teaching teens about digital footprints, online financial literacy, and the very real dangers of the internet, especially given that 48 percent of teens already report that social media negatively affects their mental health.
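To make the proposed request-and-approve flow concrete, here is a minimal, purely illustrative sketch in Python. Nothing here reflects any real Google API; the class name, feature names, and methods are all hypothetical, invented only to show the core design choice: a teen's request queues a change and notifies the parent, but only the parent's explicit approval actually lifts a control.

```python
from dataclasses import dataclass, field

@dataclass
class SupervisedAccount:
    """Hypothetical model of a graduated-consent flow (not a real API).
    A teen may request that a supervision feature be disabled, but the
    control is lifted only after the parent explicitly approves."""
    controls: set = field(default_factory=lambda: {
        "content_filter", "location_sharing", "purchase_approval"})
    pending: set = field(default_factory=set)

    def request_disable(self, feature: str) -> str:
        # Teen-initiated request: changes nothing yet; it only queues
        # the feature and generates a notification for the parent.
        if feature not in self.controls:
            raise ValueError(f"{feature!r} is not an active control")
        self.pending.add(feature)
        return f"Notification sent to parent: teen requests disabling {feature!r}"

    def parent_approve(self, feature: str) -> None:
        # Only a parental approval actually removes the control,
        # turning the request into a conversation rather than a cutoff.
        if feature not in self.pending:
            raise ValueError(f"no pending request for {feature!r}")
        self.pending.discard(feature)
        self.controls.discard(feature)
```

The point of the sketch is the asymmetry: `request_disable` alone leaves every control in place, so the platform's design nudges the family toward dialogue instead of handing the teen a unilateral off switch.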
What is your forecast for how tech companies and regulators will approach parental controls and teen digital autonomy over the next few years?
My forecast is that the status quo is on borrowed time. The pressure from advocates, parents, and lawmakers is reaching a boiling point. We are going to see a significant regulatory pushback against these “arbitrary age” cliffs. I anticipate a legislative movement, possibly a “COPPA 2.0,” that extends privacy and safety principles to older teens, recognizing that a 13- or 14-year-old is not a fully equipped digital adult. Companies will be forced by either regulation or public outcry to abandon these abrupt cutoffs in favor of more graduated, consent-based models that empower parents and teens to navigate digital independence together. With lawmakers like Senator Mike Lee already pushing for age verification laws, and a history of multimillion-dollar fines for past violations, the tech giants will find that undermining the family unit is no longer a viable or profitable business strategy.