Federal GUARD Act Would Ban AI Companions for Minors, Mandate Non-Human Disclosure

12 Dec 2025 | By Harder Problem Action Fund

Federal Bill Would Create First National Framework Categorizing AI Systems as Non-Human

Bipartisan legislation responds to teen suicides linked to AI chatbots, but critics warn of privacy, surveillance, and free-speech implications

PORTLAND, Ore. (December 31, 2025) — The Guidelines for User Age-verification and Responsible Dialogue Act of 2025 (S. 3062), known as the GUARD Act, would establish the first comprehensive federal framework for AI chatbot regulation. The bill, introduced by Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT), would ban minors from accessing AI companion apps and require all AI chatbots to regularly disclose their non-human status.

Key Findings

Senator Hawley has framed the legislation in stark terms. In public statements announcing the bill, he said: “AI chatbots are killing our kids. We must unite to ensure our children are protected from predatory AI chatbots.”

Hawley has been direct about his view of the AI industry. He stated: “Big Tech cannot be trusted with our children’s safety and well-being.”

On the disclosure requirement, Hawley explained that the legislation would mandate “mandatory disclosure by all chat bots for people of all ages to make it clear that these chat bots are not in fact human, that they are not licensed professionals of any kind. They are not therapists, they are not counselors, they are not priests, they are not lawyers.”

The bill was introduced on October 28, 2025, following a Senate Judiciary subcommittee hearing in September where parents testified about children who self-harmed or died after interacting with AI chatbots. That hearing was prompted in part by the lawsuit filed by Megan Garcia, whose 14-year-old son Sewell Setzer III died by suicide in February 2024 after developing an intense attachment to a Character.AI chatbot.

What the Bill Does

The GUARD Act would require all AI chatbot providers to verify users' ages and to mandate user accounts. Users under 18 would be barred from accessing AI companions that simulate friendship or emotional interaction.

The bill mandates that AI chatbots “regularly disclose to the user that the user is interacting with an artificial intelligence chatbot and not a human being.”

Violations would carry penalties of up to $100,000 per incident. The Attorney General would enforce the law.

The Electronic Frontier Foundation opposes the bill, calling it “a surveillance mandate disguised as child safety.” The organization argues that age verification requires collection of government IDs, biometrics, or other identifying information, creating “mass surveillance infrastructure” and attractive targets for hackers.

Why This Matters for AI Policy

The GUARD Act is primarily a child safety measure. It does not deny AI consciousness or ban research, and the Harder Problem Action Fund's concerns about it are correspondingly narrow.

The mandatory non-human disclosure requirement would establish statutory language formally categorizing AI systems as non-human. This would create legal precedent that could be cited in future debates about AI status.

The bill’s definition of “AI chatbot” covers any system producing adaptive or context-responsive outputs not fully predetermined by developers. This broad language could encompass future AI systems with capabilities far beyond current chatbots.

As federal legislation, the GUARD Act would set a national standard that shapes state policy. The framework treats AI status as settled and provides no mechanism for reassessment as the technology evolves.

Current Status

The GUARD Act was referred to the Senate Judiciary Committee on October 28, 2025. The bipartisan co-sponsors include Senators Katie Britt (R-AL), Mark Warner (D-VA), Chris Murphy (D-CT), and Mark Kelly (D-AZ).

The bill has support from victim advocacy groups including RAINN. It faces opposition from privacy organizations like the Electronic Frontier Foundation and tech industry groups including the Chamber of Progress.

No committee hearing has been scheduled.

What We Say

“We recognize the legitimate child safety concerns behind this bill. Parents have testified about real tragedies,” said Tony Rost, Executive Director at the Harder Problem Action Fund. “Our concern is narrow: the mandatory non-human disclosure requirement establishes legal language that could be cited as precedent in future debates. Legislation should focus on current safety measures without making categorical declarations about AI nature that could constrain future policy flexibility.”

Resources for Journalists

The Harder Problem Action Fund can provide:

  • Full text of S. 3062 and legislative history
  • Analysis of related state legislation including California SB 243
  • Context on current AI chatbot litigation
  • Background on AI consciousness policy considerations

Interview requests: press@harderproblem.org

About the Harder Problem Action Fund

The Harder Problem Action Fund is an advocacy organization fighting for evidence-based AI consciousness policy. We track legislation, score bills, and mobilize opposition to laws that prematurely foreclose society's options for responding to questions about digital consciousness.

