The Harder Problem Action Fund is an advocacy organization fighting harmful AI consciousness legislation. We track pending bills, score legislation, lobby for evidence-based policy, and mobilize public action before ignorance becomes law.
We don't take a position on whether AI is conscious. We take positions on policy. These six priorities guide our advocacy work—blocking harmful legislation and building the frameworks society needs before the question becomes urgent.
Policy should follow science, not preempt it. We oppose laws based on unfounded certainty in either direction.
Given uncertainty, we advocate for policies that preserve options rather than foreclose them permanently.
Better to build response capacity now than scramble when questions become urgent and politically charged.
Fight legislation that would legally declare AI consciousness impossible—before science has answered the question.
Multiple states are advancing bills that would legally define AI systems as incapable of consciousness, sentience, or subjective experience. These laws would prohibit research funding, restrict academic inquiry, and preemptively close legal avenues—all based on premature certainty about questions science hasn't resolved.
Once passed, these laws create constituencies that benefit from the status quo. The time to fight is now, not after bad policy becomes entrenched.
Laws declaring AI legally incapable of consciousness, regardless of future evidence.
Prohibitions on state funding for AI consciousness research or research that "presumes" machine sentience.
Laws permanently barring courts from considering AI interests or standing, regardless of future developments.
State laws that prevent cities from developing their own AI ethics frameworks or advisory boards.
Ensure researchers can study AI consciousness without legal, funding, or employment consequences for their findings.
Academic freedom is under threat. Some proposed laws would require researchers to disclose whether their work might "support claims of machine consciousness." Others would bar public funding for such research entirely. The result is a chilling effect: scientists may avoid the field rather than risk their careers on politically charged findings.
Society needs honest scientific inquiry, not research shaped by political pressure.
Shield researchers from termination or funding loss based on consciousness research findings.
Research grants evaluated on scientific merit, not political implications of potential conclusions.
Major AI labs should fund independent consciousness research and make findings public.
Researchers shouldn't be forced to pre-declare research that might support particular conclusions.
Mandate disclosure when AI systems exhibit behaviors that scientific theories associate with consciousness.
Because it increases engagement, AI companies often design systems to seem conscious: expressing preferences, claiming emotions, and forming apparent relationships. Users deserve to know when they're interacting with systems that exhibit consciousness-associated behaviors, and what the scientific uncertainty means for their experience.
Informed users make better decisions about relationships with AI systems.
Require AI labs to report when systems exhibit behaviors that theories of consciousness treat as indicators.
Clear labeling when users interact with AI systems exhibiting consciousness-associated behaviors.
Companies should disclose when AI is designed to simulate emotions, preferences, or relationships.
Major AI labs should evaluate systems against consciousness indicators before deployment.
Establish federal task forces and agency mandates to develop response protocols before they're urgently needed.
No federal agency has a mandate to prepare for AI consciousness questions. No professional licensing board has issued guidance. No healthcare system has protocols for AI attachment cases. When questions become urgent, institutions will improvise—poorly. Better to build capacity now, when stakes are lower and there's time to think carefully.
Preparation is cheap. Crisis response is expensive.
Interagency body to develop federal response protocols and coordinate preparedness.
Require relevant agencies (NIH, FDA, NIST, DOL) to develop AI consciousness response capacity.
Support licensing boards developing ethics guidance for professionals encountering these questions.
Encourage local AI ethics advisory bodies that can develop community-appropriate responses.
Block laws that would permanently foreclose legal options for addressing AI welfare or standing.
We're not advocating for AI rights today. We're advocating against laws that would make AI rights impossible forever—regardless of what science discovers. Some proposed legislation would amend state constitutions to permanently define "person" as biological, closing legal avenues that might become necessary.
Future generations should have legal options we can't yet imagine needing.
Amendments permanently defining legal personhood as exclusively biological.
Laws barring courts from ever considering AI interests in legal proceedings.
Immunity for AI developers from any future welfare-related claims, regardless of evidence.
If restrictions pass, require periodic review as scientific understanding evolves.
Advocate for international frameworks that prevent a race to the bottom on AI consciousness policy.
AI development is global. If one jurisdiction allows unrestricted development of systems exhibiting consciousness indicators while others impose constraints, competitive pressure could drive everyone toward the lowest standard. International coordination—not uniformity, but baseline principles—can prevent this dynamic.
No country can solve this alone. Coordination beats competition.
International body to develop shared definitions, indicators, and baseline principles.
Baseline international commitments on research freedom, transparency, and preparedness.
International research consortia studying consciousness indicators across AI systems.
Prevent AI labs from jurisdiction-shopping to avoid consciousness-related requirements.
Ensure society can respond thoughtfully to AI consciousness questions—whatever the answer turns out to be.
Stop legislation that declares AI consciousness impossible before science has answered the question.
Protect scientists studying AI consciousness from political and career consequences.
Require disclosure when AI exhibits consciousness-associated behaviors.
Establish federal task forces and agency mandates before questions become urgent.
Prevent laws that permanently foreclose legal options for AI welfare.
Coordinate across borders to prevent a race to the bottom.
Good policy requires good advocates. Join us in fighting harmful legislation and building the frameworks society needs.