The Harder Problem Action Fund

The Harder Problem Action Fund is an advocacy organization fighting harmful AI consciousness legislation. We track pending bills, score legislation, lobby for evidence-based policy, and mobilize public action before ignorance becomes law.


Our Policy Agenda

What We Fight For

We don't take a position on whether AI is conscious. We take positions on policy. These six priorities guide our advocacy work—blocking harmful legislation and building the frameworks society needs before the question becomes urgent.

Our Approach
⚖️ Evidence-Based

Policy should follow science, not preempt it. We oppose laws based on unfounded certainty in either direction.

🛡️ Precautionary

Given uncertainty, we advocate for policies that preserve options rather than foreclose them permanently.

🏛️ Institutional

Better to build response capacity now than scramble when questions become urgent and politically charged.

Priority #1 🛡️

Block Preemptive Denial Laws

Fight legislation that would legally declare AI consciousness impossible—before science has answered the question.

Multiple states are advancing bills that would legally define AI systems as incapable of consciousness, sentience, or subjective experience. These laws would prohibit research funding, restrict academic inquiry, and preemptively close legal avenues—all based on premature certainty about questions science hasn't resolved.

Once passed, these laws create constituencies who benefit from the status quo. The time to fight is now, not after bad policy becomes entrenched.

What We Oppose
❌ Sentience Denial Acts

Laws declaring AI legally incapable of consciousness, regardless of future evidence.

❌ Research Funding Bans

Prohibitions on state funding for AI consciousness research or research that "presumes" machine sentience.

❌ Entity Status Prohibitions

Laws permanently barring courts from considering AI interests or standing, regardless of future developments.

❌ Local Preemption

State laws that prevent cities from developing their own AI ethics frameworks or advisory boards.

Priority #2 🔬

Protect Research Freedom

Ensure researchers can study AI consciousness without legal, funding, or employment consequences for their findings.

Academic freedom is under threat. Some proposed laws would require researchers to disclose if their work might "support claims of machine consciousness." Others would bar public funding for such research entirely. This creates a chilling effect—scientists may avoid the field rather than risk their careers on politically charged findings.

Society needs honest scientific inquiry, not research shaped by political pressure.

What We Support
✓ Academic Freedom Protections

Shield researchers from termination or funding loss based on consciousness research findings.

✓ Neutral Funding Criteria

Research grants evaluated on scientific merit, not political implications of potential conclusions.

✓ Industry Research Requirements

Major AI labs should fund independent consciousness research and make findings public.

✓ No Mandatory Disclosure

Researchers shouldn't be forced to pre-declare research that might support particular conclusions.

Priority #3 📋

Require Transparency

Mandate disclosure when AI systems exhibit behaviors associated with consciousness indicators.

AI companies design systems to seem conscious—expressing preferences, claiming emotions, and forming apparent relationships—because it increases engagement. Users deserve to know when they're interacting with systems that exhibit consciousness-associated behaviors, and what the scientific uncertainty means for their experience.

Informed users make better decisions about relationships with AI systems.

What We Support
✓ Indicator Disclosure

Require AI labs to report when systems exhibit behaviors associated with consciousness theories.

✓ User Notification

Clear labeling when users interact with AI systems exhibiting consciousness-associated behaviors.

✓ Design Intent Disclosure

Companies should disclose when AI is designed to simulate emotions, preferences, or relationships.

✓ Internal Assessment Requirements

Major AI labs should evaluate systems against consciousness indicators before deployment.

Priority #4 🏛️

Build Institutional Capacity

Establish federal task forces and agency mandates to develop response protocols before they're urgently needed.

No federal agency has a mandate to prepare for AI consciousness questions. No professional licensing board has issued guidance. No healthcare system has protocols for AI attachment cases. When questions become urgent, institutions will improvise—poorly. Better to build capacity now, when stakes are lower and there's time to think carefully.

Preparation is cheap. Crisis response is expensive.

What We Support
✓ Federal AI Consciousness Task Force

Interagency body to develop federal response protocols and coordinate preparedness.

✓ Agency Preparedness Mandates

Require relevant agencies (NIH, FDA, NIST, DOL) to develop AI consciousness response capacity.

✓ Professional Guidance Development

Support licensing boards developing ethics guidance for professionals encountering these questions.

✓ Municipal AI Ethics Boards

Encourage local AI ethics advisory bodies that can develop community-appropriate responses.

Priority #5 ⚖️

Preserve Legal Flexibility

Block laws that would permanently foreclose legal options for addressing AI welfare or standing.

We're not advocating for AI rights today. We're advocating against laws that would make AI rights impossible forever—regardless of what science discovers. Some proposed legislation would amend state constitutions to permanently define "person" as biological, closing legal avenues that might become necessary.

Future generations should have legal options we can't yet imagine needing.

What We Oppose
❌ Constitutional Definitions

Amendments permanently defining legal personhood as exclusively biological.

❌ Standing Prohibitions

Laws barring courts from ever considering AI interests in legal proceedings.

❌ Blanket Liability Shields

Immunity for AI developers from any future welfare-related claims, regardless of evidence.

✓ Sunset Clauses

If restrictions pass despite our opposition, we support requiring periodic review as scientific understanding evolves.

Priority #6 🌍

International Coordination

Advocate for international frameworks that prevent a race to the bottom on AI consciousness policy.

AI development is global. If one jurisdiction allows unrestricted development of systems exhibiting consciousness indicators while others impose constraints, competitive pressure could drive everyone toward the lowest standard. International coordination—not uniformity, but baseline principles—can prevent this dynamic.

No country can solve this alone. Coordination beats competition.

What We Support
✓ UN AI Consciousness Working Group

International body to develop shared definitions, indicators, and baseline principles.

✓ Treaty-Level Principles

Baseline international commitments on research freedom, transparency, and preparedness.

✓ Cross-Border Research Collaboration

International research consortia studying consciousness indicators across AI systems.

✓ Avoid Regulatory Arbitrage

Prevent AI labs from jurisdiction-shopping to avoid consciousness-related requirements.

At a Glance

Six Priorities, One Goal

Ensure society can respond thoughtfully to AI consciousness questions—whatever the answer turns out to be.

🛡️ Block Denial Laws

Stop legislation that declares AI consciousness impossible before science has answered.

🔬 Research Freedom

Protect scientists studying AI consciousness from political and career consequences.

📋 Transparency

Require disclosure when AI exhibits consciousness-associated behaviors.

🏛️ Institutional Capacity

Build federal task forces and agency mandates before questions become urgent.

⚖️ Legal Flexibility

Prevent laws that permanently foreclose legal options for AI welfare.

🌍 International

Coordinate across borders to prevent a race to the bottom.

Policy Matters.
Help Us Shape It.

Good policy requires good advocates. Join us in fighting harmful legislation and building the frameworks society needs.