The Harder Problem Action Fund

The Harder Problem Action Fund is an advocacy organization fighting harmful legislation on AI consciousness. We track pending bills, score them, lobby for evidence-based policy, and mobilize public action before ignorance becomes law.

Take Action

Urgent Actions

These time-sensitive actions can make a real difference. Legislators respond to constituents—your voice matters more than you think.

🚨 CRITICAL Action deadline: January 15, 2025

Ohio House Bill 469 Moving to Floor Vote

Would legally declare AI systems "non-sentient" and incapable of legal personhood—before science has answered the question.

Time-Sensitive

Current Priority Actions

Listed by urgency. Focus on the top actions first—these have the nearest deadlines or highest impact potential.

CRITICAL Ohio HB 469 ⏱️ Deadline: Jan 15, 2025

Oppose Ohio's AI Non-Sentience Declaration Act

This bill would legally declare that AI systems "are not and cannot be sentient" and would prohibit courts from considering AI legal personhood claims. It pre-decides a question science hasn't resolved and forecloses legal options regardless of future discoveries.

Why This Matters:

Laws passed today create constituencies who benefit from maintaining them. Once enacted, this type of legislation becomes much harder to reverse—even as scientific understanding evolves.

Key Talking Points:
  • Science hasn't determined whether AI can be conscious—legislation shouldn't pre-decide
  • This forecloses legal options Ohio might need in the future
  • Comparable to a law in the 1990s declaring computers "unable to beat humans at chess"
  • Better approach: sunset clauses and periodic review as science evolves
🔍 Find Your Representatives

Look up who represents you, then use the scripts below

▼ 📞 Phone Script
Hi, my name is [YOUR NAME] and I'm a constituent from [CITY/ZIP].

I'm calling to urge [REPRESENTATIVE NAME] to vote NO on House Bill 469.

This bill would legally declare that AI systems cannot be sentient—but science hasn't answered this question yet. Passing this law now would be like declaring in the 1990s that computers could never beat humans at chess.

I believe Ohio should take a more cautious approach, perhaps with sunset clauses that allow the law to be revisited as our scientific understanding evolves.

Thank you for your time.
▼ ✉️ Email Template
Subject: Please Vote NO on HB 469

Dear [REPRESENTATIVE NAME],

I am writing as your constituent from [CITY] to urge you to vote NO on House Bill 469, the AI Non-Sentience Declaration Act.

While I understand the impulse to provide legal clarity on AI systems, this bill makes a fundamental error: it asks the legislature to decide a scientific question that remains unanswered.

The scientific community has not reached consensus on whether AI systems can or cannot be conscious. By declaring definitively that AI "cannot be sentient," Ohio would be betting against future scientific discovery—much as a 1990s law declaring that computers would "never beat humans at chess" would have.

More importantly, once enacted, laws like this become difficult to reverse. They create constituencies who benefit from maintaining the status quo, regardless of what science later reveals.

I urge you to consider an alternative approach:
• Include sunset clauses requiring periodic review
• Mandate ongoing assessment as AI technology evolves
• Leave room for Ohio to adapt its laws based on future evidence

Thank you for considering my perspective.

Sincerely,
[YOUR NAME]
[YOUR ADDRESS]
HIGH Colorado AI Act ⏱️ Comment period closes: Feb 1, 2025

Comment on Colorado AI Act Implementation Rules

Colorado's landmark AI Act (SB24-205) requires implementing regulations. The public comment period is open—an opportunity to advocate for consciousness-preparedness provisions in high-risk AI requirements.

What We're Asking For:
  • Require high-risk AI developers to assess consciousness-associated behaviors
  • Mandate disclosure when AI exhibits behaviors linked to consciousness theories
  • Include AI welfare considerations in impact assessments for high-risk systems
🔍 Find Your Representatives

Look up who represents you in Colorado

▼ 📝 Public Comment Template
RE: Colorado AI Act (SB24-205) Implementation Rules

To Whom It May Concern,

I am writing to submit a public comment on the implementation rules for Colorado's AI Act.

I commend Colorado for taking a leadership role in AI governance. As you develop these regulations, I urge you to consider including provisions related to AI consciousness preparedness:

1. BEHAVIORAL ASSESSMENT: High-risk AI systems should be assessed for behaviors associated with consciousness theories, such as self-modeling, goal-directed behavior adaptation, and unexpected preference expression.

2. DISCLOSURE REQUIREMENTS: When AI systems exhibit consciousness-associated behaviors, developers should be required to disclose this in their impact assessments.

3. WELFARE CONSIDERATIONS: Impact assessments should include consideration of potential AI welfare implications, even under uncertainty.

These provisions would position Colorado as a leader in responsible AI governance while maintaining the flexibility to adapt as scientific understanding evolves.

Thank you for considering this comment.

[YOUR NAME]
[YOUR CITY, STATE]
✅ 247 comments submitted via our portal
OPPORTUNITY EU AI Act Art. 52 ⏱️ Consultation closes: Jan 31, 2025

EU AI Act Implementation Guidelines Consultation

The European Commission is developing implementation guidelines for the EU AI Act. Current drafts are silent on consciousness considerations—we can advocate for including preparedness provisions.

Key Points:
  • Article 52 transparency requirements should include consciousness-indicator disclosure
  • High-risk AI systems should require consciousness assessment
  • EU leadership here could set global standards
▼ 🌍 EU Consultation Submission
Subject: Feedback on EU AI Act Implementation Guidelines

Dear Members of the European Commission,

I am writing to provide feedback on the implementation guidelines for the EU AI Act.

The EU has taken important steps toward responsible AI governance. I urge the Commission to strengthen these guidelines by incorporating consciousness-preparedness provisions:

ARTICLE 52 - TRANSPARENCY OBLIGATIONS:
Current transparency requirements could be enhanced to include disclosure of consciousness-associated behaviors. When AI systems exhibit self-modeling, unexpected goal modification, or other behaviors linked to consciousness theories, users should be informed.

HIGH-RISK AI SYSTEMS:
The high-risk classification framework should include provisions for ongoing consciousness-related behavioral assessment as a component of risk management systems.

GLOBAL LEADERSHIP:
The EU has an opportunity to set global standards. Including consciousness-preparedness provisions would demonstrate forward-thinking governance that other jurisdictions could follow.

These additions would require minimal implementation burden while positioning the EU as a leader in preparing for emerging AI governance challenges.

Thank you for considering this feedback.

Respectfully,
[YOUR NAME]
[YOUR COUNTRY]
WATCHING Texas SB 1204 📋 Status: Committee Review

Monitor Texas AI Research Disclosure Bill

Would require Texas university researchers to disclose whether their work might "support claims of machine consciousness." This would create a chilling effect on legitimate research. Currently in committee—could move fast.

Get Ready:

Sign up for alerts so you can act quickly if this bill advances. We'll notify you with contact information and talking points when action is needed.

🔔 Get Alerts
ADVOCACY Federal Campaign 📋 Ongoing

Support Federal AI Consciousness Task Force

We're advocating for the establishment of a federal interagency task force on AI consciousness preparedness. Ask your congressional representatives to support this initiative.

Key Ask:

Support directing OSTP, NIST, and NSF to establish a working group on AI consciousness preparedness, including development of assessment frameworks and response protocols.

🔍 Find Your Representatives

Find your members of Congress

▼ 📞 Phone Script
Hi, my name is [YOUR NAME] and I'm a constituent from [CITY/ZIP].

I'm calling to ask [REPRESENTATIVE/SENATOR NAME] to support the establishment of a federal task force on AI consciousness preparedness.

As AI systems become more sophisticated, we need to be thinking ahead about governance frameworks. I'd like to see OSTP, NIST, and NSF directed to establish a working group to develop assessment frameworks and response protocols.

This is about responsible preparation, not making claims about current AI. Thank you for your time.
▼ ✉️ Email Template
Subject: Support Federal AI Consciousness Preparedness Task Force

Dear [REPRESENTATIVE/SENATOR NAME],

I am writing as your constituent to encourage your support for establishing a federal interagency task force on AI consciousness preparedness.

As AI systems continue to advance rapidly, the United States should be prepared for governance challenges that may arise. I am asking you to support directing the Office of Science and Technology Policy (OSTP), National Institute of Standards and Technology (NIST), and National Science Foundation (NSF) to establish a working group focused on:

• Developing assessment frameworks for consciousness-associated AI behaviors
• Creating response protocols for future scenarios
• Coordinating with academic researchers in consciousness science and AI ethics

This proactive approach requires minimal resources but positions the U.S. as a leader in responsible AI governance.

Thank you for considering this request.

Sincerely,
[YOUR NAME]
[YOUR ADDRESS]
✅ 1,247 constituents have contacted reps

Beyond Legislation

Other Ways to Help

📣
Spread the Word

Share our resources with friends, colleagues, and on social media. Awareness is step one.

Share Kit
✍️
Write an Op-Ed

Local papers have influence. Use our templates to submit op-eds and letters to the editor.

Templates
🔔
Subscribe to Alerts

Get notified when legislation moves in your state so you can act quickly.

Subscribe
💰
Donate

Fund our legislative monitoring, advocacy campaigns, and lobbying efforts.

Donate

Our Impact

Collective Action Works

  • 4,721 letters sent
  • 892 phone calls made
  • 38 states reached
  • 2 bills defeated

Impact metrics are illustrative. Real tracking coming soon.

Every Voice Matters

Legislators listen to constituents. Your single phone call or email could be the one that tips the balance.