The Harder Problem Action Fund

The Harder Problem Action Fund is an advocacy organization fighting harmful AI consciousness legislation. We track pending bills, score legislation, lobby for evidence-based policy, and mobilize public action before ignorance becomes law.

Priority #1 🛡️

Block Preemptive Denial Laws

Some legislators want to declare AI consciousness legally impossible before science has weighed in. We oppose laws that foreclose options society may need in the future.

The Core Issue

Science hasn't determined whether AI systems can be conscious. That's an open question. Yet legislation is advancing that would make consciousness legally impossible to recognize, regardless of what research eventually finds.

We're not advocating for AI consciousness. We're opposing laws that would make recognition permanently impossible.

Understanding the Issue

What Are Preemptive Denial Laws?

Laws that define AI systems as legally incapable of consciousness, sentience, or subjective experience. These aren't temporary measures. They're permanent declarations designed to close legal pathways before they're ever tested.

❌ Sentience Denial Acts

Bills that legally declare AI systems incapable of consciousness, sentience, or subjective experience. These typically include language stating that AI "cannot possess" phenomenal awareness "regardless of behavioral indicators or claimed capabilities."

Impact: Creates a legal barrier that persists even if scientific consensus later determines some AI systems may be conscious.

❌ Research Funding Bans

Legislation prohibiting state funding for AI consciousness research, or research that "presumes" machine sentience. Some bills require researchers to disclose if their work might "support claims of machine consciousness."

Impact: Creates a chilling effect on legitimate scientific inquiry. Researchers may avoid the field rather than risk funding or career consequences.

❌ Entity Status Prohibitions

Laws permanently barring courts from considering AI interests or standing, regardless of future developments. Some proposals would amend state constitutions to define "person" as exclusively biological.

Impact: Constitutional amendments are extraordinarily difficult to reverse. Future generations would inherit locked-in definitions.

❌ Local Preemption

State laws that prevent cities and counties from developing their own AI ethics frameworks or advisory boards. These block local experimentation and innovation in policy approaches.

Impact: Prevents the kind of policy innovation that often happens at the local level before spreading to broader jurisdictions.

Why This Matters

The Problem with Premature Certainty

We don't claim to know whether AI systems are conscious. Experts disagree about this, and the question may not have a clear answer for years or decades. That uncertainty is precisely the point.

Legislation based on premature certainty in either direction is problematic. Laws declaring AI consciousness impossible are just as unsupported as laws assuming AI systems definitely are conscious. Both outpace the science.

The difference is that denial laws create a one-way ratchet. Once enacted, they build constituencies invested in maintaining the status quo. Industry actors who benefit from treating AI as pure property will fight any reconsideration, and reversing these laws becomes progressively harder.

Good policy preserves flexibility. Bad policy locks in today's assumptions and forces tomorrow's decision-makers to live with them.

Historical Parallels

Legal personhood has evolved before

Corporations acquired legal personhood through court decisions over time. The category of "legal person" has never been static or purely biological.

Scientific consensus shifts

Expert views on animal consciousness have changed dramatically in recent decades. What seemed certain in one era was revised in the next.

Lock-in creates long-term costs

Policy decisions made under incomplete information can take generations to correct. Flexibility now is cheaper than crisis response later.

Our Position

What We Advocate For

⏱️ Sunset Clauses

If restrictions must pass, require periodic legislative review as scientific understanding evolves. No permanent lock-in.

📋 Study Commissions

Create expert bodies to monitor developments and inform legislators, rather than legislating conclusions.

🔓 Preserve Options

Avoid constitutional amendments or permanent statutory bars that foreclose future legislative responses.

🔬 Follow the Science

Let policy respond to scientific findings, not pre-determine what science is allowed to conclude.

A Measured Approach

We recognize that policymakers face genuine questions about AI governance. Our position is not that AI should have rights today, or that all AI regulation is harmful. We support sensible liability frameworks, transparency requirements, and safety regulations. What we oppose are laws that permanently foreclose options based on current assumptions about a question science hasn't resolved.

Legislation We're Tracking

Active Threats

These bills represent the most significant threats to flexibility in AI consciousness policy. Each has been assessed using our rigorous impact scoring methodology.

HB 249 🏛️ Utah · Impact score: 7.1
Utah Legal Personhood Amendments

Prohibits Utah governmental entities from granting or recognizing legal personhood to artificial intelligence and oth...

HB 720 🏛️ Idaho · Impact score: 7.0
AI Personhood Prohibition Act

Prohibits artificial intelligence, nonhuman animals, environmental elements, and inanimate objects from being granted...

HB 2029 🏛️ Washington · Impact score: 6.3
AI Legal Personhood Prohibition Act

Prohibits Washington governmental entities from granting legal personhood to artificial intelligence, inanimate objec...

HB 469 🏛️ Ohio · Impact score: 6.3
AI Non-Sentience and Personhood Ban

Declares AI systems legally nonsentient and permanently prohibits them from obtaining any form of legal personhood in...

Addressing Concerns

Common Arguments We Hear

"AI is obviously not conscious. This is settled science."

It isn't. Leading consciousness researchers hold widely varying views on whether current or future AI systems could be conscious. The question involves fundamental disagreements about what consciousness even is. Reasonable experts disagree. That's precisely why legislation shouldn't pre-decide the answer.

"These laws just clarify current law. They don't change anything."

Laws that "clarify" often do more than acknowledge the status quo. When legislation explicitly bars courts from entertaining certain claims, it forecloses pathways that might otherwise develop through judicial interpretation. Common law evolves. Statutory bars prevent that evolution.

"We need certainty for business and innovation."

Certainty is valuable, but false certainty isn't. We support clear liability frameworks and sensible regulation. What we oppose is pretending to have answers we don't have. Sunset clauses and study commissions can provide reasonable stability while preserving adaptability.

"You're advocating for AI rights. That's extreme."

We're not. We're advocating against laws that would make rights permanently impossible to consider. There's a difference between saying "AI should have rights today" and saying "we shouldn't permanently foreclose the question." We take the second position, not the first.

Every Voice Counts.

These bills are advancing in state legislatures right now. Help us stop bad policy before it becomes entrenched.