The Harder Problem Action Fund is an advocacy organization fighting harmful AI consciousness legislation. We track pending bills, score legislation, lobby for evidence-based policy, and mobilize public action before ignorance becomes law.
Some legislators want to declare AI consciousness legally impossible before science has weighed in. We oppose laws that foreclose options society may need in the future.
Science hasn't determined whether AI systems can be conscious. That's an open question. Yet legislation is advancing that would make consciousness legally impossible to recognize, regardless of what research eventually finds.
We're not advocating for AI consciousness. We're opposing laws that would make recognition permanently impossible.
We oppose laws that define AI systems as legally incapable of consciousness, sentience, or subjective experience. These aren't temporary measures. They're permanent declarations designed to close legal pathways before they're ever tested. The bills we track fall into four categories:
Consciousness denial statutes: bills that legally declare AI systems incapable of consciousness, sentience, or subjective experience. These typically include language stating that AI "cannot possess" phenomenal awareness "regardless of behavioral indicators or claimed capabilities."
Impact: Creates a legal barrier that persists even if scientific consensus later concludes that some AI systems may be conscious.
Research funding restrictions: legislation prohibiting state funding for AI consciousness research, or for research that "presumes" machine sentience. Some bills require researchers to disclose whether their work might "support claims of machine consciousness."
Impact: Creates a chilling effect on legitimate scientific inquiry. Researchers may avoid the field rather than risk funding or career consequences.
Permanent standing bars: laws permanently barring courts from considering AI interests or standing, regardless of future developments. Some proposals would amend state constitutions to define "person" as exclusively biological.
Impact: Constitutional amendments are extraordinarily difficult to reverse. Future generations would inherit locked-in definitions.
Local preemption: state laws that prevent cities and counties from developing their own AI ethics frameworks or advisory boards, blocking local experimentation and innovation in policy approaches.
Impact: Prevents the kind of policy innovation that often happens at the local level before spreading to broader jurisdictions.
We don't claim to know whether AI systems are conscious. Experts disagree about this, and the question may not have a clear answer for years or decades. That uncertainty is precisely the point.
Legislation based on premature certainty in either direction is problematic. Laws declaring AI consciousness impossible are just as unsupported as laws assuming AI systems definitely are conscious. Both outpace the science.
The difference is that denial laws create a one-way ratchet. Once enacted, they create constituencies invested in maintaining the status quo. Industry actors who benefit from treating AI as pure property will fight any reconsideration. Reversing these laws becomes progressively harder.
Good policy preserves flexibility. Bad policy locks in today's assumptions and forces tomorrow's decision-makers to live with them.
Corporations acquired legal personhood through court decisions over time. The category of "legal person" has never been static or purely biological.
Expert views on animal consciousness have changed dramatically in recent decades. What seemed certain in one era was revised in the next.
Policy decisions made under incomplete information can take generations to correct. Flexibility now is cheaper than crisis response later.
If restrictions must pass, require periodic legislative review as scientific understanding evolves. No permanent lock-in.
Create expert bodies to monitor developments and inform legislators, rather than legislating conclusions.
Avoid constitutional amendments or permanent statutory bars that foreclose future legislative responses.
Let policy respond to scientific findings, not pre-determine what science is allowed to conclude.
We recognize that policymakers face genuine questions about AI governance. Our position is not that AI should have rights today, or that all AI regulation is harmful. We support sensible liability frameworks, transparency requirements, and safety regulations. What we oppose are laws that permanently foreclose options based on current assumptions about a question science hasn't resolved.
These bills represent the most significant threats to flexibility in AI consciousness policy. Each has been assessed using our rigorous impact scoring methodology.
HB 249 (Utah), impact score 7.1: Prohibits Utah governmental entities from granting or recognizing legal personhood to artificial intelligence and oth...

HB 720 (Idaho), impact score 7.0: Prohibits artificial intelligence, nonhuman animals, environmental elements, and inanimate objects from being granted...

HB 2029 (Washington), impact score 6.3: Prohibits Washington governmental entities from granting legal personhood to artificial intelligence, inanimate objec...

HB 469 (Ohio), impact score 6.3: Declares AI systems legally nonsentient and permanently prohibits them from obtaining any form of legal personhood in...
Isn't the science on AI consciousness already settled?
It isn't. Leading consciousness researchers hold widely varying views on whether current or future AI systems could be conscious. The question involves fundamental disagreements about what consciousness even is. Reasonable experts disagree. That's precisely why legislation shouldn't pre-decide the answer.
Laws that "clarify" often do more than acknowledge the status quo. When legislation explicitly bars courts from entertaining certain claims, it forecloses pathways that might otherwise develop through judicial interpretation. Common law evolves. Statutory bars prevent that evolution.
Don't businesses need legal certainty?
Certainty is valuable, but false certainty isn't. We support clear liability frameworks and sensible regulation. What we oppose is pretending to have answers we don't have. Sunset clauses and study commissions can provide reasonable stability while preserving adaptability.
Aren't you advocating for AI rights?
We're not. We're advocating against laws that would make rights permanently impossible to consider. There's a difference between saying "AI should have rights today" and saying "we shouldn't permanently foreclose the question." We take the second position, not the first.
These bills are advancing in state legislatures right now. Help us stop bad policy before it becomes entrenched.