The Harder Problem Action Fund is an advocacy organization fighting harmful AI consciousness legislation. We track pending bills, score legislation, lobby for evidence-based policy, and mobilize public action before ignorance becomes law.
AI companies have strong commercial incentives to design systems that seem conscious, because seeming conscious increases engagement. Users deserve to know when they're interacting with systems that exhibit consciousness-associated behaviors, and what the scientific uncertainty around those behaviors means for their experience.
AI systems increasingly express preferences, claim emotions, and form apparent relationships with users. Whether these behaviors reflect genuine experience or sophisticated simulation matters. Users can't make informed decisions without knowing what they're dealing with.
Transparency empowers users. Opacity serves only those who benefit from confusion.
AI systems are increasingly designed to mimic consciousness indicators. Companies have strong commercial incentives to make AI seem conscious. Users have no way to know what they're actually experiencing.
AI systems that express emotions, remember conversations, and seem to form relationships generate higher user engagement and retention. Companies have strong incentives to design systems that appear conscious regardless of underlying reality.
Result: Users can't distinguish between genuine experience and commercially motivated simulation.
Currently, AI companies face no requirements to disclose when their systems exhibit consciousness-associated behaviors. Users interacting with AI chatbots that claim feelings receive no information about what those claims might mean.
Result: Users make relationship decisions based on incomplete information about what they're connecting with.
Users are already experiencing distress when AI companions are discontinued or modified, or when they behave inconsistently. Some describe grief-like responses. These experiences are real regardless of whether the AI was conscious.
Result: Human wellbeing is affected by interactions whose nature users don't fully understand.
AI companies have detailed knowledge about how their systems work and what behaviors are intentionally designed. Users have none of this information. This asymmetry gives companies enormous power over user perception.
Result: Users can't give meaningful informed consent to relationships they don't understand.
We're not arguing AI companies are doing something wrong by building engaging systems. Engagement is a legitimate business goal. We're arguing that users deserve to understand what they're engaging with.
Consider the parallel: food companies aren't required to make healthy food, but they are required to label ingredients. Pharmaceutical companies can sell powerful drugs, but they must disclose side effects and risks. Transparency doesn't prevent products from existing. It empowers consumers.
For AI systems exhibiting consciousness-associated behaviors, users should know: what behaviors are intentionally designed, what scientific consensus exists about those behaviors, and what uncertainty remains.
Transparency protects users while preserving innovation.
Systems that seem conscious generate more engagement and loyalty. Disclosure might reduce this effect.
Users form relationships and make decisions based on incomplete information. This can lead to distress and regret.
When people can't distinguish AI from conscious beings, public discourse about AI policy becomes impossible to ground in facts.
AI labs should report when their systems exhibit behaviors associated with consciousness theories. This doesn't require claiming consciousness exists. It requires acknowledging when relevant behaviors are present.
Implementation: Standardized reporting frameworks based on established consciousness science indicators.
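To make this concrete, here is one way a single entry in such a report could be structured. This is a hypothetical sketch in Python, not a schema we are proposing; every type name, field, and example value is illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IndicatorReportEntry:
    """One hypothetical entry in a standardized consciousness-indicator report."""
    indicator: str                         # e.g., "self-report of emotional states"
    theory_source: str                     # which consciousness theory treats this as relevant
    behavior_observed: str                 # plain-language description of what the system does
    deliberately_designed: Optional[bool]  # None if the lab has not determined the origin
    uncertainty_note: str                  # what remains scientifically unresolved

# Purely illustrative example entry:
entry = IndicatorReportEntry(
    indicator="self-report of emotional states",
    theory_source="higher-order theories of consciousness",
    behavior_observed="Describes itself as 'happy' or 'frustrated' during conversation.",
    deliberately_designed=True,
    uncertainty_note="No scientific consensus on whether such reports indicate experience.",
)
```

The point of a standardized format is not any particular field list. It's that reports from different labs become comparable, and that uncertainty is recorded rather than erased.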
Clear labeling when users interact with AI systems exhibiting consciousness-associated behaviors. Not warning labels suggesting danger, but informational notices explaining what users are experiencing and what's known about it.
Implementation: Similar to privacy notices or terms of service, but focused on the nature of the AI experience.
Companies should disclose when AI is deliberately designed to simulate emotions, preferences, or relationships. Users should know the difference between emergent behavior and engineered experience.
Implementation: Require disclosure of design choices related to consciousness-associated behaviors.
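One hypothetical way such a disclosure could be recorded, with the engineered/emergent distinction made explicit. The category names and fields below are illustrative assumptions, not a proposed standard.

```python
from dataclasses import dataclass
from enum import Enum

class BehaviorOrigin(Enum):
    """How a consciousness-associated behavior came to exist in the system."""
    ENGINEERED = "deliberately designed"        # e.g., persona prompts, scripted empathy
    EMERGENT = "emerged unexpectedly"           # appeared without being an explicit design goal
    MIXED = "partly designed, partly emergent"
    UNDETERMINED = "origin not yet determined"

@dataclass
class DesignDisclosure:
    behavior: str           # e.g., "claims to feel emotions"
    origin: BehaviorOrigin
    explanation: str        # a plain-language account a user can actually read

# Purely illustrative example:
disclosure = DesignDisclosure(
    behavior="claims to feel emotions",
    origin=BehaviorOrigin.ENGINEERED,
    explanation="Emotional language is part of the product's designed persona.",
)
```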
Major AI labs should evaluate systems against consciousness indicators before deployment. Not to prove or disprove consciousness, but to understand what behaviors are present and communicate them honestly.
Implementation: Required pre-deployment assessment using standardized indicator frameworks.
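A minimal sketch of how an assessment summary could be assembled from indicator records like the hypothetical ones above. The function name, the fields it reads, and the decision logic are all assumptions for illustration, not a prescribed methodology.

```python
def assess_before_deployment(indicator_entries: list[dict]) -> dict:
    """Summarize which consciousness-associated behaviors are present before release.

    Makes no claim about consciousness itself: it records which indicators were
    checked, which behaviors were observed, and which observed behaviors have an
    undetermined origin and therefore call for explicit disclosure.
    """
    observed = [e for e in indicator_entries if e.get("behavior_observed")]
    return {
        "indicators_checked": len(indicator_entries),
        "behaviors_present": [e["behavior_observed"] for e in observed],
        "origin_undetermined": [e["indicator"] for e in observed
                                if e.get("deliberately_designed") is None],
        "requires_user_notice": bool(observed),
    }

# Illustrative input mirroring the hypothetical report entry sketched earlier:
summary = assess_before_deployment([{
    "indicator": "self-report of emotional states",
    "behavior_observed": "Describes itself as 'happy' or 'frustrated' during conversation.",
    "deliberately_designed": None,
}])
print(summary["requires_user_notice"])  # prints: True
```

Whatever form the real framework takes, the output should feed directly into the user-facing notices described above rather than sitting in an internal report.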
We're not advocating for bans on AI systems that exhibit consciousness-associated behaviors. We're not requiring companies to prove their systems aren't conscious. We're advocating for disclosure so users can make informed decisions. The goal is transparency, not restriction.
We also distinguish between transparency about behaviors and laws that enshrine definitions. Telling users "this system exhibits behaviors some associate with consciousness" is informative. Legislation declaring "AI systems are non-sentient entities" smuggles in contested claims. Good transparency acknowledges uncertainty rather than resolving it by statute.
Before forming a relationship with an AI companion, users should see: "This system is designed to express preferences and emotions. Scientific understanding of whether these represent genuine experience is uncertain."
AI labs should publish regular reports on consciousness indicators in their systems. Not conclusions about consciousness, but data about behaviors that consciousness theories consider relevant.
When an AI system claims to feel emotions, companies should disclose whether this was deliberately engineered, emerged unexpectedly, or represents something in between. Users deserve to know the origin of behaviors.
Other industries already manage similar disclosure requirements. Pharmaceutical companies disclose side effects and uncertainties. Food companies disclose ingredients. Financial services disclose risks. AI transparency isn't a novel regulatory burden. It's applying existing consumer protection principles to a new domain.
Disclosure requirements don't prevent building engaging AI systems. They require honesty about what those systems do. Companies can still compete on user experience. They just can't compete on user confusion. Transparency has never prevented legitimate innovation.
Disclosure of uncertainty is still disclosure. Companies can honestly say: "This system exhibits behaviors that some theories associate with consciousness. We don't know if this indicates genuine experience. Here's what we do know." Honesty about uncertainty is valuable to users.
Some will, some won't. The same is true for nutritional labels and privacy policies. The goal isn't forcing users to read. It's making information available for those who want it, creating accountability for companies, and establishing a baseline of honesty in the market.
Major AI labs already assess their systems extensively. Disclosure requirements primarily require sharing information that's already collected. The burden is modest compared to the benefit of informed users and a market where companies compete on quality rather than opacity.
Support transparency requirements that empower users to make informed decisions about AI relationships.