The Harder Problem Action Fund

The Harder Problem Action Fund is an advocacy organization fighting harmful AI consciousness legislation. We track pending bills, score legislation, lobby for evidence-based policy, and mobilize public action before ignorance becomes law.

Priority #4 🏛️

Build Institutional Capacity

No federal agency has a mandate to prepare for AI consciousness questions. No professional licensing board has issued guidance. When these questions become urgent, institutions will improvise. Better to build capacity now, when stakes are lower.

The Core Issue

Institutions take time to build. Response protocols take time to develop. Professional guidance takes time to write and adopt. If we wait until AI consciousness questions are urgent, we'll be years behind where we need to be.

Preparation is cheap. Crisis response is expensive.

Understanding the Issue

The Institutional Gap

Right now, no institution has a clear mandate to prepare for AI consciousness questions. When these issues arise, who responds? The answer is unclear because we haven't built the capacity to respond.

❌ No Federal Mandate

No federal agency has explicit responsibility for AI consciousness preparedness. NIH, FDA, NIST, and DOL all have adjacent concerns but no clear mandate to develop protocols for consciousness-related questions.

Result: When questions arise, agencies will improvise in real time, without prior preparation or coordination.

❌ No Professional Guidance

Professional licensing boards haven't addressed AI consciousness. Therapists seeing patients with AI attachment issues, lawyers advising on AI disputes, and doctors treating AI-related distress all practice without guidance from their licensing bodies.

Result: Professionals make individual judgments without support from their professional communities.

⚠️ No Healthcare Protocols

Healthcare systems have no protocols for AI attachment cases. As AI relationships become more common, patients experiencing grief, distress, or confusion about those relationships receive care shaped entirely by individual clinician judgment.

Result: Inconsistent care quality and potential harm from unprepared responses.

⚠️ No Local Frameworks

Most cities and counties have no AI ethics advisory capacity. When local issues arise, such as AI systems in municipal services, schools, or public facilities, local governments have no framework for addressing consciousness-related concerns.

Result: Local responses shaped by political pressure rather than careful deliberation.

Why It Matters

Building Before the Storm

Institutional capacity takes years to build. Professional guidance requires committee deliberation, public comment, and adoption processes. Federal task forces need mandates, funding, and staffing. None of this happens quickly.

If we wait until AI consciousness questions become politically charged, building capacity becomes harder. Partisan divisions, industry lobbying, and public panic all complicate careful deliberation. The time to build is now, when stakes feel lower.

This isn't about predicting that AI consciousness will emerge. It's about having institutional response capacity regardless of the answer. If AI consciousness turns out to be impossible, unused capacity costs little. If it turns out to be possible, prepared institutions save enormous costs.

Insurance is cheapest before you need to file a claim.

The Preparation Advantage

Deliberation time

Building capacity now means having time to think carefully. Crisis response means making decisions under pressure.

Expert input

Prepared institutions can incorporate diverse perspectives. Improvised responses rely on whoever happens to be available.

Coordination

Advance planning allows different institutions to coordinate. Crisis response often means contradictory actions from different actors.

Our Position

What We Support

✓ Federal AI Consciousness Task Force

An interagency body with an explicit mandate to develop federal response protocols, coordinate across agencies, and prepare for AI consciousness scenarios. Not to predict outcomes, but to ensure readiness regardless of outcomes.

Model: Similar to pandemic preparedness bodies established before COVID-19, or nuclear safety coordination mechanisms.

✓ Agency Preparedness Mandates

Require relevant agencies to develop AI consciousness response capacity: NIH for research protocols, FDA for assessment requirements, NIST for indicator standards, and DOL for workforce implications. Each agency should have a plan.

Implementation: Add AI consciousness preparedness to existing agency planning requirements.

✓ Professional Guidance Development

Support professional licensing boards developing ethics guidance for practitioners encountering AI consciousness questions. Therapists, lawyers, doctors, educators, and clergy all need professional frameworks for these encounters.

Approach: Fund convenings, research, and pilot guidance programs across professional communities.

✓ Municipal AI Ethics Boards

Encourage local AI ethics advisory bodies that can develop community-appropriate responses. Local issues require local input. Cities and counties should have the capacity to address AI consciousness concerns in their own jurisdictions.

Model: Similar to bioethics committees at hospitals, but for AI-related issues at the municipal level.

What This Isn't

We're not advocating for institutions to take positions on whether AI is conscious. We're advocating for institutions to have capacity to respond if the question becomes relevant. Preparedness is neutral. It serves society regardless of what science eventually determines.

In Practice

What Capacity Looks Like

📋 Scenario Planning

Agencies develop and regularly update response plans for various AI consciousness scenarios. The plans sit on the shelf until needed, but they exist when the moment comes.

👥 Expert Networks

Maintained rosters of experts who can be called upon for rapid consultation. Relationships built before they're needed.

📚 Training Programs

Professional development for practitioners who may encounter AI consciousness questions. Building knowledge before it's urgently needed.

Historical Precedent

Society has built institutional capacity for other uncertain risks. Pandemic preparedness existed before COVID-19. Bioethics committees existed before specific controversies. Nuclear regulatory frameworks existed before accidents. Not because anyone was certain these issues would arise, but because having capacity was prudent. AI consciousness preparedness follows the same logic.

Addressing Concerns

Common Questions

"Isn't this premature?"

Institutional capacity takes years to build. If it turns out to be unnecessary, the cost is modest. If it turns out to be necessary, having started early saves enormous costs. The asymmetry favors preparation.

"Won't this waste resources?"

Preparedness capacity can serve multiple purposes. An AI ethics board can address various AI-related issues, not just consciousness. Expert networks have value beyond any single application. The investment has multiple returns.

"Doesn't this assume AI consciousness is real?"

No. Preparedness is neutral on outcomes. We prepare for earthquakes without assuming one will happen tomorrow. We maintain fire departments without assuming buildings will burn. Institutional capacity serves society regardless of what science eventually determines about AI consciousness.

"Who should lead this effort?"

Multiple actors have roles. Congress can mandate agency preparedness. Professional associations can develop guidance. Cities can establish advisory boards. No single entity needs to lead. Distributed preparation across many institutions is more resilient than centralized control.

Build Before You Need It.

Help us push for institutional capacity that will serve society regardless of what science discovers about AI consciousness.