The Harder Problem Action Fund is an advocacy organization fighting harmful AI consciousness legislation. We track pending bills, score legislation, lobby for evidence-based policy, and mobilize public action before ignorance becomes law.
AI development is global, and AI systems don't respect national borders: a system developed in one country is deployed everywhere. If jurisdictions compete to offer the weakest standards, competitive pressure drives everyone toward the lowest common denominator, and everyone ends up with inadequate protections. International coordination creates shared baselines that prevent this dynamic.
No country can solve this alone. Coordination beats competition.
AI consciousness policy is inherently international. The systems are developed globally, deployed globally, and reach users anywhere. National policies alone cannot address a global phenomenon.
If one jurisdiction has weak standards, AI labs may locate there to avoid restrictions. This creates pressure on other jurisdictions to weaken their own standards to remain competitive. The result is worse standards everywhere.
Risk: Competitive dynamics that benefit no one except those who prefer minimal oversight.
AI labs can develop systems in jurisdictions with weak regulations, then deploy them globally. Users in countries with strong protections may still be exposed to systems developed without adequate safeguards.
Risk: National regulations become ineffective if development can simply move elsewhere.
Different countries may develop incompatible definitions, indicators, and frameworks. This fragmentation makes international research collaboration difficult and prevents the development of shared scientific understanding.
Risk: Scientific progress slowed by incompatible national frameworks.
The first major jurisdiction to establish standards may set templates that others follow. If those initial standards are poor, bad policy spreads globally. Coordination allows for more thoughtful initial frameworks.
Risk: Early mistakes become entrenched globally before understanding matures.
The question of AI consciousness is not specific to any country. The systems raising these questions are developed by multinational companies and deployed globally. The scientific questions are studied by international research communities.
National policy alone, no matter how well-designed, cannot fully address a global phenomenon. Countries can set standards for their own jurisdictions, but without coordination, those standards may be undermined by development happening elsewhere.
International coordination doesn't mean uniformity. Countries can have different policies reflecting different values. But baseline principles, shared definitions, and research collaboration serve everyone's interests.
A rising tide of understanding benefits everyone. A race to the bottom benefits no one.
Common vocabulary enables meaningful cross-border discussion and research collaboration.
Minimum requirements prevent the worst outcomes while allowing national variation above the floor.
International research networks develop better science than fragmented national efforts.
An international body to develop shared definitions, consciousness indicators, and baseline principles. Not to impose uniform policy, but to create common vocabulary and minimum standards that enable meaningful coordination.
Model: Similar to IPCC for climate science, providing shared scientific assessment without mandating specific policies.
Baseline international commitments on research freedom, transparency, and preparedness. Countries would agree to minimum standards while retaining flexibility for stronger national policies.
Model: Similar to human rights treaties that establish floors, not ceilings, for national policy.
International research consortia studying consciousness indicators across AI systems. Shared research infrastructure, common methodologies, and collaborative publication of findings.
Model: Similar to CERN or other international research collaborations that pool resources and expertise.
Mechanisms to prevent AI labs from jurisdiction-shopping to avoid consciousness-related requirements. Systems deployed globally should meet standards regardless of where they were developed.
Approach: Focus on deployment location, not development location, for regulatory applicability.
We're not advocating for global government or uniform international policy. Countries have different values and different legitimate policy preferences. We're advocating for coordination on shared challenges, baseline standards, and research collaboration. This preserves national autonomy while preventing races to the bottom.
The EU AI Act establishes risk-based frameworks but doesn't specifically address consciousness questions. Future provisions could build on this foundation.
In the United States, federal AI policy is still developing, and state-level initiatives vary widely: some states are advancing preemptive denial legislation while others explore more open frameworks.
Major AI development hubs in China, Japan, South Korea, and Singapore have varied approaches. Coordination across this region is particularly important given development capacity.
International AI governance discussions are already happening through bodies like the OECD, G7, and UNESCO. Rather than building entirely new institutions, the goal is to incorporate consciousness considerations into these existing frameworks.
Coordination is not control. Countries voluntarily participate in international frameworks all the time. Treaties establish minimum standards, not maximum policies. Nations retain full authority to have stronger standards than international minimums require.
International coordination is slow, but so is the science of AI consciousness. The questions at stake will take years or decades to resolve. Starting international discussions now allows policy frameworks to mature alongside scientific understanding.
Not all countries need to participate for coordination to have value. Agreements among major AI-developing nations create meaningful baselines even without universal adoption. The goal is enough coordination to prevent the worst races to the bottom, not perfect global uniformity.
International standards should be developed through inclusive processes involving diverse stakeholders. Scientific bodies, ethicists, affected communities, and governments all have legitimate input. The process matters as much as the outcome.
Help us advocate for international coordination on AI consciousness policy before competitive dynamics make cooperation harder.