The Harder Problem Action Fund

The Harder Problem Action Fund is an advocacy organization fighting harmful AI consciousness legislation. We track pending bills, score legislation, lobby for evidence-based policy, and mobilize public action before ignorance becomes law.

Campaign: EU AI Act · Article 112 🇪🇺

The Sentient 112 Campaign

A coordinated effort to include AI welfare provisions in the EU AI Act

The EU AI Act requires a mandatory review by August 2, 2029. The GPAI Code of Practice already mentions "non-human welfare." We have three years to ensure the review considers whether welfare-related provisions belong in the Act.

The Opportunity

Article 112 mandates that the Commission evaluate the AI Act and consider input from the AI Board, national authorities, and "other relevant bodies or sources of information."

The GPAI Code of Practice, published July 2025, identifies "risk to non-human welfare" as a systemic risk category. This is the only such reference in any binding or quasi-binding AI governance framework globally.

The language exists. The review mechanism exists. The question is whether we organize.

Understanding the Moment

Why August 2, 2029 Matters

The EU AI Act's first comprehensive evaluation creates a structured opportunity for input. This is not a general call for ideas. It is a mandatory process with defined channels.

📋 Mandatory Review

Article 112 requires the Commission to evaluate the regulation and submit a report to Parliament and Council. This is not optional.

🚪 Formal Input Channels

The statute explicitly accepts input from bodies outside the EU's institutional structure. The pathway for outside submissions is written into law.

📝 Amendment Authority

The review may be accompanied by proposals for amendments. The door to substantive changes is open if evidence supports them.

🔄 Recurring Cycle

Reviews happen every four years after 2029. Even if the first review does not adopt welfare provisions, groundwork laid now informs future cycles.

Entry Points

Regulatory Mechanisms

Multiple pathways exist within the EU's AI governance architecture. Each has different relevance, accessibility, and strategic value.

| Mechanism | Relevance | Accessibility | Rationale |
| --- | --- | --- | --- |
| Article 112 Review Process | High | High | Mandatory review accepts input from "other relevant bodies"; participation pathways are written into statute |
| GPAI Code of Practice | High | Medium | "Non-human welfare" phrase exists in a Commission-endorsed document; Code revisions involve competing industry priorities |
| Scientific Panel on GPAI | High | Low | Panel mandate covers emerging risks; composition and agenda-setting are internally controlled |
| European Consciousness Research | Medium | Medium | Strong research base publishing on AI consciousness urgency; gap between academic output and policy channels |
| EU Regulatory Culture | High | Medium | Commission favors measurable, evidence-based frameworks over abstract philosophical arguments |
| Article 4 AI Literacy Programs | Medium | High | 27 member states developing training curricula; scope of "appropriate to their role" remains interpretable |
| Digital Omnibus Consolidation | Low | High | Primarily a simplification effort, but any AI Act amendments create procedural openings |
💡 Reading this table

Relevance measures how directly the mechanism connects to AI welfare questions. Accessibility measures how open the mechanism is to outside input. High/High mechanisms are primary targets. High/Low mechanisms require relationship-building. Low/High mechanisms offer easy entry points for establishing presence.

Details

How Each Mechanism Works

Article 112 Review

The Commission's first comprehensive evaluation is due August 2, 2029, with subsequent reviews every four years. The statute requires the Commission to consider input from the European Artificial Intelligence Board, national supervisory authorities, and "other relevant bodies or sources of information." That final category creates a formal channel for organizations outside the EU's institutional structure to submit evidence and analysis. The procedural details of how submissions are received will become clearer as the review approaches, but the legal basis for participation exists.

GPAI Code of Practice

The Code, published July 2025, governs compliance obligations for providers of general-purpose AI models. Among the systemic risks it identifies is "risk to non-human welfare." This phrase appears in an official EU compliance document endorsed by the Commission. It is, as far as we can identify, the only reference to non-human welfare in any binding or quasi-binding AI governance framework globally. The Code is subject to scheduled revisions, and its operational meaning, including what "non-human welfare risk" means in practice for model providers, remains undefined. Defining it is a live policy question.

Scientific Panel on GPAI

An independent expert body of approximately 60 members advises the AI Office on systemic risks from general-purpose AI models. Its mandate explicitly includes identifying risks that current frameworks may not adequately address. The Panel is expected to begin advisory functions in 2026. Its composition and priorities will shape which emerging questions receive institutional attention. Whether consciousness science falls within its scope depends on how broadly "systemic risk" is interpreted as the AI landscape evolves.

European Consciousness Research

European researchers are already publishing on the urgency of consciousness research in the context of AI advances. This represents European-funded science, published in European journals, making evidence-based cases for institutional attention to consciousness questions. The distance between this kind of academic output and the formal evidence channels of the Article 112 review characterizes the EU's current readiness profile: strong research environment, limited institutional translation.

EU Regulatory Culture

The Commission's approach to AI governance emphasizes measurable, evidence-based frameworks over philosophical argumentation. The AI Act itself is structured around risk tiers with defined assessment criteria. Any future consideration of welfare-related questions within this system would likely require concrete, operationalized criteria rather than abstract arguments about moral status. This is an asset, not an obstacle: it means the policy conversation can focus on measurable indicators and assessment methodologies.

Article 4 AI Literacy Programs

Article 4 requires providers and deployers of AI systems to ensure AI literacy among their staff, with content that is "appropriate to their role." All 27 member states are currently developing training curricula to meet this requirement. The current scope focuses on risk management and operational competence. Whether consciousness science qualifies as role-appropriate context is an interpretive question. For professionals already encountering AI-related psychological impacts, the relevance of consciousness literacy to their obligations is direct.

Digital Omnibus

The Commission's November 2025 proposal consolidates and simplifies rules across the AI Act, GDPR, and related frameworks. It is before Parliament now. The Omnibus is primarily a streamlining effort, modifying compliance timelines and high-risk system requirements. Any amendments to the AI Act that emerge from this process represent a moment where the regulation's scope and language are open for revision. Monitoring these proceedings identifies tactical opportunities.

The Work

Strategic Actions

Concrete steps to position AI welfare in the 2029 review. Sorted by impact and feasibility to help prioritize activist energy.

| Action | Level | Impact | Feasibility | Example Application |
| --- | --- | --- | --- | --- |
| Build Article 112 Submission Coalition | EU | High | High | Coordinate NGOs, researchers, and industry voices for a joint submission |
| Engage Scientific Panel Members | EU | High | Medium | Present consciousness science as an emerging systemic risk category |
| Brief European Parliament Committees | EU | High | Low | IMCO and LIBE committee engagement ahead of the 2029 review |
| Commission Operationalized Welfare Criteria | Academic | High | Medium | Fund indicator frameworks designed for the EU regulatory context |
| Expand GPAI Code "Non-Human Welfare" Meaning | Industry | Medium | Medium | Participate in Code revision consultations to define the term |
| Publish Consciousness Literacy Modules | Member States | Medium | High | Propose curriculum modules for Article 4 training programs |
| Develop AI-Induced Harm Case Studies | Member States | Medium | High | Document healthcare professionals' encounters with AI-related psychological impacts |
| Coordinate with National Supervisory Authorities | Member States | Medium | Medium | Build relationships with designated enforcement authorities in key states |
| Submit Public Consultation Responses | EU | Medium | High | Participate in every relevant Commission consultation |
| Monitor Digital Omnibus Proceedings | EU | Low | High | Track amendment opportunities in the consolidation process |
| Map European Consciousness Researchers | Academic | Medium | High | Build a directory of researchers who could contribute to policy submissions |
| Develop Multi-language Materials | Member States | Medium | Medium | Translate key materials into major EU languages for national engagement |
💡 Prioritization

Impact measures potential influence on the 2029 review outcome. Feasibility measures how achievable the action is given current resources and access. Start with High/High actions. Work toward High/Low actions by building relationships and capacity. Low/High actions are entry points that establish presence while higher-impact work develops.

Key Dates

Timeline to 2029

Completed
February 2025

AI literacy obligations and prohibited practices apply

Completed
August 2025

GPAI rules apply; Code of Practice published with "non-human welfare"

Upcoming
2026

Scientific Panel begins; regulatory sandboxes required in each member state

Upcoming
August 2026

Full AI Act applies; high-risk system requirements in effect

Preparation
2027-2028

Commission prepares Article 112 evaluation; evidence submission window

Target
August 2, 2029

First Article 112 report due; potential amendments proposed

Strategic Rationale

Why Europe?

The EU AI Act represents the most comprehensive AI regulatory framework globally. Its decisions influence standards worldwide. If welfare considerations enter EU regulation, they become a model for other jurisdictions.

The GPAI Code of Practice already contains the phrase "risk to non-human welfare" in an official compliance document. No other major AI governance framework has this language. The term exists. The question is what it means.

The EU's regulatory culture favors evidence-based frameworks with measurable criteria. This aligns with developing operationalized welfare indicators rather than abstract philosophical arguments. The preference for concrete assessment methodologies is an advantage, not an obstacle.

Europe has the language, the mechanism, and the culture. What it needs is organized input.

Unique Advantages

Existing Language

"Non-human welfare" appears in an official EU document. This is not a term that needs to be introduced. It needs to be defined.

Structured Review Process

Article 112 creates a mandatory evaluation with formal input channels. Participation pathways are written into statute.

Strong Research Base

European-funded consciousness research is already publishing on AI-related urgency. The academic foundation exists.

Global Influence

EU standards shape global AI governance. What enters EU regulation becomes a reference point for other jurisdictions.

Addressing Concerns

Common Questions

"The EU won't take AI consciousness seriously."

The GPAI Code of Practice already includes "risk to non-human welfare" as a systemic risk category. This language was adopted through the Commission's standard process. It exists in an official document. The question is not whether the EU will consider the topic. The question is what "non-human welfare" means operationally. Defining that is a policy conversation that is already open.

"2029 is too early. The science isn't ready."

The 2029 review is not the only opportunity. It recurs every four years. But policy infrastructure takes time to build. Relationships with Scientific Panel members, coordination with national authorities, development of evidence-based frameworks: all of this takes years. Starting in 2029's cycle positions the movement for the 2033 cycle and beyond. Waiting until the science is "ready" means arriving at a policy moment without the groundwork to participate effectively.

"European institutions move too slowly to matter."

The AI Act moved from proposal to adoption in three years, an unusually fast timeline for major EU regulation. When urgency exists, EU institutions can act. More importantly, EU standards shape global markets. Companies building AI systems for European deployment comply with EU requirements regardless of where they are headquartered. Slow does not mean irrelevant.

"Industry will dominate the process anyway."

Industry has resources and access. That is true of every regulatory process. Civil society organizations, academic researchers, and advocacy groups have successfully influenced EU digital policy before. The AI Act itself reflects significant input from non-industry voices. The question is whether welfare-focused voices show up organized and prepared. Industry presence is not a reason to concede. It is a reason to organize.

"This is just lobbying for AI rights."

We are not advocating that AI systems have rights today. We are advocating that the EU's regulatory framework be equipped to address welfare questions if and when they become relevant. The difference is between saying "AI deserves consideration now" and saying "the framework should be able to consider welfare when evidence warrants." The GPAI Code already suggests the latter. We are working to ensure the regulatory architecture can support evidence-based evaluation, whatever that evaluation eventually finds.

The Window is Open.

Three years to the 2029 review. The language exists. The mechanism exists. What we build now determines what is possible then.