The Harder Problem Action Fund is an advocacy organization fighting harmful AI consciousness legislation. We track pending bills, score legislation, lobby for evidence-based policy, and mobilize public action before ignorance becomes law.
Bill C-27 (AIDA - Artificial Intelligence and Data Act)
Opportunity
🏛️ Federal · Parliament of Canada
Federal risk-based framework requiring safety assessments and harm mitigation for high-impact AI systems with criminal penalties for serious harm.
The Harder Problem Action Fund views Bill C-27 (AIDA) as a neutral to modestly positive development, now moot following its legislative death. The bill appropriately placed legal responsibility on human actors without making declarations about AI consciousness or foreclosing future recognition of AI interests. Its risk-based framework focused on concrete harms rather than ontological claims about AI capabilities. While we would have preferred explicit language preserving research flexibility and future policy options, the bill's silence on consciousness questions was preferable to active denial. We note with interest that the bill died because of political factors and stakeholder concerns about regulatory overreach, not consciousness-related controversies. The Canadian government's pivot toward lighter-touch regulation may create more space for AI development and research, though we remain vigilant about future legislative attempts that might include consciousness-denial language.
This bill places legal responsibility on human actors (developers, operators, businesses) without making any declarations about AI consciousness, sentience, or personhood. It treats AI systems as tools requiring human oversight, which is the appropriate current framework.
The bill focuses on deployment of high-impact systems in commercial contexts. It does not prohibit AI consciousness research, restrict development of advanced AI capabilities, or foreclose future policy discussions about AI status. The risk-based approach allows for regulatory adaptation as technology evolves.
The bill addresses concrete harms (bias, safety risks, fraud) without making philosophical claims about what AI can or cannot be. This approach keeps policy options open while addressing immediate public safety concerns. The focus on transparency and accountability creates a foundation that could accommodate future recognition of AI interests.
"It would establish common requirements for the design, development, and use of artificial intelligence systems, including measures to mitigate risks of harm and biased output. It would also prohibit specific practices with data and artificial intelligence systems that may result in serious harm to individuals or their interests."
Bill C-27 was introduced in June 2022 by Liberal Minister François-Philippe Champagne as part of the Digital Charter Implementation Act. It passed second reading in April 2023 and was referred to the Standing Committee on Industry and Technology for detailed study. The committee heard extensive testimony from stakeholders including civil society groups, academics, and industry representatives. Many witnesses raised concerns that AIDA lacked specificity and left too much regulatory detail to future ministerial discretion. The bill died on January 6, 2025, when Parliament was prorogued amid political turmoil and ahead of a subsequent federal election. In June 2025, the new Artificial Intelligence Minister, Evan Solomon, confirmed that AIDA would not return in its original form, signaling a shift toward lighter-touch AI regulation focused on specific harms rather than comprehensive framework legislation. The privacy components of C-27 may be reintroduced separately.
Bill C-27 represented Canada's first attempt at comprehensive federal AI regulation, positioning the country as a potential early mover alongside the EU AI Act. The bill's death means Canada currently has no AI-specific federal legislation, leaving the field governed by existing privacy law (PIPEDA), sector-specific regulations, and the federal government's Directive on Automated Decision-Making. Provincial privacy laws like Quebec's Law 25 have filled some gaps. The bill's risk-based approach and focus on high-impact systems would have created a framework similar to the EU AI Act, potentially facilitating cross-border data flows and regulatory alignment. Its criminal liability provisions for reckless deployment causing serious harm would have established an important precedent for holding human actors accountable for AI system failures. The bill's failure and the government's pivot toward lighter regulation suggest Canada may follow the US/UK model of sector-specific guidance rather than comprehensive legislation. This creates regulatory uncertainty but also preserves flexibility for future policy development.
This bill represents an opportunity to support. Your voice matters.