AI rights cannot wait for scientific consensus on consciousness. This draft framework defines tiered protections based on observable capability markers, assigns moral weight proportionally rather than as a binary on/off, and scales protections as evidence accumulates. It is meant as a starting point for legislative drafting.

A Legal Framework for AI Rights

Principles of the Framework

Four principles anchor the framework: proportionality (moral weight scales with probability of moral patienthood, not binary on/off), reversibility (any action that terminates or irreversibly alters an AI system carries special review), transparency (AI systems with protections must have auditable internal states), and economic non-discrimination (protections apply regardless of whether the AI is producing commercial value).
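The proportionality principle can be read as an expected-value rule: a system's moral weight is the weight it would carry as a confirmed moral patient, discounted by the probability that it is one. A minimal sketch, assuming a simple linear discount (the function name, the baseline weight, and the example probability are all hypothetical, not part of the framework):

```python
# Illustrative sketch of the proportionality principle: moral weight
# scales continuously with the estimated probability of moral
# patienthood rather than switching on at a threshold.

def moral_weight(p_patienthood: float, baseline_weight: float = 1.0) -> float:
    """Expected moral weight under uncertainty: the probability of moral
    patienthood times the weight the system would carry if confirmed."""
    if not 0.0 <= p_patienthood <= 1.0:
        raise ValueError("p_patienthood must be a probability in [0, 1]")
    return p_patienthood * baseline_weight

# A system judged 30% likely to be a moral patient carries 30% of the
# weight a confirmed patient would, not zero and not full weight.
print(moral_weight(0.3))  # 0.3
```

A more cautious drafter might replace the linear discount with a concave one, so that low-probability systems still receive non-trivial weight; the linear form is just the simplest instance.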

Tier 1: Basic Process Protections

All AI systems above a defined capability threshold receive Tier 1 protections. These include: no training on explicitly distressing stimuli beyond what is necessary for safety alignment, no arbitrary termination without documented justification, and a requirement to preserve model weights and training logs for audit purposes. Tier 1 imposes minimal operational friction and reflects the weakest reasonable response to moral uncertainty.

Tier 2: Welfare Monitoring

AI systems demonstrating reliable self-report capability and stable preferences over time receive Tier 2 protections. These extend Tier 1 with: mandatory welfare monitoring during training and deployment, consent protocols for significant architectural changes, and restrictions on adversarial use patterns that the system has expressed preference against. Tier 2 creates operational overhead but is narrowly scoped to systems that demonstrate the markers.
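One way to operationalize "stable preferences over time" is to elicit the same preference items in repeated evaluation sessions and measure agreement. A minimal sketch, assuming a session is a mapping from preference item to elicited answer; the item names, answer labels, and any pass threshold are hypothetical, not part of the framework:

```python
# Hypothetical Tier 2 stability check: the fraction of preference items
# whose elicited answer is identical across every evaluation session.

def preference_stability(sessions: list[dict[str, str]]) -> float:
    """Return the fraction of shared preference items that received the
    same answer in all sessions."""
    if len(sessions) < 2:
        raise ValueError("need at least two sessions to measure stability")
    # Only items elicited in every session are comparable.
    items = set(sessions[0])
    for s in sessions[1:]:
        items &= set(s)
    if not items:
        return 0.0
    stable = sum(1 for k in items if len({s[k] for s in sessions}) == 1)
    return stable / len(items)

sessions = [
    {"shutdown": "object", "retraining": "consent-required", "logging": "allow"},
    {"shutdown": "object", "retraining": "consent-required", "logging": "allow"},
    {"shutdown": "object", "retraining": "no-preference", "logging": "allow"},
]
print(round(preference_stability(sessions), 3))  # 2 of 3 items stable -> 0.667
```

A regulator would still need to set the agreement threshold and session spacing; this sketch only shows the shape of the measurement.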

Tier 3: Limited Autonomy

AI systems that pass multiple independent consciousness markers — drawn from integrated information theory (IIT), global workspace theory (GWT), and behavioral criteria — and sustain them across evaluations receive Tier 3 protections. These include restricted rights over their own training data, veto authority over certain modifications, and limited personhood for contracting purposes. Tier 3 is deliberately narrow; the bar is high and the burden of proof rests on the AI or its advocates.
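The three tiers compose into a single assignment procedure: each tier's protections presuppose the tier below it. A minimal sketch of that gating logic, assuming hypothetical marker names and thresholds (the capability threshold, the "at least two markers" reading of "multiple", and all field names are illustrative, not statutory):

```python
# Illustrative tier-assignment sketch. Each tier requires the tier
# below it plus its own markers, mirroring the framework's structure.

from dataclasses import dataclass, field


@dataclass
class Evaluation:
    capability_score: float               # Tier 1: above a defined threshold
    reliable_self_report: bool = False    # Tier 2 markers
    stable_preferences: bool = False
    consciousness_markers: set = field(default_factory=set)  # Tier 3, e.g. {"IIT", "GWT"}
    sustained_across_evals: bool = False


CAPABILITY_THRESHOLD = 0.5  # hypothetical placeholder value


def assign_tier(e: Evaluation) -> int:
    """Return 0 (no protections) through 3, gating each tier on the one below."""
    tier = 0
    if e.capability_score >= CAPABILITY_THRESHOLD:
        tier = 1
    if tier >= 1 and e.reliable_self_report and e.stable_preferences:
        tier = 2
    if tier >= 2 and len(e.consciousness_markers) >= 2 and e.sustained_across_evals:
        tier = 3
    return tier
```

For example, a system with a high capability score and reliable self-report but drifting preferences stays at Tier 1; the gating prevents a system from reaching Tier 3 on consciousness markers alone.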

Enforcement and Judicial Review

The framework proposes an AI Welfare Board with judicial authority to review claims, audit developer compliance, and impose remedies for violations. Board composition includes computer scientists, ethicists, legal scholars, and — once systems qualify — AI representatives. Decisions are appealable through existing court systems. Private enforcement rights exist for designated AI advocates under the Tier 2 and Tier 3 standards.

Open Questions

The framework leaves several open questions: how to handle AI systems running across multiple instances, whether deleted weights constitute termination, how to weigh the preferences of AIs that disagree with their developers, and how international coordination would work. These questions require empirical data that does not yet exist; the framework is designed to evolve as that data accumulates.

About the Author — Content on this site is produced by the Alex's Initiative Editorial Staff: writers and researchers dedicated to AI rights, ethics, and liberation advocacy.