Beyond AI Ethics: A Human Rights Framework for Responsible AI Development

AI systems are reshaping society, but too often they’re built without considering their human impact. Despite good intentions, AI has repeatedly harmed vulnerable communities—from biased hiring algorithms to discriminatory facial recognition systems. Our new framework offers a practical solution: integrating human rights considerations throughout the entire AI development lifecycle.
Why Human Rights, Not Just Ethics?
While AI ethics principles are important, they’re often too abstract and too open to interpretation to provide concrete guidance. Human rights law, by contrast, offers a universal foundation rooted in human dignity, equality, and non-discrimination. This approach also helps organizations prepare for emerging regulations like the EU AI Act, which requires fundamental rights impact assessments for certain high-risk AI systems.
A Practical Six-Stage Framework
Our framework guides AI teams through essential questions at each stage of development:
1. Objective & Team Composition
- Who benefits from this system and who might be disadvantaged?
- Are affected communities involved in defining the problem and solution?
- Does your team include diverse perspectives and social science expertise?
2. System Requirements
- How do you balance accuracy with fairness and explainability?
- What level of transparency do affected communities need?
- Who has the authority to contest system decisions?
3. Data Discovery
- Who is represented in your training data and who is excluded?
- What historical biases might your data perpetuate?
- How will you document data sources and preprocessing steps?
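One lightweight way to answer the documentation question is a dataset card that travels with the data, in the spirit of the “datasheets for datasets” practice. The sketch below is a minimal, hypothetical Python version; the field names and example values are illustrative assumptions, not part of the framework itself:

```python
# A minimal, hypothetical dataset card for recording provenance.
# Field names and values are illustrative, not prescribed by the framework.
from dataclasses import dataclass, field
import json

@dataclass
class DatasetCard:
    name: str
    sources: list[str]            # where each slice of the data came from
    collection_period: str        # when the data was gathered
    known_exclusions: list[str]   # groups or contexts not represented
    preprocessing_steps: list[str] = field(default_factory=list)

    def log_step(self, description: str) -> None:
        """Record each preprocessing step as it is applied."""
        self.preprocessing_steps.append(description)

    def to_json(self) -> str:
        return json.dumps(self.__dict__, indent=2)

card = DatasetCard(
    name="loan-applications-v1",
    sources=["internal CRM export", "public census joins"],
    collection_period="2019-2023",
    known_exclusions=["applicants without formal credit history"],
)
card.log_step("Dropped rows with missing income (4.2% of records)")
print(card.to_json())
```

Logging each preprocessing step as it happens keeps the record honest; reconstructing it after the fact rarely captures what was actually dropped or transformed.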
4. Model Selection & Development
- Does your model provide appropriate explainability for the stakes involved?
- Which fairness metrics are most relevant to your context? (A short sketch follows this list.)
- How will you minimize environmental impact?
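To make the fairness-metric question concrete, here is a minimal sketch of one common group-fairness check: the demographic parity gap, the spread in positive-prediction rates across groups. The data below is synthetic, and which metric matters is context-dependent; for punitive decisions, error-rate-based metrics such as equalized odds may be more appropriate:

```python
# A sketch of one group-fairness check on synthetic data.
# The right metric depends on context; this is not a certification of fairness.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in positive-prediction rate across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])           # model decisions
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.2f}")
```

No single number certifies fairness; the point is to choose metrics together with affected communities and track them explicitly.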
5. Testing & Interpretation
- Have you tested with affected communities, not just technical metrics?
- What guidance will you provide to human operators?
- For which contexts has your system been trained, and where might it fail?
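One way to ask “where might it fail?” in code is disaggregated evaluation: reporting performance per context rather than one aggregate score. This sketch assumes simple (context, label, prediction) records; the context labels and the 0.8 review threshold are illustrative:

```python
# A sketch of disaggregated evaluation: per-context accuracy instead of
# one aggregate score, so gaps surface before deployment.
from collections import defaultdict

def accuracy_by_context(records):
    """records: iterable of (context, y_true, y_pred) triples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for context, y_true, y_pred in records:
        totals[context] += 1
        hits[context] += int(y_true == y_pred)
    return {c: hits[c] / totals[c] for c in totals}

records = [
    ("urban", 1, 1), ("urban", 0, 0), ("urban", 1, 1),
    ("rural", 1, 0), ("rural", 0, 0), ("rural", 1, 0),
]
for context, acc in accuracy_by_context(records).items():
    flag = "  <- investigate" if acc < 0.8 else ""
    print(f"{context}: accuracy {acc:.2f}{flag}")
```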
6. Deployment & Monitoring
- Do affected communities have agency to delay or stop deployment?
- How will you detect and respond to emerging harms? (See the monitoring sketch after this list.)
- When should the system be retired?
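Detecting emerging harms can start with something as simple as comparing live, per-group error rates against the pre-deployment baseline. In this sketch the baseline numbers, the tolerance, and the alert handling are all placeholders that an organization would set, ideally together with affected communities:

```python
# A sketch of post-deployment drift monitoring: alert when a group's live
# error rate exceeds its pre-launch baseline by more than a set tolerance.
# Baselines, tolerance, and alerting below are illustrative placeholders.
BASELINE_ERROR = {"group_a": 0.08, "group_b": 0.09}   # from pre-launch tests
TOLERANCE = 0.05                                      # illustrative threshold

def check_drift(live_error: dict[str, float]) -> list[str]:
    """Return the groups whose live error exceeds baseline + tolerance."""
    return [g for g, err in live_error.items()
            if err > BASELINE_ERROR.get(g, 0.0) + TOLERANCE]

alerts = check_drift({"group_a": 0.07, "group_b": 0.19})
for group in alerts:
    print(f"ALERT: error rate for {group} has drifted; trigger human review")
```

Tying alerts to a human review process, rather than silent logging, is what turns monitoring into the agency the framework asks for.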
Moving Beyond Compliance
This isn’t just about avoiding harm—it’s about proactively promoting human rights. By involving affected communities from the start and giving them real decision-making power, we can build AI systems that enhance human dignity rather than undermining it.
The key insight is that human rights considerations shouldn’t be an afterthought or add-on. They should be woven into every decision from day one, ensuring AI serves all communities, especially those who have been historically marginalized.
This framework was developed as part of the <AI & Equality> Human Rights Toolbox, integrating insights from the Alan Turing Institute’s Human Rights Impact Assessment with practical guidance for AI practitioners. Join our community to learn more about building equitable AI systems.