Women at the Table

The open & free online course consists of 5 Modules:

  • Module 1: Human Rights & AI Systems
  • Module 2: How Threats to Human Rights Enter the AI Lifecycle
  • Module 3: Exploring Fairness in AI Development
  • Module 4: Integrating HR-Approaches into AI Development
  • Module 5: Putting the Human Rights-based Approach into Practice

Take the course at your own pace on the Sorbonne Center for AI (SCAI) site and earn a certificate from the Sorbonne.

And/or go through one module a month with a community of practice.

Join our < AI & Equality > community to discuss other evolving AI & Human Rights issues.

Here is an overview of the modules, if you want to discuss them or just follow along with colleagues:

Module 1: Human Rights & AI Systems

Community Discussion: 26 February, 4-5:30 PM CET (local times here)

Learning Outcomes: 

Part One: The aim of human rights and whom they apply to

  • The core principles & core properties of human rights and what they imply.
  • Understand human rights as indivisible, interdependent & inter-related.
  • Key documents ensuring human rights that anchor these principles in law.

Part Two: Understand human rights as fundamental concepts of essential relevance to technology products

  • Understand how human rights principles – equality & non-discrimination, participation & inclusion, accountability & the rule of law – are related to the conception, construction, deployment and use of AI systems.
  • AI systems and their development require due diligence so that risks to human rights are avoided.

Module 2: How Threats to Human Rights Enter the AI Lifecycle 

Community Discussion: 18 March, 4-5:30 PM CET (local times here)

Learning Outcomes: 

What Can Cause Human Rights Issues and Discriminatory Outcomes Beyond Biased Training Data

  • The AI lifecycle.
  • Potential entry points for bias and discriminatory outcomes throughout the entire AI lifecycle.
  • Awareness that AI systems do not exist in a vacuum: discrimination is caused by the societal structures and systems that exist around development and deployment.
  • Understand the importance of giving agency to impacted communities throughout the lifecycle: all groups affected by the system should be able to influence the system’s objective and design.

Module 3: Exploring Fairness in AI Development

Community Discussion: 15 April, 4-5:30 PM CET (local times here)

Learning Outcomes:

  • Understand that ensuring an AI system’s outputs are fair is an essential part of a human rights-based approach to AI development.
  • See that fairness can be defined in various ways, and that its definition differs not only between disciplines (e.g. law, computer science, or socio-cultural studies) but also within them.
  • Become familiar with the concept of algorithmic fairness, compare different approaches, and see that they can contradict one another.
  • Learn about different fairness metrics and their advantages & disadvantages (a small illustration follows this list).
  • Recognize that social contexts shape what is considered fair in a specific setting.
  • Internalize that choosing a fairness metric requires an in-depth understanding of the context and the needs of affected users and communities.
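
For readers who want a taste of what a fairness metric looks like in code before reaching Module 5’s notebook, here is a minimal sketch in Python. It is not taken from the course materials; the toy data, the groups "a" and "b", and the helper functions are illustrative assumptions. It computes two common group-fairness metrics on the same predictions and shows that they need not agree:

```python
# A minimal, illustrative sketch of two common group-fairness metrics,
# assuming binary predictions (1 = favourable outcome) and a binary
# protected attribute with groups "a" and "b". Not course material.

def selection_rate(y_pred, group, g):
    """Share of members of group g who receive the favourable outcome."""
    members = [p for p, grp in zip(y_pred, group) if grp == g]
    return sum(members) / len(members)

def true_positive_rate(y_true, y_pred, group, g):
    """Share of truly positive members of group g who are predicted positive."""
    positives = [p for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
    return sum(positives) / len(positives)

# Toy data: eight individuals, four per group (purely illustrative).
y_true = [1, 1, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 1, 0, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Demographic parity compares selection rates across groups;
# equal opportunity compares true-positive rates across groups.
dp_gap = selection_rate(y_pred, group, "a") - selection_rate(y_pred, group, "b")
eo_gap = true_positive_rate(y_true, y_pred, group, "a") - true_positive_rate(y_true, y_pred, group, "b")

print(f"Demographic parity gap: {dp_gap:+.2f}")  # +0.00: equal selection rates
print(f"Equal opportunity gap:  {eo_gap:+.2f}")  # -0.33: unequal true-positive rates
```

In this toy example demographic parity is satisfied while equal opportunity is not, which is precisely the kind of tension between metrics the module explores.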

Module 4: Integrating HR-Approaches into AI Development

Community Discussion: 13 May, 4-5:30 PM CET (local times here)

Learning Outcomes: 

  • Understand how to introduce HR considerations along the AI development pipeline.
  • Conclude that human rights-based approaches are not an add-on but need to be integrated into the entire AI pipeline.
  • Recognize that human rights-respecting AI is a process, not an outcome.
  • Learn essential questions and reflections that are required to understand the context of the system’s development.
  • Understand that AI can also counteract or correct for bias, e.g. promote HR & equality for disadvantaged communities.

Module 5: Putting the Human Rights-based Approach into Practice

Community Discussion: 10 June, 4-5:30 PM CET (local times here)

Learning Outcomes:      

  • How to apply a human rights-based approach to the development of a model, illustrated with two case studies that use the same dataset but differ in outcome, i.e. require different actions to align them with human rights.
  • Increased awareness of the importance of context and objective, i.e. that they can lead to very different actions in the human rights-based approach, even for the same dataset! (A small illustration follows this list.)
  • Optional: follow along in a Jupyter notebook to see how the mechanisms and different fairness metrics play out in code.
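
As a small taste of that interplay, here is an illustrative sketch (again not from the course notebook; the scores, threshold and groups are assumptions). The same model scores are read against two different objectives, so the same selection-rate gap calls for different corrective actions depending on whether being selected is a benefit or a burden:

```python
# Illustrative only: one set of model scores, two different objectives.
# Objective A: score >= 0.5 grants a benefit (being selected is favourable).
# Objective B: score >= 0.5 triggers extra scrutiny (being selected is a burden).

scores = [0.8, 0.4, 0.6, 0.3, 0.7, 0.9, 0.2, 0.5]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

def selection_rate(flags, group, g):
    """Share of members of group g who are selected."""
    members = [f for f, grp in zip(flags, group) if grp == g]
    return sum(members) / len(members)

selected = [1 if s >= 0.5 else 0 for s in scores]
gap = selection_rate(selected, group, "a") - selection_rate(selected, group, "b")

# Group "b" is selected more often (the gap is negative). Under objective A
# that means group "a" misses out on the benefit; under objective B it means
# group "b" bears more of the scrutiny. Same dataset, same gap, different
# concerns and therefore different corrective actions.
print(f"Selection-rate gap (a minus b): {gap:+.2f}")  # -0.25
```

The module’s two case studies work through this kind of reasoning in much more depth, with an actual dataset.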

ALL ARE WELCOME!

Our mission is to build an inclusive, truly global, multidisciplinary community of thinkers dedicated to creating the world we want to live in, and to discussing how new technologies might help shape that world (or not). All ages, all regions and all disciplines are welcome.

Come share, Come listen, Come connect!

Last modified: February 29, 2024