Affirmative Action for Algorithms
In order to innovate and thrive in a rapidly changing global environment, new norms are needed.
The “standardized male” is the default of flawed systems and cultural standards that currently control how we live and work – defaults so normalized we don’t even notice. From 20th century drug trials, international standards and global trading rules, to 21st century algorithmic decision making and machine learning systems, this default has proven to harm people – and the bottom line.
The systemic exclusion of women from defining the old rules of the system, and their continuing exclusion from defining the new rules, calls for new strategic and innovative thinking to achieve gender equality and to strengthen democracy in the new systems we create.
In July, Women@theTable released its position paper ‘Triple A: Affirmative Action for Algorithms – Artificial Intelligence, Automated Decision-Making & Gender’, the first paper of its kind to focus specifically on gender, artificial intelligence and automated decision-making. The report speaks to all intersections of women and girls, who serve as a proxy for all groups traditionally invisible and ‘other’ to the system – those traditionally left behind.
The position paper highlights the mounting evidence that gender bias and sexism are pervasive in Automated Decision-Making (ADM). From inherent bias in hiring, to selection bias and stereotypes in the delivery of ads to women, to the entrenched implicit stereotypes and unconscious bias that get translated into explicit misogyny through feminised machines like Alexa – women continue to be excluded and left behind.
We are at a critical turning point, made particularly urgent by the scale at which Automated Decision-Making (ADM) systems are being deployed. Machine learning trains on data and extracts its deep, implicit inferences: it makes the implicit information in the data explicit in the code, ‘intelligently mirroring’ the information it has been given from the analog world. Worse, machine learning compounds the historic bias in the data, crafting it into an embedded, exacerbated digital form – turning bias that is slowly being stripped from the analog world into a new digital, and potentially permanent, reality.
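The mechanism described above can be illustrated with a minimal sketch. All data here is invented for illustration: a naive model trained on biased historical hiring records turns the implicit bias in the data into an explicit decision rule in the code – and hardens a statistical gap into an absolute one.

```python
from collections import defaultdict

# Invented historical hiring records: (gender, hired) pairs in which
# women were hired far less often than men for the same role.
history = [("M", 1)] * 80 + [("M", 0)] * 20 + [("F", 1)] * 30 + [("F", 0)] * 70

# A naive "model": learn the majority outcome per group. This makes the
# implicit bias in the training data an explicit rule in the code.
counts = defaultdict(lambda: [0, 0])  # group -> [not hired, hired]
for gender, hired in history:
    counts[gender][hired] += 1

rule = {g: int(c[1] > c[0]) for g, c in counts.items()}
print(rule)  # {'M': 1, 'F': 0} -- every man hired, every woman rejected
```

Note the amplification: a 50-percentage-point gap in the historical data (80% vs 30%) becomes a 100-point gap in the learned rule.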
The position paper advocates for Affirmative Action for Algorithms (AAA) to correct the real-life bias and barriers that prevent women from achieving their full participation and rights, in the present and in the future we invent. It provides practical recommendations to governments, the private sector and civil society organisations, centred around accountability, inclusion and cooperation, to advance the values of equality and to correct for the visibility and influence of half the world’s population – women.
A Global Alliance
Recognising the need for an alliance and coordination at the global level to address gender bias in ADM, Women@theTable is leading a global alliance for AAA. The AAA Alliance is comprised of concerned technology leaders, government agencies, nonprofits, and academics committed to addressing this issue while it’s still possible. The alliance was formally launched with a Declaration at Women in Big Data in Zürich in June 2019 and at London’s renowned Chatham House in July 2019.
AAA Alliance advisory board members
To date, the AAA Alliance advisory board includes senior leaders from Ciudadania Inteligente / The Smart Citizens Foundation, the Data-Pop Alliance, EPFL, Yale, MIT, the City of Barcelona, the World Wide Web Foundation, the World Bank Group, and others.
Awareness & Advocacy
The AAA alliance focuses on awareness raising and advocacy for algorithmic accountability in framework discussions being held multilaterally in trade (WTO), intellectual property (WIPO), standards (ISO, ITU, IEC) and human rights (Human Rights Council), among other international settings and institutions, primarily in Geneva.
Municipal Pilots
The AAA alliance is also committed to creating low-cost, targeted pilots that correct for gender bias across geographies at the municipal level, using a tri-partite and multi-disciplinary approach. This approach brings together coalitions of: academia – data scientists in conversation with social scientists; municipalities – cities in consultation with their citizens; and the private sector – technology companies, philanthropists and foundations. The AAA pilots will be designed to actively correct real-world and digital bias, not only to check for bias. AAA pilots embrace the principles of Smart Cities 3.0, which focus on the quality of the lives we want to lead and a bottom-up participatory approach. That is, citizen co-creation and design, specifically including a gender analysis.
A Critical Turning Point
We are at a critical turning point, made particularly urgent by the scale at which ADM systems and machine learning are being deployed. AAA pilots offer us an opportunity to ensure that machine learning does not embed an already biased system into all our futures. For example, AAA pilots could be used to allocate public housing, university scholarships, and certain social incentives and subsidies to women who have traditionally been left behind – to correct the real-life bias and barriers that prevent women from achieving their full participation and rights. Other pilots could focus on urban mobility and justice / crime prevention.
As Marshall McLuhan is famously quoted as saying, “We shape our tools, and thereafter our tools shape us.” This is our immediate challenge. We must establish new tools, and new norms, for lasting institutional and cultural systems change – now and for the century beyond. This concerns all corners of the world. It is crucial that we focus on gender equality and democracy for both women and men, now. Then everyone can thrive. We must leave no one behind.
We call on Governments, Private Sector and Civil Society Organizations to:
Undertake equitable algorithmic actions to correct the real-life biases and barriers that prevent women and girls from achieving full participation and equal enjoyment of rights:
1. ADVOCATE FOR AND ADOPT GUIDELINES THAT ESTABLISH ACCOUNTABILITY AND TRANSPARENCY FOR AUTOMATED DECISION-MAKING (ADM) IN BOTH THE PUBLIC AND PRIVATE SECTORS.
We must ensure machine learning does not embed an already biased system into all our futures.
- Public institutions to pilot and lead: Deploy Affirmative Action for Algorithms when public institutions pilot ADM. Base pilots on longstanding and new social science research, allocating social incentives, subsidies or scholarships where women have traditionally been left behind by prior systems. This is a positive agenda to advance the values of equality we have long embraced, and to correct for the visibility, quality and influence of women proportionate to the population.
- Public and private sector uptake of Algorithmic Impact Assessments (AIA): A self-assessment framework designed to respect the public’s right to know which AI systems impact their lives, in terms of principles of accountability and fairness.
- Rigorous testing across the lifecycle of AI systems: Testing should account for the origins and use of training data, test data, models, Application Programming Interfaces (APIs) and other components over a product life cycle. Testing should cover pre-release trials, independent auditing, certification and ongoing monitoring for bias and other harms. ADM should improve the quality of, not control, the human experience.
- Strong legal frameworks to promote accountability: These include the potential expansion of powers for sector-specific agencies, or the creation of new terms of reference to oversee, audit and monitor ADM systems, with regulatory oversight and legal liability for the private and public sectors.
- Gender-responsive procurement guidelines: Organisations and all levels of government should develop ADM gender equality procurement guidelines with hard targets, and outline the roles and responsibilities of the organisations required to apply these principles.
- Improve datasets – open gender-disaggregated data, data collection and inclusive, quality datasets: Actively produce open gender-disaggregated datasets; this better enables an understanding of the sources of bias in AI, and ultimately improves the performance of machine learning systems. Invest in controls to oversee data collection processes and human-in-the-loop verification, so that data is not collected at the expense of women and other traditionally excluded groups. Engage in more inclusive data collection processes that focus not only on the quantity but also the quality of datasets.
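As one illustration of the kind of bias testing and gender-disaggregated monitoring these recommendations call for, the sketch below computes per-group selection rates and a disparate impact ratio from an ADM system’s decisions. The audit data is invented, and the 0.8 threshold is one commonly used convention, not a rule from the position paper.

```python
def selection_rates(outcomes):
    """Selection rate per group from gender-disaggregated decisions.

    outcomes: dict mapping group label -> list of 0/1 model decisions.
    """
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def disparate_impact(outcomes, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's. Ratios below ~0.8 are a common (assumed) red flag."""
    rates = selection_rates(outcomes)
    return rates[protected] / rates[reference]

# Invented audit data: decisions from a deployed ADM system,
# disaggregated by gender as the recommendations above require.
decisions = {
    "F": [1, 0, 0, 0, 1, 0, 0, 0, 0, 0],  # 20% selected
    "M": [1, 1, 0, 1, 0, 1, 1, 0, 1, 0],  # 60% selected
}

ratio = disparate_impact(decisions, protected="F", reference="M")
print(round(ratio, 2))  # 0.33 -- well below 0.8, flag for review
```

Run over a live system’s ongoing decisions, a check like this supports the pre-release trials and continuous monitoring described above; it detects bias, while the correction itself remains a design and policy choice.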
2. TAKE CLEAR PROACTIVE STEPS TO INCLUDE AN INTERSECTIONAL VARIETY AND EQUAL NUMBERS OF WOMEN AND GIRLS IN THE CREATION, DESIGN AND CODING OF ADM.
New technologies offer new opportunities, including the creation of genuinely new structures that require new ideas and new teams. Yet gender roles being removed from the real world are being baked into new ADM, along with old and stereotypical conceptions and associations of gender, race and class. Innovative and inclusive thinking is necessary. This imagination and skill can be provided by the largest untapped intellectual resource on the planet – women and girls.
- Gender balance in AI decision-making: Gender balance in decision-making should be put on the official agenda of all involved with the funding, design, adoption and evaluation of ADM.
- Gender balance in design teams: Employment of a robust range of intersectional feminists in the design of ADM systems will trigger and assist greater innovation and creativity, as well as detection and mitigation of bias and harmful effects on women, girls and the traditionally excluded.
- Require companies to proactively disclose and report on gender balance targets in design teams. Incentivize companies with balanced teams.
- Require universities and start-ups to proactively disclose and report on gender balance targets in research and design teams, including upstream when applying for grants. Incentivize teams that are balanced and multi-disciplinary.
- Research fund: Create a research fund to explore the impacts of gender and AI, machine learning, bias and fairness, taking a multi-disciplinary approach beyond the computer science and engineering lens. The fund should support new ways of embedding digital literacy, and study the economic, political and social effects of ADM on the lives of women and those traditionally excluded from rule-making and decision-taking.
3. INTERNATIONAL COOPERATION AND AN APPROACH TO ADM AND MACHINE LEARNING GROUNDED IN HUMAN RIGHTS.
Mass scale correction of skewed data systems will require multilateral and international cooperation to ensure we leave no one behind.
- A UN agencies-wide review of the application of existing international human rights law and standards to ADM and gender: This can guide and provoke creative thinking toward an approach grounded in human rights that is fit for purpose in the fast-changing digital age.
- Development of a set of metrics for digital inclusiveness: To be urgently agreed, measured worldwide and detailed with sex disaggregated data in the annual reports of institutions such as the UN, the International Monetary Fund, the International Telecommunication Union, the World Bank, other multilateral development banks and the OECD.