• We Shape Our Tools,

    Thereafter our Tools Shape Us

    Artificial Intelligence, Automated Decision-Making & Gender

  • Introduction

     

    "Bias is inherent in human life, and therefore in the human data that informs the rule making of machine learning." We can either seize this moment to correct bias in the digital realm, as we tackle bias in the analog world, or condemn ourselves to old bias hardwired into the future century of Automated Decision- Making (ADM) that is trained by machine learning on biased data sets.

     

When we speak of women and girls in this paper, we mean all of their intersections; women and girls can also serve as a proxy for all groups traditionally invisible and ‘other’ to the system, those traditionally left behind. Women have been systematically excluded from standards-making, from data collection, and from defining the old rules of the system, and they continue to be excluded from defining the new rules, an exclusion aggravated by the shortage of women scientists creating ADM systems. Strategic and innovative thinking is therefore needed to achieve gender equality, and to strengthen democracy, in the new systems we create. We are at a critical turning point. Particularly urgent, given the scale at which ADM systems are being deployed, is that machine learning trains on data and then extracts its deep and implicit inferences: it makes the implicit information in the data explicit in the code, ‘intelligently mirroring’ the information it has been given from the analog world. Worse, machine learning compounds the historic bias in the data, crafting it into an embedded, exacerbated digital form, and so turns the bias slowly being stripped from the analog world into a new digital, and potentially permanent, reality.

     

This position paper advocates for Affirmative Action for Algorithms (<A+>) to correct the real-life bias and barriers that prevent women from achieving full participation and rights, both in the present and in the future we invent. We must ensure that machine learning does not embed an already biased system into all our futures.

     

We begin by outlining the landscape of gender bias in ADM, and then continue in three sections: Accountability, Inclusion, and Cooperation. We end with a set of practical recommendations that offer a real opportunity to rescript bias and create a proactive agenda. We must seize the moment to advance the values of equality we have long embraced, and correct for the visibility, quality, and influence of women in proportion to their share of the population.
