THE CASE OF ALGORITHMIC DISCRIMINATION AND HOW IT ADVERSELY AFFECTS MINORITIES – WOMEN’S HISTORY MONTH EDITION

MIT researchers Joy Buolamwini and Deborah Raji, Stanford’s Timnit Gebru, and others have, as part of a dedicated team, devoted remarkable effort to demo testing artificial intelligence (AI) facial recognition software and synthesizing its data output. Their exhaustive study of bias in AI concluded that Amazon’s facial recognition software Rekognition, which is sold to law enforcement and also used for retail security surveillance, exhibits significant gender and racial bias in its gender classification. In Joy’s MIT thesis, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” the team found that this racial and gender bias in surveillance AI adversely affects women and minority groups, who are then subject to increased surveillance, searches, interrogations, and potential loss of liberties. CNET News reporter Alfred Ng wrote that, without standards to prevent abuse and AI bias, the software could “get you banned from places you’ve never been” (CNET.com), since store security is free to share its biased data about you across a vast network.
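
For readers curious what that demo “data output” actually looks like in code, below is a minimal sketch of the kind of query being described: a single portrait image sent to Amazon Rekognition’s face-analysis API, which returns a gender label and a confidence score. The image file name is a hypothetical placeholder, and the snippet illustrates the publicly documented API call, not the research team’s actual test harness.

```python
# Minimal sketch, for illustration only: send one portrait to Amazon
# Rekognition's face-analysis API and read back its gender label and
# confidence score. "portrait.jpg" is a hypothetical placeholder, and
# AWS credentials/region are assumed to be configured.
import boto3

rekognition = boto3.client("rekognition")

with open("portrait.jpg", "rb") as f:   # hypothetical local image file
    image_bytes = f.read()

response = rekognition.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],                 # request gender, age range, etc.
)

for face in response["FaceDetails"]:
    gender = face["Gender"]             # e.g. {"Value": "Female", "Confidence": 99.1}
    print(f"Predicted gender: {gender['Value']} "
          f"({gender['Confidence']:.1f}% confidence)")
```

A classifier like this returns a confident-sounding label for every face it finds; the screen captures below show what those labels looked like when the demo was pointed at well-known Black women.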

So next time you are browsing in a Whole Foods Market or a Target and you find yourself being stalked while Black by store security, please think about the findings below and whether you are comfortable with this:

“…An error in the output of a face recognition algorithm used as [a deciding factor in surveillance] can have serious consequences. For example, someone could be wrongfully accused of a crime based on erroneous but confident misidentification of the perpetrator from security video footage analysis.”

Screen capture of actual data output using the face of TV personality Oprah Winfrey, from the Amazon Rekognition software demo (Source: “Ain’t I A Woman”)

“In other contexts, a demographic group that is underrepresented in benchmark datasets can nonetheless be subjected to frequent targeting…False positives and unwarranted searches pose a threat to civil liberties. Some face recognition systems have been shown to misidentify people of color, women, and young people at high rates (Klare et al., 2012).”

Screen capture of actual data output using the face of Civil Rights activist Ida B. Wells, from the Amazon Rekognition software demo (Source: “Ain’t I A Woman”)

“Past research has also shown that the accuracies of face recognition systems used by US-based law enforcement are systematically lower for people labeled female, Black, or between the ages of 18-30 than for other demographic cohorts (Klare et al., 2012). The latest gender classification report from the National Institute of Standards and Technology (NIST) also shows that algorithms [that] NIST evaluated performed worse for female-labeled faces than male-labeled faces.” (Source: Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification)
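
To make “systematically lower accuracy for some demographic cohorts” concrete, here is a minimal sketch, using made-up records, of how an intersectional audit of this kind is tallied: every test face carries a true label, a predicted label, and a demographic subgroup, and accuracy is compared subgroup by subgroup rather than averaged over everyone. The records and subgroup names below are hypothetical placeholders, not data from the study.

```python
# Minimal sketch with made-up records: group labeled test images by
# demographic subgroup, then compare per-subgroup accuracy instead of
# one overall average. All values here are illustrative placeholders.
from collections import defaultdict

# (true_gender_label, predicted_gender_label, subgroup) -- hypothetical results
results = [
    ("female", "male",   "darker-skinned female"),
    ("female", "female", "lighter-skinned female"),
    ("male",   "male",   "darker-skinned male"),
    ("male",   "male",   "lighter-skinned male"),
    ("female", "male",   "darker-skinned female"),
]

correct = defaultdict(int)
total = defaultdict(int)
for true_label, predicted_label, subgroup in results:
    total[subgroup] += 1
    correct[subgroup] += int(true_label == predicted_label)

for subgroup in sorted(total):
    accuracy = correct[subgroup] / total[subgroup]
    print(f"{subgroup}: {accuracy:.0%} accuracy ({total[subgroup]} faces)")
```

An overall accuracy number can look respectable while one subgroup quietly absorbs most of the errors, which is exactly the pattern the quoted research describes.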

Help fight bias by sharing your experiences with, or concerns about, surveillance AI technology HERE.