The Genius of Dr. Joy Buolamwini's "Unmasking AI: My Mission to Protect What Is Human in a World of Machines"
A glimpse into one of the decade's most influential books on AI and bias
Cover Image of Unmasking AI: My Mission to Protect What Is Human in a World of Machines
At the inception of this blog, Tech Policy - the human perspective, I was certain I wanted to address bias in AI in one of my first pieces. However, I lacked the language to speak about it beyond my overly simplified argument: “AI systems are biased along racial and gender lines, and I know this because they are built and trained by humans who carry the same implicit biases.” Then I came across Dr. Joy Buolamwini’s book, Unmasking AI: My Mission to Protect What Is Human in a World of Machines.
Unmasking AI bends genre classifications: it is a memoir and a scientific book with pockets of poetry all at once. In the first chapter, Dr. Buolamwini is invited to a Halloween party where she wears a white mask as part of her costume. As the chapter progresses, she details Aspire Mirror, a project she is building for one of her MIT master’s classes. Aspire Mirror is a device that lets you look at yourself and see reflected on your face whatever inspires you or whatever you hope to empathize with. As she works on adding interactivity and movement tracking so the filter follows her face, she realizes the system simply does not register her face and hence cannot track its movement. Believing this to be a regular hurdle in the software-building process, she sets out to see if it can track any face at all. She tests this by drawing a face with basic lines on her palm. It works; the system detects it. Dr. Buolamwini looks around for other objects, finds her white Halloween costume mask, and as she holds it against her face, the system detects it. Being a dark-skinned woman means the facial recognition system cannot “see” her. Here, she coins the term the “coded gaze”.
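To make that failure concrete, here is a minimal sketch of the kind of off-the-shelf face-detection loop a project like Aspire Mirror might sit on top of. The book does not specify her exact stack, so the choice of OpenCV and its bundled Haar cascade here is purely an assumption for illustration:

```python
# A minimal webcam face-detection loop, assuming OpenCV and its
# pretrained frontal-face Haar cascade (illustrative, not the book's code).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

capture = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale returns zero boxes when the detector "cannot see"
    # a face -- the failure mode Dr. Buolamwini hit before holding up the mask.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("coded gaze demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```

If a detector like this was trained mostly on lighter-skinned faces, it can draw a box around a hand-drawn sketch or a white mask while returning nothing for a real, darker-skinned face in the same frame.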
The coded gaze describes the ways in which the priorities, preferences, and prejudices of those who have the power to shape technology can propagate harm, such as discrimination and erasure.
The “coded gaze” goes hand in hand with the concept of “power shadows” which are a direct result of data reflecting the assumptions and prejudices of society. Through this concept, Dr. Buolamwini illuminates the impact of factors such as racism and colorism, both rooted in colonialism, on our data and consequently on the systems we can construct. This relationship between the "coded gaze" and "power shadows" underscores the pervasive influence of historical biases on technological advancements.
Part II of the book particularly stood out to me because of its fifth chapter, “Defaults Are Not Neutral”. There is often a mistaken belief that, because of the logical and mathematical systems underlying machine-learning algorithms, they are objective by nature: machines are believed to be free of the subjectivity and biases that plague humans. Dr. Buolamwini gives the example of cameras, which appear neutral even though history tells a different story. Did you know that furniture and chocolate companies played a significant role in pushing film photography to capture richer and darker shades of brown?
“Even though cameras may appear neutral, history reveals another story. The film used in analog cameras was exposed using a special chemical composition to bring out desired colors. To calibrate the cameras to make sure those desired colors were well represented, a standard was created. This standard became known as the Shirley card, which was originally an image of a white woman used to establish the ideal composition and exposure settings. The consequence of calibrating film cameras using a light-skinned woman was that the techniques developed did not work as well for people with darker skin. In fact, it wasn’t until furniture and chocolate companies complained that the rich browns of their products were not being well represented that Kodak introduced a new product that better captured a range of browns and dark sepia tones”.
The film camera example lays the groundwork for understanding why racial diversity matters in the datasets used to train AI facial recognition systems. If diverse skin tones are absent from the training set, the result is a facial recognition system coded without inclusion. In other words, there are people who are “coded out” of systems, a phenomenon Dr. Buolamwini calls excoding. Excoding poses significant present and future risks to marginalized populations, especially as AI systems proliferate and are used for identification, sorting, and selection tasks.
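In the spirit of the intersectional audits Dr. Buolamwini is known for, here is a hedged sketch of why disaggregated evaluation matters: a single aggregate accuracy number can mask exactly the populations being excoded. The subgroups and figures below are invented for illustration, not taken from the book:

```python
# Sketch of an intersectional accuracy audit: report accuracy per
# (skin type, perceived gender) subgroup, not just one aggregate number.
# All records here are hypothetical.
from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (skin_type, perceived_gender, correct: bool)."""
    totals = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for skin_type, gender, correct in records:
        totals[(skin_type, gender)][0] += int(correct)
        totals[(skin_type, gender)][1] += 1
    return {group: c / n for group, (c, n) in totals.items()}

# Invented results: a respectable overall score hides a large gap.
results = (
    [("lighter", "male", True)] * 99 + [("lighter", "male", False)] * 1
    + [("darker", "female", True)] * 65 + [("darker", "female", False)] * 35
)
overall = sum(r[2] for r in results) / len(results)
print(f"overall accuracy: {overall:.2f}")   # 0.82 -- looks fine in aggregate
for group, acc in sorted(per_group_accuracy(results).items()):
    print(f"{group}: {acc:.2f}")            # 0.65 vs 0.99 -- the gap appears here
```

The design point is simple: any audit worth the name reports per-group numbers, because the aggregate alone conceals who is being failed.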
You can be excoded when a hospital uses AI for triage and leaves you without care, or uses a clinical algorithm that precludes you from receiving a life-saving organ transplant. You can be excoded when you are denied a loan based on algorithmic decision-making. You can be excoded when your résumé is automatically screened out and you are denied the opportunity to compete for the remaining jobs that are not replaced by AI systems. You can be excoded when a tenant screening algorithm denies you access to housing.
Throughout the book, we follow Dr. Buolamwini’s journey of uncovering and tackling biases in the datasets used to train AI models. In discussions of AI safety, existential risks and potential global catastrophes are often placed front and center. Validly so: risks like bioterrorism and rogue AI could threaten humanity’s continuity, warranting the attention and resources given to their prevention. In the same breath, bias is often regarded as a secondary risk, deemed far less dangerous. What is regularly overlooked is that minorities bear the brunt of the coded gaze, experiencing the full weight of its impact through facial recognition systems. While preventing future catastrophes is essential, it is equally imperative to confront and rectify the immediate harms of the biases embedded in current AI systems.
Beyond her years of research on bias and its effects, Dr. Buolamwini details her victories in the fight against bias through her digital advocacy non-profit, the Algorithmic Justice League (AJL). Through AJL’s groundbreaking work, tech giants like IBM became aware of, and worked to improve, the accuracy of facial recognition systems that had performed poorly under an intersectional analysis (race and perceived gender). Dr. Buolamwini was also at the forefront of championing inclusion in the beauty space as the face of Olay’s #DecodeTheBias campaign, which sought to raise awareness of technology’s shortcomings and algorithmic bias. Her pioneering work speaks to the importance of using our voices to amplify the change we need to see.
This blog post is but a glimpse of the sheer genius present in Unmasking AI. Dr. Buolamwini walks us through technical topics, taking care to clearly explain jargon. We come to understand how systems are built, from the data all the way to the relevant actors. Through her lived experiences, we also see the power of grit and self-belief. Finally, we learn that we do not have to be a singular expression of ourselves: Dr. Buolamwini is as much a researcher as she is an engineer, as she is a poet, as she is a skateboarder. Often, we believe that to succeed we must pick a single form of self-expression and stick to it alone. But we are free to explore, free to merge all these aspects of ourselves into a beautiful tapestry of being. I highly recommend this book to anyone interested in the AI field. If I could summarize its mastery, I would use Charles Bukowski’s words: “Genius might be the ability to say a profound thing in a simple way”.