
Joy Buolamwini talks AI bias in Kenneth V. Santagata Memorial lecture

September 26, 2025

On Tuesday night, students, faculty and community members gathered in Mills Hall to hear from Dr. Joy Buolamwini, a prominent author and researcher who specializes in the social implications of artificial intelligence (AI). Sponsored by the Kenneth V. Santagata Memorial Fund and the Hastings Initiative for Artificial Intelligence and Humanity, Buolamwini’s lecture highlighted key issues regarding AI and algorithmic bias.

In her research, Buolamwini has focused on how bias appears in facial recognition technology, both through the bias embedded within the algorithms themselves and through the ways these algorithms can be used and misused by different actors, a phenomenon she calls the “coded gaze.”

“The ‘coded gaze’ is about who has the power to shape the priorities, preferences and also at times prejudices—not always intentionally—that get embedded in AI systems…. I first encountered it in a really visceral way while I was a student at [the Massachusetts Institute of Technology] working on an art installation that used face detection,” Buolamwini said.

In her research auditing AI systems, Buolamwini discovered that facial recognition technologies are often tested using “pale male” data sets, which overrepresent lighter-skinned men. As a result, she found, these systems identify women and people with darker skin tones less accurately.

“There are many ways in which computers can read our faces and how they read our faces matters because it influences how we test them, the types of laws we might write and the policies and so forth,” Buolamwini said.

According to Buolamwini, this leaves the potential for individuals to be what she terms “excoded.”

“‘Excoded’ is really a way of describing anyone who’s been convicted, condemned, exploited [or] otherwise harmed by AI systems,” Buolamwini said.

Facial recognition technologies are becoming increasingly prevalent as many companies make their tools available to intelligence agencies. Buolamwini highlighted how the effects of algorithmic bias are felt across different populations.

“You have these algorithms of discrimination, and we’re thinking about the excoded. We also have algorithms of surveillance … increasingly entering different parts of our lives,” Buolamwini said. “So we’re seeing, for example, the expansion of AI facial recognition technology at airports. You’re also going to see more surveillance in smart glasses that are coming out.”

However, Buolamwini explained, the impacts of algorithmic injustice are not always distributed as one might expect.

“‘AI is a mirror of society.’ That’s the sentiment I sometimes hear. Like, ‘Yes, AI is biased, but so are we, right? So it’s not doing too much more.’ But I really think we have more than a mirror. I think we’re looking at a kaleidoscope of distortion…. The technologies of the future are actually taking us back to the discrimination of the past while robbing us of our humanity in the present,” Buolamwini said.

Alma Dudas ’27, who read Buolamwini’s book, “Unmasking AI: My Mission to Protect What Is Human in a World of Machines,” in a philosophy course last year, attended both a student conversation with Buolamwini earlier in the day and the evening lecture. Dudas appreciated Buolamwini’s discussion of the consequences of AI.

“Everybody talks about [AI], but few really understand the complications of it. I think she does a really good job at being the technical person and understanding how it works in an algorithm but also what the consequences [are] and how to communicate that in an efficient way to an audience that might not have the technical background that she has,” Dudas said.

According to Professor of Digital and Computational Studies Eric Chown, who serves as the faculty director of the Hastings Initiative, Buolamwini’s lecture reflected a key purpose of Reed Hastings ’83’s gift—to encourage AI dialogue at the College.

“We’re trying to build up this network of events and things to showcase both the positive side, that AI can do good things, and also some of the things to be worried about with AI to try to give campus a balanced perspective,” Chown said.

Chown noted how the Hastings Initiative is finding its place on campus through engagement with faculty and students.

“At the end of the day, the Hastings Initiative is about empowering students, so we’re trying to engage with faculty so they can use it more in their classes,” Chown said. “We’re also trying to engage with students directly. I think we’ve already done a bunch of exciting things, but there’s a lot more to come, and I can’t wait until campus can see some of that.”
