Professor Hall leads student group discussion on AI fears and optimism

April 21, 2023

AI, AYE AYE!: The McKeen Center for the Common Good and MacMillan House host a conversation about ethics and the future of AI with students and Professor of Digital Humanities Crystal Hall.

Yesterday evening, 15 students were joined by Associate Professor of Digital Humanities Crystal Hall for an informal discussion on one of this year’s most contentious topics: artificial intelligence (AI). The discussion, hosted by the Joseph McKeen Center for the Common Good and MacMillan House, centered on questions about the biases of AI programs, potential interference with the creative process and proper uses of the technology.

Hall opened the discussion by reflecting on other technological advancements that seemed daunting at the time of their release, such as gunpowder, vaccines and the automobile. She connected these examples to pertinent questions about AI today.

“The reason I was thinking about these historical examples of technology is that there are things that we have [had] to decide: ‘Do we trust them? Why do we trust them?’” Hall said. “So, are we going to use a telehealth system that is based on a ChatGPT model to … make decisions about our bodies based on what it’s giving back to us for answers?”

Hall then turned to the implications of the data that AI programs are built on, sharing findings from the Washington Post’s recent investigation into AI. The investigation revealed that a popular AI program was trained on data from biased sources, including Reddit and extremist political websites, which can shape the answers it generates. Hall discussed her concern over these findings.

Students continued to discuss AI’s trained biases throughout the evening. In proposing the proper uses of AI, Renata Gonzalez Chong ’23 suggested that it may be best to only harness the technology for the most objective of tasks, like reading medical scans, to prevent these biases from coming into play.

“I think [AI is] good at pointing out what it doesn’t perceive as normal, but then that can be problematic too. For proteins, maybe not. For brain scans, maybe not. But for other [more subjective] stuff … ‘That’s normal,’ or ‘That’s strange,’ it may unethically do that,” Gonzalez Chong said.

Another topic of discussion was the future of artistic and creative disciplines given the rise of AI-generated images, music and writing. Students mentioned an AI-generated image’s recent victory at the Sony World Photography Awards—granted without knowledge of its technological origin—and a viral AI-generated song made to sound like it was performed by artists Drake and The Weeknd.

Though many felt uneasy about AI and the future of the arts, some concluded that a model trained only on already existing data cannot be deemed truly creative.

For Hall, students’ breadth of opinion on AI is cause for optimism, as it demonstrates a collective drive to ensure the technology is used responsibly.

“This very ‘Bowdoin’ moment, this is the liberal arts at work, right?” Hall said. “You’re sampling from all of this to get the perspectives necessary to push for something different, to create the thing you want and not accept the thing that you’re being given.”

Like Hall, attendees appreciated the intellectual diversity in the room. Emma Gibbens ’25 left feeling enlightened by her peers.

“I thought it was a really great opportunity to get a very interdisciplinary approach to the situation,” Gibbens said. “People from the sociology side, or from the environmental science side, we can all speak to elements of why we’re concerned or why we’re looking forward to using AI.”
