
Faculty Forum explores the future of artificial intelligence

February 17, 2023

On Monday, the Committee on Governance and Faculty Affairs hosted a faculty forum on the role of artificial intelligence (AI) and technology in teaching and learning, prompted by the recent popularity of the AI software ChatGPT. The original purpose of the forum was to discuss ChatGPT in the context of academics at Bowdoin. However, the discussion revealed that faculty are less concerned with the current state of the software than with what ChatGPT signals for the future of AI technology in society.

Around 30 faculty members attended the forum in person, with many others sharing their opinions over Zoom. The forum was moderated by Senior Vice President and Chief Information Officer Michael Cato and featured three panelists: Stephen Houser, Senior Director of Academic Technology and Consulting and Adjunct Lecturer in Computer Science; Katherine Byrnes, Director of the Baldwin Center for Learning and Teaching and Lecturer in Education; and Fernando Nascimento, Assistant Professor of Digital and Computational Studies.

After the panel, faculty shared their thoughts on ChatGPT with the group. The discussion ranged from concerns about plagiarism and cheating on writing assignments to broader questions about the future of AI and its role in the classroom. The consensus was that faculty members trust students not to use ChatGPT for their academic assignments, so the conversation quickly shifted toward opinions on the technology itself and its societal implications as it continues to improve. Much of the conversation focused on the epistemological implications of ChatGPT as a source of often inaccurate content.

“The way this technology works is that it draws from all the information at its command—billions and billions of words that are out there and available for it to digest. For me, as a historian, that’s an archive—a body of data that’s working from that body of data itself,” Professor of History and organizer of the forum Patrick Rael said. “If there are these biases in that archive and that massive data set—which there must be—then, this is a technology that will reflect that in some way.”

While ChatGPT can convincingly mirror language and speech patterns by drawing on its expansive training data, it cannot reason the way humans do. This shortcoming assuaged some fears about academic dishonesty, but it also poses critical challenges to the future of knowledge consumption for students and the public alike.

“Here’s a machine that is capable of reproducing bias without knowing it and that lies with confidence … if it’s actually coming from a bot that is incorporating all these latent biases that itself is unaware of, that’s a real problem,” Rael said. “We know what good words look like. We know what good arguments look like. But what happens when the mass of people start reading words that are not actually written by humans?”

As the technology improves, differentiating facts from inaccurate content generated by ChatGPT will become increasingly difficult. Rael believes that Bowdoin students are well-equipped to think critically, evaluate sources and question how different arguments converse with each other. Nonetheless, faculty agreed that this emerging technology will force the College to think critically about information literacy and AI in the future.

Rael commented that professors in the humanities and the social sciences expressed the most concern about the implications of ChatGPT, as did STEM professors teaching writing-intensive classes, like first-year seminars.

Meredith McCarroll, Lecturer in English and Director of Writing and Rhetoric and the First-Year Writing Seminar Program, hopes to alleviate some of the concerns around ChatGPT by integrating it into her classes.

“I plan to use AI-generated texts in class for students to assess, edit and revise. The texts that I have seen produced by the technology would benefit from revision and can provide good resources for students to think about the choices we make as writers,” McCarroll said.

Other professors, such as Rael, questioned whether the existence of the technology necessarily warrants its use in the classroom or means that professors will know how best to use it in their curricula. Nonetheless, he acknowledged that AI will certainly change the way newer generations think and learn.

“You folks are digital natives. Maybe the way we learn and process information will fundamentally change, and we have to keep up with that,” Rael said. “It’s straight down the chute to be asking questions about how we deliver instruction, what are the learning processes really about and how do the things that we use in the classroom [supplement that].”

Future collaborations between the IT department and faculty will build upon the forum's conversation about the role of emerging technology, beyond just ChatGPT, in Bowdoin's approach to teaching and learning.
