
AI Ethics Board weighs the uses and misuses of ChatGPT

February 3, 2023

Chinwe Bruns
TECH TENSION: As AI chatbots like ChatGPT have become more advanced, institutions all over the country have begun exploring how to confront their misuse. (“TECH TENSION” is an AI-generated phrase, by the way).

In November, the San Francisco-based research firm OpenAI launched ChatGPT, an advanced artificial intelligence (AI) interface capable of generating essays and writing computer code, which is now open to the public. ChatGPT’s popularity has spread to academia, prompting charged discussions among administrators and faculty in higher education, including Professor of Digital and Computational Studies Eric Chown and Professor of Government Michael Franz, both members of the National Initiative on AI Ethics board at the College.

ChatGPT is a chatbot built on a language-processing model. Using Reinforcement Learning from Human Feedback and troves of data from the internet, ChatGPT is capable of fulfilling a wide variety of language-based tasks, like answering questions or translating text.

Academics are concerned about ChatGPT’s potential to compromise academic integrity in higher education institutions. The advanced AI language tool is capable of generating responses that could be used as substitutes for students’ own work.

Franz described conversations between academics online that suggested a collectively negative reaction to ChatGPT.

“[The conversations] seem a little feverish and negative,” Franz said. “[For example], ‘This is awful;’ ‘I just asked AI my final exam question and it gave me this answer;’ ‘This is scary.’”

Some colleges and universities have banned ChatGPT on their devices and networks out of concern that students will abuse the tool to plagiarize. By contrast, Chown believes that educators, especially those in fields like Digital and Computational Studies, have a responsibility to educate students on AI to minimize its misuse.

“I think [banning ChatGPT is] a big mistake. It’s part of the world and we need to engage with it,” Chown said. “Engage with it, try to understand it as a tool. When would it be useful for me? When would it be a bad idea for me to use? What are its strengths? What are its weaknesses?”

According to Franz, Bowdoin faculty are taking a different, less fear-based approach to AI. The approach is grounded in collaborating with AI, not fighting it. As such, some professors are exploring ways to teach students that AI should complement human intelligence—not replace it.

“I think Bowdoin faculty … are reflecting over how to incorporate AI into their own [curriculum and] how to incorporate a message to students that says AI is not a sufficient way to answer questions,” Franz said. “I think we’re going to see developed, fairly soon, ways to test whether or not someone’s submitted materials were produced by these sorts of bots, by sort of feeding questions into it, and getting out answers, and then comparing that to what students turn in.”

AI chatbots have come a long way in recent years, and collaborating with AI might seem like an easy way for students to enhance their academic work and overall educational experience. Despite these advantages, Franz and Chown urge students to be mindful of AI’s shortcomings: imprecision and susceptibility to bias.

In order to use AI effectively, students need to learn how best to discern the truth. This includes being conscious of potential biases in its programming and cross-checking information against an array of reputable sources. Since virtual assistants like ChatGPT are trained on massive amounts of data, they can inherit biases from that data and consequently generate biased responses.

“It’s really hard to tell when ChatGPT gets something right, versus something wrong. It writes really convincing essays, it writes really convincing computer code, and sometimes it’s 100 percent right, and sometimes it isn’t,” Chown said.

ChatGPT isn’t only occasionally wrong. Sometimes, the tool will generate entirely misinformed content that masquerades as well-researched.

Chown believes that computer science academics have a responsibility to combat the spread of AI misinformation, and that the Department of Digital and Computational Studies (DCS) should offer courses that teach students how to confidently navigate AI chatbots to glean accurate information.

“DCS should have a course [about] how to find out if something is true or not. We all know that the internet, the digital world, is full of fake news and false information and so forth. And that problem is going to get worse with ChatGPT,” Chown said. “It’s easier to produce information. It’s easier to produce misinformation. And one of our goals as digital citizens should be becoming better at discerning what is the truth and what isn’t.”


One comment:

  1. Class of '23 says:

    What makes for a world-class college is going to be how we adapt to the changing world. Education is fundamentally the same as it was 150 years ago. With the easy accessibility of these chatbots, we have to figure out how to best incorporate them into our education. As a professor, if you are teaching what can be easily gleaned from a chatbot, then maybe you need to change what you are teaching. These chatbots can be incredibly powerful tools, even though I believe they will ultimately lead to our downfall; innovation is not always a good thing.
