With the end of another semester comes another Bowdoin Orient Student Survey (BOSS). When you all filled it out (we hope), you may have noticed a new set of questions asking Polar Bears about their habits around generative artificial intelligence (AI) tools like ChatGPT.
The survey revealed that many students have taken eagerly to the new technology—27 percent of respondents reported using generative AI for help on academic assignments this semester.
If you’re among them, we don’t blame you.
ChatGPT can be a valuable addition to users’ work and creative processes, and many professors across campus acknowledge its utility. We all know people who swear by ChatGPT as a way to generate essay ideas, find helpful sources or get their creative juices flowing. While we hope most students use it simply as an academic helping hand, we recognize that its powers can also be abused.
We have to remember, however, that AI has limitations. There is a common assumption that algorithms are indicative of indisputable truth—that they are inherently logical and produce infallible content. But behind each algorithm is a person who coded it and the human-generated information it has been fed, both of which are shaped by human subjectivity. The technology is also still getting its kinks worked out; issues of bias and fabricated information are among the AI challenges that should make us proceed with caution.
It is not novel to say that using AI, especially at a collegiate institution with an academic honor code, introduces many ethical quandaries. How, exactly, can we define “correct” or “incorrect” usage of the technology? While some people might find using the tool to generate essay ideas permissible—a practice many professors allow, or even encourage—others might see it as dishonest. There’s no way around the fact that generative AI is forcing us to confront where we draw the line of academic dishonesty.
The College has no definitive rules on ChatGPT usage. Senior Vice President and Chief Information Officer Michael Cato addressed the issue in an email sent before the school year began, writing that “faculty should be clear about their policies on permitted uses, if any, of generative AI with the students they are teaching and advising.”
This is uncharted territory for professors and students alike. We encourage the entire community to work together in building a foundation of academic integrity that each of us can use to reflect on our generative AI motivations. Ask yourself: “Am I using this technology as a valuable supplement to my education, or am I using it as a crutch? Could a friend, tutor or academic mentor have ethically assisted me by offering similar advice? Am I enhancing my creativity, or am I actively avoiding independent critical thought?”
If you were hoping for a hard-line, universally applicable solution to the problem of AI ethics, we can’t deliver, mostly because we don’t think a solution that is universal exists—at least not yet. Maybe you can ask ChatGPT to figure it out.
This editorial represents the majority opinion of the Editorial Board, which is composed of Robeson Amory, Sara Coughlin, Emma Kilbride, Lucy Watson, Sam Pausman and Juliana Vandermark.