
Faculty discuss varied policies on student AI use

October 3, 2025

With the increased prevalence of generative artificial intelligence (AI) tools, faculty across campus have had to adopt AI policies in their syllabi. However, these policies are by no means standardized.

Some faculty have allowed the use of AI in certain cases. Assistant Professor of Biochemistry and Biology Maria Sterrett has sought ways for students to use AI that still enhance their learning. In one of her courses, where students find and interpret scientific articles, Sterrett lets students use AI to direct their learning and find information.

“I think AI does have the potential to help narrow that scope and give you information relatively quickly about things that may direct your interest,” Sterrett said. “After that, though, I’m challenging the students to then find a scientific article and read it and interpret it and not use the AI tool to actually do the interpretation part.”

Sterrett discussed how she believes this method will help students later on in their lives.

“I think, ultimately, when [students] get out of college and you’re in the workforce …, you’ll be successful if you can use the tool to direct your thinking but really do the thinking on your own,” Sterrett said.

In contrast, Associate Professor of English Ann Kibbie prohibits the use of generative AI tools and considers their use for researching, outlining, drafting, writing or revising work to be a violation of academic honesty in her class. Kibbie explained her rationale behind this policy is to have students engage in “genuine authorship,” which is restricted by the use of AI tools.

“I know that there are a lot of different opinions regarding these policies. I am combining my policy with a new emphasis on in-class writing in an attempt to refocus the discussion of writing from product to process,” Kibbie wrote in an email to the Orient. “I want students to recapture the pleasure of not ‘outsourcing’ their thinking…. That is how we all develop, and always continue to develop, our skills as thinkers.”

According to Senior Vice President and Dean for Academic Affairs Jen Scanlon, the dean’s office has focused on letting faculty develop their own AI policies, which may differ even between classes taught within the same department.

“I think our biggest piece of guidance for faculty is to develop an AI policy,” Scanlon said. “It’s really fundamental that faculty decide for themselves what their AI use is. It’s one important measure of faculty autonomy …, so we don’t presume to say you must do this, or you must not do that, but more how important it is in the age of generative AI for faculty to take the time to really think about what their practices are and then to communicate those clearly to students.”

In an email to the Orient, Scanlon shared that the College is currently considering additional guidance for AI policies. These guidelines were developed by the Committee on Teaching and Classroom Practice and are being reviewed by the Curriculum and Educational Policy Committee before being shared with faculty.

“The guidelines, then, will provide guidance about how to develop individual policies and descriptions of potential AI uses to consider. We expect that those will be shared with the faculty soon,” Scanlon wrote.

In the meantime, Scanlon noted, the College has given faculty many opportunities to learn more about AI and its usage in the classroom.

“Now that we have LibreChat, there are ways for faculty to learn about it more and to get training. We have the Davis educational grant … and then our Hastings Initiative [for AI and Humanity]. Both of them provide very clear, supportive opportunities for faculty to learn more and to experiment,” Scanlon said.

For Sterrett, the equity offered by College-provided tools like LibreChat, which students gain access to after completing an AI ethics module, is an important consideration when bringing AI into the classroom.

“If I’m opening up AI usage, if someone has a subscription and someone doesn’t, that’s kind of an unfair advantage point, but if everyone has access to LibreChat, then if you want to use it, you can, and it’s not necessarily a division of equity,” Sterrett said.

Associate Director of the Baldwin Center and Director of the Writing Program John Paul Kanwit leads a workshop for faculty about AI in writing-intensive courses and teaches a first-year writing seminar called “Writing with and about Generative Artificial Intelligence.” Kanwit expressed that while some professors are willing to experiment with AI, others are fearful of engaging with this new technology.

“It is a lot for faculty and everybody to take in, so one of the things I do in the workshop is … try to convince them to change one or two things about what they’re doing,” Kanwit said. “I don’t think faculty need to change their entire class because AI is here.”

Assistant Professor of Government Ezgi Yildiz allows for generative AI use for certain assignments in her class as long as students provide screenshots of the prompts that they used. Even if students do not use AI on these assignments, she asks that they submit a statement indicating that they did not use AI.

“I feel like people are maybe dipping their toes [in AI] to see whether this is something they can do. I might be one of the earlier adopters of this kind of policy because of my previous experiences where there clearly was [AI] use,” Yildiz said.

She believes this policy will allow students to be open and honest about their use of AI and limit accusations of academic dishonesty.

“There are certain tell signs of AI, but some people write that way as well. So I just didn’t want to have that guesswork be part of the conversation anymore,” Yildiz said.

Sterrett emphasized how strict policies are not the only way to discourage students from outsourcing their assignments and thinking to AI.

“I’m a big believer that if I make the class engaging for students and make them understand ‘Why is it I’m assigning this?’ or [understand] what they are going to get out of an assignment, [then] they won’t feel like they should just use AI to complete it,” Sterrett said.

Assistant Professor of Digital and Computational Studies Fernando Nascimento expressed that while AI tools can be helpful in his discipline, overreliance on these tools can stunt student growth and learning as well. In addition, Nascimento pointed out some of the weaknesses of using AI in digital and computational studies (DCS).

“For coding-intensive projects specifically, AI tools often miss the layers of design that go into properly solving a problem, instead producing ‘one-shot’ solutions that are difficult to understand and debug,” Nascimento wrote in an email to the Orient.

Nascimento teaches a class on the ethics of AI and emphasized that, beyond immediate concerns surrounding the outsourcing of thinking, there are ethical considerations to balance when utilizing generative AI.

“Beyond these immediate pedagogical concerns, DCS believes it is central to our mission to recognize the wider implications of AI use, such as broader impacts to the environment, certain forms of labor exploitation involved in the training of AI systems, the power centralization derived from the dominance of a few companies in the development of large AI applications, the risk of new forms of privacy violation due to the inferential potential of machine learning model and the risk of increasing social inequalities due to a new layer of the digital divide,” Nascimento wrote.
