About a month ago, I started reading articles about OpenAI’s Dall-E. OpenAI is an artificial intelligence research lab, and Dall-E is an image-generating program. When prompted with specific details, it produces a new image. For instance, generating a whimsical image like “an astronaut playing basketball in space in a minimalist style” is well within this program’s reach.
I kept reading and found that it is a variant of another OpenAI program, the Generative Pre-Trained Transformer 3, or GPT-3. GPT-3 is a large predictive language model that, put simply, generates human-like sentences. Because it was released back in 2020, I thought it might be more accessible.
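At its core, a predictive language model works by repeatedly asking, “given the words so far, what word is most likely to come next?” GPT-3 does this with a massive neural network trained on internet-scale text; as a rough intuition only, here is a minimal sketch of the same idea using a toy word-count (bigram) table instead of a neural network. The function names and the tiny training sentence are my own invented examples, not anything from OpenAI.

```python
from collections import defaultdict

def train_bigram(text):
    """Count which word tends to follow each word in the training text."""
    words = text.split()
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently seen next word, or None if unseen."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

# A toy "corpus"; a real model trains on billions of words.
model = train_bigram("the emperor loves his son and the emperor hears his advisors")
print(predict_next(model, "the"))  # prints "emperor"
```

Chain predictions like this together and you get generated text; GPT-3’s version of the trick is vastly more sophisticated, which is why its sentences read like a person wrote them.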
Then, I had a lightbulb moment. We had received a paper assignment for my philosophy class, Philosophy of Mind, but when I saw the topics, I thought, “yup, I’m using my essay pass.” Instead, I ended up sending an email to Professor Sehon asking if he’d be interested in grading an artificial intelligence (AI)-written essay. He responded with something better: “don’t tell your classmates and we’ll review it as a class!”
The assignment was to explain, evaluate and argue against Professor Sehon’s claim that “common-sense reason explanation of human action is irreducibly teleological.” I don’t want any unnecessary confusion to make you stop reading, so I’ll leave it there for now. I told Professor Sehon not to put much faith in the quality of the paper. I thought there was no way AI could write a decent philosophy paper.
I got to work writing a sentence in the program for it to riff off. I kept writing a sentence or two, then deleting what I wrote. Finally, I landed on a decent starting point and let the AI generate text. The AI understood that this was a philosophy essay and opened its second paragraph with “Famously, Kant argued that…” I was shocked.
The AI started to come up with its own thought experiment too! I thought it was one of the funniest thought experiments I had ever read. The AI wrote about the Emperor of China and his son with Down Syndrome. The Emperor’s advisors tell him that he would be happier if his son did not have Down Syndrome. If the Emperor would only let the doctors operate, his son would be cured! But the Emperor is adamant about not curing his son. He takes great pride in his son’s condition.
Somehow the AI ties this back into Immanuel Kant and writes, “If you find the view that it is wrong to treat people with Down Syndrome with medical intervention to make them more similar to the Emperor’s other children surprising, then ask yourself why that view is mistaken.”
After that absolutely absurd example, I fed in a sentence containing the word ‘Sehon.’ The AI finally did what I had expected it to do all along: parrot the inputted words. Because ‘Sehon’ was not a word familiar to the language model, it ended up just spewing out the prompt three or four times. Here, I decided the AI had finished—it had written its first philosophy paper.
I sent it over to Professor Sehon, and in one of the following classes we went over it, paragraph by paragraph, giving it feedback under the guise that this was a paper written by a past student.
The first paragraph was no big deal. It did its job of introducing the topic, with some misunderstandings, but it had the general idea down. Paragraph two started to explain teleology so that any reader could understand what it means. The AI said, “Teleology is a form of reasoning that makes use of purpose or goals to explain an action instead of using other causes.” It also came up with an example of teleology, and the class tended to like it. “The examples are useful,” they said. “I have a decent understanding of where the essay is going.” Generally, there were no big issues here other than me trying to hold in some laughter. Now came the AI’s magnum opus: the Chinese Emperor thought experiment.
Professor Sehon started to read through the chunk of text as we followed along. When we hit the portion on curing the Emperor’s son’s Down Syndrome, there were a few bursts of laughter and concerns for the student who wrote the paper. Through some laughter, some of my classmates said, “They do know Down Syndrome isn’t curable, right?” But, overall, people tended to like the essay. It got the point across.
We got to the final paragraph and everyone was confused. Each sentence started with “explain and evaluate” or something similar, but Professor Sehon said, “I think the student had some problems with submitting or something like that,” and that seemed to be a reasonable enough explanation for the class.
We all use clickers in class for polls, and Professor Sehon asked us, “What grade would you give it?” I gave it a D. I thought plenty of people would give Ds and Fs. Surprisingly, most people gave Bs and Cs. The AI had passed! When people found out the essay wasn’t written by a person, they were stunned and impressed. How could an AI write a seemingly convincing essay on such a specific topic?
Considering we did this in a philosophy of mind class, I’d say the essay makes a stronger case for the intelligence of AI. The task may be narrow, but it is still exciting to see where AI technologies could go.