Let’s confront AI
February 21, 2025

It is hard not to cringe when reading about last week’s Global Artificial Intelligence (AI) summit, with governments around the world—led by the U.S.—competing to offer the least AI oversight. Safety is not taking a backseat; it is unceremoniously stuffed in the trunk. But let’s separate politics from technology and not add AI to the growing list of polarizing issues—we have a responsibility to stay informed and help shape the future of AI.
AI has many well-known problems. To give just one example: AI makes mistakes, often with severe consequences, such as providing incorrect medical advice or, in the case of self-driving cars, killing pedestrians. But guess what: Humans make mistakes, too. Recent evidence actually shows that AI consistently outperforms experienced doctors in medical diagnosis and that self-driving cars are safer than human drivers.
This does not mean we should outsource these important decisions to AI, but it does mean that the questions we need to confront are more complicated: Does it matter if a person dies as a result of a mistake made by a human rather than by AI? If AI proves safer than humans, do we have a moral obligation to adopt it? And what are the long-run costs of creating dependencies on AI?
Confronting AI also means learning about emerging challenges. Many people were alarmed by reports that AI consumes millions of gallons of fresh water each year. While unfortunate, this does not come close to the top ten issues I worry about. (For perspective, U.S. golf courses use hundreds of billions of gallons of water annually.) I’m more concerned about topics such as AI surveillance, synthetic biology, superintelligence and cyborg technology exacerbating global inequities: issues that are hard to wrap one’s mind around (including for someone teaching these topics).
AI is not simply a “statistical machine.” At its heart, AI is a value function determined by humans. Training GPT-4, for instance, involves specifying a set of desirable (and implicitly moral) values that the model learns to optimize. Likewise, self-driving cars have a “crash algorithm” to guide decisions, such as those described in the infamous trolley problem. These value functions may soon affect all areas of our lives. So it’s all the more crucial to ask: Who gets to decide them?
Tackling these problems requires critical thinking and benefits from an interdisciplinary approach. These are exactly the types of questions we aspire to grapple with as part of the Bowdoin education. Many classes—in the digital and computational studies department and beyond—are already integrating these topics. Whether we like it or not, AI is here to stay and will reshape our lives. Let’s be proactive and confront AI … and maybe even nudge it in the right direction!
Given the lack of public oversight, the only lever left for shaping AI is arguably our behavior as consumers. AI companies are losing billions of dollars because models are costly to train. (The low reported cost of developing DeepSeek was misleading, as it excluded many fixed costs.) And in industries with high fixed costs, growing the user base is the name of the game: companies with more users outcompete the rest. AI companies differ, especially in their approach to safety. Be an informed consumer. For example, use AI models from the companies that most closely align with your values.
To be clear, this is not a call for students to use AI in their education; in fact, I am mostly pessimistic about AI’s impact on learning. It is a call to critically engage with the complexities of AI. The frontier of AI is rapidly evolving. Let’s stay engaged and informed so we don’t end up debating yesterday’s problems!
Martin Abel is an Assistant Professor of Economics.