Reflections on science history: A professor’s take on AI
Associate Professor of History David Hecht shares his thoughts on technological developments.
May 16, 2026
I walked into Associate Professor of History David Hecht’s office at the start of the semester with vague questions about technological developments. Tucked into the right corner of Adams Hall, his office was decorated with “Atoms for Peace” posters and newspaper clippings of fallout shelters pinned to a bulletin board. Books titled “The Atomic Age” and “Atomic Age Postcards” sat neatly on his shelf, overlooking his desk and a computer with a visibly worn keyboard.
Hecht, who describes himself as someone who likes to write too much, first became interested in science history as an undergraduate at Brandeis University. What intrigued him about the field was its lack of strict chronology: one technological development does not necessarily lead to another; rather, developments are the product of both science and politics. Hecht later turned to nuclear history and its underlying theme of wrestling with a technology that can also be an existential threat.
Throughout my time in his course, “The Nuclear Age,” I could not help but draw similarities between the atomic age and the current rise of AI. In both moments, the technology is regarded as powerful, special and exceptional—whether it truly is or not. Both politicians and scientists are pushing to develop the technology, and it is already developing at breakneck speed. The public perception of both is often engulfed by fear, anxiety and pessimism, alongside hopefulness and idealism. What follows is a conversation with Hecht on these similarities, their implications and their significance moving forward.
AN: Are there other comparisons that can be made between AI and past technological developments?
DH: I think that it’s easy to view technology as coming out of nowhere. However, there are always precursors—and sometimes they stretch back much further than it might seem. By this I don’t simply mean the history of the innovation itself, though that’s of course true. I also have in mind the social place of technology. For example, it’s easy to cite the internet as a world-changing technological development. But there are plenty of ways in which news, communication and commerce were already changing decades before the internet came along. Or think of an airplane: an invention that radically changed people’s experiences of time and place, using fossil fuels to allow a degree of spatial interconnectedness that felt very new. But people could have said, and did say, exactly the same thing about trains a century earlier. In the case of AI, this is not the first moment in which we have outsourced our thinking to machines. The ability to perform a Google search, for example, radically decreased the time it took to do basic research on a subject. As with ChatGPT, this could be seen as a good thing (increased efficiency) or a bad one (weakening critical thinking skills). Either way, AI has precedents, and that is important to keep in mind when confronted with something that seems wholly new and revolutionary.
AN: What are some similarities between the current development of AI and the development of the atomic bomb in the 1940s specifically?
DH: The first thing that comes to mind is an interesting comment that The New York Times columnist Michelle Goldberg made at the end of last year. She wrote that for AI, as with the atomic bomb, “its progenitors saw its destructive potential from the start but felt desperate to beat competitors to the punch.” Of course, there are differences between the two—and she acknowledges that it’s easy to overstate the potential for doom when faced with a new, potentially scary technology. But her point is a good one. Very often, the risks of a new technology or innovation become obvious only over time. That is not the case here.
AN: Looking into the future, what do you predict the culture around AI will be? Are there similarities between other reactions to major technological innovations?
DH: Technological innovations can easily have an aura of inevitability—that the changes they bring in society happen inexorably, by the very nature of the innovation. In fact, human choices drive what happens, though often in unseen ways. We are right now rushing ahead with AI in a number of sectors, to the exclusion of concerns (such as the environment), in large part because no one wants to be left behind in capitalizing on what looks to be the Next Big Thing. But nothing inherent in the technology compels that—instead, it’s the result of incentive structures and cultural norms.
Just to make this a little more concrete: There’s a classic book in the history of technology, Ruth Schwartz Cowan’s “More Work for Mother,” that illustrates this well. Cowan writes about the early history of labor-saving technology in the home, such as washing machines and vacuum cleaners. It’s easy to think that these devices lightened the burden of housework. But Cowan argues that they actually increased it: As the technology improved, the standards of cleanliness went up. Moreover, women did an increasing percentage of the work, as the “labor saved” was that of men, children and household help. Nothing about the technology itself dictated that change; social values and expectations did.
I guess this is a bit of a dodge to your question, because I’m always hesitant to make predictions—one thing history shows you is that things often turn out in ways that people don’t expect. So my initial answer is that the “culture around AI” will be a product of what people decide they want it to be. A perhaps bland-sounding answer, but one that I think runs counter to much of how we assume technology interacts with society. It’s not always the driver that it appears to be.
AN: When reflecting on the implementation of AI today, what lessons can we learn from major technological developments in the past?
DH: Trying to make people afraid of something can backfire. There’s an article, written by Megan Barnhart Sethi, that I sometimes teach in my Nuclear Age class about the early activist efforts of some scientists after World War II. They tried to convince both the public and policymakers that nuclear weapons were too dangerous to be left in national hands—that they had to be subject to international control. They succeeded in convincing people to be afraid of the weapons, but this didn’t have the policy effect they wanted. It caused most Americans to double down on the importance of having them and having more of them than anyone else. It’s easy to make a parallel with climate change—trying to scare people into action clearly has not worked. I don’t know what the answer is, but I’m skeptical that AI can be halted—to the extent we even want it to be—simply by eloquently and urgently articulating its risks.
AN: If a fear factor does not seem to shape policy as much as intended, what other kinds of policy tools or approaches do you think have historically been more effective in shaping the trajectory of powerful technologies?
DH: I wish I knew! There certainly are examples of times when technology has been regulated: the ban on atmospheric nuclear testing in 1963, the U.S. ban on DDT in 1972 and the Montreal Protocol in 1987 phasing out the use of ozone-depleting chemicals (among many other examples). I should note that all three of these developments did involve making both the public and policymakers aware of the dangers of the technology. So, I should modify my answer to the previous question: it’s not that fear and worry can’t play any role in shaping these trajectories. My skepticism is about relying on emotional appeals as the main strategy, as something that will automatically prompt beneficial social and policy changes.
Those three examples came to mind because I teach about them in various classes; they aren’t the only ones. One thing that they have in common is that they brought their opponents on board. The partial nuclear test ban didn’t happen until scientists and policymakers were satisfied that underground tests could replace atmospheric ones, and the DDT ban didn’t happen until alternatives were found that didn’t carry the same health and environmental risks. I have some trouble imagining a direct parallel from either of those things to AI, though perhaps the lesson is that aiming for moderate change that blunts the worst aspects is the best strategy.
The other thing that occurs to me—and this is less me being a historian than just my general thoughts—is that those of us who are concerned about AI need to articulate a positive vision for what kind of AI-limited world we’d like to see. It can’t just be “this is dangerous” or “look who/what this is going to hurt.” The ease of using it is too great, and the (apparent) financial incentives too clear for either of those to work—not to mention the fact that some people feel it has very much helped their professional lives. I think that AI skeptics need to focus on how limiting and regulating it will make people’s lives better and happier—not simply warning about what will happen if we don’t.
AN: To what extent do you think our current fears about AI are shaped by cultural memory of earlier technologies, instead of by AI itself?
DH: This is just a gut feeling, but I think that a lot of it is AI-specific. That’s not to discount the memory of earlier technologies; there was certainly a worry about the power that technology companies have over our lives that predates AI. And worries about job losses have a clear precedent in various rounds of automation that go back many decades. But as much as the historian in me wants to argue for the importance of the past—and as much as I’ve done so in most of my answers so far—there does seem to be something novel about AI. It presents us with questions and challenges that are different from things we’ve faced before.
AN: Are there cases where early narratives about a technology ended up shaping its trajectory more than the technology itself?
DH: It’s really hard to separate those things, as you might imagine. At the moment, for example, we have AI technology. This itself is a fluid category, because the technology is evolving: The question “what is AI?” involves both understanding what it is right now as well as judgments about where it is likely to be headed—and some of those judgments might be more accurate than others. We also have narratives about what might happen (socially, politically, economically) as a result of this developing technology. Those narratives can’t be separated from the facts about AI, since they are in part based on the technology. But they also have other roots—such as the precedents you were talking about in the last question. Further complicating matters is that there are plenty of examples of discoveries being anticipated in science fiction before they were realities—so there is a sense in which cultural ideas about inventions can precede (and perhaps lay the groundwork for) the inventions themselves. This isn’t a very concrete answer, but I think it’s important to recognize the interdependence of narratives and technology.
AN: As a historian, what questions come to mind when thinking about AI? Do you think we are asking the right questions about AI?
DH: Great question. I have an answer, but it will require a little bit of a tangent.
Sherry Turkle is a sociologist who has been writing about technology, specifically computers and robotics, for a long time. In her 2011 book “Alone Together: Why We Expect More from Technology and Less from Each Other,” she traces the history of how technology came to fill social needs. A key moment happened in the mid-1980s, when toys and other entertainment devices started to seem more lifelike, more like human beings than the animals that had once seemed to be our “nearest neighbors.” I believe it was in that book that she coined the term “robotic moment.” I first encountered that idea in a lecture she gave a few years later, in which she said (I’m paraphrasing) that we’re in the robotic moment not because the robots are ready for us, but because we are ready for them.
I find this fascinating. Not just because of what it means about robotics specifically, but because of how it flips around the way that we usually think about technological advancements. As she describes it, the key development is not a technological one—not whether the robots were ready to take over a given social (or cognitive, or economic) function. It’s that, broadly speaking, our society was interested in having them do that.
Let’s apply this idea to AI, as it currently exists. So often we focus on understanding it, what it can do and what it is going to change about our world. But if Turkle is right, there are other—and perhaps more important—questions to ask. Why do we have this technology? How is it being promoted? Who is benefiting from it? What is appealing about it? What needs (real or perceived) is it filling? I’m sure that others would have their own list of questions. The key is to shift focus from the technology itself to studying the world that produced it. If we are living in the “AI moment,” that is a social and cultural assessment—not just a technical one. To understand the latter, we need to study the former.