Politics & Society
*Review of Brian Merchant, Blood in the Machine: The Origins of the Rebellion Against Big Tech (Little, Brown and Company, 2023) and Karen Hao, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI (Penguin, 2025).*

What Now, Humanist?

By

Jennifer Delton

There is a time when the operation of the machine becomes so odious, makes you so sick at heart, that you can’t take part; you can’t even passively take part, and you’ve got to put your bodies upon the gears and upon the wheels, upon the levers, upon all the apparatus, and you’ve got to make it stop. And you’ve got to indicate to the people who run it, to the people who own it, that unless you’re free, the machine will be prevented from working at all!

You may recognize Mario Savio’s manifesto from 1964, when “the machine” was the military-industrial complex, corporate bureaucracy, and the IBM punch-card that stole individual identities. The machine is pretty much the same today, but instead of the IBM punch-card we have a more insidious computer threat on campuses: AI chatbots built on large language models (LLMs).

LLMs like ChatGPT are not only aggregators of data but also interpreters, synthesizers, and, increasingly, creators of knowledge, culture, and ideas. While not always factually accurate, their output captures tone, perspective, and complexity—in seconds. Why should a student read a whole book or article when they can read a summary that includes the arguments the professor will discuss in class? Why even read the summary? Why go through the tedium of writing a paper when ChatGPT or Claude can do it?

Professors may believe reading and writing are essential to independent thinking, which is the actual goal of education and, arguably, the basis of human freedom. But the critical thinking skills needed in the future will likely be shaped by AI, not professors born in the twentieth century.

This is because educational institutions are all in on AI. Schools have welcomed Google’s and OpenAI’s products into almost every aspect of campus life. Ohio State University requires that graduating students be “AI fluent.” Even our own Skidmore College is urging faculty to incorporate LLMs in their teaching. Meet the students where they are, say AI boosters; have them use LLMs in their classwork! The arguments are familiar. LLMs are simply a tool. Yes, they are disruptive. But that is the nature of progress. As with previously disruptive advances, such as the laptop and the internet, the benefits will eventually outweigh the consequences. In terms of knowledge production, creative capability, and resource accessibility, LLMs aid and expand the academic mission. We would be remiss if we ignored, or worse, limited AI. Parents paying $90,000 a year expect their kids to be prepared for the future. Whether you like it or not, it’s here, and we need to adapt. Those who don’t will be sitting in empty classrooms.

Resistance on campuses is inchoate and hesitant.1 Faculty recognize AI’s threat to writing, reading, and teaching, their stock in trade. Those committed to a liberal arts education fear that AI will take not just our jobs, but also our humanity. By relying on LLMs to write, read, and think for us, we fail to develop those uniquely human prerogatives in ourselves and in future generations. Why allow profit-driven, environment-depleting monopolies to automate and monetize students’ thinking and creativity? Why offload our thoughts and creative energy to Big Tech?

The problem is that LLMs are pretty amazing. Magical, even. Despite the lies and hallucinations, they know how to compile and summarize the latest research, how to identify and find sources, write annual reports, mimic tone and style, create visual presentations, diagnose maladies, strike the right tone for a condolence note, and assess solutions for myriad household and personal problems. Tech giants promised us the moon: Cures for cancer! Eternal life! Colonies on Mars! But it turns out we can be bought for less. If it makes our routine tasks easier, we’re in. We cannot resist.

Still, there are people who think AI can be resisted. These hardy few—these skeptics, resisters, book readers—insist humanity can take back control from the monopolists. Big Tech’s version of AI is not inevitable, they say. That is the message of the two books reviewed here, both written by technology journalists and both determined to convince us we still have a choice.

In Blood in the Machine, Brian Merchant offers an engaging history of the original Luddites, those 19th century weavers and hosiers who smashed the machines that stole their livelihoods. For many people, especially progressives, the Luddites were change-fearing peasants who could neither accept nor adapt to the future. Others, however, like Merchant himself, understand Luddites as heroic fighters against exploitation. Luddites were not against change or technology, but rather against the kind of change that beggared their communities while enriching a small group of factory-owners who had the audacity to call this exploitation “progress.”

For Merchant, the AI revolution is another instance of disempowering workers via automation and control. Even before LLMs, Big Tech algorithms made work more precarious, inscrutable, and easily surveilled. Now, with the improving quality of LLMs, the jobs on the chopping block are white-collar professional jobs in the media, education, and culture industries. These are the very jobs that liberal arts graduates typically go on to inhabit. Both students and their professors face an existential threat to their livelihoods—not unlike the Nottingham weavers of yore.

The book provides a lively account of the weavers, croppers, and hosiers who embraced the outlaw folk hero Ned Ludd, as well as the entrepreneurs who invented the mechanized loom and factory-owners who adopted it. We learn that Queen Elizabeth I nixed an early patent request for a mechanized loom because of its effect on her “poor subjects”: “It would assuredly bring them to ruin by depriving them of employment, thus making them beggars.” By the 19th century, however, English rulers had embraced the liberal principles of laissez-faire and their subjects were left to defend themselves.

Like Silicon Valley’s tech pioneers, mechanical loom inventors and investors saw themselves as visionaries, not capitalists. They were challenging the old world of monarchical authority just as the founders of Apple and Google challenged the entrenched industrial economy built by multinationals like General Motors and U.S. Steel. Early nineteenth-century factory owners were still bound to their workers by an age-old moral compact that dictated wages and employment. Many factory owners wanted to honor the old agreement. But if they did not adopt the labor-saving machines, they ceded the market to competitors who did. When weavers and hosiers organized and started destroying the owners’ property—what historian Eric Hobsbawm called “collective bargaining by riot”—their employers felt betrayed, as if the workers had violated the agreement.

The law came down fast on the side of property owners. The movement’s leaders were rounded up, tried, and, after Parliament passed the Frame Work Bill of 1812, executed.

The Luddite movement was a full-on rebellion. It included not just weavers and hosiers but everyone impoverished by the ripple effects of progress. The King’s troops marched into Nottinghamshire, Yorkshire, and Huddersfield, trying to keep order and protect property. Lord Byron, whose ancestral family hailed from this region, was sympathetic to the Luddites. In his brief career in the House of Lords, Byron opposed the Frame Work Bill that would make destroying the frames (machines) a capital offense. He did not condone the weavers’ violence, but asked Parliament to understand the circumstances that had led the Luddites to these actions. They needed help, not punishment. In a poem satirizing the bill’s authors, Byron wrote of them, “Who, when asked for a remedy, sent down a rope.”

Other poets and writers likewise supported the Luddites, including Charlotte Brontë, Percy Shelley, and Mary Godwin Shelley. As Romantics, they worried the new machines would destroy humans’ capacity to create and thus alter what it meant to be human. Merchant presents their lives and concerns alongside the era’s entrepreneurs and Luddites, taking seriously their humanist critiques of technology’s dangers. Mary Shelley’s eternally relevant Frankenstein; or, The Modern Prometheus (1818) is still one of the best articulations of technology’s seductions and costs from a humanist perspective—and Merchant gives it its due. But he clearly favors the Luddites’ materialist arguments over the humanist critique.

This is a superb history. But the Luddites lost. Everything in this book points to the inevitability of that loss: the illustrations of old machines, the historical figures, the broadsides, the appeal to collective action, the examples meant to show how Amazon and Uber workers are fighting back. The author seems to think that a taxicab driver who committed suicide outside City Hall to protest Uber amounts to some kind of resistance. If anything, the book shows how f*cked we are.

Merchant says we still have choices. Unions, for instance. He argues that the Luddites’ main legacy was collective action in the form of labor unions, which over the course of the 19th and 20th centuries allowed workers to gain control of their fate and fight deskilling and automation. Unfortunately, this is exactly what people hate about unions—they block progress and innovation. Plus, unions are weaker today (in the U.S.) than they have been since Congress guaranteed their rights in 1935. It may be true that collective action once allowed jobholders to counter the power of innovating employers, but how does that help us now?

Karen Hao’s Empire of AI is a classic muckraking exposé of the ways OpenAI and other tech monopolies destroy democracy and do evil. OpenAI is the company that introduced ChatGPT, one of the first LLM chatbots available for consumer use. Founded by Sam Altman, Elon Musk, and some of the best computer scientists in the world, it was originally a nonprofit research corporation intended to counter the for-profit AI efforts of Big Tech monopolies like Google.

A reporter for MIT Technology Review, Hao has been covering OpenAI since its founding in 2015. She sees OpenAI and its competitors as empires. Like past empires, these AI monopolies have the power to change worlds, destroy cultures and lay waste to the environment. Their leaders insist this power is a force for good, for progress, for improving and advancing civilization. While they compete against each other to determine which monopolist, which empire, will rule supreme, peoples’ livelihoods and entire economies crumble to dust. Eventually, however, empires fall. Hao believes this one will too and that we, the readers of this book, “can shape the future of AI together.”

By exposing the corruption, egoism, and dangers of AI monopolies, Hao hopes to galvanize the public to “wrest back control of this technology’s future.” Like Merchant, she insists the problem is not technology but who controls it. In the right collective hands, it can actually enhance human capability, and do so in a fair and democratic way.

There is a lot of shock, awe, and gossip packed into these 500 well-researched pages. The narrative follows Sam Altman, a brilliant, Barnumesque visionary, as he attempts to create an open, transparent AI system with the capacity to think (and “feel”) like a human being, and as that project leads to secrecy, scandal, environmental degradation, and economic exploitation. It is Frankenstein all over again.

Among other things, we learn about the extreme amounts of money being invested in AI (the industry is entirely supported by venture capitalists and tax breaks), the nuts, bolts, and expense of LLM training, and how Kenyans were paid two dollars an hour to moderate the violent, pornographic content LLMs picked up online. While Altman and company believe they can recreate human thinking and feeling à la the Scarlett Johansson character in the 2013 movie Her, what they have actually created merely parrots human words based on statistical predictions of which letters and words follow other letters and words.

Unsurprisingly, Hao brings out the worst elements of Altman: his lying, hyperbole, back-stabbing, and alleged abuse of his sister. Despite this, it is hard not to marvel—just a little—at these AI scientists, entrepreneurs, and venture capitalists attempting to replicate human intelligence and consciousness in a computer system. It reminds me of how muckraker Ida Tarbell’s 1905 exposé of John D. Rockefeller inadvertently left people in awe of how cleverly Rockefeller gamed the system and created an oil empire.

Although it downplays their appeal, Hao’s account nonetheless features an impressive supporting cast of brilliant, internationally diverse, and extremely young movers and shakers—most of them in their late twenties and thirties. Albanian-born Mira Murati (born in 1986) joined OpenAI in 2018 and served as its chief technology officer until 2024. Soviet-born Israeli Ilya Efimovich Sutskever helped develop the neural network technology on which today’s LLMs are based and was a co-founder of OpenAI. Jakub Pachocki, born in Poland in 1991, became OpenAI’s chief scientist and led the development of GPT-4. This kind of diversity and youth is exactly what one would expect from a dynamic emerging industry. Yet Hao ignores this actual diversity and instead identifies the lack of inclusion as another imperial tendency of Big Tech. Here she focuses on Timnit Gebru, an Ethiopian-born scientist formerly employed in Google’s ethics department. Founder of a group called Black in AI and a critic of Big Tech’s treatment of women, Gebru was also co-author of a major critical report on AI dangers, which Google tried to suppress.2 Gebru is an important part of this story—but to accept her narrow DEI critique and skip over the actual demographics of the industry is to miss something big about these tech empires.

Hao is excellent, however, at capturing how startlingly aware AI creators were and are of the danger it poses to humans. At one point, Elon Musk believed pursuing AI might destroy humanity, which is why he thought it needed to be regulated and transparent. Altman likewise proposed a public “Manhattan Project for AI,” structured “so that the tech belongs to the world via some sort of nonprofit.” As Hao shows, the “open and transparent” part of the project disappeared quickly as Altman began to consolidate talent and resources against his competition, namely Google.

While past visionaries denied or downplayed their inventions’ dangers, AI scientists seem to obsess over the dangers. A few even revel in them—Hao calls them “doomers.” Doomers often veer into the apocalyptic, asking questions like whether certain projects would lead to the complete extinction of humanity or merely “catastrophic outcomes,” meaning substantial deaths. Concerns about AI-induced human extinction fueled much of the Effective Altruism (EA) movement of the early 2020s, associated with the now-discredited crypto fraudster Samuel Bankman-Fried. A favorite of tech billionaires, EA was based on the idea that accruing wealth was a moral prerogative if that wealth was used to solve problems threatening humanity. The chief problem for many EA philanthropists was an AI-instigated extinction event.

This complicates Merchant and Hao’s argument that the problem is not the technology, but rather who controls it. What if the problem is the technology—as its inventors seem to believe? What makes Merchant and Hao so sure a union or a democratic collective could curb these dangers any better than an empire?

Hao lauds the benefits of AI—provided it is in the right hands. In a brief epilogue, she offers alternative ways to organize what she sees as a potentially liberating technology. Her first example is how AI is being used to help the Māori people of New Zealand restore their traditional language. Here is an AI that can undo empire. The AI researchers who took on this project are not in it for the money. They seek to include the Māori people in the process, getting their consent for data collection and recordings. Other organizations have followed this model, using AI to resist empires and monopolies, envisioning a new, more inclusive way forward, especially for minorities whose identities have been marginalized.

Hao identifies organizations like Gebru’s nonprofit Distributed AI Research Institute and a Queer in AI workshop as alternative ways to regain control over AI’s future. But how exactly do these small, grant-dependent organizations allow anyone to gain control over the equity-funded behemoth monopolies she has just described? This is as untenable as Merchant’s hope that unions are the way to combat the tech giants.

Like Brian Merchant’s Luddites and unions, these democratic “alternatives” reek of weakness and ineffectuality—especially in the current Trumpian political era. Unions, regulation, nonprofits run by and for identity-based minorities—all of this requires policies and legislation and public trust. Where is the political coalition that can deliver this agenda? Do they think—against all evidence—that the Democratic Party is still a viable political organization?

More to the point, neither book asks us to “make it stop,” as Savio did in 1964. As critical and informative as these books are, they concede the inevitability of AI technology in our futures and on our campuses. It is just a more leftist version of the we-have-to-adapt argument. Do we really believe that as long as some as-yet unspecified “we” controls the AI machinery instead of tech giants, somehow things will be okay?

All is not lost, however. Our campuses will still have teachers who can inspire, believe in, and be there for those students who want to learn how to think, write, and create. Not everyone will fall for the AI trap. True, there will be fewer professors and they will not have the cushy academic and economic situation that has existed since WWII. That situation was exceptional—the result of the Cold War and America’s industrial age advantage, the last vestiges of which are finally ending. But there will always be some young people who want to learn about the wonders and complexities of literature, art, history, the sciences. So we go back to educating the few rather than the many. In this way we keep alive Savio’s “We are human beings!” energy. Not through resistance as typically defined but just keeping on with what we do.

Notes


1 Full disclosure: I was inspired to write this review by the readings and discussions of a faculty reading group that dubbed its enterprise “Luddite Summer.” My thanks to my colleagues in that group.

2 Co-authored with Emily Bender and others in 2021, the report is titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” and is widely available online.