Pre-Digestion of the Status Quo: A Review of 'Why We Fear AI'

by Nicolas D Villarreal, May 28, 2025

Nicolas D Villarreal reviews Why We Fear AI, a new book by Hagen Blix and Ingeborg Glimmer, and argues that, although the work has merits, the authors' heavy reliance on well-trodden ideas from the academic left precludes a broader critical discussion of AI.

Cover of 'Why We Fear AI: On The Interpretation of Nightmares,' by Hagen Blix and Ingeborg Glimmer (Common Notions, 2025).

Why We Fear AI is an attempt to understand the many cultural obsessions with AI and the fears it has inflamed through the lens of the existing traditions of high left-wing theory. The book is written by Hagen Blix and Ingeborg Glimmer, who have experience in cognitive science, linguistics, and machine learning, and likely due to that provenance it avoids some of the common pitfalls of other left theory books on AI, such as The Eye of the Master by Matteo Pasquinelli, which often fail to grasp basic facts about the technology. As far as articulating this type of perspective goes, the sort that perhaps could have been published by Verso (it's actually published by Common Notions), this book is likely as good as it gets. However, in this way, it also shows the limits of the current left theoretical perspective, and the need for theoretical innovation.

In terms of theoretical devices put to work in the book, it relies on the usual mechanisms such as the personification of capital in corporations and capitalists, analogy to primitive accumulation through enclosure of the commons, the genealogy of neoliberalism into the present day, and the ideology critique of self-serving mythologies for the partisans of capitalism. Here, we can see the clear influence of those who came before: Mark Fisher, David Harvey, Silvia Federici, Kohei Saito, to name a few. This passage outlines their approach well and gives an idea of what they’re going for:

In what follows, we trace five ways in which these developments manifest in current-day AI technologies: first, the decrease in the comprehensibility of our tools culminates here, insofar as current AI is, in some curious sense, incomprehensible even to the experts that create it. Second, current AI metabolizes previous human activity and labor, and turns it into its internal properties—a machine predicated on the invisibilization of labor. Third, in doing so, it effectively privatizes a previously public world, representing a novel kind of enclosure. That is, it fuses the sociotechnical development of labor invisibilization with the neoliberal/capitalist tendency towards enclosure, in a single tool. Fourth, we argue that these AIs resemble bureaucracies, and share a common ancestry with neoliberal theory.

Obviously, some of these borrowed mechanisms land better than others.

The Personification of Capital

I’ve critiqued the idea of the personification of capital elsewhere,[1] as a concept from Marx which has been falsified: capitalists can defy the logic of capital that would otherwise force them to invest and grow the forces of production in order to preserve themselves, as they have under neoliberalism. We have seen, all around us, just how a lack of investment in physical capital allows industry to atrophy. Capitalists are not just as subjected to capital as everyone else is, for the simple reason that they can collectively decide whether or not to invest, given there is not too much internal competition between them. While occasionally this personification is employed by Blix and Glimmer in this erroneous way, it is more generally used to describe the pressure to automate or intensify production, which certainly plays a large role in the way AI has been developed and deployed.

Importantly, even if macro levels of investment are declining, investment to improve or establish profitable industries still tends to occur, and these investments will attempt to create the most rationalized systems of production possible in the circumstances. Wherever there is active production, in which the technical forces of production are brought into relation with each other in order to create commodities, there will be attempts to cut costs and rationalize the process, including through the tight control of labor's actions and the centralization of technical knowledge of the production process. This is apparent in the history of Taylorism, as well as in contemporary attempts to use AI to surveil and manage workers. There are some illustrative examples in the book, including cheap outsourced workers surveilling other workers who in turn surveil factory workers, a recursive pyramid of surveillance that AI-based automation is designed to replace.

Is AI Enclosure?

The analogy to enclosure does not work nearly as well with regard to AI. Blix and Glimmer argue that AI such as Large Language Models (LLMs) constitutes an enclosure of the commons: it takes hold of the text produced by private individuals, including their intellectual property, without compensation, and distills these commons into a black box which does not reflect the labor of any particular individual. As they themselves admit, however, this doesn't really seem to enclose much, since the original text usually remains accessible. I also find deference to intellectual property misplaced in a left-wing critique.

Similarly, though Blix and Glimmer gesture to the unintelligibility of AI, as a set of neural-net weights, even to its creators, people who have interacted with LLM chatbots know very well how intelligible its outputs are as a clear distillation of the human culture that was its input. Not to mention, there is ongoing and well-funded research on how to make the neural weights of LLMs themselves more intelligible.[2] When it comes to scientific applications, a major target of critique for Blix and Glimmer because of their fear that actual human understanding will be automated away, there have been very significant innovations in exactly this kind of intelligibility.[3] And considering that many models, and the research on how to build them, including some bleeding-edge ones, are open source, I struggle to see the argument that this represents an enclosure of a human commons.

Tied into this enclosure argument is an echo of an Ivan Illich-style critique of corporations intentionally making technology less legible,[4] the lapsing ability to repair or examine the inner workings of common tools, and the monopolization by corporate managers of knowledge about how production really works. As they say:

Put another way, the changing nature of technology, of our tools, from comprehensible, tangibly repairable things to incomprehensible, usable things severs our relation to the ‘internal structure’ of objects—that is, we more and more know how to use particular tools, but not how they themselves work internally. We hardly ever interact with the internal structure of day-to-day tools directly, we rarely repair them, and little kids today do not pry open their newly acquired electronic toys to understand how they operate. As this happened (and we’ll explore why it happened in more detail later), the internal structure of more and more of our tools vanished; not, of course, from the world, but from the social view. We interact with the useful functions of these tools, but how they accomplish them, we leave to others, to experts. Both figuratively and literally, the backs of transistor radios may have been an open window into the world of circuitry—but the backs of current smartphones are glued shut.

This is certainly a big problem in contemporary society. Corporate secrecy and difficult-to-repair technologies actively inhibit the proliferation of scientific knowledge. But, yet again, I fail to see how this really applies to AI. Scientific knowledge isn't just the particular set of techniques used to produce some widget; it's also the general field of knowledge into which many production techniques can be compressed. With access to a good LLM chatbot, I've found myself much more capable of learning about many different processes, and I have likewise taught myself how to code many useful applications, including some that I use on a regular basis. For example, in just a few days I coded an app in Python, a language I am not very familiar with at all, which I now use to easily download .epub versions of webpages, something that was quite a pain before. Of course, this required some know-how regarding the limitations of LLMs, and to the extent the general population lacks this know-how, LLM use can lead to the opposite of greater scientific knowledge.
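For readers curious what a tool like that actually involves, below is a rough sketch of the general approach rather than my actual code; it assumes the third-party Python packages requests, readability-lxml, and EbookLib, and the details of any real version would differ.

```python
# Rough sketch of a "webpage to .epub" tool of the kind described above.
# Assumed dependencies (not in the standard library): requests,
# readability-lxml, and EbookLib.
import sys

import requests
from readability import Document
from ebooklib import epub


def url_to_epub(url: str, out_path: str) -> None:
    # Fetch the page and pull out the readable article body and title.
    html = requests.get(url, timeout=30).text
    doc = Document(html)
    title = doc.title() or url

    # Build a one-chapter EPUB around the extracted content.
    book = epub.EpubBook()
    book.set_identifier(url)
    book.set_title(title)
    book.set_language("en")

    chapter = epub.EpubHtml(title=title, file_name="article.xhtml", lang="en")
    chapter.content = doc.summary()
    book.add_item(chapter)

    # A table of contents and navigation files are required for a valid EPUB.
    book.toc = (chapter,)
    book.add_item(epub.EpubNcx())
    book.add_item(epub.EpubNav())
    book.spine = ["nav", chapter]

    epub.write_epub(out_path, book)


if __name__ == "__main__":
    # Usage: python page2epub.py <url> <output.epub>
    url_to_epub(sys.argv[1], sys.argv[2])
```

The point is not the particular libraries but how short the distance is between "I would like this tool to exist" and a working version, once an LLM can walk you through the unfamiliar parts.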

Still, there are reasons to be optimistic about the impact of LLMs on the actual agency of individuals, indeed, of individual workers as political agents. People with strong motivation and persistence can use AI in a way that augments their abilities and speeds up learning curves, something that is crucial for people who only have so many free hours in the day. This is a far cry from the authors' suggestion that AI will lead only to a further loss of agency and an increase in ignorance. Similarly, I'm skeptical of the authors' claim that AI adoption will only reduce the quality of products. Contrary to their predictions, companies are not simply using AI to bring the lowest common denominator of workers up to speed, but to maintain smaller workforces by speeding up existing workers, for example in software engineering jobs. This, of course, is deleterious to labor interests, but not quite in the way they describe.

Surveillance and Control

When it comes to how AI will be used directly as a method of social control, the authors do correctly outline what is going on. AI allows for the creation of automated systems that can take in raw data, whether text, video, or anything else, and classify it into an existing symbolic system. This has obvious applications for law enforcement in facial recognition, or, just as well, in the military for target identification, as high-profile cases in the Gaza War have illustrated.[5] AI is being, and will continue to be, used as a system of social control by both states and corporations, to apply discipline and violence faster and more accurately than before.
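To make concrete what "classifying raw data into an existing symbolic system" means in practice, here is a minimal, purely hypothetical sketch using an off-the-shelf zero-shot text classifier from the Hugging Face transformers library; the input text and the category labels are placeholders of my own, not examples from the book.

```python
# Hypothetical sketch: sorting raw text into a fixed set of categories.
# Assumes the Hugging Face "transformers" package (and a model download on first run).
from transformers import pipeline

# A zero-shot classifier assigns one of a predefined list of labels to arbitrary
# input text; the labels are the "existing symbolic system" being imposed.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

labels = ["routine activity", "policy violation", "requires human review"]  # placeholder categories
report = "Employee badge used at the loading dock at 2:14 a.m."             # placeholder input

result = classifier(report, candidate_labels=labels)

# The model ranks every label by score; a downstream system simply acts on the top one.
print(result["labels"][0], round(result["scores"][0], 3))
```

Which labels exist, and what happens when one of them fires, is decided elsewhere; the model only makes the sorting cheap, fast, and continuous.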

If anything, I think that Blix and Glimmer do not go far enough in this direction, and become sidetracked by focusing on the errors in these systems that are biased against minorities. If there is a means of detecting such error through statistics, it could and likely will eventually be fixed if there is pressure to do so. A more general critique of state violence and surveillance, I think, would be warranted in preparation for the day when we have 100% unbiased AI facial recognition software in the service of the police, and I can already anticipate the way that such complaints about errors could be co-opted in service of a state violence with a more rational and “human” face. There is a long history on the left of taking what are likely contingent features of a specific technology or tendency, identifying them as intentional design, only for this critique to become incorporated into state ideology.

Why Fear AI?

When it comes to the ultimate and titular question of the book, just why we fear AI, I was left somewhat disappointed. The authors say that, ultimately, the fear is derived from the downward leveling effect of AI automation potentially disrupting the ideology of meritocratic intelligence held by professionals and managers. As evidence for this assertion, they point towards different technologies and their reception among the public and AI researchers:

Which one is more scary, the one that’s easier to automate (hence presumably requiring less intelligence) and high in the hierarchy (thus nonetheless conferring more power and prestige), or the one that is harder to automate but lower in the hierarchy? Well, recall our Amazon AI, which surveils the warehouse workers. In an obvious sense, surveillance is higher in the hierarchy of managerial control than stowing is. And yet, Amazon is already automating surveillance of human stowers where they have yet to automate the stowing itself. As it turns out, the task higher in the hierarchy, surveillance, was just plain easier to automate. In all likelihood, the skills required for stowing classification form a proper subset of the skills required for stowing, since the stowers themselves need to know how to stow in accordance with the rules. Hence, stowing is almost by definition a more complex task than stowing classification. Just like driving cars, stowing requires a lot of real-time cognitive integration, it’s a demanding task. But working at an Amazon warehouse is certainly not something considered high in the social hierarchy—it’s a task that requires homo sapiens, the most intelligent animal we all know, but not a task that our society associated with the sense of intelligence that is the post-hoc justification of social hierarchy… The surveillance AI, not the stowing robot, reminds us of Skynet. Why? Because it embodies the post-hoc justification of social hierarchy, and because the oversight position in the pyramid is associated with people having power over people. And, it turns out, the people who are higher in the pyramid find the thought of being treated like the people at the bottom of the pyramid—as scary as the thought of death.

I am not sure this is empirically true, i.e., that people fear automated workplace surveillance systems more than actual robots doing menial tasks. See, for example, the comments section of any Boston Dynamics YouTube video.[6] What I think people fear are general capabilities that could be used arbitrarily by any agent, such that a moving robot will always inspire a lot of anxiety, and the more dexterity the robot has, the more anxiety it will produce. The authors themselves, in a brief aside, mention how robot anxiety comes from fears of lower-class uprisings, and this is a fear that derives precisely from the broad universality of human capabilities, which allows for any ruling class to be potentially destroyed or replaced.

The explanation about the class anxiety of the upper middle class also feels a bit too neat to me. Most of the big AI safety people raising alarms about the dangers of future runaway AI are not doing so on a whim; they have a very well-thought-out ideological system for just why they believe what they believe. There is a whole library's worth of videos and essays produced by the rationalist intellectual ecosystem on why we should necessarily expect AI to get out of control and destroy humanity. This ideology isn't really taken seriously anywhere in the book. And, to be clear, when it began, almost twenty years ago at this point, in its very overdetermined way, it wasn't the subculture of upper-middle-class Silicon Valley strivers, but of outcast and socially isolated teenagers interested in STEM and connected via online forums such as LessWrong. Indeed, this ideology was produced by a concrete labor, a labor which is now merely a ghost in the pages of this book.

Similarly, the characterization of AI as a whole is a bit too neat. There is an attempt to draw a straight line from enclosure, neoliberalism, and Taylorism to AI, wrapped in the now traditional language of left intellectuals. Certainly, a case can be made for a necessary connection between AI and the drive for automation that Marx identifies: the copying and crystallization of human labor through the application of scientific knowledge. But what the authors suggest is much narrower. There is little interest in just how AI might be different, or cause a departure from previous trends, or just what it might tell us about human cognition itself. This lack of curiosity has defined nearly all left theoretical writing about AI, which has been a great personal frustration of mine. The authors have fallen into the familiar trap of relying on well-trodden ideological paths, which is precisely what they accuse AI of doing when it presents the “pre-digestion of the status quo” as our own thoughts. In doing so, the authors neglect the materiality of AI as an object of investigation. This has important consequences: while there will be many ways AI continues existing trends of tighter control from above, it is also necessary to imagine what a positive future with AI and other technologies might look like as an alternative.

For those interested in the ways AI does continue certain dystopian trends, as well as in what the dominant left theoretical understanding of AI is today, I can recommend this book. However, I continue to hope that a broader discussion about AI will emerge on the left, one which takes the materiality and autonomous logic of its subject seriously. Some of this discussion is beginning to take form at Cosmonaut Magazine, although it remains in its infancy.


  1. “Here is the necessary connection between the tendency for the rate of profit to fall and stagnation: so long as investment is rising as a share of non-labor income, all else being equal, the rate of profit will fall. That is, so long as the economy is moving according to the logic of capital: industrializing, mechanizing, rationalizing. At some point, capitalism reaches its limits, and it cannot industrialize any further without destroying the social reproduction of the capitalists, this is the fetters of production Marx describes, after which the capitalist class becomes smaller and smaller until it disappears. The point at which capitalism hit these limits was 1979, and to overcome them, it has sacrificed the logic of capital, as well as the working class, to preserve the capitalist class.” See: Nicolas D. Villarreal, “The Tendency for the Rate of Profit to Fall, Crisis and Reformism,” Pre-History of an Encounter, September 16, 2023, https://nicolasdvillarreal.substack.com/p/the-tendency-for-the-rate-of-profit.

  2. “Tracing the Thoughts of a Large Language Model,” Anthropic.com, 2025, https://www.anthropic.com/news/tracing-thoughts-language-model.

  3. Ziming Liu et al., “KAN: Kolmogorov-Arnold Networks,” arXiv.org, 2024, https://arxiv.org/abs/2404.19756.

  4. Ivan D. Illich, Tools for Conviviality (Marion Boyars, 1985).

  5. Geoff Brumfiel, “Israel Is Using an AI System to Find Targets in Gaza. Experts Say It’s Just the Start,” NPR, December 14, 2023, https://www.npr.org/2023/12/14/1218643254/israel-is-using-an-ai-system-to-find-targets-in-gaza-experts-say-its-just-the-st.

  6. “All New Atlas | Boston Dynamics.” n.d., accessed April 18, 2024, https://www.youtube.com/watch?v=29ECwExc-_M.

About
Nicolas D Villarreal

One of many contributors writing for Cosmonaut Magazine.