Letter: Response to Villarreal on AI

Aug. 26, 2025

Hagen Blix and Ingeborg Glimmer respond to Nicolas D Villarreal's recent review of their co-written book, Why We Fear AI.


In May, Cosmonaut published a review of our book, Why We Fear AI. Admittedly, the review (by Nicolas D Villarreal) left us somewhat confused. On the one hand, the reviewer suggests that the book is “likely as good as it gets” (for a book in the “existing traditions of high left wing theory”) and concludes that he “can recommend this book.” Indeed, one of the goals of our book was to provide an accessible introduction to materialist theory through its application to artificial intelligence and discourses around it. So we are pleased—apparently it worked!

On the other hand, Villarreal finds quite a few things to grumble about. Some of these involve basic disagreements, which are unlikely to be easily resolved. Readers can make up their own minds, of course, but we won’t be able to dive much into those. To take one example: we (following Marx) take it for granted that the logic of capital dominates the whole of capitalist society, including the capitalist class. Villarreal takes issue, since he believes that he has disproven Marx, and that the capitalist class “can defy the logic of capital.” Obviously, we won’t bridge that gap here.

At other times, we are faulted for stances we never take. We are, for instance, accused of deference to intellectual property, even though we merely mention that AI training data has been collected with an indifference to its commodity status. Whether a text was produced for the market, guided by the profit motive, or outside of it, for some other personal or social reason, AI companies have crawled, torrented, scraped, and occasionally even bought or licensed their way to gigantic piles of data that they can feed into their machines. It’s hardly “deference” to mention that AIs are often trained on intellectual property, and that—as evidenced by a variety of ongoing lawsuits—different property claims are in conflict. These conflicts are ongoing matters of fact that we should take note of if we want to understand what’s going on, where fault lines may occur, and where interests may align in surprising ways. That’s the terrain of politics, after all.

Throughout the review, Villarreal claims that we lack a positive vision for AI. We suspect that, at least occasionally, these complaints reflect a discomfort with the wider span of AI-critical positions, rather than with the concrete arguments of our book. That would fit his false claim that we show deference to intellectual property—certainly, plenty of AI critiques focus on defending intellectual property, even though we don’t. Beyond that domain, too, there are plenty of critical approaches that ultimately exhaust themselves in moralistic finger-wagging, in a “don’t you use it” attitude, aimed at shaming people for using or enjoying LLMs or other generative AI technologies. There is always an argument for diversity of tactics: when there is a good comparison to be made between using AI and crossing a picket line, shaming people should certainly be part of the political arsenal. But all too often, AI critics end up blaming individuals for reacting to the structural pressures that they operate under—say, by placing moral blame on students who use AI, rather than acknowledging that they are, in fact, reacting to the devaluation of education (by AI narrowly and capitalism more broadly). Such individualistic responses are not just fruitless; they also misdirect attention away from the real forces at play and leech energy from collective approaches that can actually work.

In our book, we situate AI in a dialectic of knowledge, in which real subsumption and social stratification, upskilling and deskilling, are all moments of the same technological-economic development, constantly unfolding in contradictory fits and starts. Capital is always in the business of devaluing skills and labor power, and knowledge and technology are always loci of class antagonism. It’s crucial to understand that AI can serve as a weapon in this conflict. Real subsumption always involves the introduction of tools that are useful not simply for production in general, but for capitalist production in particular—tools that increase the power of capital. AI certainly is a tool-weapon in this sense, aimed at different strata of the working class, from artists to knowledge workers.

As materialists, we are interested not simply in the properties and affordances of the tool (not just in what it can do, whether for capital or in general) but also in what you could call the affordances of these affordances: what effects will there be on class composition, on stratification, or on class-internal conflicts (both within the bourgeoisie and within the working class)? Will there be new sources of solidarity, new opportunities for organizing and collective responses? Those are the crucial questions, and while we certainly don’t have exhaustive answers, we tried to work out general parameters that would be useful for asking these questions in different organizing and labor contexts. That is, after all, the point of “high left-wing theory”—to read the political economy strategically, and to find opportunities for collective emancipatory action.

Villarreal, however, in apparently lumping us in with other AI critics, merely exhibits an individualistic reflex:

[T]here’s reasons to be optimistic about the impact of LLMs in the actual agency of individuals, indeed, individual workers as political agents. People with strong motivation and persistence can use AI in a way that augments their abilities and speeds up learning curves, something that’s crucial for people that only have so many free hours in the day. This is a far cry from the suggestion that AI will lead only to a further lack of agency and increase in ignorance which the authors provide. Similarly, I’m skeptical about the authors’ claims that AI adoption will only reduce the quality of products. Contrary to their predictions, companies are not simply using AI to bring the lowest common denominators of workers up to speed, but to maintain smaller workforces by speeding up existing workers, for example in software engineering jobs.

This is, to use that silly phrase, pure ideology. It is Milton Friedman having a lovely fireside chat with Norman Vincent Peale about the power of bootstraps, persistence, and positive thinking. It is the stuff of PR and LinkedIn prose, nigh indistinguishable from a random CEO’s commentary. The industry frequently puts this exact kind of “optimism” under the rubric of AI “democratizing access to skills.” Their “democracy,” and Villarreal’s “agency of individuals,” are, of course, all about embracing the metaphorical rat race, self-optimization, and the most ruthless competition between workers. In whose interest, one might wonder, but the interest of capital?

We categorically reject Villarreal’s claim that we predict AI will lead to an “increase in ignorance.” We certainly argue that AI is a tool for deskilling—that is, for reducing the price of particular skills, for depressing wages. But as we say repeatedly throughout the book, this is not a property of knowledge or skills per se, but of the bargaining power that a worker can derive from their particular skills. There are, of course, many ways for capital to undermine the bargaining power associated with particular skills. Some of them even involve the very opposite of increasing ignorance. Take an example from our book: reading and writing. Once special skills with special bargaining power, they were deskilled precisely by making them universal. No worker gets paid more for knowing something that everyone is taught. Indubitably, capitalists have tried their best to deskill, for example, programming—sometimes through coding academies and free access to learning materials, and sometimes through AI.

Contrary to what Villarreal suggests, we don’t take AI to be an attack on skills but an attack on the bargaining power associated with them. It seems to us that Villarreal is simply confused about the difference between the use value of a particular kind of labor power (say, the concrete work performed by a software engineer) and its exchange value (as with any other commodity, the amount of socially necessary labor time required for the training of said software engineer, as well as their food, housing, etc.). Certainly, if AI causes the labor time required for the production of a particular kind of labor power to go down (as Villarreal seems to suggest), we should expect its exchange value to go down as well. In other words, the result will be depressed wages.

Or, if we want to spare ourselves the trouble of applying the labor theory of value to the development of skills, and instead put it in the language of a simple supply-and-demand model: if certain skills are easier to acquire and thus cheaper to produce (i.e., if their supply curves flatten), the equilibrium will sit at a lower price point. This, it is crucial to say, can be true even if AI does not work properly, even if it merely enables the production of sub-par goods (text, images, etc.) at a sufficiently cheap rate. As we argue in detail in our book, AI will likely follow the IKEA and fast fashion model of market capture. Industries can grow to immense sizes if they can sufficiently undercut the competition on price, even if that comes at the expense of quality.
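For readers who want to see the arithmetic, here is a minimal sketch of that supply-and-demand point. It is purely illustrative: hypothetical linear curves and made-up numbers, none of them from the book or the review. The assumption being modeled is simply that cheaper skill acquisition flattens (and lowers) the supply curve, so the equilibrium price falls.

```python
# Illustrative only: hypothetical linear demand and supply curves, with
# "AI makes a skill cheaper to acquire" modeled as a flatter, lower
# supply curve. None of these numbers come from the book or the review.

def equilibrium(a: float, b: float, c: float, d: float) -> tuple[float, float]:
    """Solve demand p = a - b*q against supply p = c + d*q."""
    q = (a - c) / (b + d)  # quantity where the two curves cross
    p = a - b * q          # price at that quantity
    return p, q

# Before: the skill is costly to acquire, so supply rises steeply with price.
p0, q0 = equilibrium(a=100, b=1.0, c=20, d=2.0)

# After: acquisition is cheap, so the supply curve is flatter and starts lower.
p1, q1 = equilibrium(a=100, b=1.0, c=10, d=0.5)

print(f"before: price {p0:.1f}, quantity {q0:.1f}")  # price 73.3, quantity 26.7
print(f"after:  price {p1:.1f}, quantity {q1:.1f}")  # price 40.0, quantity 60.0
```

Note that in this toy model the price falls while the quantity sold rises, which is also the IKEA and fast fashion pattern described above: a market that expands precisely by getting cheaper, quality notwithstanding.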

Elsewhere in Cosmonaut, Villarreal has written (quite correctly, in our opinion) that:

Presently, society faces a set of new technologies under the umbrella of artificial intelligence (AI) which is […] revolutionizing the instruments and relations of production. But contrary to the wild apocalyptic and utopian fantasies of bourgeois technologists, its historic trajectory will be decided first and foremost by class struggle, rather than the hidden inner logic of an alien intelligence.

Given this quote, it comes as a surprise to us that the review almost entirely ignores our reflections on the role that AI itself will likely play in that struggle. Making theory useful for class struggle, by identifying the threats, pitfalls, and opportunities that may arise from this particular development of the means of production, is in fact what Why We Fear AI is meant to do.

Solidarity,

-Hagen & Ingeborg
