Letter: On "Modern Industry and the Prospects for Socialism"

March 2, 2026

Lea Firmiani responds with key problems in scaling cybernetic management, à la the Toyota Production System, to large-scale economic planning.


Dear Editors,

I read ‘Modern Industry and the Prospects for Socialism’ by Dónal Ó Coisdealbha with great interest. The TPS (Toyota Production System) is indeed one of the greatest organizational developments of modern industry, and it’s absolutely critical that socialists understand modern industrial management rather than simply rely on criticisms of Taylorism (however sophisticated and correct those may be).

In this letter, I want to point out some strands of econophysics research that outline big problems with highly integrated, networked, low-inventory, and crucially, locally optimized production systems. I do this not because I think cybernetic management has nothing to offer socialist planners, but because scaling this method of organization to a whole economy is not trivial and we must be lucid about the challenges.

The TPS often treats inventory size as the key quantity to minimize, reducing the number of Kanban baskets in a factory and shrinking the size of warehouses in multi-site supply chains. There is a clear incentive to do so in the modern industrial economy, because shipping is relatively cheap compared to rental costs for storage space, especially near urban centers. This structural tendency toward long supply chains with ‘just in time’ fulfillment increases the fragility of the overall system, as it provides an environment in which local shocks can easily propagate (we felt this with Covid and the inflation driven by supply chain disruptions). The capitalist market is largely to blame here of course, as price fluctuations can freeze up some production systems or bring key links of the chain to the edge of ruin, but robustness with respect to shocks is an important requirement of ecosocialist industry, as environmental disruptions will only become more common.

The TPS uses small inventories to force adjacent steps of the production process to synchronize with each other; indeed, Ó Coisdealbha argues that this is deliberate, motivating iterative improvement in the production process by creating mini over/under-supply issues and adjusting production rates accordingly:

“Reducing the buffer size does not cause the required rate of supply, in principle, to either rise or fall, but it reduces the tolerance for error … These breakdowns resulting from the smaller buffer size expose structural flaws in the production system, a lack of precision which then becomes the basis for a new round of process improvement.”

Smaller bins are mathematically modelled as stronger interactions, since the output of step 1 affects step 2 more quickly and more significantly. Stronger and stronger interactions are necessary to maintain synchronization and to promote efficiency, but they can bring new, emergent network-wide problems that must be addressed. If the buffer (bin size) is large, then the rates can differ considerably without causing problems, since warehouses absorb overproduction at one step and liquidate stock during shortages. There is then no strong incentive for a factory to synchronize production rates with its supplier, and completion earlier or later than expected is fine. Smaller and smaller buffers are the mechanism of labor discipline, making sure that everything happens just on time. This prevents waste and, ideally, increases worker participation and control over their immediate conditions of labor, so that they have the freedom to innovate to meet ever more stringent production schedules.

It is especially important to note that synchronization is always local, i.e. each step synchronizes with its immediate inputs and outputs but has no direct communication or coupling with work, say, two steps removed along the chain. This is scientifically very important, because it means adjustments propagate in a wave up or down the supply chain, from one step to the next. Adjustments do not happen instantaneously everywhere at once, and some steps may be harder to modify than others, leading to the possibility of propagating jam-ups.
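This wave-like propagation can be sketched with a deliberately minimal toy model (my own illustration; the names and numbers are hypothetical, not drawn from the TPS literature). Each stage sets its production rate from purely local information, matching the pull it observed from its immediate downstream neighbor in the previous period, so a demand change at the end of the chain ripples upstream one stage per period:

```python
def rates_over_time(n_stages: int, demand_series: list[float]) -> list[list[float]]:
    """Toy chain where each stage sees only its immediate downstream
    neighbor: every period it sets its rate to the pull it observed
    from that neighbor last period. The last stage faces final demand."""
    history = [[1.0] * n_stages]  # start in a steady state of rate 1.0
    for demand in demand_series:
        prev = history[-1]
        new = [
            prev[i + 1] if i + 1 < n_stages else demand  # purely local info
            for i in range(n_stages)
        ]
        history.append(new)
    return history

# Final demand doubles at period 0; the adjustment reaches one more
# stage each period, moving upstream as a wave.
for rates in rates_over_time(4, [2.0] * 4):
    print(rates)
# [1.0, 1.0, 1.0, 1.0]
# [1.0, 1.0, 1.0, 2.0]
# [1.0, 1.0, 2.0, 2.0]
# [1.0, 2.0, 2.0, 2.0]
# [2.0, 2.0, 2.0, 2.0]
```

The caricature makes the key point visible: no stage ever communicates with work two steps away, so the chain as a whole can only adapt as fast as the adjustment wave travels.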

Recent work on “timeliness criticality” has mathematically demonstrated, albeit in a simplified system of firms with identically sized buffers, that below a critical buffer size, crises nearly always propagate to the whole production network, provoking generalized shortages. It is likely that a similar result would hold in more complex models, for theoretical reasons the authors of the paper get into but which are too technical to detail here. This is of academic interest to theoretical physicists, since the transition is mathematically very similar to phase transitions in statistical mechanics, but the consequences are much more significant. Phase transitions are by nature sharp and ‘catastrophic’, in the sense that the size of a disruption grows not linearly but algebraically. A relatively small shift in the average buffer size near the transition can suddenly cause system-wide disruptions. Above the critical buffer size, shortages happen but are limited in scope: a few steps out from the initial shortage, buffers have completely absorbed the shock and work continues as usual. The problem then resembles playing chicken with a foggy cliff: everything is fine until the ground falls out from under you. Firms can reduce inventories gradually, but disruptions will appear and spread suddenly. It is, in practice, very hard to tell where the ‘tipping points’ are without crossing them.

Moreover, the larger such a system gets, the more cycles emerge in the production network (where a shop produces input A, which is key to make B, and so forth until Z is key to make A). The complex and cyclic nature of production networks makes them vulnerable to Braess’ paradox, in which a local improvement in efficiency reduces the overall efficiency of the whole system. It’s easiest to imagine this with a traffic network: building extra lanes on one stretch of highway can have unpredictable effects on traffic elsewhere as induced demand reshapes travel flows. To my knowledge, there is no good local heuristic to determine how ‘Braessian’ a particular link in a production network is. The point being: sometimes it’s very hard to tell where innovation would actually be welcome in a complex network, and intervening to solve local jam-ups (adding lanes to one stretch of road) can actually make overall resource use worse. Indeed, even with excellent information sharing, intelligent agents can still suffer from Braess’ paradox, so algedonic signals do not necessarily avoid the issue: traffic jams still happen despite everyone using a real-time traffic information app. Iterative improvement on one piece of the network whenever a crisis pops up risks a game of whack-a-mole ad infinitum without improving efficiency. Indeed, there are games which are unsolvable by any local algorithm or collection of agents. I believe global production optimization is likely to fall into this class of disordered systems, though such a claim is admittedly very hard to prove.
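The traffic version of Braess’ paradox can be made concrete with the textbook four-node network: 4000 drivers travel from start to end via A or B, where one leg of each route takes a fixed 45 minutes and the other takes (drivers on it)/100 minutes. The sketch below is my own rendering of that standard example, comparing the selfish equilibrium before and after a free A→B shortcut is built:

```python
N = 4000  # drivers, each choosing the fastest route for themselves

def equilibrium_without_shortcut(n: int) -> float:
    # Two symmetric routes (start->A->end, start->B->end): selfish
    # drivers split evenly, so each congestible leg carries n/2 drivers.
    return (n / 2) / 100 + 45

def equilibrium_with_shortcut(n: int) -> float:
    # With a zero-cost A->B shortcut, start->A (at most n/100 = 40 min)
    # beats the fixed 45-minute leg, and so does B->end; every driver
    # takes start->A->B->end, so both congestible legs carry all n drivers.
    return n / 100 + 0 + n / 100

print(equilibrium_without_shortcut(N))  # 65.0 minutes per driver
print(equilibrium_with_shortcut(N))     # 80.0 minutes per driver
```

Adding capacity makes every driver 15 minutes slower, even though each is acting on accurate, up-to-date local information; nothing short of a global view (or closing the shortcut) fixes it.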

One might object that the TPS has built-in mechanisms to actively correct shortages, so they don’t just passively propagate. For example, on the shop floor, multi-skilled workers can change production lines to immediately boost production in lagging areas, solving shortages faster. On a larger scale, however, it becomes much more difficult to move workers or machinery around to meet local demand spikes: if machinery is hard to move, the workers must move instead, and a two-hour commute to work at a lagging production line is a far less appealing proposition.

To conclude, while management cybernetics has made immense progress, there are serious theoretical challenges to using local couplings to optimize the global performance of a complex network. These are problems for markets as well, since they too use local signals (prices) to coordinate production. Any socialist planning system will have to strike a balance between inventory cost, synchronization, and robustness, since there seems to be an inherent fragility in highly synchronized, low-buffer systems.

These issues are inherent to local optimization, where information sharing is not centralized. A global view of the system will always be necessary, since local interventions are only capable of optimizing certain types of problems. The challenge of socialist planning will be to reconcile this centralization and decentralization in a democratic fashion.

Sincerely,

Lea Firmiani

At Cosmonaut Magazine we strive to create a culture of open debate and discussion. Please write to us at submissions@cosmonautmag.com if you have any criticism or commentary you would like to have published in our letters section.