Could Computers Make Communism Work?

20.9.2016

We usually think of Communism as a political doctrine, but in practice it’s also a big logistical problem. If you get rid of free markets, how will people get the goods and services that they want and need? That’s a real conundrum, as Venezuela has discovered in the last few years, at immense cost. Even a petro-state, awash in hard currency, couldn’t make it work.

There were various attempts to figure the problem out using computers, many decades ago, but they failed to take off. In Chile, for instance, Project Cybersyn looked promising until it was ended by Pinochet’s coup in 1973. And after Mikhail Botvinnik retired from chess, he became convinced that getting a computer to solve the economy would be simpler than a chess problem, and wasted years trying to prove himself right.

Computers are much more powerful now, and there are many new techniques for handling large-scale data. Could they make capitalism obsolete?

I’m a big fan of Betteridge’s Law—the observation that if a headline is in the form of a question then the answer is no—and I’ll tell you straight up: this piece will not break that rule. But it’s a question that’s been on my mind for years, ever since Amazon started making product recommendations. The path to an almost-definitive answer turned out to have some twists and turns.

Yes with a but

Since the collapse of the Eastern Bloc, it’s become a cliché to cite F.A. Hayek’s argument that decentralised free markets embody too much information for a central planner to handle. So it’s intriguing to read in Francis Spufford’s alt-historical novel Red Plenty that ‘the idea that the computer had conclusively resolved the socialist calculation debate in socialism’s favour was very much a commonplace of the early sixties.’

In Spufford’s alternative history, Khrushchev successfully turned the Soviet system towards producing consumer goods and material comfort. One of my favourite bits is this description of a communist data scientist’s interior monologue, waiting for a computer to calculate an allocation of potatoes:

When a market is matching supply with demand, it is the actual movement of the potatoes themselves from place to place, the actual sale of the potatoes at ever-shifting prices, which negotiates a solution, by trial and error. In the computer, the effect of a possible solution can be assessed without the wasteful real-world to-ing and fro-ing; and because the computer works at the speed of flying electrons, rather than the speed of a trundling vegetable truck, it can explore the whole of the mathematical space of possible solutions, and be sure to find the very best solution there is, instead of settling for the good-enough solution that would be all there was time for, in a working day with potatoes to deliver.

Browsing through the 1960s literature on ‘socialist calculation’, I discovered that a lot of that confidence rested on a mistake. For instance, when Oskar Lange reflected in 1967 on his exchanges with right-wingers a few decades earlier, he commented that

were I to rewrite my essay today, my task would be much simpler. My answer to Hayek and Robbins would be: so what’s the trouble? Let us put the simultaneous equations on an electronic computer, and we shall obtain the solution in less than a second. The market process with its cumbersome tâtonnements [i.e., gradual price changes] appears old-fashioned.

Nerds might object that Lange was overconfident: a simulated tâtonnement can fail to converge, just as real-world price adjustment can. But that’s a solvable problem. The general point stands: given the individual demand functions of households, and the production functions of companies, a computer can figure out what goods and services everyone should make and receive, without any market prices needed.
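Lange’s point can be sketched in a few lines. Here is a toy exchange economy (all the numbers are made up for illustration): agents with Cobb-Douglas preferences and random endowments, where the computer runs the tâtonnement itself, nudging each price in the direction of its excess demand until markets clear. Cobb-Douglas economies happen to be well behaved, so this simple loop converges; in general it need not.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_goods = 5, 4

# Hypothetical toy economy: Cobb-Douglas preference weights and endowments.
weights = rng.random((n_agents, n_goods))
weights /= weights.sum(axis=1, keepdims=True)   # each agent's weights sum to 1
endow = rng.random((n_agents, n_goods)) + 0.1

def excess_demand(p):
    income = endow @ p                          # each agent's wealth at prices p
    demand = weights * income[:, None] / p      # Cobb-Douglas demand x_ij = a_ij * w_i / p_j
    return demand.sum(axis=0) - endow.sum(axis=0)

# Tâtonnement: raise prices where demand exceeds supply, lower them where it falls short.
p = np.ones(n_goods)
for _ in range(2000):
    z = excess_demand(p)
    p *= 1 + 0.1 * z / endow.sum(axis=0)
    p /= p.sum()                                # only relative prices matter

print(np.abs(excess_demand(p)).max())           # near zero once markets clear
```

No trading happens anywhere in this loop: the “flying electrons” explore candidate prices and settle on market-clearing ones, exactly as Spufford’s data scientist imagines.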

Which, unfortunately, is nowhere near good enough.

Too many secrets

The problem with the 60s-era algorithms is that they make it easy on themselves by starting too late. To be clear, it’s a huge achievement to figure out that there is a best possible way to make and distribute goods and services; Arrow and Debreu got Nobel prizes for it, and Herbert Scarf came close to a Nobel for figuring out how to grind out the answer on a computer.

The Arrow–Debreu Model, the crystalline heart of modern economics, defines a commodity as ‘a good or service completely specified physically, temporally, and spatially’. For instance, take massage therapy. It’s not enough for the computer to figure out how much Health Services in general each person should be able to consume, and not even enough to allocate Massage Therapy amounts. It needs to decide who should receive, and who should provide, a sixty-minute session of Shiatsu Massage For Pregnancy at ten in the morning in downtown Sydney on October 3.

Once you specify commodities in that much detail (which is what you need to do, if you’re going to convince economists to give up the price system) then the computational task becomes impractical. According to Cosma Shalizi’s back-of-the-envelope estimate, getting anywhere near the solution would take longer than the age of the universe.
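The shape of that estimate is easy to reproduce with deliberately rough numbers. This is an illustration in the spirit of Shalizi’s argument, not his actual figures: assume an optimiser whose cost grows like n^3.5 in the number of commodities (roughly the scaling of interior-point linear programming), a trillion fully specified commodities, and an exascale machine.

```python
# Illustrative only: every count and exponent here is an assumption.
n = 1e12                      # fully specified commodities: good x time x place
ops = n ** 3.5                # ~1e42 operations for one solve
ops_per_second = 1e18         # an exascale machine
seconds_per_year = 3.15e7
years = ops / ops_per_second / seconds_per_year
universe_age = 1.38e10        # years since the Big Bang
print(years / universe_age)   # millions of universe-lifetimes
```

The exact exponent barely matters: with commodities specified at Arrow–Debreu granularity, any super-linear algorithm drowns.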

And that’s not even mentioning the massive amount of information you’d need to set the model up in the first place. To set up and solve the Arrow–Debreu Model, you need to know how much each person values each commodity, and how effective each company is at producing them. For instance, you need to know how much each citizen values a sixty-minute session of Shiatsu Massage For Pregnancy at 10 in the morning in downtown Sydney on October 3. That means you need to know what trade-offs they’d be willing to make: would they swap it for a session at eleven, or one on November 3? Would they be okay with giving it to someone else in exchange for twenty cups of coffee, or for a fifty-minute session of Jungian therapy? Collecting this amount of information would be fairly time-consuming.
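The data-gathering side can be sketched with equally rough, purely illustrative numbers: one valuation per person per fully specified commodity, reported at one per second, around the clock.

```python
# Illustrative assumption: each person reports one valuation per commodity,
# at a rate of one valuation per second, nonstop.
people = 8e9
commodities = 1e12
total_reports = people * commodities          # 8e21 preference reports
seconds_per_year = 3.15e7
years_each = commodities / seconds_per_year   # years of answering per person
print(years_each)                             # ~32,000 years per person
```

So even before the computer starts, each citizen owes the planner tens of thousands of years of questionnaires.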

It’s possible to start thinking of workarounds—ration coupons, sign-up sheets, shared calendars. But they quickly become unwieldy, and it’s simpler and easier to use money.

Close enough is good enough?

For ages I thought it might be possible to crack this problem by changing the goal. Instead of trying to figure out the very best possible way to allocate goods and services, what if we tried to calculate an outcome that was reasonably good? The computing task would definitely be feasible if we were dealing with thousands of types of households instead of millions of individuals, and millions of roughly equivalent commodities instead of trillions of differentiated ones.

The Arrow–Debreu model gives you a way to measure how good different outcomes are, and I started using it to figure out how an approximation could work. I made slow progress (my differential topology is a bit rusty) so I checked the existing literature, only to discover a set of negative results. The general conclusion was that approximation could work, but only if you made implausibly strong assumptions that would never come close to holding true in the real world.

Finally the penny dropped, and I realised why this research direction came to nothing. With millions of people, there will always be a near-miss that is arbitrarily bad. ‘You need insulin to live, so we got you ibuprofen.’ Again, you can start to imagine work-arounds, but it’s easier to use money.

Free markets with a human face

I started by ignoring the politics and focusing on logistics. But actually, the logistical issue turns out to be a problem of psychology, and not the usual one associated with communism (motivating people to work altruistically for a common good). In practice, any replacement for a market system would need to know a huge amount of information about people’s preferences that it’s impractical to gather. F.A. Hayek spent his career being wrong about many different and important things, but on this particular point he was right.

I think there’s a general lesson here for progressive reform. Looking for an ‘alternative to capitalism’ is a great idea for things like wealth accumulation and public services, but free markets need to stay. Which is to say, our utopia will have lots of subsidies, taxes, price controls, and other devices to generate the outcomes we want, but they’ll be built into a decentralised price system rather than replacing it.

For a decade or so, I haven’t been able to leave this topic alone because I suffer from what’s called Engineer’s Disease: the compulsion to respond to any difficulty by assuming that you just need to understand the problem, figure out what the solution is, then implement the solution. I’m finally ready to admit defeat. Data science and machine learning are transforming the economy in many ways, but they’ll never replace the trundling vegetable truck.