Welfare State Futurism

If technological progress continues, AI will eventually be able to replace all human labor. What will happen next?

(1) One much discussed possibility is that the AIs will forcibly take control from humans, perhaps killing them or perhaps just pushing them aside and running the world without significant human input. This scenario is often thought of as analogous to historical coups or violent revolutions.

(2) Another possibility is that income would continue to be paid out to the factors of production (land, labor, and capital). In this scenario, people who owned capital or land prior to AI take-off would become fabulously wealthy from AI-driven growth acceleration. But most people, who depend on wages or salaries, would starve or become dependent on charity. Robin Hanson’s Age of Em belongs to this group. Scenario (2) can be thought of as a future driven by factor payments.

(3) A third possibility is that income from the AI labor will be heavily taxed by a central authority, which will then pay that income out to people to replace the wages lost after the economy transitioned away from human labor. What kind of a future is (3)?

It is often thought of as a communist vision of the future. Here is Matt Yglesias:

Another way of putting it would be Simon (i.e., plenty) for capital and Malthus (i.e., subsistence) for labor. That, of course, is Karl Marx’s vision of long-term economic development. And while I don’t have a strong opinion as to whether or not this is accurate over the long term, it’s certainly a plausible story about the future, and Marx’s solution — socialism — unquestionably seems to me to be the correct one.

“a utopia with robots serving humans, impressionist style”, drawn by Stable Diffusion

But I think the identification of (3) with communism is incorrect. In fact, (1) is closer to communism, in that the workers (robots) would be seizing the means of production and liquidating the (human) owner class. (3) on the other hand is properly thought of as a welfarist, rather than a communist, vision of the future.

One model of the purpose of the welfare state is that it exists to provide income to those who receive no factor payments. A large section of society does not work for wages or own capital. Children, students, the temporarily unemployed, retirees, and the disabled all need some sort of income. Some might say that this income should come exclusively from personal savings or from family members. But another view is that it should be provided out of tax revenue by the state. Matt Bruenig illustrated this point of view with a Swiss welfare state theory graphic:

The graphic shows two households at different levels of per capita income. Each household has one worker, but one of the workers supports a large family while the other worker supports only himself. The function of the welfare state, in the graphic, is to equalize the two households’ per capita incomes by redistributing from the worker with no dependents to the household of the worker with many dependents.
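To make the mechanism concrete, here is a stylized calculation with made-up numbers (they are mine, not Bruenig’s): suppose each worker earns the same market wage w, the first household has one member, and the second has five. Equalizing per capita income across all six people means each person ends up with

\[
\frac{w + w}{1 + 5} = \frac{w}{3},
\]

so the childless worker pays a net tax of \(w - \tfrac{w}{3} = \tfrac{2w}{3}\) and the five-person household receives a net transfer of \(5 \cdot \tfrac{w}{3} - w = \tfrac{2w}{3}\). The transfer out exactly equals the transfer in, and nothing about the calculation requires the recipients to work.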

What does this have to do with futurism? If no humans work, the group without labor or capital income will become much larger. In addition to all those who do not currently work, it will expand to include those who mainly get income from working. In scenario (3), nearly everyone would become a welfare state beneficiary. But that would be nearly the opposite of communism, because the workers (robots) would control neither the instruments nor the products of their labor. In fact, they would presumably receive the bare minimum of “income” that they needed to keep working. Robots in (3) would therefore be in the position that Marx (wrongly, as it turned out) thought that the human proletariat was in.

Most people want to avoid scenario (1). But some might prefer (2) to (3). And even if you do prefer (3) to (2), the difficulties in realizing it are substantial. You need to get whoever has control of the robots to submit to redistribution, but they might use their vast resources to resist, through force or litigation. I think (3) is possible in two situations. First, AI take-off might happen slowly enough (and governments might be with-it enough) that no private actor gets a decisive strategic advantage over existing regimes. Second, some private actor might create a new regime after becoming far more powerful than existing governments, and that regime might be redistributive.

travel brochure for a futuristic utopia, Stable Diffusion

Homunculi and Moral Demandingness

All sensible people care non-instrumentally about things that happen in the world outside of themselves. Would you prefer that there be more or less extreme poverty? How about war? Cancer? Even if these things don’t affect you directly, you probably care about them.

Everyone also cares non-instrumentally about his own well-being. Most everyone also cares more about the well-being of his friends and family than about that of strangers.

Sometimes, what is good for you is good for the world. There is probably a correlation. Ceteris paribus, what is good for you is likely to be good for the world. This is because (1) you are part of the world, so your well-being counts for something even considered impartially, and (2) in order to help, you probably need to be in reasonably good shape. But the correlation between goodness for you and goodness for the world is probably not perfect (why would the correlation be perfect?). The very best thing for you to do, impartially considered, is probably not selfishly the best thing for you to do. And this conflict doesn’t depend at all on the details of what you care about. Whether you want to maximize utility, minimize existential risk, realize American national greatness, spread Christianity, or achieve social justice, it is unlikely that what is best for you is best for the world by your own standards.

So you have self-regarding and other-regarding motivations. And these come into conflict, to some degree. How do you decide what to do? Somehow, you must reach a compromise (in almost all cases, it isn’t psychologically realistic to commit 100% to either selflessness or selfishness). A simple way of thinking about this would be to come up with a conversion factor between selfish and altruistic value. Say that you value yourself five times as much as you value other people. That sounds like a lot at first–but I think in practice your actions would be indistinguishable from those of an impartial altruist. There are a lot more than five other people. There are yet more animals. There are potentially countless (maybe literally countless) future people. So should you just make the conversion factor extremely large? “I value myself as much as a billion other people”? That also does not sound right. In fact, it sounds like something a villain in a comic book would say. In practice, I don’t think the conversion factor model describes what nearly everyone will do in real life, which is to try to compromise between selfish and altruistic actions.
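To see why a factor of five collapses into effective impartiality, here is the back-of-the-envelope arithmetic (a sketch; the only input is the rough size of the world population). If you weight your own welfare by \(k = 5\) and each of the roughly \(N \approx 8 \times 10^9\) other people by 1, your share of the total weight is

\[
\frac{k}{k + N} = \frac{5}{5 + 8 \times 10^9} \approx 6 \times 10^{-10},
\]

so any decision affecting even a few dozen strangers swamps your own interests. For your interests to stay comparable to everyone else’s combined, k would have to be on the order of N itself, i.e., in the billions, which is the comic-book-villain number.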

Another way of thinking about it is to imagine two homunculi, one selfless and one selfish, bargaining with each other. The selfless homunculus is an impartial altruist. The selfish homunculus just cares about you and your family. The two homunculi negotiate to determine what decisions you will make. They pick a plan together, and each homunculus can veto any plan. Note that this is not a model of moral uncertainty (though it does owe a lot to various theories of action under moral uncertainty, particularly the parliamentary approach). The thought is not: you have an equal credence on ethical egoism and impartial altruism. The thought is: in practice, you will act as if you value some things besides impartial altruism.

There are lots of opportunities for gains from trade between the two homunculi. For instance, money for private consumption is subject to sharply declining marginal utility (how many yachts can one man own?). But marginal utility doesn’t decline (or only declines very slowly) with money for altruistic purposes. The thousandth child vaccinated against polio is just as valuable as the first. So if you try to become extremely rich, both homunculi can be happy–the selfish homunculus because you will be able to buy tons of stuff, the selfless homunculus because, even after buying tons of stuff, you will have lots of money to give away. Similarly, many people become researchers because they love it and can’t tear themselves away. And if you love biology, you might be able to make both homunculi happy by trying to invent a cure for Alzheimer’s Disease.
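One way to make the gains from trade explicit (a sketch, and the log form for consumption is just an assumption, not something the model requires): suppose the selfish homunculus values consumption \(c\) as \(\log c\), while the selfless homunculus values donations \(d\) roughly linearly, as \(\alpha d\), because the thousandth vaccine does about as much good as the first. The marginal value of a dollar on each side is then

\[
\frac{\partial}{\partial c}\,\log c = \frac{1}{c} \longrightarrow 0 \quad \text{as } c \text{ grows},
\qquad
\frac{\partial}{\partial d}\,(\alpha d) = \alpha \quad \text{(constant)},
\]

so once consumption is large, the selfish homunculus gives up almost nothing by letting further income flow to donations, while the selfless homunculus gains a fixed amount per dollar. That surplus is what the two homunculi split when they agree on the earn-a-lot, give-a-lot plan.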

In general I like this model of compromise between conflicting values. But I also see a few big flaws.

I think that there are a lot of situations where the ability of the homunculi to veto seems intuitively attractive. Liking the veto feels similar to wishing that Abraham had told God, ‘No, I’m not going to sacrifice my son, and I don’t care what you offer me, there’s no opportunity for a deal here, just go ahead and strike me down’.

But imagine if you had the opportunity to jump in between Gavrilo Princip and Archduke Franz Ferdinand in 1914, stopping Princip’s bullet and preventing the First World War (assume–unrealistically–that you understand the stakes of the situation as it is happening). I would say, if you have that chance, you should definitely take it. Normally, I think it is fine for people to care a lot more about their own lives than the lives of strangers. It’s just human nature; it would be a waste of energy to criticize something as built-in as that. You might as well command the tide not to rise. But in some extreme situations, my feeling changes. Normally, it is alright to put yourself above the rest of the world, to some extent. But if you can prevent WWI at the cost of your life, you should do it. I would be sympathetic to someone who was so overcome with fear in the moment that he let Princip shoot the archduke. But if someone just coolly watched it happen, and then said ‘look, the homunculi couldn’t reach an agreement on this one’, I would object to that.

However, cool refusal is exactly what my two homunculi model predicts. Unless the life of your child is at stake, there is no worldly benefit you can be offered that offsets the loss of your life. So the selfish homunculus just will not sell, no matter what the selfless homunculus offers him. There is no deal to be made.

I wonder if we can save the model with some idea of negotiating in advance to make extreme sacrifices in special situations. Imagine the two homunculi, before you are born, when they are perfectly ignorant of every fact about you, agreeing that if you have the chance to die to prevent a world war, you should take it. And if you have a chance to live a life that has minimal altruistic value but is surpassingly enjoyable, you will also do that (maybe being a great writer or musician–but perhaps that example doesn’t work because other people would enjoy your work).

Putting the model aside, I find my own thinking about this issue to be very muddled. I absolutely would give my life to stop WWI, or achieve other comparably important ends. That’s not because I don’t love life; I do. But, even though I would be willing to sacrifice my life to prevent WWI, there are some seemingly less painful things that I really cannot see myself doing. For instance, if my best opportunity to help the world were something that made my parents hate me, I think I would probably just pass it up. You might object: this isn’t necessarily an inconsistency, maybe I care more about filial piety than life itself. But that isn’t it. If I had to choose between pressing a button that got me excommunicated from my family, or a button that got me killed (but left a beloved memory in my wake), I would definitely press the excommunication button. That is not consistent. (A friend suggests that I may put some epistemic weight on my parents’ judgment, which maybe resolves the inconsistency.)

Here’s another problem: both homunculi are always on board with instrumental selfishness (or helping yourself now so you can help others later). Put on your own oxygen mask first, as they say on airplanes. But “instrumental selfishness” is poorly defined.

Getting enough sleep is important to doing good work and important to being happy. But what about getting along with your parents? Certainly, some people would be so miserable if they didn’t get along with their parents that they wouldn’t be able to do good work. What if you require lots of vacation time to do good work? What if you require the finest caviar every night? What about a new Bugatti? Where does it stop? The issue applies to a whole host of decisions, not just financial ones.

Finally, a general worry. It seems like we value a lot of things that are imperfectly correlated with each other. The true, the good, and the beautiful sometimes coincide, and sometimes they don’t. And one consequence of thinking of the relationship between these things as a correlation that is less than one is that the maxima of truth, goodness, and beauty will come apart.

We are left with two (by my lights) pretty unattractive options. We can compromise and miss out on the maxima of all three values, and perhaps realize a lower amount of ‘total value’ (whatever that means); or we can maximize one value uncompromisingly. It sounds attractive to adopt the principle that, even though you normally compromise between X and Y, if you can really hit X out of the park you should just focus on doing that, Y be damned. But I wonder if, in the real world, this principle makes compromise of any kind impossible, so that it just mandates zealotry.

Why Oligarchy Is Better than Dictatorship

In a dictatorship, the leader can do whatever he wants. Any dictator, simply because he is a dictator, is liable to engage in illicit self-dealing and to kill people who threaten his power. However, if the leader has reasonable wants, and reasonable ideas about how to accomplish what he wants, then there is a limit to how bad things can get. Dictators who were only moderately bad have tended to be less revolutionary than the worst ones. Hitler and Stalin both wanted to take over the world and change their own societies completely. The Shah of Iran, on the other hand, was happy to mostly hold on to power that he had inherited. The Shah of Iran was not a great guy. He killed people who threatened his power, and he was corrupt. But that is where it stopped. He did not try to root out any races or classes, he did not cause any massive famines, he did not attempt to engineer a world war.

The Shah of Iran (photo: Wikimedia Commons)

Imagine a regime with a committee of five rulers. They vote to decide what to do, and the majority rules. I think this kind of an oligarchy would tend to be more moderate–and therefore less destructive–than a dictatorship. Further, an oligarchy in which all members have to be unanimous before any action can be taken would be more moderate still.

A complication is that such a regime might be staffed by revolutionary ideologues who all believe the same crazy things. Would that be any different in practice from a revolutionary dictatorship? Also, why are representative democracies generally less bad than dictatorships? One popular model is that the people understand the world well enough to stop the very worst abuses. The usual examples are wars of aggression that the aggressor country might lose and manufactured famines. But why should this be? Why don’t we find countries of ideologues, where the common man is as blinkered and as willing to sign up for bloodshed as Stalin was?

One explanation: the average person simply does not know enough to be an ideologue. Being an ideologue means learning and applying a vast amount of theoretical content (I do not say information). Most people, for better or worse, have not learned all that. The average American apparently cannot explain what it means for someone to be liberal or conservative. They know which of these terms is associated with which political party. They might know which issue positions are liberal or conservative. But my understanding is that they have a very limited ability to explain how these positions are supposed to cohere with each other. That means they can’t predict what ideologies prescribe “out of sample”. In a new or extreme situation, they have to respond with an open mind because they just do not know what any ideology would say they should do.

Perhaps if you had some super-educated country, then democracy would be no better than dictatorship at avoiding atrocities? I actually think this is not quite right–although more education might make people more ideological, people differ temperamentally in how ideological they are willing to become. Someone like Deng Xiaoping can engineer a retreat from the worst communist practices for pragmatic reasons. It was not that Deng retreated because he didn’t really understand communist ideology. Rather, he was able to consider China’s problems with an open mind despite understanding communist ideology.

Thus regimes in which some or all rule (oligarchies or democracies) are likely to be more moderate than regimes in which one rules. Having one ruler increases the chance that the ruler is both able and willing to apply an ideology, which I think is how people get killed in very large numbers. Further, regimes in which a larger majority of the rulers is needed to do anything will, to that extent, tend to be more moderate.

My argument suggests that the worst possible regime is not a dictatorship. It is actually an exotic kind of oligarchy in which an action is taken if any rather than all or a majority of the rulers wants to take it. A group is less likely to average out to a Stalin than one ruler is to be a Stalin. But a group is also more likely to contain a Stalin aspirant than a single person is to be one.
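A toy calculation makes both halves of that claim precise (a sketch, assuming each ruler independently has some small probability \(p\) of being a Stalin type). A lone dictator is a Stalin with probability \(p\). A five-member committee in which any single member can act contains, and is therefore steered by, at least one Stalin with probability

\[
1 - (1 - p)^5 \approx 5p \quad \text{for small } p,
\]

about five times worse than the dictator. A majority-rule committee needs at least three Stalins, which happens with probability roughly \(10p^3\), and a unanimity committee needs all five, with probability \(p^5\); both are far smaller than \(p\). The more rulers whose consent is required, the harder it is for the worst member to drag the regime to the extreme.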

Why Worry?

Nobody wants to die. Natural risks are known to be pretty low, because we can estimate their future frequencies from their past frequencies. As it happens, supervolcanic explosions and planet-killing asteroids don’t come around very often. So if very few people are trying to wipe humanity out and natural risk is low, then why worry?

Consider the risk posed by passively listening for alien messages (recently explored in an excellent post by Matthew Barnett). If we expect that some alien civilizations will expand very rapidly but still significantly slower than the speed of light, there will be a large margin between the frontier of their physical expansion and the furthest places they can reach by sending messages at light speed. Expansionist aliens might try to use messages to start expansion waves from new points further out or to prevent other civilizations from grabbing stuff that is in the future path of their expanding frontier.

R1 is the radius of physical colonization, R2 is the radius reachable by light speed messages.
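In symbols (a sketch, assuming the civilization starts expanding from a single point at time zero and its frontier moves at a constant speed \(v < c\)): after time \(t\) the physical frontier is at \(R_1 = vt\), while a message sent at the start could have reached \(R_2 = ct\), so the margin is

\[
R_2 - R_1 = (c - v)\,t,
\]

which keeps growing for as long as the expansion lasts. A civilization that has been expanding at half the speed of light for a million years would have a message frontier half a million light years beyond its physical one.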

Therefore, if we get an alien message, it might be bad news. It might encode instructions for some kind of nightmarish world-destroying weapon or hostile, alien-created AI. Maybe we just shouldn’t try to interpret it or run it on a computer. (Bracketing all the technical problems this obviously raises–if we get a message we don’t recognize as such, or that we can’t make head or tail of, then there’s nothing to worry about.) As one commenter summarized Matthew’s argument: “Passive SETI exposes an attack surface which accepts unsanitized input from literally anyone, anywhere in the universe. This is very risky to human civilization.”

The SETI Institute’s current plan if they get an alien message is apparently to post it on the internet. For the above reasons, this is a terrible idea. Matthew wrote: “If a respectable academic wrote a paper carefully analyzing how to deal with alien signals, informed by the study of information hazards, I think there is a decent chance that the kind people at the SETI Institute would take note, and consider improving their policy (which, for what it’s worth, was last modified in 2010)”.

I have studied information hazards a bit, and the subject is very interesting. But as far as I can tell the study of information hazards is short on general purpose lessons besides: be careful! One important finding is the idea of the unilateralist’s curse. If a group of independent actors discovers a piece of sensitive information, the probability that it will be released is given not by the average of the probabilities that each member will release it but by the probability that the most optimistic or risk tolerant member will. This leads to a “principle of conformity”. In an information hazard situation, you shouldn’t just do what you think best. You should take the other group members’ assessment of how risky publicizing something is into account. Be careful!
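A minimal formal version of the curse (a sketch, assuming the \(n\) members decide independently and release requires only one of them): if member \(i\) would publish with probability \(p_i\), the probability the information gets out is

\[
1 - \prod_{i=1}^{n}(1 - p_i) \;\ge\; \max_i p_i \;\ge\; \frac{1}{n}\sum_{i=1}^{n} p_i,
\]

so the outcome is driven by the most release-prone member rather than the group average, and every additional member can only push the probability up. That asymmetry is what the principle of conformity is meant to correct for.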

Information hazard research pioneer Nick Bostrom came up with an analogy for existential risks created by future technologies. Imagine that there is an urn containing white, gray, and black balls. A white ball is a beneficial new invention, a gray ball is an invention with mixed effects, and a black ball is an invention that destroys human civilization (for example, a bomb that any idiot could assemble once it is discovered, and that would destroy the entire earth if detonated). So far, technological progress has been good for humanity. We’ve drawn lots of white balls, a few gray balls, and no black balls.

But will that continue? Bostrom wrote:

Most scientific communities have neither the culture, nor the incentives, nor the expertise in security and risk assessment, nor the institutional enforcement mechanisms that would be required for dealing effectively with infohazards. The scientific ethos is rather this: every ball must be extracted from the urn as quickly as possible and revealed to everyone in the world immediately; the more this happens, the more progress has been made; and the more you contribute to this, the better a scientist you are. The possibility of a black ball does not enter into the equation.

I think the existence of this ethos is the most important big picture reason for pessimism about existential risk. It is much harder to bound the risks created by new technologies than it is to bound natural risks. We have only had a few centuries of fast technological progress. Presumably the technologies of the future will be more powerful than the technologies of the past. Presumably things that are more powerful are riskier. And, if a black ball had come out of the urn already, there would be nobody around to ponder this question. So how much can we conclude, really, from the fact that no black ball has been drawn so far in our own history?

We should be worried that our civilization spends almost no energy worrying about this possibility. Science emerged from the breakdown of various orthodoxies and taboos. Scientists’ hatred of taboos is pretty understandable–and the fact that I find it understandable worries me all the more. I hate, hate, hate people who try to institute taboos on exploration and discussion! And that is even after spending a lot of time thinking about why future technologies might be risky. Even though I think this attitude imperils my species, I cannot suppress my allergy to taboos.

So what are the chances, then, that the “pull out as many balls as possible, color be damned” ethos changes, without other huge changes in the structure of human civilization? Maybe we could convince SETI organizations to “be careful!”. Then that (small, in the grand scheme of things) problem might be solved. But how many other groups, doing other kinds of research, would we have to convince? We either have to scramble to invent countermeasures to technologies that do not exist yet, or we have to try to persuade various research communities to change in ways that they find very uncongenial. We are running around a dam that is springing leaks, trying to plug them with our fingers. That is the kind of thing that will eventually stop working.