[Caveat lector: this essay is going to get pretty weird. It is definitely guilty of being simulation woo. Because it is just a personal reflection, I didn’t want to pepper it with warnings that I am not sure about this and I don’t know about that. Please take this paragraph as such a warning.]
During an emergency, people often feel that normal decision-making procedures should be suspended and replaced with something more unitary and top-down. The Roman Republic had a constitutional provision that allowed the emergency appointment of a temporary dictator. Famously, this did not work out in the long run.
Joe Carlsmith describes a version of a similar, and similarly destructive, process on the level of individual decision making:
Sometimes I think of clinging as functioning to override more agentic processes that some other part of the system sees as at risk of messing up in a high-stakes way. In this sense, it’s a bit like an instinctive flinch away from a hot stove, but it’s tied up more closely with conscious processing. I imagine some part of my mind saying: “this whole consider-your-options-and-then-choose-with-agency thing is well and good in lots of cases, but this one is too big of a deal for such theoretical luxuries; I’m taking the wheel; I’m shutting down the whole live-in-harmony-with-your-deepest-values show for now; we’re staying up all night googling, ‘rationality’ be damned.”
But it is far from clear that you should never throw out normal decision procedures in an emergency. Even if you normally reason collaboratively with your spouse rather than barking orders, you might make an exception if she is asleep and your house is on fire.
What if your concern is not about your own safety (or the safety of the Roman Republic), but about that of the whole world? Then, if you think this kind of reasoning has merit in more familiar situations, there is an intuitive case for not having much of a life and spending absolutely as much of your time as you can trying to solve the world’s problems. In such a situation, you might also feel that you should give up on personal projects and dreams so that you can work on more pressing issues. (This intuitive case assumes that not having a life actually is what would work best; an argument in that vein is laid out ably here, but obviously there’s still plenty of room for debate.)
You might feel this way even if you are not a “take-no-prisoners utilitarian.” During the American Civil War, men who avoided military service were looked down on. The idea, I think, was that while it is normally appropriate to put your own needs ahead of those of the group, during an emergency you should prioritize the needs of the group, even at great personal cost. One way of making this model of decision making a bit more precise is to think of it in terms of moral parliamentarianism. The idea of a moral parliament is a proposed response to moral uncertainty: if you are unsure whether some course of action (like eating meat) is right, wrong, or neutral, you can model your moral decisions as if they were made by a parliament of homunculi, with representation apportioned according to your credence in different moral theories. One can imagine a moral parliament with many representatives who believe in theories that are not very demanding. In “ordinary times” (whatever that means) they vote for you to mostly take care of your own interests, while fulfilling minimal obligations of benevolence and not violating prohibitions on theft, murder, and so on. They and the impartial altruists in the parliament ordinarily compromise. In an emergency, however, all the representatives will vote to temporarily replace themselves with an impartial altruist dictator focused solely on stopping the emergency.
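The parliament-with-an-emergency-dictator idea can be made concrete with a toy sketch. To be clear, this is my own illustration, not anything from the moral-parliament literature: the theory names, credences, and votes below are made up for the example.

```python
# A toy sketch of the moral-parliament-plus-emergency-dictator model.
# The theory names, credences, and votes are illustrative assumptions.

def parliament_vote(credences, votes, emergency=False):
    """Weight each moral theory's vote by your credence in it.

    credences: dict mapping theory name -> credence (should sum to 1).
    votes: dict mapping theory name -> that theory's vote on an action
           (+1 in favor, -1 against).
    In an emergency, every delegate defers to the impartial altruist,
    as in the temporary-dictator provision described above.
    """
    if emergency:
        return votes["impartial altruism"]
    return sum(credences[t] * votes[t] for t in credences)

credences = {"common-sense morality": 0.7, "impartial altruism": 0.3}
# Vote on "drop all personal projects to work on the world's problems":
votes = {"common-sense morality": -1, "impartial altruism": +1}

print(parliament_vote(credences, votes))                  # negative: mostly live your life
print(parliament_vote(credences, votes, emergency=True))  # +1: the altruist takes the wheel
```

In ordinary times the weighted vote comes out negative (0.7 × −1 + 0.3 × +1 = −0.4), so the compromise wins; in an emergency the dictator clause overrides the weighting entirely.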
I don’t think this is necessarily a crazy response to a dangerous situation, especially if you have a real reason to think the situation is dangerous. Many people seem to have responded in something like this way to climate change, and a few people have had similar responses to other problems.
But I think this hybrid of moral parliamentarianism and the Roman constitution interacts in an interesting way with Nick Bostrom’s “simulation hypothesis.” Bostrom writes:
A technologically mature “posthuman” civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true: (1) The fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; (3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one. If (1) is true, then we will almost certainly go extinct before reaching posthumanity. If (2) is true, then there must be a strong convergence among the courses of advanced civilizations so that virtually none contains any relatively wealthy individuals who desire to run ancestor-simulations and are free to do so. If (3) is true, then we almost certainly live in a simulation. In the dark forest of our current ignorance, it seems sensible to apportion one’s credence roughly evenly between (1), (2), and (3).
If you are in a simulation, a lot of the problems you might work on seem to lose most of their importance. Human extinction in the simulation would not actually be the end of humanity. Even in the worst case, humanity would go on in many other simulations and in the real world. Your work on preventing human extinction would be much less valuable. I know there is work on the implications of the simulation hypothesis and other similar ideas for decision theory, and I have not had time to study it yet (when I do, I will blog about it). But, at least for now, I can’t imagine how you would avoid the conclusion that if you are in a simulation, your ability to cause effects on the world beyond yourself (including morally significant effects) is much less than it would be if you were in the real world. On the other hand, being in a simulation does not (I don’t think) reduce the personal significance of what you do. Your experience of joy or sorrow would be the same in the simulation as it would be in the real world.
Also, intuitively, the idea of living many lives and spending all of them working tirelessly on issues that are not intrinsically all that interesting but that seem important is just hard to accept. It would be one thing if I knew we were playing for keeps. But the reason emergencies can make people who normally try to compromise between selfish and altruistic goals toss their selfish goals out the window is that emergencies are rare. If you are condemned to live through uncountable eons of emergencies, you might feel that you should revert to your prior stance of compromising.
So, have kids, go see the Grand Canyon, do the second-best thing that seems like more fun rather than the first-best thing that seems most important. Still try to help, but try to enjoy your life as well. Because it may be that “this life as you now live it and have lived it, you will have to live once more and innumerable times more; and there will be nothing new in it, but every pain and every joy and every thought and sigh and everything unutterably small or great in your life will have to return to you all in the same succession and sequence—even this spider and this moonlight between the trees, and even this moment and I myself.”