Why Oligarchy Is Better than Dictatorship

In a dictatorship, the leader can do whatever he wants. Any dictator, simply because he is a dictator, is liable to engage in illicit self-dealing and to kill people who threaten his power. However, if the leader has reasonable wants, and reasonable ideas about how to accomplish them, then there is a limit to how bad things can get. Dictators who were only moderately bad have tended to be less revolutionary than the worst dictators. Hitler and Stalin both wanted to take over the world and change their own societies completely. The Shah of Iran, on the other hand, was mostly content to hold on to the power he had inherited. The Shah was not a great guy. He killed people who threatened his power, and he was corrupt. But that is where it stopped. He did not try to root out any races or classes, he did not cause any massive famines, and he did not attempt to engineer a world war.

https://upload.wikimedia.org/wikipedia/commons/thumb/3/3e/Shah_fullsize.jpg/1200px-Shah_fullsize.jpg

Imagine a regime with a committee of five rulers. They vote to decide what to do, and the majority rules. I think this kind of oligarchy would tend to be more moderate, and therefore less destructive, than a dictatorship. Further, an oligarchy in which all members have to be unanimous before any action can be taken would be more moderate still.

A complication is that such a regime might be staffed by revolutionary ideologues who all believe the same crazy things. Would that be any different in practice from a revolutionary dictatorship? Also, why are representative democracies generally less bad than dictatorships? One popular model is that the people understand the world well enough to stop the very worst abuses. The usual examples are wars of aggression that the aggressor country might lose and manufactured famines. But why should this be? Why don’t we find countries of ideologues, where the common man is as blinkered and as willing to sign up for bloodshed as Stalin was?

One explanation: the average person simply does not know enough to be an ideologue. Being an ideologue means learning and applying a vast amount of theoretical content (I do not say information). Most people, for better or worse, have not learned all that. The average American apparently cannot explain what it means for someone to be liberal or conservative. They know which of these terms is associated with which political party. They might know which issue positions are liberal or conservative. But my understanding is they have a very limited ability to explain how these positions are supposed to cohere with each other. That means they can’t predict what ideologies prescribe “out of sample”. In a new or extreme situation, they have to respond with an open mind because they just do not know what any ideology would say they should do.

Perhaps if you had some super-educated country, then democracy would be no better than dictatorship at avoiding atrocities? I actually think this is not quite right. Although more education might make people more ideological, people differ temperamentally in how ideological they are willing to become. Someone like Deng Xiaoping can engineer a retreat from the worst communist practices for pragmatic reasons. It is not that Deng retreated from communism because he failed to understand communist ideology. Rather, he understood the ideology and was still able to consider China’s problems with an open mind.

Thus regimes in which some or all rule (oligarchies or democracies) are likely to be more moderate than regimes in which one rules. Having one ruler increases the chance that the ruler is both able and willing to apply an ideology, which I think is how people get killed in very large numbers. Further, regimes in which a larger majority of the rulers is needed to do anything will, to that extent, tend to be more moderate.

My argument suggests that the worst possible regime is not a dictatorship. It is actually an exotic kind of oligarchy in which an action is taken if any of the rulers, rather than all or a majority of them, wants to take it. A group is less likely to average out to a Stalin than one ruler is to be a Stalin. But a group is also more likely to contain a Stalin aspirant than a single person is to be one.
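
To put rough numbers on that last point (the base rate here is invented purely for illustration): if each ruler independently has a small chance of being a Stalin-style aspirant, a committee is several times more likely than a lone dictator to contain at least one, even though it is far less likely to consist entirely of such people.

```python
# Illustrative only: assume each ruler independently has a small, made-up
# probability p of being a Stalin-style aspirant.
p = 0.01   # hypothetical base rate for any one ruler
n = 5      # size of the ruling committee

p_single_ruler = p                    # chance a lone dictator is an aspirant
p_group_has_one = 1 - (1 - p) ** n    # chance a committee contains at least one

print(f"lone dictator:   {p_single_ruler:.3f}")    # 0.010
print(f"committee of {n}: {p_group_has_one:.3f}")  # ~0.049
```

Under unanimity or majority rule that one aspirant is usually outvoted; under the "any single ruler can act" rule, his presence alone is enough.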

Why Worry?

Nobody wants to die. Natural risks are known to be pretty low, because we can estimate their future frequencies from their past frequencies. As it happens, supervolcanic eruptions and planet-killing asteroids don’t come around very often. So if very few people are trying to wipe humanity out and natural risk is low, why worry?

Consider the risk posed by passively listening for alien messages (recently explored in an excellent post by Matthew Barnett). If we expect that some alien civilizations will expand very rapidly but still significantly slower than the speed of light, there will be a large margin between the frontier of their physical expansion and the furthest places they can reach by sending messages at light speed. Expansionist aliens might try to use messages to start expansion waves from new points further out or to prevent other civilizations from grabbing stuff that is in the future path of their expanding frontier.

R1 is the radius of physical colonization, R2 is the radius reachable by light speed messages.
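
To get a feel for the size of that margin (the expansion speed and timescale below are made-up illustrative numbers, not estimates): a civilization that has been expanding at half the speed of light for a billion years can be heard across a shell hundreds of millions of light-years thick that its probes have not yet reached.

```python
# Toy numbers, purely illustrative.
c = 1.0        # speed of light, in light-years per year
v = 0.5 * c    # hypothetical physical expansion speed
t = 1e9        # years since the civilization began expanding

R1 = v * t     # radius of physical colonization (light-years)
R2 = c * t     # radius its light-speed messages have reached (light-years)

print(f"R1 = {R1:.1e} ly, R2 = {R2:.1e} ly, margin = {R2 - R1:.1e} ly")
# Everything in the 5.0e+08-light-year-thick shell between R1 and R2 can
# receive the civilization's messages long before its probes arrive.
```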

Therefore, if we get an alien message, it might be bad news. It might encode instructions for some kind of nightmarish world-destroying weapon or hostile, alien-created AI. Maybe we just shouldn’t try to interpret it or run it on a computer. (This brackets all the technical problems such a scenario raises; if we get a message we don’t recognize as a message, or one we can’t make head or tail of, then there’s obviously nothing to worry about.) As one commenter summarized Matthew’s argument: “Passive SETI exposes an attack surface which accepts unsanitized input from literally anyone, anywhere in the universe. This is very risky to human civilization.”

The SETI Institute’s current plan if they get an alien message is apparently to post it on the internet. For the above reasons, this is a terrible idea. Matthew wrote: “If a respectable academic wrote a paper carefully analyzing how to deal with alien signals, informed by the study of information hazards, I think there is a decent chance that the kind people at the SETI Institute would take note, and consider improving their policy (which, for what it’s worth, was last modified in 2010)”.

I have studied information hazards a bit, and the subject is very interesting. But as far as I can tell, the study of information hazards is short on general-purpose lessons besides: be careful! One important idea is the unilateralist’s curse. If a group of independent actors discovers a piece of sensitive information, the probability that it will be released is given not by the average of the probabilities that each member will release it but by the probability that the most optimistic or risk-tolerant member will. This leads to a “principle of conformity”: in an information hazard situation, you shouldn’t just do what you think best. You should take the other group members’ assessments of how risky publicizing something is into account. Be careful!
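
A minimal sketch of the arithmetic behind the unilateralist’s curse, with made-up probabilities: if several actors each decide independently whether to publish, the overall chance of release is driven by the least cautious member, not by the group’s average.

```python
# Hypothetical release probabilities for five actors who each independently
# hold the same sensitive finding. The numbers are invented for illustration.
release_probs = [0.01, 0.02, 0.02, 0.05, 0.30]

mean_prob = sum(release_probs) / len(release_probs)  # the naive "average" view
max_prob = max(release_probs)                        # the most risk-tolerant member

# The information stays secret only if every actor independently holds back.
p_secret = 1.0
for p in release_probs:
    p_secret *= 1 - p
p_released = 1 - p_secret

print(f"average member's release probability: {mean_prob:.2f}")  # 0.08
print(f"most risk-tolerant member:            {max_prob:.2f}")   # 0.30
print(f"chance the group releases it:         {p_released:.2f}") # ~0.37
```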

Information hazard research pioneer Nick Bostrom came up with an analogy for existential risks created by future technologies. Imagine an urn containing white, gray, and black balls. A white ball is a beneficial new invention, a gray ball is an invention with mixed effects, and a black ball is an invention that destroys human civilization (for example, a bomb that any idiot could assemble once it was discovered and that would destroy the entire earth if detonated). So far, technological progress has been good for humanity. We’ve drawn lots of white balls, a few gray balls, and no black balls.

But will that continue? Bostrom wrote:

Most scientific communities have neither the culture, nor the incentives, nor the expertise in security and risk assessment, nor the institutional enforcement mechanisms that would be required for dealing effectively with infohazards. The scientific ethos is rather this: every ball must be extracted from the urn as quickly as possible and revealed to everyone in the world immediately; the more this happens, the more progress has been made; and the more you contribute to this, the better a scientist you are. The possibility of a black ball does not enter into the equation.

I think the existence of this ethos is the most important big picture reason for pessimism about existential risk. It is much harder to bound the risks created by new technologies than it is to bound natural risks. We have only had a few centuries of fast technological progress. Presumably the technologies of the future will be more powerful than the technologies of the past. Presumably things that are more powerful are riskier. And, if a black ball had come out of the urn already, there would be nobody around to ponder this question. So how much can we conclude, really, from the fact that no black ball has been drawn so far in our own history?

We should be worried that our civilization spends almost no energy worrying about this possibility. Science emerged from the breakdown of various orthodoxies and taboos. Scientists’ hatred of taboos is pretty understandable, and the fact that I find it understandable worries me all the more. I hate, hate, hate people who try to institute taboos on exploration and discussion! And that is even after spending a lot of time thinking about why future technologies might be risky. Even though I think this attitude imperils my species, I cannot suppress my allergy to taboos.

So what are the chances, then, that the “pull out as many balls as possible, color be damned” ethos changes, without other huge changes in the structure of human civilization? Maybe we could convince SETI organizations to “be careful!”. Then that (small, in the grand scheme of things) problem might be solved. But how many other groups, doing other kinds of research, would we have to convince? We either have to scramble to invent countermeasures to technologies that do not exist yet, or we have to try to persuade various research communities to change in ways that they find very uncongenial. We are running around a dam that is springing leaks, trying to plug them with our fingers. That is the kind of thing that will eventually stop working.

Past and Future Trajectory Changes

It is reasonable to expect that, if all goes well, astronomically more people will be alive in the future than are alive today. So, the argument goes, ‘the utilitarian imperative “Maximize expected aggregate utility!” can be simplified to the maxim “Minimize existential risk!”’.

But, it was later objected, the fact that the number of future people may be very large does not by itself mean that we should focus on minimizing existential risk. In some situations, it might be more effective to try to make smaller improvements to the expected welfare of future people. Trajectory changes are changes that improve the value of the long-term future through some mechanism other than preventing existential catastrophe. Trajectory changes have to be:

(1) Sticky; to count, their effects must be extremely long lasting.

(2) Not inevitable; bringing something about that would have happened anyway a bit later does not count as a trajectory change.

(3) Morally significant; events that are unimportant cannot be trajectory changes.

Whether trajectory change or existential risk mitigation is more effective obviously depends on the magnitude of existential risk. More fundamentally, it depends on how smooth or jumpy the curve of increase in the expected value of the future is. To the degree that the future is not completely determined yet, variation in human choices will result in variation in the ultimate amount of realized moral value. Good choices will result in more value than bad choices. Different worldviews imply different functions mapping quality of choices to amount of value. For instance, one might think that there are really only two equilibria in the long run: extinction and utopia. If this is your view, your function mapping performance to realized value would look something like this:

Given this function, you should probably focus on existential risk reduction. Smaller changes are precluded. Another, I think somewhat less popular, view is that extinction is quite unlikely but that realized value in the future varies significantly with performance:

Finally, you might think that existential risk is high and that the variation in value between different futures without existential catastrophe is large:
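
For concreteness, here is a rough sketch of the three views as curves; the functional forms below are invented stand-ins, not taken from any source, and are only meant to show the qualitative shapes.

```python
import numpy as np
import matplotlib.pyplot as plt

performance = np.linspace(0, 1, 200)  # quality of humanity's choices

# View 1: only two long-run equilibria, extinction or utopia (a step function).
two_equilibria = np.where(performance < 0.5, 0.0, 1.0)

# View 2: extinction is unlikely, but realized value varies a lot with performance.
smooth_variation = 0.2 + 0.8 * performance

# View 3: existential risk is high AND surviving futures vary widely in value.
risk_plus_variation = np.where(performance < 0.3, 0.0, (performance - 0.3) / 0.7)

for label, curve in [("two equilibria", two_equilibria),
                     ("little risk, smooth variation", smooth_variation),
                     ("high risk, large variation", risk_plus_variation)]:
    plt.plot(performance, curve, label=label)

plt.xlabel("quality of humanity's choices")
plt.ylabel("realized long-run value")
plt.legend()
plt.show()
```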

If we are in the world described by the second graph or the third graph, it might make sense to pursue trajectory changes in addition to or instead of existential risk reduction. But it can be hard to imagine exactly what kind of changes those would be. It is very easy to see why a nuclear war that killed everyone on earth would curtail humanity’s future. What kind of event might reduce the value of the long run future by, say, 1%? In order to build intuition, I looked into a few examples of morally significant and long lasting historical changes.

Historically Attested Trajectory Changes

The Caste System

In India, nearly everyone belongs to a traditionally endogamous group that historically occupied a specialized economic niche. Genetic evidence shows that caste endogamy is thousands of years old. The rate of intermarriage between at least some Indian sub-castes and their neighbors in the last several millennia must have been less than one percent:

People tend to think of India, with its more than 1.3 billion people, as having a tremendously large population, and indeed many Indians as well as foreigners see it this way. But genetically, this is an incorrect way to view the situation. The Han Chinese are truly a large population. They have been mixing freely for thousands of years. In contrast, there are few if any Indian groups that are demographically very large, and the degree of genetic differentiation among Indian jati [sub-caste] groups living side by side in the same village is typically two to three times higher than the genetic differentiation between northern and southern Europeans. The truth is that India is composed of a large number of small populations.

David Reich, Who We Are and How We Got Here, 145-146.

In David Reich’s analysis, fully one third of studied groups were as endogamous as or more endogamous than Ashkenazi Jews.

Textual evidence shows that hierarchical ideas of caste are also thousands of years old. In the Rig Veda, a collection of hymns composed some time in the second millennium B.C., there is a hymn in which a god, Purusha, is sacrificed and his body divided to form the basis of the castes. Purusha’s mouth formed the priestly caste, his arms the warrior caste, his legs the farmer caste, and his feet the laborer caste. Later ancient texts, like the Arthashastra and the Manusmriti, prescribe laws and policies for maintaining caste hierarchy.

Finally, Reich presents genetic evidence that the highest caste, the Brahmins, are disproportionately descended from Steppe people who conquered the Indian subcontinent in ancient times. 

Does the above evidence prove that bad treatment of lower caste people in India dates back to ancient times? Well, it is hard to be sure, because the Indian climate makes it very difficult for ancient texts to survive. However, it is at least very suggestive. Lower caste people have, in well-documented recent history, been relegated to lives of poverty and illiteracy. They have also been treated utterly without respect. It seems likely to me that people have diminishing marginal utility in status. That is, the gain in going from an extremely low and despised position to an average position is greater than the gain in moving from an average position to an extremely high position. If this is true, the caste system is negative sum in welfare terms: the misery of the low castes is not compensated in aggregate by the bliss of the high castes. All societies, even lots of groups of non-human animals, are hierarchical to some degree. While a moderate amount of inequality may not have significant welfare costs, the costs could be extreme in an extremely hierarchical society. And, at least in recent history, it is hard to think of more extreme examples than the caste system.

For the caste system to represent a historical trajectory change it has to be long-lasting, avoidable, and important. The hardest of these criteria to establish is that the caste system was avoidable. It seems unlikely that the caste system is a necessary result of military-economic competition. Talent for various jobs is unlikely to be perfectly correlated with caste (why would the correlation be perfect?). That means that, inevitably, there will be inefficiency when people perform the labor appropriate for their caste rather than the labor appropriate for their skills.

But might the caste system have been a necessary result of the ancient Indian political situation? The Indo-European invaders seem to have established weaker systems of endogamous classes or castes in ancient Persia, Rome, and Greece. But none of those societies had anything comparable to Indian caste. So caste systems stable on the scale of several millennia were not a universal result of the Indo-European conquests. The arrival of Islam in Persia is sometimes associated with the end of the ancient Persian caste system; perhaps the fact that Islam only became firmly established in India five hundred years after it did in Iran allowed caste to entrench itself in India. It is also possible that something about the ancient Indian political situation made the persistence of caste almost inevitable. Either way, once the caste system became firmly established, it proved very hard to dislodge.

Infanticide and Abortion

The Greeks, Romans, and Pre-Islamic Arabs all practiced widespread infanticide. I think the extent to which Christianity and Islam actually ended infanticide in these places, as opposed to just pushing it out of the literary sources, is not totally clear. However, there is good data on infanticide and abortion from Early Modern Japan. Ordinarily, in pre-industrial societies fertility rates were high, and population growth was slowed by high natural infant mortality. In Tokugawa Japan, fertility rates fell long before industrialization and rose again early in the industrial period, once infanticide was brought under control:

From Mabiki by Fabian Drixler

In Early Modern Japan:

Infanticide permitted a range of interpretations. Administrators worried about dwindling populations and falling revenues, and often thought that it was a love of luxury that prompted people to kill their children. Villagers complained that poverty left them no other resort, and sometimes helpfully suggested that lower taxes would do wonders for the safety of their newborns. Men of learning often believed that moral education could convince villagers to give up infanticide, but some thinkers argued that it would take a fundamental reform of the political system to achieve that goal. Men of substance who were content to work within the established order, meanwhile, reinvented themselves as moral leaders of their communities and wrote to their governments with offers to finance the eradication of infanticide. Most domains in Eastern Japan built expensive systems of welfare and surveillance. By 1850, the majority of women north and east of Edo were obliged to report their pregnancies to the authorities, and the majority of the poor could apply for subsidies to rear their children. Over the same years, a demographic revolution was set in motion. In the eighteenth century, the consensus of many villages in Eastern Japan was that parents could, and under many circumstances should, kill some of their newborns. Perhaps every third life ended in an infanticide, and the people of Eastern Japan brought up so few children that each generation was smaller than the one that went before it. By 1850, in contrast, a typical couple in the same region raised four or five children, and a long period of population growth began. By the 1920s, the average woman brought six children into the world, and in Eastern Japan, as elsewhere in the nation, overpopulation at home became an argument for expansion abroad. Eastern Japan, in other words, had experienced a reverse fertility transition.

Fabian Drixler, Mabiki, 2.

One reviewer of Mabiki acknowledged the reality of the pattern but attributed the shortfall in births to abortion rather than infanticide. Either way, the population was reduced and per capita standards of living would have been raised (assuming, what is almost certainly true, that Early Modern Japan was a Malthusian economy). In addition to the effects of infanticide on population size and average well-being, it may also have had negative psychological consequences for parents. Speaking personally, the deaths that seemed to weigh most heavily by far on older members of my family are the (unintended) deaths of children, even though those deaths all happened seventy or more years ago. Those deaths were not products of infanticide, and for all I know, the pain associated with infanticide might have been less. On the other hand, I think murder is often more troubling to the victim’s family than other causes of death. And in Roman law, the paterfamilias could decide unilaterally, without the mother’s permission, whether to accept a new baby into the family. I imagine the pain of the mothers in those situations must have been enormous. Finally, if we are going to establish moral side-constraints against anything, we probably should start with infanticide.

Human Sacrifice and Gladiatorial Combat

Rodney Stark is a Christian historical sociologist. He is not at all a neutral observer. But I think his discussion of what gladiatorial games reveal about pagan society appropriately describes the stakes:

But, perhaps above all else, Christianity brought a new conception of humanity to a world saturated with capricious cruelty and the vicarious love of death. Consider the account of the martyrdom of Perpetua. Here we learn the details of the long ordeal and gruesome death suffered by this tiny band of resolute Christians as they were attacked by wild beasts in front of a delighted crowd assembled in the arena. But we also learn that had the Christians all given in to the demand to sacrifice to the emperor, and thereby been spared, someone else would have been thrown to the animals. After all, these were games held in honor of the birthday of the emperor’s young son. And whenever there were games, people had to die. Dozens of them, sometimes hundreds. Unlike the gladiators who were often paid volunteers, those thrown to the wild animals were frequently condemned criminals, of whom it might be argued that they had earned their fates. But the issue here is not capital punishment, not even very cruel forms of capital punishment. The issue is spectacle. For the throngs in the stadia, watching people torn and devoured by beasts or killed in armed combat was the ultimate spectator sport, worthy of a boy’s birthday treat. It is difficult to comprehend the emotional life of such people. In any event, Christians condemned both the cruelties and the spectators. Thou shalt not kill, as Tertullian (De Spectaculis) reminded his readers. And, as they gained ascendancy, Christians prohibited such “games.” More important, Christians effectively promulgated a moral vision utterly incompatible with the casual cruelty of pagan custom.

Rodney Stark, The Rise of Christianity, 214-215.

Fourteen centuries of Christian rule provided ample cruelties of their own. Christian-era executions, some of which took especially dramatic forms like breaking on the wheel, often served as public spectacles. And pagan Rome wasn’t necessarily all bad. While the Carthaginians famously burned children alive as offerings to their god Moloch, the Romans “did not tolerate human sacrifice among the peoples they conquered[…] seriously curtailing the practice (if not actually eliminating it) among the Carthaginians and among the Celts.” Practices such as human sacrifice and gladiatorial combat were long-lasting side-constraint violations, even if the Malthusian model implies that the welfare effects of each individual game in the Colosseum or human sacrifice were transient.

Vegetarianism

Today, vegetarianism and other diets that limit meat consumption are more common in India than almost anywhere else. Moral vegetarianism in India dates back to ancient times. My understanding, from a professor I had as an undergrad, is that early Jains were the first to become concerned about animal welfare, and that this concern then spread to Buddhists and Hindus. While many religions involve fasts from meat for spiritual purposes, I think Indian civilization is pretty distinctive in valuing animals intrinsically. This seems likely to be a contingent fact of India’s cultural history rather than an inevitable adaptation to the circumstances. Further, the animal welfare consequences of Indian vegetarianism over the broad sweep of history are likely to have been large. There may also be human population size consequences of forgoing meat.

Alcohol Prohibition

In the modern world, alcohol might or might not be a net benefit to humanity. Many people enjoy it, but many others become addicted to it, so it is hard (if not impossible) to say whether it is good or bad on balance. In a Malthusian situation, the effects of alcohol seem more obviously negative. Either it is a form of luxury spending that can be replaced with other luxuries when the system is out of equilibrium, or it is consumed at equilibrium as a substitute for subsistence goods. Who would trade subsistence for alcohol? Alcoholics.

Orthodox forms of Islam prohibit the consumption of alcohol. And this prohibition seems to be effective enough, at least now, that it can be read off of a map:

Average alcohol consumption is, as a general matter, higher at extreme latitudes than close to the equator. While pre-existing variation may have something to do with these patterns, I find it very hard to believe that it explains everything; the differences are just too extreme. Thus the prohibition of alcohol was morally significant and has been long-lasting. The content of religious laws also seems very contingent. If Muslims had won the Battle of Tours or lost the Battle of Talas, the contemporary borders of the Islamic world (and therefore also the region of minimal alcohol consumption) might be very different.

Moral Change within the Malthusian Trap

Some readers might have noticed that none of the suggested historically attested trajectory changes involve changes in per capita living standards. That is because nearly all economies prior to the Industrial Revolution were governed by Malthusian dynamics. In a Malthusian economy, growth in technology or the capture of new natural resources results in only a transient improvement in per capita living standards. This is because the population always grows to eat up the new surplus. Once the surplus is eaten up, living standards decline again. How far do they decline? Back to subsistence; that is, back to the point at which they could not decline any further without causing the population to fall. This is why per capita incomes did not grow in a sustained way between the rise of agriculture and the Industrial Revolution.
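
A minimal toy simulation of this dynamic (all parameter values are invented for illustration): after a one-time improvement in productivity, income per head jumps, population growth accelerates, and income is gradually pushed back down to subsistence.

```python
# Toy Malthusian model with made-up parameters.
subsistence = 1.0          # income per head at which population just replaces itself
productivity = 100.0       # total output the land and technology can support
population = 100.0         # starting population, exactly at subsistence
growth_sensitivity = 0.05  # how strongly population growth responds to surplus income

for year in range(300):
    if year == 50:
        productivity *= 1.5  # a one-time technological improvement
    income = productivity / population
    # Population grows when income exceeds subsistence and shrinks when it falls below.
    population *= 1 + growth_sensitivity * (income - subsistence)
    if year in (49, 51, 100, 299):
        print(f"year {year:3d}: income per head = {income:.2f}")

# Income per head spikes right after the improvement, then decays back toward
# subsistence as the population grows to absorb the new surplus.
```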

Unless the jaws of the Malthusian trap are broken, there is no way that changes in per capita economic living standards can be made to stick. But there is more to life than per capita economic living standards. Some changes along other dimensions have been significant and long lasting. That’s why I want to push back on the tendency I sometimes see in online discussions of macrohistory to assume that the only genuinely “macro” historical events are the invention of agriculture and the Industrial Revolution.

The examples I gave of historically attested trajectory changes fit into the categories of:

Changes in Population Size

Malthus allowed that, if fertility could be controlled artificially, the increase of population might not consume increases in productivity. Thus infanticide and abortion can change both population size and equilibrium economic living standards. The history of infanticide and abortion will therefore have different assessed consequences according to different views of population ethics (apart from their inherent moral significance).

Non-Economic Welfare Changes

Everyone knows that it is possible to be poor and happy or rich and unhappy. This idea seems less relevant to the deep past because the level of poverty that nearly everyone was subjected to was so extreme. Similar levels of poverty are seen today in rich countries not among the average poor but only among the very poorest. It is hard to imagine how anyone could be happy while starving or freezing. But it is easy to imagine how someone could be made more miserable. Extreme disrespect or non-disabling physical torture could make the life of even someone living at subsistence harder, without reducing his income and thereby killing him.

Animal Welfare

Human economies were Malthusian with respect to the human population. But they also had associated populations of domesticated animals, in varied living conditions. Those animal populations did not follow Malthusian laws because people consciously regulated the size of their herds. Thus changes to human beliefs or practices related to animals can have long-lasting welfare consequences, even in a Malthusian situation.

Violation of Side-Constraints

The short-run welfare effects of many atrocities (murders, wars, violent rampages) are obviously negative in any situation. In the medium term, in the Malthusian trap, the consequences become debatable. A war that killed off 10% of the population would reduce the intensity of cultivation and might allow farmers an easier life until the system returns to equilibrium. Or maybe the acute suffering and destruction of physical capital outweigh this effect. Either way, in a Malthusian situation, the welfare effects of deadly violence will be transient. Eventually, the population will return to equilibrium, regardless of whether there was more or less suffering in the meanwhile. However, some moral theories hold that certain actions are wrong apart from their welfare consequences. And even if we are committed consequentialists, we should still not be certain that our preferred moral theory is right. So we might regard events as historical trajectory changes if they established long-lasting practices that violate the rules of deontological or virtue theories.

Addiction

Normally, people prioritize survival for themselves and their children over all other goals. There is, however, a big exception. When people are addicted to a drug, they often prioritize access to the drug over access to goods needed for survival. Because, unlike most goods, demand for an addictive drug can compete with demand for food, the spread of an addiction can reduce the maximum population that can subsist at a given level of total wealth. Also, addictive drugs introduce new sources of non-economic suffering.

The Steady State of the Future

Because of the expansion of the universe, there is only a finite amount of matter and energy that is in principle accessible from earth. The maximum amount of possible economic value per atom may be finite or it may be infinite. If it is finite, economic growth due to technological change will at some point cease. All the important, possible technology will have been invented already. If this “Technological Completion Conjecture” is correct, trajectory changes will have to act on some other mechanism than increasing the total wealth of the civilization of the future. Future civilizations would be analogous to past civilizations in that the only way they could increase per capita wealth would be population decline. The economic steady state of the future may or may not be Malthusian. Population could be artificially capped at some level above subsistence. But population and individual living standards will ultimately be capped, naturally or artificially. Thus changes to the moral trajectory of future civilization might have to take similar forms to changes to the trajectories of pre-industrial civilizations. My typology of trajectory changes possible given a fixed level of output is very likely to be incomplete. There are probably many other morally significant kinds of changes that can occur in the absence of changes to per capita income (most obviously, changes in the distribution of income might change welfare levels without changing total economic output). 

What specific aspects of the trajectory of future civilization might it be important to change? I have a few ideas, but they are very speculative. If output is again capped and Malthusian dynamics return, population axiology may begin to seem a lot less dry and academic. Factory farming might either disappear or radically expand. Digital minds might be created, and treated either humanely or horrifically. Horrific treatment could be motivated by efficiency (as it is in factory farming) or by more perverse motivations (as in human sacrifice or gladiatorial games). And new addictive drugs (and counter-measures to addiction) are very likely to be invented. 

[Crossposted to EA Forum]


Thanks to Applied Divinity Studies, Matthew Barnett, Skluug, Kenan, and Voxette for comments and discussion.

The Rise of Trade and the Spread of War

In the modern world, the richest countries often have very limited natural resources (e.g., Japan). And often, resource-rich countries are very poor because of protracted civil war (e.g., Congo-Kinshasa), misgovernment (e.g., Venezuela), or being stuck at the bottom of the value chain (the so-called Dutch Disease). It is still better to have abundant natural resources than not to have them. But there is a lot more to a modern economy than natural resource extraction. I think that prior to the Industrial Revolution it was rarer for a country blessed with rich natural resources to be poor. One extreme example of this is the division of Genghis Khan’s empire among his four sons. The eldest son, Jochi, received the area corresponding to Russia and Siberia. The state created by Jochi and his descendants is known to history as the Golden Horde. The Golden Horde was the least populous of the four sections, but it was seen as desirable because it had abundant natural resources, particularly furs. If given the choice today, I would certainly rather be Khan of China or the Middle East than of Siberia and the Great Steppe. But back then, people don’t seem to have seen it that way.

So it seems fair to say that natural resources are less central to determining which regions are considered rich or poor in the modern world than they were historically. It would also seem to follow that starting a war for the purpose of stealing natural resources makes much less sense than it used to. But, though the role of natural resources in starting wars has diminished, the role of natural resources in expanding wars once they begin has increased.

In general, a pre-industrial army is not that hard to supply. You need food, clothing, wood, various widely available metals, and maybe saltpeter. There are some exceptions; I’ve seen speculation that the Late Bronze Age collapse was caused in part by an inability to find new sources of tin for making bronze after the tin mines in what is now Afghanistan stopped operating. But, in general and for the most part, modern nations need vastly more varied and geographically dispersed resources to fight wars. I notice this whenever I read about World War II. The German and Soviet leadership were constantly freaking out about access to chromium or phosphorus or tungsten or various other obscure elements of the periodic table. One of the major reasons Japan attacked Pearl Harbor was that the Americans had subjected it to an embargo on oil exports, and Japan imported 80% of its oil from America. Prior to the invention of the internal combustion engine, a situation in which it made sense for the Japanese to attack the United States might simply never have arisen. During World War I, Germany was cut off from deposits of guano (bird and bat droppings) from South America, which it used to obtain fixed nitrogen for fertilizers and explosives. The German chemist Fritz Haber saved the day, having already invented the Haber-Bosch process. But that was just luck.

It has become far more common for rich countries to import the majority of the food they consume from abroad (though there are ancient examples of this, like the role of Egypt as the breadbasket of Rome). Fossil fuel powered transportation makes it more economical to ship heavy items like food. And food imports seem to have been a crucial factor in the escalation of the world wars.

Germany’s strategy in WWI was to use submarines to blockade England, depriving the English of supplies and starving them into surrender. (I don’t know why people always say “U-boats” in this context; it just means Unterseeboote, i.e., submarines. I guess it is the same impulse as refusing to translate Führer into “leader”.) Unrestricted submarine warfare had the side effect of leading the Germans to sink ships carrying American passengers, which expanded the war. The Germans, for their part, also suffered from difficulty importing food. It was serious enough that the effects of the Entente blockade show up in graphs of the heights of German children.

Graphs from Hunger in War and Peace by Mary Elizabeth Cox.

These days, lots of us have WWIII on the brain. But those concerned about the long-term future have additional reasons to think about great power war. The ill effects of a world war could easily be permanent.

I want to suggest a model according to which international trade in geographically concentrated natural resources increases the risk that a small war becomes a world war:

(1) Cheap international shipping and greater knowledge of how to use rare materials create more demand for geographically concentrated natural resources in peacetime.

(2) Therefore, modern economies rely to an ever greater extent on imports that cannot be insourced in an emergency.

(3) When war breaks out, access to some essential resources is lost. In the simplest case, this is because you were buying the resources from your adversary. But it could also be because neutral countries are less willing to trade with belligerents or because you are under blockade.

(4) The need for domestically unavailable natural resources spreads the war further, because if the resources cannot be bought, the only other options are doing without (which may mean accepting defeat) or stealing them from other countries by attacking them.

The idea that commerce reduces the likelihood of war has a distinguished and ancient pedigree. The reasoning goes that familiarity born of trade breeds international understanding, and that it is financially disastrous to start a war with your trading partners. That all sounds right to me. But if my argument is correct, the doux commerce thesis is only half of the story. Yes, war between major countries is less likely to start in the first place if there is lots of international trade. But if war does start, it is more likely to spread until it engulfs the globe, because countries will be faced with the choice between death and expanding the war in order to plunder necessary resources.

Constantly putting out small wildfires reduces the number of small wildfires in the short run. In the long run, it increases the number of large wildfires because putting out small ones allows a vast supply of tinder to build up. In the same way, financial bailouts might reduce the acute severity of the normal business cycle while gradually contributing to a future catastrophic economic meltdown. And international trade might make small wars less numerous and world war more likely.

What happens if you don’t allow controlled burns (source).