Going forward, all of the content here will also be posted at https://goodoptics.substack.com/
The substack subscribe feature is much easier to use. I will continue to post on wordpress for the time being.
A friend recommended Tyler Cowen and Alex Tabarrok’s introductory economics textbook, Modern Principles of Economics, to me. It’s been almost ten years since I took an economics class, so I decided to take a look. I think I have noticed an exception to the law of comparative advantage as it is presented by that book. I figure that either I have made a mistake, or this exception is already very familiar to those in the know. If I’m right, the exception would have no real world significance whatsoever, but it might provide an interesting way of looking at why the law of comparative advantage is true. If I have made a mistake, hopefully someone will point that out in the comments.
Let’s start by laying out comparative advantage. Why are trade and the division of labor necessary? There are three basic reasons:
(1) Tastes differ. If wild blackberries grow on my land and wild strawberries grow on yours, and we each prefer the other kind of berries, we can make ourselves better off by trading.
(2) Specialization allows learning and economies of scale. The example given in the book is that, if each of us were forced to live in the wilderness, growing our own food, building our own shelter, and making our own tools and clothing, we would probably starve. But one modern farmer can grow food for thousands of people. This is because farmers can learn far more about farming than can people who also have to learn about tool making, weaving, building, etc. Specialization enables the development of more knowledge. Specialization also makes it possible to invest in useful equipment that would not be justified if you were only growing food for personal use. Tractors are more efficient than trowels or draft animals, but a subsistence farmer wouldn’t be cultivating enough land for it to make sense to use a tractor.
(3) Comparative advantage.
[If you already know about comparative advantage, you can skip to the section “Fastistan vs. Slowistan” below; but maybe you will want to brush up like I did.]
Imagine a world economy with two countries: Burkina Faso and Britain. This economy also has only two products: shoes and socks. Britain can produce 50 pairs of shoes per year, or 50 pairs of socks, or any combination of the two adding up to 50 pairs. This graph represents the possible combinations of shoes and socks that Britain can produce in a year:
Note that the slope of this “Production Possibility Frontier” (PPF) is 1: each additional pair of socks costs Britain one pair of shoes.
Burkina Faso can produce 10 pairs of shoes or 20 pairs of socks. Britain can produce more of both products than Burkina Faso can: it has an absolute advantage in both.
Note that the slope of Burkina Faso’s PPF is 1/2: each additional pair of socks costs only half a pair of shoes.
What could Britain gain from a trading partner who, seemingly, brings nothing additional to the table?
This is where comparative advantage comes in. Consider the opportunity costs of production for each country.
In Britain, producing a pair of socks means giving up a whole pair of shoes; in Burkina Faso, it means giving up only half a pair. So even though Britain out-produces Burkina Faso in everything, Burkina Faso is the lower-cost producer of socks and Britain the lower-cost producer of shoes, and both countries can be made better off by specializing and trading. As Cowen and Tabarrok put it: “The theory of comparative advantage not only explains trade patterns but it also tells us something remarkable: A country (or a person) will always be the low-cost seller of some good. The reason is clear: The greater the advantage a country has in producing A, the greater the cost to it of producing B” (pages 18-19).
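To make the arithmetic concrete, here is a small Python sketch (my own illustration, not from the textbook) that computes each country’s opportunity costs from its maximum outputs and identifies the low-cost producer of each good:

```python
# Maximum annual output (pairs) if a country devotes all of its resources to one good.
# The numbers are the two-country, two-good example above.
max_output = {
    "Britain":      {"shoes": 50, "socks": 50},
    "Burkina Faso": {"shoes": 10, "socks": 20},
}

def opportunity_cost(country, good, other):
    """Pairs of `other` foregone to produce one pair of `good`."""
    m = max_output[country]
    return m[other] / m[good]

for country in max_output:
    for good, other in [("shoes", "socks"), ("socks", "shoes")]:
        cost = opportunity_cost(country, good, other)
        print(f"{country}: 1 pair of {good} costs {cost:g} pairs of {other}")

# Shoes cost 1 pair of socks in Britain but 2 pairs in Burkina Faso;
# socks cost 1 pair of shoes in Britain but only 0.5 pairs in Burkina Faso.
# Britain is the low-cost producer of shoes, Burkina Faso of socks,
# even though Britain has an absolute advantage in both goods.
```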
Fastistan vs. Slowistan

Imagine two countries that are almost identical. The only difference is that in the first country, Fastistan, all productive machinery operates twice as fast as in the second country, Slowistan.
Fastistan can produce 10 pairs of shoes or 10 pairs of socks in a year.
Slowistan can produce 5 pairs of shoes or 5 pairs of socks.
Here are the opportunity costs of producing shoes and socks in Fastistan and Slowistan: in both countries, a pair of shoes costs one pair of socks, and a pair of socks costs one pair of shoes.
Note that the opportunity costs are the same. That is because, no matter how much more productive Fastistan is than Slowistan in absolute terms, what determines the opportunity cost is the ratio of foregone production to output, not the absolute amounts. So if two countries can produce all products in the same ratio, just in different amounts, their opportunity costs do not differ. It follows, I think, that neither country would be made better off by trade.
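Running the same calculation on the Fastistan and Slowistan numbers (again, my own sketch) shows the point: scaling a country’s entire PPF by a constant leaves every opportunity cost unchanged.

```python
# Same calculation as above, with the Fastistan/Slowistan numbers.
max_output = {
    "Fastistan": {"shoes": 10, "socks": 10},
    "Slowistan": {"shoes": 5, "socks": 5},
}

for country, m in max_output.items():
    shoes_cost = m["socks"] / m["shoes"]   # socks foregone per pair of shoes
    socks_cost = m["shoes"] / m["socks"]   # shoes foregone per pair of socks
    print(f"{country}: 1 pair of shoes costs {shoes_cost:g} pairs of socks; "
          f"1 pair of socks costs {socks_cost:g} pairs of shoes")

# Both countries print a cost of 1 in each direction. Doubling every output
# leaves the ratios, and therefore the opportunity costs, unchanged, so there
# is no comparative advantage for trade to exploit.
```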
I don’t think that this fact has any real world implications–you are unlikely to get a situation like that of Fastistan vs. Slowistan in the real world. Modern economies produce millions of different products. If there were really only two products, it might be possible in practice to get two countries with PPFs with the same slope. But such a thing is very, very unlikely with a PPF that exists in millions of dimensions.
That leads me to a few concluding questions:
1. If the PPFs have the same slope, there is no comparative advantage, only absolute advantage (I think). Does it follow that the more the slopes of the PPFs differ, the greater the gains from trade?
2. How do economists model trade in real economies, which have more than a few products? I once was talking to an economist about some class of models, and I asked him if every product gets its own dimension, leaving you with an ultra-high dimensionality monstrosity. He said that in principle you could model it that way, but that actually was too complicated to do in practice. He then explained what they do instead–but I don’t recall what he said! How is this handled?
If technological progress continues, AI will eventually be able to replace all human labor. What will happen next?
(1) One much discussed possibility is that the AIs will forcibly take control from humans, perhaps killing them or perhaps just pushing them aside and running the world without significant human input. This scenario is often thought of as analogous to historical coups or violent revolutions.
(2) Another possibility is that income would continue to be paid out to the factors of production (land, labor, and capital). In this scenario, people who owned capital or land prior to AI take-off would become fabulously wealthy from AI driven growth acceleration. But most people, who depend on wages or salaries, would starve or become dependent on charity. Robin Hanson’s Age of Em belongs to this group. Scenario (2) can be thought of as a future driven by factor payments.
(3) A third possibility is that income from the AI labor will be heavily taxed by a central authority, which will then pay that income out to people to replace the wages lost after the economy transitioned away from human labor. What kind of a future is (3)?
It is often thought of as a communist vision of the future. Here is Matt Yglesias:
Another way of putting it would be Simon (i.e., plenty) for capital and Malthus (i.e., subsistence) for labor. That, of course, is Karl Marx’s vision of long-term economic development. And while I don’t have a strong opinion as to whether or not this is accurate over the long term, it’s certainly a plausible story about the future, and Marx’s solution — socialism — unquestionably seems to me to be the correct one.
But I think the identification of (3) with communism is incorrect. In fact, (1) is closer to communism, in that the workers (robots) would be seizing the means of production and liquidating the (human) owner class. (3) on the other hand is properly thought of as a welfarist, rather than a communist, vision of the future.
One model of the purpose of the welfare state is that it exists to provide income to those who receive no factor payments. A large section of society does not work for wages or own capital. Children, students, the temporarily unemployed, retirees, and the disabled all need some sort of income. Some might say that this income should come exclusively from personal savings or from family members. But another view is that it should be provided out of tax revenue by the state. Matt Bruenig illustrated this point of view with a Swiss welfare state theory graphic:
The graphic shows two households at different levels of per capita income. Each household has one worker, but one of the workers supports a large family while the other worker supports only himself. The function of the welfare state, in the graphic, is to equalize the two workers’ incomes by redistributing from the worker with no dependents to the household of the worker with many dependents.
What does this have to do with futurism? If no humans work, the group without labor or capital income will become much larger. In addition to all those who do not currently work, it will expand to include those who mainly get income from working. In scenario (3), nearly everyone would become a welfare state beneficiary. But that would be nearly the opposite of communism, because the workers (robots) would control neither the instruments nor the products of their labor. In fact, they would presumably receive the bare minimum of “income” that they needed to keep working. Robots in (3) would therefore be in the position that Marx (wrongly, as it turned out) thought that the human proletariat was in.
Most people want to avoid scenario (1). But some might prefer (2) to (3). And even if you do prefer (3) to (2), the difficulties in realizing it are substantial. You need to get whoever has control of the robots to submit to redistribution, but they might use their vast resources to resist, through force or litigation. I think (3) is possible in two situations. First, AI take-off might happen slowly enough (and governments might be with-it enough) that no private actor gets a decisive strategic advantage over existing regimes. Second, some private actor might create a new regime after becoming far more powerful than existing governments, and that regime might be redistributive.
All sensible people care non-instrumentally about things that happen in the world outside of themselves. Would you prefer that there be more or less extreme poverty? How about war? Cancer? Even if these things don’t affect you directly, you probably care about them.
Everyone also cares non-instrumentally about his own well-being. Most everyone also cares more about the well-being of his friends and family than about that of strangers.
Sometimes, what is good for you is good for the world; there is probably a correlation. Ceteris paribus, what is good for you is likely to be good for the world, because (1) you are part of the world, so your well-being counts for something even considered impartially, and (2) in order to help, you probably need to be in reasonably good shape. But the correlation between goodness for you and goodness for the world is probably not perfect (why would it be?). The very best thing for you to do, impartially considered, is probably not selfishly the best thing for you to do. And this conflict doesn’t depend at all on the details of what you care about. Whether you want to maximize utility, minimize existential risk, realize American national greatness, spread Christianity, or achieve social justice, it is unlikely that what is best for you is best for the world by your own standards.
So you have self-regarding and other-regarding motivations. And these come into conflict, to some degree. How do you decide what to do? Somehow, you must reach a compromise (in almost all cases, it isn’t psychologically realistic to commit 100% to either selflessness or selfishness). A simple way of thinking about this would be to come up with a conversion factor between selfish and altruistic value. Say that you value yourself five times as much as you value other people. That sounds like a lot at first–but I think in practice your actions would be indistinguishable from those of an impartial altruist. There are a lot more than five other people. There are yet more animals. There are potentially countless (maybe literally countless) future people. So should you just make the conversion factor extremely large? “I value myself as much as a billion other people”? That also does not sound right. In fact, it sounds like something a villain in a comic book would say. So I don’t think the conversion factor model describes what nearly everyone actually does in real life, which is to compromise between selfish and altruistic actions.
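To see why a factor of five collapses into near-impartiality, here is a toy calculation (entirely my own sketch, assuming log utility for personal spending and a constant, arbitrary value per dollar given away):

```python
# Toy model, not from the post: maximize self_weight * log(own spending) plus a
# constant value per dollar donated. The first-order condition
# self_weight / own_spending = value_per_dollar pins down personal spending.
def optimal_split(budget, self_weight, value_per_dollar=0.001):
    own = min(budget, self_weight / value_per_dollar)
    return own, budget - own

for w in [1, 5, 1_000_000_000]:
    own, donated = optimal_split(budget=1_000_000, self_weight=w)
    print(f"self-weight {w:>13,}: spend ${own:,.0f} on yourself, give away ${donated:,.0f}")

# A self-weight of 1 (impartial) and a self-weight of 5 both give away almost the
# whole budget; only a comic-book weight like a billion keeps everything for yourself.
```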
Another way of thinking about it is to imagine two homunculi, one selfless and one selfish, bargaining with each other. The selfless homunculus is an impartial altruist. The selfish homunculus cares only about you and your family. The two homunculi negotiate to determine what decisions you will make: they pick a plan together, and each homunculus can veto any plan. Note that this is not a model of moral uncertainty (though it does owe a lot to various theories of action under moral uncertainty, particularly the parliamentary approach). The thought is not: you have equal credence in ethical egoism and impartial altruism. The thought is: in practice, you will act as if you value some things besides impartial altruism.
There are lots of opportunities for gains from trade between the two homunculi. For instance, money for private consumption is subject to sharply declining marginal utility (how many yachts can one man own?). But marginal utility doesn’t decline (or only declines very slowly) with money for altruistic purposes. The thousandth child vaccinated against polio is just as valuable as the first. So if you try to become extremely rich, both homunculi can be happy–the selfish homunculus because you will be able to buy tons of stuff, the selfless homunculus because, even after buying tons of stuff, you will have lots of money to give away. Similarly, many people become researchers because they love it and can’t tear themselves away. And if you love biology, you might be able to make both homunculi happy by trying to invent a cure for Alzheimer’s Disease.
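Under the same toy assumptions as above (log utility of personal spending for the selfish homunculus, dollars donated for the selfless one), a small sketch of my own shows why the earn-a-lot-and-give plan survives a double veto:

```python
import math

# Toy comparison of two plans (my own illustration, not the author's model).
plans = {
    "comfortable job, nothing left over": {"own_spending": 80_000, "donated": 0},
    "high-earning job, give away the rest": {"own_spending": 150_000, "donated": 850_000},
}

for name, p in plans.items():
    selfish = math.log(p["own_spending"])   # sharply diminishing returns to consumption
    selfless = p["donated"]                 # the thousandth vaccination counts like the first
    print(f"{name}: selfish utility {selfish:.2f}, selfless utility {selfless:,}")

# The second plan scores higher for BOTH homunculi, so neither vetoes it.
# That dominance is the kind of gain from trade the bargaining picture points at.
```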
In general I like this model of compromise between conflicting values. But I also see a few big flaws.
I think that there are a lot of situations where the ability of the homunculi to veto seems intuitively attractive. Liking the veto seems like a similar feeling to wishing that Abraham had told God, ‘No, I’m not going to sacrifice my son, and I don’t care what you offer me, there’s no opportunity for a deal here, just go ahead and strike me down’.
But imagine if you had the opportunity to jump in between Gavrilo Princip and Archduke Franz Ferdinand in 1914, stopping Princip’s bullet and preventing the First World War (assume–unrealistically–that you understand the stakes of the situation as it is happening). I would say, if you have that chance, you should definitely take it. Normally, I think it is fine for people to care a lot more about their own lives than the lives of strangers. It’s just human nature; it would be a waste of energy to criticize something as built-in as that. You might as well command the tide not to rise. But in some extreme situations, my feeling changes. Normally, it is alright to put yourself above the rest of the world, to some extent. But if you can prevent WWI at the cost of your life, you should do it. I would be sympathetic to someone who was so overcome with fear in the moment that he let Princip shoot the archduke. But if someone just coolly watched it happen, and then said ‘look, the homunculi couldn’t reach an agreement on this one’, I would object to that.
However, cool refusal is exactly what my two homunculi model predicts. Unless the life of your child is at stake, there is no worldly benefit you can be offered that offsets the loss of your life. So the selfish homunculus just will not sell, no matter what the selfless homunculus offers him. There is no deal to be made.
I wonder if we can save the model with some idea of negotiating in advance to make extreme sacrifices in special situations. Imagine the two homunculi, before you are born when they are perfectly ignorant of every fact about you, agreeing that if you have the chance to die to prevent a world war, you should take it. And if you have a chance to live a life of minimal altruistic value but that is sufficiently surpassingly enjoyable, you will also do that (maybe being a great writer or musician–but perhaps that example doesn’t work because other people would enjoy your work).
Putting the model aside, I find my own thinking about this issue to be very muddled. I absolutely would give my life to stop WWI, or achieve other comparably important ends. That’s not because I don’t love life; I do. But, even though I would be willing to sacrifice my life to prevent WWI, there are some seemingly less painful things that I really cannot see myself doing. For instance, if my best opportunity to help the world were something that made my parents hate me, I think I would probably just pass it up. You might object: this isn’t necessarily an inconsistency, maybe I care more about filial piety than life itself. But that isn’t it. If I had to choose between pressing a button that got me excommunicated from my family, or a button that got me killed (but left a beloved memory in my wake), I would definitely press the excommunication button. That is not consistent. (A friend suggests that I may put some epistemic weight on my parents’ judgment, which maybe resolves the inconsistency.)
Here’s another problem: both homunculi are always on board with instrumental selfishness (or helping yourself now so you can help others later). Put on your own oxygen mask first, as they say on airplanes. But “instrumental selfishness” is poorly defined.
Getting enough sleep is important to doing good work and important to being happy. But what about getting along with your parents? Certainly, some people would be so miserable if they didn’t get along with their parents that they wouldn’t be able to do good work. What if you require lots of vacation time to do good work? What if you require the finest caviar every night? What about a new Bugatti? Where does it stop? The issue applies to a whole host of decisions, not just financial ones.
Finally, a general worry. It seems like we value a lot of things that are imperfectly correlated with each other. The true, the good, and the beautiful sometimes coincide, and sometimes they don’t. And one consequence of thinking of the relationship between these things as a correlation that is less than one is that the maxima of truth, goodness, and beauty will come apart.
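A quick simulation (again, just my own illustration) makes the point concrete: even if two values, say truth and beauty, are strongly correlated across your options, the option that maximizes one is rarely the option that maximizes the other.

```python
import random

random.seed(0)
n_options, correlation, trials = 1_000, 0.8, 500

same_max = 0
for _ in range(trials):
    options = []
    for _ in range(n_options):
        # Draw two correlated scores per option (bivariate normal with corr = 0.8).
        shared = random.gauss(0, 1)
        truth = shared
        beauty = correlation * shared + (1 - correlation**2) ** 0.5 * random.gauss(0, 1)
        options.append((truth, beauty))
    best_truth = max(range(n_options), key=lambda i: options[i][0])
    best_beauty = max(range(n_options), key=lambda i: options[i][1])
    same_max += best_truth == best_beauty

print(f"Same option maximizes both values in {same_max / trials:.0%} of trials")
# Typically well under half of the trials: unless the correlation is exactly 1,
# the maxima of the two values usually come apart.
```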
We are left with two (by my lights) pretty unattractive options. We can compromise and miss out on the maxima of all three values, and perhaps realize a lower amount of ‘total value’ (whatever that means); or we can maximize one value uncompromisingly. It sounds attractive to adopt the principle that, even though you normally compromise between X and Y, if you can really hit X out of the park you should just focus on doing that, Y be damned. But I wonder if, in the real world, this principle makes compromise of any kind impossible, and just mandates zealotry.