This is the first part of a series. It's mostly a conceptual foundation for the following parts, so it's sort of boring and "technical." I rely heavily on quotes by Scott Alexander because I think he gets the point across.

Preface

What is the system? Is it someone, or some group of people? Is it the totality of the people and institutions that make up the social and political order?

I'm not sure how most people conceptualize this term, but I think it carries the implicit suggestion that it's at least somewhat impersonal, or goes beyond the mere group of people who occupy it presently. The 2012 anti-government protests under the slogan "the system must collapse" (სისტემა უნდა დაინგრეს) were split half and half between people who just wanted UNM to be replaced by GD, and people who actually wanted to put an end to single-party rule, deeming it detrimental to Georgia. The past decade has hopefully nudged us to sympathize more with the latter cause.

This series of essays (which I had to split into parts because it was over 30 pages long) is a somewhat opinionated and productively reductionist framework for what the system is. Most of it doesn't deal with Georgia specifically - I get to that near the end - and focuses mostly on the impersonal view of the term, making the claim that no one in the system has to be fond of it for it to live on cheerfully.

Kaini is an online forum centered on long-form essays. Anyone can register and upload their work, as well as engage with the work of other members through comments. Our goal is just to understand stuff better, particularly with regards to Georgia. We intend to hold some real-life events this summer; if you're interested, fill out this form: Link.


I.

Any human with above room temperature IQ can design a utopia. The reason our current system isn’t a utopia is that it wasn’t designed by humans. Just as you can look at an arid terrain and determine what shape a river will one day take by assuming water will obey gravity, so you can look at a civilization and determine what shape its institutions will one day take by assuming people will obey incentives.

But that means that just as the shapes of rivers are not designed for beauty or navigation, but rather an artifact of randomly determined terrain, so institutions will not be designed for prosperity or justice, but rather an artifact of randomly determined initial conditions.

Just as people can level terrain and build canals, so people can alter the incentive landscape in order to build better institutions. But they can only do so when they are incentivized to do so, which is not always. As a result, some pretty wild tributaries and rapids form in some very strange places.

Scott Alexander, Meditations on Moloch

I urge you to think about this passage before reading on.

This angle on history is at once striking and obvious: the course of history and the progression of human civilization are just everyone (more or less) doing the most obvious thing given the incentives laid out before them, and somewhere downstream this results in the institutions and general social order we have today. I find it productive to think of the system as an environment of incentives; a sort of incentive terrain, in which the inhabitants of the system, i.e. the actual agents (free-willing actors), move along roughly the most natural path.

As a result, the shape of the system itself will be the state resulting from the interactions between all of the agents in the system. Skyscrapers, medical innovation, climate change, and the United Nations are all artifacts emergent from each individual agent acting on the strongest incentives over time; all of them are the same in the sense that they are byproducts of agents in a multi-agent system acting approximately rationally with respect to near-term incentives. There is no sense in which the system itself wanted any of these things to happen; they happened either because it made sense for some approximately rational agent to make them happen at the time, or as a broadly undesirable side effect of everyone acting on the near-term incentives. Either way, they were inevitable given the incentive terrain.

Here we have the incentives offered by the system, here we have the basic observations of game theory (i.e. the tendencies of agents interacting in multi-agent systems) emergent over human neuropsychology (which drives the tendencies), and there you have the inevitable outcomes, which were latent in the above from the very beginning. The natural question is: does everyone acting in self-interest guarantee the best outcome for most agents?

In short - no, every agent acting in self-interest (with respect to the incentives of the system) does not guarantee that everyone or anyone will be better off long-term. Here's a somewhat contrived example:

Bostrom makes an offhanded reference of the possibility of a dictatorless dystopia, one that every single citizen including the leadership hates but which nevertheless endures unconquered. It’s easy enough to imagine such a state. Imagine a country with two rules: first, every person must spend eight hours a day giving themselves strong electric shocks. Second, if anyone fails to follow a rule (including this one), or speaks out against it, or fails to enforce it, all citizens must unite to kill that person. Suppose these rules were well-enough established by tradition that everyone expected them to be enforced.

So you shock yourself for eight hours a day, because you know if you don’t everyone else will kill you, because if they don’t, everyone else will kill them, and so on. Every single citizen hates the system, but for lack of a good coordination mechanism it endures. From a god’s-eye-view, we can optimize the system to “everyone agrees to stop doing this at once”, but no one within the system is able to effect the transition without great risk to themselves.

Scott Alexander, Meditations on Moloch

This is called a multipolar trap: everyone rationally pursuing the strongest incentives leads to detrimental outcomes for every agent, while no one has any (rationally justifiable) way to get out of the situation. From the inside, each person is being rational with regard to the incentives laid out to them. From the outside, we see a society shocking itself eight hours a day. There is no dictator or anyone else forcing people to shock themselves, and yet there they are, shocking themselves. Onto more relatable examples:

Climate change is the deterministic resolution of the multi-agent system we have on Earth: nations can't risk making a radical switch to clean energy because they will be outcompeted by nations that won't, and the same thing happens within local markets, where competing corporations cannot rationally make the change for fear of being outcompeted. So, who is causing climate change? Pretty much a bunch of people very worried about their countries and companies being obliterated by the competition, just as you would be in their position. Classic multipolar trap. We like to imagine some evil greedy CEOs ruining our planet for their own profits, but in fact almost no one imagines themselves to be the evil villain of their life story, and the way this is actually happening is everyone just being rational with regard to the incentives offered by the environment. This would still happen if every CEO on the planet were an Amazonian shaman deeply in tune with the ebbs and flows of Mother Nature's vibrations. More multipolar traps:

  1. Arms races. Large countries can spend anywhere from 5% to 30% of their budget on defense. In the absence of war – a condition which has mostly held for the past fifty years – all this does is sap money away from infrastructure, health, education, or economic growth. But any country that fails to spend enough money on defense risks being invaded by a neighboring country that did. Therefore, almost all countries try to spend some money on defense.
  2. Capitalism. Imagine a capitalist in a cutthroat industry. He employs workers in a sweatshop to sew garments, which he sells at minimal profit. Maybe he would like to pay his workers more, or give them nicer working conditions. But he can’t, because that would raise the price of his products and he would be outcompeted by his cheaper rivals and go bankrupt. Maybe many of his rivals are nice people who would like to pay their workers more, but unless they have some kind of ironclad guarantee that none of them are going to defect by undercutting their prices they can’t do it. [...] From a god’s-eye-view, we can contrive a friendly industry where every company pays its workers a living wage. From within the system, there’s no way to enact it.
  3. Cancer. The human body is supposed to be made up of cells living harmoniously and pooling their resources for the greater good of the organism. If a cell defects from this equilibrium by investing its resources into copying itself, it and its descendants will flourish, eventually outcompeting all the other cells and taking over the body – at which point it dies. [...] From a god’s-eye-view, the best solution is all cells cooperating so that they don’t all die. From within the system, cancerous cells will proliferate and outcompete the other – so that only the existence of the immune system keeps the natural incentive to turn cancerous in check.
  4. The “race to the bottom” describes a political situation where some jurisdictions lure businesses by promising lower taxes and fewer regulations. The end result is that either everyone optimizes for competitiveness – by having minimal tax rates and regulations – or they lose all of their business, revenue, and jobs to people who did (at which point they are pushed out and replaced by a government who will be more compliant).

[...] Once one agent learns how to become more competitive by sacrificing a common value, all its competitors must also sacrifice that value or be outcompeted and replaced by the less scrupulous. Therefore, the system is likely to end up with everyone once again equally competitive, but the sacrificed value is gone forever. From a god’s-eye-view, the competitors know they will all be worse off if they defect, but from within the system, given insufficient coordination it’s impossible to avoid.

Scott Alexander, Meditations on Moloch (he actually gives 15 examples.)

II.

All of these scenarios follow the same formula: it would be ideal if no one did X, but it would be terrible if anyone other than me did X, so I decide to do X to minimize my loss - and so does everyone else, placing us firmly in the worst possible outcome.
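That shared formula can be written down as a toy payoff table. This is a minimal sketch of my own, not from the essay; the numbers are made up, and their only job is to preserve the ordering of outcomes described above:

```python
# Hypothetical payoffs for the formula: it would be ideal if no one did X,
# but abstaining while others do X is the worst position to be in.
# Keys are (my_choice, others_choice); values are my payoff.
payoff = {
    ("abstain", "abstain"): 3,  # best collective outcome
    ("do_x",    "abstain"): 4,  # the defector's temptation
    ("abstain", "do_x"):    0,  # the abstainer's loss
    ("do_x",    "do_x"):    1,  # worst collective outcome
}

def best_response(others_choice):
    """Pick the choice that maximizes my payoff, given what others do."""
    return max(("abstain", "do_x"), key=lambda c: payoff[(c, others_choice)])

# Doing X is dominant: it is my best response no matter what others do...
assert best_response("abstain") == "do_x"
assert best_response("do_x") == "do_x"

# ...so everyone ends up doing X, even though all-abstain pays everyone more.
assert payoff[("do_x", "do_x")] < payoff[("abstain", "abstain")]
```

Because every agent runs the same `best_response` reasoning, the system lands in the all-X cell even though every agent would prefer the all-abstain cell.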

In what sense did anyone have any agency here? The worst possible outcome happened to us and no one had any way out of it. Except, of course, the amazing solution where everyone agrees to stop doing things that are bad for everyone and continues competing equally under the new, sustainable rules - but if this were so easy, we wouldn't currently find ourselves in the midst of a climate catastrophe, or the probable AI catastrophe a couple of decades down the line. In economics, this is called a "coordination problem," and it illustrates one of the core ways in which many self-interested agents interacting in a system irresistibly fail to arrive at the most desirable system state.

Coordination problems are cases in which everyone agrees that a certain action would be universally beneficial, but the free market cannot coordinate them into taking that action. Example from Scott Alexander's Non-Libertarian FAQ:

  • A lake houses a thousand identical, competing fish farms, each earning $1000/month.
Fish farm waste pollutes the lake, reducing each farm's productivity by $1/month per polluting farm. With all 1000 farms polluting, profits are zero for everyone.
  • A $300/month filtering system is invented and voluntarily adopted, reducing pollution and leaving each farm with a $700/month profit.
  • Steve stops using the filter, pollutes the lake slightly, but profits $999/month. Everyone else is now making $699.
  • Seeing Steve's higher profit, others also stop using filters.
  • Once 400 farmers stop filtering, Steve is earning $600/month - less than he would be if he and everyone else had kept their filters on. Those who still have the filters on are making $300/month.
  • Steve proposes a Filter Pact for all to use filters.
  • Everyone but Mike signs the pact. Everyone's back to $699/month profit, while Mike enjoys $999/month.
  • "Slowly, people start thinking they too should be getting big bucks like Mike, and disconnect their filter for $300 extra profit…"

"A self-interested person never has any incentive to use a filter. A self-interested person has some incentive to sign a pact to make everyone use a filter, but in many cases has a stronger incentive to wait for everyone else to sign such a pact but opt out himself. This can lead to an undesirable equilibrium in which no one will sign such a pact. [...] From a god’s-eye-view, we can say that polluting the lake leads to bad consequences. From within the system, no individual can prevent the lake from being polluted, and buying a filter might not be such a good idea."

I hope this all makes clear the extent to which the system isn't an agent, and how deterministic the resolution of a multi-agent system is with respect to the incentives offered by the environment plus game theory. We can think of the incentive terrain as a sort of "physics" of multi-agent systems: by knowing the individual incentives and basic game theory, we can predict the progression of the system through time, "just as you can look at an arid terrain and determine what shape a river will one day take by assuming water will obey gravity." The only thing that can take the multi-agent system off-course from the broadly deterministic outcome is an alteration to the physics, i.e. the incentive terrain. 

And if something undesirable keeps on happening (*ahem*), you should maybe examine the incentives, and - spoiler for the next part - figure out how to shift everyone's incentives in a way that shortcuts the game-theoretic traps and resolves into a broadly desirable equilibrium.

I'll leave you with this passage:

I will now jump from boring game theory stuff to what might be the closest thing to a mystical experience I’ve ever had.

Like all good mystical experiences, it happened in Vegas. I was standing on top of one of their many tall buildings, looking down at the city below, all lit up in the dark. If you’ve never been to Vegas, it is really impressive. Skyscrapers and lights in every variety strange and beautiful all clustered together. And I had two thoughts, crystal clear:

It is glorious that we can create something like this.

It is shameful that we did.

Like, by what standard is building gigantic forty-story-high indoor replicas of Venice, Paris, Rome, Egypt, and Camelot side-by-side, filled with albino tigers, in the middle of the most inhospitable desert in North America, a remotely sane use of our civilization’s limited resources?

And it occurred to me that maybe there is no philosophy on Earth that would endorse the existence of Las Vegas. Even Objectivism, which is usually my go-to philosophy for justifying the excesses of capitalism, at least grounds it in the belief that capitalism improves people’s lives. Henry Ford was virtuous because he allowed lots of otherwise car-less people to obtain cars and so made them better off. What does Vegas do? Promise a bunch of shmucks free money and not give it to them.

Las Vegas doesn’t exist because of some decision to hedonically optimize civilization, it exists because of a quirk in dopaminergic reward circuits, plus the microstructure of an uneven regulatory environment, plus Schelling points. A rational central planner with a god’s-eye-view, contemplating these facts, might have thought “Hm, dopaminergic reward circuits have a quirk where certain tasks with slightly negative risk-benefit ratios get an emotional valence associated with slightly positive risk-benefit ratios, let’s see if we can educate people to beware of that.” People within the system, following the incentives created by these facts, think: “Let’s build a forty-story-high indoor replica of ancient Rome full of albino tigers in the middle of the desert, and so become slightly richer than people who didn’t!”

Just as the course of a river is latent in a terrain even before the first rain falls on it – so the existence of Caesar’s Palace was latent in neurobiology, economics, and regulatory regimes even before it existed. The entrepreneur who built it was just filling in the ghostly lines with real concrete.

So we have all this amazing technological and cognitive energy, the brilliance of the human species, wasted on reciting the lines written by poorly evolved cellular receptors and blind economics, like gods being ordered around by a moron.

Scott Alexander, Meditations on Moloch


Click "next" below to read part II.
