I hear a lot of talk of ‘taking over the world’. What is it to take over the world? Have I done it if I am king of the world? Have I done it if I burn the world? Have humans or the printing press or Google or the idea of ‘currency’ done it? 

Let’s start with something more tractable, and be clear on what it is to take over a horse. 

A natural theory is that to take over a horse is to be the arbiter of everything about the horse—to be the one deciding the horse’s every motion.

But you probably don’t actually want to control the horse’s every motion, because the horse’s own ability to move itself is a large part of its value-add. Flaccid horse mass isn’t that helpful, not even if we throw in the horse’s physical strength to move itself according to your commands, and some sort of magical ability for you to communicate muscle-level commands to it. If you were in command of the horse’s every muscle, it would fall over. (If you directed its cellular processes too, it would die; if you controlled its atoms, you wouldn’t even have a dead horse.) 

Information and computing capacity

The reason this isn’t so good is that maneuvering a thousand pounds of fast-moving horse flesh balanced on flexible supports is probably hard for you, at least via an interface of individual muscles, at least without more practice being a horse. I think this is for two reasons:

  • Lack of information, e.g. about exactly where every part of the horse’s body is, where its hoofs are touching the ground, and how hard
  • Lack of computing power to dedicate to calculating desired horse muscle motions from the above information and your desired high level horse behavior

(Even if you have these things, you don’t obviously know how to use them to direct the horse well, but you can probably figure this out in finite time, so it doesn’t seem like a really fundamental problem.)

Tentative claim: holding more levers is good for you only insofar as you have the information and computing capacity to calculate which directions you should want those levers pushed. 
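
To make the claim concrete, here is a toy simulation (a sketch with made-up numbers, not anything from the post): a system with ten levers, where a built-in controller sets each lever with small error, and you set whichever levers you have taken over with error that shrinks as your information and computing budget grows. Taking over more levers only helps once that budget beats the built-in controller.

```python
import random

# Ten levers; the system's built-in controller sets each with small error
# (sigma = 0.1). You set any levers you have taken over with error that
# shrinks as your information/computing budget ("skill") grows.
# All numbers are invented for illustration.

def average_error(levers_taken, skill, levers=10, trials=2000):
    total = 0.0
    for _ in range(trials):
        for i in range(levers):
            if i < levers_taken:
                total += abs(random.gauss(0, 1.0 / skill))  # your control
            else:
                total += abs(random.gauss(0, 0.1))  # built-in control
    return total / trials

for taken in (0, 5, 10):
    for skill in (1, 20):
        print(f"levers taken: {taken:2d}  skill: {skill:2d}  "
              f"error: {average_error(taken, skill):.2f}")
```

With low skill, every lever you seize makes things worse; with high skill (error below the built-in controller’s), seizing levers finally pays.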

So, you seem to be getting a lot out of the horse and various horse subcomponents making their own decisions about steering and balance and breathing and snorting and mitosis and where electrons should go. That is, you seem to be getting a lot out of not being in control of the horse. In fact so far it seems like the more you are in control of the horse in this sense, the worse things go for you. 

Is there a better concept of ‘taking over’—a horse, or the world—such that someone relatively non-omniscient might actually benefit from it? (Maybe not—maybe extreme control is just bad if you aren’t near-omniscient, which would be good to know.) 

What riding a horse is like

Perhaps a good first question: is there any sort of power that won’t make things worse for you? Surely yes: training a horse to be ridden in the usual sense seems like ‘having control over’ the horse more than you would otherwise, and seems good for you. So what is this kind of control like? 

Well, maybe you want the horse to go to London with you on it, so you get on it and pull the reins to direct it to London. You don’t run into the problems above, because aside from directing its walking toward London, it sticks to its normal patterns of activity pretty closely (for instance, it continues breathing and keeping its body in an upright position and doing walking motions in roughly the direction its head is pointed).

So maybe in general: you want to command the horse by giving it a high level goal (‘take me to London’), then have it do the backchaining and fill in all the details (move right leg forward, hop over this log, breathe…). That’s not quite right though, because the horse has no ability to chart a path from here to London, due to its ignorance of maps and maybe of London as a concept. So you are hoping to do the first step of the backchaining—figure out the route—and then to give the horse slightly lower level goals such as ‘turn left here’ and ‘go straight’, and for it to do the rest. Which still sounds like giving it a high level goal, then having it fill in the instrumental subgoals and do them.
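
A minimal sketch of this division of labor, in hypothetical code (all names invented for illustration): the rider does the route-level backchaining and hands the horse only commands it has affordances for; everything below that level is the horse’s problem.

```python
class Horse:
    """Fills in low-level detail (gait, balance, breathing) by itself."""

    AFFORDANCES = {"turn left", "turn right", "go straight", "halt"}

    def can_handle(self, command):
        # The horse understands local directions, not destinations:
        # it cannot backchain from "take me to London".
        return command in self.AFFORDANCES

    def execute(self, command):
        # Internally this expands into leg motions, balance corrections,
        # breathing adjustments, etc. -- none of which the rider specifies.
        print(f"horse: {command} (gait, balance, breathing handled internally)")


class Rider:
    def __init__(self, horse):
        self.horse = horse

    def pursue(self, goal, route):
        # The rider does the first step of the backchaining -- the route --
        # then hands the horse commands at a level it can support.
        print(f"rider goal: {goal}")
        for step in route:
            if not self.horse.can_handle(step):
                raise ValueError(f"horse has no affordance for: {step!r}")
            self.horse.execute(step)


Rider(Horse()).pursue("take me to London",
                      ["go straight", "turn left", "go straight", "halt"])
```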

But that isn’t quite right. You probably also want to steer the details there somewhat. You are moment-to-moment adjusting the horse’s motion to keep you on it, for instance. Or to avoid scaring some chickens. Or to keep to the side as another horse goes by. While not steering it entirely, at that level. You are relying on its own ability to avoid rocks and holes and to dodge if something flies toward it, and to put some effort into keeping you on it. How does this fit into our simple model? 

Perhaps you want the horse to behave as it would—rather than suddenly leaving every decision to you—but for you to be able to adjust any aspect of it, and have it again work out how to support that change with lower level choices. You push it to the left and it finds new places to put its feet to make that work, and adjusts its breathing and heart rate to make the foot motions work. You pull it to a halt, and it changes its leg muscle tautnesses and heart rate and breathing to make that work. 
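
Sketching this ‘adjust and support’ model the same way (again with invented names): you override one aspect of the system’s state, and the system recomputes its lower-level variables to make the override work, rather than surrendering every decision to you.

```python
class ControlledSystem:
    """Behaves as it would by default, but supports adjustments."""

    def __init__(self):
        self.state = {"heading": "north", "speed": 8, "heart_rate": 80}

    def _support(self):
        # The system's own job: make its lower-level state consistent
        # with whatever higher-level aspect was just adjusted.
        self.state["heart_rate"] = 40 + 5 * self.state["speed"]

    def adjust(self, aspect, value):
        if aspect not in self.state:
            raise ValueError(f"no affordance for adjusting {aspect!r}")
        self.state[aspect] = value  # the rider's nudge
        self._support()             # the horse fills in the rest


horse = ControlledSystem()
horse.adjust("speed", 0)   # pull it to a halt...
print(horse.state)         # ...and it lowers its own heart rate to match
```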

Levers

On this model, in practice your power is limited by what kinds of changes the horse can and will fill in new details for. If you point its head in a new direction, or ask it to sit down, it can probably recalculate its finer motions and support that. Whereas if you decide that it should have holes in its legs, it just doesn’t have an affordance for doing that. And if you do it, it will bleed a lot and run into trouble rather than changing its own bloodflow. If you decide it should move via a giant horse-sized bicycle, it probably can’t support that, even if in principle its physiology might allow it. If you hold up one of its legs so its foot is high in the air, it will ‘support’ that change by moving its leg back down again, which is perhaps not what you were going for.

This suggests that taking over a thing is not zero sum. There is not a fixed amount of control to be had by intentional agents. Because perhaps you have all the control that anyone has over a horse, in the sense that if the horse ever has a choice, it will try to support your commands to it. But still it just doesn’t know how to control its own heart rate consciously or ride a giant horse-sized bicycle. Then one day it learns these skills, and can let you adjust more of its actions. You had all the control the whole time, but ‘all’ became more.

Consequences

One issue with this concept of taking over is that it isn’t clear what it means to ‘support’ a change. Each change has a number of consequences, and some of them are the point while others are undesirable side effects, such that averting them is an integral part of supporting the change. For instance, moving legs faster means using up blood oxygen and also traveling faster. If you gee up the horse, you want it to support this by replacing the missing blood oxygen, but not to jump on a treadmill to offset the faster travel.
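
A toy illustration of the problem (hypothetical labels throughout): nothing in the change itself says which consequences are the point and which are side effects to offset; that distinction encodes the rider’s higher-level goal.

```python
# Consequences of "run faster", labeled by the rider's goal of traveling:
consequences = {
    "travels faster": "preserve",       # the point of the change
    "depletes blood oxygen": "offset",  # side effect: breathe harder
}

def support_change(consequences):
    for effect, role in consequences.items():
        verb = "counteract" if role == "offset" else "leave alone"
        print(f"{verb}: {effect}")
    # A horse with the labels swapped would breathe no harder, and would
    # hop onto a treadmill to offset the faster travel.

support_change(consequences)
```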

For the horse to get this right in general, it seems that it needs to know about your higher level goals. In practice with horses, they are just built so that if they decide to run faster their respiratory system supplies more oxygen and they aren’t struck by a compulsion to get on a treadmill, and if that weren’t true we would look for a different animal to ride. The fact that they always assume one kind of thing is the goal of our intervention is fine, because in practice we do basically always want legs for motion and never for using up oxygen.

Maybe there is a systematic difference between desirable consequences and ones that should be offset—in the examples that I briefly think of, the desirable consequences seem more often to do with relationships with larger scale things, and the ones that need offsetting are to do with internal things, but that isn’t always true (I might travel because I want to be healthier, but I want to be in the same relationship with those who send me mail). If the situation seems to turn inputs into outputs, then the outputs are often the point, though that is also not always true (e.g. a garbage burner seeks to get rid of garbage, not create smoke). Both of these also seem maybe contingent on our world, whereas I’m interested in a general concept. 

Total takeover

I’ll set that aside, and for now define a desirable model of controlling a system as something like: the system behaves as it would, but you can adjust aspects of the system and have it support your adjustment, such that the adjustment forwards your goals. 

There isn’t a clear notion of ‘all the control’, since at any point there will be things that you can’t adjust (e.g. currently the shape of the horse’s mitochondria, for a long time the relationship between space and time in the horse system), either because you or the system don’t have a means of making the adjustment intentionally, or the system can’t support the adjustment usefully. However ‘all of the control that anyone has’ seems more straightforward, at least if we define who is counted in ‘anyone’. (If you can’t control the viral spread, is the virus a someone who has some of the universe’s control?)

I think whether having all of the control at a particular time gets at what I usually mean by having ‘taken over’ depends on what we expect to happen with new avenues of control that appear. If they automatically go to whoever had control, then having all of the control at one time seems like having taken over. If they get distributed more randomly (e.g. the horse learns to ride a bicycle, but keeps that power for itself, or a new agent is created with a power), so that your fraction of control deteriorates over time, that seems less like having taken over. If that is how our world is, I think I want to say that one cannot take it over.

***

This was a lot of abstract reasoning. I especially welcome correction from someone who feels they have successfully controlled a horse to a non-negligible degree.

14 comments

Beautiful analogy! I'd say introducing the high-level concept of "controlling an interface" is the most useful next step in this chain of reasoning.

Between you and the horse is the standardized interface known as "tack," a system of leather, cloth and/or ropes literally harnessing a horse's might and speed. Variations of tack have been evolved by horse controllers over millennia to eke out every bit of control and usefulness a horse can reasonably provide a human, for various purposes: racing, farming, ranching, hunting, battling, and so on. You can reinvent the wheel if you wish, but at the end of the day, your kludged-together horse interface will probably recapitulate one of the stages of tack that other humans have already invented, some stages more humane to the horse than others.

But what, on the horse, is the combination of human and tack controlling? Its instincts and training. The horse was already a system, and now you've gone and added levers to its body and mind. And now you and the horse and the tack in between are a system imperfectly harnessed to your will.

Back to taking over the world. Examining what interfaces already exist for the people who control the world is the first step. How can they be improved, and made more responsive? Whose purposes do they serve? What aspects of the world are directly or emergently controlled by those interfaces and which aspects are left alone?

Emperors are the rulers of kings, whatever their actual titles, and the interface of empire is delegation, negotiation, and self-marketing. Let the world run itself, but steer it a bit here and there. Sometimes, give it a little free rein and see how fast it can run.

The most useful examination I've seen of the interfaces of would-be masters of nations is the Rules For Rulers video, which details the spectrum of political will applied to various countries, and why countries tend toward either Enlightenment and democracy or dictatorship and misery. Simply put, some countries are like a horse that must be ridden with spurs, or else it will try to buck you off and smash in your head with its great hooves.

"take over" is a human, fuzzy concept, the exact application of which is context-dependent. And it's still useful. Any of "determining the direction and speed the horse goes next", "deciding whether to feed or starve the horse", "locking the horse in a barn to prevent it being ridden" or lots of other activities can be put under the heading "taking over".

If the details matter, you probably need to use more words.

"Total horse takeover" is a phrase I've used several times since reading this post, and seems useful for building gears-level models of what transformative change might look like, and which steps involve magic that need to be examined further.

Flaccid Horse Mass is my new band name.

Damn right.

I have to wonder: was the title written first? Brilliant in either case.

This kind of reminds me of how it goes when I get lucid dreams where I'm in control. They've sounded really great in theory, but my experience is that if I'm in control, then nothing happens without me controlling it. E.g. if I want some other person in my dream, I have to decide everything they say and do. This usually gets tedious within about three seconds and I just want to wake up.

This is management cybernetics. There is a whole science of doing this, with literature going back to the 1950s. The pioneer in this field is Stafford Beer, who wrote at least ten books on the subject. His book "Brain of the Firm" (1972, 1981) covers his work in Chile trying to implement these ideas at a governmental level, and is highly germane to the above essay.

One wrong take on "taking over the world" is "having causal power to change everything". The reason is that, because of the "butterfly effect", every action of mine will change the fates of all future people, but in a completely unknown way.

I think it would be both fun and useful to attempt rat cev with original seeing (not just bringing on existing rat experts, though perhaps consulting with them to some degree).

The term "rat cev" is new to me. My guess for rat is rational, but I am drawing a blank on cev. It probably isn't ceviche, though.

I think this post significantly benefits in popularity, and lacks in rigor and epistemic value, from being written in English. The assumptions that the post makes in some parts contradict the judgements reached in others, and the entire post, in my eyes, does not support its conclusion. I have two main issues with the post, neither of which involves the title or the concept, which I find excellent:

First, the concrete examples presented in the article point towards a different definition of optimal takeover than is eventually reached. All of the potential corrections that the “Navigating to London” example proposes are examples where the horse is incapable of competently performing the task you ask it to do, and needs additional human brainpower to do so. This suggests an alternate model of “let the horse do what the horse was intended to do, let the human treat the horse as a black box.” However, this alternate model is pretty clearly not total horse takeover, which to me suggests that total takeover is not optimal for sensorily constrained humans. One could argue that the model in the article, “horse-behaved by default, human-behaved when necessary”, is a general case of the specific model suggested by the specific example, which I think brings up another significant issue with the post:

The model chosen is not a valuable one. The post spends most of its length discussing the merits of different types of horse-control, but the model endorsed does not take any of this deliberation into account: all physically permitted types of horse-control remain on the table. This means the definition for total control ends up being “you have total control when you can control everything that you can control”, which, while not exactly false, doesn’t seem particularly interesting. The central insight necessary to choose the model that the post chooses is entirely encompassed in the first paragraph of the post.

Finally, I think the agent-level modeling applied in the post is somewhat misleading. The bright line on what you can “tweak” with this model is very unclear, and seems to contradict itself: I’m pretty sure a horse could put holes in its thighs if you have total control over its movements, for example. Are you allowed to tweak the horse’s internal steroid production? Neurotransmitters? The horse doesn’t have conscious control over blood flow, but it can be regulated: do you get control over that? These seem like the kinds of questions this post should address: what makes a horse a horse, and what does controlling that entity mean. I think this issue is even more pronounced when applied to government: does controlling the government as an entity mean controlling its constituent parts? Because of these types of questions, I suspect that the question “what does total control of a horse mean” is actually more complex, not less, than it is for a government, and it worries me that the simplifying move occurs from government to horse.

In its current form, I would not endorse collation, because I don’t feel as though the post addresses the questions it sets out to answer.