Does the simulation argument even need simulations?

The simulation argument, as I understand it:

  1. Subjectively, existing as a human in the real, physical universe is indistinguishable from existing as a simulated human in a simulated universe
  2. Anthropically, there is no reason to privilege one over the other: if there exist k real humans and l simulated humans undergoing one's subjective experience, one's odds of being a real human are k/(k+l)
  3. Any civilization capable of simulating a universe is quite likely to simulate an enormous number of them
    1. Even if most capable civilizations simulate only a few universes for e.g. ethical reasons, civilizations that have no such concerns could simulate such enormous numbers of universes that the expected number of universes simulated by any simulation-capable civilization is still huge
  4. Our present civilization is likely to reach the point where it can simulate a universe reasonably soon
  5. By 3. and 4., there exist (at some point in history) huge numbers of simulated universes, and therefore huge numbers of simulated humans living in simulated universes
  6. By 2. and 5., our odds of being real humans are tiny (unless we reject 4, by assuming that humanity will never reach the stage of running such simulations)

When we talk about a simulation we're usually thinking of a computer; crudely, we'd represent the universe as a giant array of bytes in RAM, and have some enormously complicated program that could compute the next state of the simulated universe from the previous one[1]. Fundamentally, we're just storing one big number, performing a calculation, storing another number, and so on. In fact our program is itself simply another number (witness the DeCSS "illegal prime"). This is effectively the GLUT concept applied to the whole universe.
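To make the picture concrete, here's a toy Haskell sketch of that idea (the type, the step rule and the names are purely illustrative stand-ins, not a claim about what a real simulation program would look like):

```haskell
import Numeric.Natural (Natural)

-- The whole universe at one tick, crudely encoded as one (enormous) number.
type UniverseState = Natural

-- A stand-in for the enormously complicated rule taking one state to the next;
-- the real thing would encode the laws of physics.
step :: UniverseState -> UniverseState
step s = s * 6364136223846793005 + 1442695040888963407

-- The entire history is then just repeated application of the rule to the
-- initial state: one big number after another.
history :: UniverseState -> [UniverseState]
history = iterate step
```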

But numbers are just... numbers. If we have a computer calculating the Fibonacci sequence, it's hard to see that running the calculating program makes this sequence any more real than if we had just conceptualized the rule[2] - or even, to a mathematical Platonist, if we'd never thought of it at all. And we do know the rule (modulo having a theory of quantum gravity), and the initial state of the universe is (to the best of our knowledge) small and simple enough that we could describe it, or another similar but subtly different universe, in terms small enough to write down. At that point, what we have seems in some sense to be a simulated universe, just as real as if we'd run a computer to calculate it all.
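As a trivial illustration of knowing the rule versus running the calculation, here is the Fibonacci rule written down in Haskell (my own toy example); the definition is complete whether or not anything ever forces an element to be evaluated:

```haskell
-- The Fibonacci rule as a definition. Writing this down performs no arithmetic;
-- elements are only computed if something demands them.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)  -- only here does any calculation actually happen
```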

Possible ways out that I can see:

  1. Bite the bullet: we are most likely not even a computer simulation, just a mathematical construct[3]
  2. Accept the other conclusion: either simulations are impractical even for posthuman civilizations, or posthuman civilization is unlikely. But if all that's required for a simulation is a mathematical form for the true laws of physics, and knowledge of some early state of the universe, this means humanity is unlikely to ever learn these two things, which is... disturbing, to say the least. This stance also seems to require rejecting mathematical Platonism and adopting some form of finitist/constructivist position, in which a mathematical notion does not exist until we have constructed it
  3. Argue that something important to the anthropic argument is lost in the move from a computer calculation to a mathematical expression. This seems to require rejecting the Church-Turing thesis and means most established programming theory would be useless in the programming of a simulation[4]
  4. Some other counter to the simulation argument. To me the anthropic part (i.e. step 2) seems the least certain; it appears to be false under e.g. UDASSA, though I don't know enough about anthropics to say more

Thoughts?

 

[1] As I understand it there is no contradiction with relativity; we perform the simulation in some particular frame, but obtain the same events whichever frame we choose

[2] This equivalence is not just speculative. Going back to thinking about computer programs, Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels) follows lazy evaluation: a value is not calculated unless it is used. Thus if our simulation contained some regions that had no causal effect on subsequent steps (e.g. some people on a spaceship falling into a black hole), the simulation wouldn't bother to evaluate them[5]

If we upload people who then make phone calls to their relatives to convince them to upload, clearly those people must have been calculated - or at least, enough of them to talk on the phone. But what about a loner who chooses to talk to no-one? Such a person could be more efficiently stored as their initial state plus a counter of how many times the function needs to be run to evaluate them, if anyone were to talk to them. If no-one has their contact details any more, we wouldn't even need to store that much. What about when all humans have uploaded? Sure, you could calculate the world-state for each step explicitly, but that would be wasteful. Our simulated world would still produce the correct outputs if all it did was increment a tick counter
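A toy Haskell sketch of that "initial state plus a counter" representation (Person, stepPerson and the map of people are hypothetical stand-ins, nothing like a real upload format):

```haskell
import Data.Map (Map)
import qualified Data.Map as Map

-- A person's full simulated state, reduced to a single toy field.
newtype Person = Person { mind :: Integer }

-- One tick of a person's internal evolution (toy stand-in rule).
stepPerson :: Person -> Person
stepPerson (Person m) = Person (m + 1)

-- Advance everyone by one tick. Data.Map is lazy in its values, so each new
-- state is stored as an unevaluated thunk: effectively the initial state plus
-- a record of how many more times stepPerson needs to be applied.
tick :: Map String Person -> Map String Person
tick = Map.map stepPerson

-- A person's state is only actually computed when somebody looks at it,
-- e.g. to answer a phone call; the loner nobody ever contacts is never forced.
answerPhone :: String -> Map String Person -> Maybe Integer
answerPhone name world = mind <$> Map.lookup name world
```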

Practically every compiler and runtime performs some (more limited) form of this, using dataflow analysis, instruction reordering and dead code elimination - usually without the programmer having to explicitly request it. Thus if your theory of anthropics says that an "optimized" simulation is counted differently from a "full" one, then there is little hope of constructing such a thing without developing a significant number of new tools and programming techniques[4]

[3] Indeed, with an appropriate anthropic argument this might explain why the rules of physics are mathematically simple. I am planning another post on this line of thought

[4] This is worrying if one is in favour of uploading, particularly forcibly - it would be extremely problematic morally if uploads were in some sense "less real" than biological people

[5] One possible way out is that the laws of physics appear to be information-preserving; to simulate the state of the universe at time t=100 you can't discard any part of the state of the universe at time t=50, and must in some sense have calculated all the intermediate steps (though not necessarily explicitly - the state at t=20 could be spread out between several calculations, never appearing in memory as a single number). I don't think this affects the wider argument though

Comments

  1. Bite the bullet: we are most likely not even a computer simulation, just a mathematical construct[3]

Biting the bullet here is roughly equivalent to accepting Tegmark's Ultimate Ensemble. This was discussed on LW in ata's post from 2010, The mathematical universe: the map that is the territory.

See Tegmark (2008). In particular, Section 6, "Implications for the simulation argument". A relevant extract:

For example, since every universe simulation corresponds to a mathematical structure, and therefore already exists in the Level IV multiverse [the multiverse of all mathematical structures], does it in some meaningful sense exist “more” if it is in addition run on a computer? This question is further complicated by the fact that eternal inflation predicts an infinite space with infinitely many planets, civilizations, and computers, and that the Level IV multiverse includes an infinite number of possible simulations. The above-mentioned fact that our universe (together with the entire Level III multiverse) may be simulatable by quite a short computer program (Sect. 6.2) calls into question whether it makes any ontological difference whether simulations are “run” or not. If, as argued above, the computer need only describe and not compute the history, then the complete description would probably fit on a single memory stick, and no CPU power would be required. It would appear absurd that the existence of this memory stick would have any impact whatsoever on whether the multiverse it describes exists “for real”. Even if the existence of the memory stick mattered, some elements of this multiverse will contain an identical memory stick that would “recursively” support its own physical existence. This would not involve any Catch-22 “chicken-and-egg” problem regarding whether the stick or the multiverse existed first, since the multiverse elements are 4-dimensional spacetimes, whereas “creation” is of course only a meaningful notion within a spacetime.


A while ago, I posted a LW discussion link to John Regehr's blog post about similar ideas: Does a simulation really need to be run?.

My thought is that your hypothesis is pretty similar to the Dust Theory.

http://sciencefiction.com/2011/05/23/science-feature-dust-theory/

And Greg Egan's counter-argument to the Dust Theory is pretty decent:

However, I think the universe we live in provides strong empirical evidence against the “pure” Dust Theory, because it is far too orderly and obeys far simpler and more homogeneous physical laws than it would need to, merely in order to contain observers with an enduring sense of their own existence. If every arrangement of the dust that contained such observers was realised, then there would be billions of times more arrangements in which the observers were surrounded by chaotic events, than arrangements in which there were uniform physical laws.

I think the same counter-argument applies to your hypothesis.

A steelmanned version of Egan's counterargument can be found in what Tegmark calls the (cosmological) measure problem. Egan's original counterargument is too weak because we can simply postulate that there is an appropriate measure over the worlds of interest; we already do that for the many-worlds interpretation!

In Tegmark (2008) (see my other comment):

One such issue is the above-mentioned measure problem, which is in essence the problem of how to deal with annoying infinities and predict conditional probabilities for what an observer should perceive given past observations.

[...]

A second testable prediction of the MUH [Mathematical Universe Hypothesis] is that the Level IV multiverse [the multiverse of all mathematical structures] exists, so that out of all universes containing observers like us, we should expect to find ourselves in a rather typical one. Rigorously carrying out this test requires solving the measure problem, i.e., computing conditional probabilities for observable quantities given other observations (such as our existence) and an assumed theory (such as the MUH, or the hypothesis that only some specific mathematical structure like string theory or the Lie superalgebra mb(3|8) [142] exists). Further work on all aspects of the measure problem is urgently needed regardless of whether the MUH is correct, as this is necessary for observationally testing any theory that involves parallel universes at any level, including cosmological inflation and the string theory landscape [67–71]. Although we are still far from understanding selection effects linked to the requirements for life, we can start testing multiverse predictions by assessing how typical our universe is as regards dark matter, dark energy and neutrinos, because these substances affect only better understood processes like galaxy formation. Early such tests have suggested (albeit using questionable assumptions) that the observed abundance of these three substances is indeed rather typical of what you might measure from a random stable solar system in a multiverse where these abundances vary from universe to universe [42, 134–139].

Tegmark makes a few remarks on using algorithmic complexity as the measure:

It is unclear whether some sort of measure over the Level IV multiverse is required to fully resolve the measure problem, but if this is the case and the CUH [Computable Universe Hypothesis] is correct, then the measure could depend on the algorithmic complexity of the mathematical structures, which would be finite. Labeling them all by finite bit strings s interpreted as real numbers on the unit interval [0, 1) (with the bits giving the binary decimals), the most obvious measure for a given structure S would be the fraction of the unit interval covered by real numbers whose bit strings begin with strings s defining S. A string of length n bits thus gets weight 2^(−n), which means that the measure rewards simpler structures. The analogous measure for computer programs is advocated in [16]. A major concern about such measures is of course that they depend on the choice of representation of structures or computations as bit strings, and no obvious candidate currently exists for which representation to use.
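A quick sketch of the weighting Tegmark describes, just to make the arithmetic explicit (the bit-string encoding is entirely hypothetical; choosing the representation is precisely the unresolved problem he notes):

```haskell
-- A structure specified by an n-bit description string gets measure 2^(-n),
-- so shorter (simpler) descriptions carry more weight.
measureOf :: String -> Double
measureOf bits = 2 ** negate (fromIntegral (length bits))

-- measureOf "0110"     == 0.0625     (1/16)
-- measureOf "01101011" == 0.00390625 (1/256)
```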

Each of the analogous problems in eternal inflation and the string theory landscape is also called the measure problem (in eternal inflation: how to assign measure over the potentially infinite number of inflationary bubbles; in the string theory landscape: how to assign measure over the astronomical number of false vacua).

In the many-worlds interpretation, the analogous measure problem is resolved by the Born probabilities.

Here's a visual representation of the dust theory by Randall Munroe: http://xkcd.com/505/

Glad to see this has been thought of; that argument was where I was headed in [3] (and this whole line of thought greatly annoyed me when reading Permutation City, so I'm glad Egan's at least looked at it a bit).

This gets us a contradiction, not a refutation, and one man's modus ponens is another man's modus tollens. Can we use this to argue for a flaw in the original simulation argument? I think it again comes down to anthropics: why are our subjective experiences reverse-anthropically more likely than those of dust arrangements? And into which class would simulated people fall?

Epistemology 101: Proper beliefs are (probabilistic) constraints over anticipated observations.
How does the belief that we are living in a computer simulation/a projection of the Platonic Hyperuranium/a dream of a god constrain what we expect to observe?

I don't think that can be right. We believe in the continued existence of stars that have moved so far away that we can't possibly observe them (due to the accelerating expansion of the universe).

Yet, that belief constrains our observations.

How does it? What would we observe differently if some mysterious god destroyed those stars as soon as they moved out of causal contact with humanity?

Our present civilization is likely to reach the point where it can simulate a universe reasonably soon

I don't know about that; it seems unlikely to me. A future civilization simulating us requires a) tons of information about us, which is likely to be irreversibly lost in the meantime, and b) enough computing power to simulate at a sufficiently fine level of detail (i.e. if it's a crude approximation, it will diverge from what actually happened pretty fast). Either of those alone looks like it makes simulating current-earth infeasible.

But my main reaction to the simulation argument (even assuming it's possible) is "so what?". Are there any decisions I would change if I knew I might be being simulated?

A future civilization simulating their own ancestors would require a lot of information about them, possibly impossibly-hard-to-get amounts. You're right about that.

So what? They could still simulate some arbitrary, fictional pre-singularity civ. There is no guarantee whatsoever, if we're part of a simulation, that we were ever anything else.

But my main reaction to the simulation argument (even assuming it's possible) is "so what?". Are there any decisions I would change if I knew I might be being simulated?

Possible ethical position: I care about the continued survival of humanity in some form. I also care about human happiness in some way that avoids the repugnant conclusion (that is, I'm willing to sacrifice some proportion of unhappy lives in exchange for making the rest of them much happier). I am offered the option of releasing an AI that we believe with 99% probability to be Friendly; this has an expectation of greatly increasing human happiness, but carries a small risk of eliminating humanity in this universe. If I believe I am not simulated, I do not release it, because the small risk of eliminating all humanity in existence is not worth taking. If I believe I am simulated, I release it, because it is almost surely impossible for this to eliminate all humanity in existence, and the expected happiness gain is worth it.
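A toy expected-utility version of that position, with entirely made-up numbers (pFriendly and all three utilities are invented for illustration; the only point is that the sign of the decision can flip):

```haskell
pFriendly :: Double
pFriendly = 0.99   -- the stated 99% belief that the AI is Friendly

uGain, uLoseLocal, uLoseEverything :: Double
uGain           = 100      -- a much happier humanity
uLoseLocal      = -1000    -- this (simulated) copy of humanity is wiped out
uLoseEverything = -1e9     -- humanity is wiped out everywhere

-- Expected utility of releasing the AI, depending on whether you believe this
-- universe is a simulation (and so not the only place humanity exists).
expectedRelease :: Bool -> Double
expectedRelease simulated = pFriendly * uGain + (1 - pFriendly) * loss
  where loss = if simulated then uLoseLocal else uLoseEverything

-- expectedRelease True  ==  89.0       (release)
-- expectedRelease False == -9999901.0  (do not release)
```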

Mostly, my thought is that "there probably exist real people out there somewhere, and we are probably not among them; we are probably mere simulations in their world" doesn't seem equivalent to "what it means to be a real person, or a real anything, is to be a well-defined abstract computation that need not necessarily be instantiated" (aka Dust theory, as has been said).

That said, I can't really imagine why I would ever care about the difference for longer than it takes to think about the question.

Sure, the former feels more compelling because it's framed as a status challenge, but if I do anything more than just superficially pattern-match it that pretty much dissolves... I have to be a lot more important than I am, relatively speaking, before the social status of my entire universe becomes a relevant consideration in my status calculations.

(To be clear, I am speaking solely for myself here. I do recognize that some folks here view themselves, individually, as important to the future development of our universe, and I can see how for those people the status of our universe as a whole might be an important consideration, and I'm not challenging that; I'm just asserting that I don't view myself as that important, and I believe I'm correct in that evaluation.)

Modern philosophy is just a set of notes on the margins of Descartes' "Meditations".

I actually arrived at this belief myself when I was younger, and changed my mind when a roommate beat it out of me.

I'm currently at the conclusion that it's not the same, because an "artificial universe" within a simulation can still interact with the parent universe. The simulation can influence stuff outside the simulation, and stuff outside the simulation can influence the simulation.

Oddly, the thing that convinced me was thinking about morality. Thinking on it now, I guess framing it in terms of something to protect really is helpful. Ontological platonism can lead to some fucked up conclusions, morally. I'll share a fleshed-out version of the thought-chain that changed my mind.

Review the claim, briefly:

But numbers are just... numbers. If we have a computer calculating the Fibonacci sequence, it's hard to see that running the calculating program makes this sequence any more real than if we had just conceptualized the rule[2] - or even, to a mathematical Platonist, if we'd never thought of it at all. And we do know the rule (modulo having a theory of quantum gravity), and the initial state of the universe is (to the best of our knowledge) small and simple enough that we could describe it, or another similar but subtly different universe, in terms small enough to write down. At that point, what we have seems in some sense to be a simulated universe, just as real as if we'd run a computer to calculate it all.

1) So, if I set the initial conditions for a universe containing Suffering Humans, I'm not responsible - the initial conditions of the Hell-universe existed Platonically regardless of the fact that I defined it in the mathematical space.

2) Alright, so now what if I run the Hell Universe? Well, platonically speaking I already specified the entire universe when I laid out the initial conditions, so I don't see why running it is a big deal.

So we are currently running a Simulation of Hell, with a clean conscience. If you haven't already bailed from this ontology, let's continue...

3) Mathematically, the Hells which happen to have Anne inserted at time T were already in the platonic space of possible universes, so why not set the conditions and run that universe? Anne is a real person, by the way - we're just inserting a copy of her into the hell-verse

4) Anne just uploaded her consciousness onto a hard drive. Hold on... Anne can now be thought of as a self-contained system, with input and output. Anne's consciousness is defined in the platonic space, as are all possible inputs and outputs that she might experience. If every input we might subject Anne to is already defined in platonic space, it makes no difference which one we choose to actually represent on the computer...

...Anyway, you see where this leads. Now forget the morality part - that was just to illustrate the weaknesses of Platonic ontology. Considering all mathematical structures equally "real" makes the concept of "reality" lose all meaning. There is something very important which distinguishes reality from non-real mathematical universes - the fact that you can observe it. The fact that it can interact with you.

This might seem less obvious when you're unsure whether or not your universe is a simulation, but it's obvious to the parent universe. If we ever start simulating things, we're not going to think of it as simply a representation specifying a point in platonic space - we're going to think of the simulated world as a part of our reality.

Bite the bullet: we are most likely not even a computer simulation, just a mathematical construct[3]

That's not a bullet...I'd say you were biting a bullet if you didn't believe that. Reality has to be a mathematical construct - if it isn't, we've just thrown logic out the window. But that doesn't mean anyone was sitting around writing the equation.

Reality is also special. It's different from all those other mathematical constructs, because I will only ever observe reality.

Even if most capable civilizations simulate only a few universes for e.g. ethical reasons, civilizations that have no such concerns could simulate such enormous numbers of universes that the expected number of universes simulated by any simulation-capable civilization is still huge

I don't think we should be calculating likelihoods this way.

I go to good-old Occam's razor (or, more modernly, Minimum Message Length). Does the simulation argument make for a simpler model? As in, can you actually suggest a universe in which we are a simulation that is simpler than the universe outlined by vanilla physics? (The answer isn't necessarily "no", but I'd say that the simpler the laws we observe, the more likely the answer is to be "no". If we live in a more complicated universe - especially if the laws of the universe seemed to care about agents (the fact that we are even here does up the probability of that) - the answer might be "yes". That said, I'd still bet on "no".)

There is something very important which distinguishes reality from non-real mathematical universes - the fact that you can observe it. The fact that it can interact with you.

I think this leads to unpleasant conclusions. If causality is all we care about, does that mean we shouldn't care about people who are too far away to interact with (e.g. people on an interstellar colony too far away to reach in our lifetime)? Heck, if someone dived into a rotating black hole with the intent to set up a civilization in the zone of "normal space" closer to the singularity, I think I'd care about whether they succeeded, even though it couldn't possibly affect me. Back on Earth, should we care more about people close to us and less about people further away, since we have more causal contact with the former? Should we care more about the rich and powerful than about the poor and weak, since their decisions are more likely to affect us?

I go to good-old Occam's razor (or, more modernly, Minimum Message Length). Does the simulation argument make for a simpler model? As in, can you actually suggest a universe in which we are a simulation that is simpler than the universe outlined by vanilla physics?

If you don't consider the possibility of being simulated it seems like you would make wrong decisions. Suppose that you agree with Bob to create 1000 simulations of the universe tonight, and then tomorrow you'll place a black sphere in the simulated universes. Tomorrow morning Bob offers to bet you a cookie that you're in one of the simulated universes. If you take the bet on the grounds that the model of the universe in which you're not in the simulation is simpler, then it seems like you lose most of the time (at least under naive anthropics).
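Under naive anthropics the counting is simple (a sketch, assuming one unsimulated history plus the 1000 agreed simulations, each containing a copy of you):

```haskell
-- One unsimulated copy of you plus 1000 simulated copies, weighted equally.
pSimulated :: Double
pSimulated = 1000 / 1001   -- roughly 0.999: betting "not simulated" loses
                           -- the cookie almost every time
```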

Now obviously in real life we don't have this indication as to whether we're a simulation. But if we're trying to make a moral decision for which it matters whether we're in a simulation, it's important to get the right answer.

Considering all mathematical structures equally "real" makes the concept of "reality" lose all meaning.

I agree, and I'd like to offer additional argument. Mathematical objects exist. Almost no one would deny that, for example, there is a number between 7,534,345,617 and 7,534,345,619. Or that there is a Lie group with such-and-such properties. What distinguishes Tegmark's claims from these unremarkable statements? Roughly this: Tegmark is saying that these mathematical objects are physically real. But on his own view, this just amounts to saying that mathematical objects are mathematical objects. Yeah yeah Tegmark, mathematical objects are mathematical objects, can't dispute that, but don't much care. Now I'll turn my attention back to tangible matters.

Tegmark steals his own thunder.

The problem with mathematical realism (which, btw, see also) is that it's challenging to justify the simplicity of our initial state - Occam is not a fundamental law of physics, and almost all possible universe-generating laws are unfathomably large. You can sort of justify that by saying "even universes with complicated initial states will tend to simulate simple universes first", but that just leaves you asking why the number of simulations should matter at all. (I don't have a good answer to that; if you find one, I'd love it if you could tell me)

Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels)

Why this fascination with Haskell?
It seems more like a toy, or an educational tool, or at the very best a tool for highly specialized research, but pretty surely not suitable for any large-scale programming.

Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels) follows lazy evaluation: a value is not calculated unless it is used.

In that case, why does the simulation need to be running all the time? Wouldn't one just ask the fancy, lambda-derived software to render whatever specific event one wanted to see?

If on the other hand whole_universe_from_time_immemorial() needs to execute every time, which of course assumes a loophole gets found to infinitely add information to the host universe, then presumably every possible argument (which includes the program's own code--itself a constituent of the universe being simulated) would be needed by the function anyway, so why not strict evaluation?

And both of these cases still assume we handle time in a common-sense fashion. According to relativity, time is intertwined with the other dimensions, and these dimensions in turn are an artifact of our particular universe, distinctive characteristics created at the Big Bang along with everything else. Therefore, it seems likely that give_me_the_whole_universe() would have to execute everything at once--more precisely, would have to execute outside of time--to accurately simulate the universe (or simulation thereof) we observe. Even functional programming has to carry out steps one after the other, requiring a universe with a time dimension, even if the logic to this order is different from that of traditional imperative paradigms.

In that case, why does the simulation need to be running all the time? Wouldn't one just ask the fancy, lambda-derived software to render whatever specific event one wanted to see?

Indeed we would. If you believe we are such a simulation, that implies the simulator is interested in some event that causally depends on today's history. I don't think this matters though.

And both of these cases still assume we handle time in a common-sense fashion. According to relativity, time is intertwined with the other dimensions, and these dimensions in turn are an artifact of our particular universe, distinctive characteristics created at the Big Bang along with everything else. Therefore, it seems likely that give_me_the_whole_universe() would have to execute everything at once--more precisely, would have to execute outside of time--to accurately simulate the universe (or simulation thereof) we observe. Even functional programming has to carry out steps one after the other, requiring a universe with a time dimension, even if the logic to this order is different from that of traditional imperative paradigms.

Causality is preserved under relativity, AIUI. You may not necessarily be able to say absolutely whether one event happened before or after another, but you can say what the causal relation between them is (whether one could have caused the other, or they are spatially separated such that neither could have caused the other). So there is no problem with using naive time in one's simulations.

Are you arguing that a simulatable universe must have a time dimension? I don't think that's entirely true; all it means is that a simulatable universe must have a non-cyclic chain of causality. It would be exceedingly difficult to simulate e.g. the Gödel rotating universe. But a universe like our own is no problem.
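A toy Haskell sketch of that point (the events and the combining rule are made up, nothing physical): as long as the causal graph is acyclic, lazy evaluation can compute any event from its causal parents without ever fixing a global time coordinate.

```haskell
import Data.Map (Map)
import qualified Data.Map as Map

-- Toy causal structure: each event lists its causal parents.
type Event = String

parents :: Map Event [Event]
parents = Map.fromList
  [ ("bigBang", []), ("a", ["bigBang"]), ("b", ["bigBang"]), ("c", ["a", "b"]) ]

-- Each event's value is computed from its parents' values. Because the graph
-- is acyclic, demanding "c" lazily forces "a", "b" and "bigBang", with no
-- global tick counter anywhere in sight.
values :: Map Event Integer
values = Map.mapWithKey combine parents
  where
    combine ev ps = fromIntegral (length ev) + sum [ values Map.! p | p <- ps ]
```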

The Numerical Platonist's construct is just the universe itself again. No problem there.

If you're not a numerical platonist, I don't see how unexecuted computations could be experienced.

And that leaves us with regular simulation.

(Incidentally, point 6 has a hidden assumption about the distribution of simulated universes)