If we live in a simulation, what does that imply about the world of our simulators and our relationship to them? [1]

Here are some proposals, often mutually contradictory, none stated with anything near certainty.

1. The simulators are much like us, or at least are our post-human descendants.

Drawing on some of the key points in Bostrom's Simulation Argument:

Today, we often simulate our human ancestors' lives, e.g., in Civilization. Our descendants will likely want to simulate their own ancestors, namely us, and they may have much-improved simulation technology that supports sentience. So, our simulators are likely to be our (post-)human descendants.

2. Our world is smaller than we think.

Robin Hanson has said that computational power will be dedicated to running only a small part of the simulation in full detail — the part which we are in. Other parts of the simulation will be run at a lower resolution. Everything outside our vicinity, e.g., outside our solar system, will be calculated planetarium-style, and not from the level of particle physics.

(I wonder what it would be like if we are in the low-res part of the simulation.)

3. The world is likely to end soon.

There is no a priori reason for a base-level (unsimulated) universe to flicker out of existence. In fact, it would merely add complexity to the laws of physics for time to suddenly end with no particular cause.

But a simulator may decide that they have learned all they wanted to from their simulation; or that acausal trade has been completed; or that they are bored with the game; and that continuing the simulation is not worth the computational cost.

The previous point was that the world is spatially smaller than we think. This point is that the world is temporally smaller than we hope.

4. We are living in a particularly interesting part of our universe.

The small part of the universe which the simulators would choose to focus on is the part which is interesting or entertaining to them. Today's video games are mostly about war, fighting, or various other challenges to be overcome. Some, like The Sims, are about everyday life, but even in those, the players want to see something interesting.

So, you are likely to be playing a pivotal role in our (simulated) world. Moreover, if you want to continue to be simulated, do what you can to make a difference in the world, or at least to do something entertaining.

5. Our simulators want to trade with us.

One reason to simulate another agent is to trade acausally with it.

Alexander Kruel's blog entry and this LW Wiki entry summarize the concept. In brief, agent P simulates or otherwise analyzes agent Q and learns that Q does something that P wants; P also learns that the symmetrical statement is true: Q can simulate or analyze P well enough to know that P likewise does something that Q wants.

This process may involve simulating the other agent for the purpose of learning its expected behavior. Moreover, for P to "pay" Q, it may well run Q -- i.e., simulate it.
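To make the P/Q setup concrete, here is a toy sketch, entirely my own illustration rather than anything from the linked treatments. Each agent is just quoted source code, and each decides whether to "pay" by running a depth-limited simulation of the other. The depth bound and the optimistic depth-0 convention are assumptions needed to keep the mutual simulation from regressing forever:

```python
# Toy model of mutual simulation: an agent "pays" iff a depth-limited
# simulation of its counterparty pays. The names and the depth-0
# convention are illustrative assumptions, not a standard formalism.

FAIRBOT_SRC = """
def agent(other_src, depth):
    if depth == 0:
        return True  # optimistic base case; a pessimistic one yields mutual refusal
    ns = {"SRC": other_src}
    exec(other_src, ns)                 # simulate the counterparty...
    return ns["agent"](SRC, depth - 1)  # ...and pay iff it would pay us
"""

DEFECTBOT_SRC = """
def agent(other_src, depth):
    return False  # never pays, regardless of what the other agent does
"""

def load(src):
    # Instantiate an agent from its source; SRC lets it quote itself.
    ns = {"SRC": src}
    exec(src, ns)
    return ns["agent"]

fairbot = load(FAIRBOT_SRC)
print(fairbot(FAIRBOT_SRC, 3))    # two fairbots trade: True
print(fairbot(DEFECTBOT_SRC, 3))  # a fairbot refuses a defectbot: False
```

Note that the base case matters: if the bottom of the simulation stack assumes "no payment," the refusal propagates all the way up, and two identical would-be traders end up refusing each other.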

So, if we live in a simulation, maybe our simulators are going to get some benefit from us humans, and we from them. (The latter will occur when we simulate these other intelligences.)

In Jaan Tallinn's talk at Singularity Summit 2012, he gave an anthropic argument for our apparently unusual position at the cusp of the Singularity. If post-Singularity superintelligences across causally disconnected parts of the multiverse are trying to communicate with each other by mutual simulation, perhaps for the purpose of acausal trade, then they might simulate the entire history of the universe from the Big Bang to find the other superintelligences in mindspace. A depth-first search across all histories would spend most of the time where we are, right before the point at which superintelligences emerge.

6. We are part of a multiverse.

Today, we run many simulations in our world. Similarly, says Bostrom, our descendants are likely to be running many simulations of our universe: a multiverse.

Max Tegmark's Level IV multiverse theory is motivated partly by the idea that, following Occam's Razor, simpler universes are more likely. Treating the multiverse as a computation, among the most likely computations is one that generates all possible strings/programs/universes.
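Here is a sketch (my own illustration; the machine that interprets the bitstrings is left abstract) of what "a computation that generates all possible strings/programs" can look like: enumerate every finite bitstring shortest-first, and dovetail execution so that every program eventually receives unbounded running time.

```python
from itertools import count, islice, product

def all_programs():
    # Every finite bitstring, shortest first. Under some fixed universal
    # machine, each string encodes a candidate program/universe.
    for n in count(0):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def dovetail(run, rounds):
    # Classic dovetailing: in round k, start program k and advance every
    # program started so far by one step. No single non-halting program
    # can starve the rest of compute time.
    started, trace = [], []
    programs = all_programs()
    for _ in range(rounds):
        started.append(run(next(programs)))
        for i, proc in enumerate(started):
            trace.append((i, next(proc)))
    return trace

print(list(islice(all_programs(), 5)))  # ['', '0', '1', '00', '01']
```

Here `run` stands in for whatever interpreter maps a bitstring to a step-by-step computation; the enumeration plus dovetailing is the whole trick by which one finite-at-every-moment process covers all programs.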

The idea of the universe/multiverse as computation is still philosophically controversial. But if we live in a simulation, then our universe is indeed a computation, and Tegmark's Level IV argument applies.

However, this is very different from the ancestor simulation described in points 1-3 above. That argument relies on the lower conditional complexity of the scenario -- we and our descendants are similar enough that if one exists, the other is not too improbable.

A brute-force universal simulation is an abstract possibility that specifies no role for simulators. In addition, if the simulators are anything like us, not enough computational power exists, nor would it be the most interesting possibility.

But we don't know what computational power is available to our simulators, what their goals are, nor even if their universe is constrained by laws of physics remotely similar to ours.

7. [Added] The simulations are stacked.

If we are in a simulation, then (a) at least one universe, ours, is a simulation; and (b) at least one world includes a simulation with sentience. This gives some evidence that being simulated or being a simulator is not too unusual. The stack may lead all the way down to the basement world, the ultimate unsimulated simulator; or else the stack may go down forever; or [H/T Pentashagon], all universes may be considered to be simulating all others.

Are there any other conclusions about our world that we can reach from the idea that we live in a simulation?

[1] If there is a stack of simulators, with one world simulating another, the "basement level" is the world in which the stack bottoms out, the one which is simulating and not simulated. This uses a metaphor in which the simulators are below the simulated. An alternative metaphor, in which the simulators "look down" on the simulated, is also used.

59 comments

The laws of physics in the basement level universe make it relatively easy to run lots (perhaps an infinite number) of simulations.

Interesting new book that talks about simulated universes running inside simulated universes, and how the physical attributes of universes could evolve over time through a mechanism similar to natural selection. Simulated universes with certain physical traits would tend to survive longer and produce more habitable environments, letting more advanced civilizations produce a higher number of simulated universes with an increased amount of those physical traits, and so on. So, over time, there would be a tendency for simulated civilizations to reside in universes with physics more suitable for life.

http://www.amazon.com/Computer-Simulated-Universes-Mark-Solomon/dp/0989832511/ref=sr_1_1?ie=UTF8&qid=1376689785&sr=8-1&keywords=computer+simulated+universes

If there is a stack of simulators, with one world simulating another, the "basement level" is the world in which the stack bottoms out, the one which is simulating and not simulated.

This short story comes to mind.

Actually, connecting this story to some earlier thoughts I had -- much after reading and forgetting about it previously -- it occurs to me that you ought to be able to use this setup for the kind of time-loop computation that Harry attempts, and fails at, in HPMoR. (Of course, I'm ignoring that the story already presupposes false magical powers of quantum computing to get to this point in the first place.)

Just set up some computation in the same way Harry does, preparing to send the result slightly into the past in the nested universe; the fixed point should appear in your universe just as you are about to send your own result to the nested universe.

(There seem to be a lot of potentially universe-destroying problems with this plan.)

[-]tgb

One conclusion from the setup in this story that wasn't drawn (perhaps because it's not sufficiently novel given the rest of what is occurring) is that if we ever get to this state, then whatever laws of physics we believed at the time of making the simulation are the correct and complete laws of physics for the universe.

Heh. I like it. Turning off the computer isn't a problem, though - just run the simulation out to aleph-0 years into the future before turning it off.

I have read that story before, but forgotten how amazing the details were.

Just for fun a recent paper:

Constraints on the Universe as a Numerical Simulation

Silas R. Beane, Zohreh Davoudi, Martin J. Savage (submitted 4 Oct 2012)

Observable consequences of the hypothesis that the observed universe is a numerical simulation performed on a cubic space-time lattice or grid are explored. The simulation scenario is first motivated by extrapolating current trends in computational resource requirements for lattice QCD into the future. Using the historical development of lattice gauge theory technology as a guide, we assume that our universe is an early numerical simulation with unimproved Wilson fermion discretization and investigate potentially-observable consequences. Among the observables that are considered are the muon g-2 and the current differences between determinations of alpha, but the most stringent bound on the inverse lattice spacing of the universe, b^(-1) >~ 10^(11) GeV, is derived from the high-energy cut off of the cosmic ray spectrum. The numerical simulation scenario could reveal itself in the distributions of the highest energy cosmic rays exhibiting a degree of rotational symmetry breaking that reflects the structure of the underlying lattice.

http://arxiv.org/abs/1210.1847

Excellent! Thank you.

The numerical simulation scenario could reveal itself in the distributions of the highest energy cosmic rays exhibiting a degree of rotational symmetry breaking that reflects the structure of the underlying lattice.

I swear I hadn't read that when writing this. Honestly.

Thanks! This, however, is the subject of another post, which would discuss evidence that we are in a simulation. That's in contrast to the question I raised: What does being in a simulation imply?

What evidence is there for us being in a simulation? I've never heard of humans wanting to "simulate" history. Civilization doesn't play even remotely like a simulator and never claimed to be. The information equivalent of an entire world would have to be converted into data storage for such a project, and what possible motive could there be for that? I'll follow Occam's Razor on this one: the more assumptions you make, the more likely you are to be wrong unless you have some sort of evidence.

What evidence is there for us being in a simulation?

Bostrom's trilemma is as follows:

  1. No civilization will reach a level of technological maturity capable of producing simulated realities.
  2. No civilization reaching aforementioned technological status will produce a significant number of simulated realities, for any of a number of reasons, such as diversion of computational processing power for other tasks, ethical considerations of holding entities captive in simulated realities, etc.
  3. Any entities with our general set of experiences are almost certainly living in a simulation.

The disjunct made up of the three statements seems fairly solid and many of us have lowish priors for the first two disjuncts, and so assign a highish probability to the third disjunct.

I've never heard of humans wanting to "simulate" history.

  • I want to simulate history.
  • I'm a human.
  • Therefore, some humans want to simulate history.

Civilization doesn't play even remotely like a simulator and never claimed to be. The information equivalent to an entire world would have to be converted into data storage for such a project and what possible motive could there be for that? I'll follow Occam's Razor on this one- the more assumptions you make, the more likely you are to be wrong unless you have some sort of evidence.

The rest of your comment seems incredibly...uninformed of the relevant literature, to say the least.

The disjunct made up of the three statements seems fairly solid and many of us have lowish priors for the first two disjuncts, and so assign a highish probability to the third disjunct.

The simulation argument makes many assumptions, like: "a non-simulated person and a simulated person have the same chance of subjective experienced existence" and also "we can actually count number of simulations meaningfully".

Which is really really problematic -- for example what's the difference between a single simulation double-checking every computation vs two simulations of the same thing? What's the difference between a simulation running on circuitry of 2nm width, vs two simulations running on circuitry of 1nm width each?

We don't really have a clue about how to count and compare probabilities of existence.

You want to run a model history, but you don't want to simulate it in enough detail that it actually contains people who experience history, if you have the slightest scrap of ethics.

Bostrom's trilemma is as follows:

  1. No civilization will reach a level of technological maturity capable of producing simulated realities.

  2. No civilization reaching aforementioned technological status will produce a significant number of simulated realities, for any of a number of reasons, such as diversion of computational processing power for other tasks, ethical considerations of holding entities captive in simulated realities, etc.

  3. Any entities with our general set of experiences are almost certainly living in a simulation.

The disjunct made up of the three statements seems fairly solid and many of us have lowish priors for the first two disjuncts, and so assign a highish probability to the third disjunct.

Reductio ad absurdum.

I clicked on the PDF and found the first few chapters to be rather childish, to be blunt. Assuming we can transform large amounts of matter into thinking material, then what conceivable reason would there be for an ancestor simulation to be made? Do you imagine that we could create simulations on our laptops? Please tell me how we will be able to conjure infinite information out of nothing. "We don't know that it can't happen" is hardly an answer and isn't really unprovable either.

Also, what would the point be in creating humans to be in the sim? Why not just have them be controlled by some AI and have them act as humans do (assuming that it isn't for "research purposes" which is ridiculous as well because a transhuman civilization of that level wouldn't actually need the information from it)?

  • I want to simulate history.
  • I'm a human.
  • Therefore, some humans want to simulate history.

This doesn't actually invalidate my statement. I don't see how it makes a difference, though, unless you can prove that a lot of people are very interested in creating ancestor simulations- enough to utilize large amounts of resources to achieve that end- or that one day you'll be able to create worlds on your personal computer.

The rest of your comment seems incredibly...uninformed of the relevant literature, to say the least.

The article held up Civilization as a precursor to future ancestor sims. I pointed out how ridiculous that was. I suppose Occam's Razor works if you believe in an infinite reality, which I'm not certain of.

Dwarf Fortress.

Please read the referenced articles by Bostrom. See simulation-argument.com

Well, I don't want to go through all of that just to find where it talks about my specific objections... but let me ask, how many people here believe this?

According to the 2011 survey results, the median reported probability for "We're living in a simulation" is 5%.

Today, we often simulate our human ancestors' lives, e.g., in Civilization. Our descendants will likely want to simulate their own ancestors, namely us, and they may have much-improved simulation technology that supports sentience. So, our simulators are likely to be our (post-)human descendants.

This sounds a lot like "If our simulators are like us then they are like us" or (more fairly) "if our simulators are like us in respect X, then they are also likely to be like us in respect Y,Z..."

Do we have any reason to think the simulators will be like us in respect to the issue of wanting to simulate beings like themselves?

I think it's more that if we are being simulated, the most likely simulating party is our future, if we take our OWN propensity to simulate our past as evidence. Human simulators are far more likely to simulate humans out of all possible intelligences than other intelligences are.

If our simulations were conscious, many of them would logically conclude that our world is inhabited by orcs and elves with magical powers.

Does that imply that the laws of the basement universe are even more 'boring' than our own? They decided to spice it up a little with 'magic' like... umm... well, I can't think of any disposable rules. Maybe the weak force, to make nukes possible. But that doesn't seem very fun.

Thanks. There was also a comic in which the people of a world realize that they are in a simulation, a video game. They all refuse to fight for the simulators' entertainment. The last panel shows the teenager playing the game, who says "this is boring" and turns it off. Do you have a link for that one?

Some people consider games like WoW boring, so... it depends. Even if the basement universe is more boring for its inhabitants, it does not need to be more boring for us.

Maybe our second law of thermodynamics is a limitation of the game. A time limit to prevent the basement universe's basement dwellers from literally spending eternity playing. :D

If our simulators are human, that implies that their universe has laws of physics similar to our own. But if we're living in a simulation, I think it's more plausible that our simulators exist in a world operating under different laws of physics (e.g. they live in a universe which is more amenable to our-universe-scale simulation.) So I think other factors are in play which could lessen the probability that we are being simulated by humans, let alone our future.

[-]TimS

Our simulators want to trade with us.

Acausal trade confuses me. Is the following right?

Humanity should simulate other agents who (a) would value being simulated and (b) would simulate us

Because it isn't clear to me that humanity is the type of agent that would value unconnected copies being simulated. (this is distinct but dependent on the assertion that simulated humans are entitled to moral consideration regardless of whether actual humans are sufficiently causally connected to the simulated humans).

The simulators are much like us, or at least are our post-human descendants.

No, they probably have a much greater desire to run simulations than we do or ever will.

Or maybe just greater means. I imagine many humans would run universe-scale simulations, if they had the means.

Today, we often simulate our human ancestors' lives, e.g., in Civilization. Our descendants will likely want to simulate their own ancestors, namely us, and they may have much-improved simulation technology that supports sentience. So, our simulators are likely to be our (post-)human descendants.

  • People like us like to simulate people like themselves.
  • Therefore, if the simulators are like us, then they simulate people like them.
  • They simulate us.
  • Therefore, if the simulators are like us, then we are like them.

That argument seems a bit empty, to say the least.

Now I suppose your argument is more like "We exist, therefore our descendants are more likely to exist than the average mind; and we want to simulate stuff, therefore our descendants are more likely to be simulators than the average mind. Therefore, a simulator is more likely to be one of our descendants than to be any random mind.", which is valid. But I deny the premise; we like to simulate some things about our ancestors, but not our ancestors themselves.

Our simulators clearly don't care about morality, for general reasons of theodicy. If they are like our descendants, then our descendants are likely to be unspeakably evil. This implies that we should destroy humanity now, to avoid birthing a race that will simulate matryoshka atrocities.

Thanks for starting this discussion!

[-][anonymous]

If we are in a simulation, then (a) at least one universe, ours, is a simulation; and (b) at least one world includes a simulation with sentience. This gives some evidence that being simulated or being a simulator is not too unusual, and the stack may lead all the way down to the basement world, the ultimate unsimulated simulator.

...

[1] If there is a stack of simulators, with one world simulating another, the "basement level" is the world in which the stack bottoms out, the one which is simulating and not simulated. This uses a metaphor in which the simulators are below the simulated. An alternative metaphor, in which the simulators "look down" on the simulated, is also used.

Or maybe there isn't a basement universe. Maybe the stack never bottoms out. Think about it: do you actually need one of those, or are we just assuming we do because we assumed we lived in an unsimulated universe for most of our lives?

The basement universe, if it exists, seems likely to have certain odd traits.

If the stack never bottoms out, I would expect by Occam's razor that we would eventually find some evidence that we are not the top, and that there is probably no top, ever.

This would, as far as I can tell, imply various uncomfortable things, and would require getting over the deep-seated objections some people have against any conception of the universe that contains the concept "infinity" anywhere in it.

I was expecting a post along the lines of "what are the tests that would falsify the hypothesis that we're in a simulated world?"

[-]avr

I recently came across a pertinent thought experiment. It's pretty entertaining (it's a short story) and it really makes you think.

[This comment is no longer endorsed by its author]

If we are in a simulation, then (a) at least one universe, ours, is a simulation; and (b) at least one world includes a simulation with sentience. This gives some evidence that being simulated or being a simulator is not too unusual, and the stack may lead all the way down to the basement world, the ultimate unsimulated simulator.

In a Tegmark level IV multiverse there is no basement universe but every universe is simulated by some other universe.

[Added] The simulations are stacked.

Just a few question for this proposal:

  • Wouldn't infinitely stacked simulations lag, and further increase the cost of supporting the simulation at the level below?

  • Is it more likely that, if we are in a simulation, we are "close" to the "basement level" rather than far out in an infinite stack, because the basement level would find it too costly to run an infinitely stacking simulation?

The simulators are much like us, or at least are our post-human descendants.

I estimate the probability of this particular proposal to be very low. We might just be the by-product of, or one of many civilizations in, a simulation. Humanity and its time of existence might just be a blip on the way to the real reason for the simulation -- a civilization that will exist billions of years from now. In other words, perhaps we are merely the extras in a movie that the audience has come to see for its stars.

Everything outside our vicinity, e.g., outside our solar system, will be calculated planetarium-style, and not from the level of particle physics.

If the physics on which ultra-high-energy cosmic ray sources run is not the same physics on which we run but only an approximation thereof, we might eventually notice weird things with them.

The way you typically converge an adaptive simulation is to start with a cheap coarse-grained approximation, then:

  1. Run your simulation.
  2. Check to see if it was accurate enough on the whole for you.
  2b. If so, quit.
  3. Do some a posteriori error estimation to find out where the coarseness was most damaging to your accuracy.
  3b. Replace the coarse discretization in those locations (or time steps, models, etc.) with a more refined version.
  4. Go back to step 1.
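As a minimal illustration of that loop — a hypothetical 1-D toy of my own, with `f` standing in for the expensive fine-grained physics — one can integrate a function over adaptive cells, estimate each cell's error a posteriori, and split only the worst cell on each pass:

```python
def f(x):
    return x * x  # stand-in for the expensive fine-grained "physics"

def cell_error(a, b):
    # A posteriori estimate (step 3): compare the coarse midpoint-rule
    # value on [a, b] against a finer two-cell evaluation of the same span.
    mid = (a + b) / 2
    coarse = f(mid) * (b - a)
    fine = f((a + mid) / 2) * (mid - a) + f((mid + b) / 2) * (b - mid)
    return abs(fine - coarse)

def adapt(tol=1e-4, max_iters=100):
    cells = [(0.0, 1.0)]
    for _ in range(max_iters):                        # step 1: run
        errs = [cell_error(a, b) for a, b in cells]
        if max(errs) < tol:                           # steps 2/2b: good enough? quit
            break
        worst = max(range(len(cells)), key=errs.__getitem__)
        a, b = cells.pop(worst)                       # steps 3/3b: refine where the
        mid = (a + b) / 2                             # estimated error is largest
        cells += [(a, mid), (mid, b)]
    return cells
```

Only regions with a large estimated error ever get refined; a simulator applying the same logic at universe scale could leave most of space at the coarse level indefinitely.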

I'm not sure how this analogy affects astrophysicists' decision making processes, though. After seeing odd results, what do you say to yourself (and any hypothetical omniscient listeners) in a loud voice?

"Wow, that certainly looked wrong! Clearly something funny is going on which requires more investigation!" (saving the entire universe from fate 2b) or "Well, that's close enough for me! Nothing strange or erroneous going on there!" (saving our local chunk of universe from being refined-into-something-else via fate 3b)

Personally I would say the latter, but historically the UHECR community has been prone to say things like the former. (E.g., when AGASA failed to detect the GZK cutoff, everyone was like “there must be new physics allowing particles to evade the cutoff!”, as opposed to “there must be something wrong with the experiment” -- but given that all later experiments have seen a cutoff, it's most likely that AGASA did indeed do something wrong. OTOH I can't recall anyone making “planetarium”-like hypotheses, except jokingly (I suppose).)

EDIT: Also, I can't count the times people have claimed to detect an anisotropy in the UHECR arrival direction distribution and then retracted them after more statistics was available. Which doesn't surprise me, given the badly unBayesian ad-hockeries (to borrow E.T. Jaynes' term) they use to test them. And now, I'll tap out for, ahem, decision-theoretical reasons.

How confident are you that we would notice?

If the heuristics of the simulator are good enough, it might just do something akin to detecting our attempts at analyzing low-res data, and dynamically generate something relevant and self-consistent.

Or, the simulation might be paused while the system or the engineers come up with a way to resolve the problem, which to us would still appear as if the whole thing had all been in the same resolution all along, since whatever they change while we're paused will happen in zero time for us.

How confident are you that we would notice?

Honestly, not much, at least in the foreseeable future -- data from cosmic ray experiments are way too noisy to discriminate between source models. (We've been able to rule out the hypothesis that a sizeable fraction of UHECRs are decay products of as-yet-unknown extremely heavy particles, but that's pretty much it.) But see this. (I've tried a dozen times to download the paper and failed -- are the Simulators messing with me? Aaaargh.)

Ah, I've read that article before. From what I understood, they essentially conclude "Here's a way we could tell the difference if we were simulated with system X. However, it's unlikely that we would be simulated with system X." without giving all that much evidence concerning other possible simulation systems.

Personally, I hold the belief that if 1) we are a simulation and 2) the simulation will not be stopped at some near point in time, then we will eventually discover the fact that we are running in a simulated universe and begin learning about the "outside", by reasoning that:

  • Running simulations of other universes at a rate slower than one's own universe defeats the purpose of most plausible reasons to run the simulation.
  • If we are running faster than the Simulators, then our own intelligence and information capabilities will eventually exceed theirs, which, if also given that they are aware of our existence, is likely to be part of the very purpose of the simulation.
  • If given that we become more intelligent than them, it becomes increasingly likely that we will outsmart (perhaps accidentally) any safety measures they might take or heuristics built into the program, since they won't be able to understand what we're doing anymore (presumably).

However, I doubt we'll find this by noticing any discrepancy in the resolution of the simulation in different parts of it.

If the heuristics of the simulator are good enough, it might just do something akin to detecting our attempts at analyzing low-res data, and dynamically generate something relevant and self-consistent.

In other words, maybe the simulator is doing the equivalent of ray-tracing. When a ray of light impacts the simulated Earth, the process that generated it is simulated in detail only when a bit of Earth becomes suitably entangled with the outcome - but not if the ray serves to merely heat up the atmosphere a bit.

[-]TimS

Our simulators want to trade with us.

Acausal trade confuses me. Is the following right?

Humanity should simulate other agents who (a) would value being simulated and (b) would simulate us

Because it isn't clear to me that humanity is the type of agent that would value unconnected copies being simulated. (this is distinct but dependent on the assertion that simulated humans are entitled to moral consideration regardless of whether actual humans are sufficiently causally connected to the simulated humans).

[This comment is no longer endorsed by its author]

Tim, it confuses me too, but I don't think that that summary is right. Instead: Humans should give another agent what it wants if it would give humans what we want in other conditions (or: in another part of the multiverse).

An "agent" here is just a computer program, an algorithm. "Paying" it in an acausal trade may well mean running it (simulating it).

[-]TimS

Ok, Tile-the-Universe-with-Smiles should make some paperclips because Clippy will put smiles on something. But both agents are so far apart that they can't empirically verify the other agent's existence.

So, this makes sense if Clippy and Tiling can deduce each other's existence without empirical evidence, and each one thinks this issue is similar enough to Newcomb's problem that they pre-commit to one-boxing (aka following through even if they can't empirically verify follow-through by the other party).

But treating this problem like Newcomb's instead of like one-shot Prisoner's dilemma seems wrong to me. Even using some advanced decision theory, there doesn't seem to be any reason either agent thinks the other is similar enough to cooperate with. Alternatively, each agent might have some way of verifying compliance - but then labeling this reasoning "acausal" seems terribly misleading.


Internet connection wonkiness = inadvertent double post. Sorry about that, folks.

Umm, pretty much all of the advanced decision theories talked about here do cooperate on the prisoner's dilemma. In fact, it's sometimes used as a criterion, I'm pretty sure.

[-]TimS

The advanced decision theories cooperate with themselves. They also try to figure out if the counter-party is likely to cooperate. But they don't necessarily cooperate with everyone - consider DefectBot.

This was too obvious for me to notice the assumption.

@TimS, this is an important objection. But rather than putting my reply under this downvoted thread, I will save it for later.

Because the post was retracted, it will not be downvoted any further, so you're safe to respond.