Spoiled Discussion of Permutation City, A Fire Upon The Deep, and Eliezer's Mega Crossover
Permutation City is an awesome novel that was written in 1994. Even if the author, Greg Egan, used a caricature of this community as a bad guy in a more recent novel, his work is still a major influence on a lot of people around these parts who have read it. It dissolves so many questions around uploading and simulation that it's hard for someone who has read the book to talk about simulationist metaphysics without wanting to reference the novel... but doing that runs into constraints imposed by spoiler etiquette.
So go read Permutation City if you haven't read it already because it's philosophically important and a reasonably fun read.
In the meantime, if you haven't already, you should also read A Fire Upon The Deep by Vernor Vinge (of "singularity" coining fame), and then read Eliezer's fan fic The Finale of the Ultimate Meta Mega Crossover, which references both works in interesting ways, makes substantive philosophical points, and doesn't take long to read.
In the comments below there will be discussion that has spoilers for all three works.
The distinctive unifying idea of special relativity is that the geometry of the universe is that of Minkowski space. The reality of things is a four-dimensional partially ordered set of point events which can be divided into hypersurfaces of equal coordinate time in a variety of ways. Two events may be simultaneous in one description of a physical process, but if they are spatially separated, then we can transform to another description in which they are no longer simultaneous. Simultaneity is a coordinate artefact and there is no universal time.
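The frame-dependence of simultaneity can be made explicit with the standard Lorentz transformation (this is textbook special relativity, not anything specific to the discussion):

```latex
t' = \gamma\left(t - \frac{v x}{c^2}\right),
\qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```

So for two events with $\Delta t = 0$ but $\Delta x \neq 0$ in one frame, a frame boosted by velocity $v$ assigns $\Delta t' = -\gamma v \,\Delta x / c^2 \neq 0$: simultaneous in one coordinate system, not in another.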
Wavefunctions are always defined with respect to a particular time-slicing, a particular foliation of space-time into hypersurfaces. All a Lorentz transform can do is change the tilt of those hypersurfaces with respect to the time axis of your coordinate system; you're still stuck with a particular preferred foliation as the ontological base of your wavefunction's time evolution. If you reify the wavefunction, you end up reifying your coordinate system as well.
That is the substance of MWI's problem with relativity. A physicist might call it "ontological gauge-fixing". In theories with a gauge symmetry, you're allowed to work in a particular gauge for the purposes of calculation, but the end product, the quantitative predictions, must be gauge-invariant. If they show a dependence on your choice of gauge, you've done something wrong.
Here, we're trying to construct an ontology for quantum theory. The ontological significance of special relativity for time is that universal time is a coordinate artefact, yet in MWI ontologies, we end up with an objective universal time. That's evidence of a mistake, and the mistake is that you are treating a wavefunction as an objectively existing thing, when it is just a frame-dependent tabulation of probability amplitudes.
Versions of MWI which fix a universal time and a particular Hilbert-space basis (such as position) may not be believable, but at least they are clear and explicit about what it is that is supposed to exist. When I think of supposedly more sophisticated approaches to MWI, where That Which Exists is just the universal wavefunction, in its sublime purity, beyond all specific choices of coordinate and basis, and where the contingent particularities of individual worlds are supposed to be logically implicit in its structure somehow... my overall impression is of utterly contemptible vagueness and handwaving, often suffused with a mystic adoration of the Big Psi.
Let's pretend for a moment that I don't have an issue with the idea that the world only exists "approximately", and that the ultimate objective reality is some sort of wavefunction. That's your hypothesis, fine. Can you tell me the nature of this approximateness? Are we taking some sort of limit? If so, can you be more specific? Your original words were
I asked for more details and you linked to the MWI FAQ - which I have seen many times before, and which mostly consists of word-pictures. That is not enough, although a word-picture can be a fine way to fool yourself into thinking that your ideas make sense.
I am going to pose a challenge to you. The issue which is ultimately at stake: whether a "Many Approximate Worlds Interpretation" even makes sense as a theory. However, we won't try to resolve that directly. The actual challenge is much simpler. I ask merely that you exhibit a mathematically exact definition of an "approximate world". This is theoretical physics we're discussing, it uses exact mathematics, and if the concept is genuinely relevant, it will have a mathematical formulation and not just a verbal one.
Since these approximate worlds are supposed to be contained implicitly within the universal wavefunction, that is the relationship which I am expecting to see formally elucidated. I don't expect to see statements about a theory of everything; supposedly all of this can be explained at the level of nonrelativistic quantum mechanics. I just want to see an exact statement of what the relationship between world and wavefunction is supposed to be.
If you want to point to a particular proposition in a paper somewhere, fine. Just give me more to work with than shapeless verbal formulations!
There are many other things to discuss in your comment, we can return to those later. You should also feel free to delay responding to what I said about relativity if doing so would interfere with this challenge. What I want from you now, more than anything else, is an exact definition of what a "world" or an "approximate world" is, stated in the same mathematical language that we use to talk about everything else in quantum theory. If you can't tell me what you mean, we have nothing to talk about.
I appreciate that you chose to "raise the bar" here.
I agree that when we're seeking to 'interpret' a theory like QFT, which is Lorentz-invariant, we ought to postulate an ontology which respects this symmetry. However, from a certain perspective, I think it's obvious that it must be possible to paint a mathematical picture of a Lorentz-invariant "many worlds type" theory.
Let's assume that somehow or other, it's possible to develop QFT from axiomatic foundations, and in such a way that when we take the appropriate low-velocity, low-energy limit, we recover "wavefunctions" and the Schrödinger equation exactly as they appear in QM. As far as I know, QFT and (non-relativistic) QM are, broadly speaking, cut from the same cloth: both of them make predictions by adding up quantum amplitudes for various possibilities and then interpreting the square-norms as probabilities. Neither of them stipulates that there is a fact of the matter about which slit the electron went through, unless you augment them with 'hidden variables'. Neither of them can define what counts as a "measurement". In both theories, the only strictly correct way to compute the probabilities of the results of a second measurement, in advance of a first, is to do a calculation that takes into account all of the possible ways the first measuring device might 'interfere' with the stuff being measured. In practice we don't need to do this - in any remotely reasonable experiment, when some of the degrees of freedom become entangled with a macroscopic measuring device, we can treat them as having "collapsed" and assumed determinate values. But in theory, there's nothing in QM or QFT to rule out macroscopic superpositions (e.g. you could in principle do a "two-slit experiment" with people rather than electrons).
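The "add amplitudes, then square" recipe, and the way entanglement with a which-path record kills interference, can be sketched in a few lines of numpy. This is a toy calculation with illustrative, unnormalized amplitudes, not a model of any particular experiment:

```python
import numpy as np

# Toy two-slit model: complex amplitudes for reaching one detector
# point via slit 1 and slit 2 (illustrative values, not normalized).
a1 = (1 / np.sqrt(2)) * np.exp(1j * 0.0)
a2 = (1 / np.sqrt(2)) * np.exp(1j * np.pi / 3)

# No which-path record: amplitudes add first, then we take the
# square-norm, so a cross term 2*Re(a1 * conj(a2)) appears.
p_coherent = abs(a1 + a2) ** 2

# Which-path info copied into orthogonal environment states |e1>, |e2>:
# the cross term vanishes and probabilities add classically.
p_decohered = abs(a1) ** 2 + abs(a2) ** 2

print(p_coherent, p_decohered)
```

The difference between the two probabilities is exactly the interference term, which is why entangling the system with a macroscopic "measuring" degree of freedom makes it look as if a collapse occurred.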
The reason I'm pointing all of these things out is to motivate the following claim: A 'no collapse, no hidden variable' interpretation of QFT is every bit as natural as a 'no collapse, no hidden variable' interpretation of QM. By 'natural' I mean that unless you deliberately 'add something' to the mathematics, you won't get collapses or hidden variables. (The real problem for Everettians is that (prima facie) you won't get Born probabilities either! But we can talk about that later.)
Next, I claim that a 'no collapse, no hidden variable' theory (taken together with the metaphysical assumption that 'the entities and processes described are real, not merely instruments for computing probabilities') is obviously a 'many worlds' theory. This is because it implies that the man over there, listening to a Geiger counter, is constantly splitting into superpositions. Although his superposed selves are overwhelmingly likely to carry on their lives independently of each other, there's no limit to how many of them we may need to take account of in order to get our predictions right.
Finally, since the predictions of QFT are Lorentz-invariant, if it can be given a mathematical foundation at all then there must be some way to give it a Lorentz-invariant mathematical foundation.
Putting all of this together, I have my claim.
I'm "cheating", you'll say, because I haven't done any hard work - I haven't told you how one can have a Lorentz-invariant ontology of things resembling "wavefunctions". Regretfully, I'm not able to do that - if I could, I would - but for present purposes I don't think it's necessary.
Personally I'd just say that That Which Exists is whatever it is that, when supplemented with co-ordinates and a position basis, and passing to the nonrelativistic limit, looks like a wavefunction. I don't know whether that has to take the form of a 'universal wavefunction'. (By the way, I don't think the position basis is an 'arbitrary choice' in the same way that a foliation of Minkowski space is arbitrary. This is because of the analogy with classical mechanics, where the elements of a basis correspond to points in a phase space, and changing basis is like changing co-ordinates in the phase space. But a phase space is a symplectic manifold, not an unstructured set. I'm guessing that in quantum theory too there must be some extra structure in the Hilbert space which implies that some bases (or more generally, some Hermitian operators) are "physical" and others not.)
Anyway, I don't really have any idea what a maximally elegant mathematical presentation of the Underlying Reality would look like. I just think it's misleading to say "MWI is inconsistent with special relativity" when what you actually mean is that "no-one has yet formulated an axiomatic presentation of QFT". Because the very moment we have the latter, we will (immediately, effortlessly) have a version of MWI that is consistent with SR, simply by making the same interpretative 'moves' that Everett made. (This is a point which the MWI FAQ tries to drive home.) And if we cannot put QFT on firm mathematical foundations, then all metaphysically realistic interpretations will suffer equally badly.
Finally, let's switch briefly to the outside view. I've never before seen a critique of MWI made along the lines that it presupposes a notion of absolute simultaneity. Now that could be because I just haven't been looking hard enough (though I have read a fair few attacks on MWI), or because I haven't understood what I've been reading (though I think I have, at least in outline); it could also be because almost everyone who writes about this is distracted by their own agendas and pet theories, and is missing the more 'obvious' line of attack right under their noses; but I think it's much more likely that you've misunderstood the relation between MWI and the QM-style wavefunction, which isn't as close as you think.
Actually, the overall impression I had prior to this conversation with you is that compatibility with SR is one of MWI's greatest strengths (especially when compared with Bohm's theory).
Let's move on.
I'd have to do a lot of reading before I could answer this properly. From what I've seen so far, the critical concepts seem to be einselection and decoherence. The universe can be thought of as a collection of interacting 'systems' (such that the Hilbert space of the whole universe is the tensor product of the Hilbert spaces corresponding to each system). When a system interacts with its environment, its reduced density matrix changes exactly as if the environment were performing a 'measurement' on it. However, the environment is much more likely to perform certain 'measurements' than others. Those states which are sufficiently near to being eigenstates of a 'typical measurement' thus have a degree of stability not shared by an arbitrary superposition of such states. In this way, the environment selects a so-called "measurement basis" (which consists of what we can call "classical states").
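The einselection story can be made concrete in a minimal qubit-plus-environment sketch. Here the environment "measures" the system via a CNOT interaction - a standard toy model of decoherence, chosen for illustration, not anything asserted in the discussion above:

```python
import numpy as np

# System qubit in an equal superposition; one environment qubit in |0>.
psi_sys = np.array([1, 1]) / np.sqrt(2)
env0 = np.array([1, 0])

# Pre-interaction joint state: |psi> (x) |0>.
joint = np.kron(psi_sys, env0)

# The environment copies which-state information out of the system:
# a CNOT with the system as control, environment as target.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
joint = CNOT @ joint  # result: (|00> + |11>) / sqrt(2)

# Reduced density matrix of the system: form the full density matrix,
# reshape to (s, e, s', e') indices, and trace out the environment.
rho = np.outer(joint, joint.conj()).reshape(2, 2, 2, 2)
rho_sys = np.trace(rho, axis1=1, axis2=3)

print(np.round(rho_sys, 3))
```

After the interaction the system's off-diagonal (coherence) terms are exactly zero in the computational basis: that basis has been einselected, and the reduced state is indistinguishable from a classical mixture.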
A 'world' is just a component of the universe's wavefunction, when decomposed with respect to this "measurement basis". As I understand it, this notion of 'world' coincides with (or at least is closely related to) the thermodynamic concept of a 'macrostate'. In particular, one can no more (and no less) give a mathematically rigorous definition of 'world' than one can of 'macrostate'. This is just an assumption on my part, but I don't think the "measurement basis" is strictly speaking a basis - it's more like a decomposition of the Hilbert space as a direct sum of subspaces which are themselves fairly large. The indeterminacy ("approximateness") in the notion of world arises from the indeterminacy of what counts as a 'typical' interaction between systems. There may be no "fact of the matter" about whether or not two subspaces ought to be 'merged together' (i.e. whether or not a particular property somewhere between the 'micro' and 'macro' scales deserves to be called 'macroscopic enough' to make the difference between two macrostates).
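The "world as a component of the wavefunction" picture, with the Hilbert space split into a direct sum of subspaces, can be sketched with projectors. The four-dimensional space and its two-way split are a hypothetical coarse-graining invented purely for illustration:

```python
import numpy as np

# Toy 4-dimensional Hilbert space split into two "macrostate" subspaces:
# span{|0>, |1>} and span{|2>, |3>} (a hypothetical coarse-graining).
P1 = np.diag([1.0, 1.0, 0.0, 0.0])
P2 = np.diag([0.0, 0.0, 1.0, 1.0])

psi = np.array([0.5, 0.5, 0.5, 0.5])  # normalized toy "universal" state

# Each 'world' is the component of psi lying in one subspace;
# its weight is the square-norm of that component.
branch1, branch2 = P1 @ psi, P2 @ psi
w1 = np.vdot(branch1, branch1).real
w2 = np.vdot(branch2, branch2).real

print(w1, w2)  # complementary projectors, so the weights sum to 1
```

The "approximateness" of a world then corresponds to the arbitrariness in where the subspace boundaries are drawn: merging or splitting subspaces changes which components count as distinct worlds, without changing the underlying vector.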