
TAG15d2-2

"it" isn't a single theory.

The argument that Everettian MWI is favoured by Solomonoff induction is flawed.

If the program running the SWE outputs information about all worlds on a single output tape, they are going to have to be concatenated or interleaved somehow. Which means that, to make use of the information, you have to identify the subset of bits relating to your world. That's extra complexity which isn't accounted for, because it's being done by hand, as it were.
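To make the accounting concrete, here is a toy sketch (my own illustration, not part of the original argument; the function names are made up): branch bit-streams interleaved on one tape can only be read back out given a branch index, and specifying that index costs extra bits.

```python
import math

def interleave(branches):
    """Write the bit-streams of several branches round-robin onto one tape."""
    length = len(branches[0])
    return [branches[b][i] for i in range(length) for b in range(len(branches))]

def extract(tape, branch_index, num_branches):
    """Recover one branch's bits -- but only given its index."""
    return tape[branch_index::num_branches]

branches = [[0, 0, 0, 0], [0, 1, 0, 1], [1, 1, 1, 1]]
tape = interleave(branches)
assert extract(tape, 1, len(branches)) == branches[1]

# The index itself takes about log2(num_branches) bits to specify --
# complexity the "MWI is simpler" argument leaves unaccounted for.
index_cost_bits = math.ceil(math.log2(len(branches)))
```

The point is just that the extraction step needs information (the index) that the program itself never paid for.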

TAG5d20

Huh? The whole point of the Born rule is to get a set of ordinary probabilities, which you can then test frequentistically, over a run of experiments. Quantum mechanical measure -- amplitude -- isn't ordinary probability, but it's the thing you put into the Born rule, not the thing you get out of it. And it has its own role, which is explaining how much contribution each component state makes to a coherent superposition.

ETA

There is a further problem in interpreting the probabilities of fully decohered branches. (Calling them Everett branches is very misleading -- a clear theory of decoherence is precisely what's lacking in Everett's work.)

Whether you are supposed to care about them ethically is very unclear, since it is not clear how utilitarian style ethics would apply, even if you could make sense of the probabilities. But you are not supposed to care about them for the purposes of doing science, since they can no longer make any difference to your branch. MWI works like a collapse theory in practice.

I always thought that in naive MWI what matters is not whether something happens in an absolute sense, but whether Born measure is concentrated on branches that contain good things instead of bad things.

It's tempting to ethically discount low-measure decoherent branches in some way, because that most closely approximates conventional single-world utilitarianism -- that is something "naive MWI" might mean. However, one should not jump to the conclusion that something is true just because it is convenient. And of course, MWI is a scientific theory, so it doesn't come with built-in ethics.

The alternative view starts with the question of whether a person in a low-measure world still counts as a full person. If they do not, is that because they are a near-zombie, with a faint consciousness that weighs little in a hedonic utilitarian calculus? If they are not such zombies, why would they not count as full persons -- the standard utilitarian argument that people in far-off lands are still moral patients seems to apply. Of course, MWI doesn't directly answer the question about consciousness.

(For example, if I toss a quantum fair coin n times, there will be 2^n branches with all possible outcomes.)
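The contrast between branch count and Born measure can be made numerical with a biased rather than fair coin (a toy calculation of my own, not from the comments above): all 2^n outcome sequences exist as branches, but the Born measure is concentrated on the typical ones.

```python
from math import comb

n, p = 20, 0.9  # 20 tosses, Born weight 0.9 per toss for the "good" outcome

total_branches = 2 ** n
# Branches with at least 15 good outcomes, counted and Born-weighted:
good_branches = sum(comb(n, k) for k in range(15, n + 1))
good_measure = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(15, n + 1))

# Branch *count* says mostly-good histories are a small minority (~2%);
# Born *measure* says they carry almost all the weight (~99%).
print(good_branches / total_branches)  # ~0.021
print(good_measure)                    # ~0.989
```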

If "naive MWI" means the idea that any elementary interaction produces decoherent branching, then it is wrong for the reasons I explain here. Since there are some coherent superpositions, and not just decoherent branches, there are cases where the Born rule gives you ordinary probabilities, as any undergraduate physics student knows.

(What is the meaning of the probability measure over the branches if all branches coexist?)

It's not the existence, it's the lack of interaction/interference.

TAG8d1-2

By "equally" I meant:

"in the same ways (and to the same degree)".

If you actually believed in florid many worlds, you would end up pretty insouciant, since everything possible happens, and nothing can be avoided.

TAG8d20

Same way you know anything. "Sharp valued" and "classical" have meanings, which cash out in expected experience.

TAG8d30

This question doesn’t really make sense from a naturalistic perspective, because there isn’t any causal mechanism that could be responsible for the difference between “a version of me that exists at 3pm tomorrow, whose experiences I should anticipate experiencing” and “an exact physical copy of me that exists at 3pm tomorrow, whose experiences I shouldn’t anticipate experiencing”.

There is, and it's multi-way splitting, whether through copying or many-worlds branching. The present you can't anticipate having all their experiences, because experiences are had one at a time. They can all look back at their memories and conclude that they were you, but you can't simply reverse that and conclude that you will be them, because the set-up is asymmetrical.

Scenario 1 is crazy talk, and it’s not the scenario I’m talking about. When I say “You should anticipate having both experiences”, I mean it in the sense of Scenario 2.

Scenario 2: “Two separate screens.” My stream of consciousness continues from Rob-x to Rob-y, and it also continues from Rob-x to Rob-z. Or, equivalently: Rob-y feels exactly as though he was just Rob-x, and Rob-z also feels exactly as though he was just Rob-x (since each of these slightly different people has all the memories, personality traits, etc. of Rob-x — just as though they’d stepped through a doorway).

But that isn't an experience. It's two experiences. You will not have an experience of having two experiences. Two experiencers will experience having been one person.

If I expect to be uploaded tomorrow, should I care about the upload in the same ways (and to the same degree) that I care about my future biological self?

  1. Yeah.

Are you going to care about 1000 different copies equally?

ETA.

The particular *brain states* look no different in the teleporter case than if I’d stepped through a door; so if there’s something that makes the post-teleporter Rob “not me” while also making the post-doorway Rob “me”, then it must lie outside the brain states, a Cartesian Ghost.

There's another option: door-Rob has physical continuity. There's an analogy with the identity-over-time of physical objects: if someone destroyed the Mona Lisa, and created an atom-by-atom duplicate some time later, the duplicate would not be considered the same entity (numerical identity).

TAG8d20

I am talking about the minimal set of operations you have to perform to get experimental results. A many-worlder may care about other branches philosophically, but if they don't renormalise, their results will be wrong, and if they don't discard, they will do unnecessary calculation.

TAG8d20

Err...physicists can make them in the laboratory. Or were you asking whether they are fundamental constituents of reality?

TAG9d62

The claim that humans are at least TMs is quite different from the claim that humans are at most TMs. Only the second is computationalism.

TAG9d00

Meanwhile the many-worlds interpretation suffers from the problem that it is hard to bridge to experience,

Operationally, it's straightforward: you keep "erasing the part of the (alleged) wavefunction that is inconsistent with my indexical observations, and then re-normalizing the wavefunction"...all the time murmuring under your breath "this is not collapse...this is not collapse".

(Lubos Motl is quoted making a similar comment here https://www.lesswrong.com/posts/2D9s6kpegDQtrueBE/multiple-worlds-one-universal-wave-function?commentId=8CXRntS3JkLbBaasx)
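For concreteness, the discard-and-renormalize step can be sketched in a few lines (a minimal illustration of my own; the function name is made up) -- formally the same update a collapse theory prescribes:

```python
from math import sqrt

def update_on_observation(amplitudes, consistent):
    """Zero the amplitudes of branches inconsistent with the observation,
    then renormalize the remainder to unit norm."""
    psi = [a if keep else 0.0 for a, keep in zip(amplitudes, consistent)]
    norm = sqrt(sum(a * a for a in psi))
    return [a / norm for a in psi]

# Equal superposition over four branches; observation rules out the last two.
psi = [0.5, 0.5, 0.5, 0.5]
post = update_on_observation(psi, [True, True, False, False])
# post is now [0.7071..., 0.7071..., 0.0, 0.0]: surviving branches rescaled
# so their Born weights again sum to 1.
```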

TAG10d20

That claim is unjustified and unjustifiable.

Nothing complex is a black box, because it has components, which can potentially be understood.

Nothing artificial is a black box to the person who built it.

An LLM is, of course, complex and artificial.

Everything is fundamentally a black box until proven otherwise.

What justifies that claim?

Our ability to imagine systems behaving in ways that are 100% predictable, and our ability to test systems so as to ensure that they behave predictably.

I wasn't arguing on that basis.
