Let's consider a scenario.

You've become aware that there exists a pair of brains, and that you are one of them, but you can't tell which. There are practical reasons you would really like to know: each of the brains is destined for a different fate, and each must do different things to prepare. And let's say they're selfish: each is optimizing only for its own outcome.

Crucially, you know that the brains are physically implemented in very different ways. One has twice the mass of the other (it's an older model), while the other consists of three redundant instances (for robustness to error). Strictly speaking, the triplicate brain is essentially three brains; they just happen to be synchronized and wired out to one body.

The following question has clarified my thoughts about the hard problem of consciousness a lot. It can only be answered once we have a theory of how moments of experience bind to matter.

What is your probability that you're the heavier brain?

A naive answer is "1/2", but ignoring differences in the brains' physical implementations turns out to be kind of indefensible.

Consider how many different ways of thinking about anthropic binding there are, see that they predict different answers, and see that they are all uncomfortably weird (a small numerical sketch of each theory's answer follows the list):

  • A count theory might give the answer 1/4, because there is a sense in which the heavier brain counts as one object while the other counts as three (and there are other senses in which they don't. Which sense would nature care about?)
  • A mass theory gives the answer 2/3, because the heavier brain consists of twice as much of the apparent basic stuff of existence as the triplicate brains combined.
  • A force-oriented theory (not recommended) says 1/2, because each system commands equal quantities of physical force, labor, or votes, and so the force theorist reasons that neither matters more than the other. The force theorist doesn't care which one plays host to more moments of life, being, feeling, witness, pleasure, or suffering; the quantity of moments of experience doesn't give a brain any additional realpolitik leverage, so the theorist just asserts the equality of the two objects.
    But being equal in that sense simply doesn't imply equality of probability of observing from within.
    Force theory does not help you to answer the focus question, to predict which brain is you. It ignores the problem. It's less a theory of measure binding and more of a... common way of accidentally denying the existence of the problem.
    • It is possible to formulate a realpolitik-utilitarian type of agent who should tend to follow the force theory, for whom it is correct. But humans, with our selfish aspects, do run into this problem, and I'll argue a bit further down that compassion, concern for the quantity of experience in another, runs into it as well.
  • The pattern theory (also not recommended) would predict 1/2, because the two brains have roughly equal computational heft. A simulation operating with scarce compute would notice that they're mostly the same computation, doing the same thing, so most of the computation would be shared, with a few inert tags on the outside saying things like "and it's heavy" or "and there are three of them". Those extra features would be unimportant at the level of the physical computing substrate, and so would not affect anthropic measure's binding to any physical system.
    That theory is wrong because nature is not lossily compressed in that way. Even if the universe is a simulation (and to be fair it probably is), the next level above the simulation is probably not a simulation, and the next level above that is almost certainly not; the stack is not going to be so deep that we can justify founding our anthropic principles on the assumption that we don't need a theory of anthropic measure for basic physical reality. We will need one at some point.
    Why would anyone believe the pattern theory? I get the impression that there's a kind of intellectual to whom the whole world is just patterns to be understood, so they take the weight, or the nature, of an object to be entirely about its computational pattern. They don't think anything else about it could possibly matter, because in their professional life it wouldn't: for their purposes of understanding and anticipating the system's outward behaviour, nothing else matters. So they don't think about its other aspects, like whether, or to what extent, the computing elements exist.
    • A concerning implication that pattern theory seems to have: if you believe that the set of everything that exists is very big (most of us do), then you get a tendency towards marginal outcomes. Since identical observer-moments only count as one moment, normal moments would not be more frequent than abnormal moments. If all patterns have the same measure regardless of their incidence rate, then rare ones count just as much as common ones, which means that, since there are more distinct patterns of moment in the weird fringes, most moments are weird and you should expect to live a statistics-defyingly weird life. And there's no denying that this is a weird implication, especially given that you seem to be living a life that follows statistics quite closely.
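To make the contrast concrete, here's a minimal numerical sketch of how those four theories assign a probability to "I'm the heavier brain" in this scenario. The masses and instance counts are just the ones assumed in the setup above; the weighting functions are only caricatures of the theories as described:

```python
# Toy comparison of the binding theories described above, applied to the
# two-brain scenario: the heavy brain has twice the mass of the triplicate
# system, which runs as three synchronized instances.

heavy = {"instances": 1, "mass": 2.0}    # one object, twice the mass
triple = {"instances": 3, "mass": 1.0}   # three synchronized instances, one body

def probability_heavy(weight):
    """P(I'm the heavy brain) when anthropic measure is proportional to `weight`."""
    w_heavy, w_triple = weight(heavy), weight(triple)
    return w_heavy / (w_heavy + w_triple)

theories = {
    "count":   lambda brain: brain["instances"],  # 1 vs 3 objects  -> 1/4
    "mass":    lambda brain: brain["mass"],       # 2 vs 1 units    -> 2/3
    "force":   lambda brain: 1,                   # equal leverage either way -> 1/2
    "pattern": lambda brain: 1,                   # same computation either way -> 1/2
}

for name, weight in theories.items():
    print(f"{name:8s} theory: P(heavy) = {probability_heavy(weight):.3f}")
```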

(Personally, I'm mostly a mass-energy theorist, but I'm not sure how to explain why my anthropic moment has landed in biological intelligence. Panpsychism seems not to be completely true, so something more complicated than mass-energy theory is probably going on.)

So, even though you can't run a decision theory with a humanlike utility function without running into this question, there is no normal answer. Whichever way you want to go, we're going to have to do some work to acclimate ourselves to it.

The question suggests a sort of binding weighting associated with every physical system in your reference class, that determines the probability of finding out that you're that one, rather than another one.

Let's also name the entity in our utility function that gives a thing moral personhood: a compassion measure. It determines the rate at which a thing outside of your reference class, a thing that almost certainly isn't you, can experience suffering or joy or meaning or grace, and so it is a component of the extent to which you will care what that thing is experiencing.

It would seem to me that the binding weighting and the compassion measure are basically the same concept. It's hard to justify separating them.

This is not really in the domain of observation or reason, though; it's ethos. It's in the domain of the inspection of our desires. While the binding weighting could belong to any selfish agent, the compassion measure belongs to the human utility function. Whether this strange thing I've introduced is recognizable to you as your utility function's compassion measure will depend on the way your utility function binds to objects in external reality under shifts in frame.

All I can really say to convince you that these things are the same is that my utility function does reliably bind its "compassion measure" tags to any simulacra I bear of others' anthropic binding weighting. It does that. And since you and I are the same species, I'll assume for now that yours does the same.

So, for us, the question of how anthropic binding works is highly consequential: if we don't have a theory, then we won't know the extent to which the artificial brains that we mean to create will experience being. We need to know, as that tells us how good or bad it is when they experience beauty or suffering or whatever else. Making heaven for a bunch of brains with negligible anthropic measure would be a catastrophic waste. It is burdensome that we must attend to such metaphysical issues, but our utility function seems to care about these things.

The question is extremely hard to test empirically despite having great material impact, because each of us constitutes the relation of just one physical system to just one anthropic moment, and we can never witness another without leaving our current datapoint behind. Somehow, we'll have to draw all of our inferences from this one association that we have, between

a human brain,

and the experience of reaching the end of a short philosophy post.


This post presents the same points as the scifi short The Mirror Chamber, but with less adornment and worldbuilding, and with more structure and concision.

I feel like I should credit Eliezer's Ebborians Post as formative, but in retrospect I really don't have any trace of having read it. I think I might have actually missed it. I think we both just ended up asking similar questions because there's no way to think really deeply about epistemology without noticing that anthropic measure is seriously weird.

Comments (42)

The pattern theory is wrong because nature is not lossily compressed in that way.

Speaking as one of the people you describe to whom the pattern theory comes naturally: I don't quite get how you know this. The following paragraphs do not seem to me to establish it. From the fact that a simulation could do something, why does it follow that "base-level reality" cannot? Is there an implied argument here involving locality of effects, or similar?

Yeah, it wasn't really trying to be a conclusive refutation of pattern theory; I think that could end up being too long. It was more about showing some of the reasons people have for believing it, hopefully clearly enough that you can see why I wouldn't buy them. The hope was that some readers would recognize that their own pattern theory intuition was arrived at via those sorts of reasoning, and that seeing the reasoning rendered explicitly would make it easy to see the problems in it and relinquish the belief.

If you have some other reason to think that base-level reality is pattern-compressed, addressing it would be beyond the scope of the article, and I'm not even sure you're wrong, because I'm probably not familiar with your argument.

I guess the weirdness argument could have been an explicit reduction to absurdity of pattern theory? I didn't emphasize this, but isn't there an argument that very stochastic physical laws like our own would not look the way they do if the pattern theory were true? Our impression of consistent physical laws, at least at the Newtonian level, is just a product of normal timelines being more frequent, but under a pattern theory, no unique pattern is more frequent than any other.
I'm not completely sure. It's conceivable that if you reason through what patternist weirdness would end up looking like, deeply enough, it would eventually end up explaining quantum physics, or something (similarly (equivalently?) to Wolfram's hypergraph merges?). So I don't want to treat it as damning. I can't really tell where it goes.

I've always maintained that in order to solve this issue we must first solve the question of what it even means to say that a physical system is implementing a particular algorithm. Does it make sense to say that an algorithm is only approximately implemented? What if the algorithm is something very chaotic, such as prime-checking, where approximation is not possible?

An algorithm should be a box that you can feed any input into, but in the real, causal world there is no such choice; any impression that you "could" input anything into your pocket calculator is due to the counterfactuals your brain can consider, purely because it has some uncertainty about the world (an omniscient being could not make any choice at all! -- assuming complete omniscience is possible, which I don't think it is, but let us imagine the universe as an omniscient being or something).

This leads me to believe that "anthropic binding" cannot be some kind of metaphysical primitive, since for it to be well-defined it needs to be considered by an embedded agent! Indeed, I claimed that recognizing algorithms "in the wild" requires the use of counterfactuals, and omniscient beings (such as "the universe") cannot use counterfactuals. Therefore I do not see how there could be a "correct" answer to the problem of anthropic binding.

Hmm. Are you getting at something like: how can there possibly be an objective way of associating an experiential reference class with a system of matter... when the reference class is an algorithm, and algorithms only exist as abstractions, and there are various reasons the multiverse can't be an abstraction-considerer, so anthropic binding couldn't be a real metaphysical effect and must just be a construct of agents?

There are some accounts of anthropic binding that allow for it to just be a construct.

I removed this from the post, because it was very speculative and conflicted with some other stuff, and I wanted the post to be fairly evergreen, but it was kind of interesting, so here are some doubts I had about whether I should really dismiss the force theory:

I'm not completely certain that sort of self-reference is coherent as a utility function. That's one of the assumptions we could consider throwing out to escape the problem, this assumption that utility functions should be able to refer to "I", rather than being restricted to talking about the state of the physical world.
If they couldn't have an "I" in the utility function, then it seems like their expected probability of being one or the other should no longer factor into their decisions. IIRC a similar thing happens in some variants of the Sleeping Beauty problem: Beauty has a credence about which day's Beauty she is, but if she's able to report any probability she chooses, as a deliberate bet, she bets according to a policy designed to maximize some final total across all days, which totally ignores her estimate of which day it is. Similarly, our agents, shorn of "I", would cooperate in service of whatever entities their theory of cosmological measure says are most important.
It would boil down to cosmological measure. Cosmological measure is also full of weird open problems, though perhaps there are fewer of them.

Yes, I am arguing against the ontological realism of anthropic binding. Beyond that, I feel like there ought to be some way of comparing physical systems and having a (subjective) measure of how similar they are, though I don't know how to formalize it.

It is, for example, clear that I can relate to a penguin, even though I am not a penguin. Meaning that the penguin and I probably share some similar subsystems, and therefore if I care about the anthropic measure of my subsystems then I should care about penguins, too.

 

What is your probability that you're the heavier brain?

Undefined.  It matters a lot what rent the belief is paying.  The specifics of how you'll resolve your probability (or at least what differential evidence would let you update) will help you pick the reference class(es) which matter, and inform your choice of prior (in this case, the amalgam of experience and models, and unidentified previous evidence you've accumulated).

Wow. Someone really didn't like this. Any reason for the strong downvotes?

I don't get a strong impression that you read the post. It was pretty clear about what rents the beliefs are paying.

Generally it sucks to see someone take a "there is no answer, the question is ill-specified" transcended-analytic-philosopher posture towards a decision problem (or a pair of specific decision problems that fall under it) that actually is extremely well-specified, and that it seems like a genuinely good analytic philosopher should be able to answer. Over the course of our interactions I get the impression that you're mainly about generating excuses to ignore any problem that surprises you too much; I've never seen you acknowledge or contribute to solving a conceptual problem. I love a good excuse to ignore a wrong question, but these haven't been good excuses.

I would say it's extremely unclear to me that the question "what is your probability that you are agent X" is meaningful and has a well-defined answer in an anthropic scenario like this. You said "there are practical reasons you'd like to know", but you haven't actually concretely specified what will be done with the information.

In the process of looking for something I had previously read about this, I found the following post:

https://www.lesswrong.com/posts/y7jZ9BLEeuNTzgAE5/the-anthropic-trilemma

Which seems to be asking a very similar question to the one you're considering. (It mentions Ebborians, but postdates that post significantly.)

I then found the thing I was actually looking for: https://www.lesswrong.com/tag/sleeping-beauty-paradox

Which demonstrates why "what rent the belief is paying" is critical:

If Beauty's bets about the coin get paid out once per experiment, she will do best by acting as if the probability is one half. If the bets get paid out once per awakening, acting as if the probability is one third has the best expected value.

Which says, to me, that the probability is not uniquely defined -- in the sense that a probability is really a claim about what sort of bets you would take, but in this case the way the bet is structured around different individuals/worlds is what controls the apparent "probability" you should choose to bet with.
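For concreteness, here's a minimal simulation of the betting argument quoted above, framed as "which reported probability is the break-even betting price". The two payout structures (once per experiment vs. once per awakening) are the ones from the quote; everything else is just a toy setup:

```python
import random

# Sleeping Beauty betting toy: heads -> one awakening, tails -> two awakenings.
# Beauty bets on heads at a given price p (her reported probability), and the
# bet is settled either once per experiment or once per awakening.

def average_payoff(trials, price, per_awakening):
    total = 0.0
    for _ in range(trials):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2
        settlements = awakenings if per_awakening else 1
        # She pays `price` per settlement and receives 1 per settlement if heads.
        total += settlements * ((1.0 if heads else 0.0) - price)
    return total / trials

random.seed(0)
for per_awakening in (False, True):
    label = "per awakening " if per_awakening else "per experiment"
    for p in (1/2, 1/3):
        print(f"settled {label}, betting as if P(heads)={p:.2f}: "
              f"avg payoff {average_payoff(100_000, p, per_awakening):+.3f}")
# Settled per experiment, 1/2 is the break-even price; settled per awakening, 1/3 is.
```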

Ahh. I'm familiar with that case. Did having that in mind make you feel like there's too much ambiguity in the question to really want to dig into it? I wasn't considering that sort of scenario ("they need to know their position" rules it out), but I can see why it would have come to mind.

You might find this removed part relevant https://www.lesswrong.com/posts/gx6GEnpLkTXn3NFSS/we-need-a-theory-of-anthropic-measure-binding?commentId=mwuquFJHNCiYZFwzg

It acknowledges that some variants of the question can have that quality of... not really needing to know their position.

I'm going to have to think about editing that stuff back in.

I don't get a strong impression that you read the post. It was pretty clear about what rents the beliefs are paying.

I think I did, and I just read it again, and still don't see it.  What anticipated experiences are contingent on this?  What is the (potential) future evidence which will let you update your probability, and/or resolve whatever bets you're making?

Well, ask the question: should the bigger brain receive a million dollars, or do you not care?

I do not have a lot of evidence or detailed thinking to support this viewpoint, but I think I agree with you. I have the general sense that anthropic probabilities like this do not necessarily have well-defined values.

I'm definitely open to that possibility, but it seems like we don't really have a way of reliably distinguishing these sorts of anthropic probabilities from, like, 'conventional' probabilities? I'd guess it's tangled up with the reference class selection problem.

Thank you for bringing attention to this issue -- I think it's an under-appreciated problem. I agree with you that the "force" measure is untenable, and the "pattern" view, while better, probably can't work either.

Count-based measures seem to fail because they rely on drawing hard boundaries between minds. Also, there are going to be cases where it's not even clear whether a system counts as a mind or not, and if we take the "count" view we will probably be forced to make definitive decisions in those cases.

Mass/energy-based measures seem better because they allow you to treat anthropic measure as the continuous variable that it is, but I also don't think they can be the answer. In particular, they seem to imply that more efficient implementations of a mind (in terms of component size or power consumption or whatever) would have lower measure than less efficient ones, even if they have all the same experiences.

This is debatable, but it strikes me that anthropic measure and "degree of consciousness" are closely related concepts. Fundamentally, for a system to have any anthropic measure at all, it needs to be able to count as an "observer" or an "experiencer", which seems pretty close to saying that it's conscious on some level.

If we equate consciousness with a kind of information processing, then anthropic measure could be a function of "information throughput" or something like that. If System A can "process" more bits of information per unit time than System B, then it can have more experiences than System B, and arguably should be given more anthropic measure. In other words, if you identify "yourself" with the set of experiences you're having in a given moment, then it's more likely that those experiences are being realized in a system with more computing power, more ability to have more experiences, than in a system with less compute. Note that, on this view, the information being processed doesn't have to be compressed/deduplicated in any way; systems running the same computation on many threads in parallel would still have more measure than single-threaded systems, ceteris paribus.

There's a lot that needs to be fleshed out with this "computational theory of anthropic measure", but it seems like the truth has to be something in this general direction.
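As a minimal sketch of that throughput idea, with made-up bit rates and thread counts (the only point being that measure tracks raw, non-deduplicated processing):

```python
# Toy "information throughput" measure: proportional to bits processed per
# second, summed over every parallel instance, with no deduplication of
# identical computations. All numbers are invented for illustration.

def throughput_measure(bits_per_second, parallel_instances):
    return bits_per_second * parallel_instances

single_threaded = throughput_measure(bits_per_second=1e9, parallel_instances=1)
triple_threaded = throughput_measure(bits_per_second=1e9, parallel_instances=3)

total = single_threaded + triple_threaded
print(f"P(I'm the single-threaded system) = {single_threaded / total:.2f}")  # 0.25
print(f"P(I'm the triple-threaded system) = {triple_threaded / total:.2f}")  # 0.75
```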

Update: I don't think I agree with this anymore, after listening to what Vanessa Kosoy said about anthropics and infra-Bayesianism during her recent AXRP interview. Her basic idea is that the idea of "number of copies" of an agent, which I take to be closely related to anthropic measure, is sort of incoherent and not definable in the general case. Instead you're just supposed to ask, given some hypothesis H, what is the probability that the computation corresponding to my experience is running somewhere, anywhere?

If we assume that you start out with full Knightian uncertainty over which of the two brains you are, then infra-Bayesianism would (I think) tell you to act as if you're the brain whose future you believe to have the lowest expected utility, since that way you avoid the worst possible outcome in expectation.

sort of incoherent and not definable in the general case

Why? Solomonoff inducting, producing an estimate of the measure of my existence (the rate of the occurrence of the experience I'm currently having) across all possible universe-generators weighted inversely to their complexity seems totally coherent to me. (It's about 0.1^10^10^10^10)
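For what it's worth, here's a very rough toy of the kind of estimate I mean. The "universe generators", their description lengths, and the rates at which they produce my current experience are all invented stand-ins; the only real content is the shape of the calculation (weight hypotheses by 2^-length, then take the weighted rate):

```python
# Rough toy of a Solomonoff-style measure estimate: weight each candidate
# universe-generator by 2^-(description length in bits), and estimate the
# measure of an experience as the weighted rate at which that generator
# produces it. Every generator, length, and rate below is invented.

hypotheses = [
    # (name, description length in bits, rate of producing my current experience)
    ("simple lawful universe",  100, 1e-30),
    ("messier lawful universe", 140, 1e-28),
    ("mostly-noise universe",   300, 1e-45),
]

def prior(length_bits):
    return 2.0 ** -length_bits

normalizer = sum(prior(length) for _, length, _ in hypotheses)
measure = sum(prior(length) / normalizer * rate for _, length, rate in hypotheses)
print(f"estimated measure of this experience: {measure:.3e}")
```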

infra-Bayesianism would (I think) tell you to act as if you're the brain whose future you believe to have the lowest expected utility

I haven't listened to that one yet, but ... wasn't it a bit hard to swallow as a decision rule?
What if all of the worlds with the lowest EU are completely bizarre (like Boltzmann brains, or worlds that have somehow fallen under the rule of fantastical devils with literally no supporters)? This seems to make expected utility even more prone to not converging under sustained thought than the longtermist cluelessness we were already dealing with.

I'll address your points in reverse order.

What if all of the worlds with the lowest EU are completely bizarre (like Boltzmann brains, or worlds that have somehow fallen under the rule of fantastical devils with literally no supporters)?

The Boltzmann brain issue is addressed in infra-Bayesian physicalism with a "fairness" condition that excludes worlds from the EU calculation where you are run with fake memories or the history of your actions is inconsistent with what your policy says you would actually do. Vanessa talks about this in AXRP episode 14. The "worlds that have somehow fallen under the rule of fantastical devils" thing is only a problem if that world is actually assigned high measure in one of the sa-measures (fancy affine-transformed probability distributions) in your prior. The maximin rule is only used to select the sa-measure in your convex set with lowest EU, and then you maximize EU given that distribution. You don't pick the literal worst conceivable world.

Notably, if you don't like the maximin rule, it's been shown in Section 4 of this post that infra-Bayesian logic still works with optimism in the face of Knightian uncertainty; it's just that you don't get worst-case guarantees anymore. I'd suspect that you could also get away with something like "maximize 10th percentile EU" to get more tempered risk-averse behavior.
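To illustrate just the decision rules being compared (not actual infra-Bayesianism, which involves sa-measures and much more machinery), here's a toy with a hand-made set of candidate distributions standing in for Knightian uncertainty. All utilities and probabilities are invented:

```python
import numpy as np

# Toy comparison of decision rules under Knightian uncertainty: a small set of
# candidate distributions over three outcomes (a crude stand-in for a convex
# set of sa-measures), and two actions with fixed utilities per outcome.

outcome_utility = {
    "safe":  np.array([1.0, 1.0, 1.0]),     # utility in outcomes A, B, C
    "risky": np.array([5.0, 2.0, -10.0]),
}

candidate_distributions = [                  # invented uncertainty set
    np.array([0.6, 0.3, 0.1]),
    np.array([0.2, 0.3, 0.5]),
    np.array([0.4, 0.4, 0.2]),
]

for action, utilities in outcome_utility.items():
    eus = np.array([d @ utilities for d in candidate_distributions])
    print(f"{action:5s}: maximin {eus.min():+.2f}, "
          f"optimistic {eus.max():+.2f}, "
          f"10th percentile {np.percentile(eus, 10):+.2f}")
# Maximin and the 10th-percentile rule both pick "safe" here; the optimistic
# rule picks "risky".
```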

Solomonoff inducting, producing an estimate of the measure of my existence (the rate of the occurrence of the experience I'm currently having) across all possible universe-generators weighted inversely to their complexity seems totally coherent to me. (It's about 0.1^10^10^10^10)

I'm not sure I follow your argument. I thought your view was that minds implemented in more places, perhaps with more matter/energy, have more anthropic measure? The Kolmogorov complexity of the mind seems like an orthogonal issue.

Maybe you're already familiar with it, but I think Stuart Armstrong's Anthropic Decision Theory paper (along with some of his LW posts on anthropics) does a good job of "deflating" anthropic probabilities and shifting the focus to your values and decision theory.

Leibniz's Law says that you cannot have separate objects that are indistinguishable from each other. It sounds like that is what you are doing with the 3 brains. That might be a good place to flesh out more to make progress on the question. What do you mean exactly by saying that the three brains are wired up to the same body and are redundant?


Biological meat doesn't have the needed properties, but this is how SpaceX's and others' avionics control works. Inputs are processed in discrete frames: all computers receive a frame of [last_output | sensor_inputs] and implement a function where output = f(frame). The output depends only on the frame input, and all internal state is identical between the 3 computers.

Then, after processing, last_output = majority(output1, output2, output3).

So even when one of the computers suffers a transient fault, it can still contribute to the next frame.

Current neural network based systems can be architected to work this way.  

The computers are not identical, but if your only information about them is their outputs, they are indistinguishable from each other.
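A minimal sketch of that frame-voting loop, with a trivial stand-in control function and made-up sensor values (not any real avionics code):

```python
from collections import Counter

# Toy frame-based triple redundancy: each "computer" runs the same pure
# function of [last_output | sensor_input], and the voted majority output
# becomes part of the next frame's input.

def control(last_output: int, sensor: int) -> int:
    # Stand-in control law: depends only on the frame contents.
    return (last_output + sensor) % 256

def majority(outputs):
    return Counter(outputs).most_common(1)[0][0]

last_output = 0
sensor_stream = [12, 7, 7, 40, 3]        # made-up sensor inputs, one per frame

for frame_no, sensor in enumerate(sensor_stream):
    outputs = [control(last_output, sensor) for _ in range(3)]
    if frame_no == 2:
        outputs[1] ^= 0xFF               # simulate a transient fault in one unit
    last_output = majority(outputs)      # the faulty unit is outvoted
    print(f"frame {frame_no}: outputs={outputs} -> voted {last_output}")
# Because all state is rebuilt from the voted last_output each frame, the
# faulted computer is back in lockstep on the very next frame.
```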

I'm thinking of something like this: https://en.wikipedia.org/wiki/Triple_modular_redundancy

I've noticed another reason a person might need a binding theory: if they were the sort of agent who takes Rawlsian veils to their greatest extent, the one who imagines that we are all the same eternal actor playing out every part in a single play, who follows a sort of timeless, placeless cooperative game theory that obligates them to follow a preference utilitarianism extending not just over the other stakeholders in some coalition, as a transparent pragmatist might, but over every observer that exists.

They'd need a good way of counting and weighting observers, to reflect the extent to which their experiences are being had. That would be their weighting.

I could also suggest an energy-used-for-computation measure. First, assume that there is a minimal possible energy of computation, a Planck computation, which uses the theoretically most efficient computing. Each brain could be counterfactually sliced into a sum of such minimal computations. Now we can calculate which brain has more slices and conclude that I am more likely to be in that brain.

This measure is more plausible than a mass- or volume-based measure, as mass could include computationally inert parts of the body, like bones.

I agree that energy measure seems intuitive.

I don't see how the divisibility story helps with that though? I can say similar things about mass.

Might be able to push back on it a bit: Consider the motion of the subatomic particles that make up the matter, pushing against each other, holding their chemical bonds. Why doesn't that count as a continuous flowing of energy around the system? How do you distinguish that from the energy of computation? It still has the shape of the computation.

We wouldn't normally think of it as an exchange of energy because none of it gets converted to heat in the process of doing the thing, it's conserved. Is that metaphysically relevant? Maybe it is??

This is rough (I've made a bit of progress on this and should write another version soon), but in there I come across a reason to think that this would all work out very neatly if observer-moments only appear in low-entropy systems.

So if you're saying that the experience is had in the traversal from low entropy to higher entropy, that'd do it. It would make more sense than I can easily articulate right now, if that's how anthropic binding works.

I came to the idea of energy as a measure of computation based on the exploration of Ebborian brains, which are two-dimensional beings that have thickness in a third dimension. They could be sliced horizontally, creating copies.

The biggest part of the mass of a computer could be removed without affecting the current computations, like various supporting and auxiliary circuits. They may be helpful in later computations, but only the current ones are important for observer-counting.

This also neatly solves the Boltzmann brain problem: they by definition have a very low energy of computation, so they are very improbable.

And this helps us to explain the thermodynamics problem which you mentioned. The chaotic movement of electrons could be seen as a sum of many different computations. However, each individual computation has very little energy, and very little "observer weight" if it is conscious.

I haven't read the post you linked yet, and will comment on it later.

Boltzmann brains necessarily have a low energy of computation? Relative to what? The heat surrounding them?

Don't we also, relative to that?

Can it actually be argued that the total energy of life-supporting universes (not even limiting it to just the cognitions inside them, just the whole thing) is higher than the total energy of Boltzmann brains within the much more frequent highly entropic universes? I'm not even sure of that.
See, I'd expect orderly universes to be much less frequent than highly entropic universes; order requires highly intricate machines -- with just the right balance of push and pull, and support for variation but not too much -- which so easily collapse into entropy when the parameters are even slightly off.

But I'm not sure how that ratio compares to the rate of Boltzmann braining, which is also very low.

I think that most BBs have low energy in absolute terms, that is, in joules. 

While the total energy and measure of BBs may be very large, there are several penalties which favour real minds in anthropics (a rough back-of-envelope combination of these penalties is sketched after the list):

  1. Complexity. A real mind capable of thinking about anthropics is rather complex, and most BBs are much simpler, and by "much" I mean a double exponent of the brain size.
  2. Content. Even a complex BB has the same probability of thinking about any random thing as about anthropics. That gives a 10-100 order-of-magnitude penalty.
  3. Energy. A human mind consumes, say, 1 watt for computation, but a BB will consume 10-30 orders of magnitude less. Here I assume that the measure is proportional to the energy of computation.
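A rough back-of-envelope combination of those penalties, working in orders of magnitude. The exponents are just the milder ends of the ranges stated in the list; nothing here is a real calculation:

```python
# Combine the penalties from the list above in log10 (orders of magnitude),
# using the milder end of each stated range.

content_penalty = 10   # BB thinking about anthropics vs. any random content
energy_penalty = 10    # BB uses ~10 orders of magnitude less computation energy
# The complexity penalty ("double exponent of the brain size") dwarfs both of
# the above, so it is noted separately rather than added as a finite number.

combined = content_penalty + energy_penalty
print(f"content + energy alone favour real minds by ~10^{combined},")
print("before the (far larger) complexity penalty is even counted")
```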

 

Side note: there is an interesting novel about how the universe tries to return to the normal state of highest entropy by creating unexpected miracles on Earth which stop progress. https://en.wikipedia.org/wiki/Definitely_Maybe_(novel)

Note that the part of your reply about entropy is related to the plot of a fictional novel. However, the plot has some merit, and a similar idea of anthropic miracles was later explored by Bostrom in "Adam and Eve, UN++".

Is this really an anthropic problem? It looks more like a problem of decision making under uncertainty.

The fundamental problem is that you have two hypotheses, and don't know which is true. There are no objective probabilities for the hypotheses, only subjective credences. There may even be observers in your epistemic situation for which neither hypothesis is true.

As an even more fun outcome, consider the situation where the mind states of the two types of entity are so exactly alike that they are guaranteed to think exactly the same thoughts and make the same decision, and only diverge afterward. Are there one, two, or four people making that decision? Does the distinction even make a difference, if the outcomes are exactly the same in each case?

If an anthropic problem weren't a problem of decision-making under uncertainty, it wouldn't be worth thinking about. If anthropic reasoning weren't applicable to decision-making, it wouldn't be worth talking about. Finding a situation where a particular concept is necessary for survival or decision-making is how we demonstrate that the concept is worth talking about, and not just angels dancing on pinheads.

That's fair. One problem is that without constraining the problem to a ridiculous degree, we can't even agree on how many of these decisions are being made and by whom.

A child is supposed to be driven to another town by his father. The family has two cars. The first is a blue electric car with 3 passenger seats. The second is red, has only 1 passenger seat, and runs on gas. With what probability should the child expect to find himself in each car?

There are possible situations where information about the number of seats, the color of the car, or its type of engine is relevant. Maybe his father likes the color red more? Or maybe there is a law against transporting a child in the front seat? Maybe the electric car isn't charged? Maybe the child's father chose which seat to put him in by random sampling among all four seats? Maybe the father cares about the environment and doesn't use the gas car anymore?

But if the child doesn't know anything about which situation he is in, anything about the actual causal process that would put him in a car, if all the information he has is just the fact that there are two cars with different characteristics, then the naive answer of 1/2 for each car, from the equiprobable prior, is correct, and any attempt to persuade himself that the universe "cares more" about red cars or cars with more seats, without having any evidence, is plain wrong.

The more I think about anthropics the more I realize there is no rational theory for anthropic binding. For the question "what is the probability that I am the heavy brain?" there really isn't a rational answer. 

I agree that there doesn't seem to be a theory, and there are many things about the problem that make reaching any level of certainty about it impossible (the fact that we can only ever have one sample). I do not agree that there's a principled argument for giving up on looking for a coherent theory.

I suspect it's going to turn out to be like it was with priors about the way the world is: lacking information, we just fall back on Solomonoff induction. It works well enough, it's all we have, and it's better than nothing.

So... oh... we can define priors about our location in terms of the complexity of a description of that location. This feels like most of the solution, but I can't tell; there are gaps left, and I can't tell how difficult it will be to complete the bridges.

Compassion measure binds to anthropic binding weight because anthropic binding weight is compassion measure: there is no objective answer to "What is your probability?" -- probabilities are just numbers you multiply by utilities if you don't want to be endlessly pumped for money.

I like the thought experiment and it seems like an interesting way to split a bunch of philosophical models.

One thing I'm curious about is which (the small three or the big one) you would prefer to be, and whether that preference should factor into your beliefs here.

Generally, preference should only affect our beliefs when we're deciding which self-fulfilling prophecies to promote. (Hmm, or when deciding which branches of thoughts and beliefs to examine or not, which might be equally salient in real bounded agents like us.)

I can't see any there? What's your hunch?

The weird thing here is how anthropics might interact with the organization of the universe. Part of this is TDT-ish.

For example, if you were to construct this scenario (one big brain, three smaller brains) and you had control over which one each would prefer to be, how would you line up their preferences?

Given no other controls, I'd probably construct them such that they would prefer to be the construction they are.

So it seems worth considering (in a very hand-wavy way, weak in terms of actual evidence) the prior that, in worlds where I expect things like myself to be setting up experiments like this, I'm slightly more likely to be an instance of the one I would prefer to be.


Probability is in the mind. Your question is meaningless. What is meaningful is what your expectations should be for specific experiences, and that's going to depend on the kind of evidence you have and Solomonoff induction - it's pretty trivial once you accept the relevant premises.

It's just not meaningful to say "X and Y exist"; you need to reframe it in terms of how various pieces of evidence affect induction over your experiences.

I already am thinking about it in those terms, so I'm not sure what's going wrong here.

Would it have been clearer if the focusing question was more like "what is the probability that, if you manage to find a pair of mirrors that you can use to check the model number on the back of your head, you'll see a model number corresponding to the heavier brain?"


I have no problem with the probability request here; I have a problem with the scenario. What kind of evidence are you getting that makes these two, and only these two, outcomes possible? Solomonoff/Bayes would never rule out any outcome, just make some of them low probability.

I've talked about the binding problem in Solomonoff induction before; see https://www.lesswrong.com/posts/Jqwb7vEqEFyC6sLLG/solomonoff-induction-and-sleeping-beauty and the posts it links back to. See also "dust theory".