One of the key problems with anthropics is establishing the appropriate reference class. When we attempt to calculate a probability accounting for anthropics, do we consider all agents or all humans or all humans who understand decision theory?

The post "If a tree falls on Sleeping Beauty" argues that probability is not ontologically basic and that the "probability" depends on how you count bets. In this vein, one might attempt to solve anthropics by asking whose decisions to take a bet are linked to yours. You could then count up all the linked agents who observe A and all those who observe not-A, and calculate the expected value of the bet. More generally, my intuition is that if you can solve bets, you can answer any other question you would like about the decision by reframing it as a bet.
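For concreteness, here is a minimal sketch of that bet-counting procedure (the worlds, agent counts, and payoffs below are made up, and `expected_value_of_bet` is just an illustrative helper, not anything from the linked post):

```python
# Every agent whose decision is linked to yours takes the same side of the bet,
# so the value of the bet is the payoff summed over linked agents in each world,
# weighted by the prior on that world.

def expected_value_of_bet(worlds, payoff_if_A, payoff_if_not_A):
    """worlds: list of (prior, world_shows_A, number_of_linked_agents)."""
    total = 0.0
    for prior, world_shows_A, n_linked in worlds:
        payoff = payoff_if_A if world_shows_A else payoff_if_not_A
        total += prior * n_linked * payoff
    return total

# Example: a fair coin creates two linked observers if A holds, one if not-A;
# the bet pays +1 per observer if A and -1 per observer if not-A.
worlds = [(0.5, True, 2), (0.5, False, 1)]
print(expected_value_of_bet(worlds, payoff_if_A=1, payoff_if_not_A=-1))  # 0.5
```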

abramdemski

I think it depends on how much you're willing to ask counterfactuals to do.

In the paper Anthropic Decision Theory for Self-Locating Agents, Stuart Armstrong says "ADT is nothing but the anthropic version of the far more general Updateless Decision Theory and Functional Decision Theory" -- suggesting that he agrees with the idea that a proposed solution to counterfactual reasoning gives a proposed solution to anthropic reasoning. The overall approach of that paper is to side-step the issue of assigning anthropic probabilities, instead addressing the question of how to make decisions in cases where anthropic questions arise. I suppose this might be said either to "solve anthropics" or to "side-step anthropics", and this choice would determine whether one took Stuart's view to answer "yes" or "no" to your question.

Stuart mentions in that paper that agents making decisions via CDT+SIA tend to behave the same as agents making decisions via EDT+SSA. This can be seen formally in Jessica Taylor's post about CDT+SIA in memoryless cartesian environments, and Caspar Oesterheld's comment about the parallel for EDT+SSA. The post discusses the close connection to pure UDT (with no special anthropic reasoning). Specifically, CDT+SIA (and EDT+SSA) are consistent with the optimality notion of UDT, but don't imply it (UDT may do better, according to its own notion of optimality). This is because UDT (specifically, UDT 1.1) looks for the best solution globally, whereas CDT+SIA can have self-coordination problems (like hunting rabbit in a game of stag hunt with identical copies of itself).
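A toy illustration of that coordination failure, with made-up stag-hunt payoffs (a sketch of the point only, not the formal setup from the linked posts): a single global policy choice picks stag, while a local best-response view can treat rabbit as stable.

```python
# Toy stag hunt for two identical copies (payoff numbers are illustrative).
# Since the copies run the same algorithm, in practice both pick the same action.
payoff = {("stag", "stag"): 3, ("stag", "rabbit"): 0,
          ("rabbit", "stag"): 1, ("rabbit", "rabbit"): 1}

# Global (UDT 1.1 style) choice: pick the one policy, applied to both copies,
# with the best overall outcome.
global_choice = max(["stag", "rabbit"], key=lambda a: payoff[(a, a)])
print("global policy:", global_choice)  # stag

# Local view: treat the other copy's action as fixed. If the other copy is
# expected to hunt rabbit, rabbit is a best response, so (rabbit, rabbit)
# is self-consistent even though (stag, stag) is better for both.
other_action = "rabbit"
local_choice = max(["stag", "rabbit"], key=lambda a: payoff[(a, other_action)])
print("best response to rabbit:", local_choice)  # rabbit
```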

You could see this as giving a relationship between two different notions of counterfactual, with anthropic reasoning mediating the connection.

CDT and EDT are two different ways of reasoning about the consequences of actions. Both of them are "updateful": they make use of all information available in estimating the consequences of actions. We can also think of them as "local": they make decisions from the situated perspective of an information state, whereas UDT makes decisions from a "global" perspective considering all possible information states.

I would claim that global counterfactuals have an easier job than local ones, if we buy the connection between the two suggested here. Consider the transparent Newcomb problem: you're offered a very large pile of money if and only if you're the sort of agent who takes most, but not all, of the pile. It is easy to say from an updateless (global) perspective that you should be the sort of agent who takes most of the money. It is more difficult to face the large pile (an updateful/local perspective) and reason that it is best to take most-but-not-all; your counterfactuals have to say that taking all the money doesn't mean you get all the money. The idea is that you have to be skeptical of whether you're in a simulation; ie, your counterfactuals have to do anthropic reasoning.
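A back-of-the-envelope version of that comparison, with made-up dollar amounts, just to show how easy the policy-level (global) comparison is:

```python
# Transparent Newcomb with illustrative numbers: a $1,000,000 pile is offered
# only to agents whose policy is to take most of it (all but $1,000) rather than all.
PILE, LEFT_BEHIND = 1_000_000, 1_000

def payoff(policy):
    pile_offered = (policy == "take_most")   # the predictor reacts to the policy itself
    if pile_offered:
        return PILE - LEFT_BEHIND            # take most, leave a little behind
    return 0                                 # a take-all agent never sees the pile

for policy in ("take_most", "take_all"):
    print(policy, payoff(policy))
# take_most 999000
# take_all 0
```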

In other words: you could factor the whole problem of logical decision theory in two different ways.

  • Option 1:
    • Find a good logically updateless perspective, providing the 'global' view from which we can make decisions.
    • Find a notion of logical counterfactual which combines with the above to yield decisions.
  • Option 2:
    • Find an updateful but skeptical perspective, which takes (logical) observations into account, but also accounts for the possibility that it is in a simulation and being fooled about those observations.
    • Find a notion of counterfactual which works with the above to make good decisions.
    • Also, somehow solve the coordination problems (which otherwise make option 1 look superior).

With option 1, you side-step anthropic reasoning. With option 2, you have to tackle it explicitly. So, you could say that in option 1, you solve anthropic reasoning for free if you solve counterfactual reasoning; in option 2, it's quite the opposite: you might solve counterfactual reasoning by solving anthropic reasoning.

I'm more optimistic about option 2, recently. I used to think that maybe we could settle for the most basic possible notion of logical counterfactual, ie, evidential conditionals, if combined with logical updatelessness. However, a good logically updateless perspective has proved quite elusive so far.

Anyway, this answers one of my key questions: whether it is worth working on anthropics or not. I put some time into reading about it (hopefully I'll get time to pick up Bostrom's book again at some point), but I got discouraged when I started wondering if the work on logical counterfactuals would make this all irrelevant. Thanks for clarifying this. Anyway, why do you think the second approach is more promising?

2abramdemski
My thoughts on that are described further here.

"The idea is that you have to be skeptical of whether you're in a simulation" - I'm not a big fan of that framing, though I suppose it's okay if you're clear that it is an analogy. Firstly, I think it is cleaner to seperate issues about whether simulations have consciousness or not from questions of decision theory given that functionalism is quite a controversial philosophical assumption (even though it might be taken for granted at MIRI). Secondly, it seems as though that you might be able to perfectly predict someone from h... (read more)

2abramdemski
(Definitely not reporting MIRI consensus here, just my own views:) I find it appealing to collapse the analogy and consider the DT considerations to be really touching on the anthropic considerations. It isn't just functionalism with respect to questions about other brains (such as their consciousness); it's also what one might call cognitive functionalism -- ie functionalism with respect to the map as opposed to the territory (my mind considering questions such as consciousness).

What I mean is: if the decision-theoretic questions were isomorphic to the anthropic questions, serving the same sort of role in decision-making, then if I were to construct a mind thinking about one or the other, and ask it about what it is thinking, then there wouldn't be any questions which would differentiate anthropic reasoning from the analogous decision-theoretic reasoning. This would seem like a quite strong argument in favor of discarding the distinction. I'm not saying that's the situation (we would need to agree, individually, on separate settled solutions to both anthropics and decision theory in order to compare them side by side in that way). I'm saying that things seem to point in that direction.

It seems rather analogous to thinking that logic and mathematics are distinct (logical knowledge encoding tautology only, mathematical knowledge encoding a priori analytic knowledge, which one could consider distinct... I'm just throwing out philosophy words here to try to bolster the plausibility of this hypothetical view) -- and then discovering that within the realm of what you considered to be pure logic, there's a structure which is isomorphic to the natural numbers, with all reasoning about the natural numbers being explainable as purely logical reasoning. It would be possible to insist on maintaining the distinction between the mathematical numbers and the logical structure which is analogous to them, referring to the first as analytic a priori knowledge and the second as t…
2Chris_Leong
I'm confused. This comment is saying that there isn't a strict divide between decision theory and anthropics, but I don't see how that has any relevance to the point that I raised in the comment it is responding to (that a perfect predictor need not utilise a simulation that is conscious in any sense of the word).
2abramdemski
Maybe I'm confused about the relevance of your original comment to my answer. I interpreted it as being about the relationship I outline between anthropics and decision theory -- ie, anthropic reasoning may want to take consciousness into account (while you might think it plausible you're in a physics simulation, it is plausible to hold that you can't be living in a more general type of model which predicts you by reasoning about you rather than simulating you, if the model is not detailed enough to support consciousness), whereas decision theory only takes logical control into account (so the relevant question is not whether a model is detailed enough to be conscious, but rather, whether it is detailed enough to create a logical dependence on your behavior).

I took [...] as an objection to the connection I drew between thinking you might be in a simulation (ie, the anthropic question) and decision theory. Maybe you were objecting to the connection between thinking you're in a simulation and anthropics? If so, the claimed connection to decision theory is still relevant. If you buy the decision theory connection, it seems hard not to buy the connection between thinking you're in a simulation and anthropics.

I took [...] to be an attempt to drive a wedge between anthropics and decision theory, by saying that a prediction might introduce a logical correlation without introducing an anthropic question of whether you might be 'living in' the prediction. To which my response was, I may want to bite the bullet on that one for the elegance of treating anthropic questions as decision-theoretic in nature.

I took [...] to be an attempt to drive a wedge between decision-theoretic and anthropic cases by the fact that we need to assign a small anthropic probability to being the simulation if it is only run once, to which I responded by saying that the math will work out the same on the decision theory side according to Jessica Taylor's theorem.

My interpretation now is that you were never objecti…
2Chris_Leong
For the first point, I meant that in order to consider this purely as a decision theory problem without creating a dependency on a particular theory of consciousness, you would ideally want a general theory that can deal with any criteria of consciousness (including just being handed a list of entities that count as conscious). Regarding the second, when you update your decision algorithm, you have to update everything subjunctively dependent on you regardless of whether they are agents or not, but that is distinct from "you could be that object". On the third, biting the bullet isn't necessary to turn this into a decision theory problem, as I mention in my response to the first point. But further, elegance alone doesn't seem to be a good reason to accept a theory. I feel I might be misunderstanding your reasoning for biting this bullet. I haven't had time to read Jessica Taylor's theorem yet, so I have no comment on the fourth.
4abramdemski
Second point: the motto is something like "anything which is dependent on you must have you inside its computation". Something which depends on you because it is causally downstream of you contains you in its computation in the sense that you have to be calculated in the course of calculating it because you're in its past. The claim is that this observation generalizes.
2Chris_Leong
This seems like a motte and bailey. There's a weak sense of "must have you inside its computation" which you've defined here, and a strong sense as in "should be treated as containing a consciousness".
2abramdemski
Well, in any case, the claim I'm raising for consideration is that these two may turn out to be the same. The argument for the claim is the simplicity of merging the decision theory phenomenon with the anthropic phenomenon.
2abramdemski
I note that I'm still overall confused about what the miscommunication was. Your response now seems to fit my earlier interpretation.

First point: I disagree about how to consider things as pure decision theory problems. Taking as input a list of conscious entities seems like a rather large point against a decision theory, since it makes it dependent on a theory of consciousness. If you want to be independent of questions like that, far better to consider decision theory on its own (thinking only in terms of logical control, counterfactuals, etc), and remain agnostic on the question of a connection between anthropics and decision theory.

In my analogy to mathematics, it could be that there's a lot of philosophical baggage on the logic side and also a lot of philosophical baggage on the mathematical side. Claiming that all of math is tautology could create a lot of friction between these two sets of baggage, meaning one has to bite a lot of bullets which other people wouldn't consider biting. This can be a good thing: you're allowing more evidence to flow, pinning down your views on both sides more strongly. In addition to simplicity, that's related to a theory doing more to stick its neck out, making bolder predictions. To me, when this sort of thing happens, the objections to adopting the simpler view have to be actually quite strong.

I suppose that addresses your third point to an extent. I could probably give some reasons besides simplicity, but it seems to me that simplicity is a major consideration here, perhaps my true reason. I suspect we don't actually disagree that much about whether simplicity should be a major consideration (unless you disagree about the weight of Occam's razor, which would surprise me). I suspect we disagree about the cost of biting this particular bullet.
2Chris_Leong
You wrote: [...] Which I interpreted to be you talking about avoiding the issue of consciousness by acting as though any process logically dependent on you automatically "could be you" for the purpose of anthropics. I'll call this the Reductive Approach. However, when I said: [...] I was thinking about separating these issues, not by using the Reductive Approach, but by using what I'll call the Abstracting Approach. In this approach, you construct a theory of anthropics that is just handed a criterion of which beings are conscious, and it is expected to be able to handle any such criterion.

Part of the confusion here is that we are using the word "depends" in different ways. When I said that the Abstracting Approach avoided creating a dependency on a theory of consciousness, I meant that if you follow this approach, you end up with a decision theory into which any theory of consciousness can be substituted. It doesn't depend on these theories: if you discover your theory of consciousness is wrong, you just throw in a new one and everything works. When you talk about "depends" and say that this is a disadvantage, you mean that in order to obtain a complete theory of anthropics, you need to select a theory of consciousness to be combined with your decision theory.

I think that this is actually unfair, because in the Reductive Approach, you do implicitly select a theory of consciousness, which I'll call Naive Functionalism. I'm not using this name to be pejorative; it's the best descriptor I can think of for the version of functionalism which you are using, which ignores any concerns that high-level predictors might not deserve to be labelled as a consciousness. With the Abstracting Approach I still maintain the option of assuming Naive Functionalism, in which case it collapses down to the Reductive Approach. So given these assumptions, both approaches end up being equally simple. In contrast, given any other theory of consciousness, the Reductive Approach complains that…
2abramdemski
I agree that we are using "depends" in different ways. I'll try to avoid that language. I don't think I was confusing the two different notions when I wrote my reply; I thought, and still think, that taking the abstraction approach wrt consciousness is in itself a serious point against a decision theory. I don't think the abstraction approach is always bad -- I think there's something specific about consciousness which makes it a bad idea. Actually, that's too strong. I think taking the abstraction approach wrt consciousness is satisfactory if you're not trying to solve the problem of logical counterfactuals or related issues.

There's something I find specifically worrying here. I think part of it is, I can't imagine what else would settle the question. Accepting the connection to decision theory lets me pin down what should count as an anthropic instance (to the extent that I can pin down counterfactuals). Without this connection, we seem to risk keeping the matter afloat forever. Making a theory of counterfactuals take an arbitrary theory of consciousness as an argument seems to cement this free-floating idea of consciousness, as an arbitrary property which a lump of matter can freely have or not have.

My intuition that decision theory has to take a stance here is connected to an intuition that a decision theory needs to depend on certain 'sensible' aspects of a situation, and is not allowed to depend on 'absurd' aspects. For example, the table being wood vs metal should be an inessential detail of the 5&10 problem. This isn't meant to be an argument, only an articulation of my position. Indeed, my notion of "essential" vs "inessential" details is overtly functionalist (eg, replacing carbon with silicon should not matter if the high-level picture of the situation is untouched).

Still, I think our disagreement is not so large. I agree with you that the question is far from obvious. I find my view on anthropics actually fairly plausible, but far from determined
2Chris_Leong
The argument that you're making isn't that the Abstraction Approach is wrong, it's that by supporting other theories of consciousness, it increases the chance that people will mistakenly fail to choose Naive Functionalism. Wrong theories do tend to attract a certain number of people believing in them, but I would like to think that the best theory is likely to win out over time on Less Wrong. And there's a cost to this. If we remove the assumption of a particular theory of consciousness, then more people will be able to embrace the theories of anthropics that are produced. And partial agreement is generally better than none. This is an argument for Naive Functionalism vs other theories of consciousness. It isn't an argument for the Abstracting Approach over the Reductive approach. The Abstracting Approach is more complicated, but it also seeks to do more. In order to fairly compare them, you have to compare both on the same domain. And given the assumption of Naive Functionalism, the Abstracting Approach reduces to the Reductive Approach. I provided reasons why I believe that Naive Functionalism is implausible in an earlier comment. I'll admit that inconsistency is too strong of a word. My point is just that you need an independent reason to bite the bullet other than simplicity. Like simplicity combined with reasons why the bullets sound worse than they actually are. Yes. It works with any theory of consciousness, even clearly absurd ones.
3abramdemski
(I note that I flagged this part as not being an argument, but rather an attempt to articulate a hazy intuition -- I'm trying to engage with you less as an attempt to convince, more to explain how I see the situation.)

I don't think that's quite the argument I want to make. The problem isn't that it gives people the option of making the wrong choice. The problem is that it introduces freedom in a suspicious place.

Here's a programming analogy: Both of us are thinking about how to write a decision theory library. We have a variety of confusions about this, such as what functionality a decision theory library actually needs to support, what the interface it needs to present to other things is, etc. Currently, we are having a disagreement about whether it should call an external library for 'consciousness' vs implement its own behavior. You are saying that we don't want to commit to implementing consciousness a particular way, because we may find that we have to change that later. So, we need to write the library in a way such that we can easily swap consciousness libraries.

When I imagine trying to write the code, I don't see how I'm going to call the 'consciousness' library while solving all the other problems I need to solve. It's not that I want to write my own 'consciousness' functionality. It's that I don't think 'consciousness' is an abstraction that's going to play well with the sort of things I need to do. So when I'm trying to resolve other confusions (about the interface, data types I will need, functionality which I may want to implement, etc) I don't want to have to think about calling arbitrary consciousness libraries. I want to think about the data structures and manipulations which feel natural to the problem being solved. If this ends up generating some behaviors which look like a call to the 'naive functionalism' library, this makes me think the people who wrote that library maybe were on to something, but it doesn't make me any more inclined to r…
2Chris_Leong
That makes your position a lot clearer. I admit that the Abstraction Approach makes things more complicated, and that this might affect what you can accomplish either theoretically or practically by using the Reductive Approach, so I could see some value in exploring this path. For Stuart Armstrong's paper in particular, the Abstraction Approach wouldn't really add much in the way of complications and it would make it much clearer what was going on. But maybe there are other things you are looking into where it wouldn't be anywhere near this easy. But in any case, I'd prefer people to use the Abstraction Approach in the cases where it is easy to do so.

True, and I can imagine a level of likelihood below which adopting the Abstraction Approach would be adding needless complexity and mostly be a waste of time.

I think it is worth making a distinction between complexity in the practical sense and complexity in the hypothetical sense. In the practical sense, using the Abstraction Approach with Naive Functionalism is more complex than the Reductive Approach. In the hypothetical sense, they are equally complex in terms of explaining how anthropics works given Naive Functionalism, as we haven't postulated anything additional within this particular domain (you may say that we've postulated consciousness, but within this assumption it's just a renaming of a term, rather than the introduction of an extra entity). I believe that Occam's Razor should be concerned with the latter type of complexity, which is why I wouldn't consider it a good argument for the Reductive Approach.

I'm very negative on Naive Functionalism. I've still got some skepticism about functionalism itself (property dualism isn't implausible in my mind), but if I had to choose between functionalist theories, that certainly isn't what I'd pick.
2abramdemski
I'm trying to think more about why I feel this outcome is a somewhat plausible one. The thing I'm generating is a feeling that this is 'how these things go' -- that the sign that you're on the right track is when all the concepts start fitting together like legos.

I guess I also find it kind of curious that you aren't more compelled by the argument I made early on, namely, that we should collapse apparently distinct notions if we can't give any cognitive difference between them. I think I later rounded down this argument to Occam's razor, but there's a different point to be made: if we're talking about the cognitive role played by something, rather than just the definition (as is the case in decision theory), and we can't find a difference in cognitive role (even if we generally make a distinction when making definitions), it seems hard to sustain the distinction.

Taking another example related to anthropics, it seems hard to sustain a distinction between 'probability that I'm an instance' and 'degree I care about each instance' (what's been called a 'caring measure', I think), when all the calculations come out the same either way, even generating something which looks like a Bayesian update of the caring measure. Initially it seems like there's a big difference, because it's a question of modeling something as a belief or a value; but, unless some substantive difference in the actual computations presents itself, it seems the distinction isn't real. A robot built to think with true anthropic uncertainty vs caring measures is literally running equivalent code either way; it's effectively only a difference in code comments.
2Chris_Leong
"Namely, that we should collapse apparently distinct notions if we can't give any cognitive difference between them" - I don't necessarily agree that being subjunctively linked to you (such that it gives the same result) is the same as being cognitively identical, so this argument doesn't get off the ground for me. If adopt a functionalist theory, it seems quite plausible that the degree of complexity is important too (although perhaps you'd say that isn't pure functionalism?) It might be helpful to relate this to the argument I made in Logical Counterfactuals and the Cooperation Game. The point I make there is that the processes are subjunctively linked to you is more a matter of your state of knowledge than anything about the intrinsic properties of the object itself. So if you adopt the position that things that are subjunctively linked to you are cognitively and hence consciously the same, you end up with a highly relativistic viewpoint. I'm curious, how much do people at MIRI lean towards naive functionalism? I'm mainly asking because I'm trying to figure out whether there's a need to write a post arguing against this.
2abramdemski
I haven't heard anyone else express the extremely naive view we're talking about that I recall, and I probably have some specific decision-theory-related beliefs that make it particularly appealing to me, but I don't think it's out of the ballpark of other people's views so to speak. I (probably) agree with this point, and it doesn't seem like much of an argument against the whole position to me -- coming from a Bayesian background, it makes sense to be subjectivist about a lot of things, and link them to your state of knowledge. I'm curious how you would complete the argument -- OK, subjunctive statements are linked to subjective states of knowledge. Where does that speak against the naive functionalist position?
2Chris_Leong
"OK, subjunctive statements are linked to subjective states of knowledge. Where does that speak against the naive functionalist position?" - Actually, what I said about relativism isn't necessarily true. You could assert that any process that is subjunctively linked to what is generally accepted to be a consciousness from any possible reference frame is cognitively identical and hence experiences the same consciousness. But that would include a ridiculous number of things. By telling you that a box will give the same output as you, we can subjunctively link it to you, even if it is only either a dumb box that immediately outputs true or a dumb box that immediately outputs false. Further, there is no reason why we can't subjunctively link someone else facing a completely different situation to the same black box, since the box doesn't actually need to receive the same input as you to be subjunctively linked (this idea is new, I didn't actually realise that before). So the box would be having the experiences of two people at the same time. This feels like a worse bullet than the one you already want to bite.
4abramdemski
The box itself isn't necessarily thought of as possessing an instance of my consciousness. The bullet I want to bite is the weaker claim that anything subjunctively linked to me has me somewhere in its computation (including its past). In the same way that a transcript of a conversation I had contains me in its computation (I had to speak a word in order for it to end up in the text) but isn't itself conscious, a box which very reliably has the same output as me must be related to me somehow.

I anticipate that your response is going to be "but what if it is only a little correlated with you?", to which I would reply "how do we set up the situation?" and probably make a bunch of "you can't reliably put me into that epistemic state" type objections. In other words, I don't expect you to be able to make a situation where I both assent to the subjective subjunctive dependence and will want to deny that the box has me somewhere in its computation.

For example, the easiest way to make the correlation weak is for the predictor who tells me the box has the same output as me to be only moderately good. There are several possibilities. (1) I can already predict what the predictor will think I'll do, which screens off its prediction from my action, so no subjective correlation; (2) I can't predict confidently what the predictor will say, which means the predictor has information about my action which I lack; then, even if the predictor is poor, it must have a significant tie to me; for example, it might have observed me making similar decisions in the past. So there are copies of me behind the correlation.
2Chris_Leong
"The bullet I want to bite is the weaker claim that anything subjunctively linked to me has me somewhere in its computation (including its past)" - That doesn't describe this example. You are subjunctively linked to the dumb boxes, but they don't have you in their past. The thing that has you in its past is the predictor.
4abramdemski
I disagree, and I thought my objection was adequately explained. But I think my response will be more concrete/understandable/applicable if you first answer: how do you propose to reliably put an agent into the described situation? The details of how you set up the scenario may be important to the analysis of the error in the agent's reasoning. For example, if the agent just thinks the predictor is accurate for no reason, it could be that the agent just has a bad prior (the predictor doesn't really reliably tell the truth about the agent's actions being correlated with the box). To that case, I could respond that of course we can construct cases we intuitively disagree with by giving the agent a set of beliefs which we intuitively disagree with. (This is similar to my reason for rejecting the typical smoking lesion setup as a case against EDT! The beliefs given to the EDT agent in smoking lesion are inconsistent with the problem setup.) I'm not suggesting that you were implying that, I'm just saying it to illustrate why it might be important for you to say more about the setup.
2Chris_Leong
"How do you propose to reliably put an agent into the described situation?" - Why do we have to be able to reliably put an agent in that situation? Isn't it enough that an agent may end up in that situation? But in terms of how the agent can know the predictor is accurate, perhaps the agent gets to examine its source code after it has run and its implemented in hardware rather than software so that the agent knows that it wasn't modified? But I don't know why you're asking so I don't know if this answers the relevant difficulty. (Also, just wanted to check whether you've read the formal problem description in Logical Counterfactuals and the Co-operation Game)
2abramdemski
It occurs to me that although I have made clear that I (1) favor naive functionalism and (2) am far from certain of it, I haven't actually made clear that I further (3) know of no situation where I think the agent has a good picture of the world and where the agent's picture leads it to conclude that there's a logical correlation with its action which can't be accounted for by a logical cause (ie something like a copy of the agent somewhere in the computation of the correlated thing). IE, if there are outright counterexamples to naive functionalism, I think they're actually tricky to state, and I have at least considered a few cases -- your attempted counterexample comes as no surprise to me and I suspect you'll have to try significantly harder. My uncertainty is, instead, in the large ambiguity of concepts like "instance of an agent" and "logical cause".
2abramdemski
For example, we can describe how to put an agent into the counterfactual mugging scenario as normally described (where Omega asks for $10 and gives nothing in return), but critically for our analysis, one can only reliably do so by creating a significant chance that the agent ends up in the other branch (where Omega gives the agent a large sum if and only if Omega would have received the asked-for $10 in the other branch). If this were not the case, the argument for giving the $10 would seem weaker.

I'm asking for more detail about how the predictor is constructed such that the predictor can accurately point out that the agent has the same output as the box. Similarly to how counterfactual mugging would be less compelling if we had to rely on the agent happening to have the stated subjunctive dependencies rather than being able to describe a scenario in which it seems very reasonable for the agent to have those subjunctive dependencies, your example would be less compelling if the box just happens to contain a slip of paper with our exact actions, and the predictor just happens to guess this correctly, and we just happen to trust the predictor correctly. Then I would agree that something has gone wrong, but all that has gone wrong is that the agent had a poor picture of the world (one which is subjunctively incorrect from our perspective, even though it made correct predictions).

On the other hand, if the predictor runs a simulation of us, and then purposefully chooses a box whose output is identical to ours, then the situation seems perfectly sensible: "the box" that's correlated with our output subjectively is a box which is chosen differently in cases where our output is different; and, the choice-of-box contains a copy of us. So the example works: there is a copy of us somewhere in the computation which correlates with us.

I've read it now. I think you could already have guessed that I agree with the 'subjective' point and disagree with the 'meaningless to cons…
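(For reference, a minimal expected-value sketch of "the argument for giving the $10" mentioned above; the $10,000 reward is only illustrative, since the scenario just says "a large sum".)

```python
# Counterfactual mugging: a fair coin is flipped. On tails, Omega asks for $10
# and gives nothing back; on heads, Omega pays out iff you are the kind of
# agent who would have paid on tails.
ASK, REWARD = 10, 10_000

def expected_value(policy_pays):
    tails_branch = -ASK if policy_pays else 0
    heads_branch = REWARD if policy_pays else 0
    return 0.5 * tails_branch + 0.5 * heads_branch

print(expected_value(True))   # 4995.0 -- paying is the better policy ex ante
print(expected_value(False))  # 0.0
```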
2Chris_Leong
""The box" that's correlated with our output subjectively is a box which is chosen differently in cases where our output is different; and, the choice-of-box contains a copy of us. So the example works" - that's a good point and if you examine the source code, you'll know it was choosing between two boxes. Maybe we need an extra layer of indirection. There's a Truth Tester who can verify that the Predictor is accurate by examining its source code and you only get to examine the Truth Tester's code, so you never end up seeing the code within the predictor that handles the case where the box doesn't have the same output as you. As far as you are subjectively concerned, that doesn't happen.
4abramdemski
Ok, so you find yourself in this situation where the Truth Tester has verified that the Predictor is accurate, and you've verified that the Truth Tester is accurate, and the Predictor tells you that the direction you're about to turn your head has a perfect correspondence to the orbit of some particular asteroid. Lacking the orbit information yourself, you now have a subjective link between your next action and the asteroid's path.

This case does appear to present some difficulty for me. I think this case isn't actually so different from the previous case, because although you don't know the source code of the Predictor, you might reasonably suspect that the Predictor picks out an asteroid after predicting you (or, selects the equation relating your head movement to the asteroid orbit after picking out the asteroid). We might suspect this precisely because it is implausible that the asteroid is actually mirroring our computation in a more significant sense. So using a Truth Tester intermediary increases the uncertainty of the situation, but increased uncertainty is compatible with the same resolution.

What your revision does do, though, is highlight how the counterfactual expectation has to differ from the evidential conditional. We may think "the Predictor would have selected a different asteroid (or different equation) if its computation of our action had turned out different", but, we now know the asteroid (and the equation); so, our evidential expectation is clearly that the asteroid has a different orbit depending on our choice of action. Yet, it seems like the sensible counterfactual expectation given the situation is ... hm.

Actually, now I don't think it's quite that the evidential and counterfactual expectation come apart. Since you don't know what you actually do yet, there's no reason for you to tie any particular asteroid to any particular action. So, it's not that in your state of uncertainty choice of action covaries with choice of asteroid (via so…
2abramdemski
Ah, I had taken you to be asserting possibilities and a desire to keep those possibilities open rather than held views and a desire for theories to conform to those views. Maybe something about my view which I should emphasize is that since it doesn't nail down any particular notion of counterfactual dependence, it doesn't actually directly bite bullets on specific examples. In a given case where it may seem initially like you want counterfactual dependence but you don't want anthropic instances to live, you're free to either change views on one or the other. It could be that a big chunk of our differing intuitions lies in this. I suspect you've been thinking of me as wanting to open up the set of anthropic instances much wider than you would want. But, my view is equally amenable to narrowing down the scope of counterfactual dependence, instead. I suspect I'm much more open to narrowing down counterfactual dependence than you might think.
2Chris_Leong
Oh, I completely missed this. That said, I would be highly surprised if these notions were to coincide since they seem like different types. Something for me to think about.

avturchin

There is a "natural reference class" for any question X: it is everybody who asks the question X.

In the case of classical anthropic questions like the Doomsday Argument, such reasoning is very pessimistic, as the class of people who know about the DA has only existed for a short time and its end is very soon.

Members of the natural reference class could bet on the outcome of X, but the betting result depends on the betting procedure. If the betting outcome doesn't depend on the degree of truth (I am either right or wrong), then we get weird anthropic effects.

Such weird anthropics is net-winning in betting: the majority of the members of the DA-aware reference class do not live at the beginning of the world, so the DA may be used to predict the end of the world.

If we take into account the edge cases, which produce badly wrong results, this will offset the net winning.
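A toy illustration of the betting claim above (the class sizes and the "first third" cutoff are arbitrary): in any finite reference class, most members win a bet that they are not among its earliest members, whatever the class's true size turns out to be.

```python
# Each member of a reference class of size n bets "my birth rank is not in the
# first third of all members". Most members win this bet regardless of n.
def fraction_of_winners(n):
    winners = sum(1 for rank in range(1, n + 1) if rank > n / 3)
    return winners / n

for n in (30, 300, 3000):
    print(n, round(fraction_of_winners(n), 3))  # about 0.667 for every n
```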

This supposedly "natural" reference class is full of weird edge cases, in the sense that I can't write an algorithm that finds "everybody who asks the question X". Firstly "everybody" is not well defined in a world that contains everything from trained monkeys to artificial intelligence's. And "who asks the question X" is under-defined as there is no hard boundary between a different way of phrasing the same question and slightly different questions. Does someone considering the argument in chinese fall int... (read more)

1avturchin
Edge cases do not account for the majority of cases (in most cases) :) But for anthropics we only need the majority of cases. I don't ignore other facts based on nitpicking. A fact needs to have a strong, one-to-one causal connection with the computation's result in order not to be ignored. The color of my socks is a random variable relative to my opinion about the DA, because it doesn't affect my conclusions. I personally think about the DA in two languages, and the result is the same, so the language is also a random variable for this reasoning.

I had that idea at first, but of the people asking the question, only some of them actually know how to do anthropics. Others might be able to ask the anthropic question, but have no idea how to solve it, so they throw up their hands and ignore the entire issue, in which case it is effectively the same as them never asking it in the first place. Others may make an error in their anthropic reasoning which you know how to avoid; similarly, they aren't in your reference class because their reasoning process is disconnected from yours. Whenever you make a decision, you are implicitly making a bet. Anthropic considerations alter how the bet plays out, and insofar as you can account for this, you can account for anthropics.

1avturchin
For any person who actually understands anthropics, there are 10 people who ask questions without understanding (and 0.1 people who know anthropics better) - but it doesn't change my relative location in the middle. It doesn't matter whether there are 20 people behind me and 20 ahead, or 200 behind and 200 ahead, if all of them live in the same time interval, say between 1983 and 2050. However, before making any anthropic bet, I need to take into account logical uncertainty, that is, the probability that anthropics is not bullshit. I estimate such meta-level uncertainty as 0.5 (I wrote more about this in the meta-Doomsday Argument text).
2Chris_Leong
Them knowing anthropics better than you only makes a difference insofar as they utilise a different algorithm/make decisions in a way that is disconnected from you. For example, if we are discussing anthropics problem X, which you can both solve, and they can also solve Y and Z, which you can't, that is irrelevant here, as we are only asking about X. Anyway, I don't think you can assume that people will be evenly distributed. We might hypothesise, for example, that the level of anthropics knowledge will go up over time. "However, before making any anthropic bet, I need to take into account logical uncertainty" - that seems like a reasonable thing to do. However, at this particular time, I'm only trying to solve anthropics from the inside view, not from the outside view. The latter is valuable, but I prefer to focus on one part of a problem at a time.
15 comments

No. If there's a coinflip that determines whether an identical copy of me is created tomorrow, my ability to perfectly coordinate the actions of all copies (logical counterfactuals) doesn't help me at all with figuring out if I should value the well-being of these copies with SIA, SSA or some other rule.
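For concreteness, here is how SIA and SSA come apart on that coin-flip setup; the arithmetic is the standard one, and the code is purely illustrative.

```python
# Coin flip: heads -> an identical copy is created tomorrow (2 observers);
# tails -> no copy (1 observer). Tomorrow you can't tell which observer you are.
worlds = {"heads": {"prior": 0.5, "observers": 2},
          "tails": {"prior": 0.5, "observers": 1}}

# SIA: weight each world by prior * number of observers in your situation.
sia = {w: v["prior"] * v["observers"] for w, v in worlds.items()}
norm = sum(sia.values())
print({w: x / norm for w, x in sia.items()})       # heads: 2/3, tails: 1/3

# SSA: weight each world only by its prior; within each world, being
# "one of the observers" is certain, so the observer count cancels out.
print({w: v["prior"] for w, v in worlds.items()})  # heads: 1/2, tails: 1/2
```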

That sounds a lot like Stuart Armstrong's view. I disagree with it, although perhaps our differences are merely definitional rather than substantive. I consider questions of morality or axiology separate from questions of decision theory. I believe that the best way to model things is such that agents only care about their own overall utility function; however, this is a combination of direct utility (utility the agent experiences directly) and indirect utility (value assigned by the agent to the overall world state excluding the agent, but including other agents). So from my perspective this falls outside of the question of anthropics. (The only cases where this breaks down are Evil Genie like problems where there is no clear referent for "you".)

I consider questions of morality or axiology separate from questions of decision theory.

The claim is essentially that the specification of the anthropic principles an agent follows belongs to axiology, not decision theory. That is, the orthogonality thesis applies to the distinction, so that different agents may follow different anthropic principles in the same way as different stuff-maximizers may maximize different kinds of stuff. Some things discussed under the umbrella of "anthropics" seem relevant to decision theory, such as being able to function with most anthropic principles, but not, say, the choice between SIA and SSA.

(I somewhat disagree with the claim, as structuring values around instances of agents doesn't seem natural; maps/worlds are more basic than agents. But that is disagreement with emphasizing the whole concept of anthropics, perhaps even with emphasizing agents, not with where to put the concepts between axiology and decision theory.)

Hmm... interesting point. I've briefly skimmed Stuart Armstrong's paper, and the claim that different moralities end up as different anthropic theories (assuming that you care about all of your clones) seems to be mistaking a cool calculation trick for something with deeper meaning, which does not automatically follow without further justification.

On reflection, what I said above doesn't perfectly capture my views. I don't want to draw the boundary so that anything in axiology is automatically not a part of anthropics. Instead, I'm just trying to abstract out questions about how desirable other people and states of the world are, so that we can just focus on building a decision theory on top of this. On the other hand, I consider axiology relevant in so far as it relates directly to "you".

For example, in Evil Genie like situations, you might find out that if you had chosen A instead of B, it would have contradicted your existence, and the task of trying to value this seems relevant to anthropics. And I still don't know precisely where I stand on these problems, but I'm definitely open to the possibility that this is orthogonal to other questions of value. PS. I'm not even sure at this stage whether Evil Genie problems most naturally fall into anthropics or a separate class of problems.

I also agree that structuring values around instances of agents seems unnatural, but I'd suggest discussing agent-instances instead of map/worlds.

Yeah, looks like a definitional disagreement.

So do you think logical counterfactuals would solve anthropics given my definition of the scope?

I don't know your definition.

I'll register in advance that this story sounds too simplistic to me (I may add more detail later), but I suspect this question will be a good stimulus for kicking off a discussion.

From an agent's first-person perspective there is no reference class for himself, i.e. he is the only one in his reference class. A reference class containing multiple agents only exists if we employ an outsider view.

When Beauty wakes up in the experiment she can tell it is "today" and she's experiencing "this awakening". That is not because she knows any objective differences between "today" and "the other day" or between "this awakening" and "the other awakening". It is because from her perspective "today" and "this awakening" are most immediate to her subjective experience, which makes them inherently unique and identifiable. She doesn't need to consider the other day(s) to specify today. "Today" is in a class of its own to begin with.

But if we reason as an objective outsider and do not use any perspective center in our logic, then neither of the two days is inherently unique. To specify one among the two would require a selection process. For example, a day can be specified by, say, "the earlier day of the two", "the hotter day of the two" or the old fashioned "the randomly selected one of the two". (An awakening can similarly be specified among all awakenings the same way.) It is this selection process from the outsider view that defines the reference class.

Paradoxes happen when we mix reasoning from the first-person perspective and the outsider's perspective in the same logical framework. "Today" becomes uniquely identifiable while at the same time also belonging to a reference class of multiple days. The same can be said about "this awakening". This difference leads to the debate between SIA and SSA.

The importance of perspectives also means that when using a betting argument we need to repeat the experiment from the perspective of the agent as well. It also means that from an agent's first-person perspective, if his objective is simply to maximize his own utility, no other agent's decision needs to be considered.

Ok, imagine a Simple Beauty problem, without the coin toss: she simply wakes up on Monday and on Tuesday. When she wakes up, she knows that it is "today", but "today" is an unknown variable, which could be either Monday or Tuesday, and she doesn't know the day.

In that case she (or I in her place) will still use the reference class logic to get a 0.5 probability of Tuesday.

In this case Beauty still shouldn't use the reference class logic to assign a probability of 0.5. I argue that for the Sleeping Beauty problem the probability of "today" being Monday/Tuesday is an incoherent concept, so it does not exist. To ask this question we must specify a day from the view of an outsider, e.g. "what's the probability the hotter day is Monday?" or "what is the probability the randomly selected day among the two is Monday?".

Imagine you participate in a cloning experiment. At night, while you are sleeping, a highly accurate clone of you with indistinguishable memory is created in an identical room. When you wake up there is no way to tell if you are the old or the new copy. It might be tempting to ask "what's the probability of 'me' being the clone?" I would guess your answer is 0.5 as well. But you can repeat the same experiment as many times as you want: fall asleep, let another clone of you be created, and wake up again. Each time you wake up you can easily tell "this is me", but there is no reason to expect that in all these repetitions the "me" would be the new clone about half the time. In fact, there is no reason the relative frequency of me being the clone would converge to any value as the number of repetitions increases. However, if instead of this first-person concept of "me" we use an outsider's specification, then the question is easily answerable. E.g. what is the probability that the randomly chosen version among the two is the clone? The answer is obviously 0.5. If we repeat the experiments and each time let an outsider randomly choose a version, then the relative frequency would obviously approach 0.5 as well.
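A quick simulation of the outsider's version of the question (purely illustrative): if an outsider picks one of the two waking copies at random in each repetition, the frequency with which the chosen copy is the new clone does converge to 0.5.

```python
import random

random.seed(0)
trials = 100_000
# Each repetition: two indistinguishable copies wake up ("original", "new clone");
# an outsider picks one at random and records whether it is the new clone.
hits = sum(random.choice(["original", "new clone"]) == "new clone"
           for _ in range(trials))
print(hits / trials)  # close to 0.5
```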

On a side note this also explains why double-halving is not unBayesian.

If the original is somehow privileged over its copies, then his "me" statistics will be different from the copies' statistics.

Not sure if I'm following. I don't see any way in which the original is privileged over its copies. In each repetition, after waking up, I could be the newly created clone, just like in the first experiment. The only privileged concepts are due to my first-person perspective, such as here, now, this, or the "me" based on my subjective experience.

I would say that the concept of probability works fine in anthropic scenarios, or at least there is a well defined number that is equal to probability in non-anthropic situations. This number is assigned to "worlds as a whole". Sleeping Beauty assigns 1/2 to heads and 1/2 to tails, and can't meaningfully split the tails case depending on the day. Sleeping Beauty is a functional decision theory agent. For each action A, they consider the logical counterfactual that the algorithm they are implementing returned A, then calculate the world's utility in that counterfactual. They then return whichever action maximizes utility.
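A minimal sketch of that procedure (the even-odds bet and the 1/2 world weights are illustrative, and `world_utility` is a stand-in for whatever world model the agent uses):

```python
# Score each possible output of the algorithm by the utility of the world as a
# whole in the counterfactual where the algorithm returns that output, then
# return the best-scoring action.
def choose(actions, world_utility):
    return max(actions, key=world_utility)

# Example: Sleeping Beauty offered an even-odds bet on the coin at every
# awakening. Worlds are weighted 1/2 each; the tails world contains two
# awakenings, so two bets get settled there.
def world_utility(action):
    bet_heads = (action == "bet heads")
    heads_world = 1 if bet_heads else -1           # one awakening, one bet
    tails_world = 2 * (-1 if bet_heads else 1)     # two awakenings, two bets
    return 0.5 * heads_world + 0.5 * tails_world

print(choose(["bet heads", "bet tails"], world_utility))  # bet tails
```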

In this framework, "which version am I?" is a meaningless question, you are the algorithm. The fact that the algorithm is implemented in a physical substrate give you means to affect the world. Under this model, whether or not your running on multiple redundant substrates is irrelivant. You reason about the universe without making any anthropic updates. As you have no way of affecting a universe that doesn't contain you, or someone reasoning about what you would do, you might as well behave as if you aren't in one. You can make the efficiency saving of not bothering to simulate such a world.

You might, or might not, have an easier time affecting a world that contains multiple copies of you.

"I would say that the concept of probability works fine in anthropic scenarios" - I agree that you can build a notion of probability on top of a viable anthropic decision theory. I guess I was making two points a) you often don't need to b) there isn't a unique notion of probability, but it depends on the payoffs (which disagrees with what you wrote, although the disagreement may be more definitional than substantive)

"As you have no way of affecting a universe that doesn't contain you, or someone reasoning about what you would do, you might as well behave as if you aren't in one" - anthropics isn't just about existence/non-existence. Under some models there will be more agents experiencing your current situation.

"You might, or might not have an easier time effecting a world that contains multiple copies of you" - You probably can, but this is unrelated to anthropics