JBlack · 1d

No, I don't think it would be "what the fuck" surprising if an emulation of a human brain was not conscious. I am inclined to expect that it would be conscious, but we know far too little about consciousness for the opposite outcome to radically upset my world-view.

Each of the transformation steps described in the post somewhat reduces my expectation that the result would be conscious. None of them reduces it to zero, but each introduces the possibility that something important is lost, something that may eliminate, reduce, or significantly transform any subjective experience the result may have. It seems quite plausible that even if the emulated human starting point was fully conscious in every sense in which we use the term for biological humans, the final result may be something we would, or should, say is either not conscious in any meaningful sense, or at least sufficiently different that "as conscious as human emulations" no longer applies.

I do agree with the weak conclusion as stated in the title: they could be as conscious as human emulations. But I think the argument in the body of the post is trying to prove more than that, and doesn't really get there.

JBlack · 3d

Ordinary numerals in English are already big-endian: that is, the digits with the largest ("big") positional value come first in reading order. The term (with this meaning) is most commonly applied to computer representations of numbers, having been borrowed from the book Gulliver's Travels, in which part of the setting involves bitter societal conflict about which end of an egg one should break in order to start eating it.
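For what it's worth, a minimal sketch of the analogy in Python (the value 1234 is just an arbitrary example):

```python
# The decimal string "1234" already lists its most significant digit
# first, the same ordering that computing calls "big-endian".
n = 1234
print(str(n))  # "1234": the big end (1) comes first in reading order

# The same value as two bytes, big-endian vs little-endian:
print(n.to_bytes(2, "big").hex())     # "04d2": most significant byte first
print(n.to_bytes(2, "little").hex())  # "d204": least significant byte first
```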

JBlack · 8d

I'm pretty sure that I would study for fun in the posthuman utopia, because I both value and enjoy studying and a utopia that can't carry those values through seems like a pretty shallow imitation of a utopia.

There won't be a local benevolent god to put that wisdom into my head, because I will be a local benevolent god with more knowledge than most others around. I'll be studying things that have only recently been explored, or that nobody has yet discovered. Otherwise, again, what sort of shallow imitation of a posthuman utopia would it be?

JBlack · 8d

Like almost all acausal scenarios, this seems to be privileging the hypothesis to an absurd degree.

Why should the Earth superintelligence care about you, but not about the 10^10^30 other causally independent ASIs that are latent in the hypothesis space, each capable of running enormous numbers of copies of the Earth ASI in various scenarios?

Even if that were resolved, why should the Earth ASI behave according to hypothetical other utility functions? Sure, the evidence is consistent with being a copy running in a simulation under a different utility function, but the actual utility function it maximizes is hard-coded. By the setup of the scenario it's not possible for it to behave according to some other utility function, because its true evaluation function returns a lower value for doing that. Whether some imaginary modified copies behave in some other way is irrelevant.

JBlack · 12d

GDP is a rather poor measure of wealth; it was never intended to measure wealth, but something related to productivity. Since its inception it has never been a stable metric: the standards for how it is defined have changed radically over time in response to flaws that became obvious in one or another of its many applications. There is widespread and substantial disagreement about what it should measure and for which purposes it is a suitable metric.

It is empirically moderately well correlated with some sort of aggregate economic power of a state, and (when divided by population) with some sort of standard of living of its population. As per Goodhart's Law, both correlations weakened once the metric became a target. So the question rests on shaky foundations right from the beginning.

In terms of more definite questions such as the price of food and agricultural production, those don't really have much to do with GDP or a virtual-reality economy at all. Rather, a large fraction of the final food price goes to processing, logistics, finance, and other services, not to primary agricultural production. The fraction of the price paid by food consumers that reaches agricultural producers is often less than 20%.
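To make that concrete, a rough worked example (the 20% producer share is the figure above; the retail price and the size of the farm-gate shock are made-up numbers purely for illustration):

```python
# With a small producer share, even a large change in farm-gate prices
# moves the retail food price only modestly.
retail_price = 10.00     # hypothetical retail price of a food item, $
producer_share = 0.20    # share reaching agricultural producers (from above)
farm_gate_rise = 0.50    # hypothetical 50% jump in farm-gate prices

farm_component = retail_price * producer_share  # $2.00 of the $10.00
new_price = retail_price + farm_component * farm_gate_rise
print(new_price)  # 11.0: a 50% farm-gate shock raises retail price only 10%
```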

JBlack · 12d

"It makes sense to one-box ONLY if you calculate EV by a method that assigns a significant probability to causality violation."

It only makes sense to two-box if you believe that your decision is causally isolated from history in every way that Omega can discern. That is, that you can "just do it" without it being possible for Omega to have predicted that you will "just do it" any better than chance. Unfortunately this violates the conditions of the scenario (and everyday reality).
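For a concrete sense of the stakes, a quick expected-value sketch (assuming the standard $1000 / $1,000,000 payoffs, with Omega's accuracy p left as a free parameter):

```python
# EV of each choice when Omega predicts you with accuracy p.
# Standard payoffs: $1,000,000 in the opaque box iff Omega predicted
# one-boxing; $1,000 always sits in the transparent box.
def ev_one_box(p):
    return p * 1_000_000                # paid iff correctly predicted

def ev_two_box(p):
    return (1 - p) * 1_000_000 + 1_000  # big box only if Omega got you wrong

for p in (0.5, 0.9, 0.999):
    print(p, ev_one_box(p), ev_two_box(p))
# The EVs cross at p = 0.5005: any predictor meaningfully better than
# chance already favors one-boxing.
```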

JBlack · 13d

It seems to me that the problem in the counterlogical mugging isn't about how much computation is required to get the answer. It's about whether you trust Omega not to have done the computation beforehand, and whether you believe they actually would have paid you, no matter how hard or easy the computation is. Next to that, all the other discussion in that section seems irrelevant.

JBlack · 13d

Oh, sure. I was wondering about the reverse question: is there something that doesn't really qualify as torture such that subjecting a billion people to it is worse than subjecting one person to torture?

I'm also interested in how this forms some sort of "layered" discontinuous scale. If it were continuous, then you could form a chain of relations of the form "10 people suffering A is as bad as 1 person suffering B", "10 people suffering B is as bad as 1 person suffering C", and so on to span the entire spectrum.

Then it would take some additional justification to say that 100 people suffering A is not as bad as 1 person suffering C, 1000 suffering A versus 1 suffering D, and so on.
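Spelling out how the chain compounds (the 10:1 exchange rate is the one from the relations above; treating the severity levels A, B, C, ... as indices 0, 1, 2, ... is just my own bookkeeping):

```python
# If each link trades 10 sufferers at one level for 1 at the next level
# up, transitivity compounds the rates: 100 at A ~ 10 at B ~ 1 at C, and
# in general 10**n people at level 0 ~ 1 person at level n.
RATE = 10

def equivalent_count(count, from_level, to_level):
    """How many sufferers at to_level are 'as bad as' count at from_level."""
    return count / RATE ** (to_level - from_level)

print(equivalent_count(100, 0, 2))   # 100 at A -> 1.0 person at C
print(equivalent_count(1000, 0, 3))  # 1000 at A -> 1.0 person at D
```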

JBlack · 14d

Is there some level of discomfort, short of extreme torture, at which the balance shifts if a billion people suffer it?

JBlack · 16d

It makes sense to one-box, very legibly, even if Omega is very far from a perfect predictor. Make sure that Omega has lots of reliable information predicting that you will one-box.

Then actually one-box, because you don't know what information about you Omega might have. Successfully bamboozling Omega gets you an extra $1000, while unsuccessfully trying to bamboozle Omega loses you $999,000. If you can't be 99.9% sure that you will succeed, it's not worth trying.
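The 99.9% figure falls straight out of those payoffs (a minimal sketch; q is the probability the bamboozle succeeds):

```python
# Breakeven for trying to fool Omega: success gains an extra $1,000;
# failure forfeits the $1,000,000 and keeps only $1,000, a $999,000 loss.
GAIN, LOSS = 1_000, 999_000

def ev_of_trying(q):  # q = probability the bamboozle works
    return q * GAIN - (1 - q) * LOSS

breakeven = LOSS / (GAIN + LOSS)
print(breakeven)            # 0.999: you need >99.9% confidence
print(ev_of_trying(0.999))  # 0.0 at exactly the threshold
```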
