It's not rude if it's not a social setting. If no one sees you do it, no one's sensibilities are offended.

a) In my experience, lucid dreams are more memorable than normal dreams.

b) You seem to assume that Whales completely forgot about the dream until they wrote this blog post, which is unlikely: they'd obviously have been thinking about it as soon as they woke up, and probably taking notes.

c) Whales already said that it hardly even constitutes evidence.

Rational!Harry describes a character similar to the base except persistently rational, for whatever reason. Rational-Harry describes a Harry who is rational, but it's nonstandard usage and might confuse a few people (Is his name "Rational-Harry"? Do I have to call him that in-universe to differentiate him from Empirical-Harry and Oblate-Spheroid-Harry?). Rational Harry might just be someone attaching an adjective to Harry to indicate that, at the moment, he's rational, or more rational by contrast to Silly Dumbledore.

Anyway, adj!noun is a compound with a well-defined purpose within a fandom: to describe how a character differs from canon. It's an understood notation, and the convention, so everyone uses it to prevent misunderstandings. Outside of fandom contexts, using it signals casualness and fandom-savviness to those in fandom culture, while those who aren't familiar with fandom culture can still understand it without noticing the in-joke.

If it's a perfect simulation with no deliberate irregularities, and no dev-tools, and no pattern-matching functions that look for certain things and exert influences in response, or anything else of that ilk, you wouldn't expect to see any supernatural phenomena, of course.

If you observe magic or something else that's sufficiently improbable given known physical laws, you'd update in favor of someone trying to trick you, or of your misunderstanding something, of course, but you'd also update at least slightly in favor of hypotheses under which magic can exist, such as simulation, aliens, or a huge conspiracy. If you assigned zero prior probability to those, you couldn't update in that direction at all.

As for what would raise the simulation hypothesis relative to non-simulation hypotheses that explain supernatural things, I don't know. Look at the precise conditions under which supernatural phenomena occur, see if they fit a pattern you'd expect an intelligence to devise? See if they can modify universal constants?

As for what you could do if you discovered a non-reductionist effect? If it seems sufficiently safe, take advantage of it; if it's dangerous, ignore it or try to keep other people from discovering it; if you're an AI, try to break out of the universe-box (or do whatever), I guess. Try to use the information to increase your utility.

There are more reasons to do it than training your system 1. It sounds like it would be an interesting experience and make a good story. Interesting experiences are worth their weight in insights, and good stories are useful to any goals that involve social interaction.

Do you assign literally zero probability to the simulation hypothesis? Because in-universe irreducible things are possible, conditional on it being true.

Assigning a slightly-too-high prior is a recoverable error: evidence will push you toward a nearly-correct posterior. For an AI with enough info-gathering capability, evidence pushes it there fast; you could assign it a prior of .99 to "the sky is orange" and it would still figure out the truth almost instantly. Assigning a literally zero prior is a fatal flaw that no amount of evidence can correct.
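A minimal sketch of the arithmetic (the binary hypothesis, the likelihoods, and the observation counts are all illustrative assumptions on my part):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """One step of Bayes' rule for a binary hypothesis H given evidence E."""
    joint_h = p_e_given_h * prior
    return joint_h / (joint_h + p_e_given_not_h * (1 - prior))

# Near-certain but wrong prior: P("the sky is orange") = 0.99.
# Each look at the sky is 99x more likely if the sky is NOT orange.
p = 0.99
for _ in range(5):
    p = bayes_update(p, 0.01, 0.99)
print(p)  # ~1e-8: a handful of observations undoes the bad prior

# A literally zero prior is unrecoverable: the posterior stays 0 forever,
# even when each observation is 99x more likely under the hypothesis.
p = 0.0
for _ in range(5):
    p = bayes_update(p, 0.99, 0.01)
print(p)  # exactly 0.0
```

The asymmetry is the whole point: any nonzero prior gets multiplied toward the truth by likelihood ratios, but zero times anything is zero.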

I don't think that's what they're saying at all. I think they mean: don't hardcode physics understanding into them the way humans have a hardcoded intuition for Newtonian physics, because our current understanding of the universe isn't strong enough to be confident we're not missing something. So it should be able to figure out the mechanism by which its map is written on the territory, and update its map of its map accordingly.

E.g., in case it thinks it's flipping qubits to store memory, and defends its databases accordingly, but actually qubits aren't the lowest level of abstraction and it's really wiggling a hyperdimensional membrane in a way that makes it behave like qubits under most circumstances; or in case the universe isn't 100% reductionist and some psychic comes along and messes with its mind using mystical woo-woo. (The latter being incredibly unlikely, but hey, might as well have an AI that can prepare itself for anything.)

Ambiguity-resolving trick: if phrases can be interpreted as parallel, they probably are.

Recognizing that "knows not how to know" parallels "knows not also how to unknow," or more simply "how to know" || "how to unknow," makes the aphorism much easier to parse.

"You only defect if the expected utility of doing so outweighs the expected utility of the entire community to your future plans." These aren't the two options available, though: you'd take into account the risk of other people defecting and thus reducing the expected utility of the entire community by an appreciable amount. Your argument only works if you can trust everyone else not to defect, too - in a homogenous community of Briennes, for instance. In a heterogenous community, whatever spooky coordination your clones would use won't work, and cooperation is a much less desirable option.

True, the availability heuristic, which the quote condemns, often does give results that correspond to reality - otherwise it wouldn't be a very useful heuristic, now would it! But there's a big difference between a heuristic and a rational evaluation.

Optimally, the latter should screen out the former, and you'd think things along the lines of "this happened in the past and therefore things like it might happen in the future," or "this easily-imaginable failure mode actually seems quite possible."

"This is an easily-imaginable failure mode therefore this idea is bad," and its converse, are not as useful, unless you're dealing with an intelligent opponent under time constraints.
