Query the LessWrong Hivemind

Often there are questions you want answered. You want other people's opinions, because finding the answer yourself isn't worth the time it would take, or because you're unsure whether your own answer is right.

LW seems like a good place to ask these questions because the people here are pretty rational. So, in this thread: You post a top-level comment with some question. Other people reply to your comment with their answers. You upvote answers that you agree with and questions whose answers you'd like to know.


A few (mostly obvious) guidelines:

For questions:

  • Your question should probably be in one of the following forms:
    • Asking for the probability some proposition is true.
    • Asking for a confidence interval.
  • Be specific. Don't ask when the singularity will happen unless you define 'singularity' to reasonable precision.
  • If you have several questions, post each separately, unless they're strongly related.

For answers:

  • Give what the question asks for, be it a probability or a confidence interval or something else. Try to give numbers.
  • Give some indication of how good your map is, i.e., why is your answer what it is? If you want, give links.
  • If you think you know the answer to your own question, you can post it.
  • If you want to, give more information. For instance, if someone asks whether it's a good idea to brush their teeth, you can include info about flossing.
  • If you've researched something well but don't feel like typing up a long justification of your opinions, that's fine. Better to give your opinion without detailed arguments than to give nothing at all. You can always flesh out your answer later, or never.


This thread is primarily for getting the hivemind's opinions on things, not for debating probabilities of propositions. Debating is also okay, though, especially since it will help question-posters to make up their minds.

Don't be too squeamish about breaking the question-answer format.

This is a followup to my comment in the open thread.

Comments


If you query Less Wrong, what is the probability that the median response is acceptably close to correct? Please provide confidence intervals. Feel free to break out particular classes of propositions if you feel it would be unfair/poor form/not very fun to group all classes together, but explain why.

Confidence interval(s): If the typical LWer knew the extent of all effects of {cardiovascular, weight-training, other} exercise, and they were able to commit to any amount of said exercise and stick to it, how much would they do?

Assume that any time they spend doing exercise would otherwise have been spent doing other work.

If you want to be more specific, what advice would you give to healthy 25-year-olds, to healthy 40-year-olds, etc.?

I would assume this varies greatly by individual, based on biological factors. My answer would be "enough to feel fit", where moderate levels of physical play would be considered fun rather than dreaded. For me that's about 40 minutes of weight training, four times per week. I don't know if that's typical, but having lived on both sides of the line, the benefits of feeling fit are very easily worth that lost time. Everything else I do benefits; it's akin to getting enough sleep.

Going on my own anecdotal experience, I think there is substantial marginal benefit to cardiovascular exercise even at the level of 4+ hours a day of said exercise.

For a while I've been doing 4 or 5 hours a day of cycling and it seems to have weird cognitive effects similar to those of caffeine. While exercising I seem to sometimes go into some sort of trance state where I'm pretty excited and can do lots of work (at the same time as the cycling) pretty quickly without it feeling like actual work. I can listen to the same dance track 30 times without becoming bored of it. I think this might be due to 'runner's high', or due to increased blood flow to the brain.

Probability: If the typical modern {person, LWer} knew all the positive and negative effects of taking {modafinil, piracetam, etc.}, they would pay present prices to take them.

  • Caffeine: person - 0.9, LW - 0.9
  • Nicotine: (not including reasons for smoking, addictiveness of smoking, or taking nicotine products to break smoking addiction) person - 0.1, LW - 0.8
  • Piracetam: person - 0.05, LW - 0.25
  • Oxi/Ani/other 'potent' racetams: person - 0.05, LW - 0.4
  • Amphetamines (Adderall, dexamphetamine, Ritalin): person - 0.1, LW - 0.8
  • Modafinil: person - 0.3, LW - 0.99 (!!)

Source: a rationalist's interest in nootropics and stimulants, gwern's site, personal experience with the first four but no statistics. Typical modern person probabilities from discussions with acquaintances of various levels of openness. Summary of probabilities: nicotine and amphetamines are very useful but have negative associations that require high levels of rationality to overcome. Modafinil is very useful but seeing that and using it properly is probably a tad difficult for the average person.

What is the minimum effective period over which one should try a new dietary plan, before reaching conclusions on its effectiveness?

(In other words, what is the time granularity for dietary self-experimentation? This question could be generalized to other health issues where self-experimentation is appropriate.)

My wild ass guess is one month.

I base this number on marijuana testing supposedly being useless after a month. It does work very well for two weeks, though, because some byproduct is fat-soluble and is absorbed by your fat cells, where it takes up to a month to completely rinse out. So, for example, if you were eating a lot of meat from ranch animals raised on hormones, there may well be junk in there that takes your system a month to purge after you stop eating it, and you would not be able to see the undiluted effects for a month. Also, this is the time granularity I use for my own diet self-experiments, which I never seem to get to the end of. My diets are for fitness and health purposes only, as I have never been much over- or underweight.

I base this number on marijuana testing supposedly being useless after a month.

Did you just base an estimate of how long to test a diet on a speculative guideline for circumventing a drug test, one that relies on the elimination half-life of a specific metabolite of a certain recreational drug? Wow. Fermi would love you!
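
For what it's worth, the Fermi arithmetic behind any such washout guideline is simple exponential decay. A minimal sketch, with a made-up half-life (real elimination half-lives vary widely by compound and person):

```python
# Toy washout arithmetic (half-life invented for illustration): how long
# until a fat-stored compound drops below 5% of its starting level?
import math

half_life_days = 5.0   # assumed; real elimination half-lives vary widely
threshold = 0.05       # "effectively purged" cutoff

# Remaining fraction after t days is 0.5 ** (t / half_life_days).
days = half_life_days * math.log(threshold) / math.log(0.5)
print(f"{days:.1f} days to fall below {threshold:.0%}")  # ~21.6 days
```

Roughly four to five half-lives gets you below 5% remaining, which is where month-scale rules of thumb like this one come from.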

Probability that the universe is infinitely large.

Isn't that an unknowable? We literally have no means of deriving information about the universe beyond our lightcone. And that's not even touching on what qualifies as "part of" the universe, depending on which definition of "universe" you are using.

It's an undefined question, I feel.

We literally have no means of deriving information about the universe beyond our lightcone.

Mathematics.

Let G be a grad student with an IQ of 130 and a background in logic/math/computing.

Probability: The quality of life of G will improve substantially as a consequence of reading the sequences.

Probability: Reading the sequences is a sound investment for G (compared to other activities).

Probability: If every person on the planet were trained in rationality (as far as IQ permits) humanity would allocate resources in a sane manner.

1 & 2: Yes, 80% confidence. However, I don't think reading the Sequences should be a chore. Start with the daily Seq Reruns and follow them for a week or two. If you don't enjoy it, don't read it. The reason I (and probably most people) read the Sequences is that they were fun to read.

3: "Sane" isn't precise enough to answer. However I would say that the allocation would be more sane than currently practiced with 98% confidence.

For 1 and 2:

I think you need to qualify 'quality of life' a bit. Are you asking if the sequences will make you happier? Resolve some cognitive dissonance? Make you 'win' more (make better decisions)? Even with that sort of clarification, however, it seems difficult to say.

For me, I could say that I feel like I've cleared out some epistemological and ethical cobwebs (lingering bad or inconsistent ideas) by having read them. In any event, there are too many confounding variables, and this requires too much interpretation for me to feel comfortable assigning an estimate at this time.

For 3: I think I would need to know what it means to "train someone in rationality". Do you mean have them complete a course, or are we instituting a grand design in which every human being on Earth is trained like Brennan?

  • P(substantial improvement) ~ .2
  • P(sound investment) ~ .8
  • P(rationaltopia) ~ .01

What's the probability that the Swiss central bank will maintain its cap on the franc vs. the euro? And what is your confidence interval for when they might give it up, if they do decide to give it up?

Probability: You are living in a simulation run by some sort of intelligence.

Probability: Other people exist independently of your own mind.

Probability: You are dreaming at this very moment. (Learning to dream lucidly is largely a matter of giving this a high probability and keeping it in mind, and updating on it when you encounter, for instance, people asking whether you're dreaming.)
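
To make "updating on it" concrete, here's a toy Bayes calculation; every number in it is invented for illustration:

```python
# Toy Bayesian update (all numbers invented): how much should hearing the
# question "are you dreaming?" shift your credence that you're dreaming?
prior_dream = 0.05         # assumed prior that this moment is a dream
p_ask_if_dream = 0.10      # assumed: dreams often feature such prompts
p_ask_if_awake = 0.001     # assumed: people rarely ask this in waking life

posterior = (p_ask_if_dream * prior_dream) / (
    p_ask_if_dream * prior_dream + p_ask_if_awake * (1 - prior_dream)
)
print(f"P(dreaming | asked) = {posterior:.2f}")  # ~0.84
```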

Meta comment: If these questions were in separate comments, I'd upvote/downvote them differently. I'm interested in thoughts/arguments related to the probability of simulation, and I have little interest in solipsism or lucid dreaming. They don't seem closely related topics to me. Am I missing something?

They all seem to be asking variants of the question "how likely is it that apparent reality is real?". They also all seem to have weird properties as far as evidence is concerned, because the observable evidence must all come from the very source (observed reality) whose credibility we're questioning.

Also, except for the solipsism one, they seem to be questions where, contrary to LW canon, it might be a good idea to deliberately self-delude (by which I mean, for instance, not bothering to look at the evidence in-depth). If I really felt a .5 probability in my bones that I was living in a simulation, I don't think I'd be able to work as hard at achieving my goals; I wouldn't have as much will to power when it could all disappear any moment.

Aside: I'm genuinely surprised at the lack of discussion of lucid dreaming on LW. Lucid dreaming seems like a big gaping loophole in reality, like one of the elements you'd need in a real-life equivalent of the infinite-wish-spell-cycle, yet nobody seems to be seriously experimenting with finding innovative uses for it.

In hindsight, though, it seems like removing the middle question might have been better.

If I really felt a .5 probability in my bones that I was living in a simulation, I don't think I'd be able to work as hard at achieving my goals; I wouldn't have as much will to power when it could all disappear any moment.

Would that depend at all on your beliefs about the simulators?

E.g., if you felt a .5 probability that you were in a simulation being run by a real person who shared various important attributes with you, who was attempting to determine the best available strategy for achieving their goals, such that you being successful at achieving yours led directly to them being more successful at achieving theirs, would your motivations change?

  • P(simulation) ~ .01
  • P(other minds) ~ .9999
  • P(dreaming) ~ .0001

P(Simulation) < 0.01; there is little evidence in favor of it, and it requires that there is some other intelligence doing the simulating, and that there can be the kind of fault-tolerant hardware that can (flawlessly) compute the universe. I don't think our posthuman descendants are capable of running a universe as a simulation. I think Bostrom's simulation argument is sound.

1 - P(Solipsism) > 0.999; My mind doesn't contain minds that are consistently smarter than I am and can out-think me on every level.

P(Dreaming) < 0.001; We don't dream of meticulously filling out tax forms and doing the dishes.

[ Probabilities are not discounted for expecting to come into contact with additional evidence or arguments ]

My mind doesn't contain minds that are consistently smarter than I am and can out-think me on every level.

Idea: play a game of chess against someone while in a lucid dream.

If you won or lost consistently, it would show that you are better at chess than you are at chess.

If anyone actually does this, I think you should alternate games sitting normally and with your opponent's pieces on your side of the board (i.e. the board turned 180 degrees), because I'd expect your internal agents to think better when they're seeing the board as they would in a real chess match.

My favorite moment along those lines was at work years ago, when a developer asked me to validate the strategy she was proposing to solve a particular problem.

She laid out the strategy for me, I worked through some examples, and said "OK... this looks right to me. But you should ask Mark about it, too, because Mark is way more familiar with our tax code than I am, and he might notice something I didn't... like, for example, the fact that this piece over here will fail under this obscure use case."

Then I blinked, listened to what I'd just said, and added "Which I, of course, would never notice. So you should go ask Mark about it."

She, being very polite, simply nodded and smiled and backed away quickly.

(Learning to dream lucidly is largely a matter of giving this a high probability and keeping it in mind, and updating on it when you encounter, for instance, people asking whether you're dreaming.)

I find this statement curious. Perhaps my memory is simply biased on the matter, but every dream I can recall -- or, rather, every dream I recall recalling (and those are few and far between at that) -- has always been lucid. Even growing up this was the case. I've always had bouts of insomnia as well. I cannot discount the possibility that I'm simply recalling those things that conform to the patterns of my expectations, but I do know for a fact that I never had to "learn" how to dream lucidly. I recall one particularly vivid string of dreams I had as a child -- or, rather, one particular recurring facet of said dreams -- that all involved me being able to walk two inches off the ground. This is actually one of my earliest memories (I recall little about my early childhood). This "walking off the ground" was something I did because I knew it was a dream.

I have no inclination towards guessing the significance (or magnitude of that significance) of this.

Some people are naturally better at lucid dreaming than others. If you're interested, there is a great forum for lucid dreaming at dreamviews.com.

I have a question about Pascal's Mugging. This does break the standard question-answer format, but you said not to be squeamish about that, so here is the problem I am currently considering.

According to the wiki, the standard Pascal's Mugging is formulated like this:

Now suppose someone comes to me and says:

"Give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people."

Now, further suppose that someone says

"Never give into a Pascal's Mugging except this one. If you do, I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills n^^^^n people, where n is the amount of people threatened by the other Pascal's mugger."

Let's call this a Meta Pascal's Mugging, since it is a Pascal's Mugging which is contingent on your reaction to a Standard Pascal's Mugging. This is a fairly complicated mugging!

Now further suppose a third person says:

"Regardless of the fact that you are under a Meta Pascal's Mugging to not give into a Standard Pascal's Mugging, I am still going to commit Pascal's Mugging on you for five dollars. If you don't give me the money, I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills m^^^^m people, where m is the amount of people threatened by the Meta Pascal's mugger who threatened you if you gave into another Pascal's mugging."

So we could call this a Recursive Pascal's Mugging. Both muggers are threatening to mug MORE people than the other, since the Meta Pascal's Mugging applies to all other muggings, regardless of their level of recursion, although it itself does not start a recursive loop.

Now let's say I am mugged by all THREE Pascal's muggers simultaneously. What do I do?

Clearly, "no Pascal's mugging is worth worrying about, and I don't need to give in to any of them" is an answer. But it's also really easy to get to in answer space, so I'm curious if there are any other answers I might not be thinking of.

My own response is that no Pascal's mugging is worth worrying about.

I'm curious why you only take into consideration scenarios that someone informs you of. That is, suppose a fourth person sits in their control center and decides that every time MichealOS refuses to give money to a Pascal's Mugger, they will simulate m^^^m people and give them fantastically happy eternal lives -- but they don't inform you of that decision.

The probability of this is vanishingly small, of course, but it's only marginally lower than the probability of your other proposed muggings. So presumably you have to take it into account along with everything else, right?
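
To make the near-cancellation concrete, here is a toy expected-value comparison; all the numbers are invented, and the real threats use stakes far larger than this:

```python
# Toy comparison (all numbers invented): an announced mugging vs. an
# unannounced, symmetric anti-mugging.
U = 10.0 ** 6          # stand-in stake; real muggers claim far larger numbers
p_mugging = 1e-9       # credence that the announced threat is genuine
p_anti = 1e-9          # symmetric prior for the unannounced anti-mugging

# Refusing risks the mugging's harm but gains the anti-mugging's reward,
# and keeps the five dollars; with symmetric priors the huge terms cancel.
ev_refuse = -p_mugging * U + p_anti * U + 5.0
ev_pay = -5.0
print(ev_refuse, ev_pay)   # 5.0 vs -5.0: refusing wins
```

The whole question, then, is whether the mugger's announcement is evidence enough to break that symmetry.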

That's a good point. Let me see if I understand the conclusion correctly:

I should consider that there is an opposing Pascal's Anti-Mugging for any Pascal's Mugging, and it seems reasonable that I don't have any reason to consider an Unknown Anti-Mugging more likely than an Unknown Mugging before someone tells me which is occurring.

Once the mugger asserts that there is a mugging, I can ask, "What evidence can you show me that gives you reason to believe that the mugging scenario is more likely than the anti-mugging scenario?" If this is a fake mugging (which seems likely), he won't have any evidence he can show me, which means there is no reason to adjust the priors between the mugging and the anti-mugging, so I can continue not worrying about the mugging.

If I understood you correctly, that sounds like a pretty good way of thinking about it that I hadn't thought of. If it sounds like I haven't gotten it, please explain in more detail.

Either way, thank you for the explanation!

So, this is correct enough, but I would recommend generalizing the principle.

The (nominally) interesting thing about Pascal's Mugging scenarios (and about the original Pascal's Wager, which inspired them) is that we can posit hypothetical scenarios involving utility shifts so vast that, even if the scenarios are vanishingly unlikely, multiplying the probability of the scenario by the magnitude of the utility shift should it come to pass still yields a substantial result. This allows a decision system that operates on expected value (that is, the utility of a scenario times its likelihood) to be manipulated by presenting it with carefully tailored scenarios of this sort (e.g., Pascal's Mugging).
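
As a minimal sketch of that manipulation (the numbers are invented, and far smaller than 3^^^^3):

```python
# A naive expected-value maximizer (all numbers invented): the mugger only
# needs to claim a stake large enough to swamp any sane probability estimate.
def should_pay(p_threat_real: float, claimed_harm: float, price: float = 5.0) -> bool:
    """Pay iff the expected harm averted exceeds the price."""
    return p_threat_real * claimed_harm > price

# Even at one-in-a-trillion credence, a claimed harm of 10**20 lives makes
# paying look like a bargain: 1e-12 * 1e20 = 1e8 >> 5.
print(should_pay(1e-12, 1e20))  # True
```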

It's conceivable that a well-calibrated decision system would not be subject to such manipulation, because it would assign each scenario a probability that reflected such things... e.g., it would estimate the likelihood of there actually existing an Omega capable of creating 2N units of disutility as no more than .5 times the likelihood of an Omega capable of creating only N units.
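
Spelled out (a sketch of that stated condition, not a claim about any real calibration): if each doubling of the claimed disutility at least halves the credence, the expected disutility stays bounded no matter how far the mugger escalates:

```latex
% Let p_k = P(an Omega exists who can create 2^k N units of disutility).
% The condition p_{k+1} <= p_k / 2 iterates to:
\[
  p_k \le 2^{-k} p_0
  \quad\Longrightarrow\quad
  \underbrace{2^k N \, p_k}_{\text{expected disutility}}
  \le 2^k N \cdot 2^{-k} p_0 = N p_0 .
\]
```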

But I've never met any decision system that well calibrated. So, as bounded systems running on inadequate corrupted hardware, we have to come up with other tactics that keep us from driving off cliffs.

In general, one such tactic is to maintain a broader perspective than just the specific problem I've been invited to think about.

So when the Mugger asserts that there is a mugging, I can ask "Why should I care? What other things do I have roughly the same reason to care about, and why is my attention being directed to this particular choice within that set?"

The same thing goes when Pascal himself argues that I ought to worship the Christian God, for example, because no matter how unlikely I consider His existence, the sheer magnitude of the stakes (Heaven and Hell) dwarfs that unlikelihood. If I find that compelling, I should find a vast number of competing Gods' claims equally compelling.

The same thing goes (on a smaller scale) when someone tries to sell me insurance against some specific bad thing happening.