Open thread, October 2011

This thread is for discussing anything that doesn't seem to deserve its own post.

If the resulting discussion becomes impractical to continue here, it means the topic is a promising candidate for its own thread.

Comments


Some time ago, I had a simple insight that seems crucial and really important, and has been on my mind a lot. Yet at the same time, I'm unable to really share it, because on the surface it seems so obvious as to not be worth stating, and very few people would probably get much out of me just stating it. I presume that this is an instance of the Burrito Phenomenon:

While working on an article for the Monad.Reader, I’ve had the opportunity to think about how people learn and gain intuition for abstraction, and the implications for pedagogy. The heart of the matter is that people begin with the concrete, and move to the abstract. Humans are very good at pattern recognition, so this is a natural progression. By examining concrete objects in detail, one begins to notice similarities and patterns, until one comes to understand on a more abstract, intuitive level. This is why it’s such good pedagogical practice to demonstrate examples of concepts you are trying to teach. It’s particularly important to note that this process doesn’t change even when one is presented with the abstraction up front! For example, when presented with a mathematical definition for the first time, most people (me included) don’t “get it” immediately: it is only after examining some specific instances of the definition, and working through the implications of the definition in detail, that one begins to appreciate the definition and gain an understanding of what it “really says.”

Unfortunately, there is a whole cottage industry of monad tutorials that get this wrong. To see what I mean, imagine the following scenario: Joe Haskeller is trying to learn about monads. After struggling to understand them for a week, looking at examples, writing code, reading things other people have written, he finally has an “aha!” moment: everything is suddenly clear, and Joe Understands Monads! What has really happened, of course, is that Joe’s brain has fit all the details together into a higher-level abstraction, a metaphor which Joe can use to get an intuitive grasp of monads; let us suppose that Joe’s metaphor is that Monads are Like Burritos. Here is where Joe badly misinterprets his own thought process: “Of course!” Joe thinks. “It’s all so simple now. The key to understanding monads is that they are Like Burritos. If only I had thought of this before!” The problem, of course, is that if Joe HAD thought of this before, it wouldn’t have helped: the week of struggling through details was a necessary and integral part of forming Joe’s Burrito intuition, not a sad consequence of his failure to hit upon the idea sooner.

I'm curious: do others commonly get this feeling of having finally internalized something really crucial, which you at the same time know you can't communicate without spending so much time as to make it not worth the effort? I seem to get one such feeling maybe once or a couple of times a year.

To clarify, I don't mean simply the feeling of having an intuition which you can't explain because of overwhelming inferential distance. That happens all the time. I mean the feeling of something clicking, and then occupying your thoughts a large part of the time, which you can't explain because you can't state it without it seeming entirely obvious.

(And for those curious - what clicked for me this time around was basically the point Eliezer was making in No Universally Compelling Arguments and Created Already in Motion, but as applied to humans, not hypothetical AIs. In other words, if a person's brain is not evaluating beliefs on the basis of their truth-value, then it doesn't matter how good or right or reasonable your argument is - or for that matter, any piece of information that they might receive. And brains can never evaluate a claim on the basis of the claim's truth value, for a claim's truth value is not a simple attribute that could just be extracted directly. This doesn't just mean that people might (consciously or subconsciously) engage in motivated cognition - that, I already knew. It also means that we ourselves can never know for certain whether hearing the argument that should convince us if we were perfect reasoners will in fact convince us, or whether we'll just dismiss it as flawed for basically no good reason. )

I propose a thread in which people practice saying they were wrong and possibly also saying they were surprised.

For the past year or two I've felt like there are literally no avenues open to me towards social, romantic, or professional advancement, up from my current position of zero. On reflection, it seems highly unlikely that this is actually true, so it follows that I'm rather egregiously missing something. Are there any rationalist techniques designed to make one better at noticing opportunities (ones that come along and ones that have always been there) in general?

I was about to explain why nobody has an answer to the question you asked, when it turned out you already figured it out :) As for what you should actually do, here's my suggestion:

  1. Explain your actual situation and ask for advice.

  2. For each piece of advice given, notice that you have immediately come up with at least one reason why you can't follow it.

  3. Your natural reaction will be to post those reasons, thereby getting into an argument with the advice givers. You will win this argument, thereby establishing that there is indeed nothing you can do.

  4. This is the important bit: don't do step 3! Instead, work on defeating or bypassing those reasons. If you can't do this by yourself, go ahead and post the reasons, but always in a frame of "I know this reason can be defeated or bypassed, help me figure out how," which aligns you with the advice givers instead of against them.

  5. You are allowed to reject some of the given advice, as long as you don't reject all of it.

Okay, I've read through the other responses and I think I understand what you're asking for, but correct me if I'm wrong.

A technique I've found useful for noticing opportunities once I've decided on a goal is thinking and asking about the strategies that other people who have succeeded at the goal used, and seeing if any of them are possible from my situation. This obviously doesn't work so well for goals sufficiently narrow or unique that no one has done them before, but that doesn't seem to be what you're talking about.

Social advancement: how do people who have a lot of friends and are highly respected make friends and instill respect? Romantic advancement: how did the people in stable, committed relationships (or who get all the one-night stands they want, whichever) meet each other and become close? Professional advancement: how did my boss (or mentor) get their position?

Edit: Essentially I'm saying the first step to noticing more opportunities is becoming more familiar with what an opportunity looks like.

You might be interested in The Luck Factor. It's based on research about lucky and unlucky people, and the author says that lucky people are high on extroversion, have a relaxed attitude toward life (so that they're willing to take advantage of opportunities as they appear; in other words, they don't try to force particular outcomes, and they haven't given up on paying attention to what might be available), and are open to new experiences.

The book claims that all these qualities can be cultivated.

Alright, since no one seems to be understanding my question here, I'll try to reframe it.

(First, to be clear, I'm not having a problem with motivation. I'm not having a problem with indecision. I'm not having a problem with identifying my terminal goal(s).)

To use an analogy, imagine you're playing a video game, and at some point you come to a room where the door shuts behind you and there's no other way out. There's nothing in the room you can interact with, nothing in your inventory that does anything; you pore over every detail of the room, and find there is no way to progress further; the game has glitched, you are stuck. There is literally no way beyond that room and no way out of it except resetting to an earlier save point.

That is how my life feels from the inside: no available paths. (In the glitched video game, it is plausible that there really is no action that will lead to progression beyond the current situation. In real life, not so much.)

Given that it is highly unlikely that this is an accurate Map of the Territory that is the real world, clearly there is a flaw in how I generate my Map in regards to potential paths of advancement in the Territory. It is that cognitive flaw that I wish to correct.

I am asking only for a way to identify and correct that flaw.

On the Freakonomics blog, Steven Pinker had this to say:

There are many statistical predictors of violence that we choose not to use in our decision-making for moral and political reasons, because the ideal of fairness trumps the ideal of cost-effectiveness. A rational decision-maker using Bayes’ theorem would say, for example, that one should convict a black defendant with less evidence than one needs with a white defendant, because these days the base rates for violence among blacks is higher. Thankfully, this rational policy would be seen as a moral abomination.

I've seen a common theme on LW that is more or less "if the consequences are awful, the reasoning probably wasn't rational". Where do you think Pinker's analysis went wrong, if it did go wrong?

One possibility is that the utility function to be optimized in Pinker's example amounts to "convict the guilty and acquit the innocent", whereas we probably want to give weight to another consideration as well, such as "promote the kind of society I'd wish to live in".

A rational decision-maker using Bayes’ theorem would say, for example, that one should convict a black defendant with less evidence than one needs with a white defendant, because these days the base rates for violence among blacks is higher.

One would compare black defendants with guilty black defendants and white defendants with guilty white defendants. It's far from obvious that (guilty black defendants/black defendants) > (guilty white defendants/white defendants). Differing arrest rates, plea bargaining etc. would be factors.
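To make that concrete, here is a small sketch with entirely invented numbers (the group names, offense rates, and arrest rates are hypothetical, and it crudely treats every offender as ending up a defendant). The point is only that differing arrest rates by themselves can invert the comparison:

```python
# Invented numbers only: a higher underlying offense rate in one group need not
# translate into a higher fraction of guilty defendants, because arrest practices
# sit in between.

offenders_per_1000 = {"group_a": 20, "group_b": 10}  # assumed offense rates
arrests_per_1000 = {"group_a": 40, "group_b": 12}    # assumed arrest rates (lower bar for group_a)

for group in offenders_per_1000:
    guilty_fraction = offenders_per_1000[group] / arrests_per_1000[group]
    print(f"{group}: guilty defendants / defendants = {guilty_fraction:.2f}")

# group_a: 0.50, group_b: 0.83 -- under these made-up arrest rates, the group with
# the higher offense rate ends up with the lower guilt rate among defendants.
```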

Where do you think Pinker's analysis went wrong, if it did go wrong?

He began a sentence by characterizing what a member of a group "would say".

One would compare black defendants with guilty black defendants and white defendants with guilty white defendants. It's far from obvious that (guilty black defendants/black defendants) > (guilty white defendants/white defendants). Differing arrest rates, plea bargaining etc. would be factors.

60% of convicts who have been exonerated through DNA testing are black, whereas blacks make up 40% of inmates convicted of violent crimes. Obviously this is affected by the fact that "crimes where DNA evidence is available" does not equal "violent crimes". But the proportion of inmates incarcerated for rape/sexual assault who are black is even smaller: ~33%. There are other confounding factors, like which convicts received DNA testing for their crime. But it looks like a reasonable case can be made that the criminal justice system's false positive rate is higher for blacks than whites. Of course, the false negative rate could be higher too. If cross-racial eyewitness identification is to blame for wrongful convictions, then uncertain cross-racial eyewitnesses might cause wrongful acquittals.

If you instituted a policy to require less evidence to convict black defendants, you would convict more black defendants, which would make the measured "base rates for violence among blacks" go up, which would mean that you could need even less evidence to convict, which...
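A toy simulation of that ratchet, with every parameter invented (the guilt rate, evidence distributions, starting threshold, and adjustment strength are purely illustrative); it only shows the direction of the loop, not any realistic magnitudes:

```python
# Toy feedback loop: a lower evidence threshold produces more convictions, and the
# higher measured conviction rate is then used to justify lowering the bar again.

import random

random.seed(0)

TRUE_GUILT_RATE = 0.5    # assumed constant underlying rate; only the policy changes
ADJUSTMENT = 0.3         # how strongly the threshold reacts to the measured rate

def conviction_rate(threshold, n_defendants=10_000):
    """Fraction of defendants convicted when `threshold` evidence is required."""
    convicted = 0
    for _ in range(n_defendants):
        guilty = random.random() < TRUE_GUILT_RATE
        # Guilty defendants tend to have stronger evidence against them,
        # but the two distributions overlap.
        evidence = random.gauss(0.7 if guilty else 0.4, 0.15)
        if evidence >= threshold:
            convicted += 1
    return convicted / n_defendants

threshold = 0.9
for year in range(1, 6):
    measured_rate = conviction_rate(threshold)
    print(f"year {year}: threshold {threshold:.3f} -> measured conviction rate {measured_rate:.3f}")
    # Policy responds to the higher measured rate by requiring still less evidence.
    threshold -= ADJUSTMENT * measured_rate
```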

Pinker didn't address evidence screening off other evidence. Race would be rendered zero evidence in many cases, in particular in criminal cases for which there is approximately enough evidence to convict. I'm not exactly sure how often, I don't know how much e.g. poverty, crime, and race coincide.

It is perhaps counterintuitive to think that Bayesian evidence can apparently be ignored, but of course it isn't really being ignored, just carefully not double counted.
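Here is a brute-force sketch of that "carefully not double counted" point, using an invented toy model (the variables and all the probabilities are hypothetical). Suppose guilt depends only on some directly observable factor that race is merely correlated with, and the case evidence depends only on guilt; then once the court knows that factor, race contributes exactly nothing further:

```python
# Toy Bayes net R -> P -> G -> E: race R is correlated with an observable factor P,
# guilt G depends only on P, and the case evidence E depends only on G.
# All numbers are made up for illustration.

p_race = {"a": 0.3, "b": 0.7}                  # P(R)
p_factor_given_race = {"a": 0.5, "b": 0.2}     # P(P=1 | R)
p_guilt_given_factor = {1: 0.30, 0: 0.10}      # P(G=1 | P)
p_evidence_given_guilt = {1: 0.80, 0: 0.05}    # P(E=1 | G)

def joint(r, factor, g, e):
    """Joint probability P(R=r, P=factor, G=g, E=e) under the toy model."""
    pr = p_race[r]
    pp = p_factor_given_race[r] if factor else 1 - p_factor_given_race[r]
    pg = p_guilt_given_factor[factor] if g else 1 - p_guilt_given_factor[factor]
    pe = p_evidence_given_guilt[g] if e else 1 - p_evidence_given_guilt[g]
    return pr * pp * pg * pe

def p_guilt(race, factor, evidence=1):
    """P(G=1 | R, P, E) by brute-force enumeration over G."""
    num = joint(race, factor, 1, evidence)
    den = sum(joint(race, factor, g, evidence) for g in (0, 1))
    return num / den

# Once the factor race was a proxy for is known, race adds zero evidence:
print(p_guilt("a", factor=1), p_guilt("b", factor=1))  # the two numbers are identical
```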

The percentage of arrestees who are black is higher than the percentage of offenders who are black as reported by victims in surveys.

(Suggesting the base rate is screened off by the time the matter gets to the courtroom.)

The problem isn't using it as evidence. The problem is that it is extremely likely that humans will use such evidence in much greater proportion than is actually statistically justified. If juries were perfect Bayesians this wouldn't be a problem.

Grocery stores should have a lane where they charge more, such as 5% more per item. It would be like a toll lane for people in a hurry.

People are bothered by some words and phrases.

Recently, I learned that the original meaning of "tl;dr" has stuck in people's minds such that they don't consider it a polite response. That's good to know.

Some things that bother me are:

  • Referring to life extension as "immortality".
  • Referring to AIs that don't want to kill humans as "friendly".
  • Referring to AIs that want to kill humans as simply "unfriendly".
  • Expressing disagreement as false lack of understanding, e.g. "I don't know how you could possibly think that."
  • Referring to an "individual's CEV".
  • Referring to "the singularity" instead of "a singularity".

I'm not going to pretend that referring to women as "girls" inherently bothers me, but it bothers other people, so it by extension bothers me, and I wouldn't want it excluded from this discussion.

Some say to say not "complexity" or "emergence".

I propose a thread in which people refine their questions for the speakers at the Singularity Summit.

Why does the argument "I've used math to justify my views, so it must have some validity" tend to override "Garbage In - Garbage Out"? It can be this thread:

I estimate, that a currently working and growing superintelligence has a probability in a range of 1/million to 1/1000. I am at least 50% confident that it is so.

or it can be the subprime mortgage default risk.

What is the name for this cognitive bias of trusting the conclusions more (or sometimes less) when math is involved?

He didn't use math to justify his views. He used it to state them.

Sounds like a special case of "judging an argument by its appearance" (maybe somebody can make that snappier). It's fairly similar to "it's in Latin, therefore it must be profound", "it's 500 pages, therefore it must be carefully thought-out" and "it's in Helvetica, therefore it's from a trustworthy source".

Note that this is entirely separate from judging by the arguer's appearance.

I'm having trouble finding a piece which I am fairly confident was either written on LW or linked to from here. It dealt with a stone which had the power to render all the actions of the person who held it morally right. So a guy goes on a quest to get the stone, crossing the ocean and defeating the fearful guardian, and finds it and returns home. At some point he kills another guy, and gets sentenced by a judge, and it is pointed out that the stone protects him from committing morally wrong actions, not from the human institution of law. Then the guy notices that he is feeling like crap because he is a murderer, and it is pointed out that the stone isn't supposed to protect him from his feelings of guilt. And so on, with the stone proving to be useless because the "morality" wasn't attached to anything real.

If somebody knows what I'm talking about, could they be so kind as to point me towards it?

The Heartstone in Yvain's Consequentialism FAQ. Except it's a cat the guy kills.