What is your opinion on rationality-promoting articles by Gleb Tsipursky / Intentional Insights? Here is what I think:
Trying to teach someone to think rationally is a long process -- maybe even impossible for some people. It's about explaining the many biases people fall into naturally, demonstrating the futility of "mysterious answers" on a gut level; while the student needs the desire to become stronger, the humility to admit "I don't know" together with the courage to give a probabilistic answer anyway; resisting the temptation to use the new skills to cleverly shoot themselves in the foot, keeping the focus on the "nameless virtue" instead of signalling (even towards fellow rationalists). It is a LW lesson that being a half-rationalist can hurt you, and being a 3/4-rationalist can fuck you up horribly. And online clickbait articles seem like one of the worst choices of medium for teaching rationality. (The only worse choice that comes to my mind would be Twitter.)
On the other hand, imagine that you have a magical button, and if you press it, all not-sufficiently-correct-by-LW-standards mentions of rationality (or logic, or science) would disappear from the world. Not to be replaced by something more lesswrongish, but simply by anything else that usually appears in the given medium. Would pressing that button make the world a more sane place? What would have happened if someone had pressed that button a hundred years ago? In other words, I'm trying to avoid the "nirvana fallacy" -- I am not asking whether those articles are the perfect vehicle for x-rationality, but rather whether they are a net benefit or a net harm. Because if they are a net benefit, then it's better to have them, isn't it?
Assuming that the articles are not merely ignored (where "ignoring" includes "thousands of people with microscopic attention spans read them and then forget them immediately"), the obvious failure mode is people getting wrong ideas, or adopting "rationality" as attire. Is that really so bad? Don't people already have absurdly wrong ideas about rationality? Remember all the "straw Vulcans" produced by the movie industry; Terminator, The Big Bang Theory... Rationality is already associated with being a sociopathic villain, or a pathetic nerd. This is where we are now; and the "rationality" clickbait, however sketchy, cannot make it worse. Actually, it can make a few people interested in learning more. At the very least, it can show people that there is more than one possible meaning of the word.
To me it seems that Gleb is picking the low-hanging fruit that most rationalists wouldn't even touch for... let's admit it... status reasons. He talks to the outgroup, using the language of the outgroup. But if we look at the larger picture, that specific outgroup (people who procrastinate by reading clickbaity self-improvement articles) actually isn't that different from us. They may be our nearest neighbors in human intellectual space. So what some of us (including myself) feel here is the uncanny valley: looking at someone so similar to ourselves, yet so dramatically different in a few small details that matter strongly to us, feels creepy.
Yes, this whole idea of marketing rationality feels wrong. Marketing is almost the very opposite of epistemic rationality ("the bottom line" et cetera). On the other hand, any attempt to bring rationality to the masses will inevitably bring some distortion, which hopefully can be fixed later, once we have their attention. So why not accept the imperfection of the world, and just do what we can?
As a sidenote, I don't believe we are at risk of having an "Eternal September" on LessWrong (more than we already have). More people interested in rationality (or "rationality") will also mean more places to debate it; not everyone will come here. People have their own blogs, social network accounts, et cetera. If rationality becomes the cool thing, they will prefer to debate it with their friends.
EDIT: See this comment for Gleb's description of his goals.
Why?
Now that we know that Newtonian physics was wrong, and Einstein was right, would you support my project to build a time machine, travel to the past, and assassinate Newton? I mean, it would prevent incorrect physics from being spread around. It would make Einstein's theory more acceptable later; no one would criticize him for being different from Newton.
Okay, I don't really know how to build a time machine. Maybe we could just go burn some elementary-school textbooks, because they often contain oversimplified information. Sometimes with silly pictures!
Seems to me that I often see the sentiment that we should raise people from some imaginary level 1 directly to level 3, without going through level 2 first, because... well, because level 3 is better than level 2, obviously. And if those people perhaps can't make the jump, I guess they simply were not meant to be helped.
This is why I wrote about "the low-hanging fruit that most rationalists wouldn't even touch for... let's admit it... status reasons". We are (or imagine ourselves to be) at level 3, and from there all the levels below look equally deplorable. Helping someone else get to level 3, that's a worthy endeavor. Helping people get from level 1 to level 2, that's just pathetic, because the whole of level 2 is pathetic. Even if we could do it at a fraction of the cost.
Maybe that's true when building a superhuman artificial intelligence (better to get it a hundred years later than to get it wrong), but it doesn't apply to most areas of human life. Usually, an improvement is an improvement, even when it's not perfect.
Making all people rationalists could be totally awesome. But making many stupid people slightly less stupid, that's also useful.
Let's start with a false statement from one of Gleb's articles:
What's false? Researchers don't use the terms "intentional system" and "autopilot system".
Why is that a problem? Aren't the terms close enough to System 1 and System 2? A person who's interested might want to read additional literature on the subject. The fact that the terms Gleb invented don't match the existing literature means it's harder for a person to go from reading Gleb's articles to reading higher-level material.
If the person digs deeper they will sooner or later run into trouble. They might have a conversation with a genuine neuroscientist, talk about the "intentional system" and "autopilot system", and find that the neuroscientist hasn't heard the distinction made in those terms. It might take a while until they understand that deception happened, and it might hinder them from progressing.
I think talking about System 1 and System 2 the way Gleb does raises the risk of readers coming away believing that reflective thinking is superior to intuitive thinking. It suggests that rationality is about using System 2 for important issues, instead of focusing on aligning System 1 and System 2 with each other, the way CFAR proposes. The stereotype of a person who categorically prefers System 2 to System 1 is the straw Vulcan. Level 2 of rationality is not "being a straw Vulcan".
In the article on his website Gleb says:
That sounds to me like neurobabble. Kahneman doesn't say that System 2 is about a specific part of the brain. And even if it were completely true, having that knowledge doesn't help a person be more rational. If you want to make a message as simple as possible, you could drop that piece of information without any problem.
Why doesn't he drop it and make the article simpler? Because it helps with pushing an ideology -- what other people in this thread have called rationality-as-religion: the kind of rationality that fills someone's sense of belonging to a group.
I don't see people's rationality being raised in that process. Which leads to the question: what are the basics of rationality?
I think the Facebook group sometimes provides a good venue for understanding what new people get wrong. Yesterday one person accused another of being a fake account. I asked the accuser for his credence, but he replied that he couldn't give a probability for something like that. The accuser wasn't thinking in terms of Cromwell's rule. Making the step from thinking "you are a fake account" to having a mental category of "80% certainty: you are a fake account" is progress. No neuroscience is needed to make that progress.
Rationality for beginners could attempt to teach Cromwell's rule while keeping it as simple as possible. I'm even okay if the term Cromwell's rule doesn't appear. The article can have pretty pictures, but it shouldn't make any false claims.
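To see why Cromwell's rule matters, here is a toy sketch (my own illustration, not from Gleb's articles): under Bayes' rule, a credence of exactly 0 or 1 can never be moved by any evidence, while a merely confident credence like 80% updates normally.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' rule.

    prior          -- P(H), credence in the hypothesis before the evidence
    p_e_given_h    -- P(E|H), probability of the evidence if H is true
    p_e_given_not_h -- P(E|~H), probability of the evidence if H is false
    """
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    if denominator == 0:
        # Evidence impossible under this prior; nothing to update on.
        return prior
    return numerator / denominator

# An 80% credence moves with evidence (here, up to about 92%):
print(bayes_update(0.8, 0.9, 0.3))

# Dogmatic priors of exactly 1.0 or 0.0 never move,
# no matter how strongly the evidence points the other way:
print(bayes_update(1.0, 0.01, 0.99))  # stays 1.0
print(bayes_update(0.0, 0.99, 0.01))  # stays 0.0
```

This is the whole content of the rule for a beginner: reserve 0 and 1 for logical truths, and your beliefs remain able to learn from evidence.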
I admit that "What are the basics of rationality?" isn't an easy question. This community often complicates things. Scott recently wrote "What developmental milestones are you missing?". That article lists four milestones, one of them being Cromwell's rule (Scott doesn't name it).
In my current view of rationality, other basics might be TAPs, noticing, tiny habits, "how not to be a straw Vulcan", and "have conversations with the goal of learning something new yourself, instead of the goal of merely influencing the other person".
A good way to search for basics might also be to notice the moments where you yourself go: "Why doesn't this other person get how the world works? X is obvious to people at LW; why do I have to suffer from living in a world where people don't get X?" I don't think the answer to that question will be that people don't know the prefrontal cortex is responsible for System 2 thinking.