Marketing Rationality

What is your opinion on rationality-promoting articles by Gleb Tsipursky / Intentional Insights? Here is what I think:

Trying to teach someone to think rationally is a long process -- maybe even impossible for some people. It's about explaining the many biases people fall into naturally, and demonstrating the futility of "mysterious answers" at a gut level; meanwhile the student needs the desire to become stronger, the humility to admit "I don't know" together with the courage to give a probabilistic answer anyway, and the discipline to resist the temptation to use the new skills to cleverly shoot themselves in the foot, keeping the focus on the "nameless virtue" instead of on signalling (even towards fellow rationalists). It is a LW lesson that being a half-rationalist can hurt you, and being a 3/4-rationalist can fuck you up horribly. And online clickbait articles seem like one of the worst possible choices of medium for teaching rationality. (The only worse choice that comes to my mind would be Twitter.)

On the other hand, imagine that you have a magical button, and if you press it, all not-sufficiently-correct-by-LW-standards mentions of rationality (or logic, or science) would disappear from the world -- not replaced by something more lesswrongish, but simply by whatever else usually appears in the given medium. Would pressing that button make the world a saner place? What would have happened if someone had pressed that button a hundred years ago? In other words, I'm trying to avoid the "nirvana fallacy" -- I am not asking whether those articles are the perfect vehicle for x-rationality, but rather whether they are a net benefit or a net harm. Because if they are a net benefit, then it's better to have them, isn't it?

Assuming that the articles are not merely ignored (where "ignoring" includes "thousands of people with microscopic attention spans read them and then forget them immediately"), the obvious failure mode is people getting wrong ideas, or adopting "rationality" as attire. Is that really so bad? Don't people already have absurdly wrong ideas about rationality? Remember all the "straw Vulcans" produced by the movie industry: Terminator, The Big Bang Theory... Rationality is already associated with being a sociopathic villain, or a pathetic nerd. This is where we are now; and the "rationality" clickbait, however sketchy, cannot make it worse. Actually, it can make a few people interested in learning more. At the very least, it can show people that there is more than one possible meaning of the word.

To me it seems that Gleb is picking the low-hanging fruit that most rationalists wouldn't even touch for... let's admit it... status reasons. He talks to the outgroup, using the language of the outgroup. But if we look at the larger picture, that specific outgroup (people who procrastinate by reading clickbaity self-improvement articles) actually isn't that different from us. They may actually be our nearest neighbors in human intellectual space. So what some of us (myself included) feel here is the uncanny valley: looking at someone so similar to ourselves, yet so dramatically different in a few small details that matter strongly to us, feels creepy.

Yes, this whole idea of marketing rationality feels wrong. Marketing is almost the very opposite of epistemic rationality ("the bottom line" et cetera). On the other hand, any attempt to bring rationality to the masses will inevitably bring some distortion, which can hopefully be fixed later, once we have their attention. So why not accept the imperfection of the world and just do what we can?

As a sidenote, I don't believe we are at risk of an "Eternal September" on LessWrong (beyond what we already have). More people interested in rationality (or "rationality") will also mean more places to debate it; not everyone will come here. People have their own blogs, social network accounts, et cetera. If rationality becomes the cool thing, they will prefer to debate it with their friends.

EDIT: See this comment for Gleb's description of his goals.

Comments


I have not a clue whether this sort of marketing is a good idea. Let me be clear what I mean: I think there's maybe a 30-40% chance that Gleb is having a net positive impact through these outreach efforts. I also think there's maybe a 10-20% chance that he's having a horrific long-term negative impact through these outreach efforts. Thus the whole thing makes me uncomfortable.

So here are some of the concerns I see; I've gone to some effort to be fair to Gleb, and not to assume anything about his thoughts or motivations:

  • By presenting these ideas in weakened forms (either by giving short or invalid argumentation, or putting it in venues or contexts with negative associations), he may be memetically immunizing people against the stronger forms of the ideas.
  • By teaching people using arguments from authority, he may be worsening the primary "sanity waterline" issues rather than improving them. The articles, materials, and comments I've seen make heavy use of language like "science-based", "research-based" and "expert". The people reading these articles in general have little or no skill at evaluating such claims, so that they effectively become arguments from authority. By rhetorically convincing them to adopt the techniques or thoughts, he's spreading quite possibly helpful ideas, but reinforcing bad habits around accepting ideas.
  • Gleb's writing style strikes me as very inauthentic-feeling. Let me be clear: I don't mean to accuse him of anything negative, but I intuitively feel a very negative reaction to his writing. It triggers emotional signals in me of attempted deception and rhetorical tricks (whether or not this is his intent!). His writing risks associating "rationality" with such signals (should other people share my reactions), again causing immunization, or even catalyzing opposition.

An illustration of the nightmare scenario from such an outreach effort would be that, 3 years from now when I attempt to talk to someone about biases, they respond by saying "Oh god don't give me that '6 weird tips' bullshit about 'rational thinking', and spare me your godawful rhetoric, gtfo."

Like I said at the start, I don't know which way it swings, but those are my thoughts and concerns. I imagine they're not new concerns to Gleb. I still have these concerns after reading all of the mitigating argumentation he has offered so far, and I'm not sure of a good way to collect evidence about this besides running absurdly large long-term "consumer" studies.

I do imagine he plans to continue his efforts, and thus we'll find out eventually how this turns out.

I really appreciate you sharing your concerns. It helps me and others involved in the project learn more about what to avoid going forward and how to optimize our methods. Thank you for laying them out so clearly! I think this comment will be something that I will come back to in the future as I and others create content.

I want to see if I can address some of the concerns you expressed.

In my writing for venues like Lifehack, I do not speak of rationality explicitly as something we are promoting. As in this post, I talk about growing mentally stronger or being intentional - euphemisms that do not associate rationality as such with what we're doing. I mention rationality only incidentally, such as when I refer to Rationality Dojo by name. I also generally do not talk of cognitive biases, and use other euphemistic language, such as referring to thinking errors, as in this article for Salon. So this gets at the point about watering down rationality.

I would question the point about arguing from authority. One of the goals of Intentional Insights is to convey what "science-based" itself means. For example, in this article, I specifically discuss research studies as a key way of validating truth claims. Recall that we are all suffering from the curse of knowledge on this point. How can we expect to teach people who do not know what "science-based" means without teaching it to them in the first place? Do you remember when you were at a stage when you did not know the value of scientific studies, and then came to learn about them as a useful way of validating evidence? This is what I'm doing in the article above. Hope this helps address some of the concerns about arguing from authority.

I hear you about the inauthentic-feeling writing style. As I told Lumifer in my comment below, I cringed at that when I was learning how to write that way, too. You can't believe how weird that feels to an academic. My Elephant kicks and screams and tries to throw off my Rider whenever I do that. It's very ughy. This writing style is much more natural for me. So is this.

However, this inauthentic-feeling writing style is the writing style needed to get into Lifehack. I have been trying to change my writing style to get into venues like that for the last year and a half, and only in the last couple of months did I succeed in changing it sufficiently to be published there. Unfortunately, when trying to spread good ideas to the kind of people who read Lifehack, it's necessary to use the language and genre and format that they want to read, and that the editors will publish. Believe me, I also had my struggles with editors there, who cut out more complex points and links to any scientific papers as too complex for their audience.

This gets at the broader point of who reads these articles. I want to quote a comment that Tem42 made in response to Lumifer:

Unless you mean simply the site that it is posted on smells of snake oil. In that case I agree, but at the same time, so what? The people that read articles on that site don't smell snake oil, whether they should or not. If the site provides its own filter for its audience, that only makes it easier for us to present more highly targeted cognitive altruism.

Indeed, the site itself provides a filter. The people who read that site are not like you and me. Don't fall for the typical mind fallacy here. They have complete cognitive ease with this content. They like to read it. They like to share it. This is the stuff they go for. My articles are meant to go higher than their average, such as this or this, conveying both research-based tactics applicable to daily life and frameworks of thinking conducive to moving toward rationality (without using the word, as I mentioned above). Hope this helps address the concerns about the writing style and the immunization of people to good ideas, since the readers of this content are specifically looking for this kind of writing style.

Does this cause any updating in decreasing the likelihood of nightmare scenarios like the one you described?

One idea is to try to teach your audience about overconfidence first, e.g. the way this game does with the calibration questions up front. See also.

I would argue that your first and third points are not very strong.

I think that it is not useful to protect an idea so that it is only presented in its 'cool' form. A lot of harm is done by people presenting good ideas badly, and we don't want to do any active harm, but at the same time, the more ways and the more times that an idea is adequately expressed, the more likely that idea will be remembered and understood.

People who are not used to thinking in strict terms are more likely to be receptive to intuition pumps and frequent reminders of the framework (evidence based everything). Getting people into the right mindset is half the battle.

I do however, agree with your second point, strongly. It is very hard to get people to actually care about evidence, and most people would not click through to formal studies; even fewer would read them. Those who would read them are probably motivated enough to Google for information themselves. But actually checking the evidence is so central to rationality that we should always remind new potential rationalists that claims are based on strong research. If clickbait sites are prone to edit out that sort of reference, we should link to articles that are more reader friendly but do cite (and if possible, link to) supporting studies. This sort of link is triple plus good: it means that the reader can see the idea in another writer's words; it introduces them to a new, less clickbaity site that is likely to be good for future reading; and, of course, it gives access to sources.

I think that one function future articles of this sort should treat as a central goal is to subtly introduce readers to more and better sites for more and better reading. However, the primary goal should remain an intro-level introduction to useful concepts, and intro level means, unfortunately, presenting these ideas in weakened forms.

Agreed with presenting them through intro-level means, so that there is less of an inference gap.

Good idea on subtly introducing readers to more and better sites for further and better reading, updating on this to do so more often in my articles. Thanks!

Thank you for bringing this up as a topic of discussion! I'm really interested to see what the Less Wrong community has to say about this.

Let me be clear that my goal, and that of Intentional Insights as a whole, is about raising the sanity waterline. We do not assume that all who engage with our content will get to the level of being aspiring rationalists who can participate actively on Less Wrong. This is not to say that it doesn't happen, and in fact some members of our audience have already started to do so, such as Ella. Others are right now reading the Sequences and passively lurking without actively engaging.

I want to add a bit more about the Intentional Insights approach to raising the sanity waterline broadly.

The social media channel of raising the sanity waterline is only one area of our work. The goal of that channel is to use the strategies of online marketing and the language of self-improvement to get rationality spread broadly through engaging articles. To be concrete and specific, here is an example of one such article: "6 Science-Based Hacks for Growing Mentally Stronger." BTW, editors are usually the ones who write the headline, so I can't "take the credit" for the click-baity nature of the title in most cases.

Another area of work is publishing op-eds in prominent venues that address recent political matters in a politically-oriented manner. For example, here is an article of this type: "Get Donald Trump out of my brain: The neuroscience that explains why he’s running away with the GOP."

Another area of work is collaborating with other organizations, especially secular ones, to get our content to their audience. For example, here is a workshop we did on helping secular people find purpose using science.

We also give interviews to prominent venues on rationality-informed topics: 1, 2.

Our model works as follows: once people check out our content on other websites and venues, some will then visit the Intentional Insights website to engage with its content. As an example, after the article on 6 Science-Based Hacks for Growing Mentally Stronger appeared, it was shared over 2K times on social media, so it probably had views in the tens of thousands, if not hundreds of thousands. Then, over 1K people visited the Intentional Insights website directly from the Lifehack website. In other words, they were interested enough to not only skim the article, but also follow the links to Intentional Insights, which was listed in my bio. Of those, some will want to engage with our content further. As an example, we had a large wave of new people follow us on Facebook and other social media and subscribe to our newsletter in the week after the article came out. I can't say how many did so as a result of seeing the article versus other factors, but there was a large bump. So there is evidence of people wanting to get more thoroughly engaged.

The articles we put out on other media channels, and on which we collaborate with other groups, are more oriented toward entertainment and less toward education in rationality, although they do convey some rationality ideas. For those who engage more thoroughly with our content, we then provide resources that are more educationally oriented, such as workshop videos, online classes, books, and apps, all described on the "About Us" page. Our content is peer reviewed by our Advisory Board members and others who have expertise in decision-making, social work, education, nonprofit work, and other areas.

Finally, I want to lay out our Theory of Change. This is a standard nonprofit document that describes our goals, our assumptions about the world, what steps we take to accomplish our goals, and how we evaluate our impact. The Executive Summary of our Theory of Change is below, and there is also a link to the draft version of our full ToC at the bottom.

Executive Summary

1) The goal of Intentional Insights is to create a world where all rely on research-based strategies to make wise decisions and lead to mutual flourishing.

2) To achieve this goal, we believe that people need to be motivated to learn and have broadly accessible information about such research-based strategies, and also integrate these strategies into their daily lives through regular practice.

3) We assume that:

  • Some natural and intuitive human thinking, feeling, and behavior patterns are flawed in ways that undermine wise decisions.
  • Problematic decision making undermines mutual flourishing in a number of life areas.
  • These flawed thinking, feeling, and behavior patterns can be improved through effective interventions.
  • We can motivate and teach people to improve their thinking, feeling, and behavior patterns by presenting our content in ways that combine education and entertainment.

4) Our intervention is helping people improve their patterns of thinking, feeling, and behavior to enable them to make wise decisions and bring about mutual flourishing.

5) Our outputs, what we do, come in the form of online content such as blog entries, videos, etc., on our channels and in external publications, as well as collaborations with other organizations.

6) Our metrics of impact are in the form of anecdotal evidence, feedback forms from workshops, and studies we run on our content.

Here is the draft version of our Theory of Change.

Also, about Endless September. After people engage with our content for a while, we introduce them to more advanced things on ClearerThinking, and we are in fact discussing collaborating with Spencer Greenberg, as I discussed in this comment. After that, we introduce them to CFAR and Less Wrong. So those who go through this chain are not the kind who would contribute to Endless September.

We expect the large majority would not go through this chain. They instead engage with rational thinking in other venues, as Viliam mentioned above. This fits with the fact that my goal, and that of Intentional Insights as a whole, is about raising the sanity waterline, and only secondarily about getting people to the level of being aspiring rationalists who can participate actively on Less Wrong.

Well, that's all. I look forward to your thoughts! I'm always looking for better ways to do things, so I'm very happy to update my beliefs about our methods and optimize them based on wise advice :-)

EDIT: Added link to the comment where I discuss our collaboration with Spencer Greenberg's ClearerThinking, and also about members of our audience, such as Ella, engaging with Less Wrong.

it was shared over 2K times on social media, so it probably had views in the tens of thousands, if not hundreds of thousands. Then, over 1K people visited the Intentional Insights website directly from the Lifehack website and elsewhere.

I'm curious: do you use unified software for tracking the impact of articles through the chain?

As for how many times the article itself was shared, Lifehack displays that prominently on their website. Then we use Google Analytics, which tells us how many people visited our website from Lifehack itself. We can't track them further than that. If you have ideas about how to track them further, especially using free software, I'd be interested in learning about that!
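One free option that would extend this by a step: the links placed in an article bio can carry Google Analytics UTM campaign parameters, so GA attributes visits per article (under its campaign reports) rather than lumping everything under the Lifehack referrer. Here is a minimal sketch in Python, assuming only the standard utm_source/utm_medium/utm_campaign convention that Google Analytics recognizes; the specific values are hypothetical placeholders, not anything Intentional Insights actually uses:

```python
from urllib.parse import urlencode

def tag_link(base_url, source, medium, campaign):
    """Append standard Google Analytics UTM parameters to a link.

    Visits arriving through such a link are attributed to the named
    campaign in GA, so each article can be tracked separately
    instead of showing up as generic referral traffic.
    """
    params = urlencode({
        "utm_source": source,      # where the link lives, e.g. "lifehack"
        "utm_medium": medium,      # placement, e.g. "article-bio" (hypothetical)
        "utm_campaign": campaign,  # hypothetical per-article slug
    })
    return base_url + "?" + params

# Hypothetical usage: one distinct link per published article.
print(tag_link("http://intentionalinsights.org",
               "lifehack", "article-bio", "6-science-based-hacks"))
```

This only buys one more hop of attribution (which article drove which on-site visits); following individual readers further than that would take something heavier, such as per-campaign newsletter signup links.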

My main update from this discussion has been a strong positive update about Gleb Tsipursky's character. I've been generally impressed by his ability to stay positive even in the face of criticism, and to continue seeking feedback for improving his approaches.

I just wanted to interject a comment here as someone who is friends with Gleb in meatspace (we're both organizers of the local meetup). In my experience Gleb is kinda spooky in the way he actually updates his behavior and thoughts in response to information. Like, if he is genuinely convinced that the person who is criticizing him is doing so out of a desire to help make the world a more-sane place (a desire he shares) then he'll treat them like a friend instead of a foe. If he thinks that writing at a lower-level than most rationality content is currently written will help make the world a better place, he'll actually go and do it, even if it feels weird or unpleasant to him.

I'm probably biased in that he's my friend. He certainly struggles with it sometimes, and fails too. Critical scrutiny is important, and I'm really glad that Viliam made this thread, but it kinda breaks my heart that this spirit of actually taking ideas seriously has led to Gleb getting as much hate as it has. If he'd done the status-quo thing and stuck to approved activities, it would've been emotionally easier.

(And yes, Gleb, I know that we're not optimizing for warm-fuzzies. It still sucks sometimes.)

Anyway, I guess I just wanted to put in my two (biased) cents that Gleb's a really cool guy, and any appearance of a status-hungry manipulator is just because he's being agent-y towards good ends and willing to get his hands dirty along the way.

Yeah, we're not optimizing for warm-fuzzies from Less Wrongers, but for a broad impact. Thanks for the sympathetic words, my friend.

This road of effective cognitive altruism is a hard one to travel, really appreciated neither by the ones we are trying to reach, at least at first, nor by those among our peers whose ideas we are bringing to the masses.

Well, even if my liver gets consumed daily by vultures, this is the road I've chosen. Glad to have you by my side, and I hope this doesn't rebound on you much.

Thank you, I really appreciate it! I try to stay positive and seek optimizing opportunities :-)

In writing this I considered the virtue of silence, and decided to voice something explicitly.

If rationality is ready for outreach, it should be done in as bulletproof a way as possible.

Before today I hadn't read deeply into the articles published by Gleb. Owing to this comment:

http://lesswrong.com/lw/mze/marketing_rationality/cwki

and

http://lesswrong.com/lw/mz4/link_lifehack_article_promoting_lesswrong/cw8n

I explicitly just read a handful of Gleb's articles. Prior to this I had simply avoided getting in his way (virtue of silence - avoiding reading means avoiding being critical and avoiding judging someone who is trying to make progress).

These here (to be clear):

I don't like any of them. I find the quality of the rationality to be weak; I find the prose to be varying degrees of spider-creepy (although not as bad as OrphanWilde finds things). If I had a button that I could push to make these go away today I would. I would also be disheartened if Gleb stopped trying to do what he is trying to do. (this is a summary of my experiences with these articles. I can break them down but that would take longer to do)

I believe in spreading rationality; I just need the material to pass my bullshit meters, and preferably be right up there as Bulletproof if it can be done. Unfortunately, the process of generating material is literally hard work that I want to not do (for the most part), and I expect other people also want to avoid doing hard work. (I sometimes do hard work, and sometimes find work-arounds for doing it anyway, but it's still hard. If rationality were easy/automatic, more people would already be doing it.)

Hopefully this adds volume to the side of the discussion opposing Gleb's work so far, without sounding like it's attacking...

Something said earlier:

[an article Gleb wrote...] was shared over 2K times on social media, so it probably had views in the tens of thousands if not hundreds. Then, over 1K people visited the Intentional Insights website directly from the Lifehack website.

I wanted to add that this is a pretty low number for clickbait; almost worth considering it "failed clickbait", to me.

If rationality is ready for outreach, it should be done in as bulletproof a way as possible.

Why?

Now that we know that Newtonian physics was wrong, and Einstein was right, would you support my project to build a time machine, travel to the past, and assassinate Newton? I mean, it would prevent incorrect physics from being spread around. It would make Einstein's theory more acceptable later; no one would criticize him for being different from Newton.

Okay, I don't really know how to build a time machine. Maybe we could just go burn some elementary-school textbooks, because they often contain oversimplified information. Sometimes with silly pictures!

Seems to me that I often see the sentiment that we should raise people from some imaginary level 1 directly to level 3, without going through level 2 first, because... well, because level 3 is better than level 2, obviously. And if those people perhaps can't make the jump, I guess they simply were not meant to be helped.

This is why I wrote about "the low-hanging fruit that most rationalists wouldn't even touch for... let's admit it... status reasons". We are (or imagine ourselves to be) at level 3, and all levels below us are equally deplorable. Helping someone else to get on level 3, that's a worthy endeavor. Helping people get from level 1 to level 2, that's just pathetic, because the whole level 2 is pathetic. Even if we could do that at a fraction of the cost.

Maybe that's true when building a superhuman artificial intelligence (better to get it a hundred years later than to get it wrong), but it doesn't apply to most areas of human life. Usually, an improvement is an improvement, even when it's not perfect.

Making all people rationalists could be totally awesome. But making many stupid people slightly less stupid, that's also useful.

Let's start with a false statement from one of Gleb's articles:

Intuitively, we feel our mind to be a cohesive whole, and perceive ourselves as intentional and rational thinkers. Yet cognitive science research shows that in reality, the intentional part of our mind is like a little rider on top of a huge elephant of emotions and intuitions. This is why researchers frequently divide our mental processes into two different systems of dealing with information, the intentional system and the autopilot system.

What's false? Researchers don't use the terms "intentional system" and "autopilot system".

Why is that a problem? Aren't the terms near enough to System 1 and System 2? A person who's interested might want to read additional literature on the subject. The fact that the terms Gleb invented don't match the existing literature means it's harder for a person to go from reading Gleb's articles to reading higher-level material.

If a person digs deeper, they will sooner or later run into trouble. They might have a conversation with a genuine neuroscientist, talk about the "intentional system" and "autopilot system", and find that the neuroscientist hasn't heard the distinction made in those terms. It might take a while until they understand that deception happened, and it might hinder them from progressing.

I think talking about System 1 and System 2 the way Gleb does raises the risk of readers coming away believing that reflective thinking is superior to intuitive thinking. It suggests that rationality is about using System 2 for important issues, instead of about aligning System 1 and System 2 with each other the way CFAR proposes. The stereotype of people who categorically prefer System 2 to System 1 is the straw Vulcan. Level 2 of rationality is not "being a straw Vulcan".

In the article on his website Gleb says:

The intentional system reflects our rational thinking, and centers around the prefrontal cortex, the part of the brain that evolved more recently.

That sounds to me like neurobabble. Kahneman doesn't say that System 2 is about a specific part of the brain. Even if it were completely true, having that knowledge doesn't help a person be more rational. If you want to make a message as simple as possible, you could drop that piece of information without any problem.

Why doesn't he drop it and make the article simpler? Because it helps with pushing an ideology - what other people in this thread have called rationality-as-religion, the kind of rationality that fills someone's sense of belonging to a group.

I don't see people's rationality getting raised in that process. That leads to the question: what are the basics of rationality?

I think the Facebook group sometimes provides a good venue for understanding what new people get wrong. Yesterday one person accused another of being a fake account. I asked the accuser for his credence, but he replied that he can't give a probability for something like that. The accuser didn't think in terms of Cromwell's rule. Making the step from thinking "you are a fake account" to having a mental category of "80% certainty: you are a fake account" is progress. No neuroscience is needed to make that progress.

Rationality for beginners could attempt to teach Cromwell's rule while keeping it as simple as possible. I'm even okay if the term Cromwell's rule doesn't appear. The article can have pretty pictures, but it shouldn't make any false claims.

I admit that "What are the basics of rationality?" isn't an easy question. This community often complicates things. Scott recently wrote "What developmental milestones are you missing?" That article lists four milestones, one of them being Cromwell's rule (Scott doesn't name it).

In my current view of rationality, other basics might be TAPs, noticing, tiny habits, "how not to be a straw Vulcan", and "have conversations with the goal of learning something new yourself, instead of the goal of just affecting the other person".

A good way to search for basics might also be to notice events where you yourself go: "Why doesn't this other person get how the world works? X is obvious to people at LW; why do I have to suffer from living in a world where people don't get X?" I don't think the answer to that question will be that people think the prefrontal cortex is about System 2 thinking.

I address the concerns about the writing style and content in my just-written comment here. Let me know your thoughts about whether that helps address your concerns.

Regarding clickbait and sharing, let's actually evaluate the baseline. I want to highlight that 2K is quite a bit higher than the average for a Lifehack article. A typical article does not rise above 1K, and that's considered pretty good. So my articles have done really well by comparison to other Lifehack articles. Since that's the baseline, I'm pretty happy with where the sharing is.

Why would you be disheartened if I stopped what I was trying to do?

EDIT: Also forgot to add that some of the articles you listed were not written by me but by another aspiring rationalist, so FYI.

I'll talk about marketing, actually, because part of the problem is that, bluntly, most of you are kind of inept in this department. By "kind of" I mean "have no idea what you're talking about but are smarter than marketers and it can't be nearly that complex so you're going to talk about it anyways".

Clickbait has come up a few times. The problem is that that isn't marketing, at least not in the sense that people here seem to think. If you're all for promoting marketing, quit promoting shit marketing because your ego is entangled in complex ways with the idea and you feel you have to defend that clickbait.

GEICO has good marketing, which doesn't sell you on their product at all. Indeed, the most prominent "marketing" element of their marketing - the "Saves you 15% or more" bit - mostly serves to distract you from the real marketing, which utilizes the halo effect, among other things, to get you to feel positively about them. (Name recognition, too.) The best elements of their marketing don't get noticed as marketing, indeed don't get noticed at all.

The issue with this entire conversation is that everybody seems to think marketing is noticed, and uses the examples they notice as examples of good marketing. Those are -terrible- examples, as demonstrated by the fact that you think of them when you think of marketing - and anybody you market to will, too. And then you justify these examples of marketing by relying on an unrealistically low opinion of average people - which many average people share.

Do you think somebody clicking on a "One Weird Trick" tries it out? No, they click on clickbait to see what it says, then move on, which is exactly its goal - be attractive enough to get someone's attention, entertaining enough to keep them interested, and no more. Clickbait doesn't impart anything - its goal isn't to be remembered or to change minds or to sell anything except itself, because its goal is to serve up ads to a steady stream of readers.

And if you click on Clickbait to see what stupid people are being tricked into believing - guess what, you're the "stupid person". You were the target audience, which is anybody they can get to click on their stuff, for any reason at all. The author of "This One Weird Trick" doesn't want to convince you to use it, they want you to add a little bit of traffic to the site, and if they can do that by crafting an article and headline that makes intelligent people want to click to see what gullible morons will buy into, they'll do it.

Clickbait isn't the answer. "Rationalist's One Weird Trick To a Happy Life" isn't the answer - indeed, it's the opposite of the answer, because it's deliberately setting rationality up as a sideshow to sell tickets to so people can laugh at what gullible morons buy into.

Not sure if it makes any difference, but instead of "stupid people" I think of the people reading articles about 'life hacking' as "people who will probably get little benefit from the advice, because they will most likely immediately read a hundred more articles and never apply the advice"; and also the format of the advice completely ignores inferential distances, so pretty much the only useful thing such an article could give you is a link to a place that provides the real value. And if you are really, really lucky, you will notice the link, follow the link, stay there, and get some of the value.

If I believed the readers were literally stupid, then of course I wouldn't see much value in advertising LW to them. LW is not useful for stupid people, but it can be useful to people... uhm... like I used to be before I found LW.

Which means, I used to spend a lot of time browsing random internet pages, a few times I found a link to some LW article that I read and moved on, and only after some time I realized: "Oh, I have already found a few interesting articles on the same website. Maybe instead of randomly browsing the web, reading this one website systematically could be better!" And that was my introduction to the rationalist community; these days I regularly attend LW meetups.

Could Gleb's articles provide the same gateway for someone else (albeit only for a tiny fraction of the readership)? I don't see a reason why not.

Yes, the clickbait site will make money. Okay. If instead someone would make paper flyers for LW, then the printing company would make money.

Indeed, the people who read one of our articles, for example the Lifehack article, are not inherently stupid. They have that urge for self-improvement that all of us here on Less Wrong have. They just have way less education and access to information, and also, of course, different tastes, preferences, and skills. Moreover, the inferential gap is huge, as you correctly note.

The question is what people will do: will they actually follow the links to get more deep engagement? Let's take the Lifehack article as an example of our broader model, which assumes that once people check out our content on other websites and venues, some will then visit the Intentional Insights website to engage with its content. So after the Lifehack article on 6 Science-Based Hacks for Growing Mentally Stronger appeared, it was shared over 2K times on social media, so it probably had views in the tens of thousands, if not hundreds of thousands.

Then, over 1K people visited the Intentional Insights website directly from the Lifehack website. In other words, they were interested enough to not only skim the article, but also follow the links to Intentional Insights, which was listed in my bio and elsewhere. Of those, some will want to engage with our content further. As an example, we had a large wave of new people follow us on Facebook and other social media and subscribe to our newsletter in the week after the article came out. I can't say how many did so as a result of seeing the article versus other factors, but there was a large bump. So there is evidence of people wanting to get more thoroughly engaged.

The articles are meant to provide a gateway, in other words. And there is evidence of people following the breadcrumbs. Eventually, after they receive enough education, we would introduce them to ClearerThinking, CFAR, and LW. We are careful to avoid Endless September scenarios by not explicitly promoting Less Wrong heavily. For more on our strategy, see my comment below.

/writes post

/reads link to The Virtue of Silence

/deletes post

My overall updating from this thread has been:

Learning a lot more about the diversity of opinions and concerns among Less Wrongers.

  • 1) Learning that there are a lot more risk-averse people on LW who are opposed to experimenting with new things, learning from experience, improving going forward, and optimizing the world than I had previously thought.
  • 2) Learning a lot about Less Wrongers' "ew" experiences and flinching away from [modern marketing], despite some getting it.
  • 3) Learning that many Less Wrongers are strongly oriented toward perfectionism and bulletproof arguments at the expense of clarity and bridging inference gaps.
  • 4) Being surprised to see positive updates on my character (1, 2) as a result of this discussion; I will pay more attention to issues of character in the future - I think I previously paid too much attention to content and insufficient attention to character.

Updating toward some different strategies with Intentional Insights:

  • 1) Orienting Intentional Insights content more toward providing breadcrumbs of links to higher-quality materials than what people on Lifehack and The Huffington Post are currently reading.
  • 2) Teaching our audience about the dangers of overconfidence sooner.
  • 3) Taking more concrete steps to minimize the risk of Endless September and of tainting the term "rationality", by decreasing mentions of Less Wrong and rationality in our content.
  • 4) Being more clear and specific in communicating scientific thinking to our audiences.
  • 5) Learning more about The Virtue of Silence, and the need to keep this virtue in mind.
  • 6) Considering more carefully the trade-offs of using and simplifying certain terms and concepts.
  • 7) Updating more toward taking well-considered action despite opposition, and avoiding falling into status-quo bias and information bias.
  • 8) Stopping unproductive conversations sooner.
  • 9) Overall, striving to learn things even from highly negative feedback, and avoiding the instinct to flinch away or swing back. This is my aspiration, and I did not always succeed at it in the course of this discussion. However, I believe this experience will help me grow stronger in this domain.

Thanks all for your participation. As you see, you all taught me something. I appreciate you revealing your mental maps to the extent you chose to do so, and now my territory is clearer. My gratitude to you.

EDIT: Edited for formatting, the bullet points did not come out right away.

Okay, well, it seems like I'm a bit late to the discussion party. Hopefully my opinion is worth something. Heads up: I live in Columbus, Ohio and am one of the organizers of the local LW meetup. I've been friends with Gleb since before he started InIn. I volunteer with Intentional Insights in a bunch of different ways and used to be on the board of directors. I am very likely biased, and while I'm trying to be as fair as possible here, you may want to adjust my opinion in light of the obvious factors.

So yeah. This has been the big question about Intentional Insights for its entire existence. In my head I call it "the purity argument". Should "rationality" try to stay pure by avoiding things like listicles or the phrase "science shows"? Or is it better to create a bridge of content that will move people along the path stochastically even if the content that's nearest them is only marginally better than swill? (<-- That's me trying not to be biased. I don't like everything we've made, but when I'm not trying to counteract my likely biases I do think a lot of it is pretty good.)

Here's my take on it: I don't know. Like query, I don't pretend to be confident one way or the other. I'm not as scared of "horrific long-term negative impact", however. Probably the biggest reason why is that rationality is already tainted! If we back off of the sacred word, I think we can see that the act of improving-how-we-think exists more broadly in academia, self-help, and religion. LessWrong is but a single school (so to speak) of a practice which is at least as old as philosophy.

Now, I think that LW-style rationality is superior to other attempts at flailing at rationality. I think the epistemology here is cleaner than most academic stuff and is at least as helpful as general self-help (again: probably biased; YMMV). But if the fear is that Intentional Insights is going to spoil the broth, I'd say that you should be aware that things like https://www.stephencovey.com/7habits/7habits.php already exist. As Gleb has mentioned elsewhere on the thread, InIn doesn't even use the "rationality" label. I'd argue that the worst thing InIn does to pollute the LW meme-pool is that there are links and references to LW (and plenty of other sources, too).

In other words, I think at worst* InIn is basically just another lame self-help thing that tells people what they want to hear and doesn't actually improve their cognition (a.k.a. the majority of self-help). At best, InIn will out-compete similar things and serve as a funnel which pulls people along the path of rationality, ultimately making the world a nicer, more sane place. Most of my work with InIn has been for personal gain; I'm not a strong believer that it will succeed. What I do think, though, is that there's enough space in the world for the attempt, the goal of raising the sanity waterline is a good one, and rationalists should support the attempt, even if they aren't confident in success, instead of getting swept up in the typical-mind fallacy and ingroup/outgroup and purity biases.

* - Okay, it's not the worst-case scenario. The worst-case scenario is that the presence of InIn aggravates the lords of the matrix into torturing infinite copies of all possible minds for eternity outside of time. :P

(EDIT: If you want more evidence that rationality is already a polluted activity, consider the way in which so many people pattern-match LW as a phyg.)

Do you believe that the "one weird trick to effortlessly lose fat" articles promote healthy eating and are likely to lead people to approach nutrition scientifically?

Beware of other-modeling!

Average Lumifer is most definitely not a good model of the average person. Does "one weird trick" promote improvement? I don't know, but I do know that your gut reaction is not a good model for the answer.