[I apologize if this is considered lower-quality content, but I felt this was too big for an open thread post, and Discussion is currently the next-lowest option.]

Inspired by the recent discussion "Lesswrong, Effective Altruism Forum and Slate Star Codex: Harm Reduction", this thread is intended for brainstorming ideas to improve Less Wrong, and also for including polls with your ideas as an easy way to get an estimate of what people in the community think of them.

Information on putting polls in comments can be found here.  Please be thoughtful about how you word your poll options, and (in the case of multiple-choice polls) include a "no vote" or equivalent option so that people who just want to see the poll results won't be forced into picking an option they don't really support.

Thanks to everyone who shares ideas and everyone who takes the time to think about them and vote!

 


I think you missed a step...

Did we spend 5 minutes holding off on proposing solutions, or define the problem yet?

Problem: less "insight per hour of reading" than years ago.

[gjm]

For whom, reading what?

Someone interested in rationality but not already familiar with the last few decades' research on heuristics and biases, who hasn't spent a long time thinking about the scientific method and how religions spread and so on and so forth, coming to LW in (say) 2008, would have found Eliezer posting his "Sequences", which set out a large quantity of fairly basic material in friendly form. They'd have got plenty of insight per hour of reading.

Someone in the exact same epistemic situation now could come along to LW and ... read the Sequences. They would get more or less the same amount of insight per hour. (Maybe either a bit more or a bit less, because they wouldn't be interacting with the comments in the same way as when originally posted.)

Or they could come along to LW and read whatever's in Main and Discussion at the time. That will probably give them less insight per hour, because LW isn't now full of systematically presented material for beginning-to-intermediate rationalists, because Eliezer very kindly already did that and it's all in the archives.

The only way to deliver that level of insight-per-hour to that reader is to fill LW with the same sort of material as the Sequences, but there probably just isn't all that much beginning-to-intermediate rationalism tutorial stuff that hasn't mostly been covered in the Sequences and other Sequence-ish articles since then (e.g., Yvain's "Diseased thinking" and "Generalizing from one example").

So, if the "problem" we are trying to solve is that current LW content doesn't deliver as much insight to beginning-to-intermediate rationalists as in 2008, then "solving" that problem probably implies filling LW with material that strongly overlaps stuff that's been posted here long before.

Perhaps we should consider a different audience. Existing LW regulars, for instance. (Many of whom were the first kind of reader, back in 2008.) Those readers have read most of the Sequence-ish material, or familiarized themselves with similar ideas by other means. Rehashing the same stuff won't give them much insight per hour. They're familiar with all the standard stuff. What they need is lots of new ideas.

So, if the "problem" we are trying to solve is that current LW content doesn't deliver as much insight to advanced rationalists as LW did to beginning-to-intermediate rationalists in 2008, then "solving" that problem implies delivering new ideas at a rate comparable to the rate at which Eliezer delivered the not-all-so-new ideas in the Sequences back in 2008.

There is scope for debate about just how new those ideas were then. (You can find some in this post and its comments.) But even the ideas that were entirely original to Eliezer were, I think, ones he had had some time before; he was mostly not generating them on the fly. I think it's fairly clear that to deliver the same rate of insights-per-hour to someone who's read everything ever posted on LW it would be necessary to generate novelty faster than Eliezer generated summaries -- and to do so working on more difficult material.

To summarize:

  • For a beginning-to-intermediate rationalist, willing to read old material: Same insight-per-hour is available now as then: read the same things as they'd have read then, all of which are still in the archives.
  • For a beginning-to-intermediate rationalist, reading new material: Same insight-per-hour would require posting a lot of unoriginal material; do we really want that?
  • For someone familiar with most existing LW material: Same insight-per-hour would require generating new research at the same rate at which Eliezer generated summaries.

So it seems to me that "less insight per hour of reading" may not be a problem we should be trying to solve. It's either already solved (if you take the first of those interpretations) or not worth solving (if you take the second) or maybe humanly impossible (if you take the third).

I would say that one of the biggest changes is that there are too many posts like the one I'm responding to. I'm not sure exactly what it is, but I think most others can see it.

The wrong thing to link to is the "typical mind fallacy".

[gjm]

As probably the person with the most to gain from understanding what you think is deplorable about the comment you were replying to (I'm assuming, perhaps wrongly, that you are referring to my comment rather than the original post), I regret that it's not at all clear to me; perhaps my brain just doesn't work as well as those of "most others". Perhaps you might like to give me a clue? Even if you're not sure what you don't like, you must have some idea.

  • Navel-gazing introspection about Less Wrong?
  • Lengthy analysis of something you consider not worth analysing at length?
  • Something you think I got wrong in that analysis?
  • Something you don't like about my comments about the "Sequences"?
  • A writing style that doesn't appeal to you?
  • Something entirely different?

(Responding super-briefly to the first three: I agree that LW has too much navel-gazing, and I mostly talk about other things; I think thinking clearly about things is a useful skill and worth practising even when the objects available for practising on aren't the most interesting imaginable; I may have made errors, but they aren't obvious to me. I don't think I have anything to say about the others without more specific criticism.)

I thought the comment was good and I don't have any idea what SanguineEmpiricist was talking about.

It's hard to explain; I'll edit it in later if I think of a good explanation.

It's just the overly pedantic style, complemented by a lovely personality, and the passive framing. It has to do with the organizational style as well; maybe a bit too spruced up? Don't let me get you down though, I didn't mean it like that.

[gjm]

Well, of course if S.E. is correct that "there are too many posts like the one I'm responding to" then we should expect that other people will like that sort of thing even though s/he doesn't.

(Unsurprisingly, I think my comment was perfectly OK too. Thanks for the expression of support.)

Okay, so I see two solutions here:

1) put a link to the "Rationality" e-book on the LW title page, and maybe also in the sidebar -- for the new users;

2) for the old users -- someone should study something rationality-related, and then post it in a form similar to the Sequences (for me that means: motivating and easy to read), adding their own insights. Easier said than done, but this is approximately what the Sequences were about.

None of this seems related to the typical solutions for "improving Less Wrong", but maybe other people are trying to solve a different problem.

That's unavoidable -- once you've picked all the low-hanging fruit, diminishing returns kick in.

[anonymous]

I recently noticed how active the LessWrong Facebook group is. A lot of websites are becoming more and more dependent on social media. I would suggest having new discussion posts automatically get shared on Facebook and Twitter. This would increase traffic and grow the community. As it is, I would say the site is too dependent on the blogosphere.

No, I don't want 5 different buttons on every article encouraging me to share this. Why the need to shove facetwittergram everywhere?

A lot of websites are becoming more and more dependent on social media.

No reason to encourage that. Being owned by Zuckerberg is not a good thing.

What Lumifer said. If you want to drive away all the users who are firmly anti-FB, by all means integrate the two websites. There's enough connection to Facebook already; no reason to start packing up the internet and moving it there. Would it increase traffic? Yes, but probably not by the criteria that matter.

I would suggest having new discussion posts automatically get shared on Facebook and Twitter.

This would circumvent the whole system of voting on articles. Someone posts a stupid or trolling article and gets immediately downvoted on LW, but people on Facebook and Twitter will have a laugh because it will look like a normal post.

The Sequences are far from the easiest read. I've had several people quit after the first post or two, simply because of that.

Perhaps a rewrite aimed more at the layman would be in order, keeping things short, simple, and readable by the average high schooler?

[anonymous]

I'm new to this discussion, so please excuse me if this point has been discussed before: perhaps it would be helpful to allow LW users to send optional (?) and anonymous messages when downvoting. I think what is inefficient, or even harsh, about downvotes is that you cannot judge (1) whether the downvoter is actually in a position to judge you and (2) what the exact reason for the downvote is. It's a bit like the (sometimes anxiety-inducing) exercise of reading social cues, but in this case even more difficult due to the abstraction to a number.

Edit: Perhaps the downvote could also provide some anonymized information about the downvoter, like karma, comment count, and account creation date.

[anonymous]

I just had another idea which may or may not be worthwhile to look into: how about allowing LW users to remove a downvote that they have received if they see it as unjustified? I'm not entirely sure what all the implications of that would be, but at least people would be able to recover from a negative score after they have corrected their comment, for example.

This is already possible by logging into the Username account and sending a message or reply from there, but we could do something to make it more convenient. Thanks for the idea.

ETA: One possible issue I see with this is that the anonymity might encourage people to be meaner than they would be when posting/messaging under their main account, but perhaps there are ways around this?

[pollid:1000]

Ideas from recent discussion regarding changes to the Promotion system:

[in progress]

Have Promotion be based on some kind of popular vote (not necessarily karma) or some other kind of community decision, instead of an Editor's decision. [pollid:1004]

Allow posts from Discussion to be Promoted without having to first be moved to Main.

Ideas from recent discussion regarding changes to the karma system and/or addition of new sections:

[in progress]

Make downvotes cost some of the downvoter's karma. (h/t RichardKennaway and Houshalter) [pollid:997]

Only allow users with a certain amount of karma to downvote. (The actual amount can be worked out later.) (h/t ete and Houshalter) [pollid:998]

Create a new section, separate from Main and Discussion, with either no karma (like the SSC discussions) or only upvotes (like Tumblr, Facebook, and other social media services used by rationalists). [pollid:999]

Create a new section for links to interesting posts on external rationalist blogs. (h/t Houshalter) [I apologize for the overlap with the previous poll; I found this idea only after I'd made it.] [pollid:1001]

Keep the same upvote/downvote system for individual comments and posts, but don't keep a total karma score for each user on their user page. Alternatively, keep a total karma score, but don't keep a total percent-positive score. (I believe the EA forum uses the former system, and Reddit the latter, but please correct me if I'm wrong about this.) [pollid:1006]

Create a new section for links to interesting posts on external rationalist blogs

There already is 'recently on rationality blogs'

True, but that's only from a very limited number of sources (~4?); it doesn't include the dozens of smaller blogs. It's also a straight feed--no filtering out of housekeeping, meta posts, etc.--and it only shows 5 links, which are quickly pushed aside by newer ones, while a section for links would keep all of them accessible and searchable.

I'd be interested to see data on how often posts with this topic (LW improvement) have been posted and what has been implemented.

It would also be cool to see some traffic and activity data visualizations for LW, LW's FB, CFAR, SSC, Overcoming Bias, and EY's Harry Potter fanfic over the last ~5 years. That would be even cooler, and would make me forget about the first thing I was interested in.

what has been implemented.

Well, this part is easy. ;)

Can we ban AI/EA/commonly recurring topics that show up too often? I think there are more than a few posts about AI in Discussion currently. I realize this probably isn't going to happen, but I'd still like to raise the point that it can get a little repetitive, especially if you don't take part in those discussions.

I'd like the LW Women series to resume. It was a nice initiative, although in practice many comments (and some posts, to be fair) weren't really interesting to read. The only problems I see with this are that 1. I'm not aware if there is a seizable amount of women here, and 2. the bottom line looks like there isn't going to be anything significantly different between men and women anyway.

The LW Women series had a tendency to rapidly devolve into a fight between the worst elements of feminism and masculinism. Also, the whole "post inflammatory, frequently misandrist things and demand non-judgment from the community" thing was a recipe guaranteed to fail.

It was interesting, to be certain, but it also gutted any respect I had for the karma system, which was used primarily as a weapon against ideological enemies. The majority of karma points I have came from that debacle, so I don't say that as somebody bitter about being downvoted; rather, I watched my participation in an inane and pointless debate "reward" me far in excess of anything meaningful I had attempted to contribute before.

The whole "LW Women" was simply a project to claim LW as another territory for a specific tribe. It used dishonest methods of argumentation, and as soon as it failed the author was no longer interested in rationalist community -- which makes me suspect they were never interested in the community in the first place, other than as another place to conquer for their tribe.

I consider it one of two historical attempts by another tribe to conquer Less Wrong for itself; the other one was the mass-downvoting campaign by Eugine Nier. In both cases, I try not to blame all people from the given tribe, but I do not ignore the fact that those tribes can be dangerous to LW. Whether by public shaming or by secretly exploiting the rules, in both cases the goal was to remove their perceived ideological opponents from LW.

The unfortunate consequence is that both situations made part of the debate taboo. We haven't debated gender balance since then, if I remember correctly. We tried to have a rational debate on politics a few times, but it was always immediately used as a platform to promote one specific fringe group, which tries very hard to associate itself publicly with LW despite being only a tiny minority here.

I would like to have a debate about gender balance on LW -- and let's be honest, it's not just about LW; if you go to any atheist conference or AI conference, you will find a similar imbalance -- but the debate itself should be rational. By which I mean:

  • People speak for themselves, or link to solid evidence, or say "I am not sure, but it seems to me that maybe ...". As opposed to e.g. speaking in the name of all women not participating on LW because obviously they must have exactly the same opinion as you do.

  • Avoiding blatantly manipulative techniques, such as using filtered evidence to prove your point; e.g. by asking women about their experience, but not asking men whether they had similar experience; or pressure like "nice website you have here, would be a shame if someone accused you of sexism".

as soon as it failed, the author was no longer interested in the rationalist community -- which makes me suspect they were never interested in the community in the first place, other than as another place to conquer for their tribe.

This is not my read of Daenerys, having met her in person, or at least it's a very slanted presentation of the same underlying expectations.

As opposed to e.g. speaking in the name of all women not participating on LW because obviously they must have exactly the same opinion as you do.

This in particular seems ridiculously uncharitable. From the call for experiences starting off the LW Women project:

When these gender discussions come up, I am often tempted to write in with my own experiences and desires. But I generally don't because I don't want to generalize from one example, or claim to be the Voice of Women, etc.

What was so bad about it, anyway? My most salient participation in those threads was a disappointed comment on how easily LWers are baited into controversy, contrary to my expectations at the time. A comment count in the triple or quadruple digits, on a website that purportedly taught its users how to overcome bias, stank to me of loads and loads of pent-up "strong opinions" just waiting to be shared.

But that's a comment about the users, not about the topic, or the intent behind it. It didn't read as a feminist crusade, except to the extent that you can call inviting women to speak about gender issues, under the protection of anonymity, a feminist crusade. (Some women's strategy on male-dominated, questionably female-friendly websites is to declare themselves male and to shut up about feminism, to remove their gender as an avenue for further attack. This preserves their personal peace of mind, the collective peace of the community, and it preserves the social norms that made them feel uncomfortable in the first place. If you can't beat 'em, join 'em. I wouldn't be surprised if this was going on on LW, but then again it might not, and the only way to know is to ask.)

I went to check those threads, by the way. Yes, users were drawn like flies and the discussions were heated and low-quality (although I've learned in the meantime not to expect much), but on the plus side (though the author might disagree about that) they weren't echo-chambers. There was some positive feedback and some negative feedback about the initiative. Insofar as it endeavoured to make LW more feminist, it made both feminist and anti-feminist narratives more prominent. And if I remember correctly... daenerys was one of the users who complained about mass downvotes from Eugine_Nier, and that's why she left and deleted her account. So basically, The LessWrong Culture Wars: An Eternal Golden Braid...

Well, the primary issue was that the entire exercise began with a process which, by its nature, over-represented crusaders with an axe to grind. The voice of dissent (in the "Female privilege" post) was accused of being a fake, even as other women chimed in in the thread to say that it was the first "LW Women" post they identified with at all.

It wasn't about women. If it had been about women, they wouldn't have felt so omitted and under- or un-represented. It was about feminism.

Can we ban AI/EA/commonly recurring topics that show up too often?

Why would you ban the main topics where deep discussion happens?

Banning specific topics seems an overblown response to their (arguably excessive) prevalence. You're free not to participate in them. Besides, at least half the higher-ups here work on AI-related matters, give to & endorse effective altruist causes, or both, so those specific topics are not going to be verboten any time soon. (I'm aware that there might have been a moratorium on a specific topic some time in the earlier history of the website, but I don't think I know which topic that was. Might have been FAI-related.)

I'm not aware if there is a seizable amount of women here

seizable

Oh man. My sides...

Honest mistake? Accidental sexism? Freudian slip? My money is on the first, as hilarious as the other two hypotheses are; nonetheless, be more careful with spelling next time.

be more careful with spelling next time.

He must seize and desist!

X-)

Why check your privilege when you can check your spelling?

(It's so tempting to continue the pun train, but it seizes to be fun after the first few posts.)