On the importance of Less Wrong, or another single conversational locus

Epistemic status: My actual best bet.  But I used to think differently; and I don't know how to fully explicate the updating I did (I'm not sure what fully formed argument I could give my past self, that would cause her to update), so you should probably be somewhat suspicious of this until explicated.  And/or you should help me explicate it.

It seems to me that:
  1. The world is locked right now in a deadly puzzle, and needs something like a miracle of good thought if it is to have the survival odds one might wish the world to have.

  2. Despite all priors and appearances, our little community (the "aspiring rationality" community; the "effective altruist" project; efforts to create an existential win; etc.) has a shot at seriously helping with this puzzle.  This sounds like hubris, but it is at this point at least partially a matter of track record.[1]

  3. To aid in solving this puzzle, we must probably find a way to think together, accumulatively. We need to think about technical problems in AI safety, but also about the full surrounding context -- everything to do with understanding what the heck kind of a place the world is, such that that kind of place may contain cheat codes and trap doors toward achieving an existential win. We probably also need to think about "ways of thinking" -- both the individual thinking skills, and the community conversational norms, that can cause our puzzle-solving to work better. [2]

  4. One feature that is pretty helpful here, is if we somehow maintain a single "conversation", rather than a bunch of people separately having thoughts and sometimes taking inspiration from one another.  By "a conversation", I mean a space where people can e.g. reply to one another; rely on shared jargon/shorthand/concepts; build on arguments that have been established in common as probably-valid; and point out apparent errors and then have that pointing-out be actually taken into account or else replied-to.

  5. One feature that really helps things be "a conversation" in this way, is if there is a single Schelling set of posts/etc. that people (in the relevant community/conversation) are supposed to read, and can be assumed to have read.  Less Wrong used to be such a place; right now there is no such place; it seems to me highly desirable to form a new such place if we can.

  6. We have lately ceased to have a "single conversation" in this way.  Good content is still being produced across these communities, but there is no single locus of conversation, such that if you're in a gathering of e.g. five aspiring rationalists, you can take for granted that of course everyone has read posts such-and-such.  There is no one place you can post to, where, if enough people upvote your writing, people will reliably read and respond (rather than ignore), and where others will call them out if they later post reasoning that ignores your evidence.  Without such a locus, it is hard for conversation to build in the correct way.  (And hard for it to turn into arguments and replies, rather than a series of non sequiturs.)


It seems to me, moreover, that Less Wrong used to be such a locus, and that it is worth seeing whether Less Wrong or some similar such place[3] may be a viable locus again.  I will try to post and comment here more often, at least for a while, while we see if we can get this going.  Sarah Constantin, Ben Hoffman, Valentine Smith, and various others have recently mentioned planning to do the same.

I suspect that most of the value generation from having a single shared conversational locus is not captured by the individual generating the value (I suspect there is much distributed value from having "a conversation" with better structural integrity / more coherence, but that the value created thereby is pretty distributed).  Insofar as there are "externalized benefits" to be had by blogging/commenting/reading from a common platform, it may make sense to regard oneself as exercising civic virtue by doing so, and to deliberately do so as one of the uses of one's "make the world better" effort.  (At least if we can build up toward in fact having a single locus.)

If you believe this is so, I invite you to join with us.  (And if you believe it isn't so, I invite you to explain why, and to thereby help explicate a shared body of arguments as to how to actually think usefully in common!)



[1] By track record, I have in mind most obviously that AI risk is now relatively credible and mainstream, and that this seems to have been due largely to (the direct + indirect effects of) Eliezer, Nick Bostrom, and others who were poking around the general aspiring rationality and effective altruist space in 2008 or so, with significant help from the extended communities that eventually grew up around this space.  More controversially, it seems to me that this set of people has probably (though not indubitably) helped with locating specific angles of traction around these problems that are worth pursuing; with locating other angles on existential risk; and with locating techniques for forecasting/prediction (e.g., there seems to be similarity between the techniques already being practiced in this community, and those Philip Tetlock documented as working).

[2] Again, it may seem somewhat hubristic to claim that a relatively small community can usefully add to the world's analysis across a broad array of topics (such as the summed topics that bear on "How do we create an existential win?").  But it is generally smallish groups (rather than widely dispersed millions of people) that can actually bring analysis together; history has often involved relatively small intellectual circles that make concerted progress; and even if things are already known that bear on how to create an existential win, one must probably still combine and synthesize that understanding into a smallish set of people that can apply the understanding to AI (or what have you).

It seems worth a serious try to see if we can become (or continue to be) such an intellectually generative circle; and it seems worth asking what institutions (such as a shared blogging platform) may increase our success odds.

[3]  I am curious whether Arbital may become useful in this way; making conversation and debate work well seems to be near their central mission.  The Effective Altruism Forum is another plausible candidate, but I find myself substantially more excited about Less Wrong in this regard; it seems to me one must be free to speak about a broad array of topics to succeed, and this feels easier to do here.  The presence and easy linkability of Eliezer's Less Wrong Sequences also seems like an advantage of LW.

Thanks to Michael Arc (formerly Michael Vassar) and Davis Kingsley for pushing this/related points in conversation.

Hi Anna,

Please consider a few gremlins that are weighing down LW currently:

  1. Eliezer's ghost -- He set the culture of the place, his posts are central material, he punctuated its existence with his explosions (and refusal to apologise), and then upped and left the community without actually acknowledging that his experiment (well-kept gardens etc.) had failed. As far as I know he is still the "owner" of this website, retains ultimate veto on a bunch of stuff, etc. If that has changed, there is no clarity on who the owner is (I see three logos on the top banner -- is it them?), who the moderators are, or who is working on it in general. I know Trike are helping with development, but a part-time team is only marginally better than no team, and at least no team is an invitation for a team to step up.

  2. the no-politics rule (related to #1) -- We claim to have some of the sharpest thinkers in the world, but for some reason shun discussing politics. Too difficult, we're told. A mindkiller! This cost us Yvain/Scott, who cited it as one of his reasons for starting Slate Star Codex, which now dwarfs LW. Oddly enough, I recently saw it linked from the front page of realclearpolitics.com, which means that not only has discussing politics not harmed SSC, it may actually be drawing in people who care about genuine insight into an extremely complex, high-interest space.

  3. the "original content"/central hub approach (related to #1) -- This should have been an aggregator since day 1. Instead it was built as a "community blog". In other words, people had to host their stuff here or not have it discussed here at all. This cost us Robin Hanson on day 1, which should have been a pretty big warning sign.

  4. The codebase -- this website carries tons of complexity inherited from the reddit codebase. Weird rules about responding to downvoted comments have been implemented in there, and nobody can make heads or tails of it. Use something modern, and make it easy to contribute to. (Telescope seems decent these days.)

  5. Brand rust. Less Wrong is now kinda like MySpace or Yahoo. It used to be cool, but once a brand takes a turn for the worse, it's really hard to turn around. People have painful associations with it (basilisk!). It needs a burning of ships, a clear focus on the future, and as much support as possible from as many interested parties as possible, but only to the extent that they don't dilute the focus.

In the spirit of the above, I consider Alexei's hints that Arbital is "working on something" to be a really bad idea, though I recognise the good intention. Efforts like this need critical mass and clarity, and diffusing yet another wave of people wanting to do something about LW with vague promises of something nice in the future (that still suffers from problem #1 AFAICT) is exactly what I would do if I wanted to maintain the status quo for a few more years.

Any serious attempt at revitalising lesswrong.com should focus on defining ownership and a clear plan. A post by EY himself recognising that his vision for LW 1.0 failed and passing the baton to a generally accepted BDFL would be nice, but I'm not holding my breath. Further, I am fairly certain that LW as a community blog is bound to fail; strong writers enjoy their independence. LW as an aggregator first (with perhaps the ability to host content if people wish to, like HN) is fine. HN may have degraded over time, but much less so than LW, and we should be able to improve on their pattern.

I think if you want to unify the community, what needs to be done is the creation of an HN-style aggregator, with a clear, accepted, willing, opinionated, involved BDFL, input from the prominent writers in the community (Scott, Robin, Eliezer, Nick Bostrom, others), and for the current lesswrong.com to be archived in favour of that new aggregator. But even if it's something else, it will not succeed without the three basic ingredients: clear ownership, dedicated leadership, and as broad support as possible for a simple, well-articulated vision. Less Wrong tried to be too many things with too little in the way of backing.

Re: 1, I vote for Vaniver as LW's BDFL, with authority to decree community norms (re: politics or anything else), decide on changes for the site; conduct fundraisers on behalf of the site; etc. (He already has the technical admin powers, and has been playing some of this role in a low key way; but I suspect he's been deferring a lot to other parties who spend little time on LW, and that an authorized sole dictatorship might be better.)

Anyone want to join me in this, or else make a counterproposal?

Agree with both the sole dictatorship and Vaniver as the BDFL, assuming he's up for it. His posts here also show a strong understanding of the problems affecting Less Wrong on multiple fronts.

Who is empowered to set Vaniver or anyone else as the BDFL of the site? It would be great to get into a discussion of "who", but I wonder how much weight there will be behind this person. Where would the BDFL's authority emanate from? Would he be granted, for instance, ownership of the lesswrong.com domain? That would be a sufficient gesture.

I'm empowered to hunt down the relevant people and start conversations about it that are themselves empowered to make the shift. (E.g. to talk to Nate/Eliezer/MIRI, and Matt Fallshaw who runs Trike Apps.)

I like the idea of granting domain ownership if we in fact go down the BDFL route.

An additional point is that you can only grant the DFL part. The B part cannot be granted, only hoped for.

I'm concerned that we're only voting for Vaniver because he's well known, but I'll throw in a tentative vote for him.

Who are our other options?

I'll second the suggestion that we should consider other options. While I know Vaniver personally and believe he would do an excellent job, I think Vaniver would agree that considering other candidates too would be a wise choice. (Narrow framing is one of the "villains" of decision making in a book on decision making he suggested to me, Decisive.) Plus, I scanned this thread and I haven't seen Vaniver say he is okay with such a role.

I think Vaniver would agree that considering other candidates too would be a wise choice.

I do agree; one of the reasons why I haven't accepted yet is to give other people time to see this, think about it, and come up with other options.

(I considered setting up a way for people to anonymously suggest others, but ended up thinking that it would be difficult to find a way to make it credibly anonymous if I were the person that set it up, and username2 already exists.)

I'm concerned that we're only voting for Vaniver because he's well known

Also because he already is a moderator (one of a few moderators), so he already was trusted with some power, and here we are just saying that it seems okay to give him more powers. And because he already did some useful things while moderating.

Do we know anyone who actually has experience doing product management? (Or has the sort of resume that the best companies like to see when they hire for product management roles. Which is not necessarily what you might expect.)

I've done my fair bit of product management, mostly on resin.io and related projects (etcher.io and resinos.io) and can offer some help in re-imagining the vision behind lw.

I do. I was a product manager for about a year, then founder for a while, and am now manager for a data science team, where part of my responsibilities are basically product management for the things related to the team.

That said, I don't think I was great at it, and suspect most of the lessons I learned are easily transferred.

Edit: I actually suspect that I've learned more from working with really good product managers than I have from doing any part of the job myself. It really seems to be a job where experience is relatively unimportant, but a certain set of general cognitive patterns is extremely important.

It would be good to know what he thinks the direction of LW should be, but I would really like to see a new BDFL.

I agree, assuming that "technical admin powers" really include access to everything he might need for his work (database, code, logs, whatever).

I concur with placing Vaniver in charge. Mainly, we need a leader and a decision maker empowered to execute on suggestions.

On the idea of a vision for a future, if I were starting a site from scratch, I would love to see it focus on something like "discussions on any topic, but with extremely high intellectual standards". Some ideas:

  • In addition to allowing self-posts, a major type of post would be a link to a piece of content with an initial seed for discussion
  • Refine upvotes/downvotes to make it easier to provide commentary on a post, e.g. "agree with the conclusion but disagree with the argument", or "accurate points, but ad-hominem tone".
  • A fairly strict and clearly stated set of site norms, with regular updates, and a process for proposing changes
  • Site erring on the side of being over-opinionated. It doesn't necessarily need to be the community hub
  • Votes from highly-voted users count for more.
  • Integration with predictionbook or something similar, to show a user's track record in addition to upvotes/downvotes. Emphasis on getting many people to vote on the same set of standardized predictions
  • A very strong bent on applications of rationality/clear thought, as opposed to a focus on rationality itself. I would love to see more posts on "here is how I solved a problem I or other people were struggling with"
  • No main/discussion split. There are probably other divisions that make sense (e.g. by topic), but this mostly causes a lot of confusion
  • Better notifications around new posts, or new comments in a thread. Eg I usually want to see all replies to a comment I've made, not just the top level
  • Built-in argument mapping tools for comments
  • Shadowbanning, a la Hacker News
  • Initially restricted growth, e.g. by invitation only
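The vote-weighting item in the list above ("votes from highly-voted users count for more") admits many designs; a minimal sketch, assuming weight grows with the log of the voter's karma (the scaling function and names here are hypothetical illustrations, not a proposal's spec):

```python
import math

def vote_weight(karma: int) -> float:
    """Weight a vote by the log of the voter's karma.

    The log keeps high-karma users influential without letting them
    dominate outright; the +1 terms handle zero-karma accounts.
    """
    return 1.0 + math.log10(1 + max(karma, 0))

def post_score(votes):
    """Sum weighted votes; each vote is (direction, voter_karma)."""
    return sum(direction * vote_weight(karma) for direction, karma in votes)

# A +1 from a 10,000-karma user outweighs a -1 from a fresh account,
# but only by a factor of ~5, not by three orders of magnitude.
score = post_score([(+1, 10_000), (-1, 0), (+1, 50)])
```

The log scaling is one way to trade off "trusted users matter more" against "no user's vote is unanswerable"; a cap or a square root would serve the same purpose.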

"Refine upvotes/downvotes to make it easier to provide commentary on a post, e.g. "agree with the conclusion but disagree with the argument", or "accurate points, but ad-hominem tone"." - this seems complex and better done via a comment

For the Russian LessWrong slack chat we agreed on the following emoji semantics:

  • :+1: means "I want to see more messages like this"
  • :-1: means "I want to see fewer messages like this"
  • :plus: means "I agree with a position expressed here"
  • :minus: means "I disagree"
  • :same: means "it's the same for me" and is used for impressions, subjective experiences and preferences, but without approval connotations
  • :delta: means "I have changed my mind/updated"

We also have 25 custom :fallacy_*: emoji for pointing out fallacies, and a few other custom emoji for other low-effort, low-noise signaling.

It all works quite well and after using it for a few months the idea of going back to simple upvotes/downvotes feels like a significant regression.

Some sort of emoticon could work, like what Facebook does.

Personally, I find the lack of feedback from an upvote or downvote to be discouraging. I understand that many people don't want to take the time to provide a quick comment, but personally I think that's silly as a 10 second comment could help a lot in many cases. If there is a possibility for a 1 second feedback method to allow a little more information than up or down, I think it's worth trying.

I'm reminded of Slashdot. Not that you necessarily want to copy that, but that's some preexisting work in that direction.

Integration with predictionbook or something similar, to show a user's track record in addition to upvotes/downvotes. Emphasis on getting many people to vote on the same set of standardized predictions

This would be a top recommendation of mine as well. There are quite a few prediction tracking websites now: PredictionBook, Metaculus, and Good Judgement Open come to mind immediately, and that's not considering the various prediction markets too.

I've started writing a command line prediction tracker which will integrate with these sites and some others (eventually, at least). PredictionBook and Metaculus both seem to have APIs which would make the integration rather easy, so integration with LessWrong should not be particularly difficult. (The Metaculus API is not documented, as best I can tell, but by snooping around the code you can figure things out...)
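Whatever the fetch layer looks like, the track-record display such an integration would drive can be quite simple, e.g. a Brier score over a user's resolved predictions. A sketch, assuming a hypothetical record format of (stated probability, outcome) pairs standing in for whatever the PredictionBook/Metaculus APIs actually return:

```python
def brier_score(predictions):
    """Mean squared error between stated probabilities and outcomes.

    Each prediction is (p, outcome), with outcome 1 if the event
    happened and 0 if it didn't. 0.0 is a perfect score; 0.25 is
    what always answering "50%" earns.
    """
    if not predictions:
        raise ValueError("no resolved predictions to score")
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

# A reasonably calibrated forecaster: confident and usually right.
record = [(0.9, 1), (0.8, 1), (0.3, 0), (0.6, 1)]
score = brier_score(record)  # lower is better
```

A number like this, shown next to a user's karma, is the kind of standardized signal the parent comment is asking for.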

I think you're right that wherever we go next needs to be a clear schelling point. But I disagree on some details.

  1. I do think it's important to have someone clearly "running the place". A BDFL, if you like.

  2. Please no. The comments on SSC are for me a case study in exactly why we don't want to discuss politics.

  3. Something like reddit/hn involving humans posting links seems ok. Such a thing would still be subject to moderation. "Auto-aggregation" would be bad however.

  4. Sure. But if you want to replace the karma system, be sure to replace it with something better, not worse. SatvikBeri's suggestions below seem reasonable. The focus should be on maintaining high standards and certainly not encouraging growth in new users at any cost.

  5. I don't believe that the basilisk is the primary reason for LW's brand rust. As I see it, we squandered our "capital outlay" of readers interested in actually learning rationality (which we obtained due to the site initially being nothing but the sequences) by doing essentially nothing about a large influx of new users interested only in "debating philosophy" who do not even read the sequences (Eternal November). I, personally, have almost completely stopped commenting since quite a while, because doing so is no longer rewarding.

doing essentially nothing about a large influx of new users interested only in "debating philosophy" who do not even read the sequences (Eternal November).

This is important. One of the great things about LW is/was the "LW consensus", so that we don't constantly have to spend time rehashing the basics. (I dunno that I agree with everything in the "LW consensus", but then, I don't think anyone entirely did except Eliezer himself. When I say "the basics", I mean, I guess, a more universally agreed-on stripped down core of it.) Someone shows up saying "But what if nothing is real?", we don't have to debate them. That's the sort of thing it's useful to just downvote (or otherwise discourage, if we're making a new system), no matter how nicely it may be said, because no productive discussion can come of it. People complained about how people would say "read the sequences", but seriously, it saved a lot of trouble.

There were occasional interesting and original objections to the basics. I can't find it now but there was an interesting series of posts responding to this post of mine on Savage's theorem; this response argued for the proposition that no, we shouldn't use probability (something that others had often asserted, but with much less reason). It is indeed possible to come up with intelligent objections to what we consider the basics here. But most of the objections that came up were just unoriginal and uninformed, and could, in fact, correctly be answered with "read the sequences".

That's the sort of thing it's useful to just downvote (or otherwise discourage, if we're making a new system), no matter how nicely it may be said, because no productive discussion can come of it.

When it's useful it's useful; when it's damaging, it's damaging. It's damaging when the sequences don't actually solve the problem. The outside view is that all too often one is directed to the sequences only to find that the selfsame objection one has made has also been made in the comments and has not been answered. It's just too easy to silently downvote, or write "read the sequences". In an alternative universe there is a LW where people don't say RTFS unless they have carefully checked that the problem has really been resolved, rather than superficially pattern-matching. And the overuse of RTFS is precisely what feeds the impression that LW is a cult... that's where the damage is coming from.

Unfortunately, although all of that is fixable, it cannot be fixed without "debating philosophy".

ETA

Most of the suggestions here have been about changing the social organisation of LW, or changing the technology. There is a third option which is much bolder than either of those: redoing rationality. Treat the sequences as a version 0.0 in need of improvement. That's a big project which will provide focus, and send a costly signal of anti-cultishness, because cults don't revise doctrine.

Good point. I actually think this can be fixed with software. StackExchange features are part of the answer.

I think the basilisk is at least a very significant contributor to LW's brand rust. In fact, guilt by association with the basilisk via LW is the reason I don't like to tell people I went to a CFAR workshop (because rationality -> "those basilisk people, right?")

"debating philosophy

As opposed to what? Memorising the One true Philosophy?

As opposed to what? Memorising the One true Philosophy?

The quotes signify that they're using that specifically as a label; in context, it looks like they're pointing to the failure mode of preferring arguments as verbal performance to arguments as issue resolution mechanism. There's a sort of philosophy that wants to endlessly hash out the big questions, and there's another sort of philosophy that wants to reduce them to empirical tests and formal models, and we lean towards the second sort of philosophy.

How many problems has the second sort solved?

Have you considered that there may be a lot of endless hashing out, not because some people have a preference for it, but because the problems are genuinely difficult?

How many problems has the second sort solved?

Too many for me to quickly count?

Have you considered that there may be a lot of endless hashing out, not because some people have a preference for it, but because the problems are genuinely difficult?

Yes. It seems to me that both of those factors drive discussions, and most conversations about philosophical problems can be easily classified as mostly driven by one or the other, and that it makes sense to separate out conversations where the difficulty is natural or manufactured.

I think a fairly large part of the difference between LWers and similarly intelligent people elsewhere is the sense that it is possible to differentiate conversations based on the underlying factors, and that it isn't always useful to manufacture difficulty as an opportunity to display intelligence.

What I have in mind there is basically 'approaching philosophy like a scientist', and so under some views you could chalk up most scientific discoveries there. But focusing on things that seem more 'philosophical' than not:

How to determine causality from observational data; where the perception that humans have free will comes from; where human moral intuitions come from.

Approaching philosophy as science is not new. It has had a few spectacular successes, such as the wholesale transfer of cosmology from philosophy to science, and a lot of failures, judging by the long list of unanswered philosophical questions (about 200, according to Wikipedia). It also has the special pitfall of philosophically uninformed scientists answering the wrong question:

How to determine causality from observational data;

What causality is is the correct question.

where the perception that humans have free will comes from;

Whether humans have the power of free will is the correct question.

where human moral intuitions come from.

Whether human moral intuitions are correct is the correct question.

What causality is is the correct question/.

Oh, if you count that one as a question, then let's call that one solved too.

Whether humans have the power of free will is the correct question.

Disagree; I think this is what it looks like to get the question of where the perception comes from wrong.

Whether human moral intuitions are correct is the correct question.

Disagree for roughly the same reason; the question of where the word "correct" comes from in this statement seems like the actual query, and is part of the broader question of where human moral intuitions come from.

Off the top of my head: Fermat's Last Theorem, whether slavery is licit in the United States of America, and the origin of species.

I am working on a project with this purpose, and I think you will find it interesting:

http://metamind.pro

It is intended to be a community for intelligent discussion about rationality and related subjects. It is still a beta version, and has not launched yet, but after seeing this topic, I have decided to share it with you now.

It is based on the open source platform that I'm building:

https://github.com/raymestalez/nexus

This platform will address most of the issues discussed in this thread. It can be used both like a publishing/discussion platform, and as a link aggregator, because it supports both twitter-like discussion, reddit-like communities, and medium-like long form articles.

This platform is in active development, and I'm very interested in your feedback. If the LessWrong community needs any specific functionality that is not implemented yet, I will be happy to add it. Let me know what you think!

On (4), does anyone have a sense of how much it would cost to improve the code base? Eg would it be approximately $1k, $10k, or $100k (or more)? Wondering if it makes sense to try and raise funds and/or recruit volunteers to do this.

I think a good estimate is close to $10k. Expect to pay about $100/hr for developer time, and something like 100 hours of work to get from where we are to where we want to be doesn't seem like a crazy estimate. Historically, the trouble has been finding people willing to do the work, not the money to fund people willing to do the work.

If you can find volunteers who want to do this, we would love code contributions, and you can point them towards here to see what needs to be worked on.

I think you are underestimating this, and a better estimate is "$100k or more". With an emphasis on the "or more" part.

Historically, the trouble has been finding people willing to do the work, not the money to fund people willing to do the work.

Having "trouble to find people willing to do the work" usually means you are not paying enough to solve the problem. Market price, by definition, is a price at which you can actually buy a product or service, not a price that seems like it should be enough but you just can't find anyone able and/or willing to accept the deal.

The problem with volunteers is that the LW codebase needs too much highly specialized knowledge. You need Python and Ruby just to get a chance, and then you must study code which was optimized for performance and backwards compatibility at the expense of legibility and extensibility. (Database-in-the-database antipattern; values precomputed and cached everywhere.) Most professional programmers are simply unable to contribute without spending a lot of time studying something they will never use again. For a person who has the necessary skills, $10k is about their monthly salary (if you include taxes), and one month feels like too short a time to understand the mess of the Reddit code and implement everything that needs to be done. And the next time you need another upgrade, if the same person isn't available, you need another person to spend the same time understanding the Reddit code.

I believe in long term it would be better to rewrite the code from scratch, but that's definitely going to take more than one month.

Having "trouble to find people willing to do the work" usually means you are not paying enough to solve the problem.

I had difficulties finding people without mentioning a price; I'm pretty sure the defect was in where and how I was looking for people.

I also agree that it makes more sense to have a small number of programmers make extensive changes, rather than having a large number of people become familiar with how to deal with LW's code.

I believe in long term it would be better to rewrite the code from scratch, but that's definitely going to take more than one month.

I will point out there's no strong opposition to replacing the current LW codebase with something different, so long as we can transfer over all the old posts without breaking any links. The main reason we haven't been approaching it that way is that it's harder to make small moves and test their results; either you switch over, or you don't, and no potential replacement was obviously superior.

I'm new and came here from Sarah Constantin's blog. I'd like to build a new infrastructure for LW, from scratch. I'm in a somewhat unique position to do so because I'm (1) currently searching for an open source project to do, and (2) taking a few months off before starting my next job, granting the bandwidth to contribute significantly to this project. As it stands right now, I can commit to working full time on this project for the next three months. At that point, I will continue to work on the project part time and it will be robust enough to be used in an alpha or beta state, and attract devs to contribute to further development.

Here is how I envision the basic architecture of this project:

  1. A server that manages all business logic (i.e. posting, moderation, analytics) and interfaces with the frontend (2) and database (3).
  2. A standalone, modular frontend (probably built with React, maybe reusing components provided by Telescope) that is modern, beautiful, and easily extensible/composable from a dev perspective.
  3. A database, possibly NoSQL given the nature of the data that needs to be stored (posts, comments, etc.). Security is the first concern; all others are predicated on it.
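A minimal sketch of the separation the three points above describe (all names here are hypothetical, not part of any actual proposal): business logic that knows nothing about rendering or storage, talking to each through a narrow interface.

```python
# (3) Storage: a stand-in for the real database, hidden behind a narrow
# interface so the backing store (SQL, NoSQL, ...) could be swapped out
# without touching the business logic.
class PostStore:
    def __init__(self):
        self._posts = {}
        self._next_id = 1

    def save(self, title, body):
        post_id = self._next_id
        self._next_id += 1
        self._posts[post_id] = {"title": title, "body": body}
        return post_id

    def get(self, post_id):
        return self._posts[post_id]

# (1) Business logic: posting/moderation rules live here, not in the frontend.
class Forum:
    def __init__(self, store):
        self._store = store

    def submit_post(self, title, body):
        if not title.strip():
            raise ValueError("posts need a title")
        return self._store.save(title, body)

    def get_post(self, post_id):
        return self._store.get(post_id)

# (2) Frontend: renders whatever the business layer hands it; in the real
# proposal this would be a React app calling the server over HTTP.
def render_post(forum, post_id):
    post = forum.get_post(post_id)
    return "<h1>{}</h1><p>{}</p>".format(post["title"], post["body"])
```

The point of the seams is exactly the complaint about the old codebase: each layer can be studied, replaced, or extended without understanding the other two.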

I will kickstart all three parts and bring them to a good place. Beyond that threshold, I will need help with the frontend -- this is not my forte and will be better executed by someone passionate about it.

I'm not asking for any compensation for my work. My incentive is to create a project that is actually immediately useful to someone; open-sourcing it and extending that usability is also nice. I also sympathize with the LW community and the goals laid out in this post.

I considered another approach: reverse-engineer HackerNews and use that as the foundation to be adapted to LW's unique needs. If this approach would be of greater utility to LW, I'd be happy to take it.

Thanks for the offer! Maybe we should talk by email? (this username @ gmail.com)

If you don't get a proper response, it may be worthwhile to make this into its own post, if you have the karma. (Open thread is another option.)

Well, if someone were willing to pay me for one year of full-time work, I would be happy to rewrite the LW code from scratch. Maybe one year is an overestimate, but maybe not -- there is this thing known as the planning fallacy. That would cost somewhat less than $100k. Let's say $100k, and that includes a reserve for occasionally paying someone else to help me with some specific thing, if needed.

I am not saying that paying me for this job is a rational thing to do; let's just take this as an approximate estimate of the upper bound. (The lower bound is hoping that one day someone will appear and do it for free. Probably also not a rational thing to do.)

Maybe it was a mistake that I didn't mention this option sooner... but hearing all the talk about "some volunteers doing it for free in their free time" made me believe that this offer would be seen as exaggerated. (Maybe I was wrong. Sorry, can't change the past.)

I certainly couldn't do this in my free time. And trying to fix the existing code would probably take just as much time, the difference being that at the end, instead of new easily maintainable and extensible code, we would have the same old code with a few patches.

And there is also a risk that I am overestimating my abilities here. I never did a project of this scale alone. I mean, I feel quite confident that I could do it in a given time frame, but maybe there would be problems with performance, or some kind of black swan.

I will point out there's no strong opposition to replacing the current LW codebase with something different, so long as we can transfer over all the old posts without breaking any links.

I would probably try to solve it as a separate step. First, make the new website, as good as possible. Second, import the old content, and redirect the links. Only worry about the import when the new site works as expected.

Or maybe don't even import the old stuff, and keep the old website frozen. Just static pages, without the ability to edit anything. All we lose is the ability to vote or comment on years-old content. At the moment of transition, officially open the new website and block the ability to post new articles on the old one, but still allow people to post comments on the old one for the following three months. In the end, all old links will still work, read-only.
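The link-preservation step described above could be as simple as a lookup table built during the import, with permanent redirects for migrated posts and the frozen archive serving everything else. A sketch (the URL patterns are made up, not LW's actual routing):

```python
# Hypothetical mapping from old post paths to their new locations,
# built once during the import step.
OLD_TO_NEW = {
    "/lw/abc/some_old_post/": "/posts/123/some-old-post",
}

def resolve(path):
    """Return (HTTP status, target path) for a request hitting the old site."""
    if path in OLD_TO_NEW:
        return 301, OLD_TO_NEW[path]   # permanent redirect into the new site
    return 200, path                   # everything else stays on the frozen archive
```

A 301 (permanent) redirect also tells search engines to transfer the old page's ranking to the new URL, so nothing is lost by moving.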

Not trolling here, genuine question.

How is the LW codebase so awful? What makes it so much more complicated than just a typical blog, + karma? I feel like I must be missing something.

From a UI perspective it is text boxes and buttons. The data structures you need to track don't SEEM too complicated (users have names, karma totals, passwords, and roles). What am I not taking into account?

How is the LW codebase so awful?

Age, mostly. My understanding is that Reddit was one of the first of its kind, and so when building it they didn't have a good sense of what they were actually making. One of the benefits of switching to something new is not just that it's using technology people are more likely to be using in their day jobs, but also that the data arrangement is more aligned with how the data is actually used and thought about.

Strong writers enjoy their independence.

This is, I think, the largest social obstacle to reconstitution. Crossposting blog posts from the diaspora is a decent workaround, though -- if more than a few can be convinced to do it.

Speaking as a writer for different communities, there are 2 problems with this:

  • Duplicate content: unless one version is explicitly marked as canonical via headers, Google cannot tell which version should rank for keywords. This hits small and upcoming authors like a ton of bricks, because by default the LW version is going to get ranked (on the basis of authority), while their own content will be marked both as a duplicate and as spam, and their domain deranked as a result.

  • "An audience of your own": if a reasonable reader can assume that "all good content will also be cross-posted to LW anyway", that strongly undermines the reason to have the small blogger in their RSS reader, or to check their site once a day, in the first place.
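The standard fix for the duplicate-content problem above is a rel=canonical link: the cross-posted copy declares the author's own post as the authoritative version, so search engines consolidate ranking there instead of penalizing the smaller site. A sketch of what a cross-posting template might emit (the URL is a placeholder):

```python
def canonical_tag(original_url):
    """Tag to place in the <head> of a cross-posted copy, so search engines
    attribute the content to the author's own site rather than flagging
    the original as a duplicate."""
    return '<link rel="canonical" href="{}" />'.format(original_url)

# e.g. the LW copy of a Putanumonit post would carry:
tag = canonical_tag("https://example-blog.com/my-post")
```

This only addresses the first bullet; the "audience of your own" problem is social, not technical.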

The HN "link aggregator" model works because, by directly linking to a thing, you bump its ranking; if it ranks up to the main page, it drives an audience to the original site, where that audience can be captured (via RSS, or newsletters); participation therefore has limited downside.

"Strong LW diaspora writers" is a small enough group that it should be straightforward to ask them what they think about all of this.

My willingness to cross post from Putanumonit will depend on the standards of quality and tone in LW 2.0. One of my favorite things about LW was the consistency of the writing: the subject matter, the way the posts were structured, the language used, and the overall quality. Posting on LW was intimidating, but I didn't necessarily consider that a bad thing, because it meant that almost every post was gold.

In the diaspora, everyone sets their own standards. I consider myself very much a rationality blogger and get linked from r/LessWrong and r/slatestarcodex, but my posts are often about things like NBA stats or Pokemon, I use a lot of pictures and a lighter tone, and I don't have a list of 50 academic citations at the bottom of each post. I feel that much of my writing isn't a good fit for G Wiley's budding rationalist community blog, let alone old LW.

I guess what I'm saying is that there's a tradeoff between catching more of the diaspora and having consistent standards. The scale goes from old LW standards (strictest) -> cross posting -> links with centralized discussion -> blogroll (loosest). Any point on the scale could work, but it's important to recognize the tradeoff and also to make the standards extremely clear so that each writer can decide whether they're in or out.

I have been doing exactly this. My short-term goal is to get something like 5-10 writers posting here. So far, some people are willing, and some have some objections which we're going to have to figure out how to address.

Re: #2, it seems like most of the politics discussion places online quickly become dominated by one view or another. If you wanted to solve this problem, one idea is:

  1. Start an apolitical discussion board.

  2. Gather lots of members. Try to make your members a representative cross-section of smart people.

  3. Start discussing politics, but with strong norms in place to guard against the failure mode where people whose view is in the minority leave the board.

I explained here why I think reducing political polarization through this sort of project could be high-impact.

Re: #3, I explain why I think this is wrong in this post. "Strong writers enjoy their independence" - I'm not sure what you're pointing at with this. I see lots of people who seem like strong writers writing for Medium.com or doing newspaper columns or even contributing to Less Wrong (back in the day).

(I largely agree otherwise.)

If I were NRx, I would feel very amused at the idea of LW people coming to believe that they need to invite an all-powerful dictator to save them from decay and ruin... :-D

What's hilariously ironic is that our problem immigrants are Eugine's sockpuppets, when Eugine is NRx and anti-immigrant.

That Eugine is so much of a problem is actually evidence in favour of some of his politics.

And when the dictator stops Eugine, it will also prove that Cthulhu always swims left.

(Meanwhile, in a different tribe: "So, they have a dictator now, and of course it's a white male. That validates our beliefs!")

  1. I agree completely.

  2. Politics has most certainly damaged the potential of SSC. Notably, far fewer useful insights have resulted from the site and readership than was the case with LessWrong at its peak, but that is how Yvain wanted it, I suppose. The comment section has, according to my understanding, become a haven for NRx and other types considered unsavoury by much of the rationalist community, and the quality of the discussion is substantially lower in general than it could have been.

  3. Sure.

  4. Codebase: just start over, but carry over the useful ideas already implemented, such as disincentivizing flamewars by making responses to downvoted comments cost karma, awarding zero initial karma for posting, and any other rational-discussion-fostering mechanics that have become apparent since then.

  5. I agree, make this site read only, use it and the wiki as a knowledge base, and start over somewhere else.
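The flamewar-disincentive mechanic mentioned in point 4 can be sketched in a few lines. The threshold and cost below are made-up illustrative values, not LW's actual constants:

```python
DOWNVOTED_THRESHOLD = -3   # assumed: comments at or below this score count as downvoted
REPLY_KARMA_COST = 5       # assumed: karma the author pays to reply to such a comment

def reply_cost(parent_comment_score):
    """Karma charged for posting a reply; nonzero only under downvoted comments."""
    if parent_comment_score <= DOWNVOTED_THRESHOLD:
        return REPLY_KARMA_COST
    return 0

def post_reply(author_karma, parent_comment_score):
    """Return the author's karma after posting the reply."""
    return author_karma - reply_cost(parent_comment_score)
```

The design intent is that continuing an exchange under a heavily downvoted comment is possible but costly, so flamewars burn themselves out instead of escalating.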