On Wednesday I had lunch with Raph Levien, and came away with a picture of how a website that fostered the highest quality discussion might work.
Principles:
- It’s possible that the right thing is a quick fix to Less Wrong as it is; this is about exploring what could be done if we started anew.
- If we decided to start anew, what the software should do is only one part of what would need to be decided; that’s the part I address here.
- As Anna Salamon set out, the goal is to create a commons of knowledge, such that a great many people have read the same stuff. A system that tailored what you saw to your own preferences would have its own strengths but would work entirely against this goal.
- I therefore think the right goal is to build a website whose content reflects the preferences of one person, or a small set of people. In what follows I refer to those people as the “root set”.
- A commons needs a clear line between the content that’s in and the content that’s out. Much of the best discussion happens on closed mailing lists; it will be easier to get the participation of time-limited contributors if there’s a clear line around the discussion we expect them to have read, and that body of discussion is short.
- However this alone excludes a lot of people who might have good stuff to add; it would be good to find a way to get the best of both worlds between a closed list and an open forum.
- I want to structure discussion as a set of concentric circles.
- Discussion in the innermost circle forms part of the commons of knowledge all can be assumed to be familiar with; surrounding it are circles of discussion where the bar is progressively lower. With a slider, readers choose which circle they want to read.
- Content from rings further out may be pulled inwards by the votes of trusted people.
- Content never moves outwards except in the case of spam/abuse.
- Users can create top-level content in further-out rings and allow the votes of other users to move it closer to the centre. Users are encouraged to post whatever they want in the outermost rings, to treat it as one would an open thread or similar; the best content will be voted inwards.
- Trust in users flows through endorsements starting from the root set.
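The endorsement-flow idea above can be sketched roughly as follows. This is only an illustrative model, assuming trust attenuates by one level per hop outward from the root set; the names are my own, and a real attack-resistant metric would be more sophisticated:

```python
def propagate_trust(roots, endorsements, max_trust=5):
    """Sketch: trust flows from a fully trusted root set via endorsements.

    roots: iterable of user ids, each granted max_trust.
    endorsements: dict mapping user -> list of (endorsee, level) pairs,
    where level is the 0-5 rating the endorser gave the endorsee.
    A user's trust is the best value reachable from any root.
    """
    trust = {u: max_trust for u in roots}
    frontier = list(roots)
    while frontier:
        user = frontier.pop()
        for endorsee, level in endorsements.get(user, []):
            # An endorsement grants at most the endorser's own trust
            # minus one, and at most the level they assigned.
            granted = min(trust[user] - 1, level)
            if granted > trust.get(endorsee, -1):
                trust[endorsee] = granted
                frontier.append(endorsee)
    return trust
```

For example, if the root endorses alice at 5 and bob at 2, and alice endorses carol at 4, this yields trust 4 for alice, 2 for bob, and 3 for carol: downvote-proof, since trust here only ever arrives from closer to the root.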
More specifics on what that vision might look like:
- The site gives all content (posts, top-level comments, and responses) a star rating from 0 to 5 where 0 means “spam/abuse/no-one should see”.
- The rating that content can receive is capped by the rating of the parent; the site will never rate a response higher than its parent, or a top-level comment higher than the post it replies to.
- Users control a “slider”, à la Slashdot, which sets the minimum rating of the content they see: set to 4, they see only 4- and 5-star content.
- By default, content from untrusted users gets two stars; this leaves one star for “unusually bad” (e.g. rude) and zero for actual spam or other abuse.
- Content ratings above 2 never go down, except to 0; they only go up. Thus, the content in these circles can grow but not shrink, to create a stable commons.
- Since a parent’s rating acts as a cap on the highest rating a child can get, when a parent’s rating goes up, this can cause a child’s rating to go up too.
- Users rate content on this 0-5 scale, including their own content; the site aggregates these votes to generate content ratings.
- Users also rate other users on the same scale, for how much they are trusted to rate content.
- There is a small set of “root” users whose user ratings are wholly trusted. Trust flows from these users via some attack-resistant trust metric.
- Trust in a particular user can always go down as well as up.
- Only votes from the most trusted users will suffice to bestow the highest ratings on content.
- The site may show lower-rated content to more trusted users with high slider settings specifically to ask them to vote on it, for instance when a comment is receiving high ratings from users one level below them in the trust ranking. This content will be displayed in a distinctive way to make its purpose clear.
- Votes from untrusted users never directly affect content ratings, only what is shown to more trusted users to ask for a rating. Downvoting sprees from untrusted users will thus be annoying but ineffective.
- The site may also suggest to more trusted users that they uprate or downrate particular users.
- The exact algorithms by which the site rates content, grants trust to users, or asks users for moderation would probably need plenty of tweaking; machine learning could help here. For an MVP, though, something fairly simple would likely be enough to get the site off the ground.
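The core rating rules above (the parent cap, ratings that only rise except for a drop to zero, and the slider filter) can be sketched roughly like this. All names are illustrative, and this deliberately ignores how individual votes aggregate into a rating:

```python
class Content:
    """Sketch of a rated post or comment; names are illustrative."""

    def __init__(self, parent=None, rating=2):
        self.parent = parent    # None for a top-level post
        self.rating = rating    # untrusted authors default to 2 stars

    def cap(self):
        # A child's rating can never exceed its parent's.
        return 5 if self.parent is None else self.parent.rating

    def set_rating(self, new):
        if new == 0:
            # Spam/abuse: the one way a rating can fall.
            self.rating = 0
        elif new > self.rating:
            # Otherwise ratings only rise, and only up to the cap.
            self.rating = min(new, self.cap())

def visible(contents, slider):
    """Content a reader sees with a given slider setting."""
    return [c for c in contents if c.rating >= slider]
```

Note that raising a parent's rating doesn't push its children up by itself; it just raises the cap, so subsequent votes on the child can then take effect, which matches the "can cause a child's rating to go up" behaviour above.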
I don't really understand the reasons behind a lot of the proposed site mechanics, but I've been toying around with an idea similar to your slider, but for a somewhat different purpose.
Consider this paradox:
As far as I can tell, humor and social interaction are crucial to keeping a site fun and alive. People have to be free to say whatever is on their mind, without worrying too much about social repercussions. They have to feel safe and be able to talk freely.
This is, to some extent, at odds with keeping quality high. Having extremely high standards is one of the things that makes LW valuable, and gives it such a high signal-to-noise ratio.
So, how do we cope with this dichotomy? One way is to let users submit a comment or post either to the outer circles or to an inner one. I think this is part of what we were going for with the Discussion/Main dichotomy, but no one posted to Main, so people don't even check it anymore. And because of our quality standards for Discussion, people hadn't felt comfortable posting there either, until recently, when things started picking up with a lot of good, interesting articles. So most of the actual discussion got pushed to the weekly open threads, or off the site entirely.
One way around this would be to have two "circles", as you call them. Users tag their own comments and submissions as either "canon" or "non-canon", based on epistemic status: whether they've thought about it for at least 5 minutes, whether it's informative or just social, whether they've read the Sequences yet or are a newbie, etc. You could, of course, add more circles for more granularity, but two is the minimum.
Either way, it's extremely important that the user's self-rating be visible alongside the site's rating, so that people aren't socially punished for mediocre or low-quality content when they made no claim to quality in the first place. This lets them toss rough ideas out there, potential diamonds in the rough, without having to fully refine them first.
An interesting thing you could do with this, to discourage overconfidence and encourage the meek, would be to show users their calibration curve. That is, if someone routinely rates their own comments as outer-circle quality, but others tend to vote them up to inner-circle status, the system will visually show a corrected estimate of quality when they slide the bar on their own comment.
Maybe even autocorrect it: if someone tries to rate a comment at 1 star, but their average 1-star comment gets voted up to 3 stars, then the system starts it at 3 stars instead. It's probably best to let people rate their own comments themselves, though, since the social pressure of having to live up to 3-star quality might cause undue stress and lead to less engagement.
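A minimal sketch of that calibration-correction idea, assuming the site tracks the final community rating for each star level a user assigns themselves; the names here are my own invention:

```python
from collections import defaultdict

class Calibration:
    """Sketch: per-user record of where self-rated content ends up."""

    def __init__(self):
        # self-assigned star level -> list of final community ratings
        self.history = defaultdict(list)

    def record(self, self_rating, final_rating):
        self.history[self_rating].append(final_rating)

    def corrected(self, self_rating):
        # With no history, just trust the self-rating; otherwise show
        # the average of where such comments actually ended up.
        past = self.history[self_rating]
        if not past:
            return self_rating
        return sum(past) / len(past)
```

So a habitual under-rater whose 1-star comments keep getting voted to 3 stars would see `corrected(1)` come back as 3.0, which the UI could display next to the slider (or, per your stronger version, use as the starting rating).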