Open thread, Apr. 10 - Apr. 16, 2017

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Comments

You think like a human because you are a human. Not because this is how an intelligent being thinks.

Just a thought.

Sometimes we talk about unnecessarily complex potential karma/upvote systems, so I thought I would throw out an idea along those lines:

Every time you post, you're prompted to predict the upvote/downvote ratio of your post.

Instead of being scored on raw upvotes, you're scored on something more like how accurately you predicted the future upvote/downvote ratio.

So if you write a good post that you expect to be upvoted, then you predict a high upvote/downvote ratio, and if you're well calibrated to your audience, then you actually achieve the ratio you predicted, and you're rewarded "extra" by the system.

And here's the cool part. If you write a lazy, low-effort post, or you're trolling, or you write any kind of post that you expect to be poorly received, you have two options. You can lie about the expected upvote/downvote ratio by entering a high one, in which case the system penalizes you even more when your actual ratio comes in low, and marks you as a poorly calibrated poster. Or you can be honest about the ratio you expect, in which case the system can preemptively tell you not to bother posting, hide the post, or penalize it in some other way.

Overall you end up with a system that rewards users who (1) are well calibrated about the quality of their posts and (2) refrain from posting content they know to be bad, because it makes them admit that a post is bad before they can submit it, and can hide that content outright.
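To make the scoring concrete, here is a minimal sketch of one way the reward could be computed, assuming a Brier-style calibration bonus. Everything in it (the `PredictedPost` type, the bonus scale, the hide threshold) is an illustrative assumption, not a worked-out design:

```python
# Toy sketch of the "predict your own ratio" karma idea above.
# All names and constants (PredictedPost, CALIBRATION_BONUS, HIDE_THRESHOLD)
# are illustrative assumptions, not any real forum's API.
from dataclasses import dataclass

CALIBRATION_BONUS = 10.0  # assumed scale of the calibration reward/penalty
HIDE_THRESHOLD = 0.3      # assumed cutoff below which posting is discouraged

@dataclass
class PredictedPost:
    predicted_ratio: float  # author's predicted upvote fraction, in [0, 1]
    upvotes: int = 0
    downvotes: int = 0

def should_discourage(predicted_ratio: float) -> bool:
    """An honest low prediction gets flagged before the post even goes up."""
    return predicted_ratio < HIDE_THRESHOLD

def score(post: PredictedPost) -> float:
    """Net karma plus a Brier-style calibration adjustment.

    A perfect prediction earns the full bonus; predicting high and
    landing low (the lying-troll case) earns an extra penalty.
    """
    total = post.upvotes + post.downvotes
    if total == 0:
        return 0.0
    actual = post.upvotes / total
    net = post.upvotes - post.downvotes
    squared_error = (post.predicted_ratio - actual) ** 2  # in [0, 1]
    return net + CALIBRATION_BONUS * (1.0 - 2.0 * squared_error)

# A well-calibrated post vs. a troll who predicted high and landed low:
good = PredictedPost(predicted_ratio=0.8, upvotes=8, downvotes=2)
troll = PredictedPost(predicted_ratio=0.9, upvotes=1, downvotes=9)
print(score(good))   # 16.0: 6 net votes plus the full calibration bonus
print(score(troll))  # -10.8: -8 net votes plus an extra calibration penalty
```

In this sketch the (1 - 2·error²) term makes the calibration bonus positive for predictions within about 0.7 of the actual ratio and negative beyond that; where to put that break-even point, and whether calibration should be tracked per user rather than per post, are exactly the knobs such a system would need to tune.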

refrain from posting content they know to be bad

Knowing that your post will get a low score is not equivalent to knowing that it is bad.

I get the feeling that LW has a lot of lurkers with interesting things to say who are too afraid to say them. They may eventually build up the courage they need to contribute to the community, but this system would scare them off: they don't yet have enough data to predict how well their posts would be received. We should be doing the opposite and removing some of the barriers to joining in.

On the other hand, trolls don't care that much about karma. They'll just exploit sock puppets.

Yeah, LW would probably not be the place to try this. I would guess that karma systems like this only function correctly with a sufficiently large user base, with enough people reading and voting on each post. LW has atrophied too much for that.

The thing is that without downvotes, there aren't actually that many barriers to joining in. If someone has a problem with something you say, they have to actually say so, instead of just downvoting, which is what often happens on Reddit. I think this is better because it ties negative reward to feedback, so that people who misunderstand something, or who articulate their views poorly, can get better over time. The worst thing is getting downvoted without knowing why. I don't know if this has been tried anywhere, but maybe a system where every vote required a comment would work better, so that it would be clear why the community received a remark the way it did.
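For what it's worth, the vote-requires-comment rule would be trivial to enforce mechanically; the hard part is social. A minimal sketch, where the `Vote` type and the rejection behavior are both assumed for illustration:

```python
# Minimal sketch of the "every vote needs a comment" rule suggested above.
# The Vote type and ValueError-based rejection are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Vote:
    voter: str
    direction: int     # +1 for an upvote, -1 for a downvote
    comment: str = ""  # the feedback that justifies the vote

def cast_vote(vote: Vote) -> Vote:
    """Accept a vote only if it explains itself."""
    if not vote.comment.strip():
        raise ValueError("A vote must come with a comment explaining "
                         "why the post was received this way.")
    return vote

cast_vote(Vote("alice", -1, "The second paragraph contradicts the first."))  # accepted
# cast_vote(Vote("bob", -1))  # rejected: silent downvotes are disallowed
```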

This is a response to this comment.

Can you clarify what you mean by phenomenological and existentialist stances, and what you mean by saying that there is no true ontology? I agree that we could use somewhat different models of the world. For example, we don't have to distinguish between dogs and wolves, but could call them both by one common name. I don't see what difference this makes. Dogs and wolves would still exist in the world and would still be distinguishable in the way we currently distinguish them, even if we did not distinguish them; likewise, the common thing would still exist even if we did not explicitly think of it.

Many opinions that are not normally counted as moral realism are in fact forms of moral realism, if moral realism is understood to mean "moral statements make claims about facts in the world, and the ones people normally accept make true claims." For example, if someone holds that saying it is good to do something means that he wants to do it, and saying that something is bad means that he does not want to do it or want other people to do it, then when he says "murder is bad," he is making a true claim about the world, namely that he does not want to murder and does not want other people to murder. Likewise, Eliezer's theory is morally realist in this sense. However, there are other opinions which say that moral statements are either meaningless or false, like error theory, which says that they are false. It was my impression that you were denying moral realism in this stronger sense.

I think that moral realism is true, and in a stronger sense than in Eliezer's theory, but the facts that would make a moral statement true in my theory are very much like the facts that make such statements true according to him.

Pointing to some aspects where my theory is different from his:

  • in my theory, the universe and life are good in themselves, not indifferent.
  • "good" is thought of as the cause of desire, not as the output of a function. This of course is a common sense way of thinking about good, but it seems backwards to many people after thinking about it. But it is exactly right: for example, the fact that food is good for us is the cause, over geological time, of the fact that we desire it. Likewise if you are standing in front of an ice cream shop and see the ice cream, it is physically the light coming from the ice cream which begins the chain of physical causes that end in you desiring it.
  • these things imply that although good is relative, in the sense that what is good for me is different from what is good for you, and what is good for humans is different from what is good for e.g. babyeaters, all of those things fall under the concept of good, even as applied by me. I do not say, "This is babyeaterish for babyeaters," like Eliezer; I say, "this is good for babyeaters, although not for us." That implies, e.g., that I do not want to impose human values on babyeaters; I think doing so would be an evil thing.
  • human life has an objective purpose. Eliezer's theory sort of has this implication but not in a robust sense, since he thinks it only has that purpose from a human point of view, and babyeaters would not accept it. I think that informed babyeaters would accept my moral theory, and therefore they would agree with us about the purpose of human life.