Sam Harris is offering a substantial amount of money to anyone who can show a flaw in the philosophy of 'The Moral Landscape' in 1,000 words or fewer, or at least to the best attempt.
http://www.samharris.org/blog/item/the-moral-landscape-challenge1
Up to $20,000 is on offer, although that is only if you change his mind. Whilst we know this is very difficult, note how few people offer large sums of money for the privilege of being disproven.
In case anyone does win, I will remind you that this site is created and maintained by people who work at MIRI and CFAR, which rely on outside donations, and with whom I am not affiliated.
Note: is this misplaced in Discussion? I imagine it could easily be overlooked in an open thread by the sorts of people who would be best placed to use this information.
The error in Harris' main point is hard to pin down, because it seems to me that his main fault is that his beliefs regarding morality aren't clearly worked out in his own head. This can be seen in his confusion as to why anyone would find his beliefs problematic, and in his tendency to hand-wave criticism away with claims that "it's obvious".
Interpreted favourably, I agree with his main point: that questions about morality can be answered using science, as moral claims are not intrinsically different from any other claim (no separate magisteria, s'il vous plaît). Basically, what all morality boils down to is that people have certain preferences, and these preferences determine whether certain actions and outcomes are desirable or not (to those people, that is). I agree with Harris that the latter can be deduced logically, or determined scientifically. Furthermore, the question of what people's preferences are in the first place can be examined using, for example, neuroscience. In this sense, questions of morality can be entirely answered scientifically, assuming they are formulated in a meaningful way (otherwise the answer is mu).
The problem is that Harris' main position can also be taken to mean that science can determine what preferences people ought to have in the first place. This is not possible, as it is circular, and it is the main source of the criticism he receives. Unfortunately, Harris does not seem to get this, as he never addresses the issue: in his example of super-intelligent aliens, for instance, he states that it is "obviously" right for us to let them eat us if this will increase total utility. This implies that everyone should feel compelled to maximise total utility, though he supplies no argument for why this should be the case. I am not confident I could convince Sam Harris of his own confusion, however.
I suspect that a winning letter to Sam Harris would interpret his position favourably, agree with him on most points, and then raise a compelling new point he has not yet thought of — one that causes him to change his mind slightly but does not address the core of his problem.
I think his beliefs are worked out and make sense, but aren't articulated well. What he's really doing is trying to replace morality-speak with a new, slightly different, and more homogeneous way of speaking, in order to facilitate scientific research (i.e., a very loose operationalization) and political cooperation (i.e., a common language).
But, I gather, he can't emphasize that point because then he'll start sounding like a moral anti-realist, and even appearing to endorse anything in the neighborhood of relativism will reliably explode most people's brains. (The realists will panic and worry we have to stop locking up rapists if we lose their favorite Moral System. The relativists will declare victory and take this metaphysical footnote as a vindication of their sloppy, reflectively inconsistent normative talk.)
This is not true. He recognizes this point repeatedly in the book and in follow-ups, and his response is simply that it doesn't matter. He's never claimed to have a self-justifying system, nor does he take it to be a particularly good argument against disciplines that can't achieve the inconsistent goal of non-circularly justifying themselves.
Check out his response to critics. That should clarify a lot.
What do you mean by 'utility' here? If 'utility' is just a measure of how much something satisfies our values, then the obviousness seems a lot less mysterious.
Yeah, I plan to do basically that. (Not just as a tactic, though. I do agree with him on most of his points, and I do disagree with him on a specific just-barely-core issue.)