P/S/A - Sam Harris offering money for a little good philosophy

Sam Harris is here offering a substantial amount of money to anyone who can show a flaw in the philosophy of 'The Moral Landscape' in 1000 words or fewer, or at least for the best attempt.

http://www.samharris.org/blog/item/the-moral-landscape-challenge1

Up to $20,000 is on offer, although that's only if you change his mind. Whilst we know that this is very difficult, note how few people offer large sums of money for the privilege of being disproven.

In case anyone does win, I will remind you that this site is created and maintained by people who work at MIRI and CFAR, which rely on outside donations, and with whom I am not affiliated.

Note: Is this misplaced in Discussion? I imagine it could easily be overlooked in an open thread by the sorts of people who would be able to use this information well.

Comments


Sam Harris is here offering a substantial amount of money to anyone who can show a flaw in the philosophy of 'The Moral Landscape' in 1000 words or fewer, or at least for the best attempt.

More accurately, he is "offering a substantial amount of money to anyone who can" convince him to publicly acknowledge that there is a "flaw in the philosophy of 'The Moral Landscape' in 1000 words or fewer." This is quite a different feat from merely finding a flaw.

Up to $20,000 is on offer, although that's only if you change his mind. Whilst we know that this is very difficult, note how few people offer large sums of money for the privilege of being disproven.

I'm not so sure this is a wise decision if you are trying to improve your epistemic rationality. What he has just done is give himself a $10,000 reason not to change his mind.

What he has just done is give himself a $10,000 reason not to change his mind.

Maybe. But how much is $10,000 to Sam Harris? And how much credit would he get for publicly changing his mind in a way that costs him $10,000? And if he did so, he might be getting an excuse to market another book on morality in the bargain.

It looks like he's having a third party judge the results, but I can't tell since it's only a tweet and isn't explicit about whether or not the reward is determined by the third party. He tweeted:

"I am happy to say that Russell Blackford has agreed to judge the essays, pick the winner, and evaluate my response."

This is for the best essay, not for the main prize.

If true, this is good news for the sanity of Sam Harris, although the original post showed no indication that this would be the case.

Sam did something similar on his tour for the book. He invited people to come up and correct his views on his book.

It was either clueless or fundamentally dishonest. Sam can add 2 and 2. The problems with his book, like most others, are primarily conceptual, and impossible to correct in a 30-second response to Sam after his lecture. He chose not to engage the professional literature on his rehash of utilitarianism and moral objectivism, and then invited people to correct him in a 30-second response to his lecture. Unserious.

I don't think any of his fundamental moves pass a laugh test. But it's extremely difficult to help a conceptually confused person see the error of their ways. We can't do it for him. He has to decide to face serious interrogation by his critics, where he attempts to clarify his own argument, and sees if he can do it. He's shown no indication of a willingness to do this. Instead, he'll just read essays, cram them into his conceptual confusion, and dismiss them, most likely claiming that they didn't understand his argument, whereas I'd argue that neither did he. What a pointless exercise.

Here's my response. I had a LW-geared TL;DR which assumed shorter inferential distance and used brevity-aiding LW jargon, but then I removed it because I want to see if this makes sense to LW without any of that.


This debate boils down to a semantic confusion.

Let's consider the word "heat(1)". Some humans chose the word "heat" to mean "A specific subset of environmental conditions that lead to the observation of feeling hot, of seeing water evaporate..." and many other things too numerous to mention.

Once "heat" was defined, science could begin to quantify how much of it there was using "temperature". We can use our behavior to increase or decrease the heat, and some behaviors are objectively more heat-inducing than others.

But who defined heat in the first place? We did. We set the definition. It was an arbitrary decision. If our linguistic history had gone differently, "heat" could have meant any number of things.

If we were lucky, a neighboring culture would use "heat(2)" to mean "the colors red and yellow" and everyone would recognize that these were two separate words that meant different things but happened to be homonyms with a common root - since most warm things are red or yellow, it's easy to see how definitions diverge. No one would be so silly as to argue about heat(1) and heat(2). If we were unlucky, a neighboring culture might decide to use "heat(3)" to mean "subjective feelings resulting from temperature-receptor activation", and we'd have endless philosophical debates about what heat really is. All this useless debate because one culture decided to use "heat(3)" to refer to the subjective feeling of being hot, while another culture decided to use "heat(1)" to refer to a complex phenomenon which causes a bunch of observable effects, one of which is usually but not always the subjective experience of feeling hot.

One day, a group of humans which included one named Sam Harris decided to define "Good(1) and Best(1)" as "Well-Being among all Conscious Beings". (Aside - In an effort to address the central theme and avoid tangents, let's just assume that "Conscious Beings" here means "regular humans" and not create hypothetical situations containing eldritch beings with alien goals. Since we haven't rigorously defined "Well-Being" and "Conscious-Being", we won't go into the question of whether "Well-Being" is a coherent construct for all "Conscious Beings". We can deal with that problem later - that's not the central issue. For now, we will simply go by our common intuitions of what those words mean.)

Can you measure "well-being" in humans? Sure you can! You can use questionnaires to measure satisfaction, you can measure health and vibrancy and do all sorts of things. And you can arrange your actions to maximize these measurements, creating the Best(1) Possible Universe. And some hypotheses about what actions you ought to take to reach the Best(1) Possible Universe are incorrect, while others are correct.

One day, a group of humans which did not include one named Sam Harris decided to define "Good(2)" as "The sum of all my goals". Can science measure that? Actually, yes! - I can measure my emotional response to various hypothetical situations, and try to scientifically pinpoint what my goals are. I can attempt to describe my goals, and sometimes I will be incorrect about my own goals, and sometimes I will be correct - we've almost all been in situations where we thought we wanted something, and then realized we didn't. Likewise, there is a certain set of actions that I can take to maximize the fulfillment of my goals, to reach my Best(2) Possible Universe. And I can use observation and logic to measure your goals as well, and calculate your Best(2) Possible Universe.

But can my goals themselves be incorrect? No - my goals are embedded in my brain, in my software. My goals are physically a part of the universe. You can't point to a feature of the universe and call it "incorrect". You can only say that my goals are incompatible with yours, that our Best(2) Possible Universes are different. Mine is Better(2) for me, yours is Better(2) for you.

Our culture is unlucky, because Good(1) and Good(2) are homonyms whose definitions are far too close together. It doesn't make sense to ask which definition is "correct" and which is "wrong", any more than it makes sense to ask whether "Ma" means Mother (English) or Horse (Chinese). The entire argument stems from the two sides using the same word to mean entirely different things. It's a stupid argument, and there are no new insights gained from going back and forth on the matter of which arbitrary definition is better. If only Good(1) and Good(2) didn't sound so similar, there would be no confusion.
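To make the divergence concrete, here is a minimal sketch in Python (all names and numbers are my own invention, purely for illustration): the same candidate worlds, scored under a Good(1)-style function and a Good(2)-style function, yield different "Best Possible Universes".

    # Toy illustration of the Good(1)/Good(2) homonym (all values invented).
    # good1 scores a world by aggregate well-being; good2 scores it by how
    # well it satisfies one particular agent's goals.
    worlds = {
        "w_a": {"aggregate_well_being": 9, "my_goals_satisfied": 2},
        "w_b": {"aggregate_well_being": 5, "my_goals_satisfied": 8},
    }

    def good1(w):  # "Well-Being among all Conscious Beings"
        return w["aggregate_well_being"]

    def good2(w):  # "The sum of all my goals"
        return w["my_goals_satisfied"]

    best1 = max(worlds, key=lambda name: good1(worlds[name]))
    best2 = max(worlds, key=lambda name: good2(worlds[name]))
    print(best1, best2)  # w_a w_b -- different optima from homonymous "Good"s

Neither answer is "incorrect"; the two maximizations are simply answers to different questions.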

(Note: Of course, I've ridiculously oversimplified both Good(1) and Good(2), and I haven't gone into Good 1.1, Good 1.2, Good 2.1, Good 2.2, etc. But I think it's safe to say that most definitions of Good currently fall into either camp 1 or camp 2, and this argument is a misunderstanding between the definitional camps)

ask whether "Ma" means Mother (English) or Horse (Chinese).

"Ma" also means mother, depending on the tone. Actually, this example backfires since the word "mama" or some variation of it (ma, umma) means "mother" in almost every language in the world.

I haven't read the book, but this sounds pretty good to me. Since Harris himself is the judge, calling his argument "stupid" might not be the best idea.

Could someone provide a quote or two showing that Sam disagrees with any of the above? Steel-manning only a little, I believe Harris' goal isn't to find the One True Definition of morality, but to get rid of some useless folk concepts in favor of a more useful concept for scientific investigation and political collaboration. He antecedently thinks improving everyone's mental health is a worthy goal, so he pins the word 'morality' to that goal to make morality-talk humanly useful. Quoting him (emphasis added):

[T]he fact that millions of people use the term “morality” as a synonym for religious dogmatism, racism, sexism, or other failures of insight and compassion should not oblige us to merely accept their terminology until the end of time. [...]

Everyone has an intuitive “physics,” but much of our intuitive physics is wrong (with respect to the goal of describing the behavior of matter). Only physicists have a deep understanding of the laws that govern the behavior of matter in our universe. I am arguing that everyone also has an intuitive “morality,” but much of our intuitive morality is clearly wrong (with respect to the goal of maximizing personal and collective well-being).

I think this view is more sophisticated than is usually recognized, though it's definitely true that he doesn't do much to make that clear, if so.

Precommitting to lose $18,000 if you (publicly) change your mind is a really good way to make sure you don't do so.

ETA: Looks like it's a $9,000 precommit, not $18,000. See this comment. I still find this pretty funny.

He's actually only going to lose $9000 in that situation -- half the money comes from someone matching his offer. (Unless he just made that person up to have an excuse for doubling the stakes, I guess, but that seems improbable.)

What can convince a philosopher to change her mind, anyway? I mean, it's not like there is an experiment that can be conclusively set up. Is it some logical argument she is unable to find a fault in? If so, then how come there are multiple schools of philosophy disagreeing on the basics? Can someone point to an example of a (prominent) philosopher changing his/her mind, and hopefully to the stated and unstated reasons for doing so?

Hilary Putnam, one of the most prominent living philosophers, is known for publicly changing his mind repeatedly on a number of issues. In the Philosophical Lexicon, which is kind of an inside-joke philosophical dictionary, a "hilary" is defined thus:

A very brief but significant period in the intellectual career of a distinguished philosopher. "Oh, that's what I thought three or four hilaries ago."

One issue on which Putnam changed his mind is computational functionalism, a theory of mind he actually came up with in the 60s, which is now probably the most popular account of mental states among cognitive scientists and philosophers. Putnam himself has since disavowed this view. Here is a paper tracking Putnam's change of mind on this topic, if you're interested in the details.

The definition of functionalism from that paper:

Computational functionalism is the view that mental states and events – pains, beliefs, desires, thoughts and so forth – are computational states of the brain, and so are defined in terms of “computational parameters plus relations to biologically characterized inputs and outputs” (1988: 7). The nature of the mind is independent of the physical making of the brain: “we could be made of Swiss cheese and it wouldn’t matter” (1975b: 291). What matters is our functional organization: the way in which mental states are causally related to each other, to sensory inputs, and to motor outputs. Stones, trees, carburetors and kidneys do not have minds, not because they are not made out of the right material, but because they do not have the right kind of functional organization. Their functional organization does not appear to be sufficiently complex to render them minds. Yet there could be other thinking creatures, perhaps even made of Swiss cheese, with the appropriate functional organization.

The paper I linked has much more on the structure of Putnam's functionalism and his reasons for believing it.

The reasons for which Putnam subsequently rejected functionalism are a bit hard to convey briefly to someone without a philosophy background. The basic idea is this: many mental states have content, i.e. they somehow say something about the world outside the mind. Beliefs are representations (or possibly misrepresentations) of aspects of the world, desires are directed at particular states of the world, etc. This "outward-pointing" aspect of certain mental states is called, in philosophical parlance, the intentional aspect of mental states. Putnam essentially repudiated functionalism because he came to believe that the functional aspect of a mental state -- its role in the computational process being implemented by the brain -- does not determine its intentional aspect. And since intentionality is a crucial feature of some mental states, we therefore cannot define a mental state in terms of its functional role.

Putnam's arguments for the gap between the functional and intentional are again detailed in the paper I linked (section 3). It's kind of obvious that if we consider a computational process by itself we cannot conclusively determine what role that process is playing in the surrounding ecology -- syntax doesn't determine semantics. Putnam's initial hope had been that by specifying "biologically characterized inputs and outputs" in addition to the computational structure of the mental process, we include enough information about the relationship to the external world to fix the content of the mental state. But he eventually came up with a thought experiment (the now notorious "Twin Earth" experiment) that (he claimed) showed that two individuals could be implementing the exact same mental computations and have the exact same sensory and motor inputs and outputs, and yet have different mental states (different beliefs, for instance).

Another motivation for Putnam's change of mind is that he claimed to have come up with a proof that every open system can, with appropriate definitions of states, be said to implement any finite automaton. The gist of the proof is in the linked paper (section 3.2.1). If the conclusion is correct, then functionalism seemingly collapses into vacuity. All open systems, including rocks and carburetors, can be described as having any mental state you'd like. To avoid this conclusion, we need constraints on interpretation -- which physical process can be legitimately interpreted as a computational process -- but this tells against the substrate-independence that is supposed to be at the core of functionalism.
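For readers who want the flavor of Putnam's trick without reading the paper, here is a minimal sketch in Python (the function name and data are my own invention; it assumes, as Putnam's proof does, that the system passes through pairwise distinct maximal states and that the automaton receives no inputs over the interval):

    # Sketch of Putnam's "triviality" mapping: send each distinct physical
    # state to whatever state the target automaton occupies at that tick.
    # Under the resulting interpretation, the physical trajectory
    # "implements" the automaton's run.
    def putnam_interpretation(physical_states, automaton_run):
        assert len(physical_states) == len(automaton_run)
        assert len(set(physical_states)) == len(physical_states), \
            "requires pairwise distinct physical states"
        interpretation = {}  # automaton state -> disjunction of physical states
        for p, q in zip(physical_states, automaton_run):
            interpretation.setdefault(q, []).append(p)
        return interpretation

    # A rock's (stipulated) micro-states over five ticks, read as a run of
    # a two-state automaton A -> B -> A -> B -> A:
    print(putnam_interpretation(["s1", "s2", "s3", "s4", "s5"],
                                ["A", "B", "A", "B", "A"]))
    # {'A': ['s1', 's3', 's5'], 'B': ['s2', 's4']}

Since nothing constrains which physical states may be grouped together, any sufficiently state-rich system "implements" any such automaton, which is exactly the collapse into vacuity described above.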

So that's one example. Putnam came to believe in functionalism because he thought there were strong arguments for it, both empirical and theoretical, but he subsequently developed counter-arguments that he regarded as strong enough to reject the position despite those initial arguments. Putnam is particularly known for changing his mind on important issues because he has done it so many times, but there are many other prominent philosophers who have had significant changes of mind. Another very prominent example is Ludwig Wittgenstein, who is basically famous for two books, the first of which promulgated a radical view of the relationship between language, the mind and the world (an early form of logical positivism), and the second of which extensively (and, to my mind, quite devastatingly) criticized this view.

Excellent response. Another example of a famous philosopher changing his mind publicly a lot is Bertrand Russell; he changed his views in all areas of philosophy, often more than once:

  • In metaphysics, he started his career as an Absolute Idealist (believing that pluralities of objects are unreal and only a universal spirit is real); then became convinced of the reality of objects and extended his newfound realism to relations and mathematical concepts, becoming a Platonist of sorts, and later became more and more of a nominalist, though never a complete one.

  • Concerning perception, after switching first from idealism to a sort of naive realism, he developed a new theory in which physical objects reduce to collections of sense-data, and later repudiated this theory in favor of one where physical objects cause sense-data.

  • He also changed his views on the self, from seeing it as an entity to reducing it to a collection of perceptions.

  • Finally, in metaethics, he started out believing that the Good was an objective, independent property, but was convinced to abandon this view and become more of a naturalist and subjectivist by the arguments that Santayana raised against him. (Santayana's critique can be read here and is a fascinating early version of the kind of metaethical view accepted by Eliezer and most LWers).

At least one of Putnam's changes is a bit of a tricky case; he's famous for being a co-author of the early pro-reductionist essay "Unity of Science as a Working Hypothesis," and for later being one of the most prominent anti-reductionists. However, I have heard that the other co-author of that paper, Paul Oppenheim, paid Putnam (who was then just starting out and so not in the greatest financial shape) to help him write a paper advancing his own views. I've also heard that Putnam was not the only young scholar Oppenheim did this with. All of Oppenheim's well-known publications are co-authored, and I've actually heard that they all involved similar arrangements, but when I heard this story Putnam was cited as the instance my (highly trustworthy) source knew for certain (my source claimed to have heard this from Putnam himself, and is someone Putnam plausibly might have told this to).

Interesting. So the examples of Putnam and Wittgenstein show that a philosopher can be persuaded by his own logical arguments. Maybe some even listen to the arguments of others, who knows. I wonder what makes an argument persuasive to some philosophers and not to others.

Well shit...your post made me realize I've never really changed my mind on any non-empirical issue - although I have had blank spaces filled in, of course.

Would you consider EY prominent? He is here, at least. Here is a description of his conversion from the (I say surely false) belief that Aumann's agreement theorem would cause rational agents to behave morally to the (I say surely true) belief in No Universally Compelling Arguments. He did it at age 18 and wrote essays on it too, so it's not like he just filled in an empty space - he actually had to reject a previous belief, which he had given a lot of thought.

http://lesswrong.com/lw/u2/the_sheer_folly_of_callow_youth/

Constructing a response after reading his response to critics would be good. The core reservations he presents seem to be:

If you can say that there's no correct morality, why can't you say that there's no correct math, or no correct science?

If there are two different visions of well-being, isn't this just a small difference? ("This is akin to trying to get me to follow you to the summit of Everest while I want to drag you up the slopes of K2" [...] "In any case, I suspect that radically disjoint peaks are unlikely to exist for human beings.")

And he presents some rationalizations that seem to be ingrained:

"Is it unscientific to value health and seek to maximize it within the context of medicine? No. Clearly there are scientific truths to be known about health." That is, he conflates "there are truths" with "there is a truth of the sort I want."

Later he conflates 'an ideal world by my egalitarian values is possible' with 'so don't bother thinking about other people's values,' specifically citing selfish values. This is the logically even worse version of objection #2.

Plenty of people disagree with SH without saying there's no correct morality...

Paraphrasing Sam:

If there are two different visions of well-being, isn't this just a small difference?

Not between the Deathists and me.

This is an interesting way to set up a lottery while promoting one's ideas.

note how few people offer large sums of money for the privilege of being disproven.

The usual reason for doing so is signalling: look how sure I am of my ideas, I am willing to put my money on the line. Most people who see this offer (aptly called a "challenge") won't hear "he would be happy to be disproven, what a rational fellow"; they will hear "he is sure he can't be disproven, what a confident fellow".

I haven't read Harris's book and don't know anything about it. However, I do feel that a genuine "challenge" should have a formal verification procedure for proposed answers, or at least a third party to judge them. Judging answers by whether they convince Harris himself requires extremely high confidence in his skills as a rationalist, even apart from his incentives.

On the other hand, what purpose is served by publishing the best answer even if it fails to convince him? He may end up publishing an answer that he thinks is completely wrong (and necessarily saying so), and maybe most other people will think it's wrong too (but that some other answer is right). The submitter will be rewarded with $1000 even though he hasn't convinced anyone, and nobody will change their opinions.

The error with Harris' main point is hard to pin down, because it seems to me that his main fault is that his beliefs regarding morality aren't clearly worked out in his own head. This can be seen from his confusion as to why anyone would find his beliefs problematic, and his tendency to hand-wave criticism with claims that "it's obvious".

Interpreted favourably, I agree with his main point, that questions about morality can be answered using science, as moral claims are not intrinsically different from any other claim (no separate magisteria, please). Basically, what all morality boils down to is that people have certain preferences, and these preferences determine whether certain actions and outcomes are desirable or not (to those people, that is). I agree with Harris that the latter can be deduced logically, or determined scientifically. Furthermore, the question of what people's preferences are in the first place can be examined using, for example, neuroscience. In this sense, questions of morality can be entirely answered scientifically, assuming they are formulated in a meaningful way (otherwise the answer is mu).

The problem is that Harris' main position can also be taken to mean that science can determine what preferences people ought to have in the first place, which is not possible as this is circular, and this is the main source of criticism he receives. Unfortunately Harris does not seem to get this as he never addresses the issue: in his example of super-intelligent aliens, for instance, he states that it is "obviously" right for us to let them eat us if this will increase total utility. This implies that everyone should feel compelled to maximise total utility, though he supplies no argument as to why this should be the case. Unfortunately I am not confident I could convince Sam Harris of his own confusion, however.

I suspect that a winning letter to Sam Harris would interpret his position favourably, agree with him on most points, and then raise a compelling new point that he has not yet thought of that causes him to change his mind slightly but which does not address the core of his problem.

The error with Harris' main point is hard to pin down, because it seems to me that his main fault is that his beliefs regarding morality aren't clearly worked out in his own head.

I think his beliefs are worked out and make sense, but aren't articulated well. What he's really doing is trying to replace morality-speak with a new, slightly different and more homogeneous way of speaking in order to facilitate scientific research (i.e., a very loose operationalization) and political cooperation (i.e., a common language).

But, I gather, he can't emphasize that point because then he'll start sounding like a moral anti-realist, and even appearing to endorse anything in the neighborhood of relativism will reliably explode most people's brains. (The realists will panic and worry we have to stop locking up rapists if we lose their favorite Moral System. The relativists will declare victory and take this metaphysical footnote as a vindication of their sloppy, reflectively inconsistent normative talk.)

The problem is that Harris' main position can also be taken to mean that science can determine what preferences people ought to have in the first place, which is not possible as this is circular, and this is the main source of criticism he receives. Unfortunately Harris does not seem to get this as he never addresses the issue

This is not true. He recognizes this point repeatedly in the book and in follow-ups, and his response is simply that it doesn't matter. He's never claimed to have a self-justifying system, nor does he take it to be a particularly good argument against disciplines that can't achieve the inconsistent goal of non-circularly justifying themselves.

Check out his response to critics. That should clarify a lot.

In his example of super-intelligent aliens, for instance, he states that it is "obviously" right for us to let them eat us if this will increase total utility. This implies that everyone should feel compelled to maximise total utility, though he supplies no argument as to why this should be the case.

What do you mean by 'utility' here? If 'utility' is just a measure of how much something satisfies our values, then the obviousness seems a lot less mysterious.

I suspect that a winning letter to Sam Harris would interpret his position favourably, agree with him on most points, and then raise a compelling new point that he has not yet thought of that causes him to change his mind slightly but which does not address the core of his problem.

Yeah, I plan to do basically that. (Not just as a tactic, though. I do agree with him on most of his points, and I do disagree with him on a specific just-barely-core issue.)

I did read his response to critics, in addition to skimming through his book. As far as I remember, his position really does seem vague and inconsistent, and he never addresses things like the supposed is-ought problem properly. He just handwaves it by saying it does not matter, as you point out, but this is not what I would call addressing it properly.

Utility always means satisfying preferences, as far as I know. The reason his answer is not obvious is that it assumes that what is desirable for the aliens must necessarily be desirable for us. In other words, it assumes a universal morality rather than a merely "objective" one (he assumes a universally compelling moral argument, to put it in less wrong terms). My greatest frustration in discussing morality is that people always confuse the ability to handle a moral issue objectively with being able to create a moral imperative that applies to everyone, and Harris seems guilty of this here as well.

he never addresses things like the supposed is-ought problem properly. He just handwaves it by saying it does not matter, as you point out, but this is not what I would call addressing it properly.

I don't know. What more is there to say about it? It's a special case of the fact that for any sets of sentences P and Q, P cannot be derived from Q if P contains non-logical predicates that are absent from Q and we have no definition of those predicates in terms of Q-sentences (barring the degenerate case where P is a logical truth). All non-logical words work in the same way, in that respect.
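For the curious, the reinterpretation argument behind that fact can be sketched in a few lines of standard model theory (the notation is mine, not Hume's or Harris's):

    % Let Q be a set of premises in which the predicate O ("ought") does not
    % occur, and let P be a conclusion containing O. If some model M of Q can
    % reinterpret O so that P comes out false, the reinterpretation cannot
    % disturb any sentence of Q (O never occurs there), so M witnesses that
    % Q does not semantically entail P; by completeness, Q cannot derive P.
    \[
      O \notin \mathrm{Sig}(Q)
      \;\wedge\;
      \exists\, \mathcal{M} \models Q \ \text{with}\ \mathcal{M} \not\models P
      \quad\Longrightarrow\quad
      Q \nvdash P .
    \]
    % The "logical truth" caveat above is precisely the case in which no
    % falsifying reinterpretation of O exists.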

The interesting question isn't Hume's is/ought distinction, since it's just one of a billion other distinctions of the same sort, e.g., the penguin/economics distinction, and the electron/bacon distinction. Rather, the interesting question is Moore's Open Question argument, which is an entirely distinct point and can be adequately answered by: "Insofar as this claim about the semantics of 'morality' is right, it seems likely that an error theory of morality is correct; and insofar as it is useful to construct normative language that is reducible to descriptions, we will end up with a language that does not yield an Open Question in explaining why that is what's 'moral' rather than something else."

I agree Harris should say that somewhere clearly. But this is all almost certainly true given his views; he just apparently isn't interested in hashing it out. TML is a book on the rhetoric and pragmatics of science (and other human collaborations), not on metaphysics or epistemology.

The reason his answer is not obvious is that it assumes that what is desirable for the aliens must necessarily be desirable for us.

Ideally desirable, not actually desired.

In other words, it assumes a universal morality rather than a merely "objective" one (he assumes a universally compelling moral argument, to put it in less wrong terms).

No. See his response to the Problem of Persuasion; he doesn't care whether the One True Morality would persuade everyone to be perfectly moral; he assumes it won't. His claim about aliens is an assertion about his equivalent of our coherently extrapolated moral volition; it's not a claim about what arguments we would currently find compelling.

If you're willing to satisfy my curiosity, what's that specific issue? Would an argument falsifying his position on that issue amount to a refutation of the central argument of the book? If not, wouldn't your essay just be ineligible?

The issue I have in mind wasn't explicitly cited in the canonical summary he gives in the FAQ, but I asked Sam personally and he said the issue qualifies as 'central'. I can give you more details in February. :)

Well, now he has another reason not to change his mind. Seems unwise, even if he's right about everything.

I'd be willing to give this a shot, but his thesis, as stated, seems very slippery (I haven't read the book):

"Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe."

This needs to be reworded but appears to be straightforwardly true and uncontroversial: morality is connected to well-being and suffering.

"Conscious minds and their states are natural phenomena, fully constrained by the laws of Nature (whatever these turn out to be in the end)."

True and uncontroversial on a loose enough interpretation of "constrained".

"Therefore, there must be right and wrong answers to questions of morality and values that potentially fall within the purview of science."

This is the central claim in the thesis - and the most (only?) controversial one - but he's already qualifying it with "potentially." I'm guessing any response of his will turn on (a) the fact that he's only saying it might be the case and (b) arbitrarily broadening the definition of science. Nevertheless, moral questions aren't (even potentially) empirical, since they're obviously seeking normative and not factual answers. But given that this is obvious, it's hard to imagine that one could change his mind. It's rather like being invited to challenge the thesis of someone who claims scientific theories are works of fiction. You've got your work cut out when somebody has found themselves that far off the beaten path. I suspect the argument of the book runs: this philosophical thesis is misguided, that philosophical thesis is misguided, etc., science is good, we can get something that sort of looks like morality from science, so science - i.e., he takes himself to be explaining morality when he's actually offering a replacement. That's very hard to argue against. I think, at best, you're looking at $2000 for saying something he finds interesting and new, but that's very subjective.

"On this view, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life."

Assuming "what they deem important in life" is supposed to be parsed as "morality" then this appears to follow from his thesis.

"Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe."

So if we couldn't suffer, we wouldn't have any values? I don't think so.

It seems like he's groping towards the concept of CEV?

I got that impression as well. And to be honest, I haven't ever seen a good argument for why CEV has any fixed points in morality-space. Or rather, if fixed points exist, it's not immediately obvious to me why two distinct CEV-flows couldn't result in mutually irreconcilable value systems.

Which is why Sam's argument isn't super convincing to me.