Followup to: The Moral Void

Three people, whom we'll call Xannon, Yancy and Zaire, are separately wandering through the forest; by chance, they happen upon a clearing, meeting each other.  Introductions are performed.  And then they discover, in the center of the clearing, a delicious blueberry pie.

Xannon:  "A pie!  What good fortune!  But which of us should get it?"

Yancy:  "Let us divide it fairly."

Zaire:  "I agree; let the pie be distributed fairly.  Who could argue against fairness?"

Xannon:  "So we are agreed, then.  But what is a fair division?"

Yancy:  "Eh?  Three equal parts, of course!"

Zaire:  "Nonsense!  A fair distribution is half for me, and a quarter apiece for the two of you."

Yancy:  "What?  How is that fair?"

Zaire:  "I'm hungry, therefore I should be fed; that is fair."

Xannon:  "Oh, dear.  It seems we have a dispute as to what is fair.  For myself, I want to divide the pie the same way as Yancy.  But let us resolve this dispute over the meaning of fairness, fairly: that is, giving equal weight to each of our desires.  Zaire desires the pie to be divided {1/4, 1/4, 1/2}, and Yancy and I desire the pie to be divided {1/3, 1/3, 1/3}.  So the fair compromise is {11/36, 11/36, 14/36}."

Zaire:  "What?  That's crazy.  There's two different opinions as to how fairness works—why should the opinion that happens to be yours, get twice as much weight as the opinion that happens to be mine?  Do you think your theory is twice as good?  I think my theory is a hundred times as good as yours!  So there!"

Yancy:  "Craziness indeed.  Xannon, I already took Zaire's desires into account in saying that he should get 1/3 of the pie.  You can't count the same factor twice.  Even if we count fairness as an inherent desire, why should Zaire be rewarded for being selfish?  Think about which agents thrive under your system!"

Xannon:  "Alas!  I was hoping that, even if we could not agree on how to distribute the pie, we could agree on a fair resolution procedure for our dispute, such as averaging our desires together.  But even that hope was dashed.  Now what are we to do?"

Yancy:  "Xannon, you are overcomplicating things.  1/3 apiece.  It's not that complicated.  A fair distribution is an even split, not a distribution arrived at by a 'fair resolution procedure' that everyone agrees on.  What if we'd all been raised in a society that believed that men should get twice as much pie as women?  Then we would split the pie unevenly, and even though no one of us disputed the split, it would still be unfair."

Xannon:  "What?  Where is this 'fairness' stored if not in human minds?  Who says that something is unfair if no intelligent agent does so?  Not upon the stars or the mountains is 'fairness' written."

Yancy:  "So what you're saying is that if you've got a whole society where women are chattel and men sell them like farm animals and it hasn't occurred to anyone that things could be other than they are, that this society is fair, and at the exact moment where someone first realizes it shouldn't have to be that way, the whole society suddenly becomes unfair."

Xannon:  "How can a society be unfair without some specific party who claims injury and receives no reparation?  If it hasn't occurred to anyone that things could work differently, and no one's asked for things to work differently, then—"

Yancy:  "Then the women are still being treated like farm animals and that is unfair.  Where's your common sense?  Fairness is not agreement, fairness is symmetry."

Zaire:  "Is this all working out to my getting half the pie?"

Yancy:  "No."

Xannon:  "I don't know... maybe as the limit of an infinite sequence of meta-meta-fairnesses..."

Zaire:  "I fear I must accord with Yancy on one point, Xannon; your desire for perfect accord among us is misguided.  I want half the pie.  Yancy wants me to have a third of the pie.  This is all there is to the world, and all there ever was.  If two monkeys want the same banana, in the end one will have it, and the other will cry morality.  Who gets to form the committee to decide the rules that will be used to determine what is 'fair'?  Whoever it is, got the banana."

Yancy:  "I wanted to give you a third of the pie, and you equate this to seizing the whole thing for myself?  Small wonder that you don't want to acknowledge the existence of morality—you don't want to acknowledge that anyone can be so much less of a jerk."

Xannon:  "You oversimplify the world, Zaire.  Banana-fights occur across thousands and perhaps millions of species, in the animal kingdom.  But if this were all there was, Homo sapiens would never have evolved moral intuitions.  Why would the human animal evolve to cry morality, if the cry had no effect?"

Zaire:  "To make themselves feel better."

Yancy:  "Ha!  You fail at evolutionary biology."

Xannon:  "A murderer accosts a victim, in a dark alley; the murderer desires the victim to die, and the victim desires to live.  Is there nothing more to the universe than their conflict?  No, because if I happen along, I will side with the victim, and not with the murderer.  The victim's plea crosses the gap of persons, to me; it is not locked up inside the victim's own mind.  But the murderer cannot obtain my sympathy, nor incite me to help murder.  Morality crosses the gap between persons; you might not see it in a conflict between two people, but you would see it in a society."

Yancy:  "So you define morality as that which crosses the gap of persons?"

Xannon:  "It seems to me that social arguments over disputed goals are how human moral intuitions arose, beyond the simple clash over bananas.  So that is how I define the term."

Yancy:  "Then I disagree.  If someone wants to murder me, and the two of us are alone, then I am still in the right and they are still in the wrong, even if no one else is present."

Zaire:  "And the murderer says, 'I am in the right, you are in the wrong'.  So what?"

Xannon:  "How does your statement that you are in the right, and the murderer is in the wrong, impinge upon the universe—if there is no one else present to be persuaded?"

Yancy:  "It licenses me to resist being murdered; which I might not do, if I thought that my desire to avoid being murdered was wrong, and the murderer's desire to kill me was right.  I can distinguish between things I merely want, and things that are right—though alas, I do not always live up to my own standards.  The murderer is blind to the morality, perhaps, but that doesn't change the morality.  And if we were both blind, the morality still would not change."

Xannon:  "Blind?  What is being seen, what sees it?"

Yancy:  "You're trying to treat fairness as... I don't know, something like an array-mapped 2-place function that goes out and eats a list of human minds, and returns a list of what each person thinks is 'fair', and then averages it together.  The problem with this isn't just that different people could have different ideas about fairness.  It's not just that they could have different ideas about how to combine the results.  It's that it leads to infinite recursion outright—passing the recursive buck.  You want there to be some level on which everyone agrees, but at least some possible minds will disagree with any statement you make."

Xannon:  "Isn't the whole point of fairness to let people agree on a division, instead of fighting over it?"

Yancy:  "What is fair is one question, and whether someone else accepts that this is fair is another question.  What is fair?  That's easy: an equal division of the pie is fair.  Anything else won't be fair no matter what kind of pretty arguments you put around it.  Even if I gave Zaire a sixth of my pie, that might be a voluntary division but it wouldn't be a fair division.  Let fairness be a simple and object-level procedure, instead of this infinite meta-recursion, and the buck will stop immediately."

Zaire:  "If the word 'fair' simply means 'equal division' then why not just say 'equal division' instead of this strange additional word, 'fair'?  You want the pie divided equally, I want half the pie for myself.  That's the whole fact of the matter; this word 'fair' is merely an attempt to get more of the pie for yourself."

Xannon:  "If that's the whole fact of the matter, why would anyone talk about 'fairness' in the first place, I wonder?"

Zaire:  "Because they all share the same delusion."

Yancy:  "A delusion of what?  What is it that you are saying people think incorrectly the universe is like?"

Zaire:  "I am under no obligation to describe other people's confusions."

Yancy:  "If you can't dissolve their confusion, how can you be sure they're confused?  But it seems clear enough to me that if the word fair is going to have any meaning at all, it has to finally add up to each of us getting one-third of the pie."

Xannon:  "How odd it is to have a procedure of which we are more sure of the result than the procedure itself."

Zaire:  "Speak for yourself."

 

Part of The Metaethics Sequence

Next post: "Moral Complexities"

Previous post: "Created Already In Motion"

Comments


My favorite answer to this problem comes from "How to Cut a Cake: And Other Mathematical Conundrums." The solution in the book was that "fair" means "no one has cause to complain." It doesn't work in the case here, since one party wants to divide the pie unevenly, but if you were trying to make even cuts, it works. The algorithm was:

  1. Make a cut from the center to the edge.
  2. Have one person hold the knife over that cut.
  3. Slowly rotate the knife (or the pie) at, say, a few degrees per second.
  4. At any time, any person (including the one holding the knife) can say "cut." A cut is made there, and the speaker gets the thus-cut piece.

At the end, anyone who thinks they got too little (meaning, someone else got too much) could have said "cut" before that other person's piece got too big.
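For concreteness, the steps above can be sketched as a toy simulation. It assumes risk-neutral players who call "cut" exactly when the growing slice reaches a fair share of the remaining pie in their own eyes; the player names and the uniform valuations in the example are illustrative assumptions, not part of the original comment.

```python
# Toy simulation of the rotating-knife procedure described above.
# Each player values arcs of the pie via a per-step density; a player
# shouts "cut" as soon as the current slice is worth 1/m of the
# remaining pie to them (m = players still waiting for a piece).

def run_moving_knife(densities, steps=3600):
    """densities: dict name -> list of `steps` per-step values.
    Returns dict name -> fraction of the pie (by arc) each player got."""
    remaining = list(densities)          # players still without a piece
    start = 0                            # knife position (step index)
    shares = {}
    while len(remaining) > 1:
        m = len(remaining)
        # Value each remaining player assigns to the as-yet-uncut pie.
        totals = {p: sum(densities[p][start:steps]) for p in remaining}
        slice_value = {p: 0.0 for p in remaining}
        for pos in range(start, steps):
            for p in remaining:
                slice_value[p] += densities[p][pos]
            # First player whose slice reaches a fair share calls "cut".
            callers = [p for p in remaining
                       if slice_value[p] >= totals[p] / m - 1e-12]
            if callers:
                winner = callers[0]
                shares[winner] = (pos + 1 - start) / steps
                remaining.remove(winner)
                start = pos + 1
                break
    # The last player takes whatever arc is left.
    shares[remaining[0]] = (steps - start) / steps
    return shares

# With identical (uniform) valuations, everyone calls at the same
# moment, so the pie ends up divided into equal thirds:
uniform = {p: [1.0] * 3600 for p in ("Xannon", "Yancy", "Zaire")}
print(run_moving_knife(uniform))
```

Note that the "no one has cause to complain" property only holds for each player's own valuation: with uneven densities, a player who keeps silent past their threshold has, by their own lights, waived the complaint.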

I was expecting Xannon and Yancy to get into an exchange, only to find that Zaire had taken half the pie while they were talking. Xannon is motivated by consensus, Yancy is motivated by fairness, and Zaire is motivated by pie. I know who I bet on to end up with more pie.

(The cake was an honest mistake, not a lie.)

That is a very interesting dialogue.

It does not seem to come to any definite conclusion, instead simply presenting arguments and leaving the three participants in the dialogue with beliefs that are largely unchanged from their original position.

I am unable to come up with anything of substance to add, other than praise, but I feel compelled to comment anyway.

Why doesn't Zaire just divide himself in half, let each half get 1/4 of the pie, then merge back together and be in possession of half of the pie?

Or, Zaire might say: Hey guys, my wife just called and told me that she made a blueberry pie this morning and put it in this forest for me to find. There's a label on the bottom of the plate if you don't believe me. Do you still think 'fair' = 'equal division'?

Or maybe Zaire came with his dog, and claims that the dog deserves an equal share.

I appreciate the distinction Eliezer is trying to draw between the object level and the meta level. But why the assumption that the object-level procedure will be simple?

Okay, how does standard political philosophy say you should fairly / rightly construct an ultrapowerful superintelligence (not to be confused with a corruptible government) that can compute moral and metamoral questions only given a well-formed specification of what is to be computed?

After you've carried out these instructions, what's the standard reply to someone who says, "Friendly to who?" or "So you get to decide what's Friendly"?

Eliezer, what if they are all poisoned, and the only antidote is a full blueberry pie? is the obvious fair division still 1/3 to each?

What if only one is poisoned? Is it fair for the other two to get some of the (delicious) antidote?

First: let's assume there is an answer out there to moral questions, in a form that is accessible to a superintelligence, and let's just assume the hard problem away

Let's not. See, this is what I mean by saying that political philosophy is written for humans by humans.

Your other answer, "ideal democracy", bears a certain primitive resemblance to this, as you'd know if you were familiar with the Friendliness literature...

Okay, sorry about that, just emphasizing that it's not like I'm making all this up as I go along; and also, that there's a hell of a lot of literature out there on everything, but it isn't always easy to adapt to a sufficiently different purpose.

Nanani, I think it has more to do with my having just finished Portal. Fixed.

Certain desires help the human race, other desires hurt the human race, and these can be discovered in the same way we discover any other facts about the universe.

You simply passed the recursive buck to "help" and "hurt". I will let you take for granted the superintelligence's knowledge of, or well-calibrated probability distribution over, any empirical truth about consequences; but when it comes to the valuation of those consequences in terms of "helping" or "hurting" you must tell me how to compute it, or run a computation that computes how to compute it.

Right, but those questions are responsive to reasons too. Here's where I embrace the recursion. Either we believe that ultimately the reasons stop -- that is, that after a sufficiently ideal process, all of the minds in the relevant mind design space agree on the values, or we don't. If we do, then the superintelligence should replicate that process. If we don't, then what basis do we have for asking a superintelligence to answer the question? We might as well flip a coin.

Of course, the content of the ideal process is tricky. I'm hiding the really hard questions in there, like what counts as rationality, what kinds of minds are in the relevant mind design space, etc. Those questions are extra-hard because we can't appeal to an ideal process to answer them on pain of circularity. (Again, political philosophy has been struggling with a version of this question for a very long time. And I do mean struggling -- it's one of the hardest questions there is.) And the best answer I can give is that there is no completely justifiable stopping point: at some point, we're going to have to declare "these are our axioms, and we're going with them," even though those axioms are not going to be justifiable within the system.

What this all comes down to is that it's all necessarily dependent on social context. The axioms of rationality and the decisions about what constitute relevant mind-space for any such superintelligence would be determined by the brute facts of what kind of reasoning is socially acceptable in the society that creates such a superintelligence. And that's the best we can do.

Paul: Responsiveness to which reasons? For every mind in mind design space that sees X as a reason to value Y, there are other possible minds that see X as a reason to value ~Y.

The answer to "Friendly to who?" had damn well better always be "Friendly to the author and by proxy those things the author wants." Otherwise leaving aside what it actually is friendly to, it was constructed by a madman.

Optimistically, I would say that if the murderer perfectly knew all the relevant facts, including the victim's experience, ve wouldn't do it

The murderer may have all the facts, understand exactly what ve is doing and what the experience of the other will be, and just decide that ve doesn't care. Which fact is ve not aware of? Ve may understand all the pain and suffering it will cause, ve may understand that ve is wiping out a future for the other person and doing something that ve would prefer not to be on the receiving end of, may realize that it is behavior that if universalized would destroy society, may realize that it lessens the sum total of happiness or whatever else, may even know that "ve should feel compelled not to murder" etc. But at the end of the day, ve still might say, "regardless of all that, I don't care, and this is what I want to do and what I will do".

There is a conflict of desire (and of values) here, not a difference of fact. Having all the facts is one thing. Caring about the facts is something altogether different.

--

On the question of the bedrock of fairness, at the end of the day it seems to me that one of the two scenarios will occur:

(1) all parties happen to agree on what the bedrock is, or they are able to come to an agreement.

(2) all parties cannot agree on what the bedrock is. The matter is resolved by force with some party or coalition of parties saying "this is our bedrock, and we will punish you if you do not obey it".

And the universe itself doesn't care one way or the other.

This dialogue leads me to conclude that "fairness" is a form of social lubricant that ensures our pies don't get cold while we're busy arguing. The meta-rule for fairness rules would then be: (1) fast; (2) easy to apply; and (3) everybody gets a share.

I don't have an argument here; rather, I just want to see if I understand each position taken in the dialogue. After all, it would be a dreadful waste of time to argue one way or the other against our three musketeers while completely misunderstanding some key point. As far as I can tell, these are the essential arguments being made:

Yancy's position: that fairness is a rational (mathematical) system. There is no moral factor; rather than "to each according to his need," it is "to each according to the equation." This presumes fairness is a universal, natural system which people must follow, uncomplaining; any corruption of the system would be unfair, any bending or breaking of the rules renders them useless.

Zaire's position, that it is fair for individuals to define the product of fairness, handily illustrates this; his conception of fairness breaks down as soon as another conception is introduced. Fairness is entirely relative.

Xannon's initial position is that the product of fairness can be rationally derived from individually relative definitions of fairness; that fairness itself is the sum of differing concepts of fair.

Xannon revises this position, in that fairness is derived from the moral rights of a group and has an intrinsic, understood value.  Those who do not inherently comprehend this value are not moral and do not belong to the group, like the murderer.  Of course, this assumes that passersby, like Xannon, would side with the victim.  Not only could they side with the assailant, they could even refuse to become involved.  If the victim is "licensed" to resist being murdered, is the victim likewise licensed to kill the murderer in self-defense, and is that fair?  The question of "who started it?" begins a new problem.  If the observer is joined by five others who think that the victim, who killed his attacker in self-defense, must be put to death, is that also fair?  This position argues for the existence of absolute morality, but only achieves a weak implication of moral relativity.

This is at odds with Xannon's initial position; if Zaire wants more of the pie than Yancy, but Xannon sides with Yancy, Xannon thinks it is fair to average their desires. However, Xannon would not average the desire of the murderer, the victim, and the passerby. If the murderer is presumed in the wrong, then Zaire is also presumed in the wrong. Therefore, it is unfair to attempt to combine Zaire's desire with Yancy and Xannon's.

In essence, Xannon's position is ultimately not far removed from Zaire's; where Zaire believes in the individual's right to define fairness, Xannon believes in the group's right to the same. Both believe in a moral right to inflict their own definition of fairness on the other. Yancy, in believing a universal system of fairness can be applied, would attempt the same. Further, even if Xannon agreed Yancy's proposal was fair, it would not be for the same reason, as Xannon believes fairness is derived from moral right; therefore, arriving at a fair decision through the amoral application of a rational system may not be "morally fair" to Xannon. There is no resolution to be found here.

I would like to see the question of the purpose of fairness addressed more comprehensively. If fairness as a system is not effective, why does it exist? If it is an artificial social construction, it must have an agreed-upon definition; if it is an evolved, biological system, it must have a physical basis; if it is a universal rule, there must be evidence of it.

This post reminds me a lot of DialogueOnFriendliness.

There's at least one more trivial mistake in this post:

Is their nothing more to the universe than their conflict?
s/their/there/

Constant wrote:

Arguably the difficulty the three have in coming to a conclusion is related to the fact that none of the three has anything close to a legitimate claim on the pie.
If you modify the scenario by postulating that the pie is accompanied by a note reading "I hereby leave this pie as a gift to whomever finds it. Enjoy. -- Flying Pie-Baking Monster", how does that make the problem any easier?

And then they discover, in the center of the clearing, a delicious blueberry pie.

If the pie is edible then it was recently made and placed there. Whoever made it is probably close at hand. That person has a much better claim on the pie than these three and is therefore most likely rightly considered the owner. Let the owner of the pie decide. If the owner does not show up, leave the pie alone. Arguably the difficulty the three have in coming to a conclusion is related to the fact that none of the three has anything close to a legitimate claim on the pie.

Gowder, I'm talking to the people who say unto me, "Friendly to who?" and "Oh, so you get to say what 'Friendly' means." I find that the existing literature rarely serves my purposes. In this case I'm driving at a distinction between the object level and the meta level, and the notion of bedrock (the Buck Stops Immediately). Does the political philosophy go there? - for I am not wholly naive, but of course I have only read a tiny fraction of what's out there. I fear that much political philosophy is written for humans by humans.

Roland, I confess that I'd intended the original reading of the dialogue as simple greed on Zaire's part, but your reading is also interesting... I would still tend to say that 1/3 apiece is the fair division, but that either of the other two are welcome to donate portions of pie to Zaire; the resulting division might perhaps be termed utilitarian. The morally interesting situation is when Xannon thinks Zaire deserves more pie but Yancy disagrees.

I would still tend to say that 1/3 apiece is the fair division

I'm curious why you personally just chose to use the norm-connoting term "fair" in place of the less loaded term "equal division" ... what properties does equal division have that make you want to give it special normative consideration? I could think of some, but I'm particularly interested in what your thoughts are here!

Does the point of the story have anything to do with the object desire switching from a pie to a cake and back again?

Does the point of the story have anything to do with the object desire switching from a pie to a cake and back again?

He realized that anyway, this cake is great. It's so delicious and moist.

Why are you still talking? There's science to do.

And make a neat gun?

I wish I could. I did not have access to enough qualified test subjects. But LW has plenty of people who are still alive so I'm GLaD to be here!

At first I tend to side with Zaire. The pie should be divided according to everyone's needs. But what if Zaire has a bigger body and generally needs to eat more? Should he always get more? Should the others receive less and be penalized because Zaire happens to be bigger? This is not easy, sigh...

Suppose that 98% of humans, under 98% of the extrapolated spread, would both choose a certain ordering of arguments, and also claim that this is the uniquely correct ordering. Is this sufficient to just go ahead and label that ordering the rational one? If you refuse to answer that question yourself, what is the procedure that answers it?

Again, this is why it's irreducibly social. If there isn't a procedure that yields a justified determinate answer to the rationality of that order, then the best we can do is take what is socially accepted at the time and in the society in which such a superintelligence is created. There's nowhere else to look.

Things like the ordering of arguments are just additional questions about the rationality criteria

...which problem you can't hand off to the superintelligence until you've specified how it decides 'rationality criteria'. Bootstrapping is allowed, skyhooking isn't. Suppose that 98% of humans, under 98% of the extrapolated spread, would both choose a certain ordering of arguments, and also claim that this is the uniquely correct ordering. Is this sufficient to just go ahead and label that ordering the rational one? If you refuse to answer that question yourself, what is the procedure that answers it?

That's a really fascinating question. I don't know that there'd be a "standard" answer to this -- were the questions taken up, they'd be subject to hot debate.

Are we specifying that this ultrapowerful superintelligence has mind-reading power, or the closest non-magical equivalent in the form of access to every mental state that an arbitrary individual human has, even stuff that now gets lumped under the label "qualia"/ability to perfectly simulate the neurobiology of such an individual?

If so, then two approaches seem defensible to me. First: let's assume there is an answer out there to moral questions, in a form that is accessible to a superintelligence, and let's just assume the hard problem away, viz., that the questioners know how to tell the superintelligence where to look (or the superintelligence can figure it out itself).

We might not be able to produce a well-formed specification of what is to be computed when we're talking about moral questions (it's easy to think that any attempt to do so would rig the answer in advance -- for example, if you ask it for universal principles, you're going to get something different from what you'd get if you left the universality variable free...). But if the superintelligence could simulate our mental processes such that it could tell what it is that we want (for some appropriate values of we, like the person asking or the whole of humanity if there was any consensus -- which I doubt), then in principle it could simply answer that by declaring what the truth of the matter is with respect to that which it has determined that we desire.

That assumes the superintelligence has access to moral truth, but once we do that, I think the standard arguments against "guardianship" (e.g. the first few chapters of Robert Dahl, Democracy and its Critics) fail, in that if they're true -- if people are really better off deciding for themselves (the standard argument), and making people better off is what is morally correct, then we can expect the superintelligence to return "you figure it out." And then the answer to "friendly to who" or "so you get to decide what's friendly" is simply to point to the fact that the superintelligence has access to moral truth.

The more interesting question perhaps is what should happen if the superintelligence doesn't have access to moral truth (either because there is no such thing in the ordinary sense, or because it exists but is unobservable). I assume here that being responsive to reasons is an appropriate way to address moral questions (if not, all bets are off). Then the superintelligence loses one major advantage over ordinary human reasoning (access to the truth on the question), but not the other (while humans are responsive to reasons in a limited and inconsistent sense, the supercomputer is ideally responsive to reasons). For this situation, I think the second defensible outcome would be that the superintelligence should simulate ideal democracy. That is, it should simulate all the minds in the world, and put them into an unlimited discussion with one another, as if they were bayesians with infinite time. The answers it would come up with would be the equivalent to the most legitimate conceivable human decisional process, but better...

I'm pretty sure this is a situation that hasn't come under sustained discussion in the literature as such (in superintelligence terms -- though it has come up in discussions of benevolent dictators and the value of democracy), so I'm talking out my ass a little here, but drawing on familiar themes. Still, the argument defending these two notions -- especially the second -- isn't a blog comment, it's a series of long articles or more.

Eliezer, to the extent I understand what you're referencing with those terms, the political philosophy does indeed go there (albeit in very different vocabulary). Certainly, the question about the extent to which ideas of fairness are accessible at what I guess you'd call the object level are constantly treated. Really, it's one of the most major issues out there -- the extent to which reasonable disagreement on object-level issues (disagreement that we think we're obligated to respect) can be resolved on the meta-level (see Waldron, Law and Disagreement, and, for an argument that this leads into just the infinite recursion you suggest, at least in the case of democratic procedures, see the review of the same by Christiano, which google scholar will turn up easily).

I think the important thing is to separate two questions: 1. what is the true object-level statement, and 2. to what extent do we have epistemic access to the answer to 1? There may be an objectively correct answer to 1, but we might not be able to get sufficient grip on it to legitimately coerce others to go along -- at which point Xannon starts to seem exactly right.

Oh, hell, go read Ch. 5. of Hobbes, Leviathan. And both of Rawls's major books.

I mean, Xannon has been around for hundreds of years. Here's Hobbes, from previous cite.

But no one mans Reason, nor the Reason of any one number of men, makes the certaintie; no more than an account is therefore well cast up, because a great many men have unanimously approved it. And therfore, as when there is a controversy in account, the parties must by their own accord, set up for right Reason, the Reason of some Arbitrator, or Judge, to whose sentence they will both stand, or their controversie must either come to blowes, or be undecided, for want of a right Reason constituted by Nature...

Why not divide the pie equally among cells, which make up the agglomerations we call "persons"? And if there is a distinction between voluntary and fair so that Xannon and Yancy honestly couldn't comfortably eat another bite and gave extra to Zaire, would that be unfair?

We've already got a society in which living things are treated like farm animals, by which of course I speak of farm animals themselves. They are of course privileged over a more defenseless living being that they live as parasites off of, which are plants. Some Swiss officials are working to remedy that situation, but it causes me to wonder why we should privilege the selfish replicating and entropy-producing patterns known as "life" over non-life? Fairness as symmetry is underdetermined and at least to me not particularly compelling.

In a hypothetical gender-chattel society, how does the notion that it is unfair pay rent?

I notice that Eliezer asks the question why it is we discuss fairness and find it compelling. He does not answer that question. My guess is that it signals a desire for the cooperation of others and establishes a Schelling point by which you are willing to cooperate with your allies against defectors. Upon violating the implied promise of future cooperation your reputation would take a significant hit. As I use a pseudonym only relevant in a restricted domain, I am free to ignore the damage to my reputation and the willingness of others to cooperate with me rather than punish.

When people get this embroiled in philosophy, I usually start eating pie.

However as I don't like blueberries, we will split the pie into thirds fairly as Yancy wants, then I will give 1/6th of my pie to Zaire so he has the half he wants, and I'll leave the other 1/6th where I found it since A PIE WE FOUND IN THE FOREST AND KNOW NOTHING ABOUT ISN'T NECESSARILY MINE TO STEAL FROM.

Xannon and Yancy offer Zaire 1/3 of the pie, if he'll accept that.

If he won't, they split the pie 50-50 between them, and leave Zaire with nothing.

Does that sound fair?

Yancy: "If someone wants to murder me, and the two of us are alone, then I am still in the right and they are still in the wrong, even if no one else is present."

So the trick here is to realize that fairness is defined with respect to an expected or typical observer -- when you try to murder me, and I scream "Foul play!", the propositional content of my cry is that I expect any human who happens to pass by to agree with me and to help stop the murder. If nobody passes by this time, well, that's just my bad luck, and I can go to my grave with the small comfort that whatever behavior of mine led to my being murdered by you was at least marginally more adaptive than a behavior that would have led our fellow tribespeople to think that you were justified in murdering me, because then I would have had no chance of survival, as opposed to having my survival depend upon having the good luck of being observed in a timely fashion.

On the other hand, if it were impossible for a disinterested party to pass by, because you and I were the only two intelligent beings in the known world, or because all known intelligent beings would have a political reason to pick one side or the other in our little tiff, then fairness would have no propositional content, and would be meaningless. That seems like a small bullet to bite -- it seems plausible to think that fairness norms really did evolve -- and that people continue to make a big deal about the concept -- because there were and often are disinterested third parties that observe two-party conflicts (or disinterested fourth parties who observe three-party conflicts, and so on). If there weren't any such thing as disinterested parties, it really wouldn't make any sense to talk about "fairness" as an arrangement that's distinct from "equal division".