'Effective Altruism' as utilitarian equivocation.

Summary: The term 'effective altruist' invites confusion between 'the right thing to do' and 'the thing that most efficiently promotes welfare.' I think this creeping utilitarianism is a bad thing, and it should at least be made explicit. This is not to accuse anyone of deliberate deception.

Over the last year or so, the term 'Effective Altruist' has come into use. I self-identified as one on the LW survey, so I speak as a friend. However, I think there is a very big danger with the terminology.

The term 'Effective Altruist' was born out of the need for a label for those people who were willing to dedicate their lives to making the world a better place in rational ways, even if that meant doing counter-intuitive things, like working as an Alaskan truck driver. The previous term, 'really super awesome hardcore people', was indeed a little inelegant.

However, 'Effective Altruist' has a major problem: it refers to altruism, not ethics. Altruism may be a part of ethics (though the etymology of the term gives some concern), but it is not all there is to ethics. Value is complex. Helping people is good, but so is truth, and justice, and freedom, and beauty, and loyalty, and fairness, and honor, and fraternity, and tradition, and many other things.

A charity that very efficiently promoted beauty and justice, but only inefficiently produced happiness, would probably not be considered an EA organization. A while ago I suggested to [one of the leaders of the Center for Effective Altruism] the creation of a charity to promote promise-keeping. I didn't claim such a charity would be an optimal way of promoting happiness, and to them this was sufficient to show 1) that it was not EA, and hence 2) that it was inferior to EA things.

Such thinking involves either an equivocation or a concealed premise. If 'EA' is interpreted literally, as 'the primary/driving goal is to help others', then something's not being EA is insufficient to show it is not the best thing you could do - there is more to ethics and the good than altruism and promoting welfare. Failure to promote one dimension of the good doesn't mean you're not the optimal way of promoting their sum. On the other hand, if 'EA' is interpreted broadly, as being concerned with 'happiness, health, justice, fairness and/or other values', then merely failing to promote welfare/happiness does not mean a cause is not EA. Much EA discussion, like that on the popular Facebook group, equivocates between these two meanings.*

...Unless one thought that helping people was all there was to ethics, in which case this is not equivocation. As virtually all of CEA's leaders are utilitarians, it is plausible that this was the concealed premise in their argument. In that case there is no equivocation, but a different logical fallacy - that of an omitted premise - has been committed, and we should be just as wary as in the case of equivocation.

Unfortunately, utilitarianism is false, or at least not obviously true. Something can be the morally best thing to do while not being EA. Just because some utilitarians have popularized a term which cleverly equivocates between "promotes welfare" and "is the best thing" does not mean we should be taken in. Every fashionable ideology likes to blur the lines between its goals and its methods (is socialism about helping the working man or about state ownership of industry? Is libertarianism about freedom or about low taxes?) in order to make people who agree with the goals forget that there might be other means of achieving them.

There are two options: recognize 'EA' as referring to only a subset of morality, or recognize as 'EA' actions and organizations that are ethical through ways other than producing welfare/happiness.

* Yes, one might say that promoting X's honor thereby helped X, and thus there was no distinction. However, I think people who make this argument in theory are unlikely to observe it in practice - I doubt that there will be an EA organisation dedicated to pure retribution, even if it were both extremely cheap to promote and a part of ethics.

Comments


Hi,

Thanks for this post. The relationship between EA and well-known moral theories is something I've wanted to blog about in the past.

So here are a few points:

1. EA does not equal utilitarianism.

Utilitarianism makes many claims that EA does not make:

EA does not make a claim about whether it's obligatory or merely supererogatory to spend one's resources helping others; utilitarianism claims that it is obligatory.

EA does not make a claim about whether there are side-constraints - certain things that it is impermissible to do, even if it were for the greater good. Utilitarianism claims that it's always obligatory to act for the greater good.

EA does not claim that there are no other things besides welfare that are of value; utilitarianism does claim this.

EA does not make a precise claim about what promoting welfare consists in (for example, whether it's more important to give one unit of welfare to someone who is worse-off than someone who is better-off; or whether hedonistic, preference-satisfactionist or objective list theories of wellbeing are correct); any specific form of utilitarianism does make a precise claim about this.
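To make the worse-off point concrete, here is a minimal sketch (the weighting function g and the numbers are purely illustrative, not a claim about what any EA or any particular theory endorses):

    Utilitarian:  V = w_1 + w_2 + ... + w_n
    Prioritarian: V = g(w_1) + g(w_2) + ... + g(w_n), with g concave (e.g. g(w) = sqrt(w))

Under the prioritarian rule, raising someone from w = 1 to w = 4 adds g(4) - g(1) = 1 unit of value, while raising someone from w = 100 to w = 103 adds only about 0.15; under the plain sum the two changes count equally. EA as such takes no stand between these.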

Also, note that some eminent EAs are not even consequentialist-leaning, let alone utilitarian: e.g. Thomas Pogge (political philosopher) and Andreas Mogensen (Assistant Director of Giving What We Can) explicitly endorse a rights-based theory of morality; Alex Foster (epic London EtG-er) and Catriona MacKay (head of the GWWC London chapter) are both Christian (and presumably not consequentialist, though I haven't asked).

2. Rather, EA is something that almost every plausible moral theory is in favour of.

Almost every plausible moral theory thinks that promoting the welfare of others in an effective way is a good thing to do. Some moral theories hold that promoting the welfare of others is merely supererogatory, and others think that there are other values at stake. But EA is explicitly pro promoting welfare; it's not anti other things, and it doesn't claim that we're obligated to be altruistic, merely that it's a good thing to do.

3. Is EA explicitly welfarist?

The term 'altruism' suggests that it is. And I think that's fine. Helping others is what EAs do. Maybe you want to do other things effectively, but then it's not effective altruism - it's "effective justice", "effective environmental preservation", or something. Note, though, that you may well think that there are non-welfarist values - indeed, I would think that you would be mistaken not to act as if there were, on moral uncertainty grounds alone - but still be part of the effective altruism movement because you think that, in practice, welfare improvement is the most important thing to focus on.

So, to answer your dilemma:

EA is not trying to be the whole of morality.

It might be the whole of morality, if being EA is the only thing that is required of one. But it's not part of the EA package that EA is the whole of morality. Rather, it represents one aspect of morality - an aspect that is very important for those living in affluent countries, who have tremendous power to help others. The idea that we in rich countries should be trying to work out how to help others as effectively as possible, and then actually going ahead and doing it, is an important part of almost every plausible moral theory.

Thanks for the response. I agree with most of the territory covered, of course, but my objection here is to the framing, not the philosophy.

Maybe you want to do other things effectively, but then it's not effective altruism

So why does the website explicitly list fairness, justice and trying to do as much good as possible as EA goals in themselves? And why does user:weeatquince (whose identity we both know but I will not 'out' on a public forum) think that "actions and organizations that are ethical through ways other than producing welfare/happiness, as long as they apply rationality to doing good" are EA?

I think the simple answer is that "effective altruism" is a vague term. I gave you what I thought was the best way of making it precise. Weeatquince and Luke Muehlhauser wanted to make it precise in a different way. We could have a debate about which is the more useful precisification, but I don't think that here is the right place for that.

On either way of making the term precise, though, EA is clearly not trying to be the whole of morality, or to give any one very specific conception of morality. It doesn't make a claim about side-constraints; it doesn't make a claim about whether doing good is supererogatory or obligatory; it doesn't make a claim about the nature of welfare. EA is a broad tent, and deliberately so: very many different ethical perspectives will agree, for example, that it's important to find out which charities do the most to improve the welfare of those living in extreme poverty (as measured by QALYs etc.), and then to encourage people to give to those charities. If so, then we've got an important activity that people of very many different ethical backgrounds can get behind - which is great!

EA does not make a precise claim about what promoting welfare consists in (for example, whether it's more important to give one unit of welfare to someone who is worse-off than someone who is better-off; or whether hedonistic, preference-satisfactionist or objective list theories of wellbeing are correct); any specific form of utilitarianism does make a precise claim about this.

That's rather a double standard there. Any specific form of EA does make a precise claim about what should be maximized.

I'm not a memetic architect of the EA movement; but speaking as an observer it seems pretty clear that EA is about doing good by helping others. If you care about other things in addition to helping others, there's still a place for you in the movement, as long as you want to set aside a portion of your resources and help others as much as possible with it. On the other hand, if you aren't interested in the charities that most effectively help people, GiveWell is of no use to you and the EA movement doesn't seem very relevant to you either.

Your interests are broader than the interests held in common by the EA community. This shouldn't be a problem, but reading between the lines it looks like you're disappointed by the movement because they were dismissive of a charitable cause that's important to you. I think it would be best if the EA movement framed its distinctions in terms of "effective (at helping people) vs. not effective (at helping people) charities", instead of "good vs. bad charities", at least in public statements. It was my impression that they're doing a good job of this, but I could be wrong.

it seems pretty clear that EA is about doing good by helping others.

I half agree... except they explicitly include "justice, fairness and/or other values" in the movement. Perhaps Luke was not speaking on behalf of the movement there, but it was posted on their website without disclaimer.

So instead of being altruistic, you should be Friendly (in the AI sense).

"Hey, we're Friendly Optimizers."

"Hey, we're Effectively Friendly."

"Hey, we're about as Friendly as a Friendly AI would be if it were human."

'Friendly in the AI sense' is quite a compact summary of a precise (albeit non-constructive) definition of perfectly globally ethically correct behavior. Nice that you pointed it out. I'd hope for a more readable version. 'AI-friendly' will not do. Maybe 'total friendliness'? If it can be a goal for an AI, it can be an ideal for mere humans.

I think putting "altruist" in the name is more explicit about their utilitarianism than any disclaimer could possibly be.

I agree. Every non-sentientist value that you add to your pool of intrinsic values needs an exchange rate (which can be non-linear and complex and whatever) that implies you'd be willing to let people suffer in exchange for said value. This seems egoistic rather than altruistic because you'd be valuing your own preference for tradition more than you value the well-being of others for their own sake. If other people value tradition intrinsically, then preference utilitarianism will output that tradition counts to the extent that it satisfies people's preferences for it. This would be the utilitarian way to include "complexity of value".
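To spell out the exchange-rate point, a minimal sketch (the linear form and the weight λ are illustrative only; real exchange rates could be non-linear, as noted above):

    V = W + λ·T, where W is aggregate welfare, T measures how well tradition is upheld, and λ > 0

For any positive λ, a large enough gain in tradition ΔT outweighs a given loss of welfare ΔW (whenever λ·ΔT > ΔW), so anyone with such a value function is committed, at least in principle, to accepting some suffering as the price of tradition.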

If other people value tradition intrinsically, then preference utilitarianism will output that tradition counts to the extent that it satisfies people's preferences for it. This would be the utilitarian way to include "complexity of value".

If other people value tradition instead of helping other people, then the utilitarian thing to do is to get them to value helping other people more and tradition less. And on it goes, until you've tiled the universe with altruistic robots who only care about helping other altruistic robots (help other altruistic robots (help other altruistic robots (....(...(

Utilitarianism is fundamentally incompatible with value complexity.

This seems egoistic rather than altruistic because you'd be valuing your own preference for tradition more than you value the well-being of others for their own sake.

If you're a moral realist, you're not letting others suffer for the sake of your preference for tradition; you're letting them suffer for the sake of the moral value of tradition.

Otherwise, one could equally accuse the utilitarian of selfishly valuing their own preference for hedonism more than they value tradition for its own sake.

1. As an EA, I strongly resist any attempt to say that EA is utilitarianism, as I would see doing so as harmful for the movement, and it would exclude many of the non-utilitarian EAs I know.

EA is not utilitarianism. There is no reason why you cannot apply rationality to doing good, be an EA, and believe in Christian ethics / ethical anti-realism / virtue ethics / deontological ethics / etc. For example, I have an EA friend who would never kill one person to save 5 people, but believes strongly that we should research and give to the very best charities and so on. I see the above point as unequivocal.

2. I would recognize as 'EA' actions and organizations that are ethical through ways other than producing welfare/happiness, as long as they apply rationality to doing good. E.g. if someone truly believed in some Rawlsian concept of justice and supported a charity that best led to that idea. HOWEVER:

  • I have some arbitrarily ill-defined limits on what counts as good. E.g. I would never accept as an EA someone who believed that killing Jews is the good.
  • If I meet someone with a very strange view (e.g. that the best cause is saving snails), I would assume that they are being irrational rather than that they just have a different understanding of morality.

3. I think it is bad of CEA to push the OP away on utilitarian grounds. That said, I find it hard to conceive of any form of moral view that would lead someone to believe that the best action they could take would be to create a charity to promote promise-keeping, so I have some sympathy for CEA. (Also, I would be interested to hear an elaboration of why a promise-keeping charity is the best thing to do.)

I would recognize as 'EA' actions and organizations that are ethical through ways other than producing welfare/happiness, as long as they apply rationality to doing good.

You're a CEA employee, if I remember correctly? If so, your account of effective altruism seems rather different from Will's: "Maybe you want to do other things effectively, but then it's not effective altruism". This sort of mixed messaging is exactly what I was objecting to.

I would be interested to hear an elaboration of why a promise-keeping charity is the best thing to do

I'm far from certain it is. But as far as I'm aware no effort at all is put into it at present, so there could be very low-hanging fruit.

This sort of mixed messaging is exactly what I was objecting to

Firstly, could you elaborate on how what I said differs from what Will has said, please? I am fairly sure we both agree on what EA is.

You're a CEA employee

Incorrect, although I do volunteer for them in ways that help spread EA.

My sentiments exactly. Thank you for this well-written and badly-needed post. (Also for correctly understanding the meaning of "utilitarianism".)

I found I agreed with the summary, but I think for a different reason than the OP.

It would be more accurate to label what goes on around here in the name of Effective Altruism as Effective Utilitarianism, as an equal weighting between people is usually baked into the analysis. That doesn't have to be the case for Altruism.

Most people do not have identical values. This means that if you're trying to help a lot of people, you have to rely on things you can assess most easily. It's a lot harder to tell how much truth, beauty, or honor (ESPECIALLY honor) someone has access to than how much running water they have or whether they have malaria. I say we should concentrate on welfare and let people take care of their own needs for abstract morality, especially considering how much they will disagree on what they want.

Effective altruism doesn't say anything about general ethics, and I don't know why you're claiming it tries to. It's about how to best help the most people. It's about charity and reducing worldsuck. I think this is pretty obvious to everyone involved, and I don't think people are being fooled.

The issue is whether people like the OP and myself, who are interested in reducing worldsuck, but not necessarily in the same kind of way as utilitarians, belong in the EA community or not.

I'm quite confused about this. I think my values are pretty compatible with Yudkowsky's, but Yudkowsky seems to think he's an EA. On the other hand, my values seem incompatible with those of e.g. Paul Christiano, who I think everyone would agree clearly is an EA. Yet those two seem to act as though they believed their values were compatible with each other. Now both of them are as intelligent as I, maybe more. So if I update on their apparent beliefs about what sets of values are compatible, should I conclude that I'm an EA, despite my non-endorsement of utilitarianism or any other kind of extreme altruism, or should I instead conclude that I don't want Yudkowskian FAI after all, and start my own rival world-saving project?

Effective altruism doesn't say anything about general ethics

Except when it talks about fairness, justice and trying to do as much good as possible without restriction.

Value is complex. Helping people is good, but so is truth, and justice, and freedom, and beauty, and loyalty, and fairness, and honor, and fraternity, and tradition, and many other things.

I think your critique would have a higher chance of improving something (in your view) if you framed it as a concern about your personal values not being included adequately, rather than as a two-line "refutation" of utilitarianism (plus an overused link that begs the question) that also implicitly includes the controversial premise of moral realism.

The truth of utilitarianism doesn't matter to my argument. A strategy can be intellectually dishonest even if its goal is correct.