Another Critique of Effective Altruism

Cross-posted from my blog. It is almost certainly a bad idea to let this post be your first exposure to the effective altruist movement. You should at the very least read these two posts first.


Recently Ben Kuhn wrote a critique of effective altruism. I'm glad to see such self-examination taking place, but I'm also concerned that the essay did not attack some of the most serious issues I see in the effective altruist movement, so I've decided to write my own critique. Due to time constraints, this critique is short and incomplete. I've tried to bring up arguments that would make people feel uncomfortable and defensive; hopefully I've succeeded.

 

Briefly, here are some of the major issues I have with the effective altruism movement as it currently stands:

  • Over-focus on “tried and true” and “default” options, which may both reduce actual impact and decrease exploration of new potentially high-value opportunities.

  • Over-confident claims coupled with insufficient background research.

  • Over-reliance on a small set of tools for assessing opportunities, which leads many to underestimate the value of things such as “flow-through” effects.

The common theme here is a subtle underlying message that simple, shallow analyses can allow one to make high-impact career and giving choices, and divest one of the need to dig further. I doubt that anyone explicitly believes this, but I do believe that this theme comes out implicitly both in arguments people make and in actions people take.

 

Lest this essay give a mistaken impression to the casual reader, I should note that there are many exemplary effective altruists who I feel are mostly immune to the issues above; for instance, the GiveWell blog does a very good job of warning against the first and third points above, and I would recommend that anyone who isn't already subscribed to it do so (and there are other examples that I'm failing to mention). But for the purposes of this essay, I will ignore this fact except for the current caveat.

 

Over-focus on "tried and true" options


It seems to me that the effective altruist movement over-focuses on “tried and true” options, both in giving opportunities and in career paths. Perhaps the biggest example of this is the prevalence of “earning to give”. While this is certainly an admirable option, it should be considered as a baseline to improve upon, not a definitive answer.

 

The biggest issue with the “earning to give” path is that careers in finance and software (the two most common avenues for this) are incredibly straightforward and secure. The two things that finance and software have in common are a well-defined application process similar to the one for undergraduate admissions, and the fact that, given reasonable job performance, one will continue to be given promotions and raises (this probably entails working hard, but the end result is still rarely in doubt). One also gets a constant source of extrinsic positive reinforcement from the money one earns. Why do I call these things an “issue”? Because I think that these attributes encourage people to pursue these paths without looking for less obvious, less certain, but ultimately better paths. One in six Yale graduates go into finance and consulting, seemingly due to the simplicity of applying and the easy supply of extrinsic motivation. My intuition is that this ratio is higher than an optimal society would have, even if such people commonly gave generously (and it is certainly much higher than the number of people who enter college planning to pursue such paths).


Contrast this with, for instance, working at a start-up. Most start-ups are low-impact, but it is undeniable that at least some have been extraordinarily high-impact, so this seems like an area that effective altruists should be considering strongly. Why aren't there more of us at 23andMe, or Coursera, or Quora, or Stripe? I think it is because these opportunities are less obvious and take more work to find, because once you start working it often isn't clear whether what you're doing will have a positive impact, and because your future job security is massively uncertain. There are few sources of extrinsic motivation in such a career: perhaps more so at one of the companies mentioned above, which are reasonably established and have customers, but what about the 4-person start-up teams working in a warehouse somewhere? Some of them will go on to do great things, but right now their lives must be full of anxiety and uncertainty.

 

I don't mean to fetishize start-ups. They are just one well-known example of a potentially high-value career path that, to me, seems underexplored within the EA movement. I would argue (perhaps self-servingly) that academia is another example of such a path, with similar psychological obstacles: every 5 years or so you have the opportunity to get kicked out (e.g. applying for faculty jobs, and being up for tenure), you need to relocate regularly, few people will read your work and even fewer will praise it, and it won't be clear whether it had a positive impact until many years down the road. And beyond the “obvious” alternatives of start-ups and academia, what of the paths that haven't been created yet? GiveWell was revolutionary when it came about. Who will be the next GiveWell? And by this I don't mean the next charity evaluator, but the next set of people who fundamentally alter how we view altruism.

 

Over-confident claims coupled with insufficient background research


The history of effective altruism is littered with over-confident claims, many of which have later turned out to be false. In 2009, Peter Singer claimed that you could save a life for $200 (and many others repeated his claim). While the number was already questionable at the time, by 2011 we discovered that the number was completely off. Now new numbers were thrown around: from numbers still in the hundreds of dollars (GWWC's estimate for SCI, which was later shown to be flawed) up to $1600 (GiveWell's estimate for AMF, which GiveWell itself expected to go up, and which indeed did go up). These numbers were often cited without caveats, as well as other claims such as that the effectiveness of charities can vary by a factor of 1,000. How many people citing these numbers understood the process that generated them, or the high degree of uncertainty surrounding them, or the inaccuracy of past estimates? How many would have pointed out that saying that charities vary by a factor of 1,000 in effectiveness is by itself not very helpful, and is more a statement about how bad the bottom end is than how good the top end is?

 

More problematic than the careless bandying of numbers is the tendency toward not doing strong background research. A common pattern I see is: an effective altruist makes a bold claim, then when pressed on it offers a heuristic justification together with the claim that “estimation is the best we have”. This sort of argument acts as a conversation-stopper (and can also be quite annoying, which may be part of what drives some people away from effective altruism). In many of these cases, there are relatively easy opportunities to do background reading to further educate oneself about the claim being made. It can appear to an outside observer as though people are opting for the fun, easy activity (speculation) rather than the harder and more worthwhile activity (research). Again, I'm not claiming that this is people's explicit thought process, but it does seem to be what ends up happening.

 

Why haven't more EAs signed up for a course on global security, or tried to understand how DARPA funds projects, or learned about third-world health? I've heard claims that this would be too time-consuming relative to the value it provides, but this seems like a poor excuse if we want to be taken seriously as a movement (or even just want to reach consistently accurate conclusions about the world).

 

Over-reliance on a small set of tools


Effective altruists tend to have a lot of interest in quantitative estimates. We want to know what the best thing to do is, and we want a numerical value. This causes us to rely on scientific studies, economic reports, and Fermi estimates. It can cause us to underweight things like the competence of a particular organization, the strength of the people involved, and other “intangibles” (which are often not actually intangible but simply difficult to assign a number to). It also can cause us to over-focus on money as a unit of altruism, while oftentimes “it isn't about the money”: it's about doing the groundwork that no one is doing, or finding the opportunity that no one has found yet.

 

Quantitative estimates often also tend to ignore flow-through effects: effects which are an indirect, rather than direct, result of an action (such as decreased disease in the third world contributing in the long run to increased global security). These effects are difficult to quantify but human and cultural intuition can do a reasonable job of taking them into account. As such, I often worry that effective altruists may actually be less effective than “normal” altruists. (One can point to all sorts of examples of farcical charities to claim that regular altruism sucks, but this misses the point that there are also amazing organizations out there, such as the Simons Foundation or HHMI, which are doing enormous amounts of good despite not subscribing to the EA philosophy.)

 

What's particularly worrisome is that even if we were less effective than normal altruists, we would probably still end up looking better by our own standards, which explicitly fail to account for the ways in which normal altruists might outperform us (see above). This is a problem with any paradigm, but the fact that the effective altruist community is small and insular and relies heavily on its paradigm makes us far more susceptible to it.

Comments

I'm glad to see more of this criticism as I think it's important for reflection and moving things forward. However, I'm not really sure who you're critiquing or why. My response would be that your critique (a) appears to misrepresent what the "EA mainstream" is, (b) ignores comparative advantage, or (c) says things I just outright disagree with.

~

The EA Mainstream

Perhaps the biggest example of this is the prevalence of “earning to give”. While this is certainly an admirable option, it should be considered as a baseline to improve upon, not a definitive answer.

I imagine we know different people, even within the effective altruist community. So I'll believe you if you say you know a decent amount of people who think "earning to give" is the best instead of a baseline.

However, 80,000 Hours, the career advice organization that basically started earning to give, has itself written an article called "Why Earning to Give is Often Not the Best Option", and says "A common misconception is that 80,000 Hours thinks Earning to Give is typically the way to have the most impact. We’ve never said that in any of our materials.".

Additionally, the earning-to-give people I know (including myself) all agree with the baseline argument, but believe earning to give either to be best for them relative to other opportunities (e.g., using comparative advantage arguments) and/or to actually be best overall even when considering these arguments (e.g., out of skepticism of EA organizations).

~

Contrast this with, for instance, working at a start-up. Most start-ups are low-impact, but it is undeniable that at least some have been extraordinarily high-impact, so this seems like an area that effective altruists should be considering strongly. Why aren't there more of us at 23andMe, or Coursera, or Quora, or Stripe?

I'm not quite sure what you mean by this:

If you're asking "why don't more people work in start-ups?", I don't think EAs are avoiding start-ups in any noticeable way. I'll be working in one, I know several EAs who are working in them, and it doesn't seem to be all that different from software engineers / web developers in non-startups, except as would be predicted by non start-ups providing even better hiring opportunities.

If you're asking "why don't more people start start-ups themselves?", I think you already answered your own question with regard to people being unwilling to take on high personal risk. 80,000 Hours advises people to do start-ups in essays like "Should More Altruists Consider Entreprenuership?" and "Salary or Start-up: How Do Gooders Can Gain More From Risky Careers". Also, I can think of a few EAs who have started their own start-ups on these considerations. So perhaps people are irrationally risk-averse -- that is a valid critique -- but I don't think it's unique to the EA movement or we can do much about it.

If you're asking "why don't more people go into start-ups because these start-ups are doing high impact things themselves and therefore are good opportunities to have direct impact?", then I think you've hit on a valid critique that many people don't take seriously enough. I've heard some EAs mention it, but it is outside the EA mainstream.

~

We want to know what the best thing to do is, and we want a numerical value. This causes us to rely on scientific studies, economic reports, and Fermi estimates. It can cause us to underweight things like the competence of a particular organization, the strength of the people involved, and other “intangibles” (which are often not actually intangible but simply difficult to assign a number to).

I think the EA mainstream would agree with you on this one as well -- GiveWell, for example, has explicitly distanced themselves from numerical calculations (albeit recently) and several EAs have called into question the usefulness of cost-effectiveness estimates, a charge that was largely led by GiveWell.

~

Comparative Advantage

And beyond the “obvious” alternatives of start-ups and academia, what of the paths that haven't been created yet? GiveWell was revolutionary when it came about. Who will be the next GiveWell? And by this I don't mean the next charity evaluator, but the next set of people who fundamentally alter how we view altruism.

I definitely agree that fundamentally altering how people view altruism would be very high impact (if shifted in a beneficial way, of course). But I don't think everyone has the time, skills, or willingness to do this -- or that they even should. I think this ignores the benefits of specialization and trade.

Likewise, instead of EAs taking classes on global security for themselves, many defer to GiveWell and expect GiveWell to perform higher-quality research on these giving opportunities. After all, if you have broad trust in GiveWell, it's hard to beat several full-time savvy analysts with your spare time. GiveWell has more comparative advantage here.

~

It also can cause us to over-focus on money as a unit of altruism, while oftentimes “it isn't about the money”: it's about doing the groundwork that no one is doing, or finding the opportunity that no one has found yet.

Right. But not everyone has the time or talents to do this groundwork. So it seems best if we set up some orgs to do this kind of groundwork (e.g., CEA, MIRI, etc.) and give money to them to let them specialize in these kinds of breakthroughs. And then the people who have the free time can start projects like Effective Fundraising or .impact.

If you're already raising a family and working a full-time job and donating 10%, I think in many cases it's not worth quitting your job or using your free time to look for more opportunities. We don't need absolutely everyone doing this search -- there's comparative advantage considerations here too.

~

Outright Disagreement

How many would have pointed out that saying that charities vary by a factor of 1,000 in effectiveness is by itself not very helpful, and is more a statement about how bad the bottom end is than how good the top end is?

I think this has been very helpful from a PR point of view. And even if you think flow-through effects even things out more so that charities only differ by 10x or 100x (which I currently don't), that's still significant.

And whether that's condemnation of the bad end or praise for the top end depends on your perspective and standards for what makes an org good or bad. At least, the slope of the curve suggests that a lot of the difference is coming from the best organizations being a lot better than the merely good ones as opposed to the very bad ones being exceptionally bad (i.e., the curve is skewed toward the top, not toward the bottom).

~

Quantitative estimates often also tend to ignore flow-through effects: [...] These effects are difficult to quantify but human and cultural intuition can do a reasonable job of taking them into account.

But can it? How do you know? I think you should take your own "research over speculation" advice here. I don't think we understand flow through effects well enough yet to know if they can be reliably intuited.

~

Outright Agreement

an effective altruist makes a bold claim, then when pressed on it offers a heuristic justification together with the claim that “estimation is the best we have”. [...] It can appear to an outside observer as though people are opting for the fun, easy activity (speculation) rather than the harder and more worthwhile activity (research).

I agree this is an unfortunate problem.

~

Conclusion

Lest this essay give a mistaken impression to the casual reader, I should note that there are many exemplary effective altruists who I feel are mostly immune to the issues above

This is where I get to the question of who your intended audience is. It seems like the EA mainstream either agrees with many of your critiques already (and therefore you're just trying to convince EAs to adopt the mainstream) or you're placing too much burden on EAs to ignore comparative advantage and have everyone become an EA trailblazer.

GiveWell, for example, has explicitly distanced themselves from numerical calculations (albeit recently) and several EAs have called into question the usefulness of cost-effectiveness estimates, a charge that was largely led by GiveWell.

I'll speak up on this one. I am a booster of more such estimates, detailed enough to make assumptions and reasoning explicit. Quantifying one's assumptions lets others challenge the pieces individually and make progress, whereas a wishy-washy "list of considerations pro and con" leaves a lot of wiggle room about their strengths. Sometimes doing this forces one to think through an argument more deeply, only to discover big holes, or to find that the key pieces also come up in the context of other problems.

In prediction tournaments training people to use formal probabilities has been helpful for their accuracy.

Also I second the bit about comparative advantage: CEA recently hired Owen Cotton-Barratt to do cause prioritization/flow-through effects related work. GiveWell Labs is heavily focused on it. Nick Beckstead and others at the FHI also do some work on the topic.

It seems like the EA mainstream either agrees with many of your critiques already (and therefore you're just trying to convince EAs to adopt the mainstream)

I think that on some of these questions there is also real variation in opinion that should not simply be summarized as a clear "mainstream" position.

The question to my mind is whether the value of attempting to make such estimates is sufficiently great so that time spent on them is more cost-effective than just trying to do something directly.

Can you give recent EA related examples of exercises in making quantitative estimates that you've found useful?

To be clear, I don't necessarily disagree with you (it depends on the details of your views on this point). I agree that laying out a list of pros and cons without quantifying things suffers from vagueness of the type you describe. But I strain to think of success stories.

I'll speak up on this one. I am a booster of more such estimates, detailed enough to make assumptions and reasoning explicit.

I generally agree. But I think there's a large difference between "here's a first-pass attempt at a cost-effectiveness estimate purely so we can compare numbers" and "this is how much it costs to save a life". Another problem is that one thing I don't think people take much into account when comparing figures (e.g., comparing veg ads to GiveWell) is the difference in epistemic strength behind each number, and that could cause concern.

~

I think that on some of these questions there is also real variation in opinion that should not simply be summarized as a clear "mainstream" position.

I don't know how much variation there is. I don't claim to know a representative sample of EAs. But I do think there's not much variation among the wisdom of EA orgs on these issues of which I proclaim mainstream.

Which positions are you thinking of?

But I think there's a large difference between "here's a first-pass attempt at a cost-effectiveness estimate purely so we can compare numbers" and "this is how much it costs to save a life".

You still have to answer questions like:

  • "I can get employer matching for charity A, but not B, is the expected effectiveness of B at least twice as great as that for A, so that I should donate to B?"
  • "I have an absolute advantage in field X, but I think that field Y is at least somewhat more important: which field should I enter?"
  • "By lobbying this organization to increase funds to C, I will reduce support for D: is it worth it?"

Those choices imply judgments about expected value. Being evasive and vague doesn't eliminate the need to make such choices, and tacitly quantify the relative value of options.
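To make the first bullet concrete, here is a minimal sketch of the expected-value comparison that an employer-matching decision tacitly requires. Every figure is invented purely for illustration, not a real cost-effectiveness number.

```python
# Hypothetical sketch: does a 1:1 employer match for charity A outweigh
# charity B being (believed) twice as cost-effective? All numbers invented.

def expected_lives_saved(donation, cost_per_life, match_rate=0.0):
    """Expected lives saved by a donation, counting any employer match."""
    return donation * (1 + match_rate) / cost_per_life

donation = 1000.0
# Charity A: assumed $4,000 per life, with a 1:1 match doubling the gift.
a = expected_lives_saved(donation, cost_per_life=4000.0, match_rate=1.0)
# Charity B: assumed twice as cost-effective, but no match.
b = expected_lives_saved(donation, cost_per_life=2000.0)

# With these made-up figures the 1:1 match exactly offsets B's 2x edge,
# so the decision hinges on confidence in each estimate, not on vibes.
print(a, b)  # 0.5 0.5
```

The point is not the arithmetic but that declining to write it down does not avoid it: donating anywhere tacitly asserts some version of these numbers.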

Being vague can conceal one's ignorance and avoid sticking one's neck out far enough to be cut off, and it can help guard against being misquoted and PR damage, but you should still ultimately be more-or-less assigning cardinal scores in light of the many choices that tacitly rely on them.

It's still important to be clear about how noisy the different inputs to one's judgments are, and to give confidence intervals and track records that put one's analysis in context rather than just an expected value. But I would say the basic point stands: we need to make cardinal comparisons, and being vague doesn't help.

I think there's a large difference between "here's a first-pass attempt at a cost-effectiveness estimate purely so we can compare numbers" and "this is how much it costs to save a life"

Note: I do want to know how much it costs to save a life (or QALY or some other easy metric of good). I'd rather have a ballpark conservative estimate than nothing to go off of.

Back when AMF was recommended, I considered the sentence: "we estimate the cost per child life saved through an AMF LLIN distribution at about $3,400.47" to be one of the most useful in the report, because it gave an idea of an approximate upper bound on the magnitude of good to be done and was easy to understand. Sure, it might not be nuanced - but there's a lot to be said for a simple measure of magnitude that helps people make decisions without large amounts of thinking.

When considering altruism (in the future - I don't earn yet) I wouldn't simply have a charity budget which simply goes to the most effective cause - I'd also be weighing the benefit to the most effective cause against the benefit to myself.

That is to say, if I find out that saving lives (or some other easy metric of good) is cheaper than I thought, that would encourage me to devote a greater proportion of my income to said cause. The cheaper the cost of good, the more urgent it becomes to me that the good is done.

So it's not enough to simply compare charities in a relative sense to find the best. I think the magnitude of good per cost for the most efficient charity, in an absolute sense, is also pretty important for individual donors making decisions about whether to allocate resources to altruism or to themselves.
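The point about absolute magnitudes can be phrased as a toy allocation model. This is a sketch under strong assumptions (log utility over personal consumption, a linear and arbitrary weight on lives saved, invented incomes and costs), not a recommendation; it only illustrates that a cheaper cost per life mechanically raises the optimal donation.

```python
# Toy model, all numbers invented: utility = log(consumption) + w * lives,
# where lives = donation / cost_per_life. The first-order condition
# 1 / (income - d) = w / cost_per_life gives d = income - cost_per_life / w.

def optimal_donation(income, cost_per_life, w):
    """Donation maximizing log(income - d) + w * d / cost_per_life,
    clamped to a feasible range that keeps some personal consumption."""
    d = income - cost_per_life / w
    return max(0.0, min(d, income * 0.99))

income = 60_000.0
w = 0.2  # arbitrary weight: utility units per life saved

print(optimal_donation(income, 10_000.0, w))  # 10000.0
print(optimal_donation(income, 5_000.0, w))   # 35000.0 (cheaper => give more)
```

Any concave consumption utility gives the same qualitative behavior, which is the commenter's point: the absolute number, not just the charity ranking, feeds the self-versus-altruism split.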

If you're asking "why don't more people go into start-ups because these start-ups are doing high impact things themselves and therefore are good opportunities to have direct impact?", then I think you've hit on a valid critique that many people don't take seriously enough. I've heard some EAs mention it, but it is outside the EA mainstream.

Especially because most start-ups don't have a direct impact in anything altruistic. Yeah, there are some really cool start-ups out there that can change the world. There are also start-ups with solid business plans that won't change the world. And then there are the majority (in our times of cheap VC money) that won't change the world and often don't even have a solid business plan.

I would argue (perhaps self-servingly) that academia is another example of such a path

Academia is, in my mind, the textbook example of people doing something because it's familiar, not because they've searched for it and it's the right choice. Most of the academics I know will freely state that it only makes sense to go into academia for fame, not for money- and so it's not clear to me what you think the EA benefit is. (Convincing students to become EA? Funding student organizations seems like a better way to do that.)

Most of the academics I know will freely state that it only makes sense to go into academia for fame, not for money- and so it's not clear to me what you think the EA benefit is.

The goal is to get direct impact by doing high-impact research. One of the key points here is that donating money is just one particularly straightforward way to do good!

Academia is, in my mind, the textbook example of people doing something because it's familiar, not because they've searched for it and it's the right choice.

I have certainly seen this before, although I think it's less prevalent (but by no means absent) near the top.

"Why haven't more EAs signed up for a course on global security, or tried to understand how DARPA funds projects, or learned about third-world health? I've heard claims that this would be too time-consuming relative to the value it provides, but this seems like a poor excuse. If we want to be taken seriously as a movement (or even just want to reach consistently accurate conclusions about the world)."

This one worries me quite a bit. The vast majority of EAs (including myself) have not spent very much time learning about who the large players in third-world poverty are (e.g. the WHO, the UN). In fact, you can be an "expert" in EA content and know virtually nothing about the rest of the non-profit/charity sector.

The history of effective altruism is littered with over-confident claims, many of which have later turned out to be false. In 2009, Peter Singer claimed that you could save a life for $200 (and many others repeated his claim). While the number was already questionable at the time, by 2011 we discovered that the number was completely off. Now new numbers were thrown around: from numbers still in the hundreds of dollars (GWWC's estimate for SCI, which was later shown to be flawed) up to $1600 (GiveWell's estimate for AMF, which GiveWell itself expected to go up, and which indeed did go up).

Another good example is GiveWell's 2009 estimate that "Because [our] estimate makes so many conservative assumptions, we feel it is overall reasonable to expect [Village Reach's] future activities to result in lives saved for under $1000 each."

"8 lives saved per dollar donated to the Machine Intelligence Research Institute. — Anna Salamon"

Pulling this number out of the video and presenting it by itself, as Kruel does, leaves out important context, such as Anna's statement "Don't trust this calculation too much. [There are] many simplifications and estimated figures. But [then] if the issue might be high stakes, recalculate more carefully." (E.g. after purchasing more information.)

However, Anna next says:

I've talked about [this estimate] with a lot of people and the bargain seems robust. Maybe you go for a soft takeoff scenario, [then the estimate] comes out maybe an order of magnitude lower. But it still comes out [as] unprecedentedly much goodness that you can purchase for a little bit of money or time.

And that is something I definitely disagree with. I don't think the estimate is anywhere near that robust.

I agree with Luke's comment; compared to my views in 2009, the issue now seems more complicated to me; my estimate of impact from donation re: AI risk is lower (though still high); and I would not say that a particular calculation is robust.

my estimate of impact from donation re: AI risk is lower (though still high)

Out of curiosity, what's your current estimate? I recognize it'll be rough, but even e.g. "more likely than not between $1 and $50 per life saved" would be interesting.

And that is something I definitely disagree with. I don't think the estimate is anywhere near that robust.

Is this MIRI official position? Because, AFAIK that estimate was never retracted.

Anyway, the problem doesn't seem to be so much with the exact numbers as with the process: what she did was essentially a travesty of a Fermi estimate, where she pulled numbers out of thin air and multiplied them together to get a self-serving result.

This person is "Executive Director and Cofounder" of CFAR. Is this what they teach for $1,000 a day? How to fool yourself by performing a mental ritual with made up numbers?

Is this MIRI official position? Because, AFAIK that estimate was never retracted.

I don't know what Anna's current view is. (Edit: Anna has now given it.)

In general, there aren't such things as "MIRI official positions," there are just individual persons' opinions at a given time. Asking for MIRI's official position on a research question is like asking for CSAIL's official opinion on AGI timelines. If there are "MIRI official positions," I guess they'd be board-approved policies like our whistleblower policy or something.

It seems to me that the effective altruist movement over-focuses on “tried and true” options, both in giving opportunities and in career paths. Perhaps the biggest example of this is the prevalence of “earning to give”.

I would have guessed that the biggest example is the focus on poverty reduction / global health initiatives that GiveWell and GWWC have traditionally focused nearly all their attention on. E.g. even though Holden has since the beginning suspected that the highest-EV altruistic causes are outside global health, this point isn't mentioned on GiveWell's historical "top charities" pages (2012, 2011, 2010, 2009, 2008), which emphasize the important focus on "tried and true" charitable interventions.

One in six Yale graduates go into finance and consulting, seemingly due to the simplicity of applying and the easy supply of extrinsic motivation. My intuition is that this ratio is higher than an optimal society would have, even if such people commonly gave generously.

Because those one-in-six don't all give generously, we can't conclude whether it's right at the margins for graduates to go into earning to give, even if we grant the assumption about the ratio in an optimal society.

I agree that it's worth looking at a wider spread of career possibilities, but this isn't the argument to use to get there.

careers in finance and software (the two most common avenues for this) are incredibly straightforward and secure.

What are you talking about? Investment Banking, at least, has a huge attrition rate. Careers in IB are short and brutal.

(This comment is on career stuff, which is tangential to your main points)

I recently had to pick a computer science job, and spent a long time agonizing over what would have the highest impact (among other criteria). I'm not convinced startups or academia have a higher expected value than large companies. I would like to be convinced otherwise.

(Software) Startups:

1) Most startups fail. It's easy to underestimate this because you only hear the success stories.

2) Many startups are not solving "important" problems. They are solving relatively minor problems for relatively rich people, because that's where the money is. Snapchat, Twitter, Facebook, Instagram are examples.

3) Serious problems are complicated, and usually require more resources than a startup can bring to bear.

4) Financially: If you aren't a founder, your share of the company is negligible.

(Computer Science) Academia:

1) My understanding is that there are dozens of applications for each tenure-track opening. So your chance of success is low, and your marginal advantage over the next-best applicant is probably low.

2) I trust markets more than grant committees for distributing money.

3) It seems easier to get sidetracked into non-useful work in academia.

Thanks for the interesting critique. I agree with you that EAs often make over-confident claims without solid evidence, although I don't think it's a huge issue that people sometimes understate how much it costs to save a life, as even the most pessimistic realistic estimates of this cost don't undermine the case for donating significant sums to cost-effective charities.

Am I right in understanding that you think that too many EAs are pursuing earning to give careers in finance and technology, whereas you think they'd have greater impact if they worked in start-ups? If so, could you provide some more explanation of why you think this? It seems plausible to me that earning to give is one of the highest-impact career options for many EAs, given the enormous amount of good that donations to the most effective charities can do.

Finally, you say you "worry that effective altruists may actually be less effective than “normal” altruists". That's a pretty striking claim! Can you expand on it a little? In particular, could you give a typical example of 'normal' altruism, and explain why you think it might be more effective than pursuing an earning to give career and donating large sums to a charity like SCI?

In particular, could you give a typical example of 'normal' altruism

I gave the Simons Foundation as an example in my essay. Among other things, they fund the arXiv, which already seems to me to be an extremely valuable contribution. Granted, Simons made huge amounts of money as a quant, but as far as I know he isn't explicitly an EA, and he certainly wasn't "earning to give" in the conventional sense of just giving to top GiveWell charities.

What is particularly worrisome to me is that the positive effects of interventions such as improvements in education are much harder to quantify.

Say an individual can choose to be a judge in the US, or to be a banker who donates a lot of money to charity. The straightforward calculation does not take into account the importance of having good people among the justices; without such people, the US would probably be in no position to send aid (and would need monetary aid itself).

Effective Altruism (and critiques of it) need to think at the margin. If I give $X to an organization doing good chances are this won't displace someone else from giving the organization money. In contrast, if I get a job at such an organization I have probably displaced someone else from taking that job. This kind of marginal analysis greatly strengthens the value of the “earning to give” path of effective altruism.
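A minimal sketch of that marginal analysis, with purely hypothetical numbers chosen only to show the structure of the argument:

```python
# Earning to give: a marginal donation is (roughly) additive -- giving $X
# does not, by assumption here, displace anyone else's donations.
my_donation = 50_000           # dollars/year (hypothetical)
displaced_donations = 0        # assume no crowding-out of other donors
net_donation_impact = my_donation - displaced_donations

# Direct work: taking a job at the organization displaces the next-best
# candidate, so only the *difference* in productivity counts at the margin.
my_productivity = 100_000          # value produced/year (hypothetical)
replacement_productivity = 90_000  # next-best hire's output (hypothetical)
net_direct_impact = my_productivity - replacement_productivity

print(net_donation_impact, net_direct_impact)  # 50000 10000
```

Under these (made-up) numbers, donating wins even though the direct worker is personally more productive, because the counterfactual comparison is against the replacement hire rather than against nothing.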

Naive efficient-market analysis suggests that if finance and computer programming are predictable and lucrative careers, there should be some less stable career option which is even more lucrative on average. For someone who's genuinely earning to give, and planning to keep only a pittance for their own survival regardless, that variability shouldn't matter.
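The risk-neutrality point can be made concrete with a toy expected-value comparison. All figures are hypothetical; the only claim is structural: a donor who keeps a fixed pittance and gives away the rest should care about the mean of earnings, not the variance.

```python
# Stable option: a predictable salary, earned with certainty (hypothetical).
stable_salary = 150_000

# Risky option: e.g. a volatile career with a small chance of a big payoff
# (all probabilities and payoffs are hypothetical placeholders).
p_success = 0.1
payoff_success = 2_000_000
payoff_failure = 50_000
risky_ev = p_success * payoff_success + (1 - p_success) * payoff_failure

# The donor keeps only a fixed survival floor in every outcome, so the
# expected amount donated is just expected earnings minus that floor.
personal_floor = 30_000
print(stable_salary - personal_floor, risky_ev - personal_floor)
```

With these numbers the risky path donates more in expectation (245k vs. 150k in earnings), even though in most individual outcomes it pays far less; a genuinely risk-neutral earner-to-give would take it anyway.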

This still feels like a "we need fifty Stalins" critique.

For me the biggest problems with the effective altruism movement are:

1: Most people aren't utilitarians.

2: Maximizing QALYs isn't even the correct course of action under utilitarianism: it's short-sighted and silly. Which is worse under utilitarianism: Louis Pasteur dying in his childhood, or 100,000 children in a third-world country dying? I would argue that the death of Louis Pasteur is a far greater tragedy, since his contributions to human knowledge have saved a lot more than 100,000 lives and have advanced society in other ways. But a QALY approach does not capture this. That's an extreme example, obviously, but my issue is that all lives are not equal: people in developed countries matter far more than people in developing countries in terms of advancing technology and society in general.

  1. How fungible is Louis Pasteur? If he had died as a child, someone else would have done the same work, just perhaps a little later. How many lives would have been lost as a result of this delay? I don't have a hard answer to this, but I have trouble putting the estimate as high as 100k.

  2. How predictable is Louis Pasteur? Looking at his Wikipedia article, if we look at him as a child, we don't predict that he makes the contributions he did. Let's say there's a 0.1% chance of that happening. On the other hand, suppose there's a child dying in the third world whom we could bring to the first world, for the same cost as saving the young Pasteur, and who has a 1% chance of making the same contributions. Clearly, losing the latter child is, on average, a greater tragedy than losing Louis Pasteur.

It's reasonable to invest heavily in fewer people who can therefore make Pasteur-like contributions, rather than lightly in more people who won't. Unless I'm mistaken, this is essentially what CFAR is doing. However, bell curves tell us that there are more extraordinary people in developing countries who could matter far more than people in developed countries, but only if we get them into developed countries where we can tap their potential. For every child in America who, given a standard education, has a 1% chance of making Pasteur-like contributions, there are three in India, and, if we can identify them cheaply, it's much more cost-effective to move their chance of success from epsilon to 0.01 than to move the developed child's chance from 0.01 to 0.02.
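The cost-effectiveness comparison at the end can be spelled out with toy arithmetic. All probabilities and costs below are hypothetical placeholders; only the structure of the comparison matters.

```python
epsilon = 0.0001          # baseline chance of a Pasteur-like contribution

# Intervening on a child already in a developed country: 0.01 -> 0.02.
gain_developed = 0.02 - 0.01
cost_developed = 200_000  # hypothetical cost of the extra investment

# Identifying and relocating a promising child from a developing country:
# epsilon -> 0.01, at a (hypothetically) lower cost if identification is cheap.
gain_developing = 0.01 - epsilon
cost_developing = 50_000  # hypothetical identification + relocation cost

# Expected Pasteur-like contributions bought per dollar, in each case.
per_dollar_developed = gain_developed / cost_developed
per_dollar_developing = gain_developing / cost_developing
print(per_dollar_developing / per_dollar_developed)  # ~4x, under these numbers
```

Note that the probability gains are nearly identical (≈0.01 each); under these assumptions the whole advantage comes from the lower per-child cost, which is why the "if we can identify them cheaply" condition is doing all the work in the argument.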