The book is by William MacAskill, co-founder of 80,000 Hours and Giving What We Can. The excerpt below is from Amia Srinivasan's critical review of it in the London Review of Books:
Effective altruism takes up the spirit of Singer’s argument but shields us from the full blast of its conclusion; moral indictment is transformed into an empowering investment opportunity...
Either effective altruism, like utilitarianism, demands that we do the most good possible, or it asks merely that we try to make things better. The first thought is genuinely radical, requiring us to overhaul our daily lives in ways unimaginable to most... The second thought – that we try to make things better – is shared by every plausible moral system and every decent person. If effective altruism is simply in the business of getting us to be more effective when we try to help others, then it’s hard to object to it. But in that case it’s also hard to see what it’s offering in the way of fresh moral insight, still less how it could be the last social movement we’ll ever need.
"Do you believe that impersonal and accidental forces of history generate as much misery, which you can fight against, as the deliberate efforts of people who disagree with you? Wouldn't that be surprising if it were true?"
Yes, I believe that, and no, it is not surprising. Issues where people disagree are likely to be mixed issues, where making changes does harm as well as good. That is exactly why people disagree. So working on those issues will tend to produce less benefit than working on the issues everyone agrees on, which are likely to be much less mixed.
Harm and benefit are two-place words; harm is always to someone, and according to someone's values or goals.
If two people have different values - which can be as simple as each wanting the same resource for themselves, or as complex as differing religious beliefs - then harm to one can be benefit to the other. The game need not be zero-sum, since their utility functions aren't exact inverses, but it is still a tradeoff between the two, and each prefers their own values to the other's.
On this view, the issues where people disagree are, tautologically, those where each change one side wants benefits that side and harms the other. Any change that benefits everyone (a Pareto improvement) gets implemented quickly, until none are left; a worked example of both points follows.
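A minimal illustration, with invented utility functions: suppose A and B split a resource, with A receiving a share $x \in [0, 1]$, and

$$u_A(x) = x, \qquad u_B(x) = 2(1 - x).$$

The total $u_A(x) + u_B(x) = 2 - x$ varies with the split, so the game is not zero-sum ($u_B \neq -u_A$); yet $u_A'(x) = 1 > 0$ and $u_B'(x) = -2 < 0$, so every change that benefits A harms B. Every split is already Pareto-optimal: no change remains that benefits both.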
If you share the values of one of these people, then working on the problem will produce benefit (by your values), and you won't care about the harm (by the other person's values).
If, on most or all such divisive issues, you don't side with any established camp, that is a very surprising fact that makes you an outlier. Can you build an EA movement out of altruists who don't care about most divisive issues?