I

Imagine an alternate version of the Effective Altruism movement, whose early influences came from socialist intellectual communities such as the Fabian Society, as opposed to the rationalist diaspora. Let’s name this hypothetical movement the Effective Samaritans.

Like the EA movement of today, they believe in doing as much good as possible, whatever this means. They began by evaluating existing charities, reading every RCT to find the very best ways of helping.

But many Effective Samaritans were starting to wonder: is this randomista approach really the most prudent? After all, Scandinavia didn’t become wealthy and equitable through marginal charity. Societal transformation comes from uprooting oppressive power structures.

The Scandinavian societal model, which lifted the working class and brought weekends, universal suffrage, maternity leave, education, and universal healthcare, can be traced all the way back to the 1870s, when the union and social democratic movements got their start.

In many developing countries wage theft is still commonplace. When employees can’t be certain they’ll get paid what was promised in the contracts they signed, and they can’t trust the legal system to have their back, society settles on far fewer surplus-producing work arrangements than would be optimal.

Work to improve the capacity of the existing legal structure is fraught with risk. One risks strengthening the oppressive arms used by the ruling and capitalist classes to stay in power.

A safer option may be to strengthen labour unions, which can take up these fights on behalf of their members. Being in inherent opposition to capitalist interests, unions are much less likely to be captured and co-opted. Though there is much uncertainty, unions present a promising way to increase contract enforcement and help bring about the conditions necessary for economic development, a report by Reassess Priorities concludes.

Compelled by the anti-randomista arguments, some Effective Samaritans begin donating to the ‘Developing Unions Project’, which funds unions in developing countries and does political advocacy to increase union influence.

A well-regarded economics professor writes a scathing criticism of Effective Samaritanism, stating that they are blinded by ideology and that there isn’t sufficient evidence to show that increases in labor power lead to increases in contract enforcement.

The article is widely discussed on the Effective Samaritan Forum. One commenter writes a highly upvoted response, arguing that absence of evidence isn’t evidence of absence. The professor is too concerned with empirical evidence, and fails to engage sufficiently with the object-level arguments for why the Developing Unions Project is promising. Additionally, why are we listening to an economics professor anyway? Economics is completely bankrupt as a science, resting on ridiculous, empirically false assumptions, and is filled with activists doing shoddy science to confirm their neoliberal beliefs.

 

I sometimes imagine myself trying to convince the Effective Samaritan why I’m correct to hold my current beliefs, many of which have come out of the rationalist diaspora.

I explain how I’m not fully bought into the analysis of labor historians, which credits labor unions and the Social Democratic movements with making Scandinavia uniquely wealthy, equitable, and happy. If this were a driving factor, how come the descendants of Scandinavians who migrated to the US long before these movements took hold are doing just as well in America? Besides, even if I don’t know enough to dispute the analysis, I don’t trust labor historians to arrive at unbiased and correct conclusions in the first place.

From my perspective, labor union advocacy seems as likely to result in restrictions of market participation as it is to encourage it. Instead, I’m more bullish on charter cities as a way to bring institutional reform and encourage growth.

After all, many analyses by economic historians of the Chinese economic miracle credit Deng Xiaoping’s decision to open four “special economic zones” with free-market-oriented reforms as the driving factor.

But the Effective Samaritan is similarly skeptical of the historical evidence I present suggesting charter cities to be a worthwhile intervention. “Hasn’t every attempt at creating a charter city failed?” they ask.

“A real charter city hasn’t been tried!” I reply. “The closest we got was in Honduras, and it barely got off the ground before being declared illegal by the socialist government. Moreover, special economic zones jump-started the Chinese economic miracle; even if they’re not exactly charter cities, that’s gotta count for something!”

“Real socialism hasn’t been tried either!” the Effective Samaritan quips back. “Every attempt has always been co-opted by ruling elites who used it for their own ends. The closest we’ve gotten is Scandinavia, which now has the world’s highest standard of living; even if it’s not entirely socialist, that’s gotta count for something!”

“Don’t you find it mighty suspicious how your intervention is lacking in empirical evidence, and is held up only by theoretical arguments and the historical hand-waving of biased academics?” we both exclaim in unison.

For every logical inference I make, they make the opposite. Every thoughtful prior of mine, they consider to be baseless prejudice. My modus ponens, their modus tollens.

It’s clear that we’re not getting anywhere. Neither one of us will change the other’s mind. We go back to funding our respective opposing charities, and the world is none the better.

 

II

In 2016 I was skipping school to compete in Starcraft tournaments. A competitive Starcraft match pits two players against each other, each playing one of the game’s three possible factions: Terran, Protoss or Zerg. To reach the level of competitive play, players opt to practice a single faction almost exclusively.

This has led to some fascinating dynamics in the Starcraft community.

At age 12 I began focusing on the Terran faction. By 16, I had racked up over ten thousand matches with it. Over thousands of matches, you get to experience every intricate and quirky detail exclusive to your faction. I would spend hours practicing my marine-splits, a maneuver only my faction was required to perform.

I experienced humiliating defeats at the hands of a thousand dirty strategies available to my opponents’ factions, each cheaper and more unfair than the last. Of course, they would claim my faction had cheap strategies too, but I knew those strategies were brittle, weak, and never worked against a sufficiently skilled player.

For as long as there have been forums for discussing Starcraft, they have been filled with complaints about the balance of the factions. Thousands of posts have been written presenting elaborate arguments and statistics, proving that the very faction the author happens to play is, in fact, the weakest. The replies are just as thorough: “Of course if you look at tournament winnings in 2011-2012, Terran is going to be overrepresented, but that is due to a few outlier players who far outperformed everyone else. If you look at the distribution of grandmaster-ranked players, Terran underperforms!”

Like politics, the discussions can get heated, and it is not uncommon to see statements like: “How typical of you to say - Zerg players are all alike, always complaining about the difficulty of creep spreading, but never admitting their armies are much easier to control!”

There’s even a conspiracy theory currently circulating that a cabal of professional Zerg players are sneakily starting debates which pit Protoss and Terran players against one another, to divert attention away from their faction’s current superiority.

Looking at it from a distance, it’s completely deranged. Why can’t anyone see the irony in the fact that everyone happens to think the very faction they play is the weakest?[1] Additionally, if they really believed it to be true, why doesn’t anybody ever switch to the faction they think is overpowered and start winning tournaments?

Moreover, the few people who do switch factions always end up admitting they were wrong. Their new faction is actually the most difficult! The few people who opt to play each match with a randomly selected faction mostly say the three factions are about equally difficult. But if there is one thing players of all three factions can agree on, it is that players who pick random are deceitful and not to be trusted.

I am aware of all these facts, and it’s been almost a decade since I stopped competing, yet to this very day I remain convinced that Terran, the faction I arbitrarily chose when I was 12, was in fact the weakest faction during the era I played. Of course I recognize that the alternate version of me who picked a different faction would have thought differently, but they would have been wrong.

My priors are completely and utterly trapped. Whatever opinion I hold of myself as a noble seeker of truth, my beliefs about Starcraft prove me a moron beyond any reasonable doubt.

My early intellectual influences were rationalists or free-market-leaning economists, such as Scott Alexander and Robin Hanson. When I take a sincere look at the evidence today and try my very hardest to discern what is actually true from false, I conclude they are mostly getting things right.

But already in 7th grade, I distinctly remember staunchly defending my belief in unregulated biological modification and enhancement, much to the dismay of my teacher, who in disbelief burst out that I was completely insane.

Of all the possible intellectuals I was exposed to, surely it is suspicious that the ones whose conclusions matched my already held beliefs were the ones who stuck. But what should I have done differently? To me, their arguments seemed the most lucid and their evidence the most compelling.

Why was my very first instinct as a seventh grader to defend bioenhancement and not the opposite? Where did that initial belief come from? I couldn’t explain to you basic calculus, yet I could tell you with unfounded confidence that bioenhancement would be good for humanity.

Like my beliefs about Starcraft, it seems so arbitrary. Had my initial instinct been the opposite, maybe I would have breezed past Hanson’s contrarian nonsense to one day discover truth and beauty reading Piketty.

III

I wake up to an email thanking me and explaining how my donation has helped launch charter cities in two developing countries. Of course, getting the approvals required some dirty political maneuvering, but that is the price of getting anything done.

I think of the Effective Samaritan, who has just woken up to a similar thankful email from the Developing Unions Project. In it, they explain how their donation helped make it possible for them to open a new branch of advocacy, lobbying to shut down two charter cities whose lax regulations are abused by employers to circumvent union agreements. It will require some dirty political maneuvering to get them shut down, but the ends will justify the means.

Yet the combined efforts of our charities have added up to exactly nothing! I want to yell at the Samaritan, whose efforts have invalidated all of mine. Why are they so hellbent on tearing down all the beauty I want to create? Surely we can do better than this.

But how can I collaborate with the Effective Samaritan, who I believe has deluded themselves into thinking outright harmful interventions are the most impactful?

We both believe in doing the most good, whatever that means, and we both believe in using evidence to inform our decision making. What evidence we can trust is contentious. And of the little evidence we both trust, we draw opposite conclusions!

For us to collaborate, we need to agree on some basic principles which, when followed, produce knowledge that can fit into both our existing worldviews. We first try explicitly defining all our Bayesian priors to see where they differ. This quickly proves tedious and intractable. The only way we can find to move forward is to take priors out of the equation entirely.

We simply run experiments and accept every result as true if the probability of it occurring by random chance falls below some threshold we agree on. This will lead us terribly astray every once in a while if we are not careful, but it also enables us to run experiments whose conclusions both of us can trust.[2]

To minimize the chance of statistical noise or incorrect inference polluting our conclusions, we design experiments with randomly chosen intervention and control groups, so we can be sure the intervention is causally connected to the outcome.
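In code, the procedure we commit to might look something like this minimal sketch (the `intervention` and `measure_outcome` functions are hypothetical placeholders, and the two-sample t-test stands in for whatever pre-registered test and threshold we actually agree on):

```python
import random
from scipy import stats

def run_trial(population, intervention, measure_outcome, alpha=0.05):
    """Randomly assign subjects to intervention and control groups, then
    accept the result only if it clears the threshold agreed on in advance."""
    # Random assignment takes our priors out of the design: neither of us
    # chooses who receives the intervention.
    shuffled = random.sample(population, len(population))
    half = len(shuffled) // 2
    treated, control = shuffled[:half], shuffled[half:]

    treated_outcomes = [measure_outcome(intervention(subject)) for subject in treated]
    control_outcomes = [measure_outcome(subject) for subject in control]

    # p_value: the probability of seeing a difference at least this large
    # by chance alone, if the intervention did nothing.
    _, p_value = stats.ttest_ind(treated_outcomes, control_outcomes)
    return p_value < alpha, p_value
```

Nothing in the procedure depends on what either of us believed going in; the only things we had to settle beforehand are the test and the threshold.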

As long as we follow these procedures exactly, we can both trust the conclusion. Others can even join in on the fun too.

Together we arrive at a set of ‘randomista’ interventions we both recognize as valuable. Even if we each have differing priors leading us to opposing preferred interventions, pooling our money together on the randomista interventions beats donating to causes which cancel each other out.

The world is some the better.

 

1. I sometimes think about this when listening in on fervent debates over which gender has it better.

2. I don't think it's a coincidence frequentism came to dominate academia.

Comments

Our sensible Chesterton fences

His biased priors

Their inflexible ideological commitments

In addition to epistemic priors, there are also ontological priors and teleological priors to cross-compare, each with their own problems. On top of which, people are even worse at comparing non-epistemic priors than they are at comparing epistemic priors. As such, attempts to point out that these are an issue will be seen as a battle tactic, an attempt to move the argument from a domain in which they have the upper hand (from their perspective) to unfamiliar territory in which you'll have an advantage, and will be resisted.

You may share the experience I've had that most attempts at discussion don't go anywhere. We mostly repeat our cached knowledge at each other. If two people who are earnestly trying to grok each other's positions drill down for long enough they'll get to a bit of ontology comparison, where it turns out they have different intuitions because they are using different conceptual metaphors for different moving parts of their model. But this takes so long that by the time it happens only a few bits of information get exchanged before one or both parties are too tired to continue. The workaround seems to be that if two people have a working relationship then, over time, they can accrue enough bits to get to real cruxes, and this can occasionally suggest novel research directions.

My main theory of change is therefore to find potentially productive pairings of people faster, and create the conditions under which they can speedrun getting to useful cruxes. Unfortunately, Eli Tyre tried this theory of change and reported that it mostly didn't work, after a bunch of good faith efforts from a bunch of people. I'm not sure what's next. I personally believe more progress could be made if people were trained in consciousness of abstraction (per Korzybski), but this is a sufficiently difficult ask as to fail people's priors on how much effort to spend on novel skills with unclear payoffs. And a theory of change that has a curiosity stopper that halts on "other people should do this thing that they clearly aren't going to do" is also not very useful.

Yeah, the trapped priors thing is pretty worrying to me too. But I'm confused about the opposing interventions thing. Do charter cities, or labor unions, rely on donations that much? Is it really so common for donations to cancel each other out? I guess advocacy donations (for example, pro-life vs pro-choice) do cancel each other out, so maybe we could all agree that advocacy isn't charity.

Priors are not things you can arbitrarily choose and then throw your hands up and say "oh well, I guess I just have stuck priors, and that's why I look at the data and conclude neoliberal-libertarian economics is mostly correct and socialist economics is mostly wrong". To the extent you say this, you are not actually looking at any data; you are just making up an answer that sounds good, and then when you encounter conflicting evidence, you're stating you won't change your mind because of a flaw in your reasoning (stuck priors), and that's ok, because you have a flaw in your reasoning (stuck priors). It's a circular argument!

If this is what you actually believe, you shouldn't be making donations to either charter cities projects or developing unions projects[1]. Because what you actually believe is that the evidence you've seen is likely under both worldviews, and if you were "using" a non-gerrymandered prior or reasoning without your bottom-line already written, you'd have little reason to prefer one over the other.

Both of the alternatives you've presented are fools who in the back of their minds know they're fools, but care more about having emotionally satisfying worldviews instead of correct worldviews. To their credit, they have successfully double-thought their way to reasonable donation choices which would otherwise have destroyed their worldview. But they could do much better by no longer being fools.


1. Alternatively, if you justify your donation anyway in terms of its exploration value, you should be making donations to both.

dr_s:

To be fair, any beliefs you form will be informed by your previous priors. You try to evaluate evidence critically, but your critical sense was developed by previous evidence, and so on and so forth back to the brain you came out of the womb with. Obviously as long as your original priors were open minded enough, you can probably reach the point of believing in anything given sufficiently strong evidence - but how strong depends on your starting point.
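A toy numerical illustration of that last point, assuming (purely for the sake of the example) simple conjugate Beta-Binomial updating:

```python
def posterior_mean(prior, successes, failures):
    # Conjugate Beta-Binomial update: observed counts are added to the
    # prior's pseudo-counts, and the posterior mean is a / (a + b).
    a, b = prior
    return (a + successes) / (a + b + successes + failures)

skeptic = (2, 8)    # Beta(2, 8): starts out expecting the claim is probably false
believer = (8, 2)   # Beta(8, 2): starts out expecting the claim is probably true

# The same shared evidence, 12 successes in 20 trials, read through different priors:
print(posterior_mean(skeptic, 12, 8))     # ~0.47
print(posterior_mean(believer, 12, 8))    # ~0.67

# A hundred times more of the same evidence, and the two nearly agree:
print(posterior_mean(skeptic, 1200, 800))     # ~0.598
print(posterior_mean(believer, 1200, 800))    # ~0.601
```

Sufficiently open-minded priors do end up in the same place; the starting point just determines how much evidence it takes to get there.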

Though this is only what Bayesianism predicts. A different theory of induction (e.g. one that explains human intelligence, or one that describes how to build an AGI) may not have an equivalent to Bayesian priors. Differences in opinions between two agents could instead be explained by having had different experiences, beliefs being path dependent (order of updates matters), or inference being influenced by random chance.

dr_s:

I'm not sure how that works. Bayes' theorem, per se, is correct. I'm not talking about a level of abstraction in which I try to define decisions/beliefs as symbols, I'm talking about the bare "two different brains with different initial states, subject to the same input, will end up in different final states".

Differences in opinions between two agents could instead be explained by having had different experiences, beliefs being path dependent (order of updates matters), or inference being influenced by random chance.

All of that can be accounted for in a Bayesian framework though? Different experiences produce different posteriors of course, and as for path dependence and random chance, I think you can easily get those by introducing some kind of hidden states, describing things we don't quite know about the inner workings of the brain.

All of that can be accounted for in a Bayesian framework though?

I mean that those factors don't presuppose different priors. You could still end up with different "posteriors" even with the same "starting point".

An example of an (informal) alternative to Bayesian updating that doesn't require subjective priors is Inference to the Best Explanation. One could, of course, model the criteria that determine the goodness of explanations as a sort of "prior". But those criteria would be part of the hypothetical IBE algorithm, not a free variable like in Bayesian updating. One could also claim that there are no objective facts about the goodness of explanations and that IBE is invalid. But that's an open question.

Whenever I've seen people invoking Inference to the Best Explanation to justify a conclusion (as opposed to philosophising about the logic of argument), they have given no reason why their preferred explanation is the Best; they have just pronounced it so. A Bayesian reasoner can (or should be able to) show their work, but the ItoBE reasoner has no work to show.

These can often be operationalized as: 'How much of the variance in the output do you predict is controlled by your proposed input?'

IBE arguments don't exactly work that way. The argument is usually that one person is arguing that some hypothesis H is the best available explanation for the evidence E in question, and if the other person agrees with that, it is hard for them to not also agree that H is probably true (or something like that). Most people already accept IBE as an inference rule. They wouldn't say "Yes, the existence of an external world seems to be the best available explanation for our experiences, but I still don't believe the external world exists" nor "Yes, the best available explanation for the missing cheese is that a mouse ate it, but I still don't believe a mouse ate the cheese". And if they do disagree about H being the best available explanation, they usually feel compelled to argue that some H' is a better explanation.

What is the measure of goodness? How does one judge what is the "better" explanation? Without an account of that, what is IBE?

Without an account of that, IBE is the claim that something being the best available explanation is evidence that it is true.

That being said, we typically judge the goodness of a possible explanation by a number of explanatory virtues like simplicity, empirical fit, consistency, internal coherence, external coherence (with other theories), consilience, unification etc. To clarify and justify those virtues on other (including Bayesian) grounds is something epistemologists work on.

dr_s:

I'd definitely call any assumption about which forms preferred explanations should take a "prior". Maybe I have a more flexible concept of what counts as Bayesian than you, in that sense? Priors don't need to be free parameters; the process has to start somewhere. But if you already have some data and then acquire some more data, obviously the previous data will still affect your conclusions.

The problem with calling parts of a learning algorithm that are not free variables a prior is that then anything (every part of any learning algorithm) would count as a prior. So even the Bayesian conditionalization rule itself. But that's not what Bayesians consider part of a prior.

I think charter cities are a questionable idea, even though I'm pro free markets. It seems that the sort of constitutional change and stability required for a charter city is no easier to achieve than the kind of constitutional change and stability required for a free market in the entire country. I don't think trying either in developing countries as an outsider is a good use of anyone's resources.