Sometimes bad things happen to good people. Maybe even most of the time; it's hard to know, and even harder to create common knowledge on the topic, because if it is so we don't want to know, and we tell stories to cover it up when it happens.

Back in the golden age of psychology, long before the replication crisis, the scientific method was understood very differently from today. Effect sizes were expected to be visible to the naked eye, not just statistically significant, and practices such as IRBs, peer review and even the use of control groups were much more optional. For instance, the original Milgram experiment lacked a control group, but instead of saying that nothing had been learned, or suppressing it on dubious ethical grounds, the psychological community investigated an enormous number of variations on the Milgram experiment in order to tease out the impacts of slight changes, which were statistically compared with one another.

In 1965 Melvin J. Lerner discovered that experimental subjects disliked the people whom they saw subjected to electric shocks. This effect was alleviated when the experimental subjects were able to offer the presumed victims appropriate compensation. Apparently, they wanted to make the situation fair. Unfortunately, if they couldn't make it fair with compensation they wanted to make it fair by claiming that the victim deserved it. Lerner and others followed up with a series of investigations of victim blaming. They discovered that the phenomenon is pervasive, robust and measurable via psychological surveys. When readers are given a story about a date, higher scores in 'Belief in a just world' are associated with a greater tendency to see whatever ending they are given as following with high probability from the earlier events, even when the earlier events are identical and only the endings differ.

"Might people on the internet sometimes lie?... If you’re like me, and you want to respond to this post with “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?”, then before you comment take a second to ask why the “they’re lying” theory is so hard to believe. And when you figure it out, tell me, because I really want to know."

-- Slate Star Codex, 12/12/2016

One advantage of just world belief is that it makes it much easier to believe anything at all. If lying is common, and is typically rewarded, the supposed facts out of which one makes sense of the world are called into much greater uncertainty. If you can't trust the apparently expert authorities who you grow up with to inculcate you into the truth, or the best approximation available, the process of seeking career advancement decouples almost entirely from the process of understanding, and the world appears far less knowable.

A number of the scientific concepts discussed in this year's essays seem to me to be specific corrections to the Just World Hypothesis. Stigler's Law of Eponymy, for instance, could be seen as the assertion that mathematical attribution is unjust, even on those occasions when historical scholarship enables objective investigation. The Fundamental Attribution Error is the error of believing that in the typical case the career and social paths that lead to power will select for and cultivate justice, rather than selecting against it, and thus that the right person could ascend to power without acting unjustly, or perhaps that someone could act unjustly for years in order to ascend to power only to turn around and behave justly, due to dispositional factors, once power is achieved. As an intellectual, or in almost any social context involving discussion, a person without the need for closure might appear hopelessly uninformed, uncooperative, and generally incapable of participation.

Without a just world, the hope of science, to gradually advance a collaborative intellectual project by updating a shared set of beliefs, appears chimerical. One might face, for instance, a crisis of replication during which the whole content of one's field, the fruit of many lifetimes of work, evaporates despite the community apparently making the strongest efforts to avoid type I errors. In such a situation, one might be particularly determined to avoid the type I error of disbelieving in a just world, and with it the possibility of joint intellectual endeavor. If this is the case, deliberate ignorance of an unjust world, rather than Bayesian updating of one's belief on the matter, might turn out to be the dominant strategy for participation in an intellectual community, whether it be an academic profession, political party, business or church.

"See no evil, hear no evil, speak no evil"

-- 17th-century carving over a door of the famous Tōshō-gū shrine in Nikkō, Japan

“In a country well governed, poverty is something to be ashamed of. In a country badly governed, wealth is something to be ashamed of.”

-- Confucius

"If you see fraud and don't shout fraud you are a fraud", but in a different context, "You may not be able to change the world but can at least get some entertainment and make a living out of the epistemic arrogance of the human race."

-- Nassim Nicholas Taleb

As we can see above, Asia's major spiritual traditions clash over whether to accept an unjust world graciously, compassionately turning a blind eye to the unjust, or to fight against it proudly with full knowledge that the way to power lies elsewhere. Our own intellectual tradition seems equally divided, even within a single voice. Signaling intensifies our difficulties, since insofar as others expect you to act in a self-interested manner, you may have to signal a belief in a just world in order to be listened to at all. We should expect both the just and the unjust to collude in maintaining a belief in a just world, whatever the evidence to the contrary. Essays that violate that expectation, such as this one, are Bayesian evidence for something, but one may have to think very hard in order to know what.

Comments

The non-participation is the thing that stands out to me here.

I rely on research literature a lot, despite knowing that a bunch of it will turn out to be false. My rationale is something like "if I can't trust these sources, then I won't know much of anything at all." Similar stuff is going on with motivations like "if this method/tool that we've relied on is broken, then we'll be able to do a lot less science." Or even "if I didn't believe this startup had a chance of success, then I wouldn't be able to work on it." Not strictly logical reasoning, but maybe a reasonable EV calculation; you're betting on the viability of a method by continuing to use it, in situations where the failure of that method would mean everybody loses, so there is little or no way to profit by betting on failure.

You can't profit by betting on civilizational collapse, or radical skepticism, or other Big Bads. Even if you're right, you still lose.

Which means we're all underestimating the risk of Big Bads -- even when we're great at the sort of rationality that good scientists and traders practice, where they're eager to point out other people's wrongness and eager to avoid being proven wrong themselves. You can profit by disproving your neighbor's theory, but you can't profit by showing that all theories are founded on sand.

I see this as a result of two things, “Goodhart’s law” and “wireheading by default”.

Goodhart’s law means that if you’re aiming for “just”, as operationalized in a certain way, you will start to get things that are unjust which nevertheless appear “just” through that limited lens. In order to combat this, you gotta keep your eye on the prize, and make sure you can find the injustices and update your definitions as fast as things get out of whack. If you’re more perceptive than the people setting the target, the world will always seem unjust from your perspective, but that just means you have something to offer if you can show people how their definitions fail to capture the value.
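
To make that failure mode concrete, here's a toy sketch (the setup and numbers are entirely made up): the true goal needs two components, the operationalized metric only sees one, and whatever wins under the metric is exactly the thing that neglected what the metric misses.

```python
# Toy sketch of Goodhart's law (hypothetical setup): each actor splits one unit
# of effort between the component of "justice" the metric measures and the
# component it misses. The true goal needs both; the proxy only rewards the
# measured half, so the winner under the proxy is exactly the actor who put
# nothing into what the lens can't see.

actors = [i / 10 for i in range(11)]  # fraction of effort spent on the measured component

def proxy_score(x: float) -> float:
    return x                  # what the operationalized metric sees

def true_justice(x: float) -> float:
    return x * (1 - x)        # requires both components; all-in on either extreme is worthless

winner_by_proxy = max(actors, key=proxy_score)
winner_in_fact = max(actors, key=true_justice)

print(f"selected by the proxy: {winner_by_proxy:.1f} -> true justice {true_justice(winner_by_proxy):.2f}")
print(f"actually best split:   {winner_in_fact:.1f} -> true justice {true_justice(winner_in_fact):.2f}")
```

Updating the definition so the metric can also see the second component is the “keep your eye on the prize” move.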

Pain is an error signal that tells you when something is wrong, but “out of the box” people have trouble distinguishing the pain signal from the thing that it is signalling. If you can’t tell the difference between “no error signals because everything is fine” and “no error signal because I’m looking the other way”, then a lack of an error signal is all the proof you need that things are good and you can relax. The moment you start seeing yourself get in trouble from failing to pay attention to error signals, avoiding those error signals by looking away from things stops seeming like such a good idea. Instead of plugging your ears, you want to hear what people are saying about you. After enough instances of hurting yourself worse because of pushing through physical pain, you ask the doctor if he’s got anything to make the pain *worse*.

The problem with a lot of these things like “victim blaming” or the practices that led to the replication crisis is that the person doing the not-looking isn’t the one directly paying the price. In order for people to start noticing when their world isn’t just in the way that they want to believe it is, they have to see that not seeing it is even worse for them and the things that matter to them, and that means making sure that their unfairness is visible enough that others can be expected to punish it.

For context, this is a response to this year's Edge essays: https://www.edge.org/responses/what-scientific-term-or%C2%A0concept-ought-to-be-more-widely-known . Not sure if posted by Michael (which is Good News!) or on his behalf.

The question then immediately becomes: pump against this, or take advantage of it?

Please spell this one out.

I mean, I was mainly reiterating the point you make in your final paragraph. It's something like, I can imagine a sort of pessimistic, cynical perspective in which the idea that we can ever transcend false impressions of just universes is laughable, and awareness of this dynamic (and skill at exploiting it) simply becomes a part of the toolkit, and you hope that good people are more aware and more skilled because it's a symmetric weapon that makes winners win more and losers lose harder.

Or you could imagine someone with actual optimism that broken systems can be done away with entirely, and who thinks it's worthwhile to spend significant capital (social and otherwise) to make sure that no one ever forgets that it's unjust and no one can get away with acting as if it were just, while maybe also doing their best to try various experiments at installing new modes of understanding.

There seems to be a trap, wherein it's always easy to postpone changing the overall world order, because in any given instance, with any given set of limited goals, it's usually better to work within the system as-is and use realism and pragmatism and so forth. But on a policy level that just props the thing up forever.

What I'm not sure about is what balance to strike, at a policy level, between using the levers and knobs that the universe and the culture have provided, and trying to carve new control features into the existing system.

Alas, I had no clear or interesting thoughts, just the above expansion on the question you'd already gestured toward.

I'm pretty sure there's no good answer to this, yet. I have my own intuitions, which are vastly different from everyone else's - but the general pattern of what actually happens seems to be "keep going until some kind of cultural tipping point happens, the current regime loses the Mandate of Heaven, and a revolution puts them all up against the wall."

Some people are really good at telling when to foment a revolution, but this seems regrettably uncorrelated with being good at telling whether the revolution is justified, or will lead to better results.


I am interested in how historically recent this feature is.

One feature that stands out about ancient religions (e.g. Egyptian, Viking) is that the fables don't consistently reward the good and punish the bad. They also have this aspect that (for me on first reading) was very surprising: after death you go to different underworlds based not on merit or on accepting the one true faith, but instead on fairly random, contingent things largely outside your own control. For the Vikings, Valhalla is only for those who die in battle. In Egypt, while you cross over to the next world there is a wasteland where hyenas might eat your soul if you are simply unlucky (although this risk can be mitigated with certain charms and things).

My theory is that the "just world hypothesis" is not a feature of human nature at all, but a cultural meme that is currently doing quite well. Maybe we got it from the Greeks (via the Christians); Greek myths don't tend to feel culturally alien in the same way. (They punish people who are idiots (Icarus), or greedy (Midas). Way more compatible with our standards.)

To some extent, I feel as if one must be homeless to have the perspective to properly understand this (and still, there'll be some survivorship bias). To stand out in the darkness and look in at the light and know it is a lie; that there is merely a spectrum between Hollywood movies in which everyone finds true love, and restaurants you might visit which are disproportionately full of healthy, happy people rather than those with serious problems. To know that the people standing in the light cannot see the lie. To be constantly told that you could just put in a little effort and get a job, support yourself, find happiness. (Not to say that this is necessarily wrong; just systematically overestimated.) To see that society only optimizes that which has effective feedback loops (with incentives), and so it can't even see the dark, not properly.

It isn't even a person deceiving you about the way the world works; even without the social incentives for the just-world hypothesis which you discuss, the inspection paradox will produce the illusion.
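
To make that last point concrete, here's a minimal sketch (all numbers invented): if how well people are doing correlates with how often they're out in public, then the people an observer runs into are sampled in proportion to their outings, and the visible average looks better than the true one with nobody lying to anybody.

```python
import random

# Inspection-paradox sketch (all numbers invented): people who are doing better
# go out more often, and an observer in a public place samples people in
# proportion to how often they are out. The observed average wellbeing is
# therefore higher than the population average, with no deception involved.

random.seed(0)
people = []
for _ in range(100_000):
    wellbeing = random.gauss(0, 1)        # true wellbeing, population mean ~0
    outings = max(0.0, 1 + wellbeing)     # rough assumption: better-off people are out more often
    people.append((wellbeing, outings))

population_mean = sum(w for w, _ in people) / len(people)

# What a restaurant-goer sees: each person weighted by how often they are out.
total_outings = sum(o for _, o in people)
observed_mean = sum(w * o for w, o in people) / total_outings

print(f"population mean wellbeing:             {population_mean:+.2f}")  # close to 0
print(f"mean wellbeing of people you run into: {observed_mean:+.2f}")    # noticeably higher
```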