This is adapted from a segment of a talk I gave last year, primarily the portion at 6:09:00-6:11:00 of this stream. The example in this article originates from my paper "A Language for Counterfactual Generative Models," specifically Appendix A, which also features code for this example. I originally learned the causal hierarchy from Chapter 7 of Pearl's book "Causality."

This post is written for Chris Leong's bounty for an explanation of counterfactuals.

Let's play a game.

In this game, I'm going to roll a die, and you're going to try to predict the roll.

We'll compare the numbers.

If your guess is within 1 of the true number, you win.

Otherwise, you lose.


This is a game of chance and decision. I can pose many questions about what would happen, and they'll be easy to analyze. And because the questions are easy, it's easier to analyze the questions themselves. It turns out that some of them could be solved by a smart bean-counter with no understanding of the game rules, while others require understanding things that may never occur naturally. These question types form the causal hierarchy, culminating in "what would have happened" questions, or counterfactuals. By the end of this article, you'll be able to explain exactly what a counterfactual is and how to compute one. You'll understand why counterfactuals are hard to answer, even though they are conceptually as straightforward to compute as the probability formulas you learned in school.



Say I watch old Mary playing this game at the casino, over and over. Every round, I see her placing the same bet on her lucky number: 6. I can make predictions: if she plays long enough, the proportion of times she will win will be very close to 1⁄3. This is prediction.
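To make this concrete, here is a minimal sketch in Python of what "prediction by counting" looks like. The names are my own, not from the paper; the code just mirrors the rules of the game above.

```python
import random

def roll_die():
    """The casino's die: a uniform roll from 1 to 6."""
    return random.randint(1, 6)

def wins(guess, die):
    """You win if your guess is within 1 of the roll."""
    return abs(guess - die) <= 1

# Level 1: watch Mary (who always plays 6) and count what happens.
rounds = 100_000
observed = sum(wins(6, roll_die()) for _ in range(rounds))
print(observed / rounds)  # ~0.333: she only wins on a 5 or a 6
```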

We can refine this a little too. Say I'm no longer watching her across the room, but I'm instead looking at recordings of the game taken on a high-speed camera, playing them back frame by frame. As the die moves closer to a standstill, I can make better and better predictions about whether Mary will win. Or I can make abductions about where the die was a moment earlier. Each frame I see is an observation. And I can do all of this knowing nothing about the world other than having watched the game over and over. We call this Level 1 of the causal hierarchy.
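Abduction at this level is just conditioning run backwards through the process. As a minimal sketch, again with names of my own choosing: given only that Mary, playing 6, lost a round, rejection sampling recovers what the die must have been.

```python
import random
from collections import Counter

def roll_die():
    return random.randint(1, 6)

def wins(guess, die):
    return abs(guess - die) <= 1

# Abduction by rejection sampling: among rounds where Mary (playing 6)
# lost, what was the die? Losing rules out 5 and 6.
losses = Counter()
for _ in range(100_000):
    die = roll_die()
    if not wins(6, die):
        losses[die] += 1

total = sum(losses.values())
for die in sorted(losses):
    print(die, round(losses[die] / total, 3))  # ~0.25 each for 1, 2, 3, 4
```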

But maybe I can do more than just watch. I can also intervene. Suppose one day I get tired of watching Mary lose, and I walk up to her and convince her to abandon her (not-so) lucky number and start playing 2. And I smile because I'm about to see her win rate climb up to 1⁄2.

But wait! How did I do that? We have never seen her play a number other than 6 before. Perhaps I've never seen anyone else play this game, so I have no examples of this number being played. If I didn't know the rules, she could be playing any of a bajillion games that all work the same when she plays 6, but work completely differently when she plays a different number. How can I make such a prediction when I have no data to support it?

But thankfully, we do know the rules, so we can do things that can't be done from pure observation alone. The die is random, but the game is deterministic. Once Mary has picked her number and the die is rolled, we know everything about what is going to happen. The game is not just a natural process that I can observe; it is a system with rules that I understand. If we know the inputs, then we know the outputs, and so we can make predictions about situations we've never seen before. By getting Mary to do something she would not have done naturally, I've performed an intervention or an action, which is Level 2 of the causal hierarchy.
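Here's the same point as a sketch. Because the mechanism is known, the interventional distribution can be computed by enumeration, even for a guess nobody has ever played. The function below is my own illustration, not anything from the paper.

```python
# Level 2: we *set* Mary's guess and let the die vary. Knowing the
# rules lets us compute this exactly, with no data on the new guess.
def win_probability_under_do(guess):
    rolls = range(1, 7)  # the die is uniform on 1..6
    return sum(abs(guess - die) <= 1 for die in rolls) / 6

print(win_probability_under_do(6))  # 1/3, matching what we observed
print(win_probability_under_do(2))  # 1/2, a prediction with no data behind it
```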

We can conceptualize an intervention at the fine-grained level as well. Perhaps I have a device that can give a very well-timed puff of air to the die as it's rolling. It'll be harder to predict what will happen, but we can definitely do better than blind guessing.

Now I'm beaming as Mary has taken my advice, shouting "2" with every play. One round, I don't see the die clearly, but I do see Mary losing as the dealer takes her chips. A man next to me comments: "Shoulda played 4 instead. Then she'd've prolly' won." What does that mean?

It's pretty tough to talk about things that could have happened. This kind of question is a counterfactual and is the hardest of all.

Let's work it out. I know (observed) that she played 2 and lost. If the die was 1, 2, or 3, she would have won, so from this observation I can abduce information about the earlier event, and know the die must have been a 4, 5, or 6. Say I went back in time and intervened by telling her to play 4. Then in the two cases where the die is a 4 or 5, she wins; else she loses. Her chance of winning would have been 2⁄3.
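Here is the same computation as a sketch, with the three steps labeled. The names are mine; the arithmetic is exactly the argument above.

```python
# "She played 2 and lost. What if she had played 4?"

# Step 1, abduction: which rolls are consistent with the factual world?
consistent_rolls = [d for d in range(1, 7) if not abs(2 - d) <= 1]
# -> [4, 5, 6]: the only rolls on which a guess of 2 loses

# Step 2, action: change her guess, keeping the abduced roll fixed.
# Step 3, prediction: replay the game rules on each consistent roll.
won = [d for d in consistent_rolls if abs(4 - d) <= 1]
print(len(won) / len(consistent_rolls))  # 2/3: wins on 4 or 5, loses on 6
```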


The reasoning we used to determine what would have happened had she played a different number looks very similar to how we predicted the result of my earlier intervention. But it's actually different and requires much more information. Say the dealer decides to roll with a different hand based on the numbers the players pick, not because it makes any difference, but just because he feels like it. Now I'd have no special information to tell Mary her chances had she played 4 vs. 2; it would have been a different roll. These two processes (the hand-switching and the hand-constant dealer) look identical. I could find a thousand Marys at different tables and give them every command I could think of. If they handed me logs afterwards, I'd see the ones I told to play 1 or 6 winning 1⁄3 of the time, and the rest winning 1⁄2 of the time. I'd even see the same distribution of die rolls. But I'd have no idea which of them were facing hand-switching dealers and which were not.
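Here's a minimal sketch of that indistinguishability. Modeling the dealer's two hands as two pre-rolled dice is my own way of formalizing the story. Both dealers produce a uniform roll under every intervention, so they agree on all Level 1 and Level 2 questions, yet their counterfactuals differ.

```python
import random

def sample_background():
    """The unseen inputs: one pre-rolled die per dealer hand."""
    return {"left": random.randint(1, 6), "right": random.randint(1, 6)}

def die_constant(guess, bg):
    """Hand-constant dealer: always rolls with the left hand."""
    return bg["left"]

def die_switching(guess, bg):
    """Hand-switching dealer: picks a hand based on the guess."""
    return bg["left"] if guess <= 3 else bg["right"]

def counterfactual_win_rate(die_fn, trials=200_000):
    """P(she'd have won playing 4 | she played 2 and lost)."""
    won = total = 0
    for _ in range(trials):
        bg = sample_background()
        if abs(2 - die_fn(2, bg)) <= 1:
            continue  # abduction: keep worlds where she played 2 and lost
        total += 1
        won += abs(4 - die_fn(4, bg)) <= 1  # same background, new guess
    return won / total

print(counterfactual_win_rate(die_constant))   # ~0.67
print(counterfactual_win_rate(die_switching))  # ~0.50: a fresh roll
```

No log of observations or experiments distinguishes `die_constant` from `die_switching`; only the counterfactual query does.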

To answer interventional questions, we needed to know only the probabilities of win and loss after telling Mary to play a specific number (the "interventional distribution"), which we could get by knowing the game rules and the distribution of die rolls. To answer a counterfactual question, we need information about how the game chooses and uses its random elements, so that we can know what stays the same across runs where we change some things and hold others fixed, such as whether we get "the same" die roll or a different one. This is what makes counterfactual questions strictly harder than interventional ones, and it is why they form Level 3 of the causal hierarchy.

Overall, computing counterfactuals is a three-step process. First we look at what happened (the "factual world") and infer what we can about background facts. That's abduction. Then we mentally construct a new world where a few of those facts have changed. That's action or intervention. Then we simulate this "counterfactual world" forward and deduce what would have happened. That's prediction. And that's how counterfactual inference breaks down into three smaller pieces (or just two, actually, when you consider that abduction is just prediction run backwards). I've worked on a programming language for counterfactuals, by the way, and that's actually how it works: we designed probabilistic choice, conditioning/prediction, and intervention, and got counterfactuals for free. (Also the probability part is optional.)
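This isn't our language's actual interface; as a hypothetical sketch, here is the whole abduce-act-predict recipe as one generic rejection-sampling routine in Python, with helper names of my own invention.

```python
import random

def counterfactual(model, sample_noise, consistent, intervention, n=100_000):
    """Generic abduce-act-predict by rejection sampling.

    model(inputs, noise) -> outcome   (the deterministic rules)
    sample_noise()       -> the unseen random inputs
    consistent(noise)    -> did this noise produce the factual world?
    intervention         -> the changed inputs for the new world
    """
    outcomes = []
    for _ in range(n):
        noise = sample_noise()
        if not consistent(noise):                    # step 1: abduction
            continue
        outcomes.append(model(intervention, noise))  # steps 2-3: act, predict
    return sum(outcomes) / len(outcomes)

# Mary's query: she played 2 and lost; what if she had played 4?
p = counterfactual(
    model=lambda guess, die: abs(guess - die) <= 1,
    sample_noise=lambda: random.randint(1, 6),
    consistent=lambda die: not abs(2 - die) <= 1,
    intervention=4,
)
print(p)  # ~2/3
```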


In summary: Questions about a probabilistic process like a game of chance form a "causal hierarchy" of three levels, each successively harder and requiring more information to answer. The three levels are:

  1. Observation/prediction: You watch the process passively, see some parts, and infer what the unseen parts are likely to be. Everything in this level can be solved by counting what happens naturally, without regard for relative timing. We can view the process as a black box, taking in unseen or random parts of the world and spitting out outcomes.
  2. Action/intervention: You watch the process, intervene somehow, and then predict the outcome. This requires knowing how the possible outcomes respond to changes in input. These questions generally can't be answered by passive observation and counting, but they don't require knowing anything about the randomness in the process other than the proportions of different outcomes under different actions. We can view the process as a set of different black boxes, one for each possible intervention.
  3. Counterfactual: Like interventional questions, except the intervention occurs before some of the observed portions. This requires reasoning backwards through the process and into its inputs, including the random or unseen parts, so as to predict what would happen when changing some of the seen inputs while keeping the unseen ones the same. The process can no longer be viewed as a black box.

So, that's the causal hierarchy. Now we know what a counterfactual is and how to answer one, so we know exactly what needs to be done to predict whether Mary would have won had she played differently.

Though, for this game at least, the real winning move is not to play.

2 comments:

Many years ago a mentor told me that critics of abduction point out that induction can make it redundant by making credences in hypotheses about facts, and that this is in fact more aligned with the idea that you don't have a credence in the facts directly; instead you have a credence in some model of the facts. I haven't spent any time in the literature since then. Overall, do you think abduction is underrated? I do a lot of skimming of lesswrong posts about logic and probability and so on and basically never see it.

I'm having a little trouble understanding the question. I think you may be thinking of either philosophical abduction/induction or logical abduction/induction.


Abduction in this article is just computing P(y | x) when x is a causal descendant of y. It's not conceptually different from any other kind of conditioning.

In a different context, I can say that I'm fond of Isil Dillig's thesis work on an abductive SAT solver and its application to program verification, but that's very unrelated.