Lauren Lee responded on Facebook to my post about locating yourself as an instance of a class:

Riffing on this LW post by Abram Demski (https://www.lesserwrong.com/…/placing-yourself-as-an-instan…)

In particular this paragraph:

<< For example, if a person is trying to save money but sees a doodad they'd like to buy, the fool reasons as follows: "It's just this one purchase. The amount of money isn't very consequential to my overall budget. I can just save a little more in other ways and I'll meet my target." The wise person reasons as follows: "If I make this purchase now, I will similarly allow myself to make exceptions to my money-saving rule later, until the exception becomes the rule and I spend all my money. So, even though the amount of money here isn't so large, I prefer to follow a general policy of saving, which implies saving in this particular case." A very wise person may reason a bit more cleverly: "I can make impulse purchases if they pass a high bar, such that I actually only let a few dollars of unplanned spending past the bar every week on average. How rare is it that a purchase opportunity costing this much is at least this appealing?" **does a quick check and usually doesn't buy the thing, but sometimes does, when it is worth it** >>

The above is a rough outline for one set of moves you can make, but there are ways you can enhance the decision process even further.
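The "very wise person" move in the quoted paragraph amounts to a threshold rule. Here is a minimal sketch of one way to formalize it; the function name, the weekly budget, and the price-proportional bar are all my own hypothetical illustrations, not anything specified in the original.

```python
# A toy version of the "high bar" impulse-purchase policy: only let a
# few dollars of unplanned spending past the bar per week, and demand
# proportionally more appeal from pricier items. All numbers here are
# made up for illustration.

def should_buy(price, appeal, weekly_budget=5.0, spent_this_week=0.0):
    """Allow an impulse purchase only if it clears a high bar.

    appeal: subjective score in [0, 1] for how appealing the item is.
    """
    remaining = max(weekly_budget - spent_this_week, 0.0)
    if price > remaining:
        return False  # would blow the weekly unplanned-spending budget
    # The bar rises with price: under a $5 budget, a $1 item needs
    # appeal > 0.2, while a $4 item needs appeal > 0.8.
    bar = min(price / weekly_budget, 1.0)
    return appeal > bar
```

The point of the sketch is just that "usually doesn't buy, but sometimes does" falls out naturally: most items won't clear a bar calibrated this tightly.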

One way you can generally increase your "wisdom" is to notice what specific environmental cues are /relevant/ to WHY the impulse occurred in the first place. I think these cues are initially not obvious, at least in my experience.

To illustrate, say I walk into a random store.

What factors determine the likelihood that I'm going to want to impulsively buy a thing?

There are times when I walk into a store without a plan and can be confident I'll walk out without having purchased anything. Other times, I sense that I'll end up purchasing something. What is giving me these clues?

The situation of me being in a grocery store while I'm hungry is a different /class/ of situation from me being in a grocery store while I'm full. And so I want to treat these classes as somewhat separate when making decisions.

Somehow, at some point, I figured out that this was a different class. Maybe by noticing a pattern over time: I bought more stuff in grocery stores when I was hungry than when I was full.

The question is how to figure out one's own patterns / cues very generally, so they cover a wide range of situations.

I suspect there's a bunch of tools for this, but one I made up just now:

At the moment of an environmental switch (e.g. I just walked into a store, I just saw something cool, I just noticed a desire arise), make a prediction about your behavior over the next ~15-60 minutes. (This will hopefully have the effect of making you check in with your current state, taking inventory of all the variables in that moment. With those variables, even before walking further into the store or considering a decision, you should be able to predict the outcome. That is my claim, anyway.)
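The exercise above is essentially a prediction log: record what you predict at the moment of the switch, resolve it later, and see how often you were right. A minimal sketch, with all cue and prediction strings invented as examples:

```python
# Hypothetical sketch of the prediction exercise: at each environmental
# switch, log a prediction about your own behavior, resolve it later,
# and track your hit rate over time.

from dataclasses import dataclass, field

@dataclass
class PredictionLog:
    records: list = field(default_factory=list)

    def predict(self, cue, prediction):
        # e.g. cue="walked into grocery store, hungry",
        #      prediction="will buy unplanned snacks"
        self.records.append(
            {"cue": cue, "prediction": prediction, "outcome": None}
        )
        return len(self.records) - 1  # id for resolving later

    def resolve(self, pred_id, came_true):
        self.records[pred_id]["outcome"] = came_true

    def accuracy(self):
        resolved = [r for r in self.records if r["outcome"] is not None]
        if not resolved:
            return None
        return sum(r["outcome"] for r in resolved) / len(resolved)
```

A low hit rate on some class of cue is exactly the signal that you're missing a relevant Trigger for that class.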

In most situations, my own behavior shouldn't surprise me. I have most of the relevant information about what my behavior will be. Humans are mostly just TAP (trigger-action pattern) machines. If you know what relevant Triggers to be paying attention to, you should be able to make accurate predictions about the output Actions.

And yes, you can change the output Actions, but mostly by changing what Triggers you pay attention to and how you weight them, not by trying to reprogram your Actions in response to the exact same state.
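The TAP-machine picture can be made concrete as a lookup from recognized triggers to actions. This is a toy model with made-up triggers and actions; the one real claim it encodes is that an unrecognized trigger yields no prediction at all.

```python
# Toy model of "humans as TAP machines": behavior as a lookup from
# recognized triggers to actions. Changing behavior means changing
# which triggers you attend to (the keys), not rewriting the response
# to an identical state. All triggers and actions here are invented.

TAPS = {
    ("grocery store", "hungry"): "impulse-buy snacks",
    ("grocery store", "full"): "buy only list items",
    ("saw cool gadget", "bored"): "impulse-buy gadget",
}

def predict_action(recognized_triggers):
    """Predict the output Action from the Triggers you noticed.

    If the relevant trigger isn't recognized, the model can't predict,
    mirroring the point that you can't react to a thing until some
    part of you recognizes that it is a thing.
    """
    for trigger, action in TAPS.items():
        if set(trigger) <= set(recognized_triggers):
            return action
    return "no prediction (trigger not recognized)"
```

Note that "hungry" vs. "full" here are the two different /classes/ of grocery-store situation from earlier: same store, different trigger, different predicted action.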

(If you've ever found yourself disappointed in yourself or surprised by yourself, you should try to figure out what the Trigger was that caused it.)

To react to a thing, some part of you needs to recognize that it is a thing.

In Retroactive Readmission of Evidence, Conor says:

<< Eventually, though, the pattern makes itself clear, and catches my mental eye, and I'm like Gah, this thing! It's only on iteration number seven that I realize that, a) that thing is really annoying, and b) it's happened six times before, too. >>

In Why and How to Name Things, Conor discusses how a proliferation of named things broadens what we can think about.

In Outside View as the Main Debiasing Technique, I discuss how simply knowing about a bias can allow you to wake up when the bias is about to happen, and course-correct. Grognor and I expressed a similar sentiment in A List of Nuances. Grognor writes about why lumping errors are easier to make than splitting errors.

All of these things assume that potential categories come from somewhere, but don't say much about where they come from. You can take outside view all you want and never get anywhere if you have the wrong categories. But how do we get better categories?

Lauren Lee suggests we make predictions and watch for what drives our behavior. I agree with her that there's probably a bunch of advice for this. But, I suspect none of it will stick very well until we have a good name for the problem. What do we call the problem of cutting the world at its joints?

Lots of bad names come to mind. Reification. Ontology. Class formation, concept formation, category formation. Object segmentation. Factoring the world. For now, I think all of them are trumped by "That's a thing!".

2 comments:

Anything wrong with "ontology" to you other than that it's not very precise, as in this process is part of ontology but just saying "ontology" doesn't really say anything about this particular process?

"Ontology" also has the connotation of the big questions of what kind of stuff the world is; what's called the top-level ontology when people are being more precise. I'm not talking about questions of putting the physical and the mental in the right relation to each other and so on.