In many domains, you can get better primarily by being correct more frequently. If you’re managing a team, or trying to improve your personal relationships, it’s very effective to improve your median decision. The more often you’re right, the better you’ll do, so people often implicitly optimize their success rate.

But what about competitive domains, like prediction markets, investing, or hiring?

One of the most common mistakes I see people make is trying to apply non-competitive intuitions to competitive dynamics. But the winning strategies are very different! It is not enough to be right – others must be wrong. Rather than having a high success rate across all situations, you want to find an edge in some situations, and bet heavily when you find a mistake in the consensus.

For example, say you’re a startup trying to hire software engineers, and your process finds two excellent candidates – Alex and Bob. You make both of them offers at market rate for excellent software engineers. Alex knows how to interview well, so every company believes (correctly) that he’s excellent, while Bob is a bad interviewer, and most of your competitors think he’s merely good. As a result, there might be a 10% chance Alex accepts your offer in this situation, and a 50% chance Bob does.

Assuming a similar rate of Alexes and Bobs in the world, this means your process should be optimized almost entirely for finding Bobs – if you miss a few Alexes to find another Bob, that’s totally worth it! But from the outside, this will look crazy – you’re passing up obviously great candidates in order to specifically find people who aren’t obviously great.
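
To make the arithmetic concrete, here's a minimal sketch of the expected-hires math, assuming the 10% and 50% acceptance rates above; the offer allocations are purely illustrative.

```python
# Expected-hires sketch for the Alex/Bob example (acceptance rates from above;
# the offer allocations below are illustrative assumptions).

p_accept = {"Alex": 0.10, "Bob": 0.50}  # chance each candidate type accepts your offer

def expected_hires(offers):
    """Expected number of hires, given how many offers go to each candidate type."""
    return sum(n * p_accept[kind] for kind, n in offers.items())

# Same effort (10 offers), allocated two different ways:
print(expected_hires({"Alex": 10, "Bob": 0}))  # 1.0 expected hire
print(expected_hires({"Alex": 2, "Bob": 8}))   # 4.2 expected hires
```

Under these assumptions, each Bob you find is worth roughly five Alexes, which is why trading a few missed Alexes for an extra Bob comes out ahead.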

Furthermore, this means that copying mainstream interviewing strategies is one of the worst things you can do – if you only get points for beating consensus predictions, then matching them will get you a 0.

The next time you find yourself thinking about one of these situations, try to figure out whether it's competitive or non-competitive, and whether you should be optimizing for your median decision or for edge.

> Furthermore, this means that copying mainstream interviewing strategies is one of the worst things you can do – if you only get points for beating consensus predictions, then matching them will get you a 0.

Are you classifying hiring as a case where you only get points for beating the consensus? Consider three processes for evaluating candidates:

- Process A is no better than noise, and just selects candidates at random.
- Process B matches the consensus.
- Process C selects for the consensus's false negatives.

Your post seems to be advocating in favor of C, and I would agree that C is the best of these, but isn't B clearly better than A (for realistic candidate pool distributions)? If so, it doesn't seem right to say that you get "no points" for doing B.

I could have been clearer – hiring is definitely a case where you get some points for following consensus, unlike, say, active investing, where you're typically measured on alpha. And following consensus on some parts of your process is fine if you have an edge elsewhere (e.g. Google and Facebook pay more than most, so having consensus-level assessment is fine). But I would argue that for most startups you'll see something like order-of-magnitude improvements through Process C.

This seems relevant for assessing how and when to pragmatically implement epistemic modesty. Your cost function matters!

I think that you need to consider both precision and recall of your interview process. The standard interview process is optimized for precision – you want to be as sure as possible that the people you identify as good are actually good. This is in part because it's very expensive to fix a hiring mistake, and also because the candidate pool is very bad. The good candidates get hired and keep their jobs, while the bad candidates keep interviewing.

If you come up with a new process that has higher recall (can find Bob when the typical process doesn't), unless you've invented something that dominates the typical process, you're going to get a bunch of false positives and end up hiring people you think are Bobs but are actually bad.

TL;DR: your post focuses on recall (avoiding false negatives), but in reality precision (avoiding false positives) is much more important because the candidate pool is mostly terrible.
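
Here is a rough Bayes sketch of why precision dominates when the pool is mostly bad; all numbers are illustrative assumptions, not measurements.

```python
# Probability that a candidate who passes your process is actually good, via Bayes.
# base_rate: assumed fraction of good candidates in the active pool.

def p_good_given_pass(base_rate, recall, false_positive_rate):
    true_pos = recall * base_rate                      # good candidates who pass
    false_pos = false_positive_rate * (1 - base_rate)  # bad candidates who pass
    return true_pos / (true_pos + false_pos)

base_rate = 0.10  # assume only 10% of the pool is actually good

# Standard high-precision process: misses many Bobs, rarely passes bad candidates.
print(p_good_given_pass(base_rate, recall=0.4, false_positive_rate=0.02))  # ~0.69

# Higher-recall process that also admits more false positives.
print(p_good_given_pass(base_rate, recall=0.8, false_positive_rate=0.15))  # ~0.37
```

With a mostly-bad pool, even a modest false-positive rate swamps the extra true positives, so the higher-recall process ends up hiring a worse mix unless it somehow avoids new false positives.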

Promoted to the frontpage for being short but still making a good point.

I really like this short post for giving a concrete example of a common, general, and significant problem, so I've promoted it to Featured.

> If you only get points for beating consensus predictions, then matching them will get you a 0.

Important note on this: matching them guarantees a 0, while implementing your own strategy and doing worse than the consensus could easily get you negative marks.