Follow-up to: Status Regulation and Anxious Underconfidence


 

Somehow, someone is going to horribly misuse all the advice that is contained within this book.

Nothing I know how to say will prevent this, and all I can do is advise you not to shoot your own foot off; have some common sense; pay more attention to observation than to theory in cases where you’re lucky enough to have both and they happen to conflict; put yourself and your skills on trial in every accessible instance where you’re likely to get an answer within the next minute or the next week; and update hard on single pieces of evidence if you don’t already have twenty others.

I expect this book to be of much more use to the underconfident than the overconfident, and I’ve considered cunning plots to route printed copies of this book to only the former class of people. I’m not sure reading this book will actually harm the overconfident, since I don’t know of a single case where any previously overconfident person was actually rescued by modest epistemology and thereafter became a more effective member of society. If anything, it might give them a principled epistemology that actually makes sense by which to judge those contexts in which they are, in fact, unlikely to outperform. Insofar as I have an emotional personality type myself, it’s more disposed to iconoclasm than conformity, and inadequacy analysis is what I use to direct that impulse in productive directions.

But for those certain folk who cannot be saved, the terminology in this book will become only their next set of excuses; and this, too, is predictable.

If you were never disposed to conformity in the first place, and you read this anyway… then I won’t tell you not to think highly of yourself before you’ve already accomplished significant things. Advice like that wouldn’t have actually been of much use to me at age 15, nor would the universe have been a better place if Eliezer-1995 had made the mistake of listening to it. But you might talk to people who have tried to reform the US medical system from within, and hear what things went wrong and why.1 You might remember the Free Energy Fallacy, and that it’s much easier to save yourself than your country. You might remember that an aspect of society can fall well short of a liquid market price, and still be far above an amateur’s reach.

I don’t have good, repeatable exercises for training your skill in this field, and that’s one reason I worry about the results. But I can tell you this much: bet on everything. Bet on everything where you can or will find out the answer. Even if you’re only testing yourself against one other person, it’s a way of calibrating yourself to avoid both overconfidence and underconfidence, which will serve you in good stead emotionally when you try to do inadequacy reasoning. Or so I hope.
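One minimal way to keep score on those bets, offered as a rough illustration rather than anything prescribed above: write down a probability for each bet, record how it resolved, and periodically check both your Brier score and whether the claims you called “80% likely” came true about 80% of the time. The Python sketch below assumes nothing beyond that bookkeeping.

```python
# Minimal calibration bookkeeping: a rough sketch, not a prescribed method.
from collections import defaultdict

def brier_score(bets):
    """Mean squared error between stated probabilities and outcomes (0 or 1); lower is better."""
    return sum((p - outcome) ** 2 for p, outcome in bets) / len(bets)

def calibration_buckets(bets, width=0.1):
    """Group bets by stated probability and report the observed frequency in each bucket."""
    buckets = defaultdict(list)
    for p, outcome in bets:
        buckets[round(p / width) * width].append(outcome)
    return {round(p, 1): sum(o) / len(o) for p, o in sorted(buckets.items())}

# Each entry: (probability I assigned, what actually happened).
bets = [(0.9, 1), (0.8, 1), (0.8, 0), (0.6, 1), (0.3, 0), (0.2, 1)]
print(brier_score(bets))          # ~0.26; always answering 50% would score 0.25
print(calibration_buckets(bets))  # did my "80%" claims come true about 80% of the time?
```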

Beyond this, other skills that feed into inadequacy analysis include “see if the explanation feels stretched,” “figure out the further consequences,” “consider alternative hypotheses for the same observation,” “don’t hold up a mirror to life and cut off the parts of life that don’t fit,” and a general acquaintance with microeconomics and behavioral economics.

The policy of saying only what will do no harm is a policy of total silence for anyone who’s even slightly imaginative about foreseeable consequences. I hope this book does more good than harm; that is the most I can hope for it.

For yourself, dear reader, try not to be part of the harm. And if you end up doing something that hurts you: stop doing it.

Beyond that, though: if you’re trying to do something unusually well (a common enough goal for ambitious scientists, entrepreneurs, and effective altruists), then this will often mean that you need to seek out the most neglected problems. You’ll have to make use of information that isn’t widely known or accepted, and pass into relatively uncharted waters. And modesty is especially detrimental for that kind of work, because it discourages acting on private information, making less-than-certain bets, and breaking new ground. I worry that my arguments in this book could cause an overcorrection; but I have other, competing worries.

The world isn’t mysteriously doomed to its current level of inadequacy. Incentive structures have parts, and can be reengineered in some cases, worked around in others.

Similarly, human bias is not inherently mysterious. You can come to understand your own strengths and weaknesses through careful observation, and scholarship, and the generation and testing of many hypotheses. You can avoid overconfidence and underconfidence in an even-handed way, and recognize when a system is inadequate at doing X for cost Y without being exploitable in X, or when it is exploitable-to-someone but not exploitable-to-you.

Modesty and immodesty are bad heuristics because even where they’re correcting for a real problem, you’re liable to overcorrect.

Better, I think, to not worry quite so much about how lowly or impressive you are. Better to meditate on the details of what you can do, what there is to be done, and how one might do it.

 


 

This concludes Inadequate Equilibria. The full book is now available in electronic and print form through equilibriabook.com.

 


 

  1. As an example, see Zvi Mowshowitz’s “The Thing and the Symbolic Representation of The Thing,” on MetaMed, a failed medical consulting firm that tried to produce unusually high-quality personalized medical reports. 

3 comments

So much of your writing sounds like an eloquent clarification of my own underdeveloped thoughts. I'd bet good money your lesswrong contributions have delivered me far more help than harm :) Thanks <3

I just wrote a long post on my tumblr about this sequence, which I am cross-posting here as a comment on the final post. (N.B. my tone is harsher and less conversational than it would have been if I had thought of it as a comment while writing.)

I finally got around to reading these posts.  I wasn’t impressed with them.

The basic gist is something like:

“There are well-established game-theoretic reasons why social systems (governments, academia, society as a whole, etc.) may not find, or not implement, good ideas even when they are easy to find/implement and the expected benefits are great.  Therefore, it is sometimes warranted to believe you’ve come up with a good, workable idea which ‘experts’ or ‘society’ have not found/implemented yet.  You should think about the game-theoretic reasons why this might or might not be possible, on a case-by-case basis; generalized maxims about ‘how much you should trust the experts’ and the like are counterproductive.”

I agree with this, although it also seems fairly obvious to me.  It’s possible that Yudkowsky is really pinpointing a trend (toward an extreme “modest epistemology”) that sounds obviously wrong once it’s pinned down, but is nonetheless pervasive; if so, I guess it’s good to argue against it, although I haven’t encountered it myself.

But the biggest reason I was not impressed is that Yudkowsky mostly ignores an issue which strikes me as crucial.  He makes a case that, given some hypothetically good idea, there are reasons why experts/society might not find and implement it.  But as individuals, what we see are not ideas known to be good.

What we see are ideas that look good, according to the models and arguments we have right now.  There is some cost (in time, money, etc.) associated with testing each of these ideas.  Even if there are many untried good ideas, it might still be the case that these are a vanishingly small fraction of ideas that look good before they are tested.  In that case, the expected value of “being an experimenter” (i.e. testing lots of good-looking ideas) could easily be negative, even though there are many truly good, untested ideas.
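To put that concern in symbols (my framing, not anything from the book): suppose a fraction $p$ of good-looking ideas are actually good, testing one costs $C$, and a genuinely good idea pays off $B$. Then the expected value of running a single test is roughly

$$p \cdot B - C,$$

which is positive only when $p > C / B$. “There exist many untried good ideas” tells us nothing by itself about whether $p$ clears that bar.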

To me, this seems like the big determining factor for whether individuals can expect to regularly find and exploit low-hanging fruit.

The closest Yudkowsky comes to addressing this topic is in sections 4-5 of the post “Living in an Inadequate World.”  There, he’s talking about the idea that even if many things are suboptimal, you should still expect a low base rate of exploitable suboptimalities in any arbitrarily/randomly chosen area.  He analogizes this to finding exploits in computer code:

Computer security professionals don’t attack systems by picking one particular function and saying, “Now I shall find a way to exploit these exact 20 lines of code!” Most lines of code in a system don’t provide exploits no matter how hard you look at them. In a large enough system, there are rare lines of code that are exceptions to this general rule, and sometimes you can be the first to find them. But if we think about a random section of code, the base rate of exploitability is extremely low—except in really, really bad code that nobody looked at from a security standpoint in the first place.
Thinking that you’ve searched a large system and found one new exploit is one thing. Thinking that you can exploit arbitrary lines of code is quite another.

This isn’t really the same issue I’m talking about – in the terms of this analogy, my question is “when you think you have found an exploit, but you can’t costlessly test it, how confident should you be that there is really an exploit?”

But he goes on to say something that seems relevant to my concern, namely that most of the time you think you have found an exploit, you won’t be able to usefully act on it:

Similarly, you do not generate a good startup idea by taking some random activity, and then talking yourself into believing you can do it better than existing companies. Even where the current way of doing things seems bad, and even when you really do know a better way, 99 times out of 100 you will not be able to make money by knowing better. If somebody else makes money on a solution to that particular problem, they’ll do it using rare resources or skills that you don’t have—including the skill of being super-charismatic and getting tons of venture capital to do it.
To believe you have a good startup idea is to say, “Unlike the typical 99 cases, in this particular anomalous and unusual case, I think I can make a profit by knowing a better way.”
The anomaly doesn’t have to be some super-unusual skill possessed by you alone in all the world. That would be a question that always returned “No,” a blind set of goggles. Having an unusually good idea might work well enough to be worth trying, if you think you can standardly solve the other standard startup problems. I’m merely emphasizing that to find a rare startup idea that is exploitable in dollars, you will have to scan and keep scanning, not pursue the first “X is broken and maybe I can fix it!” thought that pops into your head.
To win, choose winnable battles; await the rare anomalous case of, “Oh wait, that could work.”

The problem with this is that many people already include “pick your battles” as part of their procedure for determining whether an idea seems good.  People are more confident in their new ideas in areas where they have comparative advantages, and in areas where existing work is especially bad, and in areas where they know they can handle the implementation details (“the other standard startup problems,” in EY’s example).

Let’s grant that all of that is already part of the calculus that results in people singling out certain ideas as “looking good” – which seems clearly true, although doubtlessly many people could do better in this respect.  We still have no idea what fraction of good-looking ideas are actually good.

Or rather, I have some ideas on the topic, and I’m sure Yudkowsky does too, but he does not provide any arguments to sway anyone who is pessimistic on this issue.  Since optimism vs. pessimism on this issue strikes me as the one big question about low-hanging fruit, this leaves me feeling that the topic of low-hanging fruit has not really been addressed.

Yudkowsky mentions some examples of his own attempts to act upon good-seeming ideas.  To his credit, he mentions a failure (his ketogenic meal replacement drink recipe) as well as a success (stringing up 130 light bulbs around the house to treat his wife’s Seasonal Affective Disorder).  Neither of these were costless experiments.  He specifically mentions the monetary cost of testing the light bulb hypothesis:

The systematic competence of human civilization with respect to treating mood disorders wasn’t so apparent to me that I considered it a better use of resources to quietly drop the issue than to just lay down the ~$600 needed to test my suspicion.

His wife has very bad SAD, and the only other treatment that worked for her cost a lot more than this.  Given that the hypothesis worked, it was clearly a great investment.  But not all hypotheses work.  So before I do the test, how am I to know whether it’s worth $600?  What if the cost is greater than that, or the expected benefit less?  What does the right decision-making process look like, quantitatively?
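As a toy version of the calculation I would want to be able to do, with every number below invented for illustration:

```python
# Toy expected-value check for "is this $600 experiment worth running?"
# All numbers are invented for illustration.
p_works = 0.2            # my prior that the light-bulb hypothesis helps at all
value_if_works = 10_000  # rough dollar value placed on relieving the SAD
cost_of_test = 600

expected_value = p_works * value_if_works - cost_of_test
print(expected_value)    # 1400.0 > 0, so the test looks worth running, but only
                         # because I guessed p_works; that prior is the hard part.
```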

Yudkowsky’s answer is that you can tell when good ideas in an area are likely to have been overlooked by analyzing the “adequacy” of the social structures that generate, test, and implement ideas.  But this is only one part of the puzzle.  At best, it tells us P(society hasn’t done it yet | it’s good).  But what we need is P(it’s good | society hasn’t done it yet).  And to get to one from the other, we need the prior probability of “it’s good,” as a function of the domain, my own abilities, and so forth.  How can we know this?  What if there are domains where society is inadequate yet good ideas are truly rare, and domains where society is fairly adequate but good ideas are so plentiful as to dominate the calculation?
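Spelled out in my own notation, the missing step is just Bayes’ rule:

$$P(\text{good} \mid \text{not done}) = \frac{P(\text{not done} \mid \text{good}) \, P(\text{good})}{P(\text{not done})},$$

so adequacy analysis supplies the likelihood term, but without some handle on the prior $P(\text{good})$ for the domain in question, the posterior can come out almost anywhere.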

In an earlier conversation about low-hanging fruit, tumblr user @argumate brought up the possibility that low-hanging fruit are basically impossible to find beforehand, but that society finds them by funding many different attempts and collecting on the rare successes.  That is, every individual attempt to pluck fruit is EV-negative given risk aversion, but a portfolio of such attempts (such as a venture capitalist’s portfolio) can be net-positive given risk aversion, because with many attempts the probability of one big success that pays for the rest (a “unicorn”) goes up.  It seems to me like this is plausible.
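A toy model of how that can happen, using log-utility as a stand-in for risk aversion (the numbers and the utility function are my own choices, purely for illustration):

```python
import math
import random

random.seed(0)

def expected_log_utility_change(wealth, cost, payoff, p, n_bets, trials=50_000):
    """Monte Carlo estimate of the expected change in log-utility from
    funding n_bets independent long-shot ventures."""
    total = 0.0
    for _ in range(trials):
        wins = sum(random.random() < p for _ in range(n_bets))
        final_wealth = wealth - n_bets * cost + wins * payoff
        total += math.log(final_wealth) - math.log(wealth)
    return total / trials

# One venture: costs $1k, pays $100k with probability 2%.  That is positive in
# expected dollars ($2k expected payout vs. $1k cost), but a lone risk-averse
# individual with $10k to their name loses expected utility by taking it:
print(expected_log_utility_change(10_000, 1_000, 100_000, 0.02, n_bets=1))
# roughly -0.06

# A funder spreading the same bet across 100 independent ventures comes out ahead:
print(expected_log_utility_change(1_000_000, 1_000, 100_000, 0.02, n_bets=100))
# roughly +0.09
```

The only point of the sketch is that “positive in expected dollars” and “positive in expected utility for a lone, undiversified experimenter” can come apart, and diversification is what closes the gap.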

Let me end on a positive note, though.  Even if the previous paragraph is accurate, it is a good thing for society if more individuals engage in experimentation (although it is a net negative for each of those individuals).  Because of this, the individual’s choice to experiment can still be justified on other terms – as a sort of altruistic expenditure, say, or as a way of kindling hope in the face of personal maladies like SAD (in which case it is like a more prosocial version of gambling).

Certainly there is something emotionally and aesthetically appealing about a resurgence of citizen science – about ordinary people looking at the broken, p-hacked, perversely incentivized edifice of Big Science and saying “empiricism is important, dammit, and if The Experts won’t do it, we will.” (There is precedent for this, and not just as a rich man’s game – there is a great chapter in The Intellectual Life of the British Working Classes about widespread citizen science efforts in the 19th-century working class.)  I am pessimistic about whether my experiments, or yours, will bear fruit often enough to make the individual cost-benefit analysis work out, but that does not mean they should not be done.  Indeed, perhaps they should be.

It seems to be hard in practice to draw the line between "only picking certain bets" and "doing things I'm best at" (though the theoretical difference is obvious - one maximizes P(win) by choosing which events to enter, the other maximizes how useful one's skills are by choosing according to those skills). The latter seems to be a good practice - yet your attack on the former seems to indirectly hamper it.