I tend to draw a very sharp line between anything that happens inside a brain and anything that happened in evolutionary history.  There are good reasons for this!  Anything originally computed in a brain can be expected to be recomputed, on the fly, in response to changing circumstances.

Consider, for example, the hypothesis that managers behave rudely toward subordinates "to signal their higher status".  This hypothesis then has two natural subdivisions:

If rudeness is an executing adaptation as such - something historically linked to the fact it signaled high status, but not psychologically linked to status drives - then we might experiment and find that, say, the rudeness of high-status men to lower-status men depended on the number of desirable women watching, but that they weren't aware of this fact.  Or maybe that people are just as rude when posting completely anonymously on the Internet (or more rude; they can now indulge their adapted penchant to be rude without worrying about the now-nonexistent reputational consequences).

If rudeness is a conscious or subconscious strategy to signal high status (which is itself a universal adapted desire), then we're more likely to expect the style of rudeness to be culturally variable, like clothes or jewelry; different kinds of rudeness will send different signals in different places.  People will be most likely to be rude (in the culturally indicated fashion) in front of those whom they have the greatest psychological desire to impress with their own high status.
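A toy sketch of this contrast in code - every predicate and number here is invented purely for illustration, and neither function is meant as a real model of rudeness:

```python
# Two toy decision rules for "be rude?", contrasting the hypotheses above.

def rude_as_executing_adaptation(ancestral_trigger_present, anonymous):
    # Fires on cues that correlated with status payoffs ancestrally
    # (e.g., desirable onlookers), whether or not any payoff exists now.
    # Note that `anonymous` is accepted but ignored: the adaptation
    # executes regardless of present-day reputational consequences.
    return ancestral_trigger_present

def rude_as_signaling_strategy(expected_status_gain, audience_present):
    # Recomputed on the fly from current expectations: no audience worth
    # impressing, or no expected status gain, means no rudeness.
    return audience_present and expected_status_gain > 0

# Prediction under anonymity: the adaptation-executor stays rude...
print(rude_as_executing_adaptation(True, anonymous=True))       # True
# ...while the strategist, expecting no status payoff, does not.
print(rude_as_signaling_strategy(0.0, audience_present=False))  # False
```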

When someone says, "People do X to signal Y", I tend to hear, "People do X when they consciously or subconsciously expect it to signal Y", not, "Evolution built people to do X as an adaptation that executes given such-and-such circumstances, because in the ancestral environment, X signaled Y."

I apologize, Robin, if this means I misunderstood you.  But I think it really is important to use different words that draw a hard boundary between the evolutionary computation and the cognitive computation - "People are adapted to do X because it signaled Y", versus "People do X because they expect it to signal Y".

"Distal cause" and "proximate cause" doesn't seem good enough, when there's such a sharp boundary within the causal network about what gets computed, how it got computed, and when it will be recomputed.  Yes, we have epistemic leakage across this boundary - we can try to fill in our leftover uncertainty about psychology using evolutionary predictions - but it's epistemic leakage between two very different subjects.

I've noticed that I am, in general, less cynical than Robin, and I would offer up the guess for refutation (it is dangerous to reason about other people's psychologies) that Robin doesn't draw a sharp boundary across his cynicism at the evolutionary-cognitive boundary.  When Robin asks "Are people doing X mostly for the sake of Y?" he seems to answer the same "Yes", and feel more or less the same way about that answer, whether or not the reasoning goes through an evolutionary step along the way.

I would be very disturbed to learn that parents, in general, showed no grief for the loss of a child whom they consciously believed to be sterile.  The actual experiment which shows that parental grief correlates strongly to the expected reproductive potential of a child of that age in a hunter-gatherer society - not the different reproductive curve in a modern society - does not disturb me.

There was a point more than a decade ago when I would have seen that as a puppeteering of human emotions by evolutionary selection pressures, and hence something to be cynical about.  Yet how could parental grief come into existence at all, without a strong enough selection pressure to carve it into the genome from scratch?  All that should matter for saying "The parent truly cares about the child" is that the grief in the parent's mind is cognitively real and unconditional and not even subconsciously for the sake of any ulterior motive; and so it does not update for modern reproductive curves.

Of course the emotional circuitry is ultimately there for evolutionary-historical reasons.  But only conscious or subconscious computations can gloom up my day; natural selection is an alien thing whose 'decisions' can't be the target of my cynicism or admiration.

I suppose that is a merely moral consequence - albeit one that I care about quite a lot.  Cynicism does have hedonic effects.  Part of the grand agenda I have to put forward about rationality has to do with arguing against various propositions of the form "Rationality should make us cynical about X" (e.g. "physical lawfulness -> choice is a meaningless illusion") that I happen to disagree with.  So you can see why I'm concerned about drawing the proper boundary of cynicism around evolutionary psychology (especially since I think the proper boundary is a sharp full stop).

But the same boundary also has major consequences for what we can expect people to recompute or not recompute - for the way that future behaviors will change as the environment changes.  So once again, I advocate for language that separates out evolutionary causes and clearly labels them, especially in discussions of signaling.  It has major effects, not just on how cynical I end up about human nature, but on what 'signaling' behaviors to expect, when.

Comments (29)

This post helps to ease much of what I have found frustrating in the task of understanding the implications of evolutionary psychology.

I'm getting two things out of this.

1) Evolutionary cynicism produces different predictions from cognitive cynicism, e.g. because the current environment is not the ancestral environment.

2) Cognitive cynicism glooms up Eliezer's day but evolutionary cynicism does not.

(1) is worth keeping in mind. I'm not sure what significance (2) has.

However, we may want to develop a cynical account of a certain behavior while suspending judgment about whether the behavior was learned or evolved. Call such cynicism "agnostic cynicism", maybe. So we have three types of cynicism: evolutionary cynicism, cognitive cynicism, and agnostic cynicism.

A careful thinker will want to avoid jumping to conclusions, and because of this, he may lean toward agnostic cynicism.

Eliezer, you are right that my sense of moral approval or disapproval doesn't rely as heavily on this distinction as yours, and so I'm less eager to make this distinction. But I agree that one can sensibly distinguish genetically-encoded evolution-computed strategies from consciously brain-computed strategies from unconsciously brain-computed strategies. And I agree it would be nice to have clean terms to distinguish these, and to use those terms when we intend to speak primarily about one of these categories.

Most actions we take, however, probably have substantial contributions from all three sources, and we will often want to talk about human strategies even when we don't know much about these relative contributions. So surely we also want to have generic words that don't make this distinction, and these would probably be the most commonly used words out of these four sets.

Constant, I agree that we can often be unsure, but the distinction I draw in my mind is so sharp that I would always separately keep track of the two hypotheses, rather than having a single vague hypothesis. They just seem to me like such tremendously different ideas, with such different consequences! (Consider the original debate over "agnosticism": whether you can have a single vague state of mind that encompasses theism and atheism.)

Robin, I'm happy to use "expect" to blend together conscious and subconscious expectation, because on a basic level, the structure of consciously and subconsciously signaling something is very similar. The only difference is the degree to which you can report something by introspection, and the degree to which you can make deliberate plans as opposed to taking advantage of opportunities that appear within your other plans. Even on a moral level, there's a distinction worth drawing, but it's not nearly as sharp. On the other hand, I have real, basic difficulty filing an evolutionary and a cognitive explanation into the same mental bucket.

I guess if I had to use a single word, the only word I could use would be "optimize" - if I said "I hypothesize that rudeness is optimized to signal high status", that would indeed leave me agnostic about who or what did the optimizing. But how would you test this hypothesis, or derive any predictions from it - even a question like "Will rudeness go up or down with increased anonymity?" - without saying at least something about how status signaling created rudeness?

Eliezer, wishes aren't horses; strongly wanting to be able to tell the difference doesn't by itself give us evidence to distinguish. Note that legal punishments often distinguish between conscious intention and all other internal causation; so apparently that is the distinction the law considers most relevant, and/or easiest to determine. "Optimize" invites too many knee-jerk complaints that we won't exactly optimize anything.

@Eliezer: Is the following a real experiment that was actually performed, or are you hypothesizing?

"The actual experiment which shows that parental grief correlates strongly to the expected reproductive potential of a child of that age in a hunter-gatherer society - not the different reproductive curve in a modern society - does not disturb me."

I usually hear "People are adapted with filters that enable them to learn that (set of behaviors X) signals (intentions Y) with varying degrees of effectiveness." Drawing the hard, bright line blurs the messy etiology of both, and implies an anti-sphexishness bias that tends to (pun intended) bug me. I'm comfortable with my sphexishness.

Robin, I can keep two distinct mental buckets without having distinguishing evidence.

Roland, that's a real experiment. My notes say this is 'Crawford, Charles B., B. E. Salter, and K. L. Lang (1989) "Human Grief: Is Its Intensity Related to the Reproductive Value of the Deceased?" Ethology and Sociobiology 10:297-307.' - haven't read the actual paper, just a summary of the results.

Jack:

There is no such thing as a "subconscious". You mean unconscious.

@Eliezer, you mention that "this hypothesis then has two natural subdivisions". I suppose you consider the second correct and the first incorrect?

A few years ago, conscious and subconscious computations could gloom up my day a lot more than they can now. Subsequently I believe I came to understand people a lot better, and I am now more aware of my own confusion on this subject; but at the very least I can say that conscious and subconscious ulterior motivations also only remind me more of what humans are. Broadly, they seem likely to fall under "something to protect".

Anyway, I'm really glad to see what seems to me like uncommonly effective communication between Eliezer and Robin on this point.

I'd have to say that for myself, I've sometimes noticed via introspection that some of my own thoughts and actions, when I poke at them, seem to be status and signaling related.

I.e., at least in my case, at least sometimes, it would seem that the signaling is actively computed by my brain, rather than just by evolution in the past.

I certainly understand the distinction you're making, but several times I've had instances in which when I tried to really think about and put words on what and why I wanted to do certain things or felt certain things, I found myself thinking in terms of what other people would think of me, status, etc.

@Elf: Thanks for that phrase "messy etiology" - it's awesome. However, where do you get the connection between sharp distinctions and ANTI-sphexishness? My understanding is that Eliezer's preference for sharp distinctions leads to PRO-sphexishness. By "pro-sphexishness" I mean considering hardwired values the foundation of morality, to be elevated and celebrated, not denigrated.

I may well be missing something, but, like Elf, I don't see how Eliezer's "evolutionary-cognitive boundary" can be well-defined.

If humans were ideal optimizers, a sharp distinction would make sense. Humans would come with genetically encoded objective functions. Some sub-goals (e.g., "keep your kids alive") would be encoded in our objective functions, and would be goals we intrinsically and permanently cared about. Other sub-goals (e.g., "wear your safety belt") would be consciously computed heuristics that we had no intrinsic attachment to, and that we would effortlessly lose when new information became available. "Beliefs" would be produced by a blend of evolutionary and individual computation (we would have genetically encoded heuristics for using sense-data and neurons to form accurate predictions, as in human language-learning), while "preferences" would divide along a sharp line.
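To make the hypothetical concrete, here is a toy sketch in Python of such an ideal optimizer; the names and the seatbelt logic are purely illustrative, not a real model of anyone's cognition:

```python
# Toy "ideal optimizer": terminal values are fixed at "compile time" (the
# genome) and never recomputed; instrumental subgoals are derived from
# current beliefs and dropped effortlessly when the beliefs change.

TERMINAL_VALUES = {"children_alive": 1.0}  # genetically encoded; permanent

def instrumental_subgoals(beliefs):
    """Derive subgoals from beliefs; nothing here is intrinsically valued."""
    subgoals = []
    if beliefs.get("seatbelts_reduce_fatalities"):
        subgoals.append("wear_seatbelt")
    return subgoals

beliefs = {"seatbelts_reduce_fatalities": True}
print(instrumental_subgoals(beliefs))   # ['wear_seatbelt']

beliefs["seatbelts_reduce_fatalities"] = False  # new information arrives
print(instrumental_subgoals(beliefs))   # [] - the subgoal vanishes without attachment
```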

But humans aren't ideal optimizers; humans are a kludged-together mess of preferences, habits, reflexes, context-sensitive heuristics, conscious and non-conscious "beliefs", and faculties for updating the same from experience. Our "intrinsic preferences" change in a manner not unlike the manner our beliefs update (based, again, on evolutionarily created algorithms that make use of sense-data and neurology; e.g., heuristics for updating the "intrinsic" attractiveness of your mate based on others' views of his attractiveness). Even at a single time, behavior can't always be parsed into (temporary) "beliefs" and (temporary) "preferences"; sometimes our apparent "preferences" vary so dramatically depending on context or framing that I'm not sure we should be regarded as having "preferences" at all (think Cialdini), and sometimes we just have a mix of habits, tendencies, emotions, and actions that acts vaguely purposively.

For example: fear of heights. Humans are designed (based on evolutionary computation) to acquire such fears easily; still, my fear is partially (though not fully) responsive to my conscious beliefs about how dangerous a particular cliff is, and is also partially (though not fully) responsive to subconscious computations that update from whether I've been badly hurt falling from a height (even if I consciously know that this height is different and harmless). Should my fear here be regarded as an intrinsic preference, a conscious subgoal of "avoid harm", an evolutionarily encoded "belief" (that is felt as an emotion, but updated from new data), or what?

Can someone who thinks a sharp line distinction is desirable explain how to draw that line for fear of heights or a similar example? Am I under-estimating the extent to which humans can be thought of as "having intrinsic preferences"?

Rather, I see that there's a well-defined distinction between "the code/genome evolution creates" and "what happens when that genome develops into Joe and Joe's brain, with Joe's (non-EEA) sense-data". I just don't see how to use such a distinction to sharp-line divide purposes or other targets for cynicism.

If human evolution has really been speeding up over the last ten-thousand years, as Greg Cochran believes, does it necessarily make sense to say that we are no longer in the EEA? Perhaps modern humans are better adapted to modern life than the "caveman" thesis (or just-so story?) suggests.

[Tim, you post this comment every time I talk about evolutionary psychology, and it's the same comment every time, and it doesn't add anything new on each new occasion. If these were standard theories I could forgive it, but not considering that they're your own personal versions. I've already asked you to stop. --EY]

Anna's point is similar to my point that most behaviors we talk about are a mix of computation at all levels; this doesn't seem a good basis for hard lines for dichotomous cynical vs. not-cynical distinctions.

Anna, you're talking about a messiness of the human system, not a difficulty in drawing hard distinctions between human-style messiness and evolutionary-style messiness.

But I suppose that the more complicated the two systems are, the more expertise it would take before the distinction between them becomes natural and automatic; conversely, the fact that these two systems are complicated and different makes it very difficult for me to stick both in the same mental bucket. I mean this in no ad hominem way - the line between "natural selection" and "intelligent design" can appear like a mere matter of taste in the absence of any expertise, but becomes sharper and sharper as you learn more about it. The same is also true of how the two systems play out their goals in their separately incoherent ways.

It doesn't intrinsically matter whether behavior is caused by evolutionary adaptation or by consequentialist computation in the brain. The origin is screened off by what you are in this moment, where you might wish to disown any part that isn't really you. It's easier to change habits of thought than it is to change the structure of mind, and it's healthier to keep the things we can do something about higher on the list of concerns. So we worry more about bad cognitive bearing than about bad design. Nonetheless, the alienness of evolution doesn't excuse the mistakes it made; one should focus on repair, not on blame.

"parental grief correlates strongly to the expected reproductive potential of a child of that age in a hunter-gatherer society"

Would parental attention and love correlate too?

@John: I think Eliezer did a good job of describing the problem in his followup to Anna, but I'm still having trouble convincing myself of the correctness of his statements. It feels to me like Eliezer is working hard to have these systems both ways: in his example of something historically effective but not psychologically effective, surely the psychological effectiveness, if it exists, is an emergent property of its historical effectiveness.

There ought to be an HTML entity for a lightbulb going on! Eliezer tickled my "invest more energy in this conversation" bias by mentioning ID vs. evolution, and there's a thought tickling the back of my head linking William Dembski's discredited mathematical premises for design inference and Eliezer's premise. I'm also getting "Danger, Will Robinson!" signals from the backbrain telling me not to accidentally find an argument that might give me the impression that Dembski might be on to something.

Let me see if I can get this right, given that I haven't quite "leveled up" as much as many of the participants here; I'm just an amateur SF writer who finds Overcoming Bias just about the most incredible resource around for developing interesting questions about how "humanity" will survive the future.

If I read Eliezer correctly, his central argument is that the emergent, temporal mechanism of evolution, with its own distinct processes, is so far removed from the immediate mental machinery of a living mind that investing energy in attempting to discuss, describe, or recompute the evolutionary process is a waste, and Eliezer has better things to do. I agree.

Where I disagree (this goes back to a conversation Eliezer and I had, frack, 11 years ago now?) is in his second argument about the origins of signaling and its effect on cynicism. I'm much more cynical than Eliezer, but in a cheerful and hopeful way. When Eliezer writes about a selection pressure strong enough, as he puts it, to "carve [parental grief] into the genome from scratch", I have to wonder if it did. When I wonder what human consciousness "is for", my conclusion is that it's the current expression (not at all necessarily the optimal or the peak expression) of the evolutionary arms race that took place in an environment of other human beings. (Why is it that, whenever someone talks about the EEA, its most salient feature - the presence of other human beings - so often seems to be almost an afterthought?)

Parental grief is seen in several mammal species. Accusing a baboon mother who refuses to part with the corpse of a dead child of being an "animal too stupid to know it's dead" is part of that bias that attempts to privilege the human animal. If I wanted to know what parental grief signaled, I'd look at what other species did, at what faint, subtle expressions we inherited and emphasized, and wonder what it signaled - or if it signaled anything at all.

I don't lose sleep about the distinction Eliezer wants to make. I mean, if you're comfortable with your sphexishness, you are (in the views of those who believe in an ineffable free will) already too cynical for society.

I'm rambling. It's BC (before coffee).

@Robin: Dennett (Elbow Room: The Varieties of Free Will Worth Wanting) points out that as our understanding of internal causation slowly comes to embrace and accurately describe the various mental processes that we currently describe as "conscious will," the courts are going to be seriously challenged over this distinction. Dennett's solution, that we will ultimately treat those brought before the court as if every behavior were a product of free will and only worry about the most effective treatment, sounds right to me. Dennett points out that a weaker approach is incoherent and encourages the kind of cynicism I suspect Eliezer frets over.

Elf: unless I'm completely missing your point, I think you're misunderstanding Eliezer's point.

The distinction he was making was along the lines of, well, imagine you keep asking me, say, "what's 2+3?"

Now, maybe I'll just build a machine for you that automatically outputs 5, because I know that "what's 2+3?" is the question you keep asking. The computations were then, well, precomputed and the outcome hard coded into the system. This would be more or less analogous to the "computed by evolution" situation.

Now imagine instead I made a calculator that could actually add two numbers, so when you ask "what's 2+3", it actually goes through the mathematical work of adding 2 and 3 to get 5, and if you had instead asked "what's 11+7?" it would have answered 18. This then is analogous to computation your brain performs rather than computations evolution already did.
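A minimal code sketch of this analogy in Python (the function names are mine and purely illustrative):

```python
# "Computed by evolution": the answer was worked out once, in the past,
# and only the fixed output is carried forward into the present.
def hardcoded_machine(question):
    return 5  # precomputed for "what's 2+3?"; never recomputed

# "Computed by the brain": the answering machinery itself is carried
# forward, so the answer is recomputed on the fly for whatever is asked.
def calculator(a, b):
    return a + b

print(hardcoded_machine("what's 2+3?"))   # 5
print(hardcoded_machine("what's 11+7?"))  # still 5 - the question changed, the output didn't
print(calculator(2, 3))                   # 5
print(calculator(11, 7))                  # 18
```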

'...natural selection is an alien thing whose 'decisions' can't be the target of my cynicism or admiration.'

All anthropomorphisation of evolution should be left to artists, even when it is dressed up, tongue-in-cheek, in scare quotes. Such anthropomorphisation is a barrier not only to the understanding of the process, but also to its widespread adoption due to its moral implications.

"I think it really is important to use different words that draw a hard boundary between the evolutionary computation and the cognitive computation."

Does this boundary even exist? It's a distinction we can make, for purposes of discussion; but not a hard boundary we can draw. You can find examples that fall clearly into one category (reflex) or another (addition), but you can also find examples that don't. This is just the sort of thing I was talking about in my post on false false dichotomies. It's a dichotomy that we can sometimes use for discussion, but not a true in-the-world binary distinction.

Eliezer responds yes: "Anna, you're talking about a messiness of the human system, not a difficulty in drawing hard distinctions between human-style messiness and evolutionary-style messiness."

I can't figure out what that's supposed to mean. I think it means Eliezer didn't understand what she said. The "messiness" is that you can't draw that hard distinction.

The entire discussion is cast in terms that imply Eliezer thinks evolutionary psychology deals with issues of conscious vs. subconscious motivations. AFAIK it sidesteps the issue whenever possible. Psychologists don't want to ask whether behavior comes from conscious or subconscious motivations. They want to observe behavior, record it, and explain it. Not trying to slice it up into conscious vs. subconscious pieces is the good part of behaviorism.

Phil, I think you're misunderstanding Eliezer's take on ev psych; Eliezer is explicitly not concerned with slicing things into conscious vs. subconscious (only into evolutionarily computed vs. neurologically computed).

Eliezer, I agree that one can sharply distinguish even messy processes, including evolutionary vs. human messiness. My question (badly expressed last time) is whether motives can be sharply divided into evolutionary vs. human motives.

As a ridiculously exaggerated analogy, note that I can draw a sharp line division between you1 = you until thirty seconds ago and you2 = you after thirty seconds ago. However, it may be that I cannot cleanly attribute your "motive" to read this comment to either you1 or you2. The complexity of you1 and you2 is no barrier; we can sharply distinguish the processing involved in each. But if "motives" are theoretical entities that help us model behavior, then human "motives" are only approximate: if you seek a more exact model, "motives" are replaced by sets of partially coordinated sphexish tendencies, which are (in a still more exact model) replaced by atoms. "Motives" may be more useful for approximating you (as a whole) than for approximating either you1 or you2: perhaps you1 handed off to you2 a large number of partially coordinated, half-acted-out sphexish tendencies regarding comment-reading, and these tendencies can (all at once) be summarized by saying that you were motivated to read this comment.

Analogously, though less plausibly, my motivation to avoid heights, or to take actions that make me look honest, might be more coherently assigned to the combined processing of evolution and neurology than to either alone.

"I tend to draw a very sharp line between anything that happens inside a brain and anything that happened in evolutionary history. There are good reasons for this!"

Counterpoint: the brain evolves and has an evolutionary history of its own, which takes place within an individual's lifespan. Organic evolution and brain evolution thus share copying, variation, selection, and evolutionary theory (kin selection, drift, adaptation, etc.). So: best to go easy with the "very sharp line".

I wish that every time someone tried to submit a comment with the word "evolutionary" in a present-tense sentence, a pop-up appeared with a link to this post.