Followup to: Reductionism, Explaining vs. Explaining Away, Fake Reductionism

Back to our original topic:  Reductionism, which (in case you've forgotten) is part of a sequence on the Mind Projection Fallacy.  There can be emotional problems in accepting reductionism, if you think that things have to be fundamental to be fun.  But this position commits us to never taking joy in anything more complicated than a quark, and so I prefer to reject it.

To review, the reductionist thesis is that we use multi-level models for computational reasons, but physical reality has only a single level.  If this doesn't sound familiar, please reread "Reductionism".


Today I'd like to pose the following conundrum:  When you pick up a cup of water, is it your hand that picks it up?

Most people, of course, go with the naive popular answer:  "Yes."

Recently, however, scientists have made a stunning discovery:  It's not your hand that holds the cup, it's actually your fingers, thumb, and palm.

Yes, I know!  I was shocked too.  But it seems that after scientists measured the forces exerted on the cup by each of your fingers, your thumb, and your palm, they found there was no force left over—so the force exerted by your hand must be zero.

The theme here is that, if you can see how (not just know that) a higher level reduces to a lower one, they will not seem like separate things within your map; you will be able to see how silly it is to think that your fingers could be in one place, and your hand somewhere else; you will be able to see how silly it is to argue about whether it is your hand that picks up the cup, or your fingers.

The operative word is "see", as in concrete visualization.  Imagining your hand causes you to imagine the fingers and thumb and palm; conversely, imagining fingers and thumb and palm causes you to identify a hand in the mental picture.  Thus the high level of your map and the low level of your map will be tightly bound together in your mind.

In reality, of course, the levels are bound together even tighter than that—bound together by the tightest possible binding: physical identity.  You can see this:  You can see that saying (1) "hand" or (2) "fingers and thumb and palm" does not refer to different things, but to different points of view.

But suppose you lack the knowledge to so tightly bind together the levels of your map.  For example, you could have a "hand scanner" that showed a "hand" as a dot on a map (like an old-fashioned radar display), and similar scanners for fingers/thumbs/palms; then you would see a cluster of dots around the hand, but you would be able to imagine the hand-dot moving off from the others.  So, even though the physical reality of the hand (that is, the thing the dot corresponds to) was identical with / strictly composed of the physical realities of the fingers and thumb and palm, you would not be able to see this fact; even if someone told you, or you guessed from the correspondence of the dots, you would only know the fact of reduction, not see it.  You would still be able to imagine the hand dot moving around independently, even though, if the physical makeup of the sensors were held constant, it would be physically impossible for this to actually happen.

Or, at a still lower level of binding, people might just tell you "There's a hand over there, and some fingers over there"—in which case you would know little more than a Good-Old-Fashioned AI representing the situation using suggestively named LISP tokens.  There wouldn't be anything obviously contradictory about asserting:

|—Inside(Room,Hand)
|—~Inside(Room,Fingers)

because you would not possess the knowledge

|—Inside(x,Hand) —> Inside(x,Fingers)

None of this says that a hand can actually detach its existence from your fingers and crawl, ghostlike, across the room; it just says that a Good-Old-Fashioned AI with a propositional representation may not know any better.  The map is not the territory.

In particular, you shouldn't draw too many conclusions from how it seems conceptually possible, in the mind of some specific conceiver, to separate the hand from its constituent elements of fingers, thumb, and palm.  Conceptual possibility is not the same as logical possibility or physical possibility.

It is conceptually possible to you that 235757 is prime, because you don't know any better.  But it isn't logically possible that 235757 is prime; if you were logically omniscient, 235757 would be obviously composite (and you would know the factors).  That's why we have the notion of impossible possible worlds, so that we can put probability distributions on propositions that may or may not be in fact logically impossible.

And you can imagine philosophers who criticize "eliminative fingerists" who contradict the direct facts of experience—we can feel our hand holding the cup, after all—by suggesting that "hands" don't really exist, in which case, obviously, the cup would fall down.  And philosophers who suggest "appendigital bridging laws" to explain how a particular configuration of fingers evokes a hand into existence—with the note, of course, that while our world contains those particular appendigital bridging laws, the laws could conceivably have been different, and so are not in any sense necessary facts, etc.

All of these are cases of Mind Projection Fallacy, and what I call "naive philosophical realism"—the confusion of philosophical intuitions for direct, veridical information about reality.  Your inability to imagine something is just a computational fact about what your brain can or can't imagine.  Another brain might work differently.

 

Part of the sequence Reductionism

Next post: "Angry Atoms"

Previous post: "Awww, a Zebra"

Comments


Richard, also, from your "Dualist Explanations":

Once the brain matter is there, they think that's all there is to consciousness -- there's nothing further to explain. Most of us think there is something still to be explained, and dualism can achieve this by positing bridging laws that cause 'mind' to emerge from 'matter'.

Looks like a clear-cut case of Mind Projection Fallacy to me, or, at the very least, a severe misrepresentation of how a mature reductive materialist sees their own viewpoint. A reductive materialist need not pass from "Brain dynamics entirely and strictly constitute consciousness" to "There's nothing further to explain." There is plenty more to explain, namely, the nature of the identity.

Knowing that != knowing why != knowing how != seeing how.

If you merely know that mind=brain you may have a great deal left to explain. You may still need to find the insights needed to dissolve the apparent impossibility of mind=brain.

The map is not the territory, so you can't jump from

mind=brain

to

(mind=brain)->know('mind=brain')

to

know('mind=brain')->know_how('mind'='brain')

to

~know_how('mind'='brain')->~(mind=brain)

You seem to jump from "The identity 'mind=brain' seems unsatisfying" to "I have just cause to believe that mind is not matter". Of course the mere asserted identity seems unsatisfying: If you don't know any of the actual reductions, the mere materialist assertion that some reduction exists won't let you make any new predictions. This does not give you just cause to reject materialism; it gives you just cause to believe that your map is missing some reductions.

Postulating "bridging laws" completely fails to explain the lingering mysteries in any fashion whatsoever. It makes no new predictions even in retrospect; it is anticipation-isomorphic to "magic" or "God did it" or "elan vital" or "some reduction exists, but I won't tell you what". Note that if you actually accepted "The mind equals the brain!" as an answer to consciousness, it would also constitute a mysterious answer to a mysterious question: believing it would not make you any less confused. You have an answer only when consciousness stops being mysterious - this requires an actual reduction, not just a flat assertion that a reduction exists.

Which, of course, it does.

If people can understand the concept of unions from C/C++, they can understand reductionism. One can use different overlapping data structures to access the same physical locations in memory.

union mix_t {
    long l;
    struct {
        short hi;
        short lo;
    } s;
    char c[4];
} mix;

Is mix made up of a long, shorts, or chars? Silly question. mix.l, mix.s, and mix.c are accessing the same physical memory location.

This is reductionism in a nutshell: it's talking about the same physical thing using different data types. You can 'go up' (use big data types) or 'go down' (use small data types), but you are still referring to the same thing.

In conclusion, aspiring rationalists should learn some basic C++.

As a C programmer who hangs out in comp.lang.c, I'm strongly tempted to get out a copy of C99 so that I can tell you precisely where you're wrong there. But I'll content myself with pointing out that there is no guarantee that sizeof(long)==2*sizeof(short)==4*sizeof(char), and moreover that even if that did hold, there is still no guarantee that sizeof(struct {short hi; short lo;})==2*sizeof(short) because the struct might have padding - what if 'short' were a 16-bit quantity but stored in 32-bit words (perhaps because the arch can only do 32-bit writes, and has decided that short should be an int_fast16_t rather than an int_least16_t), resulting in alignment requirements?

In conclusion, PK should learn some basic C, and forget about the ++. (Old joke: what does C++ mean? Take C, add to it, then use the old version)

EDIT: thanks, paper-machine, and I approve of Markdown's choice of escape character. Now, if it'll just let me use \033[1;35m to change the colour...

I don't see anything wrong with grandparent, assuming a particular architecture. And whenever I used to write c, it was almost always for a particular architecture, usually with inlined assembly. Am I missing something, or are you just trying to make some point about portability regarding a metaphor?

It is a lesson hard-learned over many programmer-years of coding that portability should be acquired as an innate reflex; that whenever you are about to sacrifice portability, you ask yourself "Why am I doing so, are there alternatives, and have I done at least a rudimentary cost/benefit analysis of this decision?"

What you don't do is throw away portability just because you happen to be using a particular machine right now. This is something quickly learned in comp.lang.c.

Incidentally, a better model than PK's might be:

typedef struct foo {int i;} s;

s f;

Now what is 'f'? Is it an 's', is it a struct foo, or is it an int? And what about f.i? Both have the same address, casting either's address to any of the pointer types 's *', 'struct foo *' or 'int *' is valid; writing

*(int *)&f=1;

does just the same as f.i=1; quite possibly the compiler will generate the same code, because after all, the territory is the same. It's just that "f.i=1" is a higher-level map, which conveniently abstracts things out.

This is made more explicit by considering struct bar {int a; long b;} t; *(long *)(((char *)&t)+offsetof(struct bar, b))=1; // same as t.b=1

Of course, you could go to all the trouble of computing the offset pointer every time, since that's what happens on the "quark" (assembly language) level of the territory, but the higher-level 'struct' map is cognitively useful.

Upvoted for delicious, delicious C.

EDIT: However, you may want to work on the markdown; * is a reserved character ;_;

I'm confused as to what your purpose is with this series on reductionism. Is there a particular anti-reductionist position you're combating?

Earlier, you wrote,

Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.

I don't think your typical anti-reductionist is concerned about the existence of different levels of models. I've never heard one ask "How can you model the plane without the wings?"

Anti-reductionists are opposed to models in general. An anti-reductionist believes that a collection of things has properties that are not the results of the combined properties of the things in the collection, let alone of a model. For example, they would say that a human has free will, even though the constituents of the human are deterministic; or that a human has a soul, but if you constructed a human from parts, it wouldn't; or that representations in a human brain have meaning, while representations in a computer cannot; or that a human brain has consciousness, etc.

So I don't think what you're writing addresses what it is that reductionists believe that non-reductionists don't.

Anti-reductionism equals spiritualism, and it is not the opposite of materialism, but of science. Science is not materialistic, since we believe in fields. Also, if you discovered that spirits were composed of magnetic fields, and conducted experiments on them, you would still be a scientist. The spiritualist, by contrast, uses the term "spirit" to name something that can't be explained. Anti-reductionism = spiritualism = the belief that there exist complex phenomena ("spirits") with no explanations.

Richard sez:

Note that my fundamental premise is not, "I think the zombie world is coherently conceivable." Nothing of interest follows from the fact that I have an opinion, since the opinion might be baseless (as you've repeatedly pointed out). Instead, my basic premise is that the zombie world is coherently conceivable (i.e. without the sort of finger-hand misunderstanding that might make a scenario seem conceivable when in fact it's incoherent). You haven't said a word that bears on the truth of this premise. All you've said is that ignorance might lead one to believe it even if it were false. But that is no reason to think that it is false.

...

Please explain what it means for something to be genuinely conceivable, as opposed to just being conceivable to some particular person.

I'm not sure how I'm supposed to react to this paragraph, frankly. I mean, suppose someone came to me and said: "There are Martians in my nose, therefore extraterrestrial life exists" and I said "Why do you believe there are Martians in your nose?" and he said, "No, I'm not reasoning from my belief that there are Martians in my nose - nothing follows from this, as a belief could be mistaken - I'm reasoning from the fact that there are Martians in my nose."

I mean, how is this not 100% pure naive realism?

Poke: Criticizing the critics.

Richard: What exactly is the 'P' that you think you're taking as a premise, here?

Also, in "Arguing with Eliezer", you write:

I find this commitment less absurd than denying the manifest reality of first-personal conscious experience (as reductive materialists like Dennett and Eliezer do), or engaging in the metaphysical contortions that non-reductive materialists must (see my 'dualist explanations' post).

If, as you claim, you understood my argument fully and rejected it due to arguments I fail to understand, then why are you writing what I must object to as a major misrepresentation of my views as I see them?

Namely: From my viewpoint, I'm certainly not denying your first-person conscious experience - just saying that it doesn't work the way you think it does.

I can't tell from your parody of positions in the philosophy of mind whether you're criticizing eliminative materialists or critics of eliminative materialism.

If you want to fight the good fight, edit the section "Limits of Reductionism" in the Wikipedia article on Reductionism. It cites many examples of things that are merely complex, as evidence that reductionism is false.

People interested in the discussion between Eliezer and Richard might find this Wikipedia article interesting: Depersonalization Disorder

Essentially, people behave as they otherwise would, except they don't have a sense of "self-awareness". That is, they did something, and they know they did something, but it doesn't feel as though it was them who did the thing. Often people feel as though they are automata, pre-programmed to respond to certain stimuli, but that there is no "self" driving them.

The disorder also tends to cause its inverse, which is derealization. That is, the individual perceives himself to be real, but nothing external is real.

This effect can be generated with drugs, and it can also be treated with drugs. This suggests to me that the entire "sense of self" is caused by a chemical interaction within the brain.

Not wanting to open a possibly long article: is that the same thing as dissociation? Is dissociation the symptom and depersonalization a cluster of symptoms that includes it?

Depersonalization is a type of dissociation disorder, yes. It's in the same class of disorder as multiple personalities - or Dissociative Identity Disorder.

And I think the discussions between Richard, Eliezer, and Robin, among others, are worth reading. Richard argues, in a nutshell, that the mind is more than just the brain, that there is something else that creates the mind. Eliezer and Robin argue that he is making the same mistakes that he has been describing in the Mysterious Answers to Mysterious Questions sequence. That's what the "zombie world" is all about - if a world consists of humans who are in every way exactly the same as ours minus whatever ephemeral thing it is that Richard says completes "consciousness", Richard argues these zombies will not be conscious, and Eliezer argues that they will be, because they are made of all the same things that we are.

I'm pretty confused by this discussion. People toss out terms like reductionist or anti-reductionist, and I can't even tell what they disagree about.

Here's what I know:

1) There are quarks and electrons, maybe some strings too. Nobody seems to dispute the quarks and electrons, at least. There are also clusters of particles.

2) Everything above that level is an abstraction that only exists in our heads. Yeah, those atoms really are near each other, but the only thing that makes them a "computer" is that we use them for computing. Same applies to brains and minds.

3) Still, calling a spade a spade is useful, so we do it. Not because it's "really" a spade, but because we can't reason quickly about innumerable swarms of quarks.

And that is all. Call it reductionism, call it anti-reductionism, that's all there is to it. There are no spadetrons, no mindtrons and (so far) no computrons.

So, what is the dispute over?

the addition of p-consciousness has no physical consequences. To infer that it has "absolutely no consequences of any kind" is obviously question-begging

Talking is a physical act. Remembering is a physical act. Thinking is a physical act. If your 'p-consciousness' has no physical consequences, it cannot affect the way people think and remember and talk. None of the philosophers who think and remember and talk about their supposed non-experiential experiences can be causally connected to the thing they're supposedly experiencing.

Again: your position is incoherent. Your conception of 'non-physical facts' is meaningless and without semantic content.

Yes, and when I hit my radio with a rock it might stop working or change the station, and if I rip out transistors it might make the sound distorted, etc. That really doesn't prove that the song is stored inside the radio, does it?

Well, no. All else being equal, however, and absent evidence for radio waves, the most parsimonious explanation IS that the song is stored in the radio. Absent evidence of immaterial souls, the same applies to brains. Heraclitus could fairly easily have been wrong, since he was just going on the effects of gross trauma. Fortunately, we have advanced some since Heraclitus, having discovered the neuron, brain areas responsible for different tasks, computing in general, fMRI scans, and other fun stuff. This has gone a certain way towards confirming his hypothesis. I'm not saying it's impossible that there's a soul floating around communing with the brain. Fully material, reducible brains are not, in my estimation, as certain as gravity could be said to be - but the brain is not exactly growing more mysterious and inexplicable as our study goes on, and betting on the side of inexplicability has not had a good record these past few hundred years.

Also, I'm sorry that you didn't want to actually present a reason that I'm wrong, as opposed to asserting that I'm biased, following trends, not worth arguing with, etc. My "bias towards materialism" is only that it's usually proven right in the past and in my experience. I'm afraid that I'm going to have to put off reading any more parapsychology than I've done in the past on the basis that there's probably nothing new in it.

A bridging law such as Richard is proposing would be something like "when a physical system, such as the brain, is in condition XYZ (a physical description), then it will be conscious of redness, and when it is not in condition XYZ, it will not be conscious of redness." This bridging law allows one to predict the future: it allows one to predict when one will see redness and when one will not. It predicts the future just as well as the law of conservation of energy. Either both are isomorphic with "God did it," or neither is. Eliezer simply meant to say that the bridging law still doesn't explain WHY the brain sees redness. And the law of conservation of energy doesn't explain WHY energy is conserved, it just asserts it.

No, I don't think that's what he means. A "bridging law" is the same as an "emergent property" or a "complex system" - specifically, it asserts that a reduction and explanation exists, and it sounds like it provides one, but it does not actually do so. This makes it fundamentally useless. Conservation of energy states that the difference between prior and posterior amounts of energy should be 0. This is a precise prediction. If it were isomorphic to the "bridging law", it would state that there is some transformation that describes the relations between prior and posterior states of energy in a system - which is to say, that they make sense "somehow". It's a functionally meaningless statement, and it doesn't tell you anything about what it's describing, any more than asserting an "appendigital bridging law" would tell us about hands.

As for questions having answers, I've already gone on about the empirical validity of a few observations to the point of preaching. Suggesting that an empirical question, such as how a brain works, might be unanswerable because of the fundamental philosophical unanswerability of questions in general is sophistry at best. The ability to ask and answer questions to some approximation is so fundamental that you can't even assert gravity without it. Trying to undermine this is silly.

Richard, I'm always amazed at what philosophers think they can see merely by "understanding the terms." Such analysis may well tell us a lot about what we often assume, but I am skeptical that it can tell us as much as philosophers think about what is actually possible vs. only apparently possible.

"Eliminative fingerists" is amusing. I think that particular projection fallacy as pertains to minds is part of a larger tradition, however. Radical behaviorists, for instance, really ought to be included. They feel left out, off on their lonesome, busily declaring that the "mind" is a convenient fiction because it can't be measured.

... and separately, I might note that I've read through much of your archives and enjoyed it immensely. Keep it up!

EDIT: Typos.

I have been reading some of the sequences, and this entry shocked me a lot.

If the reductionist thesis is "we use multi-level models for computational reasons, but physical reality has only a single level", then what kind of evidence could support it against the thesis "we use multi-level models for computational reasons AND physical reality has multiple levels?" (let me call it 'anti-reductionist thesis' regardless of what actual anti-reductionists defend). I just can't think of how the world would be different if physical reality had multiple levels than if it had only one level.

In other words, the reductionist thesis, as it is presented here, does not lead me to anticipate differently than the anti-reductionist thesis. Accepting it just generates a floating belief, and as a result, I reject the reductionist thesis, and you should do the same.

Am I wrong? And why?

EDIT: this is now pretty much retracted, see the following thread.

If the reductionist thesis is "we use multi-level models for computational reasons, but physical reality has only a single level", then what kind of evidence could support it against the thesis "we use multi-level models for computational reasons AND physical reality has multiple levels?"

Lower level models are more accurate than abstract models, and you can observe the consequences of this on multiple levels of abstraction. Therefore if physical reality has multiple levels then they must be incompatible and parallel in a very peculiar way. This makes the idea more complex and therefore less probable than the reductionist thesis.

Tabooing reality might make things a bit clearer.

The whole point of the renormalization group is that lower level models aren't more accurate, the lower level effects average out.

The multiple levels of reality are "parallel in a peculiar way" governed by RG. It might be "more complex" but it's also the backbone of modern physics.

The whole point of the renormalization group is that lower level models aren't more accurate, the lower level effects average out.

I tried to read about RG but it went way over my head. Is the universe in principle inexplicable by lower level theories alone according to modern physics? Doesn't "averaging out" lose information? Are different levels of abstraction considered equally real by RG? Does this question even matter or is it in the realm of unobservables in the vein of Copenhagen vs MW interpretation?

The point of RG is that "higher level" physics is independent of most "lower level" physics. There are infinitely many low level theories that could lead to a plane flying.

There are infinitely many lower level theories that could lead to quarks behaving as they do, etc. So (1) you can't deduce low level physics from high level physics (i.e. you could never figure out quarks by making careful measurements of tennis balls), and (2) you can never know if you have truly found the lowest level theory (there might be a totally different theory if you only had the ability to probe higher energies).

This is super convenient for us - we don't need to know the mass of the top quark to figure out the hydrogen atom, etc. Also, it's a nice explanation for why the laws of physics look so simple - the laws of physics are the fixed points of renormalization group flow.

Thanks, my reality got just a bit weirder. It's almost as if someone set up a convenient playground for us, but that must be my apophenia speaking. If there are infinite possibilities of lower level theories, are successful predictions in particle physics just a matter of parsimony? Is there profuse survival bias when it comes to hyping successful predictions?

I think I'm communicating a little poorly. So start with atomic level physics - it's characterized by energy scales of 13.6 eV or so. Making measurements at that scale will tell you a lot about atomic level physics, but it won't tell you anything about lower level physics - there is an infinite number of lower level physics theories that will be compatible with your atomic theory (which is why you don't need the mass of the top quark to calculate the hydrogen energy levels - conversely, you can't find the mass of the top quark by measuring those levels).

So you build a more powerful microscope; now you can get to 200*10^6 eV. Now you'll start creating all sorts of subatomic particles, and you can build QCD up as a theory (which is one of the infinitely many theories compatible with atomic theory). But you can't infer anything about the physics that might live at even lower levels.

So you build a yet more powerful microscope; now you can get to 10^14 eV, and you start to see the second generation of quarks, etc.

At every new level you get to, there might be yet more physics below that length scale. The fundamental length scale is maybe the Planck scale, and we are still 13 orders of magnitude above that.

Edit: this author is sort of a dick overall, but this was a good piece on the renormalization group- http://su3su2u1.tumblr.com/post/123586152663/renormalization-group-and-deep-learning-part-1

I think I'm the one communicating poorly since it seems I understood your first explanation, thanks for making it sure anyways and thanks for the link!

When I was wondering about successful predictions in particle physics, I was in particular thinking about Higgs boson. We needed to build a massive "microscope" to detect it, yet could predict its existence four decades ago with much lower energy scale equipment, right?

The existence of the Higgs is one of the rare bits of physics that doesn't average out under renormalization.

The reason is that the Higgs is deeply related to the overall symmetry of the whole standard model - you start with a symmetry group SU(2)xU(1) and then the Higgs messes with the symmetry so you end up with just U(1) symmetry. What the theory predicts is relationships between the Higgs and the W and Z bosons, but not the absolute scale. The general rule is that RG flow respects symmetries, but other stuff gets washed out.

This is why the prediction was actually "at least 1 scalar particle that interacts with W and Z bosons". But there are lots of models consistent with this - it could have been a composite particle made of new quark-like-things (technicolor models), there could be multiple Higgses (2 in SUSY, dozens in some grand unified models), etc. So it's sort of an existence proof with no details.

Even if the argument "Occam's Razor says that since reality having only one level is simpler than reality having multiple levels, then the first option is more likely to be true." was valid, there is a problem.

Contrarily to other contexts where Occam's Razor is actually useful, none of these options lead us to anticipate differently under any circumstance, so the rational thing to do here is not to apply Occam's Razor, but to reject the question "Does physical reality have one level or multiple levels?"

Edit: Note that I did not mean to say that you should not apply Occam's Razor at all in this scenario. Perhaps, given the hypothesis that reality has multiple levels, Occam's Razor makes certain phenomena more likely, and observations regarding these phenomena could be used to argue for or against the reductionist thesis. The point is that I cannot find examples of such phenomena, especially if the kind of multiple levels that we are talking about are purely physical.

Wait. Perhaps one of such predictions would be that we should find universal laws involving higher-level entities, while it seems that at that level, we only find ceteris paribus laws. By contrast, at the lower level, we do find universal laws. This should be evidence in favour of the reductionist thesis.

Which would indicate that I was wrong in my initial claim.

Actually when I first responded to you I was thinking about biology, psychology and such as the higher level. In this case the claim seems to make sense. However, if I understood EHeller correctly, this doesn't hold water inside the realm of modern physics. Besides, we can in principle never know if we're at the lowest level.

In the zombie world, Ben Jones posts a comment on this blog, but he never notices what he is posting.

Um, no, this is wrong.

I wouldn't expect you to take my word for this, but Chalmers himself has said that's not the case. P-zombies behave exactly the same way as people with consciousness do in all ways, so zombie-Ben-Jones' eyes pick up visual data, his brain contains a representation of what he has done and what he is seeing, and he could provide just as much of a reasoned and intelligent discussion of his positions as you'd otherwise expect.

You are not defending Chalmers' actual hypothesis. You are defending a much more intuitively-appealing and defensible position (which as it happens is still wrong, but that's another argument and will be had another day).

P-zombies will reason. P-zombies will claim to have 'experiences' (at least some of them will, anyway), and (some) will discuss how those experiences are not communicable and indescribable, etc. etc. They will not merely be identical to our crude human perceptions. They will not only be identical to the limits of our ability to measure. They will act precisely the same in all ways.

Chalmers argues that it is meaningful to postulate a property such entities would lack that does not affect causality in any way. Don't call it 'conscious experience' if you're hung up on that term - replace it with 'fitzgoanth' if you wish.

P-zombies behave identically in every way to people with fitzgoanth, except that they lack fitzgoanth, and no possible observation of how they act or how they are composed can lead someone to conclude that someone possesses or lacks fitzgoanth. Saying that someone has fitzgoanth does not lead to different conclusions than denying that someone has fitzgoanth.

This fact actually is one proof that Caledonian and others are talking nonsense in saying that the zombie world is incoherent; everyone involved in this discussion, including Caledonian, knows exactly what the world would be like if it were a zombie world,
Yes, it would be exactly the same thing it would be if it weren't a zombie world. And that's why the concept of 'zombies' is incoherent.

The mental gymnastics people will go through to avoid confronting this simple and obvious fact are quite extraordinary.

I suspect it's the same reason why people continue to believe in various gods, despite their religions being nonsense, and sometimes will insist that everyone knows that the gods in question exist and are denying their existence out of hate / spite / denial.

"Yes and when I hit my radio with a rock it might stop working, change the station, if I rip out transistors it might make the sound distorted, etc. That really doesn't prove that the song is stored inside the radio, does it?"

Pace Dan, above, it would not be "parsimonious" to assume the song is "stored in the radio." Even assuming complete naivete about how radios work, it would be relatively easy to show that the songs are (in some mysterious way) coming from some outside source. (For example, you could compare the performance of two identical radios as one is systematically banged up -- although just the fact that two identical radios set next to one another play the same content in perfect simultaneity would in itself be suggestive.)

What's fatal to the analogy, then, is that while physical abuse to your radio doesn't qualitatively affect the radio transmissions, physical abuse to your head does qualitatively affect your mind.

Although I prefer an even weaker kind of scientism: scientism'': an ontological claim is boring if it has no scientific implications. By boring, I mean that it tells us nothing relevant to practical reason. Which is why I'm happy to take Richard's property dualism: I accept scientism'', ergo, it doesn't matter.

Sez Richard:

The difference between our views is that he thinks the reduction is logically necessary; that there is no sense to be made of the idea of a 'zombie' world physically identical to ours but lacking consciousness. I think that's plainly false. There's nothing incoherent about the idea of zombies. So the admitted link between the physical and phenomenal facts is merely contingent (taking the form of a natural law, rather than a reductive analysis).

"Plainly false?" You mean you can imagine a world identical to ours but lacking 'consciousness', therefore, it is logically possible? But logical possibility does not follow from conceptual possibility. You may simply have not yet proved the tautology

|- Inside(x, Hand) -> Inside(x, Fingers)

which you would need to see that your conceptual imagining is logically impossible.

I don't know whether 577 is prime, so it is conceptually possible to me that it is prime or composite, but only one of these two alternatives is logically possible. I can conceive that 577 is prime, or alternatively, that it is not prime, and yet 577+2 = 579 either way; but this does not imply that the primeness of 577 is a detachable property that floats around independently of its arithmetical behavior. It just means I don't know.
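The 577 example is easy to make concrete (a minimal sketch of my own, not part of the original comment): once the trial division is actually carried out, one of the two "conceivable" alternatives simply stops being available.

```python
def is_prime(n: int) -> bool:
    """Trial division: n >= 2 is prime iff no d with d*d <= n divides it."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Before computing, "577 is prime" and "577 is composite" are both
# conceivable; after computing, only one remains logically possible.
print(is_prime(577))  # True: 577 is prime
print(577 + 2)        # 579 either way, as the comment says
```

Nothing about the arithmetic changed when the check ran; only the state of knowledge did, which is exactly the distinction between conceptual and logical possibility being drawn here.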

This is the fundamental Mind Projection Fallacy I think you are committing in passing from "I can imagine zombies" to "Zombies are logically possible" to "The 'bridging laws' that imply consciousness are contingent physical facts rather than logical implications."

Indeed, the whole notion of a "bridging law" consists of committing the Mind Projection Fallacy with respect to theorems like

|- Inside(x, Brain) -> Inside(x, Mind)

and supposing that they describe physical causation, something that happens out there in the world, rather than knowledge deductions. The rule that states that a hand is present when fingers+palm+thumb are present, is not a physical cause, or a contingent bridging law, but a consequence of the definitions of the discovered referents of "hand" and "fingers+palm+thumb". You don't know the definition of the undiscovered referent of "consciousness" so you can't see the logical identity, but it is there - this is what a reductionist believes.

Unknown, Dan has described very well the difference between "A bridging law did it!" and Conservation of Energy.

Caledonian: Sure you do. That's why we have biology and chemistry and neuroscience instead of having only one field: physics.

That's just a matter of efficiency (as I have tried to illuminate). There is nothing about those high level descriptions that is not compatible with physics. They are often more convenient and practical, but they do not add one iota of explanatory power.

I know of no models of reality that have greater explanatory power than the standard reductionist one-level-to-bind-them-all position (apologies for the pun).

Sure you do. That's why we have biology and chemistry and neuroscience instead of having only one field: physics.

Since we don't currently know whether our models of the most basic known components of the physical world are compatible with our high-level models of phenomena, they are ALL better within their limited domain than more general models are.

This is not a problem for anyone wanting merely to produce a useful model. It is a profound problem for anyone wanting to produce a ur-model that encompasses all known phenomena.

Sure you do. That's why we have biology and chemistry and neuroscience instead of having only one field: physics.

Are not these models simply abstractions of physics? That is, simply higher levels of the same systems that physics describes? We know chemistry conforms to physics, and we know biology conforms to chemistry, does biology somehow not conform to physics?

Since we don't currently know whether our models of the most basic known components of the physical world are compatible with our high-level models of phenomena, they are ALL better within their limited domain than more general models are.

Do we not know this? I thought we did.

The high level models are certainly more practical for their given applications than attempting to work the whole thing out from the movement of quarks, but they are certainly not more accurate. It's the difference between using a globe to find Australia (high level map), and a world atlas to find a road in Sydney (lower level map). It's a lot easier to find Australia on a globe than it is in an atlas, but that does not mean you cannot use the atlas to do so, and it certainly doesn't make the globe more accurate than the atlas, for you'll never find that particular road on a globe. Better for the task, perhaps, but only because you do not need the level of detail the lower level map provides.

PK: I don't see the ++ in your nice example, it's perfectly valid C... =)

Caledonian, Ian C.: I know of no models of reality that have greater explanatory power than the standard reductionist one-level-to-bind-them-all position (apologies for the pun). So why add more? In a certain way "our maps [are] part of reality too", but not in any fundamental sense. To simulate a microchip doing an FFT, it's quite sufficient to simulate the physical processes in its logic gates. You need not even know what the chip is actually supposed to do. You just need a very precise description of the chip. If you do know what it's doing, it's of course much more efficient to directly use the same algorithm it is using. That will also dramatically cut down on the length of its description. But that does not make the FFT algorithm fundamental in any way. It is just a way to look at what is happening. I mean, really, this shouldn't be so hard to grasp...
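The microchip point can be illustrated with a toy example (a sketch of my own, using a gate-level binary adder in place of the FFT chip; `nand`, `full_adder`, and `add8` are names I made up): simulate the device purely at the level of its gates, with no notion of what it is "supposed" to do, and the high-level behaviour is reproduced exactly.

```python
# Simulate a "chip" entirely at the level of its NAND gates, then check
# it against the high-level description (8-bit integer addition).

def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def full_adder(a: int, b: int, c: int):
    """One-bit full adder built from nine NAND gates."""
    t1 = nand(a, b)
    t2 = nand(a, t1)
    t3 = nand(b, t1)
    axb = nand(t2, t3)    # a XOR b
    t4 = nand(axb, c)
    t5 = nand(axb, t4)
    t6 = nand(c, t4)
    s = nand(t5, t6)      # (a XOR b) XOR c: the sum bit
    carry = nand(t4, t1)  # majority(a, b, c): the carry-out bit
    return s, carry

def add8(x: int, y: int) -> int:
    """8-bit ripple-carry addition, simulated gate by gate."""
    result, carry = 0, 0
    for i in range(8):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

# The gate-level simulation agrees with the high-level algorithm everywhere:
assert all(add8(x, y) == (x + y) % 256 for x in range(256) for y in range(256))
```

The gate-level simulation never mentions "addition", yet it matches the high-level algorithm on every input; the high-level description is a compression of what the gates do, not an extra ingredient.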

Everyone ignored my C++ example. Was I completely off base? If so, please tell me. IMHO we should look for technical examples to understand concepts like "reductionism"; otherwise we end up wasting time arguing about definitions and whatnot.

Personally, I find it irritating when a discussion starts with fuzzy terms and people proceed to add complexity making things fuzzier and fuzzier. In the end, you end up with confused philosophers and no practical knowledge whatsoever. This is why I like math or computer science examples. It connects what you are talking about to something real.

I would love to see Eliezer defend his claim that the physical world has only one level. Precisely how does he know that? We cannot confirm that our various models of phenomena can be reduced to the most basic physics.

I'm not convinced that we can have real probability distributions over impossible possible worlds. At the very least, a real probability distribution must sum its exhaustive and exclusive possibilities to 1, but it seems to me that the same type of effort needed to show that a set of impossible possibilities sums to 1 also changes the degree to which they have been examined, changing their subjective probabilities. It specifically seems to me that a pseudo-probability distribution over impossible possible worlds will generally contain non-correctable biases from framing, such as overconfidently narrow probability distributions, or conversely conjunction fallacies and subadditivity.

For a concrete example: after estimating the probability that an albino tiger which has previously performed on the Daily Show is sneaking up behind me as I write this in order to deliver a pizza, the probability I am left with will be sufficiently low that it will be utterly dominated by skeptical hypotheses about either my calculation (even after checking 10 times I may have misplaced a decimal point; I may even be deluding myself about knowing how to multiply, or about the validity of 'multiplication') or my world (which may actually be a short 'joke sim' of an agent existing only to make some extremely low estimate of some event's probability and then be proven wrong). These are exactly the sorts of scenarios in which conjunction fallacies are not actually fallacies, and in which non-additivity, framing effects, etc. are valid, since given the skeptical nature of the scenario, the mere formation of the frame constitutes evidence.

To call this sort of uncertainty, which cannot be mathematically manipulated and processed, a "probability distribution" ignores the fact that "probability distribution" is a mathematical concept with specific properties.

I can't presume to answer for Eliezer, but I don't think he's yet claimed to know how the brain works. He's also paid considerable attention to the nonsensical nature of some attempts to say that we might "already" know how, i.e. "emergence", "complexity", and other non-explanations. I'd go so far as to say that it follows directly from the fact that we can't make our own brains from first principles that we don't really understand the ones currently in circulation.

That said, it would be a serious defiance of all precedent if brains somehow had a magical, non-reducible quality by which they refused to comply with empirical observation. It's true that the past success of such study can't reliably predict future trends. By the same logic, however, we couldn't expect gravity to continue in the future, since past trends and consistency are of a different substance than future ones. Until gravity and reductionism actually do give out, we can say reasonably well that gravity is likely to continue and things are likely to be explicable. Following this line of reasoning - that the past may not predict the future at all - could kill any plotted course of action relying upon gravity or causality equally well, so why apply it only to cognitive science?

As pertains to brains, we have reasonable inferences that the mind is strictly anchored in a physical substance. Among the oldest I'm aware of is Heraclitus's observation that hitting someone in the head causes stupor, confusion, etc., so the mind probably resides there. More modern versions can include research into brain lesions, neurotransmitters, psychoactive drugs, and the like if you prefer. The only way I can imagine to actually rule out a purely "physical" brain, especially against the weight of current evidence, would be if we could finally map the brain to perfection, watch all the computation it's carrying out, understand it all - and still demonstrate that there's a mysterious magic term in the input or output that definitely comes from nowhere at all. It sounds ridiculous spelled out this way, but that's essentially what postulating "non-reducibility" comes down to: that monitoring an entire brain physically, you could actually watch things come out of nowhere. Certain physicists would find this kind of disturbing, for one.

Additionally, "God did it" and "Energy is conserved" are not isomorphic. One explains nothing; it does not provide any way to plot future events, assuming causality and a fairly stable universe. The other one does provide a way to plot future events, assuming causality and a fairly stable universe. Again, if you want to chuck out causality and a fairly stable universe, I have to wonder why you bother finishing sentences seeing as sound and information propagation are bound to stop working at any time. If we can agree that causality and stability are to remain in play, however, it follows that certain models will correspond to predictable reality and others will not. Going against this doesn't just undermine AI or cognitive science, it actually undermines empiricism in general, which is funny because empiricism has a pretty good track record in spite of it.

When you pick up a cup of water... the force exerted by your hand must be zero.

Unless you are holding the cup up, supporting it against the force of gravity.

...much later... The thing that puzzles me about this post is that no attention is paid to context.

I had an operation last year to my right index finger. It was carried out by a hand surgeon. I used those terms because it was rather important which finger was operated on, and because the medical specialism relates to any part of the hand indifferently.

A trivial example, of course, but it illustrates the point, which applies also to much more complex issues, that the appropriate choice of "model level" (or other meta-model feature) to best represent the aspect of reality that matters depends on the context (and especially on the purpose). The difficulty begins, IMHO, when people insist on using the same model or meta-model whatever the context.

Most commenters on this post seem entirely wrapped up in the mind/brain question. That isn't the only question for rationalists to have a view about! They don't seem to be aware that arguments about the usefulness and limits of reductionism also continue in many other fields. The problem is probably that concepts like emergence are used in the mind-brain debate as an excuse for vitalism. But that is really a special case, just because minds are the things that are conducting this debate. In other fields emergence can be a useful concept. In other words I can claim that emergence is useful (in some senses anyway) without believing this has anything to do with consciousness.