(Cross-posted from Hands and Cities)

A number of people I know are illusionists about consciousness: that is, they think that the way consciousness seems to us involves some fundamental misrepresentation. On an extreme version of this view (which Frankish (2016) calls “strong illusionism”), phenomenal consciousness simply does not exist; it only seems to exist (I’ll say more about what I mean by phenomenal consciousness in a moment). I’m especially interested in this version.

For a long time, though, I’ve found it hard to really grok what it would be for strong illusionism to be true. I can repeat the words illusionists say; but I haven’t had a clear sense of the reality envisioned, such that I could really look at the world through the illusionist’s eyes. What’s more, I’ve suspected that some sort of barrier in this respect is crucial to the resistance that I (and I expect many others) feel to the view. Successfully imagining illusionism being true, I think, may be halfway to believing it.

(As a sidenote: I think this dynamic may be common. Actually looking at the world the way someone you disagree with looks at it is often much more difficult than being able to pass their “intellectual Turing test” — e.g., to present their position in terms that they would endorse. As ever, words are easy; seeing the world in new ways is hard. And once you have seen the world in a new way, the possibility that the world actually is that way is much easier to take seriously.)

The aim of this post is to grok illusionism more fully. Let’s start with a few clarifications.

The philosophical debate about consciousness centers on “phenomenal consciousness,” which is generally thought of as the thing we ascribe to a system when we say that there is “something it’s like” to be that system, or when we ascribe to that system a first-person perspective or subjective experience. And experiences themselves — the taste of wine, the smell of leaves, the color of an afterimage in your visual field — are thought of as “phenomenally conscious” when there’s something it’s like to have them, and the “phenomenal properties” of experiences determine (consist in?) what it’s like to have them.

Phenomenal consciousness is often contrasted with “access consciousness,” understood as a property of the mental states that a subject can do certain things with (e.g., “access”) — in particular, notice them, report on them, reason about them, etc. 

People have various intuitions about phenomenal consciousness that are often thought difficult to validate on a standard physicalist conception of the universe. Chalmers (2018) offers a helpful taxonomy:

  • Explanatory intuitions: these are intuitions to the effect that certain familiar modes of physical and functional explanation are unable in principle to explain phenomenal consciousness.
  • Metaphysical intuitions: intuitions about the metaphysical status of phenomenal consciousness. For example, intuitions that phenomenal consciousness is not a physical phenomenon, or that it is in some sense simple. Chalmers doesn’t say so, but I’ll assume that intuitions to the effect that consciousness is e.g. unified (different conscious experiences arise in a single unified mental “space”) or binary (you have it or you don’t) would also fall under this bucket.
  • Knowledge intuitions: Intuitions about the type of knowledge it’s possible to have about phenomenal consciousness — for example, intuitions to the effect that a neuroscientist raised in a black-and-white room, who knows all the physical facts about color vision, learns something new when she sees red for the first time; and intuitions to the effect that it is difficult or impossible to know, from a third-person perspective, whether a given system is phenomenally conscious, or what its phenomenal consciousness is like, even granted arbitrary amounts of physical knowledge and understanding.
  • Modal intuitions: These are intuitions about what sorts of scenarios involving phenomenal consciousness are possible. For example, you might think it possible that despite their behavior, other people are not conscious (even though you are); or that what other people call “blue” actually looks to them like red looks to you (and vice versa); or that there could be a physical duplicate of our world consisting entirely of creatures that don’t have phenomenal consciousness (“phenomenal zombies”).

Some theorists argue, from intuitions of this kind and other considerations, that a standard physicalist conception of the universe requires revision. Others resist such a revision.

Illusionists are definitely in the latter category. Where illusionism differs from other physicalist theories, however, is somewhat harder to pin down. Broadly speaking, illusionism is more willing to claim that the way phenomenal consciousness seems to us involves some fundamental aspect of misrepresentation, whereas other theories hold out more hope that various ways things seem to us might come out true. Because illusionism and other physicalist theories share a fundamental physicalist metaphysic, however, the distinction between them comes down primarily to a debate, not about how things fundamentally are, but about how they seem (or, alternatively, about which properties something must have in order to count as phenomenal consciousness, vs. something else). In this respect, the distinction is much less interesting than the distinction between physicalist and non-physicalist theories more broadly.

This is a familiar dialectic in philosophical debates about whether some domain X can be reduced to Y (meta-ethics is a salient comparison to me). The anti-reductionist (A) will argue that our core intuitions/concepts/practices related to X make clear that it cannot be reduced to Y, and that since X must exist (as we intuitively think it does), we should expand our metaphysics to include more than Y. The reductionist (R) will argue that X can in fact be reduced to Y, and that this is compatible with our intuitions/concepts/everyday practices with respect to X, and hence that X exists but it’s nothing over and above Y. The nihilist (N), by contrast, agrees with A that it follows from our intuitions/concepts/practices related to X that it cannot be reduced to Y, but agrees with R that there is in fact nothing over and above Y, and so concludes that there is no X, and that our intuitions/concepts/practices related to X are correspondingly misguided. Here, the disagreement between A vs. R/N is about whether more than Y exists; the disagreement between R vs. A/N is about whether a world of only Y “counts” as a world with X. This latter disagreement often begins to seem a matter of terminology; the substantive questions have already been settled.

My sense is that the distinction between what Frankish calls “weak” and “strong” illusionism may turn out to be largely terminological as well. Frankish characterizes weak illusionism as admitting that phenomenal consciousness exists, but claiming that we misrepresent it as having certain metaphysically suspicious features — such as being ineffable, intrinsic, essentially private, or infallibly known — that it doesn’t possess. Strong illusionism, by contrast, denies that phenomenal consciousness exists altogether. But it’s not clear to me what’s at stake in the difference between admitting that phenomenal consciousness exists but lacks X, Y, and Z features, vs. saying that it doesn’t exist at all, unless we can say more about the features that, according to weak illusionists, it does have. Frankish, here, mostly says that weak illusionists still allow that experiences have properties that are “genuinely qualitative” and “feely,” and in that sense phenomenal — claims which strong illusionists deny. But it’s very unclear to me, absent further positive characterization, what “qualitativeness” and “feely-ness” amount to (I think Frankish discusses this in “Quining Diet Qualia,” which I haven’t read).

Despite the purported strength of his illusionism, though, Frankish himself does a few terminological dances to avoid baldly endorsing claims like “there’s nothing it’s like to be you,” and “you are a phenomenal zombie.” He is committed to the non-existence of phenomenal consciousness, but he says that we need not construe talk about “what it’s like” or of “phenomenal zombies” as essentially about phenomenal consciousness. For example, we might think of there being “something it’s like” to have a certain experience if that experience is represented introspectively in some way; and we might think of zombies as essentially lacking in this type of introspective access to their mental states — what Frankish calls an “inner life.” We aren’t zombies like that, says Frankish.

I think Frankish is squirming a bit here, and that he should bite the relevant bullets more forthrightly (though to his credit, he’s still reasonably up front). No one ever thought that phenomenal zombies lacked introspective access to their own mental states, since they were by hypothesis functionally identical to humans; and the central function of “what it’s like” talk in the discourse about consciousness has been to point to/characterize phenomenal consciousness.

Let’s consider, then, the more forthright version of strong illusionism, which just states directly that phenomenal consciousness does not exist; there’s nothing it’s like to be you, or a bat, or your partner; there’s never been anything it’s like to be anyone; there’s nothing it’s like to see green, or to feel pain, or to fall in love. You used to think a zombie world was merely possible; actually, it’s actual. The lights have never been on. No one has ever been home.

Can you conceive of this? Can you take seriously the possibility that this might, actually, be true?

In attempting this, the shift I’ve found most helpful is actively and deliberately moving from conceiving of subjective experience as a thing — a “space” or “experiential array” that you have some sort of “direct acquaintance” relationship with — to conceiving of it as the content of a story, as a way things are represented to be. Less like the canvas and paint of a painting, and more like what the painting is supposed to be of; less like a newspaper, and more like the news. And the news, as we all know, might be oversimplified, partly false, or entirely fake.

Suppose, for example, that after fixating your vision on a black, green, and yellow image of an American flag, you are left with an “after-image” of a red stripe when the flag stimulus is removed (this is a favorite example of Daniel Dennett’s). It’s tempting to think that there is something that has the property of phenomenal redness — that is, an appearance of a red stripe, where that appearance is itself red, in your “internal space” or “experiential array” — and that it is this something that you direct your attention to in noticing the after-image. On the view illusionists like Dennett and Frankish are encouraging, though, what’s happening is that according to the story your brain is telling, there is a stripe with a certain type of property. That’s the sense in which it seems to you like there’s a red stripe; that’s all that the appearance of the red stripe amounts to, and this does not require an actual red stripe made out of mental stuff, painted in mental paint (Dennett calls it “figment”) in your internal world.

Here’s Frankish’s (2019) more comprehensive version of this picture (it’s not the only version available, but my impression is that many illusionist accounts proceed on similar lines). Your brain engages in processes associated, for Frankish, with “access consciousness” — e.g., acquiring and synthesizing information about the environment, and then “broadcasting” that information to the brain as a whole, such that it can enter into further processes like reasoning and decision-making. Beyond this, though, it also uses introspective mechanisms to track the processes involved in access consciousness and represent them using a simplified model — a model which can then itself feed into other cognitive processes like decision-making, memory storage, and so on. Importantly, though, this simplified model involves representing some things (maybe mental states, maybe objects in the world) as having properties they don’t have — specifically, phenomenal properties. And it is this false representation that gives rise to problematic intuitions like the ones described above.
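To make the shape of this picture concrete, here is a minimal toy sketch in code. It is my own construction, not anything Frankish offers, and every name in it (ToyBrain, PerceptualState, and so on) is illustrative: the point is just the structure of the claim, in which the underlying states are purely functional, while the introspective model redescribes them as having a “phenomenal quality” that appears nowhere in the states themselves.

```python
# A toy sketch (my construction, not Frankish's model; all names are
# illustrative). The "real" states are purely functional; only the
# introspective summary attributes a "phenomenal quality" to them.

from dataclasses import dataclass

@dataclass
class PerceptualState:
    # What the visual system actually tracks: functional facts only.
    stimulus: str
    wavelength_nm: float

class ToyBrain:
    def __init__(self):
        # Globally "broadcast" states, available for reasoning, memory,
        # and report: access consciousness, on this picture.
        self.workspace = []

    def perceive(self, state):
        self.workspace.append(state)

    def introspect(self):
        # The simplified self-model: it *represents* each state as having
        # an intrinsic phenomenal quality, a property that appears nowhere
        # in PerceptualState itself. This is the alleged misrepresentation.
        return [{"object": s.stimulus,
                 "phenomenal_quality": "redness",
                 "intrinsic_and_ineffable": True}
                for s in self.workspace]

brain = ToyBrain()
brain.perceive(PerceptualState(stimulus="stripe", wavelength_nm=700.0))
print(brain.introspect())
# The report speaks of "phenomenal redness", but all that exists here are
# functional states and a (false) model of them.
```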

Frankish is openly hazy about exactly what it is to represent a property as phenomenal, and about the specific mechanisms via which such representations give rise to the problematic intuitions in question — this, he thinks, is a matter for further investigation. But the basic outlines of the eventual story are, for him, relatively clear, and the project of filling them out is much more promising, he thinks, than the project of trying to validate the intuitions in question, whether via physicalist or non-physicalist means.

Because this account is more of a promissory note than a developed theory, it doesn’t provide a ton of content to aid in constructing an illusionist model of how your mind works. Still, shifting to thinking of your subjective experience as the content of a story — not the Cartesian theatre, but the plot of the film — seems to me a basic and instructive first step.

(Note that we can make the same move, not just about phenomenally conscious experiences, but about the self that experiences them. The basic picture is: there is a physical machine controlled by a brain, which contains representations that purport to describe a self, a set of mental states, and an external world, all with various properties; according to these representations, the self is situated at the center of a unified internal arena or space in which sights, sounds, etc., with phenomenal properties appear and disappear. To the extent that we end up thinking of the properties of these mental states as illusory, we may end up thinking of the properties of the “self” that is represented as experiencing them as illusory as well.)

When I try to see the world like this, I find myself shifting from an implicit frame in which I am the “consumer” of my brain’s story about the world — a consumer who uses that story as a map — to one in which I am fully engrossed in that story, fully in the world that the story portrays. Shakespeare writes: “Think when we talk of horses, that you see them; printing their proud hoofs i’ the receiving earth.” On illusionism, I continue to take this advice, applied to qualia, very much to heart: “Think, when your brain talks of phenomenal redness, that you see it.” Oh, but I do. I look at my desktop background, and there is the phenomenal redness, shining vividly. And indeed it is, says illusionism, in the fictional world your brain is representing. Quite a fiction, I find myself thinking; very engrossing; feels so real; feels like I’m there.

Indeed, a part of me is tempted to say that this fictional world, in which things have phenomenal redness, is my world, and that I am more deeply identified with the “self” in this world than with the organism and brain that houses the mental states representing it. Perhaps, in this sense, “I” will end up as fictional/illusory as the phenomenal redness I take myself to be perceiving. I’m tempted towards this view in part because the fictional world is where, as it were, the phenomenal red lives; in this sense, the fictional self in the fictional world is right about its perceiving the (fictional) phenomenal red, though wrong to treat the fictional world as real. And being right about something that seems as obvious as the phenomenal red seems like a real benefit.

But a part of me pulls in the other direction. On this view, I’m the organism/brain, using a flawed map to navigate a real territory. Phenomenal properties, it turns out, are a flaw in the map — a particularly compelling and unusual flaw, but familiar in a broader sense, and not, perhaps, particularly harmful outside of philosophy seminars (though my best guess is actually that accepting illusionism would have very revisionary implications in a variety of domains, especially ethics). This, I think, is where most illusionists end up; and it seems the more sensible route.


This is a familiar dialectic in philosophical debates about whether some domain X can be reduced to Y (meta-ethics is a salient comparison to me). The anti-reductionist (A) will argue that our core intuitions/concepts/practices related to X make clear that it cannot be reduced to Y, and that since X must exist (as we intuitively think it does), we should expand our metaphysics to include more than Y. The reductionist (R) will argue that X can in fact be reduced to Y, and that this is compatible with our intuitions/concepts/everyday practices with respect to X, and hence that X exists but it’s nothing over and above Y. The nihilist (N), by contrast, agrees with A that it follows from our intuitions/concepts/practices related to X that it cannot be reduced to Y, but agrees with R that there is in fact nothing over and above Y, and so concludes that there is no X, and that our intuitions/concepts/practices related to X are correspondingly misguided. Here, the disagreement between A vs. R/N is about whether more than Y exists; the disagreement between R vs. A/N is about whether a world of only Y “counts” as a world with X. This latter disagreement often begins to seem a matter of terminology; the substantive questions have already been settled.

Is this a well-known phenomenon? I think I've observed this dynamic before and found it very frustrating. It seems like philosophers keep executing the following procedure:

  1. Take a sensible, but perhaps vague, everyday concept (e.g. consciousness, or free will), and give it a precise philosophical definition, but bake some dubious anti-reductionist assumptions into the definition.
  2. Discuss the concept in ways that conflate the everyday concept and the precise philosophical one. (Failing to make clear that the philosophical concept may or may not be the best formalization of the folk concept.)
  3. Realize that the anti-reductionist assumptions were false.
  4. Claim that the everyday concept is an illusion.
  5. Generate confusion (along with full employment for philosophers?).

If you'd just said that the precisely defined philosophical concept was a provisional formalization of the everyday concept in the first place, then you wouldn't have to claim that the everyday concept was an illusion once you realize that your formalization was wrong!

My sense is that the possibility of dynamics of this kind is on the radar of people in the philosophy community, at least.

Even after reading your post, I don't think I'm any closer to comprehending the illusionist view of reality. One of my best and most respected friends is an illusionist. I'd really like to understand his model of consciousness.

Illusionists often seem to be arguing against strawmen to me. (Notwithstanding the fact that some philosophers actually do argue for such "strawman" positions.) Dennett's argument against "mental paint" seems to be an example of this. Of course, I don't think there is something in my mental space with the property of redness. Of course "according to the story your brain is telling, there is a stripe with a certain type of property." I accept that the most likely explanation is that everything about consciousness is the result of computational processes (in the broadest sense that the brain is some kind of neural net doing computation, not in the sense that it is anything actually like the von Neumann architecture computer that I am using to write this comment). For me, that in no way removes the hard problem of consciousness; it only sharpens it.

Let me attempt to explain why I am unable to understand what the strong illusionist position is even saying. Right now, I'm looking at the blue sky outside my window. As I fix my eyes on a specific point in the sky and focus my attention on the color, I have an experience of "blueness." The sky itself doesn't have the property of phenomenological blueness. It has properties that cause certain wavelengths of light to scatter and other wavelengths to pass through. Certain wavelengths of light are reaching my eyes. That is causing receptors in my eyes to activate which in turn causes a cascade of neurons to fire across my brain. My brain is doing computation which I have no mental access to and computing that I am currently seeing blue. There is nothing in my brain that has the property of "blue". The closest thing is something analogous to how a certain pattern of bits in a computer has the "property" of being ASCII for "A".

Yet I experience that computation as the qualia of "blueness." How can that be? How can any computation of any kind create, or lead to qualia of any kind? You can say that it is just a story my brain is telling me that "I am seeing blue." I must not understand what is being claimed, because I agree with it and yet it doesn't remove the problem at all. Why does that story have any phenomenology to it?

I can make no sense of the claim that it is an illusion. If the claim is just that there is nothing involved but computation, I agree. But the claim seems to be that there are no qualia, there is no phenomenology. That my belief in them is like an optical illusion or misremembering something. I may be very confused about all the processes that lead to my experiencing the blue qualia. I may be mistaken about the content and nature of my phenomenological world. None of that in any way removes the fact that I have qualia.

Let me try to sharpen my point by comparing it to other mental computation. I just recalled my mother's name. I have no mental access to the computation that "looks up" my mother's name. Instead, I go from seemingly not having ready access to the name to having it. There are no qualia associated with this. If I "say the name in my head", I can produce an "echo" of the qualia. But I don't have to do this. I can simply know what her name is and know that I know it. That seems to be consistent with the model of me as a computation: if I were a computation and retrieved some fact from memory, I wouldn't have direct access to the process by which it was retrieved from memory, but I would suddenly have the information in "cache." Why isn't all thought and experience like that? I can imagine an existence where I knew I was currently receiving input from my eyes that were looking at the sky and perceiving a shade which we call blue without there being any qualia.
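(A minimal sketch of this analogy in code; my construction, with illustrative names. The caller gets the retrieved fact, but has no access to how the lookup ran.)

```python
# Sketch of the analogy (hypothetical names): the rest of the system sees
# only the returned value, never the steps of the retrieval process.

import functools

@functools.lru_cache(maxsize=None)
def recall(key):
    # Opaque retrieval: nothing outside this function observes these steps.
    long_term_memory = {"mother's name": "(name elided)"}
    return long_term_memory.get(key)

name = recall("mother's name")  # the answer simply "appears in cache"
print(name)
```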

For me, the hard problem of consciousness is exactly the question, "How can a physical/computational process give rise to qualia or even the 'illusion' of qualia?" If you tell me that life is not a vital force but is instead very complex tiny machines which you cannot yet explain to me, I can accept that because, upon close examination, those are not different kinds of things. They are both material objects obeying physical laws. When you say qualia are instead complex computations that you cannot yet explain to me, I can't quite accept that, because even on close examination, computation and qualia seem to be fundamentally different kinds of things, and there seems to be an uncrossable chasm between them.

I sometimes worry that there are genuine differences in people's phenomenological experiences which are causing us to be unable to comprehend what others are talking about. Similar to how it was discovered that certain people don't actually have inner monologues or how some people think in words while others think only in pictures.

Thanks for explaining where you're coming from. 

Yet I experience that computation as the qualia of "blueness." How can that be? How can any computation of any kind create, or lead to qualia of any kind? You can say that it is just a story my brain is telling me that "I am seeing blue." I must not understand what is being claimed, because I agree with it and yet it doesn't remove the problem at all. Why does that story have any phenomenology to it? I can make no sense of the claim that it is an illusion.

As I understand it, the idea would be that, as weird as it may sound, there isn't any phenomenology to it. Rather: according to the story that your brain is telling, there is some phenomenology to it. But there isn't. That is, your brain's story doesn't create, lead to, or correlate with phenomenal blueness; rather, phenomenal blueness is something that the story describes, but which doesn't exist, in the same way that a story can describe unicorns without bringing them to life. 

according to the story that your brain is telling, there is some phenomenology to it. But there isn't.

Doesn't this assume that we know what sort of thing phenomenological consciousness (qualia) is supposed to be so that we can assert that the story the brain is telling us about qualia somehow fails to measure up to this independent standard of qualia-reality?

The trouble I have with this is that there is no such independent standard for what phenomenal blueness has to be in order to count as genuinely phenomenal. The only standard we have for identifying something as an instance of the kind qualia is to point to something occurring in our experience. Given this, it remains difficult to understand how the story the brain tells about qualia could fail to be the truth, and nothing but the truth, about qualia (given the physicalist assumption that all our experience can be exhaustively explained through the brain's activity).

I see blue, and pointing to the experience of this seeing is the only way of indicating what I mean when I say "there is a blue qualia". So to echo J_Thomas_Moros, any story the brain is telling that constitutes my experience of blueness would simply be the qualia itself (not an illusion of one).

(tl;dr: I think a lot of this is about one-way (read-only) vs. two-way communication)

As a long-term meditator and someone who takes contents of phenomenal consciousness as quite "real" in their own way, I enjoyed this post -- it helped me clarify some of my disagreements with these ideas, and to just feel out this conceptual-argumentative landscape.

I want to draw out something about "access consciousness" that you didn't mention explicitly, but that I see latent in both your account (correct me if I'm wrong) and the SEP's discussion of it (ctrl-F for "access consciousness"). Which is: an assumed one-way flow of information. Like, an element of access consciousness carries information, which is made available to the rest of the system; but there isn't necessarily any flow back to that element. 

I believe to the contrary (personal speculation) that all channels in the mind are essentially two-way. For example, say we're walking around at night, and we see a patch of grey against the black of the darkness ahead. That information is indeed made available to the rest of the system, and we ask ourselves: "could it be a wild animal?". But where does that question go? I would say it's addressed to the bit of consciousness that carried the patch of grey. This starts a process of the question percolating down the visual processing hierarchy till it reaches a point where it can be answered -- "no, see that curve there, it's just the moonlight catching a branch". (In reality the question might kick off lots of other processes too, which I'm ignoring here.)

Anyway, the point is that there is a natural back and forth between higher-level consciousness, which deals in summaries and can relate disparate considerations, and lower-level e.g. sensory consciousness, which deals more in details. And I think this back-and-forth doesn't fit well in the "access consciousness" picture.

More generally, in terms of architectural design for a mind, we want whatever process carries a piece of information to also be able to act as a locus of processing for that information. In the same way, if a CEO is getting briefed on some complex issue by a topic expert, it's much more efficient if the CEO can ask questions, propose plans and get feedback, and keep the expert as a go-to person for that issue, rather than just hear a report.

I think "acting as an addressable locus of processing" accounts for at least a lot of the nature of "phenomenal consciousness" as opposed to "access consciousness".

Interesting post, thanks!!

Frankish is openly hazy about exactly what it is to represent a property as phenomenal, and about the specific mechanisms via which such representations give rise to the problematic intuitions in question — this, he thinks, is a matter for further investigation.

I think Graziano's recent book picks up where Frankish left off. See my blog post:

https://www.lesswrong.com/posts/biKchmLrkatdBbiH8/book-review-rethinking-consciousness

I feel like I now have in my head a more-or-less complete account of the algorithmic chain of events in the brain that leads to a person declaring that they are conscious, and then writing essays about phenomenal consciousness. Didn't help! I find consciousness as weird and unintuitive as ever. But your post is as helpful as anything else I've read. I'll have to keep thinking about it :-D

Phenomenal properties, it turns out, are a flaw in the map

Map vs territory is a helpful framing I think. When we perceive a rock, we are open to the possibility that our perceptions are not reflective of the territory, for example maybe we're hallucinating. When we "perceive" that we are conscious, we don't intuitively have the same open-mindedness; we feel like it has to be in the territory. So yeah, how do we know we're conscious, if not by some kind of perception, and if it's some kind of perception, why can't it be inaccurate as a description of the territory, just like all other perceptions can? (I'm not sure if this framing is deep or if I'm just playing tricks with the term "perception".) Then the question would be: must ethics always be about territories, or can it be about maps sometimes? Hmm, I dunno.

Glad you found it helpful (or at least, as helpful as other work on the topic). So far in my engagement with Graziano (specifically, non-careful reads of his 2013 book and his 2019 “Toward a standard model of consciousness”), I don’t feel like I’ve taken away much more than the summary I gave above of Frankish’s view: namely, “introspective mechanisms ... track the processes involved in access consciousness and represent them using a simplified model” — something pretty similar to what Chalmers also says here on p. 34. I know Graziano focuses on attention in particular, and he talks more about e.g. sociality and cites some empirical work, but at a shallow glance I’m not sure I yet see really substantive and empirically grounded increases in specificity, beyond what seems like the general line amongst a variety of folks that “there’s some kind of global workspace-y thing, there’s some kind of modeling of that, this modeling involves simplifications/distortions/opacity of various kinds, these somehow explain whatever problem intuitions/reports need explaining." But I haven’t tried to look at Graziano closely. The “naive” vs. “sophisticated” descriptions in your blog post seem like a helpful way to frame his project. 

No one ever thought that phenomenal zombies lacked introspective access to their own mental states

I'm surprised by this. I thought p-zombies were thought not to have mental states.

I thought the idea was that they replicated human input-output behavior while having "no one home". Which sounds to me like not having mental states.

If they actually have mental states, then what separates them from the rest of us?

I thought the idea was that they replicated human input-output behavior while having "no one home".

No, the idea is that p-zombies are perfect atom-by-atom copies of humans, except with "no one home". From https://plato.stanford.edu/entries/zombies/:

Zombies [...] are exactly like us in all physical respects but without conscious experiences[.]

Which (from my perspective) is a much stranger thing to imagine than just "a thing that somehow produces human input-output behavior via an algorithm that lacks our inner awareness". A p-zombie is implementing the same algorithm as us, but minus "how that algorithm feels from the inside".

(Or more precisely, minus phenomenal consciousness. The p-zombie may still have "a way it feels inside" in the sense that the p-zombie can internally track things about its brain and produce highly sophisticated verbal reports about its "experiences". But by hypothesis, this p-zombie access consciousness isn't accompanied by "the lights actually being on inside".)

(You might say that you can't have mental states without being phenomenally conscious. But hopefully it's at least clearer why others would find it natural to talk about p-zombies' mental states, since they're implementing all the same cognitive algorithms as a human, not just reproducing the same functional behavior.)

Hmm, maybe it's worth distinguishing two things that "mental states" might mean:

  1. intermediate states in the process of executing some cognitive algorithm, which have some data associated with them
  2. phenomenological states of conscious experience

I guess you could believe that a p-zombie could have #1, but not #2.

I meant mental states in something more like the #1 sense -- and so, I think, does Frankish.

TAG:

A p-zombie is implementing the same algorithm as us, but minus “how that algorithm feels from the inside”.

There's no known reason why any algorithm should feel like anything from the inside. Non-zombiehood needs explaining.

This is a good point. But on the other hand, we can be very confident that there are algorithms that exhibit behavior that we would explain, in ourselves, as a consequence of feeling things, and there are "parallel explanations" of the algorithm's behavior and the feelings-based explanations we would normally tell about ourselves.

(It's more of an open question whether we actually have any of these algorithms running on computers right now. If we're allowed to cherry-pick examples in narrow domains, then there are plausibly some, like "this neural network is seeing a dog," or "this robot is surprised.")

Another hint at this correspondence is that we can make models of humans themselves as if their feelings are due to the mechanistic behavior of neurons, make predictions and plans using that model, and then try them out, and as far as we can tell the model makes successful predictions about what I will feel.

Ultimately I think this comes down to questions like "am I absolutely committed to Cartesian dualism, or is there 'merely worldly' evidence that could convince me that I am a part of the world rather than a soul merely communicating with it? What would that evidence look like?"

TAG:

This is a good point. But on the other hand, we can be very confident that there are algorithms that exhibit behavior that we would explain, in ourselves, as a consequence of feeling things, and there are “parallel explanations” of the algorithm’s behavior and the feelings-based explanations we would normally tell about ourselves

And they can conceivably do all that without feelings. The flip side of not being able to explain why an algorithm should feel like anything on the inside is that zombies are conceivable.

Another hint at this correspondence is that we can make models of humans themselves as if their feelings are due to the mechanistic behavior of neurons, make predictions and plans using that model, and then try them out, and as far as we can tell the model makes successful predictions about what I will feel

Models in which mental states figure also make successful predictions ... you can predict ouches from pains. The physical map is not uniquely predictive.

am I absolutely committed to Cartesian dualism,

Cartesian dualism is not the only alternative to physicalism.

And they can conceivably do all that without feelings. 

Sure, if we mean "conceivable" in the same way that "561 is prime" and "557 is prime" are both conceivable. That is, conceivable in a way that allows for internal contradictions, so long as we haven't figured out where the internal contradictions are yet.

"am I absolutely committed to Cartesian dualism,"

Cartesian dualism is not the only alternative to physicalism.

True, but it's a very convenient central example of a priori dualism, which has no space in its framework for any evidence (either from sensations of the external world or phenomena in general) that it's actually being implemented on a physical substrate.

TAG:

That is, conceivable in a way that allows for internal contradictions, so long as we haven’t figured out where the internal contradictions are yet.

You seem to be saying that an algorithm is necessarily conscious, only we don't know how or why, so there is no contradiction for us, no internal contradiction, in imagining an unconscious algorithm.

That's quite a strange thing to say. How do we know that consciousness is necessitated when we don't understand it? Is it necessitated by all algorithms that report consciousness? Do we know that it depends solely on the abstract algorithm and not the substrate?

"Dualism wrong" contains little information, and therefore tells you little about th features of non-dualism

Hm, no, I don't think you got what I meant.

One thing I am saying is that I think there's a very strong parallel between not knowing how one could show if a computer program is conscious, and not having any idea how one could change their mind about dualism in response to evidence.

TAG:

True, but it’s a very convenient central example of a priori dualism, which has no space in its framework for any evidence (either from sensations of the external world or phenomena in general) that it’s actually being implemented on a physical substrate.

You seem to be using "a priori" to mean something like "dogmatic and incapable of being updated". But apriori doesn't mean that, and contemporary dualists are capable of saying what they need to change their minds: a reductive explanation of consciousness.

Them merely saying they'll be convinced by a "reductive explanation" is too circular for my tastes. It's like me saying "You could convince me the moon was made of green cheese if you gave me a convincing argument for it." It's not false, but it doesn't actually make any advance commitments about what such an argument might look like.

If someone says they're open to being persuaded "in principle," but has absolutely no idea what evidence could sway them, then my bet is that any such persuasion will have nothing to do with science, little to do with logic, and a lot to do with psychology.

TAG:

That's not an apt analogy, because reductive explanations have an agreed set of features.

It's odd to portray reductive explanation as this uselessly mysterious thing, when it is the basis of reductionism, which is an obligatory belief around here.

I'm not sure if we're using "reductive explanation" the same way then, because if we associate it with the closest thing I think is agreed upon around here, I don't feel like dualists would agree that such a thing truly works.

What I'm thinking of is explanation based on a correspondence between two different models of our experience. Example: I can explain heat by the motion of atoms by showing that atomic theory predicts very similar phenomena to the intuitive model that led to me giving "heat" a special property-label. This is considered progress because atomic theory also makes a lot of other good predictions, without much complexity.

These models include bridging laws (e.g. when the atoms in the nerves in your skin move fast, you feel heat). Equivalently, they can be thought of as purely models of our phenomena that merely happen to include the physical world to the extent that it's useful. This is "common sense" on LW because of how much we like Solomonoff induction, but isn't necessarily common sense among materialist scientists, let alone dualists.
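(Here is a toy sketch of what I mean by a bridging law; my own construction, with illustrative names and thresholds. A simple rule maps a physical description to a predicted phenomenon, and its simplicity is what licenses the expectation that it generalizes.)

```python
# Toy sketch (my construction) of a "bridging law": a simple rule mapping
# physical descriptions (mean kinetic energy of skin-adjacent particles)
# to predicted phenomena ("feels hot" / "feels cold").

def mean_kinetic_energy(speeds, mass=1.0):
    return sum(0.5 * mass * v**2 for v in speeds) / len(speeds)

def bridge(physical_state):
    # The inferred bridging law: kept as simple as possible, so that we
    # expect it to generalize beyond the cases it was fit to.
    e = mean_kinetic_energy(physical_state)
    return "feels hot" if e > 1.0 else "feels cold"

print(bridge([2.0, 1.8, 2.2]))  # fast particles -> predicted hot phenomena
print(bridge([0.3, 0.2, 0.4]))  # slow particles -> predicted cold phenomena
```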

These inferred bridging laws can do pretty neat things. Even though they at first would seem to only work for you (there being no need to model the phenomena of other minds if we're already modeling the atoms), we can still ask what phenomena "you" would experience if "you" were someone else, or even if you were a bat. At first it might seem like the bridging laws should be specific to the exact configuration of your brain and give total nonsense if applied to someone else, but if they're truly as simple as possible then we would expect them to generalize for the same sorts of reasons we expect other simple patterns in our observations to generalize.

Anyhow, that's what I would think of as a reductive explanation of consciousness - rules that parsimoniously explain our experiences by reference to a physical world. But there's a very resonant sense in which it will feel like there's still an open question of why those bridging laws, and maybe that we haven't shown that the experiences are truly identical to the physical patterns rather than merely being associated with them. (Note that all this applies equally well to our explanation of heat.)

"Look," says the imaginary dualist, "you have a simple explanation of the world here, but you've actually shown that dualism is right! You have one part of this explanation that involves the world, and another part that involves the experiences. But nowhere does this big model of our experiences say that the experiences are made of the same stuff as the world. You haven't really explained how consciousness arises from patterns of matter, you've just categorized what patterns of matter we expect to see when we're in various conscious states."

Now, if the dualists were hardcore committed to Occam's razor, maybe they would come around. But somehow I don't associate dualists with the phrase "hardcore committed to Occam's razor." The central issue is that a mere simple model isn't always a good explanation by human standards - it doesn't actually put in the explanatory work necessary to break the problem into human-understandable pieces or resolve our confusions. It's just probably right. A classic example is "Why do mirrors flip left and right but not up and down?" Maxwell's equations are a terrible explanation of this.

TAG:

If the bridging laws, which explain how and why mental states arise from physical states, are left unspecified, then the complexity of the explanation cannot be assessed, so Occam's razor doesn't kick in. To put it another way, Occam's razor applies to explanations, so you need to get over the bar of being merely explanatory.

What you call being hardcore about Occam's razor seems to mean believing in the simplest possible ((something)), where ((something)) doesn't have to be an explanation.

A classic example is “Why do mirrors flip left and right but not up and down?” Maxwell’s equations are a terrible explanation of this.

Maxwell's equations are a bad intuitive explanation of reflection flipping, but you can't deny that the intuitive explanation is implicit in Maxwell's equations, because the alternative is that it is a physics-defying miracle.

The central issue is that a mere simple model isn’t always a good explanation by human standards—it doesn’t actually put in the explanatory work necessary to break the problem into human-understandable pieces or resolve our confusions.

What's the equivalent of Maxwell's equations in the mind body problem?

These inferred bridging laws can do pretty neat things. Even though they at first would seem to only work for you (there being no need to model the phenomena of other minds if we’re already modeling the atoms), we can still ask what phenomena “you” would experience if “you” were someone else, or even if you were a bat

We can ask, but as far as I know there is no answer. I have never heard of a set of laws that allow novel subjective experience to be predicted from brain states. But are your "inferred" and "would" meant to imply that they don't?

Do you remember that conversation we had (I think maybe Carl Shulman was also present? IDK) a few years ago about roughly this topic? At the lodge? Key words: Solomonoff induction, solipsistic phenomenal idealism.

I think the bold claim I'd make now is that anyone who isn't a realist about qualia doesn't have a viable epistemology yet; all our standard epistemological theories (bayesianism, solomonoff induction, etc.) imply realism about qualia.

Perhaps, though, this just means we need new epistemological theories. But I'd want to see independent evidence for this, because the standard arguments against qualia realism are bogus.

(Also it's been a year since I thought about this at all, and years since I seriously thought about it, so... if someone comments with a compelling objection I won't be too surprised. And IIRC there were some arguments we discussed in that conversation that were making me unhappy with qualia realism, making me wish for new epistemological theories instead.)

I do remember that conversation, though I'm a bit hazy on the details of the argument you presented. Let me know if there's a write-up/summary somewhere, or if you create one in future. 

I think Frankish is squirming a bit here, and that he should bite the relevant bullets more forthrightly (though to his credit, he’s still reasonably up front). No one ever thought that phenomenal zombies lacked introspective access to their own mental states, since they were by hypothesis functionally identical to humans; and the central function of “what it’s like” talk in the discourse about consciousness has been to point to/characterize phenomenal consciousness.

As an illusionist, I endorse biting this bullet. I think I just am a p-zombie.

I also endorse the rest of your post!

Because this account is more of a promissory note than a developed theory, it doesn’t provide a ton of content to aid in constructing an illusionist model of how your mind works.

Notably, the dualist sort of agrees that a story like this must be possible, since they think it's possible to fully reductively explain p-zombies, who dualists agree do have delusive beliefs and perceptions exactly like those. (Or p-beliefs and p-perceptions, if you prefer.)

An important question here, perhaps, is whether the process of fully reductively explaining the p-zombie would help make illusionism feel less mysterious or counter-intuitive. (I have to imagine it would, even if there would always be an element of mind-bending oddness to the claim.)

I’m hopeful that if we actually had a worked out reductionist account of all the problematic intuitions, which we knew was right and which made illusionism true, then this would be at least somewhat helpful in making illusionism less mysterious. In particular, I’m hopeful that thoroughly and dutifully reconceptualizing our introspection and intuitions according to that theory — “when it seems to me like X, what’s going on is [insert actual gears level explanation, not just ‘neurons are firing’ or ‘my brain is representing its internal processing in a simplified and false way’]” — would make a difference.

People have various intuitions about phenomenal consciousness

People say that, but are there actual studies of expressed intuitions about consciousness?

Actually looking at the world the way someone you disagree with looks at it is often much more difficult than being able to pass their “intellectual Turing test”

A somewhat relevant reference: [...] while a human might be able to imagine what it is like to be a bat by taking "the bat's point of view", it would still be impossible "to know what it is like for a bat to be a bat."

I lost the plot after "Can you conceive of this? Can you take seriously the possibility that this might, actually, be true?" What followed was too lengthy, and did not seem any different from the mainstream view that all our experiences are just some computations in the brain.

An interesting observation: I can easily conceive of philosophical zombies and strong illusionism; what I have trouble conceiving is our human condition where I actually feel stuff. So what I am unable to conceive is exactly the situation I live in. I think this is true for others, too, and that's why we call it the hard problem of consciousness; if it were conceivable, there would be no mystery.

Regarding 'no mystery', that's exactly my experience: I am able to conceive of the situation I live in, and am fine with the human condition in which I feel stuff and experience things, even though I can fundamentally be reduced to a data-processing algorithm (and quite possibly could be run deterministically without ever noticing the difference).

Once I was able to do that, the 'hard problem' dissolved. There isn't a mystery of consciousness for me anymore; merely a mystery of "what are the details involved in constructing entities that exhibit this property".

Thank you for a fantastically lucid exposition of this tricky terrain! I myself am a strong illusionist, and I really appreciated your analogy of phenomenal consciousness being more like the plot of a story than the film on the screen (the Cartesian theatre). This feels like one of Dennett's 'strange inversions of reasoning' - a critical inversion, if strong illusionism is the right way forward - like the Necker cube: a perspective-shift out of the thicket of conceptual baggage with which our introspective, self-reflective machinery is infected (an ecosystem of believed-to-be-true fictitious memes). With this in mind, I deeply acknowledge your 'pull in the other direction'!
One thing I'd like to add relates to your comment about Frankish's 'promissory note'. We may be decades or centuries away from a 'full neuroscientific account' of the functionality of the brain vis-a-vis consciousness. Nonetheless I find it incredibly useful to bear in mind that, as Dennett points out, this vast symphony of neural signalling, like so many murmurations of starlings, consists of REAL physical events. Every conscious moment experienced by every conscious being in the history of the world, every moment of transcendence, every epiphany of realisation, every moment of felt experience, had a one-to-one neural correlate made of matter and information in the brains of those conscious beings. In short, figment is real only and precisely in this sense.
So while the strong illusionist may at present only have a promissory note to offer in lieu of all the exquisite details of the electro-chemical-informational complexity, I think this is precisely the best way out of the ill-construed 'hard problem' hall of mirrors. I am with Dennett, Frankish, Metzinger, Graziano, Churchland and others in this regard.
 

Another strong illusionist here.  Perhaps similar to you, I ended up here because I kept running into the problem of 'everything is made of matter'.

Because ultimately, down at the floor, it's all just particles and forces and extremely well understood probabilities.  There's no fundamental primitive for 'consciousness' or 'experience', any more than there's a fundamental primitive for 'green' or 'traffic' or 'hatred'.  Those particles and forces down at the floor are the territory; everything else is a label.

As I was going through the process to get here, I kept running into parts of me with the internal objection of "But it feels real!  I feel like I'm conscious!  I have conscious experience!"  The "what does it feel like to be wrong" post was of significant help here; it makes blatantly clear that feelings and intuition don't necessarily map the territory accurately, or even at all (in the case of being wrong and not knowing it.)

So the final picture was 1) hard materialism all the way down to fundamental physics, and 2) explicit demonstration that my feelings didn't necessarily map to reality, no matter how real they felt.  There was only one conclusion to make from there, though it did take me a few months of mulling over the problem for the majority of my cognitive apparatus to agree.

There's no fundamental primitive for 'consciousness'

I'm not sure if this is the case, but I'm worried that people subscribe to illusionism because they only compare it to the weakest possible alternative, which (I would say) is consciousness being an emergent phenomenon. If you just assume that there's no primitive for consciousness, I would agree that the argument for illusionism is extremely strong since [unconscious matter spontaneously spawning consciousness] is extremely implausible.

However, you can also just dispute the claim and assume consciousness is a primitive, which gets around the hard problem. That leaves the question 'why is consciousness a primitive', which doesn't seem particularly more mysterious than 'why is matter a primitive'.

I am extremely confused by your answer.

You seem to be saying that illusionism is viable because people compare it to "consciousness being an emergent phenomenon", which you consider to be an alternative.  Further, you explicitly state that "the argument for illusionism is extremely strong since [unconscious matter spontaneously spawning consciousness] is extremely implausible".

This seems problematic to me because:

  1. If there's no fundamental primitive for consciousness, then pretty much by definition whatever it is that we slap the label "consciousness" on must be emergent behavior.  In other words, what we call consciousness being emergent behavior is a direct and immediate consequence of reductionist beliefs, which IMO also produce strong illusionism.
  2. You've stated that "[unconscious matter spontaneously spawning consciousness] is extremely implausible" without any evidence or rationale.  I disagree.  In my view the likelihood of "unconscious matter spontaneously spawning consciousness" approaches 1 as system complexity increases, which pretty much matches the observations of intelligence in species we see in the real world.

Apologies, I communicated poorly. IME, discussions about consciousness are particularly prone to misunderstandings. Let me rephrase my comment.

  1. Many (most?) people believe that consciousness is an emergent phenomenon but also a real thing.
  2. My assumption from reading your first comment was that you believe #1 is close to impossible. I agree with that.
  3. I took your first comment (in particular this paragraph)...

Because ultimately, down at the floor, it's all just particles and forces and extremely well understood probabilities. There's no fundamental primitive for 'consciousness' or 'experience', any more than there's a fundamental primitive for 'green' or 'traffic' or 'hatred'. Those particles and forces down at the floor are the territory; everything else is a label.

... as saying that #2 implies illusionism must be true. I'm saying this is not the case because you can instead stipulate that consciousness is a primitive. If every particle is conscious, you don't have the problem of getting real consciousness out of nothing. (You do have the problem of why your experience appears unified, but that seems much less impossible.)

Or to say the same thing differently, my impression/worry is that people accept that 'consciousness isn't real' primarily because they think the only alternative is 'consciousness is real and emerges from unconscious matter', when in fact you can have a coherent world view that disputes both claims.

Note:  beware the definition of 'real'.  There is a reason I've been using the structure "as real as X".  We should probably taboo it.

There are still some communication issues here, I think. All following comments reply explicitly and only to the immediate parent comment. Regarding your #1 in the immediate parent:

I believe that consciousness is an emergent phenomenon, like many other emergent phenomena we are familiar with, and is exactly as "real" as those other emergent phenomena.  Examples of emergent phenomena in the same class as "consciousness" include rocks, trees, hatred, weather patterns.  This is how I will be using 'real' for the remainder of this comment.

Regarding #2, I can't parse it with sufficient confidence to respond.

Regarding #3, I'm saying that the lack of a 'consciousness' primitive means that whatever we label 'consciousness' must be emergent from the primitives we do have.

Regarding consciousness actually being a primitive:  sure, that's an option, but it's approximately as likely to be a primitive as 'treeness', 'hatred', and 'plastic surgery'.  The fact of the matter is that we have a lot of evidence regarding what the primitives are, what they can be, and what they can do, and none of it involves 'consciousness' any more than it involves 'plastic surgery'; the priors for either of these being a primitive are both incredibly small, and approximately equally likely.

Regarding the last paragraph in the immediate parent, the easiest coherent world view which disputes both claims is to say that "consciousness is real and does not emerge from unconscious matter".  However, that world view suffers a complexity penalty against either of the other two world views:  "consciousness isn't real" requires only primitives; "consciousness is real and emerges from unconscious matter" requires primitives and the possibility of emergent behavior from those primitives; while "consciousness is real and does not emerge from unconscious matter" flat out requires an additional consciousness primitive. That is an extremely nontrivial addition.

And lastly, there's the evidentiary burden.  The fact of the matter is that observation over the past century or so has given us extremely strong evidence for a small set of very simple primitives, which have the ability to generate extremely complex emergent behavior.  "Consciousness isn't real" is compatible with observation; "consciousness is real and emerges from unconscious matter" is compatible with observation; "consciousness is real and does not emerge from unconscious matter" requires additional laws of physics to be true.

Comparing consciousness to plastic surgery seems to me to be a false analogy. If you have your model of particles bouncing around, then plastic surgery is a label you can put on a particular class of sequences of particles doing things. If you didn't have the name, there wouldn't be anything to explain; the particles would still do the same things. Consciousness/subjective experience describes something that is fundamentally non-material. It may or may not be caused by particles doing things, but it's not itself made of particles.

If your response to this is that there is no such thing as subjective experience -- which is what I thought your position was, and what I understand strong illusionism to be -- then this is exactly what I mean when I say consciousness isn't real. By 'consciousness', I'm exclusively referring to the qualitatively different thing called subjective experience. This thing either exists or doesn't exist. I'm not talking about the process that makes people move their fingers to type things about consciousness.

I apologize for not tabooing 'real', but I don't have a model of how 'is consciousness real' can be anything but a well-defined question whose answer is either 'yes' or 'no'. The 'as real as X' framing doesn't make any sense to me. It seems like trying to apply a spectrum to a binary question.

Consciousness/subjective experience describes something that is fundamentally non-material.

More non-material than "love" or "three"?

It makes sense to me to think of "three" as being "real" in some sense independently from the existence of any collection of three physical objects, and in that sense having a non-material existence. (And maybe you could say the same thing for abstract concepts like "love".)

And also, three-ness is a pattern that collections of physical things might correspond to.

Do you think of consciousness as being non-material in a similar way? (Where the concept is not fundamentally a material thing, but you can identify it with collections of particles.)

"Spontaneously" is your problem. It's like creationists saying monkeys don't spontaneously turn into humans. I don't know if consciousness is real and if it reduces to known matter or not, but I do know that human intuition is very anti-reductionist; Anything it doesn't understand, it likes to treat as an atomic blackbox.

That's fair. However, if you share the intuition that consciousness being emergent is extremely implausible, then going from there directly to illusionism means comparing it only to the (for you) weakest alternative. And that seems like the relevant step for people in this thread other than you.

I don't at all share that intuition.  My intuition is that consciousness being emergent is both extremely plausible, and increasingly likely as system complexity increases.  This intuition also makes consciousness approximately as "real" as "puppies" and "social media".  All three are emergent phenomena, arising from a very small and basic set of primitives.

If you just assume that there's no primitive for consciousness, I would agree that the argument for illusionism is extremely strong since [unconscious matter spontaneously spawning consciousness] is extremely implausible.

How is this implausible at all? All kinds of totally real phenomena are emergent. There's no primitive for temperature, yet it emerges out of the motions of many particles. There's no primitive for wheel, but round things that roll still exist.
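To make the temperature example concrete, here's a minimal sketch in Python (assuming NumPy and the textbook monatomic ideal-gas relation <KE> = (3/2) k_B T; the numbers are illustrative): "temperature" appears only as a statistic over many particle velocities, and no single particle has one.

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K
m = 6.6335e-27       # mass of one helium atom, kg
rng = np.random.default_rng(0)

# Sample per-component Maxwell-Boltzmann velocities for a gas at 300 K.
T_true = 300.0
sigma = np.sqrt(k_B * T_true / m)              # std dev of each velocity component
v = rng.normal(0.0, sigma, size=(1_000_000, 3))

# Temperature is recovered purely as an ensemble statistic:
# <KE> = (3/2) k_B T  =>  T = 2 <KE> / (3 k_B).
mean_ke = 0.5 * m * (v ** 2).sum(axis=1).mean()
print(mean_ke * 2 / (3 * k_B))                 # ~300 K, an emergent number
```

Each row of v is just one particle's velocity; remove the averaging and there is nothing left to call "temperature".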

Maybe I've misunderstood your point though?

feelings didn’t necessarily map to reality, no matter how real they felt

But they do map to reality, just not perfectly. "I see a red stripe" approximately maps to some brain activity. Sure, the feeling that these are different things may be wrong, but "illusionism about everything except physicalism" is just restating physicalism without any additional argument. So which feelings are you illusionistic about?

All of them.

Some do happen to map (partially) to reality, but the key here is that there is no obligation for them to do so, and there's nothing which guarantees that to be true.

In short, what I believe may or may not map to reality at all.  Everything we "feel" is a side effect, an emergent behavior of the particles and forces at the bottom.  It's entirely an illusion.

That doesn't mean there isn't something going on, and it doesn't mean that feelings don't exist, any more than this means bullets or trees don't exist.  But they're no more a primitive of the universe than "bullet" or "tree" is:  bullets and trees are loose collections of particles and forces we've decided to slap labels on; feelings, opinions, and ideas are more conceptual, but still end up being represented and generated by particles and forces.  They're no more real than trees or bullets are, though the labels are useful to us.

If you don't have philosophical issues with trees, you shouldn't have them with consciousness.

I appreciate the difference between absolute certainty and allowing the possibility of error, but as a matter of terminology, "illusion" is usually used to refer to things that are wrong, not merely things that may be wrong. Words don't matter that much, of course, but I'm still interested in which intuitions about consciousness you consider probably not to correspond to reality at all. For example, what do you do with the intuition underlying the zombie argument:

  1. Would you say the statement "we live in a non-zombie world" is true?
  2. Or is the entire setup contradictory, because consciousness is a label for some arbitrary structure/algorithm and it was specified that the structures match in both worlds?
  3. Or do you completely throw away the intuition about consciousness as not useful?

From what you said I guess it's 2 (which, by the way, implies that whether you / you-from-yesterday / LUT-you / dust-theoretic copies of you / dogs feel pain is a matter of preferences), so the next question is: what evidence is there for the conclusion that the intuition about consciousness can't map to anything other than an algorithm in the brain? It can't map to something magical, but what if there is some part of reality that this intuition corresponds to?

  1. When I first heard about p-zombies 10+ years ago, I thought the idea was stupid.  I still think the idea is stupid.  Depending on how you define the words, we could all be p-zombies; or we could all not be p-zombies.  Regardless of how we define the words though, we're large collections of particles and forces operating on very simple rules, and when you look at it from that standpoint, the question dissolves.
  2. Basically yes:  the p-zombie thought experiment is broken because it hinges on label definitions and ignores the fact that we have observation and evidence we can fall back on for a much, much more accurate picture (which isn't well represented by any single word in our language.)
  3. Intuition about consciousness is useful in the same way that intuition about quantum mechanics and general relativity is useful:  for most people, basically not at all, or only in very limited regimes.  Keep in mind that human intuition is no more complicated than a trained neural net trying to make a prediction about something (see the toy sketch after this list).  It can be close to right, it can be mostly wrong, it can be entirely wrong.  Most people have good intuition/prediction about whether the sun will rise tomorrow; most people have bad intuition/prediction about how two charged particles in a square potential well will behave.  And IMO, most people have a mistaken intuition/prediction that consciousness is somehow real and supernatural and beyond what physics can tell us.  Those people would be in the 'wrong' bucket.
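As a toy sketch of the "trained predictor" point (a hypothetical example, assuming NumPy and scikit-learn): a small net fit in one regime predicts well there and can be badly wrong outside it, and nothing in its output flags the difference.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Train only on the regime x in [0, 2*pi] -- the net's "lived experience".
X = rng.uniform(0.0, 2.0 * np.pi, size=(2000, 1))
y = np.sin(X).ravel()
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                   random_state=0).fit(X, y)

print(net.predict([[1.0]]))   # in-regime: close to sin(1.0) ~ 0.84
print(net.predict([[20.0]]))  # out of regime: typically far from sin(20.0) ~ 0.91
```

The net is equally confident in both answers; that's the sense in which an intuition/prediction can feel right while being entirely wrong.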

Regarding "what evidence is there for the conclusion that the intuition about consciousness can't map to anything other than algorithm in the brain?":  I would posit for evidence the fact that people have intuitions about all kinds of completely ridiculous and crazy stuff that doesn't make sense.  I can see no reason why intuition about consciousness must somehow always be coherent, when so many people have intuitions that don't even remotely match reality (and/or are inconsistent with each other or themselves.)

Regarding "what if there is some part of reality that this intuition corresponds to?":  I don't understand what you're trying to drive at here.  We call intuitions like that "testable", and upon passing those tests, we call them "likely to model or represent reality in some way".

Hmm, I'm not actually sure about quantifying the ratio of crazy to predictive intuitions (especially if we generalize to include perception) to arrive at a low prior for intuitions. The way I see it, if everyone had an interactive map of Haiti in the corner of their vision, we should try to understand how it works and find what it corresponds to in reality - not immediately dismiss it. Hence the question about the specific illusory parts of consciousness.

Anyway, I think the intuition about consciousness does correspond to a part of reality - to the "reality" part. I.e., panpsychism is true, and the zombie thought experiment illustrates the difference between the real world and a world that does not exist. It doesn't involve additional primitives, because physical theories already include reality, and it diverges from the intuition about consciousness in unsurprising ways (like the intuition being too anthropocentric).