In response to the classic Mysterious Answers to Mysterious Questions, I express some skepticism that consciousness can be understood by science. I postulate (with low confidence) that consciousness is “inherently mysterious”, in that it is philosophically and scientifically impenetrable. The mysteriousness is a fact about our state of mind, but that state of mind is due to a fundamental epistemic feature of consciousness and is impossible to resolve.

My issue with understanding the cause of consciousness involves p-zombies. Any experiment aimed at understanding consciousness would have to be able to detect consciousness, which seems to me to be philosophically impossible. To be more specific, any scientific investigation of the cause of consciousness would have (to simplify) an independent variable that we could manipulate to see how this manipulation affects the dependent variable: the presence or absence of consciousness. We assume that those around us are conscious, and we have good reason to do so, but we can't rely on that assumption in any experiment in which we are investigating consciousness. Before we ask “what is causing x?”, we first have to know that x is present.
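To make the shape of the obstacle concrete, here is a minimal sketch of the experimental loop I have in mind (the names, such as detect_consciousness, are hypothetical). The loop itself is unremarkable; the entire problem lives in the one function nobody knows how to write:

```python
def manipulate_substrate(subject, intervention):
    # Independent variable: any physical change we like (a lesion, stimulation,
    # a different cognitive architecture, and so on).
    subject.apply(intervention)
    return subject

def detect_consciousness(subject):
    # Dependent variable: is consciousness present at all? This is the step I
    # claim we cannot fill in, because behavior and verbal reports are equally
    # compatible with "conscious" and "p-zombie".
    raise NotImplementedError("no philosophically sound detector is known")

def investigate_consciousness(subject, interventions):
    results = {}
    for intervention in interventions:
        changed = manipulate_substrate(subject, intervention)
        results[intervention] = detect_consciousness(changed)  # everything stalls here
    return results
```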

As Eliezer points out, that an individual says he's conscious is a pretty good signal of consciousness, but we can't necessarily rely on that signal for non-human minds. A conscious AI may never talk about its internal states, depending on its structure. (Humans have a survival advantage to social sharing of internal realities; an AI will not be subject to that selection pressure. There’s no reason for it to have any sort of emotional need to share its feelings, for example.) On the flip side, a savvy but non-conscious AI may talk about its "internal states", not because it actually has internal states, but because it is “guessing the teacher's password” in the strongest way imaginable: it has no understanding whatsoever of what those states are, but computes that aping internal states will accomplish its goals. I don't know how we could possibly know if the AI is aping consciousness for its own ends or if it actually is conscious. If consciousness is thus undetectable, I can't see how science can investigate it.

That said, I am very well aware that “throughout history, every mystery ever solved has turned out to be not magic”* and that every single time something has seemed inscrutable to science, a reductionist explanation eventually surfaced. Knowing this, I seriously downgrade my confidence that "No, really, this time it is different. This phenomenon really is beyond the grasp of science." I look forward to someone coming forward with something clever that dissolves the question, but even so, it does seem inscrutable.

 


*- Though, to be fair, this is a selection bias. Of course all the solved mysteries weren't magic; all the mysteries that are actually magic remain unsolved, because they're magic! This is NOT to say I believe in magic, just to say that it's hardly saying much to claim that all the things we've come to understand were in principle understandable. To steelman: I do understand that with each mystery that was once declared to be magical, then later shown not to be, our collective prior for the existence of magical things decreases. (There is a sort of halting problem here: if a question has remained unsolved since the dawn of asking questions, is that because it is unsolvable, or because we're right around the corner from solving it?)


I don't know how we could possibly know if the AI is aping consciousness for its own ends or if it actually is conscious.

How do you answer that question for human beings? ;-)

How do you know if a tree falling in the forest is just aping a sound for its own ends, or if it actually makes a sound? ;-)

Thinking about p-zombies is a sure sign that you are confused, and projecting a property of your brain onto the outside world. If you think there can be a p-zombie, then the mysterious thing called "consciousness" must reside in your map, not in the territory.

More specifically, if you can imagine robot or person A "with" consciousness, and robot or person B "without" it, and meanwhile imagine them to be otherwise "identical", then the only place for the representation of the idea that there is a difference, is in the mental tags you label these imaginary robots or people with.

This is why certain ideas seem "supernatural" or "beyond the physical" -- they are actually concepts that exist only in the mind of the observer, rather than in the physical reality.

This isn't to say that there couldn't someday be a reductionist definition of consciousness that refers to facts in the territory, just that the intuitive idea of consciousness is actually a bit that gets flipped in our brains: an inbuilt bit that's part of our brain's machinery for telling the difference between animals, plants, and inanimate objects. This bit is the true home of our intuitive idea of consciousness, and is largely unrelated to whatever actual consciousness is.

In young children, after all, this bit flips on for anything that appears to move or speak by itself: animals, puppets, cartoons, etc. We can learn to switch it on or off for specific things, of course, but it's an inbuilt function that has its own sense or feel of being "real".

It's also why we can imagine the same thing having or not-having consciousness: to our brain, it is an independent variable, because we have separate wiring to track that variable independently. But the mere existence of a tracking variable in our heads doesn't mean the thing it tracks has a real existence. (We can see "colors" that don't really exist in the visible spectrum, after all: many colors are simply our perception of what happens when something reflects light at more than one wavelength.)

I suggest that if you are still confused about this, consult the sequence on the meaning of words, and particularly the bits about "How an algorithm feels from the inside" -- if I recall correctly, that's the bit that shows how our neural nets can represent conceptual properties like "makes a sound" as if they were independent of the physical phenomena involved. The exact same concept applies to our projections of consciousness, and the confusion of p-zombies.
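As a toy illustration of that "separate tracking variable" point (hypothetical code, not something from the sequence): the observable features and the consciousness tag are stored independently, so the tag can be flipped while every feature stays the same, which is all the p-zombie intuition requires.

```python
from dataclasses import dataclass

@dataclass
class PerceivedAgent:
    # Observable features the brain actually uses when classifying something as a mind.
    moves_on_its_own: bool
    reports_inner_states: bool
    # The brain's own summary tag -- the inbuilt "this is a conscious thing" bit.
    tagged_as_conscious: bool

person = PerceivedAgent(moves_on_its_own=True, reports_inner_states=True, tagged_as_conscious=True)
zombie = PerceivedAgent(moves_on_its_own=True, reports_inner_states=True, tagged_as_conscious=False)

# Identical observables, different tag: the imagined "difference" lives entirely
# in the representation (the map), not in anything observable (the territory).
```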

How do you answer that question for human beings? ;-)

I don't, but I assume that they do, because other humans and I share a common origin. I know I am conscious, and I would consider it strange if natural selection made me conscious but not individuals who are basically genetically identical to me.

I suggest that if you are still confused about this, consult the sequence on the meaning of words, and particularly the bits about "How an algorithm feels from the inside"

I will review that sequence again. I certainly am confused. However, I don't think that consciousness is an attribute of my map but not the territory (though I freely admit that I'm having trouble expressing how that could be). I'm going to do my best here.

Can one rightly say that something exists but interacts with nothing, or is this contrary to what it means to exist? (Human consciousness is not one of these things, for indeed I have some idea of what consciousness is and I can say the words "I am conscious", but I think the question is relevant.) If such things exist, are they in the map or the territory?

I intend to flesh this out further, but for the moment, I'll repeat what I said to Toggle, below:

I saw a bit of the movie Her, about the love affair between a guy and his operating system. It was horrifying to me, but I think for a different reason than everyone else in the room. I was thinking, "he might be falling in love with an automaton. How do we know if he is in a relationship with another mind or just an unthinking mechanism of gears and levers that looks like another mind from the outside?" The idea of being emotionally invested in an emotional void bothers me. I want my relationships to be with other minds.

Some here see this as a meaningless distinction. The being acts the same as a mind, so for all intents and purposes it is a mind. What difference does it make to your utility if the relationship is with an empty chasm shaped like a person? The input-output is the same.

Perhaps, you're right. I'm still working around this, and maybe my discomfort is a facet of an outdated worldview. I'll note, however, that this reduces charity to fuzzy-seeking. It doesn't make any difference whether you actually help as long as you feel like you help. If presented with the choice, saving a life is identical to saving every life. That way lies solipsism.

I know I am conscious

How do you know that? Break it down!

What difference does it make to your utility if the relationship is with an empty chasm shaped like a person?

How do you know that you're not an empty chasm shaped like a person? Is it merely your belief that makes this so? Well then, any being we make that likewise believes this will also be conscious by this definition.

The essential confusion that you're making is that you think you have a coherent definition of "consciousness", but you don't.

How do you know you're not a p-zombie? You merely believe this is not the case.

But why?

Because your brain has -- figuratively speaking -- a neuron that means "this thing is an entity with goals and motivations". Not as an abstract concept, but as an almost sensory quality. There is, in fact, a "qualia" (qualion?) for "that thing is an intentional creature", triggered initially by observation of things that appear to move on their own. Call it the anthropomorphic qualion, or AQ for short.

The AQ is what makes it feel like p-zombies are a coherent concept. Because it exists, you can imagine otherwise-identical sensory inputs, but with the AQ switched on or off. When you make reference to an "empty chasm shaped like a person", you are describing receiving sensory inputs that match "person", but with your AQ not firing.

Compare this with e.g. Capgras syndrome, in which a person is utterly convinced that all their friends and relatives have been replaced by impostor-duplicates. These dastardly villains look and sound exactly the same as their real friends and relatives, but are clearly impostors.

This problem occurs because we have figurative neurons or qualia associated with recognizing specific people. So, if you have a friend Joe, there's a Joe Qualion (JQ) that represents the "experience of seeing Joe". In people with Capgras syndrome, there is damage to the parts of the brain that recognize faces, causing the person to see Joe, and yet not experience the Joe Qualion.

Thus, even though he or she sees a person that they will readily admit "looks just like Joe", they will nonetheless insist, "That's not Joe. I've known Joe my whole life, and that is not Joe."

Now, even if you don't have Capgras syndrome, you can still imagine what this would be like, to have Joe replaced by an exact duplicate. In your mind, you can picture Joe, but without the Joe Qualion.

And this is the exact same thing you're doing when you imagine p-zombies. (i.e., making stuff up!)

In both cases, however, what you're doing is 100% part of your map. You can imagine "looks like a person, but is empty inside", just like you can imagine "duplicate impostor of Joe".

This does not mean, however, that it's possible to duplicate Joe, any more than it implies you can have something that's identical to a person but not conscious.

Now, you may say, "But it doesn't rule it out, either!"

No, but then we can imagine the moon made of green cheese, or some other physical impossibility, or God making a stone so heavy he can't lift it, and these imaginary excursions into incoherence are not any more evidence for their respective propositions than the p-zombie thought experiment is.

Because the only evidence we have for the proposal that p-zombies can exist is imaginary. What's more, it rests solely on the AQ. That is, our ability to anthropomorphize. And we do not have any credible evidence that our own perception of personal consciousness isn't just self-applied anthropomorphism.

The thing that makes p-zombies so attractive, though, is that this imaginary evidence seems intuitively compelling. However, just as with the "tree falling in the forest" conundrum, it rests on a quirk of neuroanatomy. Specifically, that we can have qualia about whether something "is" a thing, that are independent of all the sensory inputs that we use to decide whether the thing "is" that thing in the first place.

Thus, we can imagine a tree without a sound, a Joe that isn't Joe, and a p-zombie without consciousness. But our ability to imagine these things only exists because our anatomy contains separate representations of the ideas. They are labels in our mind, that we can turn on and off, and then puzzle over philosophical arguments about whether the tree "really" makes a sound or not. But the labels themselves are part of the map, not the territory, because they are in the brain of the arguer.

Summary: we can imagine all kinds of stupid sh** (like trees falling in the forest without making a sound), because our brain's maps represent "is"-ness as distinct qualia from the qualia that are used to determine the is-ness in the first place. A surprising number of philosophical quandaries and paradoxes arise from this phenomenon, and are no longer confusing as soon as you realize that "is-ness" is actually superfluous, existing as it does only in the map, not the territory. Rationalist taboo and E-prime are two techniques for resolving this confusion in a given context.

It's extremely premature to leap to the conclusion that consciousness is some sort of unobservable opaque fact. In particular, we don't know the mechanics of what's going on in the brain as you understand and say "I am conscious". We have to at least look for the causes of these effects where they're most likely to be, before concluding that they are causeless.

People don't even have a good definition of consciousness that cleanly separates it from nearby concepts like introspection or self-awareness in terms of observable effects. The lack of observable effects goes so far that people posit they could get rid of consciousness and everything would happen the same (i.e. p-zombies). That is not an unassailable strength making consciousness impossible to study; it is a glaring weakness implying that p-zombie-style consciousness is a useless or malformed concept.

I completely agree with Eliezer on this one: a big chunk of this mystery should dissolve under the weight of neuroscience.

It's extremely premature to leap to the conclusion that...

Premature to leap to conclusions? Absolutely. Premature to ask questions? I don't think so. Premature to acknowledge foreseen obstacles? Perhaps. We really do have little information about how the brain works and how a brain creates a mind. Speculation before we have data may not be very useful.

I want to underscore how skeptical I am of drawing conclusions about the world on the basis of thought alone. Philosophy is not an effective method for finding truth. The pronouncements by philosophers of what is "necessary" are more often than not shown to be fallacious bordering on the absurd once scientists get to the problem. Science's track record of proving what was presumed to be unprovable is fantastic. Yet, knowing this, the line of inquiry still seems to present problems, a priori.

How could we know if an AI is conscious? We could look for signs of consciousness, or structural details that always (or even frequently) accompany consciousness. But in order to identify those features we need to assume what we are trying to prove.

Is this specific problem clear? That is what I want to know about.

We have to at least look for the causes of these effects where they're most likely to be, before concluding that they are causeless.

I am in no way suggesting that consciousness is causeless (which seems somewhat absurd to me), only that there is an essential difficulty in discovering the cause. I heartily recommend that we look. I am ABSOLUTELY not suggesting that we should give up on trying to understand the nature of mind, especially with the scientific method. However, my faulty a priori reasoning foresees a limitation in our empirical methods, which have a much better track record. When the empirical methods exceed my expectation, I'll update and abandon my a priori reasoning, since I know that it is far less reliable (though I would want to know what was wrong with my reasoning). Until the empirical methods come through for me, I make a weak prediction that they will fail in this instance, and am asking others to enlighten me about my (knowingly faulty) a priori reasoning.

I apologize if I'm belaboring the point, but I know that I'm going against the grain of the community and could be misconstrued. I want to be clear so as not to be misrepresented.

[anonymous]

Given the rich history of cultures we have to draw on, I think it should be unsurprising that we have more words to use than actual concepts to apply them to (especially in nebulous topics like consciousness).

Let me switch gears and use another example: Reality, World, Being, Existence, Nature, Universe, Cosmos, etc. Aren't these all basically referring to the same thing?

I'm suggesting that perhaps a big problem with a scientific theory of consciousness in the current landscape is various vague terms for a single concept.

With that in mind, I humbly suggest that "mind", "consciousness", "experience", "qualia" and similar terms are redundant. What they all refer to is "subjective".

Consciousness, as far as I can tell, and someone feel free to kindly correct me if I'm wrong, is just a fancy word for the subjective part of reality.

So, with that in mind, what precisely are our expectations for a scientific theory of consciousness?

Are we literally trying to develop an objective description of subjectivity?

If not, how is what we are trying to do different?

If so, while it seems impossible at first, there may be options.

[anonymous]

I think I'm trying to point at a different issue altogether. The great Rationalists of history (Leibniz, Spinoza, Descartes) all left maps, with their own idiosyncrasies. The ancient East left a variety of different maps. The Greeks left a few different versions too.

Our current map seems to have redundant features. For example: is there a significant difference between mind and consciousness? Hypothetically, if we came to fully understand mind, what would be left to know about consciousness?

I'm not sure what you mean by "mind" or "consciousness." I usually think of a mind as the content of a consciousness. I don't know yet if that is an artificial distinction.

[anonymous]

Couldn't one equally suggest that consciousness is the content of a mind?

I could be missing something, but I guess the approach I'm suggesting is to identify distinct concepts and then label them.

And so far, I haven't seen much in the way of a standard distinction between consciousness and mind and experience and qualia and phenomenal reality, etc.

You have a point.

"Minds are made of thoughts."

Is that a coherent thing to say?

[anonymous]

Does having a thought make something a mind?

Or does having a mind make something think?

I think the most honest thing to say is that as of right now, there isn't a material, or spatial, or temporal description of how these things are related. Which comes first temporally, which is larger spatially, which is more complex materially. None of those questions have answers.

I think we can say with a pretty straight face that we all have subjective experiences. How that involves minds creating consciousness or consciousness creating minds is something of which I'm skeptical.

Phenomenological investigation allows us to break something like consciousness down.

We think, for example, that perceiving the qualia of red and of blue is each something that consciousness is about. We could add a third base color via gene therapy and ask people to describe how their conscious experience of color changes.

Just because investigating it might need conceptual advances doesn't mean that those can't be made.

Describing novel qualia is famously difficult.

Difficult is not the same thing as impossible. Doing advanced math is also difficult.

I was thinking of writing "difficult going on impossible". No one can do it reliably at all, strictly speaking.

I don't think that's the case. I do think there are people who do get something out of phenomenological investigation.

The practically relevant philosophical question is not "can science understand consciousness?", but "what can we infer from observing the correlates of consciousness, or from observing their absence?". This is the question that, for example, anesthesiologists have to deal with on a daily basis.

When formulated this way, the problem is really not that different from other scientific problems where causality must be detected. Detecting causal relations is famously hard, but it's not impossible. (We're reasonably certain, for example, that smoking causes cancer.)

In light of this, maybe the Bradford Hill criteria can be applied. For example, if we're presented with the problem of a non-conscious AI agent that wants to convince us of being conscious, then it's likely we can reject its claims by applying the consistency criterion. We could secretly create other instances of the same AI agent, put them in modified environments (e.g. in an environment where the motivation to lie about being conscious is removed), and then observe whether the claims of these instances are consistent.

Similarly, if the internal structure of the agent is too simple to allow consciousness (e.g. the agent is a Chinese room with table-lookup based "intelligence", or a bipartite graph with a high Phi value), we can reject the claim on the plausibility criterion. (Note that the mechanism for intelligence is not a priori required to be the biological one, or its emulation. For an analogy, we don't reject people's claims of having qualia just because we know that they don't have the ordinary biological mechanisms for them. Persons who claim to experience phantom pain in their amputated limbs are as likely to be treated seriously by medical professionals as persons who experience "traditional", corporeal pain.)
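As a rough sketch only (the agent interface and names here are hypothetical, not an established protocol), the consistency check described above might look something like this:

```python
def reports_consciousness(agent_factory, environment):
    # Instantiate a fresh copy of the agent, run it in the given environment,
    # and record whether it claims to be conscious.
    agent = agent_factory()
    return agent.run(environment).claims_consciousness

def consistency_check(agent_factory, environments):
    # Consistency criterion, loosely applied: if the claim disappears as soon
    # as the incentive to make it is removed, treat the original claim as
    # unreliable evidence of consciousness.
    reports = [reports_consciousness(agent_factory, env) for env in environments]
    return all(reports) or not any(reports)
```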

In light of this, maybe the Bradford Hill criteria can be applied. For example, if we're presented with the problem of a non-conscious AI agent that wants to convince us of being conscious, then it's likely we can reject its claims by applying the consistency criterion. We could secretly create other instances of the same AI agent, put them in modified environments (e.g. in an environment where the motivation to lie about being conscious is removed), and then observe whether the claims of these instances are consistent.

This only solves half the problem. If the AI has no motivation to say that it is conscious, we have no reason to think that it will. We would assume that both copies were non-conscious, because they had no motivation to convince us otherwise.

I suppose what we need is a test under which an AI has motivation to declare that it is conscious iff it actually is conscious. Does anyone have any idea for how to actually design such a test?

[lmm]

Surely this is just a particular case of "we want the AI to figure out things about the world, and tell us those things truthfully"? If you can figure out how to get the AI to tell us whether investing more money into AI research is likely to lead to good outcomes, and not lie about this, the same method would work for getting it to tell us whether it's conscious.

How would we define "conscious" in order to ask the question?

[lmm]

The same way we define it when asking other people?

Which is what, specifically?

I think we do it indexically. I use a word in context, and since you have a parallel experience in the same context, I never have to make clear exactly (at least in terms of intension) what I mean; you have the same experience and so can infer the label. Ask an automaton "do you have emotions?" and it may observe human use of the word emotion, conclude that "emotion" is an automatic behavioral response to conditions and changes in conditions that affect one's utility function, and declare that yes, it does have emotion. Yet, of course, this completely misses what we meant by emotion, which is a subjective quality of experience.

Can you make a being come to understand the concept of subjectivity if it doesn't itself embody a subjective perspective?

Alternatively, if you asked me “What is red?” I could point to a stop sign, then to someone wearing a red shirt, and a traffic light that happens to be red, and blood from where I accidentally cut myself, and a red business card, and then I could call up a color wheel on my computer and move the cursor to the red area. This would probably be sufficient, though if you know what the word “No” means, the truly strict would insist that I point to the sky and say “No.”

This only communicates if the person you are trying to explain "red" to can perceive color.

The problem is that my subjective experience of red is always accompanied by a particular range of wavelengths of light. Yet, when I say the word red, I don't mean the photons that are of that frequency, I mean the subjective experience that those photons cause. But since the one always accompanies the other, someone naive of color might think I mean the mathematical features of the waves reflected from the objects to which I'm pointing.

[lmm]

If you can't express the question then you can't be confident other people understand you either. Remember that some people just don't have e.g. visual imagination, and don't realise there's anything unusual about them.

Now I'm wondering whether I'm conscious, in your sense. I mean, I feel emotion, but it seems to adequately correspond to your "automaton" version. I experience what I assume is consciousness, but it seems to me that that's just how a sufficiently advanced self-monitoring system would feel from the inside.

Yes. I'm wondering if these disputes simply resolve to having different subjective experiences of what it means to be alive. In fact, maybe the mistake is assuming that p-zombies don't exist. Maybe some humans are p-zombies!

However,

that's just how a sufficiently advanced self-monitoring system would feel from the inside.

seems like almost a contradiction in terms. Can a self monitoring system become sufficiently advanced without feeling anything (just as my computer computes, but I suppose, doesn't feel)?

[lmm]

Can a self-monitoring system become sufficiently advanced without feeling anything (just as my computer computes but, I suppose, doesn't feel)?

I think not. But I think that makes it entirely unsurprising, obvious even, that a more advanced computer would feel.

If so, I want to know why.

[lmm]

Because it seems like the most plausible explanation for the fact that I feel, to the extent that I do. (also it explains the otherwise quite confusing result that our decision-making processes activate after we've acted for many kinds of actions, even though we feel like our decision determined the action).

I don't know what that second thing has to do with consciousness.

We're reasonably certain, for example, that smoking causes cancer.

We can identify cancer and make a distinction between cancer and the absence of cancer. We might be wrong sometimes, but an autopsy is pretty reliable, at least after the fact. The same cannot be said of consciousness, since it is in nature (NOT IN CAUSE) non-physical. I realize that I need to demonstrate this. That may take some time to write up.

I tend to agree with you, although I think of this as a strict limitation of empiricism.

We expand scientific knowledge through the creation of universal experiences that are (in principle) available to anyone (replicable experiments) and through the models that follow from those experiences. Consciousness, in contrast, is an experience of being oneself (and in exceptional cases, the experience of being aware that one is aware of oneself, and on down the rabbit hole). What would it mean to create the experience of being a particular entity, accessible to any observer? That question looks suspiciously like gibberish. The excellent and admirable What Is It Like to Be a Bat? develops this idea quite a bit.

But hopefully, this problem becomes fairly trivial even if it doesn't disappear as such. We can certainly notice that human bodies tend to seem conscious and that shoes tend not to, and by developing AI from a (gulp) basically phenomenological perspective we can create a future we have every reason to believe is rich with selves and perspectives, no more a leap of faith than biological reproduction. We just won't be able to point a consciousness-detecting machine at them and wait for it to go 'ping'.

Having been able to experience firsthand what it is like to be an echolocator, I don't find the question gibberish at all.

Even with ordinary evidence you don't have access to other people's experiences of the apparatus. That is, I will never know what the thermometer looks like to you. And if I am, say, color blind, I can never have quite the same experience. But still we get compatible enough experiences to construct a "thermometer reading" that is the same no matter the subject. In theory it should not be any harder to construct such experiences that stand for experiences rather than temperatures.

Having been able to experience firsthand what it is like to be an echolocator

Elaborate?

A comment in a previous thread with a similar topic:

I heard on the television that some blind people had developed the skill of seeing with clicks. It sounded cool and worth the effort, so I trained myself to have that ability too. Yes, it was fun as predicted, but not totally mindblowing. No, it is not impossible.

Small babies have eyes that receive light, but they can't see because they can't process the information sufficiently. With hearing, people retain this property after being 3 days old. There is no natural incentive to be particularly picky about hearing; you can be a human just fine without being an echolocator (with just stereo hearing) and not even know you are missing anything. (Humans are seers, not sniffers like dogs or hearers like bats. Humans are also trichromats while the average animal is a tetrachromat, and yes, such a human still doesn't feel like they're missing out on anything, because you don't know you are the handicapped minority.)

The argument is like saying that because people are naturally illiterate, they can't possibly imagine what it would be like to see words instead of hearing them. If you don't go outside the experience of a medieval peasant, that might hold. If you are given a text in a foreign alphabet, and later given the alphabet and asked to point out the letters you saw, you might not be able to complete the task. That you have this property doesn't mean it can't be changed with training. Your ability to see letters will be improved if you work on your literacy. People are able to work toward more efficient sensory processing. And this also includes high-end stuff such as the synesthetic ability to use spatial metaphors for amounts. There are people whose processes of identifying a letter or number give it a color association. It's not that the information would be in the wrong format for the brain to accept it; it's that the brain is not yet in a sufficiently expressive format to represent the stimuli. But it is more the duty of the brain to change than the incomprehensibility of the object. Map and territory, etc.

People that argue that imagination can't encompass that have not seriously tried. And even if they have seriously tried, that is more evidence of their lower-than-average imaginative capability than of the truth of their argument. "What it is to be a human" isn't even nearly so standard that it can be referenced as a single monolithic concept, much less an axiom that doesn't need to be stated.

So claiming logical impossibility is hasty in the greatest measure available.

by developing AI from a (gulp) basically phenomenological perspective we can create a future we have every reason to believe is rich with selves and perspectives, no more a leap of faith than biological reproduction.

Can you elaborate on this?

Well, one of the reasons that the Turing Test has lasted so long as a benchmark, despite its problems, is the central genius of holding inorganic machines to the same standards as organic ones. Notwithstanding p-zombies and some of the weirder anime shows, we're actionably and emotionally confident in the consciousness of the humans that surround us every day. We can't experience these consciousnesses directly, but we do care about their states in terms of both instrumental and object-level utility.

An AGI presents new challenges, but we've already demonstrated a basic willingness to treat ambulatory meat sacks as valuable beings with an internal perspective. By assigning the same sort of 'conscious' label to a synthetic being who nonetheless has a similar set of experiential consequences in our lives, we can somewhat comfortably map our previous assumptions on to a new domain. That gives us a beachhead, and a basis for cautious expansion and observation in the much more malleable space of inorganic intelligences.

we can somewhat comfortably map our previous assumptions on to a new domain.

I'm not sure how comfortably.

I saw a bit of the movie Her, about the love affair between a guy and his operating system. It was horrifying to me, but I think for a different reason than everyone else in the room. I was thinking, "he might be falling in love with an automaton. How do we know if he is in a relationship with another mind or just an unthinking mechanism of gears and levers that looks like another mind from the outside?" The idea of being emotionally invested in an emotional void bothers me. I want my relationships to be with other minds.

Some here see this as a meaningless distinction. The being acts the same as a mind, so for all intents and purposes it is a mind. What difference does it make to your utility if the relationship is with an empty chasm shaped like a person? The input-output is the same.

Perhaps. I'm still working around this, and perhaps my discomfort is a facet of an outdated worldview. I'll note, however, that this reduces charity to fuzzy-seeking. It doesn't make any difference whether you actually help as long as you feel like you help. If presented with the choice, saving a life is identical to saving every life.

In any case, I feel safe in presuming the consciousness of other humans, not so much because they resemble me in outputs as because we were both produced by the same process of evolution, and it would be strange if evolution made me conscious but not the beings that are genetically basically identical to me. I do not so readily make that assumption for a non-human, even a human-brain emulation running on hardware other than a brain.

http://philpapers.org/rec/ARGMAA-2 - this may be relevant to your question, although I haven't read the whole article yet.

[see]

1) Conscious beings reasonably often try to predict their own future state or the state of other minds.

2) In order to successfully mimic a conscious being, a p-zombie would have to also engage in this behavior, predicting its own future states and the future states of other minds.

3) In order to predict such future states, it would seem necessary that a p-zombie would have to have at least some ability to model the states of minds, including its own.

Now, before we go any further, how does consciousness differ from having a model of the internal states of one's own mind?

how does consciousness differ from having a model of the internal states of one's own mind?

This is a great question and I'm not sure.

There's a difference between understanding processes and their causes from the outside and experiencing them subjectively, from the inside, as qualia.

Light exists and there are rules that govern it. A human can learn to understand the math that governs photons, the biology of how eyes work, the neurology that determines how the signals are passed through neurons and "processed" in the occipital lobe, and even how the brain will react to that stimuli. A computer algorithm could model all of that. But even if you understand the chain of causes perfectly, if you are color blind, you still don't experience the subjective qualia of color.

Some psychopaths are notoriously charming and expert manipulators. They are extremely skilled at modeling others. But, at least in some cases, that modeling is done without any affective response, unlike most of us, who feel something when we see someone in pain. Apparently, modeling can occur without subjective (motivating) affect. Conceivably, one could even model oneself without affect.

Conscious experience seems to be something more than just modeling, since all the modeling in the world, from the outside, does not produce a conscious experience. Yet that experience incontrovertibly exists, in some sense. I'm having it. Whatever causes color, nothing could ever disprove that the experience of color exists.

(This feels like a problem of free will to me. I have written essays all but proving that Free Will is an incoherent concept. Yet this fails to persuade some, who see their personal, subjective feeling of having chosen as indisputable. I can't deny the existence of my qualia, since it is the only thing I have direct access to. In that I experience it, it is. However, I think I may be confused in much the same way the proponent of Free Will is confused. Someone please dissolve the question for me.)

We assume that those around us are conscious, and we have good reason to do so, but we can't rely on that assumption in any experiment in which we are investigating consciousness

This is contradictory in strictly literal terms. Either the reasons are good and we can build a technical equivalent, or we just approve of our stance without ever having any basis to believe so. It is kind of like Turing having said that when in doubt about whether the agents you deal with are conscious or not, it is polite to assume that they are. But this doesn't have epistemological weight. Maybe consciousness turns out to be a fiction necessary for humans to acknowledge psychology, in a similar way that some people need the concept of God to found morality. But consciousness as a mode of social interaction doesn't have import for its truth.