I found this article on the Brain Preservation Foundation's blog. It covers a lot of common theories of consciousness and shows how they kind of miss the point when it comes to deciding whether we should or shouldn't upload our brains if given the opportunity.

Hence I see no reason to agree with Kuhn’s pessimistic conclusions about uploading, even assuming his eccentric taxonomy of theories of consciousness is correct. What I want to focus on in the remainder of this blog is challenging the assumption that the best approach to consciousness is tabulating lists of possible theories of consciousness and assuming they each deserve equal consideration (much like the recent trend in political coverage of giving equal time to each position regardless of any relevant empirical considerations). Many of the theories of consciousness on Kuhn’s list, while reasonable in the past, are now known to be false based on our best current understanding of neuroscience and physics (specifically, I am referring to theories that require mental causation or mental substances). Among the remaining theories, some are much more plausible than others.

http://www.brainpreservation.org/not-all-theories-of-consciousness-are-created-equal-a-reply-to-robert-lawrence-kuhns-recent-article-in-skeptic-magazine/


My reply to Cerullo:

"If we exactly duplicate and then emulate a brain, then it has captured what science tells us matter for conscious[ness] since it still has the same information system which also has a global workspace and performs executive functions. "

It'll have what science tells us matters for the global workspace aspect of consciousness (AKA access consciousness, roughly). Science doesn't tell us what is needed for phenomenal consciousness (AKA qualia), because it doesn't know. Consciousness has different facets. You are kind of assuming that where you have one facet, you must have the others...which would be convenient, but isn't something that is really known.

"The key step here is that we know from our own experience that a system that displays the functions of consciousness (the easy problem) also has inner qualia (the hard problem)."

Our own experience pretty much has a sample size of one, and therefore is not a good basis for a general law. The hard question here is something like: "would my qualia remain exactly the same if my identical information-processing were re-implemented in a different physical substrate such as silicon?". We don't have any direct experience that would answer it. Chalmers's Absent Qualia paper is an argument to that effect, but I wouldn't call it knowledge. Like most philosophical arguments, it's an appeal to intuition, and the weakness of intuition is that it is kind of tied to normal circumstances. I wouldn't expect my qualia to change or go missing while my brain was functioning within normal parameters...but that is the kind of law that sets a norm within normal circumstances, not the kind that is universal and exceptionless. Brain emulation isn't normal; it is unprecedented and artificial.

Thanks for the post, I really liked the article overall. Nice general summary of the ideas. I agree with torekp. I also think that the term consciousness is too broad. Wanting to have a theory of consciousness is like wanting to have a "theory of disease". The overall term is too general, and "consciousness" can mean many different things. This dilutes the conversation. We need to sharpen our semantic markers and not rely on intuitive or prescientific ideas. Terms that do not "carve nature well at its joints" will lead our inquiry astray from the beginning.

When talking about consciousness one can mean for example:

-vigilance/wakefulness

-attention: focusing mental resources on specific information

-primary consciousness: having any form of subjective experience

-conscious access: how the attended information reaches awareness and becomes reportable to others

-phenomenal awareness/qualia

-sense of self/I

Neuroscience is needed to determine if our concepts are accurate (enough) in the first place. It can be that the "easy problem" is hard and the "hard problem" seems hard only because it engages ill-posed intuitions.

oge:

I agree re: consciousness being too broad a term.

I use the term in the sense of "having an experience that isn't directly observable to others", but as you noted, people use it to mean LOTS of other things. Thanks for articulating that thought.

The author is overly concerned about whether a creature will be conscious at all and not concerned enough about whether it will have the kind of experiences that we care about.

oge:

My understanding is that if the creature is conscious at all, and it acts observably like a human with the kind of experience we care about, THEN it likely has the kind of experiences we care about.

Do you think it is likely that the creatures will NOT have the experiences we care about?

(just trying to make sure we're on the same page)

It depends on how the creatures got there: algorithms or functions? That is, did the designers copy human algorithms for converting sensory inputs into thoughts? If so, then the right kind of experiences would seem to be guaranteed. Or did they find new ways to compute similar coarse-grained input/output functions? Then, assuming the creatures have some reflexive awareness of internal processes, they're conscious of something, but we have no idea what that may be like.

Further info on my position.
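(To make the algorithm-vs-function distinction concrete, here is a minimal, purely illustrative Python sketch; it is not from the article or the thread, and the function names are made up. Two routines realise the same coarse-grained input/output mapping, sorting a list of numbers, by entirely different internal processes.)

```python
# Purely illustrative: two different algorithms realising the same
# coarse-grained input/output mapping (sorting a list of numbers).

def sort_by_insertion(xs):
    """Builds the output one element at a time, shifting items into place."""
    result = []
    for x in xs:
        i = 0
        while i < len(result) and result[i] < x:
            i += 1
        result.insert(i, x)
    return result


def sort_by_merging(xs):
    """Recursively splits the input in half and merges the sorted halves."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = sort_by_merging(xs[:mid]), sort_by_merging(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]


# Same observable behaviour on any input, entirely different internal process.
assert sort_by_insertion([3, 1, 2]) == sort_by_merging([3, 1, 2]) == [1, 2, 3]
```

Whether anything about experience supervenes on the internal process rather than on the input/output mapping is exactly what is disputed in the exchange below.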

That is, did the designers copy human algorithms for converting sensory inputs into thoughts? If so, then the right kind of experiences would seem to be guaranteed

You seem to be rather sanguine about the equivalence of thoughts and experiences.

(And are we talking about equivalent experiences or identical experiences? Does a tomato have to be coded as red?)

Or did they find new ways to compute similar coarse-grained input/output functions? Then, assuming the creatures have some reflexive awareness of internal processes, they're conscious of something, but we have no idea what that may be like.

It's uncontroversial that the same coarse input-output mappings can be realised by different algorithms...but if you are saying that consciousness supervenes on the algorithm, not the function, then the real possibility of zombies follows, in contradiction to the GAZP.

(Actually, the GAZP is rather terrible, because it means you won't even consider the possibility of a WBE not being fully conscious, rather than refuting it on its own grounds.)

I'm not equating thoughts and experiences. I'm relying on the fact that our thoughts about experiences are caused by those experiences, so the algorithms-of-experiences are required to get the right algorithms-of-thoughts.

I'm not too concerned about contradicting or being consistent with GAZP, because its conclusion seems fuzzy. On some ways of clarifying GAZP I'd probably object and on others I wouldn't.

You only get your guarantee if experiences are the only thing that can cause thoughts about experiences. However, you don't get that by noting that in humans thoughts are usually caused by experiences. Moreover, in a WBE or AI, there is always a causal account of thoughts that doesn't mention experiences, namely the account in terms of information processing.

You seem to be inventing a guarantee that I don't need. If human algorithms for sensory processing are copied in full, the new beings will also have most of their thoughts about experience caused by experience. Which is good enough.

Mentioning something is not a prerequisite for having it.

If human algorithms for sensory processing are copied in full, the new beings will also have most of their thoughts about experience caused by experience

That reads like a non sequitur to me. We don't know what the relationship between algorithms and experience is.

Mentioning something is not a prerequisite for having it.

It's possible for a description that doesn't explicitly mention X to nonetheless add up to X, but only possible...you seem to be treating it as a necessity.

I'm convinced by Kurzweil-style (I think he originated them, not sure) neural replacement arguments that experience depends only on algorithms, not (e.g.) the particular type of matter in the brain. Maybe I shouldn't be. But this sub-thread started when oge asked me to explain what the implications of my view are. If you want to broaden the subject and criticize (say) Chalmers's Absent Qualia argument, I'm eager to hear it.

If you mean this sort of thing http://www.kurzweilai.net/slate-this-is-your-brain-on-neural-implants, then he is barely arguing the point at all...this is miles below philosophy-grade thinking...he doesn't even set out a theory of selfhood, just appeals to intuitions. Absent Qualia is much better, although still not anything that should be called a proof.

I got started by Sharvy's "It Ain't the Meat, It's the Motion", but my understanding was that Kurzweil had something similar first. Maybe not. Just trying to give the devil his due.

Everyone should care about pain-pleasure spectrum inversion!