Followup to: How an Algorithm Feels from the Inside, Dissolving the Question, Wrong Questions

When you are faced with an unanswerable question—a question to which it seems impossible to even imagine an answer—there is a simple trick which can turn the question solvable.

Compare:

  • "Why do I have free will?"
  • "Why do I think I have free will?"

The nice thing about the second question is that it is guaranteed to have a real answer, whether or not there is any such thing as free will.  Asking "Why do I have free will?" or "Do I have free will?" sends you off thinking about tiny details of the laws of physics, so distant from the macroscopic level that you couldn't begin to see them with the naked eye.  And you're asking "Why is X the case?" where X may not be coherent, let alone the case.

"Why do I think I have free will?", in contrast, is guaranteed answerable.  You do, in fact, believe you have free will.  This belief seems far more solid and graspable than the ephemerality of free will.  And there is, in fact, some nice solid chain of cognitive cause and effect leading up to this belief.

If you've already outgrown free will, choose one of these substitutes:

  • "Why does time move forward instead of backward?" versus "Why do I think time moves forward instead of backward?"
  • "Why was I born as myself rather than someone else?" versus "Why do I think I was born as myself rather than someone else?"
  • "Why am I conscious?" versus "Why do I think I'm conscious?"
  • "Why does reality exist?" versus "Why do I think reality exists?"

The beauty of this method is that it works whether or not the question is confused.  As I type this, I am wearing socks.  I could ask "Why am I wearing socks?" or "Why do I believe I'm wearing socks?"  Let's say I ask the second question.  Tracing back the chain of causality, I find:

  • I believe I'm wearing socks, because I can see socks on my feet.
  • I see socks on my feet, because my retina is sending sock signals to my visual cortex.
  • My retina is sending sock signals, because sock-shaped light is impinging on my retina.
  • Sock-shaped light impinges on my retina, because it reflects from the socks I'm wearing.
  • It reflects from the socks I'm wearing, because I'm wearing socks.
  • I'm wearing socks because I put them on.
  • I put socks on because I believed that otherwise my feet would get cold.
  • &c.

Tracing back the chain of causality, step by step, I discover that my belief that I'm wearing socks is fully explained by the fact that I'm wearing socks.  This is right and proper, as you cannot gain information about something without interacting with it.
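The tracing procedure above is mechanical enough to sketch in code. Here is a minimal illustration, not anything from the original post: each "because" step becomes a mapping from an event to its cause, and tracing a belief is just walking that map backward (the chain data simply restates the sock example).

```python
# Each step maps an event to its cause; tracing a belief backward is
# just walking this map until it bottoms out.
# (Illustrative data only -- this restates the sock example above.)
causes = {
    "I believe I'm wearing socks": "I see socks on my feet",
    "I see socks on my feet": "my retina sends sock signals to my visual cortex",
    "my retina sends sock signals to my visual cortex": "sock-shaped light impinges on my retina",
    "sock-shaped light impinges on my retina": "light reflects from the socks I'm wearing",
    "light reflects from the socks I'm wearing": "I'm wearing socks",
}

def trace(belief):
    """Yield each cause in turn, starting from the belief and working backward."""
    event = belief
    while event in causes:
        event = causes[event]
        yield event

steps = list(trace("I believe I'm wearing socks"))
for step in steps:
    print("because", step)
print(steps[-1])  # the chain bottoms out at the fact itself: "I'm wearing socks"
```

The point the sketch makes concrete: a belief that derives from valid observation traces back to the fact itself, while a mirage-belief would trace back to something else entirely, with no lake anywhere in the chain.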

On the other hand, if I see a mirage of a lake in a desert, the correct causal explanation of my vision does not involve the fact of any actual lake in the desert.  In this case, my belief in the lake is not just explained, but explained away.

But either way, the belief itself is a real phenomenon taking place in the real universe—psychological events are events—and its causal history can be traced back.

"Why is there a lake in the middle of the desert?" may fail if there is no lake to be explained.  But "Why do I perceive a lake in the middle of the desert?" always has a causal explanation, one way or the other.

Perhaps someone will see an opportunity to be clever, and say:  "Okay.  I believe in free will because I have free will.  There, I'm done."  Of course it's not that easy.

My perception of socks on my feet is an event in the visual cortex.  The workings of the visual cortex can be investigated by cognitive science, should they be confusing.

My retina receiving light is not a mystical sensing procedure, a magical sock detector that lights up in the presence of socks for no explicable reason; there are mechanisms that can be understood in terms of biology.  The photons entering the retina can be understood in terms of optics.  The sock's surface reflectance can be understood in terms of electromagnetism and chemistry.  My feet getting cold can be understood in terms of thermodynamics.

So it's not as easy as saying, "I believe I have free will because I have it—there, I'm done!"  You have to be able to break the causal chain into smaller steps, and explain the steps in terms of elements not themselves confusing.

The mechanical interaction of my retina with my socks is quite clear, and can be described in terms of non-confusing components like photons and electrons.  Where's the free-will-sensor in your brain, and how does it detect the presence or absence of free will?  How does the sensor interact with the sensed event, and what are the mechanical details of the interaction?

If your belief does derive from valid observation of a real phenomenon, we will eventually reach that fact, if we start tracing the causal chain backward from your belief.

If what you are really seeing is your own confusion, tracing back the chain of causality will find an algorithm that runs skew to reality.

Either way, the question is guaranteed to have an answer.  You even have a nice, concrete place to begin tracing—your belief, sitting there solidly in your mind.

Cognitive science may not seem so lofty and glorious as metaphysics.  But at least questions of cognitive science are solvable.  Finding an answer may not be easy, but at least an answer exists.

Oh, and also: the idea that cognitive science is not so lofty and glorious as metaphysics is simply wrong.  Some readers are beginning to notice this, I hope.

 

Part of the sequence Reductionism

Next post: "Mind Projection Fallacy"

Previous post: "Wrong Questions"

Comments


OK, time to play:

Q: Why am I confused by the question "Do you have free will?"?
A: Because I don't know what "free will" really means.
Q: Why don't I know what "free will" means?
A: Because there is no clear explanation of it using words. It's an intuitive concept. It's a feeling. When I try to think of the details of it, it is like I'm trying to grab slime which slides through my fingers.
Q: What is the feeling of "free will"?
A: When people talk of "free will" they usually put it thusly. If one has "free will", he is in control of his own actions. If one doesn't have "free will", then it means outside forces like the laws of physics control his actions. Having "free will" feels good because being in control feels better than being controlled. On the other hand, those who have an appreciation for the absolute power of the laws of physics feel the need to bow down to them and acknowledge their status as the ones truly in control. The whole thing is very tribal, really.
Q: Who is in control, me or the laws of physics?
A: Since currently saying [I] is equivalent to saying [a specific PK-shaped collection of atoms operating on the laws of physics], saying "I am in control" is equivalent to saying "a specific PK-shaped collection of atoms operating on the laws of physics is in control". The laws of physics are not an outside force apart from me; they are inside me too.
Q: Why do people have a tendency to believe their minds are somehow separate from the rest of the universe?
A: Ugghhh... I don't know the details well enough to answer that.

Mitchell, Unknown, I worry you may have misunderstood the point.

The question "Why am I conscious?" is not meant to be isomorphic to the question "Why do I think I'm conscious?" It's just that the latter question is guaranteed to be answerable, whether or not the first question contains an inherent confusion; and that the second question, if fully answered, is guaranteed to contain whatever information you were hoping to get out of the first question.

"Explain" is a recursive option - whenever you find an answer, you can hit "Explain" again, unless you hit "Worship" or "Ignore" instead. If the answer to "Why do I think I'm conscious?" is "Because I'm conscious"; and you can show that this is true evidence (that is, you would not think you were conscious if you were not conscious); and you carry out this demonstration without reference to any mysterious concepts (i.e., "Because I directly experience qualia!" contains four mysterious concepts, not counting "Because"); then you could hit the "Explain" button again regarding "Because I'm conscious."

The point is that by starting with a belief, you start with an unconfused thing - the belief may be about something confused, but the belief itself is just a cognitive object sitting there in your mind. Even if its meaning is self-contradictory, the representation is just a representation. "This sentence is false" is paradoxical when you try to interpret it, but there is nothing paradoxical about writing four English words between quote marks, it happens all the time.

If you're asking "Why is the sentence 'This sentence is false' both true and false?" you'll end up confused, because you dereferenced it in the question, and the referent is self-contradictory. Ask "Why do I think the sentence 'This sentence is false' is both true and false?" and you'll be able to see how your mind, as an interpreter, goes into an infinite loop - suggesting that not every syntactical English sentence refers to a proposition.

By starting with a belief, un-dereferenced, inside quote marks, you start with an unconfused thing - a cognitive representation. Then you keep tracing back the chain of causality until you arrive at something confusing. Then you unconfuse it. Then you keep tracing.

It really does help to start with something unconfused.

Unknown said: So there is an actually unanswerable question (at least as far as anyone knows, by any concepts anyone has yet conceived of), and it is not a meaningless question.

1) No one knows what science doesn't know.

2) Perhaps you should ask "Why do I think this question is unanswerable?" rather than "Why is this question unanswerable?"

"No one knows what science doesn't know."

This sort of anthropomorphic bias leads to conceptual errors. 'Science' is the method of acquiring knowledge and the collection of acquired knowledge to which the method is rigorously applied. It is incapable of knowing anything independently of what individuals know; in fact, it can't know anything at all without some knowing individual to practice it. And to be sure, we can know things 'science doesn't know': we know we are in love, that we are happy or sad, that we played baseball for the first time when we were 6 years old at the park in Glens Falls, etc.

"So, why do you believe you've stopped beating your wife?"

I...er...I...crap.

Eliezer Yudkowsky (can we drop the underscores now?): You did not break the "perception of wearing socks" into understandable steps, as you demanded for the perception of free will. You certainly explained some of the steps non-confusingly, but you left out a very critical step, which is the recognition of socks within the visual input that you receive. That is a very mysterious step indeed, since your cognitive architecture is capable of recognizing socks within an image, even against an arbitrary set of transformations: rotation, blurring, holes in the socks, coloration, etc.

And I know you didn't simply leave out an explanation that exists somewhere, because such understanding would probably mean a solution for the captcha problem. So I would have to say that you made the same unacceptable leap that you attacked in the free will example.

This is one of my all-time favourite posts of yours, Eliezer. I can recognize elements of what you're describing here in my own thinking over the last year or so, but you've made the processes so much more clear.

As I'm writing this, just a few minutes after finishing the post, it's increasingly difficult not to think of this as "obvious all along" and it's getting harder to pin down exactly what in the post caused me to smile in recognition more than once.

Much of it may have been obvious to me before reading this post as well, but now the verbal imagery needed to clearly explain these things to myself (and hopefully to others) is available. Thank you for these new tools.

I think there is a real something for which free will seems like a good word. No, it's not the one true free will, but it's a useful concept. It carves reality at its joints.

Basically, I started thinking about a criminal, say, a thief. He's on trial for stealing a diamond. The prosecutor thinks that he did it of his own free will, and thus should be punished. The defender thinks that he's a pathological kleptomaniac and can't help it. But as most know, people punish crimes mostly to keep them from happening again. So the real debate is whether imprisoning the thief will discourage him.

I realized that when people think of the free will of others, they don't ask whether this person could act differently if he wanted. That's a Wrong Question. The real question is, "Could he act differently if I wanted it? Can he be convinced to do something else, with reason, or threats, or incentives?"

From your own point of view, anything that stands between you and being able to rationally respond to new knowledge makes you less free. This includes shackles, threats, bias, or stupidity. Wealth, health, and knowledge make you more free. So for yourself, you can determine how much free will you have by looking at your will and seeing how free it is. Can you, as Eliezer put it, "win"?

I define free will by combining these two definitions. A kleptomaniac is a prisoner of his own body. A man who can be scared into not stealing is free to a degree. A man who can swiftly and perfectly adapt to any situation, whether it prohibits stealing, requires it, or allows it, is almost free. A man becomes truly free when he retains the former abilities, and is allowed to steal, AND has the power to change the situation any way he wants.

Quantum magic isn't free will, it's magic.

(Note: this comment is a reply to this comment. Sorry for any confusion.)

Sereboi, I think once again we're miscommunicating. You seem to think I'm looking for a compromise between free will and determinism, no matter how much I deny this. Let me try an analogy (stolen from Good and Real).

When you look in a mirror, it appears to swap left and right, but not up and down; yet the equations that govern reflection are entirely symmetric: there shouldn't be a distinction.

Now, you can simply make that second point, but then a person looking at a mirror remains confused, because it obviously is swapping left and right rather than up and down. You can say that's just an illusion, but that doesn't bring any further enlightenment.

But if you actually ask the question "Why does a mirror appear to switch left and right, by human perception?" then you can make some progress. Eventually you come to the idea that it actually reverses front and back, and that the brain still tries to interpret a reflected image as a physical object, and that the way it finds to do this is imagining stepping into the mirror and then turning around, at which point left and right are reversed. But it's just as valid to step into the mirror and do a handstand, at which point top and bottom are reversed; it's just that human beings are more symmetric left-to-right than top-to-bottom, so this version doesn't occur to us.
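The geometry here is easy to check directly. A minimal sketch, with an axis convention and names of my own choosing: a mirror and both 180-degree rotations are diagonal matrices, so each can be represented by its diagonal and composed elementwise.

```python
# Axis convention (an arbitrary choice): x = left/right, y = up/down, z = front/back.
# A mirror in the xy-plane and 180-degree rotations are all diagonal 3x3 matrices,
# so we represent each by its diagonal and compose them elementwise.

def compose(a, b):
    """Multiply two diagonal 3x3 matrices, given as their diagonals."""
    return tuple(x * y for x, y in zip(a, b))

mirror      = (1, 1, -1)   # reverses front/back only -- the symmetric physical fact
turn_around = (-1, 1, -1)  # 180 degrees about the vertical axis ("step in and turn around")
handstand   = (1, -1, -1)  # 180 degrees about the horizontal axis ("step in and do a handstand")

# Residual transform: how the interpreted body differs from the real one.
print(compose(turn_around, mirror))  # (-1, 1, 1): left/right appear swapped
print(compose(handstand, mirror))    # (1, -1, 1): top/bottom appear swapped
```

Either interpretation is geometrically valid; which residual flip you perceive depends only on which rotation your brain picks, exactly as the comment argues.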

Anyway, the point is that you learn more deeply by confronting this question than by just stopping at "oh, it's an illusion", but that the mathematical principle is in no way undermined by the solution.

The argument I'm making is that the same thing carries through in the free will and determinism confusion. By looking at why it feels like we have choices between several actions, any of which it feels like we could do, we learn about what it means for a deterministic algorithm to make choices.

I don't know whether this question interests you at all, but I hope you'll accept that I'm not trying to weaken determinism!

Nice post, and great method.

On free will, I'd like to pose a question to anyone interested: What do you think it would feel like not to have free will?

(Or, what do you think it would feel like to not think you have free will?)

During the first month or so after my stroke, while my nervous system was busily rewiring itself, I experienced all sorts of transient proprioceptic illusions.

One of them amounted to the absence of the feeling of free will... I experienced my arm as doing things that seemed purposeful from the outside, but for which I was aware of no corresponding purpose.

For example, I ate breakfast one morning without experiencing control over my arm. It fed me, just like it always had, but I didn't feel like I was in control of it.

To give you an idea of how odd this was: at one point my arm put down the food item it was holding to my mouth, and I lay there somewhat puzzled... why wasn't my arm letting me finish it? Then it picked up a juice carton and brought it to my mouth, and I thought "Oh! It wants me to drink something... yeah, that makes sense."

It was a creepy experience, somewhat ameliorated by the fact that I could "take control" if I chose to... letting my arm feed me breakfast was a deliberate choice, I was curious about what would happen.

I think that's what it feels like to not experience myself as having free will, which is I think close enough to your second question.

As for your first question... I think it would feel very much like the way I feel right now.

Yeah, that's more or less how I interpreted it... not so much lag, precisely, as a failure to synchronize. There were lots of weird neural effects that turned up during that time that, on consideration, seemed to basically be timing/synchronization failures, which makes a lot of sense if various parts of my brain were changing the speed with which they did things as the brain damage healed and the swelling went down.

Of course, it's one thing to know intellectually that my superficially coherent worldview is the result of careful stitching together of outputs from independent modules operating at different rates on different inputs; it's quite another thing to actually experience that coherency breaking down.

Fascinating!

It felt like you couldn't control yourself, but which one of you (two) was really "yourself"? English usually refers to people and minds in the singular, but my mind feels more like a committee. Maybe the stroke drove more of a wedge between the committee members than usual.

In this particular case, I don't think so.

I mean, we can go down the rabbit hole about what constitutes a "self," but in pragmatic terms, everything involved in making decisions seemed to be more or less aligned and coordinating as well as it ever does... what was missing was that I didn't have any awareness of it as coordinated.

In other words, it wasn't like my arm was going off and doing stuff that I had no idea why it was doing; rather, it was doing exactly what I would have made it do in the first place... I just didn't have any awareness of actually making it do so.

That said, the more extremely disjointed version does happen... google "alien hand syndrome."

I don't think I ever had this confused concept of free will. That is, thinking that the future of my actions is undetermined until I make a decision, or that my actions are governed by anything other than normal physics, never made any sense to me at all.

To me, possessing free will means being in principle capable of being the causal bottleneck of my decisions other than through pure chance.

Making a decision means caching the result of a mental calculation about whether to take a certain course of action (which in humans has the strong psychological consequence of affirming that result).

Being the causal bottleneck is much more difficult to define than I thought when I started this post, but it involves comparing what sort of change to me would result in a different decision to what sort of changes to the rest of the world would result in the same.

The only ways I could see not having a free will would be either not being able to make decisions at all, or not being able to make decisions unless under the influence of something else that is itself the causal bottleneck of the decision, and which is not part of me. I can't see how the second could be the case without some sort of puppet master (and there has to be some reason against concluding that this puppet master is the real me), but it's not obvious why being under the control of the puppet master would feel any different.

I'll give (a few of them) a shot.

"Why do I think I have free will?" There seem to be two categories of things out there in the world: things whose behavior is easily modeled and thus predictable; and things whose internal structure is opaque (to pre-scientific people) and are best predicted by taking an "intentional stance" (beliefs, desires, goals, etc.). So I build a bridge, and put a weight on it, and wonder whether the bridge will fall down. It's pretty clearly the case that there's some limit of weight, and if I'm below that weight -- whether I use feathers or rocks -- the bridge will stay up; otherwise it will collapse. Very simple model, reasonably accurate.

In contrast, if I ask my officemate to borrow his pen, he may or may not give it to me. Trying to predict whether he will is impossible to do precisely, but responds best (for laypeople) to a model with beliefs, goals, memories, etc. Maybe he's usually helpful, and so will give me the pen. Maybe I made fun of his shirt color yesterday, and he remembers, and is angry with me, and so won't.

This "intentional stance" model requires some homunculus in there to "make a decision". It can decide to take whatever action it wants. I can't make it do anything (in contrast to a bridge, which doesn't "want" anything, and responds to my desires).

This is the theory element that gets labeled as "free will". It's that intentional actors appear to be able to do any action that they "want" or "decide" to do. That's part of the theory of predicting their future actions.

So, why do humans have free will but computers don't? Because most computers have behavior that is far easier to understand than human behavior, and no predictive value is gained by adopting the intentional stance towards them.

"Why do I think I was born as myself rather than someone else?"

Because a=a?

The beauty of this method is that it works whether or not the question is confused.

I have to admit, to me the "Why do I think I was born as myself rather than someone else" example seems so confused that I'm having difficulty even parsing the question well enough to apply the method.

And I know you didn't simply leave out an explanation that exists somewhere, because such understanding would probably mean a solution for the captcha problem.
Dileep, George, and Jeff Hawkins. 2005. "A Hierarchical Bayesian Model of Invariant Pattern Recognition in the Visual Cortex." Available from CiteSeer (accessed November 9, 2011).

Eliezer, in the last few posts you have proposed a method for determining whether a question is confused (namely, ask why you're asking it), and then a method for getting over any sense of confusion which may linger even after a question is exposed as confused ("understand in detail how your brain generates the feeling of the question"). The first step is reasonable, though I'd think that part of its utility is merely that it encourages you to analyse your concepts for consistency. As for the second step, I do not recall experiencing this particular form of residual confusion; if I'm analysing a question and I still feel confused, I would think it was because I was not finished with the analysis.

So what's my problem? The issue is whether this procedure helps in any way with the answering of philosophical or metaphysical questions. I can see the first step leading to (1) a double-check that your concepts make sense (2) attention to epistemic issues. (1) is OK. But (2) is certainly a place where presuppositions can insert themselves. Suppose I'm asking myself "Why are there no positive integers a, b, c, n, with n > 2, such that a^n + b^n = c^n?" If I go reflexive and instead ask "Why do I think there is no such set of integers?", I might notice that this is merely an inductive generalization on my part, from the observed fact that no-one has ever found such a set of integers. And then, if I have a particular epistemology, I might say "But inductive generalizations can never be proved, and so my original question is pointless, because I will never know if there are indeed no such sets, short of being lucky enough to find a counterexample!" And maybe I'll throw in a personal confusionectomy just to finish the job; and the result would be that I never get to discover Wiles's proof of the theorem.

It is a somewhat silly example, but I would think that it illustrates a real hazard, namely the use of this procedure to rationalize rather than to explain.

Since our introspection ability is so limited, this method sounds like it could easily end up resulting in, not the correct explanation of the belief and explanation-away of the phenomenon, but a just-so story that claims to explain away something that might actually exist. This is not a Fully General Counterargument; a well-supported explanation of the belief is probably right, but more support is needed than the conjecture. Look how many candidate explanations have been offered for belief in free will.

Z. M., let me answer you indirectly. The working hypothesis I arrived at, after a long period of time, was a sort of monadology. Most monads have simple states, but there is (one hypothesizes) a physics of monadic interaction which can bring a monad into a highly complex state. From the perspective of our current physics, an individual monad is something like an irreducible tensor factor in an entangled quantum state. The conscious self is a single monad; conscious experience is showing us something of its actual nature; any purely mathematical description, such as physics presently provides, is just formal and falls short of the truth.

Now all that may or may not be true. As far as I am concerned, thinking in terms of monads has one enormous advantage, and that is that there is no need to falsify one's own phenomenology in order to fit it to a neurophysical apriori the way that, say, Dennett does. Dennett dismisses phenomenal color and the subjective unity of experience as "figment" and "the Cartesian theater", respectively, and I'm sure he does so because there is indeed no color in a billiard-ball materialism, and no Cartesian theater in a connectionist network. But for the neo-monadologist, because consciousness is being mapped onto the state of a single monad, the ontological mismatch does not arise. We will have a formal physics of monads, described mathematically, and then the fully enriched ontology of the individual monad, to be inferred from conscious phenomenology, and there is no need to convince yourself that you are actually a collection of atoms or a collection of neurons.

The downside is that there had better be a very high-dimensional coherent quantum subsystem of the brain which is physically and functionally situated so as to play the role of Cartesian theater, or else it's back to the theoretical drawing board.

But having dreamed up all of that, what do I see when I look at current attempts to understand the mind? The subjective facts are only crudely understood; and then they are further falsified and dumbed-down to fit the neurophysical apriori; but people believe this because they think the only alternative is superstition and dualism. It's certainly a lot easier to see it so starkly, when you have an alternative, but nonetheless it is possible to sense that something is going wrong even when you don't have the alternative. And that is why I object to this happy process of dissolving one's metaphysical questions in cognitive materialism. It is simply an invitation to deceive oneself in all those areas where physics-as-we-know-it is inherently incapable of giving an answer. Better to maintain the tension of not knowing, and maybe think of something new as a result.

It looks like the basic recipe for complacency being offered here is:

Something mysterious = Thoughts about something mysterious = Thoughts = Computation = Matter doing stuff = Something we know how to understand.

But if you really follow this procedure, you will eventually end up having to relate a subjective fact like "being a self" or "seeing blue" to a physical fact like "having a brain" or "signalling my visual cortex".

It seems that most materialists about the mind have a personal system of associations, between mental states and physical states, which they are happy to treat as identities (e.g. mental process X is physical process X', "from the inside"), and which are employed when they need to be able to interpret their own experience and their own thinking in material terms.

If you keep asking why, you will need to justify these alleged identities. In fact, if you really keep asking why, in my experience the identities appear untenable and based on a crude and radically incomplete description of the subjective facts, and you end up being interested in metaphysics, from both sides, material and mental.

"The nice thing about the second question is that it is guaranteed to have a real answer, whether or not there is any such thing as free will."

Who guaranteed this?

The claim that every fact, such as someone's belief, has a definite cause, is a very metaphysical claim that Eliezer has not yet established.

Q: Why do I think there is something instead of nothing?
A: Because I think I'm experiencing, well, something.
Q: Why do I think I'm experiencing something?

A: uh... dang, the urge is overwhelming for me to say "Because I actually am experiencing something. That's the plainest fact of all, even though evidence in favor of it seems to be at the moment the least communicable sort of evidence of them all."

argh!

So, I see at least two possibilities here:

Either I'm profoundly confused about something, causing me to seem to think that I can't possibly be experiencing the thought of thinking I'm conscious without, well... experiencing it. (I think I experience the thought that I'm conscious? But it sure seems like I'm experiencing that thought... argh...) so either way there's some profound confusion going on in my head.

Or I'm confused partly because I'm trying to think of what sort of state of affairs could result in me seeming to think I'm conscious without actually being so (I'm not talking about philosophical zombies here, I mean from the inside), and am confused because it may really be as incoherent an idea as it seems to me.

The question of free will at least "feels" solvable. That it can be broken down into more basic things. These two (why is there something instead of nothing, and what's the nature of consciousness (as in "feels like from the inside"/qualia/etc)) are the Langford philosophical basilisk questions. May not have anything to do with the nature of the question itself, but seems to fry my brain any way I bang my head at it. :)

"Why do I think I can avoid literary effects and reason directly instead?"

I'm sure the meta-physicists will suggest something like the following: How do you know the causal chain you trace is meaningful? That is, you are resting our ability to see things on physics, and our ability to have a valid physics on being able to see things in the world. It is self-reinforcing, but requires axioms taken on faith or blind chance to start things off. So it is not really so different from meta-physics.

My reply would be to say, "Well, it works so far." And then get on with my life, and not worry about it.

This reminds me of "Why do I have qualia?" I've also asked "Why do I think I have qualia?" I then realized that that's still not quite enough. The right question (or at least one I have to answer first) is "What do I think 'qualia' are?" I'm still thoroughly confused by this question. You could try that with free will too.

"Why do I think I have free will?"

One answer might go like this: "But I don't think that. If I use W to denote the proposition that I have free will, I can think of no experiments whose results might provide evidence for or against W. I don't assign W a high subjective probability P(W). For any other proposition Y, I don't see any difference between P(Y|W) and P(Y|~W)."

"Nevertheless I choose to assume W because I often find it easier to estimate P(Y|W) than to directly estimate P(Y), especially when I can influence P(Y) by an act of 'will'."

A belief doesn't have to be useful to be valid; an assumption doesn't have to be true to be useful.
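The decomposition this commenter is leaning on is just the law of total probability: estimate P(Y|W) and P(Y|~W) separately, then combine. A toy numerical sketch (all numbers are made-up illustrations, not from the comment):

```python
# Law of total probability: P(Y) = P(Y|W)*P(W) + P(Y|~W)*P(~W).
# All numbers below are made up for illustration.
p_w = 0.5              # prior on W: "I have free will"
p_y_given_w = 0.9      # the easy-to-estimate conditional
p_y_given_not_w = 0.9  # the commenter claims these two are always equal

p_y = p_y_given_w * p_w + p_y_given_not_w * (1 - p_w)
print(p_y)  # 0.9 -- when P(Y|W) == P(Y|~W), W tells you nothing about Y
```

Which illustrates the commenter's point: if the two conditionals never differ, conditioning on W is a computational convenience, not evidence about W.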

Z. M., "my" monads aren't much like Leibniz's. For one thing, they interact. It could even be called a psychophysical identity theory, it's just that the mind is identified with a single elementary entity (one monad with many degrees of freedom) rather than with a spatial aggregate of elementary entities (a monadic self will still have "parts" in some sense, but they won't be spatial parts).

I suppose my insistence that physical ontology should be derived from phenomenological ontology, rather than vice versa, might also seem anti-materialist. (What I mean by this: In fundamental physics, the states of things are known by sets of numerical labels whose meaning is totally relative. All we know from the equation is that cause X turns state A into state B. It tells us nothing about state A in itself. But phenomenology offers us a direct glimpse of something, as Psy-Kosh struggles to express, a few comments back. At some level, it is what we have to work with and it is all we have to work with.)

But the main thing is to get away from the assumptions of the neurophysical apriori, because they are inhibiting and distorting what passes for phenomenology today. The description of consciousness is probably best pursued in the almost-solipsistic frame of mind described by Husserl, in which one suspends the question of whether things actually exist, and focuses on the states of consciousness which somehow constitute their appearance. Being able to entertain the possibility of idealism is very conducive to this.

If (let us say) the brain really does have a functionally consequential coherent quantum subsystem, a sharply defined physical entity which really-and-truly is the self, and whose states are literally our states of consciousness, I would expect materialistically pursued neuroscience to eventually figure it out, because neuroscience does include the search for correlations between subjective experience and the physical reality. (Though if it were true, it might save a few years to have the hypothesis already out there in the literature, rather than having to wait for it to become screamingly obvious.) The same may go for whatever other unorthodox possibilities I haven't thought of. It is true that I am ready to give up right now on all existing materialist theories of consciousness; they are manifestly unable to explain even what color is.

So scientifically, I make a noise in favor of metaphysics because I think that will get us to the truth faster. Unfortunately, I doubt I can do the argument justice in off-the-cuff blog comments. I will just have to make an effort and write something longer. The other thing that worries me is the conjunction of information technology with antimetaphysical theories of the mind. There's even less of a reality check there than in neuroscience, when it comes to the attribution of mental properties. But that's a whole other topic.

"Why do I believe I am conscious?" = "Why am I conscious?"

"Why do I think reality exists?"

We could well be in a matrix world, with everything an illusion. Or perhaps we arrived just a moment ago, complete with false implanted memories. (Sort of like the creationist explanation of the evidence for evolution.)

The assumption that "reality exists" is mere convenience. It's helpful in order to predict my future observations (or so my current memory suggests to me). Even if this is a matrix world, there is still the EXACT SAME theory of "reality", which would then be used to predict the future illusions that I'll notice.

"Why do I think time moves forward instead of backward?"

Basically, because of entropy.

There are actually two questions here: first, why does time (appear to) flow at all? And second, why does it flow only forwards?

If the whole universe were composed only of a single particle, say a photon, you couldn't even notice time passing. Every moment would be identical to every other moment. Time wouldn't even flow.

So first you need multiple entities, in order to have change. So now let's say you had the same single photon, bouncing forever between two parallel mirrors. Now time would flow (you could watch a movie of the photon, and notice changes from frame to frame). But it wouldn't particularly flow forwards or backwards. If someone gave you a movie of the bouncing photon, but it wasn't labeled which side was the start and which the end, you'd have no way to tell. There isn't really a "forward" or "backward" in time in that situation.
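That reversibility claim can be checked with a few lines of Python (a toy model of the bouncing particle, my own illustration): generate the "movie" of positions, reverse it, and note that the reversed movie obeys exactly the same rule, so nothing in the frames distinguishes forward from backward.

```python
# Toy model: one particle bouncing between mirrors at x = 0 and x = 10.
def trajectory(x, v, steps, lo=0, hi=10):
    frames = []
    for _ in range(steps):
        frames.append(x)
        x += v
        if x <= lo or x >= hi:     # elastic bounce off a mirror
            v = -v
            x = max(lo, min(hi, x))
    return frames

forward = trajectory(x=3, v=1, steps=40)
backward = list(reversed(forward))

# Both films obey the same rule: every frame-to-frame change has the
# same magnitude, with reflections at the mirrors. Nothing labels one
# direction as "forward".
fwd_steps = {abs(b - a) for a, b in zip(forward, forward[1:])}
bwd_steps = {abs(b - a) for a, b in zip(backward, backward[1:])}
print(fwd_steps == bwd_steps)  # True
```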

So what it takes is a complex universe, with order and chaos. And then it's just a matter of probabilities. Eggs are vastly more likely to scramble than to descramble; shattered cups rarely bounce off the floor and spontaneously reassemble; etc. The laws of physics don't prevent these things. They're just exceedingly unlikely. So if you had an unlabeled film, you could tell which side was the "past" and which the "future", since in one direction every action is extremely probable, while in the other direction every action is exceedingly unlikely.
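The size of that probability gap is easy to put numbers on with a toy model (my own illustration, not from the comment): N particles in a box, each in the left or right half, where every assignment is one microstate.

```python
from math import comb

# Toy "gas in a box": N distinguishable particles, each in the left
# or right half. Count microstates per macrostate.
N = 50

ordered = comb(N, 0)        # all particles on the left: 1 microstate
mixed = comb(N, N // 2)     # an even split: ~1.3e14 microstates

# A film running toward the mixed macrostate heads toward the
# overwhelmingly common configurations; the reversed film heads toward
# a roughly one-in-10^14 fluke. That lopsided ratio is the arrow of time.
print(mixed // ordered)
```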

So, in our normal, macroscopic world, we imagine an arrow of time, a past we can never change, and a future that can be altered by our free will.

Relativity, though, tells us that the REAL universe doesn't have absolute reference frames, that time passes differently in different frames, that it doesn't even make sense to ask whether two events separated in space are simultaneous or not, that time doesn't really mean anything "before" the big bang or inside a black hole, and that really the whole evolution of the universe is a single fixed state vector of space-time, in which time never flows at all.

But a (false) concept of linear time, with a fixed past and a changeable future, helps us quickly make useful decisions in our everyday lives.