I found this theory pretty interesting, and it reminded me of Gary Drescher's explanation of consciousness in Good and Real:
How the light gets out
Consciousness is the ‘hard problem’, the mystery that confounds scientists and philosophers. Has a new theory cracked it?
[...]
Attention requires control. In the modern study of robotics there is something called control theory, and it teaches us that, if a machine such as a brain is to control something, it helps to have an internal model of that thing. Think of a military general with his model armies arrayed on a map: they provide a simple but useful representation — not always perfectly accurate, but close enough to help formulate strategy. Likewise, to control its own state of attention, the brain needs a constantly updated simulation or model of that state. Like the general’s toy armies, the model will be schematic and short on detail. The brain will attribute a property to itself and that property will be a simplified proxy for attention. It won’t be precisely accurate, but it will convey useful information. What exactly is that property? When it is paying attention to thing X, we know that the brain usually attributes an experience of X to itself — the property of being conscious, or aware, of something. Why? Because that attribution helps to keep track of the ever-changing focus of attention.
I call this the ‘attention schema theory’. It has a very simple idea at its heart: that consciousness is a schematic model of one’s state of attention. Early in evolution, perhaps hundreds of millions of years ago, brains evolved a specific set of computations to construct that model. At that point, ‘I am aware of X’ entered their repertoire of possible computations.
- Princeton neuroscientist Michael Graziano, writing in Aeon Magazine.
This is either a selection fallacy or a tautology. How do we know what the brain is paying attention to outside of consciousness? Or is non-conscious attention ruled out by definition?
In fact, my brain pays attention to a great many things that I do not experience. I know this because there are specific examples. One is motor control, which mostly happens inside the brain but outside of consciousness. Touch your finger to your nose. You can do that, but how did you do it?
...
According to this theory, every model-based controller is conscious. So we've been building artificial consciousnesses for forty years. They even talk to us through their control panels. The Swiss have officially put into law the concept of the dignity of plants; should we add the dignity of machines?
If theories that the cerebellum uses model-based control methods are correct, then it follows from Graziano's view that cerebellums are also conscious. This, however, is not our experience, and experience is what he is supposedly trying to explain.
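To make the objection concrete, here is a minimal sketch of what control engineers mean by a model-based controller: a loop that maintains an internal forward model of the system it controls, corrects that model against measurements, and issues commands based on the model's state rather than on raw feedback alone. All the dynamics and gains below are invented for illustration; the point is only that nothing in this structure looks like a promising sufficient condition for experience.

```python
# Minimal sketch of a model-based controller (hypothetical dynamics
# and gains). The controller keeps an internal model of the plant,
# nudges that model toward each new measurement, and computes its
# command from the model -- exactly the "internal model of the thing
# controlled" that control theory recommends.

def simulate(steps=500, setpoint=1.0, dt=0.1):
    x = 0.0            # the real system ("plant"): a leaky integrator
    model_x = 0.0      # the controller's internal model of that system
    model_leak = -0.4  # deliberately wrong: models are "schematic and
                       # short on detail" (the true leak is -0.5)
    for _ in range(steps):
        # Correct the model toward the latest measurement (crude observer).
        model_x += 0.5 * (x - model_x)
        # Choose a command from the MODEL's state, not the raw measurement.
        u = 2.0 * (setpoint - model_x)
        # The model predicts forward; the real plant actually moves.
        model_x += dt * (model_leak * model_x + u)
        x += dt * (-0.5 * x + u)
    return x

print(round(simulate(), 3))  # settles near (not at) the setpoint
```

Controllers of roughly this shape have run in thermostats, autopilots, and chemical plants for decades; on a reading of the theory under which maintaining a schematic self-model suffices for awareness, each of them would qualify.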
Now, Graziano is, as is usual with explanations of consciousness, not actually trying to explain conscious experience. He starts out claiming to address that, but then makes the standard bait-and-switch:
He is explaining why we talk about having conscious experience, while ignoring conscious experience itself.
I think "attributes an experience of X to itself" is being used to mean "is conscious of experiencing." Stated this way, the role of attention doesn't seem to be either tautological or necessarily a product of a selection fallacy. As you pointed out, brains do pay attention to things that are not consciously experienced, so I think this is why the original said 'usually' rather than 'always'.
Do you not agree that any explanation that is sufficient to explain why we talk about consciousness necessarily entails an explanation of consciousness itself? Otherwise, it seems you'd have to believe the cause of us talking about conscious experience is something entirely unrelated to our actual conscious experience.