I found this theory pretty interesting, and it reminded me of Gary Drescher's explanation of consciousness in Good and Real:
How the light gets out
Consciousness is the ‘hard problem’, the mystery that confounds scientists and philosophers. Has a new theory cracked it?
[...]
Attention requires control. In the modern study of robotics there is something called control theory, and it teaches us that, if a machine such as a brain is to control something, it helps to have an internal model of that thing. Think of a military general with his model armies arrayed on a map: they provide a simple but useful representation — not always perfectly accurate, but close enough to help formulate strategy. Likewise, to control its own state of attention, the brain needs a constantly updated simulation or model of that state. Like the general’s toy armies, the model will be schematic and short on detail. The brain will attribute a property to itself and that property will be a simplified proxy for attention. It won’t be precisely accurate, but it will convey useful information. What exactly is that property? When it is paying attention to thing X, we know that the brain usually attributes an experience of X to itself — the property of being conscious, or aware, of something. Why? Because that attribution helps to keep track of the ever-changing focus of attention.
I call this the ‘attention schema theory’. It has a very simple idea at its heart: that consciousness is a schematic model of one’s state of attention. Early in evolution, perhaps hundreds of millions of years ago, brains evolved a specific set of computations to construct that model. At that point, ‘I am aware of X’ entered their repertoire of possible computations.
- Princeton neuroscientist Michael Graziano, writing in Aeon Magazine.
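The control-theoretic idea in the quoted passage can be made concrete with a toy sketch. The class below (my own illustrative construction, not Graziano's model; all names are assumptions) separates a detailed "real" attentional state from a deliberately coarse self-model, and drives verbal report from the model rather than from the raw state:

```python
# Toy sketch of an "attention schema": the system keeps a simplified,
# low-detail internal model of its own attentional state, and its
# self-reports come from that model, not from the full state.
# This is an illustration of the general idea only.

class AttentionSchemaAgent:
    def __init__(self):
        self.attention_focus = None  # the actual, detailed attentional state
        self.schema = None           # the coarse self-model (a proxy, short on detail)

    def attend(self, stimulus, salience):
        """Shift real attention; update the schema with far less detail."""
        self.attention_focus = {"object": stimulus, "salience": salience}
        # The schema records only *what* is attended, discarding the rest --
        # a simplified proxy, like the general's toy armies on the map.
        self.schema = stimulus

    def report(self):
        """Self-report is generated from the schema, not the raw state."""
        if self.schema is None:
            return "I am not aware of anything."
        return f"I am aware of {self.schema}."

agent = AttentionSchemaAgent()
agent.attend("a red apple", salience=0.9)
print(agent.report())  # -> I am aware of a red apple.
```

The point of the separation is that the report can be informative yet imprecise: the schema conveys the focus of attention without reproducing its full detail, which is the sense in which the attribution "I am aware of X" is a useful but schematic proxy.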
I think "attributes an experience of X to itself" is being used to mean "is conscious of experiencing X." Stated this way, the role of attention doesn't seem to be either tautological or necessarily the product of a selection fallacy. As you pointed out, brains do pay attention to things that are not consciously experienced, which I think is why the original said 'usually' rather than 'always'.
Do you not agree that any explanation sufficient to account for why we talk about consciousness necessarily entails an explanation of consciousness itself? Otherwise, it seems you'd have to believe that the cause of our talking about conscious experience is something entirely unrelated to our actual conscious experience.
Sort of -- but only on rather trivial grounds: if talk of conscious experience is caused by conscious experience, then an explanation of the talk must explain how it is so caused; and for the explanation to go beyond merely asserting that causation, it must contain some sort of explanation of consciousness.
But Graziano's explanation is not of this nature. He explains talk of conscious experience by the existence of models within the brain. One cannot argue that, because this is an explanation of the talk, it must be an explanation of consciousness; it may simply be a wrong explanation of the talk.