How do you notice when you are ignorant of necessary alternative hypotheses?

So I just wound up in a debate with someone over on Reddit about the value of conventional academic philosophy.  He linked me to a book review, in which both the review and the book are absolutely godawful.  That is, the author (and the reviewer following him) starts with ontological monism (the universe only contains a single kind of Stuff: mass-energy), adds in the experience of consciousness, reasons deftly that emergence is a load of crap... and then arrives at the conclusion of panpsychism.

WAIT HOLD ON, DON'T FLAME YET!

Of course panpsychism is bunk.  I would be embarrassed to be caught upholding it, given the evidence I currently have, but what I want to talk about is the logic being followed.

1) The universe is a unified, consistent whole.  Good!

2) The universe contains the experience/existence of consciousness.  Easily observable.

3) If consciousness exists, something in the universe must cause or give rise to consciousness.  Good reasoning!

4) "Emergence" is a non-explanation, so that can't be it.  Good!

5) Therefore, whatever stuff the unified universe is made of must be giving rise to consciousness in a nonemergent way.

6) Therefore, the stuff must be innately "mindy".

What went wrong in steps (5) and (6)?  The man was actually reasoning more-or-less correctly!  Given the universe he lived in, and the impossibility of emergence, he reallocated his probability mass to the remaining answer.  When he had eliminated the impossible, whatever remained, however low its prior, must be true.

The problem was, he eliminated the impossible, but left open a vast space of possible hypotheses that he didn't know about (but which we do): the most common of these is the computational theory of mind and consciousness, which says that we are made of cognitive algorithms.  A Solomonoff Inducer can just go on to the next length of bit-strings describing Turing machines, but we can't.

Now, I can spot the flaw in the reasoning here.  What frightens me is: what if I'm presented with some similar argument, and I can't spot the flaw?  What if, instead, I just neatly and stupidly reallocate my belief to what seems to me to be the only available alternative, while failing to go out and look for alternatives I don't already know about?  Notably, expected evidence is conserved, but expecting to locate new hypotheses means I should be reducing my certainty about all currently-available hypotheses now, reserving some probability mass to divide between the new possibilities.
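A minimal sketch of that reallocation, with made-up numbers (the hypothesis names and weights here are purely illustrative, not anyone's actual credences):

```python
# Toy illustration: reserving probability mass now for hypotheses
# I expect to discover later.  All numbers are invented.
known = {"emergence": 0.7, "panpsychism": 0.3}

# If I expect that, with probability 0.4, the true hypothesis is one
# I haven't thought of yet, I should scale down every known hypothesis
# now rather than waiting for the new alternative to show up.
p_unknown = 0.4
beliefs = {h: p * (1 - p_unknown) for h, p in known.items()}
beliefs["something I haven't thought of"] = p_unknown

# Total probability is still 1; each known hypothesis lost mass
# proportionally to make room for the catchall.
assert abs(sum(beliefs.values()) - 1.0) < 1e-9
```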

If you can notice when you're confused, how do you notice when you're ignorant?

Comments


I think the error is actually (4). "Emergence" is a non-explanation because it's way too vague; it encompasses many different possible explanations and doesn't narrow things down enough. Because it's a non-explanation in this particular way, you cannot take its inverse. Imagine Sherlock Holmes saying: "'Someone killed him' is a non-explanation, so that can't be it."

Strawson says what he means by emergence, in order to reject it, so this is a side issue.

The problem was, he eliminated the impossible, but left open a vast space of possible hypotheses that he didn't know about (but which we do): the most common of these is the computational theory of mind and consciousness, which says that we are made of cognitive algorithms.

Wait, are you suggesting that the reviewer (Jerry Fodor) is unaware of the computational theory of mind? Unlikely, given that he is one of its progenitors. From the wikipedia article on the computational theory:

The theory was proposed in its modern form by Hilary Putnam in 1961, and developed by the MIT philosopher and cognitive scientist (and Putnam's PhD student) Jerry Fodor in the 1960s, 1970s and 1980s

Yep. Eli failed to consider the hypothesis that the philosophers who reject CTM do so because they have objections to it, rather than because they have never heard of it.

If you can notice when you're confused, how do you notice when you're ignorant?

Have you noticed when YOU are confused?

1) The universe is a unified, consistent whole. Good!

"Good"? What does the statement even mean? What would be an alternative? non-unified whole? unified parts? non-unified bits and pieces? How would you tell?

2) The universe contains the experience/existence of consciousness. Easily observable.

Depends on your definition of consciousness. Is it one of the qualia? An outcome on the mirror test? Something else? If it's a quale, do qualia exist in the same way physical things do? The statement above is meaningless without specifying the details.

3) If consciousness exists, something in the universe must cause or give rise to consciousness. Good reasoning!

Eh, bad reasoning. Depends on the definition of "cause", which is more logic than physics. Causality in physics is merely a property of a certain set of the equations of motion, which is probably not what is meant in the above quote.

4) "Emergence" is a non-explanation, so that can't be it. Good!

Bad. Emergence "as a process whereby larger entities, patterns, and regularities arise through interactions among smaller or simpler entities that themselves do not exhibit such properties" does not necessarily imply irreducibility, so even if we can reduce humans to quarks, humans can have properties which quarks don't. Anyway, I grant this one if it means "everything is reducible" and nothing more. Of course, the reduced constituents are not required to have all the properties of the whole.

5) Therefore, whatever stuff the unified universe is made of must be giving rise to consciousness in a nonemergent way.

Presumably this means "in a non-dualist way", i.e. a complex enough optimizer is not granted consciousness by some irreducible entity.

6) Therefore, the stuff must be innately "mindy".

To argue against it, you don't need "the computational theory of mind and consciousness", just note that, say, atoms are not innately solid or liquid, so some properties of complex systems are meaningless when applied to its constituents.

Of course, maybe I am the one who is confused and not noticing it...

I don't think you should so much notice that you're ignorant as assume you're ignorant. You always assign some probability to "something I haven't thought of". You do need to notice when you're making an implicit assumption that you've thought of everything. And you need to figure out how much probability to assign to things you haven't thought of.

I don't think there's any good theoretical way to figure out how likely it is that the answer is something that you haven't thought of. You just have to practice. I'm not sure how you can practice.

Once you come to a conclusion, try to apply it to make a prediction or even just see whether it could've been used to predict some previously known things. If not, you're still ignorant.

Of course this is hard in a field where almost none of your interlocutors consider making predictions to be a useful thing.

I sort of just always assume that my current hypotheses are a waypoint on the road to greater understanding. I'm not fully confident even in the things I do know, as there's always the possibility of unknown unknowns.

I would like the reasons why it seems to be false to be made explicit. In particular, I fail to see how the computational account would count against it. You can compute with levers, transistors, and a large array of different things. And actually there is nothing you can't compute with. Thus you can compute with anything. So anything is "computy", which is another way of saying it's "mindy". But of course, the fact that everything can be used in computing doesn't mean the computations are of equal value/complexity. Thus there is a genuine difference between rocks and people. But it still allows that there is "what it feels like to be a rock inside". Granted, it probably isn't anything grand or interesting. However, it would be really weird if there was a clear division where "feeling" began and "cold" motion stopped.

I would like to note that an abstraction where we disregard "feelings" and focus on technical public impact with the environment can lead to a "cold" conception of the world. However, when used as a worldview (that is, outside of tracking positions and mechanics), it is quite erroneous. In an extreme extrapolation, you are just a robot and should be "cold". This kind of non-psychism has the loudest counterevidence there is available: you do feel (crossing fingers that you are not a zombie). Whether psychism extends beyond you is an open question. If you can get around the problem of other minds, that is, the existence of psychisms like you, why would you assume that there are only feelers like you? That is, there is an analogous problem of the mindedness of others: given that you could not directly experience the feelings of rocks, why would you assume they don't have them?

If the answer is purely that you are used to abstracting that facet of them away because of practical needs, that doesn't answer the theoretical question. It is the same way a psychopath would treat fully fledged people: to him it doesn't matter what people are on the inside, only what he can do with them. In that way, the "cold" and "feely" ways of relating to your surroundings don't disagree about what the mechanics are. But why insist that the "feely" way is false or inferior?

Anything is potentially computy, which is analogous to panPROTOpsychism.

You should probably be skeptical when presented with binary hypotheses (either by someone else or by default). Say in this example that H1 is "emergence". The alternative for H1 isn't "mind-stuff" but simply ~H1. This includes the possibility of "mind-stuff" but also any alternatives to both emergence and mindstuff. Maybe a good rule to follow would be to assume and account for your ignorance from the beginning instead of trying to notice it.

One way to make this explicit might be to always have at least three hypotheses: one in favor, one for an alternative, and a catchall for ignorance; the catchall reflecting how little you know about the subject. The less you know about the subject, the larger your bucket.

Maybe in this case, your ignorance allocation (i.e. prior probability for ignorance) is 50%. This would leave 50% to share between the emergence hypothesis and the mindstuff hypothesis. I personally think that the mindstuff hypothesis is pretty close to zero, so the remainder would be in favor of emergence, even if it's wrong. In this case, "emergence" is asserted to be a non-explanation, but this could probably be demonstrated in some way, like sharing likelihood ratios; that might even show that "mindstuff" is an equally vapid explanation for consciousness.
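As a sketch of that allocation (using the comment's own numbers, plus an illustrative Bayesian update), a non-explanation is one that assigns the same likelihood to any evidence, so updating on evidence can never let it gain ground on the catchall:

```python
# Prior allocation from the comment above: 50% to an ignorance
# catchall, nearly nothing to mindstuff, the rest to emergence.
prior = {"ignorance": 0.50, "mindstuff": 0.01, "emergence": 0.49}

def update(prior, likelihood):
    """One Bayesian update: weight each hypothesis by the probability
    it assigns to the observed evidence, then renormalize."""
    post = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# A vacuous "explanation" is equally compatible with anything: it
# assigns the evidence the same likelihood as every rival, so the
# update moves nothing.
flat = {"ignorance": 0.5, "mindstuff": 0.5, "emergence": 0.5}
posterior = update(prior, flat)
assert all(abs(posterior[h] - prior[h]) < 1e-9 for h in prior)
```

If sharing likelihood ratios showed that "emergence" and "mindstuff" both behave like `flat` here, that would demonstrate they are equally vapid as explanations.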

If you can notice when you're confused, how do you notice when you're ignorant?

I think one tricky thing about this question is there are cases where I am ALWAYS ignorant, and the question to ask instead is, is my ignorance relevant? I mean, I tried to give a short example of this with a simple question, below, but ironically, I was ignorant about how many different ways you could be ignorant about something until I started trying to count them, and I'm likely still ignorant about it now.


For instance, take the question: What is my spouse's hair color?

Presumably, a good deal of people reading this are somewhat ignorant about that.

On the other hand, they probably aren't as ignorant as a blind visiting interstellar Alien, Samplix, who understands English but nothing about color, although Samplix has also been given an explanation of a hexadecimal color chart and has decided to guess that the RGB value of my spouse's hair is #66FF00.

But you could also have another blind alien, Sampliy, who wasn't even given a color chart, doesn't understand which words are colors and which words aren't, and so goes to roughly the middle of a computer English/Interstellar dictionary and guesses "Mouse?"

Or another visiting Alien, Sampliz, who doesn't understand English and so responds with '%@%$^!'

And even if you know my spouse has black hair, you could get more specific than that:

For instance, a hair-analyzing computer might be able to determine that my spouse has approximately 300,000 hairs, and 99% of them happen to be the hexadecimal shade #001010, but another, more specific hair-analyzing computer might say that my spouse has 314,453 hairs, and 296,415 of them are hexadecimal shade #001010, and 10,844 of them are hexadecimal shade #001011, and...

And even if you were standing there with that report from the second computer, saying "Okay, it finished its report, and I have this printout from an hour ago, so I am DEFINITELY not ignorant about your spouse's hair color."

Well, what if I told you my spouse just came back from a Hair salon?


The above list isn't exhaustive, but I think it establishes the general point. My spouse's hair color seems like the kind of question which someone could be ignorant about in fewer ways than something as confusing as consciousness, and yet... even spousal hair color is complicated.

I think there's a relevant difference here between being ignorant of actual data that you are aware exists (e.g. the color of hair), and being ignorant of the existence of alternative theories or models (e.g. possible alternative meanings of the word "color").

That seemed to make sense to me at first, but I'm having a hard time actually finding a good dividing line to show the relevant difference, particularly since what seems like it can be model ignorance for one question can be data ignorance for another question.

For instance, here are possible statements about being ignorant about the question. "What is my spouse's hair color?"

1: "I don't know your spouse's hair color."

2: "I don't know if your spouse has hair."

In this context, 1 seems like data ignorance, and 2 would seem like model ignorance.

But given a different question "Does my spouse have hair?"

2 is data ignorance, and 1 doesn't seem to be a well phrased response.

And there appear to be multiple levels of this as well: For instance, someone might not know whether or not I have a spouse.

What is the best way to handle this? Is it to simply try to keep track of the number of assumptions you are making at any given time? That seems like it might help, since in general, models are defined by certain assumptions.

What frightens me is: what if I'm presented with some similar argument, and I can't spot the flaw?

Having recognized this danger, you should probably be more skeptical of verbal arguments.

Rejecting hypotheses can only bring you to a state where you don't know what's going on. It's not constructive in a way where it brings you to the conclusion that one of the alternatives is true.

It would probably make sense to say "I don't know" over a wider array of questions.

If you can notice when you're confused, how do you notice when you're ignorant?

You don't need to notice that you're ignorant if you already know that you are.

One of the structural commitments of Korzybski (of "The Map is not the Territory" fame) is that abstractions always leave out some facts. My concept of a thing is not the thing itself: the map is not the territory. This consciousness of abstraction entails a consciousness of ignorance.

When he had eliminated the impossible, whatever remained, however low its prior, must be true.

Eliminated by his calculations, with his priors, with his abstractions. What's the probability that those are wrong? What's the probability that he hadn't taken everything into account? And then, what's the chance that he hadn't been thorough enough in his enumeration of "whatever remained"?

Jaynes has a nice example of handling "whatever remained": putting a "something else" theory into the analysis and assigning some small probability to it.
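A toy version of that move (all numbers invented for illustration): with a "something else" hypothesis in the analysis, "eliminating" a named hypothesis no longer forces all the belief onto one dubious survivor:

```python
# Made-up priors: two named hypotheses plus a Jaynes-style
# "something else" catchall.
prior = {"emergence": 0.70, "panpsychism": 0.20, "something_else": 0.10}

# Suppose some argument effectively eliminates emergence (likelihood
# near zero) while saying nothing about the other two.
likelihood = {"emergence": 1e-6, "panpsychism": 0.5, "something_else": 0.5}

post = {h: prior[h] * likelihood[h] for h in prior}
z = sum(post.values())
post = {h: p / z for h, p in post.items()}

# Without the catchall, panpsychism would inherit essentially all of
# the eliminated mass; with it, "something else" keeps about a third.
assert post["something_else"] > 0.3
```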

Also, like Korzybski, Jaynes encourages a consciousness of abstraction by conditioning all probabilities on background knowledge I, as in P(X | a_1, a_2, ..., I). There's my background knowledge I, staring back at me. What if it's incorrect?

So there are two main failures in these proof by contradiction scenarios. The first is to fail to include a valid alternative. The second is that your I, your model and assumptions, suck. They are wrong, or worse, not even wrong.

Philosophers aren't actually ignorant of computational theories of mind. Some of them reject CTM because it seems to have no more ability to address qualia/hard-problem issues than materialism (in fact, one can robustly argue that computationalism doesn't add anything to materialism in terms of powers or properties, and that CTM is therefore less able to explain qualia than straight materialism).

So, before LW starts shouting about the stupidity of philosophers, LW needs to say something about the Hard Problem.

At the moment there isn't even a consensus.

ETA: having re-read Fodor's review, I notice there are frequent references to hard-problem issues, qualia, conscious experience, etc. I am not sure whether Eli thinks they're unimportant, or thinks the CTM explains them, or what.

panpsychism is bunk.

Panpsychism is the least defensible of a set of related concepts.

I've sometimes found it productive to explicitly add "the hypothesis that hasn't occurred to me" to the list. To remind me there is (at least) one.

If you can notice when you're confused, how do you notice when you're ignorant?

I actually have a specific feeling associated with everything clicking together. If I don't have that feeling, my model does not perfectly explain everything which means there's something I'm not considering. In that case, I go looking for alternative hypotheses.