
How to Convince Me That 2 + 2 = 3

In "What is Evidence?", I wrote:

This is why rationalists put such a heavy premium on the paradoxical-seeming claim that a belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise.  If your retina ended up in the same state regardless of what light entered it, you would be blind...  Hence the phrase, "blind faith".  If what you believe doesn't depend on what you see, you've been blinded as effectively as by poking out your eyeballs.

Cihan Baran replied:

I can not conceive of a situation that would make 2+2 = 4 false. Perhaps for that reason, my belief in 2+2=4 is unconditional.

I admit, I cannot conceive of a "situation" that would make 2 + 2 = 4 false.  (There are redefinitions, but those are not "situations", and then you're no longer talking about 2, 4, =, or +.)  But that doesn't make my belief unconditional.  I find it quite easy to imagine a situation which would convince me that 2 + 2 = 3.

Suppose I got up one morning, and took out two earplugs, and set them down next to two other earplugs on my nighttable, and noticed that there were now three earplugs, without any earplugs having appeared or disappeared—in contrast to my stored memory that 2 + 2 was supposed to equal 4.  Moreover, when I visualized the process in my own mind, it seemed that making XX and XX come out to XXXX required an extra X to appear from nowhere, and was, moreover, inconsistent with other arithmetic I visualized, since subtracting XX from XXX left XX, but subtracting XX from XXXX left XXX.  This would conflict with my stored memory that 3 - 2 = 1, but memory would be absurd in the face of physical and mental confirmation that XXX - XX = XX.
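The earplug test here is nothing more than physical counting checked against remembered arithmetic. As a minimal sketch of that check (Python is my own choice of notation; the post itself uses none):

```python
# Simulate the earplug experiment: combine two groups of two objects,
# count the combined pile, and compare against the remembered sum.
group_a = ["earplug", "earplug"]
group_b = ["earplug", "earplug"]

observed = len(group_a + group_b)   # physical count of the combined pile
remembered = 2 + 2                  # what stored arithmetic says

print(observed, remembered, observed == remembered)  # 4 4 True
```

In the scenario described, `observed` would come out 3, the comparison would fail, and the question would become which side of it to trust.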

I would also check a pocket calculator, Google, and perhaps my copy of 1984 where Winston writes that "Freedom is the freedom to say two plus two equals three."  All of these would naturally show that the rest of the world agreed with my current visualization, and disagreed with my memory, that 2 + 2 = 3.

How could I possibly have ever been so deluded as to believe that 2 + 2 = 4?  Two explanations would come to mind:  First, a neurological fault (possibly caused by a sneeze) had made all the additive sums in my stored memory go up by one.  Second, someone was messing with me, by hypnosis or by my being a computer simulation.  In the second case, I would think it more likely that they had messed with my arithmetic recall than that 2 + 2 actually equalled 4.  Neither of these plausible-sounding explanations would prevent me from noticing that I was very, very, very confused.

What would convince me that 2 + 2 = 3, in other words, is exactly the same kind of evidence that currently convinces me that 2 + 2 = 4:  The evidential crossfire of physical observation, mental visualization, and social agreement.

There was a time when I had no idea that 2 + 2 = 4.  I did not arrive at this new belief by random processes—then there would have been no particular reason for my brain to end up storing "2 + 2 = 4" instead of "2 + 2 = 7".  The fact that my brain stores an answer surprisingly similar to what happens when I lay down two earplugs alongside two earplugs, calls forth an explanation of what entanglement produces this strange mirroring of mind and reality.

There's really only two possibilities, for a belief of fact—either the belief got there via a mind-reality entangling process, or not.  If not, the belief can't be correct except by coincidence.  For beliefs with the slightest shred of internal complexity (requiring a computer program of more than 10 bits to simulate), the space of possibilities is large enough that coincidence vanishes.
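The "10 bits" remark can be made concrete with a back-of-the-envelope calculation: a belief that takes even 10 bits to specify is one point in a space of 2^10 equally simple alternatives, so a non-entangled process lands on the correct one only about 0.1% of the time. Purely illustrative:

```python
# A belief requiring n bits to specify is one point among 2**n
# equally simple alternatives; the chance of hitting it by luck alone:
n_bits = 10
alternatives = 2 ** n_bits
chance_by_coincidence = 1 / alternatives

print(alternatives)           # 1024
print(chance_by_coincidence)  # 0.0009765625
```

And the odds only get worse as the belief's complexity grows, which is the sense in which coincidence "vanishes."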

Unconditional facts are not the same as unconditional beliefs.  If entangled evidence convinces me that a fact is unconditional, this doesn't mean I always believed in the fact without need of entangled evidence.

I believe that 2 + 2 = 4, and I find it quite easy to conceive of a situation which would convince me that 2 + 2 = 3.  Namely, the same sort of situation that currently convinces me that 2 + 2 = 4.  Thus I do not fear that I am a victim of blind faith.

If there are any Christians in the audience who know Bayes's Theorem (no numerophobes, please) might I inquire of you what situation would convince you of the truth of Islam?  Presumably it would be the same sort of situation causally responsible for producing your current belief in Christianity:  We would push you screaming out of the uterus of a Muslim woman, and have you raised by Muslim parents who continually told you that it is good to believe unconditionally in Islam.  Or is there more to it than that?  If so, what situation would convince you of Islam, or at least, non-Christianity?

 

Part of the Overly Convenient Excuses subsequence of How To Actually Change Your Mind

Next post: "Infinite Certainty"

Previous post: "Absolute Authority"

Comments


"To apply the same reasoning the other way, if you aren't a Christian, what would be a situation which would convince you of the truth of Christianity?"

-And Jesus said unto them, Because of your unbelief: for verily I say unto you, If ye have faith as a grain of mustard seed, ye shall say unto this mountain, Remove hence to yonder place; and it shall remove; and nothing shall be impossible unto you. - Matthew 17:20

If mountains moved when Christians told them to, every time, and no one else could effectively command mountains to move, I think most of us non-believers would start going to church.

Alternatively, if the world looked like it was designed and regulated by a loving being, it would help. That might not promote Christianity specifically, but it would be a much better start than what we actually see.

At any rate, if the former is true, 2+2=4 is outside the province of empirical science, and applying empirical reasoning to evaluate its 'truth' is wrong.

When I imagine putting two apples next to two apples, I can predict what will actually happen when I put two earplugs next to two earplugs, and indeed, my mind can store the result in a generalized fashion which makes predictions in many specific instances. If you do not call this useful abstract belief "2 + 2 = 4", I should like to know what you call it. If the belief is outside the province of empirical science, I would like to know why it makes such good predictions.

To apply the same reasoning the other way, if you aren't a Christian, what would be a situation which would convince you of the truth of Christianity?

You'd have to fix all the problems in belief, one by one, by reversing the evidence that originally convinced me of the beliefs' negations. If the Sun stopped in the sky for a day, and then Earth's rotation restarted without apparent damage, that would convince me there was one heck of a powerful entity in the neighborhood. It wouldn't show the entity was God, which would be much more complicated, but it's an example of how one small piece of my model could be flipped from the negation of Christianity (in that facet) to the non-negation.

Getting all the pieces of the factual model (including the parts I was previously convinced were logically self-contradictory) to align with Christianity's factual model, would still leave all the ethical problems. So the actual end result would be to convince me that the universe was in the hands of a monstrously insane and vicious God. But then there does not need to be any observable situation which convinces me that it is morally acceptable to murder the first-born children of Egyptians - morality does not come from environmental entanglement.

If you do not call this useful abstract belief "2 + 2 = 4", I should like to know what you call it.

I call it "2+2=4 is a useful model for what happens to the number of earplugs in a place when I put two earplugs beside two other earplugs". Which is a special case of the theory "arithmetic is a useful model for numbers of earplugs under some operations (including but not limited to adding and removing)".

If the belief is outside the province of empirical science, I would like to know why it makes such good predictions.

The mathematical claim "2+2=4" makes no predictions about the physical world. For that you need a physical theory. 2+2=4 would be true in number theory even if your apples or earplugs worked in some completely different manner.

I hate to break it to you, but if setting two things beside two other things didn't yield four things, then number theory would never have contrived to say so.

Numbers were invented to count things, that is their purpose. The first numbers were simple scratches used as tally marks circa 35,000 BC. The way the counts add up was derived from the way physical objects add up when grouped together. The only way to change the way numbers work is to change the way physical objects work when grouped together. Physical reality is the basis for numbers, so to change number theory you must first show that it is inconsistent with reality.

Thus numbers have a definite relation to the physical world. Number theory grew out of this, and if putting two objects next to two other objects had yielded only three objects when numbers were invented tens of thousands of years ago, then number theory would have had to reflect that fact or it would never have been used. Suggesting that 2+2=4 would be completely absurd, and number theorists would laugh in your face at the suggestion. There would, in fact, be a logical proof that 2+2=3 (much like there is a logical proof that 2+2=4 in number theory now).

All of mathematics is, in reality, nothing more than extremely advanced counting. If it were not related to the physical world, there would be no reason for it to exist. It follows rules first derived from the physical world, even if the current principles of mathematics have been extrapolated far beyond the bounds of the strictly physical. I think people lose sight of this far too easily (or worse, never recognize it in the first place).

Mathematics is so firmly grounded in physical reality that when observations don't line up with what our math tells us, we must change our understanding of reality, not of math. This is because math is inextricably tied to reality, not separate from it.

Numbers were invented to count things, that is their purpose. The first numbers were simple scratches used as tally marks circa 35,000 BC.

Verbal expressions almost certainly predate physical notations. Unfortunately the echoes don't last quite that long.

In fact I once had this sort of mathematical experience.

Somehow, while memorizing tables of arithmetic in the first grade, I learned that 11 - 6 = 7. This equation didn't come up very often in elementary school arithmetic, and even more seldom in high school algebra, and so I seldom got any math questions marked wrong. Then one day at university, I received back a Math 300 homework assignment on which I'd casually asserted that 11 - 6 = 7. My TA had drawn a red circle around the statement, punctuating it with three question marks and the loss of a single point.

I was confused. There was nothing wrong with 11 - 6 = 7. Why would my TA have deducted a point? Everyone knew that 11 - 6 = 7, because it was just the reverse of 7 + 6 = wait-a-minute-here.

Pen. Paper. I grabbed eleven coins and carefully counted six of them away. There were not seven of them left. I started writing down remembered subtraction problems. 11 - 4 = 7. 11 - 5 = 6. 11 - 6 = 7. 11 - 7 = 4. One of these sums was clearly not like the others. I tried addition, and found that both 7 + 6 = 13 and 6 + 7 = 13.
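The pen-and-coins check in this story amounts to recomputing each remembered fact and flagging the one that disagrees. A sketch of that audit, using the remembered values from the story (bad entry included):

```python
# Remembered subtraction facts from the story, keyed by (minuend, subtrahend),
# checked against actual arithmetic.
remembered = {(11, 4): 7, (11, 5): 6, (11, 6): 7, (11, 7): 4}

for (a, b), answer in remembered.items():
    actual = a - b
    status = "ok" if actual == answer else f"WRONG (should be {actual})"
    print(f"{a} - {b} = {answer}: {status}")
# Only 11 - 6 = 7 is flagged; the correct answer is 5.
```

One stored entry fails the recomputation, exactly the "one of these is not like the others" experience described.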

The evidence was overwhelming. I was convinced. Confused, yes—fascinated by where my error could have come from, and how I could have held onto it so long—but convinced. I set to work memorizing 11 - 6 = 5 instead.

It didn't entirely take. Twenty years later, the equation 11 - 6 = 7 still feels so right and familiar and uncontroversial that I've had to memorize 11 - 6 = stop. I know the answer is probably either 5 or 7, but I work it out manually every time.

I am confused by this discussion. Are we talking about integers or things?

Analytic truths may or may not correspond to our situations. When they don't correspond, I guess that's what you all are calling "false." So, if we're engineers working on building a GPS system, I might say to you, "Careful now, Euclidean geometry is false."

Similarly, quantum physicists on the job might say, "Watch out now, two and two isn't necessarily four."

I'm thinking of this excellent blog post I came across last week:

...Consider a basket with 2 apples in it. Now toss in 2 more apples. Examine the basket, and you will find (surprise!) 4 apples. However, you cannot prove a priori that there will be 4 apples in the basket. It is an empirical question, albeit a trivial one, whether baskets of apples (which are physical things) behave in the same manner as the non-negative integers under addition (which is an abstract logical construct).
This distinction might seem hopelessly pedantic at first, but you can easily go astray by ignoring it. For example, many people naively expect photons to behave in the same manner as integers under addition, but they don’t. “Number of photons” is not a conserved quantity in the way that “number of apples” is; photons can be created/destroyed, one photon can be split into two, et cetera....

For a while this confused me, because I incorrectly identified what part of Eliezer's argument I thought was wrong.

Suppose I were to make all those observations suggesting that combining two objects with two objects produced three objects. I would not conclude that 2+2=3, rather I would conclude that objects were not modelled by Peano Arithmetic. (This much has been said by other commenters). But then I only 'know' Peano Arithmetic through the (physical) operation of my own brain.

Here's how to convince me that 2+2=3. Suppose I look at the proof from (peano axioms) to (2+2=4), and suddenly notice that an inference has been made that doesn't follow from the inference rules (say, I notice that the proof says a + (b⁺) = (a+b)⁺ and I know full well that the correct rule is (a⁺)+(b⁺)=(a+b)⁺). I correct this 'error' and follow through to the end of the proof, and conclude the result 2+2=3. What do I do? I consider that this observation is more likely if 2+2=3 than if 2+2=4, and so I update on that. It's still more likely that 2+2=4, because it's more likely that I've made an error this time than that everyone who's analysed that proof before has made an error (or rather, that I have not heard of anyone else spotting this error). But clearly there is something to update on, so my prior probability that 2+2=3 is not zero. However, I also maintain that if in fact the proof of 2+2=4 is correct, then it remains correct whether or not I am convinced of it, whether or not I exist, and even whether or not physical reality exists. So it is a priori true, but my knowledge of it is not a priori knowledge (because the latter does not exist).

I think this is what Eliezer was trying to say with "Unconditional facts are not the same as unconditional beliefs.", but this seems to be glossed over and almost lost within the confusion about earplugs. The article's failure to distinguish between a mathematical theory and a mathematical model (map and territory, possibly?) came very close to obscuring the actual point. This article does not explain how to convince Eliezer that 2+2=3, it explains how to convince Eliezer that PA does not model earplugs - and since the latter is not an a priori truth, it is much less interesting that knowledge of it is not a priori either.

You're over-thinking this. Take a look at this real-world example of a "neurological fault":

Now I knew where I was. Soon I would come to interchange 27 with its two ramps, A and B. B led away from my destination and A directly into it. It had always struck me as strange that one reached 27B before 27A. I recalled drawing that on a map to give to someone who was going to visit me. My breathing had returned to normal and my panic had disappeared. I came up to the first sign for the interchange.

"27A"

I could hardly breathe. That was not possible. 27A was after 27B. I knew that. I considered for a moment the possibility that on the previous night, shortly after I drove on this very highway, construction workers had descended en masse on the interchanges and somehow moved them. That seemed far more possible than that my clear (and detailed) memory could be so wrong. 27A looked exactly as I remembered, except that now I could see 27B clearly in the distance and in the past I had to turn my head to see it.

I exited on the ramp that I knew wasn’t there twenty-four hours previously to find myself on a well-remembered road. And soon I was home.

Now imagine that happening on a massive scale. Say that right after reading this comment you experience evidence, like that which the OP describes, going against your memories of what happens when you put one pair of objects next to another pair. (This includes "mental confirmation that XXX - XX = XX", though not a formal proof in PA.) Would that make you doubt your memories of what PA says? Would you want to check the proof that it says (2+2=4) in case your current memory is lying about that as well?

There seem to be far too many people hung up on the mathematics, which misses the purpose of the post as I understand it.

The post is not about truth but about conviction. Eliezer is not saying that there could be a scenario in which the rules of mathematics didn't work, but that there could be a scenario under which he was convinced they didn't.

Deconstructing all the elements of neurology, physics, and sociology that make up the pathway from complete ignorance to solid conviction is not something I could even begin to attempt - but if one were able to list such steps as a series of bullet points, I could conceive that the manipulation of certain steps could lead to a different outcome, which appears to me to be the ultimate point of the post (not hugely ground-breaking, but an interesting thought experiment).

It is not a claim that the strongly held conviction represents fact or that the conviction would not be shaken by a future event or presentation of evidence. As a fundamental believer in scientific thought and rationality there is much that I hold as firm conviction that I would not hesitate to re-think under valid contradictory evidence.

"A priori reasoning" takes place inside the brain; which is to say, any particular form of "a priori reasoning" is part of a simple physical process unified with the empirical questions that we are reasoning about. It is no great surprise that by selecting the right form of "a priori reasoning" we can manage to mirror the outside world. Inside and outside are part of the same world.

When you think about mathematics, your thoughts are not taking place inside another universe, though I can see why people would feel that way.

I am not making claims about other universes. In particular I am not asserting platonic idealism is true. All I am saying is "2+2=4" is an a priori claim and you don't use rules for incorporating evidence for such claims, as you seemed to imply in your original post.

Please explain the miraculous correspondence to apples and earplugs, then.

I confess that I'm also not entirely sure what you mean by "a priori" or why you think it requires no evidence to locate an "a priori claim" like "2 + 2 = 4" in the vast space of possible a priori claims that includes "2 + 2 = 498034". I'm suspicious of claims that supposedly do not require justification and yet seem to be uniquely preferred within a rather large space of possibilities. Are you sure "a priori" isn't just functioning as a semantic stopsign?

I'll accept as divine any entity that can consistently reduce the entropy of a closed, isolated system

This could just be a manifestation of an entity running our world as a computer simulation. Or even simpler, it could be an alien that knows an important fact you don't know about the real laws of physics. Even if the entity is running our world as a computer simulation, it could itself be made of atoms, go to the bathroom, have a boss screaming at it, etc.

As Damien Broderick observed: "If you build a snazzy alife sim ... you'd be a kind of bridging `first cause', and might even have the power to intervene in their lives - even obliterate their entire experienced cosmos - but that wouldn't make you a god in any interesting sense. Gods are ontologically distinct from creatures, or they're not worth the paper they're written on."

It's often poor form to quote oneself, but since this post (deservedly) continues to get visits, it might be good to bring up the line of thought that convinced me that this post made perfect sense:

The space of all possible minds includes some (aliens/mental patients/AIs) which have a notion of number and counting and an intuitive mental arithmetic, but where the last of these is skewed so that 2 and 2 really do seem to make 3 rather than 4. Not just lexically, but actually: just as our brains can instantly subitize four objects as two distinct groups of two, their minds mistakenly "see" the pattern 0 0 0 as composed of two distinct 0 0 groups. Although such a mind would be unlikely to arise within natural selection, there's nothing impossible about engineering a mind with this error, or rewiring a mind within a simulation to have this error.

These minds, of course, would notice empirical contradictions everywhere: they would put two objects together with two more, count them, and then count four instead of three, when it's obvious by visualizing in their heads that two and two ought to make three instead. They would even encounter proofs that 2 + 2 = 4, and be unable to find an error, although it's patently absurd to write SSSS0 = SS0 + SS0. Eventually, a sufficiently reflective and rational mind of this type might entertain the possibility that maybe two and two do actually make four, and that its system of visualization and mental arithmetic are in fact wrong, as obvious as they seem from the inside. We would consider such a mind to be more rational than one that decided that, no matter what it encountered, it could never be convinced that 2 and 2 made 4 rather than 3.

Now, given all that, why exactly should I refuse to ever update my arithmetical beliefs if given the sort of experiences in Eliezer's thought experiment? Wouldn't the hypothesis that I am such an agent get a lot of confirmation? (Of course, I very strongly don't expect to encounter such experiences, because of all the continuing evidence before me that 2 + 2 = 4; but if I did wake up in that situation, I'd have to accept that some part of my mind is probably broken, and the part that tells me 2 + 2 = 4 is as likely a candidate as any.)

Wikipedia on a priori: Relations of ideas, according to Hume, are "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe".

This points out clearly the problem that I have with "a priori". It is a fundamentally Cartesian-dualist notion. The "mere operation of thought" takes place INSIDE THE UNIVERSE, as opposed to anywhere else.

To observe your own thoughts is a kind of evidence, if the spikings of your neurons be entangled with the object of your inquiry (relative to your current state of uncertainty about both). If, for example, I do not know what will happen with two earplugs and two earplugs on the nightstand, I can visualize two apples plus two apples to find out. All of this takes place in the same, unified, physical universe, with no ontological border between the atoms in my skull and the atoms outside my skull. That's why the trick works. It would work just as well if I used a pocket calculator. Is the output of a pocket calculator an a priori truth? Why not call the earplugs themselves a priori truths, then? But if neither of these are a priori, why should I treat the outputs of my neurons as "a priori"? It's all the same universe.

It appears to me that "a priori" is a semantic stopsign; its only visible meaning is "Don't ask!"

Vassar: It sure seems to me that our evolution and culture constructed ethical attitudes are entangled with the world.

They're causal products of the world, and yes, if I was ignorant about some evolution-related factual question, I might be able to use my ethical attitudes as evidence about conditions obtaining in my ancestral environment. That's not the same as my stating an external truth-condition for it being wrong to slaughter the first-born male children of the subjects of an unelected Pharaoh. It is perfectly acceptable for me to say, "I can think of no encounterable situation that would transform the terminal value of this event from negative to positive."

Spear: The test of any religion is whether cultures believing it tend to thrive and improve the quality of their lives or not.

Ah, yes, the old theory that there are reasons to believe2 in an assertion-of-fact besides its being true.

Lee: If he proclaims "two and two makes three," then he must be talking about something other than the integers. You cannot be mistaken about the integers, you can only misunderstand them.

Just to be clear, when I say "be convinced that 2 + 2 = 3", I mean being convinced that the system of Peano axioms with standard deductive logic and:

∀a. (a + 0 = a)
∀a,b. (a + Sb = S(a + b))

does not have as a theorem

SS0 + SS0 = SSSS0

but does have as a theorem

SS0 + SS0 = SSS0

and is consistent. Just as I currently believe that PA is consistent and has a theorem SS0+SS0=SSSS0 but not SS0+SS0=SSS0. So yes, this blog post is about what it would take to convince me that 2 + 2 actually equalled 3. I am not supposed to be convinced of this, if I am sane, and if it is not true. But at the same time, my belief in it should not be unconditional or nonevidential, because there are particular evidences which convinced me that 2 + 2 = 4 in the first place.
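The two recursion equations above can be run mechanically. As an illustrative sketch (my own encoding, not anything from the post): represent 0 as an empty tuple and Sn as a nested pair, and addition by exactly those two rules:

```python
# Peano numerals: 0 is (), and S(n) wraps its predecessor.
ZERO = ()

def S(n):
    return ("S", n)

def add(a, b):
    # The two defining equations: a + 0 = a, and a + Sb = S(a + b).
    if b == ZERO:
        return a
    _, pred = b
    return S(add(a, pred))

TWO = S(S(ZERO))
THREE = S(TWO)
FOUR = S(THREE)

print(add(TWO, TWO) == FOUR)   # True
print(add(TWO, TWO) == THREE)  # False
```

Under these rules SS0 + SS0 evaluates, step by step, to SSSS0 and not SSS0; being convinced that 2 + 2 = 3 would mean being convinced that this mechanical unwinding comes out differently.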

I also note that if you do not believe that there is a finite positive integer which encodes a proof of Gödel's Statement, then you clearly are not using Peano Arithmetic to define what you mean by the word "integer".

Eliezer: When you are experimenting with apples and earplugs you are indeed doing empirical science, but the claim you are trying to verify isn't "2+2=4" but "counting of physical things corresponds to counting with natural numbers." The latter is, indeed, an empirical statement. The former is a statement about number theory, the truth of which is verified with respect to some model (per Tarski's definition).

To apply the same reasoning the other way, if you aren't a Christian, what would be a situation which would convince you of the truth of Christianity?

The Second Coming? An opportunity to have a chat with the Lord Himself? An analysis of a communion wafer revealing it to, in fact, be living human flesh? It's seriously not that hard to think of these.

Which is more likely: "God exists" or "I just hallucinated that"? For the third one, probably that He exists; for the second one, definitely hallucination; for the first, I'm not sure.

Second one: depends. I was kind of assuming that you have some way of verifying it, like you ask Him to create something and someone who wasn't there later describes some of its previously determined properties accurately without being clued in. First: you'd need a massive global hallucination, and could use a similar verification method.

That seems accurate. Remember that a single person can hallucinate that someone else verified something, but this has low prior probability.

I once conducted an experiment in which I threw a die 500 times, and then prayed for an hour every day for a week that that die consistently land on a four, and then threw the die 500 more times. Correlation was next to zero, so I concluded that God does not answer prayers about dice from me.
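The experiment's logic is a before/after comparison of the frequency of fours. A hypothetical re-run in simulation, assuming (as the null hypothesis predicts) that prayer leaves the die unchanged:

```python
import random

random.seed(0)  # reproducible illustration

def frequency_of_fours(n_throws):
    """Throw a fair die n_throws times and return the fraction of fours."""
    return sum(random.randint(1, 6) == 4 for _ in range(n_throws)) / n_throws

before = frequency_of_fours(500)  # 500 throws before praying
after = frequency_of_fours(500)   # 500 throws after a week of prayer

# Under the null hypothesis both frequencies hover near 1/6 ≈ 0.167,
# and their difference stays small relative to sampling noise.
print(before, after, abs(before - after))
```

With 500 throws per condition, the sampling standard deviation of each frequency is about 0.017, so a prayer effect would need to shift the rate well past that to stand out.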

Haven't you ever heard the saying, "God does not throw dice games"?

Perhaps 'a priori' and 'a posteriori' are too loaded with historic context. Eliezer seems to associate a priori with dualism, an association which I don't think is necessary. The important distinction is the process by which you arrive at claims. Scientists use two such processes: induction and deduction.

Deduction is reasoning from premises using 'agreed upon' rules of inference such as modus ponens. We call (conditional) claims which are arrived at via deduction 'a priori.'

Induction is updating beliefs from evidence using rules of probability (Bayes theorem, etc). We call (conditional) claims which are arrived at via induction 'a posteriori.'
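A minimal numeric illustration of the inductive rule (the probabilities here are invented for the example):

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E),
# where P(E) = P(E|H)*P(H) + P(E|~H)*(1 - P(H)).
prior = 0.5           # P(H): belief in hypothesis H before seeing evidence E
p_e_given_h = 0.9     # P(E|H): how strongly H predicts the evidence
p_e_given_not_h = 0.2 # P(E|~H): how likely the evidence is anyway

p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(round(posterior, 4))  # 0.8182
```

Evidence that is much likelier under H than under its negation pushes the belief up; that updating step, repeated, is what distinguishes induction from deduction's fixed inference rules.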

Note: both the rules of inference used in deduction and rules of evidence aggregation used in induction are agreed upon as an empirical matter because it has been observed that we get useful results using these particular rules and not others.

Furthermore: both deduction and induction happen only (as far as we know) in the physical world.

Furthermore: deductive claims by themselves are 'sterile,' and making them useful immediately entails coating them with a posteriori claims.

Nevertheless, there is a clear algorithmic distinction between deduction and induction, a distinction which is mirrored in the claims obtained from these two processes.