Standard methods of inferring knowledge about the world are based on premises that I don’t know the justifications for. Any justification (or a link to an article or book with one) for why these premises are true or should be assumed to be true would be appreciated.


Here are the premises:

  • “One has knowledge of one’s own percepts.” Percepts are often given epistemic privilege, meaning that they need no justification to be known, but I see no justification for granting them that privilege. It seems like the dark side of epistemology to me.

  • “One’s reasoning is trustworthy.” If one’s reasoning is untrustworthy, then one’s evaluation of the trustworthiness of one’s reasoning can’t be trusted, so I don’t see how one could determine whether one’s reasoning is correct. Why should one even suppose one’s reasoning is correct to begin with? It seems like privileging the hypothesis, as there are many different ways one’s mind could work, and presumably only a very small proportion of possible minds would be remotely valid reasoners.

  • “One’s memories are true.” Though one’s memories of how the world works give a consistent explanation of why one is perceiving one’s current percepts, a perhaps simpler explanation is that the percepts one is currently experiencing are the only percepts one has ever experienced, and one’s memories are false. This hypothesis is still simple: one needs only a very small number of memories, since one can only think of a small number of memories at any one time, and the memory of having other memories could be false as well.




Edit: Why was this downvoted? Should it have been put in the weekly open thread instead?

Comments


With the (important) proviso that "knowledge", "trustworthy", and "are true" need to be qualified with something like "approximately, kinda-sorta, most of the time", I think these premises should be assumed as working hypotheses on the grounds that if they are badly wrong then we're completely screwed anyway, we have no hope of engaging in any sort of rational thought, and all bets are off.

Having adopted those working hypotheses, we look at the available evidence and it seems like it fits them pretty well (note: this isn't a trivial consequence of having taken those things as working hypotheses; e.g., if we'd assumed that our perception, reasoning and memory are perfectly accurate then we'd have arrived quickly at a contradiction). Those propositions now play two roles in our thinking: as underlying working assumptions that are mostly assumed implicitly, and as conclusions based on thinking about things as clearly as we know how.

There's some circularity here, but how could there not be? One can always keep asking "why?", like a small child, and sooner or later one must either refuse to answer or repeat oneself.

Somewhat-relevant old LW post: The lens that sees its flaws. My memory was of it being more relevant than it appears on rereading; I wonder whether there's another post from the "Sequences" that I was thinking of instead.

There's some circularity here, but how could there not be? One can always keep asking "why?", like a small child, and sooner or later one must either refuse to answer or repeat oneself.

I see that circularity seems inevitable in order to believe anything, but would that really make circularity okay?

Somewhat-relevant old LW post: The lens that sees its flaws. My memory was of it being more relevant than it appears on rereading; I wonder whether there's another post from the "Sequences" that I was thinking of instead.

I recall a post Yudkowsky made that seems like what you're talking about, but I can't find it. I think it was in Highly Advanced Epistemology 101 for Beginners.

Well, what does "OK" mean?

Suppose, e.g., that something like the following is true: those assumptions are the minimal assumptions you need to make to get rational thinking off the ground, and making them does suffice to support everything else you need to do, and they are internally consistent in the sense that when you make those assumptions and investigate further you don't turn up any reason to think those assumptions were wrong.

What more would it take to make assuming them "OK"? If "OK" means that they're provable from first principles without any assumptions at all and suffice to ground rational empirical investigation of the world then no, they're not OK -- but there's good reason to think nothing else is or could be.

If what you're aiming for is a rational empiricism with minimal assumptions then I think something like this is optimal. I'm happy calling that "OK".

By okay, I mean an at least somewhat accurate method of determining reality (i.e. the generator of percepts). Given that I don't know how to tell what percepts I've perceived, I don't see how standard philosophy reflects reality.

It sure seems (to me) as if those assumptions give an at least somewhat accurate method of determining reality. Are you saying you don't think they do -- or is your actual objection that we don't know that with whatever degree of certainty you're looking for?

I don't see how we have any evidence at all that those assumptions give an at least somewhat accurate method of determining reality. The only way I know of justifying those axioms is by using those axioms.

The other ways would be (1) because they seem obviously true, (2) because we don't actually have the option of not adopting them, and (3) because in practice it turns out that assuming them gives what seem like good results. #1 and #3 are pretty much the usual reasons for adopting any given set of axioms. #2 also seems very compelling. Again, what further OK-ness could it possibly be reasonable to look for?

With such skepticism, how are you even able to write anything, or understand the replies? Or do anything at all?

I think you misspelled "skepticism" in the title.

Also, your argument (including what you have said in the comments) is something like this:

Every argument is based on premises. There may be additional arguments for the premises, but those arguments will themselves have premises. Therefore either 1) you have an infinite regress of premises; or 2) you have premises that you do not have arguments for; or 3) your arguments are circular.

Assuming (as you seem to) that we do not have an infinite regress of premises, that means either that some premises do not have arguments for them, or that the arguments are circular. Either way, you say, that means we have unjustified beliefs which are not known to be true.

This may be true, given a particular arbitrary definition of knowledge that there is no reason for anyone to accept. But if knowledge is defined in such a way as to be a contradiction, who would want it anyway? It would be like wanting a round square.

Memories can be collapsed under percepts.

In answer to your broader question - yup: you've hit upon epistemic nihilism, and there is no real way around it. Reason is Dead, and we have killed it. Despair.

...Or, just shrug and decide that you are probably right but you can't prove it. There's plenty of academic philosophy addressing this (see: the Problem of the Criterion), and Less Wrong covers it fairly extensively as well.

http://lesswrong.com/lw/t9/no_license_to_be_human/ and related posts.

http://lesswrong.com/lw/iza/no_universally_compelling_arguments_in_math_or/

Rather than going on a reading binge, I recommend just continuing to mull it over until it clicks into place, because, similar to the whole "dissolve free will" thing, it feels clear in hindsight yet it is not easy to explain or to understand explanations others provide.

I'll give it a shot anyway: The essential point is that ultimately you are a brain, and you're going to do things the way your brain is designed to do them. Assuming you've satisfactorily resolved the whole moral nihilism thing (even though there is no divine justification for morality, we can still talk about what is moral and what isn't, because morality is inside us), resolving epistemic nihilism follows an analogous chain of thought: there is not and cannot be any justification for human methods of inference[morality], but they are still our methods of inference[morality], and we're going to use them regardless.

Why does everyone refer to it as "epistemic nihilism"? Philosophical skepticism ('global' skepticism) was always the term I read and used.

Everyone? In this discussion right here, the only occurrences of the word "nihilism" are in Ishaan's comment and your reply?

In general. I hear the word used, but I haven't ever encountered it in the literature (which isn't very surprising, since I haven't read much of the literature). Seriously, Google 'epistemic nihilism' right now and all you get are some cursory references and blogs.

Maybe I wasn't clear: I'm questioning whether the premise of your question

Why does everyone refer to it as "epistemic nihilism"?

is correct. I don't think everyone does refer to it that way, whether "everyone" means "everyone globally", "everyone on LW", "everyone in the comments to this post", or in fact anything beyond "one or two people who are making terminology up on the fly or who happen to want to draw a parallel with some other kind of nihilism".

I've heard it from various people on the internet. Perhaps I don't have a large sample size, but it seems to consistently pop up when global skepticism is discussed.

"Epistemic nihilism" is not a name, but a description. Philosophical skepticism covers a range of things, of which this is one.

Assume they're approximately true because if you don't you won't be able to function. If you notice flaws, by all means fix them, but you're not going to be able to prove modus ponens without using modus ponens.

I would agree: if you can't trust your reasoning, then you are in a bad spot. Even Descartes' 'Cogito ergo sum' doesn't get you anywhere if you think the 'therefore' is using reasoning. Even that small assumption won't get you too far, but I would start with him.

A better translation (maybe -- I don't speak French) would be "I think, I am". Or so said my philosophy teacher.

That seems false if taken at face value: "ergo" means "therefore", ergo, "Cogito ergo sum" means "I think, therefore I am". Also, I have no clue how to parse "I think, I am". Does it mean "I think and I am"?

There's probably a story behind that translation and how it corresponds to Descartes's other beliefs, but I don't think that "I think, I am" makes sense without that story.

(A side note: it's Latin, not French. I originally added here that Descartes wrote in Latin, but apparently he originally made the statement in French as "Je pense donc je suis.")

gjm has mentioned most of what I think is relevant to the discussion. However, see also the discussion on Boltzmann brains.

Obviously, if you say you are absolutely certain that everything we think is either false or unknown, including your own certainty of this, no one will ever be able to "prove" anything to you, since you just said you would not admit any premise that might be used in such a proof.

But in the first place, such a certainty is not useful for living, and you do not use it, but rather assume that many things are true, and in the second place, this is not really relevant to Less Wrong, since someone with this certainty already supposes that he knows that he can never be less wrong, and therefore will not try.

I never said I was absolutely certain everything we think is either false or unknown. I'm saying that I have no way of knowing if it is false or unknown -- I am absolutely uncertain.

Standard methods of inferring knowledge about the world are based on premises that I don’t know the justifications for.

How do you come to that conclusion?