This post is part of the Solution to "Free Will".
Followup to: Dissolving the Question, Causality and Moral Responsibility

Planning out upcoming posts, it seems to me that I do, in fact, need to talk about the word could, as in, "But I could have decided not to rescue that toddler from the burning orphanage."

Otherwise, I will set out to talk about Friendly AI, one of these days, and someone will say:  "But it's a machine; it can't make choices, because it couldn't have done anything other than what it did."

So let's talk about this word, "could".  Can you play Rationalist's Taboo against it?  Can you talk about "could" without using synonyms like "can" and "possible"?

Let's talk about this notion of "possibility".  I can tell, to some degree, whether a world is actual or not actual; what does it mean for a world to be "possible"?

I know what it means for there to be "three" apples on a table.  I can verify that experimentally; I know what state of the world corresponds to it.  What does it mean to say that there "could" have been four apples, or "could not" have been four apples?  Can you tell me what state of the world corresponds to that, and how to verify it?  Can you do it without saying "could" or "possible"?

I know what it means for you to rescue a toddler from the orphanage.  What does it mean for you to could-have-not done it?  Can you describe the corresponding state of the world without "could", "possible", "choose", "free", "will", "decide", "can", "able", or "alternative"?

One last chance to take a stab at it, if you want to work out the answer for yourself...

Some of the first Artificial Intelligence systems ever built were trivially simple planners.  You specify the initial state, and the goal state, and a set of actions that map states onto states; then you search for a series of actions that takes the initial state to the goal state.

Modern AI planners are a hell of a lot more sophisticated than this, but it's amazing how far you can get by understanding the simple math of everything.  There are a number of simple, obvious strategies you can use on a problem like this.  All of the simple strategies will fail on difficult problems; but you can take a course in AI if you want to talk about that part.

There's backward chaining:  Searching back from the goal, to find a tree of states such that you know how to reach the goal from them.  If you happen upon the initial state, you're done.

There's forward chaining:  Searching forward from the start, to grow a tree of states such that you know how to reach them from the initial state.  If you happen upon the goal state, you're done.

Or if you want a slightly less simple algorithm, you can start from both ends and meet in the middle.

Let's talk about the forward chaining algorithm for a moment.

Here, the strategy is to keep an ever-growing collection of states that you know how to reach from the START state, via some sequence of actions and (chains of) consequences.  Call this collection the "reachable from START" states; or equivalently, label all the states in the collection "reachable from START".  If this collection ever swallows the GOAL state - if the GOAL state is ever labeled "reachable from START" - you have a plan.
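The loop just described is breadth-first search over states. Here is a minimal sketch in Python; the names and the state representation are my own illustration, not any particular planner's API:

```python
from collections import deque

def forward_chain(start, goal, successors):
    """Grow the collection of states labeled "reachable from START"
    until it swallows GOAL.  `successors` maps a state to the states
    one action away from it."""
    reachable = {start}              # states with a known path from START
    parents = {start: None}          # how we first reached each state
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        if state == goal:            # GOAL labeled "reachable": we have a plan
            plan = []
            while state is not None:
                plan.append(state)
                state = parents[state]
            return plan[::-1]
        for nxt in successors(state):
            if nxt not in reachable:     # newly labeled "reachable from START"
                reachable.add(nxt)
                parents[nxt] = state
                frontier.append(nxt)
    return None                      # GOAL never gets the label
```

Note that the `reachable` set is exactly the "I can get there" label: nothing is ever actually driven anywhere while the search runs.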

"Reachability" is a transitive property.  If B is reachable from A, and C is reachable from B, then C is reachable from A.  If you know how to drive from San Jose to San Francisco, and from San Francisco to Berkeley, then you know a way to drive from San Jose to Berkeley.  (It may not be the shortest way, but you know a way.)

If you've ever looked over a game-problem and started collecting states you knew how to achieve - looked over a maze, and started collecting points you knew how to reach from START - then you know what "reachability" feels like.  It feels like, "I can get there."  You might or might not be able to get to the GOAL from San Francisco - but at least you know you can get to San Francisco.

You don't actually run out and drive to San Francisco.  You'll wait, and see if you can figure out how to get from San Francisco to GOAL.  But at least you could go to San Francisco any time you wanted to.

(Why would you want to go to San Francisco?  If you figured out how to get from San Francisco to GOAL, of course!)

Human beings cannot search through millions of possibilities one after the other, like an AI algorithm.  But - at least for now - we are often much more clever about which possibilities we do search.

One of the things we do that current planning algorithms don't do (well), is rule out large classes of states using abstract reasoning.  For example, let's say that your goal (or current subgoal) calls for you to cover at least one of these boards using domino 2-tiles.

[Figure: three boards, each with one cell missing (shown in black), to be covered with dominos]

The black square is a missing cell; this leaves 24 cells to be covered with 12 dominos.

You might just dive into the problem, and start trying to cover the first board using dominos - discovering new classes of reachable states:

[Figure: a partial covering of the first board with dominos]

However, you will find after a while that you can't seem to reach a goal state.  Should you move on to the second board, and explore the space of what's reachable there?

But I wouldn't bother with the second board either, if I were you.  If you construct this coloring of the boards:

[Figure: the boards colored like a checkerboard in grey and yellow]

Then you can see that every domino has to cover one grey and one yellow square.  And only the third board has equal numbers of grey and yellow squares.  So no matter how clever you are with the first and second board, it can't be done.

With one fell swoop of creative abstract reasoning - we constructed the coloring, it was not given to us - we've cut down our search space by a factor of three.  We've reasoned out that the reachable states involving dominos placed on the first and second board, will never include a goal state.
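The coloring argument can be checked mechanically. Here is a sketch (the function name and board encoding are hypothetical) that tests only this necessary condition - equal counts of the two colors - and says nothing about sufficiency:

```python
def parity_cover_possible(rows, cols, missing):
    """Color cell (r, c) grey if (r + c) is even, yellow if odd.
    Every domino covers one grey and one yellow cell, so a full
    cover requires equal counts of each color.  `missing` is the
    set of removed cells.  Returns False only for provably
    impossible boards; True means merely "not ruled out"."""
    grey = yellow = 0
    for r in range(rows):
        for c in range(cols):
            if (r, c) not in missing:
                if (r + c) % 2 == 0:
                    grey += 1
                else:
                    yellow += 1
    return grey == yellow
```

The classic mutilated chessboard - an 8x8 board with two opposite corners removed - fails this test, since both removed corners are the same color.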

Naturally, one characteristic that rules out whole classes of states in the search space is if you can prove that the state itself is physically impossible.  If you're looking for a way to power your car without all that expensive gasoline, it might seem like a brilliant idea to have a collection of gears that would turn each other while also turning the car's wheels - a perpetual motion machine of the first kind.  But because it is a theorem that this is impossible in classical mechanics, we know that every clever thing we can do with classical gears will not suffice to build a perpetual motion machine.  It is as impossible as covering the first board with classical dominos.  So it would make more sense to concentrate on new battery technologies instead.

Surely, what is physically impossible cannot be "reachable"... right?  I mean, you would think...

Oh, yeah... about that free will thing.

So your brain has a planning algorithm - not a deliberate algorithm that you learned in school, but an instinctive planning algorithm.  For all the obvious reasons, this algorithm keeps track of which states have known paths from the start point.  I've termed this label "reachable", but the way the algorithm feels from inside, is that it just feels like you can do it.  Like you could go there any time you wanted.

And what about actions?  They're primitively labeled as reachable; all other reachability is transitive from actions by consequences.  You can throw a rock, and if you throw a rock it will break a window, therefore you can break a window.  If you couldn't throw the rock, you wouldn't be able to break the window.

Don't try to understand this in terms of how it feels to "be able to" throw a rock.  Think of it in terms of a simple AI planning algorithm.  Of course the algorithm has to treat the primitive actions as primitively reachable.  Otherwise it will have no planning space in which to search for paths through time.

And similarly, there's an internal algorithmic label for states that have been ruled out:

worldState.possible == 0

So when people hear that the world is deterministic, they translate that into:  "All actions except one are impossible."  This seems to contradict their feeling of being free to choose any action.  The notion of physics following a single line, seems to contradict their perception of a space of possible plans to search through.

The representations in our cognitive algorithms do not feel like representations; they feel like the way the world is.  If your mind constructs a search space of states that would result from the initial state given various actions, it will feel like the search space is out there, like there are certain possibilities.

We've previously discussed how probability is in the mind.  If you are uncertain about whether a classical coin has landed heads or tails, that is a fact about your state of mind, not a property of the coin.  The coin itself is either heads or tails.  But people forget this, and think that coin.probability == 0.5, which is the Mind Projection Fallacy: treating properties of the mind as if they were properties of the external world.

So I doubt it will come as any surprise to my longer-abiding readers, if I say that possibility is also in the mind.

What concrete state of the world - which quarks in which positions - corresponds to "There are three apples on the table, and there could be four apples on the table"?  Having trouble answering that?  Next, say how that world-state is different from "There are three apples on the table, and there couldn't be four apples on the table."  And then it's even more trouble, if you try to describe could-ness in a world in which there are no agents, just apples and tables.  This is a Clue that could-ness and possibility are in your map, not directly in the territory.

What is could-ness, in a state of the world?  What are can-ness and able-ness?  They are what it feels like to have found a chain of actions which, if you output them, would lead from your current state to the could-state.

But do not say, "I could achieve X".  Say rather, "I could reach state X by taking action Y, if I wanted".  The key phrase is "if I wanted".  I could eat that banana, if I wanted.  I could step off that cliff there - if, for some reason, I wanted to.

Where does the wanting come from?  Don't think in terms of what it feels like to want, or decide something; try thinking in terms of algorithms.  For a search algorithm to output some particular action - choose - it must first carry out a process where it assumes many possible actions as having been taken, and extrapolates the consequences of those actions.

Perhaps this algorithm is "deterministic", if you stand outside Time to say it.  But you can't write a decision algorithm that works by just directly outputting the only action it can possibly output.  You can't save on computing power that way.  The algorithm has to assume many different possible actions as having been taken, and extrapolate their consequences, and then choose an action whose consequences match the goal.  (Or choose the action whose probabilistic consequences rank highest in the utility function, etc.  And not all planning processes work by forward chaining, etc.)

You might imagine the decision algorithm as saying:  "Suppose the output of this algorithm were action A, then state X would follow.  Suppose the output of this algorithm were action B, then state Y would follow."  This is the proper cashing-out of could, as in, "I could do either X or Y."  Having computed this, the algorithm can only then conclude:  "Y ranks above X in the Preference Ordering.  The output of this algorithm is therefore B.  Return B."

The algorithm, therefore, cannot produce an output without extrapolating the consequences of itself producing many different outputs.  All but one of the outputs being considered is counterfactual; but which output is the factual one cannot be known to the algorithm until it has finished running.
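The tangle can be made concrete with a toy decision algorithm. Everything here - the names, the preference ordering - is illustrative, not any particular agent architecture:

```python
def choose(actions, extrapolate, rank):
    """Deterministic, yet it must first assume each action as taken:
    for every candidate it asks "suppose the output of this algorithm
    were `action` - what state would follow?", then returns the action
    whose extrapolated state ranks highest in the preference ordering.
    All but one of the extrapolated states is counterfactual, but the
    algorithm cannot know which one until it finishes running."""
    best_action, best_rank = None, float("-inf")
    for action in actions:
        state = extrapolate(action)   # "suppose the output were `action`..."
        if rank(state) > best_rank:
            best_action, best_rank = action, rank(state)
    return best_action
```

With consequences {A: X, B: Y} and Y preferred to X, the algorithm returns B - but only after having represented X as a "possible" outcome.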

A bit tangled, eh?  No wonder humans get confused about "free will".

You could eat the banana, if you wanted.  And you could jump off a cliff, if you wanted.  These statements are both true, though you are rather more likely to want one than the other.

You could even flatly say, "I could jump off a cliff" and regard this as true - if you construe could-ness according to reachability, and count actions as primitively reachable.  But this does not challenge deterministic physics; you will either end up wanting to jump, or not wanting to jump.

The statement, "I could jump off the cliff, if I chose to" is entirely compatible with "It is physically impossible that I will jump off that cliff".  It need only be physically impossible for you to choose to jump off a cliff - not physically impossible for any simple reason, perhaps, just a complex fact about what your brain will and will not choose.

Defining things appropriately, you can even endorse both of the statements:

  • "I could jump off the cliff" is true from my point-of-view
  • "It is physically impossible for me to jump off the cliff" is true for all observers, including myself

How can this happen?  If all of an agent's actions are primitive-reachable from that agent's point-of-view, but the agent's decision algorithm is so constituted as to never choose to jump off a cliff.

You could even say that "could" for an action is always defined relative to the agent who takes that action, in which case I can simultaneously make the following two statements:

  • NonSuicidalGuy could jump off the cliff.
  • It is impossible that NonSuicidalGuy will hit the ground.

If that sounds odd, well, no wonder people get confused about free will!

But you would have to be very careful to use a definition like that one consistently.  "Could" has another closely related meaning, in which it means that an event is assigned at least a small amount of probability.  This feels similar, because when you're evaluating actions that you haven't yet ruled out taking, you will assign at least a small probability to actually taking those actions - otherwise you wouldn't be investigating them.  Yet "I could have a heart attack at any time" and "I could have a heart attack any time I wanted to" are not the same usage of could, though they are confusingly similar.

You can only decide by going through an intermediate state where you do not yet know what you will decide.  But the map is not the territory.  It is not required that the laws of physics be random about that which you do not know.  Indeed, if you were to decide randomly, then you could scarcely be said to be in "control".  To determine your decision, you need to be in a lawful world.

It is not required that the lawfulness of reality be disrupted at that point, where there are several things you could do if you wanted to do them; but you do not yet know their consequences, or you have not finished evaluating the consequences; and so you do not yet know which thing you will choose to do.

A blank map does not correspond to a blank territory.  Not even an agonizingly uncertain map corresponds to an agonizingly uncertain territory.

(Next in the free will solution sequence is "The Ultimate Source", dealing with the intuition that we have some chooser-faculty beyond any particular desire or reason.  As always, the interested reader is advised to first consider this question on their own - why would it feel like we are more than the sum of our impulses?)

Comments


It would seem I have failed to make my point, then.

Choosing does not require that it be physically possible to have chosen differently.

Being able to say, "I could have chosen differently," does not require that it be physically possible to have chosen differently. It is a different sense of "could".

I am not saying that choice is an illusion. I am pointing to something and saying: "There! Right there! You see that? That's a choice, just as much as a calculator is adding numbers! It doesn't matter if it's deterministic! It doesn't matter if someone else predicted you'd do it or designed you to do it! It doesn't matter if it's made of parts and caused by the dynamics of those parts! It doesn't matter if it's physically impossible for you to have finally arrived at any other decision after all your agonizing! It's still a choice!"

Choosing does not require that it be physically possible to have chosen differently.

That's a matter of definitions, not fact.

Hopefully, I don't see why you insist on calling deliberation "illusory". Explaining is not the same as explaining away. Determined is not the same as predetermined. I deliberate not yet knowing what I'll choose, consider various factors that do in fact determine my choice, and then, deterministically/lawfully, choose. Where's the illusion? You would seem to be trying to get rid of the rainbow, not just the gnomes.

I feel the obligation to post this warning again: Don't think "Ohhh, the decision I'll make is already determined, so I can as well relax and don't worry too much." Remember you will face the consequences of whatever you decide so make the best possible choice, maximize utility!

"You are not obliged to complete the work, but neither are you free to evade it" -

Rabbi Tarfon

Well said. The fact of deliberation being deterministic does not obviate the need to engage in deliberation. That would be like believing that running the wrong batch program is just as effective as running the right one, just because there will be some output either way.

Paraphrasing my previous comment in another way: determinism is no excuse for you to be sloppy, lazy, or inattentive in your decision process.

Hopefully: You seem to be confusing the explicit deliberation of verbal reasoning, much of which is post-hoc or confabulation and not actually required to determine people's actions (rather it is determined by their actions), with the implicit deliberation of neural algorithms, which constitute the high level description of the physics that determines people's actions, the description which is valid on the level of description of the universe where people exist at all.

Eliezer: I'll second Hopefully Anonymous; this is almost exactly what I believe about the whole determinism-free will debate, but it's devilishly hard to describe in English because our vocabulary isn't constructed to make these distinctions very clearly. (Which is why it took a 2700-word blog post). Roland and Andy Wood address one of the most common and silliest arguments against determinism: "If determinism is true, why are you arguing with me? I'll believe whatever I'll believe." The fact that what you'll believe is deterministically fixed doesn't affect the fact that this argument is part of what fixes it.

Eliezer, Great articulation. This pretty much sums up my intuition on free will and human capacity to make choices at this stage of our knowledge, too. You made the distinction between physical capability and possibly determined motivation crystal clear.

Joseph Knecht: Why do you think that the brain would still be Eliezer's brain after that kind of change?

(Ah, it's so relaxing to be able to say that. In the free will class, they would have replied, "Mate, that's the philosophy of identity - you have to answer to the ten thousand dudes over there if you want to try that.")

We can't represent ourselves in our deterministic models of the world. If you don't have the ability to step back and examine the model-building process, you'll inevitably conclude that your own behavior cannot be deterministic.

The key is to recognize that even if the world is deterministic, we will necessarily perceive uncertainties in it, because of fundamental mathematical limits on the ability of a part to represent the whole. We have no grounds for saying that we are any more or less deterministic than anything else - if we find deterministic models useful in predicting the world, we should use them, and assume that they would predict our own actions, if we could use them that way.

But I rarely do, because the little "oh I was so brilliant as a kid that I already knew this, now I will explain it to YOU! Lucky you!" bits with which you preface every single post make me not interested in reading your writing.

If you mean "free will is one of the easier questions to dissolve, or so I found it as a youngster," personally, I find this interesting and potentially useful (in that it predicts what I might find easy) information. I don't read any bragging here, just a desire to share information, and I don't believe Eliezer intends to brag.

What properties must a human visual system that works like a movie camera have? Properties that apparently don't exist in the actual human visual system. Similarly, the popular model of human choice tied to moral responsibility (a person considers options, has the ability to choose the "more moral" or the "less moral" option, and chooses the "more moral" option) may not exist in actual working human brains. In that sense it's reasonable to say "if it's deterministic, if you're designed to do it, if it's made of parts and caused by the dynamics of those parts, if it's physically impossible for you to have finally arrived at any other decision after all your agonizing" it's not a choice. It's an observable phenomenon in nature, like the direction a fire burns, the investments made by a corporation, or the orbital path of Mars. But singling out that phenomenon and calling it "choice" may be like calling something a "perpetual motion machine" or an "omnipotent god". The word usage may obfuscate the phenomenon by playing on our common cognitive biases. I think the model can be tempting for those who wish to construct status hierarchies off of individual moral "choice" histories, or who fear, perhaps without rational basis, that if they stop thinking about their personal behavior in terms of making the right "choices" they'll accomplish less, or do things they'll regret. Or perhaps it's just an anaesthetic model of reality for some.

Still, I think the best neuroscience research already demonstrates how wide swaths of our intuitive understanding of "choice" are as inaccurate as our intuitive understanding of vision and other experiential phenomena.

Joseph Knecht, where you go wrong is here:

...when we... ask what it means for a state to be reachable, the answer circularly depends on the concept of possibility, which is what we are supposedly explaining. A reachable state is just a state that it is possible to reach.

A state labelled "reachable" by a human may in fact be impossible as a matter of physics. Many accidental injuries and deaths follow from this fact. It is as a result of evolution that a given human's label "reachable" has any relation to reality at all.

Is this substantially correct?

I would say not, because you write:

and determining that the phenomenon is reachable.

This uses the word "reachable" in a sentence, without quotes, and therefore makes use of its meaning. But that was merely an infelicitous choice of label. Eliezer has since asked you to substitute labels, so that you not be confused by the meaning of "reachable":

Knecht, for "able to be reached" substitute "labeled fizzbin". I have told you when to label something fizzbin.

If you were to substitute this in, your description would end up being:

Thesis: regarding some phenomenon as possible is nothing other than the inner perception a person experiences (and probably also the memory of such a perception in the past) after mentally running something like the search algorithm and labeling the phenomenon "fizzbin".

I am not, by the way, sure that this quite captures Eliezer's thesis either, since an algorithm could attach many labels, and the above does not pick out which label corresponds to possibility. You may need to start from scratch.

Thesis: regarding some phenomenon as possible is nothing other than . . .

I consider that an accurate summary of Eliezer's original post (OP) to which these are comments.

Will you please navigate to this page and start reading where it says,

Imagine that in an era before recorded history or formal mathematics, I am a shepherd and I have trouble tracking my sheep.

You need read only to where it says, "Markos Sophisticus Maximus".

Those six paragraphs attempt to be a reductive exposition of the concept of whole number, a.k.a., non-negative integer. Please indicate whether you have the same objection to that exposition, namely, that the exposition treats of the number of pebbles in a bucket and therefore circularly depends on the concept of number (or whole number).

Knecht, it's immediately obvious that 1 != 2, but you don't know which of your decisions is determined until after you determine it. So while it is logically impossible (ignoring quantum branching) that you will choose any of your options except one, and all but one of your considered consequences is counterfactual in a way that violates the laws of physics, you don't know which option is the real one until you choose it.

Apart from that, no difference. Like I said, part of what goes into the making of a rationalist is understanding what it means to live in a lawful universe.

What cannot be explained perfectly well without supposing "heat" exists, if you're willing to do everything at the molecular level?

At time t=0, I don't know what I'll do. At t=1, I know. At t=2, I do it. I call this a "choice". It's not strictly necessary, but I find it really useful to have a word for this.

@Jagadul:

by "constraints", I meant that Eliezer specified only that some particular processes happening in the brain are sufficient for choice occurring, which my example refuted, to which you added the ideas that it is not mere happening in the brain but also the additional constraints entailed by concepts of Eliezer-the-person and body-shell-of-Eliezer and that the former can be destroyed while the latter remains, which changes ownership of the choice, etc.

Anyway, I understand what you're saying about choice as a higher-level convenience term, but I don't think it is helpful. I think it is a net negative and that we'd do better to drop it. You gave the thought, "given these options, what will he choose?", but I think the notion of choice adds nothing of value to the similar question, "given these options, what will he do?" You might say that it is different, since a choice can be made without an action occurring, but then I think we'd do better to say not "what will he choose?" but something more like "what will he think?", or perhaps something else depending on the specifics of the situation under consideration.

I believe there's always a way of rephrasing such things so as not to invoke choice, and all that we really give up is the ability to talk about totally generic hypothetical situations (where it isn't specified what the "choice" is about). Whenever you flesh out the scenario by specifying the details of the "choice", then you can easily talk about it more accurately by sidestepping the notion of choice altogether.

I don't think that "choice" is analogous to Newtonian mechanics before relativity. It's more akin to "soul", which we could have redefined and retrofitted in terms of deterministic physical processes in the brain. But just as it makes more sense to forget about the notion of a soul, I think it makes more sense to forget about that of "choice". Just as "soul" is too strongly associated with ideas such as dualism and various religious ideas, "choice" is too strongly associated with ideas such as non-determinism and moral responsibility (relative to some objective standard of morality). Instead of saying "I thought about whether to do X, Y, or Z, then choose to do X, and then did X", we can just say "I thought about whether to do X, Y, or Z, then did X."

@Constant:

I think "choice" is closer to "caloric" than "heat", because I don't believe there is any observable mundane phenomenon that it refers to. What do you have in mind that cannot be explained perfectly well without supposing that a "choice" must occur at some point in order to explain the observed phenomenon?

HA: This pretty much sums up my intuition on free will and human capacity to make choices

Jadagul: this is almost exactly what I believe about the whole determinism-free will debate

kevin: Finally, when I was about 18, my beliefs settled in (I think) exactly this way of thinking.

Is no-one else throwing out old intuitions based on these posts on choice & determinism? -dies of loneliness-

Eliezer, no you made your point quite clearly, and I think I reflect a clear understanding of that point in my posts.

"I am not saying that choice is an illusion. I am pointing to something and saying: "There! Right there! You see that? That's a choice, just as much as a calculator is adding numbers! It doesn't matter if it's deterministic! It doesn't matter if someone else predicted you'd do it or designed you to do it! It doesn't matter if it's made of parts and caused by the dynamics of those parts! It doesn't matter if it's physically impossible for you to have finally arrived at any other decision after all your agonizing! It's still a choice!""

It seems to me that's an arbitrary claim grounded in aesthetics. You could just as easily say: "You see that? That's not a choice, just as much as a calculator is adding numbers! It doesn't matter if it's deterministic! It doesn't matter if someone else predicted you'd do it or designed you to do it! It doesn't matter if it's made of parts and caused by the dynamics of those parts! It doesn't matter if it's physically impossible for you to have finally arrived at any other decision after all your agonizing! It's not a choice!"

Doly wrote "The problem I have with the idea that choices are determined is that it doesn't really explain what the hell we are doing when we are "thinking hard" about a decision. Just running an algorithm? But if everything we ever do in our minds is running an algorithm, why does "thinking hard" feel different from, let's say, walking home in "autopilot mode"? In both cases there are estimations of future probabilities and things that "could" happen, but in the second case we feel it's all done in "automatic". Why does this "automatic" feel different from the active "thinking hard"?"

Good questions, and well worth exploring. At this stage quality neuroscience research can probably help us answer a lot of these determinism/human choice questions better than blog comments debate. It's one thing to say human behavior is determined at the level of quantum mechanics. What's traditionally counterintuitive (as shown by Jim Baxter's quotes) is how determined human behavior is (despite instances of the illusion of choice/free will) at higher levels of cognition. I think it's analogous to how our vision isn't constructed at higher levels of cognition in the way we tend to intuit that it is (it's not like a movie camera).

Dude. Eliezer.

Every time I actually read your posts, I find them quite interesting.

But I rarely do, because the little "oh I was so brilliant as a kid that I already knew this, now I will explain it to YOU! Lucky you!" bits with which you preface every single post make me not interested in reading your writing.

I wouldn't write this comment if I didn't think you were a smart guy who has something to say

"What concrete state of the world - which quarks in which positions - corresponds to "There are three apples on the table, and there could be four apples on the table"? Having trouble answering that? Next, say how that world-state is different from "There are three apples on the table, and there couldn't be four apples on the table.""

For the former: An ordinary kitchen table with three apples on it. For the latter: An ordinary kitchen table with three apples on it, wired to a pressure-sensitive detonator that will set off 10 kg of C4 if any more weight is added onto the table.

"But "I could have a heart attack at any time" and "I could have a heart attack any time I wanted to" are nonetheless not exactly the same usage of could, though they are confusingly similar."

They both refer to possible consequences if the initial states were changed, while still obeying a set of constraints. The first refers to a change in initial external states ("there's a clot in the artery"/"there's not a clot in the artery"), while the second refers to a change in initial internal states ("my mind activates the induce-heart-attack nerve signal"/"my mind doesn't activate the induce-heart-attack nerve signal"). Note that "could" only makes sense if the initial conditions are limited to a pre-defined subset. In the second apple-table case above, you would say that the statement "there could be four apples on the table" is false, but you have to assume that the range of initial states the "could" quantifies over doesn't include states in which the detonator is disabled. For the heart-attack example, you have to exclude initial states in which the Mad Scientist Doctor (tm) snuck in in the middle of the night and wired up a deliberation-based heart-attack-inducer.
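The point about "could" being relative to a pre-defined subset of initial states can be sketched concretely. This is a toy illustration only; the function names, the state encoding, and the detonator rule are all made up for the example, not taken from the post.

```python
# Sketch: "could X" = "some allowed initial state evolves into X".
# Which states count as "allowed" is stipulated up front, not read
# off the world.

def could(outcome, initial_states, evolve):
    """True if some allowed initial state evolves into the outcome."""
    return any(evolve(s) == outcome for s in initial_states)

# Toy apple-table dynamics: adding a fourth apple while the detonator
# is armed produces "boom" instead of four apples on the table.
def evolve(state):
    apples, detonator_armed = state
    if apples > 3 and detonator_armed:
        return "boom"
    return apples

# If every allowed initial state has the detonator armed, "four apples
# on the table" is unreachable:
armed_states = [(4, True), (3, True)]
print(could(4, armed_states, evolve))   # False

# Widen the allowed set to include a disabled detonator, and the same
# sentence "there could be four apples" flips to true:
wider_states = armed_states + [(4, False)]
print(could(4, wider_states, evolve))   # True
```

The same "could"-sentence comes out true or false depending only on which initial states we agreed to quantify over, which is the commenter's point.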

Eliezer: What lame challenge are you putting up, asking for a state of the world which corresponds to possibility? No one claims that possibility is a state of the world.

An analogous error would be to challenge those of us who believe in musical scales to name a note that corresponds to a C major scale.

Jim Baxter: Good luck to you. So far as I can tell, a majority here take their materialism as a premise, not as something subject to review.

The problem I have with this idea that choices are deterministic is that people end up saying things such as:

Don't think "Ohhh, the decision I'll make is already determined, so I can as well relax and don't worry too much."

In other words: "Your decision is determined, but please choose to decide carefully." Hmmm... slight contradiction there.

The problem I have with the idea that choices are determined is that it doesn't really explain what the hell we are doing when we are "thinking hard" about a decision. Just running an algorithm? But if everything we ever do in our minds is running an algorithm, why does "thinking hard" feel different from, let's say, walking home in "autopilot mode"? In both cases there are estimations of future probabilities and things that "could" happen, but in the second case we feel it's all done in "automatic". Why does this "automatic" feel different from the active "thinking hard"?

The two different algorithms "feel" different because they have different effects on our internal mental state, particularly the part of our mental state that is accessible to the part of us that describes how we feel (to ourselves as well as others).

Hmm, it seems my class on free will may actually be useful.

Eliezer: you may be interested to know that your position corresponds almost precisely to what we call classical compatibilism. I was likewise a classical compatibilist before taking my course - under ordinary circumstances, it is quite a simple and satisfactory theory. (It could be your version is substantially more robust than the one I abandoned, of course. For one, you would probably avoid the usual trap of declaring that agents are responsible for acts if and only if the acts proceed from their free will.)

Hopefully Anonymous: Are you using Eliezer's definition of "could", here? Remember, Eliezer is saying "John could jump off the cliff" means "If John wanted, John would jump off the cliff" - it's a counterfactual. If you reject this definition as a possible source of free will, you should do so explicitly.

The algorithm has to assume many different possible actions as having been taken, and extrapolate their consequences, and then choose an action whose consequences match the goal ... The algorithm, therefore, cannot produce an output without extrapolating the consequences of itself producing many different outputs.

It seems like you need to talk about our "internal state space", not our internal algorithms -- since as you pointed out yourself, our internal algorithms might never enumerate many possibilities (jumping off a cliff while wearing a clown suit) that we still regard as possible. (Indeed, they won't enumerate many possibilities at all, if they do anything even slightly clever like local search or dynamic programming.)

Otherwise, if you're not willing to talk about a state space independent of algorithms that search through it, then your account of counterfactuals and free will would seem to be at the mercy of algorithmic efficiency! Are more choices "possible" for an exponential-time algorithm than for a polynomial-time one?
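The distinction being drawn here, between the state space itself and the algorithm that searches it, can be made concrete with the "trivially simple planner" from the post. In this sketch, "possible" means reachable in the state space, which is a fact about the space and not about how cleverly (or exhaustively) any particular algorithm explores it. The toy counter domain and the function names are illustrative assumptions, not from the post.

```python
from collections import deque

def reachable_states(initial, actions):
    """Breadth-first enumeration of every state reachable from
    `initial` via some sequence of `actions`. An action returns the
    successor state, or None if it doesn't apply."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        for act in actions:
            nxt = act(state)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Toy domain: an integer counter capped at 0..3, with increment and
# decrement actions.
inc = lambda s: s + 1 if s < 3 else None
dec = lambda s: s - 1 if s > 0 else None

print(sorted(reachable_states(0, [inc, dec])))  # [0, 1, 2, 3]
```

A smarter planner might find a goal without ever enumerating most of these states, but that doesn't change which states are reachable, which is why an account of "could" pinned to the state space survives algorithmic shortcuts like local search.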

Hopefully, I have no idea what you mean by the phrase, "deliberation is illusory". I do not understand what state of the world corresponds to this being the case. This is not Socratic questioning, I literally have no idea what you're trying to say.

Hopefully probably means that the only acceptable definitions of deliberation involve choices between real possibilities. To Hopefully you probably sound like someone saying unicorns exist but don't have horns.

Roland and Andy, I think you're misreading this formulation of determinism. One may have the illusion/hallucination of being able to engage in deliberation, or relax. But the analogy is more like a kid thinking they're playing a game that's on autoplay mode. The experience of deliberating, the experience of relaxing, the "consequences" are all determined, even if they're unknowable to the person having these experiences. I'm not saying this is the best model of reality, but at this stage of our knowledge it seems to me to be a plausible model of reality.

Just wanted to add this: “Could” also sometimes means “is physically possible”. We think we have free will because we don’t know all the physics & facts that cause our brains to end up in the states they end up in. The more physics & facts we know, the fewer possibilities seem possible to us. E.g. if I know nothing about what’s inside the bowl and then take out a red ball from the bowl, it seems that I could have taken out a yellow ball. However, if I knew in the beginning that there were only red balls in the bowl, I would know that taking out a yellow ball was impossible. In the same way, if I don’t know all the details about how my brain works, it seems that I could have decided to eat a banana or jump off the cliff. If I knew everything about how my brain works, I would see that it was physically impossible for me to decide to jump off the cliff. If we knew all the processes perfectly well, we would either always see one possibility, or as many possibilities as there are Everett branches.
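The ball-in-the-bowl point, that epistemic "could" shrinks as knowledge grows, can be sketched in a few lines. Everything here (the color set, the knowledge encoding) is an illustrative assumption, not from the comment.

```python
# Sketch: the set of "possible" draws is relative to the drawer's
# knowledge, not to the bowl. Unknown colors default to "possible".

def possible_draws(knowledge):
    """Colors the drawer still considers possible, given a partial map
    from color to whether the bowl contains it."""
    all_colors = {"red", "yellow", "blue"}
    return {c for c in all_colors if knowledge.get(c, True)}

# Knowing nothing about the bowl, every color seems possible:
print(sorted(possible_draws({})))  # ['blue', 'red', 'yellow']

# Learning that the bowl holds only red balls rules the rest out:
print(sorted(possible_draws({"yellow": False, "blue": False})))  # ['red']
```

The bowl never changed; only the knowledge did, which is the sense in which this kind of possibility is "in the mind".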

We've previously discussed how probability is in the mind. If you are uncertain about whether a classical coin has landed heads or tails, that is a fact about your state of mind, not a property of the coin.

The argument is invalid. The existence of subjective uncertainty doesn't imply the non-existence of objective indeterminism.

The coin itself is either heads or tails.

That doesn't mean it must have been whatever it was.

But people forget this, and think that coin.probability == 0.5, which is the Mind Projection Fallacy: treating properties of the mind as if they were properties of the external world. So I doubt it will come as any surprise to my longer-abiding readers, if I say that possibility is also in the mind. What concrete state of the world - which quarks in which positions - corresponds to "There are three apples on the table, and there could be four apples on the table"? Having trouble answering that? Next, say how that world-state is different from "There are three apples on the table, and there couldn't be four apples on the table."

I couldn't agree more that a snapshot of state doesn't imply anything about modalities of possibility. Problem is, it doesn't imply anything about modalities of necessity either. To say there are no real possibilities is to say that everything happens necessarily, deterministically and inevitably. But there is neither necessity nor possibility, neither determinism, nor indeterminism, in a snapshot.

And then it's even more trouble, if you try to describe could-ness in a world in which there are no agents, just apples and tables.

To describe both possibility and necessity, you need rules. In general, "X-possible" means "not forbidden by rules X".

In the case of physical possibility and necessity, the rules are physical laws. A snapshot of state doesn't give you any information about the way the state will evolve. Hence the absence of possibility from state snapshots, not to mention the absence of necessity.

I don't know what physical laws are, ontologically, but they are not made of atoms, and they are therefore a problem for simpler-minded physicalism.

You could even say that "could" for an action is always defined relative to the agent who takes that action, in which case I can simultaneously make the following two statements:

  • NonSuicidalGuy could jump off the cliff.
  • It is impossible that NonSuicidalGuy will hit the ground.

Isn't that a logical mistake, though? For that to be correct, it must be that NonSuicidalGuy will hit the ground if and only if he chooses to jump off the cliff.

"I know what it means for you to rescue a toddler from the orphanage. What does it mean for you to could-have-not done it? Can you describe the corresponding state of the world without "could", "possible", "choose", "free", "will", "decide", "can", "able", or "alternative"" ...

Instantiate a metaverse, what most people refer to as a multiverse. Have it contain at least two copies of our universe at the relevant point in time. In one copy, ensure the child is rescued. In the other, ensure the child is not rescued. Resume run; observe results.

Flaw: lack of computational resources.
Alternatively:

  1. Flip a purely quantum coin. Heads: rescue the child. Tails: do not rescue the child.
  2. Find a flaw in the matrix that lets you communicate with parallel histories. And yes, I see the blatant flaw in how #2 is most likely impossible.

I just figured I could manage to forego the first iteration of taboo words.

This is an awesome, intuitive, sense-making answer to a question I've been thinking about for quite some time. Thanks.