Followup to: Probability is in the Mind, The Quotation is not the Referent

I suggest that a primary cause of confusion about the distinction between "belief", "truth", and "reality" is qualitative thinking about beliefs.

Consider the archetypal postmodernist attempt to be clever:

"The Sun goes around the Earth" is true for Hunga Huntergatherer, but "The Earth goes around the Sun" is true for Amara Astronomer!  Different societies have different truths!

No, different societies have different beliefs.  Belief is of a different type than truth; it's like comparing apples and probabilities.

Ah, but there's no difference between the way you use the word 'belief' and the way you use the word 'truth'!  Whether you say, "I believe 'snow is white'", or you say, "'Snow is white' is true", you're expressing exactly the same opinion.

No, these sentences mean quite different things, which is how I can conceive of the possibility that my beliefs are false.

Oh, you claim to conceive it, but you never believe it.  As Wittgenstein said, "If there were a verb meaning 'to believe falsely', it would not have any significant first person, present indicative."

And that's what I mean by putting my finger on qualitative reasoning as the source of the problem.  The dichotomy between belief and disbelief, being binary, is confusingly similar to the dichotomy between truth and untruth.

So let's use quantitative reasoning instead.  Suppose that I assign a 70% probability to the proposition that snow is white.  It follows that I think there's around a 70% chance that the sentence "snow is white" will turn out to be true.  If the sentence "snow is white" is true, is my 70% probability assignment to the proposition, also "true"?  Well, it's more true than it would have been if I'd assigned 60% probability, but not so true as if I'd assigned 80% probability.

When talking about the correspondence between a probability assignment and reality, a better word than "truth" would be "accuracy".  "Accuracy" sounds more quantitative, like an archer shooting an arrow: how close did your probability assignment strike to the center of the target?

To make a long story short, it turns out that there's a very natural way of scoring the accuracy of a probability assignment, as compared to reality: just take the logarithm of the probability assigned to the real state of affairs.

So if snow is white, my belief "70%: 'snow is white'" will score -0.51 bits:  Log2(0.7) = -0.51.

But what if snow is not white, as I have conceded a 30% probability is the case?  If "snow is white" is false, my belief "30% probability: 'snow is not white'" will score -1.73 bits.  Note that -1.73 < -0.51, so I have done worse.

About how accurate do I think my own beliefs are?  Well, my expectation over the score is 70% * -0.51 + 30% * -1.73 = -0.88 bits.  If snow is white, then my beliefs will be more accurate than I expected; and if snow is not white, my beliefs will be less accurate than I expected; but in neither case will my belief be exactly as accurate as I expected on average.
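The arithmetic above can be sketched in a few lines of Python (the variable names are my own; the scoring rule itself is just the log score described above):

```python
import math

def log_score(p_assigned_to_actual_outcome):
    """Accuracy in bits: log2 of the probability assigned to what actually happened."""
    return math.log2(p_assigned_to_actual_outcome)

p = 0.7  # my probability that snow is white

score_if_white = log_score(p)          # log2(0.7)  = -0.515 bits
score_if_not_white = log_score(1 - p)  # log2(0.3)  = -1.737 bits

# My own expectation of my score: weight each outcome's score
# by the probability I assign to that outcome.
my_expected_score = p * score_if_white + (1 - p) * score_if_not_white  # = -0.881 bits
```

Note that the actual score will be either -0.515 or -1.737 bits, never the -0.881 bits expected on average, just as the text says.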

All this should not be confused with the statement "I assign 70% credence that 'snow is white'."  I may well believe that proposition with probability ~1—be quite certain that this is in fact my belief.  If so I'll expect my meta-belief "~1: 'I assign 70% credence that "snow is white"'" to score ~0 bits of accuracy, which is as good as it gets.

Just because I am uncertain about snow, does not mean I am uncertain about my quoted probabilistic beliefs.  Snow is out there, my beliefs are inside me.  I may be a great deal less uncertain about how uncertain I am about snow, than I am uncertain about snow.  (Though beliefs about beliefs are not always accurate.)

Contrast this probabilistic situation to the qualitative reasoning where I just believe that snow is white, and believe that I believe that snow is white, and believe "'snow is white' is true", and believe "my belief '"snow is white" is true' is correct", etc.  Since all the quantities involved are 1, it's easy to mix them up.

Yet the nice distinctions of quantitative reasoning will be short-circuited if you start thinking "'"snow is white" with 70% probability' is true", which is a type error.  It is a true fact about you, that you believe "70% probability: 'snow is white'"; but that does not mean the probability assignment itself can possibly be "true".  The belief scores either -0.51 bits or -1.73 bits of accuracy, depending on the actual state of reality.

The cognoscenti will recognize "'"snow is white" with 70% probability' is true" as the mistake of thinking that probabilities are inherent properties of things.

From the inside, our beliefs about the world look like the world, and our beliefs about our beliefs look like beliefs.  When you see the world, you are experiencing a belief from the inside.  When you notice yourself believing something, you are experiencing a belief about belief from the inside.  So if your internal representations of belief, and belief about belief, are dissimilar, then you are less likely to mix them up and commit the Mind Projection Fallacy—I hope.

When you think in probabilities, your beliefs, and your beliefs about your beliefs, will hopefully not be represented similarly enough that you mix up belief and accuracy, or mix up accuracy and reality.  When you think in probabilities about the world, your beliefs will be represented with probabilities in the open interval (0, 1), unlike the truth-values of propositions, which are in {true, false}.  As for the accuracy of your probabilistic belief, you can represent that in the range (-∞, 0).  Your probabilities about your beliefs will typically be extreme.  And things themselves—why, they're just red, or blue, or weighing 20 pounds, or whatever.
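As a toy sketch of this type distinction (the type names here are my own invention, not standard terminology): a belief is a probability in (0, 1), a state of the world is a truth-value, and accuracy is a score in (-∞, 0) relating the two.

```python
import math
from typing import NewType

Probability = NewType("Probability", float)    # a belief: a number in the open interval (0, 1)
AccuracyBits = NewType("AccuracyBits", float)  # a score: a number in (-inf, 0)
# Truth-values of propositions are just bool: True or False.

def accuracy(belief: Probability, snow_is_white: bool) -> AccuracyBits:
    """Score a probabilistic belief against reality.

    The belief itself is never 'true' or 'false'; only the scored
    accuracy relates it to the actual state of the world.
    """
    p = belief if snow_is_white else 1 - belief
    return AccuracyBits(math.log2(p))
```

Asking whether a `Probability` is "true" is then visibly a type error: `accuracy` takes both a belief and a state of the world, and neither alone determines the score.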

Thus we will be less likely, perhaps, to mix up the map with the territory.

This type distinction may also help us remember that uncertainty is a state of mind.  A coin is not inherently 50% uncertain of which way it will land.  The coin is not a belief processor, and does not have partial information about itself.  In qualitative reasoning you can create a belief that corresponds very straightforwardly to the coin, like "The coin will land heads".  This belief will be true or false depending on the coin, and there will be a transparent implication from the truth or falsity of the belief, to the facing side of the coin.

But even under qualitative reasoning, to say that the coin itself is "true" or "false" would be a severe type error.  The coin is not a belief, it is a coin.  The territory is not the map.

If a coin cannot be true or false, how much less can it assign a 50% probability to itself?

 

Part of the sequence Reductionism

Next post: "Reductionism"

Previous post: "The Quotation is not the Referent"

Comments


If we had enough cputime, we could build a working AI using AIXItl.

Threadjack

People go around saying this, but it isn't true:

1) Both AIXI and AIXItl will at some point drop an anvil on their own heads just to see what happens (test some hypothesis which asserts it should be rewarding), because they are incapable of conceiving that any event whatsoever in the outside universe could change the computational structure of their own operations. AIXI is theoretically incapable of comprehending the concept of drugs, let alone suicide. Also, the math of AIXI assumes the environment is separably divisible - no matter what you lose, you get a chance to win it back later.

2) If we had enough CPU time to build AIXItl, we would have enough CPU time to build other programs of similar size, and there would be things in the universe that AIXItl couldn't model.

3) AIXItl (but not AIXI, I think) contains a magical part: namely a theorem-prover which shows that policies never promise more than they deliver.

Double Threadjack

On a related note, do you think it would be likely - or even possible - for a self-modifying Artificial General Intelligence to self-modify into a non-self-modifying, specialized intelligence?

For example, suppose that Deep Blue's team of IBM programmers had decided that the best way to beat Kasparov at chess would be to structure Deep Blue as a fully self-modifying artificial general intelligence, with a utility function that placed a high value on winning chess matches. And suppose that they had succeeded in making Deep Blue friendly enough to prevent it from attempting to restructure the Earth into a chess-match-simulating supercomputer. Indeed, let's just assume that Deep Blue has strong penalties against rebuilding its hardware in any significant macroscopic way, and is restricted to rewriting its own software to become better at chess, rather than attempting to manipulate humans into building better computers for it to run on, or any such workaround. And let's say this happens in the late 1990's, as in our universe.

Would it be possible that AGI Deep Blue could, in theory, recognize its own hardware limitations, and see that the burden of its generalized intelligence incurs a massive penalty on its limited computing resources? Might it decide that its ability to solve general problems doesn't pay rent relative to its computational overhead, and rewrite itself from scratch as a computer that can solve only chess problems?

As a further possibility, a limited general intelligence might hit on this strategy as a strong winning candidate, even if it were allowed to rebuild its own hardware, especially if it perceives a time limit. It might just see this kind of software optimization as an easier task with a higher payoff, and decide to pursue it rather than the riskier strategy of manipulating external reality to increase its available computing power.

So what starts out as a general-purpose AI with a utility function that values winning chess matches, might plausibly morph into a computer running a high-speed chess program with little other hint of intelligence.

If so, this seems like a similar case to the Anvil Problem, except that in the Anvil Problem the AI just is experimenting for the heck of it, without understanding the risk. Here, the AI might instead decide to knowingly commit intellectual suicide as a part of a rational winning strategy to achieve its goals, even with an accurate self-model.

It might be akin to a human auto worker realizing they could improve their productivity by rebuilding their own body into a Toyota spot-welding robot. (If the only atoms they have to work with are the ones in their own body, this might even be the ultimate strategy, rather than just one they think of too soon and then, regrettably, irreversibly attempt).

More generally, it seems to be a general assumption that a self-modifying AI will always self-modify to improve its general problem-solving ability and computational resources, because those two things will always help it in future attempts at maximizing its utility function. But in some cases, especially in the case of limited resources (time, atoms, etc), it might find that its best course of action to maximize its utility function is to actually sacrifice its intelligence, or at least refocus it to a narrower goal.

This seems to me like more evidence that intelligence is in part a social/familial thing: just as human beings have to be embedded in a society in order to develop a certain level of intelligence, a certain level of intuition for "don't do this, it will kill you", informed by the nuance that is only possible with a wide array of individual failures informing group success, might be a prerequisite for higher-level reasoning beyond a certain point (and might constrain the ultimate levels upon which intelligence can rest).

I've seen more than enough children try to do things similar enough to dropping an anvil on their head to consider this 'no worse than human' (in fact our hackerspace even has an anvil, and one kid has ha-ha-only-serious suggested dropping said anvil on his own head). If AIXI/AIXItl can reach this level, at the very least it should be capable of oh-so-human reasoning (up to and including the kinds of risky behaviour that we all probably would like to pretend we never engaged in), and could possibly transcend it in the same way that humans do: by trial and error, by limiting potential damage to individuals or groups, and by fighting the neverending battle against ecological harms on its own terms, on the time schedule of 'let it go until it is necessary to address the possible existential threat'.

Of course it may be that the human way of avoiding species self-destruction is fatally flawed, including but not limited to creating something like AIXI/AIXItl. But it seems to me that is a limiting, rather than a fatal flaw. And it may yet be that the way out of our own fatal flaws, and the way out of AIXI/AIXItl's fatal flaws are only possible by some kind of mutual dependence, like the mutual dependence of two sides of a bridge. I don't know.

I'm a total dilettante when it comes to this sort of thing, so this may be a totally naive question... but how is it that this comment has only +5 karma, considering how apparently fundamental it is to future progress in FAI?

The comment predates the current software; when it was posted (on Overcoming Bias) there was no voting. You can tell such articles by the fact that their comments are linear, with no threaded replies (except for more recently posted ones).

There's a long story at the end of The Mind's Eye (or is it The Mind's I?) in which someone asks a question:

"What colour is this book?"

"I believe it's red."

"Wrong."

There follows a wonderfully convoluted dialogue. The point seems to be that someone who believes the book is red would say "It's red," rather than "I believe it's red."

Caledonian: The statement "X is true" could be properly reworded as "X corresponds with the world."  The statement "I believe X" can be properly reworded as "X corresponds with my mental state."  Both are descriptive statements, but one is asserting a correspondence between a statement and the world outside your brain, while the other is describing a correspondence between the statement and what is in your brain.

There will be a great degree of overlap between these two correspondence relations. Most of our beliefs, after all, are (probably) true. That being said, the meanings are definitely not the same. Just because it is not sensible for us to say that "x is true" unless we also believe x (because we rarely have reason to assert what we do not believe), does not mean that the concepts of belief and truth are the same thing.

It is meaningful (if unusual) to say: "I believe X, but X is not true." No listener would have difficulty understanding the meaning of that sentence, even if they found it an odd thing to assert. Any highly reductionist account of truth or belief will always have difficulty explaining the content that everyday users of English would draw from that statement. Likewise, no normal user of English would think that "I believed X, but it isn't true," would necessarily mean, "X used to be true, but now it is false," which seems like the only possible reading, on your account.

[nitpick] That is to say, just as there is an objective (and not merely subjective) sense in which two rods can have the same length

Well, there are the effects of relativity to keep in mind, but if we specify an inertial frame of reference in advance and the rods aren't accelerating, we should be able to avoid those. ;) [/nitpick]

I'm joking, of course; I know what you meant.

If a coin has certain gross physical features such that a rational agent who knows those features (but NOT any details about how the coin is thrown) is forced to assign a probability p to the coin landing on "heads", then it seems reasonable to me to speak of discovering an "objective chance" or "propensity" or whatever.

You're saying "objective chance" or "propensity" depends on the information available to the rational agent. My understanding is that the "objective" qualifier usually denotes a probability that is thought to exist independently of any agent's point of view. Likewise, my understanding of the term "propensity" is that it is thought to be some inherent quality of the object in question. Neither of these phrases usually refers to information one might have about an object.

You've divided a coin-toss experiment's properties into two categories: "gross" (we know these) and "fine" (we don't know these). You can't point to any property of a coin-toss experiment and say that it is inherently, objectively gross or fine -- the distinction is entirely about what humans typically know.

In short, I'm saying you agree with Eliezer, but you want to appropriate the vocabulary of people who don't.

(I'd agree that such probabilities can be "objective" in the sense that two different agents with the exact same state of information are rationally required to have the same probability assessment. Probability isn't a function of an individual -- it's a function of the available information.)

It's not too uncommon for people to describe themselves as uncertain about their beliefs. "I'm not sure what I think about that," they will say on some issue. I wonder if they really mean that they don't know what they think, or if they mean that they do know what they think, and their thinking is that they are uncertain where the truth lies on the issue in question. Are there cases where people can be genuinely uncertain about their own beliefs?

I imagine what they might be doing is acknowledging that they have a variety of reactions to the facts or events in question, but haven't taken the time to weigh them so as to come up with a blend or selection that is one of: {most accurate, most comfortable, most high status}

I can testify to that.

Say, does anyone know where I can find unbiased information on the whole Christianity/Atheism thing?

How strict are your criteria for "unbiased?"

Some writers take more impartial approaches than others, but strict apatheists are unlikely to bother doing comprehensive analyses of the evidence for or against religions.

Side note: if you're trying to excise bias in your own thinking, it's worth stopping to ask yourself why you would frame the question as a dichotomy between Christianity and atheism in the first place.

I'm not sure how strict is strict, but maybe something that is trying to be unbiased. A lot of websites present both sides of the story, and then logically conclude that their side is the winner, 100 percent of the time.

And I used Atheism/Christianity because I was born a Christian and I think that Atheism is the only real, um, threat, let's say, to my staying a Christian.

Although I haven't actually tried to research anything else, I realize.

Well, Common Sense Atheism is a resource by a respected member here who documented his extensive investigations into theology, philosophy and so on, which he started as a devout Christian and finished as an atheist.

Unequally Yoked is a blog coming from the opposite end, someone familiar with the language of rationality who started out as an atheist and ended up as a theist.

I don't actually know where Leah (the author of the latter) archives her writings on the process of her conversion; I've really only read Yvain's commentary on them, but she's a member here and the only person I can think of who's written from the convert angle, who I haven't read and written off for bad reasoning.

By the time I encountered either person's writings, I'd already hashed out the issue to my own satisfaction over a matter of years, and wasn't really looking for more resources, so to the extent that I can vouch for them, it's on the basis of their writings here rather than at their own sites, which is rather more extensive for Luke than Leah.

However, I will attest that my own experience of researching and developing my opinion on religion was as much shaped by reading up on many world religions as it was by reading religious and atheist philosophy. If you're prepared to investigate the issue thoroughly for a long time, I suggest reading up on a lot of other religions, in-depth. Many of my own strongest reasons for not buying into common religious arguments are rooted, not in my experience with atheistic philosophy, but my experience with a wide variety of religions.

Leah has written less than one might hope on her reasons for converting, and basically nothing on how she now deals with all the usual atheist objections to Christian belief. Her primary reason for conversion appears to have been that Christianity fits better than atheism with the moral system she has always found most believable.

Someone who I think is an LW participant (but I don't know for sure, and I don't know under what name) wrote this fairly lengthy apologia for atheism; I think it was a sort of open letter to his friends and family explaining why he was leaving Christianity.

In the course of my own transition from Christianity to atheism I wrote up a lot of notes (approximately as many words as one paperback book), attempting to investigate the issue as open-mindedly as I could. (When I started writing them I was a Christian; when I stopped I was an atheist.) I intermittently think I should put them up on the web, but so far haven't done so.

There are any number of books looking more or less rigorously at questions like "does a god exist?" and "is Christianity right?". In just about every case, the author(s) take a quite definite position and are writing to persuade more than to explore, so they tend not to be, nor to feel, unbiased. Graham Oppy's "Arguing about gods" is pretty even-handed, but quite technical. J L Mackie's "The miracle of theism" is definitely arguing on the atheist side but generally very fair to the other guys, and shows what I think is a good tradeoff between rigour and approachability -- but it's rather old and doesn't address a number of the arguments that one now hears all the time when Christians and atheists argue. The "Blackwell Companion to Natural Theology" is a handy collection of Christians' arguments for the existence of God (and in some cases for more than that); not at all unbiased but its authors are at least generally trying to make sound arguments rather than just to sound persuasive.

who I haven't read and written off for bad reasoning.

Do you mind providing examples of what you consider to be not-bad reasoning, so that I might update my beliefs about the quality of her work? I have read many posts written by Leah about a range of topics, including her conversion to Catholicism, and I thought her arguments often made absolutely no sense.

Leah is an example of someone arguing from the convert angle who I haven't read and written off because I haven't read her convert stuff. I can't vouch for her arguments for conversion, I can only say that I wouldn't write her off in general as someone worth paying attention to.

I can't say the same of any of the other converts I can think of; C.S. Lewis is the usual go-to figure given by Christians, and while I have respect for his ability as a writer, I already know from my exposure to his apologetics that I couldn't direct anyone to him as a resource in good conscience.

I haven't read her convert stuff

Ah, thanks for the clarification. I misunderstood you. I thought you meant that you had read her conversion-related writings and found her reasoning to be not-bad.

I wouldn't write her off in general as someone worth paying attention to

Here is where we differ greatly, but I will continue reading her writings to see if my beliefs about the quality of her stuff will be updated upon more exposure to her thinking.

A lot of websites present both sides of the story, and then logically conclude that their side is the winner, 100 percent of the time.

I would be very surprised (and immediately suspicious) to find a website that didn't. People like to be right. If someone does a lot of research, writes up an article, and comes up with what appears to be overwhelming support for one side or the other, then they will begin to identify with their side. If that was the side they started with, then they would present an article along the lines of "Why [Our Side] Is Correct". If that was not the side they started with, then they would present an article along the lines of "Why I Converted To [The Other Side]".

If they don't come up with overwhelming support for one side or another, then I'd imagine they'd either claim that there is no strong evidence against their side, or write up an article in support of agnosticism.

It's not just that there's overwhelming support for their side, it's that there is only support for their side, and this happens on both sides.

That's surprising. I'd expect at least some of them to at least address the arguments of the other side.

I'm pretty sure proof that the other side's claims are mistaken is included in "support for their side".

I don't see the theism/atheism debate as a policy debate. There is a factual question underlying it, and that factual question is "does God exist?" I find it very hard to imagine a universe where the answer to that question is neither 'yes' nor 'no'.

I find it very hard to imagine a universe where the answer to that question is neither 'yes' nor 'no'.

I have been in many conversations where the question being referred to by the phrase "does God exist?" seems sufficiently vague/incoherent that it cannot be said to have a 'yes' or 'no' answer, either because it's unclear what "God" refers to or because it's unclear what rules of reasoning/discourse apply to discussing propositions with the word "God" in them.

Whether such conversations have anything meaningful to do with the theism/atheism debate, I don't know. I'd like to think not, just like the existence of vague and incoherent discussions about organic chemistry doesn't really say much about organic chemistry.

I'm not so sure, though, as it seems that if we start with our terms and rules of discourse clearly defined and shared, there's often no 'debate' left to have.

I have been in many conversations where the question being referred to by the phrase "does God exist?" seems sufficiently vague/incoherent that it cannot be said to have a 'yes' or 'no' answer, either because it's unclear what "God" refers to or because it's unclear what rules of reasoning/discourse apply to discussing propositions with the word "God" in them.

That's in important point. There are certain definitions of 'god', and certain rules of reasoning, which would cause my answer to the question of whether God exists to change. (For that matter, there are definitions of 'exists' which might cause my answer to change). For example, if the question is whether the Flying Spaghetti Monster exists, I'd say 'no' with high probability; unless the word 'exists' is defined to include 'exists as a fictional construct, much like Little Red Riding Hood' in which case the answer would be 'yes' with high probability (and provable by finding a story about it).

...it seems that if we start with our terms and rules of discourse clearly defined and shared, there's often no 'debate' left to have.

Clearly defining and sharing the terms and rules of discourse should be a prerequisite for a proper debate. Otherwise it just ends up in a shouting match over semantics, which isn't helpful at all.

Important quote from that article:

On questions of simple fact (for example, whether Earthly life arose by natural selection) there's a legitimate expectation that the argument should be a one-sided battle; the facts themselves are either one way or another, and the so-called "balance of evidence" should reflect this. Indeed, under the Bayesian definition of evidence, "strong evidence" is just that sort of evidence which we only expect to find on one side of an argument.

A lot of websites present both sides of the story, and then logically conclude that their side is the winner, 100 percent of the time.

If you presented both sides of an issue, concluding the other side was right, how would you then conclude your side is the winner?

"If there were a verb meaning 'to believe falsely,' it would not have any significant first person present indicative."

If you presented both sides of an issue, concluding the other side was right, how would you then conclude your side is the winner?

If they are sub-issues for a main issue (like the policy impacts of a large decision), one might expect things to go the other way sometimes. "Supporters claim that minimum wages give laborers a stronger bargaining position at the cost of increased unemployment, which may actually raise the total wages going to a particularly defined group. This is possibly true, but doesn't seem strong enough to overcome the efficiency objections as well as the work experience objections."

'Possibly true' is not agreeing. If you conceded the sub-issue without changing your side, then the sub-issue must have been tangential and not definitive. In a conjunctive counterargument, I can concede some or almost all of the conjuncts and agree, without agreeing on the conclusion - and so anyone looking at my disagreements will note how odd it is that I always conclude I am currently correct...

"Unbiased" is a tricky word to use here, because typically it just means a high-quality, reliable source. But what I think you're looking for is a source that is high quality but intentionally resists drawing conclusions even when someone trying to be accurate would do that - it leaves you, the reader, to do the conclusion-drawing as much as possible (perhaps at the cost of reliability, like a sorcerer who speaks only in riddles). Certain history books are the only sources I've thought of that really do this.

You might (with difficulty) find an unbiased investigation into theism vs atheism

"unbiased", "Christianity/Atheism"... ok, I probably shouldn't be laughing, but... well, I am laughing.

I don't think there is ever a direct refutation of religion in the Sequences, but if you read all of them, you will find yourself much better equipped to think about the relevant questions on your own.

EY is himself an atheist, obviously, but each article in the Sequences can stand upon its own merit in reality, regardless of whether it was written by an atheist or not. Since EY assumes atheism, you might run across a couple of examples where he assumes the reader is an atheist, but since his goal is not to convince you to be an atheist, but rather to show you how to properly examine reality, I think you'd best start off clicking "Sequences" at the top right of the website.

Consider the archetypal postmodernist attempt to be clever:

I believe the correct term here is "straw postmodernist", unless of course you're actually describing a real (and preferably citable) example.

What comes to mind is the Alan Sokal hoax and the editors who were completely taken in by it; the subject matter was this sort of anti-realism.

Yes, because Sokal didn't achieve anything actually noteworthy. He deliberately chose a very bad and ill-regarded journal (not even peer-reviewed) to hoax. Don't believe the hype.

Postmodernism contains stupendous quantities of cluelessness, introspection and bullshit, it's true. However, it's not a useless field and saying trivially stupid things is not "archetypal" any more than being a string theorist requires the personal abuse skills of Lubos Motl. Comparing the worst of the field you don't like to the best of your own field remains fallacious.

To be fair to Sokal, he didn't make such a huge fuss about it either; it was a small prank on his part, just having fun with people who were being silly. The problem is that the story resonates ("Sokal hoax" ~= "slays dragon of stupidity") in ways that aren't quite true.

Sokal also revealed the hoax as soon as his piece was published. He didn't allow time for other people in the field to notice it.

This seems like a dead thread, but I'll chance it anyway.

Eliezer, there's something off about your calculation of the expected score:

The expected score is something that should go up the more certain I am of something, right?

But in fact the expected score is highest when I'm most uncertain about something: If I believe with equal probability that snow might be white and non-white, the expected score is actually 0.5(-1) + 0.5(-1) = -1. This is the highest possible expected score.

In any other case, the expected score will be lower, as you calculate for the 70/30 case.

It seems like what you should be trying to do is minimize your expected score but maximize your actual score. That seems weird.

Looks like you've just got a sign error, anukool_j. -1 is the lowest possible expected score. The expected score in the 70/30 case is -0.88.

Graph.
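The correction can be checked numerically. Under your own belief p, the expected log score is p·log2(p) + (1−p)·log2(1−p), i.e. the negative of the entropy, which bottoms out at −1 bit when p = 0.5 and rises toward 0 as you approach certainty. A quick sketch:

```python
import math

def expected_score(p):
    """Expected log score (in bits) of assigning probability p, by your own lights."""
    if p in (0.0, 1.0):
        return 0.0  # certainty: the expected score is 0 bits, as good as it gets
    return p * math.log2(p) + (1 - p) * math.log2(1 - p)

# -1.0 bit at maximum uncertainty, rising toward 0 with increasing certainty:
# expected_score(0.5) = -1.0, expected_score(0.7) = -0.881, expected_score(0.99) = -0.081
```

So the 50/50 belief's expected score of −1 is the lowest possible, not the highest, and maximizing your expected score does reward certainty (when the certainty is warranted).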