The Kelly criterion is the optimal way to allocate one's bankroll over a lifetime to a series of bets, assuming the actor's utility increases logarithmically with the amount of money won. Most importantly, the criterion gives a principled way to decide between investments with identical expected value but different risk of default. It essentially stipulates that the fraction of one's bankroll invested in a class of bets should equal the expected net gain per unit staked divided by the payoff in case it pans out.
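
For the standard case of a bet that pays b per unit staked with probability p and loses the stake with probability q = 1 - p, this works out to f* = (b*p - q)/b. A minimal sketch in Python (the function name and example numbers are my own, purely for illustration):

```python
def kelly_fraction(p, b):
    """Fraction of bankroll to stake on a bet that pays b per unit
    staked with probability p and loses the stake otherwise."""
    q = 1.0 - p
    return max((b * p - q) / b, 0.0)  # never stake anything on a negative-EV bet

# Example: 60% chance to win even money (b = 1) -> stake 20% of the bankroll.
print(kelly_fraction(0.6, 1.0))  # ~0.2
```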

Now, nothing in the formalism restricts the rule to bets, or to money for that matter; it is applicable to any situation in which an actor as assumed above faces uncertainty and a possible payoff in utility. Aside from the obvious application to investments, e.g. bonds, it also applies to the purchase of insurance or cryonics services.

Buying insurance can obviously be modeled as a bet in the Kelly sense. A simple generalisation of the Kelly criterion leads to a formula that incorporates losses.
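
For instance, one simple generalisation treats a bet as gaining a fraction b of the stake with probability p and losing only a fraction a of it with probability q = 1 - p, and picks the fraction f that maximises expected log wealth; the closed form works out to f* = p/a - q/b. A numerical sketch under those assumptions (the function and example numbers are mine):

```python
import numpy as np

def generalized_kelly(p, b, a, grid=100001):
    """Fraction f of bankroll that maximises expected log wealth for a bet
    that gains f*b with probability p and loses f*a with probability 1 - p."""
    q = 1.0 - p
    f = np.linspace(0.0, min(1.0, 1.0 / a) - 1e-9, grid)  # keep 1 - f*a > 0
    growth = p * np.log(1 + f * b) + q * np.log(1 - f * a)
    return f[np.argmax(growth)]

# Example: 50% chance of gaining 3x the stake, a loss costs only half the stake.
# The closed form p/a - q/b gives 0.5/0.5 - 0.5/3 = 5/6 ~ 0.833.
print(generalized_kelly(0.5, 3.0, 0.5))
```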

An open question, to me at least, is whether it is possible to generalise the Kelly criterion to arbitrary probability distributions. Also, how can it be that integrating over all payoffs at constant expected value evaluates to infinity?

Finally, what would a similar criterion look like for other forms of utility functions?


I did not put this question in the open thread because I think the Kelly criterion deserves more of a discussion and is immediately relevant to this site's interests.


It's apparently not just for logarithmic utility functions. From the Wikipedia page:

In most gambling scenarios, and some investing scenarios under some simplifying assumptions, the Kelly strategy will do better than any essentially different strategy in the long run.

Right, over an infinite series of bets the probability that Kelly goes ahead of a different fixed allocation goes to 1. Some caveats:

  • In the long run, we're all dead: in decisions like retirement fund investments the game is short enough that Kelly takes too much risk of short-term losses and you should bet less than Kelly
  • Kelly doesn't maximize expected winnings: each bet where you bet more than Kelly multiplies your EV (relative to Kelly) in exchange for a chance of falling behind Kelly
  • A strategy that is "bet Kelly over the infinite series of bets, except for n all-in bets to get q times Kelly EV in exchange for probability p of losing it all" may not be "essentially different" but it's noteworthy and calls for betting more than Kelly in some bets
  • In an odd situation where your utility is linear or super-linear in winnings, the utility-maximizing strategy is 100% all-in bets, an essentially different strategy that beats Kelly in expected utility even in the long run (see the simulation sketch after this list)
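
A rough simulation of the first two caveats (my own sketch, with arbitrary bet parameters): betting more than Kelly, here going all-in every time, has a higher mean final bankroll, yet the Kelly bettor finishes ahead on almost every individual run.

```python
import numpy as np

rng = np.random.default_rng(0)
p, b, n_bets, n_runs = 0.6, 1.0, 10, 100_000
f_kelly = (b * p - (1 - p)) / b          # 0.2 for these numbers

wins = rng.random((n_runs, n_bets)) < p
kelly  = np.prod(np.where(wins, 1 + f_kelly * b, 1 - f_kelly), axis=1)
all_in = np.prod(np.where(wins, 1 + b, 0.0), axis=1)   # bet everything each time

print("mean final wealth:  Kelly", kelly.mean(), " all-in", all_in.mean())
print("P(Kelly bettor finishes ahead):", (kelly > all_in).mean())
```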

In the long run, we're all dead: in decisions like retirement fund investments the game is short enough that Kelly takes too much risk of short-term losses and you should bet less than Kelly

Which is one of the justifications for pension funds and annuities: by having a much longer timespan than any one retiree, they can make larger Kelly bets and see larger returns on investment, with benefits either to the retirees they are paying or to the larger economy. Hanson says that this implies that eventually the economy will be dominated by Kelly players.

"the utility-maximizing strategy is 100% all-in bets"

Not quite. It's going all-in when the expected value is greater than one, and not betting anything when it's less. If you have a 51% chance of doubling your money, go all in. If you have a 49% chance, don't bet anything. In fact, bet negative if that's allowed.
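
In code, the rule described here might look like the following sketch (names are mine; "bet negative" is read as taking the other side of the bet, if allowed):

```python
def linear_utility_stake(p, b):
    """Fraction of bankroll to stake when utility is linear in money:
    all-in on positive-EV bets, nothing (or the other side) otherwise."""
    edge = p * b - (1 - p)   # expected net gain per unit staked
    if edge > 0:
        return 1.0           # go all in
    if edge < 0:
        return -1.0          # bet against it, if that's allowed
    return 0.0

print(linear_utility_stake(0.51, 1.0))   #  1.0 (51% chance of doubling)
print(linear_utility_stake(0.49, 1.0))   # -1.0 (49% chance: take the other side)
```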

Right, and Kelly allocation is 0 for negative EV bets.

Carl, thanks, this is great!

In order for that to be true, you have to define "in the long run" in such a way that basically begs the question.

If you define "in the long run" to mean the expected value after that many bets, the Kelly criterion is beaten by taking whatever bet has the highest expected value. For example, suppose you have a bet that has a 50% chance of losing everything and a 50% chance of quadrupling your investment, the Kelly criterion says not to take it, since losing everything has infinite disutility. If you don't take it, your expected value is what you started with. If you take it n times, you have a 2^(-n) chance of having 4^n times as much as you started with, which gives an expected value of 2^n.

For example, suppose you have a bet that has a 50% chance of losing everything and a 50% chance of quadrupling your investment, the Kelly criterion says not to take it, since losing everything has infinite disutility.

A bet where you quadruple your investment has a b of 3, and p is .5. The Kelly criterion says you should bet (b*p-q)/b, which is (3*.5-.5)/3, which is one third of your bankroll every time. The expected value after n times is (4/3)^n.
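
A quick check of these numbers, and of why the all-in bettor's 2^n expected value is misleading (my own sketch; "typical" growth here means the long-run median, i.e. the geometric mean per bet):

```python
p, b = 0.5, 3.0              # 50% chance of quadrupling the stake
f = (b * p - (1 - p)) / b    # Kelly fraction = 1/3

# Expected growth factor per bet
ev_kelly  = p * (1 + f * b) + (1 - p) * (1 - f)    # 4/3
ev_all_in = p * (1 + b)                            # 2

# Typical (long-run median) growth factor per bet = geometric mean of outcomes
typ_kelly  = (1 + f * b) ** p * (1 - f) ** (1 - p)   # sqrt(4/3) ~ 1.155
typ_all_in = (1 + b) ** p * 0.0 ** (1 - p)           # 0: a single loss ruins you

print(ev_kelly, ev_all_in, typ_kelly, typ_all_in)
```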

The assumption of the Kelly criterion is that you get to decide the scale of your investment, and that the investment scales with your bankroll.

If you take it n times, you have a 2^(-n) chance of having 4^n times as much as you started with, which gives an expected value of 2^n.

Indeed, but the probability that the Kelly bettor does better than that bettor is 1-2^(-n)!

I think "in the long run" is used in the same sense as for the law of large numbers. The reason we get a different result is that the results of a bet constrain the possible choices for future bets, and it basically turns out that bets are roughly multiplicative in nature, which is why you want to maximize something like log(x) (because if x is multiplicative, log(x) is additive and the law of large numbers applies; that's not a proof, but it's the intuition).
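
That intuition can be made concrete: maximising the per-bet expected log growth p*log(1 + b*f) + (1 - p)*log(1 - f) recovers the usual Kelly fraction. A quick symbolic check (my own sketch, using sympy):

```python
import sympy as sp

p, b = sp.symbols('p b', positive=True)
f = sp.symbols('f')

# Expected log growth per bet when staking a fraction f on a bet paying b to 1
growth = p * sp.log(1 + b * f) + (1 - p) * sp.log(1 - f)

print(sp.solve(sp.diff(growth, f), f))   # [(b*p + p - 1)/b], i.e. (b*p - q)/b
```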

An open question, to me at least, is whether it is possible to generalise the Kelly criterion to arbitrary probability distributions.

You mean, the potential actions are discrete but the potential outcomes for those actions are continuous, with a probability measure over those outcomes, or that there is a non-discrete set of possible actions, or something else?

Also, how can it be that integrating over all payoffs at constant expected value evaluates to infinity?

I'm not sure I'm understanding this correctly. Are you asking how the St. Petersburg Paradox works?

Finally, what would a similar criterion look like for other forms of utility functions?

Before you take the derivative with respect to Delta, apply the desired utility function, and then take the derivative. (Note that linear utility functions behave the same as logarithmic utility functions, and Wikipedia's treatment assumes a linear utility function, not a logarithmic one.)

Another extension you can do is to make use of a finite lifetime, which scraps the assumption that K/N approaches p in the limit. With finite N, you can discover what Delta maximizes the probabilistically weighted mean of the utilities.
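
A numerical version of both suggestions might look like the sketch below: assume a fixed fraction f is staked on each of N independent bets and grid-search the f that maximises the probability-weighted mean of an arbitrary utility of final wealth (all names and parameter choices are mine):

```python
import numpy as np
from math import comb

def best_fraction(utility, p, b, n_bets, grid=2001):
    """Grid-search the fixed fraction f that maximises expected utility
    of final wealth after n_bets independent bets paying b to 1."""
    q = 1.0 - p
    best_f, best_u = 0.0, -np.inf
    for f in np.linspace(0.0, 0.999, grid):
        expected_utility = sum(
            comb(n_bets, k) * p**k * q**(n_bets - k)
            * utility((1 + f * b)**k * (1 - f)**(n_bets - k))
            for k in range(n_bets + 1)
        )
        if expected_utility > best_u:
            best_f, best_u = f, expected_utility
    return best_f

# Log utility recovers Kelly (~0.2 for p=0.6, b=1) for any n_bets;
# a more risk-averse utility such as u(w) = -1/w stakes noticeably less.
print(best_fraction(np.log, 0.6, 1.0, 20))
print(best_fraction(lambda w: -1.0 / w, 0.6, 1.0, 20))
```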

You mean, the potential actions are discrete but the potential outcomes for those actions are continuous, with a probability measure over those outcomes, or that there is a non-discrete set of possible actions, or something else?

Yes, potential actions are discrete and outcomes are arbitrarily distributed.

I'm not sure I'm understanding this correctly. Are you asking how the St. Petersburg Paradox works?

No, I mean that the Kelly criterion says that allocation to a bet should be proportional to expected value over payoff. If I hold expected value constant and integrate over payoff the integral diverges. Intuitively I would expect to see a finite integral, reflecting that Kelly restricts how much risk I should be willing to take.

Before you take the derivative with respect to Delta, apply the desired utility function, and then take the derivative.

Interesting. I should try this later.

(Note that linear utility functions behave the same as logarithmic utility functions, and Wikipedia's treatment assumes a linear utility function, not a logarithmic one.)

The Kelly criterion is the natural result when assuming a logarithmic utility function. For a linear utility function it arises if the actor maximizes expected growth rate.

Yes, potential actions are discrete and outcomes are arbitrarily distributed.

It seems like this paper or this paper might be relevant to your interests. (PM me your email if you don't have access to them.)

No, I mean that the Kelly criterion says that allocation to a bet should be proportional to expected value over payoff. If I hold expected value constant and integrate over payoff the integral diverges. Intuitively I would expect to see a finite integral, reflecting that Kelly restricts how much risk I should be willing to take.

Kelly tells you how much risk you should be willing to take for a particular b; integrating over b is not meaningful, since it's integrating over multiple bets. (Note that f is E/b, if E is the expected value, and 1/x diverges. Since p is capped by 1, then E is capped by b, and the maximum risk you should take is betting everything, if p=1 i.e. it's a sure thing.)
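
The parenthetical can be made concrete: holding the expected net gain E fixed, f(b) = E/b, and integrating that over b grows like log(b) without bound. A tiny symbolic check (my own sketch):

```python
import sympy as sp

E, b, B = sp.symbols('E b B', positive=True)
f = E / b                            # Kelly fraction at fixed expected net gain E
print(sp.integrate(f, (b, 1, B)))    # E*log(B), which diverges as B -> oo
```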

If you put a probability p(b) on any particular payout, you might get something meaningful out of integrating p(b)E/b, but it's not clear to me that's the right way to do things.

Interesting. I should try this later.

It won't work out very prettily, but it is instructive. Basically, that tells you how much your bet should have differed from Delta, given what happened. You can then figure out what would have been optimal for that sequence, then do a weighted sum over sequences. (If your utility function isn't scale invariant, and only log is, then you need information on how long the game runs; if you're allowed to change the fraction of your wealth that you put up each time, then it's an entirely different problem.)

I made a comment early this week on a thread discussing the lifespan dilemma, and how it appears to untangle it somewhat. I had intended to see if it helped clarify other similar issues, but haven't done so yet. I would be interested in feedback; it seems possible that I have completely misapplied it in this case.


While we're using the Kelly criterion, we should probably resolve its paradox to avoid going down its own "garden path" equivalent of the lifespan dilemma.