Intransitive preferences are a demonstrable characteristic of human behaviour. So why am I having such trouble coming up with real-world examples of money-pumping?
"Because I'm not smart or imaginative enough" is a perfectly plausible answer, but I've been mulling this one over on and off for a few months now, and I haven't come up with a single example that really captures what I consider the salient features of the scenario: a tangled hierarchy of preferences, and the exploitation of that tangled hierarchy by an agent who cyclically trades the objects in it, generating a trade surplus on each transaction.
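For concreteness, that mechanism can be sketched as a toy simulation (the goods, the fee, and the cyclic preferences below are all invented for illustration):

```python
# Toy money pump. The victim holds one of three goods and has the cyclic
# (intransitive) preferences A > B > C > A, so whatever it holds, there is
# always a good it prefers, and it will pay a small fee to trade up to it.
TRADE_UP = {"B": "A", "C": "B", "A": "C"}  # current good -> preferred good
FEE = 1  # assumed per-trade fee the victim is willing to pay

def pump(holding: str, n_trades: int) -> tuple[str, int]:
    """Cycle the victim around its preference loop, collecting FEE per trade."""
    surplus = 0
    for _ in range(n_trades):
        holding = TRADE_UP[holding]  # victim happily "trades up"
        surplus += FEE               # exploiter pockets the fee
    return holding, surplus

# After any multiple of 3 trades the victim holds exactly what it started
# with, yet the exploiter has extracted surplus on every transaction.
pump("B", 6)  # -> ("B", 6)
```

From the victim's side, every individual trade is an improvement by its own lights, which is why this should feel like a series of gratifying transactions rather than a swindle.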
It's possible that I am in fact thinking about money-pumping all wrong. All the nearly-but-not-quite examples I came up with (amongst which were bank overdraft fees, Weight Watchers, and the exploitation of addiction) look like swindles or the results of personal failings from the outside, but from the inside, money-pumping must presumably feel like a series of gratifying transactions. We would welcome any money-pumping we were vulnerable to.
At the moment, I have the following hypotheses for the poverty of real-world money-pumping cases:
1. Money-pumping is prohibitively difficult. The conditions that need to be met are too specific for an exploitative agent to find and abuse.
2. Money-pumping is possible, but the gains on each transaction are generally too small to be worth it.
3. Humans have faculties for identifying certain classes of strategy that exploit the limits of their rationality, and we tell any would-be money-pumper to piss right off, much like Pascal's Mugger. It may still be possible to money-pump wasps or horses or something.
4. Humans have some other rationality boundary that makes them too stupid to be money-pumped, to the same effect as #3.
5. Money-pumping is prevalent in reality, but is not obvious because money-pumping agents generate their surplus in non-pecuniary abstract forms, such as labour, time, affection, attention, status, etc.
6. Money-pumping is prevalent in reality, but obfuscated by cognitive dissonance. We rationalise equivalent objects in a tangled preference hierarchy as being different.
7. Money-pumping is prevalent in reality, but obscured by cognitive phenomena such as time-preference and discounting, or by underlying human aesthetic/moral tastes (parochial equivalents of pebble-sorting) which humans convince themselves are Real Things that are Really Real, to the same effect as #6.
Does anyone have anything to add, or any good/arguable cases of real-world money-pumping?
Given the dynamic nature of human preferences, it may be that the best one can do is n-fold money pumps, for low values of n: one exploits some intransitive preference n times before the loop is discovered and remedied, leaving behind another vulnerability or a new one. Even if the agent you are exploiting is never VNM-rational at any given moment, its volatility under appropriate utility perturbations will suffice to keep money-pumping in check. This mirrors the security offered by quantum key distribution: even if you manage to intercept the communication, the communicating parties can detect the interception and will promptly change their strategies. All of this assumes a meta-level economic injunction: if you notice intransitivity in your preferences, you will eventually be forced to adjust (or be depleted of all relevant resources).
In light of this, it may be that exploiting money pumps is not viable for any agent without sufficient computational power. It takes computational (and usually physical) resources to discover intransitive preferences, and if the cost of expending those resources exceeds the expected gain of an n-fold money pump, then money-pumping the victim is simply not worth it.
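The break-even condition is simple enough to state directly (the cost, fee, and n below are purely illustrative assumptions):

```python
# An n-fold pump pays off only if the expected gain (n trades at `fee` each)
# exceeds the resources spent discovering the intransitive loop.
def pump_is_profitable(discovery_cost: float, fee: float, n: int) -> bool:
    return n * fee > discovery_cost

pump_is_profitable(discovery_cost=10.0, fee=1.0, n=5)   # -> False: not worth it
pump_is_profitable(discovery_cost=10.0, fee=1.0, n=50)  # -> True
```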
As such, money-pumping may be a dance of computational power: the exploiting agent computes deviations from a linear ordering, and the victim agent computes adherence to one. It is an open question which side has the easier task in the case of humans. (Of course, a malevolent AI would probably have enough resources to find and exploit preference loops far more quickly than you could notice and correct them. On the other hand, with that many resources, there may be more effective ways to get the upper hand.)
Finally, there is also the issue of volume. A typical human may perform only a few thousand preference transactions in a day, whereas it may take many orders of magnitude more to exploit this kind of VNM-irrationality given dynamic adjustment. (I can see formalizations of this that would allow simulation and finer analysis, and, dare I say, an economics master's thesis?)
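A crude sketch of such a simulation, under the n-fold model above (the repair probability, per-trade surplus of 1 unit, and the loop dynamics are all invented assumptions):

```python
import random

# n-fold money pump: the victim tolerates n fee-paying trades around a cyclic
# preference loop before noticing the intransitivity and repairing it; the
# repair perturbs the victim's utilities and, with some assumed probability,
# opens a fresh loop for the exploiter to find.
def n_fold_pump(n: int, p_new_loop: float, max_rounds: int, seed: int = 0) -> int:
    """Total surplus extracted (1 unit per trade) before the loops run out."""
    rng = random.Random(seed)
    surplus = 0
    loop_open = True
    for _ in range(max_rounds):
        if not loop_open:
            break
        surplus += n  # exploit the current loop n times
        loop_open = rng.random() < p_new_loop  # repair may open a new loop
    return surplus

# With p_new_loop = 0 the victim repairs permanently after one round, so
# exactly n units are extracted; larger p_new_loop extracts more on average.
n_fold_pump(n=3, p_new_loop=0.0, max_rounds=100)  # -> 3
```

Refining the loop-repair model (e.g. making n shrink as the victim learns, or charging the exploiter a discovery cost per round) would be the natural next step for the finer analysis suggested above.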