In a comment on Can we hold intellectuals to similar public standards as athletes?, I said that one of the largest problems with using prediction accuracy (e.g. prediction markets) as the exclusive standard by which we judge intellectuals is that it undervalues contributions to the thought process.

Here, I propose a modification of prediction markets which values more types of contributions.

Arguments as Models

In many cases, a thought/argument can be reformulated as a model. By a model, I mean a formally described way of making predictions.

Adding models to prediction markets could increase their transparency; we want to create an incentive for traders to explain themselves.

A model has a complexity (its description length; the negative log of its prior probability). We generally have more respect for predictions which come from models than for predictions which don't. And, given two models, we generally have more respect for the simpler one.
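For concreteness, here is a minimal sketch of that relationship in code (my own illustration; the base-2 convention and the exact form of the prior are assumptions, not requirements of the scheme):

```python
import math

def prior_weight(description_length_bits: float) -> float:
    """Prior probability assigned to a model, assuming a 2^-length prior."""
    return 2.0 ** (-description_length_bits)

def complexity_bits(prior_probability: float) -> float:
    """Description length in bits: the negative log (base 2) of the prior."""
    return -math.log2(prior_probability)

# e.g. a 3-bit model starts at 2^-3 = 0.125; a 20-bit model at roughly one in a million.
```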

So, to adjust prediction markets to account for models, we would want something which:

  • Values making good predictions over everything else;
  • Values models over opaque predictions;
  • Values simple models more.

But how can we trade off between modelling and predictive accuracy?

Time Travel Markets

As I've discussed before, one of the major differences between prediction markets and Bayesianism is that prediction-market traders only get credit for moving the market in the right direction. This introduces a dependence on when a prediction is made: if you make an accurate prediction at a time when everyone else has already arrived at the same conclusion, you don't get any credit; whereas, if you make an accurate prediction at a time when it's unpopular, you get a lot of credit for it.

That's a problem for model-makers. An explanation is still useful, even after all the facts needing explanation are agreed upon. 

This is a generalization of the problem of old evidence.

Solomonoff induction solves the problem of old evidence for Bayesians as follows: a new hypothesis which explains old evidence is evaluated as if it were in our set of hypotheses from the beginning. (After all, it would have been in the Solomonoff prior from the beginning; it was only absent from our calculations because we are forced to approximate Solomonoff.) The prior probability is based on the complexity of the hypothesis; the updates are whatever they would have been.
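Spelled out as a formula (a sketch; K(h) is the description length of the new hypothesis h, and e_1, ..., e_n is the old evidence), the hypothesis is scored as

    P(h | e_1, ..., e_n)  ∝  2^(-K(h)) · P(e_1 | h) · P(e_2 | h, e_1) · ... · P(e_n | h, e_1, ..., e_(n-1)),

i.e. the complexity-based prior multiplied by exactly the sequence of updates the hypothesis would have received had it been in the hypothesis set all along.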

(Critics of Bayesianism rightly point out that adding a hypothesis in this way is not a Bayesian update, making this a problem for Bayesianism; but let's set aside that problem for the moment. We can think of the adherents of Solomonoff induction as endorsing two types of update; the Bayesian update, and a non-Bayesian update to account for logical uncertainty by adding hypotheses as we improve our approximation of Solomonoff.)

Similarly, we could imagine adding models to our market in this way: we score them as if they had been there from the beginning, with a starting weight based on their complexity.

This gives models an advantage over regular trades. Regular trades, made by humans, are always scored at the time they're submitted. But models can get credit for predicting things even after those things are known by the market, because their predictions are assessed retroactively.

On the other hand, it satisfies our desire to favor models without compromising predictive accuracy:

  • Unless the prior probability of a model is overwhelmingly high (due to very low complexity), a high posterior probability simply means strong agreement with the market. So this doesn't warp the market in a way that reduces predictive accuracy in favor of modeling; for the most part, it keeps the market the same, but gives credit to the model for explaining the market.
  • If the prior probability is quite high, and later evidence isn't overwhelming, it will warp the market; but this basically makes sense, because in the face of uncertainty we do favor simple hypotheses.

So this mostly satisfies the desiderata I set down earlier.

A Virtual Currency

This is more of an implementation detail, but I want to discuss how the prediction market currency works.

Prediction markets can of course use real money, and this has some advantages. However:

  • If we're trying to use a prediction market to keep track of the accuracy of public intellectuals, it doesn't make so much sense to use real money. We want a non-transferable currency of accuracy.
  • Using real money also constrains things; we can't create or destroy it at will, but we might want to do so for our virtual currency.

Starting Budgets

One problem: how much currency is given to people when they start on the market?

If we give the same starting currency to each person who joins, then the amount of currency in the market just keeps growing as people join. This seems problematic. For example, a large number of newcomers could warp the market in any direction they liked despite having no track record; and people with a poor track record are strongly incentivized to abandon their old accounts and start new ones.

One solution would be to use a sequence which sums to a finite number. For example, the starting budget of each new account could be cut in half... but this seems too harsh, favoring early accounts over later ones to a very high degree. Any coefficient c < 1 could be used, with the nth account getting c^n starting funds. (This always sums to a finite number.) Selecting c very close to 1 makes things more fair (only slightly decreasing the value of new accounts over time), although putting c too close to 1 brings back the same weird dynamics we're trying to avoid. So the value of c depends on how much we value fairness to new users vs robustness to exploits.

Another proposal could be to assign starting currency 1/n to the nth account, which has an infinite sum, but which goes to infinity very, very slowly. This is like increasing c slowly over time (more and more slowly as it approaches 1). My justification for this is that as the size of the market grows, it can handle more incoming newbies with bad opinions. Under this rule, we get 1 more unit of currency in the system for about every factor of 2.7 (that is, e) increase in the user base.
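As a sketch of both schemes in code (the function names and the default value of c are mine):

```python
def geometric_budget(n: int, c: float = 0.9) -> float:
    """Starting budget of the nth account under the geometric scheme (0 < c < 1).
    Total currency ever issued is bounded: the sum over n of c^n is c / (1 - c)."""
    return c ** n

def harmonic_budget(n: int) -> float:
    """Starting budget of the nth account under the 1/n scheme.
    Total issued grows like ln(n): roughly one extra unit of currency for every
    factor-of-e (about 2.7x) growth in the number of accounts."""
    return 1.0 / n
```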

Rewards for Modelers

More importantly to the present discussion, how exactly do we reward people for adding models?

I've discussed how a model is scored as if it were present in the market from the beginning. I basically want to give that currency to the person who created the model. But there could be several different ways to do this. Here is my proposal:

The model has its own starting currency, based on its prior probability. This is new currency, not taken from anyone (one reason it's important to use a virtual currency). The model's creator doesn't have access to this currency; we want the model to have autonomy once created, so that it can continue to gain credibility based on evidence even if its creator loses faith in it. (In other words, we don't want to let people create models and then drain their accounts.)

But we do want to reward the creator. We do this by giving the creator extra money at the start of time, based on the prior probability of the model (i.e., based on the model's complexity). This extra money is treated as if the creator had been trading it all along, but in exactly the way the model prescribes.

Once we get up to the current time, the model's creator is free to use that money as they wish.

Note that there is only a finite supply of model money to earn, since the prior probabilities sum to 1. However, finding good models to reap those rewards will probably be quite challenging, even with automated enumeration of models.

Also note: we don't want to retroactively take away everyone else's rewards for their contributions to the market. So we're not really going back in time and re-doing the entire market calculation as if a new trader had been present. Instead, we just imagine what profit a model could have made if it had been present and the market prices had all stayed the same. (This might lead to some problems, but I'm not thinking of any right now, so I'll leave this as my current proposal.)
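Here is a minimal sketch of that retroactive scoring for binary questions, under simplifying assumptions of my own (one frozen historical price per question, and Kelly-style "bet your beliefs" accounting); it isn't a spec, just the shape of the calculation:

```python
from typing import Callable, List, Tuple

# Each resolved question: (historical market price for "yes" at the time the
# model would first have traded it, actual outcome).
Question = Tuple[float, bool]

def retroactive_score(
    model_prob: Callable[[int], float],  # model's P(yes) for question i
    questions: List[Question],
    starting_weight: float,              # e.g. 2 ** -complexity_in_bits
) -> float:
    """Wealth the model would have ended with, had it been present from the
    beginning and traded against the frozen historical prices. With Kelly-style
    accounting, a model assigning probability p to a question priced at m
    multiplies its wealth by p/m if "yes" resolves true, and (1-p)/(1-m) if not."""
    wealth = starting_weight
    for i, (price, outcome) in enumerate(questions):
        p = model_prob(i)
        wealth *= (p / price) if outcome else ((1.0 - p) / (1.0 - price))
    return wealth

# The creator's reward is a second balance of the same starting size, replayed
# in exactly the same way; once the replay reaches the present, it is theirs to use.
```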

It's possible that we also want to punish people for bad models, but I don't really think so -- it's simple enough to run a model before adding it to the market, to check whether it does poorly, so any actual punishment for bad models could simply be avoided.

How do we compare human and virtual time?

A big implementation question is how to line up the timelines of (a) the actual human traders, and (b) the computation of the models.

One extreme would be to let models do all their computation up-front, at the "beginning of time". This is like trying to approximate Solomonoff induction rather than logical induction: we're doing nothing to encourage computationally simple models.

It seems like we instead want to allow models more time to improve their forecasts as we go. However, I don't see any clear principle to guide the trade-off. Human time and computational time seem incomparable.

This seems like a basic conceptual problem, and a solution might be quite interesting.

The Problem of Proofs

I was initially optimistic that, in an "arguments as models" paradigm, mathematical proofs could be handled as particularly strong arguments. I'm now less optimistic.

Imagine there's one big prediction market for mathematics, with MathBucks as the intellectual currency -- a professional is judged by their accumulation of MathBucks. The justification for this is supposed to be the same as the justification for using these markets in any other area: to efficiently pool all the information.

Now, proofs will of course be how outstanding bets get decided. But is there enough of an incentive to produce proofs? Is the reward for proofs close enough to the "true deserved intellectual credit"?

Certainly finding proofs will sometimes be quite profitable. If you prove something new, you could often make some money on the market by betting in favor of that conjecture before revealing your proof.

The problem, however, is that conjectures may be held with high confidence long before they're proven one way or the other. A conjecture the market is already 98% confident in only allows a small profit for someone who proves it. I think it's fair to say that the size of the intellectual contribution may be much larger than the amount of credit earned.
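For concreteness: if the market already prices the conjecture at 0.98, the prover can buy "yes" at 0.98 and collect 1 when the proof settles the question, a return of (1 - 0.98) / 0.98, roughly 2%, however hard the proof was to find.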

This could be indirectly mitigated through bets on details which are hard to know without the proof in hand; for example, related conjectures likely to be decided by the proof, details of the method of proof, and so on. However, these provide no strong sense of proportional reward for the contribution, and require more work for the contributor to profit.

At first, I thought a time-travel market would fix this. A proof could be turned into a model which bets before everyone else.

The problem is that this is equivalent to a simple model which bets on the proposition at the beginning of time. This model could be claimed by someone long before a proof is found.

We could somehow promote the prior probability of correct proofs over other models. This would be somewhat useful, but feels like a hack which might not end up rewarding proof-writers to the desired extent.

Conclusion

The question I'm directly addressing in this post -- how to assign credit for intellectual labor -- is only part of my interest here. I'm also using this as a way to think about radical probabilism, particularly its relationship to models and transparent reasoning.

Philosophically, this is about the radical probabilist's response to the problem of old evidence. When do we require predictions to be made ahead of time for credit, and when do we give credit to predictions made after the fact? The answer proposed here is: we give credit for after-the-fact predictions to the extent that they're generated from simple models.

If this post seems to have the flavor of rough rules of thumb rather than solid mathematical epistemology, I'd say this is because we're in relatively unexplored territory here -- we've traveled from the old world of Bayes' Rule to the new continent of Radical Probabilism, and we're still at the stage of drawing rough maps with no proper scale. More formal versions of all of this seem within reach.

Comments

Here's one potential way to handle proofs. 

First, we'll invert prices: true statements are worth $0, false statements $1, rather than the usual prices. The market infrastructure will allow anyone with any contract to freely convert that contract into an equivalent one, via atomic steps in the underlying logic. It will also allow traders to create/destroy contracts on axioms for free. So, if a trader knows a proof of some proposition, then they can use the proof to construct a contract on the proposition for $0. Conceptually, it's similar to linear logic, but based on a very hand-wavy understanding of LL I don't think they're equivalent, since truth is free here.

The upshot is that someone with a proof can arbitrage without limit. Their proof will be revealed to the market mechanism (since the mechanism can see which propositions they're converting between), but not to other traders.
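A rough sketch of the intended mechanics, with everything here (the data structures, the interface names) hypothetical rather than a real design:

```python
from dataclasses import dataclass
from typing import List

# Inverted pricing: a contract on statement S pays $1 if S turns out false, so a
# contract on a provably true statement is a liability that will never pay out.

@dataclass
class ProofStep:
    statement: str
    from_steps: List[int]   # indices of earlier steps it follows from; [] means axiom

def construct_contract(proof: List[ProofStep]) -> str:
    """Walk a well-formed proof and return a contract on its conclusion, at zero
    cost: axiom contracts are created for free and converted one atomic step at a
    time. (A real mechanism would verify each conversion; this sketch only checks
    that steps cite earlier steps.)"""
    for i, step in enumerate(proof):
        assert all(j < i for j in step.from_steps), "steps may only cite earlier steps"
    return proof[-1].statement

def arbitrage(proof: List[ProofStep], market_price: float, rounds: int) -> float:
    """Profit from repeatedly selling the freely constructed contract at the
    market's current price. Since the conclusion is true, the sold contracts never
    pay out, so the profit is pure (ignoring price impact as the market adjusts)."""
    construct_contract(proof)   # we can mint this contract for free, as often as we like
    return rounds * market_price
```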

(Also I really like this post. The problem of how to get "deeper" information out of prediction markets is an important one, and these are great ideas.)

This is similar to some stuff Sam was working on. I still personally suspect an equivalence to LL, but haven't yet made it work.

This post also sorta touches on the problem of synchronous time in logical induction, which you and I discussed via private chat. A better understanding of time in logical induction / radical probabilism would seem useful here.

Yup, that was exactly the chat which got me thinking about the production-of-true-contracts thing. If a production approach worked, then ideally we could remove the deductive process entirely from the LI construction. Then the whole thing would look a lot more like standard economic models, and it should be straightforward to generalize to multiple decentralized markets with arbitrage between them.

Oh hmmm. How would the generalization work?

Competing cross-market arbitrageurs would bound divergence between prices in different markets - it only takes a small price difference for an arbitrageur to move large quantities from one market to another. As a result, prices of all markets end up (nearly) identical, and the whole thing acts as a single market. The more general principle at work here: two markets in equilibrium act like a single market.

Mathematically: each market mechanism is computing a fixed point, and simultaneously solving two coupled fixed-point problems is equivalent to solving a single fixed-point problem.
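In symbols (a sketch): if market 1's prices satisfy p1 = F1(p1, p2) and market 2's satisfy p2 = F2(p1, p2), then a joint solution of the pair is exactly a fixed point of the single combined map F(p1, p2) = (F1(p1, p2), F2(p1, p2)) on the product space; the two coupled markets jointly behave like one market computing one fixed point.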

In principle, this would also work for normal LI, but the deductive process would still be a single monolithic interface to the rest of the world, so we wouldn't really gain much from it. With truth-producers, the producers can be spread across sub-markets.

It seems to me that you can game the system by finding one algorithm that would make you some money. Then keep submitting very slightly different versions of the same algorithm (so they'd have the same complexity) over and over, receiving additional money for each one. (If I'm understanding the proposed system correctly.)

Another way is to submit algorithms that are essentially random. If you submit enough of them, some of them will do well enough to earn you money.

Yet another way is to submit an algorithm that essentially encodes the history of the thing you're trying to predict (i.e. overfit). It seems like it would be maximally rewarded under this system.

My proposal: reward models (and their creators) only based on how well they are predicting incoming data after the model has been submitted. Also submitting an algorithm costs you some money.

It seems to me that you can game the system by finding one algorithm that would make you some money. Then keep submitting very slightly different versions of the same algorithm (so they'd have the same complexity) over and over, receiving additional money for each one. (If I'm understanding the proposed system correctly.)

I agree, this is a problem. I think it's kind of not a problem "in the limit of competitive play" -- the first person to discover a theory will do this before everyone else can spam the system with near clones of the theory, so they'll get the deserved credit.

From a Bayesian perspective, this is just paying people for correctly pointing out that an elegant theory's probability is more than just the probability of its most elegant formulation; it also gains credibility from other formulations.

Yet another way is to submit an algorithm that essentially encodes the history of the thing you're trying to predict (i.e. overfit). It seems like it would be maximally rewarded under this system.

My idea was that the prior probability of this would be so low that the reward here is low. It's like knowing a stock is a sure thing -- you know it multiplies in value a million fold -- but the stock only has a millionth of a cent available to buy.

Supposed argument: encoding a history of length L (in bits) is going to put you at a prior of roughly 2^-L. If the market was only able to predict those bits 50-50, then you could climb up to roughly probability 1 from this. But in practice the market should be better than random, so you should do worse.
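Spelling that out: starting from a prior weight of about 2^-L and buying the realized outcome of each bit at the market's price q_t, the final wealth is at most 2^-L · (1/q_1) · ... · (1/q_L). If q_t = 1/2 for every bit, this equals 2^-L · 2^L = 1; if the market beats 50-50 on average, the product falls short of 2^L and the history-encoder ends with less than one unit of currency.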

Things will actually be a bit worse than 2^-L, because we also have to encode "here's the literal history:" (since giving the literal history will be an unusual thing in the prior, i.e., not something we make costless in the description language).

Now that I've written the argument out, though, it isn't as good as I'd like.

  • The market will be 50-50 on some things, so there's non-negligible value to pick up in those cases.
  • The whole point is that you can sneak predictions in before the market predicts well, so you can get in while things are close to 50-50. My argument therefore depends on restricting this, e.g. always making the "first price" equal to the price after the first day of trading, to give the humans a bit of an edge over submitted algorithms.

So, yeah, this seems like a serious problem to think about.

I thought that there was only ever a finite pool of money to be obtained by submitting algorithms, but this is clearly wrong due to this trick. The winnings have to somehow be normalized, but it's not clear how to do this.

My proposal: reward models (and their creators) only based on how well they are predicting incoming data after the model has been submitted. Also submitting an algorithm costs you some money.

This defeats the whole point -- there's no reason to submit algorithms rather than just bet according to them.