A Request for Open Problems

Open problems are clearly defined problems1 that have not been solved. In older fields, such as Mathematics, the list is rather intimidating. Rationality, on the other hand, seems to have no list.

While we're all here together to crunch on problems, let's shoot higher than thinking of solutions and then finding problems to match them. Which questions remain unsolved? Is it reasonable to assume those questions have concrete, absolute answers?

The catch is that these problems cannot be inherently fuzzy problems. "How do I become less wrong?" is not a problem that can be clearly defined. As such, it does not have a concrete, absolute answer. Does Rationality have a set of problems that can be clearly defined? If not, how do we work toward getting our problems clearly defined?

See also: Open problems at LW:Wiki

1: "Clearly defined" essentially means a formal, unambiguous definition.  "Solving" such a problem would constitute a formal proof.

Comments


Suppose someone offered you a bet on P = NP. How should you go about deciding whether or not to take it?

Does it make sense to assign probabilities to unproved mathematical conjectures? If so, what is the probability of P = NP? How should we compute this probability, given what we know so far about the problem? If the answer is no, what is a rational way to deal with mathematical uncertainty?

Scott Aaronson and many others in his field have essentially bet their careers that P ≠ NP (at least, the polynomial hierarchy would collapse if P = NP), while only a bunch of cranks (so far as I've heard) have bet a substantial amount of their life's efforts on P = NP. That's as good a statement of betting odds as any.

As for whether it's legitimate to do so, well, that's Bayesian probability for you. You've got to have some way of dealing with subjective uncertainty, and you get better results in the long run if your method of dealing with it is like unto the laws of probability.
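As a toy illustration of treating a conjecture Bayesianly, here is a minimal sketch of a log-odds-style update; the prior and likelihood ratios are invented placeholders for illustration, not actual estimates about P vs NP:

```python
# A minimal sketch of treating a mathematical conjecture as a Bayesian
# hypothesis. All numbers below are hypothetical placeholders.

def update_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each piece of evidence's likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_prob(odds):
    return odds / (1 + odds)

# Start at even odds for "P != NP", then fold in evidence such as
# decades of failed collapse attempts (each ratio is made up).
posterior_odds = update_odds(1.0, [3.0, 2.0, 5.0])
print(odds_to_prob(posterior_odds))  # 30/31, roughly 0.968
```

The point is only that subjective mathematical uncertainty can be manipulated with the ordinary machinery of probability, not that these particular numbers mean anything.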

Here are some papers I found recently that seem to represent the state of the art on the issue of how to deal with uncertainty in mathematics. None of them really get very far beyond "let's try to apply probability theory", but at least the problem is not being completely ignored by mathematicians and philosophers.

Is there a best language in which to express complexity for use in the context of Occam's razor?

If there is a best language in which to express complexity for use in the context of Occam's razor, what is that language?

I suspect the answer is no, at least not the kind of formal languages that have been suggested so far. The problem is this: as soon as you define a formal language, I can say "the lexicographically first object which can't be described in less than a million bits in your language". Given the uniqueness of this object, why should it be a priori as unlikely as a random million-bit string?

Note that in general, this sort of specification is uncomputable. To find the lexicographically first object which can't be described in less than a million bits, we would have to make a list of all objects which can be described in less than a million bits (and return the first thing that's not on the list). But we don't know if our UTM will halt for any given string with less than a million bits, so we can never know if our list is complete.
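The obstacle can be sketched concretely. In the toy model below, each "program" is just a record of whether and when it halts (a stand-in for a real UTM, which we could not actually query this way); within any finite step budget, some descriptions stay undecided, so the list of describable objects is never known to be complete:

```python
# A toy sketch of why "the first object not describable in < n bits" is
# uncomputable. We can only run each candidate description for a finite
# number of steps; descriptions still running might halt later, so the
# set of describable objects is never known to be complete.

def enumerate_outputs(programs, step_budget):
    """programs: dict mapping description -> (steps_to_halt or None, output).
    Returns outputs confirmed within the budget, plus the descriptions
    still undecided at that budget."""
    confirmed, undecided = set(), []
    for desc, (steps, output) in programs.items():
        if steps is not None and steps <= step_budget:
            confirmed.add(output)
        else:
            undecided.append(desc)  # may halt later, or never -- we can't tell
    return confirmed, undecided

toy = {
    "p1": (5, "A"),        # halts quickly, outputs "A"
    "p2": (10**9, "B"),    # halts, but far beyond any practical budget
    "p3": (None, None),    # never halts
}
confirmed, undecided = enumerate_outputs(toy, step_budget=1000)
print(confirmed)   # {'A'}
print(undecided)   # ['p2', 'p3'] -- indistinguishable at this budget
```

No matter how large the budget, "p2" and "p3" look the same from the outside, which is exactly the halting-problem obstruction.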

Yes, and that's sort of intentional. I was trying to come up with a mathematical model of an agent that can deal with uncomputable physics. The physics of our universe seems likely to be computable, but there is no a priori reason to assume that it must be. We may eventually discover a law of physics that's not computable, or find out that we are in a simulation running inside a larger universe that has uncomputable physics. Agents using UTM-based priors can't deal with these scenarios.

So I tried to find a "better", i.e., more expressive, language for describing objects, but then realized that any fixed formal language has a similar problem. Here's my current idea for solving this: make the language extensible instead of fixed. That is, define a base language, and a procedure for extending the language. Then, when the agent encounters some object that can't be described concisely using his current language, he recursively extends it until a short description is possible. What the extension procedure should be is still unclear.

I agree that there are very interesting questions here. We have quite natural ways of describing uncomputable functions very far up the arithmetical hierarchy, and it seems that they can be described in some kind of recursive language even if the things they describe are not recursive (using recursive in the recursion theory sense both times). Turing tried something like this in Systems of Logic Based on Ordinals (Turing, 1939), but that was with formal logic and systems where you repeatedly add the Gödel sentence of a system into the system as an axiom, repeating this into the transfinite. A similar thing could be done describing levels of computability transfinitely far up the arithmetical hierarchy using recursively represented ordinals to index them. However, then people like you and I will want to use certain well defined but non-recursive ordinals to do the indexing, and it seems to degenerate in the standard kind of way, just a lot further up the hierarchy than before.

And then there are objects that lie completely outside the arithmetical hierarchy, but we probably shouldn't assign zero prior to them either. Things like large cardinals, perhaps.

Another mystery is, why did evolution create minds capable of thinking about these issues, given that agents equipped with a fixed UTM-based prior would have done perfectly fine in our place, at least up to now?

We aren't really Bayesian reasoning machines at all, and it isn't really accurate to speak of us having a prior. We choose a prior in order to use Bayesian reasoning to analyze a situation, and we seek to bend our natural reasoning to a Bayesian template in order to improve its accuracy, but we cannot wholly succeed in doing so. So the problem you raise should worry someone building AGI, but it's not realistic to imagine a human agent becoming so Bayesian that they swallow the Solomonoff prior whole and are literally unable to contemplate super-Turing Universes.

I don't think it's unreasonable, therefore, to adopt the Solomonoff prior as a useful model to aid reasoning and discussion, and rely on our human ability to make and adopt a new, super-Turing model if some more general prior would have favoured it.

What does it mean to deal rationally with moral uncertainty? If Nick Bostrom and Toby Ord's solution is right, how do we apply it in practice?

ETA: this isn't a "clearly defined" question in the sense you mentioned, but I'll leave it up anyway; apologies

As I pointed out in that thread, their solution doesn't work. You would need to choose an aggregation mechanism to combine votes. Different mechanisms will cause different systematic outcomes. Notably, some mechanisms will always choose actions from one category; some will sample from different categories proportionally to their votes (much as, e.g., the American system always chooses the most popular candidate, resulting in a two-party equilibrium, while many European systems allocate seats proportionally to votes, allowing equilibria with more than two parties).

You need to choose which kind of outcome you prefer in order to choose your aggregation mechanism, in order to implement their solution. But if you could do that, you wouldn't need their solution in the first place!

You need to choose which kind of outcome you prefer in order to choose your aggregation mechanism

Is this really the case? It doesn't seem true of axiomatic approaches to decision theory in general, so is there a special reason to think it should be true here?

But if you could do that, you wouldn't need their solution in the first place!

I guess I would view the parliamentary mechanism more as an intuition pump than a "solution" per se. It may well be that, having thought through its implications, it will turn out that the results can be represented in (say) the standard vNM framework. Nonetheless, the parliamentary model could still be helpful in getting a handle on the nature of the "utility" functions involved.

As an aside, it seems as though their parliamentary approach could potentially be modeled more effectively using co-operative game theory than the more standard non-cooperative version.

Is this really the case? It doesn't seem true of axiomatic approaches to decision theory in general, so is there a special reason to think it should be true here?

I just gave the reason. "Some mechanisms will result in always choosing actions from one category; some mechanisms will result in sampling from different categories proportionally to their votes."

The aggregation mechanism is a lot like the thread priority system in a computer operating system. Some operating systems always give the CPU to the highest-priority task; some try to give tasks CPU time proportional to their priority. Likewise, some aggregation mechanisms will always choose the most popular option, never giving any voice to minority opinions; some choose options with probability proportional to their popularity. You have to choose which type of aggregation mechanism to use. But this choice is exactly the sort of choice that the parliament is supposed to be producing as output, not requiring as input.
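The two families of mechanisms can be sketched as code; the "theories" and weights here are hypothetical placeholders, not anything from Bostrom and Ord's proposal:

```python
import random

# A sketch contrasting two aggregation mechanisms over moral-theory
# "votes". The options and weights are invented for illustration.

def winner_take_all(votes):
    """Always act on the most-weighted option (plurality)."""
    return max(votes, key=votes.get)

def proportional_sample(votes, rng):
    """Act on each option with probability proportional to its weight."""
    options = list(votes)
    weights = [votes[o] for o in options]
    return rng.choices(options, weights=weights, k=1)[0]

votes = {"consequentialist act": 0.6, "deontological act": 0.4}
rng = random.Random(0)

print(winner_take_all(votes))  # always "consequentialist act"
picks = [proportional_sample(votes, rng) for _ in range(1000)]
print(picks.count("deontological act") / 1000)  # roughly 0.4
```

The first mechanism silences the minority view forever; the second lets it act 40% of the time. Which behavior you want is itself a moral judgment, which is the circularity being pointed out.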

I wonder if it would work to renormalize utility so that the total of everything that's "at stake" (in some sense that would need to be made more precise) is always worth the same?

Probably this gives too much weight to easy-to-achieve moralities, like the morality that says all that matters is whether you're happy tomorrow? It also doesn't accommodate non-consequentialist moralities.

But does it ever make sense to respond to new moral information by saying, "huh, I guess existence as a whole doesn't matter as much as I thought it did"? It seems counterintuitive somehow.
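A minimal sketch of that renormalization idea, with invented theories and numbers: each theory's utilities are rescaled so its best-to-worst spread is 1, then weighted by credence. Note how the tiny-stakes "be-happy-tomorrow" theory gets inflated to the same scale, which illustrates the worry about easy-to-achieve moralities:

```python
# A sketch of renormalizing utilities across moral theories so that
# everything "at stake" spans the same range. Theories, actions, and
# numbers are all hypothetical placeholders.

def renormalize(utilities):
    """Rescale a theory's utilities so max - min == 1 (unless constant)."""
    lo, hi = min(utilities.values()), max(utilities.values())
    span = hi - lo
    if span == 0:
        return {a: 0.0 for a in utilities}
    return {a: (u - lo) / span for a, u in utilities.items()}

def expected_choiceworthiness(theories, credences):
    """Credence-weighted sum of renormalized utilities, per action."""
    actions = next(iter(theories.values())).keys()
    norm = {name: renormalize(u) for name, u in theories.items()}
    return {a: sum(credences[n] * norm[n][a] for n in theories)
            for a in actions}

theories = {
    "total utilitarianism": {"donate": 100.0, "abstain": 0.0},
    "be-happy-tomorrow":    {"donate": 0.9,   "abstain": 1.0},
}
credences = {"total utilitarianism": 0.7, "be-happy-tomorrow": 0.3}
print(expected_choiceworthiness(theories, credences))
# {'donate': 0.7, 'abstain': 0.3}
```

After renormalization, the be-happy theory's 0.1-utility preference counts exactly as much per unit credence as utilitarianism's 100-utility preference, for better or worse.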

I can't follow your comment. I would need some inferential steps filled in, between the prior comment, and the first sentence of your comment, and between every sentence of your comment.

Is there a practical method to reach Aumann agreement?

Here are some more problems that have come up on LW:

Here's an open problem that's been on my mind this past week:

Take some controversial question on which there are a small number of popular opinions. Draw a line going from 0 on the left to 1 on the right. Divide that line into segments for each opinion that holds > 1% of opinion-space.

Now stratify the population by IQ into 10-point intervals. Redo the process, drawing a new line from 0 to 1 for each IQ range and dividing it into segments. Then stack your 0-1 line segments up vertically. Connect the sections for the same opinions in each IQ group.

What does the resulting picture look like? Questions include:

  • Is there a wider range of popular opinions (technically, a larger entropy) at the bottom or at the top?
  • Are there opinions that have maximum popularity at an intermediate IQ level?
  • Are there opinions that have minimum popularity at an intermediate IQ level?

Other variables could be used on the vertical axis. In general, continuous variables (IQ, educational level, year of survey, income) are more interesting to me than category variables (race, sex). I'm trying to get at the question of how systematic misunderstandings are, how reliably increasing understanding increases accuracy, and whether increasing understanding of a topic increases or decreases the chances of agreement on it.
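The construction above can be sketched numerically. The survey data below is synthetic, purely to show the per-band computation of opinion distributions and their entropy:

```python
import math
from collections import Counter

# A sketch of the proposed picture: stratify respondents by IQ band,
# compute each band's opinion distribution, and measure its entropy.
# The "survey" data is made up for illustration.

def opinion_distribution(responses):
    """Fraction of each opinion, keeping only those above 1%."""
    counts = Counter(responses)
    total = sum(counts.values())
    return {op: c / total for op, c in counts.items() if c / total > 0.01}

def entropy_bits(dist):
    """Shannon entropy of an opinion distribution, in bits."""
    return -sum(p * math.log2(p) for p in dist.values())

survey = {  # IQ band -> list of opinions (synthetic)
    (90, 100):  ["A"] * 50 + ["B"] * 30 + ["C"] * 20,
    (100, 110): ["A"] * 60 + ["B"] * 35 + ["C"] * 5,
    (110, 120): ["A"] * 80 + ["B"] * 20,
}
for band, responses in sorted(survey.items()):
    dist = opinion_distribution(responses)
    print(band, round(entropy_bits(dist), 3), dist)
```

Stacking the resulting distributions vertically gives the picture described above; comparing entropies across bands answers the first bullet directly.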

I recently noticed that certain wrong opinions seem to reliably attract people at a particular level of understanding of a problem. In some domains, increasing someone's intelligence or awareness seems to decrease their chances of correct action. This is probably because people have evolved correct behaviors, and figure out incorrect ones when they're smart enough to notice what they're doing but not smart enough to figure out why. But it's then interesting that their errors would be correlated rather than random.

Another problem, although to what degree it is currently both "open" and relevant is debatable: finding new loopholes in Arrow's Theorem.

Can you even have a clearly defined problem in any field other than Mathematics, or one that doesn't reduce to a mathematical problem regardless of the field where it originated?

Some people like mathematics so much that they define it as encompassing all clearly defined problems. Some people like philosophy so much that they define it as encompassing all problems whatsoever. Both of these definitions are clearly wrong, and the product of affective death spirals.

You say that as if a problem can only belong to one field.

See Wikipedia's list for a few examples.

The distinction between clearly defined and otherwise is somewhat subjective. I hadn't heard anyone talking about the subject yet, so I brought it up.

Since Rationality seems to be strongly related to Bayes' theorem, it makes some sense that a lot of problems could be presented in a fashion where we only have to answer a few questions about priors to understand which actions to take.

I don't know if this answers your question.

It's the best possible kind of answer to my question - a link to a load of interesting stuff - thanks!

I see where I went wrong, in missing out the entire physical universe as a source of questions that can be clearly stated but are about real things rather than mathematical descriptions of them.

How do we handle the existence of knowledge which is reliable but cannot be explained? As an example, consider the human ability to recognize faces (or places, pieces of music, etc). We have nearly total confidence in our ability to recognize people by their faces (given enough time, good lighting, etc). However, we cannot articulate the process by which we perform face recognition.

Imagine you met a blind alien, and for whatever reason needed to convince it of your ability to recognize people by face. Since you cannot provide a reasonable description of your face recognition process, you are essentially in the position of saying "I'm totally sure of the identity of the person I saw, but I cannot explain why I am so certain".

Quite a bit is known about the neurology behind face recognition. No one understands the algorithm well enough to build a fusiform gyrus from scratch, but that doesn't mean the fact that there is an algorithm is mysterious.

Even if we did not have any understanding of the neurology, I'm not sure why pointing to an empirical record of successful face recognition shouldn't be fairly convincing. Is the point that we could be lying about our record?

(In the specific example given, you could probably get a fair bit of mileage from explaining the nature of vision, even without the specifics of face-recognition. I'm not really sure what broader lesson that might have though, as I don't fully understand the nature of the question you're asking.)

I've been lurking here for a while now and thought I had something to add for the first time, so Hi all, thanks for all the great content and concepts; on to the good stuff:

I think a good open problem for the list would be: a formal (or at least solid) definition of rationality. I know of things like BDI architecture and Pareto optimality, but how do these apply to a rational human being? For that matter, how do you reason logically/formally about a human being? What would be a good abstraction/structure, and are there any guidelines?

well, just my 2 formal cents.

compare to: http://lesswrong.com/lw/x/define_rationality/

I think that problem is too hard for us. As a general rule, sweeping definitions made without respect to a particular purpose are of limited utility. But thanks for commenting, and please don't let my response deter you from commenting again.

Well, I can give some classes of problems. For instance, many of the biases that we know about, we don't really know good ways for humans to reliably correct for. So right there is a whole bunch of open problems. (I know of some specific ones with known debiasing techniques, but many are "okay, great, I know I make this error... but other than occasionally being lucky enough to directly catch myself in the act of doing so, it's not really obvious how to correct for these")

Another, I guess vaguer one would be "general solution that allows people to solve their akrasia problems"