What would you do with a solution to 3-SAT?

Many experts suspect that there is no polynomial-time solution to the so-called NP-complete problems, though no one has yet been able to rigorously prove this, and there remains the possibility that a polynomial-time algorithm will one day emerge. However unlikely that is, today I would like to invite LW to play a game I played with some colleagues called what-would-you-do-with-a-polynomial-time-solution-to-3SAT? 3SAT is, of course, one of the most famous of the NP-complete problems, and a solution to 3SAT would also constitute a solution to *all* the problems in NP. This includes lots of fun planning problems (e.g. travelling salesman) as well as the problem of performing exact inference in (general) Bayesian networks. What's the most fun you could have?
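
For concreteness: even a yes/no 3SAT procedure lets you recover actual satisfying assignments (and hence solutions to the problems reduced to it), by fixing variables one at a time and re-asking the decision question. Here is a minimal Python sketch of that self-reducibility trick; `decide` is an exponential brute-force stand-in for the hypothetical polynomial-time solver, and everything else would carry over unchanged if such a solver existed.

```python
# Decision-to-search for 3-SAT via self-reducibility.
# A clause is a list of nonzero ints; negative means a negated variable.
from itertools import product

def decide(clauses, n):
    """Brute-force stand-in: is the CNF over variables 1..n satisfiable?"""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def simplify(clauses, lit):
    """Assume literal `lit` is true: drop satisfied clauses, prune -lit."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        reduced = [l for l in c if l != -lit]
        if not reduced:          # empty clause: contradiction
            return None
        out.append(reduced)
    return out

def solve(clauses, n):
    """Recover a satisfying assignment using only the decision procedure."""
    if not decide(clauses, n):
        return None
    fixed = []
    for v in range(1, n + 1):
        for lit in (v, -v):
            trial = simplify(clauses, lit)
            if trial is not None and decide(trial, n):
                clauses = trial
                fixed.append(lit)
                break
    return {abs(l): l > 0 for l in fixed}
```

Each variable costs at most two oracle calls, so a polynomial-time `decide` would give a polynomial-time `solve`.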

Comments


What would you do with a solution to 3-SAT?

I'll tell you what I'd do, man: two chicks at the same time, man.

Anyway, my actual answer would have been about the same as jimrandomh's. But assuming I'm the only one who has the polynomial-time solution to 3-SAT, and in the absence of sufficiently specific knowledge about how to create an FAI, I would use it to make huge amounts of money (either by gaining a significant advantage in prediction and using that to play the stock market, or just by selling hitherto-impossible products or services built on it) and then use that to support said universe-optimization efforts.

There's a nice paper about that by Scott Aaronson (pdf)

If such a procedure existed, then we could quickly find the smallest Boolean circuits that output (say) a table of historical stock market data, or the human genome, or the complete works of Shakespeare. It seems entirely conceivable that, by analyzing these circuits, we could make an easy fortune on Wall Street, or retrace evolution, or even generate Shakespeare’s 38th play. For broadly speaking, that which we can compress we can understand, and that which we can understand we can predict.

He suggests that this is a good reason to take NP != P as a physical law, like the 2nd law of thermodynamics.

If such a procedure existed, then we could quickly find the smallest Boolean circuits that output (say) a table of historical stock market data, or the human genome, or the complete works of Shakespeare.

I don't see how a circuit overfitted to any of the above would help you.

That's just the thing: the smallest circuit wouldn't be overfitted. For instance, if I gave you the numbers 1, 1, 2, 3, 5, 8, 13, 21... plus a hundred more and asked for the SMALLEST circuit that outputs them, it would not be a circuit hundreds of bits in size. It would be just a few bits: the formula for generating the Fibonacci numbers. Except, instead of doing any thinking to figure this out, you would just use your NP machine. And essentially all mathematical theorems would be proved in the same way.
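
A toy version of that search, with a tiny hypothesis space of two-term linear recurrences standing in for the space of Boolean circuits the NP machine would range over (the coefficient bound is an arbitrary choice for the example):

```python
# Find the smallest "description" of a sequence by searching a tiny
# hypothesis space: recurrences a(n) = c1*a(n-1) + c2*a(n-2).
# Brute force here stands in for what the NP machine would do over circuits.
def fit_recurrence(seq, coeff_range=range(-3, 4)):
    for c1 in coeff_range:
        for c2 in coeff_range:
            if all(seq[i] == c1 * seq[i - 1] + c2 * seq[i - 2]
                   for i in range(2, len(seq))):
                return c1, c2
    return None  # no recurrence in this space fits the data

fib = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
print(fit_recurrence(fib))  # (1, 1): the Fibonacci rule, found by search
```

The point is that the fitted description is far smaller than the data, and a short description that still matches all hundred-plus terms is exactly what "not overfitted" means.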

And essentially all mathematical theorems would be proved in the same way.

I wasn't talking about mathematical theorems but about

stock market data, or the human genome, or the complete works of Shakespeare.

That is a bit poetic. In the Fibonacci case, we know that there is a simple explanation/formula. For the stock market, genome, or Shakespeare, it is not obvious that the smallest circuit will provide any significant understanding. On the other hand, if there's any regularity at all in the stock market, the shortest efficient description will take advantage of this regularity for compression. And, therefore, you could use this automatically discovered regularity for prediction as well.

On the other hand, if several traders get their hands on efficient NP computers at once, it's safe to bet that historical regularities will go out the window.

Is it known how that would compare (in theoretical predictive accuracy) to the general case of Solomonoff induction?

He suggests that this is a good reason to take NP != P as a physical law, like the 2nd law of thermodynamics.

That sounds odd. There are possible universes that don't have anything analogous to the second law of thermodynamics, but if P ≠ NP, then P ≠ NP in every possible universe; nothing about this particular universe could be evidence for it. But I'll read the paper and see if I'm misunderstanding.

It is not NP != P that is proposed as a physical law. It is the impossibility of building computers that quickly solve NP-complete problems. It is really more like a heuristic to quickly shoot down some physical theories. The 2nd law is a bad metaphor. The impossibility of faster-than-light communication is a better one. If your proposed physical theory makes faster-than-light communication possible, that makes the theory look suspicious. Analogously, if your proposed physical theory makes solving SAT feasible with a polynomial amount of resources, that should make the theory look suspicious, says Aaronson.

EDIT: As an important example, the possibility of general time travel would let you solve SAT easily. It is a nice exercise to figure out how. Harry Potter tried it in Methods of Rationality, and Aaronson has a whole lecture about it.
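
For the flavor of the trick (a sketch, not necessarily Aaronson's exact construction): define a map whose only fixed points are satisfying assignments; a closed timelike curve is then obliged to hand you a causally consistent, i.e. fixed, state for free. Without a time machine we can only simulate the consistency condition by brute force:

```python
# CTC trick for SAT: f(x) = x if x satisfies the formula, else x+1 mod 2^n.
# Every fixed point of f is a satisfying assignment; the time loop's demand
# for causal consistency is what would "compute" the fixed point.
def satisfies(clauses, x, n):
    bits = [(x >> i) & 1 == 1 for i in range(n)]
    return all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)

def ctc_step(clauses, x, n):
    return x if satisfies(clauses, x, n) else (x + 1) % (1 << n)

def find_fixed_point(clauses, n):
    """Brute-force stand-in for the CTC: walk until f(x) == x, if ever."""
    x = 0
    for _ in range(1 << n):
        if ctc_step(clauses, x, n) == x:
            return x
        x = ctc_step(clauses, x, n)
    return None  # no fixed point: the formula is unsatisfiable
```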

I totally agree. I guess you could imagine Maxwell's demon as an example where untangling a supposed violation of the 2nd law led to new understanding.

There are possible universes that don't have anything analogous to the second law of thermodynamics,

[Citation needed.]

Conway's game of life.

Edit: In particular it allows for perpetual motion machines.

There are unreachable states ("gardens of Eden" in the lingo) which means that (per the Garden of Eden Theorem) there exist states which are the successor of more than one possible state. This is an irreversibility (you cannot infer the previous state from the present one), implying an increase of entropy.

The perpetual motion machines you refer to are only that in a very metaphorical sense -- they don't allow an infinite increase of some energy-like metric. They just cycle between the same states, neither increasing nor decreasing entropy because of the full reversibility of such systems.

The perpetual motion machines you refer to are only that in a very metaphorical sense -- they don't allow an infinite increase of some energy-like metric.

Glider guns produce an endless stream of gliders to give the simplest example.

I'm a little sketchy on how a Turing machine in a universe proves that the universe can violate the 2nd law or lack a 2nd law analog.

There are unreachable states ("gardens of Eden" in the lingo) which means that (per the Garden of Eden Theorem) there exist states which are the successor of more than one possible state. This is an irreversibility (you cannot infer the previous state from the present one), implying an increase of entropy.

While this logic is technically correct, it's a very weird way to reason, since Garden of Eden patterns are very hard to find in CGL, but sets of patterns which converge on the next step are trivially easy to find (e.g. the block and the two common pre-blocks all become blocks on the next step).
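
The block example is easy to check directly. A minimal Life stepper (live cells as a set of coordinates) showing the block and one common pre-block mapping to the same successor, so the previous state cannot be inferred from the present one:

```python
# Two distinct Game of Life states with the same successor.
from collections import Counter

def life_step(cells):
    """One Life step; `cells` is a set of live (x, y) coordinates."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Born with exactly 3 neighbours; survive with 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

block = {(0, 0), (0, 1), (1, 0), (1, 1)}
pre_block = {(0, 0), (0, 1), (1, 0)}  # an L-tromino pre-block
print(life_step(block) == life_step(pre_block) == block)  # True
```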

It isn't true that irreversibility per se implies an increase of entropy - or at least I can't see how it follows from the definition. (And couldn't there be a universe whose 'laws of physics' were such that states may have multiple successors but at most one predecessor--so that by the 'irreversibility' criterion, entropy is decreasing--but which had a 'low entropy' beginning like a Big Bang and consequently saw entropy increase over time?)

In any case, it's not clear (to me at least) how the definition of entropy applies to the Game of Life.

I have this vague impression that makes me think of life as "cheating" by "running backwards".

In our own universe, quantum coin-flips make it look like one state can lead to more than one new state, and the universe "picks one". However, this "picking" operation is unnecessary and we say "they all happen" and just consider it one (larger) state evolving into one new (larger) state. This makes me wonder why you can't do the same thing for every set of laws that claims not to be a bijection between states.

In the game of life, we have cases where several states lead to one state, but not the other way around. From a timeless point of view, there's still a choice at each transition that deletes information: why this "initial" state rather than another? You can get rid of this by looking at all the initial states that lead to the next state, and now it's starting to look like a branching universe run backwards with a cherry-picked final state.

In our universe too, we can get things that look like second law violations if we carefully choose the right Everett branch and then look at time 'backwards', but because of the way measure works, we don't consider that important.

The observation that there would not be anything like the Second Law if our space-time continuum (our universe) did not start out in a very low-entropy state is in Penrose's Emperor's New Mind.

You can write a program general enough to be a universe but which doesn't involve temperature and doesn't involve inevitable information loss over time. Obviously none of them are going to be generating information from nowhere, but in principle it's at least possible to break even. (One example, which is rather simple and almost borders on cheating, would be to include an API that would allow any agent to access any bit of information from any point in the past. As far as I can tell, there's no reason why this wouldn't be allowed. It would have the aesthetic disadvantage of having a fundamentally directional time dimension, but that shouldn't cause any real problems to any agents living within it.)

doesn't involve inevitable information loss over time.

Actually, the lack of information loss over time is precisely what generates the 2nd law of thermodynamics. Specifically, since all information from the past must be stored somewhere (unfortunately often in a way that's hard to access, e.g., the "random" motion of atoms), that continuously leaves less room for new information.

Is it known how that would compare (in theoretical predictive accuracy) to the general case of Solomonoff induction?

Solomonoff induction is uncomputable. It's closer to solving the halting problem than to solving NP-hard problems. An NP solver would be computable, just faster than today's known algorithms for SAT.

What would you do with a solution to 3-SAT?

Any answer other than "create a superintelligent friendly AI to optimize the universe" would be a waste of this particular genie, but there are some steps in between that and 3-SAT which I can't specify yet.

there are some steps in between that and 3-SAT which I can't specify yet.

Coincidentally, I recently had an idea for making progress on FAI that would benefit greatly from a solution to 3-SAT.

Your idea seems to be a general method for solving unformalized problems using an NP-solver: generate phrases that an uploaded human would classify as insights. I'm still afraid that many such phrases will be mindhacks instead of genuine insights, though, and the risk gets worse as the problem gets harder (or more precisely, as the next inferential step becomes harder to hit).

I agree it's still risky, but with the safety features I put in (a small limit on the phrase length; outputting the top 100,000 phrases, considered individually, in random order instead of just the top 1; review and discussion by a team; and optionally mixing together the top 10,000 insights from each of 10 uploads for additional safety), it seems no worse than just having humans try to solve the problem by thinking about it, since we could also come up with self-mindhacks while thinking.

(Do you agree that the FAI problem has to be solved sooner or later? I think you didn't respond to the last argument I made on that.)

Do you agree that the FAI problem has to be solved sooner or later?

...And right now, thinking about possible replies to your comment, I finally switched to agreeing with that. Thanks.

Oh hell. This changes a lot. I need to think.

An NP oracle makes AI rather easier... but I'm not sure it currently would be as much help in FAI in particular. That is, I don't think it would help as much with the F part.

In other words, I suspect that an NP oracle is the sort of thing that if you discovered one... you should be really really quiet about it, and be very cautious about who you tell. (I may be totally wrong about this, though.)

(Declining to resist temptation to link a sci-fi story (7500 words) despite Google confirming you're already familiar with it.)

An NP oracle makes AI rather easier...

How so? This isn't obvious to me.

In other words, I suspect that an NP oracle is the sort of thing that if you discovered one... you should be really really quiet about it, and be very cautious about who you tell. (I may be totally wrong about this, though.)

There are both good and bad arguments against this. If one can find such a thing, it seems likely that others can too. A lot of the more mundane bad stuff that can be done with a practical P=NP algorithm, like breaking encryption, works only if people don't know you have such a solution; otherwise people go back to things like one-time pads. The economy would suffer until quantum crypto is really up and running, but the amount of long-term badness someone with the solution could do would be severely limited. Moreover, such a solution could also do a lot of good in mundane areas where one routinely needs to solve instances of NP-complete problems. So, overall, I'd go with releasing it.

NP oracles allow easy learning by letting one find compact models to explain/predict available data...

Also gives the ability to do stuff like "what actions can I take which, within N inferential steps of this model, will produce an outcome I desire?" or "will produce utility > U with probability > P" or such.

Maybe I'm way off on this, but it sure does seem like a cheap NP oracle would make at least UFAI comparatively easy.

If I'm totally wrong about NP oracles -> easy AI, then I'd agree with you re releasing it... with a caveat. I'd say to do it with advance warning, i.e., anonymously demonstrate to various banks, security institutions, and possibly the public that there exists someone with the ability to efficiently solve NP-complete problems, and that the algorithm will be released in X amount of time. (1-2 years would be my first thought for how long the advance warning should be.)

This way everyone has time to at least partly prepare for the day when all crypto other than OTP (quantum crypto being an example of an OTP-style scheme) is dead.

I would settle all famous conjectures in mathematics. Or at least the ones with short enough proofs/refutations.
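
The reason this works: in any formal system with a polynomial-time proof checker, "does statement S have a proof of length at most k?" is an NP question, so the hypothetical solver answers it. As a toy stand-in, here is a bounded derivation search in a made-up string-rewrite system (the rules and the length cap are invented for the example):

```python
# Bounded "proof search" in a toy rewrite system: can `goal` be derived
# from `start` in at most `max_steps` rule applications? Breadth-first
# search stands in for the NP machine's guess-and-check.
from collections import deque

RULES = [("A", "AB"), ("B", "BB"), ("ABB", "C")]

def provable(start, goal, max_steps):
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        s, d = frontier.popleft()
        if s == goal:
            return True
        if d == max_steps:
            continue
        for lhs, rhs in RULES:
            i = s.find(lhs)
            while i != -1:  # apply the rule at every position it matches
                t = s[:i] + rhs + s[i + len(lhs):]
                if t not in seen and len(t) <= len(goal) + 4:
                    seen.add(t)
                    frontier.append((t, d + 1))
                i = s.find(lhs, i + 1)
    return False

print(provable("A", "C", 3))  # True: A -> AB -> ABB -> C
```

Of course, "short enough" does the heavy lifting: conjectures whose shortest proofs are astronomically long stay out of reach even with the NP solver.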

OBVIOUSLY the answer to this question is:

I would assemble a documentary crew, and make a movie about me visiting the town and city halls of every town and city in America. I would travel the minimum distance possible, and do nothing particularly interesting at any location. I would release my video into the Creative Commons, and open up a website to go with the video. There would be a google map-like toy that invites users to plot their own tour of all the towns and cities in Europe. This will be my way of testing initiates into the Better Bayesian Conspiracy. Every initiate would be challenged to think up increasingly epic ways to secretly use P=NP. When involved in the Bayesian Conspiracy events, we would be just like the rest, except we would know all the right dance steps and purposefully misstep in amusing (to one another) ways. We would be the Bayesian who always seems to be smiling at some in-joke, we would be the lummox who just can't seem to get the right answer, we would be the pushy person in a mask telling young initiates that the answer is "one sixth". Eventually, we would be all of the members, at which point some of us would splinter off and form an even more nested, more hidden, more conspiratorial conspiracy of esoteric knowledge. Hopefully they would be at least twice as interested in fun as my shadow organization. One day, in three hundred years, I will play through a copy of my traveling salesman documentary and notice that somebody had quietly inserted the kth digit of a binary number into every frame. Four picoseconds later, I will find out that it was a secret message from a child of mine that I never had, sent from the future at the border of the nearest Tegmark universe.... but that is another story that has nothing to do with 3SATs or salesmen or even polynomials.