Specifically, let's say you are handed a Boolean 3-SAT problem, and you finally manage to finish solving the 3-SAT instance you are given, using a superpolynomial-time algorithm.

Now, you are given another Boolean 3-SAT problem. Can you amortize the complexity costs across 3-SAT instances, or does each instance require you to pay the full complexity cost of the algorithm you run?

To give an analogy for the question I'm asking: I'm trying to determine whether computationally hard problems are more CapEx-dominated, with a low OpEx for running each particular 3-SAT instance, making them more like an investment or a purchase, or whether they are more OpEx-dominated, where each 3-SAT instance remains just as expensive and the cost can't be amortized, much like renting something.

Equivalently, the question is how much you can amortize the costs of solving similar problems, like 3-SAT instances for NP-complete problems or True Quantified Boolean Formulas for PSPACE-complete problems.

Challenge: If you can show that amortizing across instances of a problem reduces computational complexity costs, how far up the complexity hierarchy does this go? How complex does a problem need to be before the costs can no longer be amortized?

3 Answers

Vanessa Kosoy

A relevant classical result in complexity theory is: if NP ⊆ P/poly, then the polynomial hierarchy collapses. Which is to say, this is pretty unlikely. This means that you can't solve circuit-SAT in time which is polynomial in the size n of the circuit, even if you had unlimited time to precompute an advice string for the given n that you need.
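
Spelled out, the result in question is the Karp–Lipton theorem:

\[
\mathsf{NP} \subseteq \mathsf{P/poly} \;\Longrightarrow\; \mathsf{PH} = \Sigma_2^{\mathsf{p}},
\]

and the "advice string" phrasing is just the definition of P/poly: a language $L$ is in $\mathsf{P/poly}$ iff there is a polynomial-time machine $M$ and advice strings $a_n$ with $|a_n| \le \mathrm{poly}(n)$ such that for every input $x$,

\[
x \in L \iff M(x, a_{|x|}) \text{ accepts},
\]

where $a_n$ may take arbitrarily long to precompute but depends only on the input length $n$, not on the particular instance $x$. A one-time, arbitrarily expensive precomputation that afterwards made every length-$n$ instance cheap would place NP inside P/poly, which is exactly what is believed not to happen.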

Dagon

"over sufficiently long timeframes, all costs are variable costs".   CapEx vs OpEx is an accounting distinction, and a short-term optimization question.  It's not fundamental.

How this works in practice, for large computational challenges, is that there is a curve for how much advance work reduces the per-iteration cost of the actual results. Where you want to be on this curve depends on the specifics of the prep (algorithmic improvements, better caching or parallelism (note: these often optimize in opposite directions), hardware improvements or tuning, etc.). Each of these dimensions has its own curve, and we often don't know in advance what the curve actually looks like, so there's a meta-curve we guess at about R&D vs. implementation excellence for each topic.

It gets very complex very quickly, in ways that are difficult to model numerically, because different problems are just different enough that some of the investments or meta-investments can't be reused.

As I often say at work, "that's why they pay us the medium bucks".

Dave Orr

In general, to solve an NP-complete problem like 3-SAT, you have to spend compute or storage.

Suppose you solve one 3-SAT problem. If you don't write down the solution and the steps along the way, then you have no way to get the benefit of that work for the next problem. But if you do store the results of the intermediate steps, then you need to store data that is itself superpolynomial in size.
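
One rough way to see why stored work transfers so poorly in general: there are $8\binom{n}{3} = \Theta(n^3)$ distinct 3-literal clauses over $n$ variables, hence

\[
2^{\,8\binom{n}{3}} = 2^{\Theta(n^3)}
\]

distinct 3-SAT instances (already $2^{960}$ at $n = 10$), so whatever you cache from one solve covers only a vanishing fraction of the instances you might be handed next.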

In practice you can often do much better than this, because the problems you're solving may share data or characteristics that lead to shortcuts, but in the general case you have to pay the full cost every time you need to solve an NP-complete problem.
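
As a minimal sketch of this point (illustrative only; the brute-force search and names like solve_3sat are stand-ins, not anything a production solver does): a memoized 3-SAT solver whose cache is keyed by the exact formula only ever saves work when the very same instance recurs. A genuinely new instance still costs the full $2^n$ search, and the cache itself can grow without bound.

```python
from itertools import product

def canonicalize(clauses):
    """Key a formula by a sorted, de-duplicated tuple of sorted clauses."""
    return tuple(sorted(set(tuple(sorted(c)) for c in clauses)))

_cache = {}  # canonical formula -> satisfying assignment (dict) or None

def solve_3sat(clauses, n_vars):
    """Brute-force 3-SAT. Literals are ints: +i means x_i, -i means NOT x_i."""
    key = (canonicalize(clauses), n_vars)
    if key in _cache:                      # amortization only on exact repeats
        return _cache[key]
    result = None
    for bits in product([False, True], repeat=n_vars):    # 2^n assignments
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            result = assignment
            break
    _cache[key] = result                   # the cache can grow without bound
    return result

# (x1 OR x2 OR NOT x3) AND (NOT x1 OR x2 OR x3)
clauses = [(1, 2, -3), (-1, 2, 3)]
print(solve_3sat(clauses, 3))   # pays the full search the first time
print(solve_3sat(clauses, 3))   # answered from the cache the second time
```

Real incremental SAT solvers do better when successive instances share clauses, for example by reusing learned clauses across calls, but that is exploiting similarity between instances, not beating the general worst case.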

So does that mean you can't, in general, gain cost advantages by solving other instances of the same or similarly computationally complex problems?

1 comment

This is a difficult question to answer, of course, because you're asking such a broad question and it's going to depend on stuff like the actual distribution rather than worst-case, unless you define a class so narrow that each instance offers some gains on every other instance, which is unusual. No-free-lunch, etc. Sometimes there are big savings (this comes up in cryptography where entities like the NSA can attack the world for a lot cheaper than you would think based on the computational cost of individual attacks), but usually there isn't - AFAIK there is no general way to solve 1 arbitrary 3-SAT problem and get a gain on another. Only if there is some sort of similarity, say, because they are all based on real-world problems (which is much narrower than 'every logically possible problem'), can you hope to, say, distill the search into a neural net blackbox and amortize gains.

You probably want to look into algorithmic information theory; something like OOPS (Schmidhuber's Optimal Ordered Problem Solver) would be relevant here, because it builds on Levin search, can in principle solve every solvable problem, and attempts to share or amortize work between multiple problems. (This also more closely resembles what you're really interested in, which is doing this online as you solve problems.)
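
As a toy illustration of the scheduling idea behind Levin search (the "programs" below are just Python generators standing in for enumerated code, and the description lengths are made up): in phase $T = 1, 2, 4, \ldots$, each candidate program $p$ gets roughly $T \cdot 2^{-\ell(p)}$ steps, so short programs are tried hardest but every program eventually gets enough time.

```python
def count_up(step):
    """Stand-in 'program': proposes step, 2*step, 3*step, ... one per tick."""
    def program():
        i = step
        while True:
            yield i
            i += step
    return program

def levin_search(programs, is_solution):
    """programs: list of (description_length, generator_factory) pairs."""
    phase = 1
    while True:
        for length, factory in programs:
            budget = max(1, phase >> length)   # ~ phase * 2**(-length) steps
            candidates = factory()             # restart from scratch each phase
            for _ in range(budget):
                candidate = next(candidates)
                if is_solution(candidate):
                    return candidate
        phase *= 2                             # double the total time budget

# Two made-up 'programs' with different description lengths, hunting for 42.
programs = [(1, count_up(1)), (2, count_up(7))]
print(levin_search(programs, lambda c: c == 42))   # prints 42
```

OOPS refines this by letting later searches reuse and build on programs found for earlier problems, which is the amortization-across-problems angle of the original question.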