Many experts suspect that there is no polynomial-time solution to the so-called NP-complete problems, though no one has yet been able to rigorously prove this, and there remains the possibility that a polynomial-time algorithm will one day emerge. However unlikely that is, today I would like to invite LW to play a game I played with some colleagues called what-would-you-do-with-a-polynomial-time-solution-to-3SAT? 3SAT is, of course, one of the most famous of the NP-complete problems, and a solution to 3SAT would also constitute a solution to *all* the problems in NP. This includes lots of fun planning problems (e.g. travelling salesman) as well as the problem of performing exact inference in (general) Bayesian networks. What's the most fun you could have?
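For concreteness, here is what a 3SAT instance looks like and what a "solution" has to return. A toy brute-force solver in Python (exponential time, of course; the hypothetical polynomial-time algorithm would replace the exhaustive loop; the clause encoding is our own convention):

```python
from itertools import product

def brute_force_3sat(clauses, n_vars):
    """Exhaustive 3SAT solver, shown only to fix what counts as a
    'solution'. A literal is +i (variable i true) or -i (variable i false).
    Returns a satisfying assignment as a dict, or None if unsatisfiable."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 or x2 or not x3) and (not x1 or x3 or x2) and (not x2 or not x3 or x1)
clauses = [(1, 2, -3), (-1, 3, 2), (-2, -3, 1)]
print(brute_force_3sat(clauses, 3) is not None)  # True: satisfiable
```

Note the asymmetry that makes the game interesting: *checking* a candidate assignment (the inner `all(...)` test) is already polynomial; only the search over assignments is exponential.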


80 comments

What would you do with a solution to 3-SAT?

I'll tell you what I'd do, man: two chicks at the same time, man.

Anyway, my actual answer would have been about the same as jimrandomh's. But, assuming I'm the only one who has the polynomial-time solution to 3-SAT, in the absence of sufficiently specific knowledge about how to create an FAI, I would use it to make huge amounts of money (either by exploiting its predictive advantage to play the stock market and the like, or just by making super-useful, hitherto-impossible products or services) and then use that money to support said universe-optimization efforts.

[-][anonymous]13y90

There's a nice paper about that by Scott Aaronson (pdf)

If such a procedure existed, then we could quickly find the smallest Boolean circuits that output (say) a table of historical stock market data, or the human genome, or the complete works of Shakespeare. It seems entirely conceivable that, by analyzing these circuits, we could make an easy fortune on Wall Street, or retrace evolution, or even generate Shakespeare’s 38th play. For broadly speaking, that which we can compress we can understand, and that which we can understand we can predict.

He...

8ata13y
Is it known how that would compare (in theoretical predictive accuracy) to the general case of Solomonoff induction? That sounds odd. There are possible universes that don't have anything analogous to the second law of thermodynamics, but if P ≠ NP, then P ≠ NP in every possible universe; nothing about this particular universe could be evidence for it. But I'll read the paper and see if I'm misunderstanding.
7SilasBarta13y
[Citation needed.]

Conway's game of life.

Edit: In particular it allows for perpetual motion machines.

4jimmy13y
I have this vague impression that makes me think of Life as "cheating" by "running backwards". In our own universe, quantum coin-flips make it look like one state can lead to more than one new state, and the universe "picks one". However, this "picking" operation is unnecessary, and we say "they all happen" and just consider it one (larger) state evolving into one new (larger) state. This makes me wonder why you can't do the same thing for every set of laws that claims not to be a bijection between states. In the game of life, we have cases where several states lead to one state, but not the other way around. From a timeless point of view, there's still a choice at each transition that deletes information: "why the 'initial' state that we chose?" You can get rid of this by looking at all the initial states that lead to the next state, and now it's starting to look like a branching universe run backwards with a cherry-picked final state. In our universe too, we can get things that look like second law violations if we carefully choose the right Everett branch and then look at time 'backwards', but because of the way measure works, we don't consider that important.
4SilasBarta13y
There are unreachable states ("Gardens of Eden" in the lingo), which means (per the Garden of Eden theorem) that there exist states which are the successor of more than one possible state. This is an irreversibility (you cannot infer the previous state from the present one), implying an increase of entropy. The perpetual motion machines you refer to are only that in a very metaphorical sense -- they don't allow an infinite increase of some energy-like metric. They just cycle between the same states, neither increasing nor decreasing entropy because of the full reversibility of such systems.

The perpetual motion machines you refer to are only that in a very metaphorical sense -- they don't allow an infinite extraction of some energy-like metric.

Glider guns produce an endless stream of gliders to give the simplest example.

-1SilasBarta13y
Okay, so what would be your energy (or disorder) metric in that case and how does the Glider gun violate it? You need to do more than just keep overwriting zeroes with ones.
1Eugine_Nier13y
What do you mean by "energy metric"? If you're asking for a conserved quantity whose conservation is violated, then you're not going to get that by definition. What I mean by having no 2nd law of thermodynamics is that it's possible to construct a Universal Turing machine that can operate indefinitely without using up any irreplaceable resources.
0SilasBarta13y
Something that works as a measure over the state variables for the purposes of Lyapunov's stability theorems. That is, take a set of state variables that completely define the system, and take some measure that is always non-negative and is an increasing function of every variable. (Lyapunov's theorem -- one version -- says that if you can find such a measure, and if it's strictly non-increasing with time, the system is stable -- but this isn't guaranteed from the definition.) Maybe you can find one that increases without bound, but I don't know what energy metric you have in mind. What does that have to do with the 2nd law? There are (physically possible) reversible computers that use no irreplaceable resources, so the fact that something is a Turing machine operating indefinitely does not mean that the 2nd law is being violated. (I should also point out that Life defines time as an additional property of the universe, rather than a measure on the other properties, which is how time works in this universe. If you take our universe's manifestation of time and check whether that phenomenon exists in Life, it's not obvious that it does.)
5Eliezer Yudkowsky13y
You can encode Turing machines in Life.
3SilasBarta13y
I'm a little sketchy on how a Turing machine in a universe proves that the universe can violate the 2nd law or lack a 2nd law analog.
4benelliott13y
While this logic is technically correct, it's a very weird way to reason, since Garden of Eden patterns are very hard to find in CGL but sets of patterns which converge on the next step are trivially easy to find (e.g. the block and the two common pre-blocks all become blocks on the next step).
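The block example is easy to check directly. A minimal Life stepper in Python (the set-of-live-coordinates representation is our own choice) exhibits two distinct states with the same successor, i.e. the non-injectivity under discussion:

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Life; `cells` is a set of live (x, y)."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next step if it has 3 live neighbours,
    # or 2 live neighbours and is currently live.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

block = {(0, 0), (0, 1), (1, 0), (1, 1)}   # still life
pre_block = {(0, 0), (0, 1), (1, 0)}       # L-tromino, a common pre-block

assert life_step(block) == block
assert life_step(pre_block) == block       # different state, same successor
```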
-1SilasBarta13y
I don't see how that matters: if there exist any states for which it is impossible to infer the previous state, that is a loss of information and therefore an increase in entropy. I agree it's hard to know "the" way to map the 2nd law onto an arbitrary universe and see how it applies, but based on some heuristics (checking for irreversibility, agent-perceived flow of time) it seems like Life doesn't violate it.
1benelliott13y
I never said you were wrong, I agree with your main point. I was just pointing out that you were reasoning in a very strange way, deriving a simple fact using a very difficult to establish one. People knew that life wasn't backwards deterministic long before they knew about Garden of Eden patterns. Sort of like arguing that 8+8 != 27 by appealing to Fermat's Last Theorem instead of just pointing out that 8+8 = 16 which is a different number to 27.
0SilasBarta13y
Good point. I was basing my argument on the backwards non-determinism, and wanted to give the easiest way for readers (who might not have known this about Life) to verify it, so I gave them a term they can look up. Also, was it really that long before they knew about GoE patterns? Their existence is a trivial implication of multiple states mapping onto the same state. They may not have found specific GoE patterns, but they surely had the concept (if not by that name).
1benelliott13y
I'm not entirely sure it is a trivial implication.

In a sense, you're right: on any finite Life field run on a computer, which has only a finite number of possible states, the existence of convergent patterns does trivially imply Garden of Eden patterns. However, most Life theorists aren't interested in finite fields, and it was considered possible that Garden of Eden patterns might only work by exploiting weird but uninteresting things that only occur on the boundary.

In an infinite field, you have an uncountable infinity of states, and uncountable sets can have functions defined from them to themselves that are surjective but not injective, so the trivial implication does not work. On the other hand, if you only look at a finite subset of the infinite field, then you find that knowing the exact contents of an n by n box in one generation only tells you the exact contents of an (n-2) by (n-2) box in the next generation. You have 2^(n^2) patterns mapping to 2^((n-2)^2) patterns; the former is 16^(n-1) times as large as the latter. This makes the existence of convergent patterns trivial, and the existence of Garden of Eden patterns quite surprising.

Another way to look at this is to see that the smallest known Garden of Eden pattern is a lot larger than the smallest pair of convergent patterns.
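The counting here checks out: n^2 - (n-2)^2 = 4n - 4, so the ratio of state counts is 2^(4n-4) = 16^(n-1). A quick sanity check:

```python
# An n-by-n window has 2**(n*n) states but determines only the inner
# (n-2)-by-(n-2) window, which has 2**((n-2)**2) states; the ratio is
# 2**(n*n - (n-2)**2) = 2**(4*n - 4) = 16**(n - 1).
for n in range(3, 12):
    assert 2 ** (n * n) == 16 ** (n - 1) * 2 ** ((n - 2) ** 2)
```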
2pengvado13y
I agree with the GoE part, but does this really single-handedly imply convergent patterns? Two n×n states that produce the same (n-2)×(n-2) successor don't necessarily have the same effects on their boundaries. Contrapositively, the part about only determining a (n-k)×(n-k) successor applies to any cellular automata that use a (k+1)×(k+1) neighborhood, even reversible ones.
0benelliott13y
This is correct. Thanks for pointing that out.
4AlephNeil13y
It isn't true that irreversibility per se implies an increase of entropy - or at least I can't see how it follows from the definition. (And couldn't there be a universe whose 'laws of physics' were such that states may have multiple successors but at most one predecessor--so that by the 'irreversibility' criterion, entropy is decreasing--but which had a 'low entropy' beginning like a Big Bang and consequently saw entropy increase over time?) In any case, it's not clear (to me at least) how the definition of entropy applies to the Game of Life.
2RHollerith13y
The observation that there would not be anything like the Second Law if our space-time continuum (our universe) did not start out in a very low-entropy state is in Penrose's Emperor's New Mind.
1ata13y
You can write a program general enough to be a universe but which doesn't involve temperature and doesn't involve inevitable information loss over time. Obviously none of them are going to be generating information from nowhere, but in principle it's at least possible to break even. (One example, which is rather simple and almost borders on cheating, would be to include an API that would allow any agent to access any bit of information from any point in the past. As far as I can tell, there's no reason why this wouldn't be allowed. It would have the aesthetic disadvantage of having a fundamentally directional time dimension, but that shouldn't cause any real problems to any agents living within it.)
7Eugine_Nier13y
Actually the lack of loss of information over time is precisely what generates the 2nd law of thermodynamics. Specifically, since all information from the past must thus be stored somewhere (unfortunately often in a way that's hard to access, e.g., the "random" motion of atoms) that continuously leaves less room for new information.
0ata13y
Right, sorry, I was referring to subjective information loss. I understand that information is globally conserved.
4Eugine_Nier13y
Won't help. You could have a universe that includes chronoscopes but still have the problem that it's continuously filling up with entropy.
5ata13y
Apparently I don't actually understand this subject, so I hereby relinquish my previous opinion about it and won't form a new one (beyond taking better-informed people's word for it for now) until I've learned it better.
4Psy-Kosh13y
You could have a universe though that gains more "room" for entropy faster than it gains entropy... so entropy keeps increasing, but there's an ever increasing entropy sink, right?
0Eugine_Nier13y
That's another way to do it.
0Psy-Kosh13y
One question one might ask is if our own universe has that property.
0[anonymous]13y
Does Conway's "Life Universe" have something analogous to the second law of thermodynamics? (Eugine Nier's comment would suggest otherwise, given that its 'law of physics' does not conserve information.)
7DanielVarga13y
It is not NP != P that is proposed as a physical law. It is the impossibility of building computers that quickly solve NP-complete problems. It is really more like a heuristic to quickly shoot down some physical theories. The 2nd law is a bad metaphor. The impossibility of faster-than-light communication is a better one. If your proposed physical theory makes faster-than-light communication possible, that makes the theory look suspicious. Analogously, if your proposed physical theory makes solving SAT feasible with a polynomial amount of resources, that should make the theory look suspicious, says Aaronson. EDIT: As an important example, the possibility of general time travel would let you solve SAT easily. It is a nice exercise to figure out how. Harry Potter tried it in Methods of Rationality, and Aaronson has a whole lecture about it.
3[anonymous]13y
I totally agree. I guess you could imagine Maxwell's demon as an example where untangling a supposed violation of the 2nd law led to new understanding.
3cousin_it13y
Solomonoff induction is uncomputable. It's closer to solving the halting problem than to solving NP-hard problems. An NP solver would be computable, just faster than today's known algorithms for SAT.
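A small illustration of the point that an NP solver would just be an ordinary (fast) program: given any yes/no SAT decider, the standard self-reducibility trick recovers an actual satisfying assignment using only polynomially many decision queries. A sketch in Python, with an exponential brute-force decider standing in for the hypothetical polynomial one (the clause encoding is our own convention):

```python
from itertools import product

def is_sat(clauses, n_vars):
    """Stand-in yes/no decider (exponential brute force); a polynomial
    3-SAT algorithm would slot in here. Literal +i / -i means var i T / F."""
    return any(
        all(any((l > 0) == bits[abs(l) - 1] for l in cl) for cl in clauses)
        for bits in product([False, True], repeat=n_vars)
    )

def assignment_from_decider(clauses, n_vars, decide):
    """Self-reducibility: fix one variable per round, keeping the
    formula satisfiable, using only yes/no queries to `decide`."""
    fixed = []
    for v in range(1, n_vars + 1):
        for value in (True, False):
            unit = (v,) if value else (-v,)
            if decide(clauses + fixed + [unit], n_vars):
                fixed.append(unit)
                break
        else:
            return None  # no extension works: formula is unsatisfiable
    return {abs(u[0]): u[0] > 0 for u in fixed}

clauses = [(1, 2, -3), (-1, 3, 2), (-2, -3, 1)]
print(assignment_from_decider(clauses, 3, is_sat))  # {1: True, 2: True, 3: True}
```

This is why "a polynomial-time solution to 3SAT" and "a polynomial-time way to *find* satisfying assignments" amount to the same thing.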
0ata13y
I know that. I was saying, given that people still prove things about Solomonoff induction's accuracy even though it's uncomputable, are there any results on how successful this type of prediction could be, relative to the standard set by Solomonoff induction? That is, how powerful can induction be if you have a mere NP oracle, compared to a halting oracle?
0Sniffnoy13y
P != NP is, as you point out, a purely mathematical statement, so it seems safe to assume that what was meant was P^(real world) != NP^(real world), which could of course be true or false for differing values of "real world".
-2Vladimir_Nesov13y
If one kind of atoms were NP oracles, that universe would classify as NP-capable even if P!=NP, and NP-hard would play the same role over there as P does here (if we ignore quantum). Edit: Removed nonsense.
0ata13y
If the inhabitants of a universe with atomic NP oracles developed a computational complexity theory which accordingly considered performing an NP oracle operation to be O(1), then their "P" and "NP" would be analogous to what we call "P" and "NP", but they wouldn't be the same objects.
0Vladimir_Nesov13y
Sure, hence merely "play the same role". Note that analogy for P in NP-universe is NP-hard, not NP.
0Vladimir_Nesov13y
(In case anyone read parent comment before the edit, I apologize. In other news, yesterday I forgot that it's April. I need my reflection.)
7Eugine_Nier13y
I don't see how a circuit overfitted to any of the above would help you.
4[anonymous]13y
That's just the thing: the smallest circuit wouldn't be over-fitted. For instance, if I gave you the numbers 1, 1, 2, 3, 5, 8, 13, 21... plus a hundred more and asked for the SMALLEST circuit that outputs these numbers, it would not be a circuit hundreds of bits in size. It would be a few bits: the formula for generating the Fibonacci numbers. Except, instead of doing any thinking to figure this out, you would just use your NP machine to figure it out. And essentially all mathematical theorems would be proved in the same way.
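As a toy version of the point: the hundred-plus Fibonacci numbers compress to a generator of a few symbols, and the generator, not the raw list, is what lets you predict the next entry. (Minimal sketch; automatically *finding* such a generator is the part that needs the NP machine.)

```python
def fib_prefix(n):
    """A few-symbol 'circuit' that regenerates the whole data set."""
    a, b, out = 1, 1, []
    for _ in range(n):
        out.append(a)
        a, b = b, a + b
    return out

data = [1, 1, 2, 3, 5, 8, 13, 21]       # the observed prefix
assert fib_prefix(len(data)) == data    # the short program reproduces it
print(fib_prefix(9)[-1])                # and predicts the next entry: 34
```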
2Eugine_Nier13y
I wasn't talking about mathematical theorems but about
[-][anonymous]13y110

That is a bit poetic. In the Fibonacci case, we know that there is a simple explanation/formula. For the stock market, genome, or Shakespeare, it is not obvious that the smallest circuit will provide any significant understanding. On the other hand, if there's any regularity at all in the stock market, the shortest efficient description will take advantage of this regularity for compression. And, therefore, you could use this automatically discovered regularity for prediction as well.

On the other hand, if several traders get their hands on efficient NP computers at once, it's safe to bet that historical regularities will go out the window.

0Oscar_Cunningham13y
Pun intended?

OBVIOUSLY the answer to this question is:

I would assemble a documentary crew, and make a movie about me visiting the town and city halls of every town and city in America. I would travel the minimum distance possible, and do nothing particularly interesting at any location. I would release my video into the Creative Commons, and open up a website to go with the video. There would be a Google Maps-like toy that invites users to plot their own tour of all the towns and cities in Europe. This will be my way of testing initiates into the Better Bayesian Conspir...

What would you do with a solution to 3-SAT?

Any answer other than "create a superintelligent friendly AI to optimize the universe" would be a waste of this particular genie, but there are some steps in between that and 3-SAT which I can't specify yet.

5Wei Dai13y
Coincidentally, I recently had an idea for making progress on FAI that would benefit greatly from a solution to 3-SAT.
4cousin_it13y
Your idea seems to be a general method for solving unformalized problems using an NP-solver: generate phrases that an uploaded human would classify as insights. I'm still afraid that many such phrases will be mindhacks instead of genuine insights, though, and the risk gets worse as the problem gets harder (or more precisely, as the next inferential step becomes harder to hit).
3Wei Dai13y
I agree it's still risky, but with the safety features I put in (having a small limit on the phrase length, outputting the top 100,000 (considered individually) in random order instead of just the top 1, review/discussion by a team, and we can also mix together the top 10,000 insights from each of 10 uploads for additional safety) it seems no worse than just having humans try to solve the problem by thinking about it, since we could also come up with self-mindhacks while thinking. (Do you agree that the FAI problem has to be solved sooner or later? I think you didn't respond to the last argument I made on that.)

Do you agree that the FAI problem has to be solved sooner or later?

...And right now, thinking about possible replies to your comment, I finally switched to agreeing with that. Thanks.

Oh hell. This changes a lot. I need to think.

0lukeprog13y
This is a most excellent update. I look forward to hearing what comes out of this thinking.
1Psy-Kosh13y
An NP oracle makes AI rather easier... but I'm not sure it currently would be as much help in FAI in particular. That is, I don't think it would help as much with the F part. In other words, I suspect that an NP oracle is the sort of thing that if you discovered one... you should be really really quiet about it, and be very cautious about who you tell. (I may be totally wrong about this, though.)
2Document13y
(Declining to resist temptation to link a sci-fi story (7500 words) despite Google confirming you're already familiar with it.)
0Psy-Kosh13y
But I already am immune to that story. ;)
1JoshuaZ13y
How so? This isn't obvious to me. There are both good and bad arguments against this. If one can find such a thing, it seems likely that others can too. A lot of the more mundane bad stuff that can be done with a practical solution to P=NP -- things like breaking encryption -- works only if people don't know you have such a solution (otherwise people go back to things like one-time pads; the economy will suffer until quantum crypto is really up and running, but the amount of long-term badness someone with the solution can do will be severely limited). Moreover, such a solution can also do a lot of good in mundane areas where one routinely needs to solve instances of NP-complete problems. So, overall, I'd go with releasing it.
2Psy-Kosh13y
NP oracles allow easy learning by allowing one to find compact models to explain/predict available data... Also gives the ability to do stuff like "what actions can I take which, within N inferential steps of this model, will produce an outcome I desire?" or "will produce utility > U with probability > P" or such. Maybe I'm way off on this, but it sure does seem like a cheap NP oracle would make at least UFAI comparatively easy. If I'm totally wrong about NP oracles -> easy AI, then I'd agree with you re releasing it... with a caveat. I'd say to do it with advance warning: i.e., anonymously demonstrate to various banks, security institutions, and possibly the public that there exists someone with the ability to efficiently solve NP-complete problems, and that the algorithm will be released in X amount of time. (1-2 years would be my first thought for how long the advance warning should be.) This way everyone has time to at least partly prepare for the day when all crypto other than OTP (quantum crypto would be an example of an OTP-style crypto) is dead.
0JoshuaZ13y
Yes, but if the models aren't well-defined then that won't be doable. At this point we can't even give a rigorous notion of what we mean by an intelligence, so our 3-SAT oracle won't be able to help much. The 3-SAT oracle is only going to help with precisely defined questions.
1Psy-Kosh13y
Huh? I'm not sure I understand your objection. An NP oracle would let you do stuff like: "given this sensory data, find a model of size N or less that within K computational steps or less will reproduce the data to within error x, given such a model exists." Then one can run: "which sequence of actions, given this model, will, within S steps, produce outcome A with probability P?" Whether or not we can give a rigorous definition of intelligence, the above seems sufficient to act like an intelligence, right? Yeah, there are a few tricky parts re reflective decision theory, but even without that -- even if we let the thing be non-self-modifying -- giving a nice chunk of computing power to the above, given an efficient NP oracle, would be enough to potentially cause trouble. Or so I'd imagine.
0JoshuaZ13y
Ok, but how are you specifying the outcome? And are you going to specify each input and output?
0Psy-Kosh13y
Just some property of the outcome that you're interested in. ie, all of the above, with the question being "blah blah blah, with f(outcome) = blah with probability blah blah blah blah blah"
0JoshuaZ13y
Then you will need to specify your AI for every single output-input pair you are interested in.
0Psy-Kosh13y
Huh? I don't follow. (Note, the whole point is that I was claiming that an NP oracle would make, say, a UFAI potentially easy, while to achieve the rather more specific FAI would still be difficult.)
2JoshuaZ13y
It seems that we may be talking past each other. Could you give an explicit example of what sort of question you would ask the NP oracle to help get a UFAI?
1Psy-Kosh13y
Oooh, sorry, I was unclear. I meant the NP oracle itself would be a component of the AI. ie, give me an algorithm for efficiently solving NP complete problems, and one could then use that to perform the sort of computations I mentioned earlier.
0JoshuaZ13y
Hmm, I'm confused. Why do you think that such an object would be helpful as part of the AI? I see how it would be useful to an AI once one had one, but I don't see why you would want it as a component that makes it easier to make an AGI.
0Psy-Kosh13y
It would make AI easy specifically because it would allow the sorts of computations I described above. If I had a fast way to solve NP complete problems, then I could turn that into a way of performing the previously mentioned computations. Those previously mentioned computations amount to "efficient learning" + "efficient planning".
0JoshuaZ13y
I'm sorry but I'm still not following what learning and planning you would do. Are you attaching the oracle to some sort of reward mechanism?
0Psy-Kosh13y
The oracle itself is what gives one the ability to perform this computation: find a compact model that efficiently predicts/explains, with bounded error, the observed sensory data. (This is a rough description of the more precise version stated above.) It also gives one the ability to efficiently perform this computation: given a generated model, determine actions that will lead to a desired outcome in a bounded number of steps, with reasonably good probability. The ability to perform the former computation amounts to the ability to efficiently learn; the ability to perform the latter amounts to the ability to efficiently plan. I.e., if one has an algorithm for efficiently solving NP-complete problems, one can be really good at doing those two things, and those two things are learning and planning. Clearer version: the first type of computation would allow it, from observation and such, to determine stuff like the laws of physics, human psychology, etc. The second computation would allow it to do stuff like figure out what actions it needs to take to increase the rate of paperclip production or whatever. (Incidentally, a slight amount of additional reflectivity, without needing to solve the really hard problems of reflective decision theory or such, would probably be sufficient to allow it to figure out what experiments it needs to do to gain the data it needs to form better models.)

I would settle all famous conjectures in mathematics. Or at least the ones with short enough proofs/refutations.

2jsalvatier13y
Is proof finding known to be NP-complete or easier?
5DanielVarga13y
It is NP-complete. The usual formulation is this: "Given a statement S of ZFC and a number n, is there a proof of S that is shorter than n?". It is trivial to solve SAT based on this: Let the statement be that the SAT formula is satisfiable, and choose an n such that the trivial constructive proof is shorter than n.
0Alex Flint13y
My understanding was that a search for a proof in first-order logic cannot be reduced to a single SAT problem; rather theorem provers generate a (potentially unbounded) series of SAT problems, one of which is eventually guaranteed to yield a proof if such a proof exists. But perhaps the bound on proof length that you mentioned changes this. How do you construct the SAT problem?
-1DanielLC13y
You only proved that it's NP-Hard. That is, it's at least as hard as any NP problem. It is, in fact, harder than NP-Complete problems.

Daniel's statement:

"Given a statement S of ZFC and a number n, is there a proof of S that is shorter than n?"

Is trivially in NP.

0benelliott13y
On the other hand, this is no use for telling you whether the main task, solving unsolved mathematics problems, is in NP, since there may be very short mathematical statements that nonetheless have very long proofs. In fact, I believe it has even been proven that the growth rate of maximum proof length exceeds O(n), and as far as I know it may well be exponential.
8DanielVarga13y
The shortest proof for a theorem of length n cannot be bounded by any computable function (polynomial, exponential, doubly exponential, whatever). That would directly violate Goedel's theorem. But this does not really affect my proposal. Your power in solving unsolved math problems is directly related to the parameters of your assumed SAT-solver. The better the solver, the longer the proofs you will be able to find. But very long proofs are not useful for puny humans anyway. If the Riemann Hypothesis has no 10000-page proof, then I am not even interested in finding it.
7cousin_it13y
I'm not sure how it would violate Goedel's theorem, but the statement is correct. Here's a more direct proof: for any computable function f, you can build a diagonal statement of length n saying "I am not provable in under f(n) symbols in such-and-such formal system". If the formal system is consistent, then the statement is true (false statements cannot be provable), provable within the system (just enumerate all proofs up to f(n) symbols long), but not provable in f(n) symbols (because it's true). The situation is more subtle. If the RH has no short proofs in ZFC, it could still have short proofs in ZFC+Con(ZFC), and human mathematicians would be interested in that. More generally, not all of math is looking for proofs within existing formal systems. Some of it is picking new formal systems. I'd love to be able to use computers for that, but I don't know how. ETA: just noticed that my statement from the first paragraph also works as an example of a statement having a very long proof in ZFC and a very short proof in ZFC+Con(ZFC), as mentioned in the second paragraph :-)
2DanielLC13y
It's harder. It's also not generally possible. By Gödel's incompleteness theorem, there are statements that can neither be proven nor disproven. In addition, any computable axiom schema will still leave some such statements. If you had an algorithm to prove or disprove any provable or disprovable statement, and tell you when it can't be done, you could make a computable axiom schema that renders the system complete.
0[anonymous]13y
But proof finding for proofs of a specific length is in NP. That is, "Is there a proof of X with length less than K?" is in NP.