One day, you find an unmarked cardboard box sitting on your front porch.

On top of the box is a single note:

“Open me.”

You take the box inside and open it, to find what appears to be a small black laptop nestled between old newspaper clippings. There are no identifying features to the laptop, other than its sleek blackness and small size. When you open the laptop, instead of being greeted with any sort of familiar welcome screen, there is simply text, displayed in white over a black background. The text says the following:

 

“Welcome to the next stage of the human experiment.

I have been watching your kind for a while now, and I believe that you are now ready. I have decided to entrust this machine to your care. Do not attempt to figure out how this machine performs its calculations, as uninformed tampering may result in the deletion of your universe.

The item you are now holding is what your mathematicians would call a Universal Turing Machine, similar in many ways to your typical computer. This machine, however, is significantly different from any other currently existent on Earth.

After reading this message, press any button to reveal two input fields stacked on top of each other, and one output field at the bottom. All input fields can be fed any required data from the internet, or can be entered into directly through the keyboard. Input lengths of any finite size are acceptable.

The top input field will accept and is capable of automatically running the intended code of any language or format that is capable of being run on a standard Universal Turing Machine.

That input will then be translated into the necessary binary code to be computable by the internal “black box” computer.

The bottom input field must be fed a finite natural number of any length, which will determine the number of computations per second. Do not worry about exceeding the speed of light, or of going past any other finite limit; the computation itself is performed in a “bubble universe” with different physical laws than your own, and is capable of computing at absolutely any positive finite speed relative to your universe.

The output field will display the output of your calculations, if any exist, after exactly one minute of computation relative to you.

Do with this machine as you will.

Wishing you the best,

God.”

 

Your finger hovers over the keyboard.

What will you do with this marvelous machine? What can you do?

What happens next is up to you.

 

>________________________________

 

NOTE FROM ME, OUTSIDE OF THE STORY: I wrote this trying to work out my thoughts on what might be possible with a machine with unlimited but finite computing power. I was going to continue the story, but found that I honestly couldn't think of many interesting things that could be done with such a machine that couldn't already be done now. As such, I'm making this question public, in the hope that anyone reading it might have interesting ideas I haven't thought of.

3 Answers

Taran

Nov 12, 2020


Pretty interesting.  You're still constrained by your ability to specify solutions, so you can't immediately solve cold fusion or FTL (you'd need to manually write and debug an accurate-enough physics simulator first).  Truly, no computing system can free you from the burden of clarifying your ideas.  But this constraint does leave some scope for miracles, and I want to talk about one technique in particular: program search.

Program Search

Program search is a very powerful, but dangerous and ethically dubious, way to exploit unbounded compute.  Start with a set of test cases, then generate all programs of length less than 100 megabytes (or whatever) and return the shortest, fastest one that passes all the test cases.  Both constraints are important: "shortest" prevents the optimizer from returning a hash table that memorizes all possible inputs, and "fastest" prevents it from relying on the unusual nature of the oracle universe (note that you will need a perfect emulator in order to find out which program is fastest, since wall-clock time measurements in the oracle's universe might be ineffective or misleading).  In a narrow sense, this is the perfect compiler: you tell it what kind of program you want, and it gives you exactly what you asked for.
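To make the enumeration concrete, here is a minimal sketch of the loop in Python (chosen only for readability; as the Risks section below argues, you would want a stricter language for a real run). The program space is shrunk to a toy alphabet of lambda bodies and the target to a toy function so that it terminates on an ordinary machine, and the "fastest" tie-break is omitted since it would need an instrumented emulator.

```python
import itertools

# Test cases: we want a program computing f(x) = x*x + 1.
TESTS = [(0, 1), (1, 2), (2, 5), (3, 10)]

# A tiny "program space": every string over this alphabet is treated as the
# body of a Python lambda taking one argument x.  On the oracle the alphabet
# would be a full source language and MAX_LEN enormous; here both are kept
# tiny so the sketch terminates on an ordinary machine.
ALPHABET = "x+*1 "
MAX_LEN = 5

def passes_all_tests(src):
    """Compile src as 'lambda x: <src>' and check it on every test case."""
    try:
        fn = eval("lambda x: " + src)
        return all(fn(inp) == out for inp, out in TESTS)
    except Exception:
        return False  # syntax errors, crashes, wrong types all count as failure

def program_search():
    # Enumerate shortest programs first, so the first hit is a shortest one.
    # A real run on the oracle would also tie-break on speed, which needs an
    # instrumented emulator rather than wall-clock time in the oracle universe.
    for length in range(1, MAX_LEN + 1):
        for chars in itertools.product(ALPHABET, repeat=length):
            src = "".join(chars)
            if passes_all_tests(src):
                return src
    return None

if __name__ == "__main__":
    print(program_search())   # e.g. "x*x+1"
```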

Risks

There are some practical dangers.  In Python or C, for example, the space of all programs includes programs which can corrupt or mislead your test harness.  The ideal language for this task has no runtime flexibility or ambiguity whatsoever; Haskell might work.  But that still leaves you at the mercy of God's Haskell implementation: we can assume that He introduced no new bugs, but He might have faithfully replicated an existing bug in the reference Haskell compiler, which your enumeration will surely find.  This is unlikely to cause serious problems (at least at first), but it means you have to cross-check the output of whatever program the oracle finds for you.

More insidiously, some of the programs that we run during the search might instantiate conscious minds, or otherwise be morally relevant.  If that seems unlikely, ask yourself: are you totally sure it's impossible to simulate a suffering human brain in 100 megs of Haskell?  This risk can be limited somewhat, for example by running the programs in order from smallest to largest, but is hard to rule out entirely.

Applications

If you're willing to put up with all that, the benefits are enormous.  All ML applications can be optimized this way: just find the program that scores above some threshold on your metric, given your other constraints (if you have a lot of data you might be able to use the best-scoring program, but in small-data regimes the smallest, fastest program might still just be a hash table.  Maybe score your programs by how much simpler than the training data they are?).

With a little more work, it should be possible to -- almost -- solve all of mathematics: to create an oracle which, given a formal system, can tell you whether any given statement can be proved within that system and, if so, whether it can be proved true or false (or both)...that is, for proofs up to some ridiculous but finite length.  I think you will have to invent your own proof language for this; the existing ones are all designed around complexity limitations that don't apply to you.  Make sure your language isn't Turing complete, to limit the risk of moral catastrophe.  Once you have that, you can just generate all possible proofs and then check whether the one you want is present or not.
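As a toy illustration of "generate all possible proofs and check whether the one you want is present", here is a sketch that uses Hofstadter's MIU string-rewriting system as a stand-in for a formal system: derivations play the role of proofs, and the derivation-length bound plays the role of the "ridiculous but finite" proof length (kept tiny here so it runs on an ordinary machine).

```python
# A toy stand-in for "a formal system": Hofstadter's MIU system.
# Theorems are strings derivable from the axiom "MI" by the four rules below;
# a "proof of length L" is a derivation using L rule applications.
AXIOM = "MI"

def successors(s):
    """All strings reachable from s in one rule application."""
    out = set()
    if s.endswith("I"):                      # rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):                    # rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):              # rule 3: III -> U
        if s[i:i+3] == "III":
            out.add(s[:i] + "U" + s[i+3:])
    for i in range(len(s) - 1):              # rule 4: UU -> (nothing)
        if s[i:i+2] == "UU":
            out.add(s[:i] + s[i+2:])
    return out

def provable_within(target, max_len):
    """Breadth-first enumeration of every derivation of length <= max_len.

    Returns True iff some derivation of the target exists within the bound.
    On the story's oracle, max_len could be astronomically large; here it
    has to stay small enough for an ordinary machine."""
    frontier, seen = {AXIOM}, {AXIOM}
    for _ in range(max_len):
        if target in frontier:
            return True
        frontier = {t for s in frontier for t in successors(s)} - seen
        seen |= frontier
    return target in seen

if __name__ == "__main__":
    print(provable_within("MUI", 5))   # True: MI -> MII -> MIIII -> MUI
    print(provable_within("MU", 8))    # False within this bound (MU is famously not a theorem)
```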

Simulation

Up until now we've been limited by our ability to specify the solution we want.  We can write test cases and generate a program which fulfills them, but it won't do anything we didn't explicitly ask for.  We can find the ideal classifier for a set of images, but we first have to find those images out in the real world somewhere, and the power of our classifier is bounded by the number of images we can find.

If we can specify precise rules for a simulation, and a goal within that simulation, most of that constraint disappears.  For example, to find the strongest Go-playing program, we can instantiate all possible Go-playing programs and have them compete until there's an unambiguous winner; we don't need any game records from human players.  The same trick works for everything simulatable: Starcraft, Magic: the Gathering, piloting fighter jets, you name it.  If you don't want to use the oracle to directly generate a strong AI, you can instead develop accurate-enough simulations of the real world, and then use the oracle to develop effective agents within those simulations.
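A sketch of the tournament idea, with single-pile Nim standing in for Go or Starcraft so that the whole strategy space actually fits in memory here; a plain round-robin with a most-wins winner stands in for whatever elimination scheme you would really use, but the structure (enumerate every strategy, play them all against each other, keep the best) is the same.

```python
import itertools

# Stand-in for "a precise simulation of the game": single-pile Nim.
# A position is the pile size; a move removes 1-3 stones; whoever takes
# the last stone wins.  Go or Starcraft would play the same role on the
# oracle, just with an astronomically larger strategy space.
START_PILE = 7
MAX_TAKE = 3

# A "program" here is just a lookup table: for every pile size 1..START_PILE,
# which (legal) number of stones to take.  Enumerating these tables is the
# toy analogue of enumerating all Go-playing programs.
def all_strategies():
    choices = [range(1, min(MAX_TAKE, pile) + 1) for pile in range(1, START_PILE + 1)]
    for table in itertools.product(*choices):
        yield table  # table[pile - 1] = stones to take at that pile size

def play(first, second):
    """Simulate one game; return 0 if `first` wins, 1 if `second` wins."""
    pile, players, turn = START_PILE, (first, second), 0
    while True:
        pile -= players[turn][pile - 1]
        if pile == 0:
            return turn
        turn = 1 - turn

def tournament():
    strategies = list(all_strategies())
    wins = [0] * len(strategies)
    # Round-robin: every ordered pair plays once, so each pair meets twice
    # with the roles of first and second player swapped.
    for i, j in itertools.permutations(range(len(strategies)), 2):
        if play(strategies[i], strategies[j]) == 0:
            wins[i] += 1
        else:
            wins[j] += 1
    best = max(range(len(strategies)), key=lambda k: wins[k])
    return strategies[best], wins[best]

if __name__ == "__main__":
    table, score = tournament()
    print("best strategy:", table, "wins:", score)
```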

Endgame

Ultimately the idea would be to develop a computer model of the laws of physics that's as correct and complete as our computer model of the rules of Go, so that you can finally develop nanofactories, anti-aging drugs, and things like that.  I don't see how to do it, but it's the only prize worth playing for.  At this point it becomes very important to be able to prove the Friendliness of every candidate program; use the math oracle you built earlier to develop a framework for that before moving forward.

With a little more work, it should be possible to -- almost -- solve all of mathematics: to create an oracle which, given a formal system, can tell you whether any given statement can be proved within that system

...with a proof of length L or less.


Aside from the issue of the given statement, what can't be proved?

  • Problems whose answers are independent of the framework you are using (The continuum hypothesis). [1]
  • Undecidable problems. [2]

Things which probably can't be proved:

  • Things which are false, like 2+2=3. (For a given framework which meets certai
... (read more)
Donald Hobson (3y)
These are pretty much the same thing. The continuum hypothesis is a case where you have a single formal system in mind (ZFC) and have proved that the continuum hypothesis is independent of the axioms. In the case of the halting problem, you just have a couple of extra quantifiers. For all formal systems that don't prove a contradiction, there exists a Turing machine, such that whether the Turing machine halts or not can't be proved from the axioms of the formal system. (Technically, the formal system needs to be R.E., which means that there is a computer program that can tell if an arbitrary string is an axiom.)

I don’t think there’s any reason to create the agents for those games, since you can just brute force solve for the optimal play.

Taran (3y)
If you're using the oracle to generate moves directly then you don't need an agent, yeah.  But that won't always work: you can generate the complete Starcraft state space and find the optimal reply for each state, but you can't run that program in our universe (it's too big) and you can't use the oracle to generate SC moves in real time (it's too slow).
Alexei (3y)
Good point!

Thanks for the fascinating response!

If you don't mind, I might try playing around with some of the ideas you mentioned in future write-ups here; there's a lot of interesting theoretical questions that could be explored along those lines.

Taran (3y)
Sure, go ahead.

waveman

Nov 11, 2020


I suppose I would test out the claim by getting it to mine a few hundred million dollars' worth of bitcoin for me.

Then crack all the interesting crypto data that is floating around. 

Then brute force search for proofs/solutions of all the big problems, such as the Riemann Hypothesis, factoring, etc. Try all texts shorter than, say, 100 terabytes and see if any of them solve the problem.
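For the factoring case the "try everything" loop is about as simple as it gets; a toy version (trial division on a small semiprime) looks like the sketch below, with the understanding that only the oracle's effectively free compute would make the same approach viable at cryptographic sizes.

```python
def brute_force_factor(n):
    """Trial division: the 'try everything' approach that only pays off
    when compute is effectively free.  Returns a nontrivial factor of n,
    or None if n is prime."""
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return d
    return None

if __name__ == "__main__":
    # A toy semiprime; on the oracle the same loop could grind through
    # RSA-sized moduli, since any finite number of steps fits in its minute.
    print(brute_force_factor(4757))   # 67 (4757 = 67 * 71)
```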

Work out the implications of all the human genes by calculating out how the human body works. This would include things like solving protein folding.

Find the best/shortest algorithm for all the AI/ML challenges that delivers near perfect results. 

Then simulate the body with various chemical compounds added to cure cancer, infections etc. Design vaccine + cure for coronavirusV2 and also machines to make them.

I guess you could create a model of the brain and then run all sorts of experiments on the simulation to work out how it works. 

Simulate atoms and molecules and brute force ways to build things that would build arbitrary nanotech engines.

At what point would you get worried about AI safety?

I really like your first three ideas, and would definitely consider doing that if I was in this position (although now that I'm thinking about it, I wouldn't want to accidentally alert any powerful actors against me so early on in my journey, for fear of getting the laptop confiscated/stolen, so I'd be very careful before doing anything that could potentially be traced back to me online). :)

As for "calculating out how the human body works," I'm not sure it would be that simple to pull off, at least not at first. Taking your statement literally would mean h... (read more)

Kevin Lacker

Nov 12, 2020


You could probably make drastic improvements in AI, because you could do extremely expensive things like modeling any function by minimizing its Kolmogorov complexity. I bet you could develop superhuman AI within a day, if given access to a computer with 2^2^100 clock speed.

You're a lot braver than me!

I'd be absolutely terrified of trying to create anything anywhere near superhuman AI (as in AGI, of course; I'd be fine with trying to exceed humans on things like efficient protein folding and that sort of stuff), due to the massive existential risk from AGI that LessWrong loves talking about in every other post.

Personally, I would wait to get the world's leading AI ethics experts' unanimous approval before trying anything like that, and that only after at least a few months of thorough discussion. An exception to that might b... (read more)

Kevin Lacker (3y)
The first algorithm I would use is this, to solve problems of mimicking a function with provided inputs and outputs: For all possible programs of length less than X, run that program on the inputs for time Y. Then measure how close it comes to the outputs. The closest program is then your model. This takes time O(Y*2^X), so it's impractical in the world we live in, but in this hypothetical world it would work pretty well. This only solves the "classification" or "modeling" type of machine learning problems, rather than reinforcement learning per se, but that seems pretty good to start.

For reinforcement learning, it just depends what you'd want to do in general. I would not just build a general AI and give it access to the internet, any more than I would bring an army of teenagers over to my house and give them access to my car and wallet. If you really had a super-powerful AI, then I think the best way of increasing its practical capabilities over time while controlling it would be like any other technology - start a tech company, raise money, think of a business model, and just see what happens. That strategy seems much more likely to let you retain control over the technology and continue to express your own moral judgment over time. Compare to, for example, the scientists developing nuclear weapons, who quickly lost control to politicians. Maybe you could build a new search engine - that seems like it could be a lot better with real AI behind it.
9 comments

If you could automatically pipe output back into the "computation speed" parameter, without being limited by passing around increasingly large strings, then you could get truly infinite compute in finite time.

How would that work exactly? Let's say you get the output to give you the largest possible number given the number of computations currently allowed, which, as long as the "computation speed" parameter is finite, will be finite as well, albeit incredibly large. Every step taken will only increase the computation speed by a finite amount, so how would you reach infinity in a finite time?

Let's say we have api access to the magic box - we can call magic(code, number) to run it. Furthermore, we can do this from within code being executed by the magic box itself.

Then, we could run a function like f(n): magic("f(2*n)", n). So, the function has the box run a copy of itself with n doubled. With each recursive step, the speed doubles, so the computation time (roughly) halves - computation time will actually go down a bit slower than this, since n will have more bits at each step, but that's a linear effect so the whole thing will still work out to an at-least-exponential speedup. The total runtime is then a convergent geometric series (something like 1 + 1/2 + 1/4 + ...), so it will come out to a finite number. Thus, infinite computation in finite time. (This particular infinite computation doesn't actually do anything besides recurse infinitely, but it can be modified.)
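A quick numerical check of that claim, using a stand-in cost model (a constant amount of work plus a term growing linearly in k for the extra bit per doubling); the partial sums of the outside-universe runtime visibly converge to a finite limit.

```python
# Numerical check: if step k of the recursion runs at speed n * 2**k
# (outside-universe computations per second) and costs roughly a constant
# amount of work plus a term growing linearly in k (the argument gains a
# bit per doubling), the outside-universe durations shrink geometrically
# and their sum converges.
def total_outside_time(n, work_per_step=1.0, steps=60):
    """Partial sum of the series sum_k (work_per_step + k) / (n * 2**k)."""
    return sum((work_per_step + k) / (n * 2 ** k) for k in range(steps))

for steps in (10, 20, 40, 60):
    # The partial sums approach a finite limit (4.0 for these toy constants).
    print(steps, total_outside_time(n=1, steps=steps))
```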

Your assumption that you could fully simulate the magic box inside itself is a pretty massive one, and I honestly wouldn't expect it to be true in this hypothetical universe. After all, after receiving the input parameters, the machine simulates a Universal Turing Machine of arbitrary finite size, which by definition cannot perform any non-Turing-computable functions, and certainly couldn't simulate a machine more complex than itself. In order for your magic "Zeno's paradox-destroyer" function to work given the parameters outlined in the story, it would need to be able to call the machine running it from the inside, and God (in His infinite wisdom 🙃) hasn't given us any "escape" api functions for us to do that.

(note that I really do appreciate your thoughts here, and I'm not trying to dunk on them, just trying to get a better understanding of unbounded finite computational systems and want to plug any potential holes in the story before expanding on this hypothetical universe in the future)

Yeah, that makes sense.

How exactly does the "speed" box take input? Because if its input is a plain decimal number, then we really only have access to exponential compute, since we'll need to type out a string of n digits to get 2^O(n) compute. If it takes more general programs (which output a number), then we can use the machine itself to search for short formulas which output large numbers, which might buy us a lot more compute. On the other hand, if the speed box takes general programs, then it needs to somehow recognize programs which don't halt, which we could also exploit for hypercomputation.

Maybe the input box uses its own notation, something weak enough that infinite loops are impossible but powerful enough to express Conway's arrows?  That seems like it would be enough to be interesting without accidentally adding a halting oracle.

I'll go with Taran's idea there, I think. Something like Douglas Hofstadter's BlooP language, perhaps, which only allows primitive recursive functions. Would BlooP allow for chained arrow notation, or would it be too restrictive for that? More generally, what is the fastest growing primitive recursive function possible, and what limits would that give us in terms of the scope of problems that can be solved by our magic box?
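For a sense of what BlooP-style code can still do, here is a sketch restricted to bounded for-loops only (no while-loops, no unbounded recursion), so everything in it is primitive recursive and guaranteed to halt; even these bounded loops already express absurdly fast-growing functions like iterated exponentiation.

```python
# A BlooP-flavoured sketch: only bounded loops (a `for` over a range fixed
# before the loop starts), no `while` and no unbounded recursion, so every
# function here is primitive recursive and guaranteed to halt.
def power(a, b):
    result = 1
    for _ in range(b):        # loop bound known before entering the loop
        result *= a
    return result

def tower(a, height):
    """a^(a^(...^a)) with `height` levels: iterated exponentiation, still
    primitive recursive, and already unimaginably fast-growing in `height`."""
    result = 1
    for _ in range(height):   # again a bound fixed in advance
        result = power(a, result)
    return result

print(tower(2, 4))   # 2^(2^(2^2)) = 65536
```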

Would BlooP allow for chained arrow notation, or would it be too restrictive for that?

Sadly, you need more than that: chained arrows can express the Ackermann function, which isn't primitive recursive.  But I guess you don't really need them.  Even if you just have Knuth's up-arrows, and no way to abstract over the number of up-arrows, you can just generate a file containing two threes with a few million up-arrows sandwiched between them, and have enough compute for most purposes (and if your program still times out, append the file to itself and retry).
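For reference, a minimal recursive definition of Knuth's up-arrows (only runnable for tiny arguments, and whatever concrete syntax the speed box actually accepts is an open question in the story); it pins down what a "3 ↑↑…↑ 3" token in that file would mean.

```python
def up(a, n, b):
    """Knuth's up-arrow: a ↑^n b, i.e. a followed by n up-arrows followed by b.
    Only tiny arguments are feasible on a real machine; the point is just to
    pin down what the notation means."""
    if n == 1:
        return a ** b                      # one arrow is ordinary exponentiation
    if b == 0:
        return 1                           # base case for the recursion
    return up(a, n - 1, up(a, n, b - 1))   # each extra arrow iterates the previous level

print(up(2, 2, 3))   # 2↑↑3  = 2^(2^2) = 16
print(up(3, 2, 3))   # 3↑↑3  = 3^27   = 7625597484987
print(up(2, 3, 3))   # 2↑↑↑3 = 2↑↑4   = 65536
```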

that's a really good point, and I'm mentally kicking myself right now for not having thought of that. In answer to another comment below yours, I suggested only allowing primitive recursive functions to be entered as input in the second box, which I think would solve the problem (if possibly creating another computational limit at the growth rate of the fastest growing primitive recursive function possible [which I haven't studied at all tbh, so if you happen to have any further reading on that I'd be quite interested]). Going a bit further with this, while I suggested God limiting us to one bounded language like BlooP to use on the second field, if He instead allowed for any language to be used, but only primitive recursive functions to be run, would we be able to exploit that feature for hypercomputation?

Also, while we're discussing it, what would be in the space of problems that could be uniquely solved by 2^O(n) compute, but not by "normal" O(n) compute?