I have a whimsical challenge for you: come up with problems with numerical solutions that are hard to estimate.

This, like surprisingly many things, originates from a Richard Feynman story:

One day I was feeling my oats. It was lunch time in the technical area, and I don't know how I got the idea, but I announced, "I can work out in sixty seconds the answer to any problem that anybody can state in ten seconds, to 10 percent!"

People started giving me problems they thought were difficult, such as integrating a function like 1/(1 + x^4), which hardly changed over the range they gave me. The hardest one somebody gave me was the binomial coefficient of x^10 in (1 + x)^20; I got that just in time.

They were all giving me problems and I was feeling great, when Paul Olum walked by in the hall. Paul had worked with me for a while at Princeton before coming out to Los Alamos, and he was always cleverer than I was.

[...]

So Paul is walking past the lunch place and these guys are all excited. "Hey, Paul!" they call out. "Feynman's terrific! We give him a problem that can be stated in ten seconds, and in a minute he gets the answer to 10 percent. Why don't you give him one?" Without hardly stopping, he says, "The tangent of 10 to the 100th."

I was sunk: you have to divide by pi to 100 decimal places! It was hopeless.

(From Surely You're Joking, Mr. Feynman!, section "Lucky Numbers")


So what would you ask Richard Feynman to solve? Think of this as the reverse of Fermi Problems.


Number theory may be a rich source of possibilities here: many of its functions fluctuate wildly, require prime factorization, and depend on the exact value of the number rather than its order of magnitude. For example, I challenge you to compute the largest prime factor of 650238.
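A minimal Python sketch of the brute-force check, for anyone who wants to verify an answer; trial division is plenty for a six-digit number, though it scales terribly:

```python
def largest_prime_factor(n):
    """Largest prime factor of n, by trial division (fine for small n)."""
    largest = None
    d = 2
    while d * d <= n:
        while n % d == 0:   # strip out every copy of the factor d
            largest = d
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        largest = n
    return largest

print(largest_prime_factor(650238))  # -> 101, since 650238 = 2 * 3 * 29 * 37 * 101
```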


(My original example was: "For example, I challenge you to compute the greatest common divisor of 10643 and 15047 without a computer. This problem has the nice advantage of being trivial to make harder to compute - just throw in some extra primes." It has been pointed out that I forgot Euclid's algorithm and have managed to choose about the only number-theoretic question that does have an efficient solution.)
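For the record, the oversight: Euclid's algorithm finds the GCD with just a handful of remainder operations. A minimal sketch:

```python
def gcd(a, b):
    """Euclid's algorithm: replace (a, b) with (b, a mod b) until b is 0."""
    while b:
        a, b = b, a % b
    return a

print(gcd(10643, 15047))  # -> 367 (10643 = 29 * 367, 15047 = 41 * 367)
```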


Counter-challenge: come up with real-world estimation problems, not mathematical examples.

The rate of change in atmospheric pressure in Moscow exactly 1 year from now in mbar/hour.

Presumably deterministic chaos-based problems, like the famous three-body problem, would qualify, as would anything relying on the butterfly effect.

Edit: this was unintentionally redacted!! Is there an undo-the-redaction option?

Sally and John both play a videogame where their characters live on a circle. Starting next to Sally, John walks around the circle 10 to the 100th times (he is a known hacker). Sally hasn't moved and measures the position of John. Assuming that Sally is at the point (1,0) and John moves counter-clockwise, what is the slope of the trajectory of the projectile that John's character fires towards the center of the circle?

For a real one: what is the length of the coast of Britain? Perhaps this is ill-defined without giving a length-scale, but even with a scale (say, a meter) I'm not sure it's doable. Or similarly, what's the surface area of a convenient piece of broccoli?

[This comment is no longer endorsed by its author]

GCD is easy. Factoring is comparatively hard, though there are algorithms much better than brute force.
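To illustrate "better than brute force": Pollard's rho method finds a factor of a composite n in roughly n^(1/4) steps rather than the n^(1/2) of trial division. A rough Python sketch of the idea (not the state of the art, and it assumes n is composite):

```python
import math
import random

def pollard_rho(n):
    """Find some nontrivial factor of a composite n (Pollard's rho)."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        f = lambda x: (x * x + c) % n
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = f(x)          # tortoise: one step
            y = f(f(y))       # hare: two steps
            d = math.gcd(abs(x - y), n)
        if d != n:            # d == n means this choice of c failed; retry
            return d

print(pollard_rho(10403))  # 10403 = 101 * 103; prints one of the two factors
```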

I would have said that complexity theory is the place where we often have simple descriptions of hard- or impossible-to-compute values. The busy-beaver function, say.

Busy beaver is a little wordy to state in 10 seconds, but is nonetheless a good example.

(And my example in OP has been revised in light of this oversight.)

I believe that for the tangent one, Feynman would have been best off choosing 1 or -1, as those values of tan(x) have the largest x-distance on either side with tan(x) staying within 10% (I think). Even so, it looks like he would have had only around a 0.5% chance of being correct (based on some quick tinkering with Excel).

Mathematica says the exact value of y that maximizes the distance between arctan(1.1y) and arctan(0.9y) is 1/sqrt(0.99) which is approximately 1.00504 (its negative also works). Feynman should still choose 1, because this isn't a calculation to do in your head, and 1 is a good guess.

His chances of winning, however, are about 3.1%: the range of x-values that work is roughly 0.1, and the period of tangent is pi.
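A quick sanity check of both numbers in Python (just the arithmetic above, nothing clever):

```python
import math

# width of the window of x-values for which tan(x) is within 10% of a guess of 1
width = math.atan(1.1) - math.atan(0.9)
print(width)                 # ~0.1002
print(width / math.pi)       # ~0.0319: about a 3.1% chance per period of tan

# the guess that actually maximizes the window, per the Mathematica result
print(1 / math.sqrt(0.99))   # ~1.00504
```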

Excellent solution. Looking back on my tinkering with Excel, I made an error in how I determined whether or not Feynman would have been right.

Unsurprisingly, that method would have still given the wrong answer (a 3% chance still isn't very good), but it's an example of using your knowledge of the general behavior of a function to make an educated guess.

I did the GCD on a piece of paper, but slipped up in the arithmetic right at the start and got the wrong answer :(

I got the correct GCD, but then realized that by writing out the steps in Notepad I violated the no-computer injunction :(

You all win! I knew there was at least a 50% chance I'd made a trivial oversight...

The 4-millionth (4000000th) prime number. As soon as I typed it I wondered if the distribution of large prime numbers had been approximated. I found http://en.wikipedia.org/wiki/Prime_number_theorem

It seems more than plausible that Feynman would have been familiar with that theorem and would have been able to get the 4000000th prime number from it, within 10%. If he had, I would have been happy with my failure to stump him.

Without looking it up (or clicking that link): 54,000,000?

(Vaguely remembering that it's about n log n (maybe?), and attempting to approximate log in my head.)

(EDIT: I was almost within 20%... but even that approximation above isn't within 10%.)

n log n calculated correctly is 11% off, so it almost seems like a deliberate trick! I certainly did not know the answer when I set it and thought n log(n) was right, but then looked it up on Wikipedia and thought I was confusing it with Stirling's approximation for large factorials.

When I tried n log(n) in my head, I was off by a factor of 10. I realized after the fact that it's because I did it as log10(4e6) = 66 dB and then took ln(4e6) = 66 ln(10). But of course dB are 10 log10(), and I forgot to divide by 10 at the end.

So I'm guessing Feynman would have gotten within 12% using n log n, and then, knowing Feynman, he would have known that n log n underestimates the nth prime, and since he was aiming for 10% error he would have bumped his answer up by 10% and nailed it.
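For anyone who wants to check the arithmetic, a small Python sketch (the commented-out sympy call for the exact value is an extra assumption, namely that sympy is installed, and it is slow):

```python
import math

n = 4_000_000
print(n * math.log(n))                            # n*ln(n): ~6.08e7, ~11% low per the above
print(n * (math.log(n) + math.log(math.log(n))))  # n*(ln n + ln ln n): ~7.17e7, within 10%

# exact value, if sympy is available:
# from sympy import prime
# print(prime(n))
```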

Screw AI, let's just build Feynman when we get the technology. He was a hoot!

Screw AI, let's just build Feynman when we get the technology. He was a hoot!

Sounds good to me.

One quick problem: if you replace the air inside an aerogel with hydrogen or helium, does it float in the atmosphere? For a while, at least?

IIRC, density(aerogel) - density(air) < density(air) - density(helium), so I guess it would.

Maybe this is a trick question, but why wouldn't it float?

Because there could still be too much of the solid part for it to have a density less than air's?

Edit: I suspect it would float for a little bit before the lighter gas diffuses out.

Wikipedia says

The lowest-density aerogel is a silica nanofoam at 1 mg/cm3,[6] which is the evacuated version of the record-aerogel of 1.9 mg/cm3.[7] The density of air is 1.2 mg/cm3 (at 20 °C and 1 atm).[8] Only the recently manufactured metallic microlattices have a lower density at 0.9 mg/cm3.[9] By convention, the mass of air is excluded when the microlattice density is calculated. Allowing for the mass of the interstitial air, the true, unevacuated density of the microlattice is approximately 2.1 mg/cm3 (2.1 kg/m3).

So the evacuated version at 1 mg/cm3 should rise, since air is 1.2 mg/cm3.
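Filling the pores with helium instead of evacuating them, the margin gets thinner; a back-of-the-envelope sketch (assuming the pores make up essentially the whole volume, and my own figure for helium's density):

```python
# densities in mg/cm^3 at 20 C and 1 atm
rho_air      = 1.2
rho_helium   = 0.167   # from PM/RT with M = 4 g/mol
rho_skeleton = 1.0     # the evacuated silica nanofoam quoted above

rho_filled = rho_skeleton + rho_helium   # helium-filled aerogel
print(rho_filled, rho_filled < rho_air)  # 1.167 True: it should just barely float
```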

Googling, I see one or two YouTube videos of aerogel floating in air. SEAgel apparently is sometimes used this way:

SEAgel is made of agar, a carbohydrate material that comes from kelp and red algae, and contains from one and a half to fifty milligrams of material per cubic centimeter of solid (in other words, it has a density of 1.5-50 mg/cm3). SEAgel can be made lighter than air using hydrogen - causing it to float or hang in the air.

Edit: I suspect it would float, but only for a little bit before the lighter gas diffuses out.

[This comment is no longer endorsed by its author]

tan(10^100) ~= 0.40123196199081432083

I wrote my own program in Python to calculate this, got approximately the same answer, and then found a more exact answer here:

http://newsgroups.derkeiler.com/Archive/Rec/rec.puzzles/2006-04/msg00363.html

You need lots of decimal places of pi to calculate this :)
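A minimal reproduction, assuming the mpmath library is available; the point is simply to carry enough digits that reducing 10^100 mod pi is still accurate:

```python
from mpmath import mp, mpf, tan

mp.dps = 200                # ~100 digits get consumed reducing 10^100 mod pi
print(tan(mpf(10) ** 100))  # ~0.40123196199081432...
```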

"ten trillion factorial factorial"

This takes about 2 seconds to say. Anyone want to try for a shorter one?

"Busy beaver billion."

Currently, is the radioactivity increasing or decreasing? Of the whole Earth, the Solar System, the Galaxy?

What do you say?

As for the Earth, I'd say light radionuclides such as carbon-14 are more or less in equilibrium (as many are created by cosmic rays as many decay), and so are heavy short-lived radionuclides such as radon (as many are created by uranium and thorium decaying as many decay themselves), whereas heavy long-lived radionuclides (uranium and thorium) decay with nothing replenishing them. So the total radioactivity is decreasing (at least on timescales long compared with the 2x11-year solar cycle but short compared with the Earth's age).

When a uranium atom decays, a decay chain begins, in which more energy is released later in the chain than in the initial decay.

In other words, the radiated power grows for some time. For how long?

Short compared to the Earth's age.

I am not so sure. We know about natural nuclear reactors.

E.g., they could be deep inside the Earth as well. Heavy atoms do fall toward the center of the planet. You can't exclude ever-improving conditions for ongoing natural nuclear reactors deep down, and a still-rising number of them.

It is not guaranteed, but not excluded either. It might be that the planet's interior is getting more and more radioactive. Possible, if not probable.

Yes, but why would there be more such reactors today than there were one billion years ago?

Because U atoms are heavy and migrate under the influence of gravity toward the center, their relative abundance there is steadily rising.

I don't know if it is enough to have an effect. But if we discover some exoplanets hotter than expected, it would be indicative of a process like that.

I don't know; it just might be that the radioactivity of a planet rises for a longer time than usually postulated. Certainly, the Galaxy's atoms are more and more radioactive.

Combinatorics is even better: you can state a problem in ten seconds with a numerical answer that would take years to compute with supercomputers. For instance, find the Ramsey number R(5,5). Wikipedia says it's (an integer) between 43 and 49.

At first this may seem unfair, but it's parallel to tan(10^100) in that there's only computation left to do; the computation in this case being to check all possible colorings of complete graphs with some number of vertices.

Also, isn't GCD a rather poor example, being the one number-theoretic problem that does have an efficient algorithm to solve?


R(5, 5) is hard to solve exactly, but easy to estimate: 46 is correct to within 10%.

(Although perhaps only Feynman could produce such an estimate in ten seconds, without knowing the range beforehand.)

Hah! I was wondering if someone would notice that blunder of mine. Congratulations!

(Given sixty seconds, well... the easy upper bound on R(5,5) is 70, the easy upper bound on R(4,4) is 20, and R(4,4) is known to be 18, so I would have probably guessed around 60, which isn't in the 10% range no matter what the real value is. Possibly someone with a better intuition for these things would think of a more calibrated argument, though.)
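(For reference, that "easy upper bound" is the Erdős–Szekeres binomial bound R(k,k) ≤ C(2k-2, k-1); a one-liner to check both figures:)

```python
from math import comb

# Erdos-Szekeres bound: R(k, k) <= C(2k - 2, k - 1)
for k in (4, 5):
    print(k, comb(2 * k - 2, k - 1))  # 4 -> 20, 5 -> 70
```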