**EDIT**: Added a section on Euler's conjecture.

# The Goldbach conjecture is likely

The Goldbach conjecture is that "every even integer above two is the sum of two primes". For example, $4 = 2+2$, $6 = 3+3$, $8 = 3+5$, and so on.
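These small cases are easy to check by brute force. A minimal sketch (the helper names are mine, not from the conjecture's literature):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_pairs(n: int) -> list[tuple[int, int]]:
    """All ways to write n as p + q with p <= q both prime."""
    return [(p, n - p) for p in range(2, n // 2 + 1)
            if is_prime(p) and is_prime(n - p)]

# Every even number from 4 up to 1000 has at least one decomposition.
assert all(goldbach_pairs(n) for n in range(4, 1001, 2))

print(goldbach_pairs(4))   # [(2, 2)]
print(goldbach_pairs(10))  # [(3, 7), (5, 5)]
```

Note that the number of decompositions tends to grow with $n$, which is the intuition the probabilistic argument below makes quantitative.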

Though this is a mathematically precise statement, we can talk about the "probability" of it being correct. How so?

Well, by the prime number theorem, the probability of a random number less than $N$ being prime is roughly $1/\log(N)$. So there are about $N/\log(N)$ primes less than $N$; if we sum up all pairs of these primes, we get about $N^2/\log^2(N)$ different sums; these sums will be less than $2N$.

So, is $N$ itself one of these sums? Well, the "probability" that it's not the total of any given sum is $1 - \frac{1}{2N}$; therefore the probability of it being the total of none of the sums is:

$$\left(1 - \frac{1}{2N}\right)^{N^2/\log^2(N)} \approx e^{-N/(2\log^2(N))}.$$

So the probability of $N$ being the total of such a sum is roughly:

$$1 - e^{-N/(2\log^2(N))}.$$

Therefore, the probability of all numbers being the total of such a sum is roughly:

$$\prod_N \left(1 - e^{-N/(2\log^2(N))}\right).$$

Now, the infinite product $\prod_N \left(1 - e^{-N/(2\log^2(N))}\right)$ converges to a non-zero number if and only if the sum $\sum_N e^{-N/(2\log^2(N))}$ converges to a finite number. That series can be seen to be convergent (for example, by noting that $e^{-N/(2\log^2(N))} < 1/N^2$ for large enough $N$ and using the comparison test).
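The comparison can be checked numerically. Taking the per-number failure chance to be $e^{-N/(2\log^2(N))}$ (the form the heuristic produces; the constant in the exponent is my reconstruction), the inequality against $1/N^2$ kicks in once $N > 4\log^3(N)$, i.e. from roughly $N = 2000$ onwards:

```python
import math

def failure_term(N: int) -> float:
    """Heuristic chance that N is NOT a sum of two primes:
    exp(-N / (2 * log(N)^2))."""
    return math.exp(-N / (2 * math.log(N) ** 2))

# Comparison test: exp(-N / (2 log^2 N)) < 1/N^2 once N > 4 log^3 N,
# which holds from roughly N = 2000 onwards.
for N in range(2000, 100_000, 1000):
    assert failure_term(N) < 1 / N**2

# Hence sum_N failure_term(N) converges by comparison with sum 1/N^2,
# and the infinite product of (1 - failure_term(N)) is non-zero.
print("comparison test verified for N in [2000, 100000)")
```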

If we use computers to estimate this product starting from the smallest even numbers, we get a pretty low probability. However, most of that improbability mass is on the low numbers, and the Goldbach conjecture has been tested up to roughly $4\cdot 10^{18}$. So, if we assume it's valid up to $4\cdot 10^{18}$ and take the product over the remaining $N$ only, we numerically get a probability extremely close to $1$.
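To get a feel for how close to $1$ this is, it's enough to look at the very first term of the tail, since the later ones shrink even faster. A sketch (the exponent's constant is my reconstruction of the heuristic, and the $4\cdot 10^{18}$ search bound is the published verification limit):

```python
import math

# Goldbach has been checked exhaustively up to about 4e18.
N0 = 4e18

# Heuristic failure chance for the first unchecked number:
# exp(-N0 / (2 log^2 N0)); report its base-10 logarithm to avoid underflow.
log10_first_term = -N0 / (2 * math.log(N0) ** 2) / math.log(10)
print(f"log10(failure chance at 4e18) ~ {log10_first_term:.2e}")

# The tail sum over all N > N0 has the same astronomical order, so the
# conditional probability of Goldbach holding everywhere is
# 1 - 10^(roughly -5e14): indistinguishable from certainty.
assert log10_first_term < -1e14
```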

So the Goldbach conjecture is pretty likely, and, the more examples we discover where it holds, the more likely it is to hold all the way to infinity.

## "Probabilities" of logical facts

The above reasoning seems dubious. The primes are not defined by random sampling among the natural numbers; quite to the contrary, they come from a mathematical rule of extreme precision. So what do these probabilities mean?

Let $S$ be an infinite set of numbers, selected from the natural numbers in a way that looks like the prime number theorem (e.g. the $n$-th element of $S$ is approximately $n\log(n)$). Then what we've shown is that, if such an $S$ obeys the "$S$-Goldbach conjecture" up to $4\cdot 10^{18}$, then we'd expect it to go all the way to infinity.

Thus the Goldbach conjecture can be restated as "in terms of sums of two elements, the prime numbers behave like a typical sequence selected in a prime-number-theorem way".
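This restatement can be explored empirically: sample a random set with prime-like density and test its own "Goldbach property". A sketch under my own choices of names, seed, and thresholds (the check starts at 1000 to stay clear of the noisy small numbers discussed below):

```python
import math
import random

def random_prime_like_set(limit: int, seed: int = 0) -> set[int]:
    """Each n >= 3 joins S independently with probability 1/log(n),
    mimicking the density given by the prime number theorem."""
    rng = random.Random(seed)
    return {n for n in range(3, limit) if rng.random() < 1 / math.log(n)}

def s_goldbach_holds(S: set[int], start: int, limit: int) -> bool:
    """Is every number in [start, limit) a sum of two elements of S?"""
    sums = {a + b for a in S for b in S}
    return all(n in sums for n in range(start, limit))

S = random_prime_like_set(10_000, seed=1)
# A typical such S has no parity obstruction (it contains both odd and even
# numbers), so all large enough numbers should be sums of two elements.
print(s_goldbach_holds(S, 1000, 10_000))
```

Unlike the primes, a typical $S$ covers odd and even targets alike; the parity obstruction of the next section is special to the primes.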

So the Goldbach conjecture is not saying that there is something special about the primes; in fact, it's saying the opposite, that the primes are typical of similar sequences, that nothing in the specific ways that the primes are selected has an impact on the sum of two primes. So the Goldbach conjecture is essentially saying "there is no obstruction to the primes being typical in this way".

## One obstruction

Did you notice that, so far, at no point did I require $N$ to be an even number? But all the primes except for $2$ are odd. So the distribution of sums of two primes is very (very!) heavily skewed towards even numbers; most odd numbers will not appear at all. So, that is one clear obstruction to the possible values of the sum, coming from the way the primes are constructed. The Goldbach conjecture is therefore saying that there are no additional obstructions beyond this one condition on parity.

In fact, the Goldbach conjecture has changed over time; $1$ used to be seen as a prime number, and the original conjecture included $2 = 1+1$ as another example. Then $1$ was removed from the list of prime numbers, and it turned out, as far as we can tell, that $2$ was the only even number we lost from the list of sums.

If we removed $2$ from the list of primes, we'd only lose $4 = 2+2$ as a sum. Similarly, if we strike out the first $m$ primes, we expect - on probabilistic grounds - that "all even numbers greater than a given $n$ are the sums of two primes (first $m$ primes not included)". If that were to fail, then there's a really interesting obstruction out there.

# Fermat's last theorem was likely (for $n > 3$)

We can show, similarly, that Fermat's last theorem was very likely on probabilistic grounds. The theorem states that, for $n \ge 3$, there do not exist natural numbers $x$, $y$, and $z$ such that $x^n + y^n = z^n$.

Fix $z$ and $n$. Counting $x$ and $y$, there are $z^2$ pairs of natural numbers less than or equal to $z$. Therefore there are $z^2$ possible sums $x^n + y^n$, all less than $2z^n$. So the probability that any of these pairs of $n$-th powers sums to $z^n$ is roughly $z^2 \cdot \frac{1}{2z^n} = \frac{z^{2-n}}{2}$.

So the probability that there are no $x$, $y$, $z$ such that $x^n + y^n = z^n$, is:

$$\prod_z \left(1 - \frac{z^{2-n}}{2}\right).$$

The sum $\sum_z z^{2-n}$ converges for $n \ge 4$, so this product is non-zero. Moreover, we can also sum over $n$: $\sum_{n \ge 4} \sum_{z \ge 2} z^{2-n} = \sum_{z \ge 2} \frac{1}{z(z-1)}$. This also converges. So the probability of Fermat's last theorem was non-zero, at least for $n \ge 4$; add on the fact that the theorem was proved for many $n$ and checked for many $n$ and $z$, and, even before it was proved, it was very probable it was correct.
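These convergence claims are easy to probe numerically. A sketch (function name mine; the sums here are partial, so they only illustrate, not prove, the limiting behaviour):

```python
def heuristic_failure_mass(n: int, zmax: int) -> float:
    """Partial sum of z^(2-n) for z = 2..zmax: the total heuristic
    'probability mass' for a counterexample x^n + y^n = z^n."""
    return sum(z ** (2 - n) for z in range(2, zmax + 1))

# n = 3 gives the harmonic series (diverges): the argument is silent there.
# n >= 4 is dominated by sum 1/z^2 and converges.
print(heuristic_failure_mass(3, 10_000))  # grows like log(zmax)
print(heuristic_failure_mass(4, 10_000))  # approaches pi^2/6 - 1 ~ 0.645

# Summing over all n >= 4 too: sum_{n>=4} z^(2-n) = 1/(z(z-1)),
# and sum_{z=2..zmax} 1/(z(z-1)) telescopes to 1 - 1/zmax.
total = sum(1 / (z * (z - 1)) for z in range(2, 10_001))
print(round(total, 4))
```

The telescoping identity $\frac{1}{z(z-1)} = \frac{1}{z-1} - \frac{1}{z}$ is why the double sum converges to exactly $1$.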

So Andrew Wiles's genius was in showing there were no unexpected obstructions for the "likely" outcome to be true. That's why the proof is so hard: he was trying to prove something very "likely", and show an absence of structure, rather than a presence, without knowing what that structure could be.

# Euler's conjecture was unlikely

Euler's conjecture was that you needed to sum at least $n$ $n$-th powers to get another $n$-th power; Fermat's last theorem establishes this for $n = 3$, and Euler theorised that this extended to all $n$.

Euler's conjecture is in fact false; for $n = 4$ we have three fourth powers that sum to another fourth power:

$$95800^4 + 217519^4 + 414560^4 = 422481^4.$$

There are counterexamples known for $n = 5$ as well, so the conjecture is false, and not just for one value of $n$.
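Both counterexamples are small enough to verify directly with exact integer arithmetic:

```python
# Smallest known counterexample for fourth powers (Roger Frye, 1988):
assert 95800**4 + 217519**4 + 414560**4 == 422481**4

# Lander and Parkin's 1966 counterexample for fifth powers:
assert 27**5 + 84**5 + 110**5 + 133**5 == 144**5

print("Euler's conjecture fails for n = 4 and n = 5")
```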

More interesting from our perspective, we expect it to be false on probabilistic grounds. Recall that the argument about Fermat's last theorem does not work for $n = 3$; it fails because the crucial sum is of the type $\sum_z \frac{1}{z}$, which diverges.

Similarly, if we estimate the probability of Euler's conjecture, we get terms like the following (for some constants $c_n > 0$):

$$\prod_z \left(1 - \frac{c_n}{z}\right).$$

This goes to zero for the same reason as the $n = 3$ case.
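The decay of such a product is easy to see numerically. Taking a stand-in value $c_n = 0.5$ (an arbitrary choice of mine, just for illustration), the partial products keep shrinking without settling at a positive limit:

```python
def partial_product(c: float, zmax: int) -> float:
    """prod_{z=2}^{zmax} (1 - c/z): the heuristic probability that no
    counterexample appears with z below zmax."""
    p = 1.0
    for z in range(2, zmax + 1):
        p *= 1 - c / z
    return p

# Because sum c/z diverges (harmonic series), the product tends to 0;
# each tenfold increase in zmax shrinks it by roughly a fixed factor.
for zmax in (10, 100, 1000, 10_000):
    print(zmax, partial_product(0.5, zmax))
```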

So, on probabilistic grounds, we expect Fermat's last theorem to be true for $n > 3$, and we expect Euler's conjecture to be false.

The only unexpected result here is that Fermat's last theorem and Euler's conjecture are **true** for $n = 3$. So something about the structure of the problem for $n = 3$ is moving the result away from the probabilistic outcome.

# The "Stuart conjecture"

Based on what I wrote about Euler's conjecture, I'll hazard a conjecture myself, which I believe to be true on probabilistic grounds: if there are $m$ integers whose $n$-th powers sum non-trivially to another $n$-th power, then $m$ is greater than or equal to $n - 1$.

Fermat's last theorem shows this is true for $m = 2$ and $n > 3$.
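The known solutions sit exactly at the conjectured boundary $m = n - 1$, which is easy to verify with exact integer arithmetic (the list of examples is my own selection):

```python
# Known solutions: (n, the m bases whose n-th powers sum to total^n, total)
known = [
    (3, [3, 4, 5], 6),                      # 3^3 + 4^3 + 5^3 = 6^3
    (4, [95800, 217519, 414560], 422481),   # Frye's counterexample to Euler
    (5, [27, 84, 110, 133], 144),           # Lander-Parkin counterexample
]

for n, bases, total in known:
    assert sum(b ** n for b in bases) == total ** n
    m = len(bases)
    assert m >= n - 1   # consistent with the conjecture m >= n - 1
    print(f"n={n}: m={m} >= n-1={n - 1}")
```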

This is reminiscent of Freeman Dyson's 2005 answer to the Edge question "what do you believe is true even though you cannot prove it?": Dyson answered that a power of two is never the reverse of a power of five, that this is almost certainly true on probabilistic grounds because the digits of large powers of two behave randomly, and that, precisely because of this randomness, the statement can likely never be proved.

How does the randomness of the digits imply that the statement cannot be proven? Superficially the quote seems to use two different notions of randomness, namely "we cannot detect any patterns" (i.e. a pure random generator is the best predictor we have) and "we have shown that there can be no patterns" (i.e. we have shown no other predictor can do any better). Is this a known result from Ergodic Theory?

I was about to type out a rebuttal of this, but halfway through I realized I actually agree with you. The "some non-random property" of the digits of the powers of two is that they are all digits found in order inside of powers of two. I would even go so far as to say that if the statement really can't be proven (even taking into account the fact that the digits aren't truly random) then there's a sense in which it isn't true. (And if it can't be proven false, then I'd also say it isn't false.)

I don't think it's a fair deduction to conclude that Goldbach's conjecture is "probably true" based on an estimate of the measure (or probability) of the set of possible counterexamples being small. The conjecture is either true or false, but more to the point I think you are using the words probability and probable in two different ways (the measure-theoretic sense, and the sense of uncertainty about the truth value of a statement), which obfuscates (at least to me) what exactly the conclusion of your argument is.

There is of course a case to be made about whether it matters if Goldbach's conjecture should be considered true when the first counterexample is larger than any number that could possibly and reasonably manifest in physical reality. Maybe this was what you were getting at, and I don't really have a strong or well-thought-out opinion either way on this.

Lastly, I wonder whether there are examples of heuristic calculations which make the wrong prediction about the truth value of the conjecture to which they pertain. I'm spitballing here, but it would be interesting to see what the heuristics for Fermat's theorem say about Euler's sum of powers conjecture (of which Fermat's last theorem is the K = 2 case), since we know that the conjecture is false for K = 4. More specifically, how can we tell a good heuristic from a bad one? I'm not sure, and I also don't mean to imply that heuristics are useless; rather that maybe they are useful because they (i) give one an idea of whether to try to prove something or look for a counterexample, and (ii) give a rough idea of why something should be true or false, and what direction a proof should go in (e.g. for Goldbach's conjecture, it seems like one needs precise statements about how the primes behave like random numbers).

Note that the probabilistic argument fails for n=3 for Fermat's last theorem; call this (3,2) (power=3, number of summands is 2).

So we know (3,2) is impossible; Euler's conjecture is the equivalent of saying that (n+1,n) is also impossible for all n. However, the probabilistic argument fails for (n+1,n) the same way as it fails for (3,2). So we'd expect Euler's conjecture to fail, on probabilistic grounds.

In fact, the surprising thing on probabilistic grounds is that Fermat's last theorem is true for n=3.

That would be sufficient, but there are more easily met conditions like:

1) If you have a bunch of sequences of observations (100 or 1000 years' worth), and none of them include the counterexample as an observation with high probability.

2) The frequency is low enough that it isn't worth accounting for.

For example, if you flip a coin it comes up heads or tails - is it worth bringing up another possibility?

This is a poor description of Wiles's proof; in fact, I would call it diametrically wrong. Wiles proved the *presence* of a very rigid structure - not the absence - and the presence of this structure implied FLT via the work of other mathematicians.

I don't have a great understanding of the broader point you are making, so I don't know how big an issue this mistake presents. However, be aware that the paradigm you've extracted from the ideas in this post has led to at least one incorrect prediction.

I'll try to explain how Wiles's proof diverges from your model of it by way of analogy. Suppose that we instead wanted to prove Fermat's first theorem:

**Fermat's first theorem:** For every *even* integer $n \ge 4$ there are no nontrivial integer solutions to the equation $x^n + y^n = -z^n$.

Further suppose that in our world, mathematicians know about the notion of positivity and absolute values, but the proof of the following fact has long evaded them.

**Positivity conjecture:** For every integer $n$, we have $n^2 \ge 0$.

The positivity conjecture is a very important structural fact about the integers. And it immediately implies Fermat's first theorem (since the left-hand side must be positive, but the right-hand side must be negative, unless $x$, $y$, $z$ are all $0$). So Fermat's first theorem follows from an important structural fact.

However, in our supposed world, mathematicians don't have access to the positivity conjecture. They might perform the exact same analysis in your post (it goes through verbatim!), and conclude that if you check Fermat's first theorem for enough $n$, then it is probably true. But it is *not* true that the proof of FFT via the positivity conjecture is "proving an absence of structure" - quite the opposite!

The analogue of the positivity conjecture in the real world is the Modularity theorem. This is what Wiles actually proved, and it was already known that the Modularity theorem implies FLT. And as with the positivity conjecture, the Modularity theorem is a very powerful *structural* result. To give a slogan, it says that every elliptic curve over $\mathbb{Q}$ is "modular," meaning that it relates in an appropriate way to an object called a modular form.

If you say that "Wiles proved the Taniyama–Shimura conjecture" (for semistable elliptic curves), then I agree: he's proved a very important structural result in mathematics.

If you say he proved Fermat's last theorem, then I'd say he's proved an important-but-probable *lack of structure* in mathematics. So yeah, he proved the existence of structure in one area, and (hence) the absence of structure in another area.

And "to prove Fermat's last theorem, you have to go via proving the Taniyama–Shimura conjecture", is, to my mind, strong evidence for "proving lack of structure is hard".

I'm not sure what's going on with the small numbers, though. When I use the formula to try to get the probability of Goldbach's conjecture holding for all numbers > 2, I get around 10^-15.

More sophisticated formulae that take into account the even/odd pattern still get around 10^-6. Of course, this is sensitively dependent on exactly where you start. But the fact that this probabilistic reasoning says so strongly that a small even number shouldn't be a sum of two primes indicates a problem with the probabilistic model.

Yes, I got that result too. The problem is that the prime number theorem isn't a very good approximation for small numbers. So we'd need a slightly more sophisticated model that puts more primes among the low numbers.

I suspect that moving from "sampling with replacement" to "sampling without replacement" might be enough for low numbers, though.
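For what it's worth, the undercount is easy to quantify by comparing a sieve count $\pi(N)$ against the estimate $N/\log(N)$ (helper name mine):

```python
import math

def prime_count(limit: int) -> int:
    """pi(limit): number of primes <= limit, via a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve)

# N/log(N) noticeably undershoots the true prime count for small N,
# so the heuristic model has too few primes exactly where it matters.
for N in (100, 1000, 10_000):
    print(N, prime_count(N), round(N / math.log(N), 1))
```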

For a more precise probabilistic approach to Fermat's last theorem by Feynman, which takes into account the distribution of the $n$-th powers, see this amazing article: http://www.lbatalha.com/blog/feynman-on-fermats-last-theorem

Thanks! It's cool to see his approach.

Since $a+b = b+a$, shouldn't the total number of "different sums" be half of what you give? Fortunately, the rest of the argument works completely analogously.

You can see this as sampling $z(z+1)/2$ times sorta-independently, or as sampling $z^2$ times with less independence (i.e. most sums are sampled twice).

Either view works, and as you said, it doesn't change the outcome.