Brun's theorem is a relatively famous result in analytic number theory that says the sum of the reciprocals of the twin primes converges to a finite value. In other words, we have

$$\sum_{p \,:\, p,\, p+2 \text{ both prime}} \left( \frac{1}{p} + \frac{1}{p+2} \right) = B$$

for some finite constant $B$, known as Brun's constant. This is in contrast to the same sum taken over all primes, which is divergent:

$$\sum_p \frac{1}{p} = \infty$$
In this post, I'll use Brun's theorem as an illustration of sieve theoretic arguments in analytic number theory. I'll try to explain relevant results as I go along to minimize the background necessary to understand the arguments, but some background in real analysis and number theory is needed to understand the post. If you don't have such a background, most of the post will probably be gibberish.

I'm writing this post mostly because I think there's some lack of good explanations of sieve theory in general and the Brun sieve in particular. Hopefully this post will be helpful to a handful of people who are interested in or trying to understand such matters.

Note that in the post I'll not always mention explicitly that a sum or a product runs over the prime numbers. If the sum or product is indexed by the letter $p$, you should assume that it runs over the primes and not e.g. over the natural numbers. Sometimes $p$ will run only over odd primes, because there is a degenerate case with the prime $2$ when we work with twin primes, coming from the fact that $n$ and $n+2$ are in the same residue class mod $2$. This will often be obvious from the surrounding context.

Background

First, let's discuss some background results that will be useful to know throughout the post.

The prime number theorem says that the number of primes less than $x$, denoted $\pi(x)$, is well approximated by $x/\log x$. Concretely, it says that

$$\pi(x) = (1 + o(1)) \frac{x}{\log x}$$

Roughly, the prime number theorem says that the density of the prime numbers "around $x$" is roughly $1/\log x$ when $x$ is large. This is a rather difficult theorem to prove and we won't actually need it to prove Brun's theorem. However, the result will be useful to know in heuristic arguments for why we might expect the theorem to be true, and to motivate our method of proof.

We will also need the Mertens theorems. These theorems are "weaker" versions of the prime number theorem. Specifically, we will need to know the fact that

$$\sum_{p \le x} \frac{1}{p} = \log \log x + M + O\!\left(\frac{1}{\log x}\right)$$

where $M \approx 0.2615$ is the Meissel-Mertens constant. This is a stronger, quantitative version of the result that the sum of the reciprocals of the prime numbers is divergent. This theorem is not as difficult to prove. Here is a possible proof strategy: we know that each prime $p \le n$ occurs in the prime factorization of $n!$ roughly $n/p$ times. So we have the rough approximation

$$\log n! \approx \sum_{p \le n} \frac{n}{p} \log p$$

where the sum over $p$ runs over prime numbers - I'll stick to this convention for the rest of the post for the sake of brevity. On the other hand, Stirling's theorem gives the approximation $\log n! \approx n \log n$. Combining these immediately gives

$$\sum_{p \le n} \frac{\log p}{p} = \log n + O(1)$$

once we take account of the error terms in these approximations.

To pass from this to the sum we care about, we employ partial summation. This is the discrete analog of integration by parts. The idea is to write the sum we care about as

$$\sum_{p \le x} \frac{1}{p} = \sum_{n \le x} \frac{\mathbf{1}_P(n) \log n}{n} \cdot \frac{1}{\log n}$$

where $\mathbf{1}_P$ is the indicator function of the primes, and then use summation by parts, which allows us to shift the "differencing operator" from acting on $\mathbf{1}_P(n) \log n / n$ (whose partial sums $A(t) = \sum_{n \le t} \mathbf{1}_P(n) \log n / n = \log t + O(1)$ we understand from above) to acting on $1/\log n$. Approximating the first difference of $1/\log t$ by its derivative $-1/(t \log^2 t)$ gives us the main term

$$\int_2^x \frac{A(t)}{t \log^2 t} \, dt$$

which we can now approximate using the result from above as

$$\int_2^x \frac{\log t + O(1)}{t \log^2 t} \, dt = \log \log x + O(1)$$

where the last approximation follows from noting that $\frac{d}{dt} \log \log t = \frac{1}{t \log t}$. If we carefully keep track of all the error terms in the approximations, we again recover the precise result that $\sum_{p \le x} \frac{1}{p} = \log \log x + M + O(1/\log x)$.
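Both Mertens estimates are easy to check numerically. Here is a minimal sketch (the sieve routine, the cutoff $N$, and the use of $0.2615$ as an approximation to the Meissel-Mertens constant are my own choices for illustration):

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: return all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, flag in enumerate(sieve) if flag]

N = 10**6
ps = primes_up_to(N)

# Mertens' first theorem: sum_{p <= N} log(p)/p = log N + O(1)
first = sum(math.log(p) / p for p in ps)

# Mertens' second theorem: sum_{p <= N} 1/p = log log N + M + O(1/log N),
# with M ~ 0.2615 the Meissel-Mertens constant
second = sum(1.0 / p for p in ps)

print(math.log(N) - first)            # stays bounded (an O(1) quantity)
print(second - math.log(math.log(N)))  # should be close to M ~ 0.2615
```

Increasing $N$ makes the second difference settle toward $M$, while the first stays bounded, matching the two statements above.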

That's all the background knowledge we'll need for the post. Moving on:

Heuristics

Let's think about why we might expect Brun's theorem to be true.

We know from the prime number theorem that the density of the primes around $n$ is $1/\log n$, so if we cheat and assume that the events of $n$ being prime and $n+2$ being prime are "independent", the density of the twin primes should be roughly $1/\log^2 n$. So we should expect

$$\pi_2(x) \approx \frac{x}{\log^2 x}$$

where $\pi_2(x)$ counts the twin primes less than or equal to $x$. If so, we can simply turn the crank of partial summation as we did before: letting $P_2$ be the set of twin primes and $\mathbf{1}_{P_2}$ its indicator function, we compute

$$\sum_{\substack{p \le x \\ p \in P_2}} \frac{1}{p} = \sum_{n \le x} \frac{\mathbf{1}_{P_2}(n)}{n} \approx \int_2^x \frac{dt}{t \log^2 t}$$
This final sum is convergent as $x \to \infty$. One way to see it is by an integral test: if we consider the integral

$$\int_2^\infty \frac{dt}{t \log^2 t}$$

then the substitution $u = \log t$ turns the integral into

$$\int_{\log 2}^\infty \frac{du}{u^2}$$
which is obviously finite. So the initial sum is convergent as well.
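The heuristic can be sanity checked by computing $\pi_2(x)$ directly and comparing it against $x/\log^2 x$ (the cutoff and the sieve below are my own illustration). Note that the ratio does not tend to $1$: the more careful Hardy-Littlewood heuristic predicts a limiting factor of $2C_2 \approx 1.32$, the twin prime constant, which the crude independence argument misses. Still, the order of magnitude is right:

```python
import math

def prime_flags(n):
    """Boolean table: flags[k] is truthy iff k is prime, for 0 <= k <= n."""
    flags = bytearray([1]) * (n + 1)
    flags[0] = flags[1] = 0
    for p in range(2, int(n**0.5) + 1):
        if flags[p]:
            flags[p * p :: p] = bytearray(len(flags[p * p :: p]))
    return flags

N = 10**6
flags = prime_flags(N)

# pi_2(N): count pairs (p, p + 2) that are both prime, with p + 2 <= N
pi2 = sum(1 for p in range(2, N - 1) if flags[p] and flags[p + 2])
prediction = N / math.log(N) ** 2

print(pi2, prediction, pi2 / prediction)
```

At $N = 10^6$ the ratio is around $1.5$: the naive prediction is off by a modest constant factor, exactly the kind of slack the proof below can afford.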

This heuristic suggests that Brun's theorem should be true if there is "nothing wrong": if there's no unexpected correlation between the events of $n$ being prime and $n+2$ being prime. The task of the proof is therefore to show that this correlation can't get bad enough to make this sum divergent.

One key insight is that we actually have a substantial amount of room to work with in this argument: we don't actually need to get anywhere close to $x/\log^2 x$. If we could show, for instance, that

$$\pi_2(x) \le \frac{x}{(\log x)^{1 + \delta}}$$

for some $\delta > 0$ and all sufficiently large $x$, which is a much weaker bound than what the heuristic suggests, we would still prove that the sum we care about is convergent. This fact is what makes Brun's theorem a relatively "easy" result: we don't actually need to show that there's strictly no correlation between $n$ being prime and $n+2$ being prime, just that the correlation doesn't get too bad.

The sieve of Eratosthenes

Now that we understand what we have to do, it's time to think about concrete proof strategies. Our goal is to prove a "nontrivial" upper bound on the twin prime counting function $\pi_2(x)$. We know that the key objective is to bound the dependence between the events of $n$ being prime and $n+2$ being prime. We need to make something work with these ingredients.

Given that our goal is to prove an upper bound, we'll still have succeeded if we find a set $S$ that contains the twin primes and for which we can prove a similar result, as an upper bound on $|S \cap [1, x]|$ is obviously an upper bound on $\pi_2(x)$ as well. This suggests one immediate way to attempt a proof: the sieve of Eratosthenes.

The sieve of Eratosthenes characterizes the primes by the following: a prime is a number that's either not divisible by, or equal to, each of the primes smaller than it. In other words, we obtain a sequence of sets $S_k$ consisting of the integers not divisible by any of the first $k$ primes (except those primes themselves), and as $k$ grows this gives us an increasingly refined superset of the prime numbers. It's called a sieve because we're gradually "sieving out" composite numbers from the integers.

To apply this to our problem, we need to modify the sieve a little bit, as we care about twin primes. Instead of sieving out integers $n$ that are $\equiv 0 \pmod{p}$ for each prime $p$, we'll sieve out integers that are either $\equiv 0$ or $\equiv 2 \pmod{p}$. This is because the bigger prime $n$ in a tuple $(n-2, n)$ of twin primes can't belong to either of these congruence classes when $p$ is chosen to be smaller than the numbers in the tuple: $n \equiv 0 \pmod{p}$ would make $n$ composite, and $n \equiv 2 \pmod{p}$ would make $n - 2$ composite.

Now, we know that for a prime $p$, the number of natural numbers less than $x$ in any residue class mod $p$ is roughly equal to $x/p$. This approximation is quite good when $p$ is much less than $x$. So we can roughly think that applying the sieve for a prime $p$ removes a fraction $2/p$ of the remaining integers for odd primes $p$, and $1/2$ for the prime $2$ (where the two classes $0$ and $2$ coincide). For this to work nicely it's also more convenient to sieve out the entire congruence class, so we don't make an exception for the case when we're sieving out a twin prime itself as we would with the traditional sieve of Eratosthenes.
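Here is a small sketch of this modified sieve (the function names and the parameters $x$, $z$ are my own choices): we first keep only the odd numbers (the $p = 2$ step), then for each odd prime $p \le z$ we delete the entire classes $0$ and $2 \pmod{p}$. What survives contains every larger member $n$ of a twin pair $(n-2, n)$ with $n - 2 > z$:

```python
def odd_primes_up_to(z):
    """Trial division is fine for the small sieving primes used here."""
    return [p for p in range(3, z + 1, 2)
            if all(p % q for q in range(3, int(p**0.5) + 1, 2))]

def twin_superset(x, z):
    """Integers in [3, x] surviving the modified sieve by all primes <= z."""
    alive = [n % 2 == 1 for n in range(x + 1)]  # p = 2 step: keep odd n
    alive[0] = alive[1] = False
    for p in odd_primes_up_to(z):
        for r in (0, 2):                 # delete n = 0 and n = 2 (mod p)
            for n in range(r, x + 1, p):
                alive[n] = False
    return [n for n in range(x + 1) if alive[n]]

x, z = 10**4, 20
survivors = twin_superset(x, z)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# every larger member n of a twin pair (n - 2, n) with n - 2 > z survives
larger_twins = [n for n in range(z + 3, x + 1)
                if is_prime(n) and is_prime(n - 2)]
print(len(survivors), len(larger_twins))
```

The survivor count is already a small fraction of $x$, roughly matching the density $\frac{1}{2} \prod_{2 < p \le z} (1 - 2/p)$ discussed above.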

Given the above condition, we can't sieve by all primes up to $x$, as obviously this will leave us with nothing left; but if we sieve by the primes up to $\sqrt{x}$, we'll only have removed at most $\sqrt{x}$ twin primes, which is not going to be important as the upper bound we want to prove is much bigger than $\sqrt{x}$. So ideally we might imagine getting a result like

$$\pi_2(x) \approx x \cdot \frac{1}{2} \prod_{2 < p \le \sqrt{x}} \left(1 - \frac{2}{p}\right) \approx x \cdot \frac{1}{2} \prod_{2 < p \le \sqrt{x}} \left(1 - \frac{1}{p}\right)^2 \approx \frac{C x}{\log^2 x}$$

where the approximation uses the fact that $1 - 2/p \approx (1 - 1/p)^2$ when $1/p$ is small and the last equality uses the Mertens theorems (specifically, $\prod_{p \le z} (1 - 1/p) \sim e^{-\gamma}/\log z$). Note that here $p$ runs over the odd primes. If we could prove this, we would be very happy.

However, in fact we can't prove this, and the reason is that we're again assuming that the events we're dealing with are independent. If we could prove the independence in some formal sense, the argument would be fine, but this turns out to be very difficult to prove when we're taking all primes up to $\sqrt{x}$ in the product expansion.

The reason is that the fundamental result about patching together congruence classes modulo different primes, the Chinese remainder theorem, only allows us to control the number of solutions to a system of congruences over different prime moduli when the product of those primes is substantially less than $x$: concretely, it means the exact asymptotic we can give on the above expression is only

$$\pi_2(x) \le x \cdot \frac{1}{2} \prod_{2 < p \le z} \left(1 - \frac{2}{p}\right) + O\!\left(\prod_{p \le z} p\right)$$

It turns out that another implication of the prime number theorem is that

$$\sum_{p \le z} \log p = (1 + o(1)) \, z$$

so we expect the product of all primes up to $z$ to behave like the exponential $e^{(1 + o(1)) z}$. When we apply this to the primes less than $\sqrt{x}$, we get that the product is $e^{(1 + o(1)) \sqrt{x}}$, which is much larger than $x$ asymptotically. So the error term becomes much larger than the main term and the approximation fails.
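We can check numerically that the product of the primes up to $z$ grows like $e^z$ by computing $\theta(z) = \sum_{p \le z} \log p$, the logarithm of that product (the cutoffs below are my own choices for illustration):

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: return all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, flag in enumerate(sieve) if flag]

# theta(z) = log of the product of all primes <= z; the PNT implies theta(z) ~ z
for z in (10**3, 10**4, 10**5, 10**6):
    theta = sum(math.log(p) for p in primes_up_to(z))
    print(z, theta / z)
```

The printed ratios approach $1$ as $z$ grows, confirming that the primorial really is of size $e^{(1+o(1))z}$.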

The best we can hope for out of this is to take $z$ up to $\log x$ or so. This is sometimes called the Legendre sieve, and gives us the bound

$$\pi_2(x) \ll \frac{x}{\log^2 z} = \frac{x}{(\log \log x)^2}$$

which turns out to be too weak to do what we want. Indeed, this is even weaker than the prime number theorem bound $\pi(x) \ll x/\log x$, and we know the reciprocals of the prime numbers have a divergent sum, so the Legendre sieve gets us nothing in this situation: it's far too weak.

We obviously need to be more clever. However, this shouldn't be too hard: intuitively, it should be possible for us to go past the $\log x$ threshold, because it's not that we don't understand anything about primes that are bigger than $\log x$. The only information we don't have is about the joint correlations that involve a very large number of primes, and we understand the low order correlations very well even at thresholds much bigger than $\log x$, because those only involve the product of a small number of primes. For instance, up to $z = x^{1/10}$ we understand all correlations that involve $10$ or fewer primes very well from the same Chinese remainder theorem bound, since the product of the moduli is then at most $x$.

This suggests the following proof strategy: since higher order correlations are causing all the problems, we should try to find some way to decompose the quantity we want to calculate into "lower order" and "higher order" terms, and then see if we can bound the "higher order" part well enough that we can get some nontrivial result.

The Brun sieve

The Brun sieve is essentially an implementation of the vague idea discussed above.

Let's say that the event $A_p$ for a prime $p$ is defined as above: it's the event that a natural number in $[1, x]$ is either $0$ or $2$ modulo $p$. Then, we can see our above computation as noting that, ignoring the twin primes less than $z$ (which pose no problems, as discussed above), we have

$$\pi_2(x) \le z + x \cdot \mathbb{P}\left(\bigcap_{2 < p \le z} A_p^c\right)$$

where the exponent $c$ denotes set complements and the probability is with respect to a uniformly random integer in $[1, x]$. If we assume the events $A_p$ are actually independent, we can do a computation that looks like

$$\mathbb{P}\left(\bigcap_{2 < p \le z} A_p^c\right) = \prod_{2 < p \le z} \mathbb{P}(A_p^c) \approx \prod_{2 < p \le z} \left(1 - \frac{2}{p}\right)$$

and since we know $\prod_{2 < p \le z} (1 - 2/p) \approx C/\log^2 z$ we would be able to do the same proof we did above. However, as we know, the independence assumption is actually wrong, so this doesn't really work.

However, there is an unconditional way to simplify expressions of this kind that doesn't assume independence: to use inclusion-exclusion. This looks like

$$\mathbb{P}\left(\bigcap_{2 < p \le z} A_p^c\right) = \sum_{k \ge 0} (-1)^k \sum_{2 < p_1 < \dots < p_k \le z} \mathbb{P}\left(A_{p_1} \cap \dots \cap A_{p_k}\right)$$
Overall it's a big mess, but the hope is that this is exactly what we need to use our better understanding of the low order correlations between the congruence classes of different primes: inclusion-exclusion gives us a series expansion for the quantity we want, organized by the complexity of the correlations we need to understand!

So our hope might be the following: we truncate this sum at some threshold $m$, as in, we truncate at the term that involves the correlations of $m$ primes. We pick $m$ to be even, as inclusion-exclusion has the property that the partial sums are always upper bounds for even $m$ and lower bounds for odd $m$ (these are the Bonferroni inequalities). We then try to understand this truncated sum better.
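The alternating upper/lower bound property is easy to verify directly on a small instance. In this sketch (the parameters are my own choices, and I use brute-force counting rather than the Chinese remainder theorem), the events are $A_p = \{n \le x : n \equiv 0 \text{ or } 2 \pmod{p}\}$ for a handful of small odd primes:

```python
from itertools import combinations

x = 3000
primes = [3, 5, 7, 11, 13]

def in_A(n, p):
    """Event A_p: n is 0 or 2 modulo p."""
    return n % p in (0, 2)

# exact number of n <= x lying in none of the events A_p
exact = sum(all(not in_A(n, p) for p in primes) for n in range(1, x + 1))

def truncated(m):
    """Inclusion-exclusion truncated at intersections of m events."""
    total = 0
    for k in range(m + 1):
        for subset in combinations(primes, k):
            total += (-1) ** k * sum(
                all(in_A(n, p) for p in subset) for n in range(1, x + 1)
            )
    return total

for m in range(len(primes) + 1):
    print(m, truncated(m), "upper" if m % 2 == 0 else "lower")
```

Each even-$m$ truncation overshoots the exact count, each odd-$m$ truncation undershoots it, and the full sum recovers it exactly.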

The square root is not ideal for this, as we already don't understand even three-prime correlations when we work with the square root: a product of three primes of size close to $\sqrt{x}$ exceeds $x$. So let's also take the threshold at which we cut off the primes as a free parameter $z$. In the above, we were using $z = \sqrt{x}$, but we want to generalize this now.

One immediate observation is that if we cut the primes off at $z$, we only have a good understanding of correlations that involve at most $\log x / \log z$ primes, as beyond that the product of the moduli can exceed $x$. To be safe, let's truncate the sum at (or choose $m$ to be) the even number closest to $\epsilon \log x / \log z$, where $\epsilon$ is a small number that we'll fix later. In this case, the Chinese remainder approximations will be very good, and we don't have to worry about the failure of independence anymore. (I'll equivocate between the even integer $m$ and the real number $\epsilon \log x / \log z$ sometimes - rest assured that this causes no problems for the argument and everything goes into the error terms.) So we'll get the sum

$$\mathbb{P}\left(\bigcap_{2 < p \le z} A_p^c\right) \le \sum_{k=0}^{m} (-1)^k \sum_{2 < p_1 < \dots < p_k \le z} \mathbb{P}\left(A_{p_1} \cap \dots \cap A_{p_k}\right) \approx \sum_{k=0}^{m} (-1)^k \sum_{2 < p_1 < \dots < p_k \le z} \frac{2^k}{p_1 \cdots p_k}$$

after truncation. It's important to keep track of the error term of this properly. The error we make whenever we approximate the probability of a single intersection with the independence assumption is $O(2^m/x)$: the intersection $A_{p_1} \cap \dots \cap A_{p_k}$ is a union of at most $2^k \le 2^m$ residue classes modulo $p_1 \cdots p_k$, the modulus is at most $z^m = x^\epsilon$ by our choice of $m$, and each class contains $x/(p_1 \cdots p_k) + O(1)$ integers up to $x$. Since the number of such approximations we have to make is bounded from above by $z^m = x^\epsilon$, the total error is $O(x^{2\epsilon - 1 + o(1)})$ and we're safe as long as $\epsilon$ is small. For instance, $\epsilon = 1/10$ will be enough for the error to be negligible relative to the main term here.

Now that we know the error term is negligible so long as $\epsilon$ is small, let's think about approximating this sum. Bounding from above with the triangle inequality gives very poor results here because the sum has a lot of cancellation. However, we can use a trick: we know that the untruncated sum, i.e. the sum for $m = \infty$, is equal to the product

$$\sum_{k \ge 0} (-1)^k \sum_{2 < p_1 < \dots < p_k \le z} \frac{2^k}{p_1 \cdots p_k} = \prod_{2 < p \le z} \left(1 - \frac{2}{p}\right)$$
This is the expression we were working with earlier when assuming independence, so it makes sense to bring it back here. Instead of bounding the sum above directly, we'll write it as this "main term" plus an error term that comes from the "high order correlations" when the number of primes exceeds . This fits in naturally with our proof strategy: try to bound the contribution coming from higher order correlations.

This will give us

$$\sum_{k=0}^{m} (-1)^k \sum_{2 < p_1 < \dots < p_k \le z} \frac{2^k}{p_1 \cdots p_k} = \prod_{2 < p \le z} \left(1 - \frac{2}{p}\right) - \sum_{k > m} (-1)^k \sum_{2 < p_1 < \dots < p_k \le z} \frac{2^k}{p_1 \cdots p_k}$$

The triangle inequality works much better on the error term now. Indeed, it's fairly straightforward to see that if we recall the definition

$$S = \sum_{2 < p \le z} \frac{2}{p} = 2 \log \log z + O(1)$$

then the "error term", as in, all terms aside from the product term here, can be bounded in absolute value by

$$\sum_{k > m} \frac{S^k}{k!}$$

This is an elementary exercise and I leave it to the reader to see why it's true (expand $S^k$ and note that each product of $k$ distinct primes occurs $k!$ times). Taking this for granted, though, we can approximate this sum up to a constant by just its first term if $m \ge 2S$, say, as the rest of the series can be bounded from above by a geometric series in the parameter $S/m$ which converges quickly when the above condition is true. So as long as we pick $m$ larger than $\log \log x$ asymptotically, say if $m$ is always greater than $C \log \log x$ for some large constant $C$, we can approximate this sum by just its first term, which is roughly

$$\frac{S^m}{m!} = \frac{(2 \log \log z + O(1))^m}{m!}$$
This is not exactly right but the difference doesn't affect the proof - it just goes into the error term, as usual. The main term is also easy to approximate: we use the old idea of writing it as

$$\prod_{2 < p \le z} \left(1 - \frac{2}{p}\right) \approx \prod_{2 < p \le z} \left(1 - \frac{1}{p}\right)^2 \approx \frac{C}{\log^2 z}$$

using the Mertens theorems.
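Numerically, $\log^2 z \cdot \prod_{2 < p \le z} (1 - 2/p)$ does settle toward a constant. (It in fact converges to $4 C_2 e^{-2\gamma} \approx 0.83$, where $C_2$ is the twin prime constant, though the proof only needs that some constant $C$ works. The cutoffs below are my own choices; convergence in $z$ is slow, at rate $O(1/\log z)$.)

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: return all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, flag in enumerate(sieve) if flag]

for z in (10**4, 10**5, 10**6):
    prod = 1.0
    for p in primes_up_to(z):
        if p > 2:
            prod *= 1 - 2 / p
    print(z, prod * math.log(z) ** 2)
```

The printed values change slowly with $z$, consistent with the product being $\Theta(1/\log^2 z)$.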

Finishing the proof

Now we're almost at the end of the proof. We have a bound of the form

$$\pi_2(x) \ll \frac{x}{\log^2 z} + x \cdot \frac{(2 \log \log z + O(1))^m}{m!} + z$$

We can use Stirling's formula $m! \approx (m/e)^m$ to replace the factorial and end up with some bound of the form

$$\pi_2(x) \ll \frac{x}{\log^2 z} + x \left(\frac{2 e \log \log z}{m}\right)^m$$
where I've dropped the big O notation for simplicity, as this bound is not exactly right even up to a constant. However, the differences it has with a bound that works are inconsequential, so I'll illustrate the rest of this argument on this bound.

What can we pick for $m$? We know that we can't pick it to be a constant, as then the error term would be of size $x$ up to logarithmic factors, and we can't pick it to be much larger than $\epsilon \log x / \log z$. As a compromise, let's try something of the form $m = C \log \log x$ for some constant $C$. With this expression, we'll get

$$x \left(\frac{2 e \log \log z}{m}\right)^m \approx x \left(\frac{2e}{C}\right)^{C \log \log x} = x \, (\log x)^{-C \log(C/2e)}$$

Now, let's work out what this says. Since $m \approx \epsilon \log x / \log z$ forces $\log z \approx \epsilon \log x / (C \log \log x)$, our bound becomes, to top order,

$$\pi_2(x) \ll \frac{C^2}{\epsilon^2} \cdot \frac{x (\log \log x)^2}{\log^2 x} + x \, (\log x)^{-C \log(C/2e)}$$

Now, we need to make the choices of the constants. We need $\epsilon$ small enough for our approximations to have negligible error terms; $\epsilon = 1/10$ is good enough for this purpose. At the same time, we need to ensure that the second term doesn't dominate the main term here. Specifically, we have

$$x \, (\log x)^{-C \log(C/2e)} \ll \frac{x}{\log^2 x} \iff C \log \frac{C}{2e} > 2$$

and we want the main term to dominate, so we want the exponent here to be less than $-2$. If we choose $C = 10$ and $\epsilon = 1/10$, it's easy to check that both of these conditions hold (note that $10 \log(10/2e) \approx 6.1 > 2$). So everything checks out, and we're finally able to recover a bound of the form

$$\pi_2(x) \ll \frac{x (\log \log x)^2}{\log^2 x}$$
Is this good enough? Yes! Earlier, we had said that even a bound of the form

$$\pi_2(x) \le \frac{x}{(\log x)^{1 + \delta}}$$

would've been good enough, and this bound is far tighter. It's only a factor of $(\log \log x)^2$ away from the independence bound $x/\log^2 x$, which is in fact quite impressive.
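With the upper bound in hand, partial summation gives Brun's theorem. To close the loop, we can watch the partial sums of the twin prime reciprocal series crawl upward (the cutoffs below are my own choices; the accepted numerical estimate of Brun's constant, $B \approx 1.902$, is an extrapolation rather than something the theorem itself provides, and convergence is famously slow):

```python
def prime_flags(n):
    """Boolean table: flags[k] is truthy iff k is prime, for 0 <= k <= n."""
    flags = bytearray([1]) * (n + 1)
    flags[0] = flags[1] = 0
    for p in range(2, int(n**0.5) + 1):
        if flags[p]:
            flags[p * p :: p] = bytearray(len(flags[p * p :: p]))
    return flags

N = 10**6
flags = prime_flags(N + 2)

# partial sums of sum over twin pairs (p, p+2) of 1/p + 1/(p+2)
for x in (10**3, 10**4, 10**5, 10**6):
    partial = sum(1 / p + 1 / (p + 2)
                  for p in range(3, x + 1)
                  if flags[p] and flags[p + 2])
    print(x, partial)
```

Each extra factor of $10$ in the cutoff adds only a small increment, exactly the behavior the convergent series predicts.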

Conclusion

The fundamental nature of the above proof is that we understand the behavior of primes relative to modular congruence classes quite well, at least when the modulus is small compared to the primes in question. We then try to leverage our understanding here into saying something about the distribution of primes in other sets.

This is the heart of sieve theory: it's used in cases when you understand the behavior of some objects well in some substructures and you want to "piece that together" to gain a broader understanding of how the objects behave. Here, the substructures are arithmetic progressions or modular congruence classes, and the objects are the prime numbers, but the fundamental ideas of sieve theory may be applied in other domains as well.

The Brun sieve itself is a fairly basic sieve that was introduced over a hundred years ago for the exact purpose of proving upper bounds on $\pi_2(x)$, though it can also be used for other purposes as a general way to prove upper bounds on various quantities involving primes using combinatorial methods. Today, more sophisticated sieves are able to prove nontrivial lower bounds as well, which is often a more tricky problem than proving upper bounds in sieve theory.

For more on sieve theory, you might want to check out the associated tag on Terence Tao's blog. It contains material ranging from introductory to research-level in sophistication.
