Counting primes in arithmetic progressions

We have now seen in the previous two posts a few results about prime numbers. In the first one, we saw why primes are so important and used Euclid’s proof to show that there are infinitely many primes. Moreover, we conjectured that if N and d are coprime, then there are infinitely many primes p with p\equiv_N d. And to top it all, we also conjectured that they are evenly distributed among the integers 1\leq d <N which are coprime to N.

Writing conjectures down is much easier than actually proving them, and in order to do this, in the second post we reproved the fact that there are infinitely many primes, this time using Euler’s proof. The main idea was to consider the infinite sum {\displaystyle \sum_{p\in P}\frac{1}{p}} and to show that it diverges to infinity and therefore there are infinitely many primes.

The set of prime numbers has very little structure that we know of, and actually what we are trying to do is to find some of this structure. For that, we moved step by step to worlds with more structure, in which we have many more tools that can be used to tackle this problem – from the set of prime numbers to the integers (which have addition and multiplication) and from there to the real numbers (which also have a metric, derivatives, integrals, etc). Then, if we are lucky enough, we can prove this claim in the world of real numbers and pull it back to the integers and then back to the prime numbers. In this post, we will find the last step, which will allow us to go to even smaller sets of primes, according to their equivalence classes modulo N, and prove Dirichlet’s theorem:

THEOREM: For any N, d coprime integers, there are infinitely many primes p with p\equiv_N d.

The first attempt

For simplicity, let us consider the case of N=3 and d=1,2. For ease of notation, let us denote

{\displaystyle \mathcal{P}_{d,N} = \sum_{p\equiv_N d, p\in P} \frac{1}{p}},


{\displaystyle \mathcal{N}_{d,N} = \sum_{n\equiv_N d} \frac{1}{n}},

Note in particular that \mathcal{P}_ {0,3}+ \mathcal{P}_ {1,3}+ \mathcal{P}_ {2,3}= \mathcal{P}_ {0,1} = \sum_{p\in P} \frac{1}{p} which we already know to be infinite, and similarly \mathcal{N}_{0,3} + \mathcal{N}_{1,3} + \mathcal{N}_{2,3} = \mathcal{N}_{0,1} = \sum_n \frac{1}{n} is the harmonic series.

We would like to show that \mathcal{P}_{1,3} and \mathcal{P}_{2,3} are infinite, and in a sense they are the “same” infinity.

A straightforward attempt at generalizing Euler’s proof from the previous post is to somehow show that \mathcal{N}_{d,N} is infinite, and then pull it back to the prime numbers and show that \mathcal{P}_ {d,N} is infinite.

For example, for d=1 we get the sum \mathcal{N}_{1,3}=\frac{1}{1} + \frac{1}{4} +\frac{1}{7} + \cdots . Showing that such a sum is infinite follows almost automatically from the divergence of the harmonic series, since

{\displaystyle \sum_{k=1}^\infty \frac{1}{kN+d} \geq \sum_{k=1}^\infty \frac{1}{(k+1)N} = \frac{1}{N}(\sum_{k=2}^\infty \frac{1}{k}) = \infty}.

Note that d almost doesn’t play a role in the computation above, and in a sense \mathcal{N}_{d,N} = \frac{\mathcal{N}_{0,1}}{N} (which is more or less what we want in the end for the primes). More specifically, in our N=3 case, if we only sum over the integers n\leq M and look at the difference

{\displaystyle \left|\sum_{n\equiv_N 1,\;1\leq n\leq M} \frac{1}{n} - \sum_{n\equiv_N 2,\;1\leq n\leq M} \frac{1}{n}\right|},

it is uniformly bounded (at most 1) for all M. Since both sums diverge to infinity, we consider them as diverging at the same speed.
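Just to see this in action, here is a quick numerical sketch in Python (the helper `partial_sum` is my own name, written only for this check): both partial sums grow without bound, while their difference stays under 1.

```python
# Partial sums of 1/n over n ≡ 1 (mod 3) and n ≡ 2 (mod 3) up to M:
# both grow like (1/3)·log M, while their difference stays bounded by 1.

def partial_sum(d, N, M):
    """Sum of 1/n over 1 <= n <= M with n ≡ d (mod N)."""
    return sum(1.0 / n for n in range(1, M + 1) if n % N == d)

for M in (10**3, 10**5):
    s1 = partial_sum(1, 3, M)
    s2 = partial_sum(2, 3, M)
    print(M, round(s1, 3), round(s2, 3), round(s1 - s2, 3))
```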

The hard part is to move from the integers to the prime numbers. It would be really helpful if we could have a result like

{\displaystyle \sum_{1\leq n\equiv _N d} \frac{1}{n} = \prod_{p\equiv _N d} \left(\sum_{k=1}^\infty \frac{1}{p^k}\right)},

which is similar to what we had in the previous post in Euler’s proof. However, trying to open the brackets doesn’t work as it did last time: a general integer n appearing on the left with n \equiv_N d doesn’t have to be a product of primes with p\equiv _N d as appearing on the right, so Euler’s proof method doesn’t exactly work here. For example, we have 4 \equiv _3 1, while its decomposition is 4 = 2^2 where 2\equiv_3 2.

We already reached our first road block – so what can we do?

Proving from the result

There are two results that we already know about the \mathcal{P}_{d,3}. The first one is that for d=0 it is quite trivial, since 3 is the only prime divisible by 3, so \mathcal{P}_ {0,3} = \frac{1}{3} (which is clearly finite). The second result is that

{\displaystyle   \mathcal{P}_ {0,3} +  \mathcal{P}_ {1,3} +  \mathcal{P}_ {2,3} = \sum _{p\in P} \frac{1}{p} = \infty}.

What we would like to show is that both \mathcal{P}_ {1,3} and \mathcal{P}_ {2,3} are infinite, and since we are also a bit greedy, we would also like to show that in a sense they are the same infinity: \mathcal{P}_ {1,3}= \mathcal{P}_ {2,3}. This equality doesn’t say too much if they are both infinity, however \mathcal{P}_ {1,3}- \mathcal{P}_ {2,3}=0 (or at least, that the limit of this difference is finite) is an entirely different story.

If this is indeed the result that we expect to see, what should we prove in order to get it? In the prime notation, the summation above is

{\displaystyle  - \frac{1}{2} - \frac{1}{5} + \frac{1}{7} - \frac{1}{11} + \frac{1}{13} - \cdots },

which is a sum with a sort of alternating plus/minus signs. To separate the two parts, (1) the primes and (2) the signs, so we can order our thoughts, let us define

\psi\left(n\right)=\begin{cases} 0 & n\equiv_{3}0\\1 & n\equiv_{3}1\\-1 & n\equiv_{3}-1\end{cases} ,

so that now our summation from above can be written simply as \sum_{p\in P} \frac{\psi (p)}{p} . This “simple” sum can be done over the integers as well, giving us

{\displaystyle    \sum_{n=1}^\infty \frac{\psi (n)}{n} = \frac{1}{1} - \frac{1}{2} +\frac{1}{4} - \frac{1}{5} +\frac{1}{7} - \frac{1}{8} + \cdots},

which is truly an alternating sum (the signs are 1, -1, 0 periodically). This should already sound familiar from calculus class, so we can return to Euler’s proof method:

  1. Show that {\displaystyle \sum_{n=1}^\infty \frac{\psi(n)}{n}} converges to a finite number.
  2. Pull this result back to the prime numbers, and show that {\displaystyle \mathcal{P}_{1,3} - \mathcal{P}_{2,3} = \sum_{p\in P} \frac{\psi(p)}{p} } converges to a finite number.
  3. Together with what we already know, namely \mathcal{P}_{1,3} + \mathcal{P}_{2,3} = \infty, conclude that both \mathcal{P}_{1,3} and \mathcal{P}_{2,3} diverge to infinity, and in a sense they diverge at the same speed.
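Before proving anything, we can already watch step 1 happen numerically (a sketch; \psi is exactly the function defined above, and the checkpoint values are rounded):

```python
# Partial sums of sum_{n>=1} ψ(n)/n, with ψ(n) = 0, 1, -1 according to
# n ≡ 0, 1, 2 (mod 3): they settle down to a finite limit.

def psi(n):
    r = n % 3
    return 0 if r == 0 else (1 if r == 1 else -1)

partial = 0.0
checkpoints = {}
for n in range(1, 10**5 + 1):
    partial += psi(n) / n
    if n in (10**3, 10**4, 10**5):
        checkpoints[n] = round(partial, 4)
print(checkpoints)
```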

Note that the two “equations” \mathcal{P}_ {1,3}+ \mathcal{P}_ {2,3} = \infty and \mathcal{P}_ {1,3} - \mathcal{P}_{2,3} \sim 0 (i.e. the difference is finite) imply that each of \mathcal{P}_ {1,3} and \mathcal{P}_ {2,3} is about “half” of infinity! We will later give the exact definition for this, but for now the moral of the story should be that in order to understand \mathcal{P}_ {1,3} and \mathcal{P}_ {2,3}, we must consider them together via two specific combinations: once the addition and once the subtraction.

This holistic idea comes up in many parts of mathematics. To see that we already know this type of behavior, consider the complex numbers. Suppose that you have a complex number z, and you want to find its real part a\cdot 1 and imaginary part b\cdot i, but for some reason this is a difficult task. However, you might have access to the two combinations z=a+bi and \bar{z}=a-bi. In this case, you can always go back and recover the real part a = \frac{z+\bar{z}}{2} and the imaginary part bi = \frac{z-\bar{z}}{2}. This behavior, where the pair of real and imaginary parts and the pair of a complex number and its conjugate are in a sense dual to one another, so that knowing one pair is knowing the other, is exactly what we see with \mathcal{P}_ {1,3} and \mathcal{P}_ {2,3}.

Before we get to this duality in the more complicated world of prime numbers, let us start with a simpler place that we all learn about in calculus class.

Infinite sums with alternating signs

The infinite sum \sum_1^\infty \frac{\psi(n)}{n} has a third of its summands actual zeros. If we ignore them, the rest of the numbers alternate between plus and minus signs, and decrease in absolute value. You have probably seen this in calculus class in its more general form, where there is a sequence a_n which is positive and decreases to zero (written as a_n \searrow 0), and we want to show that \sum_0^\infty (-1)^n a_n converges.

In our first attempt above, we looked at \mathcal{N}_{d,N} for d=1,2, and noted that their difference is bounded. Just like in the primes, we get that the difference is exactly \sum_1^\infty \frac{\psi(n)}{n} = \mathcal{N}_{1,3} - \mathcal{N}_{2,3} so we already know that this alternating sum is uniformly bounded. Let us show that this is true for more general sequences, as defined above, and that this difference is actually convergent.

We already have sums for subsequences with specific index modulo N for the prime numbers, and for integers, so we now define them for the sequence (-1)^n a_n as well. This time N=2 (we “ignore” the zero elements) and d=0,1. Since N is fixed, we remove it from our subscript. However, since we want to actually do some computations with them, we will add M to the subscript and consider the partial sums:

{\displaystyle S_{0,M} = \sum_{n=0}^{M-1} a_{2n}}\quad\quad ; \quad\quad {\displaystyle S_{1,M} = \sum_{n=0}^{M-1} a_{2n+1}}

CLAIM: For every M we have that S_{0,M}\geq S_{1,M} \geq S_{0,M} - a_0.

This claim follows easily from the fact that a_n \searrow 0 and is left as an exercise. Using this claim, we already know that the partial sums for our alternating signs sequence, which are more or less S_{0,M}-S_{1,M} are always in [0,a_0]. Bounded usually doesn’t imply convergent, but in our case it does.
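The claim is easy to test on a concrete sequence, say a_n = \frac{1}{n+1} (an arbitrary choice satisfying a_n \searrow 0, picked just for illustration):

```python
# Check the claim S_{0,M} >= S_{1,M} >= S_{0,M} - a_0 for the
# decreasing sequence a_n = 1/(n+1), for many values of M.

def a(n):
    return 1.0 / (n + 1)

for M in range(1, 300):
    S0 = sum(a(2 * n) for n in range(M))      # even-index partial sum
    S1 = sum(a(2 * n + 1) for n in range(M))  # odd-index partial sum
    assert S0 >= S1 >= S0 - a(0)
print("claim verified for a_n = 1/(n+1) and all M < 300")
```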

If instead of starting at the 0 index, we would start at the 2K index, then the segment will be [0, a_{2K}] which by definition is smaller, since a_n \searrow 0. The partial sums for the two subsequences for d=0,1 starting at 2K are by definition S_{0,M+K} - S_{0,K} and S_{1,M+K} - S_{1,K}. It now follows that

(S_{0,M+K} - S_{1,M+K}) - (S_{0,K} - S_{1,K}) = (S_{0,M+K} - S_{0,K}) - (S_{1,M+K} - S_{1,K})  \in [0, a_{2K}].

In other words, applying this claim to the subsequences starting at 2K, for every K, shows that

M \mapsto (S_{0,M} - S_{1,M}) = \sum_0^{2M-1} (-1)^n a_n

is a Cauchy sequence and therefore converges. This almost shows that our alternating sum converges as well, since we are now only missing the partial sums with an odd number of summands. But these are very close to the partial sums with an even number of summands: indeed, their difference is

(\sum_0^{2M} (-1)^n a_n) -  (\sum_0^{2M-1} (-1)^n a_n) = a_{2M} \to 0,

so they both converge to the same number, implying that the series \sum_n (-1)^n a_n converges to a finite number.

To summarize what we know so far: We wanted to understand both \mathcal{P}_ {1,3} and \mathcal{P}_ {2,3}, since we understand quite well \mathcal{N}_{1,3} and \mathcal{N}_{2,3}. However, the direct move from the integers to the primes doesn’t work as we would like, so we considered another perspective and looked at the difference. Also, if you think about it, in Euler’s proof in the previous post we looked at the sum. So now our proof idea has one more step in the middle:

\left(\begin{array}{c} \mathcal{N}_{1,3}\\ \mathcal{N}_{2,3} \end{array}\right)\Rightarrow\left(\begin{array}{c} \mathcal{N}_{1,3}+\mathcal{N}_{2,3}=\infty\\ \mathcal{N}_{1,3}-\mathcal{N}_{2,3}=finite \end{array}\right)\overset{?}{\Rightarrow}\left(\begin{array}{c}  \mathcal{P}_ {1,3}+ \mathcal{P}_ {2,3}=\infty\\  \mathcal{P}_ {1,3}- \mathcal{P}_ {2,3}=finite \end{array}\right)\Rightarrow\left(\begin{array}{c}  \mathcal{P}_ {1,3}\\  \mathcal{P}_ {2,3} \end{array}\right).

We know that \mathcal{N}_{1,3} and \mathcal{N}_{2,3} are more or less the same. If we can now move from the integers to the prime numbers and show that \mathcal{P}_ {1,3}- \mathcal{P}_ {2,3} is finite, we could show that \mathcal{P}_ {1,3} and \mathcal{P}_ {2,3} are more or less the same also, and complete the proof.

From integers to prime numbers

Our next step is to take the result from the previous section and pull it from the integers to the prime numbers. If our intuition so far is true, then we should expect to have an equality of the form:

{\displaystyle \sum_1^\infty \frac{\psi(n)}{n} = \prod_p \left( \sum_{k=0}^\infty \frac{\psi(p)^k}{p^k} \right) = \prod_p \left( \frac{1}{1-\psi(p)/p} \right)}.

When we open the brackets in the middle expression, we will again get products of primes which is how we get all the integers in the left expression, but we will also get products of images of \psi. Initially, we constructed \psi just to fit the result that we wanted to prove, which led it eventually to appear in this multiplication. Luckily for us, this function really likes multiplication – it is not hard to show that for any two integers a, b we have \psi(a\cdot b) = \psi(a) \cdot \psi(b). Such a function is called a multiplicative function. In particular, we can use it for decomposition of integers to prime numbers:

{\displaystyle n = \prod p_i ^{k_i} \; \; \; \Rightarrow \; \; \; \frac{\psi(n)}{n} = \frac{\psi(\prod p_i^{k_i})}{\prod p_i^{k_i}} = \prod (\frac{\psi(p_i)}{p_i})^ {k_i}}.

Thus, at least when formally opening the brackets, we get that the right hand side of the equation equals the left hand side.
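The multiplicativity of \psi can be brute-force checked on a small range (a check, not a proof; since \psi only depends on the residue mod 3, checking all residue pairs would in fact suffice):

```python
# ψ(ab) = ψ(a)·ψ(b) for all integers a, b: verified directly on a
# small range.  ψ depends only on n mod 3, so this covers all cases.

def psi(n):
    r = n % 3
    return 0 if r == 0 else (1 if r == 1 else -1)

assert all(psi(a * b) == psi(a) * psi(b)
           for a in range(1, 200) for b in range(1, 200))
print("psi is multiplicative on 1..199")
```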

If we could also prove this equation as an equation between actual limits, the rest of the proof is very similar to Euler’s proof from the last post. Taking the logarithm of both sides we get that

{\displaystyle finite = \ln(\sum_1^\infty \frac{\psi(n)}{n}  ) = \sum_p \ln (\frac{1}{1-\psi(p)/p}) = \sum_p \ln(1+\frac{\psi(p)}{p-\psi(p)})}.

The last sum has both positive and negative summands, so we can’t just say that \ln(1+\frac{\psi(p)}{p-\psi(p)}) is more or less \frac{\psi(p)}{p} and continue on. However, take it as an exercise to show that we can move to the sum \sum_{p\in P} \frac{\psi(p)}{p} without losing too much (the difference is finite), and this will finish the proof.
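One way to do this exercise is through the standard bound \left|\ln(1+x)-x\right|\leq x^{2} for \left|x\right|\leq\frac{1}{2}. Writing x_p = \frac{\psi(p)}{p-\psi(p)}, we have

{\displaystyle \left|\ln\left(1+x_{p}\right)-\frac{\psi(p)}{p}\right| \leq \left|\ln\left(1+x_{p}\right)-x_{p}\right|+\left|x_{p}-\frac{\psi(p)}{p}\right| \leq x_{p}^{2}+\frac{1}{p\left(p-1\right)}},

since x_p - \frac{\psi(p)}{p} = \frac{\psi(p)^2}{p(p-\psi(p))}. Both \sum_p x_p^2 and \sum_p \frac{1}{p(p-1)} converge (they are dominated by sums of the type \sum_n \frac{1}{n^2}), so \sum_p \ln(1+x_p) and \sum_p \frac{\psi(p)}{p} differ by a finite amount.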

Last time, we had to work a bit to show that the “formal” equality

{\displaystyle \sum_1^\infty \frac{\psi(n)}{n} = \prod_p \left( \sum_{k=0}^\infty \frac{\psi(p)^k}{p^k} \right) }

is actually true when considering both sides as limits of sequences. This time, this is no longer an easy task, because the numbers keep changing signs, and the proof of this equality will be left for another time. However, if we look at a slightly different equation

{\displaystyle \sum_1^\infty \frac{\psi(n)}{n^{1+\varepsilon}} = \prod_p \sum_{k=0}^\infty \frac{\psi(p)^k}{p^{(1+\varepsilon)k}}},

then for \varepsilon>0 we already know that the left hand side converges absolutely. Using the logarithm trick, you can also check that the right hand side converges to a finite number. In this case, the equality is an exercise in integrals and infinite sums, and the hard part is to show that it still holds when taking \varepsilon to zero.
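For a concrete sanity check of this \varepsilon-version, we can compare the two sides numerically for, say, \varepsilon = 0.5 (a sketch; the truncation points 10^5 and 10^4 are arbitrary, and primality is tested by naive trial division):

```python
# Compare the two sides of the identity at s = 1 + ε = 1.5: the sum
# over the integers versus the Euler product over primes, truncated.

def psi(n):
    r = n % 3
    return 0 if r == 0 else (1 if r == 1 else -1)

s = 1.5
lhs = sum(psi(n) / n ** s for n in range(1, 10**5))

rhs = 1.0
for p in range(2, 10**4):
    if all(p % q for q in range(2, int(p ** 0.5) + 1)):  # p is prime
        rhs *= 1.0 / (1.0 - psi(p) / p ** s)
print(round(lhs, 5), round(rhs, 5))
```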

To conclude, we have (almost) shown that the limit \mathcal{P}_{1,3} -  \mathcal{P}_ {2,3} is a finite number (namely, the limit when we sum over more and more primes), and we know that \mathcal{P}_ {1,3} +  \mathcal{P}_ {2,3} diverges to infinity. Consider now the limits

{\displaystyle \frac{ \mathcal{P}_ {1,3} }{ \mathcal{P}_ {1,3}+ \mathcal{P}_ {2,3} } \; \; ; \; \;  \frac{ \mathcal{P}_ {2,3} }{ \mathcal{P}_ {1,3}+ \mathcal{P}_ {2,3} }  },

where the M-th element in the defining sequence is the summation in the numerator and denominator over all the primes p\leq M. By definition, their sum is 1, while the limit of their difference is zero (finite divided by infinity), so we conclude that each of them converges to a half. It is not exactly saying that half of the primes are 1 mod 3 and half are 2 mod 3, but it is really close! At least philosophically speaking, this should be considered as saying that the primes distribute evenly between 1 and 2 mod N=3.
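Philosophy aside, we can watch this ratio with a computer (a numerical sketch; `primes_up_to` is a standard sieve of Eratosthenes, written here just as a helper). Keep in mind that since the denominator only grows like \log\log M, the drift toward a half is extremely slow:

```python
# Partial sums of 1/p over primes p <= M with p ≡ 1 and p ≡ 2 (mod 3),
# and the ratio P_{1,3}/(P_{1,3} + P_{2,3}).  The denominator grows
# like log log M, so convergence of the ratio to 1/2 is very slow.

def primes_up_to(M):
    sieve = bytearray([1]) * (M + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(M ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, M + 1, i)))
    return [i for i in range(2, M + 1) if sieve[i]]

P1 = P2 = 0.0
for p in primes_up_to(10**6):
    if p % 3 == 1:
        P1 += 1.0 / p
    elif p % 3 == 2:
        P2 += 1.0 / p
print(round(P1 / (P1 + P2), 3))
```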

If it is true for 2 and true for 3 …

So what happens if N>3, and there are many 1\leq d<N which are coprime to N? Even for N=5 we already have 4 such d, and we can’t just look at the sum and difference and be done with it. The sum still works, and we know that it is infinite, namely {\displaystyle \sum_{gcd(d,N)=1}  \mathcal{P}_ {d,N} = \infty}, but we need something more general than just “difference”.

When looking at the proof above, we see that the important part was not the difference, but the fact that \psi(n) was a multiplicative function on the integers, and it was also 3-periodic. In other words, it was a composition

{\displaystyle \mathbb{Z} \to \mathbb{Z}/3\mathbb{Z} \to \{-1,0,1\}}

where the first map is the standard projection mod 3, while the second is a multiplicative function. More precisely, its restriction to the invertible numbers

{\displaystyle \left( {\mathbb{Z}}/{3\mathbb{Z}} \right)^\times \to \{-1,1\}}

is a group homomorphism, and the noninvertible numbers are sent to zero.

The group of invertible numbers mod 3 contains exactly two elements \{1,2\}. Each group homomorphism must send 1 to 1 (both are the neutral elements), and is therefore determined by where it sends 2. Both of the maps defined by \psi_1(2)=1 and \psi_{-1}(2)=-1 define such group homomorphisms. The second one, \psi_{-1}, is what we have up until now thought of as \psi, but the first one is also very important, and we actually used it all along.

To see this, let us fix another notation. For any function \lambda from the integers, we write

{\displaystyle \mathcal{N}_\lambda = \sum_{n=1}^\infty \frac{\lambda(n)}{n} \; \; ; \; \; \mathcal{P}_\lambda = \sum_{p\in P} \frac{\lambda(p)}{p}}.

This already fits with the expressions that we saw up until now with \lambda=\psi in the previous sections. Moreover, if we let \chi_{d,N} be the (N-periodic) function

\chi_{d,N}\left(n\right)=\begin{cases} 1 & n\equiv_{N}d\\ 0 & else \end{cases},

then we also get that \mathcal{P}_{d,N} = \mathcal{P}_{\chi_{d,N}}, and similarly for the integers. Hence, we already see that this new definition is quite useful.

Moreover, by definition we get that \lambda \mapsto \mathcal{P}_\lambda is linear, namely

\mathcal{P}_{a\lambda_1 + b\lambda_2} =  a \mathcal{P}_{\lambda_1} + b \mathcal{P}_{\lambda_2}

as a sequence (summing over the primes p\leq M) and as limits if they exist.

In particular, since \psi = \psi_{-1} = \chi_{1,3} - \chi_{2,3}, it is not surprising that we got that \mathcal{P}_\psi = \mathcal{P}_{1,3} - \mathcal{P}_{2,3}. On the other hand, if we use the function \psi_1 we get \mathcal{P}_{\psi_1} =   \mathcal{P}_{1,3} + \mathcal{P}_{2,3} which we know is infinity. The same goes for the integers, so basically our proof steps were:
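The decomposition \psi = \chi_{1,3} - \chi_{2,3} is a pointwise identity of functions, easy to verify (a sketch; the function names below are mine):

```python
# Pointwise check that ψ = χ_{1,3} - χ_{2,3}, which together with the
# linearity of λ → P_λ gives P_ψ = P_{1,3} - P_{2,3}.

def psi(n):
    r = n % 3
    return 0 if r == 0 else (1 if r == 1 else -1)

def chi(d, N):
    return lambda n: 1 if n % N == d else 0

chi13, chi23 = chi(1, 3), chi(2, 3)
assert all(psi(n) == chi13(n) - chi23(n) for n in range(1, 10**4))
print("psi == chi_{1,3} - chi_{2,3} on 1..9999")
```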

\left(\begin{array}{c} \mathcal{N}_{1,3}\\ \mathcal{N}_{2,3} \end{array}\right)\Rightarrow\left(\begin{array}{c} \mathcal{N}_{\psi_{1}}=\infty\\ \mathcal{N}_{\psi_{-1}}=finite \end{array}\right)\Rightarrow\left(\begin{array}{c} \mathcal{P}_{\psi_{1}}=\infty\\ \mathcal{P}_{\psi_{-1}}=finite \end{array}\right)\Rightarrow\left(\begin{array}{c} \mathcal{P}_{1,3}\\ \mathcal{P}_{2,3} \end{array}\right) .

It should be true for N=5 as well

Now, with this more general approach, let’s see if we can use it on N=5. We start by looking at group homomorphisms {\displaystyle \left( {\mathbb{Z}}/{5\mathbb{Z}} \right)^\times \to \{-1,1\}}.

The group of invertibles mod 5 is a cyclic group of size 4, since mod 5 we have \{1,2,3,4\} = \{2,2^2,2^3,2^4\}. There are exactly two such homomorphisms from this group, which we denote by \alpha_1, \alpha_{-1}, defined by where we send the generator 2 (namely \alpha_k(2) = k). We can now define

{\displaystyle  \mathcal{P}_ {\alpha_1} = \sum_{p\in P} \frac{\alpha_1(p)}{p} = \frac{1}{2} + \frac{1}{3} + \frac{1}{7} + \frac{1}{11} + \frac{1}{13} + \cdots } =  \mathcal{P}_{1,5} + \mathcal{P}_{2,5} + \mathcal{P}_{3,5} +   \mathcal{P}_{4,5}

{\displaystyle  \mathcal{P}_ {\alpha_{-1}} = \sum_{p\in P} \frac{\alpha_{-1}(p)}{p} = -\frac{1}{2} - \frac{1}{3} - \frac{1}{7} + \frac{1}{11} - \frac{1}{13} + \cdots } =  \mathcal{P}_{1,5} - \mathcal{P}_{2,5} - \mathcal{P}_{3,5} +   \mathcal{P}_{4,5}  .

We can look at their sum to get only the primes which are 1 or 4 mod 5, or at their difference for 2 or 3, but unlike with N=3, this time it is not enough.

There are simply not enough homomorphisms into the set \{-1,1\}, so we instead cheat and increase this set to \mathbb{C}^\times . The generator 2 mod 5 is of order 4, so it must be sent to an element z such that z^4=1. This time, over the complex numbers, we have exactly 4 such elements – \{1, i, -1, -i\}, and therefore there are four multiplicative functions mod 5. So instead of \mathcal{P}_ {1,5},  \mathcal{P}_ {2,5},  \mathcal{P}_ {3,5} and \mathcal{P}_ {4,5}, we have the four combinations:

{\displaystyle  \mathcal{P}_ {\alpha_1} = 1\cdot  \mathcal{P}_ {1,5} + 1\cdot   \mathcal{P}_ {2,5} + 1\cdot   \mathcal{P}_ {3,5} + 1\cdot   \mathcal{P}_ {4,5} }

{\displaystyle  \mathcal{P}_ {\alpha_i} = 1\cdot  \mathcal{P}_ {1,5} + i\cdot   \mathcal{P}_ {2,5} - i\cdot   \mathcal{P}_ {3,5} - 1\cdot   \mathcal{P}_ {4,5} }

{\displaystyle  \mathcal{P}_ {\alpha_{-1}} = 1\cdot  \mathcal{P}_ {1,5} - 1\cdot   \mathcal{P}_ {2,5} - 1\cdot   \mathcal{P}_ {3,5} + 1\cdot   \mathcal{P}_ {4,5} }

{\displaystyle  \mathcal{P}_ {\alpha_{-i}} = 1\cdot  \mathcal{P}_ {1,5} - i\cdot   \mathcal{P}_ {2,5} + i\cdot   \mathcal{P}_ {3,5} - 1\cdot   \mathcal{P}_ {4,5} },

or to put it in matrix form:

\left(\begin{array}{c}  \mathcal{P}_ {\alpha_{1}}\\  \mathcal{P}_ {\alpha_{i}}\\  \mathcal{P}_ {\alpha_{-1}}\\  \mathcal{P}_ {\alpha_{-i}} \end{array}\right)=\left(\begin{array}{cccc} 1 & 1 & 1 & 1\\ 1 & i & -i & -1\\ 1 & -1 & -1 & 1\\ 1 & -i & i & -1 \end{array}\right)\left(\begin{array}{c}  \mathcal{P}_ {1,5}\\  \mathcal{P}_ {2,5}\\  \mathcal{P}_ {3,5}\\  \mathcal{P}_ {4,5} \end{array}\right).

Trying to get back the \mathcal{P}_ {d,5} from the \mathcal{P}_ {\alpha_k} is basically finding the inverse of this matrix. You can check that this matrix is invertible by, for example, computing its nonzero determinant, or by noticing that after reordering its columns by the powers of the generator (1, 2, 4, 3) it becomes a Vandermonde matrix. However, this matrix, constructed from the multiplicative functions, is so special that we can easily find its inverse. Indeed, denoting it by A, you can check that A\cdot A^{*} = 4\, Id, where A^* is the conjugate transpose. This is not just luck, and it always happens with matrices constructed from homomorphisms from a finite abelian group to \mathbb{C}^\times. Such homomorphisms are called characters and are the basis of group representation theory.
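We can check this identity directly by building the four characters from the generator 2 (a sketch in plain Python; `dlog` is a discrete-log table, my own helper name):

```python
# Build the four characters mod 5 from the generator 2, assemble the
# matrix A (rows = characters, columns = d = 1, 2, 3, 4), and verify
# that A times its conjugate transpose is 4 times the identity.

# discrete log base 2 mod 5: dlog[d] = e with 2^e ≡ d (mod 5)
dlog = {pow(2, e, 5): e for e in range(4)}  # {1: 0, 2: 1, 4: 2, 3: 3}

A = [[z ** dlog[d] for d in (1, 2, 3, 4)]
     for z in (1, 1j, -1, -1j)]  # the four possible images of 2

for r in range(4):
    for c in range(4):
        dot = sum(A[r][k] * A[c][k].conjugate() for k in range(4))
        assert abs(dot - (4 if r == c else 0)) < 1e-9
print("A times its conjugate transpose is 4 times the identity")
```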

In general, the claim is that the matrix A = A(N) satisfies A\cdot A^* = \varphi(N)\cdot Id, where \varphi(N) is Euler’s totient function, which counts the number of 1\leq d\leq N which are coprime to N. This claim is not too complicated to prove, but I will leave it for another time (or you can try to prove it as an exercise).

Now that we have the inverse, and we know that this should be true for all N, the proof should be:

  1. Show that for all d the infinite sums \mathcal{N}_{d,N} are infinite, and diverge at the same speed (roughly like {\displaystyle \sim \frac{1}{N} \cdot \sum_n \frac{1}{n}}).
  2. Multiply by the matrix A to get \mathcal{N}_\alpha for all the multiplicative functions \alpha mod N.
    1. If \alpha came from the constant function (all invertibles were sent to 1), then \mathcal{N}_\alpha is simply the sum of all the \mathcal{N}_{d,N} with gcd(d,N)=1, and it is infinite.
    2. Otherwise, the sum \sum_0^{N-1} \alpha(n) is zero (left as an exercise – it is part of the fact that A\cdot A^* = \varphi(N)\cdot Id). Using a proof similar to the one we saw for series with alternating signs, show that \mathcal{N}_\alpha must converge to a finite number.
  3. Pull these two results to the prime numbers: (1) \mathcal{P}_{\alpha_1} = \infty (which we proved completely), and (2) \mathcal{P}_\alpha is finite for non-constant \alpha (for which we only gave an intuition why it should be true).
  4. Finally, multiply by the inverse \frac{1}{\varphi(N)}A^* and conclude that \mathcal{P}_{d,N} are more or less the same over the invertibles d and grow to infinity like {\displaystyle \frac{1}{\varphi(N)} \sum_{p\in P} \frac{1}{p}}.
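The vanishing full-period sums in step 2.2 can be checked numerically for any concrete N; here is a sketch for N = 7, with 3 as a generator mod 7 (a choice made just for this illustration; the characters are built from the sixth roots of unity):

```python
import cmath

# Step 2.2 above for N = 7 (generator 3): for every non-constant
# character α, the full-period sum α(0) + α(1) + ... + α(6) vanishes,
# while for the constant character it equals φ(7) = 6.

N, g = 7, 3
dlog = {pow(g, e, N): e for e in range(N - 1)}  # discrete log base g

for j in range(N - 1):  # j = 0 gives the constant character
    z = cmath.exp(2j * cmath.pi * j / (N - 1))  # image of the generator
    total = sum(z ** dlog[n] for n in range(1, N))  # α(0) = 0 anyway
    expected = N - 1 if j == 0 else 0
    assert abs(total - expected) < 1e-9
print("full-period sums vanish for all non-constant characters mod 7")
```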

Modulo the exact analysis that we are missing when moving from \mathcal{N}_\alpha to \mathcal{P}_\alpha, this is the full proof. While we added a whole new concept in this final proof, the move from characteristic functions to multiplicative functions, the main theme is the same as in Euler’s proof in the previous post. Characteristic functions, while easy to look at and to describe, don’t have much structure. In particular, they don’t relate to the main property of the prime numbers, namely that they are the multiplicative building blocks of the integers. Thus, we moved from the characteristic functions to the multiplicative functions (which have much more structure), “proved” our results there, and showed that we can pull them back to characteristic functions.

As I mentioned previously, this concept is the basis of group representation theory, which appears almost everywhere in mathematics and in many other sciences. In particular, the all too well known Fourier transform that every engineer knows by heart is basically this transformation between characteristic functions and multiplicative functions. However, this will be left to another time and another post.
