Number theory algorithms


Prime numbers: the Miller-Rabin algorithm and its improvements

Primality is tested by the function IsPrime that implements the Miller-Rabin algorithm (initial implementation and documentation supplied by Christian Obrecht). This algorithm is deterministic for "small" numbers and probabilistic for large numbers. In other words, it could sometimes flag a number as prime when in fact the number is composite; but the probability for this to happen can be made extremely small. The basic reference is M. O. Rabin, Probabilistic algorithm for testing primality, J. Number Theory 12 (1980), 128. We also implemented some improvements suggested by J. H. Davenport, Primality testing revisited, Proc. ISSAC 1992, p. 123.

The idea of the Miller-Rabin algorithm is to improve on the Fermat primality test. If n is prime, then for any x not divisible by n we have Gcd(n,x)=1. Then by Fermat's "little theorem", x^(n-1):=Mod(1,n). (This is really a simple statement; if n is prime, then the n-1 nonzero remainders modulo n, i.e. 1, 2, ..., n-1, form a cyclic multiplicative group.) Therefore we pick some "base" integer x and compute Mod(x^(n-1),n); this is a quick computation even if n is large. If this value is not equal to 1 for some base x, then n is definitely not prime. However, we cannot test every base x<n; instead we test only some x, so it may happen that we miss the right values of x that would expose the non-primality of n. So Fermat's test sometimes fails, i.e. says that n is prime when it is in fact not a prime. Also, there are infinitely many integers called "Carmichael numbers" which are not prime but pass the Fermat test for every base coprime to them.
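As an illustration, here is a minimal Python sketch of the plain Fermat test (not the Yacas code; the function name fermat_passes is made up here), together with the classic Carmichael example 561=3*11*17:

    def fermat_passes(n, bases=(2, 3, 5, 7)):
        """Return False if some base proves n composite by Fermat's little theorem,
        True if n passes the test for all given bases (n may still be composite)."""
        for x in bases:
            if pow(x, n - 1, n) != 1:
                return False
        return True

    print(fermat_passes(97))                      # True: 97 is prime
    print(fermat_passes(91))                      # False: 91 = 7*13 fails for base 2
    print(fermat_passes(561, bases=(2, 5, 7)))    # True, yet 561 = 3*11*17 is composite (Carmichael)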

The Miller-Rabin algorithm improves on this by using the property that for prime n there are no nontrivial square roots of unity in the ring of integers modulo n (this follows from Lagrange's theorem: a polynomial of degree 2 cannot have more than two roots modulo a prime). In other words, if x^2:=Mod(1,n) for some x, then x must be equal to 1 or -1 modulo n. (Note that n-1 is equal to -1 modulo n, so n-1 is a trivial square root of unity modulo n.) In fact, if n is prime, there are no divisors of zero at all, i.e. no numbers x and y, both nonzero modulo n, such that x*y:=Mod(0,n). If we find such x, y, then Gcd(x,n)>1 or Gcd(y,n)>1 and n is not prime.

We can check that n is odd before applying any primality test. (A quick test n^2:=Mod(1,24) guarantees that n is not divisible by 2 or 3.) Then we note that in Fermat's test the power n-1 is certainly a composite number because n-1 is even. So if we first find the largest power of 2 in n-1 and decompose n-1=2^r*q with q odd, then x^(n-1):=Mod(a^(2^r),n) where a:=Mod(x^q,n). (Here r>=1 since n is odd.) In other words, the number Mod(x^(n-1),n) is obtained by repeated squaring of the number a that we can easily find. We get a sequence of r successive squarings: a, a^2, a^4, ..., a^(2^r). The last element of this sequence must be 1 if n passes the Fermat test. (If it does not pass, n must be a composite number.) If n passed the Fermat test, the previous element of the sequence of squares is a square root of unity modulo n. We can check whether this square root is non-trivial (i.e. not equal to 1 or -1 modulo n). If it is non-trivial, then n definitely cannot be a prime. If it is trivial and equal to 1, we can check the preceding element, and so on. If an element is equal to -1, we cannot say anything, i.e. the test passes (n is "probably a prime").

This procedure can be summarized like this:

1. Decompose n-1=2^r*q with q odd.
2. Compute a:=Mod(x^q,n) for the chosen base x. If a is equal to 1 or -1 modulo n, the test passes.
3. Otherwise square a repeatedly, at most r-1 times. If -1 modulo n appears among the squares, the test passes; if it never appears, then n is composite (either the Fermat test fails, or a nontrivial square root of unity has been found).

A practical application of this procedure needs to select particular base numbers. It is advantageous (according to Pomerance et al., Math. Comp. 35 (1980), 1003) to choose prime numbers b as bases, because for a composite base b=p*q, if n is a strong pseudoprime for both p and q, then it is very probable that n is a strong pseudoprime also for b, so composite bases rarely give new information.

Here are some more formal definitions. An odd integer n is called strongly-probably-prime for base b if b^q:=Mod(1,n) or b^(q*2^i):=Mod(n-1,n) for some i such that 0<=i<r, where q and r are such that q is odd and n-1=q*2^r.
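This definition translates directly into a short Python sketch (again not the Yacas source; the name is_strong_probable_prime is invented here):

    def is_strong_probable_prime(n, b):
        """True if the odd number n > 2 is strongly-probably-prime for base b:
        b^q = 1 (mod n), or b^(q*2^i) = n-1 (mod n) for some 0 <= i < r,
        where n-1 = q*2^r with q odd."""
        q, r = n - 1, 0
        while q % 2 == 0:            # extract the largest power of 2 from n-1
            q //= 2
            r += 1
        a = pow(b, q, n)             # a = b^q mod n
        if a == 1 or a == n - 1:
            return True
        for _ in range(r - 1):       # square repeatedly, looking for -1 mod n
            a = (a * a) % n
            if a == n - 1:
                return True
        return False                 # Fermat fails or a nontrivial root of unity appears

    print(is_strong_probable_prime(2047, 2))   # True, although 2047 = 23*89 (a strong pseudoprime)
    print(is_strong_probable_prime(2047, 3))   # False: base 3 exposes 2047 as composite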

An additional check suggested by Davenport is activated if r>=2 (i.e. if n:=Mod(1,4), which is true for only half of all odd numbers). If i>=1 is found such that b^(q*2^i):=Mod(n-1,n), then b^(q*2^(i-1)) is a square root of -1 modulo n. If n is prime, there may be only two different square roots of -1. Therefore we should store the set of found square roots; if more than two distinct roots turn up, then we have found roots s1, s2 of -1 such that s1 is equal to neither s2 nor -s2 modulo n. Then s1^2-s2^2:=Mod(0,n) while neither s1-s2 nor s1+s2 is zero modulo n, so n is definitely composite and e.g. Gcd(s1+s2,n) is a nontrivial factor. This check costs very little computational effort but guards against some strong pseudoprimes.
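A sketch of this bookkeeping in Python (the helper name record_root_of_minus_one is made up; the caller is assumed to pass only genuine square roots of -1 modulo n collected during the strong tests):

    from math import gcd

    def record_root_of_minus_one(roots, s, n):
        """Add a square root s of -1 modulo n to the set `roots`.  Return a
        nontrivial factor of n if more than two distinct roots are known
        (impossible for prime n), otherwise None."""
        roots.add(s % n)
        if len(roots) > 2:
            r = sorted(roots)
            # some pair r[i], r[j] satisfies r[i] != +-r[j] (mod n), and for that
            # pair gcd(r[i] + r[j], n) is a nontrivial divisor of n
            for i in range(len(r)):
                for j in range(i + 1, len(r)):
                    d = gcd(r[i] + r[j], n)
                    if 1 < d < n:
                        return d
        return None

    roots = set()
    for s in (8, 57, 18):                  # three square roots of -1 modulo 65 = 5*13
        factor = record_root_of_minus_one(roots, s, 65)
    print(factor)                          # 13, a nontrivial factor of 65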

Yet another small improvement comes from the paper of Damgard and Landrock. They found that the strong primality test sometimes (rarely) passes on composite numbers n for more than 1/8 of all bases x<n if n is such that either 3*n+1 or 8*n+1 is a perfect square, or if n is a Carmichael number. Checking for Carmichael numbers is slow, but it is easy to show that if n is a large enough prime number, then neither 3*n+1, nor 8*n+1, nor any s*n+1 with small integer s can be a perfect square. (If s*n+1=r^2, then s*n=(r-1)*(r+1), so the prime n must divide r-1 or r+1; but then r>=n-1 and s*n>=(n-1)^2-1, which is impossible for small s and large n.) Testing for a perfect square is quick and does not slow down the routine very much. This is however not implemented in Yacas because it seems that perfect squares are rare enough for this improvement not to be very significant.
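Such a (not implemented) check could look roughly like this in Python; the names is_perfect_square and needs_extra_care are invented here:

    from math import isqrt

    def is_perfect_square(m):
        """True if m >= 0 is a perfect square."""
        r = isqrt(m)
        return r * r == m

    def needs_extra_care(n):
        """True if n falls into one of the classes singled out by Damgard and Landrock:
        3*n+1 or 8*n+1 is a perfect square.  (The Carmichael condition is not checked.)"""
        return is_perfect_square(3 * n + 1) or is_perfect_square(8 * n + 1)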

If an integer is not "strongly-probably-prime" for a given base b, then it is a composite number. However, the converse is false: "strongly-probably-prime" numbers can actually be composite. Composite strongly-probably-prime numbers for base b are called strong pseudoprimes for base b. There is a theorem that if n is composite, then among all numbers b such that 1<b<n, at most one fourth are such that n is a strong pseudoprime for base b.

For numbers less than B=341550071728321, exhaustive computations have shown that there are no strong pseudoprimes simultaneously for bases 2, 3, 5, 7, 11, 13 and 17. This leads to a very simple and quick way to check whether a number is prime, provided it is smaller than B. If n>=B, the Miller-Rabin method consists of checking whether n is strongly-probably-prime for k base numbers b. The base numbers are chosen to be consecutive "weak pseudoprimes" that are easy to generate (see below the function NextPseudoPrime).
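For example, reusing is_strong_probable_prime from the sketch above, a deterministic test for the small range might look like this (the bound and bases are those quoted in the text):

    SEVEN_BASES = (2, 3, 5, 7, 11, 13, 17)
    B = 341550071728321

    def is_prime_below_B(n):
        """Deterministic for odd 17 < n < B: no composite in this range is a strong
        pseudoprime for all seven bases simultaneously."""
        return all(is_strong_probable_prime(n, b) for b in SEVEN_BASES)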

In the implemented routine RabinMiller, the number of bases k is chosen to make the probability of erroneously passing the test p<10^(-25). (Note that this is not the same as the probability to give an incorrect answer, because all numbers that do not pass the test are definitely composite.) The probability for the test to pass mistakenly on a given number is found as follows. Suppose the number of bases k is fixed. Then the probability for a given composite number to pass the test is less than p[f]=4^(-k). The probability for a given number n to be prime is roughly p[p]=1/Ln(n) and to be composite p[c]=1-1/Ln(n). Prime numbers never fail the test. Therefore, the probability for the test to pass is p[f]*p[c]+p[p] and the probability to pass erroneously is

p=(p[f]*p[c])/(p[f]*p[c]+p[p])<Ln(n)*4^(-k).

To make p<epsilon, it is enough to select k=1/Ln(4)*(Ln(n)-Ln(epsilon)).
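As a small worked example in Python (the helper name bases_needed is hypothetical, not part of Yacas):

    from math import log, ceil

    def bases_needed(n, eps=1e-25):
        """Number of bases k with Ln(n)*4^(-k) < eps, i.e. k = (Ln(n) - Ln(eps)) / Ln(4)."""
        return ceil((log(n) - log(eps)) / log(4))

    print(bases_needed(10 ** 100))   # 208 bases for a 100-digit number at eps = 10^(-25)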

Before calling the Miller-Rabin routine, the function IsPrime performs two quick checks: first, for n>=4 it checks that n^2:=Mod(1,24) (all primes larger than 3 must satisfy this); second, for n>257, it checks that n does not contain small prime factors p<=257. This is checked by evaluating the GCD of n with the precomputed product of all primes up to 257. The computation of the GCD is very quick and saves time in case a small prime factor is present.
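A Python sketch of these pre-checks (not the Yacas code; the name passes_quick_checks is invented here):

    from math import gcd, prod

    # primes up to 257 and their product (precomputed once, as in the real routine)
    _SMALL_PRIMES = [p for p in range(2, 258)
                     if all(p % d != 0 for d in range(2, int(p ** 0.5) + 1))]
    _SMALL_PRIME_PRODUCT = prod(_SMALL_PRIMES)

    def passes_quick_checks(n):
        """Cheap pre-checks in the spirit of IsPrime: reject multiples of 2 and 3
        via n^2 mod 24, then reject numbers sharing a factor with the primes <= 257."""
        if n >= 4 and (n * n) % 24 != 1:
            return False                  # n is divisible by 2 or by 3
        if n > 257 and gcd(n, _SMALL_PRIME_PRODUCT) != 1:
            return False                  # n has a prime factor <= 257
        return True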

There is also a function NextPrime(n) that returns the smallest prime number larger than n. This function uses the sequence 5,7,11,13,... generated by the function NextPseudoPrime, which contains all numbers not divisible by 2 or 3 (but possibly divisible by 5, 7, ...). NextPseudoPrime is very fast because it does not perform a full primality test.
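A sketch of the candidate generator in Python (with an invented name; NextPrime would then apply the full primality test to each candidate in turn):

    def next_pseudo_prime(n):
        """Smallest integer above n that is not divisible by 2 or 3
        (in the spirit of NextPseudoPrime: no full primality test is performed)."""
        k = n + 1
        while k % 2 == 0 or k % 3 == 0:
            k += 1
        return k

    # the candidate sequence: 5, 7, 11, 13, 17, 19, 23, 25, ...
    seq, k = [], 3
    for _ in range(8):
        k = next_pseudo_prime(k)
        seq.append(k)
    print(seq)    # [5, 7, 11, 13, 17, 19, 23, 25] -- note that 25 is a candidate but not a prime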


Factorization of integers

Factorization of integers is implemented by functions Factor and Factors. Both functions try to find all prime factors of a given integer n. (Before doing this, the primality checking algorithm is used to detect whether n is a prime number.) Factorization consists of repeatedly finding a factor, i.e. an integer f such that Mod(n,f)=0, and dividing n by f.

First we determine whether the number n contains "small" prime factors p<=257. A quick test is to find the GCD of n and the product of all primes up to 257: if the GCD is greater than 1, then n has at least one small prime factor. (The product of primes is precomputed.) If this is the case, the trial division algorithm is used: n is divided by all prime numbers p<=257 until a factor is found. NextPseudoPrime is used to generate the sequence of candidate divisors p.
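A Python sketch of this step (not the Yacas code; for simplicity it divides by the small primes themselves rather than by the NextPseudoPrime candidate sequence):

    from math import gcd, prod

    SMALL_PRIMES = [p for p in range(2, 258)
                    if all(p % d != 0 for d in range(2, int(p ** 0.5) + 1))]
    SMALL_PRIME_PRODUCT = prod(SMALL_PRIMES)

    def divide_out_small_factors(n):
        """Strip all prime factors p <= 257 from n by trial division; a single
        cheap GCD first tells us whether any such factor is present at all.
        Returns (list of small factors, remaining cofactor)."""
        factors = []
        if gcd(n, SMALL_PRIME_PRODUCT) > 1:
            for p in SMALL_PRIMES:
                while n % p == 0:
                    factors.append(p)
                    n //= p
        return factors, n

    print(divide_out_small_factors(2 ** 4 * 3 * 263))   # ([2, 2, 2, 2, 3], 263); 263 is prime, > 257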

After separating small prime factors, we test whether the number n is an integer power of a prime number, i.e. whether n=p^s for some prime number p and an integer s>=1. This is tested by the following algorithm. We already know that n is not prime and that n does not contain any small prime factors up to 257. Therefore if n=p^s, then p>257 and 2<=s<s[0]=Ln(n)/Ln(257). In other words, we only need to look for powers not greater than s[0]. This number can be approximated by the "integer logarithm" of n in base 257 (routine IntLog(n,257)).

Now we need to check whether n is of the form p^s for s=2, 3, ..., s[0]. Note that if for example n=p^24 for some p, then the square root of n will already be an integer, n^(1/2)=p^12. Therefore it is enough to test whether n^(1/s) is an integer for all prime values of s up to s[0]; then we will definitely discover whether n is a power of some other integer. The testing is performed using the integer n-th root function IntNthRoot which quickly computes the integer part of the n-th root of an integer number. If we discover that n has an integer root p of order s, we have to check whether p itself is a prime power (we use the same algorithm recursively). The number n is a prime power if and only if p is itself a prime power. If we find no integer roots of prime orders s<=s[0], then n is not a prime power.
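A Python sketch of this prime-power detection (the helper names int_nth_root and prime_power_root are made up here; is_prime stands for any full primality test such as the one sketched above, and max_power plays the role of IntLog(n,257)):

    from math import isqrt

    def int_nth_root(n, s):
        """Integer part of the s-th root of n (binary search, exact for big integers)."""
        if s == 2:
            return isqrt(n)
        lo, hi = 1, 1 << (n.bit_length() // s + 1)
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if mid ** s <= n:
                lo = mid
            else:
                hi = mid - 1
        return lo

    def prime_power_root(n, max_power, is_prime):
        """If n = p^s for some prime p and s >= 2, return (p, s); otherwise None."""
        for s in range(2, max_power + 1):
            # it suffices to try prime exponents s, but testing all s is also correct
            p = int_nth_root(n, s)
            if p ** s == n:
                inner = prime_power_root(p, max_power, is_prime)   # recurse on the root
                if inner is not None:
                    q, t = inner
                    return q, s * t
                return (p, s) if is_prime(p) else None
        return None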

If the number n is not a prime power, the Pollard "rho" algorithm is applied (J. Pollard, Monte Carlo methods for index computation mod p, Mathematics of Computation, vol. 32, pp. 918-924, 1978). The Pollard "rho" algorithm takes an irreducible polynomial, e.g. p(x)=x^2+1 and builds a sequence of integers x[k+1]:=Mod(p(x[k]),n), starting from x[0]=2. For each k, the value x[2*k]-x[k] is attempted as possibly containing a common factor with n. The GCD of x[2*k]-x[k] with n is computed, and if Gcd(x[2*k]-x[k],n)>1, then that GCD value divides n.

The idea behind the "rho" algorithm is to generate an effectively random sequence of trial numbers t[k] that may have a common factor with n. The efficiency of this algorithm is determined by the size of the smallest factor p of n. Suppose p is the smallest prime factor of n and suppose we generate a random sequence of integers t[k] such that 1<=t[k]<n. It is clear that, on the average, a fraction 1/p of these integers will be divisible by p. Therefore (if t[k] are truly random) we should need on the average p tries until we find t[k] which is accidentally divisible by p. In practice, of course, we do not use a truly random sequence and the number of tries before we find a factor p may be significantly different from p. The quadratic polynomial seems to help reduce the number of tries in most cases.

But the Pollard "rho" algorithm may actually enter an infinite loop when the sequence x[k] repeats itself without giving any factors of n. For example, the unmodified "rho" algorithm starting from x[0]=2 loops on the number 703. The loop is detected by comparing x[2*k] and x[k]. When these two quantities become equal to each other for the first time, the loop may not yet have occurred so the value of GCD is set to 1 and the sequence is continued. But when the equality of x[2*k] and x[k] occurs many times, it indicates that the algorithm has entered a loop. A solution is to randomly choose a different starting number x[0] when a loop occurs and try factoring again, and keep trying new random starting numbers between 1 and n until a non-looping sequence is found. The current implementation stops after 100 restart attempts and prints an error message, "failed to factorize number".

A better (and faster) integer factoring algorithm is needed.

Modern factoring algorithms are all probabilistic (i.e. they do not guarantee a particular finishing time) and fall into three categories:


Number Theoretic Functions

A number theoretical function is any function that takes integer arguments, produces integer values, and is of interest to number theory.

Divisors currently returns the number of divisors of an integer, while DivisorsSum returns the sum of these divisors. The current algorithms need to factor the number. The following theorem is used:

Let p[1]^k[1]*...*p[r]^k[r] be the prime factorization of n, where r is the number of distinct prime factors and k[i] is the multiplicity of the i-th factor. Then:

Divisors(n)=(k[1]+1)*...*(k[r]+1)

DivisorsSum(n)=(p[1]^(k[1]+1)-1)/(p[1]-1)*...*(p[r]^(k[r]+1)-1)/(p[r]-1)

The functions ProperDivisors and ProperDivisorsSum do the same, except that they exclude the number n itself from the divisors. They are given by:

ProperDivisors(n)=Divisors(n)-1

ProperDivisorsSum(n)=DivisorsSum(n)-n .
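A direct Python transcription of these formulas (a sketch, not the Yacas code; the factorization is assumed to be given as a dictionary {p: k} and the function names are invented):

    def divisors_count(factorization):
        """Divisors(n) from the prime factorization {p: k}: product of (k+1)."""
        result = 1
        for k in factorization.values():
            result *= k + 1
        return result

    def divisors_sum(factorization):
        """DivisorsSum(n): product of (p^(k+1) - 1)/(p - 1) over all prime factors."""
        result = 1
        for p, k in factorization.items():
            result *= (p ** (k + 1) - 1) // (p - 1)
        return result

    # example: 12 = 2^2 * 3 has 6 divisors (1,2,3,4,6,12) whose sum is 28
    f12 = {2: 2, 3: 1}
    print(divisors_count(f12), divisors_sum(f12))              # 6 28
    print(divisors_count(f12) - 1, divisors_sum(f12) - 12)     # proper divisors: 5 16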

The next number theoretic function is Moebius, which is defined as follows: Moebius(n)=(-1)^r if no prime factors of n are repeated (where r is the number of distinct prime factors), Moebius(n)=0 if some factors are repeated, and Moebius(n)=1 if n=1.

This again requires factoring the number completely and investigating the properties of its factors. From the definition it can be seen that if n is prime, then Moebius(n)= -1. The predicate IsSquareFree(n) then reduces to Moebius(n)!=0, which means that no factors are repeated.
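A Python sketch of this definition (again working from a factorization given as a dictionary {p: k}; the names are invented here):

    def moebius(factorization):
        """Moebius(n) from the prime factorization {p: k}:
        0 if any factor is repeated, otherwise (-1)^r with r the number of factors."""
        if any(k > 1 for k in factorization.values()):
            return 0
        return (-1) ** len(factorization)

    def is_square_free(factorization):
        """IsSquareFree(n) reduces to Moebius(n) != 0."""
        return moebius(factorization) != 0

    print(moebius({}))              # n = 1: Moebius(1) = 1
    print(moebius({7: 1}))          # n = 7 (prime): -1
    print(moebius({2: 1, 3: 1}))    # n = 6: 1, two distinct factors
    print(moebius({2: 2, 3: 1}))    # n = 12: 0, the factor 2 is repeated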


Integer Partitions

A partition of an integer n is a way of writing n as a sum of positive integers, where the order of the terms is not important. For example, there are 3 ways to write the number 3 in this way: 3=1+1+1, 3=1+2, 3=3. The function PartitionsP counts the number of such partitions.

The algorithm used to compute this function is based on the Rademacher-Hardy-Ramanujan theorem. The number of partitions P(n) is equal to an infinite sum:

P(n)=1/(Pi*Sqrt(2))*Sum(k,1,Infinity,Sqrt(k)*A(k,n)*S(k,n)),

where the functions A and S are defined as follows:

S(k,n):=Deriv(n)Sinh(Pi/k*Sqrt(2/3*(n-1/24)))/Sqrt(n-1/24);

A(k,n):=Sum(l,1,k,delta(Gcd(l,k),1)*Exp(-2*Pi*I*(l*n)/k+Pi*I*B(k,l))),

where delta(x,y) is the Kronecker delta function (so that the summation goes only over integers l which are mutually prime with k) and B is defined by

B(k,l):=Sum(j,1,k-1,j/k*(l*j/k-Floor(l*j/k)-1/2)).

The first term of the series gives, at large n, the Hardy-Ramanujan asymptotic estimate,

P(n)<>P0(n):=1/(4*n*Sqrt(3))*Exp(Pi*Sqrt((2*n)/3)).
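For illustration, here is the leading estimate in Python (a sketch with an invented name); the exact value P(100)=190569292, so the first term alone is already good to a few percent:

    from math import pi, sqrt, exp

    def partitions_leading_term(n):
        """Hardy-Ramanujan estimate P0(n) = Exp(Pi*Sqrt(2*n/3)) / (4*n*Sqrt(3))."""
        return exp(pi * sqrt(2 * n / 3)) / (4 * n * sqrt(3))

    print(partitions_leading_term(100))   # roughly 1.99e8, vs. the exact P(100) = 190569292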

The absolute value of each term decays quickly, so after O(Sqrt(n)) terms the series gives an answer that is very close to the integer result. There exist estimates of the error of this series, but they are complicated. The series is sufficiently well-behaved and it is easier to determine the truncation point heuristically. Each term of the series is either 0 (when all terms in A(k,n) happen to cancel) or has a magnitude which is not very much larger than the magnitude of the previous nonzero term. (The series is not actually monotonic.) In the current implementation, the series is truncated when Abs(A(k,n)*S(k,n)*Sqrt(k)) becomes smaller than 0.1 for the first time; in any case, the maximum number of calculated terms is 5+Sqrt(n)/2. One can show that asymptotically for large n, the number of terms is less than mu/Ln(mu), where mu:=Pi*Sqrt((2*n)/3).

The floating-point precision necessary to obtain the integer result must be at least the number of digits in the first term P0(n), i.e.

Prec>(Pi*Sqrt(2/3*n)-Ln(4*n*Sqrt(3)))/Ln(10).

However, Yacas currently uses the fixed-point precision model. Therefore, the current implementation divides the series by P0(n) and computes all terms to Prec digits.

The Rademacher-Hardy-Ramanujan algorithm requires O((n/Ln(n))^(3/2)) operations, of which O(n/Ln(n)) are long multiplications at precision Prec<>O(Sqrt(n)) digits.

A simpler algorithm involves a recurrence relation

P[n]=Sum(k,1,n,(-1)^(k+1)*(P[n-k*(3*k-1)/2]+P[n-k*(3*k+1)/2])).

The sum can be written out as

P(n-1)+P(n-2)-P(n-5)-P(n-7)+...,

where 1, 2, 5, 7, ... is the "generalized pentagonal sequence" generated by the pairs k*(3*k-1)/2, k*(3*k+1)/2 for k=1, 2, ... The recurrence starts from P(0)=1, P(1)=1. (This is implemented as PartitionsP'recur.)

The sum is actually not over all k up to n but is truncated when the pentagonal sequence grows above n. Therefore, it contains only O(Sqrt(n)) terms. However, computing P(n) using the recurrence relation requires computing and storing P(k) for all 1<=k<=n. No long multiplications are necessary, but the number of long additions of numbers with Prec<>O(Sqrt(n)) digits is O(n^(3/2)). This is asymptotically slower than the Rademacher-Hardy-Ramanujan algorithm if a fast multiplication is used. With internal Yacas math, the recurrence relation is faster for n<300 or so, and for larger n the Rademacher-Hardy-Ramanujan algorithm is faster.
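A Python sketch of this recurrence, including the truncation just described (not the Yacas PartitionsP'recur code itself):

    def partitions_recur(n):
        """PartitionsP via the pentagonal-number recurrence:
        P(m) = Sum_k (-1)^(k+1) * ( P(m - k*(3k-1)/2) + P(m - k*(3k+1)/2) )."""
        P = [1] + [0] * n                      # P[0] = 1
        for m in range(1, n + 1):
            total, k = 0, 1
            while True:
                g1 = k * (3 * k - 1) // 2      # generalized pentagonal numbers
                g2 = k * (3 * k + 1) // 2
                if g1 > m:                     # truncate once the sequence exceeds m
                    break
                sign = -1 if k % 2 == 0 else 1
                total += sign * P[m - g1]
                if g2 <= m:
                    total += sign * P[m - g2]
                k += 1
            P[m] = total
        return P[n]

    print([partitions_recur(i) for i in range(10)])   # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30]
    print(partitions_recur(100))                      # 190569292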