On matrices associated to prime factorization of odd integers

In this paper we introduce in section 5 integral matrices M(n) associated to any factorization of an odd integer n into r distinct odd primes. The matrices appear in several versions according to a parameter ρ ∈ [0, 1]; they have size 2^r × 2^r, and their rank satisfies, e.g. for ρ = 1/2, the inequalities of theorem 4: r + 1 ≤ rank(M(n)) ≤ 2^{r−1} + 1, which are obtained using theorem 1, discussed separately in the first few sections. The cases ρ = 0, 1, 1/2 are analyzed in some detail, and various counterexamples for ρ ≠ 0, 1, 1/2 are included. There are several main results: theorem 5 is a duality between the cases ρ = 0 and ρ = 1, and theorem 6 is a periodicity theorem. Perhaps the most important result is theorem 8 (valid for ρ = 1/2 only) on the existence of odd squarefree integers n with r odd prime factors such that rank(M(n)) = r + 1 attains the lower bound shown previously.


Determinantal identities for multipliers of square roots of unity
We assume that n is an odd integer with precisely r = ω(n) distinct prime divisors.
Let us denote these prime divisors by p_1, p_2, ..., p_r. Hence we may assume that the integer n is representable in the form

n = p_1^{e_1} p_2^{e_2} · · · p_r^{e_r},    (1)

where the e_j > 0 are positive integer exponents. Then there are precisely r solutions c_i, 1 ≤ i ≤ r, of the congruence x^2 ≡ 1 mod n which have the following properties: (1) for all i = 1, 2, ..., r we have 1 ≤ c_i ≤ n − 1; (2) for all i, j = 1, 2, ..., r there are congruences

c_i ≡ ε_{ji} mod p_j^{e_j}, where ε_{ji} = −1 for j = i and ε_{ji} = +1 for j ≠ i.    (2)

We will refer to this system of r square roots as the fundamental system (of square roots of unity) mod n. For all values i, j = 1, 2, ..., r we may then define a matrix (μ_{ji}) of multipliers by the equations

c_i = ε_{ji} + p_j^{e_j} μ_{ji}.    (3)

We will refer to the positive integers μ_{ji} as the factor multipliers of the odd integer n.
The fundamental square roots have an integer sum satisfying

c_1 + c_2 + · · · + c_r ≡ r − 2 mod n.    (4)

This follows from the fact that for each modulus p_j^{e_j} there is only one index i for which c_i yields −1 in position j, while all the other c_i yield +1; hence the sum of the r quantities is congruent to (r − 1) − 1 = r − 2.
Hence we may define an integer γ_1, to be called the sum multiplier, by the equation

c_1 + c_2 + · · · + c_r = r − 2 + γ_1 n.    (5)

Note that (4) and (5) together imply that γ_1 > 0 holds, as each of the r integer quantities c_i satisfies c_i ≥ 2.
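These definitions are easy to check numerically. The following is a minimal sketch (the helper names crt, fundamental_system and sum_multiplier are ours, not from the paper), computing the fundamental system and γ_1 for n = 15:

```python
from math import prod

def crt(residues, moduli):
    """Chinese remainder theorem for pairwise coprime moduli."""
    n = prod(moduli)
    x = 0
    for r_i, m_i in zip(residues, moduli):
        q = n // m_i
        x += r_i * q * pow(q, -1, m_i)   # pow(q, -1, m) needs Python >= 3.8
    return x % n

def fundamental_system(prime_powers):
    """The r roots c_i with c_i = -1 mod p_i^{e_i} and +1 mod p_j^{e_j}, j != i."""
    r = len(prime_powers)
    return [crt([-1 if j == i else 1 for j in range(r)], prime_powers)
            for i in range(r)]

def sum_multiplier(prime_powers):
    """gamma_1 defined by c_1 + ... + c_r = r - 2 + gamma_1 * n, cf. (5)."""
    n = prod(prime_powers)
    cs = fundamental_system(prime_powers)
    assert all(c * c % n == 1 for c in cs)
    return (sum(cs) - (len(prime_powers) - 2)) // n

print(fundamental_system([3, 5]), sum_multiplier([3, 5]))   # [11, 4] 1
```

For n = 15 the fundamental system is (11, 4), with sum 15 = 0 + 1 · 15, i.e. γ_1 = 1.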
We follow the custom of denoting the number of distinct odd prime factors of an integer by ω(n) = r.
Theorem 1 Let n be an odd integer with ω(n) = r. For the determinant of the multiplier matrix M = (μ_{ji}) of the fundamental system of square roots of unity mod n we obtain:

det(M) = 2^{r−1} γ_1.    (6)

In particular det(M) ≠ 0.

Let

m_i = ((c_i − ε_{1i})/p_1^{e_1}, ..., (c_i − ε_{ri})/p_r^{e_r})^t,    (7)

so that the i-th column vector of M is given as m_i.
We first consider the matrix M′ that is obtained from M = [m_1, m_2, ..., m_r] by multiplying its j-th row by the constant p_j^{e_j}, so that we obtain

M′ = (c_i − ε_{ji})_{j,i},    (8)

and subsequently we get for the determinant

det(M′) = p_1^{e_1} p_2^{e_2} · · · p_r^{e_r} · det(M) = n · det(M).    (9)
We let I_r be the r × r identity matrix, and J_r the r × r all-one matrix. We remark that with this notation the matrix (ε_{ji}) of (2) is just (ε_{ji}) = J_r − 2I_r.
We now make use of the identity c_i = ε_{ji} + p_j^{e_j} μ_{ji} in (3). This enables us to rewrite the matrix in question as

M′ = C − J_r + 2I_r,    (10)

where C is the rank-one matrix that has the column vectors

C = [c_1 u, c_2 u, ..., c_r u]    (11)

with the all-one column vector u = (1, 1, ..., 1)^t ∈ R^r.
In particular the rank-one argument applies to the matrix A = J_r − C, where C is the matrix consisting of the r constant column vectors (c_i, ..., c_i)^t for i = 1, 2, ..., r defined in (11). Note that A has constant non-zero columns (1 − c_i)u, and hence it is a matrix of rank one. Also note that its trace is equal to

tr(A) = r − (c_1 + · · · + c_r) = r − (r − 2 + γ_1 n) = 2 − γ_1 n.    (12)

The characteristic matrix xI_r − A is the usual polynomial matrix that defines the characteristic polynomial, denoted by det(xI_r − A) = χ_A(x), and for the proof of (6) we use the fact that any r × r rank-one matrix A has a characteristic polynomial of the form

χ_A(x) = x^{r−1}(x − tr(A)).    (13)

It is clear that for x = 2 the characteristic matrix xI_r − A = xI_r − J_r + C specializes to become the matrix M′ = C − J_r + 2I_r of (10). Thus by (13), (12) its determinant is equal to

det(M′) = χ_A(2) = 2^{r−1}(2 − (2 − γ_1 n)) = 2^{r−1} γ_1 n.    (14)

Using (9) then completes the proof of theorem 1.
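The determinant identity (6) can be verified directly for small cases. A sketch with our own helper names, using naive Laplace expansion for the determinant:

```python
from math import prod

def crt(residues, moduli):
    """Chinese remainder theorem for pairwise coprime moduli."""
    n = prod(moduli)
    return sum(r * (n // m) * pow(n // m, -1, m)
               for r, m in zip(residues, moduli)) % n

def det(a):
    """Determinant by Laplace expansion; adequate for the small r used here."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j]
               * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

def check_theorem_1(pp):
    """Return (det(M), 2^{r-1} * gamma_1) for the prime-power list pp."""
    r, n = len(pp), prod(pp)
    cs = [crt([-1 if j == i else 1 for j in range(r)], pp) for i in range(r)]
    eps = lambda j, i: -1 if i == j else 1
    M = [[(cs[i] - eps(j, i)) // pp[j] for i in range(r)] for j in range(r)]
    gamma1 = (sum(cs) - (r - 2)) // n
    return det(M), 2 ** (r - 1) * gamma1

for pp in ([3, 5], [3, 5, 7], [9, 5, 7]):
    d, rhs = check_theorem_1(pp)
    assert d == rhs
```

For n = 15 this gives det(M) = 2 = 2^1 · 1, and for n = 105 it gives det(M) = 8 = 2^2 · 2.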
Now let n = n_1 n_2 · · · n_r be any factorization of n into r pairwise coprime odd factors n_j > 1. By the theorem on the solution of simultaneous congruences (Chinese remaindering) there exists an integral solution c_i of the congruences c_i ≡ ε_{ji} mod n_j for 1 ≤ i, j ≤ r.
These c_i have the property that c_i^2 ≡ 1 mod n, and hence they form a system of solutions of the congruence x^2 ≡ 1 mod n.
Here the ε_{ji} are given as in (2). We may again define the corresponding multipliers by the equations

c_i = ε_{ji} + n_j μ_{ji}.

In the same way as before we see that

c_1 + c_2 + · · · + c_r ≡ r − 2 mod n.

This follows from the fact that for each modulus n_j there is only one index i for which c_i yields −1 in position j, while all the other c_i yield +1; hence the sum of the r quantities is congruent to (r − 1) − 1 = r − 2.
Hence we may define an integer γ_1, again called the sum multiplier, by the equation

c_1 + c_2 + · · · + c_r = r − 2 + γ_1 n.

Theorem 2 For the determinant of the multiplier matrix M = (μ_{ji}) we obtain det(M) = 2^{r−1} γ_1. The proof is a direct adaptation of the previous proof of theorem 1.

The Smith normal form
Again we assume that n is an odd integer with ω(n) = r.
Theorem 3 (i) Assume that at least one of the c_i in the given system of fundamental square roots of unity mod n is even. Then the r × r matrix of multipliers M of the fundamental system of square roots of unity mod n defined in (7) has a Smith normal form given as

diag(1, 2, ..., 2, 2γ_1).

(ii) Assume that all of the c_i in the given system of fundamental square roots of unity mod n are odd. Then γ_1 is even, and the Smith normal form of the r × r matrix M is given as

diag(2, 2, ..., 2, γ_1).

For the proof we first assume case (i), i.e. we consider the case that not all the numbers c_i in the given fundamental system of square roots mod n are odd. We employ the standard localization techniques over the rings Z_(p) of p-adic integers. Only the case of the even prime p = 2 is slightly delicate.
As n and all the factors n_j are odd (hence units in Z_(2)), we see that we may multiply the matrix M row-wise by the n_j in order to get the matrix C = (c_i − ε_{ji}).
Then we use elementary row operations to reduce C: subtracting the first row from each of the others leaves the first row (c_i − ε_{1i}), with top left entry c_1 + 1, while for j ≥ 2 the j-th row has the entry −2 in column 1, the entry 2 in column j, and zeroes elsewhere. Under the given assumption there is at least one index i such that c_i − ε_{1i} is odd, and hence (assuming without loss of generality i = 1) this reduced matrix can be transformed over Z_(2) to contain at least one diagonal element 1 in its Smith normal form. The presence of the remaining entries 2 then implies the corresponding version of the statement of the theorem over the 2-adic integers. Then we apply standard localization over the p-adic integers for the odd primes p, keeping track of the powers of p in the factors n_j for the various odd primes involved, to show that there is at most one non-trivial invariant factor for any odd prime p.
The case (ii) is proved using the same method, but over Z_(2) we cannot get any entry 1. Note that in this case, from the assumption that all c_i are odd and n is odd, it follows from (5) that γ_1 is even, so that the diagonal form in (ii) above really is a Smith normal form.
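The Smith normal forms of theorem 3 can be checked for small cases via the standard determinantal-divisor description of the invariant factors (the k-th determinantal divisor d_k is the gcd of all k × k minors, and the invariant factors are d_k/d_{k−1}). This brute-force method is our own and only practical for small r:

```python
from math import prod, gcd
from functools import reduce
from itertools import combinations

def crt(residues, moduli):
    n = prod(moduli)
    return sum(r * (n // m) * pow(n // m, -1, m)
               for r, m in zip(residues, moduli)) % n

def det(a):
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j]
               * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

def invariant_factors(M):
    """SNF diagonal via determinantal divisors d_k = gcd of all k x k minors."""
    r = len(M)
    d = [1]
    for k in range(1, r + 1):
        minors = (det([[M[i][j] for j in cols] for i in rows])
                  for rows in combinations(range(r), k)
                  for cols in combinations(range(r), k))
        d.append(reduce(gcd, (abs(x) for x in minors)))
    return [d[k] // d[k - 1] for k in range(1, r + 1)]

def fundamental_multiplier_matrix(pp):
    r = len(pp)
    cs = [crt([-1 if j == i else 1 for j in range(r)], pp) for i in range(r)]
    return [[(cs[i] - (-1 if i == j else 1)) // pp[j] for i in range(r)]
            for j in range(r)]

print(invariant_factors(fundamental_multiplier_matrix([3, 5])))     # [1, 2]
print(invariant_factors(fundamental_multiplier_matrix([3, 5, 7])))  # [1, 2, 4]
```

Both examples fall under case (i) (some c_i is even), and the outputs agree with diag(1, 2, ..., 2, 2γ_1) for γ_1 = 1 and γ_1 = 2 respectively.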
We remark that the case (ii) is rare, but it does occur: an example is the integer n = 180285 = 3 · 5 · 7 · 17 · 101 with r = 5 prime factors.

Some general considerations on division with remainder
Assume that two positive integers n, d are given, with n > d and let ρ ∈ [0, 1] be any real constant.
We may consider division of n by d with remainder in the form

n = qd + s    (20)

with an integer quotient q ∈ Z and with the remainder s satisfying the inequalities

−ρd ≤ s ≤ (1 − ρ)d.    (21)

In the present paper we stick exclusively to the case of odd integers, and hence, if we restrict ρ to a certain class of rationals, then we may avoid the case of equality in (21) as follows.
First, in the case of a divisor without remainder, we always set q = n/d, s = 0. In particular if d = 1 we always have q = n, s = 0. This remark mainly takes care of the special cases ρ = 0, ρ = 1.
In the case of a nontrivial divisor with a non-zero remainder we may then restrict to the strict inequalities

−ρd < s < (1 − ρ)d,    (22)

if ρ is chosen in a way that avoids the rationals s/d that occur during the division. One easy way to avoid rationals is to choose ρ irrational, which leads to a standard approach in the study of rational approximations of irrational numbers ρ. But as in the present paper we always restrict the integers n, d to be odd positive integers, another possible choice of ρ is the rational numbers whose denominator is a power of 2. In other words, in this paper we may consider the general form

ρ = e/2^t for any t ≥ 0, with e ∈ {0, 2^t} or e an odd integer with 0 < e < 2^t.    (23)

This contains the three major cases [3], which will be referred to as the classical cases throughout the following discussion: the Gauss (floor) case ρ = 0, with q = ⌊n/d⌋; the Hurwitz (symmetric) case ρ = 1/2, with q the integer nearest to n/d; and the Venkov (ceiling) case ρ = 1, with q = ⌈n/d⌉.

Example 1 Note that ⌊17/11⌋ = 1 and ⌈17/11⌉ = 2, while ⌊17/7⌋ = 2 and ⌈17/7⌉ = 3.

In any case, for the purposes of the present paper, we will define ρ to be admissible if 0 ≤ ρ ≤ 1 holds and ρ is either irrational or of the form (23).
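The ρ-division just described can be sketched as follows (the function name rho_division is ours). For admissible ρ and a non-exact division the quotient is q = ⌊n/d + ρ⌋, since this is the unique integer with s = n − qd in the open interval (−ρd, (1 − ρ)d):

```python
from fractions import Fraction
from math import floor

def rho_division(n, d, rho):
    """q, s with n = q*d + s and -rho*d < s < (1-rho)*d.

    Exact divisions return s = 0, as agreed in the text; rho may be an
    int, float or Fraction, and is assumed admissible.
    """
    if n % d == 0:
        return n // d, 0
    q = floor(Fraction(n, d) + Fraction(rho))
    return q, n - q * d

print(rho_division(17, 11, 0), rho_division(17, 11, 1))  # (1, 6) (2, -5)
print(rho_division(17, 7, Fraction(1, 2)))               # (2, 3)
```

The first two calls reproduce the floor and ceiling quotients of Example 1; the third shows the symmetric (Hurwitz) remainder 3, which indeed satisfies |3| < 7/2.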

Definition of the multiplier matrices
We now assume that an odd squarefree integer n with r distinct prime factors is given. Thus n = p_1 p_2 · · · p_r. Then there are precisely 2^r distinct solutions x = c of the congruence x^2 ≡ 1 mod n in the range 0 < c < n.
These solutions are to be called the square roots of unity modulo n.
They can be indexed by the sets α ⊂ {1, 2, ..., r} as follows. Let c = c_α in the range 0 < c_α < n be defined by the congruences

c_α ≡ −1 mod p_i for i ∈ α, c_α ≡ +1 mod p_i for i ∉ α.    (24)

On the other hand, for any β ⊂ {1, 2, ..., r} we may consider the 2^r divisors n_β = ∏_{i∈β} p_i of the integer n. Note that the n_β are odd positive integers. Assume that an admissible ρ, as in (23), is given. We then define the 2^r × 2^r multiplier matrix M(n) = (m_{β,α}) by the equations

c_α = m_{β,α} n_β + s_{β,α}, −ρ n_β < s_{β,α} < (1 − ρ) n_β    (25)

(where for ρ = 0, 1 the equality s_{β,α} = 0 is allowed).

Example 2 The case r = 1 is essentially trivial. Let p = p_1. We have c_∅ = 1, c_{1} = p − 1 and the two divisors 1, p. This gives the three answers, according to the size of ρ: the first row of M(p) is always (1, p − 1), while the second row is (0, 0) for ρ < 1/p, it is (0, 1) for 1/p < ρ < (p − 1)/p, and it is (1, 1) for ρ > (p − 1)/p. This shows that rank(M(p)) = 2 except in the first case ρ < 1/p, when rank(M(p)) = 1.
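The matrix M(n) of (25) can be generated as follows for squarefree n and admissible ρ; subsets are encoded as bitmasks, and the helper names are ours:

```python
from fractions import Fraction
from math import prod, floor

def crt(residues, moduli):
    n = prod(moduli)
    return sum(r * (n // m) * pow(n // m, -1, m)
               for r, m in zip(residues, moduli)) % n

def multiplier_matrix(primes, rho):
    """The 2^r x 2^r matrix M(n) = (m_{beta,alpha}) of (25), subsets as bitmasks."""
    r = len(primes)
    c = [crt([-1 if (a >> i) & 1 else 1 for i in range(r)], primes)
         for a in range(2 ** r)]
    nb = [prod(p for i, p in enumerate(primes) if (b >> i) & 1)
          for b in range(2 ** r)]
    def m(ca, d):
        if ca % d == 0:                       # exact division: q = c/d, s = 0
            return ca // d
        return floor(Fraction(ca, d) + Fraction(rho))
    return [[m(c[a], nb[b]) for a in range(2 ** r)] for b in range(2 ** r)]

M15 = multiplier_matrix([3, 5], Fraction(1, 2))
print(M15)
# [[1, 11, 4, 14], [0, 4, 1, 5], [0, 2, 1, 3], [0, 1, 0, 1]]
```

The printed matrix is M(15) in the Hurwitz case ρ = 1/2, with rows indexed by the divisors 1, 3, 5, 15 and columns by the square roots 1, 11, 4, 14.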
The case r = 2 with n = p_1 p_2 is somewhat more interesting, and its discussion is a starting point for all that follows. Clearly in this case we get a 4 × 4 matrix M(n).
It was shown in chapter 2 of [1] that for the classical cases we always have rank(M(p_1 p_2)) = 3. This result, and even the weaker inequality rank(M(p_1 p_2)) ≥ 3, can however not be extended to the case of general ρ.
Theorem 4 Let n = p_1 p_2 · · · p_r be an odd squarefree integer with r distinct prime factors, and let ρ = 1/2. Then the rank of the multiplier matrix M(n) satisfies the inequalities

r + 1 ≤ rank(M(n)) ≤ 2^{r−1} + 1.

We assume for the proof that the columns are in natural ordering. The proof of the lower bound comes essentially out of theorem 1. However, first we remark that the first row of the matrix M(n), i.e. the row indexed by the empty set corresponding to the divisor 1, just consists of the line of integers (c_α). We also note that all other rows have the first entry zero: since n is odd with c_∅ = 1, and the divisors satisfy n_β ≥ 3 for all nonempty index sets β ≠ ∅, the division (25) forces s_{β,∅} = 1 and thus m_{β,∅} = 0. Thus the first column of the matrix M(n) is just the column vector with a single 1 in the first position and with zeroes in all subsequent positions. Note that this implies that the first row is never linearly dependent on any combination of the other rows.
Next we recall that the rows indexed by the singleton sets, i.e. by the divisors d_1 = p_1, ..., d_r = p_r, contain an r × r submatrix with a non-vanishing determinant by theorem 1 (for ρ = 1/2 the remainders in these rows at the singleton columns are exactly the ε of (2), so this submatrix is the multiplier matrix of the fundamental system). Hence these r rows are also linearly independent, and together with the first row they form r + 1 linearly independent rows. This proves the lower bound r + 1 ≤ rank(M(n)).
We now prove the upper bound. For any column indexed by a set θ let η = {1, 2, ..., r}\θ be the index of the "complementary" column. Note that the equation

c_θ + c_η = n

holds, as it is easy to see that both c_θ and n − c_θ = c_η are solutions of the congruence x^2 ≡ 1 mod n which satisfy the corresponding congruences. We note that for any row index α the condition (24) implies

s_{α,θ} + s_{α,η} ≡ c_θ + c_η ≡ 0 mod p_i for all i ∈ α,

and hence s_{α,θ} + s_{α,η} ≡ 0 mod n_α. On the other hand |s_{α,θ}|, |s_{α,η}| < n_α/2, which implies |s_{α,θ} + s_{α,η}| < n_α by the triangle inequality. Together we get that s_{α,θ} + s_{α,η} = 0. Adding the two divisions (25) for the columns θ and η we then see that

(m_{α,θ} + m_{α,η}) n_α = c_θ + c_η = n.

Hence we get, denoting the complementary row index set of α by β = {1, 2, ..., r}\α,

m_{α,θ} + m_{α,η} = n/n_α = n_β.

Hence we have shown that entries in the same row which are in "complementary" position with respect to the columns always give the same sums. Consequently each column is determined by its complementary column together with the fixed vector (n_β)_α of pair sums, so the column space is spanned by 2^{r−1} columns and this one additional vector, and the upper inequality is also established.
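Both bounds of theorem 4, as well as the complementary-column identity used for the upper bound, can be confirmed numerically for small n. The construction of M(n) is as sketched before (helper names ours), and the rank is computed over the rationals:

```python
from fractions import Fraction
from math import prod, floor

def crt(residues, moduli):
    n = prod(moduli)
    return sum(r * (n // m) * pow(n // m, -1, m)
               for r, m in zip(residues, moduli)) % n

def multiplier_matrix(primes, rho):
    r = len(primes)
    c = [crt([-1 if (a >> i) & 1 else 1 for i in range(r)], primes)
         for a in range(2 ** r)]
    nb = [prod(p for i, p in enumerate(primes) if (b >> i) & 1)
          for b in range(2 ** r)]
    return [[c[a] // nb[b] if c[a] % nb[b] == 0
             else floor(Fraction(c[a], nb[b]) + Fraction(rho))
             for a in range(2 ** r)] for b in range(2 ** r)]

def rank(mat):
    """Row rank over Q by Gaussian elimination with exact fractions."""
    m = [[Fraction(x) for x in row] for row in mat]
    rk = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(rk, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(len(m)):
            if i != rk and m[i][col] != 0:
                f = m[i][col] / m[rk][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk += 1
    return rk

for primes in ([3, 5], [3, 5, 7]):
    r, n = len(primes), prod(primes)
    M = multiplier_matrix(primes, Fraction(1, 2))
    nb = [prod(p for i, p in enumerate(primes) if (b >> i) & 1)
          for b in range(2 ** r)]
    full = 2 ** r - 1
    # complementary columns sum to n / n_beta in every row
    assert all(M[b][a] + M[b][full ^ a] == n // nb[b]
               for b in range(2 ** r) for a in range(2 ** r))
    assert r + 1 <= rank(M) <= 2 ** (r - 1) + 1
```

For n = 15 both bounds coincide, forcing rank(M(15)) = 3.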

Rank equality for the Gauss and Venkov cases
In this section it is shown that for any factorization n = p_1 p_2 · · · p_r into at least two distinct odd primes the two multiplier matrices MG(n) = M(n) for the case ρ = 0 (the Gauss, i.e. floor case) and MV(n) = M(n) for the case ρ = 1 (the Venkov, i.e. ceiling case) have the same row spaces, and hence the same rank.
Theorem 5 Let n = p_1 p_2 · · · p_r be an odd integer with r > 1 distinct prime factors. Then the two multiplier matrices MG(n) and MV(n) have the same row space:

RowSpace(MG(n)) = RowSpace(MV(n)).    (38)

In particular, the ranks agree as well: rank(MG(n)) = rank(MV(n)).

Let us keep n fixed, and then denote, for a row index β as above, the corresponding rows in MG(n) and in MV(n) by RG(β) and by RV(β). Hence RG(β), RV(β) ∈ R^{2^r}. Trivially RG(∅) = RV(∅), as in the row indexed by ∅ there are no fractions appearing.
From the definitions it is easy to see that for the all-one vector u ∈ R^{2^r} we have the following equations for all non-empty index sets β ≠ ∅:

RV(β) = RG(β) + u.    (39)

This follows from the fact that for all β ≠ ∅, due to the congruence conditions on the c_α, none of the fractions c_α/n_β is integral, so the ceiling exceeds the floor by exactly 1. We also note that for the full index set β = {1, 2, ..., r} we get RG(β) = (0, 0, ..., 0) and RV(β) = (1, 1, ..., 1) = u. Hence to prove (38) we only have to show that

u ∈ RowSpace(MG(n)).    (40)

From r > 1 we see that there exist at least 2 distinct indices i ≠ j. Let us simplify the notation for the corresponding singleton rows to RG({i}), RG({j}) respectively. First we consider the following vector in the row space of MG(n):

V(i) = RG(∅) − p_i · RG({i}).    (41)

It is apparent from the congruence conditions that for any column index α there are only two possible values in V(i):

V(i)_α = c_α − p_i ⌊c_α/p_i⌋ = p_i − 1 if i ∈ α, and V(i)_α = 1 if i ∉ α.    (42)

Indeed, for i ∈ α we get c_α ≡ −1 mod p_i, so the remainder is p_i − 1, and similarly for the case i ∉ α. This shows the formula in (42).
Next we consider the two vectors in RowSpace(MG(n))

W(i) = RG({i}) − p_j · RG({i, j}), W(j) = RG({j}) − p_i · RG({i, j}).

They may each contain up to four distinct entries according to the distinction of cases for the column coordinates: i, j ∈ α; i ∈ α, j ∉ α; i ∉ α, j ∈ α; i, j ∉ α. More explicitly we find the following easy values: W(i)_α = 0 for i, j ∉ α, and W(i)_α = p_j − 1 for i, j ∈ α, with the analogous values for W(j). The core of the proof is to identify the remaining values W(i)_α, W(j)_α in terms of a familiar structure already discussed in chapter 2 of [1] in the Hurwitz case ρ = 1/2. Consider now the case i ∈ α, j ∉ α, and let c be any integer that satisfies the congruences

c ≡ −1 mod p_i, c ≡ +1 mod p_j.    (45)

For example we can have c = c_α. Then the expression ⌊c/p_i⌋ − p_j ⌊c/(p_i p_j)⌋ for c = c_α clearly is equal to the value under discussion. We use a trick to avoid direct calculation of this number. We first show that for any c as in (45) we have the shifting rule: replacing c by c + p_i p_j leaves the expression unchanged. This is readily verified. First of all, this argument shows that for all c_α with i ∈ α, j ∉ α the values of all the corresponding coordinates W(i)_α are equal; call this common value a_i. Secondly, we may choose any particular c satisfying (45) to compute this value; for this computation we choose the unique solution of (45) that satisfies 1 < c < p_i p_j − 1. The other case, for the vector W(j), gives the corresponding value a_j. Now in turn consider a column index γ with i ∉ γ, j ∈ γ. With a similar reasoning we get, for the unique solution d of the corresponding congruences with 1 < d < p_i p_j − 1, the two values b_i and b_j. Hence we are led to consider the matrix

WM = [[a_i, b_i], [a_j, b_j]]

as well as the closely related matrix of the Hurwitz, i.e. the symmetric, case. Now the latter matrix has already been analyzed in chapter 2 of [1], where its determinant was computed; assuming this determinant to be zero leads to an equation that contradicts the positivity of the entries in I_2 + WM.
We have now shown that det(WM) ≠ 0. Hence we may use the vectors W(i) and W(j) to construct the characteristic vector χ[i] of the set of column indices {α : i ∈ α}. In order to see this, let us order the coordinates α in such a way that those with i, j ∈ α come first, then those with i ∈ α, j ∉ α, then those with i ∉ α, j ∈ α, and finally those with i, j ∉ α. Then

det(WM) = a_i b_j − a_j b_i ≠ 0

holds, and the vectors W(i) and W(j) can be represented blockwise as

W(i) = (p_j − 1, a_i, b_i, 0), W(j) = (p_i − 1, a_j, b_j, 0),

where, via the complementarity (44) of the columns α and ᾱ, the easy values from (48) and (50) give

a_i + b_i = p_j − 1,

and similarly a_j + b_j = p_i − 1. Here each blockwise entry actually occurs 2^{r−2} times, once for every column α in the corresponding block. Since det(WM) ≠ 0 we may choose rationals x, y with x a_i + y a_j = 1 and x b_i + y b_j = 0; by the sum relations the first block entry of x W(i) + y W(j) is then x(p_j − 1) + y(p_i − 1) = 1 as well, so that

x W(i) + y W(j) = χ[i] ∈ RowSpace(MG(n)).

Now form the vector

u = V(i) − (p_i − 2) · χ[i],

which is valid by (42), since V(i) has the entry p_i − 1 = (p_i − 2) + 1 at the columns with i ∈ α and the entry 1 at the remaining columns. This shows that u ∈ RowSpace(MG(n)) and concludes the proof of (40). Theorem 5 has now been shown.
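Theorem 5 can be checked numerically by stacking MG(n) on top of MV(n): the row spaces are equal exactly when the stacked matrix has the same rank as each of the two matrices. A sketch with our own helper names:

```python
from fractions import Fraction
from math import prod

def crt(residues, moduli):
    n = prod(moduli)
    return sum(r * (n // m) * pow(n // m, -1, m)
               for r, m in zip(residues, moduli)) % n

def classical_matrix(primes, mode):
    """MG(n) (mode 'floor', rho = 0) or MV(n) (mode 'ceil', rho = 1)."""
    r = len(primes)
    c = [crt([-1 if (a >> i) & 1 else 1 for i in range(r)], primes)
         for a in range(2 ** r)]
    nb = [prod(p for i, p in enumerate(primes) if (b >> i) & 1)
          for b in range(2 ** r)]
    div = (lambda a, d: a // d) if mode == 'floor' else (lambda a, d: -(-a // d))
    return [[div(c[a], nb[b]) for a in range(2 ** r)] for b in range(2 ** r)]

def rank(mat):
    m = [[Fraction(x) for x in row] for row in mat]
    rk = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(rk, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(len(m)):
            if i != rk and m[i][col] != 0:
                f = m[i][col] / m[rk][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk += 1
    return rk

for primes in ([3, 5], [3, 5, 7]):
    MG = classical_matrix(primes, 'floor')
    MV = classical_matrix(primes, 'ceil')
    # equal ranks, and stacking adds no new rows: equal row spaces
    assert rank(MG) == rank(MV) == rank(MG + MV)
```

Note that the row-space comparison must be made over the rationals, as in the proof above; the coefficients x, y constructed there are in general not integers.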
We need to use the following consequence of the proof of theorem 5.

Corollary 1
The characteristic vector χ[r] of the set of column indices {α : r ∈ α} lies in the row space of the matrix MG(n): χ[r] ∈ RowSpace(MG(n)).

A periodicity result for the classical cases
Assume that ρ = 1/2 is fixed in this and the following three sections, if not stated otherwise. Assume also that m = p_1 p_2 · · · p_{r−1} is a product of r − 1 distinct odd prime numbers, and that p_r, p′_r are two further odd prime numbers, distinct from p_1, p_2, ..., p_{r−1} and from each other. Then we have:

Theorem 6 Assume that p′_r ≡ p_r mod 2m holds. Then (in the classical Hurwitz case ρ = 1/2) the row spaces of the multiplier matrices M(mp_r) and M(mp′_r) as in (25) are equal, and hence rank(M(mp_r)) = rank(M(mp′_r)).
Hence, considering m as fixed, we see that for p′_r and n′ = m · p′_r the rank of M(n′) depends only on the value of the rank of M(n) for one fixed prime number p_r with p′_r ≡ p_r mod 2m and n = m · p_r. Thus this value of rank M(n) reappears at new primes periodically mod 2m.
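Theorem 6 can be tested on a small instance. The sketch below (helper names ours) takes m = 15 and the primes 7 ≡ 37 mod 30, builds both Hurwitz multiplier matrices, and compares their ranks; for odd n_β the nearest-integer quotient is (2c + n_β) div 2n_β, so no fractions are needed:

```python
from fractions import Fraction
from math import prod

def crt(residues, moduli):
    n = prod(moduli)
    return sum(r * (n // m) * pow(n // m, -1, m)
               for r, m in zip(residues, moduli)) % n

def hurwitz_matrix(primes):
    """M(n) at rho = 1/2: nearest-integer quotients (n_beta odd, so no ties)."""
    r = len(primes)
    c = [crt([-1 if (a >> i) & 1 else 1 for i in range(r)], primes)
         for a in range(2 ** r)]
    nb = [prod(p for i, p in enumerate(primes) if (b >> i) & 1)
          for b in range(2 ** r)]
    return [[(2 * c[a] + nb[b]) // (2 * nb[b]) for a in range(2 ** r)]
            for b in range(2 ** r)]

def rank(mat):
    m = [[Fraction(x) for x in row] for row in mat]
    rk = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(rk, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(len(m)):
            if i != rk and m[i][col] != 0:
                f = m[i][col] / m[rk][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk += 1
    return rk

# m = 15, p_3 = 7, p'_3 = 37, and 37 ≡ 7 mod 2m = 30
assert rank(hurwitz_matrix([3, 5, 7])) == rank(hurwitz_matrix([3, 5, 37]))
```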
The proof of theorem 6 will be given in the next three sections. In the following section some parametric representations of square roots of unity are derived, which have a certain dual form, and which are then used in the subsequent two sections to obtain the equality of the row spaces.

Parametric properties of square roots of unity in the squarefree case
Assume that for a squarefree odd positive integer n the distinct prime divisors of n are p_1, p_2, ..., p_r; hence the integer n is of the form n = p_1 p_2 · · · p_r. Let index sets α, β, γ, ... ⊂ {1, 2, ..., r} be given.
For any α ⊂ {1, 2, ..., r} we let

n_α = ∏_{i∈α} p_i.    (61)

For any β ⊂ {1, 2, ..., r} assume that 0 < c_β < n is that unique square root of unity mod n which satisfies the congruences

c_β ≡ −1 mod p_i for i ∈ β, c_β ≡ +1 mod p_i for i ∉ β.

We may represent c_β in the following two forms

c_β = 1 + k_β (n/n_β),    (62)

c_β = n − 1 − x_β n_β,    (63)

with non-negative integers k_β, x_β which satisfy the inequalities

0 ≤ k_β < n_β,    (64)

0 ≤ x_β < n/n_β.    (65)

Clearly the case of equality k_β = 0 happens in (64) exactly if β = ∅, and the case of equality x_β = 0 happens in (65) exactly if β = {1, 2, ..., r}. Now let p′_r be another odd prime distinct from p_1, ..., p_{r−1}, write m = n/p_r, and set n′ = m p′_r; for an index set β let n′_β = n_β if r ∉ β, and n′_β = n_{β−{r}} p′_r if r ∈ β. Then clearly there exists the corresponding system of square roots of unity mod n′, which we denote by c′_β, and there are the corresponding parametric equations

c′_β = 1 + k′_β (n′/n′_β),    (68)

c′_β = n′ − 1 − x′_β n′_β,    (69)

with non-negative integers k′_β, x′_β which satisfy the analogous inequalities. Secondly assume, as in theorem 6, the following congruence: p′_r ≡ p_r mod 2m, i.e. it is assumed that there exists a positive integer λ such that

p′_r = p_r + 2mλ.    (70)

With these assumptions and notations we can prove

Lemma 1 For any index set β as above with r ∉ β the corresponding quantities c_β, c′_β have the same first parameters, i.e. k_β = k′_β.
For any index set β as above with r ∈ β the corresponding quantities c_β, c′_β have the same second parameters, i.e. x_β = x′_β.
For the proof first consider the case r ∉ β. Note that this means n_β = n′_β. Then for all i ∈ β we get from the definition of c_β, c′_β

1 + k_β (n/n_β) ≡ −1 ≡ 1 + k′_β (n′/n_β) mod p_i.

This implies that

k_β (n/n_β) ≡ k′_β (n′/n_β) mod p_i,

and hence, since by the second assumption (70) we have n′/n_β ≡ n/n_β mod p_i (note that p_i divides m, and p_r divides n/n_β),

k_β ≡ k′_β mod p_i.

This holds for all i ∈ β, and by the CRT this implies k_β ≡ k′_β mod n_β. From the inequalities 0 ≤ k_β, k′_β < n_β we then get k_β = k′_β.

Now consider the case r ∈ β. Note that this means n/n_β = n′/n′_β. Then for all j ∉ β we get from the definition of c_β, c′_β

n − 1 − x_β n_β ≡ +1 ≡ n′ − 1 − x′_β n′_β mod p_j.

This implies that

x_β n_β ≡ x′_β n′_β mod p_j,

and hence, writing γ = β − {r} and cancelling the unit n_γ mod p_j,

x_β p_r ≡ x′_β p′_r mod p_j.

Now by the second assumption we have p′_r ≡ p_r mod p_j, so that x_β p_r ≡ x′_β p_r mod p_j, i.e. x_β ≡ x′_β mod p_j.
This holds for all j ∉ β, and, using the remark at the beginning of this case, by the CRT this implies x_β ≡ x′_β mod n/n_β. From the inequalities 0 ≤ x_β, x′_β < n/n_β we then get x_β = x′_β. This completes the proof of lemma 1.
Corollary 2 (a) For any index set β as above with r ∉ β the corresponding quantities c_β, c′_β differ by an integer multiple of 2m, more precisely

c′_β − c_β = 2mλ · k_β · (m/n_β).    (90)

(b) For any index set β as above with r ∈ β let γ = β − {r}. Then the corresponding quantities c_β, c′_β again differ by an integer multiple of 2m, more precisely

c′_β − c_β = 2mλ · (m − x_β n_γ).

First consider the case r ∉ β. Then we trivially have n_β = n′_β, and from the first part of the lemma k_β = k′_β. Hence (62), (68) may be written in the form

c_β = 1 + k_β (n/n_β), c′_β = 1 + k_β (n′/n_β).

By taking the difference we then get

c′_β − c_β = k_β (n′ − n)/n_β = k_β · 2m²λ/n_β = 2mλ · k_β · (m/n_β).

This proves part (a) of the corollary. Now consider the case r ∈ β. Then from the second part of the lemma x_β = x′_β. Hence (63), (69) may be written in the form

c_β = n − 1 − x_β n_β, c′_β = n′ − 1 − x_β n′_β.

By taking the difference we then get, using the definition of γ,

c′_β − c_β = (n′ − n) − x_β (n′_β − n_β) = 2m²λ − x_β n_γ · 2mλ = 2mλ (m − x_β n_γ).

This proves part (b) of the corollary.
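The congruence c′_β ≡ c_β mod 2m of corollary 2 is easy to confirm numerically. In the sketch below (helper names ours) we take m = 15, p_r = 7 and p′_r = 37 ≡ 7 mod 30, and check that all differences c′_β − c_β are multiples of 2m:

```python
from math import prod

def crt(residues, moduli):
    n = prod(moduli)
    return sum(r * (n // m) * pow(n // m, -1, m)
               for r, m in zip(residues, moduli)) % n

def roots(primes):
    """All 2^r square roots of unity, indexed by bitmask subsets beta."""
    r = len(primes)
    return [crt([-1 if (a >> i) & 1 else 1 for i in range(r)], primes)
            for a in range(2 ** r)]

m = 15                      # m = 3 * 5
c  = roots([3, 5, 7])       # n  = m * 7  = 105
cp = roots([3, 5, 37])      # n' = m * 37 = 555, and 37 ≡ 7 mod 2m = 30
diffs = [cp[a] - c[a] for a in range(8)]
assert all(d % (2 * m) == 0 for d in diffs)
print(diffs)
```

For instance for β = {3} one finds c_β = 76, c′_β = 406 and c′_β − c_β = 330 = 11 · 30.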
Note that by definition of the matrices in (25) this gives us two sets of equations

c_α = m_{β,α} n_β + s_{β,α},    (98)

c′_α = m′_{β,α} n′_β + s′_{β,α}.    (99)

Here the quantities s_{β,α}, s′_{β,α} are the symmetric remainders after division by n_β, n′_β, i.e. |s_{β,α}| < n_β/2 and |s′_{β,α}| < n′_β/2.
The proof then naturally splits into two parts. We distinguish index sets β according to the conditions r ∈ β or r ∉ β.
We shall first show that for any subset β with r ∈ β the row R(β) indexed by β in the matrix M(n) and the row R′(β) indexed by the same β in the matrix M(n′) are actually the same: R(β) = R′(β). We state this fact as a separate lemma as follows.
Lemma 2 Assume (in the classical Hurwitz case ρ = 1/2) that r ∈ β, so that n_β is a divisor of n, and n′_β is a divisor of n′. Then the row R(β) in the matrix M(n) and the row R′(β) in the matrix M(n′) (with both rows corresponding to the same index set β) are equal.
The equality of the rows will be shown componentwise, of course. First we consider those components with a column index α such that r ∉ α.
Let us denote as before γ = β − {r}. Then clearly n_γ = n′_γ divides m, and it also divides the two (distinct) integers n_β and n′_β. Now (as r ∉ α, and the general assumptions of the previous section are valid) we may use (90), but replacing the index set β there by α. This gives us a congruence c′_α ≡ c_α mod 2m and hence a fortiori

c′_α ≡ c_α mod n_γ.    (101)

Using (101), it follows from the equations in (98) and in (99) that

s′_{β,α} ≡ s_{β,α} mod n_γ.

Next we observe that from the standing assumption r ∉ α it follows that the two congruences c_α ≡ 1 mod p_r and c′_α ≡ 1 mod p′_r hold. Thus by r ∈ β it follows from the equations in (98) and in (99) that the two congruences

s_{β,α} ≡ 1 mod p_r, s′_{β,α} ≡ 1 mod p′_r

hold as well. Finally we need to consider the inequalities in (98) and (99). The first one implies |s_{β,α}| < n_γ p_r/2, and since (s_{β,α} − 1)/p_r is an integer, and n_γ, p_r are both odd, and p_r is a prime, we also get

−(n_γ − 1)/2 ≤ (s_{β,α} − 1)/p_r ≤ (n_γ − 1)/2.    (108)

A similar argument holds in the second case and shows

−(n_γ − 1)/2 ≤ (s′_{β,α} − 1)/p′_r ≤ (n_γ − 1)/2.    (109)

Putting together the congruences above with p′_r ≡ p_r mod n_γ we get (s_{β,α} − 1)/p_r ≡ (s′_{β,α} − 1)/p′_r mod n_γ, and by (108), (109) the two integers agree:

(s_{β,α} − 1)/p_r = (s′_{β,α} − 1)/p′_r.    (110)

Next rewrite (98) and (99) as m_{β,α} n_β = c_α − s_{β,α} and m′_{β,α} n′_β = c′_α − s′_{β,α}. Now use the parametric forms (62), (68) to obtain, via r ∉ α and lemma 1 (replacing β by α in the first part of that lemma), that

m_{β,α} n_γ p_r = k_α (n/n_α) − (s_{β,α} − 1), m′_{β,α} n_γ p′_r = k_α (n′/n_α) − (s′_{β,α} − 1).

Now feeding the information (110) into these equations, and dividing by p_r and p′_r respectively (note that p_r divides n/n_α and p′_r divides n′/n_α), it follows that m_{β,α} = m′_{β,α}, which completes the proof of lemma 2 in the case when the condition r ∉ α holds.
For the case of r ∈ α it is perfectly feasible, but lengthy, to repeat the above arguments with the use of the corresponding dual results of the previous section. However, there is a shortcut which, for the sake of brevity, may be used here instead.
For any index set α ⊂ {1, 2, ..., r} let us denote its complementary set by ᾱ := {1, 2, ..., r} − α. The equation

c_α + c_ᾱ = n    (115)

holds, as both sides satisfy the same congruences modulo each p_i. Assume as before that β is fixed; the condition r ∈ β is not assumed at this moment. We use equation (98) and the one similar to (99) for the complementary column:

c_ᾱ = m_{β,ᾱ} n_β + s_{β,ᾱ}.    (116)

By adding (98) and (116) and using (115) we get that

(m_{β,α} + m_{β,ᾱ}) n_β + s_{β,α} + s_{β,ᾱ} = n.    (117)

As n_β divides n, it follows that

s_{β,α} + s_{β,ᾱ} ≡ 0 mod n_β.    (118)

But on the other hand, from the inequalities in (98) and (116), we get by using the triangle inequality

|s_{β,α} + s_{β,ᾱ}| < n_β.    (119)

Now (118) and (119) imply that s_{β,α} + s_{β,ᾱ} = 0, which implies via (117) that

m_{β,α} + m_{β,ᾱ} = n/n_β,    (121)

which is independent of α.
Now continue to assume that r ∈ β holds, and that n, n′ are given as above. Then we get the two sets of equations

m_{β,α} + m_{β,ᾱ} = n/n_β, m′_{β,α} + m′_{β,ᾱ} = n′/n′_β.

From the assumption r ∈ β we get that n/n_β = n′/n′_β. If r ∈ α, then r ∉ ᾱ, and for the numbers m_{β,ᾱ}, m′_{β,ᾱ} we may apply the result already proved in the first case, which is m_{β,ᾱ} = m′_{β,ᾱ}. Now using (121) we can complete the argument:

m_{β,α} = n/n_β − m_{β,ᾱ} = n′/n′_β − m′_{β,ᾱ} = m′_{β,α}.

This proves lemma 2. We now give an example which shows that in the case of general admissible ρ the conclusion of the lemma is not true.
while for p′_3 = 41 we get a different matrix: in the sixth row (corresponding to the index set β = {1, 3}) the two matrices have distinct entries.

The proof of the periodicity theorem
We now turn to the case of those rows R(β), R′(β) with r ∉ β. Here the situation is more interesting, as these rows are not equal, but are related by adding a certain multiple of the row vector R({r}).
The proof will depend on the following formula between the row vectors:

R′(β) = R(β) + (2mλ/n_β) · R({r}).    (125)

Here the vector R({r}) is easily seen to have the following coordinates:

R({r})_α = (c_α − 1)/p_r for r ∉ α, R({r})_α = (c_α + 1)/p_r for r ∈ α.    (126)

We will show (125) by verifying it for all (column) coordinates α. Note that as in our case r ∉ β we get the relation n_β = n′_β. Hence the two inequalities in (98), (99) take the same form:

|s_{β,α}| < n_β/2, |s′_{β,α}| < n_β/2.    (127)

By the triangle inequality this implies

|s′_{β,α} − s_{β,α}| < n_β.    (128)

First we consider those coordinates with r ∉ α. From corollary 2 (90), with α instead of β, we have that n_β divides the difference c′_α − c_α (indeed n_β divides m). By (98), (99) we get that n_β divides the difference s′_{β,α} − s_{β,α}. Together with (128) this proves s′_{β,α} = s_{β,α}. With these preparations let us compute the difference of the vectors:

R′(β)_α − R(β)_α = (c′_α − c_α)/n_β = 2mλ · k_α · (m/n_α)/n_β.    (130)

On the other hand, with the parametric equation (62), we can rewrite the first expression in (126) as

R({r})_α = (c_α − 1)/p_r = k_α (n/n_α)/p_r = k_α · (m/n_α).    (131)

Comparing (130) with (131) we obtain the proof of (125), at least in the case r ∉ α. The other case r ∈ α can be obtained starting from the second part of (126) by rerunning the argument with the corresponding dual formulas, or by using complements. The details are left to the reader. This ends the proof of (125).
It is now clear that lemma 2 together with (125) proves that RowSpace(M(n′)) ⊆ RowSpace(M(n)). Using lemma 2 in the special case R′({r}) = R({r}), formula (125) read in the reverse direction also shows the opposite inclusion, and hence the proof of theorem 6 is complete.

Periodicity for other cases
For the classical Gauss (floor) case of ρ = 0 and dually for the classical Venkov (ceiling) case ρ = 1 the periodicity theorem also holds.
Theorem 7 Assume that p′_r ≡ p_r mod 2m holds. Then (in the classical Gauss case ρ = 0, or dually in the classical Venkov case ρ = 1) the row spaces of the multiplier matrices M(mp_r) and M(mp′_r) as in (25) are equal, and hence rank(M(mp_r)) = rank(M(mp′_r)).
The proof is quite similar to the above, and in order to avoid unnecessary repetitions we just comment on the differences for the case ρ = 0 instead of ρ = 1/2. Clearly the section on parametric properties is independent of ρ. In the following section, on equality of the rows with r ∈ β, the equations (98) and (99) are replaced by the corresponding floor divisions with remainders 0 ≤ s_{β,α} < n_β and 0 ≤ s′_{β,α} < n′_β. By an obviously parallel argument (for r ∉ α) one then arrives at the inequalities

0 ≤ (s_{β,α} − 1)/p_r ≤ n_γ − 1, 0 ≤ (s′_{β,α} − 1)/p′_r ≤ n_γ − 1,

from which, using the congruences, one may conclude

(s_{β,α} − 1)/p_r = (s′_{β,α} − 1)/p′_r,

and then as before one gets m_{β,α} = m′_{β,α}.
For the case r ∈ α the same kind of duality argument as before may be used.
The only real difference is in the adaptation of (125). This now takes the form

R′(β) = R(β) + (2mλ/n_β) · (R({r}) + χ[r]),    (135)

with χ[r] as in corollary 1. The proof of (135) is again quite similar to the one given earlier for (125) and thus need not be repeated here. An application of corollary 1 then shows the equality of the row spaces, and hence proves theorem 7.

The existence theorem
Assume again that ρ = 1/2 is fixed. Assume also that m = p_1 p_2 · · · p_{r−1} is a product of r − 1 distinct odd prime numbers, and that p_r is another odd prime number distinct from p_1, p_2, ..., p_{r−1}. Then we have:

Theorem 8 Assume that p_r ≡ 1 mod 2m holds. Then (in the classical Hurwitz case ρ = 1/2) the rank of M(mp_r) is one more than the rank of M(m), that is,

rank(M(mp_r)) = rank(M(m)) + 1.
In particular, we obtain for any integer r ≥ 2 the existence of infinitely many factorizations of positive odd integers n into r distinct prime factors n = p_1 p_2 · · · p_r such that rank M(n) = r + 1.
Let us remark here that the same statement is true under the hypothesis p r ≡ −1 mod 2m.This will not be elaborated below, but the adaptation of the proof should present no major difficulties.
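Theorem 8 can likewise be tested on a small instance, assuming the construction of M(n) sketched earlier (helper names ours). Here m = 15 and p_r = 31 ≡ 1 mod 30, so the predicted rank is rank(M(15)) + 1 = 4:

```python
from fractions import Fraction
from math import prod

def crt(residues, moduli):
    n = prod(moduli)
    return sum(r * (n // m) * pow(n // m, -1, m)
               for r, m in zip(residues, moduli)) % n

def hurwitz_matrix(primes):
    """M(n) at rho = 1/2: nearest-integer quotients for the odd divisors n_beta."""
    r = len(primes)
    c = [crt([-1 if (a >> i) & 1 else 1 for i in range(r)], primes)
         for a in range(2 ** r)]
    nb = [prod(p for i, p in enumerate(primes) if (b >> i) & 1)
          for b in range(2 ** r)]
    return [[(2 * c[a] + nb[b]) // (2 * nb[b]) for a in range(2 ** r)]
            for b in range(2 ** r)]

def rank(mat):
    m = [[Fraction(x) for x in row] for row in mat]
    rk = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(rk, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(len(m)):
            if i != rk and m[i][col] != 0:
                f = m[i][col] / m[rk][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk += 1
    return rk

# m = 15, p_3 = 31 ≡ 1 mod 2m = 30
assert rank(hurwitz_matrix([3, 5, 31])) == rank(hurwitz_matrix([3, 5])) + 1 == 4
```

This is exactly the lower bound r + 1 = 4 of theorem 4 for r = 3.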
Corollary 3 For any integer r ≥ 2 there exist infinitely many distinct r-tuples of distinct odd prime numbers (p_1, p_2, ..., p_r) such that rank M(p_1 p_2 · · · p_r) = r + 1.    (136)

We note that although there is some analogy between theorem 8 and the previous theorem 6, which also shows up in the proof, there are clear differences. On the one hand theorem 8 does not hold for the cases ρ = 0, ρ = 1, as opposed to theorem 6; on the other hand, in the proof of the present theorem 8 a natural distinction between the prime p = 3 and the other odd primes shows up, which makes the argument somewhat more intricate.

The strategy of the proof is to compare the row space of the larger matrix M(p_1 p_2 · · · p_{r−1} p_r) = M(mp_r) with the row space of the double of the matrix M(m). Here we define the double of any s × n matrix M as the s × 2n matrix DM = [M, M] that is obtained from M by writing each of the s rows of M twice in the corresponding s rows, so as to get s vectors in R^{2n}. These s vectors then form the double DM. Hence it is obvious that

rank DM = rank M.    (137)

Note that in the case of M(m) under discussion, the extent of the double DM(m) is 2^{r−1} × 2^r.
In the calculations below, the reader should remember that for such matrices we make an obvious convention concerning the column index α: the first half of the double is indexed by those α with r ∉ α, and the second half by those α with r ∈ α, a column α of the second half carrying the same entries as the column α − {r} of M(m). Recall that the rows of M(mp_r) are indexed by the subsets β ⊂ {1, 2, ..., r} and that they are denoted by R(β). Our task is to show that the row space of the matrix M(mp_r) is given as

RowSpace(M(mp_r)) = LinSpan(RowSpace(DM(m)), R({r})).    (138)
We also let the 2^r solutions of the congruence x^2 ≡ 1 mod mp_r in the range 1 ≤ x ≤ mp_r − 1 be denoted by c_α, and we let the 2^{r−1} solutions of the congruence x^2 ≡ 1 mod m in the range 1 ≤ x ≤ m − 1 be denoted by b_α. We also write, as before, p_r = 1 + 2mλ. With these notations, and with the assumptions of theorem 8, we then get the following lemmas.
Lemma 3 (i) For a fixed index set α ⊂ {1, 2, ..., r − 1} the difference of the corresponding roots of unity is given as either of the following two expressions:

c_α − p_r b_α = −2mλ, equivalently c_α − b_α = 2mλ (b_α − 1).    (139)

(ii) For a fixed set α ⊂ {1, 2, ..., r} with r ∈ α let δ = α − {r}. Then the difference of the corresponding roots of unity is given as either of the following two expressions:

c_α − p_r b_δ = 2mλ, equivalently c_α − b_δ = 2mλ (b_δ + 1).    (140)

In order to prove the first part of lemma 3 we use the results and notations of section 9 on parametric properties. We first consider the case r ∉ α, and we use the equation (62) in the forms

c_α = 1 + k_α (mp_r/n_α), b_α = 1 + h_α (m/n_α),    (141), (142)

with the constraints as in (64). We see that for all i ∈ α

k_α (m/n_α) p_r ≡ h_α (m/n_α) ≡ −2 mod p_i,

so that from the assumption p_r ≡ 1 mod m, and since m/n_α is a unit modulo p_i, we have that

k_α ≡ h_α mod p_i, hence by the CRT k_α ≡ h_α mod n_α.    (147)

Then from (147) and the constraints 0 ≤ k_α, h_α < n_α it follows that k_α = h_α. Substituting this back into (141), (142) and forming the difference c_α − p_r b_α we get

c_α − p_r b_α = 1 + k_α (mp_r/n_α) − p_r − p_r h_α (m/n_α) = 1 − p_r = −2mλ.

This shows (139) and hence proves the first part of the lemma.
For the second part it was assumed that α ⊂ {1, 2, ..., r} contains r ∈ α and that δ = α − {r}. By (63), applied to c_α and to b_δ respectively, we get

c_α = mp_r − 1 − x_α n_α, b_δ = m − 1 − y_δ n_δ.    (151), (152)

Reducing modulo the primes p_j with j ∉ α, and using p_r ≡ 1 mod m together with n_α = n_δ p_r ≡ n_δ mod p_j, this implies that

x_α ≡ y_δ mod p_j,

which implies, via the CRT and the constraints 0 ≤ x_α < mp_r/n_α = m/n_δ and 0 ≤ y_δ < m/n_δ, an equality of integers

x_α = y_δ.

Feeding this back into (151), (152) and forming the difference c_α − p_r b_δ we get (140):

c_α − p_r b_δ = (mp_r − 1 − x_α n_δ p_r) − p_r (m − 1 − x_α n_δ) = p_r − 1 = 2mλ.

This proves the second part of the lemma. Next let us consider the rows of M(mp_r), indexed by subsets β ⊂ {1, 2, ..., r} and written as R(β). In the special case β = {r} the vector R({r}) can be easily computed as follows.
Lemma 4 The components at α of the row vector R({r}) admit the following two representations:

(I) R({r})_α = (c_α − 1)/p_r for r ∉ α, and R({r})_α = (c_α + 1)/p_r for r ∈ α.    (160)

(II) R({r})_α = b_α − 1 for r ∉ α, and R({r})_α = b_δ + 1 for r ∈ α, where δ = α − {r}.

As a simple consequence of the lemma note that R({r}) does not lie in RowSpace(DM(m)), as the coordinates of the first and second half of R({r}) are manifestly distinct, while every vector in the row space of DM(m) has equal halves.
For the proof we recall the general defining relation

c_α = m_{β,α} n_β + s_{β,α}    (164)

with |s_{β,α}| < n_β/2. In particular, if β = {r} and if r ∉ α, then we see that s_{β,α} = 1. This is clear from c_α ≡ 1 mod p_r and from |s_{β,α}| < p_r/2. Substituting this value into (164) we get the first half of (160).
The second half of (160), with the condition r ∈ α, being very similar, is omitted. This proves part (I) of lemma 4. Part (II) then follows by using the relations (139) and (140).
Next it will be shown that any row R(β) of the matrix M(mp_r) with r ∉ β is a linear combination of the corresponding row in DM(m) and a multiple of the vector R({r}).
Lemma 5 Assume that r ∉ β. Let d_{β,α} be the entry of the matrix DM(m) in the position indicated by the subsets. Then we have the identity (165). This is now almost trivial. First consider the case with r ∉ α, and let us abbreviate m_{β,α} = R(β)_α.
First note that, as above, we have the two expressions (166) and (167); subtracting them and dividing by n_β we obtain (168). Now as r ∉ α and r ∉ β we get c_α − b_α ≡ 0 mod n_β, and thus each term in (168) must be an integer, so that the difference-vector entry in question in (165) can be evaluated. This shows the first line of (165). The second line follows in a similar way by using (140); the details are omitted. This shows lemma 5.

We now give an interpretation of the results of lemma 4 (II) and of lemma 5 in terms of row vectors. In the case r ∉ β the difference vector R(β) − d_β is just a constant multiple (with factor 2mλ/n_β) of the row vector R({r}). In other words, in the case r ∉ β the vectors R(β) lie in the space LinSpan(RowSpace(DM(m)), R({r})). Conversely, the row vectors d_β of the doubled matrix DM(m) lie in the row space RowSpace(M(mp_r)).
The second half of the proof will show the same for the vectors R(β) − d_β in the case r ∈ β with β ≠ {r}. Unfortunately the situation is more complicated here, and we need some further estimates to conclude the proof.
Assume then that r ∈ β and that β ≠ {r}, and regard first of all those coordinates which are indexed by sets α with r ∉ α. We let γ = β − {r}, and we wish to compare the row R(β) of the matrix M(mp_r) with the row d_γ of the matrix DM(m).
As before we abbreviate m_{β,α} = R(β)_α and write d_{γ,α} for the corresponding entries in row β or γ and in column α.
For convenience we also assume that the prime numbers are ordered according to their sizes, so that p_1 < p_2 < · · · < p_{r−1} < p_r; there is no loss of generality in assuming such an ordering. Of course the assumption p_r = 1 + 2mλ guarantees that p_r is the largest of these primes. In particular, if the smallest prime satisfies p_1 > 3, then we get an equality of vectors R(β) = d_γ for all β with r ∈ β.
First we deal with the case r ∉ α. Using (166), an equation analogous to (167) for the index set γ, and (139), the difference c_α − p_r b_α can be worked out in two ways. Rearranging and applying the triangle inequality then shows that we must have σ_{γ,α} = ±(n_γ − 1)/2. Now as we have b_γ ≡ ±1 mod p_i for all i = 1, 2, . . ., r − 1, it follows that for all prime divisors p_j of n_γ the congruences σ_{γ,α} ≡ ±1 mod p_j also hold. This proves that the integer σ_{γ,α} satisfies the congruence σ_{γ,α}^2 ≡ 1 mod n_γ.
Hence n_γ should be an odd integer that satisfies the condition ((n_γ − 1)/2)^2 ≡ 1 mod n_γ. Hence there exists an integer g such that (n_γ − 1)^2/4 = 1 + g n_γ, and rearranging and factoring we get that n_γ(n_γ − 2 − 4g) = 3, i.e. n_γ divides 3 (180). This forces γ = {1}, as γ is not empty, and then p_1 = 3 as claimed.
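The final step, that σ^2 ≡ 1 mod n_γ with σ = ±(n_γ − 1)/2 forces n_γ | 3, can also be confirmed by brute force; the following sketch searches all odd moduli up to a bound:

```python
# Odd n > 1 for which (n-1)/2 is a square root of unity mod n.
# Algebraically ((n-1)/2)^2 ≡ 1 (mod n) forces n | 3, so only n = 3 appears.
hits = [n for n in range(3, 100000, 2) if ((n - 1) // 2) ** 2 % n == 1]
print(hits)  # [3]
```

This matches the factorization argument above: multiplying the congruence by 4 gives (n − 1)^2 ≡ 4, i.e. 1 ≡ 4 mod n, so n divides 3.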
A similar argument can be given in the case of coordinates indexed by α with r ∈ α, where the exceptional case occurring is now m_{β,α} − d_{γ,α} = +1. This, however, is omitted for the sake of brevity.
This completes the proof of lemma 6.
We have now shown that in all cases we have R(β) = d_γ, except for the case p_1 = 3, where a single row, namely the one indexed by β = {1, r}, has not yet been decided. We will discuss this case in conclusion, thereby establishing the proof of theorem 8. To this end we first consider the case of a coordinate indexed by α with 1 ∈ α, r ∉ α, and such that p_1 = 3. We need to compute the corresponding entry of R({1, r}) − d_{1}. Letting z = m/3 = m/p_1, and noting that p_r = 1 + 6zλ is the largest prime involved, we consider the congruences c_α + 1 ≡ 2 mod p_r and 1/3 ≡ −2zλ mod p_r, which can be combined into (c_α + 1)/3 ≡ −4zλ mod p_r (182). Hence we can calculate the entry in question. Now via lemma 4 and the above method it is easy to directly compute the row vector R({r}) − 3 · R({1, r}) in its coordinates. Putting things together (in the case β = {1, r}, γ = {1}), this completes the proof of equality in (138) for the case p_1 = 3. Using (162) and (137) we get the rank formula, and hence the proof of theorem 8 is done in all cases.
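The modular inverse used here, 1/3 ≡ −2zλ (mod p_r), follows directly from p_r = 1 + 6zλ, since 3 · (−2zλ) = −6zλ = 1 − p_r ≡ 1 (mod p_r). A quick numerical check, with sample values of z and λ chosen for illustration only:

```python
# Verify that -2*z*lam inverts 3 modulo p_r = 1 + 6*z*lam.
for z, lam in [(5, 1), (35, 1), (5, 6)]:
    p = 1 + 6 * z * lam
    assert pow(3, -1, p) == (-2 * z * lam) % p  # pow(b, -1, m) needs Python >= 3.8
print("ok")
```

Note that the identity needs no primality of p_r; it only uses 6zλ ≡ −1 mod p_r.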
To prove the corollary we may use induction on r, starting with r = 2, where any two distinct odd primes p_1, p_2 will do, and then building up step by step using theorem 8. The existence of infinitely many examples follows from Dirichlet's theorem on the infinitude of primes in an arithmetic progression.
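The inductive construction can be sketched as follows, under the assumption (as in the hypothesis p_r = 1 + 2mλ above) that at each step one adjoins a prime p ≡ 1 (mod 2m), where m is the product of the primes found so far; Dirichlet's theorem guarantees such a prime exists. This is an illustrative sketch, not the paper's algorithm:

```python
def is_prime(q):
    """Trial-division primality test (sufficient for small examples)."""
    if q < 2:
        return False
    d = 2
    while d * d <= q:
        if q % d == 0:
            return False
        d += 1
    return True

def next_factor(m):
    """Smallest prime p ≡ 1 (mod 2m); Dirichlet's theorem guarantees one exists."""
    p = 2 * m + 1
    while not is_prime(p):
        p += 2 * m
    return p

primes = [3, 5]                # base case r = 2
for _ in range(2):             # adjoin two more primes
    m = 1
    for q in primes:
        m *= q
    primes.append(next_factor(m))
print(primes)  # [3, 5, 31, 1861]
```

Starting from {3, 5} (m = 15) this picks p = 31 ≡ 1 (mod 30), then with m = 465 picks p = 1861 ≡ 1 (mod 930).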

Conclusion
In the present paper a matrix of multipliers M(n) was considered. Several properties were derived, and the existence of integers n attaining the lower bound in the case ρ = 1/2 was established. In the course of the proof it turned out that certain modular calculations were necessary, so the question of the mod p rank of the matrices M(n) deserves further interest.
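For readers who wish to experiment with the mod p rank question, the rank of an integer matrix over F_p can be computed by standard row reduction; the following generic sketch is not tied to the specific construction of M(n):

```python
def rank_mod_p(rows, p):
    """Rank of an integer matrix over the field F_p (p prime), by Gaussian elimination."""
    m = [[x % p for x in row] for row in rows]
    rank = 0
    ncols = len(m[0]) if m else 0
    for col in range(ncols):
        # find a pivot in this column at or below the current rank row
        piv = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][col], -1, p)          # invert the pivot mod p
        m[rank] = [inv * x % p for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col]:
                f = m[r][col]
                m[r] = [(x - f * y) % p for x, y in zip(m[r], m[rank])]
        rank += 1
    return rank
```

Applied to the matrices M(n) of the paper, this would make the comparison between the integer rank and the various mod p ranks directly computable.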
On the other hand, one may form symmetric matrices of the (c_α c_β) type [2] and consider their multipliers. Being already of interest in relation to the present material, this may throw further light on the matrices M(n).
Now all entries in I_2 + WM should be positive integers, as 1 < c, d and the congruences (45) hold. Hence we have that WM = (w_{11} w_{12}; w_{21} w_{22}) with integers w_{12}, w_{21} > 0 and w_{11}, w_{22} ≥ 0. Thus we may compute det(WM) = w_{11} w_{22} − w_{12} w_{21} = (1 + w

, . . ., r}. Let us refer to (62) and (63) as the (first and second) parametric equation for c_β. Now let us make the following additional assumptions. First let, as in theorem 6, n′ = p_1 p_2 . . . p_{r−1} p′_r (66) be another squarefree integer with r distinct odd prime factors, such that all except one of the prime factors of n and n′ agree. Without loss of generality we may assume p_r > p′_r. Now let us denote the common part of n and n′ by m, i.e. m = gcd(n, n′) = p_1 p_2 . . . p_{r−1}.

Example 4
Consider the case r = 3, with p_1 = 3, p_2 = 5, and with p_3 = 11, p′_3 = 41. This satisfies the assumptions of lemma 2. With the (admissible) value of ρ = 12 √ it can be computed that for p_3 = 11 we get the multiplier matrix
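For the modulus n = 3 · 5 · 11 = 165 of this example, the fundamental system of square roots of unity (as defined in the introduction: c_i ≡ −1 mod p_i and c_i ≡ +1 mod the other prime factors) can be computed directly; the following is a small sketch using brute-force CRT:

```python
from math import prod

def fundamental_system(primes):
    """Fundamental square roots of unity mod n = prod(primes):
    c_i ≡ -1 (mod p_i) and c_i ≡ +1 (mod p_j) for j != i."""
    n = prod(primes)
    roots = []
    for p in primes:
        c = next(c for c in range(1, n)
                 if all((c + 1) % q == 0 if q == p else (c - 1) % q == 0
                        for q in primes))
        roots.append(c)
    return roots

primes = [3, 5, 11]
n = prod(primes)                 # 165
c = fundamental_system(primes)
print(c)                         # [56, 34, 76]
print(sum(c) % n)                # 1, i.e. r - 2 for r = 3
```

Each c_i indeed satisfies c_i^2 ≡ 1 mod 165, and their sum 166 ≡ r − 2 = 1 mod n, in agreement with the integer-sum property stated for the fundamental system.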