The functions in this section perform reduction of lattice bases. For each reduction algorithm there are three functions: a function which takes a basis matrix, a function which takes a Gram matrix and a function which takes a lattice.
Method: MonStgElt Default: "FP"
Delta: RngElt Default: 0.75
InitialDelta: RngElt
DeltaSteps: RngIntElt
Sort: BoolElt Default: true
FPBlock: RngIntElt
Large: BoolElt
StepLimit: RngIntElt
TimeLimit: RngElt
UseGram: BoolElt
UnderflowCheck: BoolElt Default: true
Given a matrix X belonging to the matrix module S=Hom_R(M, N) or the matrix algebra S=M_n(R), where R is a subring of the real field, compute a matrix Y whose rows are a LLL-reduced basis for the Z-lattice spanned by the rows of X (which need not be Z-linearly independent). This function returns three values:
- A LLL-reduced matrix Y in S whose rows span the same lattice (Z-module) as that spanned by the rows of X;
- A unimodular matrix T in the matrix ring over Z whose degree is the number of rows of X such that TX=Y;
- The rank of X.
Note that the returned transformation matrix T is always over the ring of integers Z, even if R is not Z.
Since the input matrix X need not have linearly independent rows, this function actually performs the MLLL algorithm of M. Pohst; it is simply more convenient to combine the LLL and MLLL algorithms under the one function LLL. By default the rows of the returned matrix Y are sorted so that all the zero rows are at the bottom and the non-zero rows are sorted first with respect to their norms (so the row with the smallest norm is at the top) and then with respect to lexicographical order. This final sorting may result in the basis being no longer strictly LLL-reduced (since the order of the rows does matter), but is in general the most convenient way to present the result (like the Hermite normal form, etc.). In fact, if the sorted basis is not LLL-reduced, reapplication of LLL will yield a better basis. By setting the parameter Sort to false, the final sorting will not be done, so the (non-zero portion of the) result will be a true LLL-reduced basis.
By default the delta constant used in the LLL condition is taken to be 0.75. Setting the parameter Delta to a real number in the range (0.25, 1) changes the delta constant to that number. A large value of delta (e.g., 0.999) tends to yield a higher quality result while taking extra time; a small value of delta (e.g., 0.26) tends to yield a lower quality result while taking less time. The difference in quality will be slight for some examples but marked for others. Perhaps surprisingly, if a matrix is reduced using a small value of delta and the result is then reduced using a larger value of delta ("iterating the reduction"), the whole computation may run faster than a single reduction with the larger value. Let d be the value of the parameter Delta, or 0.75 if that is not set. If the parameter InitialDelta is set to a real number s in the range (0.25, d), and the parameter DeltaSteps is set to a positive integer n, then the algorithm proceeds by iterating the reduction using the successive delta values given by the arithmetic progression from s to d inclusive having n steps. This iterative method is faster for some examples but slower for most others, which is why it is not used by default.
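The following minimal sketch (the matrix entries are arbitrary illustrative values) shows how these parameters may be passed, using the usual Magma syntax for intrinsic parameters:

> X := Matrix(IntegerRing(), 3, 3, [ 95, 41, 12, 38, 71, 97, 23, 14, 55 ]);
> Y := LLL(X: Delta := 0.999);  // higher quality result, possibly slower
> Y := LLL(X: Delta := 0.26);   // faster, lower quality result
> Y := LLL(X: InitialDelta := 0.5, DeltaSteps := 4);  // iterate delta from 0.5 up to 0.75
> Y := LLL(X: Sort := false);   // suppress final sorting; result is strictly LLL-reduced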
For matrices over Z or Q, there are two main methods for handling the mu_(i, j) and beta_i variables used in the algorithm: the Schnorr-Euchner floating point method (LLLFP) [C.P. Schnorr and M. Euchner, op. cit.], and the exact integral method of de Weger [B.M.M. de Weger, op. cit.].
The Schnorr-Euchner floating point method in Magma can be used with either "small" (C doubles) or "large" (normalized C doubles with integer exponents) floating point numbers for the mu_(i, j) and beta_i variables. The large case is needed when the integers go beyond the maximum allowed in a C double (about 10^(308) on most machines). Magma will use the small method if possible, since it is much faster, but it will have to use the large method if the integers are too large. By setting the parameter Large to false, the function will be forced to use the small case, and by setting the parameter to true, the function will be forced to use the large case. Large should not be set to false unless one is sure that the integers can all stay in the appropriate range, but there may be cases where doing this is faster and Magma is too conservative in choosing the large case. The parameter UnderflowCheck indicates whether the underflow check in the floating point method should be performed. For some matrices turning this off may make the function run much faster, but for other matrices underflow errors would occur, making the answer incorrect. Since the method uses floating point approximations, round-off errors can occur, so the algorithm is not guaranteed to terminate and it is also possible that the resulting basis is not actually LLL-reduced. A smaller value of delta decreases the stability while a larger value increases it. The floating point method is used by default (or if the parameter Method is set to "FP").
The de Weger integral method in Magma uses exact integers for the mu_(i, j) and beta_i variables (simulating the rational numbers used in the original presentation of the algorithm), which guarantees that the algorithm terminates and that the result is rigorously LLL-reduced. Setting the parameter Method to "Integral" will cause this method to be used instead of the floating point method.
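For example, the rigorous integral method may be requested as follows (reusing the illustrative matrix X from the sketch above):

> Y, T, r := LLL(X: Method := "Integral");  // exact integer arithmetic; Y is rigorously LLL-reduced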
The Schnorr-Euchner floating point method is nearly always very much faster than the de Weger integral method for small to medium dimensions (sometimes even hundreds of times faster), which is why the former method is used by default. However, for very large dimensions (e.g., number of rows over 300), the de Weger integral method may in fact be faster because of the number of swaps needed. The time for the floating point method is dominated by the swaps, since the mu_(i, j) and beta_i variables must be computed anew for a swapped row to ensure stability, while the time for the integral method is dominated by the initial setup of the mu_(i, j) and beta_i variables and not so much by the swaps (since a stable formula is used to adjust all these variables quickly). Consequently, a LLL computation needing very many swaps (which usually happens for very large dimensions) will probably perform faster (and certainly with guaranteed stability) using the integral method. Note, however, that some matrices with, say, more than 500 rows do not need many swaps to be reduced, so the floating point method will still be much faster for such matrices. By turning on the verbose flag for LLL (see below), one can compare the performance of the two methods on any particular input matrix.
A hybrid method is available which combines the advantages of the above two methods. The method is specified by setting the parameter Method to "FPInteger" and the parameter FPBlock to a positive integer b. The algorithm first divides the input matrix X into blocks of b rows each (possibly with fewer than b rows in the last block), and then LLL-reduces each block using the floating point method. The resulting matrix is then fully LLL-reduced using the integral method. This hybrid method thus effectively uses the floating point method to "preprocess" the matrix by reducing it somewhat (but using blocks small enough that instability is not a problem) before feeding the result to the integral method, obtaining a result which is guaranteed to be LLL-reduced. If there is a large amount of reduction to be done (because the input matrix is "very messy"), this hybrid method will be much faster than using the integral method alone. If the Method parameter is set to "FPInteger" but the FPBlock parameter is not specified, the block size b is taken by default to be 50, since the floating point method is practically always stable for blocks of that dimension. For the first floating point stage the value of delta is taken to be 0.999; all other parameters apply to the second stage reduction using the integral method.
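A minimal sketch of invoking the hybrid method (again with the illustrative matrix X from above; the block size shown is just the default made explicit):

> Y := LLL(X: Method := "FPInteger", FPBlock := 50);  // FP pre-reduction in blocks of 50 rows, then an exact integral pass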
The parameter UseGram specifies that for the floating point method the computation should be performed by computing the Gram matrix F, using the LLLGram algorithm below and applying the appropriate transformation to X. Magma will automatically do this internally for the floating point method if it deems that it is more efficient to do so (and it indeed usually is if all the entries of X are very small or the size of the matrix is large), so this parameter just allows one to stipulate that this method should or should not be used. For the integral method, using the Gram matrix never improves the computation so the value of this parameter is simply ignored in this case.
Setting the parameter StepLimit to a positive integer s will cause the function to terminate after s steps of the main loop. Similarly, setting the parameter TimeLimit to a positive real number t will cause the function to terminate after the algorithm has executed for t seconds of (process user) time. In such cases, the current reduced basis is returned, together with the appropriate transformation matrix and an upper bound for the rank of the matrix. Nothing precise can then be said about the reduced basis (it will not be LLL-reduced in general, of course), but it will at least be reduced to a certain extent.
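For instance (the limits shown are arbitrary illustrative values; X is the matrix from the sketches above):

> Y, T, r := LLL(X: StepLimit := 10000);  // give up after 10000 steps of the main loop
> Y, T, r := LLL(X: TimeLimit := 60.0);   // give up after roughly 60 seconds; r is then only an upper bound on the rank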
Finally, note that if the elementary divisors of the matrix are fairly small, relative to the size of the matrix and the size of its entries, then it may be very much quicker to reduce the matrix to Hermite form first (using the function HermiteForm) and then to LLL-reduce this matrix. This case often arises: for example, when one has a "very messy" matrix for which it is known that the lattice described by it has a relatively small basis.
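A minimal sketch of this preprocessing idea (X as above; HermiteForm is the Hermite normal form function mentioned in the previous paragraph):

> H := HermiteForm(X);  // the entries of H are often much smaller when the elementary divisors are small
> Y := LLL(H);          // reduce the (hopefully smaller) Hermite form instead of X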
Check: BoolElt Default: false
Given a symmetric positive semidefinite matrix F belonging to the matrix module S=Hom_R(M, M) or the matrix algebra S=M_n(R), where R is a subring of the real field, so that F=XX^(tr) for some matrix X over the real field, compute a matrix G which is the Gram matrix corresponding to a LLL-reduced form of the matrix X. The rows of the corresponding generator matrix X need not be Z-linearly independent, in which case F will be singular. This function returns three values:
- A LLL-reduced Gram matrix G in S of the Gram matrix F;
- A unimodular matrix T in the matrix ring over Z whose degree is the number of rows of F such that G=TFT^(tr);
- The rank of F (which equals the dimension of the lattice generated by X).
Note that the returned transformation matrix T is always over the ring of integers Z, even if R is not Z. The rows and columns of G are sorted so that all the zero rows/columns are at the end and the non-zero rows/columns are sorted with respect to their norms (the diagonal entries), so the smallest diagonal entry is at the top left corner.
The matrix F must be a symmetric positive semidefinite matrix; if it is not the results are unpredictable. By default, Magma does not check this since it may be expensive in higher dimensions to do so and in many applications F will be known to be such a priori. The checking can be invoked by setting Check := true.
The other parameters (not listed here again) are the same as for the function LLL (q.v.), except that the parameter UseGram is irrelevant (since the function operates always with a Gram matrix).
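A minimal sketch of reducing a Gram matrix (the generator matrix X is arbitrary here; the final identity must hold by the specification above):

> X := Matrix(IntegerRing(), 3, 3, [ 95, 41, 12, 38, 71, 97, 23, 14, 55 ]);
> F := X * Transpose(X);                 // symmetric positive semidefinite by construction
> G, T, r := LLLGram(F: Check := true);  // Check := true verifies that F is of the required form
> G eq T * F * Transpose(T);
true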
Given a lattice L with basis matrix B, return the LLL basis matrix B' of L, together with the transformation matrix T such that B'=TB. The LLL basis matrix B' is simply defined to be a LLL-reduced form of B; it is stored in L when computed and subsequently used internally by many lattice functions. The LLL basis matrix will be created automatically internally as needed with delta=0.999 by default (note that this is different from the usual default of 0.75); by the use of parameters to this function one can ensure that the LLL basis matrix is created in a way which is different to the default. The parameters (not listed here again) are the same as for the function LLL (q.v.), except that the limit parameters are illegal.
Given a lattice L with Gram matrix F, return the LLL Gram matrix F' of L, together with the transformation matrix T such that F'=TFT^(tr). F' is simply defined to be B'(B')^(tr), where B' is the LLL basis matrix of L---see the function LLLBasisMatrix. The parameters (not listed here again) are the same as for the function LLL (q.v.), except that the limit parameters are illegal.
Given a lattice L with basis matrix B, return a new lattice L' with basis matrix B' and a transformation matrix T so that L' is equal to L but B' is LLL-reduced and B'=TB. Note that the inner product used in the LLL computation is that given by the inner product matrix of L (so, for example, the resulting basis may not be LLL-reduced with respect to the standard Euclidean norm). The LLL basis matrix of L is used so calling this function with argument L is completely equivalent (ignoring the second return value) to the invocation LatticeWithBasis(LLLBasisMatrix(L), InnerProductMatrix(L)). The parameters (not listed here again) are the same as for the function LLL (q.v.), except that the limit parameters are illegal.
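A short sketch (with arbitrary basis entries) exercising these three lattice-level functions:

> B := Matrix(IntegerRing(), 2, 3, [ 1, 2, 3, 3, 1, 1 ]);
> L := LatticeWithBasis(B);
> B1, T := LLLBasisMatrix(L);  // B1 = T * B
> F1, T := LLLGramMatrix(L);   // F1 = B1 * Transpose(B1)
> L1, T := LLL(L);             // a lattice equal to L whose basis matrix is LLL-reduced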
(Procedure.) Set the verbose printing level for the LLL algorithm to be v. Currently the legal values for v are true, false, 0, 1 or 2 (false is the same as 0, and true is the same as 1). Both non-zero levels notify when the maximum LLL-reduced rank of the LLL algorithm increases, and level 2 also prints the norms of the reduced vectors at such points.
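For example, to monitor a reduction (X as in the sketches above):

> SetVerbose("LLL", 2);  // report increases of the reduced rank and the norms of the reduced vectors
> Y := LLL(X);
> SetVerbose("LLL", 0);  // turn verbose printing off again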
For this example we set Q to a sequence of n=10 integers of varying bit lengths (to make the problem a bit harder).
> Q := [ 67015143, 248934363018, 109210, 25590011055, 74631449,
> 10230248, 709487, 68965012139, 972065, 864972271 ];
> n := #Q;
> n;
10
We next choose a scale factor S large enough and then create the n x (n + 1) matrix X=[I_n | C] where C is the column vector whose i-th entry is S.Q[i].
> S := 100;
> X := RMatrixSpace(IntegerRing(), n, n + 1) ! 0;
> for i := 1 to n do X[i][i] := 1; end for;
> for i := 1 to n do X[i][n + 1] := S * Q[i]; end for;
> X;
[1 0 0 0 0 0 0 0 0 0 6701514300]
[0 1 0 0 0 0 0 0 0 0 24893436301800]
[0 0 1 0 0 0 0 0 0 0 10921000]
[0 0 0 1 0 0 0 0 0 0 2559001105500]
[0 0 0 0 1 0 0 0 0 0 7463144900]
[0 0 0 0 0 1 0 0 0 0 1023024800]
[0 0 0 0 0 0 1 0 0 0 70948700]
[0 0 0 0 0 0 0 1 0 0 6896501213900]
[0 0 0 0 0 0 0 0 1 0 97206500]
[0 0 0 0 0 0 0 0 0 1 86497227100]
Finally, we compute a LLL-reduced form L of X.
> L := LLL(X);
> L;
[ 0 1 0 -15 -6 3 1 2 -3 -3 0]
[ 5 2 -2 2 -14 -8 6 -8 -1 4 0]
[ 1 2 0 2 13 -8 -6 -8 -10 2 0]
[ 1 -1 -1 -12 11 -6 -1 8 9 4 0]
[ 16 0 3 3 9 -3 0 -1 -4 -11 0]
[ 6 -1 16 -4 1 -6 6 5 -7 7 0]
[ 3 0 -11 -5 2 4 15 2 -1 -12 0]
[ 9 0 1 0 -6 -13 -7 0 -18 0 0]
[ 9 -3 -10 7 -3 1 -7 8 0 18 0]
[ 3 1 -13 1 4 -2 -3 -4 -5 1 100]
Notice that the large weighting on the last column forces the LLL algorithm to produce as many vectors as possible at the top of the matrix with a zero entry in the last column (since such vectors will have shorter norms than any vector with a non-zero entry in the last column). Thus the GCD of the entries in Q is the entry in the bottom right corner of L divided by S, viz. 1. The first n entries of the last row of L give a sequence of multipliers M for the extended GCD algorithm. Also, taking the first n entries of each of the first n - 1 rows of L gives independent null-relations for the entries of Q (i.e., the kernel of the corresponding column matrix). We check that M gives a correct sequence of multipliers.
> M := Eltseq(L[10])[1 .. n];
> M;
[ 3, 1, -13, 1, 4, -2, -3, -4, -5, 1 ]
> &+[Q[i]*M[i]: i in [1 .. n]];
1
Given a matrix X belonging to the matrix module S=Hom_R(M, N) or the matrix algebra S=M_n(R), where R is Z or Q, compute a matrix Y whose rows form a pairwise reduced basis for the Z-lattice spanned by the rows of X. Being pairwise reduced (i.e., 2|(v, w)| <= min(||v||, ||w||) for all pairs v, w of basis vectors) is a much simpler criterion than being LLL-reduced, but often yields sufficiently good results very quickly. It can also be used as a preprocessing step for LLL, or in alternation with LLL, to obtain better bases. The rows of X need not be Z-linearly independent. This function returns two values:
- The pairwise reduced matrix Y row-equivalent to X as a matrix in S;
- A unimodular matrix T in the matrix ring over Z whose degree is the number of rows of X such that TX=Y.
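A minimal sketch (X as in the sketches above), including the use of pairwise reduction as a preprocessing step for LLL:

> Y, T := PairReduce(X);
> T * X eq Y;
true
> Y := LLL(PairReduce(X));  // pairwise reduction followed by LLL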
Check: BoolElt Default: false
Given a symmetric positive semidefinite matrix F belonging to the matrix module S=Hom_R(M, M) or the matrix algebra S=M_n(R), where R is Z or Q, so that F=XX^(tr) for some matrix X over the real field, compute a matrix G which is the Gram matrix corresponding to a pairwise reduced form of the matrix X. The rows of the corresponding matrix X need not be Z-linearly independent. This function returns two values:
- The pairwise reduced Gram matrix G of F as a matrix in S;
- A unimodular matrix T in the matrix ring over Z whose degree is the number of rows of F which gives the corresponding transformation: G=TFT^(tr).
The matrix F must be a symmetric positive semidefinite matrix; if it is not, the results are unpredictable. By default, Magma does not check this, since it may be expensive in higher dimensions and in many applications F will be known to be such a priori. The checking can be invoked by setting Check := true.
Given a lattice L with basis matrix B, return a new lattice L' with basis matrix B' and a transformation matrix T so that L' is equal to L but B' is pairwise reduced and B'=TB. Note that the inner product used in the pairwise reduction computation is that given by the inner product matrix of L (so, for example, the resulting basis may not be pairwise reduced with respect to the standard Euclidean norm).
Given a matrix X belonging to the matrix module S=Hom_R(M, N) or the matrix algebra S=M_n(R), where R is Z or Q, compute a matrix Y whose rows form a Seysen-reduced basis for the Z-lattice spanned by the rows of X. The rows of X need not be Z-linearly independent. The Seysen-reduced matrix Y is such that the entries of the corresponding Gram matrix G=YY^(tr) and its inverse G^(-1) are simultaneously reduced. This function returns two values:
- The Seysen-reduced matrix Y corresponding to X as a matrix in S;
- A unimodular matrix T in the matrix ring over Z whose degree is the number of rows of X such that TX=Y.
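A minimal sketch (X as in the sketches above):

> Y, T := Seysen(X);  // Y * Transpose(Y) and its inverse have simultaneously reduced entries
> T * X eq Y;
true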
Check: BoolElt Default: false
Given a symmetric positive semidefinite matrix F belonging to the matrix module S=Hom_R(M, M) or the matrix algebra S=M_n(R), where R is Z or Q, so that F=XX^(tr) for some matrix X over the real field, compute a matrix G which is the Gram matrix corresponding to a Seysen-reduced form of the matrix X. The rows of the corresponding matrix X need not be Z-linearly independent. The Seysen-reduced Gram matrix G is such that the entries of G and its inverse G^(-1) are simultaneously reduced. This function returns two values:
- The Seysen-reduced Gram matrix G of F as a matrix in S;
- A unimodular matrix T in the matrix ring over Z whose degree is the number of rows of F which gives the corresponding transformation: G=TFT^(tr).
The matrix F must be a symmetric positive semidefinite matrix; if it is not, the results are unpredictable. By default, Magma does not check this, since it may be expensive in higher dimensions and in many applications F will be known to be such a priori. The checking can be invoked by setting Check := true.
Given a lattice L with basis matrix B, return a new lattice L' with basis matrix B' and a transformation matrix T so that L' is equal to L but B' is Seysen-reduced and B'=TB. Note that the inner product used in the Seysen-reduction computation is that given by the inner product matrix of L (so, for example, the resulting basis may not be Seysen-reduced with respect to the standard Euclidean norm). The effect of the reduction is that the basis of L' and the dual basis of L' will be simultaneously reduced.
To compare the effect of the three Gram matrix reduction algorithms, we apply them to the Gram matrix of the Leech lattice Lambda_24 and inspect the diagonal entries of the reduced matrices and of their inverses.
> F := GramMatrix(Lattice("Lambda", 24));
> [ F[i][i] : i in [1..24] ];
[ 8, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4 ]
> [ (F^-1)[i][i] : i in [1..24] ];
[ 72, 8, 12, 8, 10, 4, 4, 8, 8, 4, 4, 8, 4, 4, 4, 4, 4, 4, 4, 4, 8, 4, 4, 8 ]
> L := LLLGram(F);
> P := PairReduceGram(F);
> S := SeysenGram(F);
> [ L[i][i] : i in [1..24] ];
[ 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4 ]
> [ P[i][i] : i in [1..24] ];
[ 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4 ]
> [ S[i][i] : i in [1..24] ];
[ 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4 ]
> [ (L^-1)[i][i] : i in [1..24] ];
[ 6, 4, 8, 4, 32, 4, 4, 8, 18, 4, 4, 8, 8, 8, 12, 4, 6, 4, 20, 4, 8, 4, 14, 4 ]
> [ (P^-1)[i][i] : i in [1..24] ];
[ 72, 40, 12, 8, 10, 4, 4, 8, 8, 4, 4, 8, 4, 4, 4, 4, 4, 4, 4, 4, 8, 4, 4, 8 ]
> [ (S^-1)[i][i] : i in [1..24] ];
[ 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4 ]
Note that all three algorithms reduce the diagonal of the Gram matrix itself to the minimum 4, but only the Seysen reduction simultaneously keeps the diagonal of the inverse small.