
A new method of generating hard random lattices with short bases

EURASIP Journal on Information Security 2019, 2019:8

  • Received: 22 January 2019
  • Accepted: 16 April 2019
  • Published:


This paper first proves a regularity theorem and a corollary of it. A new construction for generating hard random lattices with short bases is then obtained by using this corollary. The construction takes a new perspective and uses a random matrix whose entries follow a Gaussian distribution, which gives the corresponding schemes wider prospective application in the cryptography area. Moreover, this construction is more concrete than previous constructions, which makes it easier to implement in practical applications.


  • Cryptography
  • Gaussian distribution
  • Hard random lattices
  • Short bases

1 Introduction

A lattice has a typical linear structure, and some problems about lattices have been proven to be NP-hard. Many exciting developments in lattice-based cryptography have occurred in the past few years [1–10], and there has been renewed interest in lattice-based cryptography as prospects for a real quantum computer improve. As is well known, some lattice-based cryptosystems can resist attacks by both classical and quantum computers. However, the basic problems concerning short bases and short vectors have been studied in only a few papers [11–14], even though such problems occupy an important place in the study of lattice-based cryptography and much further research builds on them.

Ajtai’s seminal work [15] in lattice-based cryptography, published in 1996, demonstrated a random class of lattices whose elements can be generated together with a short vector in them, such that finding a short nonzero vector in such a random lattice is at least as hard as finding the length of a shortest nonzero vector for any lattice, and at least as hard as finding a basis for any lattice. He showed how to generate a hard random lattice together with knowledge of one relatively short nonzero lattice vector, which can be used as secret information in cryptographic applications. In addition, Ajtai also gave reductions between some hard problems on lattices.

In 1999, Ajtai demonstrated an entirely different method of generating a random lattice along with a short basis, building on his previous studies [11]. His algorithm has the important property that the resulting lattice is drawn, under the appropriate distribution, from the hard family defined in [15]. Interestingly, the algorithm apparently went without application until recently, when Gentry, Peikert, and Vaikuntanathan constructed several provably secure cryptographic schemes that crucially use short bases as secret keys [16].

Alwen and Peikert revisited the problem of generating a hard random lattice with a relatively short basis [12] in 2011. They elucidated and modularized Ajtai’s basic approach, endeavoring to give a top-down exposition of the key aspects of the problem and the techniques, and they based their algorithm on the concept of the Hermite normal form.

Micciancio and Peikert then gave methods for generating and using “strong trapdoors” in cryptographic lattices [7]. Their methods involve a kind of trapdoor and include specialized algorithms for inverting LWE, randomly sampling SIS preimages, and securely delegating trapdoors. Their trapdoor generator strictly subsumes the prior ones of [11, 12], in the sense that it implies the main theorems of those works.

In this paper, we construct a new hard random lattice together with a relatively short basis from a new perspective. Firstly, we state and prove a useful theorem called the regularity theorem, which plays an important role in the cryptography area. Before this, we show that, after proper elementary matrix transformations, any random matrix uniform on \( {Z}_q^{k\times {l}_1} \) can be written as a special matrix whose first k columns form an identity matrix and whose remaining columns are uniform on \( {Z}_q^k \). Furthermore, the theorem shows that multiplying this special matrix on the right by a matrix whose entries follow a Gaussian distribution yields a uniform matrix. Then, by using the regularity theorem, we give our simple, more widely applicable, and more concrete algorithm, in which the Gaussian distribution is used, and we give the concrete expression of each matrix in the algorithm. Lastly, we analyze the short basis produced by our algorithm.

2 Preliminaries

Some notations that will be used throughout the paper are given in this section. We denote the integer ring by Z and the ring of residues modulo q by Zq. For any real x, the largest integer not greater than x is denoted by ⌊x⌋. For a vector x = (x1, ⋯, xn), ⌊x⌋ is defined as (⌊x1⌋, ⋯, ⌊xn⌋). We write log for the logarithm to the base 2, and logq when the base q is any number possibly different from 2. A negligible amount in n is an amount that is asymptotically smaller than \( {n}^{-c} \) for any constant c > 0. Also, when we say that an expression is exponentially small in n, we mean that it is at most 2−Ω(n). Finally, when we say that an expression is exponentially close to 1, we mean that it is 1 − 2−Ω(n).

All the k-dimensional vectors over a domain D are written as Dk. Similarly, (D)m × n denotes all m by n matrices whose entries belong to D. The Euclidean norm of a vector x = (x1, ⋯, xn) ∈ Rn is \( \left\Vert x\right\Vert =\sqrt{\sum \limits_i{x}_i^2} \), and the associated distance of two vectors x and y is dist(x, y) = ‖x − y‖. The distance function is extended to sets in the customary way: dist(x, S) = dist(S, x) = miny∈S dist(x, y), where x is a point and S is a set. We often use matrix notation to denote sets of vectors. For example, the matrix S ∈ Rn × m represents the set of n-dimensional vectors s1, ⋯, sm, where s1, ⋯, sm are the columns of S. [A | B] denotes a block matrix whose left part is A and whose right part is B. We denote the maximum norm of the column vectors in S by ‖S‖ and the number of all elements in S by |S|. For vectors x = (x1, ⋯, xn) and y = (y1, ⋯, yn) in Rn, 〈x, y〉 denotes the inner product of x and y, that is, 〈x, y〉 = ∑ixiyi. The linear space spanned by a set S of m vectors s1, ⋯, sm is denoted by span(S) = {∑ixisi : xi ∈ R for 1 ≤ i ≤ m}. For any set S of n linearly independent vectors, we define the half-open parallelepiped \( P\left(\mathrm{S}\right)=\left\{{\sum}_i{x}_i{s}_i:0\le {x}_i<1\kern0.5em \mathrm{for}\kern0.5em 1\le i\le n\right\} \). A random matrix is a matrix whose entries are each chosen randomly from some set.

We now review some basic definitions of lattice. A lattice in Rn is defined as the set of all integer combinations of n linearly independent vectors. This set of vectors is known as a basis of the lattice and it is not unique.

Definition 2.1. [15] An n-dimensional lattice Λ is the set of all integer combinations
$$ \Lambda =L\left(\mathbf{B}\right)=\left\{\sum \limits_{i=1}^n{x}_i{\mathbf{b}}_i:{x}_i\in Z\kern0.5em \mathrm{for}\kern0.5em 1\le i\le n\right\} $$
of n linearly independent vectors b1, ⋯, bn in Rn.

The set of vectors b1, ⋯, bn is called a basis for the lattice. A basis can be represented by the matrix B = (b1, ⋯, bn) ∈ Rn × n having the basis vectors as its columns. The lattice generated by B is denoted L(B). Notice that L(B) = {Bx : x ∈ Zn}, where Bx is the usual matrix-vector multiplication.

For any lattice basis B and point x, there exists a unique vector y ∈ P(B) such that y − x ∈ L(B). This vector is denoted by y = x mod B, and it can be computed in polynomial time given B and x.
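The reduction y = x mod B can be carried out by writing x in basis coordinates and keeping only the fractional parts of the coefficients. A minimal Python sketch (the basis and the example point are illustrative choices, not from the paper; exact rational arithmetic avoids rounding issues):

```python
from fractions import Fraction
import math

def mod_basis(B, x):
    """Return y = x mod B, the unique y in P(B) with y - x in L(B).
    B is a square integer basis given as a list of columns."""
    n = len(B)
    # Augmented system [B | x], solved for coordinates c with B*c = x.
    M = [[Fraction(B[j][i]) for j in range(n)] + [Fraction(x[i])] for i in range(n)]
    for col in range(n):                      # Gauss-Jordan elimination
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    c = [M[i][n] for i in range(n)]
    frac = [ci - math.floor(ci) for ci in c]  # fractional parts, each in [0, 1)
    return [sum(B[j][i] * frac[j] for j in range(n)) for i in range(n)]

B = [[2, 0], [1, 3]]                          # basis columns b1 = (2,0), b2 = (1,3)
y = mod_basis(B, [5, 4])                      # reduce the point (5, 4)
```

Here y lies in P(B) and y − x is the lattice vector B·(−1, −1).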

The dual of a lattice Λ in Rn, denoted Λ∨, is the lattice given by the set of all vectors y ∈ Rn such that 〈x, y〉 ∈ Z for all vectors x ∈ Λ.

We now recall some facts about Gaussian measures.

Definition 2.2. [17] For any vectors c, x and any r > 0, let
$$ {\rho}_{r,c}\left(\mathbf{x}\right)={\mathrm{e}}^{-\pi {\left\Vert \left(\mathbf{x}-\mathbf{c}\right)/r\right\Vert}^2} $$
be a Gaussian function centered at c and scaled by a factor of r; by default we let c = 0.
Note that \( \underset{\mathbf{x}\in {R}^n}{\int }{\rho}_{r,c}\left(\mathbf{x}\right)d\mathbf{x}={r}^n \). Hence, the Gaussian distribution around c with parameter r can be defined by its probability density function
$$ \left(\forall \mathbf{x}\in {\mathrm{R}}^n\right){\mathrm{D}}_{r,c}\left(\mathbf{x}\right)=\frac{\rho_{r,c}\left(\mathbf{x}\right)}{r^n}. $$

We know that the expected squared distance from c of a vector chosen from this distribution is nr2/(2π). So Dr, c can be viewed as concentrated in a sphere of radius \( r\sqrt{n/\left(2\pi \right)} \) centered at c. Notice that a sample from the above Gaussian distribution can be obtained by taking n independent samples from the 1-dimensional Gaussian distribution.
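Since each coordinate of Dr, c is a one-dimensional Gaussian with standard deviation \( r/\sqrt{2\pi } \), the claimed expected squared distance nr2/(2π) is easy to check empirically. A small Python sketch (the dimension, parameter, and sample count are arbitrary illustrative values):

```python
import math, random

def sample_gauss(c, r, rng):
    """Sample from D_{r,c}: each coordinate has density
    exp(-pi*(x - c_i)^2 / r^2) / r, i.e. std dev r / sqrt(2*pi)."""
    s = r / math.sqrt(2 * math.pi)
    return [rng.gauss(ci, s) for ci in c]

rng = random.Random(0)
n, r = 8, 2.0
c = [0.0] * n
N = 20000
# Average squared distance from the center over N samples.
mean_sq = sum(sum((xi - ci) ** 2 for xi, ci in zip(sample_gauss(c, r, rng), c))
              for _ in range(N)) / N
expected = n * r * r / (2 * math.pi)   # the n*r^2/(2*pi) of the text
```

With these values the empirical mean concentrates around n·r²/(2π) ≈ 5.09.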

For any vector c, real r > 0, and lattice L, define the probability distribution DL, r, c over L by
$$ \left(\forall \mathbf{x}\in L\right){\mathrm{D}}_{L,r,c}\left(\mathbf{x}\right)=\frac{{\mathrm{D}}_{r,c}\left(\mathbf{x}\right)}{{\mathrm{D}}_{r,c}(L)}=\frac{\rho_{r,c}\left(\mathbf{x}\right)}{\rho_{r,c}(L)}. $$

We refer to DL, r, c as a discrete Gaussian distribution. For large enough r, DL, r, c behaves in many respects like the continuous Gaussian distribution Dr, c. In particular, vectors distributed according to DL, r, c have an average value very close to the center c, and the expected squared distance from c is very close to nr2/(2π). When the center vector is zero, it is often omitted from the notation.

We give the definition of the smoothing parameter, introduced by Micciancio and Regev.

Definition 2.3. [17] For a lattice Λ and a positive real ε > 0, the smoothing parameter ηε(Λ) is the smallest s such that
$$ {\rho}_{1/s}\left({\Lambda}^{\mathbf{v}}\backslash \left\{0\right\}\right)\le \varepsilon . $$

Lemma 2.4. [17] For any lattice Λ, real ε > 0, s ≥ ηε(Λ), and any vector c ∈ Rn, we have ρs(Λ + c) ∈ [1 ± ε]sn det(Λ)−1.

3 New hard random lattice

In this section, we give a new method of generating a hard random lattice along with a short basis by using our regularity theorem. For hard random lattices with short bases, cryptography requires the large hard random matrix and its corresponding short basis simultaneously in order to ensure security. But the matrix in our regularity theorem must have a special form: its left part is an identity matrix and its right part is a random matrix. So we must first transform the hard random matrix into this special form.

We now describe our basic framework for constructing our new hard random lattice and the corresponding short basis.

Firstly, we must ensure that the large hard random matrix contains an invertible submatrix; we will prove that any k × l1 random matrix A uniformly chosen from \( {\left({\mathrm{Z}}_q\right)}^{k\times {l}_1} \), with great probability, contains k independent column vectors, that is, A contains an invertible submatrix.

Secondly, our regularity theorem, which can be extended to a useful corollary, will be stated and proved. Moreover, an effective parameter range will be derived in the regularity theorem, which ensures that our hard random lattice with a short basis can be generated.

Thirdly, we will give the new construction generating the hard random lattice with a short basis, and the framework of our algorithm, by using the fact from the first part of this section and the regularity theorem from the second part.

Fourthly, the concrete expression of each matrix in the algorithm is given.

Finally, the quality of the short basis S will be analyzed.

3.1 Generate a random matrix containing an invertible submatrix

Let q be an integer and Zq be the ring of residues modulo q. Let \( \mathbf{A}\in {\left({\mathrm{Z}}_q\right)}^{k\times {l}_1} \), where k, l1 ∈ Z, be a random matrix whose entries are chosen randomly from Zq.

Case 1: Let p be a prime integer and \( q={p}^{\overline{k}} \) be an integer for some integer \( \overline{k} \). It is obvious that a submatrix on (Zq)k × k contained in A is invertible if and only if the submatrix mod p is invertible.

Firstly, we choose a k-dimensional vector on Zqk randomly and the probability that the vector can be one column of the invertible submatrix of A is \( \frac{\left({\mathrm{p}}^k-1\right){\mathrm{p}}^{k\left(\overline{\mathrm{k}}-1\right)}}{q^k}=\frac{{\mathrm{p}}^k-1}{p^k} \). Then we let this column be the first column of A.

After fixing the first column of A, we choose a k-dimensional vector randomly on \( {Z}_q^k \) again, and the probability that this vector can be another column of the invertible submatrix of A is \( \frac{\left({p}^k-p\right){p}^{k\left(\overline{\mathrm{k}}-1\right)}}{q^k}=\frac{p^k-p}{p^k} \). Similarly, we let this column be the second column of A.

And so on: after fixing the first k − 1 columns of A, the probability that a randomly chosen vector can be the kth column of the invertible submatrix of A is \( \frac{p^k-{p}^{k-1}}{p^k} \).

Thus, if we choose a matrix A randomly on \( {\left({\mathrm{Z}}_q\right)}^{k\times {l}_1} \), the probability that A contains an invertible submatrix in this way is \( \prod \limits_{i=1}^k\left(1-\frac{1}{p^i}\right) \).

Case 2: From another perspective, let \( q={p}_1^{k_1}{p}_2^{k_2}\cdots {p}_t^{k_t} \), where each pi is a prime and ki ∈ Z (i = 1, 2, ⋯, t). Similarly, the probability of generating a random matrix on \( {\left({\mathrm{Z}}_q\right)}^{k\times {l}_1} \) which contains an invertible submatrix in t steps is \( \prod \limits_{s=1}^t\prod \limits_{i=1}^k\left(1-\frac{1}{{p_s}^i}\right) \).

If l1 ≥ k, then any k × l1 random matrix A uniformly chosen from \( {\left({\mathrm{Z}}_q\right)}^{k\times {l}_1} \), with great probability, contains k independent column vectors, and we may suppose these are the first k columns of A. After proper elementary matrix transformations, A can be written as \( \left(\mathbf{I}|\overline{\mathbf{A}}\right) \), where I is the k × k identity matrix and \( \overline{\mathbf{A}} \) is a k × (l1 − k) uniformly random matrix on \( {\left({\mathrm{Z}}_q\right)}^{k\times \left({l}_1-k\right)} \). The columns of A then generate all of \( {Z}_q^k \).
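The product formula above can be checked numerically: over Zp, the probability that k random columns are linearly independent (equivalently, that a random k × k matrix is invertible) should match \( \prod \limits_{i=1}^k\left(1-{p}^{-i}\right) \). A Monte Carlo sketch in Python (the prime, dimension, and trial count are illustrative choices):

```python
import random
from math import prod

def rank_mod_p(rows, p):
    """Row-reduce a matrix over GF(p) and return its rank."""
    rows = [r[:] for r in rows]
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col] % p), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], -1, p)          # modular inverse of the pivot
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col] % p:
                f = rows[i][col]
                rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def invertible_fraction(p, k, trials=4000, seed=0):
    """Fraction of random k x k matrices over Z_p that are invertible."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        m = [[rng.randrange(p) for _ in range(k)] for _ in range(k)]
        hits += rank_mod_p(m, p) == k
    return hits / trials

p, k = 2, 3
predicted = prod(1 - p ** -i for i in range(1, k + 1))   # = 0.328125 for p=2, k=3
```

The empirical fraction agrees with the predicted product up to sampling noise.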

3.2 Regularity

In this subsection, we construct and prove a theorem called the “regularity theorem.” The regularity theorem can be widely used in cryptographic applications; it states the important property that a special matrix multiplied by a vector sampled from a Gaussian distribution produces, with great probability, a uniform vector.

For A \( \in {\left({\mathrm{Z}}_q\right)}^{k\times {l}_1} \), we define
$$ {\Lambda}^{\perp}\left(\mathbf{A}\right)=\left\{\mathbf{z}\in {\mathrm{Z}}^{l_1}:\mathbf{Az}=\mathbf{0}\operatorname{mod}q\mathrm{Z}\right\}. $$

Then, we state our regularity theorem on integer lattices as follows:

Theorem 3.1. Let k be a positive integer and q ≥ 2 be an integer. Let A \( =\left({\mathbf{I}}_k|\overline{\mathbf{A}}\right)\in {\left({\mathrm{Z}}_q\right)}^{k\times {l}_1} \), where Ik ∈ (Zq)k × k is the identity matrix and \( \overline{\mathbf{A}}\in {\left({\mathrm{Z}}_q\right)}^{k\times \left({l}_1-k\right)} \) is a uniformly random matrix. Then, for all r (\( \sqrt{l_1}<r<\sqrt{l_1}{g}_s \)), \( {E}_{\overline{\mathbf{A}}}\left[{\rho}_{1/r}\left({\left({\Lambda}^{\perp }\left(\mathbf{A}\right)\right)}^{\vee}\right)\right]\le 1+{2}^{-\Omega (k)} \).

Proof For any \( \mathbf{A}\in {\left({\mathrm{Z}}_q\right)}^{k\times {l}_1} \), the dual lattice of Λ⊥(A) is
$$ {\left({\Lambda}^{\perp}\left(\mathbf{A}\right)\right)}^{\mathrm{V}}={Z}^{l_1}+\left\{\frac{1}{q}{\mathbf{A}}^T\mathbf{s}:\mathbf{s}\in {\mathrm{Z}}_q^k\right\}. $$
Then, we have
$$ {\displaystyle \begin{array}{l}{\mathrm{E}}_{\overline{\mathbf{A}}}\left[{\rho}_{1/r}{\left({\Lambda}^{\perp}\left(\mathbf{A}\right)\right)}^{\vee}\right]\\ {}=\sum \limits_{\mathbf{s}\in {Z}_q^k}{\mathrm{E}}_{\overline{\mathbf{A}}}\left[{\rho}_{1/r}\left({Z}^{l_1}+\frac{1}{q}{\mathbf{A}}^T\mathbf{s}\right)\right]\\ {}=\sum \limits_{\mathbf{s}\in {Z}_q^k}{\rho}_{1/r}\left({Z}^k+\frac{\mathbf{s}}{q}\right)\cdot {E}_{\mathbf{a}}{\left[{\rho}_{1/r}\left(Z+\frac{\left\langle \mathbf{a},\mathbf{s}\right\rangle }{q}\right)\right]}^{l_1-k}\end{array}} $$
where a is chosen uniformly from \( {\mathrm{Z}}_q^k \). For any \( \mathbf{s}={\left({s}_1,\cdots, {s}_k\right)}^T\in {Z}_q^k \), let \( {h}_{\mathbf{s}}=\gcd \left({s}_1,\cdots, {s}_k,q\right) \) and define the ideal
$$ {I}_{\mathbf{s}}=\left({h}_{\mathbf{s}}\cdot \mathrm{Z}\right)={s}_1\mathrm{Z}+\cdots +{s}_k\mathrm{Z}+q\mathrm{Z}\subseteq Z. $$
Let \( {g}_{\mathbf{s}}=\left|\frac{I_{\mathbf{s}}}{qZ}\right|=q/\gcd \left({s}_1,\cdots, {s}_k,q\right) \). Note that 〈a, s〉 is uniformly random on \( \frac{I_{\mathbf{s}}}{qZ} \). Then
$$ {E}_{\mathbf{a}}\left[{\rho}_{1/r}\left(Z+\frac{\left\langle \mathbf{a},\mathbf{s}\right\rangle }{q}\right)\right]={\left|\frac{{\mathrm{I}}_s}{qZ}\right|}^{\hbox{-} 1}{\rho}_{1/r}\left(\frac{{\mathrm{I}}_s}{q}\right) $$
Then, the expectation is
$$ {\displaystyle \begin{array}{l}{\mathrm{E}}_{\overline{\mathbf{A}}}\left[{\rho}_{1/r}{\left({\Lambda}^{\perp}\left(\mathbf{A}\right)\right)}^{\vee}\right]\\ {}=\sum \limits_{\mathbf{s}\in {Z}_q^k}{\rho}_{1/r}\left({Z}^k+\frac{\mathbf{s}}{q}\right)\cdot {E}_{\mathbf{a}}{\left[{\rho}_{1/r}\left(Z+\frac{\left\langle \mathbf{a},\mathbf{s}\right\rangle }{q}\right)\right]}^{l_1-k}\\ {}={\rho}_{1/r}\left({Z}^{l_1}\right)+\sum \limits_{\mathbf{s}\in {Z}_q^k,\mathbf{s}\ne 0\operatorname{mod}q}{\rho}_{1/r}\left({Z}^k+\frac{\mathbf{s}}{q}\right)\\ {}\cdot {E}_{\mathbf{a}}{\left[{\rho}_{1/r}\left(Z+\frac{\left\langle \mathbf{a},\mathbf{s}\right\rangle }{q}\right)\right]}^{l_1-k}\end{array}} $$
$$ {\displaystyle \begin{array}{l}\le {\rho}_{1/r}{Z}^{l_1}+\sum \limits_{I_{\mathbf{s}},\mathbf{s}\ne 0\operatorname{mod}q}{\left|\frac{I_{\mathbf{s}}}{qZ}\right|}^{-\left({\mathrm{l}}_1-\mathrm{k}\right)}\\ {}\cdot {\rho}_{1/r}{\left(\frac{I_{\mathbf{s}}}{q}\right)}^{\left({\mathrm{l}}_1-\mathrm{k}\right)}\cdot \left({\rho}_{1/r}{\left(\frac{I_{\mathbf{s}}}{q}\right)}^k-1\right)\\ {}\le {\rho}_{1/r}{Z}^{l_1}\\ {}+\sum \limits_{I_{\mathbf{s}},\mathbf{s}\ne 0\operatorname{mod}q}{\left|\frac{I_{\mathbf{s}}}{qZ}\right|}^{-\left({\mathrm{l}}_1-\mathrm{k}\right)}\cdot \left({\rho}_{1/r}{\left(\frac{I_{\mathbf{s}}}{q}\right)}^{{\mathrm{l}}_1}-1\right)\end{array}} $$
$$ {\displaystyle \begin{array}{l}=1+\sum \limits_{I_{\mathbf{s}}}{\left|\frac{I_{\mathbf{s}}}{qZ}\right|}^{-\left({\mathrm{l}}_1-\mathrm{k}\right)}\cdot \left({\rho}_{1/r}{\left(\frac{I_{\mathbf{s}}}{q}\right)}^{{\mathrm{l}}_1}-1\right)\\ {}\le 1+\sum \limits_{\mathbf{s}}{\left({\mathrm{g}}_s\right)}^{-\left({\mathrm{l}}_1-\mathrm{k}\right)}\cdot \left({\rho}_{1/r}{\left(\frac{Z}{g_s}\right)}^{{\mathrm{l}}_1}-1\right)\end{array}} $$
$$ {\displaystyle \begin{array}{l}\le 1+\sum \limits_{\mathbf{s}}{\left({\mathrm{g}}_s\right)}^{-\left({\mathrm{l}}_1-\mathrm{k}\right)}{\left(\frac{\eta }{r}\right)}^{l_1}{\left({\mathrm{g}}_s\right)}^{l_1}\left({\rho}_{1/\eta}\left({Z}^{{\mathrm{l}}_1}\right)-1\right)\\ {}\left(\mathrm{Let}\kern0.5em \eta >\frac{r}{g_s}\right)\\ {}\le 1+\sum \limits_{\mathbf{s}}{\left({\mathrm{g}}_s\right)}^{\mathrm{k}}{\left(\frac{\eta }{r}\right)}^{l_1}\left({\rho}_{1/\eta}\left({Z}^{{\mathrm{l}}_1}\right)-1\right)\end{array}} $$
Since \( q={p}_1^{k_1}{p}_2^{k_2}\cdots {p}_t^{k_t} \), we have
$$ {\displaystyle \begin{array}{l}\sum \limits_{\mathbf{s}}{\left({\mathrm{g}}_s\right)}^{\mathrm{k}}\\ {}=\prod \limits_{i=1}^t\left(1+{\mathrm{p}}_i^k+{\mathrm{p}}_i^{2k}+\cdots +{\mathrm{p}}_i^{k_ik}\right)\\ {}=\prod \limits_{i=1}^t\frac{1-{\mathrm{p}}_i^{\mathrm{k}\left({k}_i+1\right)}}{1-{\mathrm{p}}_i^k}\\ {}\le \prod \limits_{i=1}^t\frac{{\mathrm{p}}_i^{\mathrm{k}{k}_i}}{1-{\mathrm{p}}_i^{-k}}\\ {}={q}^k\cdot \prod \limits_{i=1}^t{\left(1-{\mathrm{p}}_i^{-k}\right)}^{-1}\\ {}={q}^k\cdot {e}^{\ln \prod \limits_{i=1}^t{\left(1-{\mathrm{p}}_i^{-k}\right)}^{-1}}\\ {}={q}^k\cdot {e}^{\sum \limits_{i=1}^t\ln \left(1+\frac{1}{p_i^k-1}\right)}\\ {}\le e\cdot {q}^k.\end{array}} $$
So the above expectation is bounded as follows:
$$ {\displaystyle \begin{array}{l}1+\sum \limits_{\mathbf{s}}{\left({\mathrm{g}}_s\right)}^{\mathrm{k}}{\left(\frac{\eta }{r}\right)}^{l_1}\left({\rho}_{1/\eta}\left({Z}^{{\mathrm{l}}_1}\right)-1\right)\\ {}\le 1+{q}^k\cdot e{\left(\frac{\eta }{r}\right)}^{l_1}\left({\rho}_{1/\eta}\left({Z}^{{\mathrm{l}}_1}\right)-1\right)\\ {}\le 1+{eq}^k\cdot {\left(\frac{\sqrt{l_1}}{r}\right)}^{l_1}{2}^{-2{l}_1}\left( Let\kern0.5em \eta =\sqrt{l_1}\right)\\ {}\le 1+{2}^{-\Omega (k)}\left(\sqrt{l_1}<r<\sqrt{l_1}{g}_s\right)\end{array}} $$
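The step bounding \( \sum \limits_{\mathbf{s}}{\left({g}_s\right)}^{k}\le e\cdot {q}^k \) sums gk over the distinct ideals, i.e., over the divisors g of q, which is exactly the product \( \prod \limits_{i=1}^t\left(1+{p}_i^k+\cdots +{p}_i^{k_ik}\right) \). For the moderately large k relevant here (k ≥ 2 already suffices, since \( \sum \limits_i\ln \left(1+\frac{1}{p_i^k-1}\right)\le 1 \) then holds), the bound can be checked directly; a small Python sketch with illustrative values of q and k:

```python
import math

def divisor_power_sum(q, k):
    """Sum of g^k over all divisors g of q; equals the product
    prod_i (1 + p_i^k + ... + p_i^(k_i*k)) over q's prime factorization."""
    return sum(g ** k for g in range(1, q + 1) if q % g == 0)

# Verify divisor_power_sum(q, k) <= e * q^k on a few composite and prime q.
checks = [(q, k, divisor_power_sum(q, k) <= math.e * q ** k)
          for q in (12, 97, 360, 1024) for k in (2, 3, 4)]
```

All checks pass for k ≥ 2; for k = 1 and highly composite q the constant e is not sufficient, which is consistent with the proof using large k.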

Because the matrix A contains an identity submatrix, and using Lemma 2.4, we can get the following more readily applicable corollary.

Corollary 3.2. Let Z, q, t, k, and l1 be as in Theorem 3.1. Assume that A \( =\left({\mathbf{I}}_k|\overline{\mathbf{A}}\right)\in {\left({\mathrm{Z}}_q\right)}^{k\times {l}_1} \) is chosen as in Theorem 3.1. Then, with probability 1 − 2−Ω(k) over the choice of A, the distribution of Ax ∈ (Zq)k, where each coordinate of x \( \in {\left({\mathrm{Z}}_q\right)}^{l_1} \) is chosen from a discrete Gaussian distribution with parameter r (\( \sqrt{l_1}<r<\sqrt{l_1}{g}_s \)) over Z, is within statistical distance 2−Ω(k) of the uniform distribution over (Zq)k.

Therefore, any random matrix A′ chosen uniformly from \( {{\mathrm{Z}}_q}^{k\times {l}_1} \), after proper elementary matrix transformations, can with great probability be written in the special form \( \left(\mathbf{I}|\overline{\mathbf{A}}\right) \), where \( \overline{\mathbf{A}}\in {{\mathrm{Z}}_q}^{k\times \left({l}_1-k\right)} \) is a uniformly random matrix. That is, \( \mathbf{A}\hbox{'}\mathbf{T}{=}_c\mathbf{A}=\left(\mathbf{I}|\overline{\mathbf{A}}\right)\in {{\mathrm{Z}}_q}^{k\times {l}_1} \), where T is the product of the proper elementary transformation matrices. Similarly, for a matrix \( \mathbf{R}\in {{\mathrm{Z}}_q}^{l_1\times {l}_2} \) whose entries are chosen from a Gaussian distribution with parameter r (\( \sqrt{l_1}<r<\sqrt{l_1}{g}_s \)), we know that AR is a uniformly random matrix on \( {Z}_q^{k\times {l}_2} \).
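Corollary 3.2 can be illustrated empirically: with x drawn coordinate-wise from a discrete Gaussian with parameter r > √l1, the distribution of Ax mod q should be statistically close to uniform on \( {Z}_q^k \). A Python sketch (all concrete parameters, and the simple rejection sampler, are illustrative assumptions rather than the paper's prescription):

```python
import math, random
from collections import Counter

def discrete_gauss(r, rng, tail=4):
    """Approximate sample from D_{Z,r} (density prop. to exp(-pi*x^2/r^2))
    by rejection sampling on the integer range [-tail*r, tail*r]."""
    bound = int(math.ceil(tail * r))
    while True:
        x = rng.randint(-bound, bound)
        if rng.random() < math.exp(-math.pi * x * x / (r * r)):
            return x

q, k, l1 = 5, 2, 12
r = 1.1 * math.sqrt(l1)                  # just above the sqrt(l1) threshold
rng = random.Random(7)
A_bar = [[rng.randrange(q) for _ in range(l1 - k)] for _ in range(k)]

def apply_A(x):                          # A = (I | A_bar); return A*x mod q
    return tuple((x[i] + sum(A_bar[i][j] * x[k + j] for j in range(l1 - k))) % q
                 for i in range(k))

N = 10000
counts = Counter(apply_A([discrete_gauss(r, rng) for _ in range(l1)])
                 for _ in range(N))
# Statistical distance between the empirical distribution and uniform on Z_q^2.
stat_dist = 0.5 * sum(abs(counts.get((a, b), 0) / N - q ** -2)
                      for a in range(q) for b in range(q))
```

The measured statistical distance is dominated by sampling noise, consistent with the corollary.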

3.3 Framework of our new algorithm

In this subsection, by using the regularity theorem and its corollary obtained in the previous subsection, we give the algorithm for constructing a hard random lattice with a short basis. Our construction is simple and gives a guaranteed bound on basis quality.

Now we will give the common framework in Table 1.
Table 1

The framework for constructing the hard random lattice with a short basis

$$ {\displaystyle \begin{array}{l}\mathbf{G}\cdot \mathbf{S}\\ {}=\left(\mathbf{I}|\overline{\mathbf{A}}|{\mathbf{A}}_1\right)\left(\begin{array}{cc}\mathbf{I}-\mathbf{RP}& -\left(\mathbf{R}+\mathbf{F}\right)\mathbf{B}\\ {}\mathbf{P}& \mathbf{B}\end{array}\right)\\ {}=\mathbf{0}\operatorname{mod}q\end{array}} $$

Let l = l1 + l2 for some sufficiently large dimensions l1 and l2. Before discussing our algorithm, we first take a uniformly random matrix \( \mathbf{A}\hbox{'}\in {Z}_q^{k\times {l}_1} \); then, using the proper elementary transformation matrices, we obtain the special form matrix \( \mathbf{A}=\left(\mathbf{I}|\overline{\mathbf{A}}\right) \), where \( \overline{\mathbf{A}} \) is, with great probability, a uniformly random matrix.

Our algorithm for constructing a hard random lattice with a short basis takes as input the uniformly random matrix \( \overline{\mathbf{A}}\in {Z}_q^{k\times \left({l}_1-k\right)} \), and \( \overline{\mathbf{A}} \) is extended to the matrix \( \mathbf{G}=\left(\mathbf{A}|{\mathbf{A}}_1\right)=\left(\mathbf{I}|\overline{\mathbf{A}}|{\mathbf{A}}_1\right)\in {Z}_q^{k\times l} \) by generating \( {\mathbf{A}}_1\in {Z}_q^{k\times {l}_2} \) together with a short basis \( \mathbf{S}=\left(\begin{array}{cc}\mathbf{I}-\mathbf{RP}& -\left(\mathbf{R}+\mathbf{F}\right)\mathbf{B}\\ {}\mathbf{P}& \mathbf{B}\end{array}\right)\in {Z}^{l\times l} \) of \( {\Lambda}^{\perp}\left(\mathbf{G}\right) \).

From Table 1, we can see that the output matrix S has a block structure containing four component matrices B, F, P, and R. The properties of the four matrices are as follows:
  • B is nonsingular and typically unimodular;

  • F has entries that grow geometrically, related to the parameter q and the special matrix A;

  • P is a short matrix depending on F, chosen so that FP has a prescribed value;

  • R is a short random matrix whose entries are drawn from a Gaussian distribution with parameter r, where \( \sqrt{l_1}<r<\sqrt{l_1}{g}_s \).

The matrix A′ is a uniformly random matrix on \( {\mathrm{Z}}^{k\times {l}_1} \), so the matrix A′T is also uniformly random, which follows from the uniformity of A′. The matrix (A′T | A′T(R + F)) = (A | A(R + F)) is near-uniformly random because of the random choice of R, whose entries follow a Gaussian distribution, by Theorem 3.1 and its corollary.

Since \( \mathbf{A}=\left(\mathbf{I}|\overline{\mathbf{A}}\right)\in {\mathrm{Z}}_q^{k\times {l}_1} \), let \( {\Lambda}^{\perp}\left(\mathbf{A}\right)=\left\{\mathbf{x}:\mathbf{x}\in {Z}^{l_1},\mathbf{Ax}=\mathbf{0}\operatorname{mod}q\right\} \). Obviously, a basis matrix of the lattice Λ⊥(A) can be obtained, namely \( \mathbf{H}=\left(\begin{array}{cc}\mathrm{q}\mathbf{I}& -\overline{\mathbf{A}}\\ {}\mathbf{0}& \mathbf{I}\end{array}\right) \).
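That the columns of H lie in Λ⊥(A) follows from AH = (qI | −Ā + Ā) = 0 mod q, which a few lines of Python confirm for an illustrative choice of q, k, and Ā:

```python
q, k, l1 = 7, 2, 4
A_bar = [[3, 5], [1, 6]]                     # example entries of A-bar over Z_q
# H = (qI, -A_bar; 0, I), written out row by row as an l1 x l1 matrix.
H = [[q if i == j else 0 for j in range(k)]
     + [-A_bar[i][j] for j in range(l1 - k)] for i in range(k)]
H += [[0] * k + [1 if j == i else 0 for j in range(l1 - k)]
      for i in range(l1 - k)]
A = [[1 if i == j else 0 for j in range(k)] + A_bar[i] for i in range(k)]
AH = [[sum(A[i][t] * H[t][j] for t in range(l1)) % q for j in range(l1)]
      for i in range(k)]                     # should be the zero matrix mod q
```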

Let D = R + F, then we can get that
$$ \left(\mathbf{A}|\mathbf{0}\right)\left(\begin{array}{cc}\mathbf{I}& \mathbf{D}\\ {}\mathbf{0}& \mathbf{I}\end{array}\right)\left(\begin{array}{cc}\mathbf{I}& -\mathbf{D}\\ {}\mathbf{0}& \mathbf{I}\end{array}\right)\left(\begin{array}{cc}\mathbf{H}& \mathbf{0}\\ {}\mathbf{0}& \mathbf{I}\end{array}\right)=\mathbf{0}\operatorname{mod}\mathrm{q}, $$
That is,
$$ \left(\mathbf{A}|\mathbf{AD}\right)\left(\begin{array}{cc}\mathbf{H}& -\mathbf{D}\\ {}\mathbf{0}& \mathbf{I}\end{array}\right)=\mathbf{0}\operatorname{mod}\mathrm{q}, $$
and we let the matrix B be a nonsingular matrix, then the block matrix \( \left(\begin{array}{cc}\mathbf{I}& \mathbf{0}\\ {}\mathbf{P}& \mathbf{B}\end{array}\right) \) is also a nonsingular matrix, so we can get that
$$ \left(\mathbf{A}|\mathbf{A}\left(\mathbf{R}+\mathbf{F}\right)\right)\left(\begin{array}{cc}\mathbf{H}& -\left(\mathbf{R}+\mathbf{F}\right)\\ {}\mathbf{0}& \mathbf{I}\end{array}\right)\left(\begin{array}{cc}\mathbf{I}& \mathbf{0}\\ {}\mathbf{P}& \mathbf{B}\end{array}\right)=\mathbf{0}\operatorname{mod}\mathrm{q}, $$
that is,
$$ \left(\mathbf{A}|\mathbf{A}\left(\mathbf{R}+\mathbf{F}\right)\right)\left(\begin{array}{cc}\mathbf{H}-\left(\mathbf{R}+\mathbf{F}\right)\mathbf{P}& -\left(\mathbf{R}+\mathbf{F}\right)\mathrm{B}\\ {}\mathbf{P}& \mathbf{B}\end{array}\right)=\mathbf{0}\operatorname{mod}\mathrm{q}, $$
The entries of the short random matrix R follow a Gaussian distribution, which ensures that the matrix (A | A(R + F)) is uniformly random with great probability. Furthermore, H is a basis of Λ⊥(A), I is the identity matrix, and the construction ensures H = I + FP; thus we have that
$$ \left(\begin{array}{cc}\mathbf{H}-\left(\mathbf{R}+\mathbf{F}\right)\mathbf{P}& -\left(\mathbf{R}+\mathbf{F}\right)\mathrm{B}\\ {}\mathbf{P}& \mathbf{B}\end{array}\right)=\left(\begin{array}{cc}\mathbf{I}-\mathbf{RP}& -\left(\mathbf{R}+\mathbf{F}\right)\mathrm{B}\\ {}\mathbf{P}& \mathbf{B}\end{array}\right) $$

is a nonsingular matrix.

In the block structure, we know the matrices P, B, and R are short, and so are RP and RB. But the norm of F is large, so we must use B to reduce the norm, so that FB is also short. Then, the block matrix \( \left(\begin{array}{cc}\mathbf{I}-\mathbf{RP}& -\left(\mathbf{R}+\mathbf{F}\right)\mathbf{B}\\ {}\mathbf{P}& \mathbf{B}\end{array}\right) \) is short. Simultaneously, we must ensure that the matrix equation I + FP = H holds.

Lemma 3.3. The algorithm shows that if the columns of I + FP lie in Λ⊥(A), then the columns of S lie in Λ⊥(G). Moreover, S is a basis of Λ⊥(G) if and only if I + FP is a basis of Λ⊥(A).

Proof It is obvious that A(I + FP) = 0 mod q implies GS = 0 mod q; that is, if the columns of I + FP lie in Λ⊥(A), then the columns of S lie in Λ⊥(G).

By the block structure of S, the determinant of S is
$$ \left|\mathbf{S}\right|=\left|\begin{array}{cc}\mathbf{I}-\mathbf{RP}& -\left(\mathbf{R}+\mathbf{F}\right)\mathbf{B}\\ {}\mathbf{P}& \mathbf{B}\end{array}\right|=\left|\mathbf{I}+\mathbf{F}\mathbf{P}\right|\left|\mathbf{B}\right| $$
Since the matrix B is nonsingular, that is, |B| ≠ 0, the block matrix S is nonsingular if and only if the matrix I + FP is nonsingular. Because all the columns of A1 = A(R + F) can be linearly represented by the columns of A, the additive subgroup of \( {Z}_q^k \) generated by the columns of A is exactly the subgroup generated by the columns of G = (A | A1). Therefore,
$$ \det \left({\Lambda}^{\perp}\left(\mathbf{G}\right)\right)=\left|\mathbf{G}\right|=\det \left({\Lambda}^{\perp}\left(\mathbf{A}\right)\right) $$

Then S is a basis of Λ(G) exactly when I + FP is a basis of Λ(A).

Now we know that the block matrix S is a basis of Λ⊥(G); the remaining requirement is that S must be relatively short. By the above discussion, the matrices B and R are short, where B is unimodular and R is chosen from a Gaussian distribution on \( {Z}^{l_1\times {l}_2} \); moreover, the matrix P must be short, and the columns of I + FP are ensured to be nontrivial vectors in Λ⊥(A), so a part of the matrix F must be long. Simultaneously, the matrix FB must be short, because −(R + F)B = −RB − FB and RB is already short.

3.4 Concrete expression

The framework of our algorithm for constructing the hard random lattice with a short basis was given in the previous subsection. In this subsection, the concrete expression of each matrix in our algorithm will be shown as follows.

Given any random matrix A′ uniform on \( {Z_{\mathrm{q}}}^{k\times {l}_1} \), after proper elementary matrix transformations, A′ can with great probability be written as \( \left(\mathbf{I}|\overline{\mathbf{A}}\right) \), where \( \overline{\mathbf{A}} \) is a uniform matrix on \( {Z_{\mathrm{q}}}^{k\times \left({l}_1-k\right)} \). The chosen uniformly random matrix A′ thus corresponds to the form \( \mathbf{A}=\left(\mathbf{I}|\overline{\mathbf{A}}\right) \) with \( \overline{\mathbf{A}} \) uniformly random, so we may work with \( \overline{\mathbf{A}} \) directly. By the discussion in the last subsection, H is a basis of Λ⊥(A), and we let H = I + FP. So FP = H − I = \( \left(\begin{array}{cc}\left(q-1\right)\mathbf{I}& -\overline{\mathbf{A}}\\ {}\mathbf{0}& \mathbf{0}\end{array}\right) \). Let d be an integer and m = ⌈logdq⌉.

Definition of F: The matrix F has the form
$$ \mathbf{F}=\left({\mathbf{F}}^{(1)}|{\mathbf{F}}^{(2)}|\cdots |{\mathbf{F}}^{\left({\mathrm{l}}_1\right)}|\mathbf{0}\right)\in {\mathrm{Z}}^{l_1\times {l}_2}, $$
which has l1 + 1 blocks: l1 blocks F(i) (i = 1, 2, ⋯, l1), each of which has m columns, and one zero block holding the remaining l2 − l1m columns. The first k blocks have the structure that the vector \( \mathbf{f}=\left({f}_1,{f}_2,\cdots, {f}_m\right)=\left(\left\lfloor \frac{q-1}{d^{m-1}}\right\rfloor, \left\lfloor \frac{q-1}{d^{m-2}}\right\rfloor, \kern0.6em \cdots, \left\lfloor \frac{q-1}{d}\right\rfloor, q-1\right) \) is the ith row of F(i) (i = 1, 2, ⋯, k) and the other rows are zero vectors. The other l1 − k blocks have the structure \( {\mathbf{F}}^{\left(k+i\right)}=\left(\left\lfloor \frac{-{\mathbf{a}}^{(i)}}{d^{m-1}}\right\rfloor, \left\lfloor \frac{-{\mathbf{a}}^{(i)}}{d^{m-2}}\right\rfloor, \cdots, \left\lfloor \frac{-{\mathbf{a}}^{\left(\mathrm{i}\right)}}{d}\right\rfloor, -{\mathbf{a}}^{\left(\mathrm{i}\right)}\right)\kern0.24em \left(i=1,2,\cdots, {l}_1-k\right) \), where a(i) denotes the ith column of \( \overline{\mathbf{A}} \). We can see that the entry f1 in each F(i) is in the range [0, d − 1].

Definition of P: The columns of the matrix \( \mathbf{P}=\left({\mathbf{p}}_1,{\mathbf{p}}_2,\cdots, {\mathbf{p}}_{l_1}\right)\in {\mathrm{Z}}^{l_2\times {l}_1} \) are standard basis vectors, namely \( {\mathbf{p}}_j={\mathbf{e}}_{jm}\in {Z}^{l_2} \) for j = 1, 2, …, l1. This construction of P guarantees that FP=H−I and \( {\left\Vert {\mathbf{p}}_j\right\Vert}^2=1 \) (j = 1, 2, …, l1).
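To make the definitions of F and P concrete, the following sketch (hypothetical small parameters and a hypothetical Ā) builds both matrices and checks that FP equals ((q−1)I | −Ā) in the first k rows with zero rows below, exactly as required above:

```python
import math

q, d = 17, 3
m = math.ceil(math.log(q, d))       # columns per block; m = 3 since 3^3 >= 17
k, l1 = 2, 4
l2 = l1 * m + 2                     # a few spare columns for the zero block

# hypothetical uniform part A_bar in Z_q^{k x (l1 - k)}
A_bar = [[5, 11],
         [9, 2]]

# the row vector f = (floor((q-1)/d^(m-1)), ..., floor((q-1)/d), q-1)
f = [(q - 1) // d ** (m - 1 - j) for j in range(m)]

# F = (F^(1) | ... | F^(l1) | 0) in Z^{l1 x l2}
F = [[0] * l2 for _ in range(l1)]
for i in range(k):                  # block F^(i): f sits in row i
    for j in range(m):
        F[i][i * m + j] = f[j]
for i in range(l1 - k):             # block F^(k+i): entrywise floor of -a^(i)/d^j
    for row in range(k):
        a = A_bar[row][i]
        for j in range(m):
            F[row][(k + i) * m + j] = (-a) // d ** (m - 1 - j)

# P in Z^{l2 x l1}: column j is the standard basis vector e_{jm}
P = [[0] * l1 for _ in range(l2)]
for j in range(l1):
    P[(j + 1) * m - 1][j] = 1       # 1 at (1-indexed) position jm

# FP picks the last column of each block of F
FP = [[sum(F[r][t] * P[t][c] for t in range(l2)) for c in range(l1)]
      for r in range(l1)]
```

Each column p_j selects the mth (last) column of block F^(j), which is (q−1)e_j for j ≤ k and −a^(j−k) otherwise, so FP = H − I as claimed.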

Definition of B: Let the matrix \( \mathbf{B}\in {Z}^{l_2\times {l}_2} \) be a unimodular matrix such that FB is short. First let Bm ∈ Zm × m be the unimodular matrix whose diagonal entries are 1, whose superdiagonal entries are −d, and whose other entries are zero, that is,
$$ {\mathbf{B}}_m=\left(\begin{array}{cccccc}1& -d& 0& \cdots & 0& 0\\ {}0& 1& -d& \cdots & 0& 0\\ {}0& 0& 1& \cdots & 0& 0\\ {}\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ {}0& 0& 0& \cdots & 1& -d\\ {}0& 0& 0& \cdots & 0& 1\end{array}\right). $$
Then define \( \mathbf{B}\in {Z}^{l_2\times {l}_2} \) to be the block-diagonal matrix consisting of l1 copies of the block Bm followed by one identity block \( \mathbf{I}\in {Z}^{\left({l}_2-{l}_1m\right)\times \left({l}_2-{l}_1m\right)} \) on the main diagonal, with zero blocks elsewhere, that is,
$$ \mathbf{B}=\left(\begin{array}{ccccc}{\mathbf{B}}_m& \mathbf{0}& \cdots & \mathbf{0}& \mathbf{0}\\ {}\mathbf{0}& {\mathbf{B}}_m& \cdots & \mathbf{0}& \mathbf{0}\\ {}\vdots & \vdots & \ddots & \vdots & \vdots \\ {}\mathbf{0}& \mathbf{0}& \cdots & {\mathbf{B}}_m& \mathbf{0}\\ {}\mathbf{0}& \mathbf{0}& \cdots & \mathbf{0}& \mathbf{I}\end{array}\right) $$
and we can see that \( {\left\Vert {\mathbf{b}}_j\right\Vert}^2\le {d}^2+1 \) (j = 1, 2, …, l2).
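The matrices Bm and B are straightforward to construct; the sketch below (hypothetical parameters d, m, l1) builds them and confirms the column-norm bound. Since B is upper triangular with unit diagonal, det(B) = 1, so B is indeed unimodular:

```python
d, m = 3, 4
l1, spare = 2, 3
l2 = l1 * m + spare

# B_m: 1 on the diagonal, -d on the superdiagonal, 0 elsewhere
Bm = [[1 if i == j else (-d if j == i + 1 else 0) for j in range(m)]
      for i in range(m)]

# B: block diagonal with l1 copies of B_m and a trailing identity block
B = [[0] * l2 for _ in range(l2)]
for blk in range(l1):
    for i in range(m):
        for j in range(m):
            B[blk * m + i][blk * m + j] = Bm[i][j]
for i in range(l1 * m, l2):
    B[i][i] = 1

# each column of B contains at most a 1 and a -d, so ||b_j||^2 <= d^2 + 1
col_sq_norms = [sum(B[i][j] ** 2 for i in range(l2)) for j in range(l2)]
```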

Then, \( \mathbf{FB}=\left({\mathbf{F}}^{(1)}{\mathbf{B}}_m|{\mathbf{F}}^{(2)}{\mathbf{B}}_m|\cdots |{\mathbf{F}}^{\left({l}_1\right)}{\mathbf{B}}_m|\mathbf{0}\right)\in {\mathrm{Z}}^{l_1\times {l}_2} \), and every entry of FB is bounded by d in absolute value. So the norm of each column of FB is at most \( d\sqrt{l_1} \), i.e., FB is short.
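As a quick sanity check on the smallness of FB: for the first k blocks, column j of f·Bm equals f_j − d·f_{j−1} (with f_1 in the first column), which is a remainder on division by d and hence small. The sketch below uses hypothetical q and d:

```python
import math

q, d = 101, 4
m = math.ceil(math.log(q, d))       # m = 4, since 4^4 = 256 >= 101
f = [(q - 1) // d ** (m - 1 - j) for j in range(m)]

# column j of f * B_m is f_j - d * f_{j-1} (f_1 for the first column);
# every entry is a remainder mod d, hence in [0, d-1]
fB = [f[0]] + [f[j] - d * f[j - 1] for j in range(1, m)]
```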

Definition of R: The entries of the matrix \( \mathbf{R}\in {\mathrm{Z}}^{l_1\times {l}_2} \) are sampled from a Gaussian distribution with parameter r, where \( \sqrt{l_1}<r<\sqrt{l_1}{g}_s \). Then, by Theorem 3.1, the entries of (I|A)R are uniform on Zq with high probability. Because the parameter r is relatively large, vectors distributed according to this Gaussian distribution have mean very close to zero and expected squared norm very close to \( {r}^2{l}_1/\left(2\pi \right) \). So the norm of each column of R is at most roughly \( r\sqrt{l_1/\left(2\pi \right)} \).
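The following sketch illustrates the sampling of R. It approximates a discrete Gaussian with parameter r by rounding a continuous Gaussian with standard deviation r/√(2π) per coordinate (a rough stand-in for illustration only; the construction itself requires a proper discrete Gaussian sampler) and checks empirically that the expected squared column norm is close to r²l1/(2π):

```python
import math
import random

random.seed(1)
l1, r = 16, 40.0                     # hypothetical dimension and parameter
sigma = r / math.sqrt(2 * math.pi)   # per-coordinate standard deviation

def sample_column(n, sigma):
    """Round a continuous Gaussian: a rough stand-in for a discrete
    Gaussian sampler with parameter r (illustration only)."""
    return [round(random.gauss(0.0, sigma)) for _ in range(n)]

# empirically, the expected squared column norm is close to r^2 * l1 / (2*pi)
trials = 2000
avg_sq = sum(sum(x * x for x in sample_column(l1, sigma))
             for _ in range(trials)) / trials
expected = r ** 2 * l1 / (2 * math.pi)
```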

The discussion above shows that our algorithm for constructing a hard random lattice with a short basis is sound and that the resulting basis of the dual lattice is indeed short. In the next subsection, we analyze the quality of the basis matrix S to quantify how short it is.

3.5 Analysis and comparison

We analyze the norm of the basis matrix S in this subsection. First, we have
$$ {\left\Vert \mathbf{S}\right\Vert}^2\le \max \left\{{\left\Vert \mathbf{I}-\mathbf{RP}\right\Vert}^2+{\left\Vert \mathbf{P}\right\Vert}^2,{\left(\left\Vert \mathbf{RB}\right\Vert +\left\Vert \mathbf{FB}\right\Vert \right)}^2+{\left\Vert \mathbf{B}\right\Vert}^2\right\} $$
From the discussion in the previous subsection, we know that \( {\left\Vert \mathbf{P}\right\Vert}^2=1 \) and \( {\left\Vert \mathbf{I}-\mathbf{RP}\right\Vert}^2\le {\left(r\sqrt{l_1/\left(2\pi \right)}+1\right)}^2 \). Then,
$$ {\left\Vert \mathbf{I}-\mathbf{RP}\right\Vert}^2+{\left\Vert \mathbf{P}\right\Vert}^2\le 2{r}^2{l}_1/\left(2\pi \right). $$
Next we consider the other part: \( {\left\Vert \mathbf{FB}\right\Vert}^2\le {l}_1{d}^2 \), \( {\left\Vert \mathbf{RB}\right\Vert}^2\le {\left(d+1\right)}^2{r}^2{l}_1/\left(2\pi \right) \), and \( {\left\Vert \mathbf{B}\right\Vert}^2\le {d}^2+1 \). Since \( \left\Vert \mathbf{FB}\right\Vert \le \left\Vert \mathbf{RB}\right\Vert \) for our choice of parameters and the term \( {\left\Vert \mathbf{B}\right\Vert}^2 \) is comparatively negligible, we have
$$ {\displaystyle \begin{array}{l}{\left(\left\Vert \mathbf{RB}\right\Vert +\left\Vert \mathbf{FB}\right\Vert \right)}^2+{\left\Vert \mathbf{B}\right\Vert}^2\\ {}\le 4{\left\Vert \mathbf{RB}\right\Vert}^2\\ {}\le 4{\left(d+1\right)}^2{r}^2{l}_1/\left(2\pi \right)\end{array}} $$
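For illustrative parameter values, the RB branch of the maximum dominates, so the bound on ‖S‖² is governed by the term 4(d+1)²r²l1/(2π). A minimal numeric check, assuming hypothetical values of l1, d, and r with r > √l1:

```python
import math

l1, d, r = 64, 3, 16.0               # illustrative values with r > sqrt(l1)
branch_rp = 2 * r ** 2 * l1 / (2 * math.pi)                  # I - RP part
branch_rb = 4 * (d + 1) ** 2 * r ** 2 * l1 / (2 * math.pi)   # RB part
s_sq_bound = max(branch_rp, branch_rb)                       # bound on ||S||^2
```

The ratio of the two branches is 2(d+1)², so for any d ≥ 1 the RB part always dominates.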
Our construction for generating a hard random lattice with a short basis takes a new perspective: it is the first to use a random matrix whose entries obey a Gaussian distribution, rather than independent {0, ±1}-valued random variables as in [12] or variables uniform over {0, 1} as in [11] (see Table 2), and the parameter q is large. This ensures that our algorithm has wider applicability in cryptography. Moreover, our construction is more concrete than previous ones, which makes it easier to implement in practical applications. Furthermore, the problem we discuss is foundational to lattice-based cryptography, so it can resist attacks by quantum computers.
Table 2 The comparison of the distributions of the random matrix R

4 Conclusion

In this paper, we first proved that a uniformly random matrix contains an invertible submatrix and that, by elementary transformations, it can be brought into a special form consisting of an identity part and a uniformly random part. Second, a useful regularity theorem and its corollary over Zq were proved, from which the required parameters can be obtained. Third, using this fact and corollary, a new construction of a hard random lattice with a short basis was proposed, together with the framework of our algorithm. Fourth, the concrete form of the construction, that is, the concrete form of the matrices in our algorithm, was given. Lastly, we analyzed the quality of the short basis S, which shows that the quality of the short basis produced by our algorithm is the same as that of the Alwen and Peikert algorithm.



Acknowledgements

This work is supported by the National Key R&D Program of China (no. 2017YFB0802400), the National Science Foundation of China (no. 61373171), and the 111 Project (no. B08038).



Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Authors’ contributions

All authors actively participated in the discussions, and read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

State Key Laboratory of Integrated Services Networks, Xidian University, Xi’an, Shaanxi, 710071, People’s Republic of China
Department of Mathematics, Xi’an Polytechnic University, Xi’an, Shaanxi, 710048, People’s Republic of China
Computer Engineering College, Jimei University, Xiamen, Fujian, 361021, People’s Republic of China


  1. C. Peikert, An efficient and parallel Gaussian sampler for lattices, in Annual Cryptology Conference (Springer, Berlin, 2010), pp. 80–97
  2. Z. Brakerski, V. Vaikuntanathan, Fully homomorphic encryption from ring-LWE and security for key dependent messages, in Advances in Cryptology – CRYPTO 2011 (2011), pp. 505–524
  3. S. Dov Gordon, J. Katz, V. Vaikuntanathan, A group signature scheme from lattice assumptions, in Advances in Cryptology – ASIACRYPT 2010, Lecture Notes in Computer Science, vol 6477 (2010), pp. 395–412
  4. V. Lyubashevsky, C. Peikert, O. Regev, On ideal lattices and learning with errors over rings, in Advances in Cryptology – EUROCRYPT 2010 (2010), pp. 1–23
  5. V. Lyubashevsky, C. Peikert, O. Regev, A toolkit for ring-LWE cryptography, in Advances in Cryptology – EUROCRYPT 2013 (2013), pp. 35–54
  6. S. Ling, K. Nguyen, D. Stehlé, et al., Improved zero-knowledge proofs of knowledge for the ISIS problem, and applications, in Public-Key Cryptography – PKC 2013 (Springer, Berlin, 2013), pp. 107–124
  7. D. Micciancio, C. Peikert, Trapdoors for lattices: simpler, tighter, faster, smaller, in Advances in Cryptology – EUROCRYPT 2012 (2012), pp. 700–718
  8. O. Regev, Lattice-based cryptography, in Annual International Cryptology Conference (Springer, Berlin, Heidelberg, 2006), pp. 131–141
  9. C. Peikert, B. Waters, Lossy trapdoor functions and their applications. SIAM J. Comput. 40(6), 1803–1844 (2011)
  10. T. Pöppelmann, Efficient implementation of ideal lattice-based cryptography. Inf. Technol. 59(6), 305–309 (2017)
  11. M. Ajtai, Generating hard instances of the short basis problem, in International Colloquium on Automata, Languages, and Programming (1999), pp. 1–9
  12. J. Alwen, C. Peikert, Generating shorter bases for hard random lattices. Theory Comput. Syst. 48(3), 535–553 (2011)
  13. V. Lyubashevsky, D. Micciancio, On bounded distance decoding, unique shortest vectors, and the minimum distance problem, in Advances in Cryptology – CRYPTO 2009 (2009), pp. 577–594
  14. T. Laarhoven, M. Mosca, J. van de Pol, Finding shortest lattice vectors faster using quantum search. Des. Codes Crypt. 77(2–3), 375–400 (2015)
  15. M. Ajtai, Generating hard instances of lattice problems, in ACM Symposium on Theory of Computing – STOC (1996), pp. 99–108
  16. C. Gentry, C. Peikert, V. Vaikuntanathan, Trapdoors for hard lattices and new cryptographic constructions, in Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing (2008), pp. 197–206
  17. D. Micciancio, O. Regev, Worst-case to average-case reductions based on Gaussian measures. SIAM J. Comput. 37(1), 267–302 (2007)


© The Author(s). 2019