Solving CVP in $2^{n+o(n)}$ time via Discrete Gaussian Sampling. Divesh Aggarwal (EPFL), Daniel Dadush (CWI), Noah Stephens-Davidowitz (NYU).
Slide 1
Solving CVP in $2^{n+o(n)}$ time via Discrete Gaussian Sampling
Divesh Aggarwal, École Polytechnique Fédérale de Lausanne (EPFL)
Daniel Dadush, Centrum Wiskunde & Informatica (CWI)
Noah Stephens-Davidowitz, New York University (NYU)

Slide 2: Lattices
A lattice is the set of all integral combinations of some basis $B = (b_1, \dots, b_n)$: $\mathcal{L}(B) = \{\sum_{i=1}^n z_i b_i : z_i \in \mathbb{Z}\}$ denotes the lattice generated by $B$. Define $\lambda_1(\mathcal{L}) = \min_{y \in \mathcal{L} \setminus \{0\}} \|y\|$, the length of a shortest nonzero lattice vector.
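To make the definition concrete, here is a small self-contained sketch (the coefficient window and helper names are illustrative choices, not from the talk) that enumerates lattice points over a finite coefficient window and estimates $\lambda_1$ for tiny, well-conditioned bases:

```python
from itertools import product

def lattice_points(basis, coeff_range):
    """All integral combinations sum_i z_i * b_i with each z_i in coeff_range."""
    dim = len(basis[0])
    return [tuple(sum(z_i * b[j] for z_i, b in zip(z, basis)) for j in range(dim))
            for z in product(coeff_range, repeat=len(basis))]

def lambda1(basis, coeff_range=range(-3, 4)):
    """Length of the shortest nonzero vector found in a finite coefficient
    window; exact only when the window provably contains a shortest vector."""
    best = min(sum(x * x for x in p)
               for p in lattice_points(basis, coeff_range) if any(p))
    return best ** 0.5
```

For $\mathbb{Z}^2$ this returns $\lambda_1 = 1$; for the basis $\{(2,0), (1,2)\}$ it returns $2$.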

Slide 3: Closest Vector Problem (CVP)
Given: a lattice basis $B$ and a target $t \in \mathbb{R}^n$.
Goal: compute $y \in \mathcal{L}(B)$ minimizing $\|y - t\|$.
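As a baseline, CVP in very small dimension can be solved by exhaustive search. The following sketch (the window size and function name are illustrative assumptions, not part of the talk) assumes some closest vector has coefficients inside the search window:

```python
from itertools import product

def cvp_bruteforce(basis, target, coeff_range=range(-5, 6)):
    """Exhaustively search integral coefficient vectors z, returning the
    lattice vector B*z minimizing the distance to the target.  Correct only
    if some closest vector has all coefficients inside coeff_range."""
    best_vec, best_d2 = None, float("inf")
    for z in product(coeff_range, repeat=len(basis)):
        v = tuple(sum(z_i * b[j] for z_i, b in zip(z, basis))
                  for j in range(len(target)))
        d2 = sum((vj - tj) ** 2 for vj, tj in zip(v, target))
        if d2 < best_d2:
            best_vec, best_d2 = v, d2
    return best_vec, best_d2 ** 0.5
```

On $\mathbb{Z}^2$ with target $(0.6, 0.2)$ this returns $(1, 0)$ at distance $\sqrt{0.2}$. Of course the search takes time exponential in $n$ with a large base; the point of the talk is to get $2^{n+o(n)}$.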

Slide 4: Applications of SVP & CVP; Hardness of CVP
Applications:
- Optimization: Integer and Linear Programming
- Number Theory: Factoring Polynomials, Number Field Sieve
- Communication Theory: Decoding Gaussian Channels
- Database Search: Approximate Nearest Neighbor Search
- Cryptanalysis: RSA with Small Exponent, Knapsack Cryptosystems
- Cryptography: Lattice-Based Crypto (hardness of LWE / SIS)
Hardness: $\gamma$-SVP reduces to $\gamma$-CVP (CVP is the "hardest" lattice problem). As the approximation factor $\gamma$ grows, the complexity of $\gamma$-CVP drops from NP-hard, through NP ∩ coAM and NP ∩ coNP, down to P.

Slide 5: Main Result

Method              Apx              Time          Space         Authors
Basis Reduction     $2^{O(n)}$       poly          poly          LLL 83, Sch. 85, Bab. 86, MV 10
Enumeration         exact            $n^{O(n)}$    poly          LLL 83, Kan. 87, ..., HS 08
Randomized Sieve    $1+\varepsilon$  $2^{O(n)}$    $2^{O(n)}$    AKS 01, AKS 02, BN 07, ...
Voronoi Cell        exact            $2^{2n}$      $2^{n}$       SFS 09, MV 13
Discrete Gaussian   exact            $2^{n+o(n)}$  $2^{n+o(n)}$  ADS 15

Slide 6: Outline
1. Approximate CVP via (shifted) DGS sampling: the relation between the parameter $s$ and the approximation factor.
2. A shifted DGS sampler: how many samples we can generate at the desired parameters.
3. Sample clustering and recursion for exact CVP: learning the coordinates of a closest vector.

Slide 7: Shifted Discrete Gaussian
For $s > 0$, let $\rho_s(x) = e^{-\pi \|x\|^2 / s^2}$. The discrete Gaussian distribution $D_{\mathcal{L}-t,s}$ over the shifted lattice $\mathcal{L} - t$ with parameter $s$ is defined by
$$\Pr_{X \sim D_{\mathcal{L}-t,s}}[X = y] = \frac{\rho_s(y)}{\rho_s(\mathcal{L}-t)} \quad \text{for } y \in \mathcal{L}-t,$$
where $\rho_s(\mathcal{L}-t) = \sum_{y \in \mathcal{L}-t} \rho_s(y)$.
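For intuition, a one-dimensional shifted discrete Gaussian $D_{\mathbb{Z}-c,\,s}$ can be sampled directly from a truncated probability table. This sketch (the truncation radius and names are illustrative choices, not from the talk) follows the definition above:

```python
import math
import random

def dgs_pmf(c, s, tail=30):
    """Truncated pmf of D_{Z - c, s}: support {k - c : |k| <= tail} weighted by
    rho_s(x) = exp(-pi x^2 / s^2); for moderate s the truncated mass is negligible."""
    support = [k - c for k in range(-tail, tail + 1)]
    weights = [math.exp(-math.pi * x * x / (s * s)) for x in support]
    total = sum(weights)
    return support, [w / total for w in weights]

def dgs_sample(c, s, rng=random):
    """Draw one sample from (the truncation of) D_{Z - c, s}."""
    support, probs = dgs_pmf(c, s)
    return rng.choices(support, weights=probs, k=1)[0]
```

For $c = 0$ the distribution is symmetric and peaks at $0$, matching the picture on the next slide: the smaller $s$ is, the more the mass concentrates on the shortest elements of the coset.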

Slide 8: Shifted Discrete Gaussian
The discrete Gaussian becomes more concentrated as the parameter $s$ decreases.

Slide 9: Discrete Gaussian and CVP
Closest vectors to $t$ in $\mathcal{L}$ correspond to shortest vectors in $\mathcal{L} - t$.
Question: can we hit closest vectors by sampling from $D_{\mathcal{L}-t,s}$ for small enough $s$?

Slide 10: Discrete Gaussian and CVP
Problem: $t$ can have arbitrarily many approximate closest vectors! Let $\mathrm{dist}(t, \mathcal{L})$ denote the distance of $t$ to $\mathcal{L}$.

Slide 11: Discrete Gaussian and CVP
Problem: $D_{\mathcal{L}-t,s}$ has little chance of hitting a closest vector unless $s$ is tiny.

Slide 12: Approximate CVP via DGS
Lemma: for $X \sim D_{\mathcal{L}-t,s}$, with overwhelming probability $\|X\| \leq \mathrm{dist}(t,\mathcal{L}) + O(s\sqrt{n})$.
Note that $X \in \mathcal{L} - t$, so $X + t$ is a lattice vector at distance $\|X\|$ from $t$.

Slide 13: Approximate CVP via DGS
To get a $(1+\varepsilon)$-approximate closest vector to $t$, it suffices to sample once from $D_{\mathcal{L}-t,s}$ for $s = O(\varepsilon \cdot \mathrm{dist}(t,\mathcal{L})/\sqrt{n})$.

Slide 14: Hermite-Korkine-Zolotarev Basis
For a basis $B = (b_1, \dots, b_n)$, define the projections $\pi_i = $ orthogonal projection onto $\mathrm{span}(b_1, \dots, b_{i-1})^\perp$. Define the Gram-Schmidt orthogonalization (GSO) $\tilde{b}_1, \dots, \tilde{b}_n$ of $B$ by $\tilde{b}_i = \pi_i(b_i)$ for $i \in [n]$. $B$ is a Hermite-Korkine-Zolotarev (HKZ) basis for $\mathcal{L}$ if $\|\tilde{b}_i\| = \lambda_1(\pi_i(\mathcal{L}))$ for $i \in [n]$. An HKZ basis is computable with $n$ calls to an SVP oracle.
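The GSO used here is ordinary Gram-Schmidt without normalization; a minimal pure-Python sketch (illustrative, not from the talk) computing $\tilde{b}_i = \pi_i(b_i)$:

```python
def gram_schmidt(basis):
    """Gram-Schmidt orthogonalization (no normalization): b~_i is b_i minus its
    projections onto the previous orthogonalized vectors b~_1 .. b~_{i-1}."""
    gso = []
    for b in basis:
        v = [float(x) for x in b]
        for u in gso:
            coef = sum(x * y for x, y in zip(b, u)) / sum(y * y for y in u)
            v = [vx - coef * ux for vx, ux in zip(v, u)]
        gso.append(v)
    return gso
```

An HKZ basis additionally demands that each $\|\tilde{b}_i\|$ equal $\lambda_1$ of the projected lattice, which this routine does not (and cannot cheaply) enforce; that is what the SVP-oracle calls are for.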

Slides 15-17: Shifted DGS Sampler
Theorem: let $\mathcal{L}$ be an $n$-dimensional lattice, $t \in \mathbb{R}^n$, and $s > 0$. There is an algorithm which generates at least $2^{n/2}$ samples whose joint distribution is exponentially close to i.i.d. $D_{\mathcal{L}-t,s}$, in time $2^{n+o(n)}$.
Consequences:
- The sampler can hit all "high weight" cosets of $2\mathcal{L}$ in $\mathcal{L}$.
- If $s$ is small enough, a single sample already gives a $(1+\varepsilon)$-approximation.
- More importantly, these samples will suffice to solve exact CVP.

Slide 18: Averaging Discrete Gaussians
Let $X_1, X_2$ be i.i.d. samples from $D_{\mathcal{L}-t,s}$. Then for $y_1, y_2 \in \mathcal{L}-t$,
$$\Pr[X_1 = y_1,\, X_2 = y_2] \propto \rho_s(y_1)\,\rho_s(y_2) = \rho_{\sqrt{2}s}(y_1+y_2)\,\rho_{\sqrt{2}s}(y_1-y_2).$$
Hence $(X_1+X_2)/2$, conditioned on landing in $\mathcal{L}-t$ (equivalently, on $X_1 - X_2 \in 2\mathcal{L}$), is distributed as $D_{\mathcal{L}-t,\,s/\sqrt{2}}$.
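The product identity behind the averaging step is an instance of the parallelogram law; written out (a standard calculation, consistent with the definition of $\rho_s$ above):

```latex
\rho_s(y_1)\,\rho_s(y_2)
  = e^{-\pi(\|y_1\|^2+\|y_2\|^2)/s^2}
  = e^{-\pi(\|y_1+y_2\|^2+\|y_1-y_2\|^2)/(2s^2)}
  = \rho_{\sqrt{2}\,s}(y_1+y_2)\,\rho_{\sqrt{2}\,s}(y_1-y_2),
```

using $\|y_1\|^2+\|y_2\|^2 = \tfrac{1}{2}\left(\|y_1+y_2\|^2+\|y_1-y_2\|^2\right)$. Note also that $\rho_{\sqrt{2}s}(y_1+y_2) = \rho_{s/\sqrt{2}}\big(\tfrac{y_1+y_2}{2}\big)$, which is where the parameter drop to $s/\sqrt{2}$ comes from.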

Slide 19: Averaging Discrete Gaussians
Again let $X_1, X_2$ be i.i.d. from $D_{\mathcal{L}-t,s}$; then for $y_1, y_2 \in \mathcal{L}-t$, $\rho_s(y_1)\,\rho_s(y_2) = \rho_{\sqrt{2}s}(y_1+y_2)\,\rho_{\sqrt{2}s}(y_1-y_2)$. With the above identity one can prove remarkable inequalities relating Gaussian masses at different parameters.

Slide 20: Shifted DGS Combiner
Input: i.i.d. samples from $D_{\mathcal{L}-t,s}$. Output: i.i.d. samples from $D_{\mathcal{L}-t,\,s/\sqrt{2}}$.
Initialization: apply the SVP solver to compute an HKZ basis for $\mathcal{L}$; use the [GPV08, BLPRS13] sampler on it to produce the initial samples at a sufficiently large parameter in polynomial time.
Slide 21: Shifted DGS Combiner
Input: i.i.d. samples from $D_{\mathcal{L}-t,s}$. Output: i.i.d. samples from $D_{\mathcal{L}-t,\,s/\sqrt{2}}$.
Meta procedure: repeat $M$ times: sample a coset $c \in \mathcal{L}/2\mathcal{L}$ with the appropriate probability; pick two unused samples congruent to $c \pmod{2\mathcal{L}}$ and return their average.
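A toy version of the pairing step for $\mathcal{L} = \mathbb{Z}$, $t = 0$ (greedy pairing instead of the coset sampling of the real procedure; all names are illustrative): samples in the same coset mod $2\mathbb{Z}$ are paired and averaged, and the midpoints are again lattice points.

```python
from collections import defaultdict

def combine_once(samples):
    """One combining pass over Z: bucket integer samples by parity (their coset
    mod 2Z), pair samples within each bucket, and output the pair midpoints.
    Same-parity integers have integer midpoints, so outputs stay in the lattice."""
    buckets = defaultdict(list)
    for x in samples:
        buckets[x % 2].append(x)
    out = []
    for xs in buckets.values():
        for a, b in zip(xs[::2], xs[1::2]):
            out.append((a + b) // 2)
    return out
```

The real combiner instead samples the coset $c$ with a carefully chosen probability and tracks how close the outputs remain to i.i.d.; this sketch only illustrates why the averages remain lattice points.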

Slides 22-24: Shifted DGS Combiner
Input: i.i.d. samples from $D_{\mathcal{L}-t,s}$. Output: i.i.d. samples from $D_{\mathcal{L}-t,\,s/\sqrt{2}}$.
Question: how big can $M$ be? At the very least, we should not exhaust the supply of samples in expectation. The analysis considers the worst case for maximizing the expected number of input samples consumed per output.

Slide 25: Shifted DGS Combiner
Let $c \in \mathcal{L}/2\mathcal{L}$ maximize the Gaussian mass. Theorem: the loss in the number of samples after $k$ combining steps is bounded, so a large enough pool of initial samples lets the combiner run for the required number of steps.

Slides 26-27: Key Inequality
Let $c \in \mathcal{L}/2\mathcal{L}$ maximize the Gaussian mass $\rho_s(c)$. Lemma: the mass of $c$ at parameter $s$ controls the total Gaussian mass at the higher parameter $\sqrt{2}s$. Proof idea: apply the averaging identity to suitably chosen pairs of cosets.

Slide 28: Hope for Exact CVP
From approximate CVP solutions, we can try to learn subspaces that must contain a closest vector.
(Figure: lattice subspaces.)

Slide 29: Clustering approx. closest vectors
Lemma: assume $y_1, y_2 \in \mathcal{L}$ with $y_1 \equiv y_2 \pmod{2\mathcal{L}}$ are at distance at most $R$ from $t$. Then $\|y_1 - y_2\| \leq 2\sqrt{R^2 - \mathrm{dist}(t,\mathcal{L})^2}$.

Slide 30: Clustering approx. closest vectors
Lemma: assume $y_1, y_2 \in \mathcal{L}$ with $y_1 \equiv y_2 \pmod{2\mathcal{L}}$ are at distance at most $R$ from $t$. Then $\|y_1 - y_2\| \leq 2\sqrt{R^2 - \mathrm{dist}(t,\mathcal{L})^2}$.
Proof: since $y_1 \equiv y_2 \pmod{2\mathcal{L}}$, the midpoint $(y_1+y_2)/2$ is in $\mathcal{L}$, so $\|(y_1+y_2)/2 - t\| \geq \mathrm{dist}(t,\mathcal{L})$. By the parallelogram law, $\|y_1-t\|^2 + \|y_2-t\|^2 = 2\|(y_1+y_2)/2 - t\|^2 + \|y_1-y_2\|^2/2$, and the bound follows.
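A quick numeric sanity check of the clustering bound on $\mathcal{L} = \mathbb{Z}^2$ (the particular points and helper name are illustrative):

```python
import math

def clustering_bound(y1, y2, t, dist):
    """Return (||y1 - y2||, 2*sqrt(R^2 - dist^2)), where R bounds both distances
    to t; by the lemma the first value cannot exceed the second whenever
    y1 = y2 (mod 2L) and dist is the true distance from t to the lattice."""
    R = max(math.dist(y1, t), math.dist(y2, t))
    lhs = math.dist(y1, y2)
    rhs = 2.0 * math.sqrt(max(R * R - dist * dist, 0.0))
    return lhs, rhs

# y1 = (0,0) and y2 = (2,0) agree mod 2Z^2; dist((0.5,0.5), Z^2) = sqrt(0.5).
lhs, rhs = clustering_bound((0.0, 0.0), (2.0, 0.0), (0.5, 0.5), 0.5 ** 0.5)
```

Here $R = \sqrt{2.5}$, so the bound reads $2 \leq 2\sqrt{2}$, as expected.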

Slide 31: How many closest vectors?
Corollary: for an $n$-dimensional lattice $\mathcal{L}$ and target $t$, there are at most $2^n$ closest vectors to $t$.
The bound is trivially tight: take $\mathcal{L} = \mathbb{Z}^n$ and $t = (1/2, \dots, 1/2)$.

Slide 32: How many closest vectors?
Corollary: for an $n$-dimensional lattice $\mathcal{L}$ and target $t$, there are at most $2^n$ closest vectors to $t$.
Proof: by the lemma, any two closest vectors in the same coset of $2\mathcal{L}$ are equal (with $R = \mathrm{dist}(t,\mathcal{L})$, their distance is $2\sqrt{R^2 - \mathrm{dist}(t,\mathcal{L})^2} = 0$). Furthermore, the number of distinct cosets of $2\mathcal{L}$ in $\mathcal{L}$ is $2^n$.

Slide 33: Dimension Reduction via Clustering
Let $B = (b_1, \dots, b_n)$ be an HKZ basis of $\mathcal{L}$.
Lemma: assume $y_1, y_2 \in \mathcal{L}$ with $y_1 \equiv y_2 \pmod{2\mathcal{L}}$ are at distance at most $R$ from $t$. Then if $\|\tilde{b}_i\| > \sqrt{R^2 - \mathrm{dist}(t,\mathcal{L})^2}$ for all $i > n-k$, $y_1$ and $y_2$ have the same last $k$ coordinates w.r.t. $B$.
Proof: it suffices to show $\pi_{n-k+1}(y_1) = \pi_{n-k+1}(y_2)$. If not, then $\pi_i((y_1-y_2)/2)$ is a nonzero vector of $\pi_i(\mathcal{L})$ for some $i > n-k$, so its norm is at least $\lambda_1(\pi_i(\mathcal{L})) = \|\tilde{b}_i\|$, contradicting the clustering bound $\|(y_1-y_2)/2\| \leq \sqrt{R^2 - \mathrm{dist}(t,\mathcal{L})^2}$.

Slide 34: Exact CVP
Main idea: given an HKZ basis of $\mathcal{L}$, we will show that for $k$ chosen carefully, the last $k$ coordinates of any close-enough vector to $t$ are determined by their parities. That is, for $y \in \mathcal{L}$ close enough to $t$, the last $k$ coordinates of $y$ are essentially determined by $y \bmod 2\mathcal{L}$.

Slide 35: Exact CVP
We can group approximate closest vectors by their last $k$ coefficients with respect to $B$. This indexes at most $2^k$ shifts of the $(n-k)$-dimensional sublattice $\mathcal{L}(b_1, \dots, b_{n-k})$, which we recurse on.
(Figure: lattice subspaces.)

Slide 36: High Level Algorithm
Input: an $n$-dimensional lattice $\mathcal{L}$ and target $t$.
Output: closest lattice vectors in $\mathcal{L}$ to $t$.
1. Compute an HKZ basis of $\mathcal{L}$ and the number $k$ of "high order coordinates".
2. Sample many approximate closest vectors via DGS.
3. Group them according to their last $k$ coordinates with respect to $B$, and recurse on the associated shifts of $\mathcal{L}(b_1, \dots, b_{n-k})$.
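The grouping step can be pictured as bucketing integer coefficient vectors by the parities of their last $k$ entries; a minimal sketch (names and the parity-only key are illustrative simplifications of the real grouping):

```python
from collections import defaultdict

def group_by_last_parities(coeff_vectors, k):
    """Bucket integer coefficient vectors by the parities of their last k
    coordinates; at most 2^k buckets arise, and each bucket corresponds to one
    shift of the sublattice generated by the first n - k basis vectors."""
    groups = defaultdict(list)
    for z in coeff_vectors:
        groups[tuple(z_i % 2 for z_i in z[-k:])].append(z)
    return dict(groups)
```

Each bucket becomes one lower-dimensional CVP subproblem in the recursion.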

Slide 37: Complexity Sketch
Initialization (one-shot cost): compute a short basis of $\mathcal{L}$ and the number of "high order coordinates" (these can be computed up front for each recursion level).
Per-level work: sample many approximate closest vectors via DGS.
Recursion: group the samples according to their last coordinates and recurse on the resulting lower-dimensional subproblems.
Total runtime: $2^{n+o(n)}$.

Slide 38: Key Challenges
Runtime: getting many DGS samples at low parameters; showing the last $k$ coefficients are determined by their parities; dealing with the subproblems in the recursion analysis.
Correctness: showing that we hit the last coefficients of an exact closest vector with high probability (we will show that we hit the exact parity).

Slide 39: Conclusions
Fastest known algorithm for CVP: $2^{n+o(n)}$ time.
It explicitly or implicitly uses ideas from all known algorithm types: basis reduction, sieving, and Voronoi cells.
The discrete Gaussian is a very powerful tool! Many of its properties are still poorly understood...

Slide 40: Open Problems
- Is $2^n$ time optimal under SETH? (This matches the maximum number of closest vectors.)
- Is there a deterministic / Las Vegas algorithm? (One exists once the Voronoi cell is computed [BD 15].)
- Find a simpler and cleaner algorithm...

Slide 41
THANK YOU!