BKW Meets Fourier: New Algorithms for LPN with Sparse Parities
Friday, October 22, 2021, 1:00-2:00 pm
Abstract

https://eprint.iacr.org/2021/994

We consider the Learning Parity with Noise (LPN) problem with sparse secret, where the secret vector s of dimension n has Hamming weight at most k. We are interested in algorithms with asymptotic improvement in the exponent beyond the state of the art. Prior work in this setting presented algorithms with runtime n^(ck) for constant c < 1, obtaining a constant-factor improvement in the exponent over brute-force search, which runs in time (n choose k).
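
For concreteness, the following is a minimal Python sketch of sparse-secret LPN and the (n choose k) brute-force baseline mentioned above. All parameter choices (n, k, eta, m) and helper names are illustrative, not from the paper.

```python
import itertools
import random

def lpn_samples(n, k, eta, m, rng):
    """Draw m LPN samples (a, <a, s> + e mod 2) for a secret s of Hamming weight k."""
    support = set(rng.sample(range(n), k))      # random size-k support for the secret
    s = [1 if i in support else 0 for i in range(n)]
    samples = []
    for _ in range(m):
        a = [rng.randint(0, 1) for _ in range(n)]
        e = 1 if rng.random() < eta else 0      # Bernoulli(eta) noise bit
        b = (sum(ai & si for ai, si in zip(a, s)) + e) % 2
        samples.append((a, b))
    return s, samples

def brute_force(n, k, samples):
    """Try every weight-k parity and return the one agreeing with the most samples.
    This is the (n choose k)-time baseline that prior work improves to n^(ck)."""
    best_support, best_agree = None, -1
    for support in itertools.combinations(range(n), k):
        agree = sum(1 for a, b in samples if sum(a[i] for i in support) % 2 == b)
        if agree > best_agree:
            best_support, best_agree = set(support), agree
    return [1 if i in best_support else 0 for i in range(n)]

rng = random.Random(0)
s, samples = lpn_samples(n=20, k=3, eta=0.1, m=200, rng=rng)
print("recovered secret:", brute_force(20, 3, samples) == s)  # True w.h.p.
```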
We obtain the following results:
 - We first consider the constant error rate setting, and in this case present a new algorithm that leverages a subroutine from the acclaimed BKW algorithm [Blum, Kalai, Wasserman, J. ACM '03] as well as techniques from Fourier analysis for p-biased distributions (a sketch of a BKW-style reduction step appears after this list). Our algorithm achieves asymptotic improvement in the exponent compared to prior work when the sparsity k = k(n) = n/log^(1+1/c)(n), where c ∈ ω(1) ∩ o(log log(n)). The runtime and sample complexity of this algorithm are approximately the same.
 - We next consider the low noise setting, where the error is subconstant. We present a new algorithm in this setting that requires only a polynomial number of samples and achieves asymptotic improvement in the exponent compared to prior work, when the sparsity k = (1/η)·(log(n)/log(f(n))) and the noise rate η ≠ 1/2 satisfies η^2 = (log(n)/n)·f(n), for f(n) ∈ ω(1) ∩ n^(o(1)). To obtain the improvement in sample complexity, we create subsets of samples using the design of Nisan and Wigderson [J. Comput. Syst. Sci. '94], so that any two subsets have a small intersection while the number of subsets is large (a sketch of such a design also appears after this list). Each of these subsets is used to generate a single p-biased sample for the Fourier analysis step. We then show that this allows us to bound the covariance of pairs of samples, which suffices for the Fourier analysis.
 - Finally, we show that our first algorithm extends to the setting where the noise rate is very high, 1/2 - o(1), and in this case can be used as a subroutine to obtain new algorithms for learning DNFs and Juntas. Our algorithms achieve asymptotic improvement in the exponent for certain regimes. For DNFs of size s with approximation factor ε, this regime is when log(s/ε) ∈ ω(c/(log(n)·log log(c))) and log(s/ε) ∈ n^(1-o(1)), for c ∈ n^(1-o(1)). For Juntas of size k, the regime is when k ∈ ω(c/(log(n)·log log(c))) and k ∈ n^(1-o(1)), for c ∈ n^(1-o(1)).
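
As promised in the first item, here is a minimal sketch of the classic BKW collision step, the kind of subroutine the first algorithm builds on: samples that agree on a block of coordinates are XORed, zeroing that block at the cost of compounding the noise. The block size and all parameters are illustrative, and this is the textbook reduction, not the paper's exact subroutine.

```python
import random

def xor_vec(x, y):
    return [xi ^ yi for xi, yi in zip(x, y)]

def bkw_reduce_block(samples, lo, hi):
    """One BKW round: bucket samples on coordinates [lo, hi), then XOR each
    sample with its bucket's pivot so those coordinates become all-zero.
    Each round roughly halves the sample count and squares the bias 1 - 2*eta."""
    buckets = {}
    for a, b in samples:
        buckets.setdefault(tuple(a[lo:hi]), []).append((a, b))
    reduced = []
    for group in buckets.values():
        pa, pb = group[0]                             # pivot for this bucket
        for a, b in group[1:]:
            reduced.append((xor_vec(a, pa), b ^ pb))  # error becomes e1 + e2 mod 2
    return reduced

rng = random.Random(1)
n, block, eta = 12, 4, 0.05
s = [rng.randint(0, 1) for _ in range(n)]             # toy (non-sparse) secret
samples = []
for _ in range(4000):
    a = [rng.randint(0, 1) for _ in range(n)]
    b = (sum(ai & si for ai, si in zip(a, s)) + (rng.random() < eta)) % 2
    samples.append((a, b))
for lo in range(0, n - block, block):                 # zero out all but the last block
    samples = bkw_reduce_block(samples, lo, lo + block)
# The surviving samples depend only on the last `block` coordinates of s, so a
# Fourier/majority step now works over 2^block candidates instead of 2^n.
print(len(samples), "reduced samples supported on the last", block, "coordinates")
```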
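The design step in the second item can likewise be illustrated with the standard polynomial-based Nisan-Wigderson construction: many subsets of a small universe, any two of which intersect in fewer than d points. The parameters q and d are illustrative, and the mapping from design sets to p-biased samples is the paper's contribution and is not reproduced here.

```python
import itertools

def nw_design(q, d):
    """Subsets of the universe F_q x F_q (size q^2), one per polynomial of
    degree < d over F_q: S_p = {(a, p(a)) : a in F_q}, with (a, v) encoded
    as the index a*q + v. Two distinct polynomials of degree < d agree on
    fewer than d points, so |S_p ∩ S_p'| <= d - 1, while there are q^d sets."""
    sets = []
    for coeffs in itertools.product(range(q), repeat=d):  # q^d polynomials
        s = frozenset(a * q + sum(c * pow(a, i, q) for i, c in enumerate(coeffs)) % q
                      for a in range(q))
        sets.append(s)
    return sets

q, d = 7, 2                      # q must be prime so F_q arithmetic is valid
design = nw_design(q, d)
print(len(design), "subsets of a", q * q, "element universe, each of size", q)
assert all(len(s & t) <= d - 1 for s, t in itertools.combinations(design, 2))
```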

===============

https://umd.zoom.us/j/97585901703?pwd=T1hBZFFMdnV5VXdiaVdtaWo0RnNmZz09

Meeting ID: 975 8590 1703

Passcode: lattices??

This talk is organized by David Miller