Tight Lower Bound on Equivalence Testing in Conditional Sampling Model
IRB 3137 or Zoom: https://umd.zoom.us/j/6778156199?pwd=NkJKZG1Ib2Jxbmd5ZzNrVVlNMm91QT09
Thursday, May 23, 2024, 3:30-4:30 pm
Abstract

We study the equivalence testing problem, where the goal is to determine whether two given unknown distributions over [n] are equal or far apart in total variation distance, in the conditional sampling model, wherein a tester can draw a sample from a distribution conditioned on any subset of the domain. Equivalence testing is a central problem in distribution testing, and there has been a plethora of work on this topic in various sampling models.
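
To make the model concrete, below is a minimal Python sketch (ours, not from the talk) of a conditional sampling (COND) oracle: the tester submits a subset S of [n] and receives one draw from the unknown distribution restricted to S. The class name CondOracle and the uniform-fallback convention for zero-mass subsets are illustrative assumptions.

    import random

    class CondOracle:
        """Conditional sampling oracle for an unknown distribution over [n].

        Sketch only: a tester treats this as a black box; here we wrap an
        explicit probability vector p purely for demonstration.
        """

        def __init__(self, p):
            self.p = p  # p[i] = probability of element i; entries sum to 1

        def sample(self, S):
            """Return one sample distributed as p conditioned on subset S."""
            S = sorted(S)
            weights = [self.p[i] for i in S]
            if sum(weights) == 0:
                # Assumed convention: return a uniform element of S when p(S) = 0.
                return random.choice(S)
            return random.choices(S, weights=weights, k=1)[0]

    # An equivalence tester gets COND access to both unknown distributions
    # and adaptively picks subsets S to compare the conditional samples.
    p, q = [0.5, 0.25, 0.125, 0.125], [0.25] * 4
    oracle_p, oracle_q = CondOracle(p), CondOracle(q)
    print(oracle_p.sample({0, 1}), oracle_q.sample({0, 1}))
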
Despite significant efforts over the years, a gap remains between the best-known upper bound of O(log log n) [FJOPS, COLT 2015] and the lower bound of Ω(√(log log n)) [ACK, Theory of Computing 2018]. Closing this gap has repeatedly been posed as an open problem (listed as Open Problem 87 at sublinear.info). In this work, we completely resolve the query complexity of this problem by proving a lower bound of Ω(log log n). To this end, we develop a novel and generic proof technique that enables us to break the √(log log n) barrier, not only for equivalence testing but also for other distribution testing problems.
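
In symbols, the situation described above is as follows; the shorthand q_EQ^COND for the query complexity and the displayed definition of total variation distance are our notation, not the speaker's:

    % Equivalence testing in the COND model; distance is total variation:
    %   upper bound:     O(\log\log n)              [FJOPS, COLT 2015]
    %   previous lower:  \Omega(\sqrt{\log\log n})  [ACK, ToC 2018]
    %   this work:       \Omega(\log\log n), matching the upper bound
    \[
      q^{\mathrm{COND}}_{\mathrm{EQ}}(n) \;=\; \Theta(\log\log n),
      \qquad
      d_{\mathrm{TV}}(p, q) \;=\; \tfrac{1}{2}\sum_{i \in [n]} \bigl|p(i) - q(i)\bigr|.
    \]
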

(joint work with Sourav Chakraborty and Gunjan Kumar)

Bio

Dr. Diptarka Chakraborty is an assistant professor in the Department of Computer Science. His research interests lie in theoretical computer science, more specifically in algorithms for large data sets, sublinear algorithms, approximation algorithms, graph algorithms, and data structures.

This talk is organized by Kishen N Gowda.