Friday, November 09, 2012

Posts from FOCS (Part 3)

Continuing posts from FOCS, with several guest-writers and guest-editor Justin Thaler.


[Editor: This is Part 3 in our series of posts covering FOCS 2012. Part 4 will be coming in the next couple of days, and there might or might not be a Part 5. We start with Harvard post-doc Karthekeyan (Karthik) Chandrasekaran, who contributes a summary of Nisheeth Vishnoi's talk on A Permanent Approach to the Traveling Salesman Problem from Session 2A on Sunday, October 21.]

Title: A Permanent Approach to the Traveling Salesman Problem.
Author: Nisheeth Vishnoi

Nisheeth gives a randomized algorithm that finds a TSP tour of length at most n(1 + O(1/sqrt(log k))) in simple k-regular graphs on n vertices. Both the algorithm and the analysis are simple and easy to state. The algorithm: pick a cycle cover, contract all cycles, find a minimum spanning tree in the resulting graph, and output the tour consisting of the cycles plus two copies of the edges in the tree. The cycles contribute n edges, and the doubled tree edges contribute two more per cycle beyond the first, so what we lose over n is at most twice the number of cycles. So the goal is to pick a good cycle cover: one in which the number of cycles is small.
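Here is a minimal sketch of this pipeline in Python (assuming networkx; the helper sample_cycle_cover is hypothetical and stands in for the sampling procedure described below):

    # A sketch, not Nisheeth's actual implementation. sample_cycle_cover(G)
    # is assumed to return a cycle cover as a list of cycles, each cycle a
    # list of vertices (a 2-cycle [u, v] means the edge (u, v) taken twice).
    import networkx as nx

    def tsp_tour_length(G, sample_cycle_cover):
        """Length of the tour built from a cycle cover plus a doubled
        spanning tree of the contracted graph: n + 2*(#cycles - 1)."""
        cycles = sample_cycle_cover(G)  # vertex-disjoint cycles covering G

        # Contract each cycle to a single supernode.
        node_to_cycle = {v: i for i, cyc in enumerate(cycles) for v in cyc}
        H = nx.Graph()
        H.add_nodes_from(range(len(cycles)))
        for u, v in G.edges():
            if node_to_cycle[u] != node_to_cycle[v]:
                H.add_edge(node_to_cycle[u], node_to_cycle[v])

        # Doubling the tree edges keeps every degree even, so the cycles
        # plus the doubled tree form an Eulerian spanning subgraph.
        T = nx.minimum_spanning_tree(H)
        return G.number_of_nodes() + 2 * T.number_of_edges()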

To pick a cycle cover, map the given graph to a bipartite graph by making two copies u_L and u_R of each vertex u, and adding edges (u_L, v_R) and (v_L, u_R) for each edge (u,v). Then find a perfect matching in the bipartite graph. Such a perfect matching can be mapped back to a cycle cover in the original graph (such cycle covers may contain 2-cycles, i.e., vertex-disjoint single edges); a code sketch of this reduction appears at the end of this summary. Nisheeth shows that in the case of k-regular graphs, picking a uniformly random perfect matching in the bipartite graph leads to a good cycle cover with good probability. This is because:

- The number of perfect matchings in k-regular bipartite graphs (approximately equal to the number of cycle covers in k-regular graphs) is the permanent of the adjacency matrix, which is known to be large: per(A) >= (k/e)^n, by the resolution of the van der Waerden conjecture due to Egorychev and Falikman (see the derivation after this list)

and

- The number of bad cycle covers (those with a large number of small cycles) in k-regular graphs is much smaller (by a counting argument).
Setting the parameters appropriately, this leads to at most O(n/sqrt(log k)) cycles with good probability.
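For concreteness, here is the derivation behind the first point, in LaTeX (the Egorychev-Falikman theorem lower-bounds the permanent of any doubly stochastic n x n matrix by n!/n^n):

    % A/k is doubly stochastic when A is the adjacency matrix of a
    % k-regular bipartite graph, and n! >= (n/e)^n, so
    \[
      \mathrm{per}(A) \;=\; k^n \, \mathrm{per}(A/k)
      \;\ge\; k^n \cdot \frac{n!}{n^n}
      \;\ge\; \left(\frac{k}{e}\right)^{n}.
    \]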

To obtain a poly-time randomized algorithm, sample a near-uniform perfect matching in the bipartite graph using the Jerrum-Sinclair-Vigoda sampler -- this increases the probability of picking a bad cycle cover by only 1/poly(n).
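And the promised sketch of the reduction from cycle covers to bipartite perfect matchings, again in Python with networkx (note that Hopcroft-Karp returns an arbitrary perfect matching; the actual algorithm needs a near-uniform one, which is where Jerrum-Sinclair-Vigoda comes in):

    # A sketch under the k-regularity assumption (which guarantees the
    # bipartite double cover has a perfect matching).
    import networkx as nx

    def cycle_cover_via_matching(G):
        """Build the bipartite double cover of G, find a perfect matching,
        and read off a cycle cover of G (2-cycles = single edges)."""
        B = nx.Graph()
        for u, v in G.edges():
            B.add_edge((u, 'L'), (v, 'R'))
            B.add_edge((v, 'L'), (u, 'R'))

        left = {(u, 'L') for u in G}
        matching = nx.bipartite.hopcroft_karp_matching(B, top_nodes=left)
        # The matched pairs (u_L, v_R) define a permutation u -> v of V(G).
        succ = {u: v for (u, tag), (v, _) in matching.items() if tag == 'L'}

        # Decompose the permutation into its cycles.
        cycles, seen = [], set()
        for start in G:
            if start in seen:
                continue
            cyc, u = [], start
            while u not in seen:
                seen.add(u)
                cyc.append(u)
                u = succ[u]
            cycles.append(cyc)
        return cycles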

[Editor: Fourth-year grad student Michael Forbes of MIT contributes a summary of Mark Zhandry's talk on constructing quantum random functions from Session 13A on October 23. Michael does a great job making this summary accessible even to non-quantum folks.]

Title: How to Construct Quantum Random Functions
Author: Mark Zhandry

Zhandry considers cryptography in the post-quantum world, where we want good cryptographic primitives despite the potential for quantum adversaries. There are two sides to this question. The first is determining which concrete problems seem hard for quantum computers to solve, and using them to construct concrete instantiations of cryptographic primitives. As an example, it is well known that some concrete cryptographic schemes based on the hardness of factoring are broken because of Shor's algorithm, which can factor integers efficiently on a quantum computer. As such, post-quantum research in concrete cryptography has focused on primitives based on lattice problems.

The other side to this question, and the one Zhandry considers, is identifying the relative structure of cryptographic primitives, such as determining which primitives can be constructed from other primitives. As an example, it is classically known that one-way functions imply the existence of pseudo-random generators (PRGs), which in turn imply the existence of pseudo-random functions (PRFs). Zhandry answers the question (among others): does the implication "PRG=>PRF" still hold, assuming quantum adversaries?

We should note that there are two variants of the above question. One variant allows the adversary to use a quantum computer, but only to interact with the cryptographic primitive in a classical way. The more interesting variant, which Zhandry considers, is where the interaction can be quantum, and because of the nature of the quantum model, any query by the adversary can "involve" all of the PRF. In the query model, quantum algorithms can be provably exponentially better than classical algorithms, so the adversary Zhandry considers is indeed powerful.

The "PRG=>PRF" result is commonly known as the GGM construction, based on the authors Goldreich-Goldwasser-Micali. They show how to take a PRG, stretching n bits to 2n bits, and from that produce a (seeded) family of functions from n bits to n bits, such that this family of functions is indistinguishable from random by a classical adversary.  The construction of this family is based on a large tree iterating the PRG on itself, where we split the PRG into the left n-bits and right n-bits, and plug one of these n-bits back into the PRG.  The input to the function family dictates the left/right pattern in which we apply the PRG. The security of this construction follows the standard hybrid argument approach.

The main difficulty in applying the above construction in the face of quantum adversaries is that the hybrid argument in the classical case exploits the fact that classical algorithms can only "look at" as many parts of the PRF as their runtime allows. However, as mentioned above, quantum computers can "look at" the entire PRF even in a single query. Thus, if one wanted to perform a hybrid argument, one might need exponentially many hybrids, which would result (by the averaging used in the hybrid argument) in an exponential, and thus unacceptable, loss in the parameters.

To get around this, Zhandry replaces the distribution of interest with another distribution (quantumly indistinguishable to small-query algorithms) that is generated from intrinsically few parameters, so that a hybrid argument over the new distribution needs only a few hybrids, allowing the rest of the GGM argument to go through.

Specifically, consider a distribution D and any r. If we choose a function g via D^[r] (that is, r random samples from D, indexed 1 through r) and f from [r]^X (that is, a random map from the space X to the numbers 1 through r), then g composed with f is a function giving each element of X a sample from D, while only involving r true samples from D. As r goes to infinity, this distribution converges to one where each element of X gets an independent uniformly random sample from D. Further, Zhandry shows that for a fixed r, a quantum algorithm making q queries cannot distinguish this process from truly random samples with advantage better than poly(q)/r. Thus, since the above random process depends on only r truly random samples, the hybrid argument involves only r terms, and, once the choice of r is made, this allows the GGM analysis to be completed in the quantum setting.
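A small Python sketch of this "small-range" process (the function names are mine, for illustration):

    import random

    def small_range_function(X, r, sample_D):
        """Return a function X -> D built from only r true samples of D."""
        g = [sample_D() for _ in range(r)]       # g: [r] -> samples of D
        f = {x: random.randrange(r) for x in X}  # f: X -> [r], uniform
        return lambda x: g[f[x]]                 # g o f: X -> D

    # Example with D = uniform 32-bit strings, |X| = 1000, and r = 8:
    h = small_range_function(range(1000), 8, lambda: random.getrandbits(32))
    print(h(0), h(1), h(999))  # h is a fixed function: repeat calls agree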


