Saturday, November 03, 2012

Posts from FOCS (Part 2)

Continuing posts from FOCS, with several guest-writers and guest-editor Justin Thaler.

[Editor: First-year grad student Mark Bun of Harvard contributes a summary of two talks on submodular maximization from Session 13A on Tuesday, Oct. 23]

We had not just one but two talks on new tight algorithms for submodular maximization. A submodular function is a set function that exhibits diminishing returns: adding an element to a set increases the function's value by no more than adding that same element to any subset of that set. Many combinatorial optimization problems can be expressed as the minimization or maximization of a submodular function, including min- and max-cut, coverage problems, and welfare maximization in algorithmic game theory. Unsurprisingly, submodular maximization tends to be NP-hard for most natural choices of constraints, so we look for approximation algorithms. Both results access the input submodular function through a value oracle, which evaluates the function on any given set in a single time step.
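For concreteness, the diminishing-returns condition can be written as follows: a set function f on ground set N is submodular if

f(A \cup \{x\}) - f(A) \ge f(B \cup \{x\}) - f(B)

for all A \subseteq B \subseteq N and every x \notin B (equivalently, f(S) + f(T) \ge f(S \cup T) + f(S \cap T) for all S, T \subseteq N).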

Paper Title: A Tight Linear Time (1/2)-Approximation for Unconstrained Submodular Maximization

Roy Schwartz spoke about work with Niv Buchbinder, Moran Feldman and Seffi Naor on a randomized linear-time 1/2-approximation for unconstrained submodular maximization. This matches a lower bound due to Feige, Mirrokni and Vondrak, who showed that achieving any approximation factor larger than 1/2 requires exponentially many queries to the value oracle. That paper also gave a 2/5-approximation based on local search, which was later improved to 0.41 by Oveis Gharan and Vondrak using simulated annealing, and then to 0.42 by Feldman, Naor and Schwartz by taking advantage of structural similarity. Given this line of work, the result here is remarkable for being both tight and incredibly simple. It is essentially a greedy algorithm that exploits both randomness and the symmetry of the optimization problem.
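To give a sense of just how simple, here is a rough Python sketch of the randomized "double greedy" idea; this is my paraphrase, and the exact probability rule, tie-breaking, and the 1/2-approximation analysis are in the paper. The value oracle f is assumed to accept Python sets.

import random

def double_greedy(f, ground_set):
    # X grows from the empty set; Y shrinks from the full ground set.
    X, Y = set(), set(ground_set)
    for u in ground_set:
        a = f(X | {u}) - f(X)      # marginal gain of adding u to X
        b = f(Y - {u}) - f(Y)      # marginal gain of removing u from Y
        a, b = max(a, 0.0), max(b, 0.0)
        # Commit u with probability proportional to a, discard it with probability proportional to b.
        if a + b == 0 or random.random() < a / (a + b):
            X.add(u)
        else:
            Y.discard(u)
    return X                        # at this point X == Y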

Paper Title: A Tight Combinatorial Algorithm for Submodular Maximization Subject to a Matroid Constraint.

Justin Ward then presented a new tight algorithm for monotone submodular maximization subject to a matroid constraint, in joint work with Yuval Filmus. Calinescu et al. had already given a tight (1 - 1/e)-approximation by relaxing the problem to continuous space, performing gradient ascent, and then rounding. The new algorithm achieves the same approximation factor but is purely combinatorial: it applies the standard greedy algorithm to an auxiliary potential function that has lower curvature than the input function. One of the neat parts of this talk was the discussion of how the authors came up with this auxiliary function; it was quite the tale of experimenting with low-rank matroids, solving an LP, and guessing a necessary recurrence.
===================
[Editor: Fourth-year grad student Michael Forbes contributes a summary of Ketan D. Mulmuley's talk on Geometric Complexity Theory from Session 12B on Tuesday, October 23.]

Paper Title: Geometric Complexity Theory V: Equivalence between blackbox derandomization of polynomial identity testing and derandomization of Noether's normalization lemma

Ketan starts with the following problem. Consider the space of t-tuples of n by n complex matrices. We impose an equivalence relation on this space via simultaneous conjugation by a single invertible matrix. For example, when t=2, we consider (A,B) and (PAP^{-1},PBP^{-1}) equivalent for any invertible matrix P. The question is to classify the equivalence classes under this action by identifying a suitably nice normal form. In this context, "nice" means that the t-tuples in normal form are the solutions to, or are parameterized by, some set of polynomial equations.

For t=1, this problem is solved by the Jordan normal form. For t>1, the problem is considered "wild": it is regarded as quite difficult in representation theory, and any other classification problem that embeds it is itself called "wild". The situation is reminiscent of hardness in complexity theory, since the t=2 case can be shown to embed every t>2 case, so there is some sort of "completeness" result going on.

It is classically known that Noether's normalization lemma is at the heart of the above classification problem. The lemma states that there exists a small set S of elements such that a certain ring is integral over the subring generated by S; further, a random choice of S will work. In his paper, Ketan argues that an explicit derandomization of the polynomial identity testing (PIT) problem would yield explicit sets S as needed for Noether's lemma, which in turn would give an explicit classification for the "wild" problems mentioned above. Here, the (black-box) PIT problem is to construct a small set H of evaluation points such that any small algebraic circuit C computing a non-zero polynomial evaluates to non-zero on some point of H.
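To make the black-box PIT setup concrete, here is a toy Python sketch (purely illustrative, not from the paper): given a hitting set H, deciding whether a black-box polynomial is nonzero is just a matter of evaluating it on every point of H. A random set of points works with high probability; the whole difficulty is constructing H explicitly.

def is_nonzero(poly_blackbox, hitting_set):
    # poly_blackbox(point) returns the value of the unknown polynomial at that point.
    # hitting_set is a set H of points with the promise that every nonzero polynomial
    # computed by a small circuit from the class of interest is nonzero somewhere on H.
    return any(poly_blackbox(point) != 0 for point in hitting_set)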

The PIT problem has seen a lot of interesting work in the past decade, much of it focused on constructing such sets H (called hitting sets) for restricted classes of circuits, such as depth-3 circuits with bounded top fan-in. Ketan suggests that because the general PIT question embeds these wild problems from representation theory, there is a tension between the prevailing views in the two fields: representation theory sees these classification problems as notoriously difficult, while complexity theory might suggest that, via their connection to PIT, they are solvable (if perhaps still difficult).
=====================
[Editor: Third-year grad student Thomas Steinke of Harvard contributes a summary of two talks on pseudorandomness from Session 2B on Sunday, October 21.]

FOCS 2012 had many interesting talks. I'll discuss two papers on unconditional pseudorandom generators: Pseudorandomness from Shrinkage by Russell Impagliazzo, Raghu Meka, and David Zuckerman (speaker), and Better Pseudorandom Generators from Milder Pseudorandom Restrictions by Parikshit Gopalan, Raghu Meka (speaker), Omer Reingold, Luca Trevisan, and Salil Vadhan.

Both of these papers show a connection between pseudorandom generators and (pseudo)random restrictions.

Random restrictions are often used to prove lower bounds for circuits, branching programs, and other models of computation. Typically one shows that a circuit `simplifies' or `shrinks' when a random subset of its inputs is set to random values. For example, Hastad's switching lemma shows that an AC0 circuit's depth can be reduced each time a random restriction is applied, whereas the parity function does not simplify when restricted; this proves that parity is not in AC0.
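Concretely (and independently of either paper), a random p-restriction leaves each variable free with probability p and otherwise fixes it to a uniformly random bit; a small Python sketch:

import random

def random_restriction(n, p):
    # Each variable is left free ('*') with probability p, otherwise fixed to a random bit.
    return ['*' if random.random() < p else random.randint(0, 1) for _ in range(n)]

def restrict(f, rho):
    # Plug the fixed bits of the restriction rho into f; the free positions remain inputs.
    def g(free_bits):
        bits = list(rho)
        it = iter(free_bits)
        for i, v in enumerate(bits):
            if v == '*':
                bits[i] = next(it)
        return f(bits)
    return g

Note that restricting parity just gives parity (possibly negated) on the free variables, which is why the switching-lemma argument separates it from AC0.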

A pseudorandom generator for a function family immediately gives a lower bound against that family. Impagliazzo, Meka, and Zuckerman give a partial converse by converting a lower bound analysis based on random restrictions into a pseudorandom generator. They show that if a pseudorandom restriction sufficiently shrinks circuits in a given family, then a pseudorandom generator can be constructed whose stretch essentially matches the lower bound given by the analysis. This paper thus adds to the well-known connection between hardness and randomness. Their work yields pseudorandom generators with polynomial stretch for several models, including near-quadratic stretch for branching programs that can read their input in an arbitrary fixed order, where the previous best result gave only linear stretch.

Gopalan et al. also analyse pseudorandom restrictions, but with very different techniques and results. They analyse mild restrictions (only a constant fraction of the input bits are restricted) that do not yield the strong shrinkage required by Impagliazzo et al. They show that these restrictions, when `averaged out,' leave a function that is fooled by a small-bias distribution. Recursively applying this analysis yields pseudorandom generators with near-optimal, almost-logarithmic seed length for combinatorial rectangles and read-once CNFs, as well as a hitting-set generator for width-3 branching programs. Previous work achieved these results only with large error; when the error of the pseudorandom generator is required to be polynomially small, their results give a polynomial improvement in seed length.
=====================
[Editor: Fourth-year grad student Justin Thaler of Harvard contributes a summary of two unrelated talks.]

Paper Title: The Cutting Plane Method is Polynomial for Perfect Matchings.

Harvard's own Karthekeyan Chandrasekaran talked about joint work with Laszlo A. Vegh and Santosh S. Vempala on cutting plane algorithms for matching problems. The cutting plane method is a popular approach to solving integer programs (IPs), used in commercial solvers. It starts with an LP relaxation of the given IP and obtains a basic optimal solution x_0, then iteratively adds constraints that are valid for all integer solutions but violated by the current basic optimum, continuing until the basic optimum is integral. The goal of this paper is to take a step toward explaining the practical efficiency of cutting plane methods by giving an efficient cutting-plane algorithm for minimum-cost perfect matching (MWPM). MWPM is known to be in P, but it was open (apparently for 30 years) whether there is a polynomial-time cutting-plane algorithm for this problem.
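Schematically, the cutting plane loop looks like the following Python sketch, where solve_lp, find_violated_cuts, and is_integral are hypothetical stand-ins for an LP solver, a separation routine, and an integrality check (the actual algorithm in this paper also drops cuts, as described below):

def cutting_plane(objective, base_constraints, solve_lp, find_violated_cuts, is_integral):
    # Generic cutting plane loop: re-solve the LP relaxation, add violated cuts, repeat.
    cuts = []
    while True:
        x = solve_lp(objective, base_constraints + cuts)   # basic optimal solution of current relaxation
        if is_integral(x):
            return x
        cuts += find_violated_cuts(x)   # constraints valid for integer solutions but violated by x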

A brief summary of how they achieve this is as follows. They start with a natural, well-known LP relaxation of the MWPM problem, called the bipartite relaxation. This relaxation has the nice property that all basic optima x are half-integral, and the support of x is a disjoint union of edges and odd cycles. This makes it easy to find cuts (the cuts correspond to what are called blossom inequalities; see the paper for details). A major challenge, though, is that naively adding cuts will not preserve the half-integrality of intermediate LPs, so at each iteration they throw away some of the cuts added in earlier iterations. They need to take considerable care in choosing which cuts to keep in order to guarantee half-integrality of intermediate LPs (and to ensure that their algorithm makes progress at a sufficiently high rate).
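For reference, the bipartite relaxation and the blossom inequalities are classical (they go back to Edmonds, and are not specific to this paper). The relaxation asks to minimize \sum_e c_e x_e subject to \sum_{e \in \delta(v)} x_e = 1 for every vertex v and x_e \ge 0, and the cuts added are odd-set (blossom) inequalities \sum_{e \in \delta(S)} x_e \ge 1 for vertex sets S of odd size.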

Paper Title: Lower bounds on information complexity via zero-communication protocols and applications.

I unfortunately didn't catch the name of the presenter, but this work was by Iordanis Kerenidis, Sophie Laplante, Virginie Lerays, Jeremie Roland and David Xiao. As the title states, this paper was about information complexity. The idea underlying this concept is as follows. In the standard two-party communication model, Alice and Bob have inputs x and y respectively, and they communicate with each other in order to compute a function f(x, y), with the goal of minimizing the total number of bits transmitted; this is the communication cost CC. Even if Alice and Bob exchange a lot of bits, we can still ask how much information about Alice's input was revealed to Bob, and vice versa; this is the information cost IC. In principle, this could be much smaller than the total communication, as a message can sometimes reveal less information than its length.
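For the record, the standard (internal) information cost of a protocol \pi with transcript \Pi, with respect to an input distribution \mu on (X, Y), is IC_\mu(\pi) = I(\Pi ; X | Y) + I(\Pi ; Y | X): the information the transcript reveals to Bob about Alice's input plus the information it reveals to Alice about Bob's input. (This definition is standard and not specific to the paper.)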

As far as I can tell, people started studying IC precisely because it provides a way to reason about and lower bound CC (it is easy to see that the amount of information revealed about the inputs is never larger than the total communication). But IC is a natural complexity measure in its own right, and determining the precise relationship between IC and CC (i.e., whether it is possible to "compress communication down to the information complexity") is still a major open question. This is the context for the paper; here, finally, is a summary of the results.

The authors give a new method, called the relaxed partition bound (RPB), for lower bounding the information complexity of any function. The proof that the RPB indeed lower bounds IC goes through a new connection between "zero-communication protocols" and information complexity; see the paper for details. Remarkably, the RPB is stronger than almost all known methods for lower bounding the *communication* complexity of f (the only exception being the unrelaxed partition bound itself). Essentially, this means that any existing CC lower bound proved via a technique other than the partition bound is actually something stronger: an IC lower bound. The authors are therefore able to determine the information complexity of a number of important problems, resolving three open questions from a STOC 2012 paper by Mark Braverman.




1 comment:

r said...

The speaker was Virginie for the last talk.