Thursday, November 01, 2012

Posts from FOCS (Part 1)

Justin Thaler went to FOCS, and managed to arrange for various people to write up their thoughts on various parts of the conference.  So Justin is the famed Editor below.  We haven't seen much posted about FOCS, so hopefully these summaries will be of interest.  I expect we'll have a few posts in this series.

[Editor: Third-year grad student Matt Weinberg of MIT contributes a summary of many of the EconCS talks from FOCS. Matt starts with the Workshop on Bayesian Mechanism Design on Saturday, October 20, followed by some talks from the main program.]

Missing: Jason Hartline, Introduction to Bayesian Mechanism Design, and Tim Roughgarden, Simple/Prior Independent mechanisms part A.

1) Tim Roughgarden, Simple/Prior Independent mechanisms part B: For some settings, such as selling a single item, the revenue-optimal auction is well understood but difficult to implement in practice. On the other hand, the second price auction (the item goes to the highest bidder, she pays the second highest price) is often used in practice but has absolutely no revenue guarantee. A famous theorem in economics, due to Bulow and Klemperer, helps explain this: while the second price auction by itself has no revenue guarantee, its revenue after recruiting one extra bidder is at least that of the optimal auction (without the extra bidder). In other words, a seller's efforts are better spent advertising than analyzing the market (in these scenarios). A small simulation sketch of this appears after this item.

In addition to presenting this theorem, Tim discussed several extensions of it. In a paper with Jason Hartline, they showed that in more complex settings (called "single-dimensional"), the spirit of Bulow-Klemperer holds: simple auctions yield approximately optimal revenue. In a paper with Peerapong Dhangwatnotai and Qiqi Yan, they showed further that one can obtain approximately optimal revenue via a simple auction with very limited information about the market. With Inbal Talgam-Cohen and Qiqi Yan, they showed that such results are also possible in certain multi-dimensional settings, such as matching markets.
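
To make the Bulow-Klemperer statement concrete, here is a small simulation sketch. This is my own illustration rather than anything from the talk: bidders have i.i.d. values uniform on [0, 1], for which the optimal auction is known to be a second price auction with reserve 1/2, and we compare its expected revenue with n bidders to the plain second price auction with n+1 bidders.

import numpy as np

rng = np.random.default_rng(0)

def second_price_revenue(vals, reserve=0.0):
    # Highest bidder wins if she clears the reserve; she pays the larger of
    # the reserve and the second highest bid.
    vals = np.sort(vals)[::-1]
    if vals[0] < reserve:
        return 0.0
    return max(reserve, vals[1]) if len(vals) > 1 else reserve

n, trials = 3, 200_000
opt_n, vickrey_extra = 0.0, 0.0
for _ in range(trials):
    opt_n += second_price_revenue(rng.uniform(size=n), reserve=0.5)
    vickrey_extra += second_price_revenue(rng.uniform(size=n + 1))

print("optimal auction, n bidders:       ", opt_n / trials)
print("second price auction, n+1 bidders:", vickrey_extra / trials)

With n = 3 the optimal auction earns about 0.53 in expectation while the second price auction with one extra bidder earns about 0.60, consistent with the theorem.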

2) Costis Daskalakis, Revenue Optimization in Multi-Item Auctions: As mentioned in Tim's talk, revenue-optimization in single-item auctions is very well understood. Not only does this mean that revenue can be optimized exactly with enough work, but it enables the study of simple, approximately optimal auctions such as those presented by Tim. Unfortunately, as soon as a second item enters the picture, all bets are off and all of a sudden we know absolutely nothing. Not only do we not know what the optimal auction looks like, but we don't have any grasp on a good benchmark for designing approximations in general settings (in certain restricted settings, Shuchi Chawla, Jason Hartline, David Malec and Balasubramanian Sivan were able to approximate a meaningful benchmark via a clever multi- to single-dimensional reduction). In contrast, welfare maximization is extremely well understood, and has been for 40 years, 10 years before we knew how to maximize revenue with even a single item.

Costis presented work with Yang Cai and Matt Weinberg showing that in very general multi-dimensional settings, revenue maximization can be computationally efficiently reduced to welfare maximization. Specifically, the mechanism design problem of maximizing revenue is reduced to the algorithm design problem of maximizing welfare (without worrying about incentives).

3) Nicole Immorlica, Black Box Reductions from Mechanism to Algorithm Design: One could argue that the overarching goal in mechanism design is to reduce problems with incentives (such as maximizing an objective subject to truthfulness constraints) to problems without (such as designing an algorithm that optimizes some objective on all inputs). Several famous economic results, such as the second-price auction, VCG (which maximizes welfare), and Myerson's auction (which maximizes revenue when selling a single item), follow this paradigm. In fact, so does the main result presented in Costis' previous talk. An obvious important question is simply "when can such reductions exist?" and, more specifically, "when are they compatible with approximation algorithms?"

Nicole presented work by Jason Hartline and Brendan Lucier, which shows that in all Bayesian, single-dimensional settings, an alpha-approximation algorithm for welfare implies a Bayesian truthful alpha-approximation mechanism for welfare. Follow-up work, done independently by Jason Hartline, Robert Kleinberg, and Azarakhsh Malekian and by Xiaohui Bei and Zhiyi Huang, showed that the same result holds in multi-dimensional settings as well. In a more recent paper, Nicole, with Shuchi Chawla and Brendan Lucier, provided some negative results, essentially showing that these techniques will not apply far beyond these settings. They show first that it is not possible to replace Bayesian truthfulness with deterministic truthfulness, even in single-dimensional settings. They also show that it is not possible to replace welfare with a non-linear objective (such as makespan).

4) Shuchi Chawla, Non-Linear Objectives in Mechanism Design: Non-linear objectives, such as minimizing makespan, have proven much more difficult to handle in mechanism design than linear ones, such as welfare. Specifically, consider the problem where there are n jobs that need to be run on one of m machines. Each machine i reports a processing time t(i,j) for each job j, and the goal is to allocate the jobs to machines in a way that minimizes the makespan (the maximum, over machines, of the total processing time assigned to that machine); a tiny worked instance of this objective appears after this item. In the single-dimensional version of this problem, Aaron Archer and Eva Tardos showed how to obtain a 3-approximation, and Peerapong Dhangwatnotai, Shahar Dobzinski, Shaddin Dughmi, and Tim Roughgarden later gave a PTAS. For the general problem, Noam Nisan and Amir Ronen showed that VCG obtains only an m-approximation, and Itai Ashlagi, Shahar Dobzinski, and Ron Lavi later proved that no natural, truthful mechanism can do better (for a very reasonable definition of natural mechanisms).

Shuchi presented joint work with Jason Hartline, David Malec, and Balasubramanian Sivan that beats this hardness result using Bayesian truthfulness, obtaining a constant factor approximation whenever n = O(m). In some sense, this is the "hard" case of the problem. In light of the hardness result from Nicole's previous talk, the mechanism cannot simply be a black-box reduction to algorithm design, but instead uses novel ideas.
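
To pin down the objective from the paragraph above, here is the promised tiny worked instance: a brute-force search over all assignments of three jobs to two machines, ignoring incentives entirely. The instance is made up for illustration.

from itertools import product

def makespan(t, assignment):
    # t[i][j] = machine i's reported processing time for job j;
    # assignment[j] = machine that runs job j
    loads = [0.0] * len(t)
    for job, machine in enumerate(assignment):
        loads[machine] += t[machine][job]
    return max(loads)

def brute_force_optimum(t):
    m, n = len(t), len(t[0])
    return min(product(range(m), repeat=n), key=lambda a: makespan(t, a))

t = [[2.0, 1.0, 4.0],   # machine 0's times for jobs 0, 1, 2
     [3.0, 2.0, 1.0]]   # machine 1's times for jobs 0, 1, 2
best = brute_force_optimum(t)
print(best, makespan(t, best))   # (0, 0, 1) with makespan 3.0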

5) Eva Tardos, Price of Anarchy of Practical Auctions: The price of anarchy studies the effect of selfish behavior in games. In order to minimize congestion in a network, a central designer might try to enforce that certain users use certain paths. As this is quite a daunting task, one might instead hope that simply letting users choose their own paths will result in good (but perhaps not optimal) congestion. In a 2000 paper, Tim Roughgarden and Eva Tardos showed exactly this: as long as users choose paths in a Nash equilibrium, the congestion is only a constant factor worse than optimal (a factor of 4/3 for linear latencies; the classic Pigou example, sketched after this item, shows this is tight). Since then, several other games were shown to have a good price of anarchy, including several auction scenarios.


Until recently, all of these results were proved using different techniques, and every new game required a new key insight to prove a PoA bound. In addition, once interest grew to other solution concepts beyond Nash equilibrium, each setting had to be revisited and reproved. In 2009, Tim Roughgarden introduced a "smoothness" technique which essentially reduced proving PoA bounds for several solution concepts to proving that a game (or auction) is "smooth." Eva presented recent work with Vasilis Syrgkanis and Renato Paes Leme in which they extend the smoothness framework to accommodate auctions and combinations of auctions. They show first that several practical auctions, including first price auctions and second price auctions (under some mild assumptions), are "smooth." Second, they show that any sequential or parallel combination of smooth auctions is itself smooth (also under some natural assumptions), and therefore has a good price of anarchy under several solution concepts. This nicely models many practical settings (such as participating in multiple auctions on eBay) and shows that simplicity doesn't cost us much.
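
Here is the classic Pigou example referenced above, as a tiny sketch (my own illustration, not from the talk): one unit of traffic travels from s to t over two parallel links, a top link with constant latency 1 and a bottom link whose latency equals the fraction of traffic using it. Selfish users all take the bottom link, while the optimal routing splits the traffic.

def avg_cost(x_bottom):
    # average latency when a fraction x_bottom of the traffic uses the bottom link
    return (1 - x_bottom) * 1.0 + x_bottom * x_bottom

nash_cost = avg_cost(1.0)   # at equilibrium everyone uses the bottom link
opt_cost = min(avg_cost(i / 10000) for i in range(10001))   # optimum splits 50/50
print(nash_cost / opt_cost)   # price of anarchy = 4/3 on this instance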

==================

[Editor: Matt also contributes a summary of the AGT Session (3A) from Sunday October 21st].

Session 3A:

1) Yang Cai, Optimal Multi-Dimensional Mechanism Design: Reducing Revenue to Welfare Maximization (joint work with Costis Daskalakis and Matt Weinberg): Designing a revenue-maximizing auction for multiple items is one of the central open problems in mathematical economics (see the summary of Costis' workshop talk for more background on the problem). An ideal solution, especially for computer scientists, would be one that reduces the problem of designing the auction (a problem that must respect bidders' incentives) to the problem of designing an algorithm (a problem with no incentive constraints). This paper does exactly that in very general settings.

The main idea is that the revenue-maximizing auction can be described as a solution to an LP with polynomially many variables but exponentially many constraints. Remarkably, a separation oracle for the feasible region can be constructed using only black-box access to an algorithm that maximizes welfare (without concern for incentives). See Tim Roughgarden's summary of this talk by Costis here: http://agtb.wordpress.com/2012/08/29/cmu-summer-school-recap-part-3-hartline-and-daskalakis/.
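
To give a flavor of "the optimal auction as an LP" in a setting vastly simpler than the paper's, here is a sketch of the revenue-maximizing mechanism for a single bidder with a discrete value distribution, written directly as a linear program over allocation probabilities and payments. The two-type instance is made up, and at this scale no separation oracle is needed; this is only meant to show the shape of the formulation.

import numpy as np
from scipy.optimize import linprog

values = np.array([1.0, 2.0])   # possible bidder values
probs = np.array([0.5, 0.5])    # their probabilities
n = len(values)

# Variables z = (x[0..n-1], p[0..n-1]): allocation probability and payment per type.
c = np.concatenate([np.zeros(n), -probs])   # minimize minus the expected revenue

A_ub, b_ub = [], []
for i in range(n):
    # Individual rationality: v_i * x_i - p_i >= 0
    row = np.zeros(2 * n)
    row[i], row[n + i] = -values[i], 1.0
    A_ub.append(row); b_ub.append(0.0)
    for j in range(n):
        if i == j:
            continue
        # Incentive compatibility: v_i * x_i - p_i >= v_i * x_j - p_j
        row = np.zeros(2 * n)
        row[i], row[n + i] = -values[i], 1.0
        row[j], row[n + j] = values[i], -1.0
        A_ub.append(row); b_ub.append(0.0)

bounds = [(0.0, 1.0)] * n + [(None, None)] * n
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
print("optimal expected revenue:", -res.fun)   # 1.0 here (post a price of 1 or 2)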

2) Zhiyi Huang, The Exponential Mechanism for Social Welfare: Private, Truthful, and Nearly Optimal (joint work with Sampath Kannan): Traditional mechanism design usually focuses on maximizing (or nearly maximizing) an objective subject to truthfulness constraints. Traditional differential privacy focuses on maximizing an objective subject to privacy constraints. This paper explores the possibility of achieving all three goals simultaneously. The authors show that the Exponential mechanism, a well-studied mechanism from differential privacy, is actually truthful as well. In other words, truthfulness is compatible with differential privacy, at least for maximizing welfare.

The main idea is that the Exponential mechanism is a maximal-in-distributional-range (MIDR) mechanism for the welfare objective offset by the Shannon entropy, and MIDR mechanisms are known to be truthful. They also show that in several settings the mechanism can be implemented efficiently.
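
For concreteness, here is the Exponential mechanism's selection rule as a short sketch. The definition is the standard one from the differential privacy literature; the toy instance and names are mine, and the payments needed for truthfulness are omitted.

import math, random

def exponential_mechanism(outcomes, welfare, eps, sensitivity=1.0):
    # Sample outcome o with probability proportional to
    # exp(eps * welfare(o) / (2 * sensitivity)), where sensitivity bounds how
    # much one agent's report can change the welfare of any outcome.
    scores = [eps * welfare(o) / (2.0 * sensitivity) for o in outcomes]
    shift = max(scores)                    # subtract the max for numerical stability
    weights = [math.exp(s - shift) for s in scores]
    r, acc = random.random() * sum(weights), 0.0
    for o, w in zip(outcomes, weights):
        acc += w
        if r <= acc:
            return o
    return outcomes[-1]

# Toy use: two agents, three outcomes, welfare = sum of reported values.
vals = [{"A": 1.0, "B": 0.2, "C": 0.0}, {"A": 0.3, "B": 1.0, "C": 0.0}]
print(exponential_mechanism(["A", "B", "C"],
                            lambda o: sum(v[o] for v in vals), eps=1.0))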

3) Laszlo Vegh, Concave Generalized Flows with Applications to Market Equilibria: Concave generalized flow networks build upon standard flow networks. In a traditional flow network, when x units of flow leave node v along an edge to w, x units arrive at w. In a concave generalized flow network, every edge e has a concave, monotone gain function g_e, so that when x units leave v along edge e toward w, g_e(x) units arrive at w instead. The goal is still to maximize the flow arriving at the sink. This problem (and the special case where every gain function is linear) has been extensively studied, and can be solved via convex programming. However, until this result, no combinatorial algorithm was known (that applied in all settings).

The paper provides a combinatorial algorithm with quite good running time (not much worse than solving a regular network flow), and also provides new applications to finding market equilibria. Specifically, the paper shows that equilibria of linear Fisher markets (and other market equilibrium solutions) can be computed using concave generalized flows.
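
As a minimal illustration of "solvable via convex programming," here is a degenerate toy instance of my own (not the paper's combinatorial algorithm): two parallel source-to-sink edges with concave, monotone gains and a cap on how much flow can leave the source, so no flow-conservation constraints are needed.

import numpy as np
from scipy.optimize import minimize

gains = [lambda x: np.sqrt(x),     # edge 1: sqrt gain function
         lambda x: np.log1p(x)]    # edge 2: log(1 + x) gain function

def neg_flow_into_sink(x):
    # x[e] = flow sent on edge e out of the source; gains[e](x[e]) reaches the sink
    return -sum(g(xe) for g, xe in zip(gains, x))

res = minimize(neg_flow_into_sink, x0=[1.0, 1.0],
               bounds=[(0, None), (0, None)],
               constraints=[{"type": "ineq", "fun": lambda x: 4.0 - sum(x)}])
print("flow into sink:", -res.fun, "flow out of source per edge:", res.x)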
