Aranyak Mehta, Amin Saberi, Umesh Vazirani, Vijay Vazirani:
AdWords and Generalized Online Matching
This paper introduced a generalization of the classical online bipartite matching problem that models budgeted ad allocation in online advertising. The algorithm, as well as the notion of a tradeoff-revealing LP used in its design, had a significant impact on the theory and practice of electronic commerce.
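The paper's central idea can be illustrated with a small sketch: each advertiser's bid is scaled by a tradeoff function of the fraction of budget already spent, psi(x) = 1 - e^(x-1), so that spending stays balanced across budgets. This is a minimal illustrative sketch of that budget-scaling rule, not the paper's exact pseudocode; the function names and the revenue accounting are my own choices.

```python
import math

def msvv_allocate(budgets, spent, bids):
    """Pick the advertiser maximizing bid * psi(fraction of budget spent),
    where psi(x) = 1 - e^(x-1) is the tradeoff function from the paper."""
    psi = lambda f: 1 - math.exp(f - 1)
    best, best_score = None, 0.0
    for i, bid in bids.items():
        remaining = budgets[i] - spent[i]
        if remaining <= 0:
            continue  # advertiser's budget is exhausted
        score = min(bid, remaining) * psi(spent[i] / budgets[i])
        if score > best_score:
            best, best_score = i, score
    return best

def run_adwords(budgets, query_bids):
    """Process queries in online order; return the total revenue earned."""
    spent = {i: 0.0 for i in budgets}
    revenue = 0.0
    for bids in query_bids:
        winner = msvv_allocate(budgets, spent, bids)
        if winner is not None:
            charge = min(bids[winner], budgets[winner] - spent[winner])
            spent[winner] += charge
            revenue += charge
    return revenue
```

Under the small-bids assumption (each bid is tiny relative to its budget), this rule achieves the optimal 1 - 1/e competitive ratio; plain greedy (always take the highest bid) only guarantees 1/2.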
Subhash A. Khot, Nisheeth K. Vishnoi:
The Unique Games Conjecture, Integrality Gap for Cut Problems and Embeddability of Negative Type
This paper solved an important open question in both the theory of hardness of approximation and the geometric theory of metric embeddings. The paper shows a super-constant integrality gap for the basic SDP relaxation of unique games, even when the relaxation is strengthened with triangle-inequality constraints. To date, this remains the strongest known relaxation in the SDP or SoS hierarchy for which unique games are provably hard. On top of that, the result has an interesting geometric interpretation: it shows that every embedding of "negative type" metrics into ℓ_1 must have non-constant distortion, thus disproving a conjecture by Goemans and Linial.
Adam T. Kalai, Adam R. Klivans, Yishay Mansour, Rocco A. Servedio:
Agnostically Learning Halfspaces
This paper provided a breakthrough result in computational learning theory: an efficient learning algorithm for the class of halfspaces (linear separators) that could tolerate adversarial label noise, under reasonable distributional assumptions. Prior to this work, there were almost no positive results for learning interesting concept classes with noise, except in the most benign of noise models. Moreover, connections to cryptography suggested that efficiently learning halfspaces with adversarial label noise might be impossible in the distribution-free setting. Kalai, Klivans, Mansour, and Servedio overcame this barrier, showing that under a wide class of distributions (e.g., log-concave distributions), halfspaces *can* be efficiently learned even in the presence of adversarial label noise. The work contributed to a fundamental shift in the field's perspective, leading to an outpouring of new positive results for learning geometric concepts in more challenging noise models and with relaxed distributional assumptions.
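The paper's approach is polynomial regression: fit a low-degree polynomial to the noisy ±1 labels and output the sign of the fitted polynomial as the hypothesis. Below is a toy one-dimensional sketch of that idea; note that the paper uses L1 regression over multivariate polynomials, whereas this simplification uses ordinary least squares in one dimension, and all function names here are my own.

```python
import numpy as np

def poly_features(x, degree):
    # Map 1-D points to monomial features 1, x, x^2, ..., x^degree.
    return np.vander(x, degree + 1, increasing=True)

def agnostic_halfspace_1d(x, y, degree=3):
    """Toy sketch of polynomial regression for agnostic learning:
    fit a low-degree polynomial to the +/-1 labels, then return a
    hypothesis that thresholds its sign.  (The KKMS analysis uses
    L1 regression; least squares here is a simplification.)"""
    A = poly_features(x, degree)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda pts: np.sign(poly_features(pts, degree) @ coef)
```

The key point of the analysis is that under nice distributions, low-degree polynomials approximate the sign function well, so the regression error tracks the noise rate rather than blowing up adversarially.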