COLT 2009 Accepted Papers

Yiming Ying and Colin Campbell. Generalization Bounds for Learning the Kernel Problem

Adam Kalai and Ravi Sastry. The Isotron Algorithm: High-Dimensional Isotonic Regression

Nader H. Bshouty and Phil Long. Linear classifiers are nearly optimal when hidden variables have diverse effect

Eric Friedman. Active Learning for Smooth Problems

Sanjay Jain and Frank Stephan. Consistent Partial Identification

Sivan Sabato and Naftali Tishby. Homogeneous Multi-Instance Learning with Arbitrary Dependence

Nicolo Cesa-Bianchi and Gabor Lugosi. Combinatorial Bandits

Nader Bshouty. Optimal Algorithms for the Coin Weighing Problem with a Spring Scale

Hans Simon and Nikolas List. SVM-Optimization and Steepest-Descent Line Search

Daniel Hsu, Sham M. Kakade and Tong Zhang. A Spectral Algorithm for Learning Hidden Markov Models

Eyal Even-Dar, Robert Kleinberg, Shie Mannor and Yishay Mansour. Online Learning for Global Cost Functions

Arnak Dalalyan and Alexandre Tsybakov. Sparse Regression Learning by Aggregation and Langevin Monte-Carlo

Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro and Karthik Sridharan. Stochastic Convex Optimization

Adam Tauman Kalai, Varun Kanade and Yishay Mansour. Reliable Agnostic Learning

Shai Ben-David, David Pal and Shai Shalev-Shwartz. Agnostic Online Learning

Andreas Maurer and Massimiliano Pontil. Empirical Bernstein Bounds and Sample-Variance Penalization

Ofer Dekel and Ohad Shamir. Vox Populi: Collecting High-Quality Labels from a Crowd

Gabor Lugosi, Omiros Papaspiliopoulos and Gilles Stoltz. Online Multi-task Learning with Hard Constraints

Mark Reid and Robert Williamson. Generalised Pinsker Inequalities

Jean-Yves Audibert and Sebastien Bubeck. Minimax policies for adversarial and stochastic bandits

Yishay Mansour, Mehryar Mohri and Afshin Rostamizadeh. Domain Adaptation: Learning Bounds and Algorithms

Alessandro Lazaric and Remi Munos. Hybrid Stochastic-Adversarial On-line Learning

Karim Lounici, Massimiliano Pontil, Alexandre B. Tsybakov and Sara A. van de Geer. Taking Advantage of Sparsity in Multi-Task Learning

Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro and Karthik Sridharan. Learnability and Stability in the General Learning Setting

Samory Kpotufe. Escaping the curse of dimensionality with a tree-based regressor

Hayato Kobayashi and Ayumi Shinohara. Complexity of Teaching by a Restricted Number of Examples

Maria Florina Balcan and Mark Braverman. Finding low error clusterings

Ingo Steinwart, Don Hush and Clint Scovel. Optimal Rates for Regularized Least Squares Regression

Yisong Yue, Josef Broder, Robert Kleinberg and Thorsten Joachims. The K-armed Dueling Bandits Problem

Jacob Abernethy and Alexander Rakhlin. Beating the Adaptive Bandit with High Probability

Luis Rademacher and Navin Goyal. Learning convex bodies is hard

Vitaly Feldman. Robustness of Evolvability

Nicolo Cesa-Bianchi, Claudio Gentile and Fabio Vitale. Fast and Optimal Prediction on a Labeled Tree

H. Brendan McMahan and Matthew Streeter. Tighter Bounds for Multi-Armed Bandits with Expert Advice

Lorenzo Rosasco, Mikhail Belkin and Ernesto De Vito. A Note on Learning with Integral Operators

Hariharan Narayanan and Partha Niyogi. On the sample complexity of learning smooth cuts on a manifold

Jacob Abernethy, Alekh Agarwal, Peter Bartlett and Alexander Rakhlin. A Stochastic View of Optimal Regret through Minimax Duality

Steve Hanneke. Adaptive Rates of Convergence in Active Learning

Jeffrey Jackson and Karl Wimmer. New results for random walk learning

Mark Herbster and Guy Lever. Predicting the Labelling of a Graph via Minimum p-Seminorm Interpolation