Course description
It is well known that functions of large numbers of random quantities
tend to behave rather predictably and ‘less
randomly’ than their constituents. For
instance, the laws of large numbers tell us
that the average of many independent random
variables converges to the expected
value; higher-order refinements such as the
central limit theorem and large deviations
techniques uncover the asymptotic rate at
which this reduction in randomness takes
place. However, if one is interested in
sharper estimates of the probability of
deviation from the typical value, in bounds
valid for a fixed number of observations, in
functions other than the average, or in
functions of dependent random variables,
one must resort to more specific measure
concentration bounds. Perhaps the most basic,
nontrivial examples in this regard are the
Markov and Chebyshev inequalities, which are
encountered in a first course on probability.
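For concreteness, recall their statements: for a nonnegative random
variable X and any t > 0,

\[
\Pr(X \ge t) \;\le\; \frac{\mathbb{E}[X]}{t} \qquad \text{(Markov)},
\]

and applying Markov's inequality to \((X - \mathbb{E}[X])^2\) gives, for
any random variable X with finite variance,

\[
\Pr\bigl(|X - \mathbb{E}[X]| \ge t\bigr) \;\le\; \frac{\operatorname{Var}(X)}{t^2} \qquad \text{(Chebyshev)}.
\]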
This graduate-level course on concentration inequalities will cover
the basic material on this classic topic as
well as introduce several advanced topics and
techniques. The utility of the inequalities
derived will be illustrated by drawing on
applications from electrical engineering,
computer science, and statistics.
- Introduction & motivation: Limit results and concentration bounds
- Chernoff bounds: Hoeffding’s inequality, Bennett’s inequality,
Bernstein’s inequality (see the numerical sketch after this list)
- Variance bounds: Efron-Stein inequality, Poincaré inequality
- The entropy method and bounded difference inequalities
- Log-Sobolev inequalities and hypercontractivity
- Isoperimetric inequalities (Talagrand's convex distance inequality)
- The transportation method
- Influences and threshold phenomena
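The flavor of these results can be previewed with a small simulation.
The sketch below is only an illustration under arbitrary parameter
choices (n = 400 Bernoulli(1/2) observations, deviation t = 0.1), not
course material; it compares the empirical tail probability of the
sample mean with the Hoeffding and Chebyshev bounds.

import numpy as np

rng = np.random.default_rng(seed=0)

n = 400           # number of i.i.d. observations per trial
t = 0.1           # deviation from the mean we ask about
trials = 100_000  # Monte Carlo trials used to estimate the tail probability

# X_1, ..., X_n i.i.d. Bernoulli(1/2); each X_i takes values in [0, 1].
samples = rng.integers(0, 2, size=(trials, n), dtype=np.uint8)
deviations = np.abs(samples.mean(axis=1) - 0.5)
empirical = np.mean(deviations >= t)

# Hoeffding's inequality for [0, 1]-valued variables:
#   P(|mean - E[mean]| >= t) <= 2 * exp(-2 * n * t^2)
hoeffding = 2 * np.exp(-2 * n * t**2)

# Chebyshev's inequality with Var(X_i) = 1/4:
#   P(|mean - E[mean]| >= t) <= (1/4) / (n * t^2)
chebyshev = 0.25 / (n * t**2)

print(f"empirical tail:  {empirical:.1e}")
print(f"Hoeffding bound: {hoeffding:.1e}")  # about 6.7e-04
print(f"Chebyshev bound: {chebyshev:.1e}")  # 6.25e-02

With these parameters, Hoeffding’s bound (about 6.7 × 10⁻⁴) improves on
Chebyshev’s (6.25 × 10⁻²) by roughly two orders of magnitude,
illustrating the gap between exponential and polynomial tail bounds
that the course quantifies.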
Textbook
Stéphane Boucheron, Gábor Lugosi, and Pascal Massart,
“Concentration Inequalities,” Oxford University Press, 2013.