Longest Path Search

Machine learning, information, and tail bounds

Posted 2018-09-04

Usually, in explaining the connection between information theory and machine learning, I would begin by writing down the definition of entropy and deriving some useful results about it, and then come back to tell you that you can look at ML as an information problem, where nature picks some parameters and 'communicates them' through a noisy channel (e.g., via samples from some distribution or some other stochastic process), which you then have to infer (i.e., decode). While this approach is common, my explanations are likely to be far worse than the many available texts. So I'll try to do something mildly different (and perhaps a bit backwards) and hope it's at least somewhat entertaining, and, ideally, do it all without assuming much more than some comfort with statistics (e.g., the weak law of large numbers and Markov's inequality).

Maximum-likelihood and KL-divergence

The maximum-likelihood estimator (MLE) is probably the simplest estimator: if you have a probability distribution $p(x~|~\theta)$ which models your data, you pick the hypothesis $\theta$ which makes the observed data most likely. In other words, we want to solve the optimization problem:

$$
\theta^\mathrm{MLE} = \underset{\theta}{\operatorname{argmax}}~~p(x~|~\theta).
$$

While this framework is quite general, we'll prove that this estimator is consistent in the case where our data points, $x = \{x^i\}$, are all independently drawn from $p(\cdot~|~\theta^*)$, where $\theta^*$ is the "true" hypothesis. In other words, when

$$
p(x~|~\theta) = \prod_i p(x^i~|~\theta).
$$
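To make this concrete, here's a minimal numerical sketch (not part of the argument below, and with a model I'm choosing arbitrarily for illustration): for i.i.d. Gaussian samples with unknown mean and known unit variance, a brute-force search for the $\theta$ maximizing the log-likelihood lands essentially on the sample mean, which is the MLE for this model.

```python
# Illustrative sketch: MLE for the mean of a N(theta, 1) model, found by a
# brute-force search over candidate thetas (all constants chosen arbitrarily).
import numpy as np

rng = np.random.default_rng(0)
theta_star, n = 1.3, 1000
x = rng.normal(theta_star, 1.0, size=n)          # samples from p(. | theta*)

def log_likelihood(theta, x):
    # log p(x | theta) = sum_i log p(x^i | theta) for a N(theta, 1) model
    return np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * (x - theta) ** 2)

candidates = np.linspace(-3, 3, 2001)
theta_mle = candidates[np.argmax([log_likelihood(t, x) for t in candidates])]
print(theta_mle, x.mean())                       # both close to theta* = 1.3
```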

The proof that this estimator is consistent is relatively simple and assumes only the weak law of large numbers, which says that the empirical mean of a bunch of i.i.d. variables $\{Y_i\}$ converges[1] to its expectation

$$
\frac{1}{n}\sum_i Y_i \overset{p}{\to} \mathbb{E}[Y_1].
$$

(from here on out, I will write 'converges in probability' just as $\to$, instead of $\overset{p}{\to}$).
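(If you'd like to see the weak law in action, here is a tiny simulation with an arbitrarily chosen distribution; nothing about it is specific to what follows.)

```python
# The weak law, numerically: empirical means of i.i.d. draws concentrate
# around E[Y_1] as n grows (exponential distribution chosen arbitrarily).
import numpy as np

rng = np.random.default_rng(1)
for n in [10, 100, 10_000, 1_000_000]:
    y = rng.exponential(scale=2.0, size=n)   # E[Y_1] = 2.0
    print(n, abs(y.mean() - 2.0))            # deviations shrink with n
```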

First, note that[2]

$$
\underset{\theta}{\operatorname{argmax}}\left(\prod_i p(x^i~|~\theta)\right) = \underset{\theta}{\operatorname{argmax}}\left(\log \left(\prod_i p(x^i~|~\theta)\right)\right),
$$

since $0 < x \le y \iff \log(x) \le \log(y)$ (i.e., $\log$ is monotonic[3], so it preserves any maximum or minimum). We also have

$$
\log \left(\prod_i p(x^i~|~\theta)\right) = \sum_i \log p(x^i ~|~ \theta),
$$

and now we need some way of comparing the current hypothesis $\theta$ with the true hypothesis $\theta^*$. The simplest way is to subtract one from the other and show that the difference is negative whenever $\theta \ne \theta^*$, so this is what we will do.[4] In particular:

$$
\sum_i \log p(x^i~|~\theta) - \sum_i \log p(x^i~|~\theta^*) = \sum_i \log\left(\frac{p(x^i~|~\theta)}{p(x^i~|~\theta^*)}\right).
$$

If we can prove the quantity above is negative with high probability, then we're set! So divide by $n$ on both sides and note that, by the weak law, we have

$$
\frac{1}{n}\sum_i \log\left(\frac{p(x^i~|~\theta)}{p(x^i~|~\theta^*)}\right) \to \mathbb{E}_{X}\left(\log\left(\frac{p(X~|~\theta)}{p(X~|~\theta^*)}\right)\right).
$$

(the expectation here is taken with respect to the true distribution $p(\cdot~|~\theta^*)$). Now, $\log(x)$ is a concave function (proof: take the second derivative and note that it's always negative), so this means that

$$
\mathbb{E}(\log(Y)) \le \log(\mathbb{E}(Y)),
$$

for any random variable $Y$ (this is Jensen's inequality). In fact, since $\log$ is strictly concave, equality can only happen if $Y$ takes on a single value (almost surely), so, as long as $Y$ is not constant, we have

$$
\mathbb{E}(\log(Y)) < \log(\mathbb{E}(Y)).
$$
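(A quick sanity check, with a positive, non-constant $Y$ I'm picking out of a hat: a Monte Carlo estimate puts $\mathbb{E}(\log(Y))$ strictly below $\log(\mathbb{E}(Y))$, as promised.)

```python
# Monte Carlo check of (strict) Jensen's inequality for the concave log:
# E[log Y] < log E[Y] for a non-constant, positive Y (here Y is lognormal).
import numpy as np

rng = np.random.default_rng(2)
y = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)
print(np.mean(np.log(y)), np.log(np.mean(y)))   # roughly 0.0 vs. 0.5
```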

Applying this inequality with $Y = p(X~|~\theta)/p(X~|~\theta^*)$, which is not constant whenever the two hypotheses give genuinely different distributions, is the only magical part of the proof, and it gives us

$$
\begin{aligned}
\mathbb{E}_{X}\left(\log\left(\frac{p(X~|~\theta)}{p(X~|~\theta^*)}\right)\right) &< \log\mathbb{E}_{X}\left(\frac{p(X~|~\theta)}{p(X~|~\theta^*)}\right) \\
&= \log \int_S p(X~|~\theta^*)\frac{p(X~|~\theta)}{p(X~|~\theta^*)}~dX\\
&= \log \int_S p(X~|~\theta)~dX\\
&= \log 1\\
&= 0.
\end{aligned}
$$

So, as $n \uparrow \infty$, we find that

$$
\frac{1}{n}\sum_i \log\left(\frac{p(x^i~|~\theta)}{p(x^i~|~\theta^*)}\right) < 0
$$

or, multiplying by $n$ and rearranging,

$$
\sum_i \log p(x^i~|~\theta) < \sum_i \log p(x^i~|~\theta^*)
$$

with high probability. So, any point which is not $\theta^*$ will have a lower likelihood than $\theta^*$.[5]
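Here's a small simulation of this whole argument, under assumptions I'm making up purely for illustration (a unit-variance Gaussian location model): the average log-likelihood ratio of a wrong hypothesis against the true one settles down to a strictly negative constant, so the true hypothesis eventually wins.

```python
# Sketch: for a N(theta, 1) model, (1/n) sum_i log[p(x^i|theta)/p(x^i|theta*)]
# for a wrong theta converges to a negative constant, so theta* eventually
# has the higher likelihood.
import numpy as np

rng = np.random.default_rng(3)
theta_star, theta_wrong = 0.0, 0.5

def log_ratio(x, theta, theta_star):
    # log p(x|theta) - log p(x|theta*) for unit-variance Gaussians
    return -0.5 * (x - theta) ** 2 + 0.5 * (x - theta_star) ** 2

for n in [10, 100, 10_000, 1_000_000]:
    x = rng.normal(theta_star, 1.0, size=n)
    print(n, log_ratio(x, theta_wrong, theta_star).mean())
    # converges to -(theta_wrong - theta_star)^2 / 2 = -0.125
```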

Tail bounds on information

The next question, of course, is how many samples do we need to actually guess the right hypothesis? There are several ways of attacking this question, but let's start with a basic one: what is the probability that a wrong (empirical) likelihood is actually better than the true empirical likelihood? In other words, can we give an upper bound on

$$
P\left(\prod_i p(x^i~|~\theta) \ge \prod_i p(x^i~|~\theta^*)\right)
$$

that depends on some simple, known quantities? Applying Markov's inequality directly yields the trivial result

$$
P\left(\prod_i \frac{p(x^i~|~\theta)}{p(x^i~|~\theta^*)} \ge 1\right) \le \mathbb{E}_x\left(\prod_i \frac{p(x^i~|~\theta)}{p(x^i~|~\theta^*)}\right) = 1,
$$

that the probability is at most 1 (the expectation equals 1 since, by independence, it factors into a product of terms, each of which integrates to 1, just as before). So where do we go from here? Well, as before, we can turn the product into a sum by taking the log of both sides and dividing by $n$ (déjà vu, anyone?),

$$
P\left(\prod_i \frac{p(x^i~|~\theta)}{p(x^i~|~\theta^*)} \ge 1\right) = P\left(\frac1n\sum_i \log\left( \frac{p(x^i~|~\theta)}{p(x^i~|~\theta^*)}\right) \ge 0\right).
$$

Here, we can weaken the question a little bit by asking: how likely is it that our wrong hypothesis beats the right one by at least some given amount $\varepsilon > 0$ in average log-likelihood? In other words, let's give a bound on

$$
P\left(\frac1n\sum_i \log\left( \frac{p(x^i~|~\theta)}{p(x^i~|~\theta^*)}\right) \ge \varepsilon\right).
$$

Here comes a little bit of magic, but this is a general method in what are known as Chernoff bounds. It's a good technique to keep in your toolbox if you haven't quite seen it before!

Anyways, since $\log$ is a monotonic function, note that $\exp$ (its inverse) is also monotonic, so,

$$
P\left(\frac{1}{n}\sum_i \log\left(\frac{p(x^i~|~\theta)}{p(x^i~|~\theta^*)}\right) \ge \varepsilon\right) = P\left(\exp\left\{\sum_i \log\left( \frac{p(x^i~|~\theta)}{p(x^i~|~\theta^*)}\right)\right\} \ge e^{n\varepsilon}\right),
$$

and applying Markov's inequality to the right-hand side (using, as before, the fact that the expectation of the product of likelihood ratios is 1) yields

$$
P\left(\exp\left\{\sum_i \log\left(\frac{p(x^i~|~\theta)}{p(x^i~|~\theta^*)}\right)\right\} \ge e^{n\varepsilon}\right) \le e^{-n\varepsilon},
$$

so, as the number of samples increases, it becomes exponentially unlikely that the wrong hypothesis beats the true one by more than $\varepsilon$ in average log-likelihood.
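A quick simulation (again in an arbitrarily chosen Gaussian setting) makes the exponential decay visible: estimate, by Monte Carlo, the probability that the average log-likelihood ratio exceeds $\varepsilon$, and compare it with $e^{-n\varepsilon}$.

```python
# Sketch: estimate P( (1/n) sum_i log[p(x^i|theta)/p(x^i|theta*)] >= eps ) by
# Monte Carlo and compare with the e^{-n*eps} bound (unit-variance Gaussian
# model; all constants chosen arbitrarily).
import numpy as np

rng = np.random.default_rng(4)
theta_star, theta_wrong, eps, trials = 0.0, 0.2, 0.05, 100_000

for n in [5, 10, 20, 40]:
    x = rng.normal(theta_star, 1.0, size=(trials, n))
    avg_log_ratio = (-0.5 * (x - theta_wrong) ** 2
                     + 0.5 * (x - theta_star) ** 2).mean(axis=1)
    print(n, (avg_log_ratio >= eps).mean(), np.exp(-n * eps))
    # the empirical probability sits below (and decays with) e^{-n*eps}
```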

Of course, at any point in this proof, we could've multiplied both sides of the inequality (inside the probability) by some $\lambda > 0$ before exponentiating, and everything would've remained true, but note that then we would have a bound

$$
P\left(\frac1n\sum_i \log\left( \frac{p(x^i~|~\theta)}{p(x^i~|~\theta^*)}\right) \ge \varepsilon\right) \le \mathbb{E}_X\left[\left(\frac{p(X ~|~ \theta)}{p(X~|~\theta^*)}\right)^\lambda\right]^n e^{-\lambda n\varepsilon},
$$

which looks almost nice, except that we have no control over the tails of

$$
\left(\frac{p(X ~|~ \theta)}{p(X~|~\theta^*)}\right)^\lambda,
$$

since at no point have we assumed anything about the dependence of $p$ on $\theta$ or $X$ (apart from it being a valid probability distribution). More generally speaking, given any $\lambda \ne 0, 1$, we can find a distribution $p$ for which this quantity blows up (exercise for the reader!), which makes our bound trivial.

It is possible to make some assumptions about how these tails behave, but it's not entirely clear that these assumptions would be natural or useful. If anyone has further thoughts on this, I'd love to hear them!

On Fisher Information and Lower-Bounds

The second set of lower-bounds that are easy to derive and surprisingly useful are the Cramér-Rao bounds on estimators. In particular, for any estimator $\hat\theta$ whose expectation is $\mathbb{E}(\hat\theta) = \psi(\theta)$, with underlying probability distribution $p(\cdot~|~\theta)$, we can show that[6]

$$
\operatorname{Var}(\hat\theta) \ge \frac{[\psi'(\theta)]^2}{I(\theta)},
$$

where $I(\theta) \ge 0$ is the Fisher information of $p(\cdot~|~\theta)$, which is something like the local curvature of $\log p(\cdot~|~\theta)$ around $\theta$. In particular, it is defined as

$$
I(\theta) = -\mathbb{E}_X\left(\frac{\partial^2 \log p(X~|~\theta)}{\partial \theta^2}\right).
$$

In other words, the inequality says that the flatter $\log p$ is at $\theta$, the harder it is to correctly guess the right parameter. This makes sense, since the flatter the log-likelihood is at this point, the harder it is for us to distinguish $\theta$ from the points around it.
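To make the inequality concrete, here's a small check in a setting of my own choosing (a unit-variance Gaussian location model with $n$ i.i.d. samples, where the Fisher information of the joint distribution is $n$, i.e., 1 per sample): the sample mean is unbiased, and its variance sits exactly at the Cramér-Rao bound of $1/n$.

```python
# Sketch: for n i.i.d. N(theta, 1) samples, the Fisher information is 1 per
# sample (n for the joint distribution), so the Cramer-Rao bound for an
# unbiased estimator is 1/n; the sample mean achieves it.
import numpy as np

rng = np.random.default_rng(5)
theta_star, n, trials = 0.7, 50, 100_000

estimates = rng.normal(theta_star, 1.0, size=(trials, n)).mean(axis=1)
print(estimates.var(), 1 / n)        # empirical variance vs. the bound 1/n

# The per-sample Fisher information can also be estimated as the variance of
# the score d/dtheta log p(x|theta) = (x - theta); for N(theta, 1) this is 1.
x = rng.normal(theta_star, 1.0, size=100_000)
print(np.var(x - theta_star))        # approximately I(theta) = 1
```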

I'll give a simple proof of this statement soon (in another post, since this one has become quite a bit longer than expected), but, to see why this makes sense, let's go back to the original proof of the consistency of the MLE for a given probability distribution. Note that setting $\psi(\theta) = \theta$ gives us a bound on the variance of any unbiased estimator of $\theta$.

At one point we used the fact that

$$
\frac{1}{n}\sum_i \log\left(\frac{p(x^i~|~\theta)}{p(x^i~|~\theta^*)}\right) \to \mathbb{E}_{X}\left(\log\left(\frac{p(X~|~\theta)}{p(X~|~\theta^*)}\right)\right) \equiv -D(\theta ~\Vert~\theta^*) \le 0.
$$

This quantity on the right is called the KL-divergence, and it has some very nice information-theoretic interpretations, which I highly recommend you read about, but which I will not get into here. Anyways, assuming that $\theta$ is close to $\theta^*$, we can do a Taylor expansion around the true parameter $\theta^*$ to find

$$
\frac{1}{n}\sum_i \log\left(\frac{p(x^i~|~\theta)}{p(x^i~|~\theta^*)}\right) \approx \frac{1}{n}\sum_i \left(\underbrace{\log\left(\frac{p(x^i~|~\theta^*)}{p(x^i~|~\theta^*)}\right)}_{=0} + (\theta - \theta^*)\frac{\partial_\theta p(x^i ~|~\theta^*)}{p(x^i ~|~ \theta^*)} + \dots\right)
$$

and the quantity on the right-hand side goes to, as $n \uparrow \infty$,

$$
\begin{aligned}
\frac{1}{n}\sum_i \left((\theta - \theta^*)\frac{\partial_\theta p(x^i ~|~\theta^*)}{p(x^i ~|~ \theta^*)} + O((\theta - \theta^*)^2)\right) &\to (\theta - \theta^*)\mathbb{E}_X\left(\frac{\partial_\theta p(X ~|~\theta^*)}{p(X ~|~ \theta^*)}\right) \\
&= (\theta - \theta^*)\int_S \frac{\partial_\theta p(X~|~\theta^*)}{p(X~|~\theta^*)}p(X~|~\theta^*)~dX\\
&=(\theta - \theta^*)\int_S \partial_\theta p(X~|~\theta^*)~dX\\
&=(\theta - \theta^*)\partial_\theta\left(\int_S p(X~|~\theta^*)~dX\right)\\
&= (\theta - \theta^*)\partial_\theta (1)\\
&= 0
\end{aligned}
$$

...zero?![7] Well, the expectation of the first derivative of the log-likelihood vanishes, so taking the second term in the Taylor expansion yields

$$
\frac{1}{2}(\theta - \theta^*)^2\,\mathbb{E}_X\left(\partial_\theta^2 \log p(X~|~\theta^*)\right) = -\frac12(\theta - \theta^*)^2 I(\theta^*).
$$

Putting it all together, we have that

$$
\frac{1}{n}\sum_i \log\left(\frac{p(x^i~|~\theta)}{p(x^i~|~\theta^*)}\right) \to -D(\theta \Vert \theta^*) \approx -\frac12(\theta - \theta^*)^2I(\theta^*) + O((\theta - \theta^*)^3),
$$

or that the curvature of the KL-divergence around $\theta^*$ is the Fisher information! In this case, it shouldn't be entirely surprising that there is some connection between how well we can measure a parameter's value and the Fisher information of that parameter, since the local curvature of the expected log-likelihood around that parameter is exactly the Fisher information.
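As a quick check of this curvature relation, take a model where both sides are easy to compute (a Bernoulli model, chosen just for illustration): the KL-divergence between nearby parameters matches $\frac12(\theta - \theta^*)^2 I(\theta^*)$ up to third-order terms, with $I(\theta^*) = 1/(\theta^*(1 - \theta^*))$.

```python
# Sketch: for a Bernoulli(theta) model, compare the KL-divergence
# E_{theta*}[log(p(X|theta*)/p(X|theta))] with its quadratic approximation
# (1/2) (theta - theta*)^2 I(theta*), where I(theta*) = 1/(theta*(1-theta*)).
import numpy as np

theta_star = 0.3
fisher = 1.0 / (theta_star * (1.0 - theta_star))

def kl(theta_star, theta):
    return (theta_star * np.log(theta_star / theta)
            + (1 - theta_star) * np.log((1 - theta_star) / (1 - theta)))

for delta in [0.1, 0.05, 0.01]:
    print(delta, kl(theta_star, theta_star + delta), 0.5 * delta**2 * fisher)
    # the last two columns agree up to O(delta^3)
```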

On the other hand, I haven't been able to find a direct proof of the above bound (or, even, any other nice bounds) given only the above observation. So, while the connection might make sense, it turns out the proof of the Cramér-Rao bound uses a slightly different technique, which I will present later (along with some other fun results!).

[1] In probability. I.e., the probability that the empirical mean differs from the expectation by some amount, $\left|\frac{1}{n}\sum_i Y_i - \mathbb{E}[Y_1]\right| > \varepsilon$, goes to zero as $n\uparrow \infty$. A simple proof in the finite-variance case follows from Chebyshev's inequality (exercise for the reader!).
[2] It's often easier to deal with log-probabilities than it is to deal with probabilities, so this trick is relatively common.
[3] In fact, the logarithm is strictly increasing, so it preserves maxima and minima exactly. In other words, for any function $\phi: S\to \mathbb{R}^{> 0}$, $\phi$ and $\log \circ\, \phi$ have minima and maxima at exactly the same points.
[4] I am, of course, being sneaky: the subtraction happens to work since it just happens to yield the KL-divergence in expectation, but that's how it goes. Additionally, the requirement really is not that $\theta \ne \theta^*$, but rather that $p(x~|~\theta^*) \ne p(x~|~\theta)$, just in case there happen to be multiple hypotheses with equivalent distributions. So, just assume throughout that $p(\cdot~|~\theta) \ne p(\cdot~|~\theta^*)$ on some set with nonzero probability (in the base distribution) whenever $\theta \ne \theta^*$.
[5] While it may seem that there should be easy bounds to give immediately based on this proof, the problem is that we do not have good control of the second moment of $\log(p(\cdot~|~\theta^*)/p(\cdot~|~\theta))$ (this quantity may not even converge in a nice way). This makes giving any kind of convergence rate quite difficult, since the proof of the weak law given only a first-moment guarantee uses the dominated convergence theorem to give non-constructive bounds.
[6] This is the one-dimensional case, but the $n$-dimensional case is almost identical.
[7] All I will say about interchanging the derivative and the integral is that it is well-justified here by dominated convergence.

Built with Franklin.jl and Julia.