§ Markov and Chebyshev from a measure-theoretic lens
I've been idly watching Probability and Stochastics for Finance (NPTEL), and I came across
this nice way to think about the Markov and Chebyshev inequalities. I wonder whether
Chernoff bounds also yield to this viewpoint.
§ Markov's inequality
In Markov's inequality, we want to bound P(X≥A), where X is a nonnegative random
variable and A > 0. Since we're in measure land, we have no way to directly access
P(⋅). The best we can do is to integrate the constant function 1, since the
probability is "hidden inside" the measure. This makes us compute:
$$P(X \geq A) = \int_{\{X \geq A\}} 1 \, d\mu$$
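To make the "probability is hidden inside the measure" point concrete, here's a minimal sketch in Python: under the empirical measure of N samples, integrating the constant 1 over {X≥A} is just the fraction of samples that land in the event. The choice of Exp(1) and the threshold A = 2 are my own arbitrary picks, not from the lecture.

```python
import numpy as np

# Minimal sketch: under the empirical measure of N draws, the integral of
# the constant function 1 over the event {X >= A} is just the fraction of
# samples that land in that event. Exp(1) and A = 2 are arbitrary choices.
rng = np.random.default_rng(0)
N, A = 100_000, 2.0
X = rng.exponential(scale=1.0, size=N)  # a nonnegative random variable

p_tail = np.mean(X >= A)  # "integrate 1 d(mu)" over {X >= A}
print(p_tail)             # close to exp(-2) ~ 0.1353 for Exp(1)
```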
Hm, how to proceed? We can only attempt to replace the 1 with X to get
a non-trivial bound. But we know that X≥A on the domain of integration, so we
should perhaps first introduce the A:
$$P(X \geq A) = \int_{\{X \geq A\}} 1 \, d\mu = \frac{1}{A} \int_{\{X \geq A\}} A \, d\mu$$
Now we are naturally led to see that A is at most X on this domain, and that since
X≥0, extending the integral to all of Ω can only make it bigger:

$$P(X \geq A) = \int_{\{X \geq A\}} 1 \, d\mu = \frac{1}{A} \int_{\{X \geq A\}} A \, d\mu \leq \frac{1}{A} \int_{\{X \geq A\}} X \, d\mu \leq \frac{1}{A} \int_{\Omega} X \, d\mu = \frac{1}{A} \mathbb{E}[X]$$
This completes Markov's inequality:

$$P(X \geq A) \leq \frac{\mathbb{E}[X]}{A}$$
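As a sanity check, a small numerical sketch (the Exp(1) distribution and the grid of thresholds are arbitrary assumptions on my part), comparing the true tail against the bound E[X]/A:

```python
import numpy as np

# Sanity check of Markov's inequality on samples: the empirical tail
# P(X >= A) should sit below the bound E[X]/A for every threshold A.
rng = np.random.default_rng(0)
X = rng.exponential(scale=1.0, size=100_000)  # nonnegative, E[X] = 1

for A in [0.5, 1.0, 2.0, 4.0]:
    tail = np.mean(X >= A)   # empirical P(X >= A)
    bound = np.mean(X) / A   # Markov's bound, E[X]/A
    print(f"A={A}: P(X >= A) = {tail:.4f} <= E[X]/A = {bound:.4f}")
```

Note that for small A the bound exceeds 1 and is vacuous, though still true; the inequality only bites well out in the tail.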
So we are "smearing" the indicator 1 over the domain {X≥A}: relaxing it up to
X/A, whose integral over all of Ω we recognize as E[X]/A.
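For completeness, here is a sketch (not spelled out above) of how Chebyshev falls out of the same viewpoint: apply Markov to the nonnegative random variable (X−E[X])², whose expectation is exactly the variance.

$$P(|X - \mathbb{E}[X]| \geq A) = P\left((X - \mathbb{E}[X])^2 \geq A^2\right) \leq \frac{\mathbb{E}\left[(X - \mathbb{E}[X])^2\right]}{A^2} = \frac{\mathrm{Var}(X)}{A^2}$$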