TestingLowerBounds.Divergences.KullbackLeibler.KullbackLeibler

Kullback-Leibler divergence #

Main definitions #

kl: Kullback-Leibler divergence between two measures.

Main statements #

kl_ge_mul_log: a lower bound on kl μ ν in terms of the total masses of μ and ν, from which Gibbs' inequality (nonnegativity for probability measures) follows.
kl_eq_zero_iff: converse Gibbs' inequality, kl μ ν = 0 ↔ μ = ν for finite measures with equal total mass.

noncomputable def ProbabilityTheory.kl {α : Type u_1} {mα : MeasurableSpace α} (μ : MeasureTheory.Measure α) (ν : MeasureTheory.Measure α) : EReal

Kullback-Leibler divergence between two measures.

Equations
Instances For
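
For orientation, the lemmas kl_of_ac_of_integrable and kl_of_not_ac below pin down the value of kl in the two main cases. The sketch below only summarizes those lemmas; the case where μ ≪ ν holds but llr μ ν is not μ-integrable is not stated on this page and is assumed here to also give ⊤.

    kl μ ν = ↑(∫ x, MeasureTheory.llr μ ν x ∂μ)   if μ ≪ ν and Integrable (llr μ ν) μ
    kl μ ν = ⊤                                    otherwise (only the ¬ μ ≪ ν case appears below)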
    theorem ProbabilityTheory.kl_of_ac_of_integrable {α : Type u_1} {mα : MeasurableSpace α} {μ : MeasureTheory.Measure α} {ν : MeasureTheory.Measure α} (h1 : μ.AbsolutelyContinuous ν) (h2 : MeasureTheory.Integrable (MeasureTheory.llr μ ν) μ) :
    ProbabilityTheory.kl μ ν = ↑(∫ (x : α), MeasureTheory.llr μ ν x ∂μ)
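
    Here MeasureTheory.llr is Mathlib's log-likelihood ratio. To my understanding (this is not stated on this page and should be checked against Mathlib), it is defined from the Radon-Nikodym derivative as

        MeasureTheory.llr μ ν x = Real.log (μ.rnDeriv ν x).toReal

    so that, for μ ≪ ν, the integral above is the familiar expression ∫ Real.log (dμ/dν) ∂μ.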
    @[simp]
    theorem ProbabilityTheory.kl_of_not_ac {α : Type u_1} {mα : MeasurableSpace α} {μ : MeasureTheory.Measure α} {ν : MeasureTheory.Measure α} (h : ¬μ.AbsolutelyContinuous ν) :
    ProbabilityTheory.kl μ ν = ⊤
    theorem ProbabilityTheory.kl_toReal_of_ac {α : Type u_1} {mα : MeasurableSpace α} {μ : MeasureTheory.Measure α} {ν : MeasureTheory.Measure α} (h : μ.AbsolutelyContinuous ν) :
    (ProbabilityTheory.kl μ ν).toReal = ∫ (a : α), MeasureTheory.llr μ ν a ∂μ

    If μ ≪ ν, then toReal of the Kullback-Leibler divergence is equal to an integral, without any integrability condition. This is not true in general without μ ≪ ν: in that case kl μ ν = ⊤, so its toReal is 0, while the integral on the right-hand side might be finite and non-zero.

    theorem ProbabilityTheory.rightDeriv_mul_log {x : ℝ} (hx : x ≠ 0) :
    rightDeriv (fun (x : ℝ) => x * Real.log x) x = Real.log x + 1
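
    A quick sketch of why x ≠ 0 is needed: away from 0 the product rule gives the stated derivative, while at 0 the right difference quotient of x * Real.log x diverges.

        deriv (fun x => x * Real.log x) x = Real.log x + x * (1 / x) = Real.log x + 1    (x ≠ 0)
        (h * Real.log h - 0) / h = Real.log h → -∞   as h → 0⁺                           (x = 0 excluded)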
    theorem ProbabilityTheory.kl_ge_mul_log' {α : Type u_1} {mα : MeasurableSpace α} {μ : MeasureTheory.Measure α} {ν : MeasureTheory.Measure α} [MeasureTheory.IsFiniteMeasure μ] [MeasureTheory.IsProbabilityMeasure ν] (hμν : μ.AbsolutelyContinuous ν) :
    (μ Set.univ).toReal * (Real.log (μ Set.univ).toReal) ≤ ProbabilityTheory.kl μ ν
    theorem ProbabilityTheory.kl_ge_mul_log {α : Type u_1} {mα : MeasurableSpace α} (μ : MeasureTheory.Measure α) (ν : MeasureTheory.Measure α) [MeasureTheory.IsFiniteMeasure μ] [MeasureTheory.IsFiniteMeasure ν] :
    (μ Set.univ).toReal * (Real.log ((μ Set.univ).toReal / (ν Set.univ).toReal)) ≤ ProbabilityTheory.kl μ ν

    Gibbs' inequality: the Kullback-Leibler divergence between two probability distributions is nonnegative.
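
    A sketch of how nonnegativity follows from kl_ge_mul_log above: probability measures are finite with total mass 1, so the lower bound collapses to 0.

        (μ Set.univ).toReal = 1 and (ν Set.univ).toReal = 1
        ⇒ (μ Set.univ).toReal * Real.log ((μ Set.univ).toReal / (ν Set.univ).toReal) = 1 * Real.log 1 = 0
        ⇒ 0 ≤ ProbabilityTheory.kl μ ν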

    theorem ProbabilityTheory.kl_eq_zero_iff {α : Type u_1} {mα : MeasurableSpace α} {μ : MeasureTheory.Measure α} {ν : MeasureTheory.Measure α} [MeasureTheory.IsFiniteMeasure μ] [MeasureTheory.IsFiniteMeasure ν] (h_mass : μ Set.univ = ν Set.univ) :
    ProbabilityTheory.kl μ ν = 0 ↔ μ = ν

    Converse Gibbs' inequality: for two finite measures with the same total mass, the Kullback-Leibler divergence is zero if and only if the two measures are equal.
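
    A concrete illustration (not part of this file): on α = {0, 1}, let μ and ν be Bernoulli measures with parameters p, q ∈ (0, 1). Both are probability measures, so the total masses agree and μ ≪ ν, and a direct computation of ∫ llr μ ν ∂μ gives

        kl μ ν = ↑(p * Real.log (p / q) + (1 - p) * Real.log ((1 - p) / (1 - q)))

    which is nonnegative and, consistently with kl_eq_zero_iff, vanishes exactly when p = q.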

    Converse Gibbs' inequality: the Kullback-Leibler divergence between two probability distributions is zero if and only if the two distributions are equal.

    The Data Processing Inequality for the Kullback-Leibler divergence.
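
    The Lean statement of this inequality is not reproduced on this page. In its usual mathematical form it says that pushing both measures through the same (possibly random) transformation cannot increase their divergence; the lines below are only an informal sketch, and the notation (Measure.map for a measurable map f, and a Markov kernel κ acting on a measure) is assumed rather than taken from this file.

        kl (μ.map f) (ν.map f) ≤ kl μ ν
        kl (κ ∘ₘ μ) (κ ∘ₘ ν) ≤ kl μ ν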