LaTeX and Machine Learning
Information Gain
- Original article
- Information Gain and Mutual Information for Machine Learning
\begin{equation} \mathbf{IG}(\mathbf{S}, a) = \mathbf{H}(\mathbf{S}) - \mathbf{H}(\mathbf{S} \mid a) \end{equation}
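To make the formula concrete, here is a minimal Python sketch (not from the original article) that computes \(\mathbf{IG}(\mathbf{S}, a)\) for a toy dataset split by a single attribute; the function names and toy labels are assumptions made for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(S) of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, attribute_values):
    """IG(S, a) = H(S) - H(S | a), where attribute a splits S into groups by value."""
    n = len(labels)
    groups = {}
    for label, value in zip(labels, attribute_values):
        groups.setdefault(value, []).append(label)
    conditional_entropy = sum((len(g) / n) * entropy(g) for g in groups.values())
    return entropy(labels) - conditional_entropy

# Toy example: binary class labels split by a binary attribute.
labels = ["yes", "yes", "no", "no", "yes", "no"]
attribute = [1, 1, 1, 0, 0, 0]
print(information_gain(labels, attribute))  # ~0.08 bits gained by splitting on the attribute
```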
Mutual information
- References
- Information Gain and Mutual Information for Machine Learning
- An introduction to mutual information - YouTube
Mutual information concerns the outcomes of two random variables.
If we know the value of one of the random variables in a system, there is a corresponding reduction in uncertainty when predicting the other; mutual information measures that reduction in uncertainty.
\begin{equation} \textbf{I}(X_1 ; X_2) = \textbf{I}(X_1, X_2) = \textbf{H}(X_1) - \textbf{H}(X_1 \mid X_2) = \sum_{X_1} \sum_{X_2} P(X_1, X_2) \log \dfrac{P(X_1, X_2)}{\underbrace{P(X_1)P(X_2)}_{\text{Product of Marginals}}} \end{equation}
where \(X_1\) and \(X_2\) are random variables and \(\textbf{I}(X_1 ; X_2)\) is the mutual information for \(X_1\) and \(X_2\).
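As a quick illustration, the sketch below evaluates \(\textbf{I}(X_1 ; X_2)\) directly from an assumed joint probability table using the sum above; the function name and the toy numbers are made up for the example.

```python
import numpy as np

def mutual_information(joint):
    """I(X1; X2) = sum over x1, x2 of P(x1, x2) * log2( P(x1, x2) / (P(x1) P(x2)) )."""
    joint = np.asarray(joint, dtype=float)
    p_x1 = joint.sum(axis=1, keepdims=True)   # marginal P(X1), rows
    p_x2 = joint.sum(axis=0, keepdims=True)   # marginal P(X2), columns
    mask = joint > 0                          # skip zero-probability cells
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (p_x1 * p_x2)[mask])))

# Toy joint distribution P(X1, X2) over two binary variables (rows: X1, columns: X2).
joint = [[0.3, 0.1],
         [0.1, 0.5]]
print(mutual_information(joint))  # reduction in uncertainty about X1 given X2, in bits
```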
Bayes’ Rule
\begin{equation} \underbrace{p(\mathbf{z} \mid \mathbf{x})}_{\text{Posterior}} = \underbrace{p(\mathbf{z})}_{\text{Prior}} \times \frac{\overbrace{p(\mathbf{x} \mid \mathbf{z})}^{\text{Likelihood}}}{\underbrace{\int p(\mathbf{x} \mid \mathbf{z}) \, p(\mathbf{z}) \, \mathrm{d}\mathbf{z}}_{\text{Marginal Likelihood}}} \enspace , \end{equation}
where \(\mathbf{z}\) denotes latent parameters we want to infer and \(\mathbf{x}\) denotes data.
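For a concrete, discrete sketch of the rule, assume \(\mathbf{z}\) can take only a few values, so the marginal-likelihood integral becomes a sum over \(\mathbf{z}\); the prior and likelihood numbers below are placeholder values, not from any real model.

```python
import numpy as np

# Discrete sketch of Bayes' rule: z takes three values, so the
# marginal likelihood integral reduces to a sum over z.
prior = np.array([0.5, 0.3, 0.2])            # p(z): assumed toy prior over three hypotheses
likelihood = np.array([0.10, 0.40, 0.80])    # p(x | z): likelihood of the observed data under each hypothesis

marginal_likelihood = np.sum(likelihood * prior)       # p(x) = sum_z p(x | z) p(z)
posterior = prior * likelihood / marginal_likelihood   # p(z | x)

print(posterior, posterior.sum())  # posterior over z; sums to 1
```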
Glossary