Variational autoencoders

As part of one of my current research projects, I’ve been looking into variational autoencoders (VAEs) for the purpose of identifying and analyzing attractor solutions within higher-dimensional phase spaces. Of course, I couldn’t resist diving into the deeper mathematical theory underlying these generative models, beyond what was strictly necessary in order to implement one. As in the case of the restricted Boltzmann machines I’ve discussed before, there are fascinating relationships between physics, information theory, and machine learning at play here, in particular the intimate connection between (free) energy minimization and Bayesian inference. Insofar as I actually needed to learn how to build one of these networks however, I’ll start by introducing VAEs from a somewhat more implementation-oriented mindset, and discuss the deeper physics/information-theoretic aspects afterwards.

Mathematical formulation

An autoencoder is a type of neural network (NN) consisting of two feedforward networks: an encoder, which maps an input {X} onto a latent space {Z}, and a decoder, which maps the latent representation {Z} to the output {X'}. The idea is that {\mathrm{dim}(Z)<\mathrm{dim}(X)=\mathrm{dim}(X')}, so that information in the original data is compressed into a lower-dimensional “feature space”. For this reason, autoencoders are often used for dimensionality reduction, though their applicability to real-world problems seems rather limited. Training consists of minimizing the difference between {X} and {X'} according to some suitable loss function. They are a form of unsupervised (or rather, self-supervised) learning, in which the NN seeks to learn a highly compressed, deterministic representation of the input.
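
To make this structure concrete, here is a minimal sketch of a plain (non-variational) autoencoder in Keras; the 784-dimensional input and the particular layer sizes are purely illustrative choices, not anything prescribed above:

```python
# Minimal (non-variational) autoencoder sketch in Keras.
# The 784-dimensional input and the layer sizes are illustrative choices only.
from tensorflow.keras import layers, models

dim_x, dim_z = 784, 2          # dim(Z) < dim(X): compress into a small feature space

inputs  = layers.Input(shape=(dim_x,))
encoded = layers.Dense(128, activation='relu')(inputs)
z       = layers.Dense(dim_z)(encoded)                        # latent representation Z
decoded = layers.Dense(128, activation='relu')(z)
outputs = layers.Dense(dim_x, activation='sigmoid')(decoded)  # reconstruction X'

autoencoder = models.Model(inputs, outputs)
# Training minimizes the difference between X and X' (here via binary cross-entropy).
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
```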

VAEs inherit the network structure of autoencoders, but are fundamentally rather different in that they learn the parameters of a probability distribution that represents the data. This makes them much more powerful than their simpler precursors insofar as they are generative models (that is, they can generate new examples of the input type). Additionally, their statistical nature — in particular, learning a continuous probability distribution — makes them vastly superior in yielding meaningful results from new/test data that gets mapped to novel regions of the latent space. In a nutshell, the encoding {Z} is generated stochastically, using variational techniques—and we’ll have more to say on what precisely this means below.

Mathematically, a VAE is a latent-variable model {p_\theta(x,z)} with latent variables {z\in Z} and observed variables (i.e., data) {x\in X}, where {\theta} represents the parameters of the distribution. (For example, Gaussian distributions are uniquely characterized by their mean {\mu} and standard deviation {\sigma}, in which case {\theta\in\{\mu,\sigma\}}; more generally, {\theta} would parametrize the masses and couplings of whatever model we wish to construct. Note that we shall typically suppress the subscript {\theta} where doing so does not lead to ambiguity). This joint distribution can be written

\displaystyle p(x,z)=p(x|z)p(z)~. \ \ \ \ \ (1)

The first factor on the right-hand side is the decoder, i.e., the likelihood {p(x|z)} of observing {x} given {z}; this provides the map from {Z\rightarrow X'\simeq X}. This will typically be either a multivariate Gaussian or Bernoulli distribution, implemented by a NN with as-yet unlearned weights and biases. (Viewed as a function of the variable {z} with the data {x} held fixed, the likelihood is what ties the latent variables to the observations.) The second factor is the prior distribution over latent variables {p(z)}, which will be related to observations {x} via the likelihood function (i.e., the decoder). In order for the model to be computationally tractable, we want to make the simplest possible choice for this distribution; accordingly, one typically chooses a multivariate Gaussian,

\displaystyle p(z)=\mathcal{N}(0,1)~. \ \ \ \ \ (2)

In the context of Bayesian inference, this is technically what’s known as an informative prior, since it assumes that any other parameters in the model are sufficiently small that Gaussian sampling from {Z} does not miss any strongly relevant features. This is in contrast to the somewhat misleadingly named uninformative prior, which endeavors to place no subjective constraints on the variable; for this reason, the latter class are sometimes called objective priors, insofar as they represent the minimally biased choice. In any case, the reason such a simple choice (2) suffices for {p(z)} is that any distribution can be generated by applying a sufficiently complicated function to the normal distribution.
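
To see this last claim concretely, here is a toy numerical sketch (the exponential target distribution and its scale are arbitrary choices for illustration): composing the normal CDF with the inverse CDF of the target turns standard-normal samples into samples from the desired distribution. In a VAE, the decoder learns a far more complicated map of this general sort.

```python
# Toy illustration of the claim above: push standard-normal samples through a
# deterministic function to obtain samples from a different distribution.
# The exponential target (and its scale) is an arbitrary choice for illustration.
import numpy as np
from scipy.stats import norm, expon

eps = np.random.randn(100_000)     # samples from N(0,1)
u   = norm.cdf(eps)                # probability integral transform: u ~ Uniform(0,1)
x   = expon.ppf(u, scale=2.0)      # inverse CDF of the target: x ~ Exponential(mean 2)

print(x.mean(), x.var())           # approximately 2.0 and 4.0, as expected
```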

Meanwhile, the encoder is represented by the posterior probability {p(z|x)}, i.e., the probability of {z} given {x}; this provides the map from {X\rightarrow Z}. In principle, this is given by Bayes’ rule:

\displaystyle p(z|x)=\frac{p(x|z)p(z)}{p(x)}~, \ \ \ \ \ (3)

but this is virtually impossible to compute analytically, since the denominator amounts to evaluating the partition function over all possible configurations of latent variables, i.e.,

\displaystyle p(x)=\int\!\mathrm{d}z\,p(x|z)p(z)~. \ \ \ \ \ (4)

One solution is to compute {p(x)} approximately via Monte Carlo sampling; but the impression I’ve gained from my admittedly superficial foray into the literature is that such models are computationally expensive, noisy, difficult to train, and generally inferior to the more elegant solution offered by VAEs. The key idea is that for most {z}, {p(x|z)\approx0}, so instead of sampling over all possible {z}, we construct a new distribution {q(z|x)} representing the values of {z} which are most likely to have produced {x}, and sample over this new, smaller set of {z} values [2]. In other words, we seek a more tractable approximation {q_\phi(z|x)\approx p_\theta(z|x)}, characterized by some other, variational parameters {\phi}—so-called because we will eventually vary these parameters in order to ensure that {q} is as close to {p} as possible. As usual, the discrepancy between these distributions is quantified by the familiar Kullback-Leibler (KL) divergence:

\displaystyle D_z\left(q(z|x)\,||\,p(z|x)\right)=\sum_z q(z|x)\ln\frac{q(z|x)}{p(z|x)}~, \ \ \ \ \ (5)

where the subscript on the left-hand side denotes the variable over which we sum.

This divergence plays a central role in the variational inference procedure we’re trying to implement, and underlies the connection to the information-theoretic relations alluded to above. Observe that Bayes’ rule enables us to rewrite this expression as

\displaystyle D_z\left(q(z|x)\,||\,p(z|x)\right)= \langle \ln q(z|x)-\ln p(z)\rangle_q -\langle\ln p(x|z)\rangle_q +\ln p(x) \ \ \ \ \ (6)

where {\langle\ldots\rangle_q} denotes the expectation value with respect to {q(z|x)}, and we have used the fact that {\sum\nolimits_z q(z|x) \ln p(x)=\ln p(x)} (since probabilities are normalized to 1, and {p(x)} has no dependence on the latent variables {z}). Now observe that the first term on the right-hand side can be written as another KL divergence. Rearranging, we therefore have

\displaystyle \ln p(x)-D_z\left(q(z|x)\,||\,p(z|x)\right)=-F_q(x) \ \ \ \ \ (7)

where we have identified the (negative) variational free energy

\displaystyle -F_q(x)=\langle\ln p(x|z)\rangle_q-D_z\left(q(z|x)\,||\,p(z)\right)~. \ \ \ \ \ (8)

As the name suggests, this is closely related to the Helmholtz free energy from thermodynamics and statistical field theory; we’ll discuss this connection in more detail below, and in doing so provide a more intuitive definition: the form (8) is well-suited to the implementation-oriented interpretation we’re about to provide, but is a few manipulations removed from the underlying physical meaning.

The expressions (7) and (8) comprise the central equation of VAEs (and variational Bayesian methods more generally), and admit a particularly simple interpretation. First, observe that the left-hand side of (7) is the log-likelihood, minus an “error term” due to our use of an approximate distribution {q(z|x)}. Thus, it’s the left-hand side of (7) that we want our learning procedure to maximize. Here, the intuition underlying maximum likelihood estimation (MLE) is that we seek to maximize the probability of each {x\!\in\!X} under the generative process provided by the decoder {p(x|z)}. As we will see, the optimization process pulls {q(z|x)} towards {p(z|x)} via the KL term; ideally, this vanishes, whereupon we’re directly optimizing the log-likelihood {\ln p(x)}.

The variational free energy (8) consists of two terms: a reconstruction error given by the expectation value of {\ln p(x|z)} with respect to {q(z|x)}, and a so-called regulator given by the KL divergence. The reconstruction error arises from encoding {X} into {Z} using our approximate distribution {q(z|x)}, whereupon the log-likelihood of the original data given these inferred latent variables will be slightly off. The KL divergence, meanwhile, simply encourages the approximate posterior distribution {q(z|x)} to be close to {p(z)}, so that the encoding matches the latent distribution. Note that since the KL divergence is non-negative, (7) implies that the negative variational free energy gives a lower bound on the log-likelihood. For this reason, {-F_q(x)} is sometimes referred to as the Evidence Lower BOund (ELBO) by machine learners.
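
For the standard choices, a diagonal Gaussian for {q(z|x)}, the prior (2), and a Bernoulli decoder (as in the implementation below), both terms can be written down explicitly. The following NumPy sketch (the function name and the single-sample estimate of the expectation value are illustrative assumptions) evaluates {F_q(x)}, i.e., the negative ELBO, for one datapoint:

```python
import numpy as np

def variational_free_energy(x, x_recon, mu, log_var):
    """Single-sample estimate of F_q(x) (the negative ELBO) for one datapoint.

    x           : data vector with entries in [0,1]
    x_recon     : decoder output p(x|z) for a single latent sample z (Bernoulli means)
    mu, log_var : mean and log-variance of the diagonal Gaussian q(z|x)
    """
    # Reconstruction error: -<ln p(x|z)>_q, estimated with one sample of z
    # (Bernoulli log-likelihood, i.e. binary cross-entropy).
    recon = -np.sum(x * np.log(x_recon + 1e-8)
                    + (1 - x) * np.log(1 - x_recon + 1e-8))

    # Regulator: D(q(z|x) || p(z)) with p(z) = N(0,1), in closed form:
    # 0.5 * sum( sigma^2 + mu^2 - 1 - ln sigma^2 )
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

    return recon + kl   # minimizing this maximizes the ELBO
```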

The appearance of the (variational) free energy (8) is not a mere mathematical coincidence, but stems from deeper physical aspects of inference learning in general. I’ll digress upon this below, as promised, but we’ve a bit more work to do first in order to be able to actually implement a VAE in code.

Computing the gradient of the cost function

Operationally, training a VAE consists of performing stochastic gradient descent (SGD) on (8) in order to minimize the variational free energy (equivalently, maximize the ELBO). In other words, this will provide the cost or loss function (9) for the model. Note that since {\ln p(x)} is constant with respect to {q(z|x)}, (7) implies that minimizing the variational energy indeed forces the approximate posterior towards the true posterior, as mentioned above.

In applying SGD to the cost function (8), we actually have two sets of parameters over which to optimize: the parameters {\theta} that define the VAE as a generative model {p_\theta(x,z)}, and the variational parameters {\phi} that define the approximate posterior {q_\phi(z|x)}. Accordingly, we shall write the cost function as

\displaystyle \mathcal{C}_{\theta,\phi}(X)=-\sum_{x\in X}F_q(x) =-\sum_{x\in X}\left[\langle\ln p_\theta(x|z)\rangle_q-D_z\left(q_\phi(z|x)\,||\,p(z)\right) \right]~, \ \ \ \ \ (9)

where, to avoid a preponderance of subscripts, we shall continue to denote {F_q\equiv F_{q_\phi(z|x)}}, and similarly {\langle\ldots\rangle_q=\langle\ldots\rangle_{q_\phi(z|x)}}. Taking the gradient with respect to {\theta} is easy, since only the first term on the right-hand side has any dependence thereon. Hence, for a given datapoint {x\in X},

\displaystyle \nabla_\theta\mathcal{C}_{\theta,\phi}(x) =-\langle\nabla_\theta\ln p_\theta(x|z)\rangle_q \approx-\nabla_\theta\ln p_\theta(x|z)~, \ \ \ \ \ (10)

where in the second step we have replaced the expectation value with a single sample drawn from the latent space {Z}. This is a common method in SGD, in which we take this particular value of {z} to be a reasonable approximation for the average {\langle\ldots\rangle_q}. (Yet-more connections to mean field theory (MFT) we must of temporal necessity forgo; see Mehta et al. [1] for some discussion in this context, or Doersch [2] for further intuition). The resulting gradient can then be computed via backpropagation through the NN.

The gradient with respect to {\phi}, on the other hand, is slightly problematic, since the variational parameters also appear in the distribution with respect to which we compute expectation values. And the sampling trick we just employed means that in the implementation of this layer of the NN, the evaluation of the expectation value involves a stochastic sampling operation: this has no well-defined gradient, and hence we can’t backpropagate through it. Fortunately, there’s a clever method called the reparametrization trick that circumvents this stumbling block. The basic idea is to change variables so that {\phi} no longer appears in the distribution with respect to which we compute expectation values. To do so, we express the latent variable {z} (which is ostensibly drawn from {q_\phi(z|x)}) as a differentiable and invertible transformation of some other, independent random variable {\epsilon}, i.e., {z=g(\epsilon; \phi, x)} (where here “independent” means that the distribution of {\epsilon} does not depend on either {x} or {\phi}; typically, one simply takes {\epsilon\sim\mathcal{N}(0,1)}). We can then replace {\langle\ldots\rangle_{q_\phi}\rightarrow\langle\ldots\rangle_{p_\epsilon}}, whereupon we can move the gradient inside the expectation value as before, i.e.,

\displaystyle -\nabla_\phi\langle\ln p_\theta(x|z)\rangle_{q_\phi} =-\langle\nabla_\phi\ln p_\theta(x|z)\rangle_{p_\epsilon}~. \ \ \ \ \ (11)

Note that in principle, this results in an additional term due to the Jacobian of the transformation. Explicitly, this equivalence between expectation values may be written

\displaystyle \begin{aligned} \langle f(z)\rangle_{q_\phi}&=\int\!\mathrm{d}z\,q_\phi(z|x)f(z) =\int\!\mathrm{d}\epsilon\left|\frac{\partial z}{\partial\epsilon}\right|\,q_\phi(z(\epsilon)|x)\,f(z(\epsilon))\\ &\equiv\int\!\mathrm{d}\epsilon \,p(\epsilon)\,f(z(\epsilon)) =\langle f(z)\rangle_{p_\epsilon} \end{aligned} \ \ \ \ \ (12)

where the Jacobian has been absorbed into the definition of {p(\epsilon)}:

\displaystyle p(\epsilon)\equiv J_\phi(x)\,q_\phi(z|x)~, \quad\quad J_\phi(x)\equiv\left|\frac{\partial z}{\partial\epsilon}\right|~. \ \ \ \ \ (13)

Consequently, the Jacobian would contribute to the second term of the KL divergence via

\displaystyle \ln q_\phi(z|x)=\ln p(\epsilon)-\ln J_\phi(x)~. \ \ \ \ \ (14)

Operationally however, the reparametrization trick simply amounts to performing the requisite sampling on an additional input layer for {\epsilon} instead of on {Z}; this is nicely illustrated in both fig. 74 of Mehta et al. [1] and fig. 4 of Doersch [2]. In practice, this means that the analytical tractability of the Jacobian is a non-issue, since the change of variables is performed downstream of the KL divergence layer—see the implementation details below. The upshot is that while the above may seem complicated, it makes the calculation of the gradient tractable via standard backpropagation.
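
In code, the reparametrization trick amounts to only a few lines. Here is a sketch of such a sampling layer in Keras (the class name is mine; parametrizing the diagonal Gaussian {q_\phi(z|x)} by its mean and log-variance is the usual convention):

```python
import tensorflow as tf
from tensorflow.keras import layers

class Sampling(layers.Layer):
    """Reparametrization trick: z = mu + sigma * eps, with eps ~ N(0,1).

    The randomness lives entirely in eps, whose distribution is independent of
    phi, so gradients can flow through mu and log_var back to the encoder.
    """
    def call(self, inputs):
        mu, log_var = inputs
        eps = tf.random.normal(shape=tf.shape(mu))   # auxiliary noise, no phi-dependence
        return mu + tf.exp(0.5 * log_var) * eps      # differentiable in mu and log_var
```

Each forward pass draws a fresh {\epsilon}, and the gradient with respect to {\phi} is then obtained by ordinary backpropagation through {\mu} and {\ln\sigma^2}.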

Implementation

Having fleshed out the mathematical framework underlying VAEs, how do we actually build one? Let’s summarize the necessary ingredients, layer by layer along the flow from observation space to latent space and back (that is, {X\rightarrow Z\rightarrow X'\!\simeq\!X}), with the Keras API in mind; a minimal code sketch follows the list:

  • We need an input layer, representing the data {X}.
  • We connect this input layer to an encoder, {q_\phi(z|x)}, that maps data into the latent space {Z}. This will be a NN with an arbitrary number of layers, which outputs the parameters {\phi} of the distribution (e.g., the mean and standard deviation, {\phi\in\{\mu,\sigma\}} if {q_\phi} is Gaussian).
  • We need a special KL-divergence layer, to compute the second term in the cost function (8) and add this to the model’s loss function (e.g., the Keras loss). This takes as inputs the parameters {\phi} produced by the encoder, and our Gaussian ansatz (2) for the prior {p(z)}.
  • We need another input layer for the independent distribution {\epsilon}. This will be merged with the parameters {\phi} output by the encoder, and in this way automatically integrated into the model’s loss function.
  • Finally, we feed this merged layer into a decoder, {p_\theta(x|z)}, that maps the latent space back to {X}. This is generally another NN with as many layers as the encoder, which relies on the learned parameters {\theta} of the generative model.
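
Putting these pieces together, a minimal sketch of such a VAE in Keras might look as follows. The layer sizes, the 784-dimensional (MNIST-like) input, and the KLDivergenceLayer and Sampling classes are illustrative assumptions layered on the blueprint above, not a tested implementation; see the notebook accompanying [1], or the post by Louis Tiao mentioned in the references, for complete versions. Note also that {\epsilon} is drawn inside the sampling layer rather than fed through an explicit auxiliary input layer; some implementations make it a separate input instead, but the effect is the same.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

dim_x, dim_h, dim_z = 784, 256, 2      # illustrative sizes

class KLDivergenceLayer(layers.Layer):
    """Identity layer that adds D(q(z|x) || p(z)) to the model loss, with p(z) = N(0,1)."""
    def call(self, inputs):
        mu, log_var = inputs
        kl = 0.5 * tf.reduce_sum(tf.exp(log_var) + tf.square(mu) - 1.0 - log_var, axis=-1)
        self.add_loss(tf.reduce_mean(kl))
        return inputs

class Sampling(layers.Layer):
    """Reparametrization trick (as sketched above): z = mu + sigma * eps, eps ~ N(0,1)."""
    def call(self, inputs):
        mu, log_var = inputs
        eps = tf.random.normal(shape=tf.shape(mu))
        return mu + tf.exp(0.5 * log_var) * eps

# Encoder q_phi(z|x): outputs the parameters (mu, log_var) of a diagonal Gaussian
x_in    = layers.Input(shape=(dim_x,))
h_enc   = layers.Dense(dim_h, activation='relu')(x_in)
mu      = layers.Dense(dim_z)(h_enc)
log_var = layers.Dense(dim_z)(h_enc)

# KL-divergence layer: second term of the cost function, added to the Keras loss
mu, log_var = KLDivergenceLayer()([mu, log_var])

# Reparametrized sampling of z from q_phi(z|x)
z = Sampling()([mu, log_var])

# Decoder p_theta(x|z): Bernoulli means for each component of x
h_dec = layers.Dense(dim_h, activation='relu')(z)
x_out = layers.Dense(dim_x, activation='sigmoid')(h_dec)

vae = models.Model(x_in, x_out)

# Reconstruction term -<ln p(x|z)>_q, estimated with the single sample z
def recon_loss(x_true, x_pred):
    return dim_x * tf.keras.losses.binary_crossentropy(x_true, x_pred)

vae.compile(optimizer='adam', loss=recon_loss)
# vae.fit(X_train, X_train, epochs=..., batch_size=...)   # self-supervised: targets = inputs
```

Once trained, new examples can be generated without the encoder at all: simply draw {z\sim\mathcal{N}(0,1)} from the prior (2) and pass it through the decoder.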

At this stage of the aforementioned research project, it’s far too early to tell whether such a VAE will ultimately be useful for accomplishing our goal. If so, I’ll update this post with suitable links to paper(s), etc. But regardless, the variational inference procedure underlying VAEs is interesting in its own right, and I’d like to close by discussing some of the physical connections to which I alluded above in greater detail.

Deeper connections

The following was largely inspired by the exposition in Mehta et al. [1], though we have endeavored to modify the notation for clarity/consistency. In particular, be warned that what these authors call the “free energy” is actually a dimensionless free energy, which introduces an extra factor of {\beta} (cf. eq. (158) therein); we shall instead stick to standard conventions, in which the mass dimension is {[F]=[E]=[\beta^{-1}]=1}. Of course, we’re eventually going to set {\beta=1} anyway, but it’s good to set things straight.

Consider a system of interacting degrees of freedom {s\in\{x,z\}}, with parameters {\theta} (e.g., {\theta\in\{\mu,\sigma\}} for Gaussians, or the couplings {J_{ij}} between spins {s_i} in the Ising model). We may assign an energy {E(s;\theta)=E(x,z;\theta)} to each configuration, such that the probability {p(s;\theta)=p_\theta(x,z)} of finding the system in a given state at temperature {\beta^{-1}} is

\displaystyle p_\theta(x,z)=\frac{1}{Z[\theta]}e^{-\beta E(x,z;\theta)}~, \ \ \ \ \ (15)

where the partition function with respect to this ensemble is

\displaystyle Z[\theta]=\sum_se^{-\beta E(s;\theta)}~, \ \ \ \ \ (16)

where the sum runs over both {x} and {z}. As the notation suggests, we have in mind that {p_\theta(x,z)} will serve as our latent-variable model, in which {x,z} respectively take on the meanings of visible and latent degrees of freedom as above. Upon marginalizing over the latter, we recover the marginal likelihood, i.e., the partition function (4), here written as a sum over discrete latent variables:

\displaystyle p_\theta(x)=\sum_z\,p_\theta(x,z)=\frac{1}{Z[\theta]}\sum_z e^{-\beta E(x,z;\theta)} \equiv\frac{1}{Z[\theta]}e^{-\beta E(x;\theta)}~, \ \ \ \ \ (17)

where in the last step, we have defined the marginalized energy function {E(x;\theta)} that encodes all interactions with the latent variables; cf. eq. (15) of our post on RBMs.

The above implies that the posterior probability {p(z|x)} of finding a particular value of {z\in Z}, given the observed value {x\in X} (i.e., the encoder) can be written as

\displaystyle p_\theta(z|x) =\frac{p_\theta(x,z)}{p_\theta(x)} =e^{-\beta E(x,z;\theta)+\beta E(x;\theta)} \equiv e^{-\beta E(z|x;\theta)} \ \ \ \ \ (18)

where

\displaystyle E(z|x;\theta) \equiv E(x,z;\theta)-E(x;\theta) \ \ \ \ \ (19)

is the hamiltonian that describes the interactions between {x} and {z}, in which the {z}-independent contributions have been subtracted off; cf. the difference between eq. (12) and (15) here. To elucidate the variational inference procedure however, it will be convenient to re-express the conditional distribution as

\displaystyle p_\theta(z|x)=\frac{1}{Z_p}e^{-\beta E_p} \ \ \ \ \ (20)

where we have defined {Z_p} and {E_p} such that

\displaystyle p_\theta(x)=Z_p~, \qquad\mathrm{and}\qquad p_\theta(x,z)=e^{-\beta E_p}~. \ \ \ \ \ (21)

where the subscript {p=p_\theta(z|x)} will henceforth be used to refer to the posterior distribution, as opposed to either the joint {p(x,z)} or the marginal likelihood (evidence) {p(x)} (this is to facilitate a more compact notation below). Here, {Z_p=p_\theta(x)} is precisely the partition function we encountered in (4), and is independent of the latent variable {z}. Statistically, this simply reflects the fact that in (20), we weight the joint probabilities {p(x,z)} by how likely the condition {x} is to occur. Meanwhile, one must be careful not to confuse {E_p} with {E(z|x;\theta)} above. Rather, comparing (21) with (15), we see that {E_p} represents a sort of renormalized energy, in which the partition function {Z[\theta]} has been absorbed.

Now, in thermodynamics, the Helmholtz free energy is defined as the difference between the energy and the entropy (the latter weighted by a factor of {\beta^{-1}} to fix the dimensions) at constant temperature and volume, i.e., the work obtainable from the system. More fundamentally, it is {-\beta^{-1}} times the log of the partition function of the canonical ensemble. Hence for the encoder (18), we write

\displaystyle F_p[\theta]=-\beta^{-1}\ln Z_p[\theta]=\langle E_p\rangle_p-\beta^{-1} S_p~, \ \ \ \ \ (22)

where {\langle\ldots\rangle_p} is the expectation value with respect to {p_\theta(z|x)} and marginalization over {z} (think of these as internal degrees of freedom), and {S_p} is the corresponding entropy,

\displaystyle S_p=-\sum_zp_\theta(z|x)\ln p_\theta(z|x) =-\langle\ln p_\theta(z|x)\rangle_p~. \ \ \ \ \ (23)

Note that given the canonical form (18), the equivalence of these expressions for {F_p} — that is, the second equality in (22) — follows immediately from the definition of entropy:

\displaystyle S_p=\sum_z p_\theta(z|x)\left[\beta E_p+\ln Z_p\right] =\beta\langle E_p\rangle_p+\ln Z_p~, \ \ \ \ \ (24)

where, since {Z_p} has no explicit dependence on the latent variables, {\langle\ln Z_p\rangle_p=\langle1\rangle_p\ln Z_p=\ln Z_p}. As usual, this partition function is generally impossible to calculate. To circumvent this, we employ the strategy introduced above, namely, we approximate the true distribution {p_\theta(z|x)} by a so-called variational distribution {q(z|x;\phi)=q_\phi(z|x)}, where {\phi} are the variational (e.g., coupling) parameters that define our ansatz. The idea is of course that {q} should be computationally tractable while still capturing the essential features. As alluded to above, this is the reason these autoencoders are called “variational”: we’re eventually going to vary the parameters {\phi} in order to make {q} as close to {p} as possible.

To quantify this procedure, we define the variational free energy (not to be confused with the Helmholtz free energy (22)):

\displaystyle F_q[\theta,\phi]=\langle E_p\rangle_q-\beta^{-1} S_q~, \ \ \ \ \ (25)

where {\langle E_p\rangle_q} is the expectation value of the energy corresponding to the distribution {p_\theta(z|x)} with respect to {q_\phi(z|x)}. While the variational energy {F_q} has the same form as the thermodynamic definition of Helmholtz energy {F_p}, it still seems odd at first glance, since it no longer enjoys the statistical connection to a canonical partition function. To gain some intuition for this quantity, suppose we express our variational distribution in the canonical form, i.e.,

\displaystyle q_\phi(z|x)=\frac{1}{Z_q}e^{-\beta E_q}~, \quad\quad Z_q[\phi]=\sum_ze^{-\beta E_q(x,z;\phi)}~, \ \ \ \ \ (26)

where we have denoted the energy of configurations in this ensemble by {E_q}, to avoid confusion with {E_p}, cf. (18). Then {F_q} may be written

\displaystyle \begin{aligned} F_q[\theta,\phi]&=\sum_z q_\phi(z|x)E_p-\beta^{-1}\sum_z q_\phi(z|x)\left[\beta E_q+\ln Z_q\right]\\ &=\langle E_p(\theta)-E_q(\phi)\rangle_q-\beta^{-1}\ln Z_q[\phi]~. \end{aligned} \ \ \ \ \ (27)

Thus we see that the variational energy is indeed formally akin to the Helmholtz energy, except that it encodes the difference in energy between the true and approximate configurations. We can rephrase this in information-theoretic language by expressing these energies in terms of their associated ensembles; that is, we write {E_p=-\beta^{-1}\left(\ln p+\ln Z_p\right)}, and similarly for {q}, whereupon we have

\displaystyle F_q[\theta,\phi]=\beta^{-1}\sum_z q_\phi(z|x)\ln\frac{q_\phi(z|x)}{p_\theta(z|x)}-\beta^{-1}\ln Z_p[\theta]~, \ \ \ \ \ (28)

where the {\ln Z_q} terms have canceled. Recognizing (5) and (21) on the right-hand side, we therefore find that the difference between the variational and Helmholtz free energies is none other than the KL divergence,

\displaystyle F_q[\theta,\phi]-F_p[\theta]=\beta^{-1}D_z\left(q_\phi(z|x)\,||\,p_\theta(z|x)\right)\geq0~, \ \ \ \ \ (29)

which is precisely (7)! (It is perhaps worth stressing that this follows directly from (25), independently of whether {q(z|x)} takes canonical form).

As stated above, our goal in training the VAE is to make the variational distribution {q} as close to {p} as possible, i.e., minimizing the KL divergence between them. We now see that physically, this corresponds to a variational problem in which we seek to minimize {F_q} with respect to {\phi}. In the limit where we perfectly succeed in doing so, {F_q} has obtained its global minimum {F_p}, whereupon the two distributions are identical.
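
For those who like to see such identities hold numerically, here is a tiny NumPy sketch verifying (29) at {\beta=1} for a discrete toy model (the joint distribution and the variational ansatz below are arbitrary, made-up tables):

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary joint distribution p(x,z) over 3 x-values and 4 z-values (beta = 1)
p_xz = rng.random((3, 4))
p_xz /= p_xz.sum()

x = 1                                   # condition on one observed value of x
p_x   = p_xz[x].sum()                   # Z_p = p(x), cf. (21)
p_z_x = p_xz[x] / p_x                   # true posterior p(z|x), cf. (20)

# Arbitrary (normalized) variational ansatz q(z|x)
q = rng.random(4)
q /= q.sum()

E_p = -np.log(p_xz[x])                  # E_p from p(x,z) = exp(-E_p), cf. (21)
S_q = -np.sum(q * np.log(q))            # entropy of q
F_q = np.sum(q * E_p) - S_q             # variational free energy, cf. (25)
F_p = -np.log(p_x)                      # Helmholtz free energy, cf. (22)
KL  = np.sum(q * np.log(q / p_z_x))     # D_z(q(z|x) || p(z|x)), cf. (5)

print(np.isclose(F_q - F_p, KL))        # True, verifying (29)
```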

Finally, it remains to clarify our implementation-based definition of {F_q} given in (8) (where {\beta=1}). Applying Bayes’ rule, we have

\displaystyle \begin{aligned} F_q&=-\langle\ln p(x|z)\rangle_q+D_z\left(q(z|x)\,||\,p(z)\right) =-\left<\ln\frac{p(z|x)p(x)}{p(z)}\right>_q+\langle\ln q(z|x)-\ln p(z)\rangle_q\\ &=-\langle\ln p(z|x)p(x)\rangle_q+\langle\ln q(z|x)\rangle_q =-\langle\ln p(x,z)\rangle_q-S_q~, \end{aligned} \ \ \ \ \ (30)

which is another definition of {F_q} sometimes found in the literature, e.g., as eq. (172) of Mehta et al. [1]. By expressing {p(x,z)} in terms of {E_p} via (21), we see that this is precisely equivalent to our more thermodynamical definition (25). Alternatively, we could have regrouped the posteriors to yield

\displaystyle F_q=\langle\ln q(z|x)-\ln p(z|x)\rangle_q-\langle\ln p(x)\rangle_q =D\left(q(z|x)\,||\,p(z|x)\right)+F_p~, \ \ \ \ \ (31)

where the identification of {F_p} follows from (21) and (22). Of course, this is just (28) again, which is a nice check on internal consistency.

References

  1. The review by Mehta et al., A high-bias, low-variance introduction to Machine Learning for physicists, is absolutely perfect for those with a physics background, and the accompanying Jupyter notebook on VAEs in Keras for the MNIST dataset was especially helpful for the implementation bits above. The latter is a more streamlined version of this blog post by Louis Tiao.
  2. Doersch has written a Tutorial on Variational Autoencoders, which I found helpful for gaining some further intuition for the mapping between theory and practice.

2 Responses to Variational autoencoders

  1. Vincent says:

    Hi there. I came across your blog and I really like your take on some of the topics in Machine Learning. On the subject of Variational Autoencoders, if you view the variational objective a.k.a. ELBO as a Legendre transform, I was wondering what would be the intuition from this viewpoint. The following blog mentions “Legendre-Fenchel Transform can provide a Free Energy, convexified along the direction internal (configurational) Entropy, allowing the Temperature to control how many local Energy minima are sampled.”, though this is not exactly clear to me as a non-physicist. Thanks for your time!

    https://calculatedcontent.com/2017/07/04/what-is-free-energy-part-i-hinton-helmholtz-and-legendre/


    • Hi Vincent! Thanks for your compliment, and your question. First, the Legendre-Fenchel transform (or convex conjugate) is morally the same as the Legendre transform; the difference is that the latter only provides a one-to-one map for convex functions, while the former holds more generally. The author of the post you mentioned is right to distinguish between the two, but this technicality doesn’t alter the underlying physics, so we can safely ignore it for our immediate purposes.

      I personally find the author’s take on {F} as an average solution to be a bit funny, though: the partition function is already a c-number, so {\langle\ln Z\rangle=\ln Z}; i.e., {F} is technically an ensemble average, but only in a trivial sense. Regardless however, one can certainly think of the temperature as controlling how many minima (i.e., energy eigenstates) are sampled: intuitively, increasing the total energy (equivalently, the temperature) of the system increases the number of internal microstates, which is what the partition function {Z=\sum\nolimits_ie^{-E_i/T}} is counting. Hence the free energy {F=-T\ln Z} increases in magnitude, by definition.

      Now, the ELBO is simply machine learning jargon for the negative of the dimensionless free energy, {-\beta F}. And as the author points out, the free energy is the Legendre transform of the entropy; i.e., free energy and entropy are conjugate variables in the canonical sense. What this means in practical terms is that instead of working with entropy as a function of energy, {S(E)}, we can equally well work with free energy as a function of temperature, {F(T)}. I discuss this relationship in greater depth in my recent post on cumulants.

      So the somewhat cryptic remark you’re asking about is essentially just summarizing this relationship, where the Legendre transform has provided a map from the configuration space (the internal energy, {E}) to the dual vector space ({T}, or rather {\beta}, which can be seen as a conjugate vector by taking the derivative of the action). Again, I’ll refer you to the aforementioned post for more details.

      However, I glossed over a very important detail above, namely that the ELBO is the negative of the variational free energy, not the Helmholtz free energy we’ve been discussing in the context of thermodynamics—note the difference between eqn. (22) and eqn. (25) above. In this case, the analogue of the fundamental thermodynamic relation is eqn. (27), which takes the place of the Legendre transform between free energy and entropy above, where here {\beta} plays the role of the conjugate variable to the difference in energy expectation values between the true and approximate distributions.
