As a theoretical physicist making their first foray into machine learning, one is immediately captivated by the fascinating parallel between deep learning and the renormalization group. In essence, both are concerned with the extraction of relevant features via a process of coarse-graining, and preliminary research suggests that this analogy can be made rather precise. This is particularly interesting in the context of attempts to develop a theoretical framework for deep learning; insofar as the renormalization group is well-understood in theoretical physics, the strength of this mathematical analogy may pave the way towards a general theory of deep learning. I hope to return to this tantalizing correspondence soon; but first, we need to understand restricted Boltzmann machines (RBMs).
As the name suggests, an RBM is a special case of the Boltzmann machines we’ve covered before. The latter are useful for understanding the basic idea behind energy-based generative models, but it turns out that all-to-all connectivity leads to some practical problems when training such networks. In contrast, an RBM restricts certain connections (hence the name), which makes training much more efficient. Specifically, the neurons in an RBM are divided into two layers, consisting of visible and hidden units respectively, with intralayer connections prohibited. Visible units essentially correspond to outputs: these are the variables to which the observer has access. In contrast, hidden units interact only through their connections with the visible layer, but in such a way as to encode complex, higher-order interactions between the visible degrees of freedom. This ability to encode such latent variables is what makes RBMs so powerful, and underlies the close connection with the renormalization group (RG).
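To make the bipartite structure concrete, here is a minimal sketch of a Bernoulli–Bernoulli RBM (illustrative code, not from any particular library): because intralayer connections are prohibited, the hidden units are conditionally independent given the visible layer and vice versa, so each layer can be sampled in a single block.

```python
import numpy as np

# Minimal Bernoulli-Bernoulli RBM sketch. The absence of intralayer
# connections means p(h|v) and p(v|h) both factorize, which is what
# makes block Gibbs sampling (and hence training) efficient.
rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # interlayer couplings
a = np.zeros(n_visible)  # visible biases
b = np.zeros(n_hidden)   # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v):
    """p(h_j = 1 | v) factorizes over hidden units."""
    p = sigmoid(b + v @ W)
    return (rng.random(n_hidden) < p).astype(float)

def sample_visible(h):
    """p(v_i = 1 | h) factorizes over visible units."""
    p = sigmoid(a + W @ h)
    return (rng.random(n_visible) < p).astype(float)

# One step of block Gibbs sampling: visible -> hidden -> visible.
v = rng.integers(0, 2, n_visible).astype(float)
h = sample_hidden(v)
v_new = sample_visible(h)
```

Note that each layer is updated in one shot; in a fully connected Boltzmann machine every neuron would instead have to be resampled conditioned on all the others.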
Latent or hidden variables correspond to the degrees of freedom one integrates out (“marginalizes over”, in machine learning (ML) parlance) when performing RG. When properly performed, this procedure ensures that the relevant physics is encoded in correlations between the remaining (visible) degrees of freedom, while allowing one to work with a much simpler model, analogous to an effective field theory. To understand how this works in detail, suppose we wish to apply Wilsonian RG to the Ising model (a pedagogical review by Dimitri Vvedensky can be found here). To do this, we must first transform the (discrete) Ising model into a (continuous) field theory, so that momentum-space RG techniques can be applied. This is achieved via a trick known as the Hubbard-Stratonovich transformation, which neatly illustrates how correlations between visible/relevant variables can be encoded via hidden/irrelevant degrees of freedom.
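For orientation, the scalar form of the Hubbard-Stratonovich identity can be written as follows (one standard convention; the factors in the post’s equation (1) may differ):

```latex
e^{\frac{b^2}{2a}} = \sqrt{\frac{a}{2\pi}} \int_{-\infty}^{\infty} dx\, e^{-\frac{a}{2} x^2 + b x}, \qquad a > 0
```

The quadratic term $b^2/2a$ on the left is traded for a *linear* coupling $bx$ to an auxiliary Gaussian variable $x$; this is precisely the mechanism by which a hidden degree of freedom can mediate an interaction.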
with spins $s_i = \pm 1$, and symmetric coupling matrix $J_{ij} = J$ if $i$ and $j$ are nearest-neighbors, and zero otherwise. The partition function is
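Schematically, with the inverse temperature absorbed into the couplings for brevity (conventions may differ from the original equations):

```latex
H[s] = -\frac{1}{2}\sum_{ij} s_i J_{ij} s_j, \qquad
Z = \sum_{\{s_i\}} \exp\Big( \frac{1}{2}\sum_{ij} s_i J_{ij} s_j \Big)
```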
which is typically solved by the transfer-matrix method. To recast this as a field theory, we observe that the sum over the second term in (2) may be written
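Sketching this step: the Hubbard-Stratonovich transformation trades the quadratic spin coupling for a linear coupling to an auxiliary field $\phi_i$, after which each spin can be summed independently,

```latex
\sum_{\{s_i\}} \exp\Big( \sum_i \phi_i s_i \Big) = \prod_i 2\cosh\phi_i
  = \exp\Big( N \ln 2 + \sum_i \ln\cosh\phi_i \Big)
```

where $N$ is the number of spins.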
where the spin degrees of freedom have been summed over, and we’ve re-exponentiated the result in preparation for its insertion below. This enables us to express the partition function entirely in terms of the latent variable $\phi$:
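A sketch of the form this takes, with $\mathcal{N}$ denoting the Gaussian normalization factor from the Hubbard-Stratonovich step:

```latex
Z = 2^N \mathcal{N} \int \prod_i d\phi_i\, \exp\Big( -\frac{1}{2}\sum_{ij} \phi_i \big(J^{-1}\big)_{ij} \phi_j + \sum_i \ln\cosh\phi_i \Big)
```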
where, since the first term in (2) is independent of the spins, the sum over spins simply introduces a prefactor of $2^N$. We have thus obtained an exact transformation from the original, discrete model of spins to an equivalent, continuum field theory. To proceed with the renormalization of this model, we refer the interested reader to the aforementioned Vvedensky reference. The remarkable upshot, for our purposes, is that all of the physics of the original spin degrees of freedom is entirely encoded in the new field $\phi$. To connect with our ML language above, one can think of this as a latent variable that mediates the correlations between the visible spins, analogous to how UV degrees of freedom give rise to effective couplings at low energies.
So what does this have to do with (restricted) Boltzmann machines? This is neatly explained in the amazing review by Pankaj Mehta et al., A high-bias, low-variance introduction to Machine Learning for physicists, which receives my highest recommendations. The idea is that by including latent variables in generative models, one can encode complex interactions between visible variables without sacrificing trainability (because the correlations between visible degrees of freedom are mediated via the UV degrees of freedom over which one marginalizes, rather than implemented directly as intralayer couplings). The following exposition draws heavily from section XVI of Mehta et al., and the reader is encouraged to turn there for more details.
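The starting point is a Hopfield-type energy for the visible units alone; in conventions close to Mehta et al. (signs and factors may differ), it takes the form

```latex
E(\mathbf{v}) = -\sum_i a_i v_i - \frac{1}{2}\sum_{ij} v_i J_{ij} v_j
  = -\sum_i a_i v_i - \frac{1}{2}\sum_\mu \Big( \sum_i W_{i\mu} v_i \Big)^2
```

where the second equality uses the decomposition $J = W W^T$.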
where $\mathbf{v}$ is a vector of visible neurons, and on the second line we’ve appealed to SVD to decompose the coupling matrix as $J = W W^T$. In its current form, the visible neurons interact directly through the second, coupling term. We now wish to introduce latent or hidden variables that mediate this coupling instead. To do this, consider the Hubbard-Stratonovich transformation (1), with $a = 1$, $b = \sum_i W_{i\mu} v_i$, and $x = h_\mu$. Then the second term in (7) becomes
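Sketching the result, with one unit-variance auxiliary (hidden) variable $h_\mu$ per mode:

```latex
\exp\Big( \frac{1}{2}\sum_\mu \Big( \sum_i W_{i\mu} v_i \Big)^2 \Big)
  = \prod_\mu \frac{1}{\sqrt{2\pi}} \int dh_\mu\, \exp\Big( -\frac{1}{2} h_\mu^2 + h_\mu \sum_i W_{i\mu} v_i \Big)
```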
Of course, this is simply a special case of the following multidimensional Gaussian integral: recall that if $A$ is an $n$-dimensional, symmetric, positive-definite matrix, then
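In standard form:

```latex
\int d^n x\; \exp\Big( -\frac{1}{2}\mathbf{x}^T A\, \mathbf{x} + \mathbf{b}^T \mathbf{x} \Big)
  = \sqrt{\frac{(2\pi)^n}{\det A}}\; \exp\Big( \frac{1}{2}\mathbf{b}^T A^{-1} \mathbf{b} \Big)
```

Folding the auxiliary variables into the Boltzmann weight then yields a joint energy of the schematic form $E(\mathbf{v},\mathbf{h}) = -\sum_i a_i v_i + \frac{1}{2}\sum_\mu h_\mu^2 - \sum_{i\mu} v_i W_{i\mu} h_\mu$ (conventions may differ).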
The salient feature of this new hamiltonian is that it contains no interactions between the visible neurons: the original, intralayer coupling is now mediated via the latent degrees of freedom instead. As we’ll see below, this basic mechanism can be readily generalized such that we can encode arbitrarily complex — that is, arbitrarily high-order — interactions between visible neurons by coupling to a second layer of hidden neurons.
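This mechanism is easy to verify numerically. In the following toy check (mine, not from the post), two binary visible units are each coupled to a single Gaussian hidden unit with weights $w_1$, $w_2$, and with no direct bond between them; marginalizing over the hidden unit should induce an effective pairwise coupling $w_1 w_2$.

```python
import numpy as np

# Two binary visible units v1, v2 coupled to one Gaussian hidden unit h
# with weights w1, w2 and NO direct v1-v2 bond. Marginalizing over h
# should induce an effective pairwise coupling of strength w1*w2.
w1, w2 = 0.7, -1.2
h = np.linspace(-30.0, 30.0, 120_001)
dh = h[1] - h[0]

def log_marginal(v1: int, v2: int) -> float:
    """log of the integral dh exp(-h^2/2 + (w1*v1 + w2*v2)*h)."""
    integrand = np.exp(-0.5 * h**2 + (w1 * v1 + w2 * v2) * h)
    return float(np.log(integrand.sum() * dh))

# Combine the four visible configurations so that single-site terms and
# constants cancel, leaving only the induced pairwise bond.
coupling = (log_marginal(1, 1) + log_marginal(0, 0)
            - log_marginal(1, 0) - log_marginal(0, 1))
assert abs(coupling - w1 * w2) < 1e-6
print(f"induced coupling: {coupling:.4f}, expected w1*w2 = {w1 * w2:.4f}")
```

Analytically this is just the scalar Gaussian integral at work: the marginal weight is $\propto e^{(w_1 v_1 + w_2 v_2)^2/2}$, whose cross term is exactly $w_1 w_2 v_1 v_2$.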
A general RBM is described by the hamiltonian
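In conventions close to Mehta et al. (up to signs), this takes the form

```latex
E(\mathbf{v}, \mathbf{h}) = \sum_i a_i(v_i) + \sum_\mu b_\mu(h_\mu) - \sum_{i\mu} W_{i\mu}\, v_i h_\mu
```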
where the functions $a_i(v_i)$ and $b_\mu(h_\mu)$ can be freely chosen. According to Mehta et al., the most common choices are
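For the visible layer, the two standard choices stack as follows (the hidden-layer functions $b_\mu(h_\mu)$ are defined analogously):

```latex
a_i(v_i) = a_i v_i, \qquad v_i \in \{0, 1\} \\
a_i(v_i) = \frac{v_i^2}{2\sigma_i^2}, \qquad v_i \in \mathbb{R}
```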
The upper choice, with binary neurons, is referred to as a Bernoulli layer, while the lower choice, with continuous outputs, is called a Gaussian layer. Note that choosing the visible layer to be Bernoulli and the hidden layer Gaussian reduces to the example considered above, with unit standard deviations $\sigma_\mu = 1$. Of course, if we marginalize over (i.e., integrate out) the latent variables $h_\mu$, we recover the distribution of visible neurons, cf. (10) above, which we may write as
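Performing the integral over the hidden units defines a marginal energy for the visible layer; schematically,

```latex
p(\mathbf{v}) = \frac{e^{-E(\mathbf{v})}}{Z}, \qquad
E(\mathbf{v}) = \sum_i a_i(v_i) - \sum_\mu \ln \int dh_\mu\, \exp\Big( -b_\mu(h_\mu) + h_\mu \sum_i W_{i\mu} v_i \Big)
```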
The associated cumulant generating function is
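For a single hidden unit with distribution $q_\mu(h_\mu) \propto e^{-b_\mu(h_\mu)}$, this reads

```latex
K_{b_\mu}(t) = \ln \big\langle e^{t h_\mu} \big\rangle
  = \ln \int dh_\mu\, q_\mu(h_\mu)\, e^{t h_\mu}
  = \sum_{n=1}^{\infty} \kappa_n^{(\mu)} \frac{t^n}{n!}
```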
where $\langle \,\cdot\, \rangle$ denotes the expectation value with respect to the distribution $q_\mu(h_\mu) \propto e^{-b_\mu(h_\mu)}$, and the $n$th cumulant $\kappa_n^{(\mu)}$ is obtained by differentiating the power series $n$ times and evaluating the result at $t = 0$, i.e., $\kappa_n^{(\mu)} = \partial_t^n K_{b_\mu}(t)\big|_{t=0}$. The reason for introducing (16) and (17) is that the cumulants of $h_\mu$ are actually encoded in the marginal energy distribution (15)! To see this, observe that taking $t = \sum_i W_{i\mu} v_i$ in the cumulant generating function yields precisely the log term in (15). Therefore we can replace this term with the cumulant expansion, whereupon we obtain
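Sketching the result of substituting $t = \sum_i W_{i\mu} v_i$ and replacing the log term with its cumulant expansion (up to an additive constant):

```latex
E(\mathbf{v}) = \sum_i a_i(v_i) - \sum_\mu \sum_{n=1}^{\infty} \frac{\kappa_n^{(\mu)}}{n!} \Big( \sum_i W_{i\mu} v_i \Big)^n
```

The $n = 2$ term reproduces the pairwise coupling of the Hopfield-type example, while higher $n$ generate higher-order interactions among the visible neurons.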
In other words, after marginalizing over the latent variables, we obtain an effective hamiltonian with arbitrarily high-order interactions between the visible neurons, with the effective couplings weighted by the associated cumulant. As emphasized by Mehta et al., it is the ability to encode such complex interactions with only the simplest couplings between visible and hidden neurons that underlies the incredible power of RBMs. See my later post on the relationship between deep learning and RG for a much more in-depth look at this.
As a final comment, there’s an interesting connection between cumulants and statistical physics, which stems from the fact that for a thermal system with partition function $Z$, the free energy $F = -\beta^{-1} \ln Z$ likewise generates thermodynamic features of the system via its derivatives. Pursuing this here would take us too far afield, but it’s interesting to note yet another point where statistical thermodynamics and machine learning cross paths.