QFT in curved space, part 2: Bogolyubov transformations & the Unruh effect

One of the most important lessons of QFT in curved space is that the notion of a particle is an observer-dependent concept. That’s not to say it isn’t locally useful, but without specifying the details of the mode decomposition and the trajectory/background of the detector/observer, it’s simply not meaningful. In addition to challenging our intuition, this fact lies at the heart of some of the most vexing paradoxes in physics, firewalls chief among them. In this post, I want to elucidate this issue by introducing a very important tool: the Bogolyubov (also transliterated as “Bogoliubov”) transformation.

In a nutshell, a Bogolyubov transformation relates mode decompositions of the field in different states. Consider quantizing the free scalar field {\phi} in {(1\!+\!1)}-dimensional Minkowski (i.e., flat) spacetime. From part 1, we have

\displaystyle \phi(x)=\int\!\frac{\mathrm{d}\mathbf{k}}{\sqrt{4\pi\omega}}\,\left( a_k u_k+a_k^\dagger u_k^*\right)~, \qquad\qquad u_k\equiv \frac{1}{\sqrt{4\pi\omega}}e^{i\mathbf{k} \mathbf{x}-i\omega t}~, \ \ \ \ \ (1)

where {x=(t,\mathbf{x})}, {k=(\omega,\mathbf{k})}, and the creation/annihilation operators satisfy the standard commutation relation, which in our convention is {\big[a_k,a_{k'}^\dagger\big]=4\pi\omega\delta(\mathbf{k}-\mathbf{k}')}, with the vacuum state {\left|\Omega\right>} such that {a_k\left|\Omega\right>=0}. (The derivation of this commutation relation, as well as the reason for splitting the normalization in this particular way when identifying the modes {u_k}, will be discussed below.) Typically, we choose the positive-frequency solutions to the Klein-Gordon equation, {\omega\!>\!0}, so that the operator {a_k^\dagger} has an interpretation as the creation operator for a positive-frequency mode traveling forwards in time. Note however that this statement depends on the existence of a global timelike Killing vector. That is, the modes {u_k\sim e^{ikx}} are of positive frequency with respect to some time parameter {t}, by which we mean they’re eigenfunctions of the operator {\partial_t} with eigenvalue {-i\omega}, where {\omega\!>\!0}:

\displaystyle \partial_t u_k=-i\omega u_k~. \ \ \ \ \ (2)

In general spacetimes, however, there may be no timelike Killing vector with respect to which we can define positive-frequency modes, so this decomposition depends on the observer’s frame of reference. Another way to say this is that the vacuum state is not invariant under general diffeomorphisms, because these have the potential to mix positive- and negative-frequency modes. Indeed, the spirit of general relativity is that there are no privileged coordinate systems. Hence we could equally well quantize the field with a different set of modes:

\displaystyle \phi(x)=\int\!\frac{\mathrm{d} \mathbf{p}}{\sqrt{4\pi\varpi}}\,\left( \bar a_p \bar u_p+\bar a_p^\dagger \bar u_p^*\right)~, \qquad\qquad \bar u_p\equiv\frac{1}{\sqrt{4\pi\varpi}}e^{i\mathbf{p}\mathbf{x}-i\varpi t}~, \ \ \ \ \ (3)

which represent excitations with respect to some other vacuum state {\left|\bar\Omega\right>}, i.e., {\bar a_p\left|\bar\Omega\right>=0}. Since both sets of modes are complete, they must be linear combinations of one another, i.e.,

\displaystyle \bar u_p=\int\!\mathrm{d}\mathbf{k}\,\left(\alpha_{kp}u_k+\beta_{kp}u_k^*\right)~, \qquad\mathrm{and}\qquad u_k=\int\!\mathrm{d}\mathbf{p}\,\left(\alpha_{kp}^*\bar u_p+\beta_{kp}^*\bar u_p^*\right)~. \ \ \ \ \ (4)

These are the advertised Bogolyubov transformations, and the matrices {\alpha_{ij}}, {\beta_{ij}} are called Bogolyubov coefficients. (Note that unlike [1], we’re working directly in the continuum, so the coefficients should be interpreted as continuous functions rather than discrete matrices). These can be determined by appealing to the normalization condition imposed by the Klein-Gordon inner product,

\displaystyle (f,g)\equiv i\int_t\left[\,\left(\partial_tf\right) g^*-f\partial_tg^*\right]~, \ \ \ \ \ (5)

where the integral is performed over any Cauchy slice; for the present case of {(1\!+\!1)}-dimensional flat space, {\int_t=\int\!\mathrm{d}\mathbf{x}}. The idea behind this choice of inner product is that it allows the easy extraction of the Fourier coefficients from (1) as follows:

\displaystyle \begin{aligned} (\phi,u_k)&=i\int\!\mathrm{d}\mathbf{x}\,\left(\partial_t\phi\,u_k^*-\phi\,\partial_tu_k^*\right) =2\omega\int\!\frac{\mathrm{d}\mathbf{p}}{\sqrt{4\pi\omega}}\int\!\mathrm{d}\mathbf{x}\,a_pu_pu_k^*\\ &=\frac{1}{\sqrt{4\pi\omega}}\!\int\!\frac{\mathrm{d}\mathbf{p}}{2\pi}\,a_p\!\int\!\mathrm{d}\mathbf{x}\,e^{i(\mathbf{p}-\mathbf{k})\mathbf{x}} =\frac{1}{\sqrt{4\pi\omega}}\int\!\mathrm{d}\mathbf{p}\,\delta(\mathbf{k}-\mathbf{p})\,a_p =\frac{1}{\sqrt{4\pi\omega}}a_k~, \end{aligned} \ \ \ \ \ (6)

and similarly for {a_k^\dagger}; commuting the two yields the standard commutation relation for the creation/annihilation operators above. The modes {u_k} are orthonormal with respect to this inner product, and therefore provide a convenient basis for the solution space:

\displaystyle (u_k,u_p)=\delta(\mathbf{k}-\mathbf{p})~,\qquad (u_k^*,u_p^*)=-\delta(\mathbf{k}-\mathbf{p})~,\qquad (u_k,u_p^*)=0~, \ \ \ \ \ (7)

and similarly for {\bar u_k}, hence the funny split in the normalization we mentioned below (1) (for our notation/normalization conventions, see part 1). For example,

\displaystyle \begin{aligned} (u_k,u_p)&=i\int\!\mathrm{d}\mathbf{x}\,\left[\,\left(\partial_tu_k\right) u_p^*-u_k\partial_tu_p^*\right] =2\omega\!\int\!\mathrm{d}\mathbf{x}\,u_ku_p^*\\ &=\frac{1}{2\pi}\int\!\mathrm{d}\mathbf{x}\,e^{i(\mathbf{k}-\mathbf{p})\mathbf{x}} =\delta(\mathbf{k}-\mathbf{p})~. \end{aligned} \ \ \ \ \ (8)
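These relations are easy to sanity-check numerically. Below is a minimal sketch (the discretization is mine, not part of the text): put the massless modes on a periodic box of length {L}, so that the continuum {\delta(\mathbf{k}-\mathbf{p})} becomes {\tfrac{L}{2\pi}\delta_{kp}} on the discrete momenta {k_n=2\pi n/L}, and evaluate the Klein-Gordon product (5) at {t=0}, inserting the time derivatives {\partial_tu_k=-i\omega u_k} analytically.

```python
import numpy as np

# Periodic box of length L as an IR regulator: the momenta are k_n = 2 pi n / L,
# and delta(k - p) -> (L / 2 pi) delta_{np}.  Massless dispersion: omega = |k|.
L = 2 * np.pi
x = np.linspace(0, L, 2048, endpoint=False)
dx = x[1] - x[0]

def u(k, t=0.0):
    """Positive-frequency mode u_k = e^{i(k x - omega t)} / sqrt(4 pi omega)."""
    w = abs(k)
    return np.exp(1j * (k * x - w * t)) / np.sqrt(4 * np.pi * w)

def KG(f, g, wf, wg):
    """Klein-Gordon product (f, g) = i int dx [(dt f) g* - f (dt g)*], with the
    time derivative of each pure mode inserted analytically as dt f = -i wf f
    (pass a negative w for a conjugated, i.e. negative-frequency, mode)."""
    return 1j * np.sum((-1j * wf * f) * np.conj(g) - f * np.conj(-1j * wg * g)) * dx

k1, k2 = 3 * 2 * np.pi / L, 5 * 2 * np.pi / L   # two distinct box momenta
norm = L / (2 * np.pi)                           # discrete stand-in for delta(0)

print(KG(u(k1), u(k1), k1, k1).real)             # -> L / 2 pi  (here: 1.0)
print(abs(KG(u(k1), u(k2), k1, k2)))             # -> 0
print(abs(KG(u(k1), np.conj(u(k2)), k1, -k2)))   # -> 0
```

The three outputs realize (7): unit norm in units of {\delta(0)\rightarrow L/2\pi}, orthogonality of distinct modes, and vanishing overlap between positive- and negative-frequency modes.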

Taking the inner product of different sets of modes then allows one to isolate the Bogolyubov coefficients:

\displaystyle \alpha_{kp}=(\bar u_p,u_k)~,\qquad\mathrm{and}\qquad \beta_{kp}=-(\bar u_p,u_k^*)~. \ \ \ \ \ (9)

For example,


\displaystyle \begin{aligned} (\bar u_p,u_k)&=i\!\int\!\mathrm{d}\mathbf{x}\,\left[\,\left(\partial_t\bar u_p\right){u_k}^*-\bar u_p\,\partial_t{u_k}^*\right]\\ &=i\!\int\!\mathrm{d}\mathbf{x}\mathrm{d}\mathbf{j}\,\big[\left(\alpha_{jp}\partial_t u_j+\beta_{jp}\partial_tu_j^*\right){u_k}^*-\left(\alpha_{jp}u_j+\beta_{jp}u_j^*\right)\,\partial_t{u_k}^*\big]\\ &=\omega\!\int\!\mathrm{d}\mathbf{x}\mathrm{d}\mathbf{j}\,\big[\left(\alpha_{jp}u_j-\beta_{jp}u_j^*\right){u_k}^*+\left(\alpha_{jp}u_j+\beta_{jp}u_j^*\right)\,u_k^*\big]\\ &=2\omega\!\int\!\mathrm{d}\mathbf{x}\mathrm{d}\mathbf{j}\,\alpha_{jp}u_ju_k^* =\int\!\mathrm{d}\mathbf{j}\,\alpha_{jp}\delta(\mathbf{j}-\mathbf{k}) =\alpha_{kp}~, \end{aligned} \ \ \ \ \ (10)

where the second equality in the last line follows from (8); similarly for {\beta_{kp}}. Equating the two mode expansions (1) and (3) for the same field {\phi} then leads to the following relationship between the creation/annihilation operators:

\displaystyle a_k=\int\!\mathrm{d}\mathbf{p}\,\left(\alpha_{kp}\bar a_p+\beta_{kp}^*\bar a_p^\dagger\right)~, \qquad\mathrm{and}\qquad \bar a_p=\int\!\mathrm{d}\mathbf{k}\,\left(\alpha_{kp}^* a_k+\beta_{kp}a_k^\dagger\right)~, \ \ \ \ \ (11)

For example, the first of these is obtained by expressing everything in terms of {u_k}:

\displaystyle \begin{aligned} \int\!\mathrm{d}\mathbf{k}\,\left( a_k u_k+a_k^\dagger u_k^*\right)=&\int\!\mathrm{d}\mathbf{p}\,\left( \bar a_p \bar u_p+\bar a_p^\dagger \bar u_p^*\right)\\ =&\int\!\mathrm{d}\mathbf{p}\,\mathrm{d}\mathbf{j}\,\big[\bar a_p \left(\alpha_{jp}u_j+\beta_{jp}u_j^*\right)+\bar a_p^\dagger\left(\alpha_{jp}^*u_j^*+\beta_{jp}^*u_j\right)\big]~. \end{aligned} \ \ \ \ \ (12)

Taking the Klein-Gordon product with {u_q} (recall this is how we isolate the coefficients, cf. (6)), and using the fact that the modes are orthonormal (7), we then have

\displaystyle \int\!\mathrm{d}\mathbf{k}\,a_k \delta(\mathbf{k}-\mathbf{q})=\int\!\mathrm{d}\mathbf{p}\,\mathrm{d}\mathbf{j}\,\left(\alpha_{jp}\bar a_p+\beta_{jp}^*\bar a_p^\dagger\right)\delta(\mathbf{j}-\mathbf{q}) \;\implies\; a_k=\int\!\mathrm{d}\mathbf{p}\,\left(\alpha_{kp}\bar a_p+\beta_{kp}^*\bar a_p^\dagger\right)~. \ \ \ \ \ (13)

Note that had we taken the Klein-Gordon product with {u_q^*}, we’d simply have obtained {a_k^\dagger}. To obtain the second relation, for {\bar a_p}, we repeat this process, instead rewriting the left-hand side of (12) in terms of {\bar u_p}. Importantly, one can see immediately from (11) that these two quantizations, (1) and (3), lead to different Fock spaces (i.e., different vacuum states) unless {\beta_{kp}=0}. Said differently, {\beta_{kp}\neq0} implies that the alternative modes {\bar u_p} are linear combinations of both positive- and negative-frequency modes with respect to the Killing vector {\partial_t}. In this case, {a_k\left|\Omega\right>=0} does not imply {\bar a_p\left|\Omega\right>=0}, so an observer using the barred quantization will detect particles when the unbarred one does not! In some sense, this is just the spirit of relativity (i.e., no privileged reference frames) carried over to QFT, but it’s a radical conclusion nonetheless.
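The statement that {\beta_{kp}\neq0} mixes the two Fock spaces can be made tangible in a single-mode toy model (a finite-dimensional sketch of mine, not the continuum transformation above). Truncating the Fock space of one oscillator, take {b=\alpha a+\beta a^\dagger} with {\alpha=\cosh r} and {\beta=\sinh r}, which satisfies {|\alpha|^2-|\beta|^2=1}; the {a}-vacuum then contains exactly {|\beta|^2} {b}-quanta:

```python
import numpy as np

# Represent a and a† as truncated N x N matrices in the Fock basis, apply a
# single-mode Bogolyubov transform, and count b-particles in the a-vacuum.
N = 40
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation: a|n> = sqrt(n)|n-1>
ad = a.conj().T

r = 0.7
alpha, beta = np.cosh(r), np.sinh(r)          # |alpha|^2 - |beta|^2 = 1
b = alpha * a + beta * ad
bd = b.conj().T

vac = np.zeros(N); vac[0] = 1.0               # a-vacuum: a|0> = 0
n_b = vac @ (bd @ b) @ vac                    # <0_a| b† b |0_a>
print(n_b, beta**2)                           # -> equal: sinh^2(r) b-quanta

# [b, b†] = (|alpha|^2 - |beta|^2)[a, a†] = 1, away from the truncation edge:
comm = b @ bd - bd @ b
print(np.allclose(np.diag(comm)[:N - 2], 1.0))   # -> True
```

The continuum version of this counting is precisely the expectation value computed for Rindler modes later in the post.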

To make things more concrete, let’s apply this machinery to Rindler space, which describes a uniformly accelerating observer in the Minkowski background. The metric is obtained via a hyperbolic transformation of the flat space coordinates {(t,\mathbf{x})} to {(\eta,\boldsymbol{\xi})}:

\displaystyle t=\frac{1}{a}e^{a\boldsymbol{\xi}}\sinh a\eta~,\qquad\qquad \mathbf{x}=\frac{1}{a}e^{a\boldsymbol{\xi}}\cosh a\eta~,\qquad\qquad \ \ \ \ \ (14)

which yields

\displaystyle \mathrm{d} s^2=e^{2a\boldsymbol{\xi}}\left(-\mathrm{d}\eta^2+\mathrm{d}\boldsymbol{\xi}^2\right)~, \ \ \ \ \ (15)

where {a\!>\!0} is some constant that parametrizes the acceleration. Note however that these coordinates cover only a single quadrant of Minkowski space, namely the wedge {\mathbf{x}\!>\!|t|}. The line {\mathbf{x}=t} is the Rindler horizon, the physics of which well-approximates the near-horizon geometry of a black hole (when the curvature and angular directions can be ignored). This last fact underlies the central importance of Rindler space in the study of black holes, and QFT in curved space in general, as it captures many of the key features of horizons (e.g., their thermodynamic properties). Rindler time-slices — that is, lines of constant {\eta} — are straight, {\mathbf{x}\propto t}, while lines of constant {\boldsymbol{\xi}} — which represent a fixed distance from the horizon in these coordinates — are hyperbolae, {\mathbf{x}^2-t^2=a^{-2}e^{2a\boldsymbol{\xi}}}. The proper acceleration along such a trajectory is {ae^{-a\boldsymbol{\xi}}}, so the closer one gets to the horizon, the harder one has to accelerate (note again the analogy with black holes). These hyperbolae asymptote to the null rays {\mathbf{x}=\pm t}, so that the accelerated observer approaches the speed of light in the limit {\eta\rightarrow\pm\infty}; see the image below (I found this online, so don’t let the different labels confuse you):

Rindler coordinates
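(As a quick aside, the transformation (14) and the resulting metric (15) can be verified symbolically; the following sympy snippet is just a sanity check of the algebra, not part of the derivation.)

```python
import sympy as sp

# Check that the hyperbolic coordinates (14) turn the flat metric -dt^2 + dx^2
# into the conformally-flat Rindler form (15).
a, eta, xi = sp.symbols('a eta xi', positive=True)
t = sp.exp(a * xi) * sp.sinh(a * eta) / a
x = sp.exp(a * xi) * sp.cosh(a * eta) / a

# gradients of (t, x) with respect to (eta, xi)
dt = sp.Matrix([sp.diff(t, eta), sp.diff(t, xi)])
dx = sp.Matrix([sp.diff(x, eta), sp.diff(x, xi)])

# induced metric g_ab = -(dt)_a (dt)_b + (dx)_a (dx)_b
g = sp.simplify(-dt * dt.T + dx * dx.T)
print(g)   # diagonal, with entries -e^{2 a xi} and +e^{2 a xi}
```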


Now, suppose we want to determine the particle spectrum that a Rindler observer would detect in the Minkowski vacuum. That is, if you were a constantly accelerating observer in empty flat space, would you still see vacuum? To answer this question, let’s compute the expectation value of the number operator for the Rindler modes in the Minkowski vacuum state, {\left|0_\textrm{\tiny{M}}\right>}. This requires expressing the Rindler creation/annihilation operators in terms of their Minkowski counterparts, in order to figure out how they act on this state.

There’s a trick in Birrell and Davies [1], credited to Unruh, for deducing the linear combination of Minkowski modes that corresponds to a given Rindler mode, which bypasses the need to work out the Bogolyubov coefficients explicitly (though the normalization must be inserted by hand, and the switch to a second, alternative set of Minkowski modes appears rather out of the blue). The basic idea is to write down the combination of Minkowski modes with the same positive-frequency analyticity properties as the Rindler modes in the desired wedge. While this is sufficient if one is only interested in knowing how the Rindler modes act on the Minkowski vacuum, working out the Bogolyubov coefficients explicitly is required in many other scenarios, and is a character-building exercise besides. Since I had to do this in the course of a previous project anyway (we wanted to compute the overlap between two states, which required commuting Rindler and Minkowski modes), I may as well share the calculation here. For simplicity, in what follows I’m going to restrict to the case of a massless scalar field. Performing the computation in {(t,\mathbf{x})} and {(\eta,\boldsymbol{\xi})} coordinates is rather tedious, and requires a case-by-case analysis depending on the signs of various momenta. A more elegant method, presented in [2] (beware, wrong sign convention!), is to first rotate to lightcone coordinates

\displaystyle x_\pm = (\mathbf{x}\pm t)/\sqrt{2}~,\qquad\qquad \bar x_\pm = (\boldsymbol{\xi}\pm \eta)/\sqrt{2}~, \ \ \ \ \ (16)

where we’ve chosen the signs so that {x_\pm>0} in the wedge under consideration. Since we’re working with massless fields, these coordinates have the significant advantage of not mixing positive and negative frequencies. That is, observe that in these coordinates, the Minkowski and Rindler metrics become, respectively,

\displaystyle \mathrm{d} s^2=2\,\mathrm{d} x_+\mathrm{d} x_- \qquad\mathrm{and}\qquad \mathrm{d} s^2=2e^{\sqrt{2}a(\bar x_++\bar x_-)}\mathrm{d}\bar x_+\mathrm{d}\bar x_-~, \ \ \ \ \ (17)

which makes the transformation between them much simpler:

\displaystyle x_\pm = \frac{1}{\sqrt{2}a}e^{\sqrt{2}a\bar x_\pm}~. \ \ \ \ \ (18)
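This too is a one-liner to verify symbolically (again just a sympy sanity check, composing (14) with (16)):

```python
import sympy as sp

# Verify (18): x_+ = e^{sqrt(2) a xbar_+} / (sqrt(2) a), with x_+ built from the
# Rindler map (14) and the lightcone combinations (16).
a, eta, xi = sp.symbols('a eta xi', positive=True)
t = sp.exp(a * xi) * sp.sinh(a * eta) / a
x = sp.exp(a * xi) * sp.cosh(a * eta) / a

xp = (x + t) / sp.sqrt(2)                         # x_+
xbp = (xi + eta) / sp.sqrt(2)                     # xbar_+
claim = sp.exp(sp.sqrt(2) * a * xbp) / (sp.sqrt(2) * a)

print(sp.simplify((xp - claim).rewrite(sp.exp)))  # -> 0
```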

The key point is the following: since the wave equation is conformally invariant, and Rindler is conformal to Minkowski, we can write the mode expansion in terms of plane waves just as we did in the flat space case above, cf. (1). The additional simplification afforded by the use of lightcone coordinates is that the contribution from positive-frequency modes in this expansion can be directly matched to the corresponding positive-frequency contribution in flat space (similarly for the negative frequencies). The lack of mode mixing in these coordinates greatly shortens the computation of the Bogolyubov coefficients, because — in addition to avoiding the case-by-case treatment — it enables us to isolate the desired modes via a simple Fourier transform, rather than needing to perform the full Klein-Gordon inner product as above.

So, let’s split the Minkowski mode decomposition into positive- and negative-frequency parts; from (1), we have

\displaystyle \begin{aligned} \phi&=\int_{-\infty}^\infty\!\frac{\mathrm{d}\mathbf{k}}{4\pi\omega}\left( a_ke^{i\mathbf{k}\mathbf{x}-i\omega t}+a_k^\dagger e^{-i\mathbf{k}\mathbf{x}+i\omega t}\right)\\ &=\int_0^\infty\!\frac{\mathrm{d}\mathbf{k}}{4\pi\mathbf{k}}\left( a_ke^{i\mathbf{k}(\mathbf{x}-t)}+a_k^\dagger e^{-i\mathbf{k}(\mathbf{x}-t)}\right) +\int_{-\infty}^0\!\frac{\mathrm{d}\mathbf{k}}{4\pi|\mathbf{k}|}\left( a_ke^{-i|\mathbf{k}|(\mathbf{x}+t)}+a_k^\dagger e^{i|\mathbf{k}|(\mathbf{x}+t)}\right)\\ &=\int_0^\infty\!\frac{\mathrm{d}\mathbf{k}}{4\pi\mathbf{k}}\left( a_ke^{i\mathbf{k}(\mathbf{x}-t)}+a_k^\dagger e^{-i\mathbf{k}(\mathbf{x}-t)}+a_{-k}e^{-i\mathbf{k}(\mathbf{x}+t)}+a_{-k}^\dagger e^{i\mathbf{k}(\mathbf{x}+t)}\right)~, \end{aligned} \ \ \ \ \ (19)

where in the second line, we used the on-shell condition {\omega=|\mathbf{k}|>0}, and then in the last line changed integration variables {\mathbf{k}\rightarrow-\mathbf{k}} in the second term in order to combine the integrals. In lightcone coordinates (16), this becomes

\displaystyle \phi=\int_0^\infty\!\frac{\mathrm{d}\mathbf{p}}{4\pi\mathbf{p}}\left( a_pe^{i\mathbf{p} x_-}+a_p^\dagger e^{-i\mathbf{p} x_-}+a_{-p}e^{-i\mathbf{p} x_+}+a_{-p}^\dagger e^{i\mathbf{p} x_+}\right)~, \ \ \ \ \ (20)

where we’ve absorbed the pesky factor of {\sqrt{2}} via the rescaling {\mathbf{p}\equiv\sqrt{2}\mathbf{k}}. Observe how the final expression naturally decomposes into creation/annihilation operators for left- and right-movers. And since the Rindler metric is conformally flat, it admits a formally identical expression within the wedge {\mathbf{x}\!>\!|t|}:

\displaystyle \phi=\int_0^\infty\!\frac{\mathrm{d}\mathbf{q}}{4\pi\mathbf{q}}\left( b_qe^{i\mathbf{q} \bar x_-}+b_q^\dagger e^{-i\mathbf{q} \bar x_-}+b_{-q}e^{-i\mathbf{q} \bar x_+}+b_{-q}^\dagger e^{i\mathbf{q} \bar x_+}\right)~, \ \ \ \ \ (21)

where {\mathbf{q}} is the rescaled Rindler momentum, and {b_q\left|0_\textrm{\tiny{R}}\right>=b_{-q}\left|0_\textrm{\tiny{R}}\right>=0}. Now comes the chief advantage of this approach mentioned above: since the notion of positive/negative momenta is preserved under the conformal transformation from Minkowski to Rindler space, we can directly identify

\displaystyle \begin{aligned} \int_0^\infty\!\frac{\mathrm{d}\mathbf{p}}{4\pi\mathbf{p}}\left( a_pe^{i\mathbf{p} x_-}+a_p^\dagger e^{-i\mathbf{p} x_-}\right)&=\int_0^\infty\!\frac{\mathrm{d}\mathbf{q}}{4\pi\mathbf{q}}\left( b_qe^{i\mathbf{q} \bar x_-}+b_q^\dagger e^{-i\mathbf{q} \bar x_-}\right)~,\\ \int_0^\infty\!\frac{\mathrm{d}\mathbf{p}}{4\pi\mathbf{p}}\left( a_{-p}e^{-i\mathbf{p} x_+}+a_{-p}^\dagger e^{i\mathbf{p} x_+}\right)&=\int_0^\infty\!\frac{\mathrm{d}\mathbf{q}}{4\pi\mathbf{q}}\left( b_{-q}e^{-i\mathbf{q} \bar x_+}+b_{-q}^\dagger e^{i\mathbf{q} \bar x_+}\right)~. \end{aligned} \ \ \ \ \ (22)

Now, what we’re after is an expression for the Rindler operators {b_{\pm q},b_{\pm q}^\dagger} in terms of the Minkowski operators {a_{\pm p},a_{\pm p}^\dagger}, so that we can compute the expectation value of the number operator, {\langle0_\textrm{\tiny{M}}\big|\hat N_b\big|0_\textrm{\tiny{M}}\rangle}. To that end, we first isolate the right-moving annihilation mode {b_q} by an appropriate Fourier transform of the first of these equations:

\displaystyle \begin{aligned} \int_{-\infty}^{\infty}\!\mathrm{d}\bar x_-\!&\int_0^\infty\!\frac{\mathrm{d}\mathbf{p}}{4\pi\mathbf{p}}\left( a_pe^{i\mathbf{p} x_--i\mathbf{q}'\bar x_-}+a_p^\dagger e^{-i\mathbf{p} x_--i\mathbf{q}'\bar x_-}\right)\\ &=\int_{-\infty}^\infty\!\mathrm{d}\bar x_-\!\int_0^\infty\!\frac{\mathrm{d}\mathbf{q}}{4\pi\mathbf{q}}\left( b_qe^{i(\mathbf{q}-\mathbf{q}')\bar x_-}+b_q^\dagger e^{-i(\mathbf{q}+\mathbf{q}') \bar x_-}\right)~,\\ &=\int_0^\infty\!\frac{\mathrm{d}\mathbf{q}}{2\mathbf{q}}\left[ b_q\,\delta(\mathbf{q}-\mathbf{q}')+b_q^\dagger\,\delta(\mathbf{q}+\mathbf{q}')\right] =\frac{1}{2|\mathbf{q}'|}\times\begin{cases} b_{q'}\quad&\mathbf{q}'>0~,\\ b_{-q'}^\dagger\quad&\mathbf{q}'<0~, \end{cases} \end{aligned} \ \ \ \ \ (23)

where, a priori, we allowed the momentum {\mathbf{q}'} in the Fourier transform to take any sign. Recalling however that the expressions (22) were derived for positive momenta, we must have

\displaystyle b_{q}=\int_0^\infty\!\frac{\mathrm{d}\mathbf{p}}{2\pi}\frac{\mathbf{q}}{\mathbf{p}}\left[ a_p\,F(\mathbf{p},\mathbf{q})+a_p^\dagger\,F(-\mathbf{p},\mathbf{q})\right]~, \ \ \ \ \ (24)

where we have defined

\displaystyle F(\mathbf{p},\mathbf{q})\equiv\int_{-\infty}^{\infty}\!\mathrm{d} \bar x_-\,e^{i\mathbf{p} x_--i\mathbf{q}\bar x_-} =\int_{-\infty}^{\infty}\!\mathrm{d}\bar x_-\, \mathrm{exp}\left(\frac{i\mathbf{p}}{\sqrt{2}a}e^{\sqrt{2}a\bar x_-}-i\mathbf{q} \bar x_-\right)~, \ \ \ \ \ (25)

In the second equality we have substituted the definition (18). We can then obtain {b_q^\dagger} by simply taking the Hermitian conjugate of this expression, noting that, by definition,

\displaystyle F^*(\mathbf{p},\mathbf{q})=F(-\mathbf{p},-\mathbf{q})~. \ \ \ \ \ (26)

Similarly, by Fourier transforming the second equation in (22), we obtain

\displaystyle b_{-q}^\dagger=\int_0^\infty\!\frac{\mathrm{d}\mathbf{p}}{2\pi}\frac{\mathbf{q}}{\mathbf{p}}\left[ a_{-p}\,F(-\mathbf{p},\mathbf{q})+a_{-p}^\dagger\,F(\mathbf{p},\mathbf{q})\right]~, \ \ \ \ \ (27)

the hermitian conjugate of which gives {b_{-q}}.
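The extraction step (23) is just the familiar statement that Fourier transforming at positive momentum picks out the annihilation part of the field, while negative momentum picks out the creation part. Here’s a toy numerical illustration on a periodic box (discrete momenta and random coefficients of my own choosing, purely for illustration):

```python
import numpy as np

# Build a field profile from a few annihilation/creation coefficients b_q on a
# periodic box (standing in for the xbar_- axis), then recover them by Fourier
# transform: q' > 0 returns b_q, q' < 0 returns the conjugate coefficient.
rng = np.random.default_rng(0)
L = 2 * np.pi
xb = np.linspace(0, L, 4096, endpoint=False)
dxb = xb[1] - xb[0]

qs = np.array([1, 2, 3])                               # integer box momenta
bq = rng.standard_normal(3) + 1j * rng.standard_normal(3)

f = sum(b * np.exp(1j * q * xb) + np.conj(b) * np.exp(-1j * q * xb)
        for q, b in zip(qs, bq))

def extract(qp):
    """(1/L) * int dxb f(xb) e^{-i qp xb}: the Fourier coefficient at qp."""
    return np.sum(f * np.exp(-1j * qp * xb)) * dxb / L

print(np.allclose(extract(2), bq[1]))             # True: q' > 0 -> b_q
print(np.allclose(extract(-2), np.conj(bq[1])))   # True: q' < 0 -> b_q*
```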

With the expressions (24), (27) for the Rindler creation/annihilation operators in terms of their Minkowski counterparts in hand, we can compute the expectation value of the Rindler number operator {\hat N_b=\int\!\frac{\mathrm{d}\mathbf{q}}{4\pi|\mathbf{q}|}b_q^\dagger b_q} in the Minkowski vacuum, to determine what a uniformly accelerating observer would see. For compactness, we shall denote {\langle0_\textrm{\tiny{M}}|\ldots|0_\textrm{\tiny{M}}\rangle\equiv\langle\ldots\rangle} in the following computation:

\displaystyle \begin{aligned} \langle\hat N_b\rangle=\int\!\frac{\mathrm{d}\mathbf{q}}{4\pi|\mathbf{q}|}\langle b_q^\dagger b_q\rangle &=\int\!\frac{\mathrm{d}\mathbf{q}}{4\pi|\mathbf{q}|}\int_0^\infty\!\mathrm{d}\mathbf{p}\,\mathrm{d}\mathbf{k}\,\beta_{pq}^*\beta_{kq}\langle a_pa_k^\dagger\rangle\\ &=\int\!\frac{\mathrm{d}\mathbf{q}}{4\pi|\mathbf{q}|}\int_0^\infty\!\mathrm{d}\mathbf{p}\,\mathrm{d}\mathbf{k}\,\beta_{pq}^*\beta_{kq}\,4\pi\mathbf{p}\,\delta(\mathbf{p}-\mathbf{k})\\ &=\int\!\mathrm{d}\mathbf{q}\int_0^\infty\!\mathrm{d}\mathbf{p}\,\frac{\mathbf{p}}{|\mathbf{q}|}|\beta_{pq}|^2\\ &=\int\!\mathrm{d}\mathbf{q}\int_0^\infty\!\frac{\mathrm{d}\mathbf{p}}{(2\pi)^2}\,\frac{|\mathbf{q}|}{\mathbf{p}}\big|F(-\mathbf{p},\mathbf{q})\big|^2~. \end{aligned} \ \ \ \ \ (28)

Now, the seemingly straightforward way to evaluate the integral over {\mathbf{p}} is to work out {F} explicitly, but this requires a contour integral, and one is then left with an integral over a product of complex gamma functions (I’ve included this at the end of this post, for the masochistic among you). A more clever alternative, taking a tip from [2], is to use the normalization condition on the Bogolyubov coefficients that follows from the canonical commutation relations:

\displaystyle \begin{aligned} {}[b_q,b_{q'}^\dagger]&=\int\!\mathrm{d}\mathbf{p}\,\mathrm{d}\mathbf{p}'\left(\alpha_{pq}^*\alpha_{p'q'}-\beta_{p'q'}^*\beta_{pq}\right) [a_p,a_{p'}^\dagger]\\ &=\int\!\mathrm{d}\mathbf{p}\,4\pi|\mathbf{p}|\left(\alpha_{pq}^*\alpha_{pq'}-\beta_{pq'}^*\beta_{pq}\right)\\ &=\int\!\frac{\mathrm{d}\mathbf{p}}{\pi}\frac{\mathbf{q}\mathbf{q}'}{|\mathbf{p}|}\left[F(\mathbf{p},\mathbf{q})F^*(\mathbf{p},\mathbf{q}')-F(-\mathbf{p},\mathbf{q})F^*(-\mathbf{p},\mathbf{q}')\right]\\ &=\int\!\frac{\mathrm{d}\mathbf{p}}{\pi}\frac{\mathbf{q}\mathbf{q}'}{|\mathbf{p}|}\left[ e^{\pi (\mathbf{q}+\mathbf{q}')/\sqrt{2}a}-1\right]F(-\mathbf{p},\mathbf{q})F^*(-\mathbf{p},\mathbf{q}')~, \\ \end{aligned} \ \ \ \ \ (29)

where in the last step we used the fact that {F(\mathbf{p},\mathbf{q})=e^{\pi \mathbf{q}/\sqrt{2}a}F(-\mathbf{p},\mathbf{q})} (exercise for the reader, or see the appendix below). Since the left-hand side is equal to {4\pi|\mathbf{q}|\delta(\mathbf{q}-\mathbf{q}')}, we must have

\displaystyle \int\!\frac{\mathrm{d}\mathbf{p}}{(2\pi)^2}\frac{\mathbf{q}'}{|\mathbf{p}|}\left[ e^{\pi (\mathbf{q}+\mathbf{q}')/\sqrt{2}a}-1\right]F(-\mathbf{p},\mathbf{q})F^*(-\mathbf{p},\mathbf{q}')=\delta(\mathbf{q}-\mathbf{q}')~, \ \ \ \ \ (30)

and therefore, setting {\mathbf{q}=\mathbf{q}'}, we obtain

\displaystyle \int\!\frac{\mathrm{d}\mathbf{p}}{(2\pi)^2}\frac{\mathbf{q}}{|\mathbf{p}|}\big|F(-\mathbf{p},\mathbf{q})\big|^2=\left( e^{\pi\sqrt{2} \mathbf{q}/a}-1\right)^{-1}\delta(0)~, \ \ \ \ \ (31)

which is exactly the integral we needed in (28)! Therefore,

\displaystyle \langle\hat N_b\rangle =\delta(0)\int\!\mathrm{d}\mathbf{q}\left( e^{\pi\sqrt{2}\mathbf{q}/a}-1\right)^{-1} =\delta(0)\int\!\mathrm{d}\mathbf{k}\left( e^{\frac{2\pi}{a}\mathbf{k}}-1\right)^{-1}~, \ \ \ \ \ (32)

where in the last step, we have undone our rescaling below (20): {\mathbf{q}=\sqrt{2}\mathbf{k}} (that is, here {\mathbf{k}} plays the role of the original Rindler momentum in (3)). Note that the integration measure doesn’t pick up a factor of {\sqrt{2}}, because we really should have integrated with respect to {\mathbf{k}} from the very start of (28).

This is the final answer, except for two divergences: an IR divergence {\delta(0)}, and a UV divergence due to the fact that we integrated over arbitrarily high momenta. The first is rather silly, and arises from the infinite spatial volume {V}:

\displaystyle 2\pi\delta(0)=\lim_{L\rightarrow\infty}\int_{-L/2}^{L/2}\!\mathrm{d}\mathbf{x}\,e^{-i\mathbf{p}\mathbf{x}}\Big|_{\mathbf{p}=0}=\lim_{L\rightarrow\infty}\int_{-L/2}^{L/2}\!\mathrm{d}\mathbf{x}=V~. \ \ \ \ \ (33)

Meanwhile, the UV divergence is to be expected, since there are an infinite number of modes. We could impose a momentum cutoff on the integral to regulate this, but what we’re really after is the number density {\langle \hat n_b\rangle}, where {\hat N_b=V\!\int\!\frac{\mathrm{d}\mathbf{k}}{2\pi}\,\hat n_b}. Hence, stripping off the divergences in this manner, we arrive at

\displaystyle \langle \hat n_b\rangle = \left( e^{2\pi\mathbf{k}/a}-1\right)^{-1}~, \ \ \ \ \ (34)

which is precisely the Planck spectrum for a black body at temperature {T=a/2\pi}.

This result — that a uniformly accelerating observer in Minkowski vacuum observes a thermal spectrum — is called the Unruh effect, and is nothing short of remarkable. From an operationalist standpoint, one can say that the acceleration of the Rindler observer is what provides the energy for these excitations, but this sidesteps the question of their ontological status (indeed, the closely related question of exactly how pairwise entangled Hawking modes evolve to particles in the inertial frame at infinity is a non-trivial problem to which I hope to return). In the context of the maximum entropy procedure we’ve discussed before, the thermal spectrum may be regarded as a consequence of the fact that the Rindler observer has traced-out everything on the other side of the horizon, so the only information she has left is the temperature set by her own acceleration. If we restore units, this temperature reads

\displaystyle T=\frac{\hbar a}{2\pi c k_B}~. \ \ \ \ \ (35)
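To appreciate just how feeble this effect is in practice, we can plug in numbers (a small numerical aside; the constants below are standard CODATA/SI values):

```python
import math

# Unruh temperature T = hbar a / (2 pi c k_B) in SI units.
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s (exact)
k_B = 1.380649e-23       # J / K (exact)
g = 9.80665              # m / s^2, standard Earth gravity

def unruh_T(accel):
    return hbar * accel / (2 * math.pi * c * k_B)

print(f"T at one Earth gravity: {unruh_T(g):.2e} K")            # ~ 4e-20 K
print(f"acceleration for T = 1 K: {1 / unruh_T(1):.2e} m/s^2")  # ~ 2.5e20 m/s^2
```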

The Unruh effect thus represents a deep interplay between statistical thermodynamics née information theory ({k_B}), relativity ({c}), and quantum field theory ({\hbar}). It is another manifestation of the thermal nature of horizons we mentioned in part 1, and arises as an inevitable consistency condition between inertial and Rindler frames. Furthermore, since the Rindler frame well-approximates the near-horizon geometry of a static black hole, the Unruh effect is intimately related to Hawking radiation, and my personal suspicion is that when we understand things more deeply, we’ll find the same underlying explanation for both. Indeed, in the Bekenstein-Hawking formula for black hole entropy {S=\frac{k_BA}{4\ell_P^2}}, the only additional ingredient in this exceptional confluence is the presence of gravity ({G})—the nature of which we’re far from having grasped.


  1. N. D. Birrell and P. C. W. Davies, Quantum Fields in Curved Space. Cambridge Monographs on Mathematical Physics. Cambridge Univ. Press, Cambridge, UK, 1984. http://www.cambridge.org/mw/academic/subjects/physics/theoretical-physics-and-mathematical-physics/quantum-fields-curved-space?format=PB
  2. V. F. Mukhanov and S. Winitzki, Introduction to Quantum Fields in Classical Backgrounds. 2004. https://uwaterloo.ca/physics-of-information-lab/sites/ca.physics-of-information-lab/files/uploads/files/text.pdf (freely-available draft of the published textbook).

As an exercise in contour integration, one can obtain an explicit expression for the integral (25). The trick is to massage it into a gamma function:

\displaystyle \Gamma(\zeta,a,b)=\int_a^b\!\mathrm{d} \chi\,e^{-\chi}\chi^{\zeta-1} =\begin{cases} \Gamma(\zeta)~\qquad &a\!=\!0~,\;\;b\rightarrow\infty~,\\ \Gamma(\zeta,s)~\qquad &a\!=\!s~,\;\;b\rightarrow\infty~,\\ \gamma(\zeta,s)~\qquad &a\!=\!0~,\;\;b\!=\!s~, \end{cases} \ \ \ \ \ (36)

where {\Gamma(\zeta,s)} and {\gamma(\zeta,s)} are the upper and lower incomplete gamma functions, respectively, and {\zeta\in\mathbb{C}} with {\mathrm{Re}(\zeta)\!>\!0}. We then begin by defining a new variable {z=e^{\sqrt{2}a\bar x_-}}, so that

\displaystyle F=\frac{1}{\sqrt{2}a}\int_0^\infty\!\mathrm{d} z\,e^{ip z}z^{-iq-1}~, \ \ \ \ \ (37)

where we have defined the rescaled momenta {\sqrt{2}a\,p=\mathbf{p}}, {\sqrt{2}a\,q=\mathbf{q}} (and broken the boldface convention, because I didn’t want to introduce even more letters). This is pretty close to the desired form already, but the oscillatory exponent {e^{ipz}} necessitates that we promote {z} to a complex variable, and treat this as a contour integral in the complex plane.

Recall that for {z=re^{i\theta}\in\mathbb{C}}, we have {z^\alpha=r^\alpha e^{i\alpha\theta}}, which has branch points at both {z=0} and {z=\infty}, since there’s a non-trivial monodromy {z^\alpha\rightarrow z^\alpha e^{2\pi i\alpha}} along a closed path that encircles either point in the complex plane. Since the integral converges as {z\rightarrow i\infty} due to the exponential damping, let’s choose the branch cut to run from {z=0} to {z=-\infty} along the negative real axis, so that the following contour encloses no poles:

\displaystyle \oint=\int_0^\infty+\int_\mathrm{arc}+\int_{i\infty}^0=0~, \ \ \ \ \ (38)

where the arc term is a quarter-circle that connects the positive real and imaginary axes at {|z|\rightarrow\infty}, which vanishes for {p\!>\!0} thanks to the exponential damping. Thus, taking {z=re^{i\pi/2}} along the positive imaginary axis, the integral (37) becomes

\displaystyle F=\frac{1}{\sqrt{2}a}\int_0^{i\infty}\!\mathrm{d} z\,e^{ipz}z^{-iq-1} =\frac{1}{\sqrt{2}a}e^{\pi q/2}\int_0^{\infty}\!\mathrm{d} r\,e^{-pr}r^{-iq-1}~, \ \ \ \ \ (39)

where the phase {\left( e^{i\pi/2}\right)^{-iq-1}=e^{\pi q/2}e^{-i\pi/2}=-i\,e^{\pi q/2}} combines with the Jacobian {\mathrm{d} z=i\,\mathrm{d} r} to leave {e^{\pi q/2}}. Comparing with the form of the gamma function (36), we identify {\chi=pr} and {\zeta=-iq}, whence we find

\displaystyle F(\mathbf{p},\mathbf{q}) =\frac{1}{\sqrt{2}a}e^{\pi q/2}p^{iq}\,\Gamma(-iq)~. \ \ \ \ \ (40)

As an aside, while the form of this expression makes it non-obvious, the property (26) is preserved, as one can easily show by noting that

\displaystyle e^{-\pi q/2}(-p)^{-iq} =e^{-\pi q/2}\left( e^{i\pi}p\right)^{-iq} =\left( e^{-i \pi/2}e^{i\pi}p\right)^{-iq} =\left( e^{i \pi/2}p\right)^{-iq} =e^{\pi q/2}p^{-iq}~. \ \ \ \ \ (41)

Substituting this result back into (24), we at last obtain

\displaystyle b_{q}=\int_0^\infty\!\mathrm{d}\mathbf{p}\,\left(\alpha_{pq}^*a_p+\beta_{pq}a_p^\dagger\,\right) \quad\implies\quad b_q^\dagger=\int_0^\infty\!\mathrm{d}\mathbf{p}\,\left(\beta_{pq}^*a_p+\alpha_{pq}a_p^\dagger\,\right)~, \ \ \ \ \ (42)

where the Bogolyubov coefficients are

\displaystyle \alpha_{pq}^*=\frac{\mathbf{q}}{2\pi\mathbf{p}}F(\mathbf{p},\mathbf{q})~, \qquad \qquad \beta_{pq}=\frac{\mathbf{q}}{2\pi\mathbf{p}}F(-\mathbf{p},\mathbf{q})~. \ \ \ \ \ (43)

The left-movers {b_{-q},b_{-q}^\dagger} are similarly obtained from (27).
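Finally, the closed form (40), together with the relation {F(\mathbf{p},\mathbf{q})=e^{\pi\mathbf{q}/\sqrt{2}a}F(-\mathbf{p},\mathbf{q})} used in (29), provides a direct check on the Planck factor in (31): using the identity {|\Gamma(iq)|^2=\pi/(q\sinh\pi q)}, one finds {|F(-\mathbf{p},\mathbf{q})|^2=\frac{\pi}{a^2q}\left( e^{2\pi q}-1\right)^{-1}} in the rescaled variables, independent of {\mathbf{p}}; this is exactly why the momentum integral in (31) produces a pure {\delta(0)} times the thermal factor. A quick numerical confirmation (my own script; scipy’s gamma function accepts complex arguments):

```python
import numpy as np
from scipy.special import gamma

# Check that |F(-p, q)|^2 from the closed form (40) is p-independent and equals
# the Planck factor pi / (a^2 q (e^{2 pi q} - 1)), in the rescaled momenta of
# the appendix.
a = 1.3   # arbitrary acceleration parameter

def F(p, q):
    """Closed form (40) for p > 0; for p < 0, use F(-|p|, q) = e^{-pi q} F(|p|, q)."""
    prefac = np.exp(-np.pi * q) if p < 0 else 1.0
    return prefac * np.exp(np.pi * q / 2) * abs(p)**(1j * q) * gamma(-1j * q) / (np.sqrt(2) * a)

q = 0.8
planck = np.pi / (a**2 * q * (np.exp(2 * np.pi * q) - 1))
for p in (0.5, 1.0, 7.0):
    print(abs(F(-p, q))**2, planck)   # equal for every p
```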


QFT in curved space, part 1: Green functions

I was recently** asked to give a lecture on black hole thermodynamics and the associated quantum puzzles, which provided a perfect excuse to spend some time reviewing one of my favourite subjects: quantum field theory (QFT) in curved spacetime. I’ll mostly follow the canonical reference by Birrell and Davies [1], and will use this series of posts to highlight a number of important and/or interesting aspects along the way. I spent many a happy hour with this book as a graduate student, and warmly recommend it to anyone desiring a more complete treatment.

**(That is, “recently” when I started this back in February. I had intended to finish the series before posting the individual parts, to avoid retroactive edits as my understanding and plans for future segments evolve, but alas the constant pressure to publish (or, perhaps more charitably, the fact that I don’t get paid to teach a course on this stuff) means that studies of this sort are frequently pushed down my priority list (and that was before an international move in the midst of a global pandemic, for those wondering why I haven’t posted in so long). Since such time constraints are likely to continue — on top of which, I have no fixed end-point in mind for this vast subject — I’ve decided to release the first few parts I’ve been sitting on, lest they never see the light of day. I hope to add more installments as time permits.)

I’m going to start by discussing Green functions (commonly but improperly called “Green’s functions”), which manifest one of the deepest relationships between gravity, quantum field theory, and thermodynamics, namely the thermodynamic character of the vacuum state. Specifically, the fact that Green functions are periodic in imaginary time — also known as the KMS or Kubo-Martin-Schwinger condition — hints at an intimate relationship between Euclidean field theory and statistical mechanics, and underlies the thermal nature of horizons (including, but not limited to, those of black holes).

For simplicity, I’ll stick to Green functions of free scalar fields {\phi(x)}, where the {D}-vector {x=(t,\mathbf{x})}. As a notational aside, I will depart from [1] in favour of the modern convention in which the spacetime dimension is denoted {D=d\!+\!1}, with Greek indices running over the full {D}-dimensional spacetime, and Latin indices restricted to the {d}-dimensional spatial component. While I’m at it, I should also warn you that [1] uses the exceedingly unpalatable mostly-minus convention {\eta_{\mu\nu}=(+,-,-,-)}, whereas I’m going to use mostly-plus {\eta_{\mu\nu}=(-,+,+,+)}. The former seems to be preferred by particle physicists, because they share with small children a preference for timelike 4-vectors to have positive magnitude. But the latter is generally preferred by relativists and most workers in high-energy theory and quantum gravity, because 3-vectors have no minus signs (i.e., it’s consistent with the non-relativistic case, whereas mostly-minus yields a negative-definite spatial metric), raising and lowering indices involves flipping only a single sign (arguably the most important for our purposes, since we’ll be Wick rotating between Lorentzian and Euclidean signature; mostly-minus would again lead to a negative-definite Euclidean metric), and the extension to general dimensions contains only a single {-1} in the determinant (as opposed to a factor of {(-1)^{D-1}} in mostly-minus).

Notational disclaimers dispensed with, the Lagrangian density is

\displaystyle \mathcal{L}(x)=-\frac{1}{2}\left(\eta^{\mu\nu}\partial_\mu\phi\partial_\nu\phi+m^2\phi^2\right)~, \ \ \ \ \ (1)

where {\eta^{\mu\nu}} is the Minkowski metric (no curved space just yet). By applying the variational principle {\delta S=0} to the action

\displaystyle S=\int\!\mathrm{d}^Dx\mathcal{L}~, \ \ \ \ \ (2)

we obtain the familiar Klein-Gordon equation

\displaystyle \left(\square-m^2\right)\phi=0~, \ \ \ \ \ (3)

where {\square\equiv\eta^{\mu\nu}\partial_\mu\partial_\nu}. The general solution, upon imposing that {\phi} be real-valued, is

\displaystyle \phi(x)=\int\!\frac{\mathrm{d}^dk}{2\omega(2\pi)^d}\left[a_k e^{i\mathbf{k}\mathbf{x}-i\omega t}+a_k^\dagger e^{-i\mathbf{k}\mathbf{x}+i\omega t}\right]~, \ \ \ \ \ (4)

where {k=(\omega,\mathbf{k})} (note that [1] restricts to the case of a discrete spectrum, as though the system were in a box; useful for imposing an IR regulator, but unnecessary for our purposes, and potentially problematic if we want to consider Lorentz boosts or Euclidean continuations). Here {a_k} is the annihilation operator that kills the vacuum state, i.e., {a_k|0\rangle=0}, and {a_k^\dagger} is its Hermitian conjugate (whence {\phi} is indeed a real-valued field operator).

One last (lengthy, but important) notational aside: different authors make different choices for the integration measure {\int\!\frac{\mathrm{d}^dk}{f(k)}}, which affects a number of later formulas, and can cause confusion when comparing different sources. The convention I’m using is physically well-motivated in that it makes the measure Lorentz invariant while encoding the on-shell condition {k^2\!=\!-m^2}. That is, the Lorentz invariant measure in the full {D}-dimensional spacetime is {\int\!\mathrm{d}^Dk}. If we then impose the on-shell condition along with {\omega>0} (in the form of the Heaviside function {\Theta(\omega)}), we have

\displaystyle \int\!\mathrm{d}^Dk\,\delta(k^2+m^2)\Theta(\omega) =\int\!\mathrm{d}\omega\int\!\mathrm{d}^dk\,\delta(\mathbf{k}^2-\omega^2+m^2)\Theta(\omega)~. \ \ \ \ \ (5)

We now use the following trick: if a smooth function {g(x)} has a root at {x_0}, then we may write

\displaystyle \int\!\mathrm{d} x\,\delta(g(x))=\int\!\mathrm{d} x\,\frac{\delta(x-x_0)}{|g'(x_0)|} =\frac{1}{|g'(x_0)|}~, \ \ \ \ \ (6)

where the prime denotes the derivative with respect to {x}. In the present case, {{g(\omega)=\mathbf{k}^2-\omega^2+m^2}}, whose positive root {\omega_0=\sqrt{\mathbf{k}^2+m^2}} is selected by the Heaviside function. Thus

\displaystyle \int\!\mathrm{d}\omega\!\int\!\mathrm{d}^dk\,\delta(\mathbf{k}^2-\omega^2+m^2)\Theta(\omega) =\int\!\frac{\mathrm{d}^dk}{2\sqrt{\mathbf{k}^2+m^2}}~. \ \ \ \ \ (7)
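As a quick sanity check of this delta-function trick, one can regulate the delta as a narrow Gaussian and verify (7) numerically. A minimal Python sketch (the values of {\mathbf{k}}, {m}, and the regulator width {\epsilon} are arbitrary choices for illustration):

```python
import math

def delta_reg(x, eps=1e-3):
    # Gaussian regularization of the Dirac delta function
    return math.exp(-0.5 * (x / eps) ** 2) / (eps * math.sqrt(2 * math.pi))

def onshell_measure(k, m, n=100001, w_max=5.0):
    # Riemann sum for int_0^{w_max} dw delta(k^2 - w^2 + m^2), cf. (5)-(7)
    dw = w_max / (n - 1)
    return sum(delta_reg(k * k - w * w + m * m) * dw
               for w in (i * dw for i in range(n)))

k, m = 1.3, 0.7
lhs = onshell_measure(k, m)
rhs = 1.0 / (2.0 * math.sqrt(k * k + m * m))  # 1/(2 omega), i.e., 1/|g'(omega_0)|
```

The regulated integral reproduces {1/2\omega} to good accuracy, with corrections controlled by the regulator width.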

Finally, since {k} and {x} are related by a Fourier transform, we must adopt a convention for the associated factor of {(2\pi)^d}. Mathematicians seem to prefer splitting this so that both {\mathrm{d}^dk} and {\mathrm{d}^d x} get a factor of {(2\pi)^{d/2}}, but physicists favour simply attaching it all to the momentum, so that

\displaystyle \hat\phi(k)=\int\!\mathrm{d}^dx\,e^{-ikx}\phi(x) \qquad\mathrm{and}\qquad \phi(x)=\int\!\frac{\mathrm{d}^dk}{(2\pi)^d}\,e^{ikx}\hat\phi(k)~, \ \ \ \ \ (8)

which further implies the convention

\displaystyle (2\pi)^d\delta^d(k-p)=\int\!\mathrm{d}^dx\,e^{-i(k-p)x} \qquad \mathrm{and} \qquad \delta^d(x-y)=\int\!\frac{\mathrm{d}^dk}{(2\pi)^d}\,e^{ik(x-y)}~, \ \ \ \ \ (9)

as one can readily verify by substituting {\phi(x)} into {\hat\phi(k)} (or vice versa):

\displaystyle \begin{aligned} \hat\phi(k)&=\int\!\mathrm{d}^dx\,e^{-ikx}\!\int\!\frac{\mathrm{d}^dp}{(2\pi)^d}\,e^{ipx}\hat\phi(p) =\int\!\frac{\mathrm{d}^dp}{(2\pi)^d}\int\!\mathrm{d}^dx\,e^{-i(k-p)x}\hat\phi(p)\\ &=\int\!\mathrm{d}^dp\,\delta^d(k-p)\hat\phi(p) =\hat\phi(k)~. \end{aligned} \ \ \ \ \ (10)

Thus our choice for the measure in (4):

\displaystyle \int\!\frac{\mathrm{d}^dk}{f(k)}\equiv\int\!\frac{\mathrm{d}^dk}{2\sqrt{\mathbf{k}^2+m^2}(2\pi)^d} =\int\!\frac{\mathrm{d}^dk}{2\omega(2\pi)^d}~. \ \ \ \ \ (11)

(I realize that was a bit tedious, but setting one’s conventions straight will pay dividends later. Trust me: I’ve lost hours trying to sort out factors of {2\pi} and the like for failing to invest this time at the start).

We can now consider vacuum expectation values of products of field operators {\phi}. For free scalar fields, these can always be decomposed into two-point functions, which therefore play a defining role. In particular, we can construct various Green functions of the wave operator {(\square-m^2)} from the two-point correlator {\langle\phi(x)\phi(x')\rangle}, including the familiar Feynman propagator. Following [1], we’ll denote the expectation values of the commutator and anticommutator as follows:

\displaystyle \begin{aligned} iG(x,x')&=\langle\left[\phi(x),\phi(x')\right]\rangle=G^+\!(x,x')-G^-\!(x,x')~,\\ G^{(1)}(x,x')&=\langle\left\{\phi(x),\phi(x')\right\}\rangle=G^+\!(x,x')+G^-\!(x,x')~, \end{aligned} \ \ \ \ \ (12)

where {G^{\pm}} on the far right-hand sides are the so-called positive/negative frequency Wightman functions,

\displaystyle \begin{aligned} G^+\!(x,x')&=\langle\phi(x)\phi(x')\rangle~,\\ G^-\!(x,x')&=\langle\phi(x')\phi(x)\rangle~. \end{aligned} \ \ \ \ \ (13)

Note that while physicists call all of these Green functions, they’re technically kernels, i.e.,

\displaystyle \left(\square_x-m^2\right)\mathcal{G}(x,x')=0~,\qquad\qquad\mathcal{G}\in\{G,\,G^{(1)}\!,G^{\pm}\}~. \ \ \ \ \ (14)

One can immediately verify this by observing that since {\square_x} acts only on {\phi(x)} (that is, {\square_x\phi(x')=0}), it reduces to the Klein-Gordon equation above for the Wightman functions, from which the others follow.

Using these building blocks, we can consider the true Green functions

\displaystyle iG_F(x,x')=\langle\mathcal{T}\phi(x)\phi(x')\rangle=\Theta(t-t')G^+\!(x,x')+\Theta(t'-t)G^-\!(x,x')~, \ \ \ \ \ (15)

which is the familiar (time-ordered, {\mathcal{T}}) Feynman propagator, and

\displaystyle \begin{aligned} G_R(x,x')&=-\Theta(t-t')G(x,x')~,\\ G_A(x,x')&=\Theta(t'-t)G(x,x')~, \end{aligned} \ \ \ \ \ (16)

which are the retarded (R) and advanced (A) propagators. All three of these are Green functions of the wave operator, i.e.,

\displaystyle \left(\square_x-m^2\right)\mathcal{G}(x,x')=\delta^D(x-x')~,\qquad\qquad\mathcal{G}\in\{G_F,G_R,G_A\}~. \ \ \ \ \ (17)

Let’s verify this for the Feynman propagator; the others are similar. Using the fact that {\eta^{\mu\nu}\partial_\nu\Theta(t-t')=\eta^{\mu0}\partial_0\Theta(t-t')=\eta^{\mu0}\delta(t-t')}, we have

\displaystyle \begin{aligned} \square_x G_F=&-i\eta^{\mu0}\partial_\mu\left[\delta(t-t')G^+\!(x,x')-\delta(t'-t)G^-\!(x,x')\right]\\ &-i\eta^{\mu\nu}\partial_\mu\left[\Theta(t-t')\partial_\nu G^+\!(x,x')+\Theta(t'-t)\partial_\nu G^-\!(x,x')\right]~. \end{aligned} \ \ \ \ \ (18)

Now observe that by virtue of the delta function, the equal-time commutator {{[\phi(t,\mathbf{x}),\phi(t,\mathbf{x}')]=0}} means that in the first line, {G^+=G^-}. And since the delta function itself is even, this implies that the first two terms cancel, so we continue with just the second line:

\displaystyle \begin{aligned} \square_x G_F=&-i\eta^{00}\left[\delta(t-t')\partial_0 G^+\!(x,x')-\delta(t'-t)\partial_0 G^-\!(x,x')\right]\\ &-i\eta^{\mu\nu}\left[\Theta(t-t')\partial_\mu\partial_\nu G^+\!(x,x')+\Theta(t'-t)\partial_\mu\partial_\nu G^-\!(x,x')\right]\\ =&\,\,i\delta(t-t')\left[\pi(x)\phi(x')-\phi(x')\pi(x)\right]\\ &-i\left[\Theta(t-t')\square_x G^+\!(x,x')+\Theta(t'-t)\square_x G^-\!(x,x')\right]~, \end{aligned} \ \ \ \ \ (19)

where in the second step, we have used the fact that the delta function is even, and identified the conjugate momentum {\pi(x)=\partial_0\phi(x)}. Then by (14), the second line will vanish for all values of {t\!-\!t'} when we add in the {-m^2} term of the wave operator, and the first line is just (minus) the equal-time commutator {[\phi(t,\mathbf{x}),\pi(t,\mathbf{x}')]=i\delta^d(\mathbf{x}-\mathbf{x}')}. Hence

\displaystyle \left(\square_x-m^2\right) G_F=\delta(t-t')\delta^d(\mathbf{x}-\mathbf{x}')=\delta^D\!(x-x')~. \ \ \ \ \ (20)

Thus the Feynman propagator is indeed a Green function of the wave operator {(\square_x\!-\!m^2)}; similarly for {G_R} and {G_A}.
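If you’d like something more tangible, the same check can be run numerically in the simplest setting, {D\!=\!1} (no spatial directions), where the mode integral collapses to a single frequency {\omega\!=\!m} and {iG_F(t)=e^{-i\omega|t|}/2\omega}. The following Python sketch (my own toy check, with an arbitrary {\omega}) confirms that {(-\partial_t^2-\omega^2)G_F} vanishes away from {t\!=\!0}, while the unit jump in {\partial_t G_F} across {t\!=\!0} supplies the delta function:

```python
import cmath

omega = 1.7  # arbitrary; in D=1 the mass m plays the role of the frequency

def GF(t):
    # Feynman propagator for a single mode: i*G_F(t) = e^{-i omega |t|}/(2 omega)
    return -1j * cmath.exp(-1j * omega * abs(t)) / (2 * omega)

# away from t = 0, (-d^2/dt^2 - omega^2) G_F should vanish
h, t0 = 1e-4, 0.8
d2 = (GF(t0 + h) - 2 * GF(t0) + GF(t0 - h)) / h ** 2
residual = abs(-d2 - omega ** 2 * GF(t0))

# the first derivative jumps by -1 across t = 0, so -d^2/dt^2 yields +delta(t)
right = (GF(2 * h) - GF(h)) / h   # ~ dG_F/dt just above t = 0
left = (GF(-h) - GF(-2 * h)) / h  # ~ dG_F/dt just below t = 0
jump = right - left
```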

The reason I’ve been calling the Green functions {G_F,G_R,G_A} “propagators” is that, unlike the kernels {G,G^{(1)},G^{\pm}}, they represent the transition amplitude for a particle (virtual or otherwise) propagating from {x} to {x'}, subject to appropriate boundary conditions. To see this, consider the integral representation

\displaystyle \mathcal{G}(x,x')=\int\!\frac{\mathrm{d}^Dk}{(2\pi)^D}\frac{e^{ik(x-x')}}{-k_0^2+\mathbf{k}^2+m^2}~, \ \ \ \ \ (21)

where {k^2=k^\mu k_\mu=-k_0^2+\mathbf{k}^2}. Due to the poles at {k_0=\omega=\pm\sqrt{\mathbf{k}^2+m^2}}, we need to choose a suitable contour for the integral to be well-defined (analytically continuing to {{k\in\mathbb{C}}}). The particular choice of contour determines which of the kernels {G,G^{(1)},G^{\pm}} or Green functions {G_F,G_R,G_A} we obtain. (As for how we obtained (21) in the first place, one can directly substitute in the mode expansion (4) to the definitions, and convert the Heaviside functions into an appropriate integral. An easier way, at least for the Green functions, is to simply Fourier transform the wave equation (17):

\displaystyle \begin{aligned} \left(\square_x-m^2\right)\int\!\frac{\mathrm{d}^Dk}{(2\pi)^D}\,\tilde{\mathcal{G}}(k)\,e^{ik(x-x')}&=\delta^D(x-x')\\ \implies \int\!\frac{\mathrm{d}^Dk}{(2\pi)^D}\left(-k^2-m^2\right)\,e^{ik(x-x')}\tilde{\mathcal{G}}(k)&=\int\!\frac{\mathrm{d}^Dp}{(2\pi)^D}\,e^{ip(x-x')}~. \end{aligned} \ \ \ \ \ (22)

Since this expression (i.e., the delta function) is even in {x\!-\!x'}, we may absorb the sign into the integration variable, and identify

\displaystyle \tilde{\mathcal{G}}(k)=\frac{1}{k^2+m^2}~, \ \ \ \ \ (23)

whereupon Fourier transforming back to position space yields (21). As alluded above however, these expressions don’t make sense without specifying a pole prescription, so this argument isn’t very rigorous; it’s just a quick-and-dirty way of convincing yourself that (21) is plausible.)

To make sense of this expression, we split the integral based on the two poles of {k_0}:

\displaystyle \begin{aligned} \mathcal{G}(x,x')&=\int\frac{\mathrm{d}^dk}{(2\pi)^D}e^{i\mathbf{k}(\mathbf{x}-\mathbf{x}')}\oint\mathrm{d} k_0\frac{e^{-ik_0(x_0-x'_0)}}{-k_0^2+\mathbf{k}^2+m^2}\\ &=\int\frac{\mathrm{d}^dk}{(2\pi)^D}e^{i\mathbf{k}(\mathbf{x}-\mathbf{x}')}\oint\mathrm{d} k_0\frac{e^{-ik_0(x_0-x'_0)}}{-2k_0}\left(\frac{1}{k_0-\sqrt{\mathbf{k}^2+m^2}}+\frac{1}{k_0+\sqrt{\mathbf{k}^2+m^2}}\right)~. \end{aligned} \ \ \ \ \ (24)
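The partial-fraction step above is easy to check numerically before worrying about contours; a minimal sketch (test points arbitrary):

```python
def direct(k0, w):
    # the integrand denominator 1/(-k0^2 + w^2), with w^2 = k^2 + m^2
    return 1.0 / (-k0 ** 2 + w ** 2)

def partial_fractions(k0, w):
    # the decomposition used in (24)
    return (1.0 / (-2 * k0)) * (1.0 / (k0 - w) + 1.0 / (k0 + w))

w = 2.3
test_points = [0.5 + 0.1j, -1.7 + 0.4j, 3.1 - 2.0j]
max_err = max(abs(direct(k0, w) - partial_fractions(k0, w)) for k0 in test_points)
```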

Now, the boundary conditions of the propagator at hand determine the {i\epsilon} prescription, i.e., which of the poles we want to enclose with the choice of contour. Consider first the retarded propagator {G_R}: the boundary condition implicit in (16) is that the function should vanish when {x_0\!<\!x_0'} (where {x_0\!=\!t}). Conversely, when {x_0\!>\!x_0'}, we must close the contour in the negative half-plane so that {e^{-ik_0(x_0-x'_0)}\rightarrow e^{-i(-i\infty)(x_0-x'_0)}=e^{-\infty(x_0-x'_0)}=e^{-\infty}}, and the integral converges. Thus we should introduce factors of {i\epsilon} such that both poles are slightly displaced into the lower half-plane. We can then apply Cauchy’s integral formula to correctly capture the poles at {k_0=\pm\omega-i\epsilon}, and then take {\epsilon\rightarrow0}:

\displaystyle \begin{aligned} G_R(x,x')&=\int\frac{\mathrm{d}^dk}{(2\pi)^D}e^{i\mathbf{k}(\mathbf{x}-\mathbf{x}')}\oint\mathrm{d} k_0\frac{e^{-ik_0(x_0-x'_0)}}{-2k_0}\left(\frac{1}{k_0-\sqrt{\mathbf{k}^2+m^2}+i\epsilon}+\frac{1}{k_0+\sqrt{\mathbf{k}^2+m^2}+i\epsilon}\right)\\ &=\Theta(x_0-x'_0)\int\frac{\mathrm{d}^dk}{(2\pi)^D}e^{i\mathbf{k}(\mathbf{x}-\mathbf{x}')}(2\pi i)\left(\frac{e^{-i\omega(x_0-x'_0)}}{-2\omega}+\frac{e^{i\omega(x_0-x'_0)}}{2\omega}\right)\\ &=-i\Theta(x_0-x'_0)\int\frac{\mathrm{d}^dk}{2\omega(2\pi)^d}e^{i\mathbf{k}(\mathbf{x}-\mathbf{x}')}\left( e^{-i\omega(x_0-x'_0)}-e^{i\omega(x_0-x'_0)}\right)\\ &=-i\Theta(x_0-x'_0)\int\frac{\mathrm{d}^dk}{2\omega(2\pi)^d}\left[e^{ik(x-x')}-e^{-ik(x-x')}\right]\\ &=i\Theta(x_0-x'_0)\langle\left[\phi(x),\phi(x')\right]\rangle =-\Theta(t-t')G(x,x')~, \end{aligned} \ \ \ \ \ (24)

where in the penultimate line, we have taken {\mathbf{k}\rightarrow-\mathbf{k}} in the second term, using the fact that the integration over all (momentum) space is even; in the last line, we have used the mode expansion (4) and the commutation relation {[a_k,a_{k'}^\dagger]=2\omega(2\pi)^d\delta^d(\mathbf{k}-\mathbf{k}')}. Note that to yield the correct signs, we’ve chosen the contour to run counter-clockwise (note the factor of {+2\pi i}), which means that it runs from {+\infty} to {-\infty} along the real axis. The prescription for the advanced propagator is precisely similar, except we deform both poles in the positive complex direction (so that the integral vanishes when we close the contour below, as required for {x_0-x'_0>0}), and the non-vanishing contribution comes from closing the contour in the positive half-plane, encircling both poles clockwise rather than counter-clockwise (so that the integral again runs from {+\infty} to {-\infty} along the real axis).

Note that {G_R,G_A} are superpositions of both positive ({\omega\!>\!0}) and negative ({\omega\!<\!0}) energy modes, which is necessary in order for them to vanish outside their prescribed lightcones (past and future, respectively). In contrast, the Heaviside functions in the Feynman propagator are tantamount to imposing boundary conditions such that it picks up only positive or negative frequencies, depending on the sign of {t\!-\!t'}. For {t\!>\!t'}, we close the contour in the lower-half plane for convergence ({e^{-ik^0(t-t')}=e^{-i(-i\infty)(t-t')}=e^{-\infty(t-t')}}), and enclose {k_0=\omega} counter-clockwise (in the present conventions, we’re again going from {+\infty} to {-\infty} along the real axis); conversely, we close the contour clockwise in the upper-half plane to converge with {k_0=-\omega} when {t\!<\!t'}. Hence the corresponding {i\epsilon} prescription is

\displaystyle \begin{aligned} iG_F(x,x')&=i\int\frac{\mathrm{d}^dk}{(2\pi)^D}e^{i\mathbf{k}(\mathbf{x}-\mathbf{x}')}\oint\mathrm{d} k_0\frac{e^{-ik_0(x_0-x'_0)}}{-2k_0}\left(\frac{1}{k_0-\sqrt{\mathbf{k}^2+m^2}-i\epsilon}+\frac{1}{k_0+\sqrt{\mathbf{k}^2+m^2}+i\epsilon}\right)\\ &=i\int\frac{\mathrm{d}^dk}{(2\pi)^D}e^{i\mathbf{k}(\mathbf{x}-\mathbf{x}')}\left[(2\pi i)\Theta(x_0-x_0')\frac{e^{-i\omega(x_0-x'_0)}}{-2\omega}+(-2\pi i)\Theta(x_0'-x_0)\frac{e^{i\omega(x_0-x'_0)}}{2\omega}\right]\\ &=\int\frac{\mathrm{d}^dk}{2\omega(2\pi)^d}\left[\Theta(x_0-x_0')e^{ik(x-x')}+\Theta(x_0'-x_0)e^{-ik(x-x')}\right]\\ &=\Theta(t-t')G^+(x,x')+\Theta(t'-t)G^-(x,x')~, \end{aligned} \ \ \ \ \ (25)

as desired. It is in this sense that the time-ordering is automatically encoded by the Feynman propagator: for {t\!>\!t'}, it corresponds to a positive-energy particle propagating forwards in time, while for {t\!<\!t'}, we have a negative-energy particle (i.e., an antiparticle) propagating backwards.

(I won’t go into the pole prescriptions for the kernels here, but the contours are illustrated in fig. 3 of [1]. The essential difference is that unlike the Green functions, the contours for the kernels are all closed loops, so these don’t correspond to propagating amplitudes.)

So far everything I’ve reviewed is for zero-temperature field theory; as alluded in the introduction of this post however, finite-temperature is where things get really interesting. Recall from quantum mechanics that a mixed state can be thought of as a statistical ensemble of pure states, so rather than computing expectation values with respect to the vacuum state, we compute them with respect to the mixed state given by the thermal density matrix

\displaystyle \rho=\sum_ip_i\left|\psi_i\rangle\langle\psi_i\right|=\frac{1}{Z}e^{-\beta H}~, \ \ \ \ \ (26)

where the system, governed by the Hamiltonian {H}, is in any of the states {|\psi_i\rangle} with (classical) probability

\displaystyle p_i=\frac{1}{Z}e^{-\beta E_i}~. \ \ \ \ \ (27)

Of course, not all mixed states are thermal, but the latter is the correct state to use in the absence of any additional constraints. (One way to think of this is that the mixedness of a quantum state is a measure of our ignorance, which is why pure states are states of minimum (namely zero) entropy). Expectation values of operators {\mathcal{O}} with respect to (26) are then ensemble averages at fixed temperature {T=\beta^{-1}}:

\displaystyle \langle\mathcal{O}\rangle_\beta=\mathrm{tr}\left(\rho\,\mathcal{O}\right) =\sum_ip_i\langle\psi_i\left|\mathcal{O}\right|\psi_i\rangle~. \ \ \ \ \ (28)

Note that we’re in the canonical ensemble (fixed temperature), rather than the microcanonical ensemble (fixed energy), because the energy — that is, the expectation value of the Hamiltonian operator {H=\sum_k \omega\,a_k^\dagger a_k} — will fluctuate as quanta are created or destroyed. Strictly speaking I should also include the chemical potential, since the number operator {N=\sum_ka_k^\dagger a_k} also fluctuates, but it doesn’t play any important role in what follows. (The distinction is worth keeping in mind when discussing black hole thermodynamics, where one should use the microcanonical ensemble instead, because the negative specific heat makes the canonical ensemble unstable).
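To make (26)–(28) concrete, consider a single mode with {H=\omega\,a^\dagger a}: an explicit trace over the (truncated) Fock space reproduces the Bose-Einstein occupation {\langle N\rangle_\beta=(e^{\beta\omega}-1)^{-1}}. A quick Python sketch, with arbitrary {\beta} and {\omega}:

```python
import math

beta, omega = 1.0, 1.5
nmax = 400  # Fock-space cutoff; the truncation error is exponentially small

# Z = tr e^{-beta H} and <N>_beta = tr(rho N), with H|n> = omega n |n>
Z = sum(math.exp(-beta * omega * n) for n in range(nmax))
N_avg = sum(n * math.exp(-beta * omega * n) for n in range(nmax)) / Z

bose_einstein = 1.0 / (math.exp(beta * omega) - 1.0)
```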

The thermal Green functions (and kernels), which we denote with the subscript {\beta}, are then obtained by replacing the vacuum expectation value with the expectation value in the thermal state, (28); for example, the Wightman functions become

\displaystyle \begin{aligned} G_\beta^+\!(x,x')&=\langle\phi(x)\phi(x')\rangle_\beta~,\\ G_\beta^-\!(x,x')&=\langle\phi(x')\phi(x)\rangle_\beta~. \end{aligned} \ \ \ \ \ (29)

The aforementioned KMS condition can then be obtained from the Heisenberg equation of motion,

\displaystyle \phi(t_1,\mathbf{x})=e^{iH(t_1-t_0)}\phi(t_0,\mathbf{x})e^{-iH(t_1-t_0)}~, \ \ \ \ \ (30)

by evolving in Euclidean time by {t_1-t_0=i\beta}:

\displaystyle \begin{aligned} G_\beta^+&=\frac{1}{Z}\mathrm{tr}\left[e^{-\beta H}\phi(t,\mathbf{x})\phi(t',\mathbf{x}')\right] =\frac{1}{Z}\mathrm{tr}\left[e^{-\beta H}\phi(t,\mathbf{x})e^{\beta H}e^{-\beta H}\phi(t',\mathbf{x}')\right]\\ &=\frac{1}{Z}\mathrm{tr}\left[\phi(t+i\beta,\mathbf{x})e^{-\beta H}\phi(t',\mathbf{x}')\right] =\frac{1}{Z}\mathrm{tr}\left[e^{-\beta H}\phi(t',\mathbf{x}')\phi(t+i\beta,\mathbf{x})\right]~, \end{aligned} \ \ \ \ \ (31)

where the last step relied on the cyclic property of the trace; similarly for {G_\beta^-}. Thus we arrive at the KMS condition

\displaystyle G_\beta^\pm(t,\mathbf{x};t',\mathbf{x}')=G_\beta^\mp(t+i\beta,\mathbf{x};t',\mathbf{x}')~. \ \ \ \ \ (32)

Note that this is a statement about expectation values of operators in the particular state (26) (indeed, this can easily be formulated for a general observable {\mathcal{O}}; we’re just sticking with scalar fields for concreteness; for a slightly more rigorous treatment, with suitable comments about boundedness and whatnot, see for example [2]). More generally however, any state which satisfies (32) is called a KMS state, and describes a system in thermal equilibrium. Similar relations hold for the other Green functions / kernels as well; e.g.,

\displaystyle G_\beta^{(1)}(t,\mathbf{x};t',\mathbf{x}')=G_\beta^{(1)}(t+i\beta,\mathbf{x};t',\mathbf{x}')~. \ \ \ \ \ (33)

As an exception to this however, note that since the commutator of free scalar fields is a c-number, {G} in (12) remains unchanged, i.e., {G_\beta=G}.
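To see the KMS condition at work in the simplest possible setting, take a single mode of frequency {\omega}, whose thermal Wightman function is the standard result {G_\beta^+(t)=\frac{1}{2\omega}\left[(n+1)e^{-i\omega t}+n\,e^{i\omega t}\right]}, with occupation number {n=(e^{\beta\omega}-1)^{-1}}. Setting {t'\!=\!0}, a few lines of Python confirm (32) numerically (parameter values arbitrary):

```python
import cmath, math

beta, omega = 2.0, 1.3
n = 1.0 / (math.exp(beta * omega) - 1.0)  # thermal occupation number

def Gplus(t):
    # <phi(t) phi(0)>_beta for a single mode of frequency omega
    return ((n + 1) * cmath.exp(-1j * omega * t)
            + n * cmath.exp(1j * omega * t)) / (2 * omega)

def Gminus(t):
    # <phi(0) phi(t)>_beta
    return ((n + 1) * cmath.exp(1j * omega * t)
            + n * cmath.exp(-1j * omega * t)) / (2 * omega)

# KMS condition (32): G+(t) = G-(t + i*beta)
t = 0.73
kms_gap = abs(Gplus(t) - Gminus(t + 1j * beta))
```

The identity holds to machine precision, since {(n+1)e^{-\beta\omega}=n} and {n\,e^{\beta\omega}=n+1}.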

In arriving at (32), we evolved in imaginary time {\tau=it} by an amount given by the (inverse) temperature {\beta}. This is none other than the usual Wick rotation from Minkowski to Euclidean space, except that the periodicity of the Green functions implies that the Euclidean or thermal time direction is compact, with period {\beta}. That is, if the original field theory lived on {\mathbb{R}^{d+1}}, the finite-temperature field theory lives on {\mathbb{R}^d\times S_\beta^1}, where {\beta} denotes the circumference of the {S^1} (observe that as {\beta\rightarrow\infty}, we recover the zero temperature Euclidean theory on {\mathbb{R}^{d+1}}). Thus in general, a Wick rotation in which Euclidean time is periodic establishes an intimate connection between QFT and statistical thermodynamics, where the compact direction controls the temperature.

So what does this have to do with black holes, or horizons more generally? As I hope to cover in a future part of this sequence, the spacetime outside a horizon is also described by a thermal state. From the statistical thermodynamics or information theory perspective, one can think of this as due to the fact that we traced over the states on the other side, so the mixed density matrix that now describes the part of the vacuum to which we have access is a reflection of our ignorance. As alluded in the previous paragraph however, the thermodynamic character of the vacuum in the black hole state is already encoded in the periodicity of the Euclidean time direction, and emerges quite neatly in the case of the Schwarzschild black hole,

\displaystyle \mathrm{d} s^2=-f(r)\mathrm{d} t^2+\frac{1}{f(r)}\mathrm{d} r^2+r^2\mathrm{d}\Omega_{d-1}^2~, \quad\quad f(r)=1-\frac{r_s}{r}~, \ \ \ \ \ (34)

where {r_s} is the Schwarzschild radius, and {\mathrm{d}\Omega_{d-1}^2} is the metric on the {(d\!-\!1)}-sphere, which we’ll ignore since it just comes along for the ride. Recall from my very first blog post that after Wick rotating to Euclidean time, one can make a coordinate change so that the near-horizon metric becomes

\displaystyle \mathrm{d} s^2=\mathrm{d}\rho^2+\frac{\rho^2}{4r_s^2}\mathrm{d}\tau^2~, \ \ \ \ \ (35)

where {\rho} is the radial direction, and — since these are polar coordinates — {\tau} takes on the role of the angular coordinate, which must be periodic to avoid a conical singularity; that is, for any integer {n},

\displaystyle \frac{\tau}{2r_s}\sim\frac{\tau}{2r_s}+2\pi n \quad\implies\quad \tau\sim\tau+4\pi r_s n~, \ \ \ \ \ (36)

and thus we identify the period {4\pi r_s=\beta}.
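This near-horizon statement is easy to check numerically: compute the proper radial distance {\rho(r)=\int_{r_s}^r\!\mathrm{d} r'/\sqrt{f(r')}}, and verify that {f(r)\rightarrow\rho^2/4r_s^2} as {r\rightarrow r_s}, which is precisely the {\mathrm{d}\tau^2} coefficient in (35). A Python sketch (the substitution {r'=r_s+u^2} merely tames the integrable square-root singularity at the horizon):

```python
import math

r_s = 1.0  # Schwarzschild radius (units are arbitrary)

def f(r):
    return 1.0 - r_s / r

def rho(r, n=100000):
    # proper distance from the horizon: rho(r) = int_{r_s}^r dr'/sqrt(f(r'))
    # substitute r' = r_s + u^2, dr' = 2u du, to remove the singularity at r_s
    u_max = math.sqrt(r - r_s)
    du = u_max / n
    return sum(2 * u / math.sqrt(f(r_s + u * u)) * du
               for u in ((i + 0.5) * du for i in range(n)))

r = r_s * (1 + 1e-4)  # a point just outside the horizon
ratio = f(r) / (rho(r) ** 2 / (4 * r_s ** 2))  # -> 1 as r -> r_s
```

Combined with the periodicity (36), this identifies the inverse temperature {\beta=4\pi r_s}.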

As a closing comment, the density matrix for KMS states has deeper relations to the idea of time translation symmetry via Tomita-Takesaki theory, through the modular Hamiltonian that generates this 1-parameter family of automorphisms of the algebra of operators in the corresponding region. See for example [3]; this strikes me as a surprisingly under-researched direction, and I hope to revisit it in glorious detail soon.


  1. N. D. Birrell and P. C. W. Davies, Quantum Fields in Curved Space. Cambridge Monographs on Mathematical Physics. Cambridge Univ. Press, Cambridge, UK, 1984. http://www.cambridge.org/mw/academic/subjects/physics/theoretical-physics-and-mathematical-physics/quantum-fields-curved-space?format=PB.
  2. S. Fulling and S. Ruijsenaars, “Temperature, periodicity and horizons,” Physics Reports 152 no. 3, (1987) 135 – 176.
  3. A. Connes and C. Rovelli, “Von Neumann algebra automorphisms and time-thermodynamics relation in generally covariant quantum theories,” Classical and Quantum Gravity 11 no. 12, (Dec, 1994) 2899–2917, https://arxiv.org/abs/gr-qc/9406019.

Criticality in deep neural nets

In the previous post, we introduced mean field theory (MFT) as a means of approximating the partition function for interacting systems. In particular, we used this to determine the critical point at which the system undergoes a phase transition, and discussed the characteristic divergence of the correlation function. In this post, I’d like to understand how recent work in machine learning has leveraged these ideas from statistical field theory to yield impressive training advantages in deep neural networks. As mentioned in part 1, the key feature is the divergence of the correlation length at criticality, which controls the depth of information propagation in such networks.

I’ll primarily follow a pair of very interesting papers [1,2] on random neural nets, where the input (i.e., pre-activation) {z_i^l} of neuron {i} in layer {l} is given by

\displaystyle z_i^l=\sum_jW_{ij}^ly_j^{l-1}+b_i^l~, \ \ \ \ \ (1)

where {y_j^{l-1}=\phi(z_j^{l-1})} is some non-linear activation function of the neurons in the previous layer, and {W_{ij}^l} is an {N_l\times N_{l-1}} matrix of weights. The qualifier “random” refers to the fact that the weights and biases are i.i.d. according to the normal distributions

\displaystyle \begin{aligned} W_{ij}^l\sim\mathcal{N}\left(0,\sigma_w^2/N_{l-1}\right) &\implies p\left( w_{ij}^l\right)=\sqrt{\frac{N_{l-1}}{2\pi\sigma_w^2}}\,e^{-\frac{N_{l-1}}{2}\left(\!\frac{w_{ij}^l}{\sigma_w}\right)^2}~,\\ b_i^l\sim\mathcal{N}\left(0,\sigma_b^2\right) &\implies p\left( b_i^l\right)=\frac{1}{\sqrt{2\pi}\sigma_b}\,e^{-\frac{1}{2}\left(\!\frac{b_i^l}{\sigma_b}\right)^2}~, \end{aligned} \ \ \ \ \ (2)

where the factor of {N_{l-1}} has been introduced to ensure that the contribution from the previous layer remains {O(1)} regardless of layer width.

Given this, what is the probability distribution for the inputs {z_i^l}? Treating {y_i^{l-1}} as fixed, this is a linear combination of independent, normally-distributed variables, so we expect it to be Gaussian as well. Indeed, we can derive this rather easily by considering the characteristic function, which is a somewhat more fundamental way of characterizing probability distributions than the moment and cumulant generating functions we’ve discussed before. For a random variable {x\in\mathbb{R}}, the characteristic function is defined as

\displaystyle \phi_x(t)=\langle e^{itx}\rangle=\int_\mathbb{R}\!\mathrm{d} x\,e^{itx}p(x)~, \ \ \ \ \ (3)

where in the second equality we assume the probability density (i.e., distribution) function {p(x)} exists. Hence for a Gaussian random variable {x\sim\mathcal{N}(\mu,\sigma^2)}, we have

\displaystyle \phi_x(t)=\frac{1}{\sqrt{2\pi}\sigma}\int\!\mathrm{d} x\,e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2+itx} =e^{-\frac{1}{2}\sigma^2t^2+it\mu}~. \ \ \ \ \ (4)
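One can also check (4) empirically by Monte Carlo, estimating {\langle e^{itx}\rangle} from Gaussian samples; a short Python sketch (sample size and parameter values arbitrary):

```python
import cmath, math, random

random.seed(0)
mu, sigma, t = 0.5, 1.2, 0.8
N = 200000

# Monte Carlo estimate of <e^{itx}> for x ~ N(mu, sigma^2)
phi_mc = sum(cmath.exp(1j * t * random.gauss(mu, sigma)) for _ in range(N)) / N

# closed-form result (4)
phi_exact = cmath.exp(-0.5 * sigma ** 2 * t ** 2 + 1j * t * mu)
```

The two agree up to the expected {O(1/\sqrt{N})} sampling error.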

A pertinent feature is that the characteristic function of a linear combination of independent random variables can be expressed as a product of the characteristic functions thereof:

\displaystyle \phi_{a_1x_1+\ldots+a_nx_n}(t)=\phi_{x_1}(a_1t)\ldots\phi_{x_n}(a_nt)~, \ \ \ \ \ (5)

where {\{x_i\}} are independent random variables and {\{a_i\}} are some constants. In the present case, we may view each {y_j^{l-1}} as a constant modifying the random variable {W_{ij}^l}, so that

\displaystyle \begin{aligned} \phi_{z_i^l}(t)&=\phi_{b_i^l}(t)\prod_j\phi_{W_{ij}^l}(y_j^{l-1}t) =e^{-\frac{1}{2}\sigma_b^2t^2}\prod_je^{-\frac{1}{2N_{l-1}}\sigma_w^2(y_j^{l-1})^2t^2}\\ &=\exp\left[-\frac{1}{2}\left(\sigma_w^2\frac{1}{N_{l-1}}\sum_j(y_j^{l-1})^2+\sigma_b^2\right) t^2\right]~. \end{aligned} \ \ \ \ \ (6)

Comparison with (4) then implies that {z_i^l} indeed follow a Gaussian,

\displaystyle z_i^l\sim\mathcal{N}(0,\sigma_z^2)~,\qquad\mathrm{with}\qquad \sigma_z^2=\sigma_w^2\frac{1}{N_{l-1}}\sum_j(y_j^{l-1})^2+\sigma_b^2~. \ \ \ \ \ (7)

This is the probability distribution that describes the pre-activations of the neurons in layer {l}. Note that since the neurons are indistinguishable (i.e., all identical), in the large-{N} limit {\sigma_z^2} becomes the variance of the distribution of inputs across the entire layer. The MFT approximation is then to simply replace the actual neurons with values drawn from this distribution, which becomes exact in the large-{N} limit. For consistency with [1] (and because tracking the {l}-dependence by writing {\sigma_{z^l}} is cumbersome), we’ll introduce the notation {q^l:=\sigma_z^2} to denote the variance of the pre-activations in layer {l}.
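This is easy to confirm empirically: fix the previous layer’s activations, resample the weights and biases many times, and compare the sample variance of {z_i^l} against (7). A pure-Python sketch with arbitrary width, variances, and a tanh nonlinearity:

```python
import math, random

random.seed(1)
N_prev, trials = 200, 8000
sigma_w, sigma_b = 1.2, 0.3

# a fixed (arbitrary) set of activations from the previous layer
y = [math.tanh(random.gauss(0, 1)) for _ in range(N_prev)]

# resample weights/biases many times and record the resulting pre-activation
zs = []
for _ in range(trials):
    z = sum(random.gauss(0, sigma_w / math.sqrt(N_prev)) * yj for yj in y)
    zs.append(z + random.gauss(0, sigma_b))

var_mc = sum(z * z for z in zs) / trials  # <z> = 0, so this is the variance
var_pred = sigma_w ** 2 * sum(yj * yj for yj in y) / N_prev + sigma_b ** 2
```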

Now, observe that in the large-{N} limit, the average over the activations of the previous layer becomes an integral:

\displaystyle \frac{1}{N_{l-1}}\sum_j(y_j^{l-1})^2 =\frac{1}{N_{l-1}}\sum_j\phi(z_j^{l-1})^2 \;\overset{N\rightarrow\infty}{\longrightarrow}\; \int\!\mathcal{D} z_j\,\phi(z_j^{l-1})^2 =\int\!\frac{\mathrm{d} z}{\sqrt{2\pi}}\,e^{-\frac{z^2}{2}}\,\phi\left(\!\sqrt{q^{l-1}}z\right)^2~, \ \ \ \ \ (12)

where we’ve used {\mathcal{D} z} to denote the standard normal Gaussian measure, and we’ve defined {z:= z_j^{l-1}/\sqrt{q^{l-1}}} so that {z\sim\mathcal{N}(0,1)}. That is, we can’t just write

\displaystyle \lim_{N\rightarrow\infty}\frac{1}{N}\sum_if(z_i)=\int\!\mathrm{d} z_i\,f(z_i)~, \ \ \ \ \ (13)

because this makes an assumption on the measure, {\mathrm{d} z_i}. Specifically, it assumes that we’re integrating over the real line, whereas we know that {z_i} in fact lie along a Gaussian. Thus we need a Lebesgue integral (rather than the more familiar Riemann integral) with an appropriate measure—in this case, {{\mathcal{D} z=\tfrac{1}{\sqrt{2\pi}}e^{-z^2/2}\mathrm{d} z}}. (We could alternatively have written {{\mathcal{D} z_j=\tfrac{1}{\sqrt{2\pi q^{l-1}}}e^{-\tfrac{1}{2}\left( z_j^{l-1}\right)^2\!/q^{l-1}}\mathrm{d} z_j^{l-1}}}, but it’s cleaner to simply rescale things). Substituting this back into (7), we thus obtain a recursion relation for the variance:

\displaystyle q^l=\sigma_w^2\int\!\mathcal{D} z\,\phi\left(\!\sqrt{q^{l-1}}z\right)^2+\sigma_b^2~, \ \ \ \ \ (14)

which is eqn. (3) of [1]. Very loosely speaking, this describes the spread of information at a single neuron, with smaller values implying a more tightly peaked distribution. We’ll return to the importance of this expression momentarily.
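As an illustration, for {\phi=\tanh} the recursion (14) can be iterated numerically to its fixed point {q^*} (the particular {\sigma_w,\sigma_b} and quadrature grid below are arbitrary choices):

```python
import math

def q_next(q, sigma_w, sigma_b, n=2001, zmax=8.0):
    # one step of the variance recursion (14) with phi = tanh,
    # integrating phi(sqrt(q) z)^2 against the standard Gaussian measure Dz
    dz = 2 * zmax / (n - 1)
    integral = sum(math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
                   * math.tanh(math.sqrt(q) * z) ** 2 * dz
                   for z in (-zmax + i * dz for i in range(n)))
    return sigma_w ** 2 * integral + sigma_b ** 2

# iterate the map until it settles onto its fixed point q*
q = 1.0
for _ in range(100):
    q = q_next(q, sigma_w=1.5, sigma_b=0.1)
```

For these parameters the map converges rapidly, so after the loop {q} satisfies {q=q_\mathrm{next}(q)} to machine precision.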

As you might have guessed from all the song-and-dance about correlation functions in part 1 and above, a better probe of information propagation in the network is the correlator of two inputs (in this case, the correlation between different inputs in two identically prepared copies of the network). Let us thus add an additional Latin index {a,b,\ldots} to track particular inputs through the network, so that {z_{i,a}^l} is the value of the {i^\mathrm{th}} neuron in layer {l} in response to the data {\mathbf{x}_a=\{x_{i,a}\}}, {z_{i,b}^l} is its value in response to data {\mathbf{x}_b}, etc., where the data is fed into the network by setting {\mathbf{y}_a^0=\mathbf{x}_a} (here boldface letters denote the entire layer, treated as a vector in {\mathbb{R}^{N_l}}). We then consider the two-point correlator between these inputs at a single neuron:

\displaystyle \begin{aligned} \langle z_{i,a}^lz_{i,b}^l\rangle&= \left<\left(\sum_jW_{ij}^ly_{j,a}^{l-1}+b_i\right)\left(\sum_kW_{ik}^ly_{k,b}^{l-1}+b_i\right)\right>\\ &=\sigma_w^2\frac{1}{N_{l-1}}\sum_jy_{j,a}^{l-1}y_{j,b}^{l-1}+\sigma_b^2~, \end{aligned} \ \ \ \ \ (15)

where we have used the fact that {\langle W_{ij}^lW_{ik}^l\rangle=\delta_{jk}\sigma_w^2/N_{l-1}}, and that the weights and biases are independent with {\langle W_{ij}^l\rangle=0=\langle b_i^l\rangle}. Note that since {{\langle z_{i,a}^l\rangle\!=\!0}}, this also happens to be the covariance {{\mathrm{cov}(z_{i,a}^l,z_{i,b}^l)\!=\!\langle z_{i,a}^lz_{i,b}^l\rangle-\langle z_{i,a}^l\rangle\langle z_{i,b}^l\rangle}}, which we shall denote {q_{ab}^l} in accordance with the notation from [1] introduced above.

As in the single-input case above, we again take the continuum (i.e., large-{N}) limit, whereupon the sum becomes

\displaystyle \frac{1}{N_{l-1}}\sum_jy_{j,a}^{l-1}y_{j,b}^{l-1} \;\overset{N\rightarrow\infty}{\longrightarrow}\; \int\!\mathcal{D} z_a\mathcal{D} z_b\,\phi(z_a^{l-1})\phi(z_b^{l-1})~. \ \ \ \ \ (16)

Unlike the previous case however, we can’t just use independent standard Gaussian measures for {\mathcal{D} z_a} and {\mathcal{D} z_b}, since that would preclude any correlation in the original inputs {\mathbf{x}_a,~\mathbf{x}_b}. Rather, any non-zero correlation in the data will propagate through the network, so the appropriate measure is the bivariate normal distribution,

\displaystyle \mathcal{D} z_a\mathcal{D} z_b =\frac{\mathrm{d} z_a\mathrm{d} z_b}{2\pi\sqrt{q_a^{l-1}q_b^{l-1}(1-\rho^2)}}\, \exp\!\left[-\frac{1}{2(1-\rho^2)}\left(\frac{z_a^2}{q_a^{l-1}}+\frac{z_b^2}{q_b^{l-1}}-\frac{2\rho\,z_az_b}{\sqrt{q_a^{l-1}q_b^{l-1}}}\right)\right]~, \ \ \ \ \ (17)

where the (Pearson) correlation coefficient is defined as

\displaystyle \rho^l:=\frac{\mathrm{cov}(z_a^l,z_b^l)}{\sigma_{z_a}\sigma_{z_b}} =\frac{q_{ab}^l}{\sqrt{q_a^lq_b^l}}~, \ \ \ \ \ (18)

and we have denoted the variance in layer {l} corresponding to the input {\mathbf{x}_a} by {{q_a^l:=\sigma_{z_a^l}^2}}. If the inputs are uncorrelated (i.e., {{\langle \mathbf{x}_a\mathbf{x}_b\rangle=\langle\mathbf{x}_a\rangle\langle\mathbf{x}_b\rangle}}), then {{\mathrm{cov}(\mathbf{x}_a,\mathbf{x}_b)\!=\!0\implies\rho\!=\!0}}, and the measure reduces to a product of independent Gaussian variables, as expected. (I say “as expected” because the individual inputs {z_i^l} are drawn from a Gaussian. In general however, uncorrelatedness does not necessarily imply independence! The basic reason is that the correlation coefficient {\rho(x,y)} is only sensitive to linear relationships between random variables {x} and {y}; if these are related non-linearly, then it’s possible for them to be dependent ({{p(x,y)\neq p(x)p(y)}}) despite being uncorrelated ({{\rho(x,y)\!=\!0}}). See Wikipedia for a simple example of such a case.)

Following [1], the clever trick is then to define new integration variables {z_1,z_2} such that

\displaystyle z_a=\sqrt{q_a^{l-1}}\,z_1~,\qquad\qquad z_b=\sqrt{q_b^{l-1}}\left(\rho^{l-1} z_1+\sqrt{1-(\rho^{l-1})^2}\,z_2\right)~, \ \ \ \ \ (19)

(Do not confuse the standard normal integration variables {z_1,z_2} with the inputs {z_{i,a}^l}; the former are distinguished by the lack of any layer label {l}). With this substitution, the integral (16) becomes

\displaystyle \int\!\mathcal{D} z_a\mathcal{D} z_b\,\phi(z_a^{l-1})\phi(z_b^{l-1}) =\int\!\frac{\mathrm{d} z_1\mathrm{d} z_2}{2\pi}e^{-\frac{1}{2}\left( z_1^2+z_2^2\right)}\phi(z_a)\phi(z_b)~, \ \ \ \ \ (20)

and thus we obtain the recursion relation for the covariance, cf. eqn. (5) of [1]:

\displaystyle q_{ab}^l=\sigma_w^2\int\!\mathcal{D} z_1\mathcal{D} z_2\,\phi(z_a)\phi(z_b)+\sigma_b^2~, \ \ \ \ \ (21)

with {z_a,z_b} given by (19). Note that since {z_1,z_2} are Gaussian, their independence (i.e., the fact that {p(z_1,z_2)=p(z_1)p(z_2)}) follows from their uncorrelatedness, {\rho(z_1,z_2)\!=\!0}; the latter is easily shown by direct computation, and recalling that {\langle z_a^2\rangle\!=\!\sigma_a^2}.
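As a quick sanity check of the substitution (19) (a Monte Carlo sketch of my own; the values of {q_a,q_b,\rho} are arbitrary), one can verify that {z_a,z_b} built from independent standard normals indeed carry the desired variances and correlation:

```python
import math
import random

def sample_pair(q_a, q_b, rho, rng):
    """Draw (z_a, z_b) via the change of variables (19) from independent z1, z2 ~ N(0,1)."""
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    z_a = math.sqrt(q_a) * z1
    z_b = math.sqrt(q_b) * (rho * z1 + math.sqrt(1 - rho**2) * z2)
    return z_a, z_b

rng = random.Random(0)
q_a, q_b, rho = 1.2, 0.8, 0.6
pairs = [sample_pair(q_a, q_b, rho, rng) for _ in range(200_000)]
var_a = sum(a * a for a, _ in pairs) / len(pairs)
var_b = sum(b * b for _, b in pairs) / len(pairs)
corr = sum(a * b for a, b in pairs) / len(pairs) / math.sqrt(var_a * var_b)
print(var_a, var_b, corr)  # close to 1.2, 0.8, 0.6
```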

Now, the interesting thing about the recursion relations (14) and (21) is that they exhibit fixed points for particular values of {\sigma_w}, {\sigma_b}, which one can determine graphically (i.e., numerically) for a given choice of non-linear activation {\phi(z)}. For example, the case {\phi(z)=\tanh(z)} is plotted in fig. 1 of [1] (shown below), demonstrating that the fixed point condition

\displaystyle q^*=\sigma_w^2\int\!\frac{\mathrm{d} z}{\sqrt{2\pi}}\,e^{-\frac{z^2}{2}}\tanh(\sqrt{q^*}z)^2+\sigma_b^2 \ \ \ \ \ (22)

is satisfied for a range of {(\sigma_w,\sigma_b)}. Therefore, when the parameters of the network (i.e., the distributions (2)) are tuned appropriately, the variance remains constant as signals propagate through the layers. This is useful, because it means that non-zero biases at each layer prevent signals from decaying to zero. Furthermore, provided the resulting map {q^l(q^{l-1})} is monotonic and concave, the convergence to the fixed point occurs relatively rapidly as a function of depth (i.e., layer {l}). One can verify this numerically for the case {\phi(z)=\tanh(z)} considered above: since {\tfrac{\mathrm{d}^2 q^{l}}{\mathrm{d}(q^{l-1})^2}\!<\!0}, the dynamics ensure convergence to {q^*} for all initial values. To visualize this, consider the following figure, which I took from [1] ({\mathcal{V}} is their notation for the recursion relation (14)). In the left plot, I’ve added the red arrows illustrating the convergence to the fixed point for {\sigma_w=4.0}. For the particular starting point I chose, it takes about 3 iterations to converge, which is consistent with the plot on the right.


Similarly, the swift convergence to {q^*} for the variance of individual neurons implies a fixed point for (21). Observe that when {q_a^{l-1}\!=\!q_b^{l-1}\!=\!q^*}, {\rho^{l-1}\!=\!q_{ab}^{l-1}/q^*}. Hence by dividing (21) by {q^*}, we have

\displaystyle \rho^l=\frac{\sigma_w^2}{q^*}\int\!\mathcal{D} z_1\mathcal{D} z_2\, \phi\left(\sqrt{q^*}z_1\right) \phi\left(\sqrt{q^*}\left[\rho^{l-1}z_1+\sqrt{1-(\rho^{l-1})^2}z_2\right]\right) +\frac{\sigma_b^2}{q^*}~. \ \ \ \ \ (23)

This equation always has a fixed point at {\rho^*\!=\!1}:

\displaystyle \rho^l=\frac{\sigma_w^2}{q^*}\int\!\mathcal{D} z_1\mathcal{D} z_2\, \phi\left(\sqrt{q^*}z_1\right)^2 +\frac{\sigma_b^2}{q^*} =\frac{1}{q^*}q^{l}=1~,\ \ \ \ \ \ (24)

where {\int\!\mathcal{D} z_2\!=\!1}, and we have identified the single-neuron variance {q^l}, cf. (14), evaluated at the fixed point {q_a^l\!=\!q_b^l\!=\!q^*}.

Let us now ask about the stability of this fixed point. As observed in [1], this is probed by the first derivative evaluated at {\rho^*}:

\displaystyle \chi_1:= \frac{\partial \rho^l}{\partial\rho^{l-1}}\bigg|_{\rho^{l-1}=1} =\;\sigma_w^2\int\!\mathcal{D} z\,\phi'\left(\sqrt{q^*}z\right)^2~, \ \ \ \ \ (25)

where {\phi'(u)=\tfrac{\partial\phi(u)}{\partial u}} (the second equality is not obvious; see the derivation at the end of this post if you’re curious). The reason is that for monotonic functions {\phi(z)}, {\chi_1<1} implies that the curve (23) must approach the unity line from above (i.e., it’s concave, or has a constant slope with a vertical offset at {\rho^{l-1}\!=\!0} set by the biases), and hence in this case {\rho^l} is driven towards the fixed point {\rho^*}. Conversely, if {\chi_1>1}, the curve must be approaching this point from below (i.e., it’s convex), and hence in this case {\rho^l} is driven away from unity. Depending on the strength of the biases relative to the weights, the system will then converge to some new fixed point that approaches zero as {\sigma_b\rightarrow 0}. (It may help to sketch a few examples, like I superimposed on the left figure above, to convince yourself that this slope/convergence argument works).
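To see the two phases concretely, here is a small numerical sketch of (25) for {\phi=\tanh} (my own illustration; the specific {(\sigma_w,\sigma_b)} values are arbitrary choices on either side of the transition): we first iterate (14) to obtain {q^*}, then evaluate {\chi_1}.

```python
import math

def gauss_expect(f, n=1501, zmax=8.0):
    """E[f(z)] for z ~ N(0,1), via the trapezoid rule on [-zmax, zmax]."""
    dz = 2 * zmax / (n - 1)
    total = 0.0
    for i in range(n):
        z = -zmax + i * dz
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * f(z) * math.exp(-z * z / 2)
    return total * dz / math.sqrt(2 * math.pi)

def q_star(sigma_w, sigma_b, iters=100):
    """Fixed point q* of the variance recursion (14), for phi = tanh."""
    q = 1.0
    for _ in range(iters):
        q = sigma_w**2 * gauss_expect(lambda z: math.tanh(math.sqrt(q) * z)**2) + sigma_b**2
    return q

def chi_1(sigma_w, sigma_b):
    """Eq. (25) for phi = tanh, whose derivative is phi'(u) = 1/cosh(u)^2."""
    q = q_star(sigma_w, sigma_b)
    return sigma_w**2 * gauss_expect(lambda z: (1.0 / math.cosh(math.sqrt(q) * z))**4)

print(chi_1(1.0, 0.3))  # < 1: ordered phase
print(chi_1(4.0, 0.3))  # > 1: chaotic phase
```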

The above implies the existence of an order-to-chaos phase transition in the {(\sigma_w,\sigma_b)} plane: {\chi>1} corresponds to the chaotic phase, in which the large random weights cause any correlation between inputs to shrink, while {\chi<1} corresponds to the ordered phase, in which the inputs become perfectly aligned and hence {\rho\rightarrow1}. At exactly {\chi\!=\!1}, the system lies at a critical point, and should therefore be characterized by some divergence(s) (recall that, in the limit {N\rightarrow\infty}, a phase transition is defined by the discontinuous change of some quantity; one often hears of an “{n^\mathrm{th}} order phase transition” if the discontinuity lies in the {n^\mathrm{th}} derivative of the free energy).

We saw in part 1 that at a critical point, the correlation length that governs the fall-off of the two-point correlator diverges. This is the key physics underlying the further development of this mean field framework in [2], whose authors used it to show that random networks are trainable only in the vicinity of this phase transition. Roughly speaking, the intuition is that for such a network to be trainable, information must be able to propagate all the way through the network to establish a relationship between the input (data) and the output (i.e., the evaluation of the cost function) (actually, it needs to go both ways, so that one can backpropagate the gradients through as well). In general, correlations in the input data will exhibit some fall-off behaviour as a function of depth, which limits how far information can propagate through these networks. However, borrowing our physics terminology, we may say that a network at criticality exhibits fluctuations on all scales, so there’s no fall-off behaviour limiting the propagation of information with depth.

So, what is the correlation length in the present problem? In the previous post on mean field theory, we determined this by examining the fall-off of the connected 2-point correlator (i.e., the Green function), which in this case is just {\langle z_a^lz_b^l\rangle=q_{ab}^l}. To determine the fall-off behaviour, we expand near the critical point, and examine the difference {|\rho^l-\rho^*|} as {l\!\rightarrow\!\infty} (where recall {\rho^l=q_{ab}^l/q^*}; the limit simply ensures that {q^*,\rho^*} exist, though as shown above, we don’t have to wait that long in practice). This is done explicitly in [2], so I won’t work through all the details here; the result is that upon expanding {\rho^l=\rho^*+\epsilon^l}, one finds the recurrence relation

\displaystyle \epsilon^{l+1}=\epsilon^l\,\sigma_w^2\int\!\mathcal{D} z_1\mathcal{D} z_2\,\phi'(z_a)\phi'(z_b)~, \ \ \ \ \ (26)

up to terms of order {(\epsilon^l)^2}. This implies that, at least asymptotically, {\epsilon^l\sim e^{-l/\xi}}, which one can verify by substituting this into (26) and identifying

\displaystyle \xi^{-1}=-\ln\left[\sigma_w^2\int\!\mathcal{D} z_1\mathcal{D} z_2\,\phi'(z_a)\phi'(z_b)\right]~. \ \ \ \ \ (27)

Thus {\xi} is the correlation length that governs the propagation of information — i.e., the decay of the two-point correlator — through the network. And again, we know from our discussion in part 1 that the correlation length diverges at the critical point, which is indeed what happens here: denoting the value of the correlation length at the critical point {\rho^*\!=\!1} by {\xi_1}, observe that the argument of (27) reduces to (25), i.e.,

\displaystyle \xi_1^{-1}=-\ln\chi_1~, \ \ \ \ \ (28)

which goes to infinity precisely at the order-to-chaos phase transition, since the latter occurs at exactly {\chi\!=\!1}.
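A trivial numerical illustration of (28): the depth scale over which correlations survive blows up as {\chi_1\rightarrow1} from below.

```python
import math

def xi(chi):
    """Correlation length from (28): xi = -1/ln(chi); diverges as chi -> 1."""
    return -1.0 / math.log(chi)

for chi in (0.5, 0.9, 0.99, 0.999):
    print(chi, xi(chi))  # xi grows without bound as chi -> 1
```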

In practice of course, both {l} and the number of neurons {N} are finite, which precludes a genuine discontinuity at the phase transition (since the partition function is then a finite sum of analytic functions). Nonetheless, provided {\xi\gg1}, we expect that the basic conclusion about trainability above still holds even if the correlation length remains finite. And indeed, the main result of [2] is that at the critical point, not only can information about the inputs propagate all the way through the network, but information about the gradients can also propagate backwards (that is, vanishing/exploding gradients occur only in the ordered/chaotic phases, respectively, but gradients are stable at the phase transition), and consequently these random networks are trainable provided {l\lesssim\xi}.

Incidentally, you might worry about the validity of these MFT results, given that — as stressed in part 1 — it’s precisely at a critical point that MFT may break down, due to the relevance of higher-order terms that the approximation ignores. But in this case, we didn’t neglect anything: we saw in (7) that the distribution of inputs already follows a Gaussian, so the only “approximation” lies in ignoring any finite-size effects associated with backing away from the {N\rightarrow\infty} limit.

The application of MFT to neural networks is actually a rather old idea, with the presence of an order-to-chaos transition known in some cases since at least the 1980’s [3]. More generally, the idea that the critical point may offer unique benefits to computation has a long and eclectic history, dating back to an interesting 1990 paper [4] on cellular automata that coined the phrase “computation at the edge of chaos”. However, only in the past few years has this particular interdisciplinary ship really set sail, with a number of interesting papers demonstrating the power of applying physics to deep learning. The framework developed in [1,2] that I’ve discussed here has already been generalized to convolutional [5] and recurrent [6,7] neural networks, where one again finds that initializing the network near the phase transition results in significant improvements in training performance. It will be exciting to see what additional insights we can discover in the next few years.


  1. B. Poole, S. Lahiri, M. Raghu, J. Sohl-Dickstein, and S. Ganguli, “Exponential expressivity in deep neural networks through transient chaos,” arXiv:1606.05340 [stat.ML].
  2. S. S. Schoenholz, J. Gilmer, S. Ganguli, and J. Sohl-Dickstein, “Deep Information Propagation,” arXiv:1611.01232 [stat.ML].
  3. H. Sompolinsky, A. Crisanti, and H. J. Sommers, “Chaos in random neural networks,” Phys. Rev. Lett. 61 (Jul, 1988) 259–262.
  4. C. G. Langton, “Computation at the edge of chaos: Phase transitions and emergent computation,” Physica D: Nonlinear Phenomena 42 no. 1, (1990) 12 – 37.
  5. L. Xiao, Y. Bahri, J. Sohl-Dickstein, S. S. Schoenholz, and J. Pennington, “Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks,” arXiv:1806.05393 [stat.ML].
  6. M. Chen, J. Pennington, and S. S. Schoenholz, “Dynamical Isometry and a Mean Field Theory of RNNs: Gating Enables Signal Propagation in Recurrent Neural Networks,” arXiv:1806.05394 [stat.ML].
  7. D. Gilboa, B. Chang, M. Chen, G. Yang, S. S. Schoenholz, E. H. Chi, and J. Pennington, “Dynamical Isometry and a Mean Field Theory of LSTMs and GRUs,” arXiv:1901.08987 [cs.LG].

The second equality in (25), namely

\displaystyle \frac{\partial \rho^l}{\partial\rho^{l-1}}\bigg|_{\rho^{l-1}=1} =\;\sigma_w^2\int\!\mathcal{D} z\,\phi'\left(\sqrt{q^*}z\right)^2~, \ \ \ \ \ (29)

is not obvious, but requires an exercise in basic calculus. Since all the functional dependence on {\rho^{l-1}} — denoted simply {\rho} here for brevity — is contained in {z_b} (cf. (19)), we have from (23)

\displaystyle \begin{aligned} \frac{\partial \rho^l}{\partial\rho} &=\frac{\sigma_w^2}{q^*}\int\!\mathcal{D} z_1\mathcal{D} z_2\,\phi(z_a)\frac{\partial}{\partial\rho}\phi(z_b) =\frac{\sigma_w^2}{q^*}\int\!\mathcal{D} z_1\mathcal{D} z_2\,\phi(z_a)\phi'(z_b)\frac{\partial z_b}{\partial\rho}\\ &=\frac{\sigma_w^2}{\sqrt{q^*}}\int\!\mathcal{D} z_1\mathcal{D} z_2\,\phi(z_a)\phi'(z_b)\left( z_1-\frac{\rho}{\sqrt{1-\rho^2}}z_2\right)~, \end{aligned} \ \ \ \ \ (30)

where the prime denotes the derivative of {\phi} with respect to its argument. We now use the hint given in the appendix of [1], namely that for the standard Gaussian measure, integration by parts allows us to write {{\int\!\mathcal{D} z\,zf(z)=\int\!\mathcal{D} z\,f'(z)}}. Applying this in {z_1}, and remembering that both {z_a} and {z_b} depend on {z_1} via (19), with {\partial z_a/\partial z_1=\sqrt{q^*}} and {\partial z_b/\partial z_1=\sqrt{q^*}\,\rho}, we have

\displaystyle \int\!\mathcal{D} z_1\,z_1\phi(z_a)\phi'(z_b) =\int\!\mathcal{D} z_1\,\frac{\partial}{\partial z_1}\left[\phi(z_a)\phi'(z_b)\right] =\sqrt{q^*}\int\!\mathcal{D} z_1\left[\phi'(z_a)\phi'(z_b)+\rho\,\phi(z_a)\phi''(z_b)\right]~. \ \ \ \ \ (31)

Similarly, applying the identity in {z_2}, on which only {z_b} depends, with {\partial z_b/\partial z_2=\sqrt{q^*(1-\rho^2)}},

\displaystyle \int\!\mathcal{D} z_2\,z_2\phi'(z_b) =\int\!\mathcal{D} z_2\,\frac{\partial \phi'(z_b)}{\partial z_2} =\sqrt{q^*(1-\rho^2)}\int\!\mathcal{D} z_2\,\phi''(z_b)~. \ \ \ \ \ (32)

Substituting these results into (30), the two terms involving {\phi''} cancel exactly, and we find

\displaystyle \frac{\partial \rho^l}{\partial\rho} =\sigma_w^2\int\!\mathcal{D} z_1\mathcal{D} z_2\,\phi'(z_a)\phi'(z_b) +\sigma_w^2\rho\int\!\mathcal{D} z_1\mathcal{D} z_2\,\phi(z_a)\phi''(z_b) -\sigma_w^2\rho\int\!\mathcal{D} z_1\mathcal{D} z_2\,\phi(z_a)\phi''(z_b) =\sigma_w^2\int\!\mathcal{D} z_1\mathcal{D} z_2\,\phi'(z_a)\phi'(z_b)~. \ \ \ \ \ (33)

Note that this holds for any {\rho}; it is precisely the coefficient appearing in the recursion (26) for {\epsilon^l}. Evaluating at {\rho^*\!=\!1}, we have {z_b\!=\!z_a\!=\!\sqrt{q^*}z_1}, so the integral over {z_2} is trivial, and we recover the claimed result:

\displaystyle \frac{\partial \rho^l}{\partial\rho}\bigg|_{\rho=1} =\sigma_w^2\int\!\mathcal{D} z_1\,\phi'\left(\sqrt{q^*}z_1\right)^2=\chi_1~. \ \ \ \ \ (34)
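As a numerical sanity check of this result (my own sketch; {q^*\!=\!1.2} and {\rho\!=\!0.5} are arbitrary test values), one can verify by finite differences that the {\rho}-derivative of the correlation integral equals {q^*\!\int\!\mathcal{D} z_1\mathcal{D} z_2\,\phi'(z_a)\phi'(z_b)}, i.e., the coefficient appearing in the recursion (26):

```python
import math

def gauss2_expect(f, n=201, zmax=6.0):
    """E[f(z1, z2)] for independent z1, z2 ~ N(0,1), tensor-product trapezoid rule."""
    dz = 2 * zmax / (n - 1)
    zs = [-zmax + i * dz for i in range(n)]
    ws = [math.exp(-z * z / 2) * (0.5 if i in (0, n - 1) else 1.0)
          for i, z in enumerate(zs)]
    total = sum(ws[i] * ws[j] * f(zs[i], zs[j])
                for i in range(n) for j in range(n))
    return total * dz * dz / (2 * math.pi)

q, rho = 1.2, 0.5  # arbitrary stand-ins for q* and rho
phi = math.tanh
dphi = lambda u: 1.0 / math.cosh(u)**2  # derivative of tanh

def corr_integral(r):
    """I(rho) = E[phi(z_a) phi(z_b)], with z_a, z_b as in (19)."""
    return gauss2_expect(lambda z1, z2: phi(math.sqrt(q) * z1)
                         * phi(math.sqrt(q) * (r * z1 + math.sqrt(1 - r * r) * z2)))

h = 1e-4
lhs = (corr_integral(rho + h) - corr_integral(rho - h)) / (2 * h)  # dI/drho
rhs = q * gauss2_expect(lambda z1, z2: dphi(math.sqrt(q) * z1)
                        * dphi(math.sqrt(q) * (rho * z1 + math.sqrt(1 - rho * rho) * z2)))
print(lhs, rhs)  # the two agree
```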


Mean field theory: from physics to deep neural nets

In a previous post, I alluded to the question of whether criticality played any role in deep neural networks. The question I originally had in mind was whether the fact that the correlation length diverges at a critical point implies an advantage in terms of information propagation in such systems. In particular, would a neural net operating at or near the critical point (e.g., by savvy initialization of weights and biases) exhibit advantages in training or performance? As it turns out, a few papers have actually addressed this issue using a prescription from physics known as mean field theory (MFT). In this two-part sequence, I’d like to first explain briefly what MFT is, and then in part 2 understand how machine learning researchers have applied it to obtain impressive real-world improvements in training performance.

In a nutshell, the idea behind MFT is that most partition functions (containing interactions) are too hard to evaluate explicitly, but can be made tractable by replacing each degree of freedom, together with its interactions, with an effective degree of freedom — the mean field — in which these interactions have been averaged over. Note that MFT is an approximation, for two reasons: it ignores higher-order fluctuations, and the averaging prescription necessarily washes out some fine-grained information. We’ll cover both these points in more detail below, when discussing the situations under which MFT fails. Perhaps ironically given the previous paragraph, MFT breaks down precisely at the critical point, so it’s important to understand the conditions under which the associated predictions are valid.

To make our discussion concrete, consider the {d}-dimensional Ising hamiltonian with {N} spins total (i.e., {N^{1/d}} spins per direction):

\displaystyle H=-\frac{1}{2}\hat J^{ij}\sigma_i\sigma_j-\hat h^i\sigma_i~, \ \ \ \ \ (1)

where {\sigma_i=\pm1}, and for compactness I’m employing Einstein’s summation convention with {i=\{1,\ldots,N\}} and {\hat h^i=\hat h\mathbf{1}}. Since all directions are spacelike, there’s no difference between raised and lowered indices (e.g., {\sigma_i=\sigma^i}), so I’ll denote the inverse matrix {\hat J^{-1}} explicitly to avoid any possible confusion, i.e., {\hat J\hat J^{-1}=\mathbf{1}} and {{\hat J^{ij}\hat J_{ij}^{-1}=\delta^i_{~i}=N}}. In a 1d lattice, one would typically avoid boundary effects by joining the ends into an {S^1} by setting {\sigma_{N+1}=\sigma_1}, but this issue won’t be relevant for our purposes, as we’ll be interested in the thermodynamic limit {N\rightarrow\infty} anyway.

One approach to constructing a MFT for this model is to observe that for a given spin {\sigma_i}, the effect of all the other spins acts like an external magnetic field. That is, observe that we may write (1) as

\displaystyle H=\sum_i\sigma_i\left(-\frac{1}{2}\sum_j\hat J_{ij}\sigma_j-\hat h\right)~. \ \ \ \ \ (2)

We then replace {\sigma_j} by the average value, {\langle\sigma_j\rangle\equiv s}. We’ll give a more thorough Bayesian treatment of MFT below, but the idea here is that since no spin is special, the most likely value of {\sigma_j} is the mean. This allows us to define an effective magnetic field at site {i}:

\displaystyle \hat h^\textrm{\tiny{eff}}_i\equiv\frac{s}{2}\sum_j\hat J_{ij}+\hat h~, \ \ \ \ \ (3)

so that the hamiltonian becomes

\displaystyle H\approx-\sum_i\sigma_i\hat h_i^\textrm{\tiny{eff}}~, \ \ \ \ \ (4)

where the interactions have been absorbed into the effective magnetic field. Thus we’ve reduced an interacting many-body problem to a non-interacting one-body problem, which is much easier to solve!

As mentioned above however, this result is an approximation, which in the present case amounts to neglecting the correlation between spins at different sites. That is, when we replaced {\sigma_j} by the mean, we secretly discarded terms of order {\delta s^2}. This is illustrated explicitly in John McGreevy’s entertaining notes on the Renormalization Group, and runs as follows. As mentioned above, one can think of MFT as the replacement of the degrees of freedom by the average value plus fluctuations:

\displaystyle \begin{aligned} \sigma_i\sigma_j&=\left[s+(\sigma_i-s)\right]\left[s+(\sigma_j-s)\right] =\left( s+\delta s_i\right)\left( s+\delta s_j\right)\\ &=s^2+s\left( \delta s_i+\delta s_j\right)+O(\delta s^2) =-s^2+s\left(\sigma_i+\sigma_j\right)+O(\delta s^2)~, \end{aligned} \ \ \ \ \ (5)

where {\delta s_i\equiv\sigma_i-s}. We then substitute this into the hamiltonian (1); taking {\hat J_{ij}=\hat J_{ji}} for simplicity, we obtain

\displaystyle H=\frac{s^2}{2}\sum_{ji}\hat J_{ij} +\sum_i\sigma_i\left(-\sum_j\hat J_{ij}s-\hat h\right)+O(\delta s^2)~. \ \ \ \ \ (6)

Note that the first term is some constant factor of the lattice size, and hence doesn’t affect the dynamics (we simply absorb it into the normalization of the partition function). If we then define an effective magnetic field as in (3) (with a suitable factor of 2) and work to linear order in the fluctuations, we recover the effective one-body hamiltonian (4). Thus, in the course of our mean field approximation, we averaged over the fluctuations, but lost some information about the interactions between spins.
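Note that the decomposition (5) is an exact algebraic identity before the {O(\delta s^2)} term is dropped; a two-line check (my own, with an arbitrary value of {s} and random spins):

```python
import random

random.seed(1)
s = 0.3  # arbitrary mean-field value
spins = [random.choice((-1, 1)) for _ in range(100)]
for si in spins:
    for sj in spins:
        ds_i, ds_j = si - s, sj - s  # the fluctuations delta s
        # sigma_i sigma_j = -s^2 + s(sigma_i + sigma_j) + delta s_i delta s_j
        assert abs(si * sj - (-s * s + s * (si + sj) + ds_i * ds_j)) < 1e-12
print("identity (5) holds exactly; MFT drops the quadratic fluctuation term")
```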

Despite its approximate nature, the hamiltonian (6) (equivalently, (4)) is quite useful insofar as it can be used to obtain an upper bound on the free energy. To understand this, let us introduce the Bayesian perspective promised above. In MFT, we’re ignoring some information about the system at hand, but we want to make inferences that are as accurate as possible subject to the available constraints. Recall from our discussion of entropy that if we do not know the underlying distribution with respect to which a particular expectation value is computed, the most rational choice is obtained by maximizing the von Neumann entropy. In particular, if we know the average energy, {\langle H\rangle=E}, this procedure yields the Boltzmann distribution

\displaystyle p_i\equiv p(x_i)=\frac{1}{Z[\beta]}e^{-\beta E_i}~, \qquad\qquad\sum_ip_i=1~, \ \ \ \ \ (7)

where {E_i\equiv\langle H(x_i)\rangle}, and we identify the inverse temperature {\beta} as the Lagrange multiplier arising from the constraint on the energy.

Now suppose that instead of knowing the average energy, we know only the temperature (i.e., we consider the canonical rather than microcanonical ensemble). As explained in a previous post, this amounts to a constraint in the (canonically) dual space, so the appropriate extremization procedure is instead to minimize the free energy,

\displaystyle F=E-\beta^{-1}S~. \ \ \ \ \ (8)

(Note that for a given energy configuration, the free energy is minimized when the entropy is maximized). One finds again (7), with {Z[\beta]=e^{\beta\lambda}} where {\lambda} is the Lagrange multiplier for the normalization constraint, cf. the max-ent procedure here.

The upshot is that the max-ent distribution (7) has minimum free energy: had we used any other distribution, it would amount to the imposition of additional constraints on the system, thereby reducing the entropy and increasing {F} in (8). This is essentially what happens in MFT, since we select a more tractable distribution with respect to which we can compute expectation values (i.e., a simpler hamiltonian, with a constraint on the fluctuations). In the present case, this implies that the normalized free energy obtained via the mean-field hamiltonian (6), denoted {f_\textrm{\tiny{MF}}}, provides an upper bound on the true (equilibrium) free energy {f}:

\displaystyle f_\textrm{\tiny{MF}}\geq f~, \ \ \ \ \ (9)

where {f=F/N} (henceforth I’ll refer to this simply as the free energy, without the “normalized” qualifier). This statement, sometimes referred to as the Bogolyubov inequality, can be easily shown to follow from Gibbs’ inequality. (This apparently standard derivation is however not a sufficient explanation per se, since it simply assumes that expectation values are taken with respect to the mean-field distribution. Had we chosen to take them with respect to the equilibrium (max-ent) distribution, the bound would be reversed!)

Working to linear order in the fluctuations, the mean-field partition function for (6) is

\displaystyle \begin{aligned} Z_\textrm{\tiny{MF}}&=\sum_{\{\sigma\}}e^{-\beta H_\textrm{\tiny{MF}}} =e^{-NdJs^2}\prod_{k=1}^N\sum_{\sigma_k=\pm1}\exp\left[\left(2dJs+h\right)\sigma_k\right]\\ &=e^{-NdJs^2}\prod_{k=1}^N\left[e^{-\left(2dJs+h\right)}+e^{\left(2dJs+h\right)}\right]\\ &=2^Ne^{-NdJs^2}\cosh^N\left(2dJs+h\right)~, \end{aligned} \ \ \ \ \ (10)

where for simplicity we have restricted to homogeneous nearest-neighbor interactions (in {d} spatial dimensions, each of the {N} spins has {2d} neighbors with coupling strength {J\equiv\beta\hat J} and {h\equiv\beta \hat h}). The corresponding free energy is then

\displaystyle f_\textrm{\tiny{MF}}(s)=-\frac{1}{N\beta}{\ln Z} =\beta^{-1}dJs^2-\beta^{-1}\ln\cosh\left(2dJs+h\right)~, \ \ \ \ \ (11)

where we have dropped the {\beta^{-1}\ln 2} term, since this doesn’t contribute to any of the observables for which {F} serves as a generating function (that is, it’s just a normalization factor).

Now, as per our discussion above, (11) provides an upper bound on the true free energy. Thus we can obtain the tightest possible bound (given our ansatz (6)) by minimizing over {s}:

\displaystyle \frac{\partial f_{\textrm{\tiny{MF}}}}{\partial s}=0 \quad\implies\quad s=\tanh\left(2dJs+h\right)~. \ \ \ \ \ (12)

This last is referred to as the self-consistency condition. The reason is that it’s precisely what we would have obtained had we computed the average spin via the single-site hamiltonian (6), or equivalently (4): since the linear term has been absorbed into the effective magnetic field {\hat h^\textrm{\tiny{eff}}}, it looks as though {J=0}, and therefore

\displaystyle s=\langle\sigma\rangle=\sum_{\sigma=\pm1}\!\sigma\,p(\sigma) =\frac{1}{Z}\sum_{\sigma=\pm1}\sigma e^{ h^{\textrm{\tiny{eff}}}\sigma} =\frac{\sum_{\sigma=\pm1}\sigma e^{h^{\textrm{\tiny{eff}}}\sigma}}{\sum_{\sigma=\pm1} e^{h^{\textrm{\tiny{eff}}}\sigma}} =\tanh h^\textrm{\tiny{eff}}~, \ \ \ \ \ (13)

where {h^\textrm{\tiny{eff}}\equiv\beta\hat h^\textrm{\tiny{eff}}}. Substituting in (3) for the homogeneous nearest-neighbor case at hand (that is, without the factor of {1/2}, cf. (6)) then gives precisely (12).

The self-consistency equation can be solved graphically, i.e., the solutions are given by the intersections of the left- and right-hand sides; see [1] for a pedagogical treatment. In brief, for {h\!\neq\!0}, the global minimum of {f_\textrm{\tiny{MF}}} is given by the intersection at positive {s}, regardless of temperature. For {h\!=\!0} in contrast, there’s a single minimum at {s\!=\!0} for high temperatures, and two degenerate minima at {\pm s_0} at low temperatures (depending on whether {\tanh(2d\beta\hat Js)} crosses {s} for {s\neq0}; to see this, recall that for small {x}, {\tanh x\approx x-x^3/3+\ldots}, so a sufficiently small value of {\beta} makes this an approximately straight line whose slope is less than 1, in which case the only intersection with the line {s} is at the origin). The critical temperature that divides these two regimes is found by imposing that

\displaystyle s=\tanh(2d\beta_c\hat Js)\overset{!}{=}\tanh(s) \quad\implies\quad T_c=2d\hat J~. \ \ \ \ \ (14)

Note that this is completely wrong for low dimensions! For {d\!=\!1}, {T_c\!=\!0}, while for {d\!=\!2}, {T_c\approx2.269\hat J}; we’ll have more to say about this failure below.
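For what it’s worth, the self-consistency condition (12) is easy to solve by fixed-point iteration; a sketch (with the arbitrary choices {d\!=\!3}, {\hat J\!=\!1}, {h\!=\!0}, so the mean-field prediction is {T_c\!=\!6}):

```python
import math

def magnetization(T, d=3, J=1.0, h=0.0, s=0.5, iters=5000):
    """Iterate the self-consistency condition (12): s = tanh((2 d J s + h) / T)."""
    for _ in range(iters):
        s = math.tanh((2 * d * J * s + h) / T)
    return s

Tc = 2 * 3 * 1.0  # mean-field critical temperature, eq. (14)
print(magnetization(8.0))  # T > Tc: s -> 0 (disordered)
print(magnetization(4.0))  # T < Tc: s -> nonzero (ordered)
```

Keep in mind this implements the mean-field map, not the exact solution, so (per the caveat above) it would wrongly predict ordering at finite temperature even for {d\!=\!1}.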

Let’s concentrate on the {h\!=\!0} case henceforth: note that the minimum {s_0} always satisfies {|s_0|\!<\!1} (since {\lim\nolimits_{x\rightarrow\pm\infty}\tanh x=\pm1}), and vanishes continuously as {T\!\rightarrow\!T_c} from below, so near the critical temperature we can expand

\displaystyle \ln\cosh(2dJs_0)=\frac{1}{2}(2dJ)^2s_0^2-\frac{1}{12}(2dJ)^4s_0^4+O(s_0^6)~, \ \ \ \ \ (15)

whence the free energy (11) near the critical point is approximately

\displaystyle f_\textrm{\tiny{MF}}(s_0)\approx \frac{r}{2}s_0^2+\frac{g}{4!}s_0^4~, \ \ \ \ \ (16)

where we have defined

\displaystyle \begin{aligned} r&\equiv 2dJ\beta^{-1}\left(1-2dJ\right) =\frac{T_c}{T}\left( T-T_c\right)~,\\ g&\equiv 32\beta^{-1}d^4J^4 =\frac{2T_c^4}{T^3}~, \end{aligned} \ \ \ \ \ (17)

and dropped all higher-order terms. Observe that the sign of {r} changes at the critical temperature {T\!=\!T_c}, which determines whether the global minimum of {f_\textrm{\tiny{MF}}} lies at {s_0\!=\!0} ({T\!>\!T_c}) or {\pm s_0\!>\!0} ({T\!<\!T_c}). The physical interpretation is that below the critical temperature, it is energetically favourable for the spins to align, resulting in a non-zero magnetization (which is what the average spin {\langle\sigma\rangle=s} is). Above the critical temperature however, thermal fluctuations disrupt this ordering, so the net magnetization is zero. For this reason, the magnetization {s} is an example of an order parameter, since it parametrizes which “order” — that is, which phase — we’re in on either side of the critical point.
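Explicitly, minimizing (16) gives {s_0\!=\!0} for {r\!\geq\!0} and {s_0\!=\!\pm\sqrt{-6r/g}} for {r\!<\!0}; a small sketch using the coefficients (17) (again with the arbitrary choices {d\!=\!3}, {\hat J\!=\!1}):

```python
import math

def order_parameter(T, d=3, J_hat=1.0):
    """Global minimum of the quartic free energy (16), with r, g from (17)."""
    Tc = 2 * d * J_hat
    r = (Tc / T) * (T - Tc)
    g = 2 * Tc**4 / T**3
    # f'(s) = r s + (g/6) s^3 = 0  =>  s = 0 or s^2 = -6 r / g
    return 0.0 if r >= 0 else math.sqrt(-6 * r / g)

for T in (7.0, 6.0, 5.0, 4.0):
    print(T, order_parameter(T))  # zero at and above Tc = 6, growing below
```

Bear in mind the quartic truncation is only trustworthy for small {s_0}, i.e., near {T_c}.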

As alluded to above however, there’s a problem with the MFT results for the critical point, namely that it’s precisely at the critical point where MFT breaks down! The reason is that at the critical point, fluctuations at all scales are important, whereas MFT includes only fluctuations to linear order (cf. (5)). The contribution from all scales is related to the statement we made in the introductory paragraph, namely that the correlation length diverges at the critical point. To properly understand this, we need to go beyond the MFT approach above. In particular, while the discrete lattice is a helpful starting point, we can gain further insight by considering a continuum field theory. We’ll see that MFT corresponds to the leading-order saddle point approximation, and that the first corrections to this expansion can qualitatively change these results.

To proceed, we’ll map our square-lattice Ising model to an equivalent theory of scalar fields. (If you like, you can just jump to the Ginzburg-Landau action (32) and take it as an ansatz, but I find the mapping both neat and instructive). Starting again from (1), the partition function is

\displaystyle Z=\sum_{\{\sigma\}}e^{-\beta H} =\prod_{k=1}^N\sum_{\sigma_k=\pm1}\exp\left(\frac{1}{2} J^{ij}\sigma_i\sigma_j+h^i\sigma_i\right)~, \ \ \ \ \ (18)

where as before we have absorbed the pesky factor of {\beta} by defining {J=\beta\hat J}, {h=\beta\hat h}. The first step is to apply the Hubbard-Stratonovich transformation,

\displaystyle e^{\frac{1}{2}K^{ij}s_is_j}=\left[\frac{\mathrm{det}K}{(2\pi)^N}\right]^{1/2}\!\int\!\mathrm{d}^N\phi\exp\left(-\frac{1}{2}K^{ij}\phi_i\phi_j+K^{ij}s_i\phi_j\right)~, \qquad\forall\,s_i\in\mathbb{R}~. \ \ \ \ \ (19)

(We used this before in our post on restricted Boltzmann machines; the difference here is that we want to allow {h\neq0}). Applying this transformation to the first term in the partition function, we have
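As a quick sanity check, the identity (19) is straightforward to verify numerically for small {N}, since the right-hand side is just a shifted Gaussian integral. Here’s a minimal sketch for {N=2}; the matrix {K} and vector {s} below are arbitrary sample values (not taken from the text), with {K} symmetric and positive definite so that the integral converges:

```python
import numpy as np

# Hubbard-Stratonovich identity, eq. (19), checked for N = 2.
# K must be symmetric positive definite for the phi-integral to converge.
K = np.array([[2.0, 0.5], [0.5, 1.5]])
s = np.array([1.0, -1.0])

lhs = np.exp(0.5 * s @ K @ s)

# Brute-force the N-dimensional integral on a grid.
u = np.linspace(-10.0, 10.0, 801)
du = u[1] - u[0]
P1, P2 = np.meshgrid(u, u, indexing="ij")
phi = np.stack([P1, P2], axis=-1)                   # shape (801, 801, 2)
quad = np.einsum("...i,ij,...j->...", phi, K, phi)  # phi.K.phi at each grid point
lin = np.einsum("i,ij,...j->...", s, K, phi)        # s.K.phi at each grid point
integral = np.exp(-0.5 * quad + lin).sum() * du**2
rhs = np.sqrt(np.linalg.det(K) / (2 * np.pi) ** 2) * integral

print(abs(lhs - rhs) / lhs)  # relative error should be tiny (grid approximation)
```

Completing the square shows that the integrand is just a Gaussian centered at {\phi=s}, which is why the identity holds for any real {s}.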

\displaystyle Z=\left[\frac{\mathrm{det}J}{(2\pi)^N}\right]^{1/2}\!\int\!\mathrm{d}^N\phi\sum_{\{\sigma\}}\exp\left[-\frac{1}{2}J^{ij}\phi_i\phi_j+\left( J^{ij}\phi_j+h^i\right)\sigma_i\right]~. \ \ \ \ \ (20)

At a computational level, the immediate advantage of the Hubbard-Stratonovich transformation in the present case is that we can sum over the binary spins {\sigma}, leaving us with an expression entirely in terms of the new field variables {\phi}. Observe that for each spin,

\displaystyle \sum_{\sigma_i=\pm1}\exp\left[\left( J^{ij}\phi_j+h^i\right)\sigma_i\right]=2\cosh\left( J^{ij}\phi_j+h^i\right)~, \ \ \ \ \ (21)

and therefore by re-exponentiating this expression, the partition function becomes

\displaystyle Z=\left[\left(\frac{2}{\pi}\right)^N\!\mathrm{det}J\right]^{1/2}\!\int\!\mathrm{d}^N\phi\exp\!\left[-\frac{1}{2}J^{ij}\phi_i\phi_j+\sum_i\ln\cosh\left( J^{ij}\phi_j+h^i\right)\right]~. \ \ \ \ \ (22)

We now observe that {J^{ij}\phi_j+h^i\equiv\mu^i} can be thought of as the mean field {\langle\phi_i\rangle} at site {i}, incorporating the interaction with all other sites as well as the external magnetic field. We can then express the partition function in terms of the mean field {\mu^i} by inverting this identification:

\displaystyle \begin{aligned} \phi^i=J^{-1}_{ij}\left(\mu^j-h^j\right) \quad\implies\quad J^{ij}\phi_i\phi_j&=J^{ij}J^{-1}_{in}J^{-1}_{jm}\left(\mu^n-h^n\right)\left(\mu^m-h^m\right)\\ &=J^{-1}_{ij}\left(\mu^i\mu^j-h^i\mu^j-h^j\mu^i+h^ih^j\right)~. \end{aligned} \ \ \ \ \ (23)

As for the change in the measure, this is a linear transformation, so the Jacobian is simply the (constant) determinant:

\displaystyle \mathrm{d}\phi_i=J^{-1}_{ij}\mathrm{d}\mu^j \quad\implies\quad \mathrm{d}^N\!\phi=\mathrm{det}J^{-1}\mathrm{d}^N\!\mu~. \ \ \ \ \ (24)

Hence the partition function may be equivalently expressed as

\displaystyle Z=\left[\left(\frac{2}{\pi}\right)^N\!\mathrm{det}J^{-1}\right]^{1/2}\!e^{-\frac{1}{2}J^{-1}_{ij}h^ih^j}\!\int\!\mathrm{d}^N\!\mu\exp\!\left[-\frac{1}{2}J^{-1}_{ij}\mu^i\mu^j+J^{-1}_{ij}h^i\mu^j+\sum_i\ln\cosh\mu_i\right]~, \ \ \ \ \ (25)

where I’ve assumed {J_{ij}=J_{ji}}. While this only becomes a proper (mean) field theory in the thermodynamic limit, it’s worth emphasizing that up to this point, the transformation from the original lattice model (18) to (25) is exact!
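Since the claim is exactness, it’s worth checking numerically. The sketch below compares the direct spin sum (18) against the transformed representation in the form (22) for {N=2}, with sample values of {J} and {h} (chosen positive definite so the Gaussian converges; a nonzero diagonal in {J} merely shifts {H} by a constant, since {\sigma_i^2=1}):

```python
import numpy as np
from itertools import product

# Sample couplings: J symmetric positive definite (a nonzero diagonal only
# shifts H by a constant, since sigma_i^2 = 1), plus a small external field.
J = np.array([[2.0, 0.8], [0.8, 2.0]])
h = np.array([0.3, -0.1])

# Direct spin sum, eq. (18): four configurations for N = 2.
Z_direct = sum(
    np.exp(0.5 * np.dot(sig, J @ sig) + np.dot(h, sig))
    for sig in product([-1, 1], repeat=2)
)

# Transformed representation, eq. (22), integrated on a grid.
u = np.linspace(-8.0, 8.0, 641)
du = u[1] - u[0]
P1, P2 = np.meshgrid(u, u, indexing="ij")
phi = np.stack([P1, P2], axis=-1)
quad = np.einsum("...i,ij,...j->...", phi, J, phi)
arg = np.einsum("ij,...j->...i", J, phi) + h        # (J.phi + h)_i
lncosh = np.log(np.cosh(arg)).sum(axis=-1)
integral = np.exp(-0.5 * quad + lncosh).sum() * du**2
Z_hs = np.sqrt((2 / np.pi) ** 2 * np.linalg.det(J)) * integral

print(Z_direct, Z_hs)  # the two agree to grid accuracy
```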

Now comes the approximation: to obtain a more tractable expression, let’s consider the case where the external magnetic field is very small as we did above. In this case, since the spin interactions don’t induce any preferred direction, we expect the mean field to be centered near zero, i.e., {|\mu_i|\ll1}. We can then expand

\displaystyle \ln\cosh\mu_i=\frac{1}{2}\mu_i^2-\frac{1}{12}\mu_i^4+O(\mu_i^6)~, \ \ \ \ \ (26)

whereupon the partition function becomes

\displaystyle Z\approx\left[\left(\frac{2}{\pi}\right)^N\!\mathrm{det}J^{-1}\right]^{1/2}\!e^{-\frac{1}{2}J^{-1}_{ij}h^ih^j}\!\int\!\mathrm{d}^N\mu\exp\!\left[-\frac{1}{2}J^{-1}_{ij}\mu^i\mu^j +J^{-1}_{ij}h^i\mu^j +\sum_i\left(\frac{1}{2}\mu_i^2-\frac{1}{12}\mu_i^4\right)\right]~. \ \ \ \ \ (27)
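The expansion (26) is easy to sanity-check numerically; the next term in the series is {+\tfrac{1}{45}\mu_i^6}, so the truncation error at small {\mu_i} should be of order {\mu_i^6}:

```python
import numpy as np

# ln cosh(x) vs its quartic truncation, eq. (26); error should be O(x^6).
x = np.linspace(-0.3, 0.3, 61)
exact = np.log(np.cosh(x))
approx = x**2 / 2 - x**4 / 12
err = np.max(np.abs(exact - approx))
print(err)  # of order 0.3**6 / 45, i.e. roughly 1.6e-5
```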

Finally, we take the continuum limit, in which we label the field at each site by the {d}-dimensional vector {\mathbf{x}} (i.e., {\mu_i\rightarrow\mu(\mathbf{x})} and {\sum\nolimits_i\rightarrow\int\!\mathrm{d}^dx=\int\!\mathrm{d}\mathbf{x}}), and obtain the path-integral measure

\displaystyle \left[\left(\frac{2}{\pi}\right)^N\!\mathrm{det}J^{-1}\right]^{1/2}\!e^{-\frac{1}{2}J^{-1}_{ij}h^ih^j}\!\int\!\mathrm{d}^N\mu \;\longrightarrow\;\mathcal{N}\!\int\!\mathcal{D}\mu~. \ \ \ \ \ (28)

Thus the continuum field theory for the Ising model is

\displaystyle \begin{aligned} Z\approx\mathcal{N}\!\int\!\mathcal{D}\mu\exp\bigg\{\!&-\frac{1}{2}\int\!\mathrm{d}\mathbf{x}\mathrm{d}\mathbf{y}\,\mu(\mathbf{x})J^{-1}(\mathbf{x}-\mathbf{y})\big[\mu(\mathbf{y})-2h\big]\\ &+\frac{1}{2}\int\!\mathrm{d}\mathbf{x}\left[\mu(\mathbf{x})^2-\frac{1}{6}\mu(\mathbf{x})^4\right]\bigg\}~, \end{aligned} \ \ \ \ \ (29)

where {h(\mathbf{x})=h}, since the external magnetic field is the same for all lattice sites, cf. the definition of {\hat h^i} below (1).

In obtaining (29), I haven’t made any assumptions about the form of the coupling matrix {J}, except that it be a symmetric invertible matrix, with no self-interactions ({J_{ii}=0}). Typically however, we’re interested in the case in which the Hamiltonian (1) includes only nearest-neighbor interactions — as we eventually specified in our lattice model, cf. (10) — and we’d like to preserve this notion of locality in the field theory. To do this, we take {|\mathbf{y}-\mathbf{x}|\ll 1} and Taylor expand the field {\mu(\mathbf{y})} around {\mathbf{x}}:

\displaystyle \mu(\mathbf{y})=\mu(\mathbf{x})+(y^i-x^i)\partial_i\mu(\mathbf{x})+\frac{1}{2}(y^i-x^i)(y^j-x^j)\partial_i\partial_j\mu(\mathbf{x})+O\left((\mathbf{y}-\mathbf{x})^3\right)~. \ \ \ \ \ (30)

That is, we view {J^{-1}(\mathbf{y}-\mathbf{x})} as mediating interactions between fields at infinitesimally separated points in space, with increasingly non-local (i.e., higher-derivative) terms suppressed by powers of the separation. Upon substituting this expansion into (29), and working to second-order in this local expansion, one obtains a partition function of the form

\displaystyle Z\approx\mathcal{N}\!\int\!\mathcal{D}\mu\,e^{-S[\mu]}~, \ \ \ \ \ (31)


\displaystyle S[\mu]=\int\!\mathrm{d}^d\mathbf{x}\left[\frac{1}{2}\kappa(\nabla\mu)^2-h\mu+\frac{1}{2}\tilde r\mu^2+\frac{\tilde g}{4!}\mu^4\right]~, \ \ \ \ \ (32)

where the coefficients {\kappa}, {\tilde r}, and {\tilde g} are some (analytic) functions of the physical parameters, and can be expressed in terms of the zero modes of the inverse coupling matrix. I’m not going to go through the details of that computation here, since a great exposition is already available in the answer to this post on Stack Exchange (note however that they do not keep the linear {h\mu} term).

The main lesson of this field-theoretic exercise is that MFT is nothing more than the leading saddle point of (31). Denoting the minimum {\mu_0}, and expanding the action to second order in the fluctuations {(\mu-\mu_0)}, we have

\displaystyle Z\approx\mathcal{N} e^{-S[\mu_0]}\int\!\mathcal{D}\mu\,e^{-\frac{1}{2}(\mu-\mu_0)^2S''[\mu_0]}~, \ \ \ \ \ (33)

where the prime denotes variation with respect to {\mu}, and the linear term has vanished by definition, i.e., {\mu_0} is given by

\displaystyle \frac{\delta S}{\delta\mu}=0 \quad\implies\quad \kappa\nabla^2\mu_0=-h+\tilde r\mu_0+\frac{\tilde g}{6}\mu_0^3~, \ \ \ \ \ (34)

where we have applied integration by parts and assumed that the field vanishes at infinity. If we then keep only the leading-order saddle point, the partition function is given entirely by the prefactor. Since the external field {h} is constant, we may take the saddle to be homogeneous, {\nabla\mu_0=0}, whereupon the action (32) evaluates to

\displaystyle Z\approx\mathcal{N} e^{-S[\mu_0]} \quad\mathrm{with}\quad S[\mu_0]=V\left(\frac{\tilde r}{2}\mu_0^2+\frac{\tilde g}{4!}\mu_0^4-h\mu_0\right)~, \ \ \ \ \ (35)

where {V=\int\!\mathrm{d}^d\mathbf{x}} is the (formally infinite) spatial volume, so that the free energy density is

\displaystyle f_\textrm{\tiny{sp}}=\frac{1}{\beta}\left(\frac{\tilde r}{2}\mu_0^2+\frac{\tilde g}{4!}\mu_0^4-h\mu_0\right) =\frac{\hat r}{2}\hat\mu_0^2+\frac{\hat g}{4!}\hat\mu_0^4-\hat h\hat\mu_0~, \ \ \ \ \ (36)

where the subscript “sp” stands for “saddle point”, and we have dropped the non-dynamical {\ln\mathcal{N}} term. In the second equality, we have simply extracted the factors of {\beta} from {\mu_0} by defining {\mu_0=\beta\hat\mu_0} (cf. the absorption of {\beta} into the coefficients {J,h} in (18), and the definition of {\mu} below (22)), with {\hat r\equiv\beta\tilde r}, {\hat g\equiv\beta^3\tilde g}, and {\hat h\equiv h}. For {h\!=\!0}, this is formally identical to {f_\textrm{\tiny{MF}}} obtained above, cf. (16)! (By this point the two theories are technically different, though Kopietz et al. [1] do give an argument as to how one might match the coefficients; otherwise one can compute them explicitly via Fourier transform as mentioned above).
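For a spatially constant field, the gradient term in the saddle-point condition (34) drops out and we’re left with a cubic equation. A minimal numerical sketch, with sample couplings {\tilde r=-1}, {\tilde g=6}, {h=0} (i.e., in the broken phase below {T_c}; these values are illustrative, not derived from the lattice model):

```python
import numpy as np

# Homogeneous saddle of (34): r*mu + (g/6)*mu^3 = h, with sample couplings.
r, g, h = -1.0, 6.0, 0.0   # r < 0, i.e., the broken (T < Tc) phase

roots = np.roots([g / 6, 0.0, r, -h])      # (g/6) mu^3 + r mu - h = 0
roots = np.real(roots[np.isreal(roots)])

# Landau free-energy density for constant mu (integrand of (32), no gradient):
f = lambda m: 0.5 * r * m**2 + g / 24 * m**4 - h * m
mu0 = roots[np.argmin(f(roots))]           # pick the global minimum

print(sorted(roots), mu0)  # roots 0 and ±1; degenerate minima at ±1 for h = 0
```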

Now suppose we kept the leading correction to the MFT result, given by the quadratic term in the path integral (33). For {S''[\mu_0]}, we have the operator

\displaystyle K\equiv\frac{\delta^2S}{\delta\mu_x\delta\mu_y}=\left(-\kappa\nabla^2+\tilde r+\frac{\tilde g}{2}\mu_0^2\right)\delta^d(\mathbf{x}-\mathbf{y})~, \ \ \ \ \ (37)

where {\mu_x\equiv\mu(\mathbf{x})}. Substituting this into (33) and doing the Gaussian integral, one finds that the contribution from this term is {\tfrac{1}{2}\mathrm{tr}\ln K}, i.e., half the sum of the logarithms of the eigenvalues of the operator {K}. I’m not going to go through this in detail, since this post is getting long-winded, and McGreevy’s notes already do a great job. The result is an additional contribution to the free energy that shifts the location of the critical point. Whether or not these higher-order corrections are important depends not only on the size of the fluctuations, but also on the spatial dimension of the system. It turns out that for systems in the Ising universality class (that is, systems whose critical points are characterized by the same set of critical exponents), the MFT result is good enough in {d\!\geq\!4}, but the fluctuations diverge in {d\!<\!4} and hence render its conclusions invalid. We’ll give a better explanation for this dimension-dependent validity below.
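To see where the eigenvalues of {K} enter, note that for a quadratic action the Gaussian integral gives {Z\propto(\det K)^{-1/2}}, so the fluctuation contribution to the free energy is {\tfrac{1}{2}\ln\det K=\tfrac{1}{2}\sum_k\ln\lambda_k}. A toy sketch on a 1d periodic lattice, with the discrete Laplacian plus a sample mass term (values illustrative, not tied to the Ising couplings above):

```python
import numpy as np

# Fluctuation operator K = -laplacian + m^2 on a 1d periodic lattice.
N, m2 = 64, 0.5
K = np.zeros((N, N))
for i in range(N):
    K[i, i] = 2.0 + m2
    K[i, (i + 1) % N] = K[i, (i - 1) % N] = -1.0

# Eigenvalues of the periodic lattice Laplacian are known in closed form:
k = 2 * np.pi * np.arange(N) / N
lam = 2.0 - 2.0 * np.cos(k) + m2

# Fluctuation free energy: (1/2) sum_k ln(lambda_k) = (1/2) ln det K.
F_fluc = 0.5 * np.sum(np.log(lam))
print(F_fluc, 0.5 * np.linalg.slogdet(K)[1])  # the two agree
```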

There’s another way to see the breakdown of MFT at the critical point in a manner that makes more transparent the precise role of the higher-order terms in the expansion, via the renormalization group. Suppose we’d included all higher-order terms in (32)—that is, all terms consistent with the symmetries of the problem (rotation & translation invariance, and {\mathbb{Z}_2} symmetry if {h\!=\!0}). The result is called the Ginzburg-Landau action, after the eponymous authors who first used it to study systems near critical points. Now observe that the field {\mu} has mass dimension {\tfrac{d-2}{2}}, so the squared mass {\tilde r} has dimension 2, the quartic coupling {\tilde g} has dimension {4\!-\!d}, a sextic coupling would have dimension {6\!-\!2d}, and so on. Recall that a coupling is relevant if its mass dimension {\Delta\!>\!0} (since the dimensionless coupling carries a factor of {E^{-\Delta}}, e.g., {m^2=\tilde r/E^2}), irrelevant if {\Delta\!<\!0} (since it runs like {E^\Delta}), and marginal if {\Delta\!=\!0}. Thus we see that the quadratic term is always relevant, and that higher-order corrections are increasingly suppressed under RG in a dimension-dependent manner.
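The power counting in this paragraph is easy to automate: with {[\mu]=\tfrac{d-2}{2}}, the coupling multiplying {\mu^n} has mass dimension {\Delta_n=d-n\cdot\tfrac{d-2}{2}}. A quick tabulation:

```python
# Mass dimension of the coupling multiplying mu^n in d dimensions:
# [mu] = (d-2)/2, so Delta_n = d - n*(d-2)/2. Relevant if Delta > 0,
# irrelevant if Delta < 0, marginal if Delta = 0.
def dim(n, d):
    return d - n * (d - 2) / 2

for d in (2, 3, 4, 5):
    print(d, {n: dim(n, d) for n in (2, 4, 6)})
# e.g. in d = 3 the quartic has Delta = 1 (relevant) and the sextic Delta = 0
# (marginal); in d = 4 the quartic is marginal; in d = 2 every Delta_n = 2.
```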

So, a more sophisticated alternative to our particular MFT attempt above — where we kept the quartic {\mu_0^4} term in the saddle point — is to compute the Gaussian path integral consisting of only the quadratic contribution, and treat the quartic and higher terms perturbatively. (As these originally arose from higher-order terms in the Taylor expansion, this is morally in line with simply taking {\mu_0\ll1} in the MFT result {f_\textrm{\tiny{sp}}}). Treating the action as a Gaussian integral also allows us to obtain a simple expression for the two-point correlator that captures the limiting behaviour in which we’re primarily interested. That is, tying all this back to the information theory / neural network connections alluded to in the introduction, we’re ultimately interested in understanding the propagation of information near the critical point, so understanding how correlation functions behave in the leading-order / MFT / saddle point approximation — and how perturbative corrections from fluctuations might affect this — is of prime importance.

We thus consider the partition function (31), with the quadratic action

\displaystyle S[\mu]=\int\!\mathrm{d}^d\mathbf{x}\left[\frac{1}{2}(\nabla\mu(\mathbf{x}))^2+\frac{m^2}{2}\mu(\mathbf{x})^2-h\mu(\mathbf{x})\right]~, \ \ \ \ \ (38)

where we’ve set {\kappa\!=\!1} and relabelled the quadratic coefficient {m^2}. Evaluating partition functions of this type is a typical exercise in one’s first QFT course, since the action now resembles that of a free massive scalar field, where {-h} plays the role of the source (normally denoted {J}). The basic prescription is to Fourier transform to momentum space, where the modes decouple, and then absorb the remaining source-independent term into the overall normalization. The only difference here is that we’re in Euclidean rather than Lorentzian signature, so there are no issues of convergence; see for example Tong’s statistical field theory notes for a pedagogical exposition. The result is

\displaystyle Z\simeq\exp\frac{1}{2}\!\int\!\frac{\mathrm{d}^dk}{(2\pi)^d}\frac{\tilde h_{\mathbf{k}}\tilde h_{-\mathbf{k}}}{k^2+m^2} =\exp\frac{1}{2}\!\int\!\mathrm{d}^dx\mathrm{d}^dy\,h(\mathbf{x})G(\mathbf{x}-\mathbf{y})h(\mathbf{y})~, \ \ \ \ \ (39)

where the position-dependence of the external field {h(\mathbf{x})} merely serves as a mnemonic, and {\tilde h_\mathbf{k}} is the same field in momentum space. In the second equality, we’ve simply Fourier transformed back to real space by identifying the propagator

\displaystyle G(\mathbf{x}-\mathbf{y})=\int\!\frac{\mathrm{d}^dk}{(2\pi)^d}\frac{e^{i\mathbf{k}\cdot(\mathbf{x}-\mathbf{y})}}{k^2+m^2}~, \ \ \ \ \ (40)

which describes the correlation between the field at {\mathbf{x}} and at {\mathbf{y}}. To see that this is indeed a correlation function, recall that the variance is given by the second cumulant:

\displaystyle \kappa_2=\frac{\delta^2\ln Z}{\delta h(\mathbf{x})\delta h(\mathbf{y})} \overset{\tiny{(38)}}{=} \langle\mu_x\mu_y\rangle-\langle\mu_x\rangle\langle\mu_y\rangle \overset{\tiny{(40)}}{=} G(\mathbf{x}-\mathbf{y})~, \ \ \ \ \ (41)

and thus {G} is indeed the connected 2-point correlator. (I should mention that in the present case, there’s a special name for this which seems to be preferred by condensed matter theorists: it’s the magnetic susceptibility, defined as the sensitivity of {s} with respect to {h},

\displaystyle \chi=\partial_hs~, \ \ \ \ \ (42)

where the connection arises by observing that the magnetization is none other than the mean (i.e., the first cumulant),

\displaystyle s=\frac{\partial\ln Z}{\partial h}=\langle\mu\rangle~. \ \ \ \ \ (43)

But I’ll continue to refer to it as the correlation function, or the connected Green function, since calling it the “susceptibility” obscures its deeper physical and information-theoretic significance. Actually seeing that this is a Green function does however require slightly more work.)

The evaluation of (40) is treated very nicely in the aforementioned notes by Tong. In brief, we proceed by defining a length scale {\xi^2=m^{-2}}, and use the identity

\displaystyle \int_0^\infty\!\mathrm{d} t\,e^{-t(k^2+\xi^{-2})}=\frac{1}{k^2+1/\xi^2} \ \ \ \ \ (44)

to massage the integral into the following form:

\displaystyle G(r)=\frac{1}{(4\pi)^{d/2}}\int_0^\infty\!\mathrm{d} t\,t^{-d/2}e^{-r^2/4t-t/\xi^2} \ \ \ \ \ (45)

which is obtained by completing the square in the exponential and performing the integral over {\mathrm{d}^dk}; we’ve also used rotation invariance with {r=|\mathbf{x}-\mathbf{y}|} (not to be confused with the old name for the quadratic coefficient).

As will shortly become apparent, {\xi} is the correlation length that determines the size of fluctuations, and hence the spatial structure of correlations. Since we’re primarily interested in the limiting cases where {r\gg\xi} and {r\ll\xi}, it is more illuminating to evaluate the integral via saddle point, rather than to preserve the exact form (which, as it turns out, can be expressed as a Bessel function). We thus exponentiate the {t^{-d/2}} factor to write

\displaystyle G(r)=\frac{1}{(4\pi)^{d/2}}\int_0^\infty\!\mathrm{d} t\,e^{-P(t)}~, \qquad P(t)\equiv\frac{r^2}{4t}+\frac{t}{\xi^2}+\frac{d}{2}\ln t~, \ \ \ \ \ (46)

so that we have, to second order,

\displaystyle G(r)\sim \sqrt{\frac{2\pi}{P''(t_*)}}\,e^{-P(t_*)}~, \ \ \ \ \ (47)

where the saddle point {t_*} is given by

\displaystyle P'(t_*)=0\implies t_*=\frac{\xi^2}{2}\left(-\frac{d}{2}+\sqrt{\frac{d^2}{4}+\frac{r^2}{\xi^2}}\right) \approx\begin{cases} \frac{r^2}{2d}\; & r\ll\xi~,\\ \frac{r\xi}{2}\; & r\gg\xi~. \end{cases} \ \ \ \ \ (48)

Substituting these case values into (47), we find the following limiting behaviour for the correlation function:

\displaystyle G(r)\sim\begin{cases} \frac{1}{r^{d-2}}\; & r\ll\xi~,\\ \frac{e^{-r/\xi}}{r^{(d-1)/2}}\; & r\gg\xi~. \end{cases} \ \ \ \ \ (49)
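As a check on the limits (49), in {d=3} the integral (45) can actually be done in closed form, giving the familiar Yukawa form {G(r)=\tfrac{1}{4\pi r}e^{-r/\xi}}, which indeed interpolates between the two behaviours above. A quick numerical comparison (with {\xi=1}):

```python
import numpy as np
from scipy.integrate import quad

# Heat-kernel representation (45) in d = 3, xi = 1, vs the closed-form
# Yukawa propagator exp(-r/xi)/(4 pi r).
d, xi = 3, 1.0

def G(r):
    integrand = lambda t: t ** (-d / 2) * np.exp(-r**2 / (4 * t) - t / xi**2)
    val, _ = quad(integrand, 0.0, np.inf)
    return val / (4 * np.pi) ** (d / 2)

for r in (0.5, 1.0, 3.0):
    exact = np.exp(-r / xi) / (4 * np.pi * r)
    print(r, G(r), exact)  # agree to quadrature accuracy
```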

Recalling that {m^2\sim|T-T_c|} near the critical point, we see that the correlation length diverges as

\displaystyle \xi\sim\frac{1}{|T-T_c|^{1/2}} \ \ \ \ \ (50)

as the system approaches criticality. This means that at the critical point, we are always in the regime {r\ll\xi}, and hence the correlator exhibits a power law divergence. Another way to say this is that there is no longer any length scale in the problem (since that role was played by {\xi}, which has gone to infinity). This is why the divergence of the correlator at criticality must be a power law: any other function would require a length scale on dimensional grounds.

In the previous MFT treatment, we mentioned that fluctuations can change the results. From the RG perspective, this is because the quadratic coupling (which determines the location of the critical point) may be adjusted in the renormalization process as we integrate out UV modes. (In fact, we saw an explicit example of this in our post on deep learning and the renormalization group). The lower the dimension, the more relevant operators we need to take into account; in particular, all operators are relevant in {d\!=\!2}, so the saddle point approximation is exceedingly poor. In contrast, as the dimension increases, more and more operators are suppressed under RG flow. In the lattice picture, we can understand the fact that MFT gets more accurate in higher dimensions by noting that more dimensions means more neighbors, and hence approximating the degrees of freedom by the mean field is more likely to be accurate.

Finally, let us return to the comment we made at the beginning of this post, namely that the correlation length diverges at a critical point. This is another way of understanding the breakdown of MFT, since a divergent correlation length implies that fluctuations on all scales are important (and hence we neglect them at our peril). Explicitly, MFT (broadly interpreted) is valid when the fluctuations are much smaller than the mean or background field around which they’re fluctuating, i.e., when the connected correlator satisfies {\langle\mu(\mathbf{x})\mu(0)\rangle_c\ll\langle\mu\rangle^2}. Tong offers a clean way to see the dimensional dependence explicitly: simply integrate both quantities over a ball of radius {\xi} and compare the ratio

\displaystyle R\equiv\frac{\int_0^\xi\!\mathrm{d}^dx\,\langle\mu(\mathbf{x})\mu(0)\rangle_c}{\int_0^\xi\!\mathrm{d}^dx\,\langle\mu\rangle^2} \simeq \frac{1}{\mu_0^2\xi^d}\int_0^\xi\!\mathrm{d} r\frac{r^{d-1}}{r^{d-2}} \simeq\frac{\xi^{2-d}}{\mu_0^2} \sim|T-T_c|^{(d-4)/2}~, \ \ \ \ \ (51)

where {\mu_0=\langle\mu\rangle} is the mean field from above, and in the last step we have used the scaling behaviours of {\xi\sim|T-T_c|^{-1/2}} and {\mu_0\sim|T-T_c|^{1/2}} (the latter can be obtained by minimizing the quartic result for {f_\textrm{\tiny{MF}}} (16)). Upon demanding that this ratio be much less than unity, we see that the MFT results (for the Ising universality class) are only trustworthy in {d\geq4} dimensions. (The case {d\!=\!4} actually requires a more careful RG treatment due to the logarithmic divergence; see Tong’s notes for more details).
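This power counting is simple enough to check directly: using {G(r)\sim r^{2-d}} inside the ball, and the scalings {\xi\sim t^{-1/2}}, {\mu_0^2\sim t} with reduced temperature {t=|T-T_c|}, the ratio behaves as {t^{(d-4)/2}}:

```python
import numpy as np

# Ginzburg ratio R ~ xi^{2-d} / mu0^2 ~ t^{(d-4)/2}, with t = |T - Tc| -> 0.
def ginzburg_ratio(d, t):
    xi, mu0sq = t ** (-0.5), t
    # numerator: integral of r^{d-1} * r^{2-d} from 0 to xi  ->  xi^2 / 2
    return (xi**2 / 2) / (mu0sq * xi**d)

t = np.array([1e-2, 1e-4, 1e-6])       # approaching the critical point
for d in (3, 4, 5):
    print(d, ginzburg_ratio(d, t))
# d = 3: ratio blows up (MFT fails); d = 5: ratio -> 0 (MFT trustworthy);
# d = 4 is marginal (constant here; a careful RG treatment gives logarithms).
```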

To summarize: MFT is a useful approximation method that averages over interactions and enables one to obtain closed-form expressions of otherwise intractable partition functions. It is tantamount to the saddle point approximation, and — in the context of RG — may be qualitatively altered by any relevant higher-order terms. While these corrections can potentially shift the location of the critical point however, the basic fact that the correlation function diverges at criticality remains unchanged. As we’ll see in part 2, it is this feature that makes phase transitions interesting from a computational perspective, since it means that the propagation of information at this point is especially stable.



1. P. Kopietz, L. Bartosch, and F. Schütz, “Mean-field Theory and the Gaussian Approximation,” Lect. Notes Phys. 798 (2010).


Black hole thermodynamics, quantum puzzles, and the holographic principle

I was asked to give a lecture on “quantum puzzles and black holes” at the 20th Jürgen Ehlers Spring School, which was to be hosted at AEI this week. Unfortunately the school was cancelled due to the SARS-CoV-2 pandemic, but since I enjoyed researching the topic so much, I thought I’d make a post of it instead. Part of what made preparing for this lecture so interesting is that the students — primarily undergrads bordering on Masters students — hadn’t had quantum field theory (QFT), which meant that if I wanted to elucidate, e.g., the firewall paradox or the thermal nature of horizons in general, I’d have to do so without recourse to the standard toolkit. And while there’s a limit to how far one can get without QFT in curved spacetime, it was nice to go back and revisit some things that long familiarity had led me to take for granted.

Accordingly, I’ve endeavored to make this post maximally pedagogical, assuming only basic general relativity (GR) and a semblance of familiarity with undergraduate quantum mechanics and statistical thermodynamics. I’ll start by introducing black hole thermodynamics, which leads to the conclusion that black holes have an entropy given by a quarter the area of their event horizons in Planck units. Then in the second section, I’ll discuss some quantum puzzles that arise in light of Hawking’s discovery that black holes radiate, which seems to imply that information is lost as they evaporate, in violation of quantum mechanics. In the third and final section, I’ll explain how the considerations herein gave rise to the holographic principle, one of the deepest revelations in physics to date, which states that the three-dimensional world we observe is described by a two-dimensional hologram.

1. Black hole thermodynamics

Classically, black hole thermodynamics is a formal analogy between black holes and statistical thermodynamics. It was originally put forth by Jacob Bekenstein in his landmark 1973 paper [1], in which he proposed treating black holes thermodynamically, and argued that the entropy should be proportional to the area of the event horizon. Let’s start by examining the idea of black holes as thermodynamic objects, and build up to the (in)famous entropy-area relation as we go.

As I’ve mentioned before, black holes must be endowed with an entropy in order to avoid violating the second law of thermodynamics; otherwise, one could decrease the entropy of the universe simply by dropping anything into the black hole. Taking entropy as a measure of our ignorance — equivalently, as a measure of the inaccessibility of the internal configuration — this is intuitive, since the degrees of freedom comprising whatever object one dropped in are now hidden behind the horizon and should thus be counted among the internal microstates of the black hole. Furthermore, one knows from Hawking’s area theorem [2] that the surface area of a classical black hole is non-decreasing, and thus the dynamics of black holes appears to select a preferred direction in time, analogous to the thermodynamic arrow of time that results from the fact that entropy (of any closed thermodynamic system) always increases. This led Bekenstein [1] to propose that one could “develop a thermodynamics of black holes”, in which entropy is precisely related to the area of the horizon, {S\sim A} (here “{\sim}” means “proportional to”; we’ll fix the coefficient later).

Thermodynamically, entropy is an extensive property, so associating the entropy to some function of the size of the black hole makes sense. But why {S\sim A}, specifically? In statistical mechanics, the entropy generally scales with the volume of the system, so one might naïvely have expected {S\sim V}. Indeed, one of the most remarkable aspects of black holes is that the entropy scales with the area instead of the volume. Insofar as black holes represent the densest possible configuration of energy — and hence of information — this implies a drastic reduction in the (maximum) number of degrees of freedom in the universe, as I’ll discuss in more detail below. However, area laws for entanglement entropy are actually quite common; see for example [3] for a review. And while the ultimate source of black hole entropy (that is, the microscopic degrees of freedom it’s counting) is an ongoing topic of current research, the entanglement between the interior and exterior certainly plays an important role. But that’s a QFT calculation, whereas everything I’ve said so far is purely classical. Is there any way to see that the entropy must scale with {A} instead of {V}, without resorting to QFT in curved space or the full gravitational path integral?

In fact, there’s a very simple reason the entropy must scale with the area: the interior volume of a black hole is ill-defined. Consider the Schwarzschild metric

\displaystyle \mathrm{d} s^2=-f(r)\mathrm{d} t^2+\frac{\mathrm{d} r^2}{f(r)}+r^2\mathrm{d}\Omega^2~, \quad\quad f(r)=1-\frac{r_s}{r}~, \ \ \ \ \ (1)

where {r_s=2M} is the Schwarzschild radius and {\mathrm{d}\Omega^2=\mathrm{d}\theta^2+\sin^2\!\theta\,\mathrm{d}\phi^2}. For {r>r_s}, the metric is static: the spatial components look the same for any value of {t}. But inside the black hole, {r<r_s}, and hence {f(r)<0}. This makes the {\mathrm{d} t^2} component positive and the {\mathrm{d} r^2} component negative, so that space and time switch roles in the black hole interior. Consequently, the “spatial” components are no longer static inside the black hole, since they will continue to evolve with {t}. Thus the “volume” of the black hole interior depends on time, and in fact on one’s choice of coordinates in general. (This isn’t too strange, if you think about it: the lesson of general relativity is that spacetime is curved, so your quantification of “space” will generally depend on your choice of “time”).

The issue is clarified nicely in a paper by Christodoulou and Rovelli [4] (be warned however that while the GR calculations in this paper are totally solid, the discussion of entropy in section VIII is severely flawed). The crux of the matter is that our usual definition of “volume” doesn’t generalize to curved spacetime. In flat (Minkowski) spacetime, we define volume by picking a Cauchy slice, and consider the spacelike 3d hypersurface {\Sigma} bounded by some 2d sphere {B} on that slice. But when space is curved, there are many different constant-{t} slices we can choose, none of which has any special status (in GR, the coordinates don’t matter). Suppose for example we tried to calculate the interior volume in Schwarzschild coordinates (1). Our flat-space intuition says to pick a constant-{t} slice bounded by some surface {B} (in this case, the horizon itself), and integrate over the enclosed hypersurface {\Sigma}:

\displaystyle V=\int_\Sigma\!\mathrm{d}^3x\sqrt{g}~, \ \ \ \ \ (2)

where {g} is the determinant of the (induced) metric on {\Sigma}. Along a timeslice, {\mathrm{d} t=0}, so we have

\displaystyle V=4\pi\int\!\mathrm{d} r\,r^2\left(1-\tfrac{r_s}{r}\right)^{-1/2}~. \ \ \ \ \ (3)

But the Schwarzschild coordinates break down at the horizon, so the upper and lower limits of the remaining integral are the same, {r=r_s}, and the integral vanishes. Thus the Schwarzschild metric would lead one to conclude that the “volume” of the black hole is zero! (Technically the integral is ill-defined at {r=r_s}, but one obtains the same result by changing the outer limit to {r=r_s+\epsilon} and taking the limit {\epsilon\rightarrow0} [5]).

Let’s try a different coordinate system, better suited to examining the interior. Define the new time variable

\displaystyle T=t+r_s\left(2\sqrt{\frac{r}{r_s}}+\ln\left|\frac{\sqrt{\tfrac{r}{r_s}}-1}{\sqrt{\tfrac{r}{r_s}}+1}\right|\right)~, \ \ \ \ \ (4)

in terms of which the metric (1) becomes

\displaystyle \mathrm{d} s^2=-f(r)\mathrm{d} T^2+2\sqrt{\frac{r_s}{r}}\,\mathrm{d} T\mathrm{d} r+\mathrm{d} r^2+r^2\mathrm{d}\Omega^2~. \ \ \ \ \ (5)
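One can verify this coordinate change directly: substituting {\mathrm{d} T=\mathrm{d} t+\partial_rT\,\mathrm{d} r} from (4) into (5) should reproduce the Schwarzschild form (1). A quick symbolic sketch (valid in the exterior, {r>r_s}):

```python
import sympy as sp

# Check that the coordinate change (4) turns the metric (5) back into (1).
t, r, rs = sp.symbols('t r r_s', positive=True)
f = 1 - rs / r
u = sp.sqrt(r / rs)
T = t + rs * (2 * u + sp.log((u - 1) / (u + 1)))   # eq. (4), for r > rs
Tp = sp.simplify(sp.diff(T, r))                    # dT = dt + Tp dr

# Substitute into (5) and collect the coefficients of dt^2, dt dr, dr^2:
g_tt = -f                                          # coefficient of dt^2
g_tr = -f * Tp + sp.sqrt(rs / r)                   # coefficient of 2 dt dr
g_rr = -f * Tp**2 + 2 * sp.sqrt(rs / r) * Tp + 1   # coefficient of dr^2

print(sp.simplify(g_tr), sp.simplify(g_rr - 1 / f))  # both should vanish
```

The cross term cancels and {g_{rr}\rightarrow1/f}, recovering (1).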

These are Gullstrand-Painlevé (GP) coordinates. They’re relatively unfamiliar, but have some useful properties; see for example [6], in which my colleagues and I utilized them in the context of the firewall paradox during my PhD days. Unlike the Schwarzschild coordinates, they cover both the exterior region and the black hole interior. They look like this:

[Figure omitted: the GP coordinate grid on the Schwarzschild geometry.]
where constant {T} slices are in yellow, and constant {r} slices are in green. One neat thing about these coordinates is that {T} is the proper time of a free-falling observer who starts from rest at infinity. (Somewhat poetically, they’re the natural coordinates that would be associated to a falling drop of rain, and are sometimes called “rain-frame coordinates” for this reason). Another neat thing about them is that the constant-{T} slices are flat! Thus if we attempt to calculate the interior volume along one such Cauchy slice, we simply recover the flat-space result,

\displaystyle V=4\pi\int_0^{r_s}\mathrm{d} r\,r^2=\frac{4}{3}\pi r_s^3~, \ \ \ \ \ (6)

and thus the volume is constant, no matter what {T}-slice we choose; in other words, the observer can fall forever and never see less volume! See [5] for a pedagogical treatment of the volume calculation in some other coordinate systems, which again yield different results.

The above examples illustrate the fact that in general, there are many (in fact, infinitely many!) different choices of {\Sigma} within the boundary sphere {B}, and we need a slightly more robust notion of volume to make sense in curved spacetime. As Christodoulou and Rovelli point out, a better definition for {\Sigma} is the largest spherically symmetric surface bounded by {B}. This reduces to the familiar definition above in Minkowski space, but extends naturally and unambiguously to curved spacetime as well. Thus the basic idea is to first fix the boundary sphere {B}, and then extremize over all possible interior 3d hypersurfaces {\Sigma}. For a Schwarzschild black hole, in the limit where the null coordinate {v=r+t} is much greater than {M} (i.e., at late times), one finds [4]

\displaystyle V\rightarrow3\sqrt{3}\pi M^2 v~. \ \ \ \ \ (7)

Thus, the interior volume of a black hole continues to grow linearly for long times, and can even exceed the volume of the visible universe!

Whether one thinks of entropy as a measure of one’s ignorance of the interior given the known exterior state, or a quantification of all possible microstates given the constraints of mass (as well as charge and angular momentum for a Kerr-Newman black hole), it should not depend on the choice of coordinates, or continue growing indefinitely while the surface area (i.e., the boundary between the known and unknown regions) remains fixed. Thus if we want a sensible, covariant quantification of the size of the black hole, it must be the area. (Note that the area is more fundamental than the radius: the latter is defined in terms of the former (equivalently, in terms of the mass), rather than by measuring the distance from {r\!=\!0}, for the same reasons we encountered when attempting to define volume above). Since the event horizon is a null-surface, the area is coordinate-invariant; fixing {t} and {r} in the Schwarzschild metric then simply yields the area element of the 2-sphere,

\displaystyle \mathrm{d} s^2=r^2\mathrm{d}\Omega^2 \quad\longrightarrow\quad A=\int\!\mathrm{d} s^2=\int\!\mathrm{d}\Omega\sqrt{g} =4\pi r_s^2~. \ \ \ \ \ (8)

Thus areas, rather than volumes, provide the only covariant, well-defined measures of the spatial “size” of black holes.

Technically, this doesn’t prove that {S\sim A}, of course; it might logically have been some other function or power of the area, but this would be less natural on physical grounds (though {S\sim A^{1/2}} can be easily ruled out by considering a black hole merger [1]). And, while it’s a nice consistency check on the universe, it doesn’t really give any insight into why the degrees of freedom are ultimately bounded by the surface area, beyond the necessity-of-curved-space argument above.

There is however one problem with this identification: the entropy, in natural units, is dimensionless, while the area has units of length squared, so the mismatch must be remedied by the hitherto undetermined proportionality factor. As Bekenstein pointed out, there is no universal constant in GR alone that has the correct units; the only fundamental constant that fits the bill is the Planck length,

\displaystyle \ell_P=\sqrt{\frac{G\hbar}{c^3}}~. \ \ \ \ \ (9)

As Hawking was quick to show [7], the correct result is

\displaystyle S_{BH}=\frac{A}{4\ell_P^2}~, \ \ \ \ \ (10)

which is the celebrated Bekenstein-Hawking entropy of black holes. This is one of the most remarkable expressions in all of physics, insofar as it’s perhaps the only known example in which gravity ({G}), quantum mechanics ({\hbar}), and special relativity ({c}) all come together (thermodynamics too, if you consider that we set {k_B=1}).
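To appreciate just how large this entropy is, here's a quick evaluation of (10) in SI units for a solar-mass black hole; the numerical inputs are standard constants, rounded:

```python
# Back-of-the-envelope evaluation of the Bekenstein-Hawking entropy (10)
# for a solar-mass Schwarzschild black hole, with SI units restored.
import math

G    = 6.674e-11      # m^3 kg^-1 s^-2
c    = 2.998e8        # m / s
hbar = 1.055e-34      # J s
M    = 1.989e30       # kg (one solar mass)

l_P = math.sqrt(G * hbar / c**3)   # Planck length, ~ 1.6e-35 m
r_s = 2 * G * M / c**2             # Schwarzschild radius, ~ 3 km
A   = 4 * math.pi * r_s**2         # horizon area

S = A / (4 * l_P**2)               # entropy in units of k_B, ~ 1e77
print(f"S_BH ~ {S:.2e} k_B ~ {S / math.log(2):.2e} bits")
```

For comparison, the thermodynamic entropy of the Sun itself is of order {10^{58}\,k_B}, so collapsing it to a black hole increases the entropy by some nineteen orders of magnitude.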

Hawking’s calculation, and the myriad alternative derivations put forward since, require a full QFT treatment, so I’m not going to go into them here. If you’re interested, I’ve covered one such derivation based on the gravitational path integral before, and the case of a collapsing black hole that Hawking considered is reviewed in the classic textbook [8]. In the original paper [1] however, Bekenstein provides a very cute derivation which barely even requires quantum mechanics, and yet gets surprisingly close to the right answer. The basic idea is to calculate the minimum possible increase in the size of the black hole which, classically, would occur when we gently drop in a particle whose size is of order its own Compton wavelength (this is where the {\hbar} comes in). This can be related to the entropy on the basis that the loss of information is the entropy of a single bit, {\ln 2}, i.e., the answer to the yes-or-no question, “does the black hole contain the particle?” This line of reasoning yields {\tfrac{1}{2\pi}\ln2\,S_{BH}}; not bad, given that we ignored QFT entirely!

By now I hope I’ve convinced you of two facts: (1) black holes have an entropy, and (2) the entropy is given by the area of the horizon. This is the foundation on which black hole thermodynamics is built.

We are now in a position to write down the analogue of the fundamental thermodynamic relation for black holes. Recall that for a closed system in equilibrium,

\displaystyle \mathrm{d} U=T\mathrm{d} S-P\mathrm{d} V~, \ \ \ \ \ (11)

where {U} is the internal energy, {P} is the pressure, and {T}, {S}, and {V} are the temperature, entropy, and volume as above. The second term on the right-hand side represents the work done on the system by the environment (in this context, “closed” refers only to the transfer of mass or particles; the transfer of energy is still allowed). Supposing that this term is zero, the first term can be regarded as the definition of entropy for reversible processes, i.e., {\mathrm{d} S=\mathrm{d} Q/T} where {Q} is the heat.

Now consider, for generality, a charged, rotating black hole, described by the Kerr-Newman metric; in Boyer-Lindquist coordinates, this reads:

\displaystyle \begin{aligned} \mathrm{d} s^2=&~\frac{\Delta-a^2\sin^2\theta}{\rho^2}\,\mathrm{d} t^2 -\frac{\rho^2}{\Delta}\mathrm{d} r^2 -\rho^2\,\mathrm{d}\theta^2\\ &-\sin^2\!\theta\,\frac{\left( a^2+r^2\right)^2-a^2\Delta\sin^2\theta}{\rho^2}\,\mathrm{d}\phi^2 +\frac{2a\left( a^2+r^2-\Delta\right)\sin^2\theta}{\rho^2}\,\mathrm{d} t\mathrm{d}\phi~, \end{aligned} \ \ \ \ \ (12)

which reduces to the Schwarzschild black hole above when the charge {Q} and angular momentum {J} go to zero. For compactness, we have defined

\displaystyle a\equiv\frac{J}{M}~,\quad \Delta\equiv r^2-2 M r+a^2+Q^2~,\quad \rho^2\equiv r^2+a^2\cos^2\theta~. \ \ \ \ \ (13)

The {g_{rr}} component diverges when {\Delta=0}, which yields an inner ({r_-}) and outer ({r_+}) horizon:

\displaystyle r_\pm=M\pm\sqrt{M^2-a^2-Q^2}~. \ \ \ \ \ (14)

The inner horizon is generally thought to be unstable, while the outer is the event horizon whose area we’re interested in calculating. Setting {r} and {t} to constant values as above, the induced metric on the resulting 2d surface {B} is

\displaystyle \mathrm{d} s^2= -\rho^2\,\mathrm{d}\theta^2 -\sin^2\!\theta\,\frac{\left( a^2+r^2\right)^2-a^2\Delta\sin^2\theta}{\rho^2}\,\mathrm{d}\phi^2~. \ \ \ \ \ (15)

We can then consider the case where the radius {r=r_+}, whereupon {\Delta=0}, and the area is simply

\displaystyle A=\int\!\mathrm{d}\Omega\sqrt{g} =4\pi\left( r_+^2+a^2\right)~, \ \ \ \ \ (16)

which is fairly intuitive: we get the Schwarzschild result, plus an additional contribution from the angular momentum.
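The horizon radii (14) and area (16) are easy to play with numerically; here's a minimal sketch in geometric units, with illustrative parameter values:

```python
# Horizon radii (14) and horizon area (16) for a Kerr-Newman black hole,
# in geometric units (G = c = 1). Parameter values below are illustrative.
import math

def horizons(M, Q, J):
    a = J / M
    disc = M**2 - a**2 - Q**2
    if disc < 0:
        raise ValueError("naked singularity: M^2 < a^2 + Q^2")
    return M + math.sqrt(disc), M - math.sqrt(disc)   # (r_plus, r_minus)

def horizon_area(M, Q, J):
    r_plus, _ = horizons(M, Q, J)
    return 4 * math.pi * (r_plus**2 + (J / M)**2)

# Schwarzschild limit: r_plus = 2M, so A = 16*pi*M^2
print(horizon_area(1.0, 0.0, 0.0))   # ~ 50.27 (= 16*pi)
# Adding charge or spin at fixed mass shrinks the horizon:
print(horizon_area(1.0, 0.3, 0.2))
```

Note that the discriminant enforces the extremality bound {M^2\geq a^2+Q^2}; beyond it there is no horizon at all.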

Now, the area depends only on the mass {M}, charge {Q}, and angular momentum {J} (cf. the no-hair theorem), so a generic perturbation takes the form

\displaystyle \mathrm{d} A=\frac{\partial A}{\partial M}\,\mathrm{d} M+\frac{\partial A}{\partial Q}\,\mathrm{d} Q+\frac{\partial A}{\partial J}\,\mathrm{d} J~. \ \ \ \ \ (17)

Performing the derivatives and solving for {\mathrm{d} M}, one obtains the analogue of the fundamental thermodynamic relation (11) for black holes:

\displaystyle \mathrm{d} M=\frac{\kappa}{8\pi}\,\mathrm{d} A+\Omega\,\mathrm{d} J+\Phi\,\mathrm{d} Q~, \ \ \ \ \ (18)

where {\Omega} and {\Phi} are some functions of {M}, {Q}, and {J} that I’m not going to bother writing down, and {\kappa} is the surface gravity,

\displaystyle \kappa=\frac{r_+-r_-}{2\left( r_+^2+a^2\right)}=\frac{\sqrt{M^2-a^2-Q^2}}{2M^2-Q^2+2M\sqrt{M^2-a^2-Q^2}}~, \ \ \ \ \ (19)

which is the (normalized) gravitational acceleration experienced at the horizon. (“Normalized”, because the proper acceleration diverges at the horizon, so a meaningful value is obtained by dividing it by the gravitational time dilation factor, i.e., redshifting it to infinity).
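These formulas invite a quick numerical sanity check: the two closed-form expressions in (19) should agree, and the first law (18) implies that {\kappa/8\pi} equals {\partial M/\partial A} at fixed {Q} and {J}. A minimal sketch, with illustrative parameter values:

```python
# Consistency checks on the surface gravity (19) in geometric units: the two
# closed-form expressions agree, and kappa/(8*pi) matches dM/dA at fixed Q, J,
# as the first law (18) requires (dA/dM estimated by a central difference).
import math

def r_pm(M, Q, J):
    s = math.sqrt(M**2 - (J / M)**2 - Q**2)
    return M + s, M - s

def area(M, Q, J):
    rp, _ = r_pm(M, Q, J)
    return 4 * math.pi * (rp**2 + (J / M)**2)

def kappa_horizons(M, Q, J):
    rp, rm = r_pm(M, Q, J)
    return (rp - rm) / (2 * (rp**2 + (J / M)**2))

def kappa_closed(M, Q, J):
    s = math.sqrt(M**2 - (J / M)**2 - Q**2)
    return s / (2 * M**2 - Q**2 + 2 * M * s)

M, Q, J = 1.0, 0.3, 0.2                      # illustrative values
k1, k2 = kappa_horizons(M, Q, J), kappa_closed(M, Q, J)

eps = 1e-6
dA_dM = (area(M + eps, Q, J) - area(M - eps, Q, J)) / (2 * eps)
k3 = 8 * math.pi / dA_dM                     # first-law prediction for kappa

print(k1, k2, k3)   # all three agree
```

In the Schwarzschild limit ({Q=J=0}) this reduces to the familiar {\kappa=1/(4M)}.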

Each term in this expression for {\mathrm{d} M} has a counterpart in (11). We already identified the area with the entropy, cf. (10), and since the mass is the only relevant parameter in the problem, it plays the role of the internal energy {U}. The surface gravity {\kappa} corresponds to the temperature. So if we restricted to a Schwarzschild black hole, we’d have

\displaystyle \mathrm{d} U=T\mathrm{d} S\qquad \longleftrightarrow\qquad \mathrm{d} M=\frac{\kappa}{8\pi}\mathrm{d} A~, \ \ \ \ \ (20)

which just canonizes the relationship between entropy and area we uncovered above, with {\kappa\sim T}. What about the other terms? As mentioned above, the {-P\mathrm{d} V} term in (11) corresponds to the work done to the system. And as it turns out, there’s a way of extracting energy from a (charged, rotating) black hole, known as the Penrose process. I don’t have the spacetime to go into this here, but the upshot is that the parameters {\Omega} and {\Phi} in (18) correspond to the angular velocity of the horizon and the electric potential, respectively, so that {\Omega\,\mathrm{d} J+\Phi\,\mathrm{d} Q} is indeed the analogue of the work that the black hole could perform on some external system; i.e.,

\displaystyle -P\mathrm{d} V\qquad \longleftrightarrow\qquad \Omega\,\mathrm{d} J+\Phi\,\mathrm{d} Q~. \ \ \ \ \ (21)

And of course, energy that can’t be extracted as work is another way of describing entropy, so even if you could extract all the angular momentum and charge from the black hole, you’d still be left with what Bekenstein calls the “degradation energy” [1], which is the area term (20) (determined by the irreducible mass).

That’s all I wanted to say about black hole thermodynamics here, though the analogy we’ve established above can be fleshed out more thoroughly, complete with four “laws of black hole thermodynamics” in parallel to the classic set. See for example my earlier post on firewalls, or the review by Jacobson [9], for more details. However, I’ve been glossing over a critical fact, namely that at the classical level, black holes are, well, black: they don’t radiate, and hence a classical black hole has zero temperature. This is the reason I’ve been careful to refer to black hole thermodynamics as an analogy. Strictly speaking, one cannot regard the temperature {T} as the physical temperature of a single black hole, but rather as referring to the equivalence class of all possible black holes subject to the same (observable) constraints of mass, charge, and angular momentum. In other words, the “temperature” of a Schwarzschild black hole is just a quantification of how the entropy — which measures the number of possible internal microstates — changes with respect to the mass, {T^{-1}=\mathrm{d} S/\mathrm{d} M}.

2. Quantum black holes

Of course, Hawking’s greatest claim to fame was the discovery [7] that when quantum field theory is properly taken into account, black holes aren’t black after all, but radiate with a temperature

\displaystyle T=\frac{1}{8\pi M}~. \ \ \ \ \ (22)

(This result is for a Schwarzschild black hole in thermal equilibrium, and is precisely what we obtain when taking {Q=J=0} in the expression for the surface gravity {\kappa} (19), together with the identification {T=\kappa/2\pi}). Hawking’s calculation, and many other derivations since, require the machinery of QFT, so I won’t go into the details here. There is however a cute hack for obtaining the identification (22), whereby one Wick rotates to Euclidean signature, so that the {(3\!+\!1)}-dimensional Schwarzschild geometry becomes a smooth Euclidean manifold with topology {\mathbb{R}^2\times S^2}, whereupon the temperature appears as a consequence of the periodicity in Euclidean time required for regularity at the horizon; see my first post for a sketch of the resulting “cigar geometry”, or my upcoming post on QFT in curved space for a more detailed discussion about the relationship between periodicity and horizons.
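To sketch how the hack works: near the horizon, the Wick-rotated geometry is just flat space in polar coordinates, and demanding smoothness at the origin fixes the period of Euclidean time. Writing {f(r)=1-2M/r} (so {r_s=2M}) and expanding {f(r)\approx f'(r_s)(r-r_s)}, the {(\tau,r)} part of the Euclidean metric becomes, in terms of the proper radial distance {\rho\equiv 2\sqrt{(r-r_s)/f'(r_s)}},

\displaystyle \mathrm{d} s_E^2\approx \rho^2\,\mathrm{d}(\kappa\tau)^2+\mathrm{d}\rho^2~,\qquad \kappa\equiv\tfrac{1}{2}f'(r_s)=\frac{1}{4M}~,

i.e., flat space in polar coordinates with angular variable {\kappa\tau}. Avoiding a conical singularity at {\rho=0} (the tip of the cigar) requires {\tau\sim\tau+2\pi/\kappa}, and identifying this period with the inverse temperature gives {\beta=2\pi/\kappa=8\pi M}, reproducing (22).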

Hawking radiation is sometimes explained as the spontaneous fluctuation of a particle-antiparticle pair from the vacuum across the horizon; the particle escapes to infinity as Hawking radiation, while the antiparticle is captured by the black hole. This is a cute cartoon, except that it’s wrong, and an over-reliance on the resulting intuition can get one into trouble. I’ve already devoted an entire post to this issue, so I’ll refer you there if you’re interested; if you’ve got a QFT background, you can also find some discussion of the physical aspects of black hole emission in chapter eight of [8]. In a nutshell, the basic point is that radiation comes out in momentum-space modes with wavelength {\lambda\sim r_s}, which can’t be Fourier transformed back to position space to yield anything localized near the horizon. In other words, near the horizon of a black hole, the meaning of “particles” employed by an external observer breaks down.

The fact that black holes can radiate away energy means that if you stop throwing in matter, the black hole will slowly shrink, which seems to contradict Hawking’s area theorem above. The catch is that this theorem relies on the weak energy condition, which states that the energy density measured by every timelike observer is non-negative; this is no longer necessarily true once quantum fluctuations are taken into account, so there’s no mathematical contradiction. It does however mean that our formulation of the “second law” of black hole thermodynamics was too naïve: the area (and hence entropy) of a black hole can decrease, but only by emitting Hawking radiation which increases the entropy of the environment by at least as much. This motivates us to introduce the generalized entropy

\displaystyle S_\mathrm{gen}=\frac{A}{4\ell_P^2}+S_\mathrm{ext}~, \ \ \ \ \ (23)

where the first term is the black hole entropy {S_\mathrm{BH}} (10), and the second is the entropy of the thermal radiation. In full generality, the Second Law of (Black Hole) Thermodynamics is then the statement that the entropy (10) of all black holes, plus the entropy of the rest of the universe, never decreases:

\displaystyle \mathrm{d} S_\mathrm{gen}\geq 0~. \ \ \ \ \ (24)

Evaporating black holes have some peculiar properties. For example, since the temperature of a Schwarzschild black hole is inversely proportional to the mass, the specific heat capacity {C} is negative:

\displaystyle \frac{\mathrm{d} T}{\mathrm{d} M}=-\frac{1}{8\pi M^2} \qquad\implies\qquad C=\frac{1}{M}\frac{\mathrm{d} Q}{\mathrm{d} T}=-8\pi M~. \ \ \ \ \ (25)

(We’re working in natural units, so {c\!=\!1} and hence the heat {Q=M}). Consequently, throwing matter into a black hole to increase its size actually makes it cooler! Conversely, as the black hole emits Hawking radiation, its temperature increases, causing it to emit more radiation, and so on in a feedback loop that causes the black hole to get hotter and hotter as it shrinks away to nothing. (Precisely what happens in the final moments of a black hole’s lifespan is an open question, likely requiring a more developed theory of quantum gravity to answer. Here I’m going to take the majority view that it indeed evaporates away completely). Note that this means that whenever one speaks about black holes thermodynamically, one should use the microcanonical ensemble rather than the canonical ensemble, because the latter is unstable to any quantum fluctuation that changes the mass of the black hole.
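This runaway can be made quantitative. Stefan-Boltzmann emission at the Hawking temperature gives {\mathrm{d} M/\mathrm{d} t=-\alpha/M^2} in geometric units, which integrates to {M(t)=(M_0^3-3\alpha t)^{1/3}} and a lifetime scaling as {M_0^3}. The coefficient {\alpha\sim 1/(15360\pi)} used below is a commonly quoted order-of-magnitude estimate (photon emission only); treat it as illustrative:

```python
# The evaporation runaway in numbers: dM/dt = -alpha/M^2 (geometric units)
# gives M(t) = (M0^3 - 3*alpha*t)^(1/3), so the lifetime scales as M0^3
# and the temperature T = 1/(8*pi*M) diverges at the end.
# alpha ~ 1/(15360*pi) is an illustrative order-of-magnitude coefficient.
import math

alpha = 1.0 / (15360 * math.pi)

def lifetime(M0):
    return M0**3 / (3 * alpha)

def mass(t, M0):
    return max(M0**3 - 3 * alpha * t, 0.0) ** (1.0 / 3.0)

M0 = 1.0
t_evap = lifetime(M0)
for frac in (0.0, 0.9, 0.999):
    M = mass(frac * t_evap, M0)
    print(f"t = {frac:5.3f} t_evap:  M = {M:.4f},  T = {1 / (8 * math.pi * M):.4f}")
```

Restoring units, the lifetime of a solar-mass black hole comes out around {10^{67}} years, vastly longer than the age of the universe; only very light primordial black holes could be evaporating today.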

The fact that black holes radiate when quantum field theory is taken into account transforms black hole thermodynamics from a formal analogy into an ontologically meaningful description, in which the temperature {T=\mathrm{d} M/\mathrm{d} S} is indeed the physical (thermodynamic) temperature of a single black hole. Quantum effects were thus required to resolve a tension: the information-theoretic interpretation of entropy as a measure of possible internal microstates applied to a single black hole — and hence had physical significance — while the temperature had no meaningful physical interpretation in the non-radiating (classical) case. The combination of seemingly disparate regimes in the expression for the entropy (10) is not a coincidence, but represents a truly remarkable unification. It’s perhaps the first thing a successful theory of quantum gravity should be expected to explain.

The fact that black holes evaporate also brings into focus the need for such a unification of general relativity and quantum field theory: a black hole is one of the only known regimes (the other being the Big Bang singularity) that falls within the purview of both theories, but attempts to combine them yield nonsensical infinities that have thus far resisted all attempts to tame them. This leads me to the main quantum puzzle I wanted to discuss: the information paradox. (The firewall paradox is essentially just a more modern sharpening of the underlying conflict, but is more difficult to sketch without QFT).

The information paradox, in a nutshell, is a conflict between the apparent ability of black holes to destroy information, and the quantum mechanical postulate of unitarity. Recall that unitarity is the statement that the time-evolution of a quantum state via the Schrödinger equation is described by a unitary operator, which preserves the inner product. Physically, this ensures that probabilities continue to sum to one, i.e., that no information is lost. While evolution in open systems can be non-unitary due to decoherence with the environment, the evolution of any closed quantum mechanical system must be unitary, i.e., pure states evolve to pure states only, never to mixed states. This means that if we create a black hole by collapsing some matter in an initially pure state, let it evaporate, and then collect all the Hawking radiation, the final state must still be pure. The problem is that the Hawking radiation is, to a very good approximation, thermal, meaning it has the Planckian spectrum characteristic of black-body radiation, and thermal radiation contains no information.

The situation is often depicted by the Page curve [10,11], which is a plot of entropy with respect to time as the black hole evaporates. Suppose we collect all the Hawking radiation from a black hole that starts in a pure state; call the entropy of this radiation {S_R}. Initially, {S_R=0}, because our subsystem is empty. As the black hole evaporates, {S_R} steadily increases as we collect more and more radiation. Eventually the black hole evaporates completely, and we’re left with a thermal bath of radiation in a maximally mixed state, so {S_R=1} (after normalizing): a maximal loss of information has occurred! This is the information paradox in a single graph. In sharp contrast, what quantum mechanics predicts is that after the halfway point in the black hole’s lifespan, the late-time radiation starts to purify the early-time radiation we’ve already collected, so the entropy curve should turn around and head back to 0 when the black hole disappears. This is illustrated in the figure below, from Page’s paper [11]. (The lack of symmetry between the upwards and downwards parts is due to the fact that the emission of different particle species (in this calculation, just photons and gravitons) affects the change in the black hole entropy and the change in the radiation entropy slightly differently. The turnover isn’t at exactly half the lifetime either, but rather around {53.81\%}.)
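The expected unitary curve is easy to caricature: in Page's random-state approximation, the radiation entropy is roughly the smaller of the thermodynamic entropy of the radiation collected so far and the remaining Bekenstein-Hawking entropy of the hole. The toy version below deliberately ignores the asymmetries just mentioned (species-dependent emission, the {53.81\%} turnover), which shift the peak in the real calculation:

```python
# A cartoon Page curve: S_R(t) ~ min(entropy radiated so far,
# remaining black-hole entropy), in arbitrary units.
S_total = 100.0                      # initial black-hole entropy
steps = 200

curve = []
for i in range(steps + 1):
    S_rad = S_total * i / steps      # Hawking's ever-growing estimate
    S_bh  = S_total - S_rad          # remaining Bekenstein-Hawking entropy
    curve.append(min(S_rad, S_bh))   # Page's estimate of the radiation entropy

print(curve[0], max(curve), curve[-1])   # rises from 0, peaks, returns to 0
```

The "Hawking" curve is the {S_{rad}} branch alone, which never turns around; the tension between the two branches at late times is the paradox in miniature.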


The fundamental issue is that quantum mechanics demands that the information escape the black hole, but there doesn’t seem to be any way of enabling this. (For more discussion, see my earlier post on firewalls. I should also mention that there are alternative proposals for what happens at the end of a black hole’s lifetime, but these are generally disfavoured for a variety of reasons, most notably AdS/CFT). That said, just within the past year, it was discovered that in certain AdS/CFT set-ups, one can obtain a Page curve for the entropy by including the contributions from wormhole geometries connecting different replicas that arise as subleading saddles in the gravitational path integral; see for example [12], or the talks by Douglas Stanford and Juan Maldacena as part of my research group’s QGI seminar series. While this doesn’t quite solve the paradox insofar as it doesn’t explain how the information actually escapes, it’s encouraging that — at least in AdS/CFT — there does seem to be a mechanism for correctly tracking the entropy as the black hole evaporates.

3. The holographic principle

To close this lecture/post, I’d be remiss if I didn’t mention the most remarkable and far-reaching consequence of the black hole investigations above: the holographic principle. Put forth by ‘t Hooft [13], and given a formulation in terms of string theory by Susskind [14], this is essentially the statement that the ultimate theory of quantum gravity must exhibit a dimensional reduction (from 3 to 2 spatial dimensions in our {(3\!+\!1)}-dimensional universe) in the number of fundamental degrees of freedom. This developed from the arguments of Bekenstein, that the black hole entropy (10) represents a bound on the amount of information that can be localized within any given region. The basic idea is that any attempt to cram more information into a region of fixed size will cause the system to collapse into a black hole, and therefore the dimension of the Hilbert space associated to any region must scale with the area of the boundary.

The review by Bousso [15] contains an excellent modern introduction to this principle; I’ll only give a quick summary of the main idea here. Recall that in quantum mechanics, the number of degrees of freedom {N} is given by the log of the dimension of the Hilbert space {\mathcal{H}}. For example, in a system with 100 spins, there are {2^{100}} possible states, so {\mathrm{dim}\,\mathcal{H}=2^{100}} and {N=100\ln 2}, i.e., the system contains 100 bits of information. One can crudely think of quantum field theory as a continuum theory with a harmonic oscillator at every point in space; a single harmonic oscillator already has {N=\infty}, so one would expect an infinite number of degrees of freedom for any region. However, one can’t localize more than a Planck energy {M_P=1.2\times 10^{19}\,\mathrm{GeV}} into a Planck cube {\ell_P^3} without forming a black hole, which provides an ultra-violet (UV) cutoff on the spectrum. And since any finite volume imposes an infra-red (IR) cutoff, we can take the degrees of freedom in field theory to scale like the volume of the region, with one oscillator per Planck cell. In other words, we think of space as a grid with lattice spacing {\ell_P=1.6\times 10^{-33}\,\mathrm{cm}}; the total number of oscillators thus scales like the volume {V}, and each one has a finite number of states {n} due to the UV cutoff mentioned above. Hence {\mathrm{dim}\,\mathcal{H}=n^V} and {N=V\ln n}. Thus, since {e^S=\mathrm{dim}\,\mathcal{H}}, {S\!=\!V\ln n\!\sim\! V}, and we expect entropy to scale with volume just as in classical mechanics.

The lesson from black hole thermodynamics is that gravity fundamentally alters this picture. Consider a Schwarzschild black hole: the mass scales like {M\sim r_s}, not {M\sim r_s^3}, so the energy can’t scale with the volume: the vast majority of the states which QFT would naïvely allow can’t be reached in a gravitational theory, because we form a black hole when we’ve excited only a small fraction of them. The maximum entropy we can reach is {A/4} in Planck units, corresponding to {e^{A/4}} states.
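The mismatch between the two counting schemes is dramatic even for small regions. Here's a naive comparison for a box of side {L} in Planck units; the choice of box geometry and the two-state oscillators are my own illustrative assumptions:

```python
# Naive state counting in a box of side L (Planck units): a lattice with one
# n-state oscillator per Planck cell has N = V * ln(n) degrees of freedom,
# while the holographic bound caps the entropy at A/4 (box area A = 6*L^2).
import math

n = 2                               # states per Planck cell (illustrative)
for L in (10, 100, 1000):
    N_vol = L**3 * math.log(n)      # volume-law estimate, S ~ V ln n
    S_max = 6 * L**2 / 4            # holographic bound, A/4
    print(f"L = {L:5d}:  volume-law S ~ {N_vol:.3g},  bound A/4 = {S_max:.3g}")
```

The volume-law count exceeds the bound already at {L\sim10}, and the overshoot grows linearly with {L}; almost all of the naive field-theory states are gravitationally inaccessible.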

You might object that the argument I gave above as to why the black hole entropy must scale with area rather than volume was based on the fact that the interior volume of the black hole is ill-defined, and that a volume law might still apply in other situations. Bousso [15] gives a nice argument as to why this can’t be true: it would violate unitarity. That is, suppose a region of space had an entropy that scaled with the volume, i.e., {\mathrm{dim}\,\mathcal{H}=e^V}. If we then collapse that region into a black hole, the Hilbert space dimension would have to suddenly drop to {e^{A/4}}. It would then be impossible to recover the initial state from the final state (e.g., after allowing the black hole to evaporate). Thus in order to preserve unitarity, the dimension of the Hilbert space must have been {e^{A/4}} from the start.

I’ve glossed over an important technical issue in introducing this “holographic entropy bound” however, namely that the spatial bound doesn’t actually work: it’s violated in all sorts of scenarios. For example, consider a region of our universe, which is well-approximated as a flat ({\mathbb{R}^3}), homogeneous, isotropic space with some average entropy density {\sigma}. Then the entropy scales like

\displaystyle S=\sigma V=\frac{\sigma}{6\sqrt{\pi}}A^{3/2}~, \ \ \ \ \ (26)

which exceeds the bound {A/4} when {r\geq 3/(4\sigma)}. The proper way to generalize black hole entropy to the sort of bound we want is to recall that the event horizon is a null hypersurface, and it is the formulation in terms of such light-sheets which is consistent with all known examples. This is known as the covariant entropy bound, and states that the entropy on (or rather, contained within) non-expanding light-sheets of some spacetime codimension-2 surface {B} does not exceed the area of {B}. A thorough discussion would be another lecture in itself, so do check out Bousso’s review [15] if you’re interested in more details. Here I merely wanted to bring attention to the fact that the holographic principle is properly formulated on null, rather than spacelike, hypersurfaces.
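The crossover in (26) is a one-line computation: with {S=\sigma\cdot\tfrac{4}{3}\pi r^3} and bound {A/4=\pi r^2}, the two are equal at {r=3/(4\sigma)}. A quick numerical check, with an illustrative entropy density:

```python
# Sanity check of (26): constant entropy density sigma in flat space
# saturates the bound A/4 at r = 3/(4*sigma) and violates it beyond.
import math

sigma = 0.01                        # entropy density in Planck units (illustrative)
r_star = 3 / (4 * sigma)

def S_matter(r):
    return sigma * (4.0 / 3.0) * math.pi * r**3

def bound(r):
    return math.pi * r**2           # A/4 with A = 4*pi*r^2

print(r_star)                                # 75.0
print(S_matter(r_star) - bound(r_star))      # ~ 0: bound exactly saturated
print(S_matter(2 * r_star) > bound(2 * r_star))   # True: violated at larger r
```

This also verifies the {A^{3/2}} form in (26), since {\sigma V = \tfrac{\sigma}{6\sqrt{\pi}}A^{3/2}} for {A=4\pi r^2}.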

The holographic principle represents a radical departure from our intuition, and implies that reality is fundamentally nonlocal. One further expects that this feature should be manifest in the ultimate theory of quantum gravity. AdS/CFT provides a concrete realization of this principle, and its success is such that the unqualified “holography” is taken to refer to it in the literature, but it’s important to remember that the holographic principle itself is more general, and applies to our universe as well.



  1. J. D. Bekenstein, “Black holes and entropy,” Phys. Rev. D 7 (Apr, 1973) 2333–2346.
  2. S. W. Hawking, “Gravitational radiation from colliding black holes,” Phys. Rev. Lett. 26 (May, 1971) 1344–1346.
  3. J. Eisert, M. Cramer, and M. B. Plenio, “Area laws for the entanglement entropy – a review,” Rev. Mod. Phys. 82 (2010) 277–306, arXiv:0808.3773 [quant-ph].
  4. M. Christodoulou and C. Rovelli, “How big is a black hole?,” Phys. Rev. D91 no. 6, (2015) 064046, arXiv:1411.2854 [gr-qc].
  5. B. S. DiNunno and R. A. Matzner, “The Volume Inside a Black Hole,” Gen. Rel. Grav. 42 (2010) 63–76, arXiv:0801.1734 [gr-qc].
  6. B. Freivogel, R. Jefferson, L. Kabir, and I.-S. Yang, “Geometry of the Infalling Causal Patch,” Phys. Rev. D91 no. 4, (2015) 044036, arXiv:1406.6043 [hep-th].
  7. S. W. Hawking, “Particle creation by black holes,” Comm. Math. Phys. 43 no. 3, (1975) 199–220.
  8. N. D. Birrell and P. C. W. Davies, Quantum Fields in Curved Space. Cambridge Monographs on Mathematical Physics. Cambridge Univ. Press, Cambridge, UK, 1984.
  9. T. Jacobson, “Introductory Lectures on Black Hole Thermodynamics,”. http://www.physics.umd.edu/grt/taj/776b/lectures.pdf.
  10. D. N. Page, “Information in black hole radiation,” Phys. Rev. Lett. 71 (1993) 3743–3746, arXiv:hep-th/9306083 [hep-th].
  11. D. N. Page, “Time Dependence of Hawking Radiation Entropy,” JCAP 1309 (2013) 028, arXiv:1301.4995 [hep-th].
  12. A. Almheiri, T. Hartman, J. Maldacena, E. Shaghoulian, and A. Tajdini, “Replica Wormholes and the Entropy of Hawking Radiation,” arXiv:1911.12333 [hep-th].
  13. G. ’t Hooft, “Dimensional reduction in quantum gravity,” Conf. Proc. C930308 (1993) 284–296, arXiv:gr-qc/9310026 [gr-qc].
  14. L. Susskind, “The World as a hologram,” J. Math. Phys. 36 (1995) 6377–6396, arXiv:hep-th/9409089 [hep-th].
  15. R. Bousso, “The Holographic principle,” Rev. Mod. Phys. 74 (2002) 825–874, arXiv:hep-th/0203101 [hep-th].



Interior operators in AdS/CFT

In a previous post, I mentioned that the firewall paradox could be phrased as a question about the existence of interior operators that satisfy the correct thermal correlation functions, namely 

\displaystyle \langle\Psi|\mathcal{O}(t,\mathbf{x})\tilde{\mathcal{O}}(t',\mathbf{x}')|\Psi\rangle =Z^{-1}\mathrm{tr}\left[e^{-\beta H}\mathcal{O}(t,\mathbf{x})\mathcal{O}(t'+i\beta/2,\mathbf{x}')\right]~, \ \ \ \ \ (1)

where {\tilde{\mathcal{O}}} and {\mathcal{O}} denote operators inside and outside the black hole, respectively; cf. eqn. (2) here. In this short post, I’d like to review the basic argument leading up to this statement, following the original works [1,2].
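Before diving into the bulk, it's worth noting that (1) can be verified in a finite-dimensional toy model: for a thermofield double state {\left|\Psi\right>=Z^{-1/2}\sum_n e^{-\beta E_n/2}\left|n\right>\left|n\right>}, the partner {\tilde{\mathcal{O}}} is the mirror (transpose, in the energy eigenbasis) of {\mathcal{O}}, and the two-sided expectation value reproduces the analytically continued thermal trace. A minimal sketch with a random 4-level Hamiltonian, taking {t=t'=0} and suppressing spatial labels:

```python
# Finite-dimensional check of the thermal identity (1): for the TFD state,
# <TFD| O x O^T |TFD> = Z^{-1} tr[ e^{-beta H} O O(i*beta/2) ],
# where O(i*beta/2) = e^{-beta H/2} O e^{+beta H/2}. Toy 4-level system.
import numpy as np

rng = np.random.default_rng(0)
d, beta = 4, 1.3
E = np.sort(rng.uniform(0, 2, d))       # energy eigenvalues (H diagonal)
O = rng.normal(size=(d, d))             # a generic operator in that basis
                                        # (the identity is linear, so no need
                                        # to symmetrize for this check)
w = np.exp(-beta * E / 2)
Z = np.sum(w**2)
tfd = np.zeros(d * d)
for n in range(d):
    tfd[n * d + n] = w[n] / np.sqrt(Z)  # |TFD> on H_L (x) H_R

lhs = tfd @ (np.kron(O, O.T) @ tfd)     # <TFD| O (x) O^T |TFD>

# O(i*beta/2), computed elementwise in the energy eigenbasis:
O_shift = np.diag(w) @ O @ np.diag(1 / w)
rhs = np.trace(np.diag(w**2) @ O @ O_shift) / Z

print(lhs, rhs)                         # the two sides agree
```

The firewall question is then whether operators with these correlations actually exist in the CFT description of a one-sided, evaporating black hole; the toy model only shows the two-sided case is consistent.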

Consider the eternal black hole in AdS as depicted in the following diagram, which I stole from [1]:


The blue line connecting the two asymptotic boundaries is the Cauchy slice on which we’ll construct our states, denoted {\Sigma_I} in exterior region {I} and {\Sigma_{III}} in exterior region {III}. Note that, modulo possible UV divergences at the origin, either half serves as a complete Cauchy slice if we restrict our inquiries to the associated exterior region. But if we wish to reconstruct states in the interior (henceforth just {II}, since we don’t care about {IV}), then we need the entire slice. Pictorially, one can see this from the fact that only the left-moving modes from region {I}, and the right-moving modes from region {III}, cross the horizon into region {II}, but we need both left- and right-movers to have a complete mode decomposition.

To expand on this, imagine we proceed with the quantization of a free scalar field in region {I}. We need to solve the Klein-Gordon equation,

\displaystyle \left(\square-m^2\right)\phi=\frac{1}{\sqrt{-g}}\,\partial_\mu\left( g^{\mu\nu}\sqrt{-g}\,\partial_\nu\phi\right)-m^2\phi=0 \ \ \ \ \ (2)

on the AdS black brane background,

\displaystyle \mathrm{d} s^2=\frac{1}{z^2}\left[-h(z)\mathrm{d} t^2+\frac{\mathrm{d} z^2}{h(z)}+\mathrm{d}\mathbf{x}^2\right]~, \quad\quad h(z)\equiv1-\left(\frac{z}{z_0}\right)^d~. \ \ \ \ \ (3)

where, in Poincaré coordinates, the asymptotic boundary is at {z\!=\!0}, and the horizon is at {z\!=\!z_0}. We work in {(d\!+\!1)} spacetime dimensions, so {\mathbf{x}} is a {(d\!-\!1)}-dimensional vector representing the transverse coordinates. Note that we’ve set the AdS radius to 1. Substituting the usual plane-wave ansatz

\displaystyle f_{\omega,\mathbf{k}}(t,\mathbf{x},z)=e^{-i\omega t+i\mathbf{k}\mathbf{x}}\psi_{\omega,\mathbf{k}}(z) \ \ \ \ \ (4)

into the Klein-Gordon equation results in a second order ordinary differential equation for the radial function {\psi_{\omega,\mathbf{k}}(z)}, and hence two linearly independent solutions. As usual, we then impose normalizable boundary conditions at infinity, which leaves us with a single linear combination for each {(\omega,\mathbf{k})}. Note that we do not impose boundary conditions at the horizon. Naïvely, one might have thought to impose ingoing boundary conditions there; however, as remarked in [1], this precludes the existence of real {\omega}. More intuitively, I think of this as simply the statement that the black hole is evaporating, so we should allow the possibility for outgoing modes as well. (That is, assuming a large black hole in AdS, the black hole is in thermal equilibrium with the surrounding environment, so the outgoing and ingoing fluxes are precisely matched, and it maintains constant size). The expression for {\psi_{\omega,\mathbf{k}}(z)} is not relevant here; see [1] for more details.

We thus arrive at the standard expression of the (bulk) field {\phi} in terms of creation and annihilation operators,

\displaystyle \phi_I(t,\mathbf{x},z)=\int_{\omega>0}\frac{\mathrm{d}\omega\mathrm{d}^{d-1}\mathbf{k}}{\sqrt{2\omega}(2\pi)^d}\,\bigg[ a_{\omega,\mathbf{k}}\,f_{\omega,\mathbf{k}}(t,\mathbf{x},z)+\mathrm{h.c.}\bigg]~, \ \ \ \ \ (5)

where the creation/annihilation operators for the modes may be normalized with respect to the Klein-Gordon norm, so that

\displaystyle [a_{\omega,\mathbf{k}},a^\dagger_{\omega',\mathbf{k}'}]=\delta(\omega-\omega')\delta^{d-1}(\mathbf{k}-\mathbf{k}')~. \ \ \ \ \ (6)

Of course, a similar expansion holds for region {III}:

\displaystyle \phi_{III}(t,\mathbf{x},z)=\int_{\omega>0}\frac{\mathrm{d}\omega\mathrm{d}^{d-1}\mathbf{k}}{\sqrt{2\omega}(2\pi)^d}\,\bigg[\tilde a_{\omega,\mathbf{k}}\,g_{\omega,\mathbf{k}}(t,\mathbf{x},z)+\mathrm{h.c.}\bigg]~, \ \ \ \ \ (7)

where the mode operators {\tilde a_{\omega,\mathbf{k}}} commute with all {a_{\omega,\mathbf{k}}} by construction.

Now, what of the future interior, region {II}? Unlike the exterior regions, we no longer have any boundary condition to impose, since every Cauchy slice which crosses this region is bounded on both sides by a future horizon. Consequently, we retain both the linear combinations obtained from the Klein-Gordon equation, and hence have twice as many modes as in either {I} or {III}—which makes sense, since the interior receives contributions from both exterior regions. Nonetheless, it may be a bit confusing from the bulk perspective, since any local observer would simply arrive at the usual mode expansion involving only a single set of creation/annihilation operators, and I don’t have an intuition as to how {a_{\omega,\mathbf{k}}} and {\tilde a_{\omega,\mathbf{k}}} relate vis-à-vis their commutation relations in this shared domain. However, the entire framework in which the interior is fed by two exterior regions is only properly formulated in AdS/CFT, in which — it is generally thought — the interior region emerges from the entanglement structure between the two boundaries, so I prefer to uplift this discussion to the CFT before discussing the interior region in detail. This avoids the commutation confusion above — since the operators live in different CFTs — and it was the next step in our analysis anyway. (Incidentally, appendix B of [1] performs the mode decomposition in all three regions explicitly for the case of Rindler space, which provides a nice concrete example in which one can get one’s hands dirty).

So, we want to discuss local bulk fields from the perspective of the boundary CFT. From the extrapolate dictionary, we know that local bulk operators become increasingly smeared over the boundary (in both space and time) the farther we move into the bulk. Thus in region {I}, we can construct the operator

\displaystyle \phi^I_{\mathrm{CFT}}(t,\mathbf{x},z)=\int_{\omega>0}\frac{\mathrm{d}\omega\mathrm{d}^{d-1}\mathbf{k}}{(2\pi)^d}\,\bigg[\mathcal{O}_{\omega,\mathbf{k}}\,f_{\omega,\mathbf{k}}(t,\mathbf{x},z)+\mathcal{O}^\dagger_{\omega,\mathbf{k}}f^*_{\omega,\mathbf{k}}(t,\mathbf{x},z)\bigg]~, \ \ \ \ \ (8)

which, while a non-local operator in the CFT (constructed from local CFT operators {\mathcal{O}_{\omega,\mathbf{k}}} which act as creation operators of light primary fields), behaves like a local operator in the bulk. Note that from the perspective of the CFT, {z} is an auxiliary coordinate that simply parametrizes how smeared-out this operator is on the boundary.

As an aside, the critical difference between (8) and the more familiar HKLL prescription [3-5] is that the former is formulated directly in momentum space, while the latter is defined in position space as

\displaystyle \phi_{\mathrm{CFT}}(t,\mathbf{x},z)=\int\!\mathrm{d} t'\mathrm{d}^{d-1}\mathbf{x}'\,K(t,\mathbf{x},z;t',\mathbf{x}')\mathcal{O}(t',\mathbf{x}')~, \ \ \ \ \ (9)

where the integration kernel {K} is known as the “smearing function”, and depends on the details of the spacetime. To solve for {K}, one performs a mode expansion of the local bulk field {\phi} and identifies the normalizable mode with the local boundary operator {\mathcal{O}} in the boundary limit. One then has to invert this relation to find the bulk mode operator, and then insert this into the original expansion of {\phi}. The problem now is that to identify {K}, one needs to swap the order of integration between position and momentum space, and the presence of the horizon results in a fatal divergence that obstructs this maneuver. As discussed in more detail in [1] however, working directly in momentum space avoids this technical issue. But the basic relation “smeared boundary operators {\longleftrightarrow} local bulk fields” is the same.

Continuing, we have a similar bulk-boundary relation in region {III}, in terms of operators {\tilde{\mathcal{O}}_{\omega,\mathbf{k}}} living in the left CFT:

\displaystyle \phi^{III}_{\mathrm{CFT}}(t,\mathbf{x},z)=\int_{\omega>0}\frac{\mathrm{d}\omega\mathrm{d}^{d-1}\mathbf{k}}{(2\pi)^d}\,\bigg[\tilde{\mathcal{O}}_{\omega,\mathbf{k}}\,f_{\omega,\mathbf{k}}(t,\mathbf{x},z)+\tilde{\mathcal{O}}^\dagger_{\omega,\mathbf{k}}f^*_{\omega,\mathbf{k}}(t,\mathbf{x},z)\bigg]~. \ \ \ \ \ (10)

Note that even though I’ve used the same coordinate labels, {t} runs backwards in the left wedge, so that {\tilde{\mathcal{O}}_{\omega,\mathbf{k}}} plays the role of the creation operator here. From the discussion above, the form of the field in the black hole interior is then

\displaystyle \phi^{II}_{\mathrm{CFT}}(t,\mathbf{x},z)=\int_{\omega>0}\frac{\mathrm{d}\omega\mathrm{d}^{d-1}\mathbf{k}}{(2\pi)^d}\,\bigg[\mathcal{O}_{\omega,\mathbf{k}}\,g^{(1)}_{\omega,\mathbf{k}}(t,\mathbf{x},z)+\tilde{\mathcal{O}}_{\omega,\mathbf{k}}g^{(2)}_{\omega,\mathbf{k}}(t,\mathbf{x},z)+\mathrm{h.c}\bigg]~, \ \ \ \ \ (11)

where {\mathcal{O}_{\omega,\mathbf{k}}} and {\tilde{\mathcal{O}}_{\omega,\mathbf{k}}} are the (creation/annihilation operators for the) boundary modes in the right and left CFTs, respectively. The point is that in order to construct a local field operator behind the horizon, both sets of modes — the left-movers {\mathcal{O}_{\omega,\mathbf{k}}} from {I} and the right-movers {\tilde{\mathcal{O}}_{\omega,\mathbf{k}}} from {III} — are required. In the eternal black hole considered above, the latter originate in the second copy of the CFT. But in the one-sided case, we would seem to have only the left-movers {\mathcal{O}_{\omega,\mathbf{k}}}, hence we arrive at the crucial question: for a one-sided black hole — such as that formed from collapse in our universe — what are the interior modes {\tilde{\mathcal{O}}_{\omega,\mathbf{k}}}? Equivalently: how can we represent the black hole interior given access to only one copy of the CFT?

To answer this question, recall that the thermofield double state,

\displaystyle |\mathrm{TFD}\rangle=\frac{1}{\sqrt{Z_\beta}}\sum_ie^{-\beta E_i/2}|E_i\rangle\otimes|E_i\rangle~, \ \ \ \ \ (12)

is constructed so that either CFT appears exactly thermal when tracing out the other side, and that this well-approximates the late-time thermodynamics of a large black hole formed from collapse. That is, the exterior region will be in the Hartle-Hawking vacuum (which is to Schwarzschild as Rindler is to Minkowski), with the temperature {\beta^{-1}} of the CFT set by the mass of the black hole. This implies that correlation functions of operators {\mathcal{O}} in the pure state {|\mathrm{TFD}\rangle} may be computed as thermal expectation values in their (mixed) half of the total Hilbert space, i.e.,

\displaystyle \langle\mathrm{TFD}|\mathcal{O}(t_1,\mathbf{x}_1)\ldots\mathcal{O}(t_n,\mathbf{x}_n)|\mathrm{TFD}\rangle =Z^{-1}_\beta\mathrm{tr}\left[e^{-\beta H}\mathcal{O}(t_1,\mathbf{x}_1)\ldots\mathcal{O}(t_n,\mathbf{x}_n)\right]~. \ \ \ \ \ (13)

The same fundamental relation remains true in the case of the one-sided black hole as well: given the Hartle-Hawking state representing the exterior region, we can always obtain a purification such that expectation values in the original, thermal state are equivalent to standard correlators in the “fictitious” pure state, by the same doubling formalism that yielded the TFD. (Of course, the purification of a given mixed state is not unique, but as pointed out in [2] “the correct way to pick it, assuming that expectation values [of the operators] are all the information we have, is to pick the density matrix which maximizes the entropy.” That is, we pick the purification such that the original mixed state is thermal, i.e., {\rho\simeq Z^{-1}_\beta e^{-\beta H}} up to {1/N^2} corrections. The reason this is the “correct” prescription is that it’s the only one which does not impose additional constraints.) Thus (13) can be generally thought of as the statement that operators in an arbitrary pure state have the correct thermal expectation values when restricted to some suitably mixed subsystem (e.g., the black hole exterior dual to a single CFT).
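The doubling construction above is easy to check numerically in a finite-dimensional toy model. The sketch below (the spectrum, inverse temperature, and observable are arbitrary choices, not anything specific to a CFT) verifies that tracing one copy out of {|\mathrm{TFD}\rangle} leaves exactly the thermal density matrix, so that one-sided expectation values are thermal as in (13):

```python
import numpy as np

# Toy check of (12)-(13): for a finite-dimensional system with arbitrary
# spectrum E_i, the reduced density matrix of one copy of |TFD> is exactly
# the thermal state, so one-sided expectation values are thermal traces.
rng = np.random.default_rng(0)
dim, beta = 4, 1.3                            # arbitrary toy choices
E = np.sort(rng.uniform(0.0, 2.0, dim))       # energy eigenvalues E_i
Z = np.sum(np.exp(-beta * E))                 # partition function Z_beta

# Coefficient matrix of |TFD> = Z^{-1/2} sum_i e^{-beta E_i/2} |E_i> (x) |E_i>
tfd = np.diag(np.exp(-beta * E / 2)) / np.sqrt(Z)

# Tracing out the left copy: rho = tfd tfd^T = e^{-beta H}/Z
rho = tfd @ tfd.T
thermal = np.diag(np.exp(-beta * E)) / Z
assert np.allclose(rho, thermal)

# One-sided observable: <TFD|(O x 1)|TFD> = Z^{-1} tr(e^{-beta H} O)
O = rng.normal(size=(dim, dim))
O = O + O.T                                   # arbitrary Hermitian observable
lhs = np.einsum('ij,ik,kj->', tfd, O, tfd)    # <TFD| (O x 1) |TFD>
rhs = np.trace(thermal @ O)
assert np.isclose(lhs, rhs)
```

The reduced state comes out thermal by construction: the Schmidt coefficients of the TFD are precisely the Boltzmann weights.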

Now, what if we wish to compute a correlation function involving operators across the horizon, e.g., {\langle\mathcal{O}\tilde{\mathcal{O}}\rangle}? In the two-sided case, we can simply compute this correlator in the pure state {|\mathrm{TFD}\rangle}. But in the one-sided case, we only have access to the thermal state representing the exterior. Thus we’d like to know how to compute the correlator using only the available data in the CFT corresponding to region {I}. In order to do this, we re-express all operators {\tilde{\mathcal{O}}} appearing in the correlator in terms of analytically continued operators {\mathcal{O}} via the KMS condition, i.e., we make the replacement

\displaystyle \tilde{\mathcal{O}}(t,\mathbf{x}) \longrightarrow \mathcal{O}(t+i\beta/2,\mathbf{x})~. \ \ \ \ \ (14)

This is essentially the usual statement that thermal Green functions are periodic in imaginary time; see [1] for details. This relationship allows us to express the desired correlator as

\displaystyle \langle\mathrm{TFD}|\mathcal{O}(t_1,\mathbf{x}_1)\ldots\tilde{\mathcal{O}}(t_n,\mathbf{x}_n)|\mathrm{TFD}\rangle =Z^{-1}_\beta\mathrm{tr}\left[e^{-\beta H}\mathcal{O}(t_1,\mathbf{x}_1)\ldots\mathcal{O}(t_n+i\beta/2,\mathbf{x}_n)\right]~, \ \ \ \ \ (15)

which is precisely eqn. (2) in our earlier post, cf. the two-point function (1) above. Note the lack of tildes on the right-hand side: this thermal expectation value can be computed entirely in the right CFT.
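The replacement (14)-(15) can also be verified numerically in a finite-dimensional toy system. In the sketch below, all spectra and operators are arbitrary random choices, the correlator is evaluated at {t=0}, and the mirror operator is taken to be the transpose in the energy eigenbasis (a convention choice for this illustration only); with those assumptions, the two-sided correlator equals the one-sided thermal trace with the second operator shifted by {i\beta/2}:

```python
import numpy as np

# Toy check of (14)-(15) at t = 0: with the mirror operator defined by
# transposition in the energy basis (a convention for this sketch), the
# two-sided TFD correlator equals a one-sided thermal trace in which the
# second operator is analytically continued by t -> t + i*beta/2.
rng = np.random.default_rng(1)
dim, beta = 5, 0.7                         # arbitrary toy choices
E = np.sort(rng.uniform(0.0, 2.0, dim))
w = np.exp(-beta * E / 2)                  # Schmidt weights e^{-beta E_i/2}
Z = np.sum(w**2)

A = rng.normal(size=(dim, dim))            # right-CFT operator O
B = rng.normal(size=(dim, dim))            # operator whose mirror plays O-tilde

# LHS of (15): <TFD| O (x) O~ |TFD>, with O~ = B^T acting on the left copy
lhs = sum(w[i] * w[j] * A[i, j] * B[j, i]
          for i in range(dim) for j in range(dim)) / Z

# RHS of (15): Z^{-1} tr[e^{-beta H} O O(i*beta/2)], where
# O(i*beta/2) = e^{-beta H/2} O e^{+beta H/2} in the energy basis
B_shift = np.diag(w) @ B @ np.diag(1.0 / w)
rhs = np.trace(np.diag(w**2) @ A @ B_shift) / Z
assert np.isclose(lhs, rhs)
```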

If the CFT did not admit operators which satisfy the relation (15), it would imply a breakdown of effective field theory across the horizon. Alternatively, observing deviations from the correct thermal correlators would allow us to locally detect the horizon, in contradiction to the equivalence principle. In this sense, this expression may be summarized as the statement that the horizon is smooth. Thus, for the CFT to represent a black hole with no firewall, it must contain a representation of interior operators {\tilde{\mathcal{O}}} with the correct behaviour inside low-point correlators. This last qualifier hints at the state-dependent nature of these so-called “mirror operators”, which I’ve discussed at length elsewhere [6].


[1]  K. Papadodimas and S. Raju, “An Infalling Observer in AdS/CFT,” JHEP 10 (2013) 212, arXiv:1211.6767 [hep-th].

[2]  K. Papadodimas and S. Raju, “State-Dependent Bulk-Boundary Maps and Black Hole Complementarity,” Phys. Rev. D89 no. 8, (2014) 086010, arXiv:1310.6335 [hep-th].

[3]  A. Hamilton, D. N. Kabat, G. Lifschytz, and D. A. Lowe, “Holographic representation of local bulk operators,” Phys. Rev. D74 (2006) 066009, arXiv:hep-th/0606141 [hep-th].

[4]  A. Hamilton, D. N. Kabat, G. Lifschytz, and D. A. Lowe, “Local bulk operators in AdS/CFT: A Boundary view of horizons and locality,” Phys. Rev. D73 (2006) 086003, arXiv:hep-th/0506118 [hep-th].

[5]  A. Hamilton, D. N. Kabat, G. Lifschytz, and D. A. Lowe, “Local bulk operators in AdS/CFT: A Holographic description of the black hole interior,” Phys. Rev. D75 (2007) 106001, arXiv:hep-th/0612053 [hep-th]. [Erratum: Phys. Rev. D75, 129902 (2007)].

[6]  R. Jefferson, “Comments on black hole interiors and modular inclusions,” SciPost Phys. 6 no. 4, (2019) 042, arXiv:1811.08900 [hep-th].


Free energy, variational inference, and the brain

In several recent posts, I explored various ideas that lie at the interface of physics, information theory, and machine learning:

  • We’ve seen, à la Jaynes, how the concepts of entropy in statistical thermodynamics and information theory are unified, perhaps the quintessential manifestation of the intimate relationship between the two.
  • We applied information geometry to Boltzmann machines, which led us to the formalization of “learning” as a geodesic in the abstract space of machines.
  • In the course of introducing VAEs, we saw that the Bayesian inference procedure can be understood as a process which seeks to minimize the variational free energy, which encodes the divergence between the approximate and true probability distributions.
  • We examined how the (dimensionless) free energy serves as a generating function for the cumulants from probability theory, which manifest as the connected Green functions from quantum field theory.
  • We also showed how the cumulants from hidden priors control the higher-order interactions between visible units in an RBM, which underlies their representational power.
  • Lastly, we turned a critical eye towards the analogy between deep learning and the renormalization group, through a unifying Bayesian language in which UV degrees of freedom correspond to hidden variables over which a low-energy observer must marginalize.

Collectively, this led me to suspect that ideas along these lines — in particular, the link between variational Bayesian inference and free energy minimization in hierarchical models — might provide useful mathematical headway in our attempts to understand learning and intelligence in both minds and machines. It turns out this has been explored before by many scientists; at least in the context of biological brains, the most well-known is probably the approach by the neuroscientist Karl Friston, which provides a convenient starting point for exploring these ideas.

The aptly-named free energy principle (for the brain) is elaborated upon in a series of about ten papers spanning as many years. I found [1-5] most helpful, but insofar as a great deal of text is copied verbatim (yes, really; never trust the h-index) it doesn’t really matter which one you read. I’m going to mostly draw from [3], because it seems to be the earliest in which the basic idea is fleshed out completely. Be warned however that the notation varies slightly from paper to paper, and I find his distinction between states and parameters rather confusingly fuzzy; but we’ll make this precise below.

The basic idea is actually quite simple, and proceeds from the view of the brain as a Bayesian inference machine. In a nutshell, the job of the brain is to infer, as accurately as possible, the probability distribution representing the world (i.e., to build a model that best accords with sensory inputs). In a sense, the brain itself is a probabilistic model in this framework, so the goal is to bring this internal model of the world in line with the true, external one. But this is exactly the same inference procedure we’ve seen before in the context of VAEs! Thus the free energy principle is just the statement that the brain minimizes the variational free energy between itself (that is, its internal, approximate model) and its sensory inputs—or rather, the true distribution that generates them.

To elucidate the notation involved in formulating the principle, we can make an analogy with VAEs. In this sense, the goal of the brain is to construct a map between our observations (i.e., sensory inputs {x}) and the underlying causes (i.e., the environment state {z}). By Bayes’ theorem, the joint distribution describing the model can be decomposed as

\displaystyle p_\theta(x,z)=p_\theta(x|z)p(z)~. \ \ \ \ \ (1)

The first factor on the right-hand side is the likelihood of a particular sensory input {x} given the current state of the environment {z}, and plays the role of the decoder in this analogy, while the second factor is the prior distribution representing whatever foreknowledge the system has about the environment. The subscript {\theta} denotes the variational or “action parameters” of the model, so named because they parametrize the action of the brain on its substrate and surroundings. That is, the only way in which the system can change the distribution is by acting to change its sensory inputs. Friston denotes this dependency by {x(\theta)} (with different variables), but as alluded to above, I will keep to the present notation to avoid conflating state/parameter spaces.

Continuing this analogy, the encoder {p_\theta(z|x)} is then a map from the space of sensory inputs {X} to the space of environment states {Z} (as modelled by the brain). As in the case of VAEs however, this is incomputable in practice, since we (i.e., the brain) can’t evaluate the partition function {p(x)=\sum_zp_\theta(x|z)p(z)}. Instead, we construct a new distribution {q_\phi(z|x)} for the conditional probability of environment states {z} given a particular set of sensory inputs {x}. The variational parameters {\phi} for this ensemble control the precise hamiltonian that defines the distribution, i.e., the physical parameters of the brain itself. Depending on the level of resolution, these could represent, e.g., the firing status of all neurons, or the concentrations of neurotransmitters (or the set of all weights and biases in the case of artificial neural nets).

Obviously, the more closely {q_\phi(z|x)} approximates {p_\theta(z|x)}, the better our representation — and hence, the brain’s predictions — will be. As we saw before, we quantify this discrepancy by the Kullback-Leibler divergence

\displaystyle D_z(q_\phi(z|x)||p_\theta(z|x))=\sum_zq_\phi(z|x)\ln\frac{q_\phi(z|x)}{p_\theta(z|x)}~, \ \ \ \ \ (2)

which we can re-express in terms of the variational free energy

\displaystyle \begin{aligned} F_{q|}&=-\langle\ln p_\theta(x|z)\rangle_{q|}+D_z(q_\phi(z|x)||p(z))\\ &=-\sum_zq_\phi(z|x)\ln\frac{p_\theta(x,z)}{q_\phi(z|x)} =\langle E_{p|}\rangle_{q|}-S_{q|}~, \end{aligned} \ \ \ \ \ (3)

where the subscripts {p|,q|} denote the conditional distributions {p_\theta(z|x)}, {q_\phi(z|x)}. On the far right-hand side, {E_{p|}=-\ln p_\theta(x,z)} is the energy or hamiltonian for the ensemble {p_\theta(z|x)} (with partition function {Z=p(x)}), and {S_{q|}=-\sum_zq_\phi(z|x)\ln q_\phi(z|x)} is the entropy of {q_\phi(z|x)} (see the aforementioned post for details).
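For discrete distributions, the equality of the two lines in (3) is a one-liner to verify. The sketch below uses arbitrary randomly generated toy distributions purely for illustration:

```python
import numpy as np

# Toy check that the two lines of (3) agree: for discrete z,
# -<ln p(x|z)>_q + KL(q || p(z)) = -sum_z q ln[p(x,z)/q].
rng = np.random.default_rng(2)
nz = 4                                            # number of latent states z
p_z = rng.dirichlet(np.ones(nz))                  # prior p(z)
p_x_given_z = rng.dirichlet(np.ones(3), size=nz)  # likelihood p(x|z), 3 inputs
q = rng.dirichlet(np.ones(nz))                    # variational q_phi(z|x)
x = 1                                             # an observed sensory input

lik = p_x_given_z[:, x]                           # p(x|z) as a function of z
p_joint = lik * p_z                               # p(x,z) = p(x|z) p(z), cf. (1)

F1 = -np.sum(q * np.log(lik)) + np.sum(q * np.log(q / p_z))  # first line of (3)
F2 = -np.sum(q * np.log(p_joint / q))                        # second line of (3)
assert np.isclose(F1, F2)
```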

However, at this point we must diverge from our analogy with VAEs, since what we’re truly after is a model of the state of the world which is independent of our current sensory inputs. Consider that from a selectionist standpoint, a brain that changes its environmental model when a predator temporarily moves out of sight is less likely to pass on the genes for its construction! Said differently, a predictive model of reality will be more successful when it continues to include the moon, even when nobody looks at it. Thus instead of {q_\phi(z|x)}, we will compare {p_\theta(x|z)} with the ensemble density {q_\lambda(z)}, where — unlike in the case of {p(x)} or {p(z)} — we have denoted the variational parameters {\lambda} explicitly, since they will feature crucially below. Note that {\lambda} is not the same as {\theta} (and similarly, whatever parameters characterize the marginals {p(x)}, {p(z)} cannot be identified with {\theta}). One way to see this is by comparison with our example of renormalization in deep networks, where the couplings in the joint distribution (here, {\phi} in {q_\phi(x,z)}) get renormalized after marginalizing over some degrees of freedom (here, {\lambda} in {q_\lambda(z)}, after marginalizing over all possible sensory inputs {x}). Friston therefore defines the variational free energy as

\displaystyle \begin{aligned} \mathcal{F}_q&=-\langle\ln p_\theta(x|z)\rangle_q+D_z(q_\lambda(z)||p(z))\\ &=-\sum_zq_\lambda(z)\ln\frac{p_\theta(x,z)}{q_\lambda(z)} =\langle E_{p|}\rangle_{q}-S_{q}~, \end{aligned} \ \ \ \ \ (4)

where we have used a curly {\mathcal{F}} to distinguish this from {F} above, and note that the subscript {q} (no vertical bar) denotes that expectation values are computed with respect to the distribution {q_\lambda(z)}. The first equality expresses {\mathcal{F}_q} as the negative log-likelihood of sensory inputs given the state of the environment, plus an error term that quantifies how far the brain’s internal model of the world {q_\lambda(z)} is from the model consistent with our observations, {p(z)}, cf. (1). Equivalently, comparing with (2) (with {q_\lambda(z)} in place of {q_\phi(z|x)}), we’re interested in the Kullback-Leibler divergence between the brain’s model of the external world, {q_\lambda(z)}, and the conditional likelihood of a state therein given our sensory inputs, {p_\theta(z|x)}. Thus we arrive at the nutshell description we gave above, namely that the principle is to minimize the difference between what is and what we think there is. As alluded to above, there is a selectionist argument for this principle, namely that organisms whose beliefs accord poorly with reality tend not to pass on their genes.

As an aside, it is perhaps worth emphasizing that both of these variational free energies are perfectly valid: unlike the Helmholtz free energy, which is uniquely defined, one can define different variational free energies depending on which ensembles one wishes to compare, provided each admits an expression of the form {\langle E\rangle-S} for some energy {E} and entropy {S} (in case it wasn’t clear by now, we’re working with the dimensionless or reduced free energy, equivalent to setting {\beta=1}; the reason for this general form involves a digression on Legendre transforms). Comparing (4) and (3), one sees that the difference in this case is simply a difference in entropies and expectation values with respect to prior {q_\lambda(z)} vs. conditional distributions {q_\phi(z|x)} (which makes sense, since all we did was replace the latter by the former in our first definition).

Now, viewing the brain as an inference machine means that it seeks to optimize its predictions about the world, which, in this context, amounts to minimizing the free energy by varying the parameters {\theta,\,\lambda}. As explained above, {\theta} corresponds to the actions the system can take to alter its sensory inputs. From the first equality in (4), we see that the dependence on the action parameters is entirely contained in the log-likelihood of sensory inputs: the second, Kullback-Leibler term contains only priors (cf. our discussion of gradient descent in VAEs). Thus, optimizing the free energy with respect to {\theta} means that the system will act in such a way as to fulfill its expectations with regard to sensory inputs. Friston neatly summarizes this philosophy as the view that “we may not interact with the world to maximize our reward but simply to ensure it behaves as we think it should” [3]. While this might sound bizarre at first glance, the key fact to bear in mind is that the system is limited in the actions it can perform, i.e., in its ability to adapt. In other words, a system with low free energy is by definition adapting well to changes in its environment or its own internal needs, and therefore is positively selected for relative to systems whose ability to model and adapt to their environment is worse (higher free energy).

What about optimization with respect to the other set of variational parameters, {\lambda}? As mentioned above, these correspond to the physical parameters of the system itself, so this amounts to adjusting the brain’s internal parameters — connection strengths, neurotransmitter levels, etc. — to ensure that our perceptions are as accurate as possible. By applying Bayes’ rule to the joint distribution {p_\theta(x,z)}, we can re-arrange the expression for the free energy to isolate this dependence in a single Kullback-Leibler term:

\displaystyle \mathcal{F}_q=-\ln p_\theta(x)+D_z\left( q_\lambda(z)||p_\theta(z|x)\right)~, \ \ \ \ \ (5)

where we have used the fact that {\langle \ln p_\theta(x)\rangle_q=\ln p_\theta(x)}. This form of the expression shows clearly that minimization with respect to {\lambda} directly corresponds to minimizing the Kullback-Leibler divergence between the brain’s internal model of the world, {q_\lambda(z)}, and the posterior probability of the state giving rise to its sensory inputs, {p_\theta(z|x)}. That is, in the limit where the second, Kullback-Leibler term vanishes, we are correctly modelling the causes of our sensory inputs. The selectionist interpretation is that systems which are less capable of accurately modelling their environment by correctly adjusting internal, “perception parameters” {\lambda} will have higher free energy, and hence will be less adept in bringing their perceptions in line with reality.
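As with (3), the decomposition (5) is straightforward to verify for discrete distributions. The sketch below (arbitrary random toy distributions, for illustration only) also confirms the minimization claim: the free energy is bounded below by {-\ln p(x)}, with equality exactly when {q_\lambda(z)} matches the posterior:

```python
import numpy as np

# Toy check of (5): F = -ln p(x) + KL(q || p(z|x)), so minimizing over the
# perception parameters drives q_lambda(z) to the posterior, where F = -ln p(x).
rng = np.random.default_rng(3)
nz = 4
p_z = rng.dirichlet(np.ones(nz))                  # prior p(z)
p_x_given_z = rng.dirichlet(np.ones(3), size=nz)  # likelihood p(x|z)
q = rng.dirichlet(np.ones(nz))                    # brain's model q_lambda(z)
x = 0

p_joint = p_x_given_z[:, x] * p_z                 # p(x,z)
p_x = p_joint.sum()                               # evidence p(x)
posterior = p_joint / p_x                         # p(z|x)

F = -np.sum(q * np.log(p_joint / q))              # second line of (4)
F5 = -np.log(p_x) + np.sum(q * np.log(q / posterior))  # eq. (5)
assert np.isclose(F, F5)

# At the optimum q = p(z|x), the KL term vanishes and F = -ln p(x)
F_opt = -np.sum(posterior * np.log(p_joint / posterior))
assert np.isclose(F_opt, -np.log(p_x))
assert F >= F_opt
```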

Thus far everything is quite abstract and rather general. But things become really interesting when we apply this basic framework to hierarchical models with both forward and backwards connections — such as the cerebral cortex — which leads to “recurrent dynamics that self-organize to suppress free energy or prediction error, i.e., recognition dynamics” [3]. In fact, Friston makes the even stronger argument that it is precisely the inability to invert the recognition problem that necessitates backwards (as opposed to purely feed-forwards) connections. In other words, the selectionist pressure to accurately model the (highly non-linear) world requires that brains evolve top-down connections from higher to lower cortical layers. Let’s flesh this out in a bit more detail.

Recall that {Z} is the space of environmental states as modelled by the brain. Thus we can formally associate the encoder, {p_\theta(z|x)}, with forwards connections, which propagate sensory data up the cortical hierarchy; Friston refers to this portion as the recognition model. That is, the recognition model should take a given data point {x}, and return the likelihood of a particular cause (i.e., world-state) {z}. In general however, the map from causes to sensory inputs — captured by the so-called generative model {p_\theta(x|z)} — is highly non-linear, and the brain must essentially invert this map to find contextually invariant causes (e.g., the continued threat of a predator even when it’s no longer part of our immediate sensory input). This is the intractable problem of computing the partition function above, the workaround for which is to instead postulate an approximate recognition model {q_\lambda(z)}, whose parameters {\lambda} are encoded in the forwards connections. The role of the generative model {p_\theta(x|z)} is then to modulate sensory inputs (or their propagation and processing) based on the prevailing belief about the environment’s state, the idea being that these effects are represented in backwards (and lateral) connections. Therefore, the role of these backwards or top-down connections is to modulate forwards or bottom-up connections, thereby suppressing prediction error, which is how the brain operationally minimizes its free energy.

The punchline is that backwards connections are necessary for general perception and recognition in hierarchical models. As mentioned above, this is quite interesting insofar as it offers, on the one hand, a mathematical explanation for the cortical structure found in biological brains, and on the other, a potential guide to more powerful, neuroscience-inspired artificial intelligence.

There are however a couple technical exceptions to this claim of necessity worth mentioning, which is why I snuck in the qualifier “general” in the punchline above. If the abstract generative model can be inverted exactly, then there’s no need for (expensive and time-consuming) backwards connections, because one can obtain a perfectly suitable recognition model that reliably predicts the state of the world given sensory inputs, using a purely feed-forward network. Mathematically, this corresponds to simply taking {q_\lambda(z)=p_\theta(z|x)} in (4) (i.e., zero Kullback-Leibler divergence (2)), whereupon the free energy reduces to the negative log-likelihood of sensory inputs,

\displaystyle \mathcal{F}_{p}=-\ln p(x)~, \ \ \ \ \ (6)

where we have used the fact that {\langle\ln p(x)\rangle_{p|}=\ln p(x)}. Since real-world models are generally non-linear in their inputs however, invertibility is not something one expects to encounter in realistic inference machines (i.e., brains). Indeed, our brains evolved under strict energetic and space constraints; there simply isn’t enough processing power to brute-force the problem by using dedicated feed-forward networks for all our recognition needs. The other important exception is when the recognition process is purely deterministic. In this case one replaces {q_\lambda(z)} by a Kronecker delta function {\delta(z-z(x))}, so that upon performing the summation, the inferred state {z} becomes a deterministic function of the inputs {x}. Then the second expression for {\mathcal{F}} in (4) becomes the negative log-likelihood of the joint distribution

\displaystyle \mathcal{F}_\delta=-\ln p_\theta(x,z(x)) =-\ln p_\theta(x|z(x))-\ln p(z(x))~, \ \ \ \ \ (7)

where we have used the fact that {\ln\delta(0)=0}. Note that the invertible case, (6), corresponds to maximum likelihood estimation (MLE), while the deterministic case (7) corresponds to so-called maximum a posteriori estimation (MAP), the only difference being that the latter includes a weighting based on the prior distribution {p(z(x))}. Neither requires the conditional distribution {p_\theta(z|x)}, and so skirts the incomputability issue with the partition function above. The reduction to these familiar machine learning estimators for such simple models is reasonable, since only in relatively contrived settings does one ever expect deterministic/invertible recognition.
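The MLE/MAP distinction just described amounts to a one-term difference in the objective. In the toy sketch below (a random discrete model, purely for illustration), the two estimates differ precisely when the prior term tips the argmax:

```python
import numpy as np

# Toy illustration of (6) vs (7): MLE maximizes ln p(x|z) alone, while MAP
# adds the prior weighting ln p(z). All distributions are random toy choices.
rng = np.random.default_rng(4)
nz = 5
p_z = rng.dirichlet(np.ones(nz))                  # prior p(z)
p_x_given_z = rng.dirichlet(np.ones(3), size=nz)  # likelihood p(x|z)
x = 2

log_lik = np.log(p_x_given_z[:, x])               # ln p(x|z)
z_mle = np.argmax(log_lik)                        # invertible case, cf. (6)
z_map = np.argmax(log_lik + np.log(p_z))          # deterministic case, cf. (7)
```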

In addition to motivating backwards connections, the hierarchical aspect is important because it allows the brain to learn its own priors through a form of empirical Bayes. In this sense, the free energy principle is essentially an elegant (re)formulation of predictive coding. Recall that when we introduced the generative model in the form of the decoder {p_\theta(x|z)} in (1), we also necessarily introduced the prior distribution {p(z)}: the likelihood of a particular sensory input {x} given (our internal model of) the state of the environment (i.e., the cause) {z} only makes sense in the context of the prior distribution of causes. Where does this prior distribution come from? In artificial models, we can simply postulate some (e.g., Gaussian or informative) prior distribution and proceed to train the model from there. But a hierarchical model like the brain enables a more natural option. To illustrate the basic idea, consider labelling the levels in such a cortical hierarchy by {i\in\{0,\ldots,n\}}, where 0 is the bottom-most layer and {n} is the top-most layer. Then {x_i} denotes sensory data at the corresponding layer; i.e., {x_0} corresponds to raw sensory inputs, while {x_n} corresponds to the propagated input signals after all previous levels of processing. Similarly, let {z_i} denote the internal model of the state of the world assembled at (or accessible to) the {i^\mathrm{th}} layer. Then

\displaystyle p(z_i)=\sum_{z_{i-1}}p(z_i|z_{i-1})p(z_{i-1})~, \ \ \ \ \ (8)

i.e., the prior distribution {p(z_i)} implicitly depends on the knowledge of the state at all previous levels, analogous to how the IR degrees of freedom implicitly depend on the marginalized UV variables. The above expression can be iterated recursively until we reach {p(z_0)}. For present purposes, this can be identified with {p(x_0)}, since at the bottom-most level of the hierarchy, there’s no difference between the raw sensory data and the inferred state of the world (ignoring whatever intralayer processing might take place). In this (empirical Bayesian) way, the brain self-consistently builds up higher priors from states at lower levels.
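The recursion (8) is just repeated application of a stochastic (transition) matrix to the bottom-level prior. A minimal sketch, with arbitrary random transition matrices standing in for the learned conditionals:

```python
import numpy as np

# Toy sketch of (8): each empirical prior p(z_i) is obtained by pushing the
# previous level's prior through a conditional p(z_i|z_{i-1}). The transition
# matrices here are arbitrary random stochastic matrices, for illustration.
rng = np.random.default_rng(5)
layers, nz = 3, 4                              # hierarchy depth, states per layer
p = rng.dirichlet(np.ones(nz))                 # p(z_0), identified with p(x_0)
for i in range(1, layers + 1):
    T = rng.dirichlet(np.ones(nz), size=nz).T  # T[a, b] = p(z_i = a | z_{i-1} = b)
    p = T @ p                                  # eq. (8): marginalize over z_{i-1}
    assert np.isclose(p.sum(), 1.0)            # each empirical prior stays normalized
```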

The various works by Friston and collaborators go into much more detail, of course; I’ve made only the crudest sketch of the basic idea here. In particular, one can make things more concrete by examining the neural dynamics in such models, which some of these works seek to explore via something akin to a mean field theory (MFT) approach. I’d originally hoped to have time to dive into this in detail, but a proper treatment will have to await another post. Suffice to say however that an approach along the lines of the free energy principle may provide an elegant formulation which, as in the other topics mentioned at the beginning of this post, allows us to apply ideas from theoretical physics to understand the structure and dynamics of neural networks, and may even prove a fruitful mathematical framework for both theoretical neuroscience and (neuro-inspired) artificial intelligence.


[1] K. Friston, “Learning and inference in the brain,” Neural Networks (2003).

[2] K. Friston, “A theory of cortical responses,” Phil. Trans. R. Soc. B (2005).

[3] K. Friston, J. Kilner, and L. Harrison, “A free energy principle for the brain,” J. Physiology (Paris) (2006).

[4] K. J. Friston and K. E. Stephan, “Free-energy and the brain,” Synthese (2007).

[5] K. Friston, “The free-energy principle: a unified brain theory?,” Nature Rev. Neuro. (2010).


Deep learning and the renormalization group

In recent years, a number of works have pointed to similarities between deep learning (DL) and the renormalization group (RG) [1-7]. This connection was originally made in the context of certain lattice models, where decimation RG bears a superficial resemblance to the structure of deep networks in which one marginalizes over hidden degrees of freedom. However, the relation between DL and RG is more subtle than has been previously presented. The “exact mapping” put forth by [2], for example, is really just a formal analogy that holds for essentially any hierarchical model! That’s not to say there aren’t deeper connections between the two: in my earlier post on RBMs for example, I touched on how the cumulants encoding UV interactions appear in the renormalized couplings after marginalizing out hidden degrees of freedom, and we’ll go into this in much more detail below. But it’s obvious that DL and RG are functionally distinct: in the latter, the couplings (i.e., the connection or weight matrix) are fixed by the relationship between the hamiltonian at different scales, while in the former, these connections are dynamically altered in the training process. There is, in other words, an important distinction between structure and dynamics which seems to have been overlooked. Understanding both these aspects is required to truly understand why deep learning “works”, but “learning” itself properly refers to the latter.

That said, structure is the first step to dynamics, so I wanted to see how far one could push the analogy. To that end, I started playing with simple Gaussian/Bernoulli RBMs, to see whether understanding the network structure — in particular, the appearance of hidden cumulants, hence the previous post in this two-part sequence — would shed light on, e.g., the hierarchical feature detection observed in certain image recognition tasks, the propagation of structured information more generally, or the relevance of criticality to both deep nets and biological brains. To really make the RG analogy precise, one would ideally like a beta function for the network, which requires a recursion relation for the couplings. So my initial hope was to derive an expression for this in terms of the cumulants of the marginalized neurons, and thereby gain some insight into how correlations behave in these sorts of hierarchical networks.

To start off, I wanted a simple model that would be analytically solvable while making the analogy with decimation RG completely transparent. So I began by considering a deep Boltzmann machine (DBM) with three layers: a visible layer of Bernoulli units {x_i}, and two hidden layers of Gaussian units {y_i,z_i}. The total energy function is

\displaystyle \begin{aligned} H(x,y,z)&=-\sum_{i=1}^na_ix_i+\frac{1}{2}\sum_{j=1}^my_j^2+\frac{1}{2}\sum_{k=1}^pz_k^2-\sum_{ij}A_{ij}x_iy_j-\sum_{jk}B_{jk}y_jz_k\\ &=-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\left( \mathbf{y}^\mathrm{T}\mathbf{y}+\mathbf{z}^\mathrm{T}\mathbf{z}\right)-\mathbf{x}^\mathrm{T} A\,\mathbf{y}-\mathbf{y}^\mathrm{T} B\,\mathbf{z}~, \end{aligned} \ \ \ \ \ (1)

where on the second line I’ve switched to the more convenient vector notation; the dot product between vectors is implicit, i.e., {\mathbf{a}\,\mathbf{x}=\mathbf{a}\cdot\mathbf{x}}. Note that there are no intra-layer couplings, and that I’ve stacked the layers so that the visible layer {x} is connected only to the intermediate hidden layer {y}, which in turn is connected only to the final hidden layer {z}. The connection to RG will be made by performing sequential marginalizations over first {z}, and then {y}, so that the flow from UV to IR is {z\rightarrow y\rightarrow x}. There’s an obvious Bayesian parallel here: we low-energy beings don’t have access to complete information about the UV, so the visible units are naturally identified with IR degrees of freedom, and indeed I’ll use these terms interchangeably throughout.
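As a quick numerical sanity check that the index and vector forms of (1) agree, here's a minimal numpy sketch; the layer sizes, couplings, and unit values are arbitrary choices, not parameters from any trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 4, 3, 2                      # sizes of the x, y, z layers (arbitrary)
a = rng.normal(size=n)                 # visible biases
A = rng.normal(size=(n, m))            # x-y couplings
B = rng.normal(size=(m, p))            # y-z couplings
x = rng.choice([-1.0, 1.0], size=n)    # Bernoulli (spin) units
y, z = rng.normal(size=m), rng.normal(size=p)   # Gaussian units

def H_index(x, y, z):
    """First line of (1), written out index by index."""
    return (-sum(a[i]*x[i] for i in range(n))
            + 0.5*sum(y[j]**2 for j in range(m))
            + 0.5*sum(z[k]**2 for k in range(p))
            - sum(A[i, j]*x[i]*y[j] for i in range(n) for j in range(m))
            - sum(B[j, k]*y[j]*z[k] for j in range(m) for k in range(p)))

def H_vector(x, y, z):
    """Second line of (1), in vector notation."""
    return -a @ x + 0.5*(y @ y + z @ z) - x @ A @ y - y @ B @ z

err = abs(H_index(x, y, z) - H_vector(x, y, z))
```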

The joint distribution function describing the state of the machine is

\displaystyle p(x,y,z)=Z^{-1}e^{-\beta H(x,y,z)}~, \quad\quad Z[\beta]=\prod_{i=1}^n\sum_{x_i=\pm1}\int\!\mathrm{d}^my\mathrm{d}^pz\,e^{-\beta H(x,y,z)}~, \ \ \ \ \ (2)

where {\int\!\mathrm{d}^my=\int\!\prod_{i=1}^m\mathrm{d} y_i}, and similarly for {z}. Let us now consider sequential marginalizations to obtain {p(x,y)} and {p(x)}. In Bayesian terms, these distributions characterize our knowledge about the theory at intermediate- and low-energy scales, respectively. The first of these is

\displaystyle p(x,y)=\int\!\mathrm{d}^pz\,p(x,y,z)=Z^{-1}e^{-\beta\,\left(-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\mathbf{y}^\mathrm{T}\mathbf{y}-\mathbf{x}^\mathrm{T} A\,\mathbf{y}\right)}\int\!\mathrm{d}^pz\,e^{-\frac{\beta}{2}\mathbf{z}^\mathrm{T}\mathbf{z}+\beta\mathbf{y}^\mathrm{T} B\,\mathbf{z}}~. \ \ \ \ \ (3)

In order to establish a relationship between couplings at each energy scale, we then define the hamiltonian on the remaining, lower-energy degrees of freedom {H(x,y)} such that

\displaystyle p(x,y)=Z^{-1}e^{-\beta H(x,y)}~, \ \ \ \ \ (4)

which implies

\displaystyle H(x,y)=-\frac{1}{\beta}\ln\int\!\mathrm{d}^pz\,e^{-\beta H(x,y,z)} =-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\mathbf{y}^\mathrm{T}\mathbf{y}-\mathbf{x}^\mathrm{T} A\,\mathbf{y} -\frac{1}{\beta}\ln\int\!\mathrm{d}^pz\,e^{-\frac{\beta}{2}\mathbf{z}^\mathrm{T}\mathbf{z}+\beta\mathbf{y}^\mathrm{T} B\,\mathbf{z}}~. \ \ \ \ \ (5)

This is a simple multidimensional Gaussian integral:

\displaystyle \int\!\mathrm{d}^pz\,\mathrm{exp}\left(-\frac{1}{2}\mathbf{z}^\mathrm{T} M\,\mathbf{z}+J^\mathrm{T}\mathbf{z}\right) =\sqrt{\frac{(2\pi)^p}{|M|}}\,\mathrm{exp}\left(\frac{1}{2}J^\mathrm{T} M^{-1} J\right)~, \ \ \ \ \ (6)

where in the present case {M=\beta\mathbf{1}} and {J=\beta B^\mathrm{T}\mathbf{y}}. We therefore obtain

\displaystyle \begin{aligned} H(x,y)&=-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\mathbf{y}^\mathrm{T}\mathbf{y}-\mathbf{x}^\mathrm{T} A\,\mathbf{y} -\frac{1}{\beta}\ln\sqrt{\frac{(2\pi)^p}{\beta^p}}\exp\left(\frac{\beta}{2}\mathbf{y}^\mathrm{T} BB^\mathrm{T}\mathbf{y}\right)\\ &=f(\beta)-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\mathbf{y}^\mathrm{T} \left(\mathbf{1}-B'\right)\mathbf{y}-\mathbf{x}^\mathrm{T} A\,\mathbf{y}~, \end{aligned} \ \ \ \ \ (7)

where we have defined

\displaystyle f(\beta)\equiv\frac{1}{2\beta}\ln\frac{\beta^p}{(2\pi)^p} \qquad\mathrm{and}\qquad B_{ij}'\equiv\sum_{\mu=1}^pB_{i\mu}B_{\mu j}~. \ \ \ \ \ (8)
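Since everything below rests on the Gaussian identity (6), it's worth checking it numerically. A minimal sketch, assuming only numpy, compares the closed form against a brute-force grid integration for {p=2} with a randomly chosen positive-definite {M}:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 2
R = rng.normal(size=(p, p))
M = R @ R.T + p*np.eye(p)      # random symmetric positive-definite matrix
J = rng.normal(size=p)

# closed form of (6): sqrt((2 pi)^p / |M|) * exp(J^T M^{-1} J / 2)
closed = np.sqrt((2*np.pi)**p/np.linalg.det(M)) \
         * np.exp(0.5*(J @ np.linalg.solve(M, J)))

# brute-force Riemann sum over a square grid (the integrand is sharply
# peaked near the origin, so a modest box suffices)
g = np.linspace(-10.0, 10.0, 401)
dz = g[1] - g[0]
Z1, Z2 = np.meshgrid(g, g, indexing="ij")
pts = np.stack([Z1.ravel(), Z2.ravel()], axis=1)
quad = np.einsum("ni,ij,nj->n", pts, M, pts)   # z^T M z at each grid point
brute = np.exp(-0.5*quad + pts @ J).sum()*dz**2

rel_err = abs(brute/closed - 1)
```

The same check with {M=\beta\mathbf{1}_p} makes the {|M|=\beta^p} determinant factor in the prefactor explicit.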

The key point to note is that the interactions between the intermediate degrees of freedom {y} have been renormalized by an amount proportional to the coupling with the UV variables {z}. And indeed, in the context of deep neural nets, the advantage of hidden units is that they encode higher-order interactions through the cumulants of the associated prior. To make this connection explicit, consider the prior distribution of UV variables

\displaystyle q(z)=\sqrt{\frac{\beta^p}{(2\pi)^p}}\,\mathrm{exp}\left(-\frac{1}{2}\beta \,\mathbf{z}^\mathrm{T} \mathbf{z}\right)~. \ \ \ \ \ (9)

The cumulant generating function for {\mathbf{z}} with respect to this distribution is then

\displaystyle K_{z}(t)=\ln\langle e^{\mathbf{t}\mathbf{z}}\rangle =\ln\left[\sqrt{\frac{\beta^p}{(2\pi)^p}}\int\!\mathrm{d}^pz\,\mathrm{exp}\left(-\frac{1}{2}\beta \mathbf{z}^\mathrm{T} \mathbf{z}+\mathbf{t}\mathbf{z}\right)\right] =\frac{1}{2\beta}\mathbf{t}\mathbf{t}^\mathrm{T}~, \ \ \ \ \ (10)

cf. eqn. (4) in the previous post. So by choosing {\mathbf{t}=\beta\mathbf{y}^\mathrm{T} B}, we have

\displaystyle K_{z}\left(\beta\mathbf{y}^\mathrm{T} B\right)=\frac{\beta}{2}\mathbf{y}^\mathrm{T} BB^\mathrm{T}\mathbf{y} \ \ \ \ \ (11)

and may therefore express (7) as

\displaystyle H(x,y)=f(\beta)-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\mathbf{y}^\mathrm{T}\mathbf{y}-\frac{1}{\beta}K_{z}\left(\beta\mathbf{y}^\mathrm{T} B\right)~. \ \ \ \ \ (12)

From the cumulant expansion in the aforementioned eqn. (4), in which the {n^\mathrm{th}} cumulant is {\kappa_n=K_X^{(n)}(t)\big|_{t=0}}, we then see that the effect of marginalizing out UV (i.e., hidden) degrees of freedom is to induce higher-order couplings between the IR (i.e., visible) units, with the coefficients of the interaction weighted by the associated cumulant:

\displaystyle \begin{aligned} H(x,y)&=f(\beta)-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\mathbf{y}^\mathrm{T}\mathbf{y}-\frac{1}{\beta}\left(\kappa_1\mathbf{t}+\frac{\kappa_2}{2}\mathbf{t}^2+\ldots\right)\\ &=f(\beta)-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\mathbf{y}^\mathrm{T}\mathbf{y}-\kappa_1\mathbf{y}^\mathrm{T} B -\frac{\kappa_2}{2}\beta\mathbf{y}^\mathrm{T} BB^\mathrm{T} \mathbf{y}-\ldots~. \end{aligned} \ \ \ \ \ (13)

For the Gaussian prior (9), one immediately sees from (10) that all cumulants except for {\kappa_2=1/\beta} (the variance) vanish, whereupon (13) reduces to (7) above.
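One can also see this collapse of the cumulant expansion directly by Monte Carlo: sampling {z} from the Gaussian prior (9), the estimated generating function matches {\mathbf{t}\mathbf{t}^\mathrm{T}/2\beta}, and the third central moment vanishes up to sampling noise (the values of {\beta}, {p}, and {\mathbf{t}} below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
beta, p, N = 2.0, 3, 400_000
z = rng.normal(scale=1/np.sqrt(beta), size=(N, p))  # samples from the prior (9)
t = np.array([0.3, -0.2, 0.1])

K_mc = np.log(np.mean(np.exp(z @ t)))   # ln <exp(t.z)>, estimated from samples
K_exact = (t @ t)/(2*beta)              # eqn (10)

# a Gaussian has no cumulants beyond the second: e.g. the third
# central moment of any component should vanish (up to MC noise)
third = np.mean((z[:, 0] - z[:, 0].mean())**3)
```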

Now, let’s repeat this process to obtain the marginalized distribution of purely visible units {p(x)}. In analogy with Wilsonian RG, this corresponds to further lowering the cutoff scale in order to obtain a description of the theory in terms of low-energy degrees of freedom that we can actually observe. Hence, tracing out {y}, we have

\displaystyle p(x)=\int\!\mathrm{d}\mathbf{y}\,p(x,y)=Z^{-1}e^{-\beta\,\left( f-\mathbf{a}\mathbf{x}\right)}\int\!\mathrm{d}\mathbf{y} \,\mathrm{exp}\left(-\frac{1}{2}\beta\mathbf{y}^\mathrm{T}\left(\mathbf{1}-B'\right)\mathbf{y}+\beta\mathbf{x}^\mathrm{T} A\,\mathbf{y}\right)~. \ \ \ \ \ (14)

Of course, this is just another edition of (6), but now with {M=\beta\left(\mathbf{1}-B'\right)} and {J=\beta A^\mathrm{T}\mathbf{x}}. We therefore obtain

\displaystyle \begin{aligned} p(x)&=Z^{-1}\sqrt{\frac{(2\pi)^m}{\beta^m\left|\mathbf{1}-B'\right|}}\mathrm{exp}\left[-\beta\left( f(\beta)-\mathbf{a}\mathbf{x}-\frac{1}{2}\mathbf{x}^\mathrm{T} A\left(\mathbf{1}-B'\right)^{-1}A^\mathrm{T}\mathbf{x}\right)\right]\\ &=Z^{-1}\sqrt{\frac{(2\pi)^m}{\beta^m\left|\mathbf{1}-B'\right|}}\mathrm{exp}\left[-\beta\left( f(\beta)-\mathbf{a}\mathbf{x}-\frac{1}{2}\mathbf{x}^\mathrm{T} A'\mathbf{x}\right)\right] \end{aligned} \ \ \ \ \ (15)

where we have defined {A'\equiv A\,\left(\mathbf{1}-B'\right)^{-1}\!A^\mathrm{T}}. As before, we then define {H(x)} such that

\displaystyle p(x)=Z^{-1}e^{-\beta H(x)}~, \ \ \ \ \ (16)

which implies

\displaystyle H(x)=g(\beta)-\mathbf{a}\mathbf{x}-\frac{1}{2}\mathbf{x}^\mathrm{T} A'\mathbf{x}~, \ \ \ \ \ (17)

with

\displaystyle g(\beta)\equiv f(\beta)+\frac{1}{2\beta}\ln\frac{\beta^m\left|\mathbf{1}-B'\right|}{(2\pi)^m} =\frac{1}{2\beta}\ln\frac{\beta^{p+m}\left|\mathbf{1}-B'\right|}{(2\pi)^{p+m}}~, \ \ \ \ \ (18)

where {f(\beta)} and {B'} are given in (8).

Again we see that marginalizing out UV information induces new couplings between IR degrees of freedom; in particular, the hamiltonian {H(x)} contains a quadratic interaction term between the visible units. And we can again write this directly in terms of a cumulant generating function for hidden degrees of freedom by defining a prior of the form (9), but with {z\rightarrow w\in\{y,z\}} and {p\rightarrow m\!+\!p}. This will be of the form (10), where in this case we need to choose {\mathbf{t}=\beta\mathbf{x}^\mathrm{T} A\left(\mathbf{1}-B'\right)^{-1/2}}, so that

\displaystyle K_z(t)=\frac{\beta}{2} \mathbf{x}^\mathrm{T} A\left(\mathbf{1}-B'\right)^{-1} A^\mathrm{T}\mathbf{x} \ \ \ \ \ (19)

(where, since {B'^\mathrm{T}=B'}, the inverse matrix {(\mathbf{1}-B')^{-1}} is also invariant under the transpose; at this stage of exploration, I’m being quite cavalier about questions of existence). Thus the hamiltonian of visible units may be written

\displaystyle H(x)=g(\beta)-\mathbf{a}\mathbf{x}-\frac{1}{\beta}K_z(t)~, \ \ \ \ \ (20)

with {t} and {g(\beta)} as above. Since the prior with which these cumulants are computed is again Gaussian, only the second cumulant survives in the expansion, and we indeed recover (17).

To summarize, the sequential flow from UV (hidden) to IR (visible) distributions is, from top to bottom,

\displaystyle \begin{aligned} p(x,y,z)&=Z^{-1}e^{-\beta H(x,y,z)}~,\qquad &H(x,y,z)&=-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\left( \mathbf{y}^\mathrm{T}\mathbf{y}+\mathbf{z}^\mathrm{T}\mathbf{z}\right)-\mathbf{x}^\mathrm{T} A\,\mathbf{y}-\mathbf{y}^\mathrm{T} B\,\mathbf{z}~,\\ p(x,y)&=Z^{-1}e^{-\beta H(x,y)}~,\qquad &H(x,y)&=f(\beta)-\mathbf{a}\,\mathbf{x}+\frac{1}{2}\mathbf{y}^\mathrm{T}\left(\mathbf{1}-BB^\mathrm{T}\right)\mathbf{y}-\mathbf{x}^\mathrm{T} A\,\mathbf{y}~,\\ p(x)&=Z^{-1}e^{-\beta H(x)}~,\qquad &H(x)&=g(\beta)-\mathbf{a}\mathbf{x}-\frac{1}{2}\mathbf{x}^\mathrm{T} A\left(\mathbf{1}-BB^\mathrm{T}\right)^{-1}\!A^\mathrm{T}\mathbf{x}~, \end{aligned} \ \ \ \ \ (21)

where upon each marginalization, the new hamiltonian gains additional interaction terms/couplings governed by the cumulants of the UV prior (where “UV” is defined relative to the current cutoff scale, i.e., {q(z)} for {H(x,y)} and {q(w)} for {H(x)}), and the renormalization of the partition function is accounted for by

\displaystyle f(\beta)=\frac{1}{2\beta}\ln\frac{\beta^p}{(2\pi)^p}~, \quad\quad g(\beta)=\frac{1}{2\beta}\ln\frac{\beta^{p+m}\left|\mathbf{1}-BB^\mathrm{T}\right|}{(2\pi)^{p+m}}~. \ \ \ \ \ (22)

As an aside, note that at each level, fixing the forms (4) and (16) is equivalent to imposing that the partition function remain unchanged. Ultimately, this is required in order to preserve low-energy correlation functions. The two-point correlator {\langle x_1x_2\rangle} between visible (low-energy) degrees of freedom, for example, does not depend on which distribution we use to compute the expectation value, so long as the energy scale thereof is at or above the scale set by the inverse lattice spacing of {\mathbf{x}}:

\displaystyle \begin{aligned} \langle x_ix_j\rangle_{p(x,y,z)}&=\prod_{k=1}^n\sum_{x_k=\pm1}\int\!\mathrm{d}^my\mathrm{d}^pz\,x_ix_j\,p(x,y,z)\\ &=\prod_{k=1}^n\sum_{x_k=\pm1}x_ix_j\int\!\mathrm{d}^my\,p(x,y) =\langle x_ix_j\rangle_{p(x,y)}\\ &=\prod_{k=1}^n\sum_{x_k=\pm1}x_ix_j\,p(x) =\langle x_ix_j\rangle_{p(x)}~. \end{aligned} \ \ \ \ \ (23)

Failure to satisfy this requirement amounts to altering the theory at each energy scale, in which case there would be no consistent renormalization group relating them. In information-theoretic terms, this would represent an incorrect Bayesian inference procedure.
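The whole flow (21), including the {(\mathbf{1}-BB^\mathrm{T})^{-1}} structure of the induced coupling, can be checked end-to-end for a tiny machine: take {n=2} visible spins and single Gaussian units {y} and {z} ({m=p=1}, couplings chosen arbitrarily), marginalize by brute-force grid integration, and compare with the closed-form {H(x)}. Note that since the Gaussian integral over {y} contributes a factor {e^{+\frac{\beta}{2}\mathbf{x}^\mathrm{T} A'\mathbf{x}}}, the induced quadratic term enters {H(x)} with a minus sign:

```python
import numpy as np

beta = 1.0
a = np.array([0.3, -0.2])
A = np.array([[0.5], [0.7]])   # n x m = 2 x 1 coupling
B = np.array([[0.6]])          # m x p = 1 x 1 coupling, with BB^T < 1

grid = np.linspace(-12, 12, 601)
dg = grid[1] - grid[0]
Y, Z = np.meshgrid(grid, grid, indexing="ij")

def p_x_brute(x):
    """Unnormalized p(x): integrate e^{-beta H(x,y,z)} over y and z on a grid."""
    H = -a @ x + 0.5*(Y**2 + Z**2) - (x @ A).item()*Y - B[0, 0]*Y*Z
    return np.exp(-beta*H).sum() * dg**2

def p_x_closed(x):
    """Unnormalized p(x) from H(x) = const - a.x - (1/2) x A(1-BB^T)^{-1}A^T x."""
    Ap = A @ np.linalg.inv(np.eye(1) - B @ B.T) @ A.T
    return np.exp(beta*(a @ x + 0.5*(x @ Ap @ x)))

configs = [np.array([s1, s2], float) for s1 in (-1, 1) for s2 in (-1, 1)]
pb = np.array([p_x_brute(x) for x in configs]); pb /= pb.sum()
pc = np.array([p_x_closed(x) for x in configs]); pc /= pc.sum()

# the low-energy correlator <x_1 x_2> agrees between the two, cf. (23)
corr_brute = sum(q*x[0]*x[1] for q, x in zip(pb, configs))
corr_closed = sum(q*x[0]*x[1] for q, x in zip(pc, configs))
```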

Despite (or perhaps, because of) its simplicity, this toy model makes manifest the fact that the RG prescription is reflected in the structure of the network, not the dynamics of learning per se. Indeed, Gaussian units aside, the above is essentially nothing more than real-space decimation RG on a 1d lattice, with a particular choice of couplings between “spins” {\sigma\in\{x,y,z\}}. In this analogy, tracing out {z} and then {y} maps to a sequential marginalization over even spins in the 1d Ising model. “Dynamics” in this sense are determined by the hamiltonian {H(\sigma)}, which is again one-dimensional. When one speaks of deep “learning” however, one views the network as two-dimensional, and “dynamics” refers to the changing values of the couplings as the network attempts to minimize the cost function. In short, the RG structure lies in the fact that the couplings at each level in (21) encode the cumulants from hidden units in such a way as to ensure the preservation of visible correlations, whereas deep learning then determines their precise values in such a way as to reproduce a particular distribution. To say that deep learning itself is an RG is to conflate structure with function.

Nonetheless, there’s clearly an intimate parallel between RG and hierarchical Bayesian modeling at play here. As mentioned above, I’d originally hoped to derive something like a beta function for the cumulants, to see what insights theoretical physics and machine learning might yield to one another at this information-theoretic interface. Unfortunately, while one can see how the higher UV cumulants from {q(z)} are encoded in those from {q(w)}, the appearance of the inverse matrix makes a recursion relation for the couplings in terms of the cumulants rather awkward, and the result would only hold for the simple Gaussian hidden units I’ve chosen for analytical tractability here.

Fortunately, after banging my head against this for a month, I learned of a recent paper [8] that derives exactly the sort of cumulant relation I was aiming for, at least in the case of generic lattice models. The key is to not assume a priori which degrees of freedom will be considered UV/hidden vs. IR/visible. That is, when I wrote down the joint distribution (2), I’d already distinguished which units would survive each marginalization. While this made the parallel with the familiar decimation RG immediate — and the form of (1) made the calculations simple to perform analytically — it’s actually a bit unnatural from both a field-theoretic and a Bayesian perspective: the degrees of freedom that characterize the theory in the UV may be very different from those that we observe in the IR (e.g., strings vs. quarks vs. hadrons), so we shouldn’t make the distinction {x,y,z} at this level. Accordingly, [8] instead replace (2) with

\displaystyle p(\chi)=\frac{1}{Z}\,e^{\mathcal{K}(\chi)} \ \ \ \ \ (24)

where {\mathcal{K}\equiv-\beta H} is the so-called reduced (i.e., dimensionless) hamiltonian, cf. the reduced/dimensionless free energy {\tilde F\equiv\beta F} in the previous post. Note that {\chi} runs over all the degrees of freedom in the theory, which are all UV/hidden variables at the present level.

Two words of notational warning ere we proceed: first, there is a sign error in eqn. (1) of [8] (in version 1; the negative in the exponent has been absorbed into {\mathcal{K}} already). More confusingly however, their use of the terminology “visible” and “hidden” is backwards with respect to the RG analogy here. In particular, they coarse-grain a block of “visible” units into a single “hidden” unit. For reasons which should by now be obvious, I will instead stick to the natural Bayesian identifications above, in order to preserve the analogy with standard coarse-graining in RG.

Let us now repeat the above analysis in this more general framework. The real-space RG prescription consists of coarse-graining {\chi\rightarrow\chi'}, and then writing the new distribution {p(\chi')} in the canonical form (24). In Bayesian terms, we need to marginalize over the UV degrees of freedom {\chi}, retaining only the coarse-grained information carried by {\chi'}, except that unlike in my simple example above, we don’t want to make any assumptions about the form of {p(\chi,\chi')}. So we instead express the integral — or rather, the discrete sum over lattice sites — in terms of the conditional distribution {p(\chi'|\chi)}:

\displaystyle p(\chi')=\sum_\chi p(\chi,\chi') =\sum_\chi p(\chi'|\chi)\,p(\chi)~. \ \ \ \ \ (25)

where {\sum\nolimits_\chi=\prod_{i=1}^m\sum_{\chi_i}}, {\chi=\{\chi_1,\ldots,\chi_m\}}. Denoting the new dimensionless hamiltonian {\mathcal{K}'(\chi')}, with {\chi'=\{\chi_1',\ldots,\chi_n'\}}, {n<m}, we therefore have

\displaystyle e^{\mathcal{K}'(\chi')}=\sum_\chi p(\chi'|\chi)\,e^{\mathcal{K}(\chi)}~. \ \ \ \ \ (26)
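Before proceeding to the general blocking scheme, it may help to see (26) in its most familiar guise: decimating the middle spin of a three-site 1d Ising chain, where {p(\chi'|\chi)} is simply deterministic. A minimal sketch (the coupling value is arbitrary), which recovers the textbook recursion {\tanh K'=\tanh^2\!K}:

```python
import numpy as np

K = 0.8                       # dimensionless nearest-neighbour coupling (arbitrary)
spins = (-1, 1)

# K(chi) = K*(s1*s2 + s2*s3) on an open three-site chain; chi' = (s1, s3),
# so p(chi'|chi) is a delta function and (26) is just a sum over s2
def eK_prime(s1, s3):
    return sum(np.exp(K*(s1*s2 + s2*s3)) for s2 in spins)

# write the result as exp(K'*s1*s3 + const) and solve for K'
K_prime = 0.5*np.log(eK_prime(1, 1)/eK_prime(1, -1))

# textbook decimation recursion for the 1d Ising model
K_rec = np.arctanh(np.tanh(K)**2)
```

The constant absorbed into “const” above is exactly the {\ln Z_0}-type normalization that appears below.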

So far, so familiar, but now comes the trick: [8] split the hamiltonian {\mathcal{K}(\chi)} into a piece containing only intra-block terms, {\mathcal{K}_0(\chi)} (that is, interactions solely among the set of hidden units which is to be coarse-grained into a single visible unit), and a piece containing the remaining, inter-block terms, {\mathcal{K}_1(\chi)} (that is, interactions between different aforementioned sets of hidden units).

Let us denote a block of hidden units by {\mathcal{H}_j\ni\chi_i}, such that {\chi=\bigcup_{j=1}^n\mathcal{H}_j} (note that since {\mathrm{dim}(\chi)=m}, this implies {\mathrm{dim}(\mathcal{H}_j)=m/n} degrees of freedom {\chi_i} per block). To each {\mathcal{H}_j}, we associate a visible unit {\mathcal{V}_j=\chi'_j}, into which the constituent UV variables {\chi_i} have been coarse-grained. (Note that, for the reasons explained above, we have swapped {\mathcal{H}\leftrightarrow\mathcal{V}} relative to [8]). Then translation invariance implies

\displaystyle p(\chi'|\chi)=\prod_{j=1}^np(\mathcal{V}_j|\mathcal{H}_j) \qquad\mathrm{and}\qquad \mathcal{K}_0(\chi)=\sum_{j=1}^n\mathcal{K}_b(\mathcal{H}_j)~, \ \ \ \ \ (27)

where {\mathcal{K}_b(\chi_i)} denotes a single intra-block term of the hamiltonian. With this notation in hand, (26) becomes

\displaystyle e^{\mathcal{K}'(\chi')}=\sum_\chi e^{\mathcal{K}_1(\chi)}\prod_{j=1}^np(\mathcal{V}_j|\mathcal{H}_j)\,e^{\mathcal{K}_b(\mathcal{H}_j)}~. \ \ \ \ \ (28)

Now, getting from this to the first line of eqn. (13) in [8] is a bit of a notational hazard. We must suppose that for each block {\mathcal{H}_j}, we can define the block-distribution {p_j=Z_b^{-1}e^{\mathcal{K}_b(\mathcal{H}_j)}}, where {Z_b=\sum_{\chi_i\in\mathcal{H}_j}e^{\mathcal{K}_b(\mathcal{H}_j)}=\prod_{i=1}^{m/n}\sum_{\chi_i}e^{\mathcal{K}_b(\chi_i)}}. Given the underlying factorization of the total Hilbert space, we furthermore suppose that the distribution of all intra-block contributions can be written {p_0=Z_0^{-1}e^{\mathcal{K}_0(\chi)}}, so that {Z_0=\prod_{i=1}^m\sum_{\chi_i\in\chi}e^{\mathcal{K}_0(\chi)}}. This implies that

\displaystyle Z_0=\sum_{\chi_1}\ldots\sum_{\chi_m}e^{\mathcal{K}_b(\mathcal{H}_1)}\ldots\,e^{\mathcal{K}_b(\mathcal{H}_n)} =\prod_{j=1}^n\sum_{\chi_i\in\mathcal{H}_j}e^{\mathcal{K}_b(\mathcal{H}_j)} =\prod_{j=1}^nZ_b~. \ \ \ \ \ (29)

Thus we see that we can insert a factor of {1=Z_0\cdot\left(\prod\nolimits_jZ_b\right)^{-1}} into (28), from which the remaining manipulations are straightforward: we identify

\displaystyle p(\mathcal{V}_j|\mathcal{H}_j)\frac{1}{Z_b}\,e^{\mathcal{K}_b(\mathcal{H}_j)} =p(\mathcal{V}_j|\mathcal{H}_j)\,p(\mathcal{H}_j) =p(\mathcal{H}_j|\mathcal{V}_j)\,p(\mathcal{V}_j) \ \ \ \ \ (30)

and {p(\chi')=\prod_{j=1}^np(\mathcal{V}_j)} (again by translation invariance), whereupon we have

\displaystyle e^{\mathcal{K}'(\chi')}=Z_0\,p(\chi')\sum_\chi p(\chi|\chi')\,e^{\mathcal{K}_1(\chi)}=Z_0\,p(\chi')\langle e^{\mathcal{K}_1(\chi)}\rangle~, \ \ \ \ \ (31)

where the expectation value is defined with respect to the conditional distribution {p(\chi|\chi')}. Finally, by taking the log, we obtain

\displaystyle \mathcal{K}'(\chi')=\ln Z_0+\ln p(\chi')+\ln\langle e^{\mathcal{K}_1(\chi)}\rangle~, \ \ \ \ \ (32)

which one clearly recognizes as a generalization of (12): {\ln Z_0} accounts for the normalization factor {f(\beta)}; {\ln p(\chi')} gives the contribution from the un-marginalized variables {x,y}; and the log of the expectation value is the contribution from the UV cumulants, cf. eqn. (1) in the previous post. Note that this is not the cumulant generating function itself, but corresponds to setting {t\!=\!1} therein:

\displaystyle K_X(1)=\ln\langle e^{X}\rangle=\sum_{n=1}\frac{\kappa_n}{n!}~. \ \ \ \ \ (33)

Within expectation values, {\mathcal{K}_1} becomes the dimensionless energy {-\beta\langle E_1\rangle}, so the {n^\mathrm{th}} moment/cumulant picks up a factor of {(-\beta)^n} relative to the usual energetic moments in eqn. (11) of the previous post. Thus we may express (32) in terms of the cumulants of the dimensionless hamiltonian {\mathcal{K}_1} as

\displaystyle \mathcal{K}'(\chi')=\ln Z_0+\ln p(\chi')+\sum_{n=1}\frac{(-\beta)^n}{n!}\kappa_n~, \ \ \ \ \ (34)

where {\kappa_n=K_{E_1}^{(n)}(t)\big|_{t=0}}, and the expectation values in the generating functions are computed with respect to {p(\chi|\chi')}.
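To see what setting {t\!=\!1} in the generating function looks like concretely, here's a minimal sketch for a simple discrete random variable (uniform on {\{-1,0,1\}}, an arbitrary choice), comparing {\ln\langle e^X\rangle} against partial sums of {\sum_n\kappa_n/n!}:

```python
import numpy as np

vals = np.array([-1.0, 0.0, 1.0])   # a simple discrete X, uniform weights
probs = np.full(3, 1/3)

lhs = np.log(probs @ np.exp(vals))  # ln <e^X>, the left-hand side of (33)

# raw moments, then cumulants via the standard moment/cumulant relations
m1, m2, m3, m4 = (probs @ vals**n for n in range(1, 5))
k1 = m1
k2 = m2 - m1**2
k3 = m3 - 3*m1*m2 + 2*m1**3
k4 = m4 - 4*m1*m3 - 3*m2**2 + 12*m1**2*m2 - 6*m1**4

sum2 = k1 + k2/2                    # truncation at n = 2
sum4 = sum2 + k3/6 + k4/24          # truncation at n = 4: closer to ln<e^X>
```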

This is great, but we’re not quite finished, since we’d still like to determine the renormalized couplings in terms of the cumulants, as I did in the simple Gaussian DBM above. This requires expressing the new hamiltonian in the same form as the old, which allows one to identify exactly which contributions from the UV degrees of freedom go where. (See for example chapter 13 of [9] for a pedagogical exposition of this decimation RG procedure for the 1d Ising model). For the class of lattice models considered in [8] — by which I mean, real-space decimation with the imposition of a buffer zone — one can write down a formal expression for the canonical form of the hamiltonian, but expressions for the renormalized couplings themselves remain model-specific.

There’s more cool stuff in the paper [8] that I won’t go into here, concerning the question of “optimality” and the behaviour of mutual information in these sorts of networks. Suffice to say that, as alluded to in the previous post, the intersection of physics, information theory, and machine learning is potentially rich yet relatively unexplored territory. While the act of learning itself is not an RG in a literal sense, the two share a hierarchical Bayesian language that may yield insights in both directions, and I hope to investigate this more deeply (pun intended) soon.


[1] C. Beny, “Deep learning and the renormalization group,” arXiv:1301.3124.

[2]  P. Mehta and D. J. Schwab, “An exact mapping between the Variational Renormalization Group and Deep Learning,” arXiv:1410.3831.

[3]  H. W. Lin, M. Tegmark, and D. Rolnick, “Why Does Deep and Cheap Learning Work So Well?,” arXiv:1608.08225.

[4]  S. Iso, S. Shiba, and S. Yokoo, “Scale-invariant Feature Extraction of Neural Network and Renormalization Group Flow,” arXiv:1801.07172.

[5]  M. Koch-Janusz and Z. Ringel, “Mutual information, neural networks and the renormalization group,” arXiv:1704.06279.

[6]  S. S. Funai and D. Giataganas, “Thermodynamics and Feature Extraction by Machine Learning,” arXiv:1810.08179.

[7]  E. Mello de Koch, R. Mello de Koch, and L. Cheng, “Is Deep Learning an RG Flow?,” arXiv:1906.05212.

[8]  P. M. Lenggenhager, Z. Ringel, S. D. Huber, and M. Koch-Janusz, “Optimal Renormalization Group Transformation from Information Theory,” arXiv:1809.09632.

[9] R. K. Pathria, Statistical Mechanics, 2nd ed. Butterworth-Heinemann (1996).

Posted in Minds & Machines, Physics

Cumulants, correlators, and connectivity

Lately, I’ve been spending a lot of time exploring the surprisingly rich mathematics at the intersection of physics, information theory, and machine learning. Among other things, this has led me to a new appreciation of cumulants. At face value, these are just an alternative to the moments that characterize a given probability distribution function, and aren’t particularly exciting. Except they show up all over statistical thermodynamics, quantum field theory, and the structure of deep neural networks, so of course I couldn’t resist trying to better understand the information-theoretic connections to which this seems to allude. In the first part of this two-post sequence, I’ll introduce them in the context of theoretical physics, and then turn to their appearance in deep learning in the next post, where I’ll dive into the parallel with the renormalization group.

The relation between these probabilistic notions and statistical physics is reasonably well-known, though the literature on this particular point unfortunately tends to be slightly sloppy. Loosely speaking, the partition function corresponds to the moment generating function, and the (Helmholtz) free energy corresponds to the cumulant generating function. By way of introduction, let’s make this identification precise.

The moment generating function for a random variable {X} is

\displaystyle M_X(t)\equiv \langle e^{tX}\rangle~,\quad\quad\forall t\in\mathbb{R}~, \ \ \ \ \ (1)

where {\langle\ldots\rangle} denotes the expectation value for the corresponding distribution. (As a technical caveat: in some cases, the moments — and correspondingly, {M_X} — may not exist, in which case one can resort to the characteristic function instead). By series expanding the exponential, we have

\displaystyle M_X(t)=1+t\langle X\rangle+\frac{t^2}{2}\langle X^2\rangle+\ldots\,=1+\sum_{n=1}m_n\frac{t^n}{n!}~, \ \ \ \ \ (2)

where {m_n} is the {n^\mathrm{th}} moment, which we can obtain by taking {n} derivatives and setting {t\!=\!0}, i.e.,

\displaystyle m_n=M_X^{(n)}(t)\Big|_{t=0}=\langle X^n\rangle~. \ \ \ \ \ (3)

However, it is often more convenient to work with cumulants instead of moments (e.g., for independent random variables, the cumulant of the sum is the sum of the cumulants, thanks to the log). These are uniquely specified by the moments, and vice versa—unsurprisingly, since the cumulant generating function is just the log of the moment generating function:

\displaystyle K_X(t)\equiv\ln M_X(t)=\ln\langle e^{tX}\rangle \equiv\sum_{n=1}\kappa_n\frac{t^n}{n!}~, \ \ \ \ \ (4)

where {\kappa_n} is the {n^\mathrm{th}} cumulant, which we again obtain by differentiating {n} times and setting {t=0}:

\displaystyle \kappa_n=K_X^{(n)}(t)\big|_{t=0}~. \ \ \ \ \ (5)

Note however that {\kappa_n} is not simply the log of {m_n}!
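A concrete illustration (a minimal sketch with an arbitrary {\lambda}): for a Poisson distribution, every cumulant equals {\lambda}, while the raw moments are polynomials in {\lambda}; the two are related by the expressions below, not by a simple log:

```python
import math

lam = 0.7
pmf = [math.exp(-lam)*lam**k/math.factorial(k) for k in range(60)]

# raw moments m_n = <X^n>, computed directly from the (truncated) pmf
m1, m2, m3 = (sum(q*k**n for k, q in enumerate(pmf)) for n in range(1, 4))

# first three cumulants from the moments
k1 = m1
k2 = m2 - m1**2
k3 = m3 - 3*m1*m2 + 2*m1**3
# for Poisson: k1 = k2 = k3 = lam, while m2 = lam + lam^2 and
# m3 = lam + 3*lam^2 + lam^3 grow with n
```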

Now, to make contact with thermodynamics, consider the case in which {X} is the energy of the canonical ensemble. The probability of a given energy eigenstate {E_i} is

\displaystyle p_i\equiv p(E_i)=\frac{1}{Z[\beta]}e^{-\beta E_i}~, \quad\quad \sum\nolimits_ip_i=1~. \ \ \ \ \ (6)

The moment generating function for energy is then

\displaystyle M_E(t)=\langle e^{tE}\rangle=\sum_i p(E_i)e^{tE_i} =\frac{1}{Z[\beta]}\sum_ie^{-(\beta\!-\!t)E_i} =\frac{Z[\beta-t]}{Z[\beta]}~. \ \ \ \ \ (7)

Thus we see that the partition function {Z[\beta]} is not the moment generating function, but there’s clearly a close relationship between the two. Rather, the precise statement is that the moment generating function {M_E(t)} is the ratio of two partition functions at inverse temperatures {\beta-t} and {\beta}, respectively. We can gain further insight by considering the moments themselves, which are — by definition (3) — simply expectation values of powers of the energy:

\displaystyle \langle E^n\rangle=M^{(n)}(t)\Big|_{t=0} =\frac{1}{Z[\beta]}\frac{\partial^n}{\partial t^n}Z[\beta\!-\!t]\bigg|_{t=0} =(-1)^n\frac{Z^{(n)}[\beta\!-\!t]}{Z[\beta]}\bigg|_{t=0} =(-1)^n\frac{Z^{(n)}[\beta]}{Z[\beta]}~. \ \ \ \ \ (8)

Note that derivatives of the partition function with respect to {t} have, at {t=0}, become derivatives with respect to inverse temperature {\beta}. (This little sleight of hand works because {Z} depends on {t} only through the combination {\beta-t}, so that {\partial_t\rightarrow-\partial_\beta}; for a generic function of {\beta} and {t} separately, no such replacement would be possible.) Of course, this is simply a more formal expression for the usual thermodynamic expectation values. The first moment of energy, for example, is

\displaystyle \langle E\rangle= -\frac{1}{Z[\beta]}\frac{\partial Z[\beta]}{\partial\beta} =\frac{1}{Z[\beta]}\sum_i E_ie^{-\beta E_i} =\sum_i E_i\,p_i~, \ \ \ \ \ (9)

which is the ensemble average. At a more abstract level however, (8) expresses the fact that the average energy — appropriately normalized — is canonically conjugate to {\beta}. That is, recall that derivatives of the action are conjugate variables to those with respect to which we differentiate. In classical mechanics for example, energy is conjugate to time. Upon Wick rotating to Euclidean signature, the trajectories become thermal circles with period {\beta}. Accordingly, the energetic moments can be thought of as characterizing the dynamics of the ensemble in imaginary time.
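Both (7) and (8) are easy to verify for a hypothetical discrete spectrum, computing the {\beta}-derivatives of {Z} by central finite differences (the energy levels, temperature, and {t} below are arbitrary choices):

```python
import numpy as np

E = np.array([0.0, 0.3, 1.1])          # hypothetical three-level spectrum
beta, h, t = 1.4, 1e-4, 0.2            # arbitrary inverse temperature, step, t

Zf = lambda b: np.exp(-b*E).sum()
p = np.exp(-beta*E)/Zf(beta)           # Boltzmann weights, eqn (6)

# eqn (7): the moment generating function is a ratio of partition functions
mgf_direct = p @ np.exp(t*E)
mgf_ratio = Zf(beta - t)/Zf(beta)

# eqn (8): <E^n> = (-1)^n Z^{(n)}[beta]/Z[beta], via finite differences
Z0 = Zf(beta)
d1 = (Zf(beta + h) - Zf(beta - h))/(2*h)
d2 = (Zf(beta + h) - 2*Z0 + Zf(beta - h))/h**2
```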

Now, it follows from (7) that the cumulant generating function (4) is

\displaystyle K_E(t)=\ln\langle e^{tE}\rangle=\ln Z[\beta\!-\!t]-\ln Z[\beta]~. \ \ \ \ \ (10)

While the {n^\mathrm{th}} cumulant does not admit a nice post-derivative expression as in (8) (though I suppose one could write it in terms of Bell polynomials if we drop the adjective), it is simple enough to compute the first few and see that, as expected, the first cumulant is the mean, the second is the variance, and the third is the third central moment:

\displaystyle \begin{aligned} K^{(1)}(t)\big|_{t=0}&=-\frac{Z'[\beta]}{Z[\beta]}=\langle E\rangle~,\\ K^{(2)}(t)\big|_{t=0}&=\frac{Z''[\beta]}{Z[\beta]}-\left(\frac{Z'[\beta]}{Z[\beta]}\right)^2=\langle E^2\rangle-\langle E\rangle^2~,\\ K^{(3)}(t)\big|_{t=0}&=-2\left(\frac{Z'[\beta]}{Z[\beta]}\right)^3+3\frac{Z'[\beta]Z''[\beta]}{Z[\beta]^2}-\frac{Z^{(3)}[\beta]}{Z[\beta]}\\ &=2\langle E\rangle^3-3\langle E\rangle\langle E^2\rangle+\langle E^3\rangle =\left\langle\left( E-\langle E\rangle\right)^3\right\rangle~, \end{aligned} \ \ \ \ \ (11)

where the prime denotes the derivative with respect to {\beta}. Note that since the second term in the generating function (10) is independent of {t}, the normalization drops out when computing the cumulants, so we would have obtained the same results had we worked directly with the partition function {Z[\beta]} and taken derivatives with respect to {\beta}. That is, we could define

\displaystyle K_E(\beta)\equiv-\ln Z[\beta] \qquad\implies\qquad \kappa_n=(-1)^{n-1}K_E^{(n)}(\beta)~, \ \ \ \ \ (12)

where, in contrast to (5), we don’t need to set anything to zero after differentiating. This expression for the cumulant generating function will feature more prominently when we discuss correlation functions below.
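A quick finite-difference check of (12) for another hypothetical spectrum: differentiating {-\ln Z} with respect to {\beta} reproduces the mean and variance of the energy directly, with nothing set to zero afterwards:

```python
import numpy as np

E = np.array([0.0, 0.5, 0.8, 2.0])     # hypothetical spectrum (arbitrary)
beta, h = 0.9, 1e-3

lnZ = lambda b: np.log(np.exp(-b*E).sum())
p = np.exp(-beta*E); p /= p.sum()

mean, var = p @ E, p @ E**2 - (p @ E)**2

# kappa_n = (-1)^(n-1) K_E^(n)(beta) with K_E = -ln Z, via finite differences
k1 = -(lnZ(beta + h) - lnZ(beta - h))/(2*h)                 # mean energy
k2 = (lnZ(beta + h) - 2*lnZ(beta) + lnZ(beta - h))/h**2     # energy variance
```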

So, what does the cumulant generating function have to do with the (Helmholtz) free energy, {F[\beta]=-\beta^{-1}\ln Z[\beta]}? Given the form (12), one sees that they’re essentially one and the same, up to a factor of {\beta}. And indeed the free energy is a sort of “generating function” in the sense that it allows one to compute any desired thermodynamic quantity of the system. The entropy, for example, is

\displaystyle S=-\frac{\partial F}{\partial T}=\beta^2\frac{\partial F}{\partial\beta} =\beta\langle E\rangle+\ln Z=-\langle\ln p\rangle~, \ \ \ \ \ (13)

where {p} is the Boltzmann distribution (6). However, the factor of {\beta^{-1}} in the definition of free energy technically prevents a direct identification with the cumulant generating function above. Thus it is really the log of the partition function itself — i.e., the dimensionless free energy {\beta F} — that serves as the cumulant generating function for the distribution. We’ll return to this idea momentarily, cf. (21) below.
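
Since (12) packages every cumulant into {\beta}-derivatives of {-\ln Z}, the identification of the first three cumulants with the mean, variance, and third central moment is easy to check numerically. Here is a minimal sketch (an arbitrary four-level toy spectrum, with finite differences standing in for the derivatives; all numbers are illustrative assumptions):

```python
import numpy as np

E = np.array([0.0, 0.7, 1.3, 2.1])   # arbitrary toy spectrum
beta = 0.8

def K(b):
    """Cumulant generating function K_E(beta) = -ln Z[beta], eq. (12)."""
    return -np.log(np.sum(np.exp(-b * E)))

# Moments computed directly from the Boltzmann weights
p = np.exp(-beta * E)
p /= p.sum()
mean = np.sum(p * E)
var = np.sum(p * E**2) - mean**2
third = np.sum(p * (E - mean)**3)

# kappa_n = (-1)^(n-1) K^(n)(beta), with central finite differences
h = 1e-2
K1 = (K(beta + h) - K(beta - h)) / (2 * h)
K2 = (K(beta + h) - 2 * K(beta) + K(beta - h)) / h**2
K3 = (K(beta + 2*h) - 2*K(beta + h) + 2*K(beta - h) - K(beta - 2*h)) / (2 * h**3)

assert abs(K1 - mean) < 1e-3         # kappa_1 = mean
assert abs(-K2 - var) < 1e-3         # kappa_2 = variance
assert abs(K3 - third) < 1e-2        # kappa_3 = third central moment
```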

So much for definitions; what does it all mean? It turns out that in addition to encoding correlations, cumulants are intimately related to connectedness (in the sense of connected graphs), which underlies their appearance in QFT. Consider, for concreteness, a real scalar field {\phi(x)} in 4 spacetime dimensions. As every student knows, the partition function

\displaystyle Z[J]=\mathcal{N}\int\mathcal{D}\phi\,\exp\left\{i\!\int\!\mathrm{d}^4x\left[\mathcal{L}(\phi,\partial\phi)+J(x)\phi(x)\right]\right\} \ \ \ \ \ (14)

is the generating function for the {n}-point correlator or Green function {G^{(n)}(x_1,\ldots,x_n)}:

\displaystyle G^{(n)}(x_1,\ldots,x_n)=\frac{1}{i^n}\frac{\delta^nZ[J]}{\delta J(x_1)\ldots\delta J(x_n)}\bigg|_{J=0}~, \ \ \ \ \ (15)

where the normalization {\mathcal{N}} is fixed by demanding that in the absence of sources, we recover the normalized vacuum-to-vacuum amplitude, i.e., {Z[0]=\langle0|0\rangle=1}. In the language of Feynman diagrams, the Green function contains all possible graphs — both connected and disconnected — that contribute to the corresponding transition amplitude. For example, the 4-point correlator of {\phi^4} theory contains, through first order in the coupling, a disconnected graph consisting of two Feynman propagators, another disconnected graph consisting of a Feynman propagator and a 1-loop diagram, and a connected graph consisting of a single 4-point vertex. But only the last of these contributes to the scattering process, so it’s often more useful to work with the generating function for connected diagrams only,

\displaystyle W[J]=-i\ln Z[J]~, \ \ \ \ \ (16)

from which we obtain the connected Green function {G_c^{(n)}}:

\displaystyle G_c^{(n)}(x_1,\ldots,x_n)=\frac{1}{i^{n-1}}\frac{\delta^nW[J]}{\delta J(x_1)\ldots\delta J(x_n)}\bigg|_{J=0}~. \ \ \ \ \ (17)

The fact that the generating functions for connected vs. disconnected diagrams are related by an exponential, that is, {Z[J]=\exp{i W[J]}}, is not obvious at first glance, but it is a basic exercise in one’s first QFT course to show that the coefficients of various diagrams indeed work out correctly by simply Taylor expanding the exponential {e^X=\sum_n\tfrac{X^n}{n!}}. In the example of {\phi^4} theory above, the only first-order diagram that contributes to the connected correlator is the 4-point vertex. More generally, one can decompose {G^{(n)}} into {G_c^{(n)}} plus products of {G_c^{(m)}} with {m<n}. The factor of {-i} in (16) goes away in Euclidean signature, whereupon we see that {Z[J]} is analogous to {Z[\beta]} — and hence plays the role of the moment generating function — while {W[J]} is analogous to {\beta F[\beta]} — and hence plays the role of the cumulant generating function in the form (12).
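
The combinatorics of this exponential relation is the same as in the moment/cumulant discussion above. As a quick sanity check (a toy sketch of the statistics, not a field-theory computation), take a Poisson distribution, whose cumulants are all equal to {\lambda}: exponentiating the cumulants via the standard recursion {m_n=\sum_j\binom{n-1}{j}\kappa_{j+1}m_{n-1-j}} reproduces the moments computed directly from the distribution:

```python
import math

lam = 0.7  # Poisson rate; all cumulants kappa_n = lam

# Moments from cumulants, i.e., the Taylor coefficients of exp K(t)
m = [1.0]
for n in range(1, 5):
    m.append(sum(math.comb(n - 1, j) * lam * m[n - 1 - j] for j in range(n)))

# Moments computed directly from the Poisson pmf
def direct_moment(n, terms=80):
    return sum(k**n * math.exp(-lam) * lam**k / math.factorial(k)
               for k in range(terms))

for n in range(1, 5):
    assert abs(m[n] - direct_moment(n)) < 1e-9
```

For instance, the recursion gives {m_3=\lambda^3+3\lambda^2+\lambda}, the known third raw moment of the Poisson distribution.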

Thus, the {n^\mathrm{th}} cumulant of the field {\phi} corresponds to the connected Green function {G_c^{(n)}}, i.e., the contribution from correlators of all {n} fields only, excluding contributions from lower-order correlators among them. For example, we know from Wick’s theorem that Gaussian correlators factorize, so the corresponding {4}-point correlator {G^{(4)}} becomes

\displaystyle \langle\phi_1\phi_2\phi_3\phi_4\rangle= \langle\phi_1\phi_2\rangle\langle\phi_3\phi_4\rangle +\langle\phi_1\phi_3\rangle\langle\phi_2\phi_4\rangle +\langle\phi_1\phi_4\rangle\langle\phi_2\phi_3\rangle~. \ \ \ \ \ (18)

What this means is that there are no interactions among all four fields that aren’t already explained by interactions among pairs thereof. The probabilistic version of this statement is that for the normal distribution, all cumulants other than {n=2} are zero. (For a probabilist’s exposition on the relationship between cumulants and connectivity, see the first of three lectures by Novak and LaCroix [1], which takes a more graph-theoretic approach).
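
The factorization (18) is also easy to verify by Monte Carlo; the sketch below uses an arbitrary covariance matrix, with the tolerance set by the sampling error:

```python
import numpy as np

rng = np.random.default_rng(42)
L = rng.normal(size=(4, 4))
C = L @ L.T                                   # an arbitrary 4x4 covariance matrix
X = rng.multivariate_normal(np.zeros(4), C, size=400_000)

prod = X[:, 0] * X[:, 1] * X[:, 2] * X[:, 3]
mc = prod.mean()                              # Monte Carlo <phi1 phi2 phi3 phi4>
wick = C[0, 1] * C[2, 3] + C[0, 2] * C[1, 3] + C[0, 3] * C[1, 2]

sem = prod.std() / np.sqrt(len(prod))         # standard error of the mean
assert abs(mc - wick) < 6 * sem               # Wick factorization, eq. (18)
```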

There’s one more important function that deserves mention here: the final member of the triumvirate of generating functions in QFT, namely the effective action {\Gamma[\phi]}, defined as the Legendre transform of {W[J]}:

\displaystyle \Gamma[\phi]=W[J]-\int\!\mathrm{d} x\,J(x)\phi(x)~. \ \ \ \ \ (19)

The Legendre transform is typically first encountered in classical mechanics, where it relates the Hamiltonian and Lagrangian formulations. Geometrically, it translates between a function and its envelope of tangents. More abstractly, it provides a map between the configuration space (here, the sources {J}) and the dual vector space (here, the fields {\phi}). In other words, {\phi} and {J} are conjugate variables in the sense that

\displaystyle \frac{\delta\Gamma}{\delta\phi}=-J \qquad\mathrm{and}\qquad \frac{\delta W}{\delta J}=\phi~. \ \ \ \ \ (20)

As an example that connects back to the thermodynamic quantities above: we already saw that {E} and {\beta} are conjugate variables by considering the partition function, but the Legendre transform reveals that the free energy and entropy are conjugate pairs as well. This is nicely explained in the lovely pedagogical treatment of the Legendre transform by Zia, Redish, and McKay [2], and also cleans up the disruptive factor of {\beta} that prevented the identification with the cumulant generating function above. The basic idea is that since we’re working in natural units (i.e., {k_B=1}), the thermodynamic relation in the form {\beta F+S=\beta E} (13) obscures the duality between the properly dimensionless quantities {\tilde F\equiv\beta F} and {\tilde S=S/k_B}. From this perspective, it is more natural to work with {\tilde F} instead, in which case we have both an elegant expression for the duality in terms of the Legendre transform, and a precise identification of the dimensionless free energy with the cumulant generating function (12):

\displaystyle \tilde F(\beta)+\tilde S(E)=\beta E~, \qquad\qquad K_E(\beta)=\tilde F=\beta F~. \ \ \ \ \ (21)
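
Both identities in (21) can be checked directly for a finite-level system (the three-level spectrum below is an arbitrary choice for illustration):

```python
import numpy as np

E = np.array([0.0, 0.5, 1.2])        # assumed toy spectrum
beta = 1.5
w = np.exp(-beta * E)
Z = w.sum()
p = w / Z                            # Boltzmann distribution

F_tilde = -np.log(Z)                 # dimensionless free energy beta*F = K_E(beta)
S = -np.sum(p * np.log(p))           # Gibbs entropy, -<ln p> of eq. (13)
E_mean = np.sum(p * E)

# The Legendre pair of eq. (21): F~ + S~ = beta <E>
assert abs(F_tilde + S - beta * E_mean) < 1e-12
```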

Now, back to QFT, in which {\Gamma[\phi]} generates one-particle irreducible (1PI) diagrams. A proper treatment of this would take us too far afield, but can be found in any introductory QFT book, e.g., [3]. The basic idea is that in order to be able to cut a reducible diagram, we need to work at the level of vertices rather than sources (e.g., stripping off external legs, and identifying the bare propagator between irreducible parts). The Legendre transform (19) thus removes the dependence on the sources {J}, and serves as the generator for the vertex functions of {\phi}, i.e., the fundamental interaction terms. The reason this is called the effective action is that in perturbation theory, {\Gamma[\phi]} contains the classical action as the leading saddle-point, as well as quantum corrections from the higher-order interactions in the coupling expansion.

In information-theoretic terms, the Legendre transform of the cumulant generating function is known as the rate function. This is a core concept in large deviations theory, and I won’t go into details here. Loosely speaking, it quantifies the exponential decay that characterizes rare events. Concretely, let {X_i} represent the outcome of some measurement or operation (e.g., a coin toss); then the mean after {N} independent trials is

\displaystyle M_N=\frac{1}{N}\sum_{i=1}^N X_i~. \ \ \ \ \ (22)

The probability that the sample mean {M_N} exceeds some specified value {x} is exponentially suppressed,

\displaystyle P(M_N>x)\approx e^{-N I(x)}~, \ \ \ \ \ (23)

where {I(x)} is the aforementioned rate function. The formal similarity with the partition function in terms of the effective action, {Z=e^{-\Gamma}}, is obvious, though the precise dictionary between the two is not. I suspect that a precise translation between the two languages — physics and information theory — can be made here as well, in which the increasing rarity of events as one moves along the tail of the distribution corresponds to increasingly high-order corrections to the quantum effective action, but I haven’t worked this out in detail.
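
As a concrete sketch, take the {X_i} to be fair coin flips valued in {0,1}; then the rate function is the Legendre transform of the cumulant generating function {K(t)=\ln\tfrac{1+e^t}{2}}, and its closed form is the relative entropy with respect to the fair coin. A numerical Legendre transform confirms this:

```python
import numpy as np

def K(t):
    """Cumulant generating function of a fair coin flip X in {0,1}."""
    return np.log((1 + np.exp(t)) / 2)

def rate_numeric(x, t=np.linspace(-10, 10, 200_001)):
    """I(x) = sup_t [t x - K(t)], evaluated on a dense grid."""
    return np.max(t * x - K(t))

def rate_exact(x):
    """Closed form: relative entropy of Bernoulli(x) w.r.t. Bernoulli(1/2)."""
    return x * np.log(x) + (1 - x) * np.log(1 - x) + np.log(2)

assert abs(rate_numeric(0.7) - rate_exact(0.7)) < 1e-6
assert abs(rate_numeric(0.5)) < 1e-6   # no suppression at the typical value
```

Note that {I(1/2)=0}: deviations vanish at the typical value, and the exponential suppression grows as one moves along the tail.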

Of course, the above is far from the only place in physics where cumulants are lurking behind the scenes, much less the end of the parallel with information theory more generally. In the next post, I’ll discuss the analogy between deep learning and the renormalization group, and see how Bayesian terminology can provide an underlying language for both.


[1] J. Novak and M. LaCroix, “Three lectures on free probability,” arXiv:1205.2097.

[2] R. K. P. Zia, E. F. Redish, and S. R. McKay, “Making sense of the Legendre transform,” arXiv:0806.1147.

[3] L. H. Ryder, Quantum Field Theory. Cambridge University Press, 2 ed., 1996.


Black hole interiors, state dependence, and all that

In the context of firewalls, the crux of the paradox boils down to whether black holes have smooth horizons (as required by the equivalence principle). It turns out that this is intimately related to the question of how the interior of the black hole can be reconstructed by an external observer. AdS/CFT is particularly useful in this regard, because it enables one to make such questions especially sharp. Specifically, one studies the eternal black hole dual to the thermofield double (TFD) state, which cleanly captures the relevant physics of real black holes formed from collapse.

To construct the TFD, we take two copies of a CFT and entangle them such that tracing out either results in a thermal state. Denoting the energy eigenstates of the left and right CFTs by {\tilde E_i} and {E_i}, respectively, the state is given by

\displaystyle |\Psi\rangle=\frac{1}{\sqrt{Z_\beta}}\sum_ie^{-\beta E_i/2}|E_i\rangle\otimes|\tilde E_i\rangle~, \ \ \ \ \ (1)

where {Z_\beta} is the partition function at inverse temperature {\beta}. The AdS dual of this state is the eternal black hole, the two sides of which join the left and right exterior bulk regions through the wormhole. Incidentally, one of the fascinating questions inherent to this construction is how the bulk spacetime emerges in a manner consistent with the tensor product of boundary CFTs. For our immediate purposes, the important fact is that operators in the left CFT source states behind the horizon from the perspective of the right (and vice-versa). The requirement from general relativity that the horizon be smooth then imposes conditions on the relationship between these operators.
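
The defining property, that tracing out either copy yields a thermal state, is easy to verify in a finite-dimensional sketch (a random 6-level Hermitian matrix standing in for the CFT Hamiltonian, with the left energy eigenbasis chosen to coincide with the right; all toy assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, beta = 6, 1.3
A = rng.normal(size=(d, d))
H = (A + A.T) / 2                    # random Hermitian stand-in for the CFT Hamiltonian
E_vals, V = np.linalg.eigh(H)
Z = np.sum(np.exp(-beta * E_vals))

# |Psi> = Z^{-1/2} sum_i e^{-beta E_i/2} |E_i> (x) |E_i>, eq. (1),
# stored as the matrix psi_{ab} in |Psi> = sum_{ab} psi_{ab} |a>|b>
psi = (V * np.exp(-beta * E_vals / 2)) @ V.T / np.sqrt(Z)

# Tracing out either copy gives rho = psi psi^T = e^{-beta H}/Z
rho = psi @ psi.T
thermal = (V * np.exp(-beta * E_vals)) @ V.T / Z
assert np.allclose(rho, thermal)
assert np.isclose(np.trace(rho), 1.0)
```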

A noteworthy approach in this vein is the so-called “state-dependence” proposal developed by Kyriakos Papadodimas and Suvrat Raju over the course of several years [1,2,3,4,5] (referred to as PR henceforth). Their collective florilegium spans several hundred pages, jam-packed with physics, and any summary I could give here would be a gross injustice. As alluded to above, however, the salient aspect is that they phrased the smoothness requirement precisely in terms of a condition on correlation functions of CFT operators across the horizon. Focusing on the two-point function for simplicity, this condition reads:

\displaystyle \langle\Psi|\mathcal{O}(t,\mathbf{x})\tilde{\mathcal{O}}(t',\mathbf{x}')|\Psi\rangle =Z^{-1}\mathrm{tr}\left[e^{-\beta H}\mathcal{O}(t,\mathbf{x})\mathcal{O}(t'+i\beta/2,\mathbf{x}')\right]~. \ \ \ \ \ (2)

Here, {\mathcal{O}} is an exterior operator in the right CFT, while {\tilde{\mathcal{O}}} is an interior operator in the left—that is, it represents an excitation localized behind the horizon from the perspective of an observer in the right wedge (see the diagram above). The analytic continuation {\tilde{\mathcal{O}}(t,\mathbf{x})\rightarrow\mathcal{O}(t+i\beta/2,\mathbf{x})} arises from the KMS condition (i.e., the periodicity of thermal Green functions in imaginary time). Physically, this is essentially the statement that one should reproduce the correct thermal expectation values when restricted to a single copy of the CFT.

The question then becomes whether one can find such operators in the CFT that satisfy this constraint. That is, we want to effectively construct interior operators by acting only in the exterior CFT. PR achieve this through their so-called “mirror operators” {\tilde{\mathcal{O}}_n}, defined by

\displaystyle \tilde{\mathcal{O}}_n\mathcal{O}_m|\Psi\rangle=\mathcal{O}_me^{-\beta H/2}\mathcal{O}_n^\dagger e^{\beta H/2}|\Psi\rangle~. \ \ \ \ \ (3)

While appealingly compact, it’s more physically insightful to unpack this into the following two equations:

\displaystyle \tilde{\mathcal{O}}_n|\Psi\rangle=e^{-\beta H/2}\mathcal{O}_n^\dagger e^{\beta H/2}|\Psi\rangle~, \quad\quad \tilde{\mathcal{O}}_n\mathcal{O}_m|\Psi\rangle=\mathcal{O}_m\tilde{\mathcal{O}}_n|\Psi\rangle~. \ \ \ \ \ (4)

The key point is that these operators are defined via their action on the state {|\Psi\rangle}, i.e., they are state-dependent operators. For example, the second equation does not say that the operators commute; indeed, as operators, {[\tilde{\mathcal{O}}_n,\mathcal{O}_m]\neq0}. But the commutator does vanish in this particular state, {[\tilde{\mathcal{O}}_n,\mathcal{O}_m]|\Psi\rangle=0}. This may seem strange at first sight, but it’s really just a matter of carefully distinguishing between equations that hold as operator statements and those that hold only at the level of states. Indeed, this is precisely the same crucial distinction between localized states vs. localized operators that I’ve discussed before.

PR’s work created some backreaction, most of which centered around the nature of this “unusual” state dependence, which generated considerable confusion. Aspects of PR’s proposal were critiqued in a number of papers, particularly [6,7], which led many to claim that state dependence violates quantum mechanics. Coincidentally, I had the good fortune of being a visiting grad student at the KITP around this time, where these issues were hotly debated during a long-term workshop on quantum gravity. This was a very stimulating time, when the firewall paradox was still center-stage, and the collective confusion was almost palpable. Granted, I was a terribly confused student, but the fact that the experts couldn’t even agree on language — let alone physics — certainly didn’t do me any favours. Needless to say, the debate was never resolved, and the field’s collective attention span eventually drifted to other things. Yet somehow, the claim that state dependence violates quantum mechanics (or otherwise constitutes an unusual or potentially problematic modification thereof) has since risen to the level of dogma, and one finds it regurgitated again and again in papers published since.

Motivated in part by the desire to understand the precise nature of state dependence in this context (though really, it was the interior spacetime I was after), I wrote a paper [8] last year in an effort to elucidate and connect a number of interesting ideas in the emergent spacetime or “It from Qubit” paradigm. At a technical level, the only really novel bit was the application of modular inclusions, which provide a relatively precise framework for investigating the question of how one represents information in the black hole interior, and perhaps how the bulk spacetime emerges more generally. The relation between Tomita-Takesaki theory itself (a subset of algebraic quantum field theory) and state dependence was already pointed out by PR [3], and is highlighted most succinctly in Kyriakos’ later paper in 2017 [9], which was the main stimulus behind my previous post on the subject. However, whereas PR arrived at this connection from more physical arguments (over the course of hundreds of pages!), I took essentially the opposite approach: my aim was to distill the fundamental physics as cleanly as possible, to which end modular theory proves rather useful for demystifying issues which might otherwise remain obfuscated by details. The focus of my paper was consequently decidedly more conceptual, and represents a personal attempt to gain deeper physical insight into a number of tantalizing connections that have appeared in the literature in recent years (e.g., the relationship between geometry and entanglement represented by Ryu-Takayanagi, or the ontological basis for quantum error correction in holography).

I’ve little to add here that isn’t said better in [8] — and indeed, I’ve already written about various aspects on other occasions — so I invite you to simply read the paper if you’re interested. Personally, I think it’s rather well-written, though card-carrying members of the “shut up and calculate” camp may find it unpalatable. The paper touches on a relatively wide range of interrelated ideas in holography, rather than state dependence alone; but the upshot for the latter is that, far from being pathological, state dependence (precisely defined) is

  1. a natural part of standard quantum field theory, built-in to the algebraic framework at a fundamental level, and
  2. an inevitable feature of any attempt to represent information behind horizons.

I hasten to add that “information” is another one of those words that physicists love to abuse; here, I mean a state sourced by an operator whose domain of support is spacelike separated from the observer (e.g., excitations localized on the opposite side of a Rindler/black hole horizon). The second statement above is actually quite general, and arises whenever one attempts to reconstruct an excitation outside its causal domain.

So why am I devoting an entire post to this, if I’ve already addressed it at length elsewhere? There were essentially two motivations. One is that I recently had the opportunity to give a talk about this at the YITP in Kyoto (the slides for which are available from the program website here), and I fell back down the rabbit hole in the course of reviewing. In particular, I wanted to better understand various statements in the literature to the effect that state dependence violates quantum mechanics. I won’t go into these in detail here — one can find a thorough treatment in PR’s later works — but suffice it to say the primary issue seems to lie more with language than physics: in the vast majority of cases, the authors simply weren’t precise about what they meant by “state dependence” (though in all fairness, PR weren’t totally clear on this either), and the rare exceptions to this had little to nothing to do with the unqualified use of the phrase here. I should add the disclaimer that I’m not necessarily vouching for every aspect of PR’s approach—they did a hell of a lot more than just write down (3), after all. My claim is simply that state dependence, in the fundamental sense I describe, is a feature, not a bug. Said differently, even if one rejects PR’s proposal as a whole, the state dependence that ultimately underlies it will continue to underlie any representation of the black hole interior. Indeed, I had hoped that my paper would help clarify things in this regard.

And this brings me to the second reason, namely: after my work appeared, a couple other papers [10,11] were written that continued the offense of conflating the unqualified phrase “state dependence” with different and not-entirely-clear things. Of course, there’s no monopoly on terminology: you can redefine terms however you like, as long as you’re clear. But conflating language leads to conflated concepts, and this is where we get into trouble. Case in point: both papers contain a number of statements which I would have liked to see phrased more carefully in light of my earlier work. Indeed, [11] goes so far as to write that “interior operators cannot be encoded in the CFT in a state-dependent way.” On the contrary, as I had explained the previous year, it’s actually the state-independent operators that lead to pathologies (specifically, violations of unitarity)! Clearly, whatever the author means by this, it is not the same state dependence at work here. So consider this a follow-up attempt to forestall further terminological confusion.

As I’ll discuss below, both these works — and indeed most other proposals from quantum information — ultimately rely on the Hayden-Preskill protocol [12] (and variations thereof), so the real question is how the latter relates to state dependence in the unqualified use of the term (i.e., as defined via Tomita-Takesaki theory; I refer to this usage as “unqualified” because if you’re talking about firewalls and don’t specify otherwise, then this is the relevant definition, as it underlies PR’s introduction of the phrase). I’ll discuss this in the context of Beni’s work [10] first, since it’s the clearer of the two, and comment more briefly on Geof’s [11] below.

In a nutshell, the classic Hayden-Preskill result [12] is a statement about the ability to decode information given only partial access to the complete quantum state. In particular, one imagines that the proverbial Alice throws a message comprised of {k} bits of information into a black hole of size {n\!-\!k\gg k}. The black hole will scramble this information very quickly — the details are not relevant here — such that the information is encoded in some complicated manner among the (new) total {n} bits of the black hole. For example, if we model the internal dynamics as a simple permutation of Alice’s {k}-bit message, it will be transformed into one of {2^k} possible {n}-bit strings—a huge number of possibilities!

Now suppose Bob wishes to reconstruct the message by collecting qubits from the subsequent Hawking radiation. Naïvely, one would expect him to need essentially all {n} bits (i.e., to wait until the black hole evaporates) in order to distinguish among the {2^k} possibilities. The surprising result of Hayden-Preskill is that in fact he needs only slightly more than {k} bits. The time-scale for this depends somewhat on the encoding performed by the black hole, but in principle, this means that Bob can recover the message just after the scrambling time. However, a crucial aspect of this protocol is that Bob knows the initial microstate of the black hole (i.e., the original {(n\!-\!k)}-bit string). This is the source of the confusing use of the phrase “state dependence”, as we’ll see below.
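
To make the role of the known microstate concrete, here is a purely classical toy sketch (my own illustration, not the actual Hayden-Preskill protocol): model the scrambling as a known random linear map over GF(2). A Bob who knows both the dynamics and the initial {(n\!-\!k)}-bit microstate can then recover Alice’s {k}-bit message from only slightly more than {k} output bits:

```python
import numpy as np

def gf2_solve(A, b):
    """Solve A x = b over GF(2), assuming A has full column rank."""
    A, b = A.copy(), b.copy()
    r, k = A.shape
    for col in range(k):
        piv = next(i for i in range(col, r) if A[i, col])  # StopIteration if rank-deficient
        A[[col, piv]] = A[[piv, col]]
        b[[col, piv]] = b[[piv, col]]
        for i in range(r):
            if i != col and A[i, col]:
                A[i] ^= A[col]
                b[i] ^= b[col]
    return b[:k]

rng = np.random.default_rng(7)
n, k, extra = 24, 8, 4
m = rng.integers(0, 2, k)            # Alice's k-bit message
s = rng.integers(0, 2, n - k)        # black-hole microstate, known to Bob

while True:
    U = rng.integers(0, 2, (n, n))   # known "scrambling" dynamics
    y = (U @ np.concatenate([m, s])) % 2       # scrambled n-bit black hole
    seen = y[:k + extra]             # Bob collects only k+4 "radiated" bits
    rhs = (seen + U[:k + extra, k:] @ s) % 2   # strip off the known microstate
    try:
        decoded = gf2_solve(U[:k + extra, :k], rhs)
        break
    except StopIteration:            # unlucky draw: submatrix rank-deficient, retry
        continue

assert np.array_equal(decoded, m)
```

If Bob does not know {s}, the same {k+4} bits leave the linear system wildly underdetermined; it is only this sense in which the protocol is “state-dependent”.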

Of course, as Hayden and Preskill acknowledge, this is a highly unrealistic model, and they didn’t make any claims about being able to reconstruct the black hole interior in this manner. Indeed, the basic physics involved has nothing to do with black holes per se, but is a generic feature of quantum error correcting codes, reminiscent of the question of how to share (or decode) a quantum “secret” [13]. The novel aspect of Beni’s recent work [10] is to try to apply this to resolving the firewall paradox, by explicitly reconstructing the interior of the black hole.

Beni translates the problem of black hole evaporation into the sort of circuit language that characterizes much of the quantum information literature. On the one hand, this is nice in that it enables him to make very precise statements in the context of a simple qubit model; and indeed, at the mathematical level, everything’s fine. The confusion arises when trying to lift this toy model back to the physical problem at hand. In particular, when Beni claims to reconstruct state-independent interior operators, he is — from the perspective espoused above — misusing the terms “state-independent”, “interior”, and “operator”.

Let’s first summarize the basic picture, and then try to elucidate this unfortunate linguistic hat-trick. The Hayden-Preskill protocol for recovering information from black holes is illustrated in the figure from Beni’s paper below. In this diagram, {B} is the black hole, which is maximally entangled (in the form of some number of EPR pairs) with the early radiation {R}. Alice’s message corresponds to the state {|\psi\rangle}, which we imagine tossing into the black hole as {A}. One then evolves the black hole (which now includes Alice’s message {A}) by some unitary operator {U}, which scrambles the information as above. Subsequently, {D} represents some later Hawking modes, with the remaining black hole denoted {C}. Bob’s task is to reconstruct the state {|\psi\rangle} by acting on {D} and {R} (since he only has access to the exterior) with some operator {V}.

Now, Beni’s “state dependence” refers to the fact that the technical aspects of this construction relied on putting the initial state of the black hole {+} radiation {|\Psi\rangle_{BR}} in the form of a collection of EPR pairs {|\Phi\rangle_{EPR}}. This can be done by finding some unitary operator {K}, such that

\displaystyle |\Psi\rangle_{BR}=(I\otimes K)|\Phi\rangle_{EPR}~, \ \ \ \ \ (5)

(Here, one imagines that {B} is further split into a Hawking mode and its partner just behind the horizon, so that {I} acts on the interior mode while {K} acts only on the new Hawking mode and the early radiation; see [10] for details). This is useful because it enables the algorithm to work for arbitrary black holes: for some other initial state {|\Psi'\rangle_{BR}}, one can find some other {K'\neq K} which results in the same state {|\Phi\rangle_{EPR}}. The catch is that Bob’s reconstruction depends on {K}, and therefore, on the initial state {|\Psi\rangle_{BR}}. But this is to be expected: it’s none other than the Hayden-Preskill requirement above that Bob needs to know the exact microstate of the system in order for the decoding protocol to work. It is in this sense that the Hayden-Preskill protocol is “state-dependent”, which clearly references something different from what we mean here. The reason I go so far as to call this a misuse of terminology is that Beni explicitly conflates the two, and regurgitates the claim that these “state-dependent interior operators” lead to inconsistencies with quantum mechanics, referencing the work above. Furthermore, as alluded to above, there’s an additional discontinuity of concepts here, namely that the “state-dependent” operator {V} is obviously not the “interior operator” to which we’re referring: its support isn’t even restricted to the interior, nor does it source any particular state localized therein!

Needless to say, I was in a superposition of confused and unhappy with the terminology in this paper, until I managed to corner Beni at YITP for a couple hours at the aforementioned workshop, where he was gracious enough to clarify various aspects of his construction. It turns out that he actually has in mind something different when he refers to the interior operator. Ultimately, the identification still fails on these same counts, but it’s worth following the idea a bit further in order to see how he avoids the “state dependence” in the vanilla Hayden-Preskill set-up above. (By now I shouldn’t have to emphasize that this form of “state dependence” isn’t problematic in any fundamental sense, and I will continue to distinguish it from the latter, unqualified use of the phrase with quotation marks).

One can see from the above diagram that the state of the black hole {|\Psi\rangle} — before Alice & Bob start fiddling with it — can be represented by the following diagram, also from [10]:

where {R}, {D}, and {C} are again the early radiation, later Hawking mode, and remaining black hole, respectively. The problem Beni solves is finding the “partner” — by which he means, the purification — of {D} in {CR}. Explicitly, he wants to find the operator {\tilde{\mathcal{O}}_{CR}^T} such that

\displaystyle (\mathcal{O}_D\otimes I_{CR})|\Psi\rangle=(I_D\otimes\tilde{\mathcal{O}}_{CR}^T)|\Psi\rangle~. \ \ \ \ \ (6)

Note that there’s yet another linguistic, and hence conceptual, discontinuity here, namely that Beni uses “qubit”, “mode”, and “operator” interchangeably (indeed, when I pressed him on this very point, he confirmed that he regards these as synonymous). These are very different beasts in the physical problem at hand; however, for the purposes of Beni’s model, the important fact is that one can push the operator {\mathcal{O}_D} (which one should think of as some operator that acts on {D}) through the unitary {U} to some other operator {\tilde{\mathcal{O}}_{CR}^T} that acts on both {C} and {R}:
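
In a finite-dimensional sketch, the pushing in (6) is just linear algebra: writing the state as a matrix {\psi_{ab}} (first factor playing the role of {D}, second of {CR}), the pushed-through operator can be taken to be {(\psi^{-1}\mathcal{O}\psi)^T} whenever {\psi} is invertible, reducing to the familiar transpose rule {\mathcal{O}\rightarrow\mathcal{O}^T} for a maximally entangled state:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
O = rng.normal(size=(d, d))                     # operator acting on the first factor ("D")
psi = rng.normal(size=(d, d)) + 2 * np.eye(d)   # generic (invertible) state matrix psi_{ab}

# Partner operator on the second factor ("CR") satisfying eq. (6)
T = (np.linalg.inv(psi) @ O @ psi).T

lhs = np.kron(O, np.eye(d)) @ psi.reshape(-1)   # (O_D (x) I)|Psi>
rhs = np.kron(np.eye(d), T) @ psi.reshape(-1)   # (I (x) T)|Psi>
assert np.allclose(lhs, rhs)

# For the maximally entangled state psi ∝ identity, T reduces to O^T
phi = np.eye(d) / np.sqrt(d)
assert np.allclose(np.kron(O, np.eye(d)) @ phi.reshape(-1),
                   np.kron(np.eye(d), O.T) @ phi.reshape(-1))
```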

He then goes on to show that one can reconstruct this operator {\tilde{\mathcal{O}}_{CR}} independently of the initial state of the black hole (i.e., the operator {K}) by coupling to an auxiliary system. Of course, I’m glossing over a great number of details here; in particular, Beni transmutes the outgoing mode {D} into a representation of the interior mode in his model, and calls whatever purifies it the “partner” {\tilde{\mathcal{O}}_{CR}^T}. Still, I personally find this a bit underwhelming; but then, from my perspective, the Hayden-Preskill “state dependence” wasn’t the issue to begin with; quantum information people may differ, and in any case Beni’s construction is still a neat toy model in its own domain.

However, the various conflations above are problematic when one attempts to map back to the fundamental physics we’re after: {\tilde{\mathcal{O}}_{CR}^T} is not the “partner” of the mode {D} in the relevant sense (namely, the pairwise entangled modes required for smoothness across the horizon), nor does it correspond to PR’s mirror operator (since its support actually straddles both sides of the horizon). Hence, while Beni’s construction does represent a non-trivial refinement of the original Hayden-Preskill protocol, I don’t think it solves the problem.

So if this model misses the point, what does Hayden-Preskill actually achieve in this context? Indeed, even in the original paper [12], they clearly showed that one can recover a message from inside the black hole. Doesn’t this mean we can reconstruct the interior in a state-independent manner, in the proper use of the term?

Well, not really. Essentially, Hayden-Preskill (in which I’m including Beni’s model as the current state-of-the-art) & PR (and I) are asking different questions: the former are asking whether it’s possible to decode messages to which one would not normally have access (answer: yes, if you know enough about the initial state and any auxiliary systems), while the latter are asking whether physics in the interior of the black hole can be represented in the exterior (answer: yes, if you use state-dependent operators). Reconstructing information about entangled qubits is not quite the same thing as reconstructing the state in the interior. Consider a single Bell pair for simplicity, consisting of an exterior qubit (say, {D} in Beni’s model) and the interior “partner” that purifies it. Obviously, this state isn’t localized to either side, and so does not correspond to an interior operator.

The distinction is perhaps a bit subtle, so let me try to clarify. Let us define the operator {\mathcal{O}_A} with support behind the horizon, whose action on the vacuum creates the state in which Alice’s message has been thrown into the black hole; i.e., let

\displaystyle |\Psi\rangle_A=(\mathcal{O}_A\otimes I_R)|\Psi\rangle_{EPR} \ \ \ \ \ (7)

denote the state of the black hole containing Alice’s message, where the identity factor acts on the early radiation. Now, the fundamental result of PR is that if Bob wishes to reconstruct the interior of the black hole (concretely, the excitation behind the horizon corresponding to Alice’s message), he can only do so using state-dependent operators. In other words, there is no operator with support localized to the exterior which precisely equals {\mathcal{O}_A}; but Bob can find a state that approximates {|\Psi\rangle_A} arbitrarily well. This is more than just an operational restriction: it stems from an interesting trade-off between locality and unitarity which seems built-in to the theory at a fundamental level; see [8] for details.

Alternatively, Bob might not care about directly reconstructing the black hole interior (since he’s not planning on following Alice in, he’s not concerned with verifying smoothness as we are). Instead, he’s content to wait for the “information” in this state to be emitted in the Hawking radiation. In this scenario, Bob isn’t trying to reconstruct the black hole interior corresponding to (7)—indeed, by now this state has long since been scrambled. Rather, he’s only concerned with recovering the information content of Alice’s message—a subtly related but crucially distinct procedure from reconstructing the corresponding state in the interior. And the fundamental result of Hayden-Preskill is that, given some admittedly idealistic assumptions (i.e., to the extent that the evaporating black hole can be viewed as a simple qubit model), this can also be done.
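
The qubit-model version of this statement can be checked numerically. Below is a toy sketch (my own; the register sizes are arbitrary illustrative choices): a one-qubit message is thrown into an old black hole that is maximally entangled with its early radiation, the dynamics are modeled by a Haar-random unitary, and after only three qubits have been re-radiated, Bob—who holds the early radiation—already has nearly all the information about the message.

```python
import numpy as np

rng = np.random.default_rng(42)

def haar_unitary(n, rng):
    """Haar-random unitary via QR of a complex Ginibre matrix."""
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def entropy(rho):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

# Registers: N = reference purifying Alice's 1-qubit message A;
# B = old black hole (5 qubits), maximally entangled with the early
# radiation E.  Initial state |Phi>_NA x |Phi>_BE, ordered (N, A, B, E).
dN = dA = 2
dB = dE = 2**5
psi = np.kron(np.eye(dA).reshape(-1) / np.sqrt(dA),
              np.eye(dB).reshape(-1) / np.sqrt(dB))

# Idealized black-hole dynamics: a Haar-random unitary on A u B.
U = haar_unitary(dA * dB, rng)
psi = np.einsum('ij,ajb->aib', U,
                psi.reshape(dN, dA * dB, dE)).reshape(-1)

# Radiate 3 qubits: split the output of U into R (caught by Bob) and
# B' (still behind the horizon).
dR = dBp = 2**3
t = psi.reshape(dN, dR, dBp, dE)

# Reduced density matrices via partial traces (einsum contractions):
rho_N   = np.einsum('arbe,crbe->ac', t, t.conj())
rho_R   = np.einsum('arbe,asbe->rs', t, t.conj())
rho_NR  = np.einsum('arbe,csbe->arcs', t, t.conj()).reshape(dN * dR, -1)
rho_RE  = np.einsum('arbe,asbf->resf', t, t.conj()).reshape(dR * dE, -1)
rho_NRE = np.einsum('arbe,csbf->arecsf', t, t.conj()).reshape(dN * dR * dE, -1)

# Mutual information between the reference and what Bob holds:
I_with    = entropy(rho_N) + entropy(rho_RE) - entropy(rho_NRE)  # R and E
I_without = entropy(rho_N) + entropy(rho_R) - entropy(rho_NR)    # R alone
print(f"I(N:RE) = {I_with:.3f} bits,  I(N:R) = {I_without:.3f} bits")
```

The mutual information I(N:RE) should come out close to its maximal value of 2 bits, while without the early radiation I(N:R) is nearly zero: this is the sense in which knowing the (entangled) initial state lets Bob decode from far fewer qubits than one would naïvely expect.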

In the case of Geof’s paper [11], there’s a similar but more subtle language difference at play. Here “state dependence” means something different from both Beni’s usage and PR’s (and mine); specifically, it refers to state dependence in the context of quantum error correction (QEC). This is more clearly explained in his earlier paper with Hayden [14], and refers to the fact that, in general, a given boundary operator may only reconstruct a given bulk operator for a single black hole microstate. Conversely, a “state-independent” boundary operator, in their language, is one which approximately reconstructs a given bulk operator in a larger class of states—specifically, all states in the code subspace. Note that the qualifier “approximate” is crucial here. Otherwise, schematically, if {\epsilon} represents some small perturbation of the vacuum {\Omega} (where “small” means that the backreaction is insufficient to move us beyond the code subspace), then an exact reconstruction of the operator {\mathcal{O}} that sources the state {|\Psi\rangle=\mathcal{O}|\Omega\rangle} would instead produce some other state {|\Psi'\rangle=\mathcal{O}|\Omega+\epsilon\rangle}. So at the end of the day, I simply find the phrasing in [11] misleading; the lack of qualifiers makes many of his statements about “state-(in)dependence” technically erroneous, even though they’re perfectly correct in the context of approximate QEC.
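
The QEC notion of state-independence within a code subspace can be made explicit in the standard three-qutrit code of Cleve, Gottesman, and Lo [13]. Below is a minimal numpy sketch (my own construction; the particular decoding unitary and qutrit labels are illustrative choices, not taken from [11] or [14]): a single operator supported on qutrits 1 and 2 alone implements the logical shift on every state in the code subspace.

```python
import numpy as np

d = 3

def ket(j, k, l):
    """Computational basis state |j,k,l> of three qutrits."""
    v = np.zeros(d**3)
    v[(j * d + k) * d + l] = 1.0
    return v

# Code words of the three-qutrit code:
zero_bar = (ket(0,0,0) + ket(1,1,1) + ket(2,2,2)) / np.sqrt(3)
one_bar  = (ket(0,1,2) + ket(1,2,0) + ket(2,0,1)) / np.sqrt(3)
two_bar  = (ket(0,2,1) + ket(1,0,2) + ket(2,1,0)) / np.sqrt(3)
code = [zero_bar, one_bar, two_bar]

# Decoding unitary on qutrits 1,2: the permutation |j,k> -> |k-j, 2j-k>
# (mod 3) maps each code word to |i>_1 x (|00>+|11>+|22>)/sqrt(3) on 2,3.
U12 = np.zeros((d**2, d**2))
for j in range(d):
    for k in range(d):
        U12[((k - j) % d) * d + (2 * j - k) % d, j * d + k] = 1.0

# Qutrit shift X|i> = |i+1 mod 3>; the logical shift, reconstructed on
# qutrits 1 and 2 only:
X = np.roll(np.eye(d), 1, axis=0)
Xbar_12 = np.kron(U12.T @ np.kron(X, np.eye(d)) @ U12, np.eye(d))

# The SAME operator works for every state in the code subspace,
# including superpositions:
for i in range(3):
    assert np.allclose(Xbar_12 @ code[i], code[(i + 1) % 3])
psi = 0.6 * zero_bar + 0.8j * one_bar
assert np.allclose(Xbar_12 @ psi, 0.6 * one_bar + 0.8j * two_bar)
```

Note that this state-independence is relative to the code subspace: the reconstructed operator acts correctly on all code states and their superpositions, but nothing is claimed about its action on states outside that subspace, which is precisely the qualifier at issue above.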

Ultimately, however, these [10,11,14] are quantum information-theoretic models, in which the causal structure of the original problem plays no role. This is obvious in Beni’s case [10], in which Hayden-Preskill boils down to the statement that if one knows the exact quantum state of the system (or approximately so, given auxiliary qubits), then one can recover information encoded non-locally (e.g., Alice’s bit string) from substantially fewer qubits than one would naïvely expect. It’s more subtle in [11,14], since the authors work explicitly in the context of entanglement wedge reconstruction in AdS/CFT, which superficially would seem to include aspects of the spacetime structure. However, they take the black hole to be included in the entanglement wedge (i.e., code subspace) in question, and ask only whether an operator in the corresponding boundary region “works” for every state in this (enlarged) subspace, regardless of whether the bulk operator we’re trying to reconstruct is behind the horizon (i.e., ignoring the localization of states in this subspace). And this is where overloading the terminology “state-(in)dependence” creates the most confusion. For example, when Geof writes that “boundary reconstructions are state independent if, and only if, the bulk operator is contained in the entanglement wedge” (emphasis added), he is making a statement that holds only at the level of QEC codes. If the bulk operator lies behind the horizon, however, then simply placing the black hole within the entanglement wedge does not alter the fact that a state-independent reconstruction, in the unqualified sense of the phrase, does not exist.

Of course, as the authors of [14] point out in that work, there is a close relationship between state-dependence in the QEC sense and state-dependence in PR’s sense. Indeed, one of the closing thoughts of my paper [8] was the idea that modular theory may provide an ontological basis for the epistemic utility of QEC in AdS/CFT. Hence I share the authors’ view that it would be very interesting to make the relation between QEC and (various forms of) state-dependence more precise.

I should add that in Geof’s work [11], he seems to skirt some of the interior/exterior objections above by identifying (part of) the black hole interior with the entanglement wedge of some auxiliary Hilbert space that acts as a reservoir for the Hawking radiation. Here I can only confess some skepticism as to various aspects of his construction (or rather, the legitimacy of his interpretation). In particular, the reservoir is artificially taken to lie outside the CFT, which would normally contain a complete representation of exterior states, including the radiation. Consequently, it is not entirely clear whether the reservoir has a sensible bulk dual at all, much less a geometric interpretation as the “entanglement wedge” behind the horizon, whose boundary is the origin rather than asymptotic infinity.

A related paper [15] by Almheiri, Engelhardt, Marolf, and Maxfield appeared on the arXiv simultaneously with Geof’s work. While these authors are not concerned with state-dependence per se, they do provide a more concrete account of the effects on the entanglement wedge in the context of a precise model for an evaporating black hole in AdS/CFT. The analogous confusion I have in this case is precisely how the Hawking radiation gets transferred to the left CFT, though this may eventually come down to language as well. In any case, this paper is more clearly written, and worth a read (happily, Henry Maxfield will speak about it during one of our group’s virtual seminars in August, so perhaps I’ll obtain greater enlightenment about both works then).

Having said all that, I believe all these works are helpful in strengthening our understanding, and exemplify the productive confluence of quantum information theory, holography, and black holes. A greater exchange of ideas from various perspectives can only lead to further progress, and I would like to see more work in all these directions.

I would like to thank Beni Yoshida, Geof Penington, and Henry Maxfield for patiently fielding my persistent questions about their work, and beg their pardon for the gross simplifications herein. I also thank the YITP in Kyoto for their hospitality during the Quantum Information and String Theory 2019 / It from Qubit workshop, where most of this post was written amidst a great deal of stimulating discussion.


  1. K. Papadodimas and S. Raju, “Remarks on the necessity and implications of state-dependence in the black hole interior,” arXiv:1503.08825
  2. K. Papadodimas and S. Raju, “Local Operators in the Eternal Black Hole,” arXiv:1502.06692
  3. K. Papadodimas and S. Raju, “State-Dependent Bulk-Boundary Maps and Black Hole Complementarity,” arXiv:1310.6335
  4. K. Papadodimas and S. Raju, “Black Hole Interior in the Holographic Correspondence and the Information Paradox,” arXiv:1310.6334
  5. K. Papadodimas and S. Raju, “An Infalling Observer in AdS/CFT,” arXiv:1211.6767
  6. D. Harlow, “Aspects of the Papadodimas-Raju Proposal for the Black Hole Interior,” arXiv:1405.1995
  7. D. Marolf and J. Polchinski, “Violations of the Born rule in cool state-dependent horizons,” arXiv:1506.01337
  8. R. Jefferson, “Comments on black hole interiors and modular inclusions,” arXiv:1811.08900
  9. K. Papadodimas, “A class of non-equilibrium states and the black hole interior,” arXiv:1708.06328
  10. B. Yoshida, “Firewalls vs. Scrambling,” arXiv:1902.09763
  11. G. Penington, “Entanglement Wedge Reconstruction and the Information Paradox,” arXiv:1905.08255
  12. P. Hayden and J. Preskill, “Black holes as mirrors: Quantum information in random subsystems,” arXiv:0708.4025
  13. R. Cleve, D. Gottesman, and H.-K. Lo, “How to share a quantum secret,” arXiv:quant-ph/9901025
  14. P. Hayden and G. Penington, “Learning the Alpha-bits of Black Holes,” arXiv:1807.06041
  15. A. Almheiri, N. Engelhardt, D. Marolf, and H. Maxfield, “The entropy of bulk quantum fields and the entanglement wedge of an evaporating black hole,” arXiv:1905.08762