Criticality in deep neural nets

In the previous post, we introduced mean field theory (MFT) as a means of approximating the partition function for interacting systems. In particular, we used this to determine the critical point at which the system undergoes a phase transition, and discussed the characteristic divergence of the correlation function. In this post, I’d like to understand how recent work in machine learning has leveraged these ideas from statistical field theory to yield impressive training advantages in deep neural networks. As mentioned in part 1, the key feature is the divergence of the correlation length at criticality, which controls the depth of information propagation in such networks.

I’ll primarily follow a pair of very interesting papers [1,2] on random neural nets, where the input (i.e., pre-activation) {z_i^l} of neuron {i} in layer {l} is given by

\displaystyle z_i^l=\sum_jW_{ij}^ly_j^{l-1}+b_i^l~, \ \ \ \ \ (1)

where {y_j^{l-1}=\phi(z_j^{l-1})} is the activation of neuron {j} in the previous layer, obtained by applying some non-linear function {\phi}, and {W^l=\{W_{ij}^l\}} is an {N_l\times N_{l-1}} matrix of weights. The qualifier “random” refers to the fact that the weights and biases are i.i.d. according to the normal distributions

\displaystyle \begin{aligned} W_{ij}^l\sim\mathcal{N}\left(0,\sigma_w^2/N_{l-1}\right) &\implies p\left( W_{ij}^l\right)=\sqrt{\frac{N_{l-1}}{2\pi\sigma_w^2}}\,e^{-\frac{N_{l-1}}{2}\left(\!\frac{W_{ij}^l}{\sigma_w}\right)^2}~,\\ b_i^l\sim\mathcal{N}\left(0,\sigma_b^2\right) &\implies p\left( b_i^l\right)=\frac{1}{\sqrt{2\pi}\sigma_b}\,e^{-\frac{1}{2}\left(\!\frac{b_i^l}{\sigma_b}\right)^2}~, \end{aligned} \ \ \ \ \ (2)

where the factor of {1/N_{l-1}} in the variance of the weights has been introduced to ensure that the contribution from the previous layer remains {O(1)} regardless of the layer width.
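To make the setup concrete, here’s a minimal numpy sketch of such a random network at initialization. The choice {\phi=\tanh}, the equal layer widths, and the particular values of {\sigma_w,\sigma_b} below are purely illustrative, not taken from [1,2]:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out, sigma_w, sigma_b):
    """Sample weights and biases according to (2)."""
    W = rng.normal(0.0, sigma_w / np.sqrt(n_in), size=(n_out, n_in))
    b = rng.normal(0.0, sigma_b, size=n_out)
    return W, b

def forward(x, layers, phi=np.tanh):
    """Propagate an input through the network via (1), returning each layer's pre-activations."""
    y, zs = x, []
    for W, b in layers:
        z = W @ y + b      # z_i^l = sum_j W_ij^l y_j^{l-1} + b_i^l
        zs.append(z)
        y = phi(z)         # y_j^l = phi(z_j^l)
    return zs

N, depth = 1000, 10
sigma_w, sigma_b = 1.5, 0.1    # illustrative values
layers = [init_layer(N, N, sigma_w, sigma_b) for _ in range(depth)]
zs = forward(rng.normal(size=N), layers)
print([round(float(np.var(z)), 3) for z in zs])   # empirical variance of the pre-activations, layer by layer
```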

Given this, what is the probability distribution for the inputs {z_i^l}? Treating the activations {y_j^{l-1}} as fixed, {z_i^l} is a linear combination of independent, normally-distributed variables, so we expect it to be Gaussian as well. Indeed, we can derive this rather easily by considering the characteristic function, which is a somewhat more fundamental way of characterizing probability distributions than the moment and cumulant generating functions we’ve discussed before. For a random variable {x\in\mathbb{R}}, the characteristic function is defined as

\displaystyle \phi_x(t)=\langle e^{itx}\rangle=\int_\mathbb{R}\!\mathrm{d} x\,e^{itx}p(x) \ \ \ \ \ (3)

where in the second equality we assume the probability density (i.e., distribution) function {p(x)} exists. Hence for a Gaussian random variable {x\sim\mathcal{N}(\mu,\sigma^2)}, we have

\displaystyle \phi_x(t)=\frac{1}{\sqrt{2\pi}\sigma}\int\!\mathrm{d} x\,e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2+itx} =e^{-\frac{1}{2}\sigma^2t^2+it\mu}~. \ \ \ \ \ (4)

A pertinent feature is that the characteristic function of a linear combination of independent random variables can be expressed as a product of the characteristic functions thereof:

\displaystyle \phi_{a_1x_1+\ldots+a_nx_n}(t)=\phi_{x_1}(a_1t)\ldots\phi_{x_n}(a_nt)~, \ \ \ \ \ (5)

where {\{x_i\}} are independent random variables and {\{a_i\}} are some constants. In the present case, we may view each {y_j^{l-1}} as a constant modifying the random variable {W_{ij}^l}, so that

\displaystyle \begin{aligned} \phi_{z_i^l}(t)&=\phi_{b_i^l}(t)\prod_j\phi_{W_{ij}^l}(y_j^{l-1}t) =e^{-\frac{1}{2}\sigma_b^2t^2}\prod_je^{-\frac{\sigma_w^2}{2N_{l-1}}(y_j^{l-1})^2t^2}\\ &=\exp\left[-\frac{1}{2}\left(\sigma_w^2\frac{1}{N_{l-1}}\sum_j(y_j^{l-1})^2+\sigma_b^2\right) t^2\right]~. \end{aligned} \ \ \ \ \ (6)

Comparison with (4) then implies that the {z_i^l} indeed follow a Gaussian,

\displaystyle z_i^l\sim\mathcal{N}(0,\sigma_z^2)~,\qquad\mathrm{with}\qquad \sigma_z^2=\sigma_w^2\frac{1}{N_{l-1}}\sum_j(y_j^{l-1})^2+\sigma_b^2~. \ \ \ \ \ (7)

This is the probability distribution that describes the pre-activations of the neurons in layer {l}. Note that since the neurons are indistinguishable (i.e., all identical), in the large-{N} limit {\sigma_z^2} becomes the variance of the distribution of inputs across the entire layer. The MFT approximation is then to simply replace the actual neurons with values drawn from this distribution, which becomes exact in the large-{N} limit (EDIT: there’s a subtlety here, see “MFT vs. CLT” after the references below). For consistency with [1] (and because tracking the {l}-dependence by writing {\sigma_{z^l}} is cumbersome), we’ll introduce the notation {q^l:=\sigma_z^2} to denote the variance of the pre-activations in layer {l}.
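As a quick numerical sanity check of (7) (just a sketch of my own, with arbitrarily chosen values, not anything from [1]): fix the previous layer’s activations, resample the weights and biases many times, and compare the empirical variance of a single pre-activation with {\sigma_z^2} above:

```python
import numpy as np

rng = np.random.default_rng(1)
N_prev, trials = 500, 20000
sigma_w, sigma_b = 1.2, 0.2                    # illustrative values

y_prev = np.tanh(rng.normal(size=N_prev))      # fixed activations of the previous layer

# Resample (W, b) many times and record the resulting pre-activation of a single neuron.
W = rng.normal(0.0, sigma_w / np.sqrt(N_prev), size=(trials, N_prev))
b = rng.normal(0.0, sigma_b, size=trials)
z = W @ y_prev + b

predicted = sigma_w**2 * np.mean(y_prev**2) + sigma_b**2   # eq. (7)
print(np.var(z), predicted)                    # should agree to within a percent or so
```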

Now, observe that in the large-{N} limit, the average over the activations of the previous layer becomes an integral:

\displaystyle \frac{1}{N_{l-1}}\sum_j(y_j^{l-1})^2 =\frac{1}{N_{l-1}}\sum_j\phi(z_j^{l-1})^2 \;\overset{N\rightarrow\infty}{\longrightarrow}\; \int\!\mathcal{D} z_j\,\phi(z_j^{l-1})^2 =\int\!\frac{\mathrm{d} z}{\sqrt{2\pi}}\,e^{-\frac{z^2}{2}}\,\phi\left(\!\sqrt{q^{l-1}}z\right)^2~, \ \ \ \ \ (12)

where we’ve used {\mathcal{D} z} to denote the standard Gaussian measure, and we’ve defined {z:= z_j^{l-1}/\sqrt{q^{l-1}}} so that {z\sim\mathcal{N}(0,1)}. That is, we can’t just write

\displaystyle \lim_{N\rightarrow\infty}\frac{1}{N}\sum_if(z_i)=\int\!\mathrm{d} z_i\,f(z_i)~, \ \ \ \ \ (13)

because this makes an assumption about the measure {\mathrm{d} z_i}: it implicitly weights all values of {z_i} equally, whereas we know that the {z_i} are in fact distributed according to a Gaussian. The average over neurons therefore becomes an integral against the appropriate probability measure (formally, a Lebesgue integral rather than the more familiar Riemann integral), in this case {{\mathcal{D} z=\tfrac{1}{\sqrt{2\pi}}e^{-z^2/2}\mathrm{d} z}}. (We could alternatively have written {{\mathcal{D} z_j=\tfrac{1}{\sqrt{2\pi q^{l-1}}}e^{-\tfrac{(z_j^{l-1})^2}{2q^{l-1}}}\,\mathrm{d} z_j^{l-1}}}, but it’s cleaner to simply rescale things). Substituting this back into (7), we thus obtain a recursion relation for the variance:

\displaystyle q^l=\sigma_w^2\int\!\mathcal{D} z\,\phi\left(\!\sqrt{q^{l-1}}z\right)^2+\sigma_b^2~, \ \ \ \ \ (14)

which is eqn. (3) of [1]. Very loosely speaking, this describes the spread of information at a single neuron, with smaller values implying a more tightly peaked distribution. We’ll return to the importance of this expression momentarily.
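Numerically, the Gaussian integral in (14) is easy to evaluate with Gauss–Hermite quadrature. Here’s an illustrative sketch of my own (not code from [1]) that computes one step of the recursion for {\phi=\tanh} and checks it against the finite-{N} average {\tfrac{1}{N}\sum_j\phi(z_j)^2} over a wide layer:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

nodes, weights = hermegauss(80)              # quadrature for the weight e^{-z^2/2}
weights = weights / np.sqrt(2 * np.pi)       # normalize to the standard Gaussian measure Dz

def next_q(q_prev, sigma_w, sigma_b, phi=np.tanh):
    """One step of the variance recursion (14)."""
    return sigma_w**2 * np.sum(weights * phi(np.sqrt(q_prev) * nodes)**2) + sigma_b**2

# Compare against the finite-N average (1/N) sum_j phi(z_j)^2 with z_j ~ N(0, q_prev).
rng = np.random.default_rng(2)
q_prev, sigma_w, sigma_b, N = 0.8, 1.5, 0.1, 100_000   # illustrative values
z = rng.normal(0.0, np.sqrt(q_prev), size=N)
empirical = sigma_w**2 * np.mean(np.tanh(z)**2) + sigma_b**2
print(next_q(q_prev, sigma_w, sigma_b), empirical)     # the two should agree well for large N
```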

As you might have guessed from all the song-and-dance about correlation functions in part 1 and above, a better probe of information propagation in the network is the correlator of two inputs (in this case, the correlation between different inputs in two identically prepared copies of the network). Let us thus add an additional Latin index {a,b,\ldots} to track particular inputs through the network, so that {z_{i,a}^l} is the value of the {i^\mathrm{th}} neuron in layer {l} in response to the data {\mathbf{x}_a=\{x_{i,a}\}}, {z_{i,b}^l} is its value in response to data {\mathbf{x}_b}, etc., where the data is fed into the network by setting {\mathbf{y}_a^0=\mathbf{x}_a} (here boldface letters denote the entire layer, treated as a vector in {\mathbb{R}^{N_l}}). We then consider the two-point correlator between these inputs at a single neuron:

\displaystyle \begin{aligned} \langle z_{i,a}^lz_{i,b}^l\rangle&= \left<\left(\sum_jW_{ij}^ly_{j,a}^{l-1}+b_i^l\right)\left(\sum_kW_{ik}^ly_{k,b}^{l-1}+b_i^l\right)\right>\\ &=\sigma_w^2\frac{1}{N_{l-1}}\sum_jy_{j,a}^{l-1}y_{j,b}^{l-1}+\sigma_b^2~, \end{aligned} \ \ \ \ \ (15)

where we have used the fact that {\langle W_{ij}^lW_{ik}^l\rangle=\delta_{jk}\sigma_w^2/N_{l-1}}, and that the weights and biases are independent with {\langle W_{ij}^l\rangle=0=\langle b_i^l\rangle}. Note that since {{\langle z_{i,a}^l\rangle\!=\!0}}, this also happens to be the covariance {{\mathrm{cov}(z_{i,a}^l,z_{i,b}^l)\!=\!\langle z_{i,a}^lz_{i,b}^l\rangle-\langle z_{i,a}^l\rangle\langle z_{i,b}^l\rangle}}, which we shall denote {q_{ab}^l} in accordance with the notation from [1] introduced above.
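Here too a quick Monte Carlo check is reassuring (again my own illustrative sketch, with arbitrary values): feeding two fixed sets of activations through many independent draws of the weights and biases reproduces (15):

```python
import numpy as np

rng = np.random.default_rng(3)
N_prev, trials = 500, 20000
sigma_w, sigma_b = 1.2, 0.2                      # illustrative values

# Fixed activations of the previous layer for two (arbitrarily correlated) inputs a and b.
base = rng.normal(size=N_prev)
y_a = np.tanh(base + 0.3 * rng.normal(size=N_prev))
y_b = np.tanh(base + 0.3 * rng.normal(size=N_prev))

W = rng.normal(0.0, sigma_w / np.sqrt(N_prev), size=(trials, N_prev))
b = rng.normal(0.0, sigma_b, size=trials)
z_a, z_b = W @ y_a + b, W @ y_b + b              # same weights and biases, two different inputs

predicted = sigma_w**2 * np.mean(y_a * y_b) + sigma_b**2   # eq. (15)
print(np.mean(z_a * z_b), predicted)             # should agree to within a couple of percent
```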

As in the single-input case above, we again take the continuum (i.e., large-{N}) limit, whereupon the sum becomes

\displaystyle \frac{1}{N_{l-1}}\sum_jy_{j,a}^{l-1}y_{j,b}^{l-1} \;\overset{N\rightarrow\infty}{\longrightarrow}\; \int\!\mathcal{D} z_a\mathcal{D} z_b\,\phi(z_a^{l-1})\phi(z_b^{l-1})~. \ \ \ \ \ (16)

Unlike the previous case however, we can’t just use independent standard Gaussian measures for {\mathcal{D} z_a} and {\mathcal{D} z_b}, since that would preclude any correlation in the original inputs {\mathbf{x}_a,~\mathbf{x}_b}. Rather, any non-zero correlation in the data will propagate through the network, so the appropriate measure is the bivariate normal distribution,

\displaystyle \mathcal{D} z_a\mathcal{D} z_b =\frac{\mathrm{d} z_a\mathrm{d} z_b}{2\pi\sqrt{q_a^{l-1}q_b^{l-1}\left(1-(\rho^{l-1})^2\right)}}\, \exp\!\left[-\frac{1}{2\left(1-(\rho^{l-1})^2\right)}\left(\frac{z_a^2}{q_a^{l-1}}+\frac{z_b^2}{q_b^{l-1}}-\frac{2\rho^{l-1}\,z_az_b}{\sqrt{q_a^{l-1}q_b^{l-1}}}\right)\right]~, \ \ \ \ \ (17)

where the (Pearson) correlation coefficient is defined as

\displaystyle \rho^l:=\frac{\mathrm{cov}(z_a^l,z_b^l)}{\sigma_{z_a}\sigma_{z_b}} =\frac{q_{ab}^l}{\sqrt{q_a^lq_b^l}}~, \ \ \ \ \ (18)

and we have denoted the variance in layer {l} corresponding to the input {\mathbf{x}_a} by {{q_a^l:=\sigma_{z_a^l}^2}}. If the inputs are uncorrelated (i.e., {{\langle \mathbf{x}_a\mathbf{x}_b\rangle=\langle\mathbf{x}_a\rangle\langle\mathbf{x}_b\rangle}}), then {{\mathrm{cov}(\mathbf{x}_a,\mathbf{x}_b)\!=\!0\implies\rho\!=\!0}}, and the measure reduces to a product of independent Gaussian measures, as expected. (I say “as expected” because the individual inputs {z_i^l} are drawn from a Gaussian. In general however, uncorrelatedness does not necessarily imply independence! The basic reason is that the correlation coefficient {\rho(x,y)} is only sensitive to linear relationships between random variables {x} and {y}; if these are related non-linearly, then it’s possible for them to be dependent ({{p(x,y)\neq p(x)p(y)}}) despite being uncorrelated ({{\rho(x,y)\!=\!0}}). See Wikipedia for a simple example of such a case.)

Following [1], the clever trick is then to define new integration variables {z_1,z_2} such that

\displaystyle z_a=\sqrt{q_a^{l-1}}\,z_1~,\qquad\qquad z_b=\sqrt{q_b^{l-1}}\left(\rho^{l-1} z_1+\sqrt{1-(\rho^{l-1})^2}\,z_2\right)~, \ \ \ \ \ (19)

(Do not confuse the standard normal integration variables {z_1,z_2} with the inputs {z_{i,a}^l}; the former are distinguished by the lack of any layer label {l}). With this substitution, the integral (16) becomes

\displaystyle \int\!\mathcal{D} z_a\mathcal{D} z_b\,\phi(z_a^{l-1})\phi(z_b^{l-1}) =\int\!\frac{\mathrm{d} z_1\mathrm{d} z_2}{2\pi}e^{-\frac{1}{2}\left( z_1^2+z_2^2\right)}\phi(z_a)\phi(z_b)~, \ \ \ \ \ (20)

and thus we obtain the recursion relation for the covariance, cf. eqn. (5) of [1]:

\displaystyle q_{ab}^l=\sigma_w^2\int\!\mathcal{D} z_1\mathcal{D} z_2\,\phi(z_a)\phi(z_b)+\sigma_b^2~, \ \ \ \ \ (21)

with {z_a,z_b} given by (19). Note that since {z_1,z_2} are Gaussian, their independence (i.e., the fact that {p(z_1,z_2)=p(z_1)p(z_2)}) follows from their uncorrelatedness, {\rho(z_1,z_2)\!=\!0}; the latter is easily shown by direct computation, by inverting (19) and recalling that {\langle z_a^2\rangle\!=\!q_a^{l-1}} and {\langle z_az_b\rangle\!=\!q_{ab}^{l-1}}.
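To see the change of variables in action, here’s how one might evaluate one step of the covariance map (21) numerically, using two-dimensional Gauss–Hermite quadrature for the product measure {\mathcal{D} z_1\mathcal{D} z_2} (an illustrative sketch of my own; the parameter values are arbitrary):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

x, w = hermegauss(60)
w = w / np.sqrt(2 * np.pi)                   # standard Gaussian measure Dz
Z1, Z2 = np.meshgrid(x, x, indexing="ij")    # 2D grid of quadrature nodes
W2 = np.outer(w, w)                          # product measure Dz1 Dz2

def next_qab(q_a, q_b, q_ab, sigma_w, sigma_b, phi=np.tanh):
    """One step of the covariance recursion (21), using the substitution (19)."""
    rho = q_ab / np.sqrt(q_a * q_b)
    z_a = np.sqrt(q_a) * Z1
    z_b = np.sqrt(q_b) * (rho * Z1 + np.sqrt(1.0 - rho**2) * Z2)
    return sigma_w**2 * np.sum(W2 * phi(z_a) * phi(z_b)) + sigma_b**2

print(next_qab(q_a=1.0, q_b=1.0, q_ab=0.5, sigma_w=1.5, sigma_b=0.1))   # illustrative values
```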

Now, the interesting thing about the recursion relations (14) and (21) is that they exhibit fixed points, whose values depend on {\sigma_w} and {\sigma_b}, and which one can determine graphically (i.e., numerically) for a given choice of non-linear activation {\phi(z)}. For example, the case {\phi(z)=\tanh(z)} is plotted in fig. 1 of [1] (shown below), demonstrating that the fixed point condition

\displaystyle q^*=\sigma_w^2\int\!\frac{\mathrm{d} z}{\sqrt{2\pi}}\,e^{-\frac{z^2}{2}}\tanh(\sqrt{q^*}z)^2+\sigma_b^2 \ \ \ \ \ (22)

is satisfied for a range of {(\sigma_w,\sigma_b)}. Therefore, when the parameters of the network (i.e., the distributions (2)) are tuned appropriately, the variance remains constant as signals propagate through the layers; in particular, a non-zero bias variance ensures {q^*>0}, so signals do not decay to zero. Furthermore, provided the map (14) is monotonically increasing and concave, the convergence to the fixed point occurs relatively rapidly as a function of depth (i.e., layer {l}). One can verify this for the case {\phi(z)=\tanh(z)} considered above: since {\tfrac{\mathrm{d}^2 q^{l}}{\mathrm{d}(q^{l-1})^2}\!<\!0} and the map is increasing, the dynamics ensures convergence to {q^*} for all initial values. To visualize this, consider the following figure, which I took from [1] ({\mathcal{V}} is their notation for the recursion relation (14)). In the left plot, I’ve added the red arrows illustrating the convergence to the fixed point for {\sigma_w=4.0}. For the particular starting point I chose, it takes about 3 iterations to converge, which is consistent with the plot on the right.

[Figure: the variance map {q^l=\mathcal{V}(q^{l-1})} for {\phi=\tanh} and its iterative convergence to the fixed point {q^*}, adapted from fig. 1 of [1]; red arrows added by me.]
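The cobweb iteration in the left plot is easy to reproduce in a few lines; the value {\sigma_b=0.3} below is just my guess at something comparable to the figure, and the qualitative behaviour is insensitive to it:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

x, w = hermegauss(80)
w = w / np.sqrt(2 * np.pi)                   # standard Gaussian measure Dz
V = lambda q, sw, sb: sw**2 * np.sum(w * np.tanh(np.sqrt(q) * x)**2) + sb**2   # the map (14)

q, sigma_w, sigma_b = 0.2, 4.0, 0.3          # sigma_b is an illustrative guess, not read off from [1]
for l in range(10):
    q = V(q, sigma_w, sigma_b)
    print(l + 1, q)                          # q^l settles onto q* within a handful of iterations
```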

Similarly, the swift convergence to {q^*} for the variance of individual neurons implies a fixed point for (21). Observe that when {q_a^{l-1}\!=\!q_b^{l-1}\!=\!q^*}, {\rho^{l-1}\!=\!q_{ab}^{l-1}/q^*}. Hence by dividing (21) by {q^*}, we have

\displaystyle \rho^l=\frac{\sigma_w^2}{q^*}\int\!\mathcal{D} z_1\mathcal{D} z_2\, \phi\left(\sqrt{q^*}z_1\right) \phi\left(\sqrt{q^*}\left[\rho^{l-1}z_1+\sqrt{1-(\rho^{l-1})^2}z_2\right]\right) +\frac{\sigma_b^2}{q^*}~. \ \ \ \ \ (23)

This equation always has a fixed point at {\rho^*\!=\!1}:

\displaystyle \rho^l=\frac{\sigma_w^2}{q^*}\int\!\mathcal{D} z_1\mathcal{D} z_2\, \phi\left(\sqrt{q^*}z_1\right)^2 +\frac{\sigma_b^2}{q^*} =\frac{q^*}{q^*}=1~, \ \ \ \ \ (24)

where {\int\!\mathcal{D} z_2\!=\!1}, and we have recognized the right-hand side of the single-neuron recursion (14) evaluated at its fixed point, {q_a^l\!=\!q_b^l\!=\!q^*}.

Let us now ask about the stability of this fixed point. As observed in [1], this is probed by the first derivative evaluated at {\rho^*}:

\displaystyle \chi_1:= \frac{\partial \rho^l}{\partial\rho^{l-1}}\bigg|_{\rho^{l-1}=1} =\;\sigma_w^2\int\!\mathcal{D} z\,\phi'\left(\sqrt{q^*}z\right)^2~, \ \ \ \ \ (25)

where {\phi'(u)=\tfrac{\partial\phi(u)}{\partial u}} (the second equality is not obvious; see the derivation at the end of this post if you’re curious). The reason is that for monotonic functions {\phi(z)}, {\chi_1<1} implies that the curve (23) must approach the unity line from above (i.e., it’s concave, or has a constant slope with a vertical offset at {\rho^{l-1}\!=\!0} set by the biases), and hence in this case {\rho^l} is driven towards the fixed point {\rho^*}. Conversely, if {\chi_1>1}, the curve must be approaching this point from below (i.e., it’s convex), and hence in this case {\rho^l} is driven away from unity. Depending on the strengths of the biases relative to the weights, the system will then converge to some new fixed point that approaches zero as {\sigma_b\rightarrow 0}. (It may help to sketch a few examples, like the arrows I superimposed on the left figure above, to convince yourself that this slope/convergence argument works).

The above implies the existence of an order-to-chaos phase transition in the {(\sigma_w,\sigma_b)} plane: {\chi_1>1} corresponds to the chaotic phase, in which the large random weights cause any correlation between inputs to shrink, while {\chi_1<1} corresponds to the ordered phase, in which the inputs become perfectly aligned and hence {\rho\rightarrow1}. At exactly {\chi_1\!=\!1}, the system lies at a critical point, and should therefore be characterized by some divergence(s) (recall that, in the limit {N\rightarrow\infty}, a phase transition is defined by the discontinuous change of some quantity; one often hears of an “{n^\mathrm{th}} order phase transition” if the discontinuity lies in the {n^\mathrm{th}} derivative of the free energy).
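A sketch of how one might map out this transition numerically for {\phi=\tanh} (my own illustration; [1,2] of course do this more carefully): iterate (14) to convergence to obtain {q^*}, then evaluate {\chi_1} via (25) and compare it to 1:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

x, w = hermegauss(80)
w = w / np.sqrt(2 * np.pi)                       # standard Gaussian measure Dz
phi = np.tanh
dphi = lambda u: 1.0 / np.cosh(u)**2             # derivative of tanh

def q_star(sigma_w, sigma_b, iters=200):
    """Fixed point of the variance recursion (14)."""
    q = 1.0
    for _ in range(iters):
        q = sigma_w**2 * np.sum(w * phi(np.sqrt(q) * x)**2) + sigma_b**2
    return q

def chi_1(sigma_w, sigma_b):
    """Stability of the rho* = 1 fixed point, eq. (25)."""
    q = q_star(sigma_w, sigma_b)
    return sigma_w**2 * np.sum(w * dphi(np.sqrt(q) * x)**2)

sigma_b = 0.05                                   # illustrative value
for sigma_w in np.linspace(0.5, 2.5, 9):
    c = chi_1(sigma_w, sigma_b)
    print(f"sigma_w = {sigma_w:.2f}   chi_1 = {c:.3f}   ->", "chaotic" if c > 1 else "ordered")
```

As {\sigma_b} increases, the value of {\sigma_w} at which {\chi_1} crosses 1 moves to larger values, tracing out the critical line in the {(\sigma_w,\sigma_b)} plane shown in [1,2].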

We saw in part 1 that at a critical point, the correlation length that governs the fall-off of the two-point correlator diverges. This is the key physics underlying the further development of this mean field framework in [2], who used it to show that random networks are trainable only in the vicinity of this phase transition. Roughly speaking, the intuition is that for such a network to be trainable, information must be able to propagate all the way through the network to establish a relationship between the input (data) and the output (i.e., the evaluation of the cost function) (actually, it needs to go both ways, so that one can backpropagate the gradients through as well). In general, correlations in the input data will exhibit some fall-off behaviour as a function of depth, which limits how far information can propagate through these networks. However, borrowing our physics terminology, we may say that a network at criticality exhibits fluctuations on all scales, so there’s no fall-off behaviour limiting the propagation of information with depth.

So, what is the correlation length in the present problem? In the previous post on mean field theory, we determined this by examining the fall-off of the connected 2-point correlator (i.e., the Green function), which in this case is just {\langle z_a^lz_b^l\rangle=q_{ab}^l}. To determine the fall-off behaviour, we expand near the critical point, and examine the difference {|\rho^l-\rho^*|} as {l\!\rightarrow\!\infty} (where recall {\rho^l=q_{ab}^l/q^*}; the limit simply ensures that {q^*,\rho^*} exist, though as shown above, we don’t have to wait that long in practice). This is done explicitly in [2], so I won’t work through all the details here; the result is that upon expanding {\rho^l=\rho^*+\epsilon^l}, one finds the recurrence relation

\displaystyle \epsilon^{l+1}=\epsilon^l\,\sigma_w^2\int\!\mathcal{D} z_1\mathcal{D} z_2\,\phi'(z_a)\phi'(z_b)~, \ \ \ \ \ (26)

up to terms of order {(\epsilon^l)^2}. This implies that, at least asymptotically, {\epsilon^l\sim e^{-l/\xi}}, which one can verify by substituting this into (26) and identifying

\displaystyle \xi^{-1}=-\ln\left[\sigma_w^2\int\!\mathcal{D} z_1\mathcal{D} z_2\,\phi'(z_a)\phi'(z_b)\right]~. \ \ \ \ \ (27)

Thus {\xi} is the correlation length that governs the propagation of information — i.e., the decay of the two-point correlator — through the network. And again, we know from our discussion in part 1 that the correlation length diverges at the critical point, which is indeed what happens here: denoting the value of the correlation length at the critical point {\rho^*\!=\!1} by {\xi_1}, observe that the argument of (27) reduces to (25), i.e.,

\displaystyle \xi_1^{-1}=-\ln\chi_1~, \ \ \ \ \ (28)

which goes to infinity precisely at the order-to-chaos phase transition, since the latter occurs at exactly {\chi_1\!=\!1}.
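To get a feel for the numbers: {\chi_1=0.9} gives {\xi_1=-1/\ln(0.9)\approx9.5} layers, while {\chi_1=0.99} already gives {\xi_1\approx100}. One can also check the exponential approach {\epsilon^l\sim e^{-l/\xi}} directly by iterating the correlation map; the sketch below (my own, with arbitrary parameter values in the ordered phase) compares the measured decay rate of {\epsilon^l=\rho^*-\rho^l} with {e^{-1/\xi_1}=\chi_1}:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

x, w = hermegauss(60)
w = w / np.sqrt(2 * np.pi)                   # standard Gaussian measure Dz
Z1, Z2 = np.meshgrid(x, x, indexing="ij")
W2 = np.outer(w, w)                          # product measure Dz1 Dz2
phi = np.tanh
dphi = lambda u: 1.0 / np.cosh(u)**2

sigma_w, sigma_b = 1.1, 0.2                  # illustrative values in the ordered phase (chi_1 < 1)

# Fixed point q* of (14).
q = 1.0
for _ in range(200):
    q = sigma_w**2 * np.sum(w * phi(np.sqrt(q) * x)**2) + sigma_b**2

def rho_map(rho):
    """The correlation map (23) at q_a = q_b = q*."""
    z_a = np.sqrt(q) * Z1
    z_b = np.sqrt(q) * (rho * Z1 + np.sqrt(1.0 - rho**2) * Z2)
    return (sigma_w**2 * np.sum(W2 * phi(z_a) * phi(z_b)) + sigma_b**2) / q

chi_1 = sigma_w**2 * np.sum(w * dphi(np.sqrt(q) * x)**2)   # eq. (25)

rho, eps = 0.5, []
for _ in range(30):
    rho = rho_map(rho)
    eps.append(1.0 - rho)                    # eps^l = rho* - rho^l, with rho* = 1 in the ordered phase
print("measured decay rate:", eps[-1] / eps[-2])
print("chi_1 =", chi_1, "   xi_1 =", -1.0 / np.log(chi_1))
```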

In practice of course, both the depth {l} and the number of neurons {N} are finite, which precludes a genuine discontinuity at the phase transition (since the partition function is then a finite sum of analytic functions). Nonetheless, provided {\xi\gg1}, we expect that the basic conclusion about trainability above still holds even though the correlation length remains finite. And indeed, the main result of [2] is that at the critical point, not only can information about the inputs propagate all the way through the network, but information about the gradients can also propagate backwards (that is, vanishing/exploding gradients occur only in the ordered/chaotic phases, respectively, while gradients remain stable at the phase transition), and consequently these random networks are trainable provided {l\lesssim\xi}.

Incidentally, you might worry about the validity of these MFT results, given that — as stressed in part 1 — it’s precisely at a critical point that MFT may break down, due to the relevance of higher-order terms that the approximation ignores. But in this case, we didn’t neglect anything: we saw in (7) that the distribution of inputs already follows a Gaussian, so the only “approximation” lies in ignoring any finite-size effects associated with backing away from the {N\rightarrow\infty} limit (EDIT: actually, this isn’t true! See “MFT vs. CLT” below).

The application of MFT to neural networks is actually a rather old idea, with the presence of an order-to-chaos transition known in some cases since at least the 1980s [3]. More generally, the idea that the critical point may offer unique benefits to computation has a long and eclectic history, dating back to an interesting 1990 paper [4] on cellular automata that coined the phrase “computation at the edge of chaos”. However, only in the past few years has this particular interdisciplinary ship really set sail, with a number of interesting papers demonstrating the power of applying physics to deep learning. The framework developed in [1,2] that I’ve discussed here has already been generalized to convolutional [5] and recurrent [6,7] neural networks, where one again finds that initializing the network near the phase transition results in significant improvements in training performance. It will be exciting to see what additional insights we can discover in the next few years.

References

  1. B. Poole, S. Lahiri, M. Raghu, J. Sohl-Dickstein, and S. Ganguli, “Exponential expressivity in deep neural networks through transient chaos,” arXiv:1606.05340 [stat.ML].
  2. S. S. Schoenholz, J. Gilmer, S. Ganguli, and J. Sohl-Dickstein, “Deep Information Propagation,” arXiv:1611.01232 [stat.ML].
  3. H. Sompolinsky, A. Crisanti, and H. J. Sommers, “Chaos in random neural networks,” Phys. Rev. Lett. 61 (Jul, 1988) 259–262.
  4. C. G. Langton, “Computation at the edge of chaos: Phase transitions and emergent computation,” Physica D: Nonlinear Phenomena 42 no. 1, (1990) 12 – 37.
  5. L. Xiao, Y. Bahri, J. Sohl-Dickstein, S. S. Schoenholz, and J. Pennington, “Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks,” arXiv:1806.05393 [stat.ML].
  6. M. Chen, J. Pennington, and S. S. Schoenholz, “Dynamical Isometry and a Mean Field Theory of RNNs: Gating Enables Signal Propagation in Recurrent Neural Networks,” arXiv:1806.05394 [stat.ML].
  7. D. Gilboa, B. Chang, M. Chen, G. Yang, S. S. Schoenholz, E. H. Chi, and J. Pennington, “Dynamical Isometry and a Mean Field Theory of LSTMs and GRUs,” arXiv:1901.08987 [cs.LG].

The second equality in (25), namely

\displaystyle \frac{\partial \rho^l}{\partial\rho^{l-1}}\bigg|_{\rho^{l-1}=1} =\;\sigma_w^2\int\!\mathcal{D} z\,\phi'\left(\sqrt{q^*}z\right)^2~, \ \ \ \ \ (29)

is not obvious, but follows from an exercise in basic calculus. Since all the functional dependence on {\rho^{l-1}} (denoted simply {\rho} here for brevity) is contained in {z_b} (cf. (19)), we have from (23)

\displaystyle \begin{aligned} \frac{\partial \rho^l}{\partial\rho} &=\frac{\sigma_w^2}{q^*}\int\!\mathcal{D} z_1\mathcal{D} z_2\,\phi(z_a)\frac{\partial}{\partial\rho}\phi(z_b) =\frac{\sigma_w^2}{q^*}\int\!\mathcal{D} z_1\mathcal{D} z_2\,\phi(z_a)\phi'(z_b)\frac{\partial z_b}{\partial\rho}\\ &=\frac{\sigma_w^2}{\sqrt{q^*}}\int\!\mathcal{D} z_1\mathcal{D} z_2\,\phi(z_a)\phi'(z_b)\left( z_1-\frac{\rho}{\sqrt{1-\rho^2}}z_2\right)~, \end{aligned} \ \ \ \ \ (30)

where the prime denotes the derivative of {\phi} with respect to its argument. We now use the hint given in the appendix of [1], namely that for the standard Gaussian measure, integration by parts allows us to write {{\int\!\mathcal{D} z\,zf(z)=\int\!\mathcal{D} z\,f'(z)}}. Applying this to the {z_1} and {z_2} integrals in turn (and remembering that {z_b} depends on both {z_1} and {z_2}, cf. (19)), we have the identities

\displaystyle \int\!\mathcal{D} z_1\,z_1\,\phi(z_a)\phi'(z_b) =\int\!\mathcal{D} z_1\,\frac{\partial}{\partial z_1}\!\left[\phi(z_a)\phi'(z_b)\right] =\sqrt{q^*}\int\!\mathcal{D} z_1\left[\phi'(z_a)\phi'(z_b)+\rho\,\phi(z_a)\phi''(z_b)\right] \ \ \ \ \ (31)

and

\displaystyle \int\!\mathcal{D} z_2\,z_2\,\phi(z_a)\phi'(z_b) =\phi(z_a)\int\!\mathcal{D} z_2\,\frac{\partial}{\partial z_2}\phi'(z_b) =\sqrt{q^*\left(1-\rho^2\right)}\int\!\mathcal{D} z_2\,\phi(z_a)\phi''(z_b)~, \ \ \ \ \ (32)

where we have used {\partial z_a/\partial z_1=\sqrt{q^*}}, {\partial z_b/\partial z_1=\rho\sqrt{q^*}}, {\partial z_b/\partial z_2=\sqrt{q^*(1-\rho^2)}}, and the fact that {\phi(z_a)} is independent of {z_2}. Substituting these results into (30), we find that the {\phi\,\phi''} terms cancel, leaving

\displaystyle \frac{\partial \rho^l}{\partial\rho} =\sigma_w^2\int\!\mathcal{D} z_1\mathcal{D} z_2\left[\phi'(z_a)\phi'(z_b)+\rho\,\phi(z_a)\phi''(z_b)\right] -\sigma_w^2\rho\int\!\mathcal{D} z_1\mathcal{D} z_2\,\phi(z_a)\phi''(z_b) =\sigma_w^2\int\!\mathcal{D} z_1\mathcal{D} z_2\,\phi'(z_a)\phi'(z_b)~. \ \ \ \ \ (33)

Note that this expression holds for any {\rho}, not just at the fixed point. Evaluating it at {\rho^*\!=\!1}, the substitution (19) gives

\displaystyle z_b\big|_{\rho=1}=\sqrt{q^*}\,z_1=z_a~, \ \ \ \ \ (34)

so we can integrate freely over {z_2} (i.e., {\int\!\mathcal{D} z_2\!=\!1}), and we recover the claimed result:

\displaystyle \frac{\partial \rho^l}{\partial\rho}\bigg|_{\rho\!=\!1} =\sigma_w^2\int\!\mathcal{D} z_1\,\phi'\left(\sqrt{q^*}z_1\right)^2=\chi_1~. \ \ \ \ \ (35)
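If you’d rather not trust the calculus, a finite-difference check is straightforward (an illustrative sketch of my own; the parameter values are arbitrary):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

x, w = hermegauss(60)
w = w / np.sqrt(2 * np.pi)              # standard Gaussian measure Dz
Z1, Z2 = np.meshgrid(x, x, indexing="ij")
W2 = np.outer(w, w)                     # product measure Dz1 Dz2
phi = np.tanh
dphi = lambda u: 1.0 / np.cosh(u)**2

sigma_w, sigma_b = 1.3, 0.1             # illustrative values

# Fixed point q* of the variance recursion (14).
q_star = 1.0
for _ in range(200):
    q_star = sigma_w**2 * np.sum(w * phi(np.sqrt(q_star) * x)**2) + sigma_b**2

def rho_map(rho):
    """The correlation map (23) at q_a = q_b = q*."""
    z_a = np.sqrt(q_star) * Z1
    z_b = np.sqrt(q_star) * (rho * Z1 + np.sqrt(1.0 - rho**2) * Z2)
    return (sigma_w**2 * np.sum(W2 * phi(z_a) * phi(z_b)) + sigma_b**2) / q_star

h = 1e-4
finite_diff = (rho_map(1.0) - rho_map(1.0 - h)) / h               # one-sided slope at rho = 1
chi_1 = sigma_w**2 * np.sum(w * dphi(np.sqrt(q_star) * x)**2)     # eq. (25)/(35)
print(finite_diff, chi_1)               # the two should agree closely
```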


MFT vs. CLT (edit 2021-12-09)

Throughout this post, I referred to the approximation employed in [1,2] as MFT, since they and related works in the machine learning literature advertise it under this name. It turns out however that this isn’t quite right: the results above actually follow directly from the central limit theorem (CLT), which — perhaps surprisingly — is not necessarily the same as MFT. I discovered this together with my excellent collaborator Kevin Grosvenor while working on the nascent NN-QFT correspondence. I might write a dedicated blog article about this at some point, but the basic idea is to develop the connections between physics and machine learning that I’ve alluded to in various places here in more detail, and use techniques from QFT in particular to understand deep neural networks.

In our latest paper, we made this correspondence concrete by explicitly constructing the QFT corresponding to a general class of networks including both recurrent and feedforward architectures. The MFT approximation corresponds to the leading saddle point of the theory, as I’ve explained before. But we can also compute perturbative corrections in {T/N}, the ratio of depth to width (this works for most deep networks of practical interest, which typically have {T/N\ll1}). Finite-width corrections appear at {\mathcal{O}(T/N)}, which physically correspond to effective interactions that arise upon marginalizing over hidden degrees of freedom; see my post on deep learning and the renormalization group for details.

However, we also discovered an {\mathcal{O}(1)} correction to the MFT (i.e., tree level) result, which is independent of network size, and therefore survives in the {N\rightarrow\infty} limit. Physically, this corresponds to fluctuations in the ensemble of random networks under study; some intuition for this is given in section 5 of the aforementioned paper. Since MFT ignores all fluctuations (not just those arising at finite width), it fails to capture this contribution. Results from the CLT might implicitly include this {\mathcal{O}(1)} effect, but this has yet to be worked out in detail.


