## Hawking pairs and ontic toys

The most widely known picture of Hawking radiation involves the creation of a particle-antiparticle pair at the horizon. At face value, this seems natural enough: we know from QFT that the vacuum is hardly vacuum at all, but instead a writhing sea of virtual particle excitations. The key word is “virtual”: in technical terms, these are off-shell contributions to Feynman diagrams that one requires to get the right answer, but whose ontology is altogether less clear. The reason the vacuum still looks “empty” despite all this seething (indeed, infinite!) activity is that these particles never go on-shell—that is, they never contribute to anything we can actually measure. The presence of a horizon changes this.

One technical note before we proceed: pair creation can occur whenever the incoming energy is at least equal to the total rest mass of the two particles being created. For example, an electron has a rest-mass energy of about 511 keV, so any interaction that pumps in at least 1022 keV (about 1 MeV) could create an electron-positron pair—both with equal, positive rest-mass energy. Note that this is an energy-conserving process. In virtual pair production, wherein we start with zero energy and fluctuate off-shell, one of the partners must therefore carry negative energy. This will play only a minor clarifying role in the following cartoon, but it’s worth bearing in mind if you’re concerned about the details.
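As a sanity check on the numbers, the threshold is just twice the rest-mass energy. A trivial sketch (the constant is the standard rounded value; the function name is mine):

```python
# Threshold for pair creation: the incoming energy must cover both rest masses.
M_E_C2_MEV = 0.511  # electron rest-mass energy, in MeV

def pair_threshold_mev(rest_energy_mev):
    """Minimum energy (MeV) to create a particle-antiparticle pair at rest."""
    return 2.0 * rest_energy_mev

print(pair_threshold_mev(M_E_C2_MEV))  # 1.022 MeV for an e+/e- pair
```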

Now, suppose a virtual pair fluctuates into existence at the horizon of a black hole, such that one partner is trapped inside the horizon while the other is formed outside. The exterior particle could still fall in and annihilate with its partner, but it’s also possible for tidal forces to increase their separation so much that both particles go on-shell. That is, the black hole prevents them from recombining and annihilating, with the result that two real particles are created. Since this is an energy-conserving process, and we started with vacuum, one particle carries positive energy and escapes to infinity, contributing to the Hawking radiation, while the other carries negative energy of equal magnitude and falls into the black hole, thereby decreasing its mass.

Why did we posit that the positive-energy particle escaped, rather than the one with negative energy? The answer has to do with the thermodynamic instability of black holes. A black hole’s specific heat is negative: absorbing energy causes it to grow and become colder, which makes it thermodynamically favourable to absorb still more energy from the ambient bath, and so on; a similar runaway occurs in the opposite direction. Therefore, if the black hole were to spontaneously emit negative energy (equivalent to absorbing some positive mass), its temperature would decrease, causing the emission of ever-more negative energy. With nothing to stop it, it would eventually radiate a magnitude of energy far greater than that which it originally possessed—a violation of energy conservation on the grandest scales. The situation is neatly resolved by allowing the outgoing partner mode to carry only positive energy when it goes on-shell; the energy it carries away from the black hole is compensated for by the negative-energy mode that fell in. The process has a natural limit when all the mass of the black hole has been carried away, whereupon the hole has completely evaporated (modulo certain quantum-gravitational concerns at the end-point, which are beyond our current scope). Thus the book-keeping works out naturally, and we have a convenient mental image to help our primitive minds grasp such an esoteric concept.

Hawking radiation à la pair production is what’s known in physics as a “toy model”, where the adjective emphasizes the fact that it isn’t intended to have ontic status, but merely to elucidate certain aspects of a problem in a more controlled manner. This is immensely useful in a large number of fields, where the full theory may be unsolvable with current techniques, but where certain simplified — and explicitly solvable — models that capture certain core features can still be used to gain a great deal of insight. The catch lies in bearing in mind which features of the model reflect those in the underlying (real, physical) theory, and which are purely epistemic.

The pre-Copernican model of the solar system is a good example. The geocentric model of nested crystalline spheres was purely epistemic: it sufficed to make predictions of eclipses and the like, and indeed its utility in this regard was one of the reasons it was so hard to overthrow (sure, Mars was a bit tricky to get right, but what’s a few more epicycles?). But it had no ontic value: for all its predictive power, it bore no resemblance to reality. Its fundamental explanations were ontologically wrong.

A toy model denotes something which, from the start, we intend to be purely epistemic, however accurate its reflection of certain physical features. The purpose of science — the growth of knowledge — is in ever-amending our imperfect models to agree more closely with reality—in maximizing the ratio of ontic to epistemic, if you will. There are no ontic toys.

Pair production, as a model of Hawking radiation, is precisely such a purely epistemic model. In Susskind’s words, it’s merely “a cartoon Hawking invented to explain his calculation to children.” The actual calculation is performed in momentum space, and in Hawking’s original work relies on a Bogoliubov transformation between ingoing and outgoing modes in the presence of collapsing matter. Although one begins in vacuum (plus the mass that creates the black hole), the final state does not correspond to the initial state due to the large blueshift caused by the collapsing body. Thus in computing the expectation value of the number operator of outgoing modes, one finds (given certain assumptions, such as adiabaticity) it to have a thermal spectrum with temperature ${T=\kappa(2\pi)^{-1}}$.

This thermal radiation must correspond with the emission of physical particles, hence the interpretation of Hawking “pairs” above. In particular, an observer at infinity decomposes the scalar field into ingoing and outgoing modes. Information about the former is lost behind the horizon, which leads to a thermal spectrum precisely as in the case of a Rindler observer. An infalling observer, in contrast, would make no such discontinuous decomposition—her modes are continuous when propagated back across the horizon. They suffer a blueshift relative to her position, but are still purely positive frequency; they do not lead to a thermal spectrum, and she observes no particle creation.

A great deal of the literature and debate on black holes, particularly in the context of firewalls, relies on the notion — closely associated with the analysis above — of pairwise entangled modes. Note the importance of distinguishing “modes” and “particles” here. The former is perfectly fine, and indeed other arguments — such as the requirement that the vacuum remain smooth across Rindler horizons — demand that the modes be pairwise entangled. But the particles are not, hence the cartoonish nature of interpreting Hawking pairs in this manner. The reason is that the analysis above is performed in free field theory, in which an ${n}$-particle state

$\displaystyle |p_1,\ldots p_n\rangle=a^\dagger(p_1)\ldots a^\dagger (p_n)|0\rangle \ \ \ \ \ (1)$

is sharply localized in momentum space, but completely delocalized in position space. One builds localized particle states by constructing wavepackets, which are usually Gaussian integrals over momentum space. The idea of particles as local excitations of the field is therefore technically rather inaccurate.
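To make the point concrete, here is a small numerical sketch (arbitrary units, ${\hbar=1}$, illustrative numbers only): a Gaussian superposition of momentum eigenstates yields a state localized in position, with width inversely proportional to the momentum-space width.

```python
import numpy as np

# A "particle" localized in position is a wavepacket: a Gaussian-weighted
# superposition of momentum eigenstates e^{ipx} (units with hbar = 1).
p = np.linspace(-15.0, 25.0, 2001)   # momentum grid around the packet centre
p0, sigma = 5.0, 1.0                 # centre and width in momentum space
amp = np.exp(-(p - p0) ** 2 / (2 * sigma**2))

x = np.linspace(-10.0, 10.0, 401)
dp = p[1] - p[0]
# psi(x) = integral dp amp(p) e^{ipx}, done as a Riemann sum
psi = (amp[None, :] * np.exp(1j * p[None, :] * x[:, None])).sum(axis=1) * dp

prob = np.abs(psi) ** 2
dx = x[1] - x[0]
prob /= prob.sum() * dx              # normalize |psi|^2

mean_x = (x * prob).sum() * dx
std_x = np.sqrt(((x - mean_x) ** 2 * prob).sum() * dx)
print(std_x)  # ~1/(sigma*sqrt(2)): narrow in momentum means broad in position
```

Sharpening the packet in position space (larger `sigma`) spreads it out in momentum space, and vice versa: the familiar uncertainty trade-off, which is exactly why treating a single sharp-momentum mode as a pointlike excitation at the horizon is misleading.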

However, while from a field-theoretic perspective, treating the Hawking radiation in terms of pairwise entangled modes appears entirely kosher, there are other reasons to believe that the notion of eternally persisting (at least up to the singularity), pairwise entangled modes should be modified by interactions, or else break down in some other (perhaps highly non-local) way. Perhaps the simplest is to realize that the Hawking modes have wavelength ${\sim M}$, comparable to the Schwarzschild radius itself (the Hawking temperature scales as ${M^{-1}}$). Individual modes simply can’t be localized within a Schwarzschild radius of the horizon (and it’s rather difficult to imagine how the interior mode can fit at all). Of course, as Freivogel has pointed out, it is possible to localize wave packets within the zone, and even to entangle them across Rindler horizons, but the implications for Hawking radiation in this case are less immediately clear (the radiation does not, as far as we know, come out in conveniently localized Gaussian packets). However, there is a deeper, non-model-specific reason to suspect that the pair picture breaks down, which goes by the name of “scrambling.”

In basic terms, scrambling refers to the chaotic loss of information by a complex system. The crucial modifier is “chaotic”. There is no information loss in any fundamental sense. There’s no loss of unitarity when a butterfly flaps its wings to create a hurricane, but tracking backwards to find the initial perturbation is practically impossible, simply because the system is chaotic. More generally, consider a complex chaotic system with ${N}$ degrees of freedom. One can compute the reduced density matrix for any subsystem with ${n<N/2}$ degrees of freedom, which will approach thermal equilibrium as the system thermalizes. This is just the statement that entropy approaches its maximum value. We say that the total system has “scrambled” when any subsystem with ${n<N/2}$ has maximum entanglement entropy. The reason for the terminology is that at this point, no information is recoverable from less than half the total degrees of freedom.

Note that this is subtly but crucially different from saying that the total system has thermalized. The latter implies a complete loss of correlations, i.e. an exactly Planckian spectrum. Scrambling merely implies that any information in the initial pure state is delocalized over at least half the system, but is in principle still recoverable. Thermalization implies a loss of unitarity; scrambling does not.

Why half? The reason goes back to the work of Page, who showed that any subsystem with ${n<N/2}$ will look approximately thermal. In other words, for generic systems (insert details about canonical ensembles and whatnot here), any less-than-half portion of the system contains no information. Scrambling is merely the state at which this becomes exactly (rather than approximately) true.
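Page’s statement is easy to check numerically: for a Haar-random pure state, the entanglement entropy of a less-than-half subsystem sits extremely close to its maximum. A sketch for a modest number of qubits (`subsystem_entropy` is just an illustrative name):

```python
import numpy as np

rng = np.random.default_rng(42)

def subsystem_entropy(n_total, n_sub):
    """Entanglement entropy (in bits) of an n_sub-qubit subsystem of a
    Haar-random pure state on n_total qubits."""
    d_a, d_b = 2**n_sub, 2**(n_total - n_sub)
    # A normalized random Gaussian vector is Haar-distributed; reshape it
    # as a d_a x d_b matrix to read off the Schmidt spectrum via the SVD.
    psi = rng.normal(size=(d_a, d_b)) + 1j * rng.normal(size=(d_a, d_b))
    psi /= np.linalg.norm(psi)
    lam = np.linalg.svd(psi, compute_uv=False) ** 2  # reduced density eigenvalues
    lam = lam[lam > 1e-15]
    return float(-(lam * np.log2(lam)).sum())

# A 4-qubit subsystem of a 12-qubit random state: S lands within a couple
# of percent of its 4-bit ceiling, i.e. the subsystem looks almost
# exactly thermal, carrying essentially no information.
print(subsystem_entropy(12, 4))
```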

Suppose you make a small perturbation to a scrambled system by adding a single degree of freedom. If you try to measure it — to recover the information — immediately afterwards, you only need to measure a single degree of freedom. But if you wait a short time, the information begins to diffuse, until soon the system has returned to a scrambled state, and recovering the information about that initial one-bit perturbation will require a very non-local measurement. The time you have to wait before information becomes scrambled is called the scrambling time, denoted ${t_*}$, and depends on the system under study. But the key point for our discussion is that black holes are the fastest scramblers in the universe, with ${t_*\sim\beta\ln S}$ (where ${\beta}$ is the inverse Hawking temperature, and the entropy ${S\sim N}$). In such a system, every degree of freedom is directly coupled to every other, so that information diffuses maximally rapidly. (Several papers by Susskind and collaborators discuss this in more detail).

For a black hole, ${S\sim M^2\implies t_*\sim M\log M}$. This is the amount of time it takes for the infalling partner mode to become entangled with the entire black hole. This is fast, about ${0.0004}$ seconds for a solar mass black hole. Clearly, it makes no sense to speak of pairwise entangled particles for more than an instant, let alone over the lifetime of the black hole, against which the current age of the universe is nothing. Therefore the outgoing Hawking mode must be entangled, not with its partner, but with the entire black hole.
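Dropping all order-one factors, that number can be recovered from ${t_*\sim M\ln M}$ in Planck units: the light-crossing time ${GM/c^3}$ times ${\ln(M/M_{\rm Pl})}$. A rough sketch only, with constants rounded and the ${2\pi}$’s deliberately ignored:

```python
import math

# Order-of-magnitude scrambling time: t_* ~ M ln M in Planck units, i.e.
# (G M / c^3) * ln(M / M_Planck) in seconds. All O(1) factors dropped.
G = 6.6743e-11       # m^3 kg^-1 s^-2
C = 2.9979e8         # m / s
M_PLANCK = 2.176e-8  # kg
M_SUN = 1.989e30     # kg

def scrambling_time(mass_kg):
    """Rough scrambling time (seconds) for a black hole of the given mass."""
    return (G * mass_kg / C**3) * math.log(mass_kg / M_PLANCK)

print(scrambling_time(M_SUN))  # a few times 1e-4 s for a solar mass
```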

Except that since we’re working in free field theory, this can’t happen. The modes are pairwise entangled across the horizon, and remain so as they propagate blithely out to infinity (or into the singularity). No one knows exactly how black holes scramble information, but it doesn’t appear consistent with this picture.

A key point in this debate is the question of Hilbert space factorization (which suffers its own deep troubles). A 2011 paper by Mathur and Plumberg provides a prime example. For the black hole, the authors explicitly assume a Hilbert space factorization of the form

$\displaystyle \mathcal{H}=\mathcal{H}_M\otimes\mathcal{H}_P~, \ \ \ \ \ (2)$

where ${\mathcal{H}_M}$ is the Hilbert space of the initial (pure state) matter that formed the hole (equivalently, the hole before any Hawking pairs are emitted), and ${\mathcal{H}_P}$ is the Hilbert space of created pairs. The states in this latter space are of the form

$\displaystyle |\Psi\rangle=\frac{1}{2^{n/2}}\prod_{i=1}^n\left(|0\rangle_{c_i}|0\rangle_{b_i}+|1\rangle_{c_i}|1\rangle_{b_i}\right)~, \ \ \ \ \ (3)$

where the product is a tensor product over created Bell pairs of entangled interior (${c}$) and exterior (${b}$) modes. After ${n}$ pairs are created, the entanglement between the exterior modes ${b_i}$ and the interior, which consists of ${M}$ and ${c}$, is

$\displaystyle S=n\ln 2~, \ \ \ \ \ (4)$

which grows linearly with the number of emitted modes, in accordance with the (wrong) curve on the Page diagram, and the concomitant information paradox.
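The entropy in Eq. (4) is easy to verify directly: each Bell pair of the form in Eq. (3) contributes exactly ${\ln 2}$ when the interior partner is traced out. A small numerical check:

```python
import numpy as np

# One created pair is the Bell state (|00> + |11>)/sqrt(2) between an
# interior mode c and an exterior mode b (basis ordering |c b>: 00, 01, 10, 11).
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(bell, bell).reshape(2, 2, 2, 2)   # indices (c, b, c', b')
rho_b = np.trace(rho, axis1=0, axis2=2)          # partial trace over the interior mode

lam = np.linalg.eigvalsh(rho_b)                  # eigenvalues of the exterior state
s_pair = float(-(lam * np.log(lam)).sum())       # von Neumann entropy = ln 2

# n independent pairs simply add: S = n ln 2, the linearly growing curve.
n = 10
print(s_pair, n * s_pair)
```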

The problem with this model is that, even if one assumes such a fictitious factorization of the Hilbert space, due to scrambling one expects the total Hilbert space to be something like

$\displaystyle \mathcal{H}=\mathcal{H}_{\tilde M}\otimes\mathcal{H}_E~, \ \ \ \ \ (5)$

with ${\mathcal{H}_{\tilde M}}$ subsuming the interior partner, while ${\mathcal{H}_E}$ contains the exterior mode. States in this total Hilbert space have the form

$\displaystyle |\Psi\rangle=\frac{1}{\sqrt{2}}\left(|M0\rangle|0\rangle_{b_i}+|M1\rangle|1\rangle_{b_i}\right) \ \ \ \ \ (6)$

where ${|M0\rangle,|M1\rangle\in\mathcal{H}_{\tilde M}}$ represent the state of the black hole where the partner mode is either 0 or 1, respectively. Crucially, black hole states in this factorization contain one less bit than before the photon was emitted (since one bit moved from ${\mathcal{H}_{\tilde M}}$ to ${\mathcal{H}_E}$). Hence at both the start and end of evaporation, the entropy will be zero, since all the bits are in the same place (in or out of the hole, respectively). Thus one obtains the correct Page curve, which maxes out at half the lifetime of the black hole and then decreases again to zero. Mathur and Plumberg actually obtain this behavior with a model of a burning piece of paper, where one has (e.g., kinetic) interactions between molecules to modify the entanglement structure appropriately. Intuitively, scrambling must do something similar, but free field theory seems to prohibit it.
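The corrected book-keeping can be caricatured in a few lines (a cartoon of my own devising, not Mathur and Plumberg’s model): treat the hole as ${N}$ bits, move one bit to the radiation per emission, and assume scrambling keeps the global state pure and maximally mixed on the smaller side. The radiation entropy then traces out the tent-shaped Page curve rather than the monotone growth of Eq. (4).

```python
import numpy as np

# Toy Page curve: N "bits" start inside the hole; each emission moves one
# bit outside. With a pure global state and maximal scrambling, the
# radiation entropy after k emissions is bounded by the smaller subsystem:
#     S(k) = min(k, N - k) * ln 2
N = 100
k = np.arange(N + 1)
S = np.minimum(k, N - k) * np.log(2.0)

print(S[0], S[N // 2], S[N])  # zero at the start, maximal half-way, zero at the end
```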

Of course, there are many more subtle and sophisticated arguments for the existence of firewalls, particularly in AdS/CFT, and it’s not obvious that a modification of the entanglement structure via scrambling or the like will be sufficient to resolve the paradox. However, the pair-production picture of Hawking radiation is notorious for causing more problems than it solves. One must clearly bear in mind the implicit assumptions beneath any physical model, and beware any theory that takes its ontic toys too seriously.
