A brief history of firewalls

Black hole thermodynamics

In 1973, Jacob Bekenstein observed that black holes must be endowed with an entropy in order to preserve the second law of thermodynamics; otherwise, one could decrease the entropy of the universe by simply throwing subsystems with high entropy (e.g., a hot cup of coffee, this blog post) into a black hole. At face value, this is an intuitive proposal: since the information about the degrees of freedom that comprise the hypothetical subsystem would then be hidden behind the event horizon, it makes sense to count them among the microstates of the black hole.

The unintuitive twist (the first of many!) comes from the realization that this naïve bookkeeping is not at all how black holes operate. The entropy of familiar systems scales with the volume thereof, {S\!\sim\!V}, which is consistent with simply counting the obvious (particulate) degrees of freedom in the examples above. Black hole entropy, in stark contrast, scales with the area of the event horizon, {S\!\sim\!A}. Bekenstein’s original motivation for this proposal hinged largely on Hawking’s 1971 result that the surface area of a black hole cannot decrease in any classical process (the so-called “area theorem”). This led Bekenstein to propose an analogy between black holes and statistical thermodynamics, which has since been enshrined in the laws of black hole thermodynamics for stationary black holes:

  • Zeroth Law: The surface gravity, {\kappa}, on the horizon is constant. This implies that surface gravity is analogous to temperature.
  • First Law: For a stationary Kerr-Newman black hole, the change in energy under small perturbations is given by

    \displaystyle \mathrm{d} E=\frac{\kappa}{8\pi}\mathrm{d} A+\Omega \mathrm{d} J+\Phi\mathrm{d} Q~. \ \ \ \ \ (1)

    This is the statement of energy conservation; the first term on the r.h.s. is equal to {T\mathrm{d} S}, with {T} and {S} as identified below. (A quick symbolic check for the Schwarzschild case appears just after this list.)

  • Second Law: Assuming the weak energy condition holds, the horizon area is non-decreasing,

    \displaystyle \frac{\mathrm{d} A}{\mathrm{d} t}\geq0~. \ \ \ \ \ (2)

    This is the aforementioned area theorem, and corresponds (under the instigating observation of Bekenstein above) to the statement that the entropy never decreases.

  • Third Law: It is not possible to form a black hole with vanishing surface gravity,

    \displaystyle \kappa>0~. \ \ \ \ \ (3)

    The third law of ordinary, statistical thermodynamics is essentially the statement that a system at absolute zero must be in the state with minimum possible energy. In the usual example of a perfect crystal, this is assumed to be comprised of a single eigenstate, hence the entropy vanishes. The corresponding example here is an extremal black hole, which has {\kappa=0}.
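
Here is that check: a minimal sympy sketch, in units {G=c=\hbar=1}, using the Schwarzschild values {E=M}, {A=16\pi M^2}, {\kappa=1/4M}, and {J=Q=0}:

    import sympy as sp

    # Schwarzschild data in natural units (G = c = hbar = 1)
    M = sp.symbols('M', positive=True)
    E = M                      # black hole mass/energy
    A = 16 * sp.pi * M**2      # horizon area, A = 4 pi (2M)^2
    kappa = 1 / (4 * M)        # surface gravity

    # First law with J = Q = 0: dE = (kappa / 8 pi) dA
    lhs = sp.diff(E, M)
    rhs = kappa / (8 * sp.pi) * sp.diff(A, M)
    assert sp.simplify(lhs - rhs) == 0   # both sides equal dE/dM = 1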

 

However, despite the apparent necessity of ascribing to black holes an entropy proportional to {A}, thus far black hole thermodynamics is little more than an analogy: classically, black holes do not radiate (hence the name), and therefore have zero temperature and consequently zero thermodynamic entropy. Indeed, Bekenstein’s original proposal explicitly views the entropy in an information-theoretic — as opposed to thermodynamic — sense, i.e., as the Shannon entropy measuring the inaccessibility of the internal microstates of a system. General relativity ensures that these degrees of freedom are forever isolated from the external universe, hence an external observer can never extract information, and thus the entropy of the black hole must be non-decreasing. It is worth emphasizing however that, at least at the classical level, this entropy is properly regarded as referring to the equivalence class of black holes with the same mass, charge, and angular momentum, rather than to the microstates of any single black hole.

The situation changed the following year, when Hawking showed that, quantum mechanically, black holes do radiate, with temperature

\displaystyle T=\frac{\kappa}{2\pi}=\frac{1}{8\pi M}~, \ \ \ \ \ (4)

and entropy

\displaystyle S=\frac{A}{4\ell_P^2}~, \ \ \ \ \ (5)

where we have explicitly included the Planck length, {\ell_P=\sqrt{\hbar G/c^3}}, in the latter formula lest the reader be disturbed by the mismatch in dimensions between {S} and {A}. The existence of Hawking radiation implies that black holes can evaporate, and thus their surface area {A} can in fact decrease. (In other words, the aforementioned area theorem was a purely classical statement: quantum mechanical effects render the weak energy condition — a key assumption of the theorem — invalid.) This requires a modification of the second law, to the effect that the total entropy of the black hole (still identified with its horizon area) plus the entropy of the Hawking radiation is non-decreasing. This is referred to as the generalized second law.
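
To get a feel for the numbers, here is a minimal Python sketch (using scipy’s physical constants; the choice of one solar mass is purely illustrative) that restores units and evaluates (4) and (5):

    import numpy as np
    from scipy.constants import hbar, c, G, k as k_B

    M_sun = 1.989e30   # solar mass in kg

    def hawking_temperature(M):
        """Eq. (4) with units restored: T = hbar c^3 / (8 pi G M k_B)."""
        return hbar * c**3 / (8 * np.pi * G * M * k_B)

    def bh_entropy(M):
        """Eq. (5): S = k_B A / (4 l_P^2), with horizon area A = 16 pi (G M / c^2)^2."""
        A = 16 * np.pi * (G * M / c**2) ** 2
        l_P2 = hbar * G / c**3   # Planck length squared
        return k_B * A / (4 * l_P2)

    print(hawking_temperature(M_sun))   # ~6e-8 K: far colder than the CMB
    print(bh_entropy(M_sun) / k_B)      # ~1e77 in units of k_B

Note that a solar-mass black hole is thus colder than the cosmic microwave background, and would presently absorb more than it emits.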

With Hawking’s discovery that black holes are not completely black after all, black hole thermodynamics went from epistemic to ontic in one fell swoop. The precise nature of the Hawking radiation itself, however, remains muddled to this day.

The vast interpretational quagmire surrounding Hawking radiation is due in no small part to the fact that there are a multitude of seemingly distinct derivations thereof. Hawking’s original 1975 calculation considers a black hole that forms from collapse. The mode expansions of a scalar field at past and future null infinity differ, on account of the difference in vacuum state—namely, the Minkowski and Schwarzschild vacua, respectively. One can express the latter in terms of the former by means of a Bogoliubov transformation, which results in a thermal expectation value for the outgoing modes. (More technically, the initial Minkowski vacuum {|0_M\rangle} corresponds to the Kruskal or Hartle-Hawking vacuum {|0_K\rangle}, while the final Schwarzschild vacuum {|0_S\rangle} is analogous to the Rindler vacuum {|0_R\rangle}. While the Kruskal modes are defined on the entire manifold, a Rindler observer, who has access to only the exterior spacetime, will perceive a thermal state corresponding to tracing out the degrees of freedom behind the horizon. This is the mechanism that underlies the Unruh effect.)
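
Schematically, the out-modes {b_\omega} are related to the in-modes {a_\omega} by {b_\omega=\alpha_\omega a_\omega+\beta_\omega^* a_\omega^\dagger}, and Hawking’s calculation yields {|\alpha_\omega|=e^{\pi\omega/\kappa}|\beta_\omega|}. Combined with the normalization {|\alpha_\omega|^2-|\beta_\omega|^2=1}, this fixes the occupation number of the out-modes in the in-vacuum,

\displaystyle \langle 0_M|b_\omega^\dagger b_\omega|0_M\rangle=|\beta_\omega|^2=\frac{1}{e^{2\pi\omega/\kappa}-1}~,

i.e., a Planck spectrum at precisely the temperature (4).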

However, the collapsing geometry is in fact entirely incidental to the radiation, as Hawking himself observed. Rather, it is the presence of the event horizon that is the key feature. Indeed, it is straightforward to show that an accelerating observer in Minkowski space observes a thermal spectrum associated with the Rindler horizon, which well-approximates the near-horizon region of a large Schwarzschild black hole.

The centrality of horizons in this context is elegantly demonstrated by the 1977 paper by Gibbons and Hawking, in which they compute the entropy of a black hole from what is essentially a purely geometrical argument. (In contrast to the usual jargon, here I mean “geometry” as distinct from “classical gravity”, since the presence of {\hbar} in the path integral technically places us beyond the domain of the latter). The basic idea is to compute the path integral for the black hole by Wick rotating to Euclidean signature, in which the geometry pinches off smoothly at the horizon. This corresponds to the fixed point of the {U(1)} symmetry, which we obtain by periodically identifying Euclidean time to avoid a conical deficit. The contribution from the fixed point dominates the path integral {Z}; and since {Z} is also the partition function, a simple thermodynamic argument allows one to derive an expression for the entropy in terms of the leading saddle-point, which yields precisely the above, well-known result (5).
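
To flesh out the thermodynamic step (in units {G=c=\hbar=1}): smoothness of the Euclidean cigar fixes the periodicity {\beta=8\pi M}, and the on-shell action of the Schwarzschild saddle — including the Gibbons-Hawking-York boundary term, with the flat-space contribution subtracted — is {I_E=\beta^2/16\pi=4\pi M^2}. Identifying {\ln Z\approx-I_E}, the standard thermodynamic relations give

\displaystyle E=-\partial_\beta\ln Z=\frac{\beta}{8\pi}=M~,\qquad S=\beta E+\ln Z=\frac{\beta^2}{16\pi}=4\pi M^2~,

which is indeed {A/4} for {A=16\pi M^2}, i.e., the result (5).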

The information paradox

The fact that black holes radiate has shattering implications, which Hawking was swift to point out in his subsequent work. Suppose that we form a black hole by collapsing some matter distribution in an initially pure state. After the black hole has completely evaporated, we are left with radiation in a thermal state, which is by definition mixed. But the transformation from a pure state to a mixed state violates unitarity, a fundamental principle of quantum mechanics necessary to ensure conservation of probabilities. In other words, non-unitary evolution would imply that information is lost in the process, which quantum mechanics forbids. Thus it appears that the very quantum mechanical laws which give rise to Hawking radiation are violated as a result! This is the substance of the black hole information paradox. As we shall see, it provides perhaps the first hints that our conception of locality may require modification.

It is illuminating to contrast this situation with the apparently pure-to-thermal evolution of normal matter upon incineration, say a burning lump of coal. Supposing this to be in an initially pure state, the final state again involves a thermal bath of radiation, with the apparent loss of information that implies. But we do not concern ourselves with unitarity-violating barbecues. The reason is that subtle correlations between early and late radiation conspire to preserve the purity of the total system. It is only in coarse-graining (or tracing out whatever fraction of coal remains at a given stage) that we perceive a thermal state. It may be impossible to actually recover this information in practice, but in principle, the laws of quantum mechanics survive intact—that is, a sufficiently powerful computer could do it.

The essential difference between the coal and the black hole is that the former has no horizon. Early “Hawking” modes are entangled with modes inside the coal, which can — via their interactions with other interior modes — imprint this information on the late radiation. In contrast, the presence of a horizon imposes a very specific entanglement structure on the modes that prevents those behind the horizon from transmitting the information in any obvious manner. This follows from the fact that the Minkowski vacuum is in some sense an infinitely entangled state: the correlation function between local field excitations at spacelike-separated points {A} and {B} will diverge as {A\rightarrow B}. We can make this more precise by considering the Rindler decomposition of the vacuum,

\displaystyle |0\rangle=\frac{1}{\sqrt{Z}}\sum_ie^{-\pi\omega_i}|i\rangle_L|i'\rangle_R~, \ \ \ \ \ (6)

where {Z} is the Euclidean path integral with no insertions, and the relation between the basis vectors for the left (L) and right (R) wedges is {|i\rangle_L=\Theta^\dagger|i'\rangle_R}, where {\Theta} is the CPT operator. (This is an antiunitary operator that exists in all QFTs, whose action on a scalar field {\Phi} is {\Theta^\dagger\Phi(t,x,\mathbf{y})\Theta=\Phi^\dagger(-t,-x,\mathbf{y})}). Now consider decomposing a free scalar field into modes of definite boost energy {\omega} ({-\omega}) in the right (left) Rindler wedge. Then the vacuum state can be equivalently written as a product state over all modes:

\displaystyle |0\rangle=\bigotimes_{\omega,k}\sqrt{1-e^{-2\pi\omega}}\sum_ne^{-\pi\omega n}|n\rangle_{L\omega(-k)}|n\rangle_{R\omega k}~. \ \ \ \ \ (7)

This pairwise entanglement between modes across the horizon is ultimately what prevents the modes from sharing their entanglement as in the lump of coal.
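
To make the structure of (7) concrete, here is a minimal numpy sketch — the mode frequency and the occupation-number cutoff are arbitrary illustrative choices — which traces out the left wedge and checks that a single right-wedge mode is left in a thermal state with the expected entropy:

    import numpy as np

    omega, n_max = 0.5, 60   # illustrative boost frequency; truncation of occupation number

    # Schmidt coefficients p_n = (1 - q) q^n with q = exp(-2 pi omega), from eq. (7)
    q = np.exp(-2 * np.pi * omega)
    p = (1 - q) * q ** np.arange(n_max)

    # Tracing out the left wedge leaves rho_R = diag(p_n): a thermal density
    # matrix in boost energy -- the single-mode version of the Unruh effect.
    S_numeric = -np.sum(p * np.log(p))

    # Analytic entropy of the same thermal oscillator
    S_exact = -np.log(1 - q) - q * np.log(q) / (1 - q)

    print(S_numeric, S_exact)   # agree up to (tiny) truncation error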

On this point, an important clarification bears mentioning: while the pairwise entangled modes are a characteristic feature of horizons, the popular conception of Hawking radiation as pairwise entangled particles is misleading—“a cartoon Hawking invented to explain his paper to children”, as Harlow quotes of Susskind. The wavelength of the modes is of order {M}, the size of the black hole (since {T\sim(8\pi M)^{-1}}, the typical wavelength is {\lambda\sim 1/T\sim M}), and thus the particle interpretation breaks down long before one reaches the horizon. It is therefore meaningless to speak of the radiation as being localized in this manner. (Just to be clear, this of course does not imply that an infalling observer won’t see particles as usual in her own reference frame, as per the equivalence principle. It is merely the blueshifting of Hawking modes back from infinity that is ill-defined; the associated divergence is simply the statement that, from the perspective of an external observer, time appears to stop at the horizon.) The related question of where, precisely, the Hawking radiation originates has not been settled, though the evidence suggests that the adjective “precisely” may lose out to nonlocality as well.

Despite these difficulties, there have been several attempts to reconcile the apparent information loss by black holes by appealing to subtle correlations in the Hawking radiation. And indeed, in this regard it is worth emphasizing an oft-misstated point, namely that the radiation is not exactly thermal in the technical sense of the term—meaning, possessing a Planckian spectrum. Lower bounds on deviations from thermality can be derived from greybody factors, as well as from adiabatic and phase space constraints. The appearance of (exact) thermality in certain calculations of the radiation spectrum (e.g., Hawking’s original work) stems from the fact that the Hartle-Hawking state presupposes that the black hole is in thermal equilibrium with the radiation, in which case one inevitably recovers a perfect black body spectrum. That said, the spectrum is thermal to a very good approximation, so we shall follow the conventional abuse of terminology and continue to use “thermal” in the colloquial sense, i.e., in reference to a highly mixed state with an approximately, rather than exactly, Planckian spectrum.

Even allowing for small deviations from exact thermality, it has been argued that subtle correlations in the Hawking radiation are insufficient to restore unitarity, and that these would instead have to constitute an {O(1)} correction, which would destroy the very semiclassical physics they were intended to save. But the possibility of encoding information in such a manner has not been ruled out. In fact, arguments from holography — more specifically the AdS/CFT correspondence — indicate that unitarity is indeed preserved, and consequently the belief that the information is somehow encoded in the Hawking radiation is currently the most popular position.

An alternative proposal is that the evaporation process halts with a Planck-scale remnant, which contains all the information necessary to purify the radiation. However, whether remnants actually possess such an information capacity has been called into question. Furthermore, even if the issue of unitarity could be resolved (or rather, sidestepped) in this manner, it would require an object on the order of {10^{-35}~m} to contain an (in principle) infinite number of internal states! This hardly seems a reasonable resolution, and remnants are generally disfavored for these and other reasons. That said, it is worth commenting that once the black hole approaches the Planck scale, semi-classical gravity breaks down, and a full theory of quantum gravity is needed to specify what happens in the final moments of a black hole’s life.

A somewhat more fanciful possibility is to suppose that the black hole gives rise to another universe, such that unitarity is preserved in the total system (that is, the resulting multiverse). However, information would still be lost from the perspective of outside observers. (Certain models suggest that when making measurements on an ensemble, the loss of information to the baby universes is not observable; however, this does not appear to resolve the paradox when restricted (as we are) to a single parent universe.) Additionally, there is ongoing debate as to whether evolution to a mixed state (or in this case, to a state defined on a non-Cauchy surface) violates conservation of energy. (Note that in this context we are considering the evolution of the entire system, as opposed to subsystems, from pure to mixed. The latter is a benign and fundamental feature of quantum mechanics known as decoherence.) In any case, this possibility would seem by definition to be beyond observational verification. And as we shall see below, holography provides stronger arguments against black holes acting as “information sinks”, and thus we leave this option aside as well.

Black hole complementarity

All three of the proposed solutions (or rather, classes of solutions) above suffer drawbacks that, as yet, have prevented a satisfactory resolution from emerging. However, in the early 1990s, Susskind, Thorlacius, and Uglum argued that there is in fact no contradiction, due to what they termed black hole complementarity (BHC). Building on earlier ideas by ‘t Hooft, they proposed what is essentially a radical deviation from locality, whereby the same information is observed in different locations by complementary observers. The adjective here denotes the key restriction that these observers are unable to communicate; both measurements are then equally valid, since any contradiction between them could only be observed by transmitting and comparing the results. (This is not to say that the information is in two places simultaneously, since that would violate the no-cloning principle. Rather, “complementarity” refers to the fundamental feature of quantum mechanics whereby non-commuting observables cannot be simultaneously measured, the most famous example being the canonical position and momentum operators.)

The postulates of BHC are as follows:

  1. Unitarity: Black hole formation and evaporation is described by a unitary S-matrix within the context of standard quantum field theory.
  2. EFT: Physics outside the horizon is described to a good approximation by effective field theory.
  3. Thermodynamics: To an external observer, the black hole appears to be a quantum system with discrete energy levels, and the dimension of the subspace of states describing it is {e^S}.
  4. Equivalence principle: A freely falling observer experiences “no drama”, i.e., no substantial deviation from the predictions of general relativity, when crossing the horizon of a large black hole.

Postulates 1 and 3 follow from the usual demands of quantum mechanics and black hole thermodynamics, respectively, as described above. Postulates 2 and 4 essentially follow from the fact that the horizon of a large black hole is a region of low curvature, and (insofar as event horizons are global constructs) its presence is not revealed by any local invariant. (However, there are large nonlocal invariants, in particular a large relative boost. In standard quantum field theory, only large local invariants can lead to a breakdown. But highly boosted strings behave differently than point particles, and some recent work has investigated string scattering near the horizon as a means of probing the possible breakdown of locality in effective field theory.) Indeed, the Earth could be falling through the event horizon of a sufficiently large black hole at this very moment; according to the equivalence principle, we’d be unable to tell. In other words, while new physics, specifically a theory of quantum gravity, is obviously needed for the Planck-scale region near the singularity, one fully expects that semi-classical physics remains valid on (large) horizon scales.

The upshot of BHC is that an observer who remains outside the black hole perceives a hot membrane at the horizon which radiates information, while an infalling observer encounters nothing out of the ordinary as she falls through. The former sees unitary evolution but cannot verify the apparent loss of the equivalence principle, while the situation for the latter is precisely reversed. (Note that BHC does not contradict the relativistic law that physics is the same in all reference frames, but merely asserts that the description of events in frames “separated by a large boost parameter” may differ).

It is instructive to ask what prevents the external observer from jumping into the black hole at some later time in order to compare her observations with those of the earlier infaller. If possible, this would violate the no-cloning principle and thereby render BHC invalid. However, the external observer must wait until after the Page time before she can collect any information. If she then attempts to receive an illegal quantum copy from the earlier infaller by subsequently diving into the hole, the message must be sent with more energy than the entire black hole itself contains—otherwise, she’ll hit the singularity first. Thus it appears that a careful balance of factors conspires to keep the two frames of reference complementary in the above sense.
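
Parametrically, the estimate runs as follows (in units {G=c=\hbar=1} and dropping {O(1)} factors; this is only a scaling argument): the external observer must wait a time of order the Page time, {t\sim M^3}, before jumping in, while the proper time available for the earlier infaller’s message to reach her before she hits the singularity shrinks exponentially with the waiting time, {\Delta\tau\sim M e^{-t/4M}\sim Me^{-M^2}}. The message must therefore be carried by quanta of frequency

\displaystyle \omega\sim\frac{1}{\Delta\tau}\sim\frac{e^{M^2}}{M}\gg M~,

i.e., with vastly more energy than the black hole itself contains.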

BHC is not as far-fetched as it initially sounds. Indeed, the idea that one should only endow observable quantities with ontic status is not only central to relativity, but a core tenet of science in general—that which cannot be measured (that is, does not interact with the physical universe) cannot be meaningfully said to exist. Nonetheless, BHC does entail a significant departure from standard quantum mechanics with regard to the interpretation of the Hilbert space on a Cauchy slice that crosses into the interior of the black hole in such a way as to intersect both “copies” of the information. In particular, the question is whether a global Hilbert space can be meaningfully said to exist on these “nice slices”.

If one posits a global Hilbert space, it must be the case that spacelike-separated operators — specifically those in the interior and exterior — no longer commute. Otherwise, an observer whose causal past includes both regions would be able to measure them simultaneously. In this case, one preserves the usual formulation of quantum mechanics, except that locality is broken in such a manner as to make the same piece of information appear differently to different observers—specifically, observers who are complementary in the above sense. This is sometimes referred to as the weak interpretation of BHC, in contrast to the alternative below. As we shall see, this interpretation is morally in line with AdS/CFT, which also presumes quantum mechanics (i.e., the existence of a single, global Hilbert space) but is fundamentally nonlocal or holographic in nature.

Alternatively, one can deny the existence of such a global Hilbert space. In this so-called strong interpretation of BHC, the interior and exterior observers have their own separate Hilbert spaces, with some suitable matching conditions on the boundary (namely, the horizon). This preserves locality in the sense that spacelike-separated observables commute as expected within each Hilbert space, but it is unclear whether it is possible to formulate a consistent set of matching conditions. (For example, insofar as horizons are global properties of the spacetime, the matching conditions would need to be defined nonlocally in time.) Additionally, as Polchinski has noted, this interpretation still constitutes a “weakening” of local quantum field theory, since it makes the Hilbert space structure subordinate to the causal structure. (This is the inverse of the standard formulation of QFT, wherein locality or “microcausality” is seen to emerge from quantum mechanics in conjunction with special relativity and the clustering property, i.e., factorization of the S-matrix.)

Firewalls: the paradox reloaded

Until recently, BHC was generally regarded as the de facto (albeit perhaps not entirely satisfactory) solution to the information paradox. In 2013, however, Almheiri, Marolf, Polchinski, and Sully (AMPS) argued that the postulates of BHC are in fact mutually inconsistent. This rekindled the information paradox with a vengeance, and the modern, as yet unresolved version is known as the firewall paradox.

The AMPS argument can be crudely summarized as follows: smoothness of the horizon — i.e., the equivalence principle — requires that a given Hawking mode {H} and its interior partner {P} be maximally entangled, as discussed above (more generally, the exterior mode is purified by its interior partner), while purity of the final radiation — i.e., unitarity — requires that {H} be maximally entangled with the earlier radiation {R}. But this violates the monogamy of quantum entanglement, and thus it appears that at least one of the assumptions must be modified. AMPS chose the equivalence principle as the least egregious sacrifice. This would imply that an infalling observer indeed encounters the hot membrane perceived by her external collaborator—and is completely incinerated; hence the name “firewall”.
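
The role of monogamy can be illustrated with a toy numpy check — three qubits standing in for {H}, {P}, and {R}, which is of course a cartoon of the actual dynamics. For any pure state on {HPR}, the mutual informations satisfy {I(H\!:\!P)+I(H\!:\!R)=2S(H)\leq2\ln2}, whereas maximal entanglement of {H} with both {P} and {R} would require the left-hand side to be {4\ln2}:

    import numpy as np

    def entropy(rho):
        """Von Neumann entropy (in nats) of a density matrix."""
        lam = np.linalg.eigvalsh(rho)
        lam = lam[lam > 1e-12]
        return -np.sum(lam * np.log(lam))

    def reduced(psi, keep):
        """Reduced density matrix of a 3-qubit pure state, keeping the listed qubits."""
        psi = psi.reshape(2, 2, 2)
        traced = tuple(i for i in range(3) if i not in keep)
        rho = np.tensordot(psi, psi.conj(), axes=(traced, traced))
        d = 2 ** len(keep)
        return rho.reshape(d, d)

    rng = np.random.default_rng(0)
    psi = rng.normal(size=8) + 1j * rng.normal(size=8)
    psi /= np.linalg.norm(psi)   # random pure state on qubits H, P, R

    S = {label: entropy(reduced(psi, keep)) for label, keep in
         [('H', (0,)), ('P', (1,)), ('R', (2,)), ('HP', (0, 1)), ('HR', (0, 2))]}
    I_HP = S['H'] + S['P'] - S['HP']   # mutual information I(H:P)
    I_HR = S['H'] + S['R'] - S['HR']   # mutual information I(H:R)

    print(I_HP + I_HR, 2 * S['H'], 2 * np.log(2))   # first two agree; both <= 2 ln 2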

Note that the old argument that saved BHC — namely that it is impossible for an external observer to perform a measurement, and then dive in and obtain an illegal quantum copy of the result — does not save us here. The reason is that this argument is based on information recovery, which requires that we wait until after the Page time before jumping in. But as AMPS pointed out, we needn’t wait that long simply to uncover a problem. Instead, the external observer can make a measurement of a single Hawking mode at some early time, which must eventually be entangled with the late radiation if the final state is to be pure. She can then immediately jump in and capture the entanglement between the corresponding late mode and its interior partner, thus violating the no-cloning principle.

The aftermath of AMPS was considerable. As of this writing, less than 5 years after their paper’s appearance, it has received nearly 700 citations. Despite this effort, however, the Firewall remains unextinguished. Operationally, there are reasons to believe that the problem is purely academic, insofar as no observer would actually perceive such a violation. For example, the first paper I wrote during my PhD casts doubt on whether all the ingredients for the paradox fit within a single observer’s causal patch. But such operational arguments feel somehow unsatisfactory, insofar as they do not shed light on how unitarity is preserved in principle—that is, how the requisite information escapes, as AdS/CFT implies it must.

One perspective is that our concept of locality will require modification, such as in the so-called non-violent nonlocality proposals of Giddings and collaborators. In fact, locality is not the only tenet that appears in need of reassessment. As we’ve mentioned before, nearly every discussion in this context involves an assumption about the entanglement structure at the horizon, namely that the Hilbert space factorizes into a tensor product structure. This fails in gauge theory, let alone gravity, and a more rigorous approach to field theory suggests that it’s not even valid for non-interacting QFTs. The deepening connections between entanglement and spacetime geometry uncovered in recent years may shed light on this issue, and it is one to which we hope to return.

 

Publication note: this post is essentially the first chapter of my unpublished PhD thesis, minus the bibliography (with apologies to the many deserving authors therein). A few bits, mostly around the beginning, resurfaced in my submission to the Black Hole Initiative’s 2018 essay competition, which can be found here.

 

 

 
