I was asked to give a lecture on “quantum puzzles and black holes” at the 20th Jürgen Ehlers Spring School, which was to be hosted at AEI this week. Unfortunately the school was cancelled due to the SARS-CoV-2 pandemic, but since I enjoyed researching the topic so much, I thought I’d make a post of it instead. Part of what made preparing for this lecture so interesting is that the students — primarily undergrads bordering on Masters students — hadn’t had quantum field theory (QFT), which meant that if I wanted to elucidate, e.g., the firewall paradox or the thermal nature of horizons in general, I’d have to do so without recourse to the standard toolkit. And while there’s a limit to how far one can get without QFT in curved spacetime, it was nice to go back and revisit some of the things that long familiarity had led me to take for granted.
Accordingly, I’ve endeavored to make this post maximally pedagogical, assuming only basic general relativity (GR) and a semblance of familiarity with undergraduate quantum mechanics and statistical thermodynamics. I’ll start by introducing black hole thermodynamics, which leads to the conclusion that black holes have an entropy given by a quarter the area of their event horizons in Planck units. Then in the second section, I’ll discuss some quantum puzzles that arise in light of Hawking’s discovery that black holes radiate, which seems to imply that information is lost as they evaporate, in violation of quantum mechanics. In the third and final section, I’ll explain how the considerations herein gave rise to the holographic principle, one of the deepest revelations in physics to date, which states that the three-dimensional world we observe is described by a two-dimensional hologram.
1. Black hole thermodynamics
Classically, black hole thermodynamics is a formal analogy between black holes and statistical thermodynamics. It was originally put forth by Jacob Bekenstein in his landmark 1973 paper [1], in which he proposed treating black holes thermodynamically, and argued that the entropy should be proportional to the area of the event horizon. Let’s start by examining the idea of black holes as thermodynamic objects, and build up to the (in)famous entropy-area relation as we go.
As I’ve mentioned before, black holes must be endowed with an entropy in order to avoid violating the second law of thermodynamics; otherwise, one could decrease the entropy of the universe simply by dropping anything into the black hole. Taking entropy as a measure of our ignorance — equivalently, as a measure of the inaccessibility of the internal configuration — this is intuitive, since the degrees of freedom comprising whatever object one dropped in are now hidden behind the horizon and should thus be counted among the internal microstates of the black hole. Furthermore, one knows from Hawking’s area theorem [2] that the surface area of a classical black hole is non-decreasing, and thus the dynamics of black holes appears to select a preferred direction in time, analogous to the thermodynamic arrow of time, which is a consequence of the fact that entropy (of any closed thermodynamic system) always increases. This led Bekenstein [1] to propose that one could “develop a thermodynamics of black holes”, in which the entropy is precisely related to the area of the horizon, $S\propto A$ (here “$\propto$” means “proportional to”; we’ll fix the coefficient later).
Thermodynamically, entropy is an extensive property, so associating the entropy to some function of the size of the black hole makes sense. But why $A$, specifically? In statistical mechanics, the entropy generally scales with the volume of the system, so one might naïvely have expected $S\propto V$. Indeed, one of the most remarkable aspects of black holes is that the entropy scales with the area instead of the volume. Insofar as black holes represent the densest possible configuration of energy — and hence of information — this implies a drastic reduction in the (maximum) number of degrees of freedom in the universe, as I’ll discuss in more detail below. However, area laws for entanglement entropy are actually quite common; see for example [3] for a review. And while the ultimate source of black hole entropy (that is, the microscopic degrees of freedom it’s counting) is an ongoing topic of current research, the entanglement between the interior and exterior certainly plays an important role. But that’s a QFT calculation, whereas everything I’ve said so far is purely classical. Is there any way to see that the entropy must scale with $A$ instead of $V$, without resorting to QFT in curved space or the full gravitational path integral?
To answer this, consider the Schwarzschild metric,

$$\mathrm{d}s^2=-\left(1-\frac{r_s}{r}\right)\mathrm{d}t^2+\left(1-\frac{r_s}{r}\right)^{-1}\mathrm{d}r^2+r^2\mathrm{d}\Omega^2~,\qquad(1)$$

where $r_s=2GM$ is the Schwarzschild radius and $\mathrm{d}\Omega^2=\mathrm{d}\theta^2+\sin^2\!\theta\,\mathrm{d}\varphi^2$ is the metric on the unit 2-sphere. For $r>r_s$, the metric is static: the spatial components look the same for any value of $t$. But inside the black hole, $r<r_s$, and hence $\left(1-\frac{r_s}{r}\right)<0$. This makes the $g_{tt}$ component positive and the $g_{rr}$ component negative, so that space and time switch roles in the black hole interior. Consequently, the “spatial” components are no longer static inside the black hole, since they will continue to evolve with $r$, which now plays the role of time. Thus the “volume” of the black hole interior depends on time, and in fact on one’s choice of coordinates in general. (This isn’t too strange, if you think about it: the lesson of general relativity is that spacetime is curved, so your quantification of “space” will generally depend on your choice of “time”).
The issue is clarified nicely in a paper by Christodoulou and Rovelli [4] (be warned however that while the GR calculations in this paper are totally solid, the discussion of entropy in section VIII is severely flawed). The crux of the matter is that our usual definition of “volume” doesn’t generalize to curved spacetime. In flat (Minkowski) spacetime, we define volume by picking a Cauchy slice, and consider the spacelike 3d hypersurface bounded by some 2d sphere on that slice. But when space is curved, there are many different constant-$t$ slices we can choose, none of which has any special status (in GR, the coordinates don’t matter). Suppose for example we tried to calculate the interior volume in Schwarzschild coordinates (1). Our flat-space intuition says to pick a constant-$t$ slice bounded by some surface (in this case, the horizon itself), and integrate over the enclosed hypersurface $\Sigma$:

$$V=\int_\Sigma\mathrm{d}^3x\,\sqrt{h}~,\qquad(2)$$
where $h$ is the determinant of the (induced) metric on $\Sigma$. Along a timeslice, $\mathrm{d}t=0$, so we have

$$V=4\pi\int\mathrm{d}r\,\frac{r^2}{\sqrt{1-r_s/r}}~.\qquad(3)$$
But the Schwarzschild coordinates break down at the horizon, so the upper and lower limits of the remaining radial integral are the same, $r_s$, and the integral vanishes. Thus the Schwarzschild metric would lead one to conclude that the “volume” of the black hole is zero! (Technically the integral is ill-defined at $r=r_s$, but one obtains the same result by changing the outer limit to $r_s-\epsilon$ and taking the limit $\epsilon\to0$).
Let’s try a different coordinate system, better suited to examining the interior. Define the new time variable

$$T=t+2\sqrt{r_s r}-2r_s\tanh^{-1}\!\sqrt{\frac{r_s}{r}}~,\qquad(4)$$

in terms of which the metric (1) becomes

$$\mathrm{d}s^2=-\left(1-\frac{r_s}{r}\right)\mathrm{d}T^2+2\sqrt{\frac{r_s}{r}}\,\mathrm{d}T\,\mathrm{d}r+\mathrm{d}r^2+r^2\mathrm{d}\Omega^2~.\qquad(5)$$
These are Gullstrand-Painlevé (GP) coordinates. They’re relatively unfamiliar, but have some useful properties; see for example [6], in which my colleagues and I utilized them in the context of the firewall paradox during my PhD days. Unlike the Schwarzschild coordinates, they cover both the exterior region and the black hole interior. They look like this:
where constant-$T$ slices are in yellow, and constant-$t$ slices are in green. One neat thing about these coordinates is that $T$ is the proper time of a free-falling observer who starts from rest at infinity. (Somewhat poetically, they’re the natural coordinates that would be associated to a falling drop of rain, and are sometimes called “rain-frame coordinates” for this reason). Another neat thing about them is that the constant-$T$ slices are flat! Thus if we attempt to calculate the interior volume along one such Cauchy slice, we simply recover the flat-space result,

$$V=4\pi\int_0^{r_s}\mathrm{d}r\,r^2=\frac{4}{3}\pi r_s^3~,\qquad(6)$$
and thus the volume is constant, no matter what $T$-slice we choose; in other words, the observer can fall forever and never see less volume! See [5] for a pedagogical treatment of the volume calculation in some other coordinate systems, which again yield different results.
The above examples illustrate the fact that in general, there are many (in fact, infinitely many!) different choices of $\Sigma$ within the boundary sphere $S$, and we need a slightly more robust notion of volume to make sense in curved spacetime. As Christodoulou and Rovelli point out, a better definition for $\Sigma$ is the largest spherically symmetric spacelike hypersurface bounded by $S$. This reduces to the familiar definition above in Minkowski space, but extends naturally and unambiguously to curved spacetime as well. Thus the basic idea is to first fix the boundary sphere $S$, and then extremize over all possible interior 3d hypersurfaces $\Sigma$. For a Schwarzschild black hole, in the limit where the null coordinate $v$ is much greater than $r_s$ (i.e., at late times), one finds [4]

$$V\simeq3\sqrt{3}\,\pi M^2v\qquad(7)$$

(in units $G=c=1$).
Thus, the interior volume of a black hole continues to grow linearly for long times, and can even exceed the volume of the visible universe!
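To get a feel for the numbers, here’s a quick back-of-the-envelope evaluation of the late-time formula (7) in SI units. The inputs (the mass of Sagittarius A* and the age of the universe as the advanced time) are rough, illustrative choices on my part:

```python
import math

G, c = 6.674e-11, 2.998e8          # SI units
M_sun = 1.989e30                   # kg

# Geometrized mass GM/c^2 (metres) for Sagittarius A*, M ~ 4e6 solar masses
M_geo = G * (4e6 * M_sun) / c**2

# Advanced time v ~ age of the universe, converted to metres
v = 13.8e9 * 3.156e7 * c           # years -> seconds -> metres

# Christodoulou-Rovelli late-time interior volume, V ~ 3*sqrt(3)*pi*M^2*v
V = 3 * math.sqrt(3) * math.pi * M_geo**2 * v

V_sun = 4 / 3 * math.pi * 6.957e8**3   # volume of the Sun, m^3
print(f"interior volume ~ {V:.2e} m^3, or {V / V_sun:.2e} solar volumes")
```

Even for these modest inputs, the interior volume is tens of orders of magnitude larger than the Sun, despite the horizon radius being comparable to Mercury’s orbit.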
Whether one thinks of entropy as a measure of one’s ignorance of the interior given the known exterior state, or a quantification of all possible microstates given the constraints of mass (as well as charge and angular momentum for a Kerr-Newman black hole), it should not depend on the choice of coordinates, or continue growing indefinitely while the surface area (i.e., the boundary between the known and unknown regions) remains fixed. Thus if we want a sensible, covariant quantification of the size of the black hole, it must be the area. (Note that the area is more fundamental than the radius: the latter is defined in terms of the former (equivalently, in terms of the mass), rather than by measuring the distance from the origin $r=0$, for the same reasons we encountered when attempting to define volume above). Since the event horizon is a null surface, the area is coordinate-invariant; fixing $t$ and $r=r_s$ in the Schwarzschild metric (1) then simply yields the area element of the 2-sphere,

$$\mathrm{d}A=r_s^2\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\varphi\quad\implies\quad A=4\pi r_s^2~.\qquad(8)$$
Thus areas, rather than volumes, provide the only covariant, well-defined measures of the spatial “size” of black holes.
Technically, this doesn’t prove that $S\propto A$, of course; it might logically have been some other function or power of the area, but this would be less natural on physical grounds (though $S\propto\sqrt{A}$ can be easily ruled out by considering a black hole merger [2]). And, while it’s a nice consistency check on the universe, it doesn’t really give any insight into why the degrees of freedom are ultimately bounded by the surface area, beyond the necessity-of-curved-space argument above.
There is however one problem with this identification: the entropy, in natural units, is dimensionless, while the area has units of length squared, so the mismatch must be remedied by the hitherto undetermined proportionality factor. As Bekenstein pointed out, there is no universal constant in GR alone that has the correct units; the only fundamental constant that fits the bill is the Planck length,

$$\ell_P=\sqrt{\frac{G\hbar}{c^3}}\approx1.6\times10^{-35}\,\mathrm{m}~.\qquad(9)$$

Hawking’s subsequent discovery that black holes radiate fixed the proportionality coefficient to $1/4$, yielding

$$S_{BH}=\frac{A}{4\ell_P^2}=\frac{Ac^3}{4G\hbar}~,\qquad(10)$$
which is the celebrated Bekenstein-Hawking entropy of black holes. This is one of the most remarkable expressions in all of physics, insofar as it’s perhaps the only known example in which gravity ($G$), quantum mechanics ($\hbar$), and special relativity ($c$) all come together (thermodynamics too, if you consider that we set $k_B=1$).
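As a quick sanity check on the orders of magnitude involved, we can evaluate (10) for a solar-mass black hole in SI units (a numerical aside, not part of the derivation):

```python
import math

# Fundamental constants (SI)
G = 6.674e-11
hbar = 1.0546e-34
c = 2.998e8
k_B = 1.381e-23

M = 1.989e30  # one solar mass, kg

# Horizon area of a Schwarzschild black hole, A = 16*pi*(GM/c^2)^2
A = 16 * math.pi * (G * M / c**2)**2

# Bekenstein-Hawking entropy, S = k_B * A * c^3 / (4 * G * hbar)
S = k_B * A * c**3 / (4 * G * hbar)
print(f"S / k_B ~ {S / k_B:.2e}")
```

The result is of order $10^{77}$, dwarfing the ordinary thermal entropy of the Sun itself (roughly $10^{58}\,k_B$): collapsing matter into a black hole is an enormously entropic process.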
Hawking’s calculation [7], and the myriad alternative derivations put forward since, require a full QFT treatment, so I’m not going to go into them here. If you’re interested, I’ve covered one such derivation based on the gravitational path integral before, and the case of a collapsing black hole that Hawking considered is reviewed in the classic textbook [8]. In the original paper [1] however, Bekenstein provides a very cute derivation which barely even requires quantum mechanics, and yet gets surprisingly close to the right answer. The basic idea is to calculate the minimum possible increase in the size of the black hole, which occurs when we gently drop in a particle whose size is of order its own Compton wavelength (this is where the $\hbar$ comes in). This can be related to the entropy on the basis that the loss of information is the entropy of a single bit, $\ln2$, i.e., the answer to the yes-or-no question, “does the black hole contain the particle?” This line of reasoning yields $S=\frac{\ln2}{8\pi}\frac{A}{\ell_P^2}$; not bad, given that we ignored QFT entirely!
By now I hope I’ve convinced you of two facts: (1) black holes have an entropy, and (2) the entropy is given by the area of the horizon. This is the foundation on which black hole thermodynamics is built.
Recall the first law of thermodynamics for a closed system,

$$\mathrm{d}U=T\mathrm{d}S-P\mathrm{d}V~,\qquad(11)$$

where $U$ is the internal energy, $P$ is the pressure, and $T$, $S$, and $V$ are the temperature, entropy, and volume as above. The second term on the right-hand side represents the work done on the system by the environment (in this context, “closed” refers only to the transfer of mass or particles; the transfer of energy is still allowed). Supposing that this term is zero, the first term can be regarded as the definition of entropy for reversible processes, i.e., $\mathrm{d}S=\delta Q/T$, where $\delta Q$ is the heat.
Now consider, for generality, a charged, rotating black hole, described by the Kerr-Newman metric; in Boyer-Lindquist coordinates (and units $G=c=1$), this reads:

$$\mathrm{d}s^2=-\frac{\Delta}{\rho^2}\left(\mathrm{d}t-a\sin^2\!\theta\,\mathrm{d}\varphi\right)^2+\frac{\sin^2\!\theta}{\rho^2}\left[(r^2+a^2)\,\mathrm{d}\varphi-a\,\mathrm{d}t\right]^2+\frac{\rho^2}{\Delta}\mathrm{d}r^2+\rho^2\mathrm{d}\theta^2~,\qquad(12)$$
which reduces to the Schwarzschild black hole above when the charge and angular momentum go to zero. For compactness, we have defined

$$\Delta\equiv r^2-r_sr+a^2+r_Q^2~,\qquad\rho^2\equiv r^2+a^2\cos^2\!\theta~,\qquad(13)$$

where $a\equiv J/M$ is the angular momentum per unit mass, and $r_Q^2\equiv\frac{GQ^2}{4\pi\epsilon_0c^4}$ parametrizes the charge.
The $g_{rr}$ component diverges when $\Delta=0$, which yields an inner ($r_-$) and outer ($r_+$) horizon:

$$r_\pm=\frac{r_s}{2}\pm\sqrt{\frac{r_s^2}{4}-a^2-r_Q^2}~.\qquad(14)$$
The inner horizon is generally thought to be unstable, while the outer is the event horizon whose area we’re interested in calculating. Setting $t$ and $r$ to constant values as above, the induced metric on the resulting 2d surface is

$$\mathrm{d}s^2=\rho^2\mathrm{d}\theta^2+\frac{(r^2+a^2)^2-\Delta a^2\sin^2\!\theta}{\rho^2}\sin^2\!\theta\,\mathrm{d}\varphi^2~.\qquad(15)$$
We can then consider the case where the radius $r=r_+$, whereupon $\Delta=0$, and the area is simply

$$A=\int\!\sqrt{g_{\theta\theta}\,g_{\varphi\varphi}}\,\mathrm{d}\theta\,\mathrm{d}\varphi=4\pi\left(r_+^2+a^2\right)~,\qquad(16)$$
which is fairly intuitive: we get the Schwarzschild result, plus an additional contribution from the angular momentum.
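The horizon radii and area above are easy to check numerically; the following sketch (geometric units $G=c=1$, with $r_Q\to Q$) verifies that the Kerr-Newman area reduces to the Schwarzschild value $4\pi r_s^2=16\pi M^2$ when $a,Q\to0$, and that spinning the black hole up at fixed mass shrinks the horizon:

```python
import math

def horizon_area(M, J, Q):
    """Outer-horizon area of a Kerr-Newman black hole (G = c = 1)."""
    a = J / M
    r_plus = M + math.sqrt(M**2 - a**2 - Q**2)  # outer root of Delta = 0
    return 4 * math.pi * (r_plus**2 + a**2)

M = 1.0
A_schw = horizon_area(M, 0.0, 0.0)
print(A_schw, 16 * math.pi * M**2)          # identical: 4*pi*r_s^2 with r_s = 2M

# adding angular momentum at fixed M decreases the horizon area
print(horizon_area(M, 0.5, 0.0) < A_schw)
```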
Now, the area depends only on the mass $M$, charge $Q$, and angular momentum $J$ (cf. the no-hair theorem), so a generic perturbation takes the form

$$\mathrm{d}A=\frac{\partial A}{\partial M}\mathrm{d}M+\frac{\partial A}{\partial J}\mathrm{d}J+\frac{\partial A}{\partial Q}\mathrm{d}Q~,\qquad(17)$$

which can be rearranged into the suggestive form

$$\mathrm{d}M=\frac{\kappa}{8\pi}\mathrm{d}A+\Omega\,\mathrm{d}J+\Phi\,\mathrm{d}Q~,\qquad(18)$$

where we have identified the surface gravity

$$\kappa=\frac{4\pi\left(r_+-r_-\right)}{A}=\frac{r_+-r_-}{2\left(r_+^2+a^2\right)}~,\qquad(19)$$

which is the (normalized) gravitational acceleration experienced at the equator. (“Normalized”, because the Newtonian acceleration diverges at the horizon, so a meaningful value is obtained by dividing the proper acceleration by the gravitational time dilation factor).
Each term in this expression for $\mathrm{d}M$ has a counterpart in (11). We already identified the area with the entropy, cf. (10), and since the mass is the only relevant parameter in the problem, it plays the role of the internal energy $U$. The surface gravity corresponds to the temperature. So if we restricted to a Schwarzschild black hole, we’d have

$$\mathrm{d}M=\frac{\kappa}{8\pi}\mathrm{d}A=T\mathrm{d}S~,\qquad(20)$$
which just canonizes the relationship between entropy and area we uncovered above, with $T\propto\kappa$ (given $S\propto A$). What about the other terms? As mentioned above, the $-P\mathrm{d}V$ term in (11) corresponds to the work done on the system. And as it turns out, there’s a way of extracting energy from a (charged, rotating) black hole, known as the Penrose process. I don’t have the spacetime to go into this here, but the upshot is that the parameters $\Omega$ and $\Phi$ in (18) correspond to the rotational angular velocity and electric potential, respectively, so that $\Omega\,\mathrm{d}J+\Phi\,\mathrm{d}Q$ is indeed the analogue of the work that the black hole could perform on some external system; i.e.,

$$\delta W=\Omega\,\mathrm{d}J+\Phi\,\mathrm{d}Q~.\qquad(21)$$
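The identification (18) can be checked directly: treating $M$ as a function of $(A,J,Q)$, the implicit function theorem gives $\partial M/\partial A=\kappa/8\pi$ at fixed $J,Q$, and $\partial M/\partial J=\Omega$ at fixed $A,Q$ (with $\Omega=a/(r_+^2+a^2)$). A numerical sketch using central finite differences, in geometric units:

```python
import math

def kn_quantities(M, J, Q):
    """Outer-horizon area, surface gravity, and angular velocity (G = c = 1)."""
    a = J / M
    root = math.sqrt(M**2 - a**2 - Q**2)
    r_plus, r_minus = M + root, M - root
    area = 4 * math.pi * (r_plus**2 + a**2)
    kappa = (r_plus - r_minus) / (2 * (r_plus**2 + a**2))   # surface gravity
    omega = a / (r_plus**2 + a**2)                          # horizon angular velocity
    return area, kappa, omega

M, J, Q, h = 1.0, 0.3, 0.2, 1e-6
area, kappa, omega = kn_quantities(M, J, Q)

# central finite differences of A(M, J, Q)
dA_dM = (kn_quantities(M + h, J, Q)[0] - kn_quantities(M - h, J, Q)[0]) / (2 * h)
dA_dJ = (kn_quantities(M, J + h, Q)[0] - kn_quantities(M, J - h, Q)[0]) / (2 * h)

# first law: dM = (kappa/8pi) dA + Omega dJ + Phi dQ
print(abs(1 / dA_dM - kappa / (8 * math.pi)))   # dM/dA|_{J,Q} = kappa/8pi
print(abs(-dA_dJ / dA_dM - omega))              # dM/dJ|_{A,Q} = Omega
```

Both differences vanish to within the finite-difference error, confirming that (18) is just the chain rule applied to the area formula (16).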
And of course, energy that can’t be extracted as work is another way of describing entropy, so even if you could extract all the angular momentum and charge from the black hole, you’d still be left with what Bekenstein calls the “degradation energy”, which is the area term (20) (determined by the irreducible mass).
That’s all I wanted to say about black hole thermodynamics here, though the analogy we’ve established above can be fleshed out more thoroughly, complete with four “laws of black hole thermodynamics” in parallel to the classic set. See for example my earlier post on firewalls, or the review by Jacobson [9], for more details. However, I’ve been glossing over a critical fact, namely that at the classical level, black holes are, well, black: they don’t radiate, and hence a classical black hole has zero temperature. This is the reason I’ve been careful to refer to black hole thermodynamics as an analogy. Strictly speaking, one cannot regard the temperature as the physical temperature of a single black hole, but rather as referring to the equivalence class of all possible black holes subject to the same (observable) constraints of mass, charge, and angular momentum. In other words, the “temperature” of a Schwarzschild black hole is just a quantification of how the entropy — which measures the number of possible internal microstates — changes with respect to the mass, $T^{-1}=\partial S/\partial M$.
2. Quantum black holes
As everyone knows by now, Hawking famously discovered [7] that black holes do in fact radiate, with a temperature given by

$$T=\frac{\kappa}{2\pi}=\frac{1}{8\pi GM}~.\qquad(22)$$

(This result is for a Schwarzschild black hole in thermal equilibrium, and is precisely what we obtain when taking $J,Q\to0$ in the expression for the surface gravity (19)). Hawking’s calculation, and many other derivations since, require the machinery of QFT, so I won’t go into the details here. There is however a cute hack for obtaining the identification (22), whereby one Wick rotates to Euclidean signature so that the $(3+1)$-dimensional Schwarzschild geometry becomes $\mathbb{R}^2\times S^2$, whereupon the temperature appears as a consequence of the periodicity in Euclidean time; see my first post for a sketch of the resulting “cigar geometry”, or my upcoming post on QFT in curved space for a more detailed discussion about the relationship between periodicity and horizons.
Hawking radiation is sometimes explained as the spontaneous fluctuation of a particle-antiparticle pair from the vacuum across the horizon; the particle escapes to infinity as Hawking radiation, while the antiparticle is captured by the black hole. This is a cute cartoon, except that it’s wrong, and an over-reliance on the resulting intuition can get one into trouble. I’ve already devoted an entire post to this issue, so I’ll refer you there if you’re interested; if you’ve got a QFT background, you can also find some discussion of the physical aspects of black hole emission in chapter eight of [8]. In a nutshell, the basic point is that radiation comes out in momentum-space modes with wavelength $\lambda\sim r_s$, which can’t be Fourier transformed back to position space to yield anything localized near the horizon. In other words, near the horizon of a black hole, the meaning of “particles” employed by an external observer breaks down. The fact that black holes can radiate away energy means that if you stop throwing in matter, the black hole will slowly shrink, which seems to contradict Hawking’s area theorem above. The catch is that this theorem relies on the weak energy condition, which states that the matter density along every timelike vector field is non-negative; this is no longer necessarily true once quantum fluctuations are taken into account, so there’s no mathematical contradiction. It does however mean that our formulation of the “second law” of black hole thermodynamics was too naïve: the area (and hence entropy) of a black hole can decrease, but only by emitting Hawking radiation which increases the entropy of the environment by at least as much. This motivates us to introduce the generalized entropy

$$S_{\mathrm{gen}}=\frac{A}{4\ell_P^2}+S_{\mathrm{rad}}~,\qquad(23)$$
where the first term is the black hole entropy (10), and the second is the entropy of the thermal radiation. In full generality, the Second Law of (Black Hole) Thermodynamics is then the statement that the entropy (10) of all black holes, plus the entropy of the rest of the universe, never decreases:

$$\mathrm{d}S_{\mathrm{gen}}\geq0~.\qquad(24)$$
Evaporating black holes have some peculiar properties. For example, since the temperature of a Schwarzschild black hole is inversely proportional to the mass, the specific heat capacity is negative:

$$C=\frac{\partial M}{\partial T}=-\frac{1}{8\pi GT^2}<0~.\qquad(25)$$
(We’re working in natural units, so the energy $E=M$, and hence the heat $\delta Q=\mathrm{d}M$). Consequently, throwing matter into a black hole to increase its size actually makes it cooler! Conversely, as the black hole emits Hawking radiation, its temperature increases, causing it to emit more radiation, and so on in a feedback loop that causes the black hole to get hotter and hotter as it shrinks away to nothing. (Precisely what happens in the final moments of a black hole’s lifespan is an open question, likely requiring a more developed theory of quantum gravity to answer. Here I’m going to take the majority view that it indeed evaporates away completely). Note that this means that whenever one speaks about black holes thermodynamically, one should use the microcanonical ensemble rather than the canonical ensemble, because the latter is unstable to any quantum fluctuation that changes the mass of the black hole.
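A quick numerical illustration of the negative heat capacity, using the Schwarzschild temperature restored to SI units, $T=\hbar c^3/(8\pi GMk_B)$:

```python
import math

G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.0546e-34, 1.381e-23
M_sun = 1.989e30  # kg

def hawking_temperature(M):
    """Schwarzschild Hawking temperature in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

T1 = hawking_temperature(M_sun)       # ~6e-8 K: far colder than the CMB
T2 = hawking_temperature(2 * M_sun)   # doubling the mass halves the temperature
print(T1, T2, T2 < T1)
```

A solar-mass black hole is thus colder than the cosmic microwave background, so today it would actually absorb more energy than it emits; only once the universe cools further (or for much lighter black holes) does net evaporation set in.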
The fact that black holes radiate when quantum field theory is taken into account transforms black hole thermodynamics from a formal analogy to an ontologically meaningful description, where now the temperature is indeed the physical (thermodynamic) temperature of a single black hole. In this sense, quantum effects were required to resolve the tension between the fact that the information-theoretic interpretation of entropy as the measure of possible internal microstates was applicable to a single black hole — and hence had physical significance — while the temperature had no meaningful physical interpretation in the non-radiating (classical) case. The combination of seemingly disparate regimes in the expression for the entropy (10) is not a coincidence, but represents a truly remarkable unification. It’s perhaps the first thing a successful theory of quantum gravity should be expected to explain.
The fact that black holes evaporate also brings into focus the need for such a unification of general relativity and quantum field theory: a black hole is one of the only known regimes (the other being the Big Bang singularity) that falls within the purview of both theories, but attempts to combine them yield nonsensical infinities that have thus far resisted all attempts to tame them. This leads me to the main quantum puzzle I wanted to discuss: the information paradox. (The firewall paradox is essentially just a more modern sharpening of the underlying conflict, but is more difficult to sketch without QFT).
The information paradox, in a nutshell, is a conflict between the apparent ability of black holes to destroy information, and the quantum mechanical postulate of unitarity. Recall that unitarity is the statement that the time-evolution of a quantum state via the Schrödinger equation is described by a unitary operator, which has the property that it preserves the inner product. Physically, this ensures that probabilities continue to sum to one, i.e., that no information is lost. While evolution in open systems can be non-unitary due to decoherence with the environment, the evolution of any closed quantum mechanical system must be unitary, i.e., pure states evolve to pure states only, never to mixed states. This means that if we create a black hole by collapsing some matter in an initially pure state, let it evaporate, and then collect all the Hawking radiation, the final state must still be pure. The problem is that the Hawking radiation is, to a very good approximation, thermal, meaning it has the Planckian spectrum characteristic of black-body radiation, and thermal radiation contains no information.
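The statement that unitary evolution preserves purity is easy to see numerically: the purity $\mathrm{Tr}\,\rho^2$ equals 1 for a pure state, and is invariant under $\rho\to U\rho U^\dagger$. A minimal sketch with a random state and a random unitary:

```python
import numpy as np

rng = np.random.default_rng(42)
d = 8  # toy Hilbert-space dimension

# random pure state |psi><psi|
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# random unitary from the QR decomposition of a complex Gaussian matrix
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
rho_out = U @ rho @ U.conj().T

purity_in = np.trace(rho @ rho).real
purity_out = np.trace(rho_out @ rho_out).real
print(purity_in, purity_out)  # both 1: pure states stay pure
```

A maximally mixed state, by contrast, has purity $1/d$; no unitary can take one to the other, which is exactly the tension with exactly-thermal Hawking radiation.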
The situation is often depicted by the Page curve [10,11], which is a plot of entropy with respect to time as the black hole evaporates. Suppose we collect all the Hawking radiation from a black hole that starts in a pure state; call the entropy of this radiation $S_{\mathrm{rad}}$. Initially, $S_{\mathrm{rad}}=0$, because our subsystem is empty. As the black hole evaporates, $S_{\mathrm{rad}}$ steadily increases as we collect more and more radiation. Eventually the black hole evaporates completely, and we’re left with a thermal bath of radiation in a maximally mixed state, so (after normalizing) $S_{\mathrm{rad}}=1$: a maximal loss of information has occurred! This is the information paradox in a single graph. In sharp contrast, what quantum mechanics expects to happen is that after the halfway point in the black hole’s lifespan, the late-time radiation starts to purify the early-time radiation we’ve already collected, so the entropy curve should turn around and head back to 0 when the black hole disappears. This is illustrated in the figure below, from Page’s paper [11]. (The lack of symmetry in the upwards and downwards parts is due to the fact that the emission of different particles (in this calculation, just photons and gravitons) affects the change in the black hole entropy and the change in the radiation entropy slightly differently. The turnover isn’t at exactly half the lifetime either, but rather slightly later.)
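Page’s expectation can be illustrated with random states: if the total (black hole + radiation) system is in a random pure state, the entanglement entropy of a $k$-qubit “radiation” subsystem rises to near-maximal and then falls back to zero as $k$ sweeps from none to all of the qubits, tracing out the shape of the Page curve. A toy sketch (this stands in for the actual evaporation dynamics, which it does not model):

```python
import numpy as np

rng = np.random.default_rng(0)

def radiation_entropy(n, k):
    """Entanglement entropy (bits) of a k-qubit subsystem of a random n-qubit pure state."""
    dA, dB = 2**k, 2**(n - k)
    psi = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
    psi /= np.linalg.norm(psi)
    p = np.linalg.svd(psi, compute_uv=False)**2   # Schmidt spectrum = eigenvalues of rho_A
    p = p[p > 1e-15]
    return float(-(p * np.log2(p)).sum())

n = 10
curve = [radiation_entropy(n, k) for k in range(n + 1)]
print([round(s, 2) for s in curve])  # rises to ~n/2 bits, then falls back to 0
```

This is essentially Page’s theorem: a typical subsystem smaller than half the total is very nearly maximally entangled, so the curve turns over once the “radiation” holds more than half the qubits.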
The fundamental issue is that quantum mechanics demands that the information escape the black hole, but there doesn’t seem to be any way of enabling this. (For more discussion, see my earlier post on firewalls. I should also mention that there are alternative proposals for what happens at the end of a black hole’s lifetime, but these are generally disfavoured for a variety of reasons, most notably AdS/CFT). That said, just within the past year, it was discovered that in certain AdS/CFT set-ups, one can obtain a Page curve for the entropy by including the contributions from wormhole geometries connecting different replicas that arise as subleading saddles in the gravitational path integral; see for example [12], or the talks by Douglas Stanford and Juan Maldacena as part of my research group’s QGI seminar series. While this doesn’t quite solve the paradox insofar as it doesn’t explain how the information actually escapes, it’s encouraging that — at least in AdS/CFT — there does seem to be a mechanism for correctly tracking the entropy as the black hole evaporates.
3. The holographic principle
To close this lecture/post, I’d be remiss if I didn’t mention the most remarkable and far-reaching consequence of the black hole investigations above: the holographic principle. Put forth by ‘t Hooft [13], and given a formulation in terms of string theory by Susskind [14], this is essentially the statement that the ultimate theory of quantum gravity must exhibit a dimensional reduction (from 3 to 2 spatial dimensions in our $(3+1)$-dimensional universe) in the number of fundamental degrees of freedom. This developed from the arguments of Bekenstein, that the black hole entropy (10) represents a bound on the amount of information that can be localized within any given region. The basic idea is that any attempt to cram more information into a region of fixed size will cause the system to collapse into a black hole, and therefore the dimension of the Hilbert space associated to any region must scale with the area of the boundary.
The review by Bousso [15] contains an excellent modern introduction to this principle; I’ll only give a quick summary of the main idea here. Recall that in quantum mechanics, the number of degrees of freedom $N$ is given by the log of the dimension of the Hilbert space $\mathcal{H}$. For example, in a system with 100 spins, there are $2^{100}$ possible states, so $\dim\mathcal{H}=2^{100}$ and $N=\log_2\dim\mathcal{H}=100$, i.e., the system contains 100 bits of information. One can crudely think of quantum field theory as a continuum theory with a harmonic oscillator at every spacetime point; a single harmonic oscillator already has $\dim\mathcal{H}=\infty$, so one would expect an infinite number of degrees of freedom for any region. However, one can’t localize more than a Planck energy into a Planck cube without forming a black hole, which provides an ultra-violet (UV) cutoff on the spectrum. And since any finite volume imposes an infra-red (IR) cutoff, we can take the degrees of freedom in field theory to scale like the volume of the region, with one oscillator per Planck cell. In other words, we think of space as a grid with lattice spacing $\ell_P$; the total number of oscillators thus scales like the volume $V$ (in Planck units), and each one has a finite number of states $n$ due to the UV cutoff mentioned above. Hence $\dim\mathcal{H}=n^V$ and $N=V\log n$. Thus, since the maximum entropy is $S_{\max}=\log\dim\mathcal{H}$, we find $S\propto V$, and we expect entropy to scale with volume just as in classical mechanics.
The lesson from black hole thermodynamics is that gravity fundamentally alters this picture. Consider a Schwarzschild black hole: the mass scales like $r_s$, not $r_s^3$, so the energy can’t scale with the volume: the vast majority of the states which QFT would naïvely allow can’t be reached in a gravitational theory, because we form a black hole when we’ve excited only a small fraction of them. The maximum number of states we can excite is $e^{A/4}$ (in Planck units).
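To appreciate how drastic the reduction is, we can compare the naive volume-law count with the holographic bound for, say, a ball of radius 1 metre (counting roughly one bit per Planck cell, as in the lattice picture above):

```python
import math

l_P = 1.616e-35   # Planck length, m
R = 1.0           # radius of the region, m

# naive QFT count: ~1 bit per Planck cell throughout the volume
N_volume = (4 / 3) * math.pi * R**3 / l_P**3

# holographic bound: S_max = A / (4 l_P^2) with A = 4 pi R^2
N_area = 4 * math.pi * R**2 / (4 * l_P**2)

print(f"volume law: ~1e{math.log10(N_volume):.0f} bits")
print(f"area law:   ~1e{math.log10(N_area):.0f} bits")
```

The volume-law count comes out around $10^{105}$ bits versus roughly $10^{70}$ for the area law: gravity discards all but an infinitesimal fraction of the states field theory would naively assign to the region.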
You might object that the argument I gave above as to why the black hole entropy must scale with area rather than volume was based on the fact that the interior volume of the black hole is ill-defined, and that a volume law might still apply in other situations. Bousso [15] gives a nice argument as to why this can’t be true: it would violate unitarity. That is, suppose a region of space had an entropy that scaled with the volume, i.e., $S\propto V$, corresponding to a Hilbert space of dimension $e^V$. If we then collapse that region into a black hole, the Hilbert space dimension would have to suddenly drop to $e^{A/4}$. It would then be impossible to recover the initial state from the final state (e.g., after allowing the black hole to evaporate). Thus in order to preserve unitarity, the dimension of the Hilbert space must have been $e^{A/4}$ from the start.
I’ve glossed over an important technical issue in introducing this “holographic entropy bound” however, namely that the spatial bound doesn’t actually work: it’s violated in all sorts of scenarios. For example, consider a region of our universe, which is well-approximated as a flat ($k=0$), homogeneous, isotropic space with some average entropy density $\sigma$. Then the entropy scales like

$$S\sim\sigma R^3~,\qquad(26)$$
which exceeds the bound $S\leq A/4\ell_P^2\sim R^2$ when $R\gtrsim1/\sigma$ (in Planck units). The proper way to generalize black hole entropy to the sort of bound we want is to recall that the event horizon is a null hypersurface, and it is the formulation in terms of such light-sheets which is consistent with all known examples. This is known as the covariant entropy bound, and states that the entropy on (or rather, contained within) non-expanding light-sheets of some spacetime codimension-2 surface $B$ does not exceed the area of $B$. A thorough discussion would be another lecture in itself, so do check out Bousso’s review [15] if you’re interested in more details. Here I merely wanted to bring attention to the fact that the holographic principle is properly formulated on null, rather than spacelike, hypersurfaces.
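To put a (rough) number on where the crossover happens, take the entropy density to be dominated by CMB photons, $\sigma\approx1.5\times10^9\,k_B$ per cubic metre (an illustrative value I’m assuming here, ignoring other contributions), and solve $\sigma\cdot\frac{4}{3}\pi R^3=\pi R^2/\ell_P^2$:

```python
import math

l_P = 1.616e-35   # Planck length, m
sigma = 1.5e9     # entropy density, k_B per m^3 (CMB photons; illustrative)

# sigma * (4/3) pi R^3 exceeds A/(4 l_P^2) = pi R^2 / l_P^2 beyond the crossover radius
R_star = 3 / (4 * sigma * l_P**2)
R_hubble = 1.4e26  # Hubble radius, m (rough)

print(f"crossover radius ~ {R_star:.1e} m ({R_star / R_hubble:.1e} Hubble radii)")
```

The crossover radius comes out vastly larger than the Hubble radius, which is consistent with Bousso’s discussion: for our universe, violations of the naive spatial bound arise only for super-horizon regions, where the spacelike formulation breaks down and the light-sheet construction is needed.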
The holographic principle represents a radical departure from our intuition, and implies that reality is fundamentally nonlocal. One further expects that this feature should be manifest in the ultimate theory of quantum gravity. AdS/CFT provides a concrete realization of this principle, and its success is such that the unqualified “holography” is taken to refer to it in the literature, but it’s important to remember that the holographic principle itself is more general, and applies to our universe as well.
[1] J. D. Bekenstein, “Black holes and entropy,” Phys. Rev. D 7 (1973) 2333–2346.
[2] S. W. Hawking, “Gravitational radiation from colliding black holes,” Phys. Rev. Lett. 26 (1971) 1344–1346.
[3] J. Eisert, M. Cramer, and M. B. Plenio, “Area laws for the entanglement entropy – a review,” Rev. Mod. Phys. 82 (2010) 277–306, arXiv:0808.3773 [quant-ph].
[4] M. Christodoulou and C. Rovelli, “How big is a black hole?,” Phys. Rev. D 91 no. 6, (2015) 064046, arXiv:1411.2854 [gr-qc].
[5] B. S. DiNunno and R. A. Matzner, “The Volume Inside a Black Hole,” Gen. Rel. Grav. 42 (2010) 63–76, arXiv:0801.1734 [gr-qc].
[6] B. Freivogel, R. Jefferson, L. Kabir, and I.-S. Yang, “Geometry of the Infalling Causal Patch,” Phys. Rev. D 91 no. 4, (2015) 044036, arXiv:1406.6043 [hep-th].
[7] S. W. Hawking, “Particle creation by black holes,” Commun. Math. Phys. 43 no. 3, (1975) 199–220.
[8] N. D. Birrell and P. C. W. Davies, Quantum Fields in Curved Space. Cambridge Monographs on Mathematical Physics. Cambridge Univ. Press, Cambridge, UK, 1984.
[9] T. Jacobson, “Introductory Lectures on Black Hole Thermodynamics,” http://www.physics.umd.edu/grt/taj/776b/lectures.pdf.
[10] D. N. Page, “Information in black hole radiation,” Phys. Rev. Lett. 71 (1993) 3743–3746, arXiv:hep-th/9306083 [hep-th].
[11] D. N. Page, “Time Dependence of Hawking Radiation Entropy,” JCAP 1309 (2013) 028, arXiv:1301.4995 [hep-th].
[12] A. Almheiri, T. Hartman, J. Maldacena, E. Shaghoulian, and A. Tajdini, “Replica Wormholes and the Entropy of Hawking Radiation,” arXiv:1911.12333 [hep-th].
[13] G. ’t Hooft, “Dimensional reduction in quantum gravity,” Conf. Proc. C930308 (1993) 284–296, arXiv:gr-qc/9310026 [gr-qc].
[14] L. Susskind, “The World as a hologram,” J. Math. Phys. 36 (1995) 6377–6396, arXiv:hep-th/9409089 [hep-th].
[15] R. Bousso, “The Holographic principle,” Rev. Mod. Phys. 74 (2002) 825–874, arXiv:hep-th/0203101 [hep-th].