QFT in curved space, part 1: Green functions

I was recently** asked to give a lecture on black hole thermodynamics and the associated quantum puzzles, which provided a perfect excuse to spend some time reviewing one of my favourite subjects: quantum field theory (QFT) in curved spacetime. I’ll mostly follow the canonical reference by Birrell and Davies [1], and will use this series of posts to highlight a number of important and/or interesting aspects along the way. I spent many a happy hour with this book as a graduate student, and warmly recommend it to anyone desiring a more complete treatment.

**(That is, “recently” when I started this back in February. I had intended to finish the series before posting the individual parts, to avoid retroactive edits as my understanding and plans for future segments evolve, but alas the constant pressure to publish (or, perhaps more charitably, the fact that I don’t get paid to teach a course on this stuff) means that studies of this sort are frequently pushed down my priority list (and that was before an international move in the midst of a global pandemic, for those wondering why I haven’t posted in so long). Since such time constraints are likely to continue — on top of which, I have no fixed end-point in mind for this vast subject — I’ve decided to release the first few parts I’ve been sitting on, lest they never see the light of day. I hope to add more installments as time permits.)

I’m going to start by discussing Green functions (commonly but improperly called “Green’s functions”), which manifest one of the deepest relationships between gravity, quantum field theory, and thermodynamics, namely the thermodynamic character of the vacuum state. Specifically, the fact that Green functions are periodic in imaginary time — also known as the KMS or Kubo-Martin-Schwinger condition — hints at an intimate relationship between Euclidean field theory and statistical mechanics, and underlies the thermal nature of horizons (including, but not limited to, those of black holes).

For simplicity, I’ll stick to Green functions of free scalar fields {\phi(x)}, where the {D}-vector {x=(t,\mathbf{x})}. As a notational aside, I will depart from [1] in favour of the modern convention in which the spacetime dimension is denoted {D=d\!+\!1}, with Greek indices running over the full {D}-dimensional spacetime, and Latin indices restricted to the {d}-dimensional spatial component. While I’m at it, I should also warn you that [1] uses the exceedingly unpalatable mostly-minus convention {\eta_{\mu\nu}=(+,-,-,-)}, whereas I’m going to use mostly-plus {\eta_{\mu\nu}=(-,+,+,+)}. The former seems to be preferred by particle physicists, because they share with small children a preference for timelike 4-vectors to have positive magnitude. But the latter is generally preferred by relativists and most workers in high-energy theory and quantum gravity, for several reasons: 3-vectors carry no minus signs (i.e., it’s consistent with the non-relativistic case, whereas mostly-minus yields a negative-definite spatial metric); raising and lowering indices flips only a single sign; the continuation to Euclidean signature is cleaner (arguably the most important point for our purposes, since we’ll be Wick rotating between Lorentzian and Euclidean signature, and mostly-minus would lead to a negative-definite Euclidean metric); and in general dimensions the metric determinant contains only a single {-1} (as opposed to a factor of {(-1)^{D-1}} in mostly-minus).
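(A brief aside: throughout this post I’ll sprinkle in a few small Python snippets as sanity checks. These are my own illustrations rather than anything from [1], with arbitrarily chosen parameter values. Here’s a trivial one for the last two points above:)

```python
# Mostly-plus conventions in a nutshell: det(eta) is a single -1 in any dimension D,
# and lowering an index only flips the sign of the time component.
import numpy as np

D = 5
eta = np.diag([-1.0] + [1.0] * (D - 1))
print(np.linalg.det(eta))                     # ~ -1.0, independent of D

v_up = np.array([2.0, 1.0, 0.0, 3.0, -1.0])   # contravariant components v^mu
v_down = eta @ v_up                           # covariant components v_mu = eta_{mu nu} v^nu
print(v_down)                                 # only the time component changes sign
```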

Notational disclaimers dispensed with, the Lagrangian density is

\displaystyle \mathcal{L}(x)=-\frac{1}{2}\left(\eta^{\mu\nu}\partial_\mu\phi\partial_\nu\phi+m^2\phi^2\right)~, \ \ \ \ \ (1)

where {\eta^{\mu\nu}} is the Minkowski metric (no curved space just yet). By applying the variational principle {\delta S=0} to the action

\displaystyle S=\int\!\mathrm{d}^Dx\mathcal{L}~, \ \ \ \ \ (2)

we obtain the familiar Klein-Gordon equation

\displaystyle \left(\square-m^2\right)\phi=0~, \ \ \ \ \ (3)

where {\square\equiv\eta^{\mu\nu}\partial_\mu\partial_\nu}. The general solution, upon imposing that {\phi} be real-valued, is

\displaystyle \phi(x)=\int\!\frac{\mathrm{d}^dk}{2\omega(2\pi)^d}\left[a_k e^{i\mathbf{k}\mathbf{x}-i\omega t}+a_k^\dagger e^{-i\mathbf{k}\mathbf{x}+i\omega t}\right]~, \ \ \ \ \ (4)

where {k=(\omega,\mathbf{k})} (note that [1] restricts to the case of a discrete spectrum, as though the system were in a box; useful for imposing an IR regulator, but unnecessary for our purposes, and potentially problematic if we want to consider Lorentz boosts or Euclidean continuations). Here {a_k} is the annihilation operator that kills the vacuum state, i.e., {a_k|0\rangle=0}, and {a_k^\dagger} is its Hermitian conjugate, so that {\phi} is indeed a Hermitian (that is, real-valued) field operator.
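As a quick sanity check of the mode expansion (my own SymPy snippet, not from [1]), one can verify that a single plane-wave mode with {\omega=\sqrt{\mathbf{k}^2+m^2}} solves the Klein-Gordon equation (3) in mostly-plus signature; here in 1+1 dimensions for brevity:

```python
# Check that a positive-frequency plane wave with omega = sqrt(k^2 + m^2) satisfies
# (box - m^2) phi = 0, where box = eta^{mu nu} d_mu d_nu = -d_t^2 + d_x^2 (mostly-plus).
import sympy as sp

t, x, k, m = sp.symbols('t x k m', real=True)
omega = sp.sqrt(k**2 + m**2)
phi = sp.exp(sp.I * (k*x - omega*t))          # single positive-frequency mode

box_phi = -sp.diff(phi, t, 2) + sp.diff(phi, x, 2)
print(sp.simplify(box_phi - m**2 * phi))      # -> 0
```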

One last (lengthy, but important) notational aside: different authors make different choices for the integration measure {\int\!\frac{\mathrm{d}^dk}{f(k)}}, which affects a number of later formulas, and can cause confusion when comparing different sources. The convention I’m using is physically well-motivated in that it makes the measure Lorentz invariant while encoding the on-shell condition {k^2\!=\!-m^2}. That is, the Lorentz invariant measure in the full {D}-dimensional spacetime is {\int\!\mathrm{d}^Dk}. If we then impose the on-shell condition along with {\omega>0} (in the form of the Heaviside function {\Theta(\omega)}), we have

\displaystyle \int\!\mathrm{d}^Dk\,\delta(k^2+m^2)\Theta(\omega) =\int\!\mathrm{d}\omega\int\!\mathrm{d}^dk\,\delta(\mathbf{k}^2-\omega^2+m^2)\Theta(\omega)~. \ \ \ \ \ (5)

We now use the following trick: if a smooth function {g(x)} has a simple root at {x_0} (if there are several, we simply sum over them), then we may write

\displaystyle \int\!\mathrm{d} x\,\delta(g(x))=\int\!\mathrm{d} x\,\frac{\delta(x-x_0)}{|g'(x_0)|} =\frac{1}{|g'(x_0)|}~, \ \ \ \ \ (6)

where the prime denotes the derivative with respect to {x}. In the present case, {g(\omega)=\mathbf{k}^2-\omega^2+m^2}, with roots at {\omega_0=\pm\sqrt{\mathbf{k}^2+m^2}} (of which the Heaviside function selects only the positive one), and {|g'(\omega_0)|=2\sqrt{\mathbf{k}^2+m^2}}. Thus

\displaystyle \int\!\mathrm{d}\omega\!\int\!\mathrm{d}^dk\,\delta(\mathbf{k}^2-\omega^2+m^2)\Theta(\omega) =\int\!\frac{\mathrm{d}^dk}{2\sqrt{\mathbf{k}^2+m^2}}~. \ \ \ \ \ (7)
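If you don’t trust the delta-function gymnastics, here’s a quick numerical check (again my own illustration, with a narrow Gaussian standing in for the delta function and arbitrary values of {\mathbf{k}^2} and {m}):

```python
# Verify (5)-(7) numerically: integrating a regularized delta(k^2 - w^2 + m^2) over
# positive frequencies reproduces 1/(2*sqrt(k^2 + m^2)).
import numpy as np
from scipy.integrate import quad

def delta(u, sigma=0.05):
    """Narrow Gaussian regularization of the Dirac delta."""
    return np.exp(-u**2 / (2*sigma**2)) / (sigma * np.sqrt(2*np.pi))

k2, m = 2.0, 1.5                       # arbitrary |k|^2 and mass
w0 = np.sqrt(k2 + m**2)                # positive root of g(w) = k^2 - w^2 + m^2

integral, _ = quad(lambda w: delta(k2 - w**2 + m**2), 0, 2*w0, points=[w0])
print(integral, 1/(2*w0))              # agree up to O(sigma^2) corrections
```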

Finally, since {k} and {x} are related by a Fourier transform, we must adopt a convention for the associated factor of {(2\pi)^d}. Mathematicians seem to prefer splitting this symmetrically, so that both {\mathrm{d}^dk} and {\mathrm{d}^d x} come with a factor of {(2\pi)^{-d/2}}, but physicists favour simply attaching it all to the momentum, so that

\displaystyle \hat\phi(k)=\int\!\mathrm{d}^dx\,e^{-ikx}\phi(x) \qquad\mathrm{and}\qquad \phi(x)=\int\!\frac{\mathrm{d}^dk}{(2\pi)^d}\,e^{ikx}\hat\phi(k)~. \ \ \ \ \ (8)

This in turn implies the convention

\displaystyle (2\pi)^d\delta^d(k-p)=\int\!\mathrm{d}^dx\,e^{-i(k-p)x} \qquad \mathrm{and} \qquad \delta^d(x-y)=\int\!\frac{\mathrm{d}^dk}{(2\pi)^d}\,e^{ik(x-y)}~, \ \ \ \ \ (9)

as one can readily verify by substituting {\phi(x)} into {\hat\phi(k)} (or vice versa):

\displaystyle \begin{aligned} \hat\phi(k)&=\int\!\mathrm{d}^dx\,e^{-ikx}\!\int\!\frac{\mathrm{d}^dp}{(2\pi)^d}\,e^{ipx}\hat\phi(p) =\int\!\frac{\mathrm{d}^dp}{(2\pi)^d}\int\!\mathrm{d}^dx\,e^{-i(k-p)x}\hat\phi(p)\\ &=\int\!\mathrm{d}^dp\,\delta^d(k-p)\hat\phi(p) =\hat\phi(k)~. \end{aligned} \ \ \ \ \ (10)

Thus our choice for the measure in (4):

\displaystyle \int\!\frac{\mathrm{d}^dk}{f(k)}\equiv\int\!\frac{\mathrm{d}^dk}{2\sqrt{\mathbf{k}^2+m^2}(2\pi)^d} =\int\!\frac{\mathrm{d}^dk}{2\omega(2\pi)^d}~. \ \ \ \ \ (11)

(I realize that was a bit tedious, but setting one’s conventions straight will pay dividends later. Trust me: I’ve lost hours trying to sort out factors of {2\pi} and the like through failing to invest this time at the start.)
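(For what it’s worth, here’s the sort of consistency check I have in mind, in {d=1}: transform a Gaussian and transform back, with the factor of {2\pi} attached entirely to the momentum integral as in (8). Purely illustrative, and again my own.)

```python
# Round-trip check of the Fourier conventions (8)-(9) in d = 1, using a Gaussian test
# function; since everything is real and even, only the cosine part contributes.
import numpy as np
from scipy.integrate import quad

phi = lambda x: np.exp(-x**2 / 2)                  # test function

def phi_hat(k):
    """phi_hat(k) = int dx e^{-ikx} phi(x)."""
    val, _ = quad(lambda x: np.cos(k*x) * phi(x), -10, 10)
    return val

def phi_back(x):
    """phi(x) = int dk/(2 pi) e^{ikx} phi_hat(k)."""
    val, _ = quad(lambda k: np.cos(k*x) * phi_hat(k), -10, 10)
    return val / (2*np.pi)

print(phi_back(0.7), phi(0.7))    # agree: the 2*pi bookkeeping closes consistently
```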

We can now consider vacuum expectation values of products of field operators {\phi}. For free scalar fields, these can always be decomposed, via Wick’s theorem, into sums of products of two-point functions, which therefore play a defining role. In particular, we can construct various Green functions of the wave operator {(\square-m^2)} from the two-point correlator {\langle\phi(x)\phi(x')\rangle}, including the familiar Feynman propagator. Following [1], we’ll denote the expectation values of the commutator and anticommutator as follows:

\displaystyle \begin{aligned} iG(x,x')&=\langle\left[\phi(x),\phi(x')\right]\rangle=G^+\!(x,x')-G^-\!(x,x')~,\\ G^{(1)}(x,x')&=\langle\left\{\phi(x),\phi(x')\right\}\rangle=G^+\!(x,x')+G^-\!(x,x')~, \end{aligned} \ \ \ \ \ (12)

where {G^{\pm}} on the far right-hand sides are the so-called positive/negative frequency Wightman functions,

\displaystyle \begin{aligned} G^+\!(x,x')&=\langle\phi(x)\phi(x')\rangle~,\\ G^-\!(x,x')&=\langle\phi(x')\phi(x)\rangle~. \end{aligned} \ \ \ \ \ (13)

Note that while physicists call all of these Green functions, they’re technically kernels, i.e.,

\displaystyle \left(\square_x-m^2\right)\mathcal{G}(x,x')=0~,\qquad\qquad\mathcal{G}\in\{G,\,G^{(1)}\!,G^{\pm}\}~. \ \ \ \ \ (14)

One can immediately verify this by observing that since {\square_x} acts only on {\phi(x)} (that is, {\square_x\phi(x')=0}), it reduces to the Klein-Gordon equation above for the Wightman functions, from which the others follow.

Using these building blocks, we can consider the true Green functions

\displaystyle iG_F(x,x')=\langle\mathcal{T}\phi(x)\phi(x')\rangle=\Theta(t-t')G^+\!(x,x')+\Theta(t'-t)G^-\!(x,x')~, \ \ \ \ \ (15)

which is the familiar (time-ordered, {\mathcal{T}}) Feynman propagator, and

\displaystyle \begin{aligned} G_R(x,x')&=\Theta(t-t')G(x,x')~,\\ G_A(x,x')&=-\Theta(t'-t)G(x,x')~, \end{aligned} \ \ \ \ \ (16)

which are the retarded (R) and advanced (A) propagators; equivalently, {G_R=-i\Theta(t-t')\langle[\phi(x),\phi(x')]\rangle} and {G_A=i\Theta(t'-t)\langle[\phi(x),\phi(x')]\rangle}. All three of these are Green functions of the wave operator, i.e.,

\displaystyle \left(\square_x-m^2\right)\mathcal{G}(x,x')=\delta^D(x-x')~,\qquad\qquad\mathcal{G}\in\{G_F,G_R,G_A\}~. \ \ \ \ \ (17)

Let’s verify this for the Feynman propagator; the others are similar. Using the fact that {\eta^{\mu\nu}\partial_\nu\Theta(t-t')=\eta^{\mu0}\partial_0\Theta(t-t')=\eta^{\mu0}\delta(t-t')}, we have

\displaystyle \begin{aligned} \square_x G_F=&-i\eta^{\mu0}\partial_\mu\left[\delta(t-t')G^+\!(x,x')-\delta(t'-t)G^-\!(x,x')\right]\\ &-i\eta^{\mu\nu}\partial_\mu\left[\Theta(t-t')\partial_\nu G^+\!(x,x')+\Theta(t'-t)\partial_\nu G^-\!(x,x')\right]~. \end{aligned} \ \ \ \ \ (18)

Now observe that the delta function forces {t=t'}, where the vanishing of the equal-time commutator {[\phi(t,\mathbf{x}),\phi(t,\mathbf{x}')]=0} implies {G^+=G^-}. Since the delta function itself is even, the bracket in the first line therefore vanishes identically (and remains zero after the remaining derivative acts on it), so we continue with just the second line:

\displaystyle \begin{aligned} \square_x G_F=&-i\eta^{00}\left[\delta(t-t')\partial_0 G^+\!(x,x')-\delta(t'-t)\partial_0 G^-\!(x,x')\right]\\ &-i\eta^{\mu\nu}\left[\Theta(t-t')\partial_\mu\partial_\nu G^+\!(x,x')+\Theta(t'-t)\partial_\mu\partial_\nu G^-\!(x,x')\right]\\ =&\,\,i\delta(t-t')\langle\left[\pi(x),\phi(x')\right]\rangle\\ &-i\left[\Theta(t-t')\square_x G^+\!(x,x')+\Theta(t'-t)\square_x G^-\!(x,x')\right]~, \end{aligned} \ \ \ \ \ (19)

where in the second step, we have used the fact that the delta function is even, and identified the conjugate momentum {\pi(x)=\partial_0\phi(x)}. Then by (14), the second line will vanish for all values of {t\!-\!t'} when we add in the {-m^2} term of the wave operator, and the first line is just (minus) the equal-time commutator {[\phi(t,\mathbf{x}),\pi(t,\mathbf{x}')]=i\delta^d(\mathbf{x}-\mathbf{x}')}. Hence

\displaystyle \left(\square_x-m^2\right) G_F=\delta(t-t')\delta^d(\mathbf{x}-\mathbf{x}')=\delta^D\!(x-x')~. \ \ \ \ \ (20)

Thus the Feynman propagator is indeed a Green function of the wave operator {(\square_x\!-\!m^2)}; similarly for {G_R} and {G_A}.

The reason I’ve been calling the Green functions {G_F,G_R,G_A} “propagators” is that, unlike the kernels {G,G^{(1)},G^{\pm}}, they represent the transition amplitude for a particle (virtual or otherwise) propagating from {x} to {x'}, subject to appropriate boundary conditions. To see this, consider the integral representation

\displaystyle \mathcal{G}(x,x')=\int\!\frac{\mathrm{d}^Dk}{(2\pi)^D}\frac{e^{ik(x-x')}}{-k_0^2+\mathbf{k}^2+m^2}~, \ \ \ \ \ (21)

where {k^2=k^\mu k_\mu=-k_0^2+\mathbf{k}^2}. Due to the poles at {k_0=\pm\omega=\pm\sqrt{\mathbf{k}^2+m^2}}, we need to choose a suitable contour for the integral to be well-defined (analytically continuing to {k_0\in\mathbb{C}}). The particular choice of contour determines which of the kernels {G,G^{(1)},G^{\pm}} or Green functions {G_F,G_R,G_A} we obtain. (As for how we obtained (21) in the first place, one can substitute the mode expansion (4) directly into the definitions, and convert the Heaviside functions into suitable contour integrals. An easier way, at least for the Green functions, is to simply Fourier transform the wave equation (17):

\displaystyle \begin{aligned} \left(\square_x-m^2\right)\int\!\frac{\mathrm{d}^Dk}{(2\pi)^D}\,\tilde{\mathcal{G}}(k)\,e^{ik(x-x')}&=\delta^D(x-x')\\ \implies \int\!\frac{\mathrm{d}^Dk}{(2\pi)^D}\left(-k^2-m^2\right)\,e^{ik(x-x')}\tilde{\mathcal{G}}(k)&=\int\!\frac{\mathrm{d}^Dp}{(2\pi)^D}\,e^{ip(x-x')}~. \end{aligned} \ \ \ \ \ (22)

Matching Fourier components gives {\tilde{\mathcal{G}}(k)=-\left(k^2+m^2\right)^{-1}}; since we are free to absorb the overall sign into the orientation of the {k_0} contour (which, as we’ll see below, will run from {+\infty} to {-\infty} along the real axis), we may identify

\displaystyle \tilde{\mathcal{G}}(k)=\frac{1}{k^2+m^2}~, \ \ \ \ \ (23)

whereupon Fourier transforming back to position space yields (21). As alluded to above, however, these expressions don’t make sense without specifying a pole prescription, so this argument isn’t very rigorous; it’s just a quick-and-dirty way of convincing yourself that (21) is plausible.)

To make sense of this expression, we split the integrand into partial fractions associated with the two poles in {k_0}:

\displaystyle \begin{aligned} \mathcal{G}(x,x')&=\int\frac{\mathrm{d}^dk}{(2\pi)^D}e^{i\mathbf{k}(\mathbf{x}-\mathbf{x}')}\oint\mathrm{d} k_0\frac{e^{-ik_0(x_0-x'_0)}}{-k_0^2+\mathbf{k}^2+m^2}\\ &=\int\frac{\mathrm{d}^dk}{(2\pi)^D}e^{i\mathbf{k}(\mathbf{x}-\mathbf{x}')}\oint\mathrm{d} k_0\frac{e^{-ik_0(x_0-x'_0)}}{-2k_0}\left(\frac{1}{k_0-\sqrt{\mathbf{k}^2+m^2}}+\frac{1}{k_0+\sqrt{\mathbf{k}^2+m^2}}\right)~. \end{aligned} \ \ \ \ \ (24)

Now, the boundary conditions of the propagator at hand determine the {i\epsilon} prescription, i.e., which of the poles we want to enclose with the choice of contour. Consider first the retarded propagator {G_R}: the boundary condition implicit in (16) is that the function should vanish when {x_0\!<\!x_0'} (where {x_0\!=\!t}). Conversely, when {x_0\!>\!x_0'}, we must close the contour in the lower half-plane so that {e^{-ik_0(x_0-x'_0)}\rightarrow e^{-i(-i\infty)(x_0-x'_0)}=e^{-\infty(x_0-x'_0)}=e^{-\infty}}, and the integral converges. Thus we should introduce factors of {i\epsilon} such that both poles are slightly displaced into the lower half-plane. We can then apply Cauchy’s integral formula to capture the poles at {k_0=\pm\omega-i\epsilon}, and take {\epsilon\rightarrow0} at the end:

\displaystyle \begin{aligned} G_R(x,x')&=\int\frac{\mathrm{d}^dk}{(2\pi)^D}e^{i\mathbf{k}(\mathbf{x}-\mathbf{x}')}\oint\mathrm{d} k_0\frac{e^{-ik_0(x_0-x'_0)}}{-2k_0}\left(\frac{1}{k_0-\sqrt{\mathbf{k}^2+m^2}+i\epsilon}+\frac{1}{k_0+\sqrt{\mathbf{k}^2+m^2}+i\epsilon}\right)\\ &=\Theta(x_0-x'_0)\int\frac{\mathrm{d}^dk}{(2\pi)^D}e^{i\mathbf{k}(\mathbf{x}-\mathbf{x}')}(2\pi i)\left(\frac{e^{-i\omega(x_0-x'_0)}}{-2\omega}+\frac{e^{i\omega(x_0-x'_0)}}{2\omega}\right)\\ &=-i\Theta(x_0-x'_0)\int\frac{\mathrm{d}^dk}{2\omega(2\pi)^d}e^{i\mathbf{k}(\mathbf{x}-\mathbf{x}')}\left( e^{-i\omega(x_0-x'_0)}-e^{i\omega(x_0-x'_0)}\right)\\ &=-i\Theta(x_0-x'_0)\int\frac{\mathrm{d}^dk}{2\omega(2\pi)^d}\left[e^{ik(x-x')}-e^{-ik(x-x')}\right]\\ &=-i\Theta(x_0-x'_0)\langle\left[\phi(x),\phi(x')\right]\rangle =\Theta(t-t')G(x,x')~, \end{aligned} \ \ \ \ \ (25)

where in the penultimate line, we have taken {\mathbf{k}\rightarrow-\mathbf{k}} in the second term, using the fact that the integration over all (momentum) space is even; in the last line, we have used the mode expansion (4) and the commutation relation {[a_k,a_{k'}^\dagger]=2\omega(2\pi)^d\delta^d(\mathbf{k}-\mathbf{k}')}. Note that to yield the correct signs, we’ve chosen the contour to run counter-clockwise (hence the factor of {+2\pi i}), which means that it runs from {+\infty} to {-\infty} along the real axis (the same orientation we chose when identifying {\tilde{\mathcal{G}}} in (23)). The prescription for the advanced propagator is precisely similar, except that we displace both poles in the positive imaginary direction (so that the integral vanishes when we close the contour below, as required for {x_0-x'_0>0}), and the non-vanishing contribution comes from closing the contour in the upper half-plane, encircling both poles clockwise rather than counter-clockwise (so that the real-axis portion again runs from {+\infty} to {-\infty}).
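For the numerically inclined, here’s a standalone scipy illustration of the retarded prescription (my own check, written in the standard orientation where the real {k_0} axis runs from {-\infty} to {+\infty}, so it isn’t a literal transcription of the expressions above): with both poles pushed into the lower half-plane, the frequency integral has support only at positive time differences, and its value matches the residue theorem.

```python
# Retarded pole prescription: I(dt) = int dk0 e^{-i k0 dt} / (w^2 - (k0 + i*eps)^2)
# along the real axis vanishes for dt < 0, and equals 2*pi*exp(-eps*dt)*sin(w*dt)/w
# for dt > 0 (closing the contour below and summing both residues).
import numpy as np
from scipy.integrate import quad

w, eps = 1.3, 0.05                      # arbitrary omega = sqrt(k^2 + m^2); small regulator

def I(dt, cutoff=80.0):
    f = lambda k0: np.exp(-1j*k0*dt) / (w**2 - (k0 + 1j*eps)**2)
    re, _ = quad(lambda k0: f(k0).real, -cutoff, cutoff, limit=400, points=[-w, w])
    im, _ = quad(lambda k0: f(k0).imag, -cutoff, cutoff, limit=400, points=[-w, w])
    return re + 1j*im

for dt in (-2.0, 2.0):
    exact = 2*np.pi*np.exp(-eps*dt)*np.sin(w*dt)/w if dt > 0 else 0.0
    print(dt, I(dt), exact)             # ~0 for dt < 0 (up to the finite cutoff)
```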

Note that {G_R,G_A} are superpositions of both positive ({\omega\!>\!0}) and negative ({\omega\!<\!0}) energy modes, which is necessary in order for them to vanish outside their prescribed lightcones (future and past, respectively). In contrast, the Heaviside functions in the Feynman propagator are tantamount to imposing boundary conditions such that it picks up only positive or negative frequencies, depending on the sign of {t\!-\!t'}. For {t\!>\!t'}, we close the contour in the lower half-plane for convergence ({e^{-ik^0(t-t')}=e^{-i(-i\infty)(t-t')}=e^{-\infty(t-t')}}), and enclose only the pole at {k_0=\omega}, counter-clockwise (in the present conventions, we’re again going from {+\infty} to {-\infty} along the real axis); conversely, when {t\!<\!t'} we close the contour clockwise in the upper half-plane (again for convergence), enclosing only the pole at {k_0=-\omega}. Hence the corresponding {i\epsilon} prescription is

\displaystyle \begin{aligned} iG_F(x,x')&=i\int\frac{\mathrm{d}^dk}{(2\pi)^D}e^{i\mathbf{k}(\mathbf{x}-\mathbf{x}')}\oint\mathrm{d} k_0\frac{e^{-ik_0(x_0-x'_0)}}{-2k_0}\left(\frac{1}{k_0-\sqrt{\mathbf{k}^2+m^2}+i\epsilon}+\frac{1}{k_0+\sqrt{\mathbf{k}^2+m^2}-i\epsilon}\right)\\ &=i\int\frac{\mathrm{d}^dk}{(2\pi)^D}e^{i\mathbf{k}(\mathbf{x}-\mathbf{x}')}\left[(2\pi i)\Theta(x_0-x_0')\frac{e^{-i\omega(x_0-x'_0)}}{-2\omega}+(-2\pi i)\Theta(x_0'-x_0)\frac{e^{i\omega(x_0-x'_0)}}{2\omega}\right]\\ &=\int\frac{\mathrm{d}^dk}{2\omega(2\pi)^d}\left[\Theta(x_0-x_0')e^{ik(x-x')}+\Theta(x_0'-x_0)e^{-ik(x-x')}\right]\\ &=\Theta(t-t')G^+(x,x')+\Theta(t'-t)G^-(x,x')~, \end{aligned} \ \ \ \ \ (26)

as desired. It is in this sense that the time-ordering is automatically encoded by the Feynman propagator: for {t\!>\!t'}, it corresponds to a positive-energy particle propagating forwards in time, while for {t\!<\!t'}, we have a negative-energy particle (i.e., an antiparticle) propagating backwards.
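And the companion check for the Feynman prescription (same caveats as above: my own illustration, standard orientation, arbitrary parameters): with the poles at {k_0=\omega-i\epsilon} and {k_0=-\omega+i\epsilon}, the frequency integral is proportional to {e^{-i\omega|\Delta t|}} for either sign of {\Delta t}, i.e., positive frequencies propagate forwards and negative frequencies backwards.

```python
# Feynman pole prescription: I_F(dt) = int dk0 e^{-i k0 dt} / ((w - i*eps)^2 - k0^2)
# along the real axis equals i*pi*exp(-i*w*|dt|)*exp(-eps*|dt|)/(w - i*eps) for both
# signs of dt, by the residue theorem.
import numpy as np
from scipy.integrate import quad

w, eps = 1.3, 0.05

def I_F(dt, cutoff=80.0):
    f = lambda k0: np.exp(-1j*k0*dt) / ((w - 1j*eps)**2 - k0**2)
    re, _ = quad(lambda k0: f(k0).real, -cutoff, cutoff, limit=400, points=[-w, w])
    im, _ = quad(lambda k0: f(k0).imag, -cutoff, cutoff, limit=400, points=[-w, w])
    return re + 1j*im

for dt in (-2.0, 2.0):
    exact = 1j*np.pi*np.exp(-1j*w*abs(dt))*np.exp(-eps*abs(dt))/(w - 1j*eps)
    print(dt, I_F(dt), exact)           # same |dt| dependence for both signs of dt
```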

(I won’t go into the pole prescriptions for the kernels here, but the contours are illustrated in fig. 3 of [1]. The essential difference is that unlike the Green functions, the contours for the kernels are all closed loops, so these don’t correspond to propagating amplitudes.)

So far, everything I’ve reviewed is zero-temperature field theory; as alluded to in the introduction of this post, however, finite temperature is where things get really interesting. Recall from quantum mechanics that a mixed state can be thought of as a statistical ensemble of pure states, so rather than computing expectation values with respect to the vacuum state, we compute them with respect to the mixed state given by the thermal density matrix

\displaystyle \rho=\sum_ip_i\left|\psi_i\rangle\langle\psi_i\right|=\frac{1}{Z}e^{-\beta H}~, \ \ \ \ \ (27)

where the system, governed by the Hamiltonian {H}, is in any of the states {|\psi_i\rangle} with (classical) probability

\displaystyle p_i=\frac{1}{Z}e^{-\beta E_i}~. \ \ \ \ \ (28)

Of course, not all mixed states are thermal, but the thermal state is the correct one to use in the absence of any additional constraints. (One way to think of this is that the mixedness of a quantum state is a measure of our ignorance: the thermal state maximizes the entropy subject to a fixed average energy, whereas pure states have zero entropy). Expectation values of operators {\mathcal{O}} with respect to (27) are then ensemble averages at fixed temperature {T=\beta^{-1}}:

\displaystyle \langle\mathcal{O}\rangle_\beta=\mathrm{tr}\left(\rho\,\mathcal{O}\right) =\sum_ip_i\langle\psi_i\left|\mathcal{O}\right|\psi_i\rangle~. \ \ \ \ \ (29)

Note that we’re in the canonical ensemble (fixed temperature), rather than the microcanonical ensemble (fixed energy), because the energy — that is, the expectation value of the Hamiltonian operator {H=\sum_k \omega_k\,a_k^\dagger a_k} — will fluctuate as quanta are created or destroyed. Strictly speaking I should also include the chemical potential, since the number operator {N=\sum_ka_k^\dagger a_k} also fluctuates, but it doesn’t play any important role in what follows. (The distinction is worth keeping in mind when discussing black hole thermodynamics, where one should use the microcanonical ensemble instead, because the negative specific heat makes the canonical ensemble unstable).
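As a concrete toy example of such a thermal average (my own illustration): for a single bosonic mode with {H=\omega\, a^\dagger a}, the ensemble average of the number operator is just the Bose-Einstein distribution, which is easy to verify numerically in a truncated number basis:

```python
# Thermal average of the number operator for a single mode: tr(rho * N) with
# rho = e^{-beta H}/Z and H = omega * N reproduces 1/(e^{beta*omega} - 1).
import numpy as np

beta, omega, nmax = 0.8, 1.3, 200          # arbitrary temperature and frequency
n = np.arange(nmax)
weights = np.exp(-beta * omega * n)        # unnormalized Boltzmann weights e^{-beta E_n}
Z = weights.sum()

n_avg = (n * weights).sum() / Z
print(n_avg, 1/(np.exp(beta*omega) - 1))   # agree (the truncation error is negligible)
```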

The thermal Green functions (and kernels), which we denote with the subscript {\beta}, are then obtained by replacing the vacuum expectation value with the expectation value in the thermal state, (29); for example, the Wightman functions become

\displaystyle \begin{aligned} G_\beta^+\!(x,x')&=\langle\phi(x)\phi(x')\rangle_\beta~,\\ G_\beta^-\!(x,x')&=\langle\phi(x')\phi(x)\rangle_\beta~. \end{aligned} \ \ \ \ \ (30)

The aforementioned KMS condition can then be obtained from the Heisenberg-picture evolution of the field,

\displaystyle \phi(t_1,\mathbf{x})=e^{iH(t_1-t_0)}\phi(t_0,\mathbf{x})e^{-iH(t_1-t_0)}~, \ \ \ \ \ (31)

by evolving in Euclidean time by {t_1-t_0=i\beta}:

\displaystyle \begin{aligned} G_\beta^+&=\frac{1}{Z}\mathrm{tr}\left[e^{-\beta H}\phi(t,\mathbf{x})\phi(t',\mathbf{x}')\right] =\frac{1}{Z}\mathrm{tr}\left[e^{-\beta H}\phi(t,\mathbf{x})e^{\beta H}e^{-\beta H}\phi(t',\mathbf{x}')\right]\\ &=\frac{1}{Z}\mathrm{tr}\left[\phi(t+i\beta,\mathbf{x})e^{-\beta H}\phi(t',\mathbf{x}')\right] =\frac{1}{Z}\mathrm{tr}\left[e^{-\beta H}\phi(t',\mathbf{x}')\phi(t+i\beta,\mathbf{x})\right]~, \end{aligned} \ \ \ \ \ (32)

where the last step relied on the cyclic property of the trace; similarly for {G_\beta^-}. Thus we arrive at the KMS condition

\displaystyle G_\beta^\pm(t,\mathbf{x};t',\mathbf{x}')=G_\beta^\mp(t+i\beta,\mathbf{x};t',\mathbf{x}')~. \ \ \ \ \ (33)

Note that this is a statement about expectation values of operators in the particular state (27) (indeed, this can easily be formulated for a general observable {\mathcal{O}}; we’re just sticking with scalar fields for concreteness; for a slightly more rigorous treatment, with suitable comments about boundedness and whatnot, see for example [2]). More generally, however, any state which satisfies (33) is called a KMS state, and describes a system in thermal equilibrium. Similar relations hold for the other Green functions / kernels as well; e.g.,

\displaystyle G_\beta^{(1)}(t,\mathbf{x};t',\mathbf{x}')=G_\beta^{(1)}(t+i\beta,\mathbf{x};t',\mathbf{x}')~. \ \ \ \ \ (34)

As an exception to this, however, note that since the commutator of free scalar fields is a c-number, {G} in (12) remains unchanged, i.e., {G_\beta=G}.
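The KMS condition is easy to see explicitly for a single mode of the field in the thermal state (a toy illustration of my own, with unit-commutator normalization {[a,a^\dagger]=1} and thermal occupation {n=(e^{\beta\omega}-1)^{-1}}): the single-mode Wightman functions are {G^\pm(t)=\left[(n+1)e^{\mp i\omega t}+n\,e^{\pm i\omega t}\right]\!/2\omega}, and shifting {t\rightarrow t+i\beta} indeed exchanges them.

```python
# Numerical check of the KMS condition for one thermal oscillator mode:
# G+(t) = G-(t + i*beta), with n = 1/(e^{beta*omega} - 1).
import numpy as np

beta, omega, t = 0.7, 1.3, 0.4          # arbitrary values
n = 1/(np.exp(beta*omega) - 1)

Gp = lambda t: ((n+1)*np.exp(-1j*omega*t) + n*np.exp(+1j*omega*t)) / (2*omega)
Gm = lambda t: ((n+1)*np.exp(+1j*omega*t) + n*np.exp(-1j*omega*t)) / (2*omega)

print(Gp(t))
print(Gm(t + 1j*beta))                  # identical, as demanded by the KMS condition
```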

In arriving at (33), we evolved in imaginary time {\tau=it} by an amount given by the inverse temperature {\beta}. This is none other than the usual Wick rotation from Minkowski to Euclidean space, except that the periodicity of the Green functions implies that the Euclidean or thermal time direction is compact, with period {\beta}. That is, if the original field theory lived on {\mathbb{R}^{d+1}}, the finite-temperature field theory lives on {\mathbb{R}^d\times S_\beta^1}, where {\beta} is the circumference of the {S^1} (observe that as {\beta\rightarrow\infty}, we recover the zero-temperature Euclidean theory on {\mathbb{R}^{d+1}}). Thus, quite generally, a Wick rotation in which Euclidean time is periodic establishes an intimate connection between QFT and statistical thermodynamics, with the size of the compact direction setting the (inverse) temperature.

So what does this have to do with black holes, or horizons more generally? As I hope to cover in a future part of this sequence, the state of the quantum field restricted to the region outside a horizon is also thermal. From the statistical thermodynamics or information theory perspective, one can think of this as a consequence of having traced over the states on the other side, so the mixed density matrix that now describes the part of the vacuum to which we have access is a reflection of our ignorance. As alluded to in the previous paragraph, however, the thermodynamic character of the vacuum in a black hole background is already encoded in the periodicity of the Euclidean time direction, and emerges quite neatly in the case of the Schwarzschild black hole,

\displaystyle \mathrm{d} s^2=-f(r)\mathrm{d} t^2+\frac{1}{f(r)}\mathrm{d} r^2+r^2\mathrm{d}\Omega_{d-1}^2~, \quad\quad f(r)=1-\frac{r_s}{r}~, \ \ \ \ \ (35)

where {r_s} is the Schwarzschild radius, and {\mathrm{d}\Omega_{d-1}^2} is the metric on the {(d\!-\!1)-}sphere, which we’ll ignore since it just comes along for the ride. Recall from my very first blog post that after Wick rotating to Euclidean time, one can make a coordinate change so that the near-horizon metric becomes

\displaystyle \mathrm{d} s^2=\mathrm{d}\rho^2+\frac{\rho^2}{4r_s^2}\mathrm{d}\tau^2~, \ \ \ \ \ (36)

where {\rho} is the radial direction, and — since these are polar coordinates — {\tau} takes on the role of the angular coordinate, which must be periodic to avoid a conical singularity; that is, for any integer {n},

\displaystyle \frac{\tau}{2r_s}\sim\frac{\tau}{2r_s}+2\pi n \quad\implies\quad \tau\sim\tau+4\pi r_s n~, \ \ \ \ \ (37)

and thus we identify the period {\beta=4\pi r_s}, which is precisely the inverse Hawking temperature of the Schwarzschild black hole.
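If you’d like to see the near-horizon expansion done explicitly, here’s a small SymPy verification (my own; the coordinate change {r=r_s+\rho^2/(4r_s)} is the standard one):

```python
# Near-horizon form of the Euclidean Schwarzschild metric: substituting
# r = r_s + rho^2/(4 r_s) into f(r) dtau^2 + dr^2/f(r) and expanding for small rho
# gives drho^2 + (rho^2 / (4 r_s^2)) dtau^2, i.e. flat polar coordinates with
# angular variable tau/(2 r_s), whence the period beta = 4*pi*r_s.
import sympy as sp

rho, rs = sp.symbols('rho r_s', positive=True)
r = rs + rho**2 / (4*rs)
f = 1 - rs/r

g_tautau = sp.cancel(f)                           # coefficient of dtau^2
g_rhorho = sp.cancel(sp.diff(r, rho)**2 / f)      # coefficient of drho^2

print(sp.series(g_tautau, rho, 0, 4))             # rho**2/(4*r_s**2) + O(rho**4)
print(sp.series(g_rhorho, rho, 0, 2))             # 1 + O(rho**2)
```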

As a closing comment, the density matrix for KMS states is related even more deeply to time-translation symmetry via Tomita-Takesaki theory, through the modular Hamiltonian that generates a one-parameter family of automorphisms of the algebra of operators in the corresponding region. See for example [3]; this strikes me as a surprisingly under-researched direction, and I hope to revisit it in glorious detail soon.

References

  1. N. D. Birrell and P. C. W. Davies, Quantum Fields in Curved Space. Cambridge Monographs on Mathematical Physics. Cambridge Univ. Press, Cambridge, UK, 1984. http://www.cambridge.org/mw/academic/subjects/physics/theoretical-physics-and-mathematical-physics/quantum-fields-curved-space?format=PB.
  2. S. Fulling and S. Ruijsenaars, “Temperature, periodicity and horizons,” Physics Reports 152 no. 3 (1987) 135–176.
  3. A. Connes and C. Rovelli, “Von Neumann algebra automorphisms and time-thermodynamics relation in generally covariant quantum theories,” Classical and Quantum Gravity 11 no. 12 (1994) 2899–2917, https://arxiv.org/abs/gr-qc/9406019.