Statistical Mechanics

Notes from Prof. E. Ercolessi’s Lectures
Daniele Pucci

September 2023

Contents

1 Prerequisites
1.1 Prerequisites of Thermodynamics
1.1.1 The laws of thermodynamics
1.1.2 Thermodynamic potentials (for reversible processes)
1.1.3 Thermodynamic limit
1.1.4 Equation of state - μ-n relation
1.1.5 Variational principles
1.2 Prerequisites of Classical Mechanics
1.2.1 On the notion of ”state”
1.2.2 Probability distribution
1.2.3 Liouville’s theorem
1.2.4 Time-Independent Hamiltonians
2 Classical Statistical Mechanics
2.1 Microcanonical Ensemble
2.1.1 Microcanonical entropy Smc
2.1.2 One example/exercise (1.1, 1.2): Perfect gas
2.2 Canonical Ensemble
2.2.1 Thermodynamic quantities
2.2.2 Equipartition theorem
2.2.3 Exercise (2.1): Find the partition function of a (classical) perfect gas in 3D
2.2.4 Magnetic systems
2.2.5 Exercise (2.5): Thermodynamics of a magnetic solid
2.3 Grancanonical Ensemble
2.3.1 Thermodynamical quantities
2.3.2 Virial expansion (van der Waals gases)
2.4 State counting and Entropy
2.4.1 Probability distribution from maximum entropy principle
3 Quantum Statistical Mechanics
3.1 Review on Quantum Mechanics and Statistics
3.1.1 Quantum System
3.1.2 Density matrix
3.1.3 Identical particles - Permutation group
3.1.4 Quantum statistic
3.2 Second quantization
3.2.1 Creation/Annihilation operators
3.2.2 Fock Space
3.2.3 Field operator
3.2.4 Observable operators
3.3 Quantum ensembles
3.3.1 Microcanonical ensembles
3.3.2 Canonical ensemble
3.3.3 Grancanonical ensemble
3.3.4 Exercise (1.1): Quantum magnetic dipoles
3.3.5 Exercise (1.2): Quantum harmonic oscillators
3.4 Quantum gases
3.4.1 Fundamental equations
3.4.2 Semi-classical limit (exercise 2.1)
3.4.3 Fermions at T=0
3.4.4 Bosons at T=0
3.4.5 Exercise (2.5): Gas of photons

Chapter 1
Prerequisites

1.1 Prerequisites of Thermodynamics

(I) THU 29/09/2022
In thermodynamics there are extensive quantities, which grow with the system size, and intensive quantities, which do not. Conjugation between them means that tuning an intensive variable makes the conjugate extensive one change.

Note that we always refer to quasi-static transformations.

1.1.1 The laws of thermodynamics

0th Law Equilibrium (empirical) temperature.

We call M_A the space of states of A and M_B the space of states of B.
The total space is M_A × M_B = {(a,b)}, where (a,b) denotes all possible couples.

At equilibrium, not all couples are possible. At equilibrium:

F_AB(a,b) = 0    ⟺    (a,b) is an equilibrium couple

so there is a constraint. This is an equivalence relation:

A ~ A        A ~ B ⟺ B ~ A        A ~ B & B ~ C ⟹ A ~ C

FAB (a, b) = fA (a) - fB (b) = 0

This condition defines the empirical temperature: t_A = t_B at equilibrium.

The transitive property allows us to choose anything as a measuring device (e.g. a thermometer).

1st Law Internal energy E

There is an internal energy E, which can change in different ways, but it is always conserved (conservation of energy).

dE  = δQ  - δL + μdN

where δQ is heat, δL is work, N is number of particles and μ is the chemical potential: the energy needed to add/remove a particle.

The symbol d means that dE = 0 on any cycle, so the integral of dE does not depend on the path. δ does not have this property, so ∮δQ and ∮δL are not necessarily 0.

E.g. classical fluids:
δL = pdV: the work derives from compression/expansion. So the variation of the energy is dE = δQ − pdV + μdN.

2nd Law Entropy S

In a reversible process:

∮ δQ/T = 0        in general:  δQ/T = dS

Putting the two laws together,

∮ δQ/T ≤ 0    ⟹    dS ≥ δQ/T    (= for reversible)

So, a system evolves towards the maximum of entropy

3rd Law For any isothermal process, ΔS → 0 as T → 0.

1.1.2 Thermodynamic potentials (for reversible processes)

Let’s see the relation between these quantities:

Internal energy dE = TdS - pdV + μdN
The internal energy can be changed by changing S,V,N so E = E(S,V,N) and:

T = ∂E/∂S |_{V,N}        p = − ∂E/∂V |_{S,N}        μ = ∂E/∂N |_{S,V}

All the pairs above are conjugate variables: (T,S), (p,V), (μ,N).

E,S,V,N are extensive variables, so if

{ N → λN
{ V → λV        ⟹    E → λE
{ S → λS

where λ is a scaling factor.

So:

E(λS, λV, λN) = λE(S, V, N)

i.e. E is a homogeneous function of degree 1 (linear).

By Euler's theorem this requires E(S,V,N) = TS − pV + μN, consistent with dE = TdS − pdV + μdN.

Usually it is easier to work with T than with S (there are no experiments where you can tune the entropy S), so thermodynamic potentials are introduced. These are other functions which are more convenient to work with:
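The list of potentials is not reproduced here, so the following compact summary (standard textbook definitions, added for reference since F, G and Ω are used in the next sections) may help:

```latex
% Legendre transforms of E(S,V,N); each differential follows from dE = T dS - p dV + \mu dN.
\begin{aligned}
F(T,V,N)        &= E - TS     & dF      &= -S\,dT - p\,dV + \mu\,dN && \text{(Helmholtz free energy)}\\
G(T,p,N)        &= F + pV     & dG      &= -S\,dT + V\,dp + \mu\,dN && \text{(Gibbs free energy)}\\
\Omega(T,V,\mu) &= F - \mu N  & d\Omega &= -S\,dT - p\,dV - N\,d\mu && \text{(grand potential)}
\end{aligned}
% With Euler's relation E = TS - pV + \mu N one also gets G = \mu N and \Omega = -pV.
```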

1.1.3 Thermodynamic limit

N, V → ∞ with n = N/V fixed. In this limit all the thermodynamic potentials diverge, so (e.g.) E by itself has no meaning. What we can calculate is the ratio with the number of particles (e.g. E/N: internal energy per particle). The relations between them do not change, since they all scale with the system size.

e ≡ E/N      s ≡ S/N      v ≡ V/N      f ≡ F/N      g ≡ G/N      ω ≡ Ω/N

1.1.4 Equation of state - μ-n relation

From the grand potential one can derive p = −∂Ω/∂V which, written for a particular model (e.g. the classical gas), gives the equation of state of the system/model.

Also, the μ-n relation is very important: N = −∂Ω/∂μ

1.1.5 Variational principles

(II) MON 03/10/2022
Suppose we set up an experiment in which S,V,N are constant. Then dE = 0, which implies that the system evolves towards the minimum of the energy E.
The same happens if T,V,N are constant, in which case dF = 0, so the system evolves towards the minimum of the Helmholtz free energy. And so on.

So the problem becomes a problem of minimization.
We’ll see that the minimization of F and Ω are the most important.

For reversible processes, dF = −SdT − pdV + μdN, from which:

S = − ∂F/∂T |_{V,N}        p = − ∂F/∂V |_{T,N}        μ = ∂F/∂N |_{T,V}

The minimum is found with the hessian matrix:

Hess(F) = ( ∂²F/∂T²     ∂²F/∂T∂V    ∂²F/∂T∂N )
          ( ∂²F/∂V∂T    ∂²F/∂V²     ∂²F/∂V∂N )
          ( ∂²F/∂N∂T    ∂²F/∂N∂V    ∂²F/∂N²  )

In fact, the condition to have a minimum is that all 3 eigenvalues of the Hessian matrix are > 0.

That is the same as saying that, going in every direction, the ”gradient” increases.

We won’t prove this, but because of the way F = F(T,V,N), it is sufficient that

∂²F/∂T² < 0        ∂²F/∂V² > 0        ∂²F/∂T∂N ≤ 0

The same can be done changing the variables, and it also works with potentials other than F (e.g. E, etc.).
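A minimal numerical sketch of the eigenvalue criterion (the function below is a toy with a known minimum, not a physical free energy):

```python
import numpy as np

def hessian(f, x, eps=1e-5):
    """Finite-difference Hessian of a scalar function f at the point x."""
    n = len(x)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            xpp = x.copy(); xpp[i] += eps; xpp[j] += eps
            xpm = x.copy(); xpm[i] += eps; xpm[j] -= eps
            xmp = x.copy(); xmp[i] -= eps; xmp[j] += eps
            xmm = x.copy(); xmm[i] -= eps; xmm[j] -= eps
            H[i, j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * eps**2)
    return H

# Toy function with a stationary point at the origin (illustrative only).
f = lambda x: x[0]**2 + 2 * x[1]**2 + 0.5 * x[0] * x[1]
eigs = np.linalg.eigvalsh(hessian(f, np.zeros(2)))
print(eigs)   # all eigenvalues positive -> the stationary point is a minimum
```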

1.2 Prerequisites of Classical Mechanics

To represent a classical system we need:

1) Coordinates: positions q, momenta p.
For 1 particle living in IR^d: q⃗ ∈ IR^d, p⃗ ∈ IR^d.
So the phase space is M₁ = {(q,p)} ∈ IR^d × IR^d = IR^{2d}

For N particles living in IRd, the phase space is:

M_N = {(q₁,p₁, q₂,p₂, ..., q_N,p_N)} = {(q_i,p_i)  i = 1,...,N} = IR^{2dN}

2) Observable: An observable is given by a smooth real function from the phase space to a real number.

f (qi,pi) : MN  →  IR

e.g. H, angular momentum, kinetic energy T are all observables

3) Measure: (ideal, without errors) A measure of an observable on a state (q̄_i, p̄_i) is given by the value of the function at that particular point: f(q̄_i, p̄_i)

4) Evolution: The evolution of a system is fixed by a special observable, called Hamiltonian H, through Hamilton’s equation of motion:

q̇_i = ∂H/∂p_i        ṗ_i = − ∂H/∂q_i        (they give Newton’s laws)
(1.1)

In general, H = H(qi,pi,t), but we only study the case where the hamiltonian is time-independent.

Equations 1.1 are first order in t, so the solution (q_i(t), p_i(t)) is uniquely fixed by the initial conditions (q̄_i, p̄_i). Because of that we have two theorems:

Conservation of volumes (even if the shape is changed). Since each point of the phase space is a state, the volume counts the number of states. In other words, giving a volume is the same as giving a subset.

Conservation of energy. If H does not depend explicitly on time, the hamiltonian is constant for each curve of motion:

E  = H (qi(t),pi(t)) = H (¯qi, ¯pi) = H (qi(t = 0),pi(t = 0))
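These statements can be checked numerically. The following sketch (a symplectic ”leapfrog” integrator of my choice, applied to a 1D harmonic oscillator) integrates Hamilton's equations and verifies that H stays constant along the trajectory:

```python
# Integrate Hamilton's equations for H = p^2/(2m) + k q^2/2 with the
# leapfrog (symplectic) scheme, then check that the energy
# E = H(q(t), p(t)) stays (nearly) constant, as stated above.
m, k, dt = 1.0, 1.0, 1e-3

def leapfrog(q, p, steps):
    for _ in range(steps):
        p -= 0.5 * dt * k * q        # half kick:  p_dot = -dH/dq = -k q
        q += dt * p / m              # drift:      q_dot =  dH/dp = p/m
        p -= 0.5 * dt * k * q        # half kick
    return q, p

def H(q, p):
    return p * p / (2 * m) + 0.5 * k * q * q

q0, p0 = 1.0, 0.0
q, p = leapfrog(q0, p0, 10_000)      # integrate up to t = 10
drift = abs(H(q, p) - H(q0, p0))
print(drift)                          # tiny: energy is conserved
```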

1.2.1 On the notion of ”state”

We will call one of these states (q_i(t), p_i(t)) a microstate at time t. Fixing an initial microstate completely determines the trajectory in phase space, backward and forward in time.

If the initial conditions are changed a little, allowing the system to ”choose” them within a given set, we enter the field of complex systems.

In general, there is a (huge) number of possible microstates corresponding to the same macroscopic set of thermodynamic variables (the macrostate).

In order to study this, we can think of having an ensemble: a large number of copies of the system, all with the same Macroscopic State but with different microscopic realizations.

1.2.2 Probability distribution

Given an ensemble, in the limit in which the number of copies becomes very large, we can construct the probability with which, at a fixed time, a given microstate {qi(t),pi(t)} appears, thus recovering a probability density distribution on MN:

ρ(q_i(t), p_i(t)) :  M_N → IR⁺

which is normalized to 1 over M_N.

To have a dimensionless quantity, we replace:

dΓ ≡ ∏_i dq_i dp_i        with        dΩ ≡ ∏_i (dq_i dp_i)/h

where h is a constant with the dimension of an action (here we don’t say anything about it, it’s just the dimension of a little cell of sides dq_i, dp_i, but in quantum mechanics it will be Planck’s constant).

1.2.3 Liouville’s theorem

The conservation of volumes is Liouville’s theorem.
Given a region Ω₀ in which there is some probability density ρ(q_i,p_i), there is a current density J⃗ of states moving out of Ω₀:

J⃗ = ρ⃗v     where ⃗v = (q˙i, ˙pi) is a velocity in phase space

If the total probability is conserved: ∂ρ/∂t + ∇⃗·J⃗ = 0. That means

dρ/dt = ∂ρ/∂t + ∇⃗·(ρv⃗) = 0

which is Liouville’s theorem.

We can also use the Poisson brackets to write ∇⃗·(ρv⃗) = {ρ, H},
reminding that {f,g} = Σ_i ( ∂f/∂q_i ∂g/∂p_i − ∂g/∂q_i ∂f/∂p_i )

Definition: A system is called stationary iff ∂ρ/∂t = 0. Stationarity is a necessary condition for equilibrium, dρ/dt = 0, which is what we want to study. From the previous condition it follows that at equilibrium {ρ, H} = 0, which can happen if

1. ρ = const
2. ρ = ρ(H(q_i,p_i))

giving, respectively, the cases of the microcanonical and canonical/grancanonical distributions.
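The second case can be checked numerically: for any ρ that depends on (q,p) only through H, the Poisson bracket {ρ, H} vanishes. A finite-difference sketch (one degree of freedom; the Hamiltonian and the evaluation point are my choices):

```python
import math

def H(q, p):
    return p * p / 2 + q * q / 2          # harmonic oscillator, m = k = 1

def rho(q, p):
    return math.exp(-H(q, p))             # rho = rho(H): canonical-like weight

def poisson(f, g, q, p, eps=1e-6):
    """Central finite-difference estimate of {f, g} at (q, p)."""
    df_dq = (f(q + eps, p) - f(q - eps, p)) / (2 * eps)
    df_dp = (f(q, p + eps) - f(q, p - eps)) / (2 * eps)
    dg_dq = (g(q + eps, p) - g(q - eps, p)) / (2 * eps)
    dg_dp = (g(q, p + eps) - g(q, p - eps)) / (2 * eps)
    return df_dq * dg_dp - df_dp * dg_dq

val = poisson(rho, H, q=0.7, p=-1.3)                 # ~0: rho(H) is stationary
val2 = poisson(lambda q, p: q, H, q=0.7, p=-1.3)     # {q, H} = p, not zero
print(abs(val), val2)
```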

Definition: Given an observable f, we can define its:

average:  ⟨f⟩_ρ = ∫_{M_N} f(q_i,p_i) ρ(q_i,p_i) ∏_{i=1}^N (dq_i dp_i)/h
standard deviation:  (Δf)²_ρ = ⟨f²⟩_ρ − ⟨f⟩²_ρ

The subscript ρ indicates that we need to have defined a probability density distribution to evaluate these two.

1.2.4 Time-Independent Hamiltonians

[figure: foliation of the phase space into sheets of constant energy]

For time-independent hamiltonians, the energy is conserved. That means that all the trajectories will lie on a (2M − 1)-dimensional hypersurface in the phase space (where M is the total number of degrees of freedom). The whole space can thus be foliated into different sheets according to different energies, like in the figure.

So we can compute every integral by integrating first over a hypersurface of constant energy and then over all the possible energies:

∫ (...) ∏_i dq_i dp_i = ∫₀^∞ dH ∫_{S_H} dS_H (...)

Definition: Volume (~ number of states) in phase space with energy lower than a certain value (0 ≤ H(q_i,p_i) ≤ E):

Σ(E) ≡ ∫_{0≤H≤E} ∏_i dq_i dp_i = ∫₀^E dH ∫_{S_H} dS_H = ∫₀^E dH ω(H)

Definition: The density of states is the ”area” of the hypersurface S_H:

ω(E) ≡ ∫_{S_H} dS_H = ∂Σ/∂E

Σ and ω aren’t properties of the space alone: they depend on H, because the surfaces depend on H.

In the case of time-independent hamiltonians:

phase average:  ⟨f⟩_E = (1/ω(E)) ∫_{S_E} dS_E f
time average:  ⟨f⟩ = lim_{T→∞} (1/T) ∫_{t₀}^{t₀+T} dt f(q_i(t), p_i(t))

the latter exists for almost all initial conditions and it is independent of t₀

Definition: A system is said to be ergodic over the surface S_E iff almost all points (q_i,p_i) ∈ S_E pass through any neighbourhood U ⊂ S_E during the evolution. In other words, starting from any point, if you let the system evolve for a long time, every region will be explored.

Theorem: A system is ergodic iff, for almost all initial points: ⟨f⟩ = ⟨f⟩_E.
This tells us that the time spent in a region is proportional to the area of that region.

In the following we will consider only ergodic systems.
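A numerical illustration of the theorem (my example: the 1D harmonic oscillator H = (p² + q²)/2, whose trajectory covers the whole circle S_E; for this H the surface measure is uniform in the angle, since |v⃗| is constant on S_E):

```python
import math

# Time average of an observable along the trajectory vs. its average
# over the energy surface S_E, for H = (p^2 + q^2)/2 at energy E.
E = 0.5
A = math.sqrt(2 * E)                       # q(t) = A sin t, p(t) = A cos t

f = lambda q, p: q * q                     # observable

# time average over a long trajectory
T, n = 1000.0, 200_000
dt = T / n
time_avg = sum(f(A * math.sin(k * dt), A * math.cos(k * dt))
               for k in range(n)) * dt / T

# surface average over S_E (uniform in the angle for this Hamiltonian)
m = 100_000
surf_avg = sum(f(A * math.sin(2 * math.pi * k / m), A * math.cos(2 * math.pi * k / m))
               for k in range(m)) / m

print(time_avg, surf_avg)                  # both ≈ E
```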

Chapter 2
Classical Statistical Mechanics

2.1 Microcanonical Ensemble

06/10/2022
Recap:

Phase space M_N = {(q_i,p_i), i = 1,...,N}
Hamiltonian H(q_i,p_i) : M_N → IR

The microcanonical ensemble represents a closed system that doesn’t exchange either energy or matter with the environment, so in which the energy E, the number of particles N and the volume V are fixed. The evolution occurs on the hypersurface S_E = {(q_i,p_i) ∈ M_N : H(q_i,p_i) = E}

To describe the system we need the probability distribution of the microcanonical ensemble. We assume (a priori) a uniform probability:

Definition: The probability distribution of the microcanonical ensemble is:

ρmc (qi,pi) = C δ(H (qi,pi) - E )

where C is a constant we can obtain from the normalization:

1 = ∫_{M_N} C δ(H − E) = ∫_{S_E} C dS_E = C ω(E)    ⟹    C = 1/ω(E)

(reminding that ω(E) is the area of S_{H=E}). So:

ρ_mc(q_i,p_i) = (1/ω(E)) δ(H − E)
(2.1)

Working with the δ is difficult (e.g. it creates some problems when integrating), so we give an operative definition. We can write the volume of a region of phase space:

Σ(E) = ∫_{0≤H(q_i,p_i)≤E} dΓ = ∫₀^E dH ω(H)

from which,

Γ(E) = ∫_{E≤H≤E+ΔE} dΓ = ∫_E^{E+ΔE} ω(E′) dE′ ≃ ω(E) ΔE        (if ΔE small)

So we can see that the microcanonical probability density function (2.1) is the limit for ΔE → 0 of:

ρ_mc(q_i,p_i) = { 1/Γ(E)    if E ≤ H ≤ E + ΔE
              { 0         otherwise

2.1.1 Microcanonical entropy Smc

We will see that it coincides with the thermodynamic entropy.

Smc(E, V,N ) ≡ kB log ω(E )

1) We could actually define the entropy in three different ways:

1. S_mc^(1) = k_B log ω(E)
2. S_mc^(2) = k_B log Γ(E)
3. S_mc^(3) = k_B log Σ(E)

which in general are different quantities. However in the thermodynamic limit (N,V →∞, with n = N∕V const), they represent the same quantity:

s_mc = S_mc^(1)/N = S_mc^(2)/N = S_mc^(3)/N

So while doing calculations, we can use any one of the three.

2) Entropy should be extensive, so it should be additive: S_mc^A + S_mc^B = S_mc^{AB}

Proof: To prove it, let’s call the phase space of the two systems:
M1,M2. So MAB = M1 ×M2

dΓ₁ = ∏_{i=1}^{N₁} dq_i dp_i        dΓ₂ = ∏_{j=1}^{N₂} dq_j dp_j        ⟹    dΓ = dΓ₁ dΓ₂

If H = H₁ + H₂ + H_int  ⟹  E = E₁ + E₂ + E_int.
Here we make an assumption: there could be interactions at the wall that divides the two systems; however, since E₁ and E₂ both scale with the volume (L³) while E_int scales like the area of interaction (L²), we neglect the interaction term, because it disappears in the thermodynamic limit.

We can foliate the phase space with surfaces each at different energy

ω(E) = ∫_{M₁×M₂} dΓ δ(H − E)
= ∫₀^∞ dH₁ ∫₀^∞ dH₂ δ(H₁ + H₂ − E) ∫_{H₁=E₁} dS_{H₁} ∫_{H₂=E₂} dS_{H₂}
= ∫ dH₁ ∫ dH₂ δ(H₁ + H₂ − E) ω₁(E₁) ω₂(E₂)
= ∫₀^E dE₁ ω₁(E₁) ω₂(E₂ = E − E₁)

The integrand is ≥ 0 and defined on a compact interval [0,E], so it has a maximum. Let E₁*, E₂* = E − E₁* be the values of the energy for which the integrand is maximum. So:

ω(E) ≤ ( ∫₀^E dE₁ ) ω₁(E₁*) ω₂(E₂*) = E ω₁(E₁*) ω₂(E₂*)

and for ΔE small enough:

ω₁(E₁*) ω₂(E₂*) (ΔE)² ≤ ω(E) ΔE ≤ E ω₁(E₁*) ω₂(E₂*) ΔE

Reminding that Γ(E) ≃ ω(E)ΔE:

Γ₁(E₁*) Γ₂(E₂*) ≤ Γ(E) ≤ (E/ΔE) Γ₁(E₁*) Γ₂(E₂*)

log Γ₁ + log Γ₂ ≤ log Γ ≤ log Γ₁ + log Γ₂ + log(E/ΔE)        (×k_B)

S_mc¹ + S_mc² ≤ S_mc ≤ S_mc¹ + S_mc² + k_B log(E/ΔE)

In the thermodynamic limit, dividing everything by N and letting N →∞, the last term approaches zero and can be neglected. So we find:

Smc (E ) = S1mc(E *1) + S2mc(E *2 = E - E *1)

____________________________________________________________________________________

3) In the thermodynamic limit, this entropy coincides with the thermodynamic entropy: smc = sth

Proof: At equilibrium, E = E₁* + E₂*, where ω₁(E₁) ω₂(E − E₁) is maximized. Equivalently, Γ(E) = Γ₁(E₁)Γ₂(E₂) is stationary:

d( Γ(E) = Γ₁(E₁)Γ₂(E₂) ) |_{E₁=E₁*, E₂=E₂*} = 0

( (∂Γ₁/∂E₁) dE₁ Γ₂ + Γ₁ (∂Γ₂/∂E₂) dE₂ ) |_{E₁*,E₂*} = 0

Since E is fixed, dE₂ = −dE₁, and dividing by Γ₁Γ₂:

(1/Γ₁) ∂Γ₁/∂E₁ |_{E₁*} = (1/Γ₂) ∂Γ₂/∂E₂ |_{E₂*}

∂/∂E₁ ( k_B log Γ₁(E₁) ) |_{E₁*} = ∂/∂E₂ ( k_B log Γ₂(E₂) ) |_{E₂*}

∂S_mc¹/∂E₁ |_{E₁*} = ∂S_mc²/∂E₂ |_{E₂*}

since this is calculated at E1*,E 2*, it is calculated at the equilibrium (N,V fixed).

The same thing happens in thermodynamics: if T₁ = T₂,

dE = T dS_th − pdV + μdN    ⟹    1/T = ∂S_th/∂E |_{V,N}

So the equilibrium in thermodynamics requires this condition:

∂S_th¹/∂E₁ = ∂S_th²/∂E₂

So we find that:

Sth(E, V,N ) = Smc (E, V,N ) = kB log Γ (E )

(equal and not only proportional, because k_B is the constant purposely chosen to make them equal)

____________________________________________________________________________________

4) We saw that

        ∫
             ( ∏        )
⟨f⟩mc =           dpidqi f (qi,pi)ρmc(qi,pi)
         MN

We can now prove the Universal Boltzmann’s formula (works in the TD-limit):

Smc =  - kB ⟨log ρmc⟩mc

Proof: Working from the Universal Boltzmann’s formula, we can arrive at the definition of entropy:

−k_B ⟨log ρ_mc⟩_mc = −k_B ∫_{M_N} dΓ (log ρ_mc) ρ_mc        with ρ_mc = (1/ω(E)) δ(H − E)  ⟹  ρ_mc|_{S_E} = 1/ω(E)
= −k_B ∫_{S_E} (1/ω(E)) ( −log ω(E) ) dS_E
= k_B (1/ω(E)) log ω(E) ∫_{S_E} dS_E
= k_B log ω(E) = S_mc

So we have proved that, in the TD-limit:

Smc =  kB log Σ = kB log Γ = kB logω =  - kB ⟨log ρmc⟩mc

2.1.2 One example/exercise (1.1, 1.2): Perfect gas

We consider a gas of N non-relativistic and non-interacting monoatomic particles in 3D, confined in a volume V .

                                                     ∑N   2
MN   = { {(⃗qi,⃗pi)}Ni=1 : ⃗qi ∈ V, ⃗pi ∈ IR3}  H (qi,pi) =     ⃗pi--
                                                     i=1 2m

The volume of states is:

Σ(E) = ∫ ∏_{i=1}^N (d³q_i d³p_i)/h³

where the integral is calculated on:

0 ≤ H(q_i,p_i) ≤ E    ⟺    0 ≤ Σ_{i=1}^N p⃗_i² ≤ 2mE    ⟺    0 ≤ Σ_{i=1}^N ( p_i^{(x)2} + p_i^{(y)2} + p_i^{(z)2} ) ≤ 2mE

So it is the volume of a 3N-dimensional sphere of radius √(2mE). So:

Σ(E) = (1/h^{3N}) ( ∫_{V^N} ∏_i d³q_i ) ∫_{Σ_i p⃗_i² ≤ 2mE} ∏_i d³p_i
= (V^N/h^{3N}) Ω_{3N}(R = √(2mE))
= (V^N/h^{3N}) · ( 2π^{3N/2} / ((3N) Γ(3N/2)) ) · (R = √(2mE))^{3N}

where Γ(x) = ∫₀^∞ dt t^{x−1} e^{−t} is Euler’s Γ-function, which can be seen as a generalization of the factorial. In fact, one can prove (integrating by parts) that
Γ(n) = (n − 1)!        Γ(x + 1) = xΓ(x)        log Γ(x) ≃ x log x − x

The previous formula is composed of an angular part (with Euler’s Γ-function) and a radial part (which scales as R^{3N}). So finally we have that:

Σ(E) = (2/3) (V/h³)^N (2πmE)^{3N/2} / ( N Γ(3N/2) )
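The n-ball volume used in this derivation can be verified numerically (the Monte Carlo sampling scheme and the sample size are my choices):

```python
import math, random

# Check the n-ball volume formula behind Sigma(E),
#   Omega_n(R) = 2 pi^(n/2) R^n / (n Gamma(n/2)),
# against a Monte Carlo estimate (points uniform in the cube [-R, R]^n).
def ball_volume(n, R=1.0):
    return 2 * math.pi ** (n / 2) * R ** n / (n * math.gamma(n / 2))

def mc_ball_volume(n, R=1.0, samples=200_000, seed=0):
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples)
               if sum(rng.uniform(-R, R) ** 2 for _ in range(n)) <= R * R)
    return (2 * R) ** n * hits / samples

n = 3
exact, estimate = ball_volume(n), mc_ball_volume(n)
print(exact, estimate)       # 4*pi/3 ≈ 4.19 and a Monte Carlo value close to it
```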

Calculating the derivative, one can also get the density of states and Γ(E):

ω(E) = ∂Σ/∂E = (3N/2E) Σ(E)        Γ(E) = ω(E) ΔE = (3N/2E) Σ(E) ΔE

and we can check that in the TD-limit, the different ways of defining the entropy are the same:

(log Γ)/N = (log ω)/N + (log ΔE)/N = (log Σ)/N + (log ΔE)/N + (log(3N/2E))/N

where the extra terms can be cancelled in the limit of N → ∞ (for the last one, since E ~ N, the ratio 3N/2E is constant).

We can then find the entropy for a perfect gas (supposing distinguishable particles):

S_dis = k_B log Σ(E)
= k_B [ N log( V (2πmE/h²)^{3/2} ) − log N − log Γ(3N/2) + log(2/3) ]        (log N and log(2/3) are negligible in the TD-limit)
= k_B [ N log( V (2πmE/h²)^{3/2} ) − (3N/2) log(3N/2) + 3N/2 ]        (Stirling)
= k_B [ 3N/2 + N log( V (4πmE/(3Nh²))^{3/2} ) ]

(III) MON 10/10/2022
This formula has a problem: it is not extensive. If N → 2N, V → 2V (and E → 2E) we expect S → 2S, but we actually have S → 2S + 2Nk_B log 2.

The solution is to consider indistinguishable particles. That means that states like (q₁,p₁,q₂,p₂,...,q_N,p_N) and (q₂,p₂,q₁,p₁,...,q_N,p_N),
which are different points in the phase space M_N, should be counted only once. We have N! of these equivalent vectors, so Σ(E) → Σ(E)/N!, and we get:

S_ind = k_B log( Σ(E)/N! )
= k_B [ log Σ(E) − log N! ]
= k_B [ 5N/2 + N log( (V/N) (4πmE/(3Nh²))^{3/2} ) ]
(2.2)

which is an extensive quantity.
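A quick numerical check of extensivity, in units k_B = h = m = 1 (the numerical values of E, V, N are arbitrary), comparing (2.2) with the distinguishable-particle result:

```python
import math

# S_ind = 5N/2 + N log( (V/N) (4 pi m E / (3 N h^2))^(3/2) )  (eq. 2.2)
# S_dis = 3N/2 + N log(  V    (4 pi m E / (3 N h^2))^(3/2) )
# in units k_B = h = m = 1.
def S_ind(E, V, N):
    return 2.5 * N + N * math.log((V / N) * (4 * math.pi * E / (3 * N)) ** 1.5)

def S_dis(E, V, N):
    return 1.5 * N + N * math.log(V * (4 * math.pi * E / (3 * N)) ** 1.5)

E, V, N = 100.0, 50.0, 10.0
print(S_ind(2*E, 2*V, 2*N) - 2 * S_ind(E, V, N))   # 0: extensive
print(S_dis(2*E, 2*V, 2*N) - 2 * S_dis(E, V, N))   # 2 N log 2, not 0
```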

So for an ideal gas in 3D microcanonical we have:

dΩ = ∏_{i=1}^N (d^d q_i d^d p_i)/h^d        distinguishable particles
dΩ = (1/N!) ∏_{i=1}^N (d^d q_i d^d p_i)/h^d        indistinguishable particles

From the entropy, knowing that TdS = dE + pdV − μdN, we can get T, p and μ; in particular 1/T = ∂S/∂E gives E = (3/2)Nk_BT (2.3).

Substituting 2.3 in 2.2 we get:

S = (5/2)Nk_B + k_B N log( (V/N)(2πmk_BT/h²)^{3/2} ) = (5/2)Nk_B + 3Nk_B log( d/λ_T )

where we have defined the thermal wavelength λT and the average inter-particle distance d as:

λ_T ≡ √( h²/(2πmk_BT) )        v = V/N = 1/n ≃ d³

These formulas have a problem: it might happen that for low temperature T, d ≲ λ_T and S < 0! So something breaks down when d ~ λ_T, but there will be a quantum mechanical solution. However, what we have found is true as long as d > λ_T; otherwise we have interaction effects.
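As an illustrative order-of-magnitude estimate (the choice of helium at room temperature and atmospheric pressure is mine, not from the lecture), λ_T is indeed far smaller than d under ordinary conditions:

```python
import math

# lambda_T = h / sqrt(2 pi m k_B T) for helium at room temperature, compared
# with the mean interparticle distance d = (V/N)^(1/3) at atmospheric pressure.
h  = 6.626e-34          # J s
kB = 1.381e-23          # J/K
m  = 6.646e-27          # kg, helium-4 atom
T  = 300.0              # K
p  = 1.013e5            # Pa

lam = h / math.sqrt(2 * math.pi * m * kB * T)
nd  = p / (kB * T)                 # number density from the ideal-gas law
d   = nd ** (-1.0 / 3.0)
print(lam, d)                      # lambda_T ~ 0.5 angstrom, d ~ 3.4 nm
```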

2.2 Canonical Ensemble

System S = {(q_i^{(1)}, p_i^{(1)})}, with V₁ ≪ V₂ and N₁ ≪ N₂.
Environment ε = {(q_j^{(2)}, p_j^{(2)})}, V₂, N₂.

We have a system and an environment with a wall in between them that permits heat (energy) to pass but not particles. Energy is exchanged, but the total energy E = E₁ + E₂ = const. (with E₁ ≪ E), so that the universe U = S ∪ ε is microcanonical.

The phase space is MU = M1 ×M2 and:

dΩ_U = dΩ₁ dΩ₂ ∝ ( ∏_{i=1}^{N₁} dq_i^{(1)} dp_i^{(1)} ) ( ∏_{j=1}^{N₂} dq_j^{(2)} dp_j^{(2)} )

ρ_mc( q_i^{(1)}, p_i^{(1)}; q_j^{(2)}, p_j^{(2)} ) = δ(H₁ + H₂ − E) / ω(E)

The whole universe is described with a microcanonical probability distribution function, and the (canonical) probability distribution describing the system only is obtained by integrating out the environment d.o.f. So:

ρ_c^{(S)}( q_i^{(1)}, p_i^{(1)} ) = ∫ ρ_mc dΩ₂        (the δ selects only the surface with E₂ = E − E₁)
= (1/ω(E)) ∫_{E₂=E−E₁} dΩ₂
= ω₂(E₂ = E − E₁) / ω(E)

So ρ_c^{(S)}( q_i^{(1)}, p_i^{(1)} ) ~ ω₂(E₂ = E − E₁), and we know that S = k_B log ω, so:

k_B log ρ_c ~ k_B log ω₂(E₂) ~ S₂(E₂ = E − E₁)
≃ S₂(E) + ∂S₂/∂E₂ |_{E₂=E} (−E₁) + ...        (E₁ ≪ E)
= S₂(E) − E₁/T₂        (T₂ = T₁ = T)

⟹    log ρ_c ~ log ω₂ ~ (1/k_B) ( S₂(E) − E₁/T )

⟹    ρ_c( q_i^{(1)}, p_i^{(1)} ) = e^{S₂(E)/k_B} e^{−E₁/k_BT} ∝ e^{−E₁/k_BT}

since the first factor is a constant.

We can see that it depends only on the system (E1). Defining β = 1∕kBT:

ρ_c( q_i^{(1)}, p_i^{(1)} ) = (const) e^{−βH(q_i^{(1)}, p_i^{(1)})}

We have eliminated the environment, so we will avoid writing the superscript (1) from now on:

ρ_c(q_i,p_i) = (1/Z_N) e^{−βH(q_i,p_i)}

where we have defined the canonical partition function Z_N, which can be determined with the normalization:

1 = ∫_{M_S} ρ_c dΩ = (1/Z_N) ∫_{M_S} ∏_{i=1}^N (dq_i dp_i / h^d) e^{−βH(q_i,p_i)}

⟹    Z_N ≡ ∫_{M_S} ( ∏_{i=1}^N dq_i dp_i / h^d ) e^{−βH(q_i,p_i)}

ZN = ZN(T,V ) depends on N,T,V .

This expression is valid for distinguishable particles. For indistinguishable particles there is a 1∕N! factor. For simplicity we write:

dΩ = (1/ξ_N) ∏_{i=1}^N (dq_i dp_i)/h^d        with  ξ_N = { 1    distinguishable
                                                          { N!   indistinguishable

Using the fact that we can foliate the phase space, we can write (assuming the lowest bound to be 0):

Z_N(V,T) = ∫_M dΩ e^{−βH} = ∫₀^∞ dE ∫_{S_H=E} dS_H e^{−βH} = ∫₀^∞ dE ω(E) e^{−βE}

In systems with a discrete set of energy values ϵ_j: Z_N = Σ_j g_j e^{−βϵ_j}, where g_j is the degeneracy of ϵ_j.

1) For more species A,B, of particles, we can assume that there is no interaction among different species: H = HA + HB. So:

Z_N = ∫ dΩ_A dΩ_B e^{−β(H_A + H_B)}
= ( ∫ dΩ_A e^{−βH_A} ) ( ∫ dΩ_B e^{−βH_B} )
= Z_{N_A}^A Z_{N_B}^B

So if the species are distinguishable and independent one from the others (not interacting), the partition function is the product of the partition function of all the species.

2) Given an observable f(qi,pi), the canonical average is:

       ∫                            ∫
                                -1--      -βH (qi,pi)
⟨f ⟩c ≡  S dΩ ρc(qi,pi)f(qi,pi) = ZN  S d Ωe       f (qi,pi)

2.2.1 Thermodynamic quantities

We recover the thermodynamic potentials, by defining:

1)

F(T,V,N) = −(1/β) log Z_N(T,V)        ( ⟺  Z_N = e^{−βF} )

2)

E = ⟨H⟩_c = − ∂ log Z_N / ∂β

Proof: We can see that both Z_N (microscopic description) and F (macroscopic) depend on N,V,T. This leads us to think that they must be related. We will see that: Z_N = e^{−βF}, i.e. F = −(1/β) log Z_N.

e^{−βF} = Z_N = ∫ dΩ e^{−βH}    ⟹    ∫ dΩ e^{−β(H−F)} = 1

Differentiating both sides (∂/∂β):

∫ dΩ e^{−β(H−F)} ( F − H + β ∂F/∂β ) = 0

As an exercise: since β = 1/k_BT,  β ∂/∂β = −T ∂/∂T.

⟹    F = ( ∫ dΩ H e^{−βH} ) / Z_N + T ∂F/∂T |_{V,N} = ⟨H⟩_c + T ∂F/∂T

While in thermodynamics we have F_th = E_th − TS with S = − ∂F_th/∂T |_{V,N}.
These two expressions are the same if we identify:
These two expressions are the same if we identify:

F_th = F = −(1/β) log Z_N        S_th = − ∂F/∂T |_{V,N}        E = ⟨H⟩_c = ( ∫ dΩ H e^{−βH} ) / ( ∫ dΩ e^{−βH} )

____________________________________________________________________________________

A useful formula in exercises:

E = ⟨H⟩_c = − (1/Z_N) ∂/∂β ( ∫ dΩ e^{−βH} ) = − ∂ log Z_N / ∂β
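This formula can be tested on a simple model (a two-level system with energies 0 and ϵ, my choice for illustration), comparing the numerical derivative of log Z with the analytic canonical average:

```python
import math

# Two-level system: Z = 1 + exp(-beta*eps),
# so E = <H>_c = eps * exp(-beta*eps) / (1 + exp(-beta*eps)).
eps, beta = 1.0, 0.7

logZ = lambda b: math.log(1 + math.exp(-b * eps))

# E from the numerical derivative: E = -d(log Z)/d(beta)
h = 1e-6
E_numeric = -(logZ(beta + h) - logZ(beta - h)) / (2 * h)

# E from the analytic canonical average
E_analytic = eps * math.exp(-beta * eps) / (1 + math.exp(-beta * eps))
print(E_numeric, E_analytic)       # the two agree
```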


3) We saw the universal Boltzmann’s formula in the microcanonical. In the canonical it is the same (that’s why it’s called universal):

|-----------------|
|S = - kB ⟨log ρc⟩c|
-------------------

Proof:

−k_B ⟨log ρ_c⟩_c = −k_B ∫ dΩ ρ_c log ρ_c
= −k_B ∫ dΩ (e^{−βH}/Z) ( −βH − log Z )
= k_B [ β ∫ dΩ (e^{−βH}/Z) H + log Z ( ∫ dΩ e^{−βH} ) / Z ]        ( ∫ dΩ e^{−βH} = Z )
= k_B [ (1/k_BT) ⟨H⟩_c + log Z ]        ( F = −(1/β) log Z  ⟹  log Z = −F/k_BT )
= (E − F)/T = S

2.2.2 Equipartition theorem

Theorem: Let ξ₁ ∈ [a,b] denote one of the canonical coordinates (q) or momenta (p), and ξ_j (j ≠ 1) all the other variables. Suppose that the following condition holds:

∫ ( ∏_{j≠1} dξ_j ) [ ξ₁ e^{−βH} ]_a^b = 0

Then:

⟨ ξ₁ ∂H/∂ξ₁ ⟩_c = k_B T

Proof:

1 = ∫ dΩ ρ_c = ∫ dξ₁ ( ∏_{j≠1} dξ_j ) e^{−βH} / Z

But we can integrate by parts in ξ₁, using:

dξ₁ e^{−βH(ξ₁,ξ_j)} = d( ξ₁ e^{−βH} ) − ξ₁ (−β) e^{−βH} (∂H/∂ξ₁) dξ₁

So we obtain:

1 = ∫ ( ∏_{j≠1} dξ_j ) [ ξ₁ e^{−βH}/Z ]_a^b + (1/k_BT) ∫ dξ₁ ( ∏_{j≠1} dξ_j ) (e^{−βH}/Z) ξ₁ ∂H/∂ξ₁

And from the hypothesis, the first term is zero. So:

k_B T = ∫ dΩ ρ_c ( ξ₁ ∂H/∂ξ₁ ) = ⟨ ξ₁ ∂H/∂ξ₁ ⟩_c

⟹    ⟨ ξ₁ ∂H/∂ξ₁ ⟩_c = k_B T

____________________________________________________________________________________

We can now see that the standard equipartition theorem is a corollary of this:
If the coordinate ξ1 appears quadratically in the Hamiltonian, then it contributes to the internal energy with an addend of kBT∕2.

Indeed, if H = H₁ + H̃(ξ_j) with H₁ = Aξ₁²:

ξ₁ ∂H/∂ξ₁ = 2Aξ₁² = 2H₁    ⟹    2⟨H₁⟩_c = k_BT    ⟹    ⟨H₁⟩_c = k_BT/2

Let’s analyze better the condition of the theorem:

ξ₁ ∈ [a,b]:        [ ξ₁ e^{−βH(ξ₁,ξ_j)} ]_a^b = 0
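The corollary can be illustrated with a quick Monte Carlo sketch (my example): for H₁ = Aξ₁², the canonical weight e^{−βAξ₁²} is Gaussian with variance 1/(2βA), so sampling ξ₁ directly from it gives ⟨ξ₁ ∂H/∂ξ₁⟩ = 2A⟨ξ₁²⟩ = k_BT.

```python
import math, random

# Monte Carlo check of <xi dH/dxi>_c = k_B T for a quadratic coordinate,
# H_1 = A xi^2: under rho_c ~ exp(-beta*A*xi^2), xi is Gaussian with
# variance 1/(2*beta*A), hence 2A <xi^2> = 1/beta = k_B T.
A, kBT = 2.0, 1.5
beta = 1.0 / kBT
sigma = math.sqrt(1.0 / (2 * beta * A))   # std. dev. of the Boltzmann weight

rng = random.Random(42)
n = 200_000
avg = sum(2 * A * rng.gauss(0.0, sigma) ** 2 for _ in range(n)) / n
print(avg)     # ≈ kBT = 1.5
```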

2.2.3 Exercise (2.1): Find the partition function of a (classical) perfect gas in 3D

                                                      N   2
         N      3N                   3                ∑   ⃗pi--
MN   = V   ×  IR       ⃗q ∈ V   ⃗p ∈ IR      H (qi,pi) =     2m
                                                     i=1

           e- βH
ρc(qi,pi) = -----
            ZN

Z_N = (1/(N!(h³)^N)) ∫_{V^N} ∏_{i=1}^N d³q_i ∫_{IR^{3N}} ( ∏_{i=1}^N d³p_i ) e^{−β Σ_i p⃗_i²/2m}
= (V^N/(N! h^{3N})) ( ∫_{IR³} d³p e^{−β p⃗²/2m} )^N        (p² = p_x² + p_y² + p_z²)
= (V^N/(N! h^{3N})) ( ∫_{−∞}^{+∞} dp_α e^{−β p_α²/2m} )^{3N}        (Gaussian integral)

Z_N = (V^N/N!) ( √(2πmk_BT) / h )^{3N} = (V^N/N!) (1/λ_T^{3N})

Generalizing to d dimensions: Z_N = (V^N/N!) (1/λ_T^{dN})
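The Gaussian integral that produces this result can be checked numerically (units with h = 1; the truncation at ±L and the grid spacing are my choices):

```python
import math

# Numerical check of the Gaussian integral behind Z_N:
#   integral dp exp(-beta p^2 / 2m) = sqrt(2 pi m k_B T)
# (simple Riemann sum, truncated at +/- L).
m, kBT = 1.3, 0.8
beta = 1.0 / kBT

L, n = 50.0, 200_000
dp = 2 * L / n
integral = sum(math.exp(-beta * (-L + k * dp) ** 2 / (2 * m))
               for k in range(n + 1)) * dp

exact = math.sqrt(2 * math.pi * m * kBT)
print(integral, exact)     # the two agree
```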

From that partition function we can get all the thermodynamic quantities (E, p, μ, c_V, ...).

13/10/2022

Notice that c_V comes out to be constant: this is in contradiction with thermodynamic identities that require c_V → 0 as T → 0. Again this shows that this model is not suited to describe the low temperature limit.

2.2.4 Magnetic systems

Another way to do work on a system is through magnetic interactions.
So: dE = δQ − δL + μdN = TdS − pdV + μdN + dE_mag

If E⃗, P⃗ are the external electric field and the response of the matter to it (the polarization), we can define the electric displacement D⃗ = ϵ0E⃗ + P⃗.
For historical reasons, P⃗ already contains the factor ϵ0 in its definition.

The Maxwell equations in matter become:

\[
\vec\nabla\cdot\vec D = \rho \qquad \vec\nabla\cdot\vec B = 0 \qquad \vec\nabla\times\vec E = -\frac{\partial\vec B}{\partial t} \qquad \vec\nabla\times\vec H = \vec j + \frac{\partial\vec D}{\partial t}
\]

where H⃗ = B⃗/μ0 − M⃗.

A charged particle is subject to F⃗_el = qE⃗, with E⃗ = −∇⃗V, and we have:

\[
H_0 = \frac{\vec p^{\,2}}{2m} + QV(q)
\]

where V(q) is the electrostatic potential.

There can be other effects: some atoms or molecules have a permanent dipole moment d⃗ (e.g. the water molecule). This dipole couples to an external field through the interaction d⃗·E⃗. However, this effect is usually not very strong (P⃗ is weak), with the exception of ferromagnetic materials.

We have different materials with different effects: diamagnetism, paramagnetism, ferromagnetism.

Paramagnetism The assumptions for paramagnetism are that:

\[
\vec\nabla\cdot\vec D = \epsilon_0\vec\nabla\cdot\vec E = 0 \qquad \vec\nabla\cdot\vec B = 0 \qquad \vec\nabla\times\vec E = -\frac{\partial\vec B}{\partial t} \qquad \vec\nabla\times\vec H = \vec j
\]

We can define the total magnetization (an extensive quantity) as:

\[
\vec{\mathcal M} = \int_V d^3q\;\vec M(q) = V\vec M
\]

And we have: ⃗B = μ0⃗H + μ0M⃗(q), so:

\[
\delta L = dt\int_V d^3q\;\vec j\cdot\vec E
= dt\int_V d^3q\;(\vec\nabla\times\vec H)\cdot\vec E
= dt\int_V d^3q\left[\vec\nabla\cdot(\vec H\times\vec E) + \vec H\cdot(\vec\nabla\times\vec E)\right]
\]

The first contribution, dt[H⃗×E⃗]_{∂V}, is a boundary term and can be discarded; using ∇⃗×E⃗ = −∂B⃗/∂t, the remaining one gives the magnetic work

\[
dE_{mag} = \int_V d^3q\;\vec H\cdot d\vec B = \mu_0\int_V d^3q\;\vec H\cdot d\big(\vec H + \vec M\big)
\]

If we suppose a homogeneous material, H⃗ is uniform, so M⃗ is uniform too. We obtain:

\[
\delta L = \mu_0 V\,\vec H\cdot d\vec H + \mu_0 V\,\vec H\cdot d\vec M = V\mu_0\, d\!\left(\frac{H^2}{2}\right) + \mu_0\,\vec H\cdot d\vec{\mathcal M}
\]

The first term is not relevant, since it’s the same we have in vacuum.

The total energy can thus change for several reasons:

\[
dE = T\,dS - p\,dV + \mu\,dN + \mu_0\,\vec H\cdot d\vec{\mathcal M}
\]

and ⃗H,M⃗ are a couple of conjugated variables, just like (T,S), (p,V ), (μ,N).

If we apply this to a solid (dV = 0) in the canonical ensemble (dN = 0), the free energy satisfies dF = −S dT + μ0 H⃗·dℳ⃗.

Microscopically, paramagnetism is related to dipole moments: the intrinsic magnetic moments μ⃗ (which are due to electrons orbiting around nuclei (microscopic currents) or to spin) can interact with an external field (μ⃗·B⃗), originating paramagnetism.

Diamagnetism The magnetic forces are more complicated than the electric ones:

\[
\vec F_{Lorentz} = q\,\vec v\times\vec B \qquad \vec B = \vec\nabla\times\vec A \qquad (\vec A:\ \text{vector potential})
\]

However, we can write p⃗ → p⃗ − (Q/c)A⃗ ≡ p̃⃗ (minimal coupling), from which:

\[
H = \frac{1}{2m}\left(\vec p - \frac{Q}{c}\vec A\right)^2
\;\xrightarrow{\text{for }N\text{ particles}}\;
H = \sum_{j=1}^{N}\frac{1}{2m}\left(\vec p_j - \frac{Q}{c}\vec A(q_j)\right)^2
\]

The canonical partition function is:

\[
Z_N[T,V,N,\vec H] = \frac{1}{h^{3N}N!}\int\left(\prod_{j=1}^{N}d^3q_j\,d^3p_j\right)e^{-\beta\sum_{j=1}^{N}\left(\vec p_j-\frac{Q}{c}\vec A(q_j)\right)^2/2m}
= \frac{1}{h^{3N}N!}\int\prod_{j=1}^{N}d^3q_j\,d^3\tilde p_j\;e^{-\beta\sum_{j=1}^{N}\tilde{\vec p}_j^{\,2}/2m}
\]

where in the last step we changed variables to p̃⃗_j = p⃗_j − (Q/c)A⃗(q_j), which has unit Jacobian.

We removed the dependency on H⃗ (ZN = ZN[T,V,N]), so any thermodynamic potential that we can get does not depend on H⃗. We have obtained the following

Theorem: (Bohr-van Leeuwen) In classical theory there is no diamagnetism:

\[
\vec{\mathcal M} = -\frac{\partial G}{\partial\vec H} = 0
\]

In quantum mechanics diamagnetism can be obtained through the Langevin theory, due to Larmor precession.

Ferromagnetism In some materials the vector M⃗ is very strong and there are interactions between different magnetic moments: μ⃗i·μ⃗j.

2.2.5 Exercise (2.5): Thermodynamics of a magnetic solid

We consider a solid of N atoms/molecules (V fixed) in a canonical setting, which have an intrinsic magnetic moment μ⃗ in an external uniform magnetic field H⃗. We assume that the field is not too intense, so |μ⃗i| = μ does not change (the only effect the field has on μ⃗ is to make it rotate).

We assume the particles to be distinguishable (because they are fixed at their equilibrium positions) and consider the Hamiltonian H = −Σ_{j=1}^{N} μ⃗_j·H⃗.

Since there are no translational degrees of freedom, the only ones left for each particle are those of the magnetic moment: μ⃗ = (μx, μy, μz), with |μ⃗|² fixed. The phase space (of a single particle) is a 2D sphere of radius |μ⃗| = μ.

So it's better to use spherical coordinates:

\[
\mu_x = \mu\sin\theta\cos\phi \qquad \mu_y = \mu\sin\theta\sin\phi \qquad \mu_z = \mu\cos\theta
\]

and the volume element is dq dp = μ² sinθ dθ dϕ. Indeed, if we use q = μϕ, the variable conjugate to an angle is a momentum, p = μcosθ, so dq dp = μ² sinθ dθ dϕ.

We now have everything we need to write the partition function:

\[
Z_{TOT} = (Z_1)^N \qquad H = -\sum_j\vec\mu_j\cdot\vec H = -\sum_j\mu_z^{(j)}H
\]

\[
Z_1 = \int_0^{\pi}\!\!\int_0^{2\pi}(\sin\theta\,d\theta\,d\phi)\;e^{\beta\mu H\cos\theta}
\]

And we can also calculate the total magnetization:

\[
\mathcal M_z = \Big\langle\sum_{j=1}^{N}\mu_z^{(j)}\Big\rangle_c = \sum_{j=1}^{N}\big\langle\mu_z^{(j)}\big\rangle_c
\]

\[
\big\langle\mu_z^{(j)}\big\rangle_c
= \frac{\int(\sin\theta\,d\theta\,d\phi)\,e^{\beta\mu_z H}\,\mu_z}{\int(\sin\theta\,d\theta\,d\phi)\,e^{\beta\mu_z H}}
= \frac{1}{Z_1}\frac{1}{\beta}\frac{\partial Z_1}{\partial H}
= \frac{1}{\beta}\frac{\partial}{\partial H}\log Z_1
= \frac{\partial}{\partial H}\left(\frac{1}{\beta}\log Z_1\right)
\]

Since all particles are the same:

\[
\mathcal M_z = N\big\langle\mu_z^{(j)}\big\rangle_c = -\frac{\partial F}{\partial H}
\qquad F = -\frac{1}{\beta}\log Z_{TOT} = -\frac{N}{\beta}\log Z_1
\]
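The single-dipole average worked out above can be checked numerically; the closed form ⟨μz⟩ = μ(coth x − 1/x), with x = βμH, is the Langevin function (a sketch with arbitrary parameter values, not from the lectures):

```python
# Sketch: compare the numerically integrated <mu_z> with the Langevin function.
import numpy as np

def mean_mu_z(mu, H, beta, n=20_000):
    """<mu_z> = ∫ sinθ e^{βμH cosθ} μ cosθ dθ / ∫ sinθ e^{βμH cosθ} dθ."""
    theta = np.linspace(0.0, np.pi, n)
    w = np.sin(theta) * np.exp(beta * mu * H * np.cos(theta))
    # uniform grid: the spacing cancels in the ratio of the two sums
    return mu * float(np.sum(w * np.cos(theta)) / np.sum(w))

def langevin(mu, H, beta):
    x = beta * mu * H
    return mu * (1.0 / np.tanh(x) - 1.0 / x)

print(mean_mu_z(1.0, 2.0, 0.5), langevin(1.0, 2.0, 0.5))  # the two agree
```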

2.3 Grancanonical Ensemble

(IV) MON 17/10/2022
We consider a system S = {(qi(1), pi(1))} and an environment ε = {(qj(2), pj(2))} at equilibrium: thermal (T1 = T2 = T), mechanical (p1 = p2 = p) and chemical (μ1 = μ2 = μ). These two can exchange both energy and particles, but the total energy and the total number of particles are conserved (N = N1 + N2 = const), so that the whole universe U = S ∪ ε is canonical.

The (grancanonical) probability distribution of the system alone is obtained by integrating out the environment d.o.f. So:

\[
\rho_{gc}^{(S)}\!\left(q_i^{(1)},p_i^{(1)}\right)
= A\int_\varepsilon\left(\prod_j dq_j^{(2)}\,dp_j^{(2)}\right)\rho_c^{(U)}\!\left(q_i^{(1)},p_i^{(1)};\,q_j^{(2)},p_j^{(2)}\right)
= A\,\frac{e^{-\beta H_1}\displaystyle\int\prod_j dq_j^{(2)}dp_j^{(2)}\;e^{-\beta H_2}}
{\displaystyle\int\left(\prod_i^{N_1}dq_i^{(1)}dp_i^{(1)}\right)\left(\prod_j^{N_2}dq_j^{(2)}dp_j^{(2)}\right)e^{-\beta(H_1+H_2)}}
\]

(the integral in the denominator runs over the whole universe S ∪ ε, with total volume V = V1 + V2)

We will choose the constant A = N!/(N1!N2!); we will prove later that it is the correct one (the one that satisfies the normalization). So we have:

\[
\rho_{gc} = \frac{\dfrac{e^{-\beta H_1}}{N_1!\,N_2!}\displaystyle\int\prod_j dq_j^{(2)}dp_j^{(2)}\;e^{-\beta H_2}}
{\dfrac{1}{N!}\displaystyle\int\prod_i^{N_1}dq_i^{(1)}dp_i^{(1)}\prod_j^{N_2}dq_j^{(2)}dp_j^{(2)}\;e^{-\beta(H_1+H_2)}}
\tag{2.4}
\]

Proof: Let’s now see that the normalization holds:

\[
\int\prod_i dq_i^{(1)}dp_i^{(1)}\,\rho_{gc}
= \frac{N!}{N_1!\,N_2!}\,
\frac{\displaystyle\int_{V_1}\prod_i dq_i^{(1)}dp_i^{(1)}e^{-\beta H_1}\int_{V_2}\prod_j dq_j^{(2)}dp_j^{(2)}e^{-\beta H_2}}
{\displaystyle\int_V\prod_i dq_i^{(1)}dp_i^{(1)}\prod_j dq_j^{(2)}dp_j^{(2)}\;e^{-\beta(H_1+H_2)}}
\]

We can multiply the first integral at the numerator by (V 1∕V 1)N1, the second by (V 2∕V 2)N2 and the denominator by (V∕V )N, obtaining:

\[
= \frac{N!}{N_1!\,N_2!}\,\frac{V_1^{N_1}V_2^{N_2}}{V^N}\;
\frac{\left(\dfrac{1}{V_1^{N_1}}\displaystyle\int_{V_1}\prod_i dq_i^{(1)}dp_i^{(1)}e^{-\beta H_1}\right)\left(\dfrac{1}{V_2^{N_2}}\displaystyle\int_{V_2}\prod_j dq_j^{(2)}dp_j^{(2)}e^{-\beta H_2}\right)}
{\dfrac{1}{V^N}\displaystyle\int_V\prod_{i,j}dq_i^{(1)}dp_i^{(1)}dq_j^{(2)}dp_j^{(2)}\;e^{-\beta(H_1+H_2)}}
\]

If V1, V2, V are finite these normalized integrals are different, but in the thermodynamic limit V1, V2, V → ∞ their ratio tends to 1, so:

\[
\int\rho_{gc} = \frac{N!}{N_1!\,N_2!}\left(\frac{V_1}{V}\right)^{N_1}\left(\frac{V_2}{V}\right)^{N_2}
\]

However, N1, N2 are not fixed (only N is), so we have a different contribution for each value of N1. What we actually want to compute is:

\[
\sum_{N_1=0}^{N}\int\prod_{i=1}^{N_1}dq_i^{(1)}dp_i^{(1)}\;\rho_{gc}\!\left(q_i^{(1)},p_i^{(1)}\right)
= \sum_{N_1=0}^{N}\frac{N!}{N_1!\,(N-N_1)!}\left(\frac{V_1}{V}\right)^{N_1}\left(\frac{V_2}{V}\right)^{N-N_1}
= \left(\frac{V_1}{V}+\frac{V_2}{V}\right)^{N} = 1
\]

where we recognized the expansion of the binomial.

So what we have proved is that the sum over all the possible numbers of particles (Σ_{N1=0}^{∞}), integrated over all the phase space, is 1:

\[
\boxed{\;\sum_{N_1=0}^{\infty}\int\prod_{i=1}^{N_1}dq_i^{(1)}dp_i^{(1)}\;\rho_{gc}\!\left(q_i^{(1)},p_i^{(1)}\right) = 1\;}
\]

____________________________________________________________________________________

We can re-write the (2.4):

\[
\rho_{gc} = \frac{e^{-\beta H_1}}{h^{dN_1}N_1!}\;\frac{Z_{N_2}[V_2,T]}{Z_N[V,T]}
= \frac{e^{-\beta H_1}}{h^{dN_1}N_1!}\;\frac{e^{-\beta F[N_2,V_2,T]}}{e^{-\beta F[N,V,T]}}
= \frac{e^{-\beta H_1}}{h^{dN_1}N_1!}\;e^{\beta\left(F[N,V,T]-F[N_2,V_2,T]\right)}
\]

Since N1 ≪ N and V1 ≪ V, we can Taylor expand:

\[
F[N,V,T]-F[N_2,V_2,T] = F(N,V,T)-F(N-N_1,V-V_1,T)
\simeq \left.\frac{\partial F}{\partial N}\right|_{V,T}N_1 + \left.\frac{\partial F}{\partial V}\right|_{N,T}V_1 + \dots
= \mu\,N_1 + (-p)\,V_1 + \dots
\]

So we have:

\[
\rho_{gc}\!\left(q_i^{(1)},p_i^{(1)}\right) = \frac{1}{h^{dN_1}N_1!}\,e^{-\beta H_1\left(q_i^{(1)},p_i^{(1)}\right)}\,e^{+\beta\mu N_1}\,e^{-\beta p V_1}
\]

The constant can be absorbed in the measure:

\[
1 = \sum_{N_1}\Bigg[\int\underbrace{\frac{\prod_i dq_i^{(1)}dp_i^{(1)}}{h^{dN_1}N_1!}}_{d\Omega}\;e^{-\beta H_1\left(q^{(1)},p^{(1)}\right)}\Bigg]\,e^{+\beta\mu N_1}\,e^{-\beta p V_1}
\tag{2.5}
\]

So usually we don't write the constant 1/(h^{dN1} N1!) explicitly.

Now that we have integrated out all the quantities related to the environment, we will drop the superscript (1) from everywhere:

\[
\rho_{gc}(q_i,p_i) = e^{-\beta H(q_i,p_i)}\,e^{+\beta\mu N}\,e^{-\beta pV}
\]

Using the granpotential Ω = −pV and defining the fugacity z ≡ e^{+βμ}:

\[
\rho_{gc} = e^{-\beta H}\,z^N\,e^{\beta\Omega}
\]

From the normalization (2.5) we obtain the grancanonical partition function:

\[
e^{-\beta\Omega} = \sum_N Z_N\,z^N \equiv \mathcal Z \qquad \text{(grancanonical partition function)}
\]

so the grancanonical probability distribution becomes:

\[
\rho_{gc} = \frac{1}{\mathcal Z}\,z^N e^{-\beta H} = \frac{1}{\mathcal Z}\,e^{-\beta(H-\mu N)}
\]

where H − μN is sometimes denoted by K, the grancanonical hamiltonian.

A simple application of these formulas: the perfect gas (exercise 3.1):

\[
Z_N = \frac{V^N}{N!\,\lambda_T^{3N}}
\]

\[
\mathcal Z = \sum_{N=0}^{\infty}z^N\,\frac{V^N}{N!\,\lambda_T^{3N}} = \exp\!\left(\frac{zV}{\lambda_T^3}\right)
\]

\[
\Omega = -\frac{1}{\beta}\log\mathcal Z = -k_BT\,\frac{zV}{\lambda_T^3}
\]
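As a sketch (with arbitrary values of z, V and λT, not from the lectures), we can verify numerically that the truncated sum Σ z^N Z_N converges to exp(zV/λT³), and that ⟨N⟩ = z ∂log𝒵/∂z = zV/λT³:

```python
# Sketch: grand partition function of the perfect gas, summed term by term.
import math

def grand_Z(z, V, lam, n_max=120):
    x = z * V / lam**3
    total, term = 0.0, 1.0          # term = x^N / N!, updated iteratively
    for N in range(n_max):
        total += term
        term *= x / (N + 1)
    return total

z, V, lam = 0.3, 10.0, 1.2
x = z * V / lam**3
print(grand_Z(z, V, lam), math.exp(x))   # truncated sum matches exp(zV/λ³)

# <N> = z d(log Z)/dz, by central difference, equals zV/λ³
dz = 1e-6
meanN = z * (math.log(grand_Z(z + dz, V, lam))
             - math.log(grand_Z(z - dz, V, lam))) / (2 * dz)
print(meanN, x)
```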

Definition: Grancanonical average
Given an observable fN(qi,pi) (the subscript N is because the expression of the observable can differ according to the number of particles), the grancanonical average of that observable is:

\[
\langle f\rangle_{gc} = \sum_{N=0}^{\infty}\int\frac{\prod_{i=1}^{N}dq_i\,dp_i}{N!\,h^{dN}}\;\frac{e^{-\beta(H-\mu N)}}{\mathcal Z}\,f_N(q_i,p_i)
= \frac{1}{\mathcal Z}\sum_{N=0}^{\infty}z^N Z_N\,\frac{\int e^{-\beta H}f_N}{Z_N}
= \frac{1}{\mathcal Z}\sum_{N=0}^{\infty}z^N Z_N\,\langle f_N\rangle_c
\]

2.3.1 Thermodynamical quantities

2.3.2 Virial expansion (van der Waals gases)

A real gas is a 3D gas where

\[
H_N = \sum_{j=1}^{N}\frac{\vec p_j^{\,2}}{2m} + \sum_{i<j}U(\vec r_i,\vec r_j)
\]

We assume that the potential is a van der Waals potential, U = U(r) with r = |r⃗i − r⃗j|, so there are only pairwise interactions.
Since this is a gas, we also assume that the interactions are weak, otherwise the system could become liquid or solid.

Starting from the grancanonical partition function, if z = eβμ > 0 is small, we can expand the expression up to the second order (Virial expansion):

\[
\mathcal Z = \sum_{N=0}^{\infty}z^N Z_N \simeq 1 + zZ_1 + z^2Z_2 + \dots
\]

where Z1 is the partition function with only 1 particle, Z2 considers 2 particles, etc. So:

\[
Z_1 = \int\frac{d^3r\,d^3p}{h^3}\;e^{-\beta\vec p^{\,2}/2m} = \frac{V}{\lambda_T^3}
\]

\[
Z_2 = \frac{1}{2!}\int\frac{d^3r_1\,d^3p_1}{h^3}\,\frac{d^3r_2\,d^3p_2}{h^3}\;e^{-\beta(\vec p_1^{\,2}+\vec p_2^{\,2})/2m}\,e^{-\beta U(r)}
\]

Changing coordinates: R⃗_CM = ½(r⃗1 + r⃗2), r⃗ = r⃗1 − r⃗2:

\[
Z_2 = \frac{1}{2\lambda_T^6}\int d^3R_{CM}\int d^3r\;e^{-\beta U(r)} = \frac{V}{2\lambda_T^6}\int d^3r\;e^{-\beta U(r)}
\]

So:

\[
\mathcal Z = 1 + \frac{Vz}{\lambda_T^3} + \frac{Vz^2}{2\lambda_T^6}\int d^3r\;e^{-\beta U(r)} + o(z^2)
\]

And we can obtain Ω, n and p:

\[
-pV = \Omega = -\frac{1}{\beta}\log\mathcal Z \;\Longrightarrow\; \beta p = \frac{p}{k_BT} = \frac{1}{V}\log\mathcal Z
\]

\[
N = z\left.\frac{\partial}{\partial z}\log\mathcal Z\right|_{\beta,V}
\]

To compute log 𝒵, recall that log(1+x) ≃ x − x²/2 + … for small x:

\[
\log\mathcal Z = \frac{Vz}{\lambda_T^3} + \frac{Vz^2}{2\lambda_T^6}\int d^3r\;e^{-\beta U(r)} - \frac{1}{2}\frac{V^2z^2}{\lambda_T^6} + o\!\left(\frac{z^3}{\lambda_T^9}\right)
= \frac{Vz}{\lambda_T^3} + \frac{Vz^2}{2\lambda_T^6}\left(\int_V d^3r\;e^{-\beta U(r)} - \int_V d^3r\right)
= \frac{Vz}{\lambda_T^3} + \frac{Vz^2}{2\lambda_T^6}\,J_2(\beta)
\]

where J2(β) ≡ ∫V d³r [e^{−βU(r)} − 1] is the second virial coefficient.

From that we can obtain:

\[
n = \frac{N}{V} = \frac{z}{\lambda_T^3} + \frac{z^2}{\lambda_T^6}\,J_2(\beta)
\tag{2.6}
\]

\[
\beta p = \frac{z}{\lambda_T^3} + \frac{z^2}{2\lambda_T^6}\,J_2(\beta)
\tag{2.7}
\]

For a perfect gas, particles are non-interacting, so J2 = 0. Thus:

\[
n = \frac{z}{\lambda_T^3} \qquad p = k_BT\,\frac{z}{\lambda_T^3} = k_BT\,n
\]

So, since we have done an expansion in z/λT³, we can equivalently expand in the density n. Assuming n small means considering a dilute gas.

(V) MON (ex.1) 24/10/2022
27/10/2022
From the density expansion (2.6) we can obtain:

\[
z = \frac{-\dfrac{1}{\lambda_T^3} \pm \dfrac{1}{\lambda_T^3}\sqrt{1+4nJ_2}}{\dfrac{2J_2}{\lambda_T^6}}
\]

In the case of no interaction (perfect gas) J2 = 0, while in the dilute limit z ≃ nλT³ ≪ 1.
Reminding that √(1+x) ≃ 1 + x/2 − x²/8 + … for x ≪ 1:

\[
z \simeq \frac{\lambda_T^3}{2J_2}\left[-1 \pm \left(1 + \frac{1}{2}(4nJ_2) - \frac{1}{8}(4nJ_2)^2 + \dots\right)\right]
\]

Now we need to decide which solution to take. Since at first order we must recover z ≃ nλT³, we have to take the positive sign, so:

\[
z = \underbrace{n\lambda_T^3}_{\text{perfect gas}} \;-\; \underbrace{n^2 J_2\,\lambda_T^3}_{\text{correction}}
\]

From the expansion of the pressure (2.7), one can obtain:

\[
\frac{p}{k_BT} = \frac{z}{\lambda_T^3} + \frac{z^2}{2\lambda_T^6}\,J_2(\beta) + \dots
\;\Longrightarrow\;
p = k_BT\,n\Big[\underbrace{1}_{\text{perfect gas}} - \underbrace{\frac{J_2}{2}\,n}_{\text{van der Waals}}\Big] + O(n^3)
\]

Now, if we assume that particles are spheres and the potential is the one in the Fig. 2.1, we can find the van der Waals equation of a real gas by expanding the term J2:

\[
J_2(\beta) = 4\pi\int_0^{\infty}dr\,r^2\left[e^{-\beta U(r)}-1\right]
= 4\pi\int_0^{2r_0}dr\,r^2\left[e^{-\beta U(r)}-1\right] + 4\pi\int_{2r_0}^{\infty}dr\,r^2\left[e^{-\beta U(r)}-1\right]
\]

In the hard-core region (r < 2r0) the potential is strongly repulsive, so e^{−βU(r)} ≃ 0; in the attractive tail (r > 2r0) we have β|U(r)| ≪ 1, so the first-order Taylor expansion gives e^{−βU(r)} − 1 ≃ −βU(r) = β|U(r)|:

\[
J_2(\beta) \simeq -4\pi\int_0^{2r_0}dr\,r^2 + \frac{4\pi}{k_BT}\int_{2r_0}^{\infty}dr\,r^2\,|U(r)|
= \underbrace{-2b}_{<0} + \underbrace{\frac{2a}{k_BT}}_{>0}
\]

where b is of the order of the volume occupied by two particles, and a is the average measure of the attractive potential.


Figure 2.1: Van der Waals potential. It is repulsive where U(r) > 0 (in the figure the potential is denoted V(r)).


So we obtain −J2(β)/2 = b − a/(kBT), and from the expression of the pressure:

\[
\boxed{\;p = k_BT\,n\left[1 + \left(b - \frac{a}{k_BT}\right)n\right] + O(n^3)\;}
\]

and defining n = N∕V = 1∕v, we obtain the van der Waals equation of a real gas:

\[
\boxed{\;\left(p + \frac{a}{v^2}\right)(v - b) = k_BT\;}
\]
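As a consistency check (a sketch with arbitrarily chosen values of a, b and kBT), the van der Waals equation above agrees with the density expansion p = kBT n[1 + (b − a/kBT)n] at small n:

```python
# Sketch: van der Waals pressure vs. its second-order virial expansion.
def p_vdw(n, a=0.5, b=0.1, kBT=2.0):
    v = 1.0 / n                      # volume per particle
    return kBT / (v - b) - a / v**2  # (p + a/v^2)(v - b) = kBT solved for p

def p_virial(n, a=0.5, b=0.1, kBT=2.0):
    return kBT * n * (1 + (b - a / kBT) * n)

for n in (1e-3, 1e-2):
    print(p_vdw(n), p_virial(n))     # agree up to O(n^3) corrections
```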

RECAP:

\[
\rho_{mc} = \frac{1}{\omega(E)}\,\delta(H-E) \qquad
\rho_c = \frac{e^{-\beta H}}{Z_N} \qquad
\rho_{gc} = \frac{e^{-\beta(H-\mu N)}}{\mathcal Z}
\]

\[
S = -k_B\langle\log\rho\rangle \;\stackrel{\text{TD limit}}{=}\; S_{Th} = k_B\log\underbrace{\Sigma(E)}_{\#\text{states}}
\]

2.4 State counting and Entropy

We have seen that Boltzmann's universal law gives a value for the entropy which (in the thermodynamic limit) is the same as the thermodynamic entropy. We also know from the variational principles of thermodynamics that equilibrium corresponds to maximum entropy.

In this brief discussion, we fix our attention to the canonical ensemble, but similar considerations hold for the grancanonical one.

In some cases the energy is discretized and we use E (instead of H) to indicate the energy level. Each energy level can be degenerate, meaning that more than one state has that energy. We indicate the degeneracy with gi = g(Ei).

We define the Boltzmann’s weight as the probability to have energy E:

      e-βE         ∑      - βE                ∑
ρE  = -Z---  ZN  =     gEe     = ⇒  S =  - kB    ρE logρE
        N           E                          E
(2.8)

In some cases we can measure the energy of a system but cannot determine its microstate. However, this is a general formula and we can use it even when we don't know the microstate.

Let’s look back at the ensemble description:

Is there a principle to derive the probability distribution describing equilibrium?

The probability distribution describing equilibrium is the one corresponding to maximum entropy, given the macroscopic constraints.

Remark. This set-up (Boltzmann, Gibbs) is grounded on the idea that
i) we have a clear identification of what a micro/macrostate is;
ii) probabilities are a-priori defined quantities.

Previously, one constructed a theory based on the equations of motion, supplemented by additional hypotheses of ergodicity, metric transitivity, or equal a priori probabilities, and the identification of entropy was made only at the end, by comparison of the resulting equations with the laws of phenomenological thermodynamics. Now, however, we can take entropy as our starting concept, and the fact that a probability distribution maximizes the entropy subject to certain constraints becomes the essential fact which justifies use of that distribution for inference.
E.T. Jaynes, Information theory and Statistical Mechanics, Phys. Rev. 106 (1957) 620

Inference problem

The "objective" school of thought regards the probability of an event as an objective property of that event, always capable in principle of empirical measurement by observation of frequency ratios in a random experiment.
On the other hand, the "subjective" school of thought regards probabilities as expressions of human ignorance: the probability of an event is merely a formal expression of our expectation that the event will or did occur, based on whatever information is available.

The inference problem is the following:
if the only information we have is that a certain function of x has a given mean value ⟨f⟩ = Σ_{j=1}^{N} pj f(xj), what is the expectation value of another function g(x)? We must use the probability distribution which has maximum entropy subject to the constraints:

\[
\sum_{j=1}^{N}p_j = 1 \qquad \sum_{j=1}^{N}p_j\,f(x_j) = \langle f(x)\rangle
\]

which is obtained by maximizing (with Lagrange multipliers) the function:

\[
A = -\sum_{j=1}^{N}p_j\log p_j + \alpha\left(\sum_{j=1}^{N}p_j - 1\right) + \gamma\left(\sum_{j=1}^{N}p_j\,f(x_j) - \langle f\rangle\right)
\]

Remark: It can be easily generalized to more observables and/or higher moments of the distribution
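As a numerical illustration of this maximization (a sketch with made-up levels and target mean; the bisection on the multiplier is my own choice, not from the notes), the maximum-entropy distribution under a fixed mean is exponential in f(xj):

```python
# Sketch: the maxent distribution under a fixed <f> is p_j ∝ exp(-gamma*f(x_j));
# we fix the multiplier gamma by bisection so that the mean constraint is met.
import math

def maxent_probs(f_vals, target_mean, lo=-50.0, hi=50.0, iters=200):
    def mean_for(gamma):
        w = [math.exp(-gamma * f) for f in f_vals]
        return sum(wi * f for wi, f in zip(w, f_vals)) / sum(w)
    for _ in range(iters):           # mean_for is decreasing in gamma
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    gamma = 0.5 * (lo + hi)
    w = [math.exp(-gamma * f) for f in f_vals]
    Z = sum(w)
    return [wi / Z for wi in w]

levels = [0.0, 1.0, 2.0, 3.0]
p = maxent_probs(levels, target_mean=1.2)
print(p, sum(p), sum(pi * x for pi, x in zip(p, levels)))
```

The resulting pj are log-linear in the levels, which is the Boltzmann-like form derived below.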

2.4.1 Probability distribution from maximum entropy principle

We have a finite set of energy levels Er, each with degeneracy gr, and a total energy E to distribute among N copies of the system, with nr copies in level Er. The number of ways to do that is W{nr} = W(1){nr} · W(2){nr},
where W(1) does not consider degeneracy (it counts in how many ways we can assign nr copies to the level Er), while W(2) counts the ways the nr particles can be distributed among the gr states of each level.

We first consider classical particles, so they are distinguishable if they have different energy. So:

\[
W^{(1)}_{\{n_r\}} = \frac{N!}{n_1!\,n_2!\cdots n_p!} \qquad (N\ \text{fixed})
\]

\[
W^{(2)}_{\{n_r\}} = \prod_r g_r^{\,n_r} \qquad (E_r\ \text{fixed: }n_r\ \text{particles in }g_r\ \text{states})
\]

From the maximum entropy principle, the equilibrium distribution corresponds to the maximum of the entropy S = kB log W{nr} with the constraints:

\[
N = \sum_r n_r \qquad E = \sum_r n_r E_r
\]

So, in the classical case:

\[
W_{\{n_r\}} = N!\prod_{r=1}^{p}\frac{g_r^{\,n_r}}{n_r!} \qquad S = k_B\log W_{\{n_r\}}
\]

\[
A = k_B\log W_{\{n_r\}} + \alpha\Big(N-\sum_r n_r\Big) + \beta\Big(E-\sum_r n_rE_r\Big)
\]

\[
= k_B\Big[\log N! + \sum_r n_r\log g_r - \sum_r\log n_r!\Big] + \alpha\Big(N-\sum_r n_r\Big) + \beta\Big(E-\sum_r n_rE_r\Big)
\]

Using Stirling's approximation, log n! ≃ n log n − n (the −N and +Σr nr terms cancel):

\[
= k_B\Big[N\log N + \sum_r n_r\log g_r - \sum_r n_r\log n_r\Big] + \alpha\Big(N-\sum_r n_r\Big) + \beta\Big(E-\sum_r n_rE_r\Big)
\]

To maximize, we set the derivative to zero (absorbing the factor kB into the multipliers α, β):

\[
0 = \left.\frac{\partial A}{\partial n_r}\right|_{n_r=n_r^*} = \log g_r - \log n_r^* - 1 - \alpha - \beta E_r
\]

and we find the occupation numbers that maximize the entropy:

\[
n_r^* = e^{-(1+\alpha)}\,g_r\,e^{-\beta E_r}
\qquad
N = \sum_r n_r^* = e^{-(1+\alpha)}\sum_r g_r\,e^{-\beta E_r}
\]

And we can get the probability for a particle to be in the energy level r:

\[
p_r = \frac{n_r^*}{N} = \frac{g_r\,e^{-\beta E_r}}{\sum_{r=1}^{p}g_r\,e^{-\beta E_r}} = \frac{g_r\,e^{-\beta E_r}}{Z}
\]

So we get the same expression as in (2.8). Moreover, one identifies the Lagrange multiplier β = 1/(kBT).

Quantum particles, instead, are always indistinguishable, not only when they have the same energy. Thus there is only one way to have n1 particles with energy E1, n2 particles with energy E2, etc. So W(1){nr} = 1 and

\[
W_{\{n_r\}} = W^{(1)}_{\{n_r\}}\,W^{(2)}_{\{n_r\}} = W^{(2)}_{\{n_r\}}
\]

For bosons, we can imagine putting the particles in a line and drawing boundaries to select in which energy level they are. So we have nr indistinguishable particles and gr − 1 indistinguishable boundaries, i.e. nr + gr − 1 objects in total:

\[
W_{\{n_r\}} = W^{(2)}_{\{n_r\}} = \frac{(n_r+g_r-1)!}{n_r!\,(g_r-1)!}
\]

For fermions, we can put at most 1 particle for each ”box”, which is like saying that each box can be empty or with a ball (nr < gr). So it’s like selecting nr objects out of gr possibilities:

\[
W_{\{n_r\}} = W^{(2)}_{\{n_r\}} = \frac{g_r!}{n_r!\,(g_r-n_r)!}
\]

We will derive these distributions again in the next chapter.
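The two counting formulas can be checked against brute-force enumeration of the occupations of a single level (a small sketch):

```python
# Sketch: count the ways to put n indistinguishable particles into g states,
# with (fermions) or without (bosons) the exclusion constraint.
from math import comb
from itertools import product

def W_boson(n, g):            # (n + g - 1)! / (n! (g-1)!)
    return comb(n + g - 1, n)

def W_fermion(n, g):          # g! / (n! (g-n)!), requires n <= g
    return comb(g, n)

def brute_force(n, g, fermionic):
    count = 0
    for occ in product(range(n + 1), repeat=g):   # occupation of each state
        if sum(occ) == n and (not fermionic or max(occ) <= 1):
            count += 1
    return count

print(W_boson(3, 4), brute_force(3, 4, fermionic=False))   # 20 20
print(W_fermion(2, 5), brute_force(2, 5, fermionic=True))  # 10 10
```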

Chapter 3
Quantum Statistical Mechanics

3.1 Review on Quantum Mechanics and Statistics

3.1.1 Quantum System

(VI) THU 03/11/2022
The degrees of freedom of a quantum particle are described in terms of a vector in a Hilbert space. With Dirac's notation, vectors are denoted |ψ⟩ ∈ H. We can have linear superpositions: λ|ψ1⟩ + μ|ψ2⟩ ∈ H.
The scalar product of two vectors is called a braket.

A (pure) quantum state is a ray (an equivalence class):

\[
|\psi\rangle \sim e^{i\theta}|\psi\rangle \qquad \langle\psi| \sim e^{-i\theta}\langle\psi| \qquad \langle\psi|\psi\rangle = 1
\]

So |ψ⟩ ∼ λ|ψ⟩ for any phase λ = e^{iθ}, |λ| = 1.

The projection operator represents a quantum state uniquely:

\[
\mathbb P_\psi = \frac{|\psi\rangle\langle\psi|}{\langle\psi|\psi\rangle}
\]

It is a projection operator, i.e. ℙψ† = ℙψ and (ℙψ)² = ℙψ.

Proof:

\[
\mathbb P_\psi^\dagger = \frac{\left(|\psi\rangle\langle\psi|\right)^\dagger}{\langle\psi|\psi\rangle} = \frac{\left(\langle\psi|\right)^\dagger\left(|\psi\rangle\right)^\dagger}{\langle\psi|\psi\rangle} = \frac{|\psi\rangle\langle\psi|}{\langle\psi|\psi\rangle} = \mathbb P_\psi
\]

\[
(\mathbb P_\psi)^2 = \frac{|\psi\rangle\langle\psi|\psi\rangle\langle\psi|}{\langle\psi|\psi\rangle^2} = \mathbb P_\psi
\]

____________________________________________________________________________________

This operator projects on the linear subspace generated by |ψ⟩:

\[
H_\psi = \{\lambda|\psi\rangle,\ \lambda\in\mathbb C\} \qquad \mathbb P_\psi(\lambda|\psi\rangle) = \lambda|\psi\rangle
\]

\[
|\phi\rangle \in H_\psi^{\perp} = \{|\phi\rangle\in H \,:\, \langle\psi|\phi\rangle = 0\} \;\Longrightarrow\; \mathbb P_\psi|\phi\rangle = 0
\]

An observable is given by a self-adjoint operator A: H → H, A = A†, for which the spectral theorem holds: A|ψj⟩ = λj|ψj⟩, with the eigenvectors {|ψn⟩} forming an o.n. basis.

Since we're only referring to bounded operators, the words hermitian and self-adjoint are equivalent.

So each vector of the Hilbert space can be expressed as a linear combination of the basis vectors:

\[
H \ni |\psi\rangle = \sum_n \epsilon_n\,|\psi_n\rangle
\]

\[
\text{if } n\neq m:\quad \mathbb P_n\mathbb P_m = |\psi_n\rangle\underbrace{\langle\psi_n|\psi_m\rangle}_{0}\langle\psi_m| = 0
\;\Longrightarrow\; \boxed{\;\mathbb P_n\mathbb P_m = \delta_{nm}\mathbb P_n\;}
\]

Also, the sum ℙ1 + ℙ2 is the projection over the span (linear combinations) of ψ1, ψ2, so:

\[
\boxed{\;\sum_n \mathbb P_n = \mathbb I\;} \qquad \text{(completeness)}
\]

Let's recap the spectral theorem: if an operator A is self-adjoint, there exists a set of projection operators that diagonalize it:

\[
\boxed{\;A = \sum_n \lambda_n\,\mathbb P_n\;} \qquad \mathbb P_n = |\psi_n\rangle\langle\psi_n|
\tag{3.1}
\]

with: ℙn† = ℙn, ℙnℙm = δnmℙn, Σn ℙn = 𝕀, λn ∈ ℝ.

The Evolution of a system is fixed by a special observable, called hamiltonian H, through the Schrodinger equation:

iℏ ∂-|ψ-(t)⟩-= H  |ψ(t)⟩
     ∂t

We will consider only cases where the hamiltonian is time-independent. In this case the evolution is fixed by a unitary operator U:

                                     -itH∕ℏ
|ψ(t)⟩ = U (t) |ψ (t = 0 )⟩    U (t) = e

Unitary means U(t)† = U(t)⁻¹ = U(−t).
That implies ⟨ψ(t)|ψ(t)⟩ = ⟨ψ(t=0)|ψ(t=0)⟩, so normalization is preserved (thus probability is conserved).

The dynamics of a quantum system is perfectly deterministic: if we know |ψ(t=0)⟩ and apply the equation above, we have the evolution. The probabilistic aspect arises in the measurement.

A measure of an observable A on a state |ψ⟩ yields a set of possible outcomes {λn}, corresponding to its eigenvalues, with probabilities pn given by:

\[
p_n = |c_n|^2 \qquad \text{where } |\psi\rangle = \sum_n c_n|\psi_n\rangle \quad \Big(\sum_n|c_n|^2 = 1 \Longleftrightarrow \langle\psi_i|\psi_j\rangle = \delta_{ij}\Big)
\]

\[
p_n = \langle\psi|\mathbb P_n|\psi\rangle = \langle\psi|\psi_n\rangle\langle\psi_n|\psi\rangle = |\langle\psi_n|\psi\rangle|^2
\]

\[
\langle\psi_n|\psi\rangle = \Big\langle\psi_n\Big|\sum_m c_m\,\psi_m\Big\rangle = c_n
\]

So: pn = |cn|² = ⟨ψ|ℙn|ψ⟩.

Remark: after the measurement, the state |ψ⟩ collapses into |ψn⟩.

Also, we can note that, using the spectral decomposition of the operator A (eq. 3.1), we can write:

\[
\langle A\rangle = \langle\psi|A|\psi\rangle = \Big\langle\psi\Big|\sum_n\lambda_n\mathbb P_n\Big|\psi\Big\rangle = \sum_n\lambda_n\,\langle\psi|\mathbb P_n|\psi\rangle = \sum_n\lambda_n\,p_n
\]

which is the statistical average.

This kind of measure is called projective measure; the notion of measurement can also be generalized by not starting from the decomposition of A into projection operators.

3.1.2 Density matrix

Suppose we have two particles described by H1, H2. Then H_TOT = H1 ⊗ H2 and, if dim H1 = n and dim H2 = m, dim(H1 ⊗ H2) = nm. This is different from classical mechanics, where the total space is the Cartesian product of the two spaces, M1 × M2, with dim(M1 × M2) = n + m.

Let {|ψn⟩}n, {|ϕm⟩}m be the o.n. bases of H1, H2 respectively. Then H1 ⊗ H2 is generated by the o.n. basis |ψn⟩⊗|ϕm⟩ ≡ |ψnϕm⟩.
That means that every object of this space can be written as a linear combination of the following objects:

\[
H_1\otimes H_2 \ni |\psi\rangle = \sum_{n,m}\alpha_{n,m}\,|\psi_n\phi_m\rangle
\]

\[
\langle\psi_n\phi_m|\psi_{n'}\phi_{m'}\rangle = \langle\psi_n|\psi_{n'}\rangle_{H_1}\,\langle\phi_m|\phi_{m'}\rangle_{H_2} = \delta_{nn'}\,\delta_{mm'}
\]

If we take a vector |ψ⟩ ∈ H, its projector is

\[
\mathbb P_\psi = |\psi\rangle\langle\psi|
\]

Theorem: If ρψ = ℙψ = |ψ⟩⟨ψ|, then ρψ is:

i. A bounded operator: ∥ρψ∥ ≤ 1
ii. Self-adjoint: ρψ† = ρψ
iii. Positive: ⟨α|ρψ|α⟩ ≥ 0 ∀|α⟩
iv. Unit-trace: Tr[ρψ] = 1
v. Idempotent: ρψ² = ρψ

Proof: Let's prove the new ones (iii and iv):

iii.
\[
\langle\alpha|\rho_\psi|\alpha\rangle = \langle\alpha|\psi\rangle\langle\psi|\alpha\rangle = |\langle\alpha|\psi\rangle|^2 \geq 0
\]

iv. Recall that [M]nm = ⟨en|M|em⟩, with {en} an o.n. basis, and

\[
\mathrm{Tr}[M] = \sum_{n=1}^{N}\langle e_n|M|e_n\rangle \qquad \text{(finite dimension)}
\]

The trace is:

(1) linear: Tr[M1 + M2] = Tr[M1] + Tr[M2]
(2) cyclic: Tr[M1M2⋯Mk] = Tr[MkM1⋯Mk−1]

So it is independent of the chosen o.n. basis. If U is the matrix of a basis change:

\[
M \to U^{-1}MU \;\Longrightarrow\; \mathrm{Tr}[U^{-1}MU] = \mathrm{Tr}[UU^{-1}M] = \mathrm{Tr}[M]
\]

In infinite dimension we require A to be a trace-class operator, i.e. such that:

\[
\mathrm{Tr}\,A \equiv \sum_{n=1}^{\infty}\langle e_n|A|e_n\rangle < +\infty
\]
       n=1

We want to prove that ρψ = |ψ⟩⟨ψ| is a trace-class operator with Tr[ρψ] = 1.
We can choose the o.n. basis {|e1⟩ = |ψ⟩, |e2⟩, |e3⟩, …} such that:

\[
\langle\psi|e_1\rangle = \langle\psi|\psi\rangle = 1 \qquad \langle\psi|e_j\rangle = 0 \quad j\geq 2
\]

\[
\mathrm{Tr}[\rho_\psi] = \sum_n\langle e_n|\rho_\psi|e_n\rangle = \sum_n\underbrace{\langle e_n|\psi\rangle}_{\delta_{n1}}\underbrace{\langle\psi|e_n\rangle}_{\delta_{n1}} = 1
\]

____________________________________________________________________________________

Let's now prove the converse:
Theorem: If ρ is such that (i)–(v) are satisfied, then there exists |ψ⟩ ∈ H such that ρ = |ψ⟩⟨ψ|.

Proof: From (i) and (ii) it follows that ρ is bounded and self-adjoint, so we can write it using the spectral decomposition ρ = Σn λnℙn, with:

\[
\mathbb P_n = |e_n\rangle\langle e_n| \qquad \{|e_n\rangle\}\ \text{o.n. basis} \qquad \rho|e_n\rangle = \lambda_n|e_n\rangle
\]

From (iii): λn ≥ 0.
From (v):

\[
\rho^2 = \Big(\sum_n\lambda_n\mathbb P_n\Big)^2 = \sum_{nm}\lambda_n\lambda_m\underbrace{\mathbb P_n\mathbb P_m}_{\delta_{nm}\mathbb P_n} = \sum_n\lambda_n^2\,\mathbb P_n \stackrel{(v)}{=} \sum_n\lambda_n\,\mathbb P_n \;\Longleftrightarrow\; \lambda_n^2 = \lambda_n
\]

so each λn is either 0 or 1.
From (iv): Tr[ρ] = Σn λn = 1. That means all the λn are 0 apart from one of them, which is 1.

If we suppose λ1 = 1, λ2 = λ3 = ⋯ = 0, then ρ = λ1ℙ1 = ℙ1 = |e1⟩⟨e1|.

____________________________________________________________________________________

This allows us to give the following:

Definition: A pure state of a quantum system is described by an operator ρ such that ρ = |ψ⟩⟨ψ| ⟺ (i)–(v).
ρ is called density operator (matrix).

(VII) MON (ex.2) 07/11/2022
10/11/2022
So we've seen that a pure state is defined by a ray [|ψ⟩], or equivalently by its associated (rank-1) projector:
|ψ⟩ ∈ H normalized, |ψ⟩ ∼ e^{iθ}|ψ⟩ ⟺ density op. ρψ = |ψ⟩⟨ψ| satisfying (i)–(v).

We can also have a mixed state, for instance an electron produced in a lab which is neither spin up nor spin down.
A mixed state is defined by a statistical ensemble of pure states:

{|ψk⟩ ,pk}k  k = 1, ...,M

and it's represented by means of the density operator

\[
\rho \equiv \sum_{k=1}^{M}p_k\,\rho_k \qquad \rho_k = |\psi_k\rangle\langle\psi_k|
\]

A mixed state satisfies (i)–(iv):
(i), (ii) are trivial, because ρ is a convex combination of operators satisfying them;
(iii) ρ ≥ 0 because pk ≥ 0 and ρk ≥ 0;
(iv) Tr[ρ] = Tr[Σk pkρk] = Σk pk Tr[ρk] = Σk pk = 1.

However, (v) does not hold (ρ² ≠ ρ). In fact, if the {|ψk⟩} are orthogonal, ⟨ψk|ψk′⟩ = 0 for k ≠ k′, so:

\[
\rho_k\rho_{k'} = |\psi_k\rangle\underbrace{\langle\psi_k|\psi_{k'}\rangle}_{0}\langle\psi_{k'}| = 0 \qquad (k\neq k')
\]

\[
\rho^2 = \Big(\sum_{k=1}^{M}p_k\rho_k\Big)^2 = \sum_{k=1}^{M}p_k^2\,\rho_k^2 = \sum_{k=1}^{M}p_k^2\,\rho_k
\]

which is equal to ρ = Σk pkρk only if there is a single k̄ with pk̄ = 1 and pk = 0 for every k ≠ k̄.

So (v) is true only if ρ = |ψk⟩⟨ψk | is a pure state.

So we can say that a (generic) state is described by a density operator ρ that satisfies (i)–(iv), and:

Theorem: A density matrix is pure (∃|ψ⟩ : ρ = |ψ⟩⟨ψ|) ⟺ ρ² = ρ
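A quick numerical illustration of the purity criterion (a sketch; the two states chosen are arbitrary): Tr[ρ²] = 1 for a pure state and < 1 for a proper mixture:

```python
# Sketch: purity Tr[rho^2] distinguishes pure states from mixtures.
import numpy as np

def purity(rho):
    return float(np.trace(rho @ rho).real)

psi = np.array([[1.0], [1.0]]) / np.sqrt(2)   # |+> = (|0> + |1>)/sqrt(2)
rho_pure = psi @ psi.conj().T                 # |psi><psi|
rho_mixed = 0.5 * np.eye(2)                   # mixture with p0 = p1 = 1/2

print(purity(rho_pure), purity(rho_mixed))    # close to 1.0 and 0.5
```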

Expectation value:
The expectation value of an observable A in the case ρψ = |ψ⟩⟨ψ| can be written as:

\[
\langle A\rangle_\psi = \langle\psi|A|\psi\rangle = \mathrm{Tr}[\rho_\psi A]
\]

We can generalize this to a mixed state ρ = Σk pkρk:

\[
\langle A\rangle_\rho = \sum_{k=1}^{M}p_k\underbrace{\mathrm{Tr}[\rho_k A]}_{\langle A\rangle_{\psi_k}} = \mathrm{Tr}\Big[\Big(\sum_{k=1}^{M}p_k\rho_k\Big)A\Big] = \mathrm{Tr}[\rho A]
\]

So in general, for any ρ: ⟨A⟩ρ = Tr[ρA].

An example: the Qubit
A classical bit is just a number that can be 0 or 1, while a quantum bit is a 2-level system for which:

\[
H = \mathbb C^2 = \left\{\begin{pmatrix}\alpha\\ \beta\end{pmatrix}\ :\ \alpha,\beta\in\mathbb C\right\}
\qquad |\psi\rangle = \begin{pmatrix}\alpha\\ \beta\end{pmatrix} \qquad |\alpha|^2+|\beta|^2 = 1
\]

An equivalent way to describe it is to choose an o.n. basis {|0⟩, |1⟩}, where:

\[
|0\rangle = \begin{pmatrix}1\\ 0\end{pmatrix} \qquad |1\rangle = \begin{pmatrix}0\\ 1\end{pmatrix}
\]

A qubit is a generic state of this space, which is a linear superposition of the 0 and 1 states:

\[
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle \qquad |\alpha|^2+|\beta|^2 = 1
\]

03/11/2022 (c)
We can describe the evolution of this system:

\[
|\psi\rangle = \alpha|0\rangle+\beta|1\rangle,\ |\alpha|^2+|\beta|^2=1 \;\to\; \alpha'|0\rangle+\beta'|1\rangle,\ |\alpha'|^2+|\beta'|^2=1
\]

through a unitary operator U, which in this case represents rotations on the Bloch sphere. We can also define:

1. I = (1 0; 0 1) :  I|ψ⟩ = |ψ⟩
2. NOT = (0 1; 1 0) = X (Pauli matrix) :  |0⟩ ↦ |1⟩, |1⟩ ↦ |0⟩
3. Z = (1 0; 0 -1) :  |0⟩ ↦ |0⟩, |1⟩ ↦ -|1⟩

each of which is a quantum gate acting on a single qubit.
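These gate actions can be checked numerically; the following NumPy sketch (not part of the original notes) encodes I, X and Z as 2×2 matrices acting on the computational basis:

```python
import numpy as np

# Single-qubit gates from the notes: I, NOT (= X) and Z.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])   # NOT: |0> <-> |1>
Z = np.array([[1, 0], [0, -1]])  # Z: |0> -> |0>, |1> -> -|1>

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# X swaps the basis states, Z flips the sign of |1>;
# both are unitary and involutive (their square is the identity).
```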

With 2 qubits we have:

H1  = {|0⟩1,|1⟩1}     H2 = { |0⟩2,|1⟩2}     HT OT = H1  ⊗ H2

and the o.n basis of HTOT is 4 dimensional, made from:

|0⟩1|0⟩2 = |00 ⟩    |0⟩1|1⟩2 = |01⟩     |1⟩1|0⟩2 = |10⟩     |1⟩1 |1⟩2 = |11⟩

A generic state of 2 qubits is described by:

|ψ⟩ = α₀₀|00⟩ + α₀₁|01⟩ + α₁₀|10⟩ + α₁₁|11⟩   with |α₀₀|² + |α₀₁|² + |α₁₀|² + |α₁₁|² = 1

We can also have separable or entangled states:

1. α₁₀ = α₁₁ = 0:
|ψ⟩ = α₀₀|00⟩ + α₀₁|01⟩ = |0⟩₁ (α₀₀|0⟩₂ + α₀₁|1⟩₂) = |ψ⟩₁|ϕ⟩₂
In this case, the state is called separable, because it can be separated into a product of a state of particle 1 and a state of particle 2.
2. α₀₁ = α₁₀ = 0 ⇒ |ψ⟩ = α₀₀|00⟩ + α₁₁|11⟩
This state is not separable, so we say it's entangled. An example of an entangled state is a Bell state: |ψ⟩ = (|00⟩ + |11⟩)/√2

Suppose the two qubits are spins, with |0⟩ = |↑⟩ and |1⟩ = |↓⟩. Alice and Bob can each measure their spin, and each outcome is unpredictable, occurring with 50% probability. However, if Alice measures ↑, she knows for sure that Bob will measure ↑ too.

10/11/2022 (c)
A qubit is a pure state. In fact (i)-(v) are satisfied. In particular, (v) follows from the condition |α|² + |β|² = 1.

Note that being a pure state means that all the particles are in the state |ψ⟩ = α|0⟩ + β|1⟩; it is then the measurement procedure that makes the state collapse to |0⟩ or |1⟩ (↑ or ↓).

We can notice that if we measure in the computational basis, the outcome probabilities of a pure superposition can be the same as those of a mixed state, even though the two density matrices are different.

Exercise: Design an experiment which is able to determine if the system is in a pure or in a mixed state.
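As a hint for the exercise, here is a numerical sketch (assuming, for illustration, the pure state |+⟩ = (|0⟩+|1⟩)/√2 versus the 50/50 mixture): both give the same probabilities in the computational basis, but the purity Tr[ρ²] and a measurement in the rotated basis {|+⟩, |−⟩} tell them apart.

```python
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # |+> = (|0>+|1>)/sqrt(2)
rho_pure = np.outer(plus, plus.conj())     # pure superposition
rho_mixed = 0.5 * np.eye(2)                # 50/50 statistical mixture

# Same outcome probabilities in the {|0>,|1>} basis...
p_pure = np.real(np.diag(rho_pure))        # [0.5, 0.5]
p_mixed = np.real(np.diag(rho_mixed))      # [0.5, 0.5]

# ...but the purity Tr[rho^2] distinguishes them, as does the
# probability of outcome |+> when measuring in the rotated basis.
purity_pure = np.trace(rho_pure @ rho_pure).real    # = 1
purity_mixed = np.trace(rho_mixed @ rho_mixed).real # = 1/2
p_plus_pure = np.real(plus.conj() @ rho_pure @ plus)
p_plus_mixed = np.real(plus.conj() @ rho_mixed @ plus)
```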

3.1.3 Identical particles - Permutation group

Let's consider a system composed of N subsystems (N particles), each described by a Hilbert space H_j, j = 1,…,N. The system will be described by the Hilbert space:

Htot = H1 ⊗ H2 ⊗  ⋅⋅⋅ ⊗ HN

If the subsystems are identical: H_j ≃ H ⇒ H_tot = H^⊗N.
If they are also indistinguishable, the states span only a subspace of H_tot, whose vectors have special properties under the action of the permutation group: under a permutation (i.e. swapping particles around), the state should be invariant up to a phase. In the following we will see why, and also that there are two ways to achieve this result: this will lead to the definition of bosons and fermions.

We need to see the properties of a quantum system under the effect of permutation group. Let’s see the permutations of the N objects we have:

              σ
(1,2,3,...N ) -→  (σ(1),σ(2),...,σ(N ))

The set of all possible permutations of N elements is a group, because the composition of two permutations is again a permutation (closure), composition is associative, the identity permutation exists, and every permutation has an inverse. Let's call it ℙ_N: the permutation group on N elements.

We can notice that ℙ_N has a finite number of elements (= N!).

A transposition (or elementary permutation) σ_j, j = 1,…,N-1, is a swap between the j-th and the (j+1)-th elements. Any permutation can be decomposed into transpositions. In other words, the N-1 transpositions are the generators of the group:

Theorem: ∀σ ∈ ℙ_N : σ = σ_{α₁}σ_{α₂}…σ_{αₖ}, with k finite.
This decomposition is not unique, and k is not a unique value either. However, all decompositions of the same element have the same parity (k is always even, or always odd).

This allows us to divide the permutations into even and odd permutations.

Definition: sgn(σ) = +1 if k is even, -1 if k is odd.

The transpositions are not all independent, but there are relations between them. In fact, they satisfy the identities:

i.
σiσj = σjσi if |i - j|≥ 2 (visual proof in figure 3.1a)
ii.
σiσi+1σi = σi+1σiσi+1 (visual proof in figure 3.1b)
iii.
(σi)2 = I (trivial)

ℙ_N, the group generated by the N-1 transpositions σ_j, satisfies the properties (i), (ii), (iii) as well.
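The parity of the number of adjacent transpositions can be computed explicitly; here is an illustrative Python sketch (not part of the original notes) that counts the neighbour swaps needed to sort a permutation, à la bubble sort:

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation, computed by counting the adjacent
    transpositions (swaps of neighbours) needed to sort it."""
    p = list(perm)
    swaps = 0
    for i in range(len(p)):
        for j in range(len(p) - 1):
            if p[j] > p[j + 1]:
                p[j], p[j + 1] = p[j + 1], p[j]
                swaps += 1
    # Any other decomposition of the same permutation has the same parity.
    return +1 if swaps % 2 == 0 else -1

# P_3 has 3! = 6 elements, half even and half odd.
signs = {perm: sign(perm) for perm in permutations((1, 2, 3))}
```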


[Figure 3.1: Visual proofs of identities between the transpositions]


3.1.4 Quantum statistic

If some objects are indistinguishable, it means that a permutation among them,

ψ(1, 2, …, N) ↦ ψ(σ(1), σ(2), …, σ(N))

doesn't affect the physical content of the wave function, which is |ψ(x₁, x₂, …, x_N)|². That means that the wave function should be the same up to a global phase:

ψ(σ(1), σ(2), …, σ(N)) = e^{iϕ_σ} ψ(1, 2, …, N)

Remark: this is a physical property, not a mathematical one.

If we then decompose a permutation into the transpositions that generate it, σ = σ_{α₁}σ_{α₂}…σ_{αₖ}:

ψ(1,…,N) ↦(σ_{α₁}) e^{iϕ_{α₁}} ψ(1,…,N)
         ↦(σ_{α₂}) e^{iϕ_{α₂}} (e^{iϕ_{α₁}} ψ(1,…,N))
         …
         ↦(σ_{αₖ}) e^{i(ϕ_{α₁}+ϕ_{α₂}+⋯+ϕ_{αₖ})} ψ(1,…,N) = e^{iϕ_σ} ψ(1,…,N)

with: ϕ_σ = ϕ_{α₁} + ϕ_{α₂} + ⋯ + ϕ_{αₖ}

Let's now analyze a single transposition: σ_j : ψ(1,…,N) ↦ e^{iϕ_j} ψ(1,…,N), which we will simply write as σ_j ↦ e^{iϕ_j}.
This must satisfy the relations (i), (ii), (iii) of the transpositions, and this leads to some considerations on the phase:

i
if |i - j| ≥ 2, then σ_iσ_j = σ_jσ_i, so:
σ_iσ_j ↦ e^{i(ϕ_i+ϕ_j)} and σ_jσ_i ↦ e^{i(ϕ_j+ϕ_i)} ⇐⇒ ϕ_i + ϕ_j = ϕ_j + ϕ_i : trivially satisfied
ii
σ_iσ_{i+1}σ_i = σ_{i+1}σ_iσ_{i+1}, so:
σ_iσ_{i+1}σ_i ↦ e^{i(ϕ_i+ϕ_{i+1}+ϕ_i)} and σ_{i+1}σ_iσ_{i+1} ↦ e^{i(ϕ_{i+1}+ϕ_i+ϕ_{i+1})} ⇐⇒ ϕ_i + ϕ_{i+1} + ϕ_i = ϕ_{i+1} + ϕ_i + ϕ_{i+1} ⇐⇒ ϕ_i = ϕ_{i+1} ∀i

So ϕ_j = ϕ ∀j, and the single transposition σ_j ↦ e^{iϕ_j} becomes simply: σ_j ↦ e^{iϕ}

iii
(σ_j)² = I, so:
(σ_j)² ↦ e^{i(ϕ+ϕ)} = e^{i2ϕ} and I ↦ e^{i2πn} ⇐⇒ 2ϕ = 2πn, n ∈ ℤ

So there are only 2 possibilities (since ϕ ∈ [0, 2π[):

1.  ϕ = 0 :   ψ(1,…,N) ↦(σ_j) ψ(1,…,N)
2.  ϕ = π :   ψ(1,…,N) ↦(σ_j) -ψ(1,…,N)     (e^{iπ} = -1)

This applies to a single transposition. For a whole permutation σ = σ_{α₁}σ_{α₂}…σ_{αₖ} we have ϕ_σ = ϕ_{α₁} + ϕ_{α₂} + ⋯ + ϕ_{αₖ}. So:

1.  ϕ_{α_j} = 0 ⇒ ϕ_σ = 0 :   ψ(1,…,N) ↦ ψ(1,…,N)  ∀σ ∈ ℙ_N     (Bosons)
2.  ϕ_{α_j} = π ⇒ ϕ_σ = kπ :   e^{iϕ_σ} = e^{ikπ} = (-1)^k, i.e. ψ ↦ ψ for k even, ψ ↦ -ψ for k odd     (Fermions)

For bosons, the wave function is completely symmetric. For fermions it is completely anti-symmetric. Remark: what we call bosons and fermions here is due only to the statistics and has nothing to do with spin. Only in relativistic quantum mechanics can one prove the spin-statistics theorem.

(VIII) MON 14/11/2022
An example: System of N = 2 particles in IR³
We can describe two particles in IR³ with two vectors: ⃗x₁, ⃗x₂ ∈ IR³.
The Hilbert spaces for a single particle and for two particles are, respectively,

H_{N=1} = L²(IR³) = {ψ(⃗x₁) square integrable}
H_{N=2} = L²(IR⁶) = {ψ(⃗x₁, ⃗x₂) square integrable}

where L²(IR⁶) = L²(IR³) ⊗ L²(IR³).
Permutations are described by the permutation group ℙ₂ = {I, σ} with σ : x₁ ↔ x₂.
According to our rules, the wave function should be symmetric in the bosonic case and anti-symmetric in the fermionic case. So we define two operators:

Ŝψ(x₁, x₂) = ½ [ψ(x₁, x₂) + ψ(x₂, x₁)]     Âψ(x₁, x₂) = ½ [ψ(x₁, x₂) - ψ(x₂, x₁)]

It is easy to show (as an exercise) that Ŝ and Â are projection operators. In fact Ŝ† = Ŝ, Ŝ² = Ŝ, Â† = Â, Â² = Â.
If we call H_S and H_A respectively the spaces of symmetric and anti-symmetric wave functions:
Ŝ : H ↦ H_S     Â : H ↦ H_A     H_S, H_A ⊂ H = L²(IR⁶)

Also: Ŝ Â = Â Ŝ = 0 ⇒ H_S ⊥ H_A
In fact:

⟨ψ₊|ϕ₋⟩ = ∫ d³⃗x₁ d³⃗x₂ ψ₊*(x₁,x₂) ϕ₋(x₁,x₂) = ∫ (symm)·(antisymm) = ∫ (anti-symmetric function) = 0

So we can write H = H_S ⊕ H_A. In fact every function can be written as the sum of a symmetric and an anti-symmetric function:

ψ(x₁,x₂) = ψ₊(x₁,x₂) + ψ₋(x₁,x₂)     with ψ±(x₁,x₂) = ½ [ψ(x₁,x₂) ± ψ(x₂,x₁)]

Notice that since fermions are described by an anti-symmetric wave function, they can't occupy the same state (the Pauli exclusion principle is automatically included in this construction):

u_α(x₁)u_β(x₂) ↦ [u_α(x₁)u_β(x₂) - u_α(x₂)u_β(x₁)] / 2 = 0   if α = β

Generic N > 2 particles in IR³
With N particles, we can have more transpositions. Let's indicate with P ∈ ℙ_N a permutation:

P̂ : ψ(x₁, x₂, …, x_N) ↦ ψ(x_{P⁻¹(1)}, x_{P⁻¹(2)}, …, x_{P⁻¹(N)})

This just re-shuffles the order of the particles.
We define:

Ŝ = (1/N!) Σ_{P∈ℙ_N} P̂     Ŝ : ψ(x₁,…,x_N) ↦ (1/N!) Σ_P P̂ψ

Â = (1/N!) Σ_{P∈ℙ_N} sgn(P) P̂     sgn(P) = +1 if P is an even permutation, -1 if P is odd

As before, Ŝ and Â are orthogonal projection operators:
Ŝ† = Ŝ, Ŝ² = Ŝ, Â† = Â, Â² = Â, and Ŝ Â = Â Ŝ = 0.
Also: Ŝ : H^⊗N → H_S, Â : H^⊗N → H_A, with H_S ⊥ H_A.

⇒ H^⊗N = H_S ⊕⊥ H_A ⊕⊥ H′
        (bosons)  (fermions)  (non-physical)

With 3 or more particles there are functions that are neither symmetric nor anti-symmetric.

We can give an example of H′ in a system of N particles in IR^d. A single particle is described by H₁ = L²(IR^d) with the orthonormal basis {u_α(x)}_{α=1}^∞.
N particles are described by H_N = L²(IR^d) ⊗ ⋯ ⊗ L²(IR^d) (N times), with the o.n. basis

{ψ_{α₁α₂…α_N}(x₁, x₂, …, x_N) = u_{α₁}(x₁) u_{α₂}(x₂) … u_{α_N}(x_N)}_{α₁α₂…α_N}

Notice that the order is important, because it indicates that particle 1 is in the state α₁, particle 2 in α₂, etc.

We aim at describing in an intrinsic way each H_N(Ŝ), H_N(Â). We can apply the symmetrizer and antisymmetrizer to the basis states:

Ŝ : ψ_{α₁α₂…α_N}(x₁,…,x_N) ↦ ψ^Ŝ_{n₁,n₂,…,n_k,…}(x₁,…,x_N)
Â : ψ_{α₁α₂…α_N}(x₁,…,x_N) ↦ ψ^Â_{n₁,n₂,…,n_k,…}(x₁,…,x_N)

Here we can see that the order is no longer important: what matters is just how many particles are in each state. n_k is called the occupation number and counts just that: how many particles are in the k-th state. Notice that for bosons there are no constraints (n_k = 0, 1, 2, …), while for fermions there can be at most 1 particle in each state (n_k = 0, 1).
Also, the total number of particles should be constant, so

Σ_{k=1}^∞ n_k = N

For example, with N = 3 we have u_α(x₁)u_β(x₂)u_γ(x₃) as an o.n. basis element, and the possible permutations are:

1 2 3  +
1 3 2  -
2 1 3  -
2 3 1  +      (the sign indicates if it's an even or odd permutation)
3 1 2  +
3 2 1  -

So we have:

Ŝ(u_α(x₁)u_β(x₂)u_γ(x₃)) = (1/3!) { u_α(x₁)u_β(x₂)u_γ(x₃)
  + u_α(x₁)u_β(x₃)u_γ(x₂)
  + u_α(x₂)u_β(x₁)u_γ(x₃)
  + u_α(x₂)u_β(x₃)u_γ(x₁)
  + u_α(x₃)u_β(x₁)u_γ(x₂)
  + u_α(x₃)u_β(x₂)u_γ(x₁) }

(and similarly for Â, with each term weighted by the sign of its permutation: + - - + + -)

Bosons are described by taking all plus signs; fermions by the signed combination. However, one could take another combination of plus and minus signs, obtaining something non-physical (in H′).
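The construction of Ŝ and Â above can be checked numerically. A sketch (with hypothetical one-particle states represented as finite vectors, so that the N-particle wave function becomes an N-index tensor):

```python
import numpy as np
from itertools import permutations
from math import factorial

def sym(tensor, anti=False):
    """Apply S (or A, if anti=True) to an N-index tensor whose j-th
    index plays the role of particle j: (1/N!) sum_P [sgn(P)] P."""
    N = tensor.ndim
    out = np.zeros_like(tensor, dtype=float)
    for perm in permutations(range(N)):
        # det of the permutation matrix gives sgn(P) = +-1
        sgn = np.linalg.det(np.eye(N)[list(perm)]) if anti else 1.0
        out += sgn * np.transpose(tensor, perm)
    return out / factorial(N)

# Three orthonormal one-particle "states" (illustrative finite basis).
u_a, u_b, u_c = np.eye(3)
prod = np.einsum('i,j,k->ijk', u_a, u_b, u_c)  # u_a(x1) u_b(x2) u_c(x3)

sym_part = sym(prod)              # fully symmetric (bosons)
asym_part = sym(prod, anti=True)  # fully anti-symmetric (fermions)

# Pauli principle: antisymmetrizing a doubly occupied state gives 0.
pauli = sym(np.einsum('i,j,k->ijk', u_a, u_a, u_c), anti=True)
```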

3.2 Second quantization

We can't work with a wave function split into N! pieces (in a gas N ~ 10²³). So we turn to second quantization, an algebraic and abstract approach that will lead to very powerful results.

The approach is different from the one followed in QFT:

classical mechanics (finite dof) --quantize--> quantum mechanics --dof→∞--> second quantization
        | dof→∞
classical field theory --quantize--> QFT

We follow the upper path: first quantize, then let the number of degrees of freedom go to infinity. Also, we are not interested in time: time is fixed and the operators are time-independent.

3.2.1 Creation/Annihilation operators

There are two kinds of these operators: bosonic (a, a†) and fermionic (c, c†).

In both cases the vacuum |0⟩ (n = 0) is the lowest level, and it's defined by a|0⟩ = 0, c|0⟩ = 0.

17/11/2022
From now on we will indicate with a, a† both the fermionic and the bosonic operators, and we will write: [a, a†]_∓ = a a† ∓ a†a, where the upper sign (commutator) refers to bosons and the lower sign (anticommutator) to fermions.

3.2.2 Fock Space

We will give now a rather simple algebraic construction of the Fock space H_F, showing that it has all the required properties.

We want to construct H^{(S/A)} = ⊕_{N=0}^∞ H_N^{(S/A)} in an intrinsic way.
For each element of an (arbitrary) o.n. basis {u_α(x)} of L²(IR³), the single-particle H, we can consider a pair of creation/annihilation operators a_α, a†_α such that:

[a_α, a_β]_∓ = [a†_α, a†_β]_∓ = 0  ∀α, β      [a_α, a†_β]_∓ = δ_{αβ}

The α's are the quantum numbers labeling the one-particle basis {u_α}.
These are called canonical (anti)commutation relations (CCR). Note that in the fermionic case these relations automatically imply (a_α)² = (a†_α)² = 0. In this case:

i.
We can define the vacuum state |0⟩ by requiring that the a_α's annihilate it:
|0⟩ :  a_α|0⟩ = 0  ∀α

We will see immediately that this single requirement allows for a complete construction of the Fock space. In fact we can construct
H₀^{B/F} = {λ|0⟩, λ ∈ ℂ} ≃ ℂ

ii.
What happens if the creation operator is applied to |0⟩?
a†_α|0⟩ ⇐⇒ u_α(x): it creates one particle in the state α. The one-particle state will be defined as:
a†_α|0⟩ = |0, …, 0, 1_α, 0, …⟩

For example:

H₁^{B/F} = { span of the o.n. basis {a†_α|0⟩}_{α=1}^∞ } ≃ L²(IR³)

Σ_{α=1}^∞ f_α a†_α|0⟩  ⇐⇒  |f⟩ with f(x) = Σ_α f_α u_α(x)

iii.
Then, recursively, what about N = 2?
We'll start with one particle in the state α (a†_α|0⟩), then we'll add the second one.
iv.
We can now generalize to N particles:

|n₁, n₂, …, n_k, …⟩ = η (1/√(∏_j n_j!)) (a†₁)^{n₁} (a†₂)^{n₂} … (a†_k)^{n_k} … |0⟩     (3.2)

in 1-1 correspondence with ψ_{{n_k}} = Ŝ/Â (u_{α₁}(x₁), …, u_{α_N}(x_N)), the o.n. basis for H_S, H_A,

where η = 1 for bosons and η = (-1)^{Σ_{j=1}^{k-1} n_j} for fermions,
since we pick up a minus sign every time we commute to bring a†_k to |n_k⟩.
Remember also that n_j indicates the number of particles in the state j.

Then if we choose {|n₁, …, n_k, …⟩} as an o.n. basis, we can define H^{B/F} "automatically" from a†:

H^{S/A} = H^{B/F} = { span of the o.n. basis {|n₁, …, n_k, …⟩} }

Now let's analyze the action on N-particle states: we expect a†_k|n₁…n_k…⟩ to be proportional to |n₁…(n_k+1)…⟩. By (3.2):

|n₁…(n_k+1)…⟩ = (η/√(∏_{i≠k} n_i! (n_k+1)!)) (a†₁)^{n₁} … (a†_k)^{n_k+1} … |0⟩

If we compute:

a†_k (1/√(∏_i n_i!)) (a†₁)^{n₁} (a†₂)^{n₂} … (a†_k)^{n_k} … |0⟩ = √(n_k+1) η |n₁, n₂, …, (n_k+1), …⟩

Therefore a†_k : H_N^{B/F} → H_{N+1}^{B/F}.

One can prove (as an exercise) that a_k|n₁…n_k…⟩ = η √(n_k) |n₁, …, (n_k-1), …⟩,
and if n_k = 0 ⇒ a_k|n₁…n_k…⟩ = 0, so that a_k : H_N^{B/F} → H_{N-1}^{B/F}.

We can thus construct the Fock space (useful in the grancanonical ensemble):

H_F^{B/F} = ⊕_{N=0}^∞ H_N^{B/F}

It is also useful to define the operator n̂_k, which counts how many particles occupy the k-th state: n̂_k ≡ a†_k a_k

n̂_k : H_N^{B/F} → H_N^{B/F}     n̂_k |n₁…n_k…⟩ = n_k |n₁…n_k…⟩

The so-called Fock basis {|n₁…n_k…⟩}_{{n₁,n₂,…}} is a basis of eigenstates for n̂_k,
and we can build the number operator N̂, which counts the total number of particles: N̂ = Σ_{k=1}^∞ n̂_k.

What we have seen is the following:
Theorem:

1.
The multi-particle states are an o.n. basis for H_N^{S/A} (in both cases):
⟨n′₁, n′₂, …, n′_k, …|n₁, n₂, …, n_k, …⟩ = δ_{n′₁n₁} δ_{n′₂n₂} … δ_{n′_k n_k} …
2.
annihilation: a_k : H_N^{S/A} → H_{N-1}^{S/A}
a_k |n₁, n₂, …, n_k, …⟩ = η √(n_k) |n₁, n₂, …, (n_k-1), …⟩

where η = 1 for bosons and η = (-1)^{Σ_{j=1}^{k-1} n_j} for fermions

3.
creation: a†_k : H_N^{S/A} → H_{N+1}^{S/A}
a†_k |n₁, n₂, …, n_k, …⟩ = η √(n_k+1) |n₁, n₂, …, (n_k+1), …⟩

Note that in the fermionic case we have a†_k |n₁, n₂, …, n_k, …⟩ = 0 if n_k = 1. This is again an expression of the Pauli exclusion principle.

4.
the operator n̂_k = a†_k a_k counts how many particles occupy the k-th state:
n̂_k |n₁, n₂, …, n_k, …⟩ = n_k |n₁, n₂, …, n_k, …⟩

and the operator N̂ = Σ_k n̂_k = Σ_k a†_k a_k counts the total number of particles:

N̂ |n₁, n₂, …, n_k, …⟩ = (Σ_k n_k) |n₁, n₂, …, n_k, …⟩ = N |n₁, n₂, …, n_k, …⟩

Remarks:

3.2.3 Field operator

The creation and annihilation operators introduced so far are tied to the (arbitrary, of course) choice of a basis of one-particle states. The state a†_α|0⟩ corresponds to the creation of one particle in the state u_α out of the vacuum or, more generally, a†_α|n₁, …, n_i, …⟩ will correspond to the addition of a particle in the same state. What if we want to "create" an additional particle in an arbitrary state represented by the wavefunction f(⃗x) = Σ_α u_α(⃗x)⟨u_α|f⟩?
A little thought suffices to conclude that, if a†_α creates a particle in the basis state u_α, then a particle in a generic state f ∈ L²(IR^d) will be created by the operator:

ψ†(f) = Σ_α ⟨u_α|f⟩ a†_α     (3.3)

And its adjoint ψ(f):

ψ(f) = Σ_α ⟨f|u_α⟩ a_α     (3.4)

will act as the corresponding annihilation operator.

Let's make things a little more formal: firstly, we choose to work within the coordinate representation:

H = L²(IR^d)     {|e_α⟩ = u_α}_α     f(x) = Σ_α u_α(x) f_α     f_α = ∫_{IR^d} u*_α(x) f(x) = ⟨u_α|f⟩

Then, if the associated creation/annihilation operators are denoted with a_α, a†_α, we define the creation/annihilation field operators as:

ψ†(x) = Σ_α u*_α(x) a†_α     ψ(x) = Σ_α u_α(x) a_α

and (3.3) and (3.4) can be written as integrals of those (see the following theorem).

It is pretty obvious that ψ(⃗x) is a rather ill-defined operator on Fock space. Indeed, it is easily checked that, say, ‖ψ†(⃗x)|0⟩‖² = δ(⃗0), a diverging quantity, and hence that ψ†(⃗x)|0⟩ cannot be considered as a vector in Fock space. ψ(⃗x) has rather to be considered as a "distribution-valued" operator, i.e. it acquires a reasonable mathematical meaning only when it is smeared with functions in L²(IR^d), as in the definition of ψ(f) above.

Theorem:

i)
(3.3) can be expressed through the field operator:
ψ†(f) = ∫_{IR^d} d^dx ψ†(x) f(x)     in particular: ψ†(u_α) = ∫_{IR^d} d^dx ψ†(x) u_α(x) = a†_α
ii)
Some nice (anti)commutation relations hold:
[ψ(f), ψ(g)]_∓ = [ψ†(f), ψ†(g)]_∓ = 0     [ψ(f), ψ†(g)]_∓ = ⟨f, g⟩
[ψ(x), ψ(y)]_∓ = [ψ†(x), ψ†(y)]_∓ = 0     [ψ(x), ψ†(y)]_∓ = δ(x - y)
iii)
The field operators are independent of the chosen basis:
ψ†(x) = Σ_α u*_α(x) a†_α = Σ_β v*_β(x) b†_β

Proof:

(i)
∫_{IR^d} d^dx ψ†(x) f(x) = ∫ d^dx (Σ_α u*_α(x) a†_α) f(x) = Σ_α (∫ d^dx u*_α(x) f(x)) a†_α = Σ_α ⟨u_α|f⟩ a†_α = ψ†(f)
(ii)
(iii)
Let's take another basis {v_β(x)}, from which we get different creation/annihilation operators b_β, b†_β:
{u_α(x)} → a_α, a†_α ⇒ |n₁, n₂, …, n_k, …⟩ = C (a†₁)^{n₁} (a†₂)^{n₂} … |0⟩
{v_β(x)} → b_β, b†_β ⇒ |m₁, m₂, …, m_k, …⟩ = C′ (b†₁)^{m₁} (b†₂)^{m₂} … |0⟩

f(x) = Σ_α f_α u_α(x) ⇒ ψ†(f) = Σ_α f_α a†_α
f(x) = Σ_β f̃_β v_β(x) ⇒ ψ†(f) = Σ_β f̃_β b†_β

We won't prove it, but the expressions on the right are equal. That means that they are independent of the chosen basis.

This is also true for the field operators:

ψ†(x) = Σ_α u*_α(x) a†_α = Σ_β v*_β(x) b†_β

The two expressions are the field operator expressed in two different bases. Since they are equal, the field operator is basis independent.

____________________________________________________________________________________

3.2.4 Observable operators

(IX) MON 21/11/2022
We want now to characterize how observables act in Fock space.

Single particles observables
In first quantization, single-particle observables are written in H_N^{S/A} as:

Â = Σ_{j=1}^N A^{(1)}(⃗p_j, ⃗x_j)

where A^{(1)} is an operator on the single-particle H. Let {u_α} be the basis of eigenfunctions of A^{(1)}, i.e.: A^{(1)}(⃗p, ⃗x) u_α(x) = ϵ_α u_α(x).
A trivial example could be the harmonic oscillator:

Â = Σ_{j=1}^N ( p̂_j²/2m + (ω²m/2) x̂_j² )     with A^{(1)}(p_j, x_j) = p̂_j²/2m + (ω²m/2) x̂_j²

So (in the following, C is a constant that we don't care about):

ψ_{n₁,n₂,…,n_k,…}(x₁, x₂, …, x_N) = C Ŝ/Â (u_{α₁}(x₁) … u_{α_N}(x_N))

Multiplying both sides by Â = Σ_{j=1}^N A^{(1)}(⃗p_j, ⃗x_j):

Â ψ_{n₁,n₂,…,n_k,…}(x₁, …, x_N) = C Ŝ/Â ( (Σ_{j=1}^N A^{(1)}(⃗p_j, ⃗x_j)) (u_{α₁}(x₁) … u_{α_N}(x_N)) )
= C Ŝ/Â ( (Σ_{j=1}^N ϵ_{α_j}) (u_{α₁}(x₁) … u_{α_N}(x_N)) )
using the linearity of all the operators:
= (Σ_{j=1}^N ϵ_{α_j}) ψ_{n₁,n₂,…}(x₁, …, x_N)
Another way to write it could be:
= (ϵ_{α₁} + ϵ_{α₂} + ⋯ + ϵ_{α_N}) ψ_{n₁,…,n_k,…}
and another one:
= (Σ_{α=1}^∞ ϵ_α n_α) ψ_{n₁,n₂,…}

What we have shown is that: Â ψ_{n₁,n₂,…}(x₁, …, x_N) = (Σ_{α=1}^∞ ϵ_α n_α) ψ_{n₁,n₂,…}(x₁, …, x_N).
We now want to define in Fock space an operator that acts in the same way as our original operator, i.e. such that:

Â |n₁, n₂, …, n_k, …⟩ = (Σ_α ϵ_α n_α) |n₁, n₂, …, n_k, …⟩

It follows that, in Fock space H_N^{B/F}, the "second-quantized" version of Â is:

Â_F = Σ_{α=1}^∞ ϵ_α n̂_α = Σ_{α=1}^∞ ϵ_α a†_α a_α     (3.5)

with ϵ_α = ⟨u_α|A^{(1)}|u_α⟩ or, equivalently, A^{(1)}(⃗p, ⃗x) u_α(x) = ϵ_α u_α(x).
Notice that n̂_α = a†_α a_α, so this operator destroys and then creates a particle in the same state; thus the total number of particles is conserved.

We have shown that both the creation operators and the field operator are basis-independent. However, this is not the case for the expression (3.5) of a single-particle observable Â_F in Fock space. In fact, for a generic basis {u_α}, not necessarily made of eigenfunctions of A^{(1)}, we have:

Â_F = Σ_{αβ} ⟨u_β|Â^{(1)}|u_α⟩ a†_α a_β

which reduces to eq. (3.5) only in the eigenbasis, where ⟨u_β|Â^{(1)}(⃗p,⃗x)|u_α⟩ = ϵ_α ⟨u_β|u_α⟩ = ϵ_α δ_{αβ}.

Choosing a different basis {v_j} with the corresponding creation/annihilation operators b_j, b†_j:

Â_F = Σ_{jk} ⟨v_j|Â^{(1)}|v_k⟩ b†_j b_k = Σ_{jk} t_{jk} b†_j b_k

This operator is no longer diagonal (the indices are different): it destroys a particle in the state k and creates another one in the state j.

Using the definition of the field operators ψ(⃗x), ψ†(⃗x), there also exists a way to write everything without referring to a basis:

Â = Σ_{j=1}^N A^{(1)}(⃗p_j, ⃗x_j)   → (in Fock space) →   Â_F = ∫ d³x ψ†(x) A^{(1)}(⃗p, ⃗x) ψ(x)

In fact, with ψ†(x) = Σ_α u*_α(x) a†_α and ψ(x) = Σ_β u_β(x) a_β:

Â_F = ∫ d³x (Σ_α u*_α(x) a†_α) A^{(1)} (Σ_β u_β(x) a_β)
= Σ_{αβ} ( ∫ d³x u*_α(x) A^{(1)}(⃗p, ⃗x) u_β(x) ) a†_α a_β
= Σ_{αβ} ⟨u_α|A^{(1)}|u_β⟩ a†_α a_β = Σ_{αβ} t_{αβ} a†_α a_β
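The fact that a one-body operator Σ_{jk} t_{jk} b†_j b_k conserves the particle number can be checked numerically; a sketch with two truncated bosonic modes (the matrix t_{jk} is made up for illustration):

```python
import numpy as np

nmax = 3            # truncate each mode at nmax particles (approximation)
d = nmax + 1
a = np.diag(np.sqrt(np.arange(1, d)), k=1)   # one-mode annihilation
I = np.eye(d)

# Two modes: b1 acts on the first factor, b2 on the second
# (bosons, so plain tensor products, no sign strings needed).
b1, b2 = np.kron(a, I), np.kron(I, a)
N_op = b1.conj().T @ b1 + b2.conj().T @ b2   # total number operator

# A generic hermitian one-body operator A_F = sum_jk t_jk b_j^dag b_k
t = np.array([[1.0, 0.3], [0.3, 2.0]])       # hypothetical matrix elements
modes = [b1, b2]
A_F = sum(t[j, k] * modes[j].conj().T @ modes[k]
          for j in range(2) for k in range(2))
```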

Now let’s see some examples:

i)
Density operator
On a single particle u(x) ∈ L²(IR³) we have:

ρ̂^{(1)} = δ(x - x₀)     ρ̂^{(1)} : u(x) ↦ ∫_{IR³} d³x u(x) δ(x - x₀) = u(x₀)

For N particles, we have N coordinates:

x₁, x₂, …, x_N     ρ̂(y) = Σ_{j=1}^N δ(y - x_j)

In Fock space it becomes:

ρ̂_F = ∫ d³y ψ†(y) ρ^{(1)} ψ(y) = ∫ d³y ψ†(y) δ(y - x) ψ(y) = ψ†(x) ψ(x)
ii)
Number operator
The previous operator is called density operator because the number operator is expressed by: N̂ = ∫_{IR³} d³x ρ̂(x). In fact, in Fock space the number operator is:

N̂_F = ∫ d³x ψ†(x) ψ(x)
= ∫ d³x (Σ_α u*_α(x) a†_α)(Σ_β u_β(x) a_β)
= Σ_{αβ} ( ∫ d³x u*_α(x) u_β(x) ) a†_α a_β = Σ_{αβ} δ_{αβ} a†_α a_β = Σ_α a†_α a_α = Σ_α n̂_α

which is just the total number of particles.

iii)
Free hamiltonian
With N particles the hamiltonian is

H = Σ_{j=1}^N ⃗p_j²/2m ,     A^{(1)}(⃗p_j, ⃗x_j) = ⃗p_j²/2m

acting in L²(IR^{3N}) ∋ ϕ(x₁, x₂, …, x_N).

Using the substitution ⃗p ↦ -iℏ∇:

(⃗p_j²/2m) ϕ(x₁, …, x_N) = ((-iℏ∇_{x_j})²/2m) ϕ(x₁, …, x_j, …, x_N)

so that A^{(1)}(⃗p_j, ⃗x_j) = -(ℏ²/2m) ∇²_{x_j}.

In Fock space the hamiltonian becomes:

H_F = ∫ d³x ψ†(x) A^{(1)}(⃗p, ⃗x) ψ(x) = ∫ d³x ψ†(x) [-(ℏ²/2m) ∇²_x] ψ(x)     (3.6)

∇²_x ψ(x) = ∇²_x (Σ_α u_α(x) a_α) = Σ_α (∇²_x u_α(x)) a_α     (3.7)

We want to choose the u_α(x) such that ∇²_x u_α(x) ∝ u_α(x), where α = ⃗k (α is the momentum).
The solution is to choose the single-particle o.n. basis u_⃗k(x) = e^{i⃗k·⃗x}/√V. We will comment later on the normalization factor 1/√V.
In fact:

∇²_x u_⃗k(x) = -⃗k² u_⃗k(x)   ⇒   (3.7) = Σ_⃗k (-k²) u_⃗k(x) a_⃗k

And the hamiltonian in Fock space (3.6) becomes:

H_F = ∫ d³x ψ†(x) (-(ℏ²/2m) ∇²_x) ψ(x)
= ∫ d³x (Σ_{⃗k′} u*_{⃗k′}(x) a†_{⃗k′}) (-(ℏ²/2m)) (Σ_⃗k (-⃗k²) u_⃗k(x) a_⃗k)
= Σ_{⃗k′,⃗k} (ℏ²k²/2m) a†_{⃗k′} a_⃗k ∫ d³x u*_{⃗k′}(x) u_⃗k(x)     [the integral is ⟨u_{⃗k′}|u_⃗k⟩ = δ_{⃗k⃗k′}]
= Σ_⃗k (ℏ²k²/2m) a†_⃗k a_⃗k = Σ_⃗k ϵ_⃗k n̂_⃗k

This describes free particles in Fock space.

About the normalization factor
However, there's actually a problem (we've cheated a bit): the solution u_⃗k(x) = e^{i⃗k·⃗x} is not normalizable:

‖u_⃗k‖² = ∫_{IR³} d³x |u_⃗k(x)|² = ∫_{IR³} d³x 1 = ∞

To fix this problem we don't work in all of IR³, but in a finite volume. We choose a cube of side L, so we also have to specify the boundary conditions. These can be (in 1D):

x ∈ [0, L]     -(ℏ²/2m) (d²/dx²) u_α(x) = ϵ_α u_α(x)

but we can also have periodic boundary conditions, where ψ(L) = ψ(0).

in 3D:  ψ(x = L, y, z) = ψ(x = 0, y, z)
        ψ(x, y = L, z) = ψ(x, y = 0, z)
        ψ(x, y, z = L) = ψ(x, y, z = 0)

So now we have:

u_⃗k = e^{i⃗k·⃗x} :   e^{i(k_x L + k_y y + k_z z)} = e^{i(k_x·0 + k_y y + k_z z)}

⇒ e^{i k_x L} = 1 ⇒ k_x L = 2πn_x (n_x ∈ ℤ) ⇒ k_x = (2π/L) n_x

(and similarly for k_y, k_z), which means that ⃗k is quantized.

Now we can also normalize the wave function:

u_α(⃗x) = C e^{i⃗k·⃗x}     ∫_V d³x |u_α(x)|² = C² L³ = 1   if C = 1/√(L³) = 1/√V

So:

H = Σ_⃗k ϵ_⃗k a†_⃗k a_⃗k     ϵ_⃗k = ℏ²⃗k²/2m = (ℏ²/2m)(2π/L)² (n_x² + n_y² + n_z²)

So ⃗k is replaced by n_x, n_y, n_z.
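The counting of box modes can be sketched numerically (the units ℏ = m = 1, L = 10 are an illustrative assumption):

```python
import numpy as np
from itertools import product

# Allowed momenta in a cubic box of side L with periodic boundary
# conditions: k = (2*pi/L) * (nx, ny, nz), with n integer.
hbar, m, L = 1.0, 1.0, 10.0   # illustrative units (assumption)

def energy(n):
    """Single-particle energy hbar^2 k^2 / 2m for the mode n = (nx,ny,nz)."""
    k2 = (2 * np.pi / L) ** 2 * sum(ni ** 2 for ni in n)
    return hbar ** 2 * k2 / (2 * m)

# Enumerate the lowest modes and count the degeneracy of each level.
levels = {}
for n in product(range(-2, 3), repeat=3):
    key = round(energy(n), 10)
    levels[key] = levels.get(key, 0) + 1
```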

iv)
For completeness we can also show what happens when we include a potential in the hamiltonian:

H = Σ_{j=1}^N (⃗p_j²/2m + V(x_j)) + Σ_{i<j} V(x_i, x_j)     in L²(IR^{3N})

How is the potential written in Fock space? The idea is the following:

V = Σ_{i<j} V(x_i, x_j) = ½ Σ_{i≠j} V(x_i, x_j) = ½ Σ_{i,j} V(x_i, x_j) - ½ Σ_i V(x_i, x_i)
= ½ Σ_{i,j} ∫d³x ∫d³y V(x, y) δ(x - x_i) δ(y - x_j) - ½ Σ_i ∫d³x V(x, x) δ(x - x_i)
= ½ ∫d³x ∫d³y V(x, y) (Σ_i δ(x - x_i)) (Σ_j δ(y - x_j)) - ½ ∫d³x V(x, x) (Σ_i δ(x - x_i))
= ½ ∫d³x ∫d³y V(x, y) ρ(x) ρ(y) - ½ ∫d³x V(x, x) ρ(x)

where ρ(x) = Σ_i δ(x - x_i), which in Fock space becomes ρ_F(x) = ψ†(x) ψ(x), so:

V_F = ½ ∫d³x ∫d³y V(x, y) ψ†(x) [ψ(x) ψ†(y)] ψ(y) - ½ ∫d³x V(x, x) ψ†(x) ψ(x)

and we have already proved that
ψ(x) ψ†(y) ∓ ψ†(y) ψ(x) = [ψ(x), ψ†(y)]_∓ = δ(x - y)
⇒ ψ(x) ψ†(y) = δ(x - y) ± ψ†(y) ψ(x)

Substituting, the δ(x - y) term exactly cancels the self-interaction term and, using ψ(x) ψ(y) = ±ψ(y) ψ(x), we get

V_F = ½ ∫d³x ∫d³y V(x, y) ψ†(x) ψ†(y) ψ(y) ψ(x)

Using the definition of ψ:

V_F = Σ_{αβγδ} V_{αβγδ} a†_α a†_β a_γ a_δ

with V_{αβγδ} = ½ ∫d³x ∫d³y V(x, y) u*_α(x) u*_β(y) u_γ(y) u_δ(x)

3.3 Quantum ensembles

24/11/2022
In the following we work with systems of finite volume. As usual, the thermodynamic limit is taken only at the end of the calculations.
N denotes the number of particles and, depending on whether the particles are distinguishable or not, we will work in the Hilbert space H_N^{tot} = H ⊗ ⋯ ⊗ H = H^⊗N or H_N^{tot} = H_N^{S/A}. The system will be described by a Hamiltonian H_N.
If we allow N to change, we have to work in the Hilbert space H = ⊕_{N=0}^∞ H_N, and the system will be described by a Hamiltonian H that conserves the number of particles (commuting with the number operator), so that: H|_{H_N} = H_N

3.3.1 Microcanonical ensembles

In the microcanonical ensemble, V, N, E are fixed. Since the hamiltonian is an observable, we can write its spectral decomposition:

H = Σ_j E_j ℙ_j     ℙ_j = Σ_{α=1}^{n_j} |ψ_{j,α}⟩⟨ψ_{j,α}|     H|ψ_{j,α}⟩ = E_j |ψ_{j,α}⟩

where j is the index that labels the energy levels, while α = 1, …, n_j labels the degenerate states of the level E_j. Note that we assume that n_j is a finite number, so there is no infinite degeneracy. ℙ_j is the projection onto the eigenspace with E = E_j, which is equivalent to the selection of an energy sheet in phase space.
Now we do the same thing that we did classically: once the system is fixed at an energy level, it can be in every point of the corresponding hypersurface with the same probability. The only difference is that in quantum mechanics a probability distribution becomes an operator, so we'll indicate it with ρ̂.
Also the normalization requirement (which in the classical case is that the integral over phase space equals 1) becomes that the trace over the Hilbert space equals 1.

Energy is conserved, so E ≡ E_j is fixed, and the n_j states {|ψ_{j,α}⟩}_{α=1,…,n_j} have the same probability. The mixed density matrix is:

ρ̂_mc = Σ_{α=1}^{n_j} p |ψ_{j,α}⟩⟨ψ_{j,α}|

and since Tr[ρ_mc] = 1 ⇒ Σ_{α=1}^{n_j} p = 1 ⇒ p = 1/n_j, so:

ρ̂_mc = (1/n_j) Σ_{α=1}^{n_j} |ψ_{j,α}⟩⟨ψ_{j,α}|

If we have an observable A (A = A†), then ⟨A⟩ = Tr[ρ_mc A] (this is simply the definition of the mean of an operator).

We can also define entropy using the universal Boltzmann formula:

S_mc^{(q)} = -k_B ⟨log ρ_mc⟩_mc = -k_B Tr[ρ_mc log ρ_mc]

Using an o.n. basis {|ϕ_λ⟩}_λ of the Hilbert space, the trace of an operator is Tr[A] = Σ_λ ⟨ϕ_λ|A|ϕ_λ⟩, so:

S_mc^{(q)} = -k_B Σ_{α=1}^{n_j} (1/n_j) log(1/n_j) = -k_B n_j ( (1/n_j)(-log n_j) ) = k_B log n_j

3.3.2 Canonical ensemble

In the canonical ensemble, V, N, T are fixed, while E can be exchanged with a big reservoir. We can write the Hamiltonian as:

H = Σ_j E_j ℙ_j     ℙ_j = Σ_α |ψ_{j,α}⟩⟨ψ_{j,α}|

This time the probability is not the same for all the states, but we assume that:

p_j ∝ e^{-βE_j}     β = 1/(k_B T)

We also recall that ℙ_j† = ℙ_j, ℙ_j² = ℙ_j, ℙ_j ℙ_k = δ_{jk} ℙ_j. And:

ρ_c ∝ Σ_j e^{-βE_j} ℙ_j =(*)= e^{-β Σ_j E_j ℙ_j}   ⇒   ρ̂_c ∝ e^{-βĤ}

(*) is justified by the following. Proof:

e^{-βĤ} = e^{-β Σ_j E_j ℙ_j} = Σ_{n=0}^∞ (1/n!) (-β Σ_j E_j ℙ_j)^n =[using (ℙ_j)^n = ℙ_j and ℙ_j ℙ_k = 0 for j ≠ k]= Σ_j ( Σ_{n=0}^∞ (1/n!) (-βE_j)^n ) ℙ_j = Σ_j e^{-βE_j} ℙ_j

Now we should fix the normalization constant in front, imposing Tr_H[ρ_c] = 1:

1 = Tr[ (1/Z_N) e^{-βH} ] = (1/Z_N) Tr_H[ e^{-βĤ} ]

where Z_N ≡ Tr_H[e^{-βĤ}] is the quantum canonical partition function.

So we get:

ρ_c = (1/Z_N) e^{-βH}

We can also compute:

                     1    [      ]
⟨A⟩c = T rH [ρcA ] = ---T r e-βH A
                    ZN

Sc = kB ⟨log ρc⟩c = - kBT r [ρclog ρc]

Let's now define all the thermodynamic quantities and then see that the relation S = (E − F)/T holds.
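As a small numerical sketch (my own toy example, not from the lecture; units k_B = 1): for a two-level system one can verify the canonical relations F = −T log Z, E = ⟨H⟩, S = −Tr[ρ log ρ] and the identity S = (E − F)/T.

```python
import numpy as np

# Two-level system with energies E_0 = 0, E_1 = 1 (arbitrary units).
E = np.array([0.0, 1.0])
T = 0.7
beta = 1.0 / T

Z = np.sum(np.exp(-beta * E))      # canonical partition function
p = np.exp(-beta * E) / Z          # Boltzmann weights (diagonal of rho_c)
F = -T * np.log(Z)                 # free energy  F = -kT log Z
E_mean = np.sum(p * E)             # internal energy <H>
S = -np.sum(p * np.log(p))         # entropy of the diagonal density matrix

# the thermodynamic identity S = (E - F)/T holds exactly
assert np.isclose(S, (E_mean - F) / T)
```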

3.3.3 Grancanonical ensemble

In the grancanonical ensemble, V, T and μ are fixed, while E and N can be exchanged with a big reservoir. We will work on the Hilbert space \(\mathcal H = \bigoplus_{N=0}^\infty\mathcal H_N\), or equivalently with a Hamiltonian that conserves the number of particles (commutes with the number operator): \(\mathcal H = \bigoplus_{N=0}^\infty\mathcal H_N \iff [\hat H,\hat N] = 0\).

\[ \hat H|\psi_{\tilde n,\alpha}\rangle = E_{\tilde n}|\psi_{\tilde n,\alpha}\rangle, \qquad \alpha = 1,\dots,n_{\tilde n} \]

ñ labels the possible states/eigenvalues, but it is actually a double label ñ = (N, n): before, the number of particles N was fixed, while now ñ runs over the eigenvalues of all the Ĥ_N, and the same energy can come from two different N.
So we have:

\[ \hat H = \sum_{\tilde n}E_{\tilde n}\,\mathbb{P}_{\tilde n}, \qquad [\hat H,\hat N] = 0, \qquad \hat N = \sum_{\tilde n=(N,n)}N\,\mathbb{P}_{\tilde n} \]

The energy can be any of the eigenvalues \(E_j^{(N)}\) of Ĥ_N, with probability
\(p_j \propto e^{-\beta(E_j-\mu N)}\), as in the classical case (this is an assumption).

The system is in a mixed state whose density matrix is given by:

\[ \hat\rho_{gc} \propto \sum_{N=0}^{\infty}\sum_j e^{-\beta(E_j-\mu N)}\,\mathbb{P}_j = e^{-\beta(\hat H-\mu\hat N)} = e^{-\beta\hat K} \]

where K̂ is the grancanonical Hamiltonian: K̂ = Ĥ − μN̂.

We can also write \(\hat\rho_{gc} = \frac{1}{\mathcal Z}e^{-\beta(\hat H-\mu\hat N)}\), and from \(\mathrm{Tr}_{\mathcal H}[\hat\rho_{gc}] = 1\) we get the grancanonical partition function:

\[ \mathcal Z = \mathrm{Tr}_{\mathcal H}\big[e^{-\beta(\hat H-\mu\hat N)}\big] \qquad \big(\mathcal H = \oplus_{N=0}^\infty\mathcal H_N\big) \]
\[ = \sum_{N=0}^{\infty}\mathrm{Tr}_{\mathcal H_N}\big[e^{-\beta(\hat H-\mu\hat N)}\big] \qquad \text{(on } \mathcal H_N \text{ we have } \hat N = N) \]
\[ = \sum_{N=0}^{\infty}\underbrace{e^{\beta\mu N}}_{z^N}\,\underbrace{\mathrm{Tr}_{\mathcal H_N}\big[e^{-\beta\hat H}\big]}_{Z_N} = \sum_{N=0}^{\infty}z^N Z_N \]

as in the classical case, where z = e^{βμ} is the fugacity (so z^N = e^{βμN}).

We can also define the grancanonical average of an observable:

\[ \langle A\rangle_{gc} = \mathrm{Tr}_{\mathcal H}[\hat\rho_{gc}A] = \sum_{N=0}^{\infty}\mathrm{Tr}_{\mathcal H_N}[\hat\rho_{gc}A] = \sum_{N=0}^{\infty}z^N\,\mathrm{Tr}_{\mathcal H_N}\Big[\frac{e^{-\beta\hat H_N}}{\mathcal Z}A\Big] \]

Doing this calculation we are assuming that [Â, N̂] = 0, so this is true only for operators that conserve the number of particles.
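The factorization \(\mathcal Z = \sum_N z^N Z_N\) can be checked on a toy system (my own example, not from the lecture): two fermionic single-particle levels, summing the Fock states directly versus summing the fixed-N canonical partition functions.

```python
import numpy as np
from itertools import product

# Two fermionic single-particle levels e1, e2 (arbitrary units).
e = np.array([0.3, 1.1])
beta, mu = 2.0, 0.2
z = np.exp(beta * mu)   # fugacity

# Direct sum over Fock occupation numbers n_i in {0, 1}.
Z_direct = sum(np.exp(-beta * sum((ei - mu) * ni for ei, ni in zip(e, n)))
               for n in product([0, 1], repeat=2))

# Canonical partition functions at fixed particle number N.
Z0 = 1.0                          # N = 0: empty state
Z1 = np.exp(-beta * e).sum()      # N = 1: one particle in either level
Z2 = np.exp(-beta * e.sum())      # N = 2: both levels filled
Z_canonical = Z0 + z * Z1 + z**2 * Z2

assert np.isclose(Z_direct, Z_canonical)
```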

Now we can define the thermodynamic functions as:

15/12/2022

3.3.4 Exercise (1.1): Quantum magnetic dipoles

3.3.5 Exercise (1.2): Quantum harmonic oscillators

3.4 Quantum gases

24/11/2022 (c)
Here we'll talk about bosonic/fermionic indistinguishable particles in an external potential. We neglect any relativistic effect and any particle-particle interaction.

If we have N particles, we can write the hamiltonian operator in first quantization as:

     ∑N  ^⃗p 2   ∑N
Hˆ =     -j--+     V(ˆxj)
     j=1 2m    j=1

The o.n. Fock basis of \(\mathcal H_F\) is obtained starting from the basis of a single-particle Hamiltonian:

\[ \mathcal H_1:\ \{u_\alpha(x)\} \to a_\alpha,\,a^\dagger_\alpha, \qquad \hat n_\alpha = a^\dagger_\alpha a_\alpha, \qquad n_\alpha = \begin{cases} 0,1,\dots & (B) \\ 0,1 & (F) \end{cases} \]

\[ \mathcal H_F = \bigoplus_{N=0}^{\infty}\mathcal H_N^{B/F} \ \to\ |n_1,n_2,\dots,n_k,\dots\rangle = C\,(a^\dagger_1)^{n_1}(a^\dagger_2)^{n_2}\cdots|0\rangle \]

That's why (as we'll see) it is easier to work with the grancanonical ensemble: we don't have to keep track of the constraint that the occupation numbers sum to a fixed N. We will come back to this.

The Hamiltonian operator can be written, in second quantization, as:

\[ \hat H = \sum_\alpha \epsilon_\alpha\,a^\dagger_\alpha a_\alpha = \sum_\alpha \epsilon_\alpha\,\hat n_\alpha, \qquad H^{(1)}|u_\alpha(x)\rangle = \epsilon_\alpha|u_\alpha(x)\rangle, \qquad \epsilon_\alpha = \frac{p_\alpha^2}{2m} = \frac{\hbar^2 k^2}{2m} \]

where α labels single-particle states and \(a_\alpha, a^\dagger_\alpha\) are the annihilation/creation operators that destroy/create a particle in the state indexed by α.

The grancanonical partition function is easily obtained using the Fock basis:

\[ \mathcal Z = \mathrm{Tr}_{\mathcal H_F}\big[e^{-\beta(\hat H-\mu\hat N)}\big] \]
\[ = \sum_{n_1,n_2,\dots}\langle n_1,n_2,\dots|\,e^{-\beta\sum_\alpha(\epsilon_\alpha-\mu)\hat n_\alpha}\,|n_1,n_2,\dots\rangle \]
\[ \big(\hat n_\alpha|n_1,n_2,\dots,n_\alpha,\dots\rangle = n_\alpha|n_1,n_2,\dots,n_\alpha,\dots\rangle\big) \]
\[ = \sum_{n_1,n_2,\dots}e^{-\beta\sum_\alpha(\epsilon_\alpha-\mu)n_\alpha}\underbrace{\langle n_1,n_2,\dots|n_1,n_2,\dots\rangle}_{1} \]
\[ = \sum_{n_1,n_2,\dots}\prod_\alpha e^{-\beta(\epsilon_\alpha-\mu)n_\alpha} \qquad \text{(the sums over the independent } n_\alpha \text{ factorize)} \]
\[ = \prod_\alpha\Big[\sum_{n_\alpha}e^{-\beta(\epsilon_\alpha-\mu)n_\alpha}\Big], \qquad n_\alpha = \begin{cases} 0,1,2,\dots & (B) \\ 0,1 & (F) \end{cases} \]

So, recalling the geometric series \(\sum_{n=0}^\infty x^n = \frac{1}{1-x}\) for |x| < 1, we get:

\[ \mathcal Z_F = \prod_\alpha\sum_{n_\alpha=0,1}e^{-\beta(\epsilon_\alpha-\mu)n_\alpha} = \prod_\alpha\big[1+e^{-\beta(\epsilon_\alpha-\mu)}\big] \]

\[ \mathcal Z_B = \prod_\alpha\sum_{n_\alpha=0}^{\infty}e^{-\beta(\epsilon_\alpha-\mu)n_\alpha} = \prod_\alpha\Big[\frac{1}{1-e^{-\beta(\epsilon_\alpha-\mu)}}\Big] \]

The geometric series applies if:

\[ e^{-\beta(\epsilon_\alpha-\mu)} < 1 \iff \beta(\epsilon_\alpha-\mu) > 0 \implies \mu < \epsilon_\alpha\ \forall\alpha \implies \mu < \epsilon_0 = 0 \]

so for bosons we re-scale the energy levels to have ϵ_0 = 0.
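The closed product formulas for \(\mathcal Z_F\) and \(\mathcal Z_B\) can be checked numerically (my own sketch, not from the lecture) against brute-force sums over occupation numbers, truncating the bosonic occupations at a large cutoff:

```python
import numpy as np
from itertools import product

# Three single-particle levels, mu < eps_0 = 0 so the geometric series converges.
eps = np.array([0.0, 0.5, 1.3])
beta, mu = 1.5, -0.4
x = np.exp(-beta * (eps - mu))        # e^{-beta(eps_alpha - mu)} < 1

Z_F_product = np.prod(1 + x)          # fermions: closed form
Z_B_product = np.prod(1 / (1 - x))    # bosons: closed form

# Brute-force sums over occupation numbers (bosons truncated at n_alpha < 60).
Z_F_sum = sum(np.prod(x**np.array(n)) for n in product([0, 1], repeat=3))
Z_B_sum = sum(np.prod(x**np.array(n)) for n in product(range(60), repeat=3))

assert np.isclose(Z_F_product, Z_F_sum)
assert np.isclose(Z_B_product, Z_B_sum, rtol=1e-6)
```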

(X) MON 28/11/2022
We can re-write the grancanonical partition function as:

\[ \mathcal Z_{B/F} = \prod_\alpha\big[1\mp e^{-\beta(\epsilon_\alpha-\mu)}\big]^{\mp 1} = e^{-\beta\Omega_{B/F}} \]
(3.8)

where Ω_{B/F} is the grancanonical potential:

\[ \Omega_{B/F} = -\frac{1}{\beta}\log\mathcal Z_{B/F} = \pm\frac{1}{\beta}\sum_\alpha\log\big[1\mp e^{-\beta(\epsilon_\alpha-\mu)}\big] \]

We can also calculate the average number of particles in the k-th state:

\[ n_k = \langle\hat n_k\rangle_{gc} = \mathrm{Tr}_{\mathcal H_F}[\hat\rho\,\hat n_k] = \mathrm{Tr}\Big[\frac{e^{-\beta\sum_\alpha(\epsilon_\alpha-\mu)\hat n_\alpha}}{\mathcal Z}\,\hat n_k\Big] \]
\[ = \frac{1}{\mathcal Z}\mathrm{Tr}\Big[-\frac{1}{\beta}\frac{\partial}{\partial\epsilon_k}e^{-\beta\sum_\alpha(\epsilon_\alpha-\mu)\hat n_\alpha}\Big] = -\frac{1}{\beta}\frac{1}{\mathcal Z}\frac{\partial\mathcal Z}{\partial\epsilon_k} \]
\[ = \frac{\partial}{\partial\epsilon_k}\Big(-\frac{1}{\beta}\log\mathcal Z\Big) = \frac{\partial\Omega_{B/F}}{\partial\epsilon_k} = \frac{1}{e^{\beta(\epsilon_k-\mu)}\mp 1} \]

Using the (−) we get the Bose-Einstein distribution, while using the (+) we get the Fermi-Dirac distribution. Comparing these with the classical distribution (the Maxwell-Boltzmann distribution \(n_{MB} = e^{-\beta(\epsilon-\mu)}\)), we can see in Fig. 3.4 that in the limit of high temperatures the distributions converge.



Figure 3.4: Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein distributions. We can see that for high temperature (on the right of the graph) the distributions converge. (In the figure the subscript k is instead denoted with β.)
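The high-temperature convergence in Fig. 3.4 is easy to check numerically (my own sketch): for large \(\beta(\epsilon-\mu)\) all three distributions agree to high relative accuracy.

```python
import numpy as np

# The three occupation-number distributions as functions of x = beta*(eps - mu).
def n_BE(x):  return 1 / (np.exp(x) - 1)   # Bose-Einstein
def n_FD(x):  return 1 / (np.exp(x) + 1)   # Fermi-Dirac
def n_MB(x):  return np.exp(-x)            # Maxwell-Boltzmann

x = 6.0   # beta*(eps - mu) large: classical (dilute/high-T) regime
assert abs(n_BE(x) - n_MB(x)) < 1e-2 * n_MB(x)
assert abs(n_FD(x) - n_MB(x)) < 1e-2 * n_MB(x)
```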


Now that we have n_k, we can also get the number of particles N, simply by evaluating:

\[ N = \sum_k\langle\hat n_k\rangle = \sum_k\frac{1}{e^{\beta(\epsilon_k-\mu)}\mp 1} \]
(3.9)

N can also be obtained from:

\[ N = -\frac{\partial\Omega_{B/F}}{\partial\mu} = (3.9) \quad \text{(as exercise)} \]

We can also evaluate the energy as:

\[ E = \sum_k\epsilon_k n_k = \sum_k\frac{\epsilon_k}{e^{\beta(\epsilon_k-\mu)}\mp 1} \]

Thermodynamic limit
What we've done up to now works within a finite volume and in a discrete case:

\[ \epsilon_\alpha = \frac{\hbar^2\vec k^2}{2m}, \qquad \text{periodic BCs} \to \vec k = \frac{2\pi}{L}(n_x,n_y,n_z), \quad n_j\in\mathbb Z, \quad \alpha = (n_x,n_y,n_z) \]

In the thermodynamic limit V, L → ∞, 2π/L → 0, so the spacing between different levels becomes smaller and smaller and k becomes continuous. All the sums become integrals. Let's see how. For a single component n_j = n_x, n_y, n_z:

\[ \sum_{n_j}(\cdots) = \sum_{n_j}(\cdots)\underbrace{\Delta n_j}_{=1}, \qquad k_j = \frac{2\pi}{L}n_j \iff \Delta k_j = \frac{2\pi}{L}\Delta n_j \]
\[ = \sum_{k_j}(\cdots)\frac{L}{2\pi}\Delta k_j \xrightarrow[\Delta k_j\to dk_j]{} \frac{L}{2\pi}\int_{-\infty}^{\infty}dk_j\,(\cdots) \]

Repeating this procedure for all the components gives us:

\[ \sum_\alpha(\cdots) = \sum_{n_x,n_y,n_z}(\cdots) \to \Big(\frac{L}{2\pi}\Big)^3\int d^3k\,(\cdots) = \frac{V}{(2\pi)^3}\int d^3k\,(\cdots) \]

Since often the summand depends only on \(|\vec k|\), it is convenient to use spherical coordinates:

\[ = \frac{V\,4\pi}{(2\pi)^3}\int_0^\infty k^2\,dk\,f(|\vec k| = k) \]

Since \(\epsilon_\alpha = \epsilon_\alpha(k) = \frac{\hbar^2 k^2}{2m}\), we can change variable: \(d\epsilon = \frac{\hbar^2 k}{m}dk\), obtaining (as exercise):

\[ = \frac{V}{4\pi^2}\Big(\frac{2m}{\hbar^2}\Big)^{3/2}\int_0^\infty\epsilon^{1/2}\,d\epsilon \]

So:

\[ \sum_\alpha \to VA\int_0^\infty\epsilon^{1/2}\,d\epsilon \qquad \text{with } A = \frac{1}{4\pi^2}\Big(\frac{2m}{\hbar^2}\Big)^{3/2} \]

We call \(g(\epsilon) = A\,\epsilon^{1/2}\) the density of states (per unit volume).

Since we are using \(\epsilon_\alpha = \frac{\hbar^2 k^2}{2m}\), this only works for non-interacting (free) non-relativistic particles. Also, we are assuming that we are in a three-dimensional space, otherwise the change of variable would be different.

Also notice something: in writing the grancanonical partition function (3.8), we didn't consider the fact that an energy level can be degenerate. If we want to take that into account (a degeneracy g for every level), we would have:

\[ \mathcal Z_{B/F} = \prod_\alpha\big[1\mp e^{-\beta(\epsilon_\alpha-\mu)}\big]^{\mp g} \]

\[ \Omega = -\frac{1}{\beta}\log\mathcal Z = \pm\frac{g}{\beta}\sum_\alpha\log\big[1\mp e^{-\beta(\epsilon_\alpha-\mu)}\big] \]

and:

\[ N = g\sum_\alpha\langle\hat n_\alpha\rangle, \qquad E = g\sum_\alpha\epsilon_\alpha n_\alpha, \qquad \text{etc.} \]

Now that we are in the continuous case, we can evaluate Ω, N, E as integrals:

\[ \Omega = \pm\frac{gVA}{\beta}\int_0^\infty\epsilon^{1/2}\log\big[1\mp e^{-\beta(\epsilon-\mu)}\big]\,d\epsilon \]

\[ N = gAV\int_0^\infty\epsilon^{1/2}\frac{1}{e^{\beta(\epsilon-\mu)}\mp 1}\,d\epsilon \]
(3.10)

\[ E = gAV\int_0^\infty\epsilon^{3/2}\frac{1}{e^{\beta(\epsilon-\mu)}\mp 1}\,d\epsilon \]
(3.11)

Notice that their expressions scale with V (as they should, since they are extensive quantities).

Integrating by parts:

\[ \frac{\Omega}{V} = \pm\frac{gA}{\beta}\Big\{\underbrace{\Big[\frac{2}{3}\epsilon^{3/2}\log(\dots)\Big]_0^\infty}_{=0} - \int_0^\infty\frac{2}{3}\epsilon^{3/2}\frac{\pm\beta}{e^{\beta(\epsilon-\mu)}\mp 1}\,d\epsilon\Big\} = -\frac{2}{3}gA\int_0^\infty\epsilon^{3/2}\frac{1}{e^{\beta(\epsilon-\mu)}\mp 1}\,d\epsilon \]

\[ \implies \underbrace{\Omega}_{-pV} = -\frac{2}{3}E \implies \boxed{pV = \frac{2}{3}E} \qquad \text{Equation of state} \]

This is the equation of state of a perfect (quantum) gas. In the classical case, \(E = \frac{3}{2}Nk_BT \implies pV = Nk_BT\).

3.4.1 Fundamental equations

We can also calculate (recall that (−) is the bosonic gas, (+) the fermionic gas):

\[ n = \frac{N}{V} = gA\int_0^\infty\frac{\epsilon^{1/2}\,d\epsilon}{e^{\beta(\epsilon-\mu)}\mp 1}, \qquad p = \frac{2}{3}\frac{E}{V} = \frac{2}{3}gA\int_0^\infty\frac{\epsilon^{3/2}\,d\epsilon}{e^{\beta(\epsilon-\mu)}\mp 1} \]

By solving these equations, we would know the state of the gas (bosonic or fermionic). The problem is that these integrals have no closed-form primitive in 3D. We'll see the solution for low and high temperatures.
Using the fugacity z = e^{βμ}:

\[ n = gA\int_0^\infty\frac{\epsilon^{1/2}\,d\epsilon}{e^{\beta\epsilon}z^{-1}\mp 1} = gA\int_0^\infty\frac{z\,\epsilon^{1/2}\,d\epsilon}{e^{\beta\epsilon}\mp z} \]

With the change of variable βϵ = x², βdϵ = 2x dx:

\[ n = \frac{4g}{\sqrt\pi}\frac{1}{\lambda_T^3}\int_0^\infty\frac{z\,x^2\,dx}{e^{x^2}\mp z} \qquad \text{with } \lambda_T = \frac{h}{\sqrt{2\pi m k_B T}} \]

Clearly, since we are not able to do the first integral, we can't do the last either, but:

\[ \frac{z}{e^{x^2}\mp z} = \frac{z\,e^{-x^2}}{1\mp z\,e^{-x^2}} = z\,e^{-x^2}\sum_{n=0}^{\infty}(\pm 1)^n\big(z\,e^{-x^2}\big)^n \qquad \text{if the series is convergent} \]

For fermions (−), it converges as long as z ∉ (−∞, −1], which always holds since z = e^{βμ} > 0.
For bosons (+), it converges only for z < 1 ⟹ μ < 0, but this is exactly what we had to assume from the beginning. So n becomes:

\[ n = \frac{4g}{\sqrt\pi}\frac{1}{\lambda_T^3}\int_0^\infty dx\,x^2 e^{-x^2}\,z\sum_{n=0}^{\infty}(\pm 1)^n\big(z\,e^{-x^2}\big)^n \]

If the series is convergent we can swap the sum with the integral:

\[ = \frac{4g}{\sqrt\pi}\frac{1}{\lambda_T^3}\sum_{n=0}^{\infty}(\pm 1)^n z^{n+1}\int_0^\infty dx\,x^2 e^{-(n+1)x^2} \]

Now the integral can be evaluated, because it is the second moment of the Gaussian integral: \(\int_0^\infty x^2 e^{-(n+1)x^2}dx = \frac{\sqrt\pi}{4}\frac{1}{(n+1)^{3/2}}\). So:

\[ n = \frac{g}{\lambda_T^3}\sum_{n=0}^{\infty}\frac{(\pm 1)^n z^{n+1}}{(n+1)^{3/2}} \]
Similar calculations can be done for p, the only difference being an x⁴ instead of x² inside the integral, thus obtaining:

\[ p = k_B T\,\frac{g}{\lambda_T^3}\sum_{n=0}^{\infty}\frac{(\pm 1)^n z^{n+1}}{(n+1)^{5/2}} \]

If we define

\[ \text{Bosons:}\quad b_l(z) \equiv \sum_{n=0}^{\infty}\frac{z^{n+1}}{(n+1)^l} \qquad\qquad \text{Fermions:}\quad f_l(z) \equiv \sum_{n=0}^{\infty}(-1)^n\frac{z^{n+1}}{(n+1)^l} \]

then we obtain the fundamental equations:

\[ \boxed{\; n = \frac{N}{V} = \frac{g}{\lambda_T^3}\begin{cases} b_{3/2}(z) \\ f_{3/2}(z) \end{cases} \qquad\quad \frac{p}{k_B T} = \frac{g}{\lambda_T^3}\begin{cases} b_{5/2}(z) \\ f_{5/2}(z) \end{cases} \;} \]
(3.12)
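The resummation behind (3.12) can be verified numerically (my own sketch, not from the lecture): the series \(b_{3/2}(z)\) should reproduce the integral \(\frac{4}{\sqrt\pi}\int_0^\infty \frac{z x^2}{e^{x^2}-z}dx\) it came from.

```python
import numpy as np

# Series definitions of the b_l (bosons) and f_l (fermions) functions.
def b_l(z, l, terms=200):
    n = np.arange(terms)
    return np.sum(z**(n + 1) / (n + 1)**l)

def f_l(z, l, terms=200):
    n = np.arange(terms)
    return np.sum((-1)**n * z**(n + 1) / (n + 1)**l)

z = 0.5
# Midpoint-rule evaluation of (4/sqrt(pi)) * int_0^12 z x^2/(e^{x^2} - z) dx;
# the integrand decays like e^{-x^2}, so the cutoff at x = 12 is harmless.
dx = 1e-4
x = (np.arange(120000) + 0.5) * dx
integral = (4 / np.sqrt(np.pi)) * np.sum(z * x**2 / (np.exp(x**2) - z)) * dx

assert np.isclose(b_l(z, 1.5), integral, rtol=1e-5)
```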

In the classical limit z ≪ 1, we can take just the first term of the b_l and f_l functions: \(b_l(z) \simeq f_l(z) \simeq z\), so we get:

\[ n = \frac{g}{\lambda_T^3}z \implies z = \frac{n\lambda_T^3}{g} \ll 1 \qquad \text{(dilute gas; } \lambda_T^3 \text{ small} \to T \text{ large)} \]

\[ \frac{p}{k_B T} = \frac{g}{\lambda_T^3}z = n = \frac{N}{V} \implies pV = Nk_BT \]

which is the equation of state of a perfect classical gas, here obtained from Bose and Fermi statistics.

Also, in the limit of high temperatures we recover the exact formulas for μ and n(ϵ) that we got for a classical gas:

\[ n = \frac{g}{\lambda_T^3}e^{\beta\mu} \implies \mu(T) = k_B T\log\frac{n\lambda_T^3}{g} = -\frac{3}{2}k_B T\log\Big[\frac{m k_B T}{2\pi\hbar^2}\Big(\frac{g}{n}\Big)^{2/3}\Big] \xrightarrow[T\to\infty]{} -\infty \]

\[ \beta\mu\to-\infty \implies n(\epsilon) = \frac{1}{e^{\beta(\epsilon-\mu)}\mp 1} \simeq e^{-\beta(\epsilon-\mu)} = n_{MB} \]

So in the limit of high temperatures, both the Bose-Einstein and the Fermi-Dirac distributions converge to the Maxwell-Boltzmann distribution.

3.4.2 Semi-classical limit (exercise 2.1)

01/12/2022
At first, we can study the semi-classical limit by expanding the fundamental equations (3.12) to second order:

\[ \begin{cases} n = \dfrac{g}{\lambda_T^3}\Big[z \pm \dfrac{z^2}{2^{3/2}}\Big] & (I) \\[2mm] \dfrac{p}{k_B T} = \dfrac{g}{\lambda_T^3}\Big[z \pm \dfrac{z^2}{2^{5/2}}\Big] & (II) \end{cases} \]

We will derive z = z(n) from the first and plug it into the second equation:

\[ \text{from }(I):\quad \pm\frac{z^2}{2\sqrt 2} + z - \frac{n\lambda_T^3}{g} = 0 \]

Of the two roots of this quadratic we keep the one that vanishes with n: it is the physical one, since we already know that at first order \(z \simeq \frac{n\lambda_T^3}{g}\). Expanding \(\sqrt{1\pm x} \simeq 1 \pm \frac{1}{2}x - \frac{1}{8}x^2\) up to second order gives:

\[ z = \frac{n\lambda_T^3}{g} \mp \frac{1}{2\sqrt 2}\Big(\frac{n\lambda_T^3}{g}\Big)^2 \]

Plugging this into (II) and keeping only the first-order correction in \(\frac{n\lambda_T^3}{g}\) gives:

\[ \frac{p}{k_B T} = n\Big[1 \mp \frac{1}{2^{5/2}}\frac{n\lambda_T^3}{g}\Big] \]

The term \(\frac{1}{2^{5/2}}\frac{n\lambda_T^3}{g}\) is called the quantum correction or semiclassical correction.
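The second-order fugacity expansion can be checked against the exact root of the quadratic (my own numerical sketch; w denotes the small parameter \(n\lambda_T^3/g\)):

```python
import numpy as np

# Check z = w -/+ w^2/(2*sqrt(2)) (upper sign: bosons) against the exact
# root of  sign*z^2/2^{3/2} + z - w = 0  that vanishes with w.
w = 1e-3
for sign in (+1, -1):              # +1: bosons, -1: fermions
    a, b, c = sign / 2**1.5, 1.0, -w
    z_exact = (-b + np.sqrt(b**2 - 4 * a * c)) / (2 * a)   # physical root
    z_approx = w - sign * w**2 / (2 * np.sqrt(2))          # second-order expansion
    # the mismatch is third order in w
    assert abs(z_exact - z_approx) < w**3
```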

We can notice the sign of the quantum correction: it is − for bosons and + for fermions. That means the pressure p is reduced for a bosonic gas and increased for a fermionic gas, as if there were an attractive effective potential between bosons and a repulsive one between fermions.
We also notice that the correction is quantum in nature: it goes to zero when h → 0 (so that \(\lambda_T = \frac{h}{\sqrt{2\pi m k_B T}} \to 0\)) or when g = 2S + 1 → ∞, i.e. when all states have an infinite degeneracy, so that quantum counting no longer has any effect.

How high should the temperature T be to have z ≪ 1? It depends on n: we need \(n\lambda_T^3/g \ll 1\), i.e. \(T^{3/2} \gg n\) up to constants.

Now let’s do the opposite limit and analyze very low temperatures:

3.4.3 Fermions at T=0

Starting from the Fermi-Dirac distribution:

\[ n_\alpha = \frac{1}{e^{\beta(\epsilon_\alpha-\mu)}+1} \]

we'll take the limit T → 0 ⟹ β → ∞, so the behaviour of n_α depends on the sign of ϵ − μ: if ϵ_α < μ then \(e^{\beta(\epsilon_\alpha-\mu)} \to 0\) and n_α → 1, while if ϵ_α > μ then n_α → 0.

We call Fermi energy:

\[ \epsilon_F \equiv \lim_{T\to 0}\mu(T) \]

So at T = 0 the Fermi distribution is a step function: all states with energy ϵ < ϵ_F are occupied (with only 1 particle each, since they are fermions) and all states with energy ϵ > ϵ_F are empty (red in Fig. 3.5).



Figure 3.5: Fermions distribution at different temperatures


ϵ_F is determined by the number of particles N (since every state below ϵ_F holds exactly one particle), so the equation N = Σ_α n_α fixes μ(T). In other words, to change the Fermi energy we have to change the number of particles.
Now z = e^{βμ} is no longer small.
Also recall that f_l(z) remains finite and smooth for all z > 0, unlike the bosonic case.

We can also define the Fermi temperature through ϵ_F = k_B T_F. If a system has a temperature T ≪ T_F, then we are in the so-called degenerate limit and the system behaves effectively as at T = 0 (n(ϵ) actually looks like a step function and the calculations are easier). Examples could be:

The initial equations (3.10) and (3.11) become, for T = 0:

\[ \frac{N}{V} = gA\int_0^\infty d\epsilon\,\frac{\epsilon^{1/2}}{e^{\beta(\epsilon-\mu)}+1} \overset{(*)}{=} gA\int_0^{\epsilon_F}d\epsilon\,\epsilon^{1/2} = gA\,\frac{2}{3}\epsilon_F^{3/2} \]

\[ \frac{E}{V} = gA\int_0^\infty d\epsilon\,\frac{\epsilon^{3/2}}{e^{\beta(\epsilon-\mu)}+1} \overset{(*)}{=} gA\int_0^{\epsilon_F}d\epsilon\,\epsilon^{3/2} = gA\,\frac{2}{5}\epsilon_F^{5/2} \]

(*) is justified because, at T = 0, \(\frac{1}{e^{\beta(\epsilon-\mu)}+1} = n_\alpha \neq 0\) (= 1) only if ϵ < ϵ_F.

From the first expression one gets \(\epsilon_F = \big(\frac{3n}{2gA}\big)^{2/3}\), and dividing the second by the first we get the energy per particle: \(\frac{E}{N} = \frac{3}{5}\epsilon_F\). Using this and the fact that \(-pV = \Omega = -\frac{2}{3}E\), we get:

\[ p = \frac{2}{5}\frac{N}{V}\epsilon_F = \frac{2}{5}n\,\epsilon_F \]

So notice that at T = 0, p > 0.
This is a consequence of the Pauli exclusion principle.
Remark: it is also possible to expand the fundamental equations for small temperatures, using the Sommerfeld expansion.
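As a numerical sketch with illustrative numbers (my own example; the density value is an assumption, typical of a metal): the T = 0 formulas give a Fermi energy of a few eV and a large degeneracy pressure for a free-electron gas.

```python
import numpy as np

# Free-electron gas (g = 2) at T = 0:
#   eps_F = (3n/(2gA))^{2/3},  A = (1/(4 pi^2)) (2m/hbar^2)^{3/2},
#   E/N = (3/5) eps_F,  p = (2/5) n eps_F.
hbar = 1.054571817e-34     # J s
m_e = 9.1093837015e-31     # kg
n = 8.5e28                 # electrons per m^3 (illustrative metallic density)
g = 2

A = (1 / (4 * np.pi**2)) * (2 * m_e / hbar**2)**1.5
eps_F = (3 * n / (2 * g * A))**(2 / 3)     # Fermi energy in J (a few eV here)
E_per_N = 0.6 * eps_F                      # energy per particle
p = 0.4 * n * eps_F                        # degeneracy pressure in Pa

assert p > 0                               # positive pressure even at T = 0
```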

3.4.4 Bosons at T=0

(XI) MON 05/12/2022
We will start from the fundamental equation (3.12) for bosons, recalling that the series b_{3/2}(z) is only convergent for |z| ≤ 1. At z = 1, b_{3/2} is still defined and equals the Riemann zeta function ζ(3/2), but it has a vertical derivative there (\(b'_{3/2}(1) = \infty\)), so the function is no longer analytic. This is a signal that something is happening in the gas of bosons. Since z = e^{βμ}, let's focus on the chemical potential μ(T).

So there are only two possibilities:

1.
μ(T) → 0 only as T → 0, which is what happens for a non-relativistic Bose gas in 2D (Fig. 3.6, left).
2.
μ(T) reaches 0 at a finite T_c > 0, which is what we have for a non-relativistic Bose gas in 3D (Fig. 3.6, right).



Figure 3.6: μ(T) for 2D (left) and 3D non-relativistic Bose gas


Now we would like to find T_c. We can invert the first fundamental equation (3.12) to get μ(T):

\[ \text{from } n = \frac{g}{\lambda_T^3}b_{3/2}(z) \ \to\ \mu = \mu(T) \]

T_c will be the (critical) temperature such that μ(T = T_c) = 0, and we can find it by evaluating n with z = 1 (since z = e^{βμ}). But if we kept z = 1 below T_c, we would get:

\[ n = \frac{g}{\lambda_T^3}b_{3/2}(z=1) \xrightarrow[T\to 0]{} 0 \]

This is absurd: particles are not leaving the container! There must be something we have missed, and this formula is not correct in the range T < T_c.

To understand why, let's go back to the Bose-Einstein distribution:

\[ n_{BE}(\epsilon) = \frac{1}{e^{\beta(\epsilon-\mu)}-1} \overset{\mu=0}{=} \frac{1}{e^{\beta\epsilon}-1} \]

which is well defined if ϵ > 0, but divergent for ϵ → 0:

\[ n_{BE}(\epsilon=0) = \lim_{\epsilon\to 0}\frac{1}{e^{\beta\epsilon}-1} \to \infty \]

So the ϵ = 0 level is filled with an increasing number of particles. In other words, particles like to go into the ground state.

So the number of particles in the ϵ = 0 state diverges: N_0 ≡ N_{ϵ=0} → ∞. That means that, even though in the thermodynamic limit N → ∞, the ratio N_0/N stays finite, while usually the fraction N(ϵ≠0)/N of particles in any other single level is infinitesimal (only the sum over all levels gives 1).
This is what we call macroscopic occupation of the ground state.
We can also define the density of particles in the ground state (ground-state density) as:

\[ n_0 \equiv \frac{N_0}{V} = \frac{N_0}{N}\frac{N}{V} = \frac{N_0}{N}\,n \]

which is finite.

We can now notice something: \(n = \frac{g}{\lambda_T^3}b_{3/2}\) is the thermodynamic limit of the equation:

\[ N = \sum_\alpha n_{BE}(\epsilon_\alpha) = \sum_\epsilon\frac{1}{e^{\beta(\epsilon-\mu)}-1} \xrightarrow{\text{TD limit}} V\int_0^\infty d\epsilon\,g(\epsilon)\,n_{BE}(\epsilon) \]

ϵ = 0 is just an extremal point of the integral \(\int_0^\infty\), and since \(g(\epsilon) \propto \epsilon^{1/2}\) vanishes there, the integral is convergent even though \(n_{BE}(\epsilon) \to \infty\) as ϵ → 0. That is the reason why we didn't see the divergence when we performed the calculations. But now, if μ = 0, the fraction of particles in ϵ = 0 becomes macroscopic and, as we have seen, it gives problems if we don't consider it, so we have to add it manually, writing:

\[ N = \sum_\epsilon n_{BE}(\epsilon) = n_{BE}(0) + \sum_{\epsilon>0}n_{BE}(\epsilon) = N_0 + \sum_{\epsilon>0}n_{BE}(\epsilon) \xrightarrow{\text{TD limit}} N_0 + V\int_0^\infty d\epsilon\,g(\epsilon)\,n_{BE}(\epsilon) \]

and the integral is not divergent, since \(g(\epsilon)\,n_{BE}(\epsilon)\) is integrable at ϵ = 0.

Dividing by V we obtain:

\[ n = n_0 + \frac{g}{\lambda_T^3}b_{3/2}(z) = n_0 + n_n(T), \qquad n_{normal} \equiv n(\epsilon\neq 0) \]

So:

\[ n_0(T) = \begin{cases} 0 & T\geq T_c \\ n\Big[1-\big(\frac{T}{T_c}\big)^{3/2}\Big] & T < T_c \end{cases} \qquad\quad n_n(T) = \begin{cases} n & T\geq T_c \\ n\big(\frac{T}{T_c}\big)^{3/2} & T < T_c \end{cases} \]

As you can notice in Fig. 3.7, both n_0(T) and n_n(T) are continuous at T = T_c but not differentiable. As always happens, this is the sign of a phase transition: in this case from a normal quantum gas to a Bose-Einstein condensate. This is a new state of matter, so the population of the ground state actually has a macroscopic effect.



Figure 3.7: nn(T) and n0(T). For T < Tc we have a Bose-Einstein condensate.


We could analyze the other fundamental equation (3.12) to see that it has no problems for any z and T, because b_{5/2} doesn't have the same divergence problem as b_{3/2}:

\[ \frac{p}{k_B T} = \frac{g}{\lambda_T^3}b_{5/2}(z) \qquad \text{both with}\quad \begin{cases} z = e^{\beta\mu} & T\geq T_c \\ z = 1 & T\leq T_c \end{cases} \]
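The critical temperature follows from \(n = \frac{g}{\lambda_{T_c}^3}\zeta(3/2)\). A numerical sketch with illustrative numbers (my own example; the mass and density are assumptions chosen to mimic a dilute cold-atom cloud):

```python
import numpy as np

# Tc from n = (g/lambda_Tc^3) * zeta(3/2), with lambda_T = h/sqrt(2 pi m kB T):
#   Tc = (h^2 / (2 pi m kB)) * (n / (g * zeta(3/2)))^{2/3}
h, kB = 6.62607015e-34, 1.380649e-23
m = 87 * 1.66053907e-27        # mass of Rb-87 (a common BEC species), assumed
n, g = 1e20, 1                 # particles per m^3 (illustrative), spinless
zeta32 = 2.612                 # Riemann zeta(3/2), approximate value

Tc = (h**2 / (2 * np.pi * m * kB)) * (n / (g * zeta32))**(2 / 3)

# Condensate fraction below Tc: n0/n = 1 - (T/Tc)^{3/2}
T = 0.5 * Tc
n0_frac = 1 - (T / Tc)**1.5

assert 0 < n0_frac < 1
```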

Thermodynamical quantities

(XII) MON (ex.3) 12/12/2022
(XIII) MON (ex.4) 19/12/2022

3.4.5 Exercise (2.5): Gas of photons

Let us consider a gas of photons, confined in a volume V at equilibrium at temperature T. Photons are ultra-relativistic bosonic particles, for which \(\epsilon_{\vec p} = c|\vec p|\). They can be absorbed/emitted, so their particle number is not conserved, providing an example of a bosonic system with zero chemical potential: μ(T) ≡ 0. Recall also that photons have two independent polarizations, so that g = 2.

1.
Density of states:
The density of states is easily obtained from

\[ \Sigma(\epsilon) = g\int_{cp<\epsilon}\frac{d^3x\,d^3p}{h^3} = \frac{8\pi V}{h^3}\int_0^{\epsilon/c}dp\,p^2 = \frac{8\pi V}{3(hc)^3}\epsilon^3 \]

since ω(ϵ) = ∂Σ/∂ϵ.

2.
Granpotential, number of particles, internal energy
From the previous point it follows that:

\[ \Omega = \frac{1}{\beta}\int_0^\infty d\epsilon\,\omega(\epsilon)\log\big(1-e^{-\beta\epsilon}\big) = -\frac{8\pi V}{3(hc)^3}\int_0^\infty\frac{\epsilon^3}{e^{\beta\epsilon}-1}\,d\epsilon \]

where the last equality has been found after integration by parts.
Similarly, one gets:

\[ N = \int_0^\infty d\epsilon\,\omega(\epsilon)\,n(\epsilon) = \frac{8\pi V}{(hc)^3}\int_0^\infty\frac{\epsilon^2}{e^{\beta\epsilon}-1}\,d\epsilon \]

\[ E = \int_0^\infty d\epsilon\,\omega(\epsilon)\,\epsilon\,n(\epsilon) = \frac{8\pi V}{(hc)^3}\int_0^\infty\frac{\epsilon^3}{e^{\beta\epsilon}-1}\,d\epsilon \]

From the relation Ω = −pV it follows that \(p = \frac{1}{3}\frac{E}{V}\).

3.
Density of energy and particles
Using the relation

\[ \int_0^\infty\frac{x^n}{e^x-1}\,dx = \Gamma(n+1)\,\zeta(n+1) = n!\,\zeta(n+1) \]

we can show that:

\[ \frac{N}{V} = \frac{16\pi\,\zeta(3)}{(hc)^3}(k_B T)^3 \]

\[ \frac{E}{V} = \frac{48\pi\,\zeta(4)}{(hc)^3}(k_B T)^4 \]

Also, as a consequence of \(C_V = \partial E/\partial T\), we have \(c_v = \frac{C_V}{V} \propto T^3\).

4.
Spectral distribution and density
Recalling that the energy of a photon is related to the frequency through ϵ = hν, we can consider the number of photons N(ν) with an energy 0 ≤ ϵ ≤ hν, given by:

\[ N(\nu) = \frac{8\pi V}{(hc)^3}\int_0^{h\nu}\frac{\epsilon^2}{e^{\beta\epsilon}-1}\,d\epsilon \]

We can calculate the spectral distribution:

\[ f(\nu) = \frac{dN(\nu)}{d\nu} = \frac{8\pi V}{(\beta hc)^3}\frac{(\beta h\nu)^2}{e^{\beta h\nu}-1}\,\beta h = \frac{8\pi V}{c^3}\frac{\nu^2}{e^{\beta h\nu}-1} \]

This represents the number of photons with frequency between ν and ν + dν. We can also show that the energy spectral density (defined as the energy per unit frequency and volume) is:

\[ u(\nu) = \frac{8\pi h}{c^3}\frac{\nu^3}{e^{\beta h\nu}-1} \]
5.
Wien's and Rayleigh-Jeans' laws
Rayleigh-Jeans' law is obtained by expanding the exponential in the denominator to first order, while Wien's law is obtained by neglecting the −1 in the denominator:

\[ u(\nu) \simeq 8\pi h\Big(\frac{\nu}{c}\Big)^3 e^{-\beta h\nu} \qquad \beta h\nu \gg 1 \]

\[ u(\nu) \simeq \frac{8\pi h}{c^3}\frac{\nu^3}{\beta h\nu} = \frac{8\pi}{c^3\beta}\nu^2 \qquad \beta h\nu \ll 1 \]
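The two limits can be checked numerically against the full Planck form (my own sketch; the temperature is an arbitrary illustrative choice):

```python
import numpy as np

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def u_planck(nu, T):
    # full spectral energy density u(nu) = 8 pi h nu^3 / (c^3 (e^{beta h nu} - 1))
    beta = 1 / (kB * T)
    return 8 * np.pi * h * nu**3 / (c**3 * (np.exp(beta * h * nu) - 1))

def u_wien(nu, T):
    # Wien limit: drop the -1 in the denominator (beta h nu >> 1)
    return 8 * np.pi * h * (nu / c)**3 * np.exp(-h * nu / (kB * T))

def u_rj(nu, T):
    # Rayleigh-Jeans limit: e^x - 1 ~ x (beta h nu << 1)
    return 8 * np.pi * nu**2 * kB * T / c**3

T = 300.0
nu_high = 20 * kB * T / h      # beta h nu = 20: deep Wien regime
nu_low = 0.01 * kB * T / h     # beta h nu = 0.01: Rayleigh-Jeans regime

assert np.isclose(u_planck(nu_high, T), u_wien(nu_high, T), rtol=1e-6)
assert np.isclose(u_planck(nu_low, T), u_rj(nu_low, T), rtol=1e-2)
```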