import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook
%matplotlib inline
Last time, we made a neat little implementation of a 2D Ising model simulation, and we showed that it looks reasonable. Well, that it might - we can't yet be certain of that! I know I said that next time we would, er, %timeit, put it on the GPU and make it GO FAST, but perhaps it's a better idea to start with some data analysis first, making sure the results we're getting are quantitatively what we would like them to be - physical.
One more erratum: I conveniently forgot to mention that we were working under the assumption that there's no external magnetic field. Let it be known that we're neglecting any external magnetic field. Let's re-summarize the results here:
plt.rcParams['figure.figsize'] = [11, 6] # for larger pictures

size = (64, 64)
np.random.seed(0)
a = np.random.randint(low=0, high=2, size=size) * 2 - 1
plt.imshow(a, cmap='binary'); plt.colorbar()
I'll modify the better_iteration function we arrived at in two ways. First, the name could be better: since it tries to flip half the spins on the grid, we'll call it half_iteration. Second, we'll make it modify the spin matrix a in-place, without copying, to save memory and computation time:
def half_iteration(a, mask, J = 1, beta = 1):
    interactions = np.roll(a, 1, 0) + np.roll(a, 1, 1) + np.roll(a, -1, 0) + np.roll(a, -1, 1)
    deltaE = 2 * J * a * interactions
    boltzmann = np.exp(-beta * deltaE) * mask
    flip_these = np.random.random(a.shape) < boltzmann
    a[flip_these] *= -1
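As a quick sanity check of the flip logic - a standalone sketch that re-defines the function so it runs on its own - note that at beta = 0 the Boltzmann factor is exp(0) = 1 for every site, so every masked spin should flip and every unmasked spin should stay put:

```python
import numpy as np

def half_iteration(a, mask, J=1, beta=1):
    interactions = np.roll(a, 1, 0) + np.roll(a, 1, 1) + np.roll(a, -1, 0) + np.roll(a, -1, 1)
    deltaE = 2 * J * a * interactions
    boltzmann = np.exp(-beta * deltaE) * mask
    flip_these = np.random.random(a.shape) < boltzmann
    a[flip_these] *= -1

a = np.ones((4, 4), dtype=int)
m = np.zeros((4, 4), dtype=int)
m[::2, ::2] = 1  # allow flips only on these sites

half_iteration(a, m, beta=0)  # exp(0) = 1, so every masked site flips
print(a[m == 1])  # all -1
print(a[m == 0])  # all +1
```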
# make a checkerboard pattern to preserve causality, updating spins on alternating diagonals in each iteration
def mask(a):
    a_mask = np.ones_like(a)
    a_mask[::2, ::2] = 0
    a_mask[1::2, 1::2] = 0
    return a_mask
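Applied to a small grid (a standalone sketch), this produces the expected checkerboard, so each site's four neighbors all carry the opposite mask value:

```python
import numpy as np

def mask(a):
    a_mask = np.ones_like(a)
    a_mask[::2, ::2] = 0
    a_mask[1::2, 1::2] = 0
    return a_mask

m = mask(np.ones((4, 4), dtype=int))
print(m)
# [[0 1 0 1]
#  [1 0 1 0]
#  [0 1 0 1]
#  [1 0 1 0]]
print((m + np.roll(m, 1, 0) == 1).all())  # neighboring rows are complementary -> True
```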
def full_iteration(a, mask, J = 1, beta = 1):
    # this now becomes a VERY thin wrapper!
    half_iteration(a, mask, J, beta)
    half_iteration(a, 1-mask, J, beta)

a_mask = mask(a)
full_iteration(a, a_mask)
plt.imshow(a, cmap='binary')
We are now in the exact spot where we left off a week ago. Let’s do some quantitative calculation now!
There are three main diagnostics (that I know of) that you would be interested in when looking at this kind of system:
- the average magnetization, which is simply the average of all spins
- the internal energy, which is -0.5 * deltaE in our half_iteration code for each spin (and globally, we would take the average of that, so technically it's the average internal energy per spin)
- the heat capacity, which is the partial derivative of the internal energy with respect to the temperature
The first two can be grabbed straightforwardly from each snapshot of the iteration, and then averaged to decrease the effect of fluctuations. The third, as seen on Richard Fitzpatrick’s website, can be calculated as
\[C_v = \frac{dU}{dT} = \frac{\sigma_U^2}{k_B T^2} \] Let’s get to it!
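Before we do, here's a tiny sketch (with made-up stand-in numbers, not simulation output) of what that fluctuation formula computes: given samples of U with spread \(\sigma_U\), it recovers \(\sigma_U^2 / (k_B T^2)\):

```python
import numpy as np

rng = np.random.default_rng(0)
k_b, T, sigma_U = 1.0, 2.0, 0.05                # hypothetical values, for illustration only
U_samples = rng.normal(-1.8, sigma_U, 100_000)  # stand-in energy measurements

Cv = U_samples.var() / (k_b * T**2)  # fluctuation-dissipation estimate of dU/dT
print(Cv)  # close to sigma_U**2 / (k_b * T**2) = 0.000625
```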
def magnetization(a):
    return a.mean(), a.std()

def internal_energy(a, J = 1):
    interactions = np.roll(a, 1, 0) + np.roll(a, 1, 1) + np.roll(a, -1, 0) + np.roll(a, -1, 1)
    current_energy = -J * a * interactions
    return current_energy.mean(), current_energy.std()
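A quick standalone check of those two helpers (re-defined here so the snippet runs on its own): on an all-up grid every spin sees four aligned neighbors, so with this per-spin convention the energy is exactly -4J with zero spread:

```python
import numpy as np

def magnetization(a):
    return a.mean(), a.std()

def internal_energy(a, J=1):
    interactions = np.roll(a, 1, 0) + np.roll(a, 1, 1) + np.roll(a, -1, 0) + np.roll(a, -1, 1)
    current_energy = -J * a * interactions
    return current_energy.mean(), current_energy.std()

up = np.ones((8, 8))
print(magnetization(up))    # (1.0, 0.0)
print(internal_energy(up))  # (-4.0, 0.0)
```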
magnetization(a), internal_energy(a)
((0.0205078125, 0.9997896926986519), (-2.123046875, 1.816938556075228))
Well, that’s a whole lot of variance! Our system has definitely not converged to any stable state yet. Let’s repeat that after a bunch of iterations:
a_bunch = 100
for i in range(a_bunch):
    full_iteration(a, a_mask)

plt.imshow(a, cmap='binary')
magnetization(a), internal_energy(a)
Well, that’s better. A little bleak, but it certainly seems stable.
All right! This is what we wanted. The system has nearly converged to a stable state, so its magnetization (remember that we’re way under the critical temperature) is almost fully in one direction, and the internal energy is way lower than what we’ve had before, suggesting that this state is much closer to equilibrium.
Let’s try to get the energies and plot this:
k_b = 1
T_range = np.linspace(1.5, 3.5, 300)
iterations = 100

energies = []
magnetizations = []
for T in tqdm_notebook(T_range):
    beta = 1 / (k_b * T)
    np.random.seed(0)
    a = np.random.randint(low=0, high=2, size=size) * 2 - 1
    a_mask = mask(a)
    for i in range(iterations):
        full_iteration(a, a_mask, beta=beta)
    energies.append(internal_energy(a))
    magnetizations.append(magnetization(a))

E, dE = np.array(energies).T
M, dM = np.array(magnetizations).T
We'll also need a bunch of plots, and we'd like to use \(C_v\) to find the critical temperature as exhibited by our simulation: we'll take it to be the location of the \(C_v\) maximum. Let's stick the analysis we want into a function:
Onsager_critical_temperature = 2 / np.log(1 + 2**0.5) # a theoretical value

def analysis(T_range, E, dE, M, dM, plotting=True):
    Cv = dE**2 / k_b / T_range**2 # see Fitzpatrick
    maximum_index = np.argmax(Cv)
    our_critical_temperature = T_range[maximum_index]
    if plotting:
        plt.plot(T_range, E)
        plt.xlabel(r"Temperature [in units where $k_B = 1$]")
        plt.ylabel("Average energy per spin")
        plt.xlim(T_range.min(), T_range.max())
        plt.vlines(Onsager_critical_temperature, E.min(), E.max())
        plt.figure()
        plt.plot(T_range, M)
        plt.xlabel(r"Temperature [in units where $k_B = 1$]")
        plt.ylabel("Average magnetization")
        plt.xlim(T_range.min(), T_range.max())
        plt.vlines(Onsager_critical_temperature, M.min(), M.max())
        plt.figure()
        plt.plot(T_range, Cv, "o-")
        plt.xlabel(r"Temperature [in units where $k_B = 1$]")
        plt.ylabel("Heat capacity per spin")
        plt.xlim(T_range.min(), T_range.max())
        plt.vlines(Onsager_critical_temperature, Cv.min(), Cv.max())
    return our_critical_temperature
analysis(T_range, E, dE, M, dM)
2.2959866220735785
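For reference, Onsager's exact critical temperature evaluates numerically (in units where \(k_B = J = 1\)) to about 2.2692, so this estimate overshoots by a little:

```python
import numpy as np

Tc = 2 / np.log(1 + 2**0.5)  # Onsager's exact result for the 2D Ising model
print(round(Tc, 4))  # 2.2692
```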
And if I’ve ever seen a plot that says nothing, it’s this one. It seems all random. There’s a simple issue underneath this: since we’re starting from a randomized grid, there is no telling which spin state the system will converge to at a low temperature. Note how there’s much less noise above the critical temperature as found by Onsager back in 1944, as denoted by the vertical line. This makes intuitive sense: above the critical temperature the system converges to an essentially random state, and each of those is basically equivalent.
Let’s try this again, from a cold start (all spins up):
energies = []
magnetizations = []
for T in tqdm_notebook(T_range):
    beta = 1 / (k_b * T)
    a = np.ones(size)
    a_mask = mask(a)
    for i in range(iterations):
        full_iteration(a, a_mask, beta=beta)
    energies.append(internal_energy(a))
    magnetizations.append(magnetization(a))

E, dE = np.array(energies).T
M, dM = np.array(magnetizations).T
analysis(T_range, E, dE, M, dM)
2.322742474916388
That’s a bit better, but not quite enough. The measurement is still noisy. What we need to do is average the energies and magnetizations over a bunch of iterations. We’ll also stick our entire logic into a function:
def simulation(iterations, size=size, T_range=T_range, k_b=k_b, plotting=True):
    energies = []
    magnetizations = []
    a_mask = np.ones(size)
    a_mask[::2, ::2] = 0
    a_mask[1::2, 1::2] = 0

    for T in tqdm_notebook(T_range):
        beta = 1 / (k_b * T)
        a = np.ones(size)
        E = np.zeros(2)
        M = np.zeros(2)
        for i in range(iterations):
            E += internal_energy(a)
            M += magnetization(a)
            full_iteration(a, a_mask, beta=beta)
        energies.append(E / iterations)
        magnetizations.append(M / iterations)

    E, dE = np.array(energies).T
    M, dM = np.array(magnetizations).T
    return analysis(T_range, E, dE, M, dM, plotting)

simulation(1000)
2.32943143812709
So we’re getting relatively good agreement with Fitzpatrick’s results, but on the other hand… our critical temperature is slightly off and the peak is not as sharp as it should probably be. Perhaps this is an issue of small grid size?
simulation(1000, (128, 128))
2.3361204013377925
Okay, and what if we go in the other direction, towards smaller sizes:
simulation(500, (16, 16))
2.3896321070234112
Interestingly, while the peak's shape is not changing much, the estimated critical temperature does shift a bit. However, I'm trying to write this blog in half-hour chunks, and running multiple simulations does eat into that budget. In other words, while we have shown that the magnetization and internal energy qualitatively behave just as they should for these few runs, it'd be nice to find out whether we converge to the true critical temperature - and we're going to need to run a whole bunch of simulations to do that. We do need to speed this up, after all!
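Since the speed-up is the plan, it's worth pinning down a baseline first. Here's a standalone sketch (re-defining the pieces above; the 64x64 grid and the temperature are arbitrary choices) that times a single full_iteration:

```python
import timeit
import numpy as np

def half_iteration(a, mask, J=1, beta=1):
    interactions = np.roll(a, 1, 0) + np.roll(a, 1, 1) + np.roll(a, -1, 0) + np.roll(a, -1, 1)
    deltaE = 2 * J * a * interactions
    boltzmann = np.exp(-beta * deltaE) * mask
    flip_these = np.random.random(a.shape) < boltzmann
    a[flip_these] *= -1

def full_iteration(a, mask, J=1, beta=1):
    half_iteration(a, mask, J, beta)
    half_iteration(a, 1 - mask, J, beta)

a = np.random.randint(low=0, high=2, size=(64, 64)) * 2 - 1
m = np.ones_like(a)
m[::2, ::2] = 0
m[1::2, 1::2] = 0

# average wall-clock time per full iteration, in milliseconds
t = timeit.timeit(lambda: full_iteration(a, m, beta=1 / 2.27), number=100) / 100
print(f"{t * 1e3:.3f} ms per full_iteration")
```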