Ising model [Python]
I am trying to simulate the Ising phase transition in a Barabási–Albert network and to replicate the results for observables such as magnetization and energy that one observes in an Ising grid simulation. However, I am having trouble interpreting the results: I am not sure whether the physics is wrong or whether there is an error in the implementation. Here is a minimum working example:
```python
import numpy as np
import networkx as nx
import random
import math
import matplotlib.pyplot as plt

## sim params
# coupling constant
J = 1.0  # ferromagnetic

# temperature range, in units of J/kT
t0 = 1.0
tn = 10.0
nt = 10
T = np.linspace(t0, tn, nt)

# mc steps
steps = 1000

# generate BA network, 200 nodes, each new node attached to 3 existing nodes
G = nx.barabasi_albert_graph(200, 3)

# convert to adjacency matrix, A_{ij}
adj_matrix = nx.adjacency_matrix(G)
top = adj_matrix.todense()
N = len(top)

# initialize spins in the network, ferromagnetic
def init(N):
    return np.ones(N)

# calculate net magnetization
def netmag(state):
    return np.sum(state)

# calculate net energy, E = \sum J * A_{ij} * s_i * s_j
def netenergy(N, state):
    en = 0.
    for i in range(N):
        for j in range(N):
            en += (-J) * top[i, j] * state[i] * state[j]
    return en

# random sampling, metropolis local update
def montecarlo(state, N, beta, top):
    # initialize difference in energy between E_old and E_new
    delE = []
    # pick a random source node
    rsnode = np.random.randint(0, N)
    # get the spin of this node
    s2 = state[rsnode]
    # calculate the energy by summing the interactions, append to delE
    for tnode in range(N):
        s1 = state[tnode]
        delE.append(J * top[tnode, rsnode] * state[tnode] * state[rsnode])
    # calculate the probability of a flip
    prob = math.exp(-np.sum(delE) * beta)
    # if the probability is greater than a rand[0,1] drawn from a uniform
    # distribution, accept the flip; else retain the current state
    if prob > random.random():
        s2 *= -1
    state[rsnode] = s2
    return state

def simulate(N, top):
    # initialize arrays for the observables
    magnetization = []
    energy = []
    specificheat = []
    susceptibility = []

    for count, t in enumerate(T):
        # temporary accumulators
        e0 = m0 = e1 = m1 = 0.
        print('t =', t)

        # initialize spin vector
        state = init(N)

        for i in range(steps):
            montecarlo(state, N, 1 / t, top)
            mag = netmag(state)
            ene = netenergy(N, state)
            e0 = e0 + ene
            m0 = m0 + mag
            e1 = e0 + ene * ene
            m1 = m0 + mag * mag

        # calculate thermodynamic variables, append to the arrays
        energy.append(e0 / (steps * N))
        magnetization.append(m0 / (steps * N))
        specificheat.append(e1/steps - e0*e0/(steps*steps) / (N * t * t))
        susceptibility.append(m1/steps - m0*m0/(steps*steps) / (N * t * t))
        print(energy, magnetization, specificheat, susceptibility)

    plt.figure(1)
    plt.plot(T, np.abs(magnetization), '-ko')
    plt.xlabel('temperature (kT)')
    plt.ylabel('average magnetization per spin')

    plt.figure(2)
    plt.plot(T, energy, '-ko')
    plt.xlabel('temperature (kT)')
    plt.ylabel('average energy')

    plt.figure(3)
    plt.plot(T, specificheat, '-ko')
    plt.xlabel('temperature (kT)')
    plt.ylabel('specific heat')

    plt.figure(4)
    plt.plot(T, susceptibility, '-ko')
    plt.xlabel('temperature (kT)')
    plt.ylabel('susceptibility')

simulate(N, top)
```
Observed results: [plots of magnetization, energy, specific heat and susceptibility versus temperature]
I have tried to comment the code heavily; in case I have overlooked something, please ask.

Questions:

- Is the magnetization trend correct? The magnetization decreases as the temperature increases, but I cannot identify the critical temperature of the phase transition.
- The energy approaches 0 with increasing temperature, which seems consistent with what is observed in an Ising grid. Why, then, are there negative specific heat values?
- How does one choose the number of Monte Carlo steps? Is it hit and trial, based on the number of network nodes?
Edit (02.06): the simulation breaks down for an anti-ferromagnetic configuration.
First of all, since this is a programming site, let's analyse the program. Your computation is very inefficient, which makes it impractical to explore larger graphs. The adjacency matrix in your case is 200x200 (40000 elements) with only about 3% non-zeros. Converting it to a dense matrix means many more computations while evaluating the energy difference in the `montecarlo` routine and the net energy in `netenergy`. The following code performs 5x faster on my system, with even better speedup expected for larger graphs:
```python
# keep the topology as a sparse matrix
top = nx.adjacency_matrix(G)

def netenergy(N, state):
    en = 0.
    for i in range(N):
        ss = np.sum(state[top[i].nonzero()[1]])
        en += state[i] * ss
    return -0.5 * J * en
```
Note the 0.5 in the factor: since the adjacency matrix is symmetric, each pair of spins is counted twice!
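In equation form, the double counting reads (with $A_{ij}$ the adjacency matrix and $\langle i,j \rangle$ denoting distinct pairs):

$$ E = -J \sum_{\langle i,j \rangle} s_i s_j = -\frac{J}{2} \sum_{i} \sum_{j} A_{ij}\, s_i s_j $$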
```python
def montecarlo(state, N, beta, top):
    # pick a random source node
    rsnode = np.random.randint(0, N)
    # get the spin of this node
    s = state[rsnode]
    # sum of the spins of the neighbouring nodes
    ss = np.sum(state[top[rsnode].nonzero()[1]])
    # transition energy
    delE = 2.0 * J * ss * s
    # calculate the transition probability
    prob = math.exp(-delE * beta)
    # conditionally accept the transition
    if prob > random.random():
        s = -s
    state[rsnode] = s
    return state
```
Note the factor of 2.0 in the transition energy, which is missing from your code!
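To see where the 2.0 comes from: flipping $s_i \to -s_i$ changes only the bonds attached to node $i$, so

$$ \Delta E = E_\text{new} - E_\text{old} = 2 J\, s_i \sum_j A_{ij}\, s_j $$

which is exactly `delE = 2.0 * J * ss * s` in the routine above.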
There is some `numpy` index magic here. `top[i]` is the sparse adjacency row-vector of node i, and `top[i].nonzero()[1]` gives the column indices of its non-zero elements (`top[i].nonzero()[0]` gives the row indices, which are all equal to 0 since it is a row-vector). `state[top[i].nonzero()[1]]` is therefore the set of spin values of the nodes neighbouring node i.
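To make the indexing concrete, here is a minimal standalone sketch; the 4-node path graph and the spin values are made up purely for illustration:

```python
import numpy as np
from scipy.sparse import csr_matrix

# adjacency matrix of a 4-node path graph: 0-1-2-3
top = csr_matrix(np.array([[0, 1, 0, 0],
                           [1, 0, 1, 0],
                           [0, 1, 0, 1],
                           [0, 0, 1, 0]]))
state = np.array([1., -1., 1., -1.])

i = 1
rows, cols = top[i].nonzero()
print(rows)         # [0 0]  - all zeros, since top[i] is a 1xN row-vector
print(cols)         # [0 2]  - the neighbours of node 1
print(state[cols])  # [1. 1.] - the spins of those neighbours
```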
Now the physics. The thermodynamic properties are wrong because:
```python
e1 = e0 + ene * ene
m1 = m0 + mag * mag
```
should be:
```python
e1 = e1 + ene * ene
m1 = m1 + mag * mag
```
and:
```python
specificheat.append( e1/steps - e0*e0/(steps*steps) /(N* t * t))
susceptibility.append( m1/steps - m0*m0/(steps*steps) /(N* t *t))
```
should be:
```python
specificheat.append((e1/steps/N - e0*e0/(steps*steps*N*N)) / (t * t))
susceptibility.append((m1/steps/N - m0*m0/(steps*steps*N*N)) / t)
```
(you'd better factor the average energy and magnetisation out into separate variables early on)
This brings the heat capacity and the magnetic susceptibility into the land of positive values. Note the single `t` in the denominator of the susceptibility.
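For comparison, these are the standard fluctuation-dissipation relations for the per-spin heat capacity and susceptibility (with $k_B = 1$; in the notation of the code, $\langle E \rangle$ is `e0/steps` and $\langle E^2 \rangle$ is `e1/steps`):

$$ C = \frac{\langle E^2 \rangle - \langle E \rangle^2}{N\,T^2}, \qquad \chi = \frac{\langle M^2 \rangle - \langle M \rangle^2}{N\,T} $$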
Now that the program is (hopefully) correct, let's talk physics. For each temperature, you start with an all-up spin state and let it evolve one spin at a time. Obviously, unless the temperature is zero, this initial state is far from thermal equilibrium, therefore the system will start to drift towards the part of the state space that corresponds to the given temperature. This process is known as thermalisation, and collecting statistical information during that period is meaningless. You have to split the simulation at a given temperature in two parts - thermalisation and a significant run. How many iterations are needed to reach equilibrium? Hard to say in advance - employ a moving average of the energy and monitor when it becomes (relatively) stable.
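A minimal sketch of such an equilibration check, assuming the corrected `montecarlo` and `netenergy` above; the helper name, the window size, and the tolerance are my own placeholder choices, not prescriptions:

```python
def thermalise(state, N, beta, top, window=1000, tol=1e-3, max_windows=100):
    # run single-spin updates until the windowed average of the energy
    # stops changing by more than a relative tolerance
    prev_avg = None
    for w in range(max_windows):
        acc = 0.0
        for _ in range(window):
            montecarlo(state, N, beta, top)
            acc += netenergy(N, state)   # recomputed each step; could be tracked incrementally
        avg = acc / window
        if prev_avg is not None and abs(avg - prev_avg) <= tol * (abs(prev_avg) + 1e-12):
            return (w + 1) * window      # (relatively) stable -> consider it thermalised
        prev_avg = avg
    return max_windows * window          # budget exhausted; proceed anyway
```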
Second, the update algorithm changes a single spin per iteration, which means the program explores the state space very slowly, and you need a huge number of iterations in order to get a good approximation of the partition function. With 200 spins, 1000 iterations might not be enough.
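Putting both remarks together, the per-temperature loop might be organised along these lines (a sketch only; `thermalise` is the hypothetical helper above, and the step counts are placeholders to be tuned):

```python
for t in T:
    beta = 1.0 / t
    state = init(N)

    # part one: thermalisation - no statistics collected
    thermalise(state, N, beta, top)

    # part two: significant run - measure once per sweep (N single-spin updates)
    e0 = m0 = e1 = m1 = 0.
    for _ in range(steps):
        for _ in range(N):
            montecarlo(state, N, beta, top)
        ene = netenergy(N, state)
        mag = netmag(state)
        e0 += ene
        e1 += ene * ene
        m0 += mag
        m1 += mag * mag
```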
The rest of the questions do not belong here.