# Finding Magnetic Crystal Structures with Simulated Annealing

We use simulated annealing to find globally optimal configurations in a simple model of a magnetic crystal.

A simple model for magnetism in a crystal is a grid of cells, each holding a dipole that can point either up or down. The most stable, lowest-energy configuration is of course a checkerboard pattern. But suppose each cell can only feel its four nearest neighbours: the cells directly above, below, left, and right of it. How can we reach this stable configuration from a random starting configuration?^{[1]}

One way we might solve this problem is to flip a cell whenever three or more of its neighbours point the same way as it does. That way, we flip exactly when flipping reaches a lower energy, and leave the cell alone otherwise.
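This greedy rule can be sketched as follows. This is a minimal illustration, not the post's exact code: it assumes ±1 spin values, periodic boundaries, and sequential row-by-row updates as described in the footnote, and the name `greedy_step` is hypothetical.

```python
import numpy as np

def greedy_step(grid):
    """One greedy sweep over a +/-1 spin grid: flip a cell whenever
    three or more of its four neighbours share its spin.

    Assumes periodic boundaries and sequential row-by-row updates.
    """
    n_rows, n_cols = grid.shape
    for i in range(n_rows):
        for j in range(n_cols):
            neighbours = [
                grid[(i - 1) % n_rows, j],  # above
                grid[(i + 1) % n_rows, j],  # below
                grid[i, (j - 1) % n_cols],  # left
                grid[i, (j + 1) % n_cols],  # right
            ]
            same = sum(1 for s in neighbours if s == grid[i, j])
            if same >= 3:
                grid[i, j] *= -1  # flipping lowers the local energy
    return grid
```

Note that a checkerboard is a fixed point of this rule: every cell disagrees with all four neighbours, so no cell ever flips.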

However, we see that in practice, for many initial configurations, this method doesn't yield the optimal checkerboard pattern. Why?

The problem is that our algorithm *only* flips if it results in a lower-energy configuration: it is a greedy algorithm, and tends to get stuck in local minima. Sometimes you have to make a short-term sacrifice in order to achieve a long-term goal. But how can we make these short-term sacrifices in pursuit of our long-term goal?

Meet simulated annealing. In physics, temperature essentially corresponds to small random vibrations: the higher the temperature, the stronger the vibrations. Introducing a temperature parameter gives us a mechanism for making these short-term sacrifices. The temperature starts high and gradually decays to zero, at which point the algorithm reduces to our initial greedy rule.

When the temperature is above zero, there will be an exponentially decaying probability of flipping a cell even when the flip results in a higher-energy state. Moreover, the probability will also depend on how bad the flip is: dramatically worse flips are much less likely to be accepted. The code might look something like:

```python
import numpy as np

# Linear cooling schedule: T falls from 1 to 0 over the run.
temperature = lambda time_fraction: 1 - time_fraction
# T: temperature
# E: fraction of neighbours which are dissimilar,
#    a proxy for how much a flip would raise the energy.
# Guard T = 0 to avoid division by zero; at zero temperature
# uphill flips are never accepted.
accept_prob = lambda T, E: np.exp(-E / T) if T > 0 else 0.0
```
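Putting the schedule and acceptance rule together, a full annealing loop over the grid might look like the sketch below. The function name `anneal` and the details are illustrative, assuming ±1 spins, periodic boundaries, and sequential sweeps; flips that lower the energy are always taken, and uphill flips are accepted with probability $e^{-E/T}$.

```python
import numpy as np

def anneal(grid, n_sweeps=200, rng=None):
    """Simulated annealing on a +/-1 spin grid with periodic boundaries.

    Illustrative sketch: downhill flips are always taken; uphill flips
    are accepted with probability exp(-E/T), where E is the fraction of
    dissimilar neighbours and T decays linearly from 1 towards 0.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_rows, n_cols = grid.shape
    for sweep in range(n_sweeps):
        T = 1 - sweep / n_sweeps  # linear cooling schedule
        for i in range(n_rows):
            for j in range(n_cols):
                neighbours = [
                    grid[(i - 1) % n_rows, j],
                    grid[(i + 1) % n_rows, j],
                    grid[i, (j - 1) % n_cols],
                    grid[i, (j + 1) % n_cols],
                ]
                similar = sum(1 for s in neighbours if s == grid[i, j])
                if similar >= 3:
                    grid[i, j] *= -1  # greedy flip: lowers energy
                else:
                    E = (4 - similar) / 4  # fraction of dissimilar neighbours
                    if T > 0 and rng.random() < np.exp(-E / T):
                        grid[i, j] *= -1  # uphill flip, allowed while T > 0
    return grid
```

Early on, when T is near 1, uphill flips are common and the grid explores freely; by the final sweeps T is near 0 and the loop behaves like the pure greedy rule.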

We can plot the probability of flipping a cell that has only $1$ similar neighbour as a function of time.
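In lieu of the plot, we can tabulate a few points of that curve. A cell with only one similar neighbour has $E = 3/4$ dissimilar neighbours, so under the linear schedule above the acceptance probability decays from $e^{-3/4} \approx 0.47$ towards zero:

```python
import numpy as np

temperature = lambda time_fraction: 1 - time_fraction
accept_prob = lambda T, E: np.exp(-E / T) if T > 0 else 0.0

# A cell with only 1 similar neighbour has 3/4 dissimilar neighbours.
E = 3 / 4
for f in (0.0, 0.25, 0.5, 0.75, 0.99):
    T = temperature(f)
    print(f"t={f:.2f}  T={T:.2f}  p={accept_prob(T, E):.3f}")
```

The probability falls monotonically as the temperature cools, which is exactly the "exploration early, greed late" behaviour we wanted.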

We see that, with a bit of tuning, this algorithm is able to find the globally optimal configuration, since it is less likely to get stuck in a local minimum.

Simulated annealing is a powerful optimization technique that is useful well beyond this very simple example. It is particularly useful when finding an approximate global optimum^{[2]} is better than finding a precise local optimum via methods such as gradient descent. You can play around with the code for this experiment on Colab.

[1] We assume that each cell is updated sequentially in a scanning motion, and that cells at the edges feel the cell at the opposite edge (periodic boundary conditions).

[2] Sometimes even the annealing method doesn't yield the globally optimal configuration, but it usually gets closer than the simple non-annealing method. (See demonstration)