Sunday, February 6, 2011

Adiabatic quantum computation in Egypt

I am amused by the recent proliferation of things Egypt-related but not Egypt-specific essentially related to the current mess. This is the case, e.g., with Hernando de Soto's [1] WSJ op-ed on property rights in Egypt, and, more benignly, with this old NASA photograph of Cairo and Alexandria that has been circulating about the internet. (Alexandria turns out to be one of those long skinny coastal cities like Santa Barbara.) I suppose there's no harm in using topical excuses to force one's longstanding obsessions down the casual reader's throat, though it's hard to do this in a way that's not misleading. So instead of parodying de Soto with "How Egyptian protesters used the laws of gravity to bring down the regime," I'll just write a physics-y post that I was meaning to anyway and add "in Egypt" the way one adds "in bed" at the end of a fortune cookie.

[NASA photo, Flickr, creative commons, etc.]

What follows really belongs on the defunct physics blog, being a little technical. But it isn't really physics, and has to do, besides, with the work of Dorit Aharonov, another of the squalid scholars.


1. Conventional quantum computers are structured like Turing machines. They consist of a (finite) set of internal states (logic gates, etc.) and a "tape" which (at the beginning of the computation) has the input string written on it, say in Arabic numerals. The tape is connected to the machine by a read-write head. At each step of the algorithm one can move the tape back and forth, change the internal state, and/or write on the bit of tape that's under the head. At some point the algorithm stops (assuming the problem is decidable); this happens when the internal state of the machine is the designated "end" state, at which point the answer to the original problem is on the tape. Quantum computers differ from classical computers in that the number of allowed tape configurations is (in principle) much larger, as superpositions are allowed in intermediate steps. The complexity of an algorithm is related to how many steps it takes to process an input string of length N, and in particular how this grows with N (e.g., polynomially/exponentially). The complexity of a problem is the complexity of the best algorithm that can solve it, i.e., the one whose step count grows slowest at large N.
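To make the step-counting notion concrete, here is a minimal sketch of a classical (not quantum) Turing-style machine. The machine and its states are hypothetical, invented for illustration: it zeroes out a unary string and halts at the first blank, so its step count is linear in the input length N.

```python
# Minimal Turing-machine sketch (hypothetical machine, for illustration only):
# a finite state, a tape (as a dict), a head position. Each step reads the
# symbol under the head, optionally writes, moves, and updates the state.
def run(tape_str):
    tape = dict(enumerate(tape_str))
    state, head, steps = "scan", 0, 0
    while state != "halt":
        symbol = tape.get(head, "_")      # "_" = blank cell
        if state == "scan" and symbol == "1":
            tape[head] = "0"              # write on the cell under the head
            head += 1                     # move the tape
        else:
            state = "halt"                # blank reached: designated "end" state
        steps += 1
    return steps

print(run("1" * 8))   # 9 steps for N = 8: step count grows linearly with N
```

The interesting complexity questions are, of course, about how this growth behaves for the best machine solving a given problem, not for any one machine.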

The problem with this construction is that, while it is easy to find upper bounds for complexity (just write down an algorithm), it is generally hard to establish lower bounds, because it is hard to make useful statements about all conceivable algorithms, or to rule out the possibility that one is just not being clever enough. Generally in this sort of situation one's instinct is to look for a way of talking about the "space of all problems," which (if one is lucky) has some degree of geometric structure -- so that, e.g., two problems are "near" or "far" in problem space.

2. Adiabatic quantum computation begins with the observation that "satisfiability" problems in computer science [2] can be recast in terms of the physics of magnets. Magnets consist of spins that can either point up or down (i.e., true or false); depending on the details, spins might want either to line up or to point in opposite directions [3]. Depending on the interactions there might be a "good" configuration for the spins (i.e., one in which all pairs of spins that want to line up do so and all pairs that want to point in opposite directions do so) or not. (In the latter case the system is called "frustrated." A simple example of a frustrated system: three spins on a triangle that all want to point in opposite directions. If 1 is up, then 2 and 3 want to point down, but this doesn't work because 2 and 3 want to point in opposite directions. No configuration satisfies all the bonds in Egypt.) The lowest possible energy of a frustrated system is higher than that of an unfrustrated system, so the question of whether a spin system is frustrated reduces to that of what its lowest possible energy ("ground state energy") is. Obviously the question of whether a spin system is frustrated is closely related to that of whether a set of statements can be satisfied, if you map the truth-values onto spins (T/F = up/down) and the clauses onto interactions. There are some explicit examples of this mapping in the original paper of Farhi et al.
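The triangle example is small enough to check by brute force. A sketch, with the conventional Ising assignment (spin = ±1, antiferromagnetic bond energy +1 if a pair is aligned, -1 if opposite):

```python
# Brute-force check of the frustrated triangle: three Ising spins (+1/-1),
# every pair wants to point in opposite directions. Bond energy is s_i * s_j,
# so a satisfied (anti-aligned) bond contributes -1, a violated one +1.
from itertools import product

bonds = [(0, 1), (1, 2), (0, 2)]
energies = [sum(s[i] * s[j] for i, j in bonds)
            for s in product([+1, -1], repeat=3)]   # all 8 configurations

print(min(energies))   # -1, not -3: at best two of the three bonds are satisfied
```

The ground state energy -1 sits above the -3 an unfrustrated three-bond system would reach, which is exactly the frustration diagnostic described above.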

3. This mapping isn't immediately useful as it just recasts the satisfiability problem in the language of magnetism. This is where the physics comes in. Farhi et al. observed that the quantum mechanical "adiabatic theorem" tells you that, if you change your Hamiltonian (the Hamiltonian of a system is an assignment of an energy to every possible configuration of the system) sufficiently slowly, a system prepared in the ground (lowest-energy) state of the original Hamiltonian ends up in the ground state of the final Hamiltonian. How slowly you have to go depends on the energy gap between the ground state and the next-lowest-energy (first excited) state all along the path in "Hamiltonian space" that leads from the initial to the final Hamiltonian. The bigger the minimum gap, the faster you can afford to go. Often the minimum gap vanishes in the large-system limit (this is called a quantum phase transition); in this case you have to go arbitrarily slowly as the number of spins increases. The maximum allowed speed might vanish either as a power law or exponentially with the system size. Alternatively there might be exact "degeneracies" even for finite systems, in which case the algorithm is doomed.
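The adiabatic theorem can be seen numerically on the smallest possible system. The following is a toy sketch (my choice of Hamiltonians, not from the Farhi et al. construction): a single spin interpolating from H(0) = -X to H(1) = -Z, where the gap stays open, so a slow sweep tracks the instantaneous ground state while a fast one does not.

```python
# Toy illustration of the adiabatic theorem: one spin, H(s) = -(1-s) X - s Z,
# swept over total time T. We evolve the ground state of H(0) and measure its
# overlap with the ground state of H(1).
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ground_state_fidelity(T, steps=2000):
    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # ground state of -X
    dt = T / steps
    for k in range(steps):
        s = (k + 0.5) / steps
        H = -(1 - s) * X - s * Z
        w, V = np.linalg.eigh(H)
        U = V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T   # exact propagator for this step
        psi = U @ psi
    target = np.array([1, 0], dtype=complex)             # ground state of -Z
    return abs(target.conj() @ psi) ** 2

print(ground_state_fidelity(T=50.0))   # close to 1: slow enough to be adiabatic
print(ground_state_fidelity(T=0.5))    # well below 1: sweep is nearly sudden
```

Here the minimum gap along the path is of order one, so a modest T suffices; the whole difficulty in the many-spin case is that the analogous gap can shrink with system size.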

4. The adiabatic algorithm works like this. You start the system off in some reference "trivial" state, let's say with a Hamiltonian that wants all the spins to point up. (E.g., spins in a strong external field.) Call this H_0. The problem Hamiltonian, which encodes the input -- the logical expression you want to check the satisfiability of -- is called H_1. You turn a hypothetical knob so that at time t, the Hamiltonian is H(t) = (t/T) H_1 + (1 - t/T) H_0. Or you use a curvier path. Assuming T is large enough, the ground state of H(T) is the answer to your problem. The complexity question becomes one of finding the path with the slowest-growing T(N). In general the slowness comes from segments of the path that are near the phase transitions between the initial and final states, where the gaps are given by the theory of critical slowing down; therefore you have mapped the complexity problem onto a problem about phase transitions. To rephrase, what this approach does for you is map the space of problems onto the space of quantum Hamiltonians, the structure of which can be understood in terms of renormalization group flows.
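The gap-along-the-path computation can be sketched explicitly for a toy instance (my own, for illustration): H_0 is a transverse field wanting all spins along x, and H_1 penalizes violated clauses of a tiny satisfiability problem on three variables ("x0 is true", "x0 = x1", "x1 = x2"), whose unique solution is all-true.

```python
# Exact diagonalization of H(s) = (1-s) H_0 + s H_1 for a 3-spin toy instance,
# scanning the spectral gap along the interpolation path. A real complexity
# study would track how the minimum gap shrinks as the number of spins N grows.
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def op(single, site, n=3):
    """Single-site operator acting on `site` of an n-spin system."""
    return reduce(np.kron, [single if i == site else I2 for i in range(n)])

H0 = -sum(op(X, i) for i in range(3))        # transverse field: spins point along x
H1 = 0.5 * (np.eye(8) - op(Z, 0)) \
   + 0.5 * (np.eye(8) - op(Z, 0) @ op(Z, 1)) \
   + 0.5 * (np.eye(8) - op(Z, 1) @ op(Z, 2))  # one unit of energy per violated clause

def gap(s):
    w = np.linalg.eigvalsh((1 - s) * H0 + s * H1)
    return w[1] - w[0]

gaps = [gap(s) for s in np.linspace(0, 1, 201)]
print(min(gaps))   # the minimum gap along the path sets how slowly you must go
```

For this instance the gap stays open (the clause Hamiltonian has a unique ground state), so the sweep time T need not diverge; the hard instances are precisely those where min(gaps) collapses with N.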

5. Satisfiability problems aren't everything. The Aharonov et al. paper establishes that adiabatic quantum computation is equivalent to standard quantum computation. (I haven't read the proof.)

6. Of course, the Hamiltonians of conventional magnetic systems are what they are; you can't implement the adiabatic algorithm as stated, and such systems aren't likely to be terribly useful as a means of quantum computation. However, one does have a good deal of control over the Hamiltonians describing cold atomic gases, and there's been a fair amount of work on actually implementing the algorithm there. A variant that's been proposed is computation-through-dissipation, which is a clever mashup of the adiabatic algorithm and optical pumping. I'm a little skeptical about the practical prospects for this approach, but it is theoretically interesting because dissipative systems are hard to analyze, and it would be nice if one could use the mapping in reverse and apply computer science results to say something about them.

[1] Yes, I considered calling him Hernando de Stoato and posting this on STOATUSblog.
[2] Viz. questions about whether a very long logical expression comes out true on some assignment of truth-values to its constituent atomic propositions -- which, e.g., "A or B" does but "A and not-A" doesn't.
[3] In a physical system this depends on why the spins are interacting at all; there are various possible mechanisms. In the model one puts this in by hand by assigning an energy penalty to undesired configurations.
