QEC07 Registration Now Open

I hear it is warm in Southern California in winter. Registration for QEC 07, organized by Daniel Lidar, Todd Brun, and Paolo Zanardi, is now open:

The University of Southern California’s Center for Quantum Information Science & Technology (CQIST) will be hosting the First International Conference on Quantum Error Correction during the week of Dec. 17, 2007. Details are available at http://qserver.usc.edu/qec07/ and will be updated with a schedule and local links.
The goal of the conference is to bring together a wide group of experts to discuss all aspects of quantum error correction, decoherence control, and fault tolerance. Every effort will be made to include talks surveying the latest experimental progress, and to promote an
interaction between theoreticians and experimentalists. The morning of the first day of the conference will be devoted to a tutorial session. A special emphasis will be made on student participation, through 50 guaranteed student spots, and student travel grants.
The conference is now open for registration and submission of contributed talks and posters. We invite you to participate and would be grateful if you could forward the announcement to others who might be interested.

Protect Me From What I Want

Quantum error correction is a beastly miracle: an astounding discovery which lets us dream of a future quantum computer but which quickly turns into a nightmare when we think of actually implementing it in the lab. Motivated in part by this, a large number of people have been thinking about ways to considerably simplify the problem of building a fault-tolerant quantum computer by designing physical systems where error correction is part of the natural dynamics. This approach is more in keeping with our classical computers, where fault tolerance is built into the physics of the devices we use to compute. There are now many different sketches of implementations of these ideas, ranging from very simple systems designed to energetically combat single qubit errors (quant-ph/0012018), to gapped systems which allow for topological quantum computation (quant-ph/9707021), to four dimensional systems which are fully fault-tolerant (quant-ph/0110143). But these proposals are all just theory right now. What is interesting is to contemplate the actual physical implementation of these systems in the lab.
Along these lines, a lot of interesting ideas have been proposed. Some of these have been in condensed matter systems, for example in engineered superconducting systems (cond-mat/0407663 and cond-mat/0111224). At the same time there has been a lot of interest in engineering these systems in atomic and molecular setups. These proposals include using polar molecules in lattices (quant-ph/0612180, Nature Physics 2, 341 (2006)), engineering the appropriate interactions in optical lattices (cond-mat/0210564), systems of trapped ions (quant-ph/0509197), etc., etc. But care must be taken! Especially when you have to work to get the interaction you need in order to obtain the robustness offered by these energetic schemes.
Case in point is a paper which appeared today on the arXiv, “Energy protection arguments fail in the interaction picture,” 0705.2370, by Ken Brown from Georgia Tech. The basic idea behind the energy protection schemes is as follows. Suppose you have a system whose ground state is a quantum error correcting code. Then you can encode quantum information into these energy levels. If you design your system Hamiltonian correctly then it is possible that single qubit errors, for instance, always take you out of this error correcting code space, i.e. that such single qubit errors all correspond to transitions from this ground state to higher energy excited states. More generally you can design systems, like Kitaev’s toric code, whose ground state can only be excited by errors on a large number of qubits. The basic idea is then that if your environment is cold enough, the processes which would normally cause decoherence are suppressed, since EVERY single qubit error (or, more generally, any error the code can detect) will take the system from the ground state to an excited state of higher energy. In contrast, for most physical qubits it is possible to have errors (phase errors) which cause the phase of a quantum system to change without exchanging energy between the system and environment. [As an aside: decoherence, when it is perturbative, likes to preserve the separate energies of a system and its environment, thus decoherence can either be heating, cooling, or, more insidiously, these phase errors. Just making a system degenerate doesn’t help you protect against errors, since then all errors are these insidious phase errors. For an example of this type of argument see quant-ph/0012018.]
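To make the lab-frame version of this argument a little more concrete (my own back-of-the-envelope sketch, not anything taken from Ken’s paper): let [tex]$P$[/tex] project onto the degenerate ground space of the system Hamiltonian, let [tex]$\Delta$[/tex] be the gap to the excited states, and let [tex]$E$[/tex] be any error the code detects. Then the error detection condition reads
[tex]$P E P = c_E P$[/tex]
so [tex]$E$[/tex] cannot act nontrivially within the code space; the only way it can do damage is by promoting the system across the gap, and a cold, weakly coupled bath supplies the energy [tex]$\Delta$[/tex] for that only at a rate suppressed roughly like [tex]$e^{-\Delta/k_B T}$[/tex].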
Okay, so far so good, hopefully. Now suppose that you want to engineer such a system in an ion trap. Indeed, for example, in quant-ph/0703270, the authors consider engineering a system known as the long range quantum compass model in ion trap quantum systems (a long time ago the short range version of the quantum compass model also appeared in the thesis of a lunatic thinking along similar lines). The way that they achieve the necessary two qubit interactions to implement this model is to use what I like to call the Sørensen-Mølmer interactions (see quant-ph/0002024). In particular it is possible to take ions in an ion trap, and apply lasers to the ions such that the internal energy levels of the ions couple to the motional state of the ions, and, if you do this just right, you can engineer effective two qubit interactions like, say, Ising interactions along the X and Y directions: [tex]$X \otimes X$[/tex] or [tex]$Y \otimes Y$[/tex] in qubit language. Now it turns out that the quantum compass model uses exactly these different competing Ising bonds. So the idea of quant-ph/0703270 is exactly to engineer the quantum compass model in these ion trap systems.
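For orientation, the kind of Hamiltonian being targeted looks schematically like (my shorthand for a compass-type model with competing Ising bonds, not the precise Hamiltonian of quant-ph/0703270):
[tex]$H_{\rm eff} = -\sum_{(i,j)\in {\rm X\ bonds}} J_x \, X_i X_j \;-\; \sum_{(i,j)\in {\rm Y\ bonds}} J_y \, Y_i Y_j$[/tex]
where each bond is generated by a Sørensen-Mølmer style drive on the corresponding pair of ions, and the competition between the two kinds of bonds is what is supposed to produce the protected low energy subspace.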
But wait! There was a key word in that last paragraph: effective. In particular, in the Sørensen-Mølmer scheme what you end up doing is showing that in the interaction picture of the ions the effective interaction is the requisite Ising terms along the appropriate directions. What does this mean, in the interaction picture? It means that there is a rotating frame for the ions (rotating both the internal degrees of freedom, i.e. your qubits, and the motional degrees of freedom, i.e. the bus) where the interaction looks like the Ising terms. But no system is isolated, so an important question to ask is what the environment sees in this frame of reference. In particular, since you are going to use these interactions in order to protect your system energetically, the arguments about protecting qubits by keeping the environment cold had better work out in this frame too.
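Just to pin down what “in the interaction picture” means here — this is standard textbook material, nothing specific to the ion trap scheme: if [tex]$H_0$[/tex] is the bare Hamiltonian (ion splittings plus motional modes) and you pass to the rotating frame defined by [tex]$U(t) = e^{-iH_0 t}$[/tex], then every additional term [tex]$V$[/tex], the engineered coupling but also the coupling to the environment, gets replaced by
[tex]$V_I(t) = U^\dagger(t)\, V\, U(t) = e^{iH_0 t}\, V\, e^{-iH_0 t}.$[/tex]
The Sørensen-Mølmer trick arranges for the engineered piece of [tex]$V_I(t)$[/tex] to look like the static Ising terms you want. The question is what the same transformation does to the piece that couples to the environment.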
And do they? Sadly, no, and this is the content of 0705.2370. In particular, if you make your effective interaction in the interaction picture, then you’d better look at the system-environment coupling which you are trying to protect against in this same picture. Now when you do this, what you find is that the system-environment coupling gets changed in an important way. In particular, in this frame the energetic arguments you make about decoherence being either cooling, heating, or phase-ish errors get modified (this argument is basically the rotating wave approximation). Indeed, what you discover is that there are processes where you excite from the ground state in the interaction picture to a higher energy state, produce a change in the zero temperature environment, and still conserve energy. What is this process back in the non-interaction picture? Nothing more than spontaneous emission from your ion energy levels. Doh! [As an aside I should note that this effect occurs because the coupling you engineer is much, much weaker than the bare energy levels of your ion.]
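Concretely, and this is just the generic single-qubit cartoon rather than the detailed calculation in 0705.2370: take a qubit with bare splitting [tex]$\omega_0$[/tex] coupled to a bath operator [tex]$B$[/tex] through [tex]$X \otimes B$[/tex]. In the frame rotating at [tex]$\omega_0$[/tex] this coupling becomes
[tex]$X \otimes B \;\longrightarrow\; \left(\sigma_+ e^{i\omega_0 t} + \sigma_- e^{-i\omega_0 t}\right) \otimes B(t),$[/tex]
so the environment now sees terms oscillating at the huge bare frequency [tex]$\omega_0$[/tex] rather than at the tiny effective coupling strength. A zero temperature bath with modes near [tex]$\omega_0$[/tex] will happily absorb that quantum of energy: in the rotating frame it looks like an energy conserving transition out of the protected ground space, and back in the lab frame it is nothing but spontaneous emission.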
So the long and the short of this story is that because the interaction you engineer is created in the interaction picture, the arguments about how the system interacts with its environment get modified. This is in effect a huge problem for proposals like quant-ph/0703270. Further, it is a big problem for proposals which try to use Sørensen-Mølmer type gates for doing things like simulation of many-body quantum systems (the system won’t thermalize to the thermal state of the interaction you are trying to create) and quantum adiabatic computation (same problem). But to learn all that you’d best check out the details of 0705.2370.

Due When?

Now that is an interesting homework due date:

22.3.: Homework 2 is available here [ps] [pdf] – due date: second class after strike ends (not 19.4.2007)

Update (5/27/07): Well, the strike is over, so Julia has had to get back to work 🙂

2007 Gödel Prize

The 2007 Gödel Prize has been announced and the winners are Alexander Razborov and Steven Rudich for their “Natural Proofs” paper. Here is the citation:

The theory of Computer Science was developed to formalize methods for computation and the problems they can solve. The class P consists of the problems solvable by conventional computers in time polynomial in the input size. Another class of problems called NP requires one to find a solution of a problem where it is feasible to quickly verify that the solution is correct. Many NP problems have no known efficient solution even though they have major practical applications. The P =? NP question has challenged researchers in theoretical Computer Science for many decades and is at the core of the challenges facing this field. This question first began to be actively investigated in the early 1970s when Stephen Cook and Leonid Levin showed that the efficient solution of a “complete” problem for NP could be used to efficiently solve all of the problems in NP. To many in the field, this indicated that P is not likely to be equal to NP, but the question has been open for over 35 years since then.

Researchers (Baker, Gill, and Solovay) in 1975 showed that certain general proof techniques known as diagonalization, which had been successfully used to provide proofs in theoretical computer science, could not be used to separate P and NP. In the two decades after that, researchers developed proofs that computational models known as circuits (similar to the circuits in conventional chips) could not solve certain computational problems, in the hope of generalizing those proofs to show that P is not NP. These proof methods, known as “circuit lower bounds,” generally restricted resources of circuits (such as size and/or depth) so the circuit could not solve a given problem.

The Natural Proofs paper is concerned with showing that proof techniques using circuit lower bounds are not likely to resolve the P =? NP question. The paper formalizes some properties found in common in all known circuit lower bound proofs, and calls proofs having these properties “Natural Proofs”. The paper provides strong evidence that no Natural Proof separates P and NP, by showing that otherwise a widely held conjecture would be violated. This conjecture is that “pseudorandom number generators” exist; these are procedures that take as input a short random “seed” sequence of bits and then, without further use of randomness, generate a very long (exponential in the seed length) sequence of pseudo-random bits. The resulting pseudo-random bits are actually not random but appear, even to a very powerful computer running for a very long time, to be completely random. The existence of “pseudo-random number generators” is also related to the existence of unbreakable methods for cryptography, also widely conjectured to exist.

The paper also proves, without any assumptions, that there is no Natural Proof that certain computational problems often used in cryptography (e.g., integer factoring and discrete log) are hard to solve. Such cryptographic methods are of critical use to electronic commerce, and though widely conjectured to be unbreakable, this implies that there are no Natural Proofs for their security.

In summary, the Natural Proofs paper proves that a wide class of proof techniques can’t be used to resolve the P =? NP and related questions, unless widely held conjectures are violated. The Natural Proofs paper impacts the efforts of generations of theoretical scientists that have worked to resolve the question of P =? NP, and implies that other proof techniques need to be applied (such as non-constructive methods which make use of techniques not allowed by Natural Proofs).

Annealing to Zen-Buddha Ground State

Okay, so talking about D-wave is certainly not good for my Zen-Buddha nature. Not good at all! So let’s talk about something related to D-wave, but only tangentially, so that I can get that Zen-Buddha nature back.
One issue that comes up a lot in discussions about D-wave is simulated annealing. Wikipedia does a nice job defining simulated annealing. The basic idea is to mimic cooling in a physical system. Suppose you have a space of different configurations and some sort of energy function on this space. Your goal is to find the lowest energy configuration, because then you will be the coolest physicist on the block…and of course at the same time you might be solving some very hard computational problem. At a given step in the simulated annealing algorithm, you are in a given configuration and then you take a step to a configuration in a neighborhood of this configuration. The probability of accepting this change depends on a parameter T which is roughly the temperature. The simplest update rule is something like Metropolis Monte Carlo: if the energy is decreased, then always accept the change, but if the energy is increased by [tex]$\Delta E$[/tex], accept the change with probability [tex]$e^{-\Delta E/T}$[/tex]. Of course more complex update rules are possible, but you get the idea. Then as a function of time you decrease the temperature and roughly freeze out the system. Hopefully by the time the temperature has reached zero you will either be at the global minimum energy or will have seen the globally minimal energy along the way. Okay, so a cool strategy for finding a global optimum. (Okay, okay, that will be my last temperature joke!)
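For the concretely minded, here is a bare-bones Python sketch of the above (my own toy version, with a random Ising ring standing in for whatever hard problem you actually care about):

import math
import random

def simulated_annealing(energy, neighbor, x0, schedule):
    # Minimize energy(x) starting from configuration x0, following a
    # decreasing temperature schedule and the Metropolis acceptance rule.
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for T in schedule:
        x_new = neighbor(x)
        e_new = energy(x_new)
        dE = e_new - e
        # Always accept downhill moves; accept uphill moves with prob exp(-dE/T).
        if dE <= 0 or random.random() < math.exp(-dE / T):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# Toy instance: an Ising ring with random +/-1 couplings.
n = 50
J = [random.choice([-1.0, 1.0]) for _ in range(n)]

def energy(spins):
    return sum(J[i] * spins[i] * spins[(i + 1) % n] for i in range(n))

def neighbor(spins):
    flipped = list(spins)
    i = random.randrange(n)
    flipped[i] *= -1
    return flipped

x0 = [random.choice([-1, 1]) for _ in range(n)]
schedule = [2.0 * (0.999 ** k) for k in range(20000)]
best, best_energy = simulated_annealing(energy, neighbor, x0, schedule)
print(best_energy)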
But if there is one thing you learn when you begin to study classical algorithms, it is that oftentimes what is good for solving one problem can be bad for solving other problems. Take, for example, the maximum matching problem. A matching of a graph is a set of edges which do not share common vertices. The maximum matching problem is to find a matching with the largest number of edges. There is a classical algorithm for finding the maximum matching which runs in a time proportional to the number of edges times the square root of the number of vertices (due to Micali and V. Vazirani). In other words, the running time of this algorithm is polynomial.
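(If you want to play with this yourself without implementing a blossom algorithm, networkx will compute an exact maximum matching in polynomial time; it uses a blossom-style algorithm rather than Micali-Vazirani, but polynomial is polynomial. A tiny sketch, with a random graph standing in for the instance:)

import networkx as nx

# A random graph standing in for some instance of interest.
G = nx.gnm_random_graph(40, 120, seed=1)

# Exact maximum cardinality matching, computed in polynomial time.
matching = nx.max_weight_matching(G, maxcardinality=True)
print(len(matching), "edges in a maximum matching")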
Now what happens if we run simulated annealing on this problem? Well of course simulated annealing is a meta-algorithm…there is a lot I haven’t specified. But just take a fairly generic and straightforward way of implementing simulated annealing for this problem. Then it was shown by Sasaki and Hajek that there exist bad instances for this approach. In other words, simulated annealing fails to find the maximum matching in a time polynomial in the size of the graph. Doh! On the other hand, simulated annealing does find a matching “close” to the maximum for suitably small temperatures. This latter fact is interesting, but what is more interesting to me is the fact that the simulated annealing algorithm fails where a classical polynomial time algorithm is known to exist.
So why does this all matter in the big scheme of things? Because one of the claims made by those who believe in D-wave is that their quantum system will outperform classical simulated annealing. But what does it mean to outperform classical simulated annealing if classical simulated annealing itself isn’t even the best classical algorithm? There is no simple answer to this question as far as I know, but it does give me pause every time I see simulated annealing compared to some quantum version of this annealing.

Friday Rant Blogging

Over at Roseblog, Geordie gets on a high horse and tells us quantum theorists that we haven’t read enough because we haven’t read papers like this. Which, I guess, makes me not a quantum theorist! No, people like me, we never read outside of our narrow focus or try to learn as much about a field we work in as possible. We’ve never heard of this Aeppli guy either…what, even Scott Aaronson, the quantum theorist of theorists, cites his work? That’s just impossible!
Oh, and by the way, the paper cited shows a speedup of a factor of thirty with their “quantum annealing” strategy over their “classical annealing” strategy. See the substantial comments below by Bill Kaminsky for why the above sentence is wrong.
And yes, I’m grumpy.

Julia!

Julia Kempe, quantum computing theorist extraordinaire, is written up in this nice article on Sciencecareers.org. Hmmm, what is Julia drinking in the second picture?

128 Bit Madness

Get your own 128-bit number here!
Speaking of which, what would happen if I designed a system which used a digital rights management system with my own 128-bit number and used this system to spread the HD-DVD 128-bit number? Then they would have had to crack my system to know that I had cracked their system… Does the legal system just blow up into an infinite loop at this point?
Speaking of which, how many bits of the number do you have to publish? What if I leave off the last five bits? What if I create a superposition of numbers such that with a one in one hundred chance you get the right 128-bit string? Is publishing such a quantum state a violation of the DMCA?

0, 1, superposition

Doh, quantum computers are tristate logic devices?

In classical computer science, bits — or binary digits — hold data encoded as ones and zeros. In quantum computing, data is measured in qubits, or quantum bits. As such, a qubit can have three possible states — one, zero or a “superposition” of one and zero.

I mean technically it is correct, I guess (ignoring mixed states), but doesn’t this make it sound like qubits are just three state classical systems? Or is my nitpicky-meter too high?
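For the record, and purely as the standard textbook statement: a pure qubit state is any
[tex]$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$[/tex]
so there is a whole continuum of superpositions (a sphere’s worth of them), not a single third state sitting alongside zero and one. That, I think, is what the “three possible states” phrasing hides.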