Q-circuit v2.0

Many readers are familiar with the LaTeX package called Q-circuit that I coauthored with Bryan Eastin. If you aren’t familiar with it, it is a set of macros that helps make typesetting quantum circuits easy, efficient and (reasonably) intuitive.  The results are quite beautiful, if I do say so myself, as can be seen in the picture to the left.
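For readers who haven't tried it, a circuit is laid out as a grid of macros, one row per wire and one column per time step. Here is a minimal sketch using the standard commands from the tutorial (preamble setup omitted; see the website linked below): a Hadamard followed by a CNOT, i.e. a Bell-pair circuit.

    \Qcircuit @C=1em @R=.7em {
      & \gate{H} & \ctrl{1} & \qw \\
      & \qw      & \targ    & \qw
    }

Here \qw draws quantum wire, \gate{H} a labeled gate box, and \ctrl{1} and \targ mark the control and target of the CNOT, with the argument of \ctrl counting how many wires down the target sits.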
In the past year Bryan and I began getting emails from Q-circuit users who were experiencing some bugs. It turns out that the issue was usually an incompatibility between Q-circuit v1.2 and Xy-pic v3.8, an update to a package that Q-circuit relies on heavily.
Thanks to the user feedback and some support from the authors of Xy-pic, we were able to stamp out the bugs. (Probably… no guarantees!) Thus I present to you the latest version of Q-circuit!
Download Q-circuit v2.0
There is more info on the Q-circuit website, where you will find the tutorial and some examples, and where you can also enjoy the painfully retro green-on-black motif. (Let the haters hate… I like it.) A few additional technical details:

  1. Nothing has been added to the new version.  It is as near as possible to the old version while still functioning with Xy-pic version 3.8.x.
  2. The old version of Q-circuit works better with Xy-pic version 3.7. (When using Xy-pic 3.7, Q-circuit 2.0 makes PDFs with slightly pixelated curves.)
  3. The arXiv is still using Xy-pic 3.7 and they don’t know when they’ll update to 3.8.

Finally, a big thank you to my coauthor Bryan for putting in so much hard work to make Q-circuit a success!

Entangled LIGO


The quest to observe gravitational waves has been underway for several years now, but as yet there has been no signal. To detect gravitational waves, the LIGO collaboration uses huge kilometer-scale Michelson-type interferometers, one of which is seen in the aerial photo to the left. When a gravitational wave from, say, a supernova or an in-spiraling pair of black holes arrives at the detector, the wave stretches and shrinks spacetime in the transverse directions, moving the test masses at the ends of the interferometer arms and hence changing the path length of the interferometer, creating a potentially observable signal.
The problem is, the sensitivity requirements are extreme. So extreme, in fact, that within a certain frequency band the limiting noise comes from vacuum fluctuations of the electromagnetic field. The signal-to-noise ratio can be improved by the “classical” strategy of increasing the circulating light power, but that strategy is limited by the thermal response of the optics and so can’t be pushed much further.
But as we all know, the quantum giveth and the quantum taketh away. Or alternatively, we can fight quantum with quantum! The idea goes back to a seminal paper by Carl Caves, who showed that using squeezed states of light could reduce the uncertainty in an interferometer.
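For orientation, here is the textbook scaling (not a number taken from the LIGO paper): with N photons circulating and ordinary vacuum entering the dark port, the shot-noise-limited phase sensitivity of a Michelson interferometer goes as
\[ \Delta\phi_{\rm shot} \sim \frac{1}{\sqrt{N}}, \]
while injecting squeezed vacuum with squeeze parameter r into the dark port improves this to roughly
\[ \Delta\phi \sim \frac{e^{-r}}{\sqrt{N}} . \]
That exponential factor is the whole game.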
What’s amazing is that in a new paper, the LIGO collaboration has actually succeeded for the first time in using squeezed light to increase the sensitivity of one of its gravitational-wave detectors. Here’s a plot of the noise at each frequency in the detector. The red line shows the reduced noise when squeezed light is used. To get this to work, the squeezed quadrature must be in phase with the amplitude (readout) quadrature of the observatory output light, and this results in path entanglement between the photons in the two beams in the arms of the interferometer. The fluctuations in the photon counts can only be explained by stronger-than-classical correlation among the photons.
It looks like quantum entanglement might play a very important role in the eventual detection of gravitational waves. Tremendously exciting stuff.

Why medicine needs scirate.com

Defenders of the traditional publishing model for medicine say that health-related claims need to be vetted by a referee process. But there are heavy costs. In quantum information, one might know the proof of a theorem (e.g. the Quantum Reverse Shannon Theorem) for years without publishing it. But one would rarely publish using data that is itself secret. Unfortunately, this is the norm in public health. It’s ironic that the solution to the 100-year-old Poincaré conjecture was posted on arxiv.org and rapidly verified, while research on fast-moving epidemics like H5N1 (bird flu) is delayed so that scientists who control grants can establish priority.
All this is old news. But what I hadn’t realized is that the rest of science needs not only arxiv.org, but also scirate.com. Here is a recent and amazing, but disturbingly common, example of scientific fraud. A series of papers were published with seemingly impressive results, huge and expensive clinical trials were planned based on these papers, while other researchers were privately having trouble replicating the results, or even making sense of the plots. But when they raised their concerns, here’s what happened (emphasis added):

In light of all this, the NCI expressed its concern about what was going on to Duke University’s administrators. In October 2009, officials from the university arranged for an external review of the work of Dr Potti and Dr Nevins, and temporarily halted the three trials. The review committee, however, had access only to material supplied by the researchers themselves, and was not presented with either the NCI’s exact concerns or the problems discovered by the team at the Anderson centre. The committee found no problems, and the three trials began enrolling patients again in February 2010.

As with the Schön affair, there were almost comically many lies, including a fake “Rhodes scholarship in Australia” (which you haven’t heard of because it doesn’t exist) on one of the researcher’s CVs. But what if they lied only slightly more cautiously?
By contrast, with scirate.com, refutations of mistaken papers can be quickly crowdsourced. If you know non-quantum scientists, go forth and spread the open-science gospel!

Quantum conferences at ETH-Zurich


ETH-Zurich, the MIT of Switzerland (or Europe), has hosted a spate of quantum conferences lately. The largest, called QIPC, for “Quantum Information Processing and Communications,” covered both experiment and theory, and took place at ETHZ’s suburban Science Center campus, in the HCI building, which looks like an elegant homage to MIT’s legendary Building 20, incubator of radar and much else during its 55-year existence.
A few weeks earlier, QIFT11, a small workshop on Quantum Information and the Foundations of Thermodynamics, invoked quantum information to shed a surprising amount of new light on such time-worn problems as Maxwell’s Demon and the origin of irreversible phenomenology from reversible dynamics.
This week, QCRYPT 2011, which boldly calls itself the First Annual Conference on Quantum Cryptography, is taking place in ETHZ’s downtown CAB building, again featuring both theory and experiment.

Stability of Topological Order at Zero Temperature

From today’s quant-ph arXiv listing we find the following paper:

Stability of Frustration-Free Hamiltonians, by S. Michalakis & J. Pytel

This is a substantial generalization of one of my favorite results from last year’s QIP, the two papers by Bravyi, Hastings & Michalakis and Bravyi & Hastings.
In this new paper, Michalakis and Pytel show that any local gapped frustration-free Hamiltonian which is topologically ordered is stable under quasi-local perturbations. Whoa, that’s a mouthful… let’s try to break it down a bit.
Recall that a local Hamiltonian for a system of n spins is one which is a sum of polynomially many terms, each of which acts nontrivially on at most k spins for some constant k. Although this definition only enforces algebraic locality, let’s go ahead and require geometric locality as well by assuming that the spins all live on a lattice in d dimensions and all the interactions are localized to a ball of radius 1 on that lattice.
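In symbols (a schematic restatement of the above, with notation introduced here for convenience): we are considering Hamiltonians of the form
\[ H_0 = \sum_{u \in \Lambda} h_u, \qquad \|h_u\| \le J, \qquad \mathrm{supp}(h_u) \subseteq b_u(1), \]
where \Lambda is a d-dimensional lattice of n sites and b_u(1) is the ball of radius 1 around site u in the lattice metric.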
Why should we restrict to the case of geometric locality? There are at least two reasons. First, spins on a lattice is an incredibly important special case. Second, we have very few tools for analyzing quantum Hamiltonians which are k-local on a general hypergraph. Actually, few means something closer to none. (If you know any, please mention them in the comments!) On cubic lattices, we have many powerful techniques such as Lieb-Robinson bounds, which the above results make heavy use of [1].
We say a Hamiltonian is frustration-free if the ground space is composed of states which are also ground states of each term separately. Thus, these Hamiltonians are “quantum satisfiable”, as a computer scientist would say. This too is an important requirement, since it is one of the most general classes of Hamiltonians about which we have any decent understanding. There are several key features of frustration-free Hamiltonians, but perhaps chief among them is the consistency of the ground space. The ground states on a local patch of spins are always globally consistent with the ground space of the full Hamiltonian, a fact which isn’t true for frustrated models.
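In symbols, with each term normalized (as is conventional) so that its lowest eigenvalue is zero and hence h_u ≥ 0, frustration-freeness says that the ground states of H_0 are exactly the states satisfying
\[ h_u\,|\psi\rangle = 0 \quad \text{for every } u, \]
so the ground energy is zero and every local term is minimized simultaneously.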
We further insist that the Hamiltonian is gapped, which in this context means that there is some constant γ>0, independent of the system size, which lower bounds the energy (above the ground-state energy) of any eigenstate orthogonal to the ground space. The gap assumption is extremely important since it is again closely related to the notion of locality. The spectral gap sets an energy scale and hence also a length scale, the correlation length. For two disjoint regions of spins separated by a length L in the lattice, the connected correlation function for any pair of local operators decays exponentially in L.
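Stated as a formula (the standard exponential-clustering statement, quoted from memory rather than from this paper): if A and B are observables supported on regions separated by a distance L, then
\[ \big|\,\langle AB\rangle - \langle A\rangle\langle B\rangle\,\big| \;\le\; C\,\|A\|\,\|B\|\;e^{-L/\xi}, \]
with a correlation length \xi controlled by the gap \gamma.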
The last property, topological order, can be tricky to define. One of the key insights of this paper is a new definition of a sufficient condition for topological stability that the authors call local topological order. Roughly speaking, this new condition says that ground states of the local Hamiltonian are not distinguishable by any (sufficiently) local operator, except up to small effects that vanish rapidly in a neighborhood of the support of the local operator. Thus, the ground space can be used to encode quantum information which is insensitive to local operators! Since nature presumably acts locally and hence can’t corrupt the (nonlocally encoded) quantum information, systems with topological order would seem to be great candidates for quantum memories. Indeed, this was exactly the motivation when Kitaev originally defined the toric code.
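Roughly, and as a paraphrase rather than the paper's exact definition: if P is the projector onto the ground space and O_A is any observable supported on a region A of linear size \ell, then local topological order demands
\[ P\,O_A\,P \;=\; c_{O_A}\,P + E_A, \qquad \|E_A\| \le \|O_A\|\,\epsilon(\ell), \]
for some scalar c_{O_A} and an error \epsilon(\ell) that decays rapidly with \ell. In words: within the ground space, every sufficiently local observable looks like a multiple of the identity.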
Phew, that was a lot of background. So what exactly did Michalakis and Pytel prove, and why is it important? They proved that if a Hamiltonian satisfying the above criteria is subject to a sufficiently weak but arbitrary quasi-local perturbation, then two things are stable: the spectral gap and the ground state degeneracy. (Quasi-local just means that the strength of the perturbation decays sufficiently fast with respect to the size of the supporting region.) A bit more precisely, the spectral gap remains bounded from below by a constant independent of the system size, and the ground state degeneracy splits by an amount which is at most exponentially small in the size of the system.
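Very schematically (the paper has the precise thresholds and decay rates): writing the perturbation as V = \sum_u V_u with quasi-local terms of overall strength \epsilon, the claim is that for \epsilon below a constant threshold
\[ \mathrm{gap}(H_0 + V) \;\ge\; \gamma - O(\epsilon), \qquad \text{ground-space splitting} \;\le\; C\,e^{-c\,L^{\alpha}}, \]
where L is the linear size of the lattice and c, \alpha > 0 are constants independent of the system size.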
There are several reasons why these stability results are important. First of all, the new result is very general: generic frustration-free Hamiltonians are a substantial extension of frustration-free commuting Hamiltonians (where the BHM and BH papers already show similar results). It means that the results potentially apply to models of topological quantum memory based on subsystem codes, such as that proposed by Bombin, where the syndrome measurements are only two-body. Second, the splitting of the ground state degeneracy determines the dephasing (T2) time for any qubits encoded in that ground space. Hence, for a long-lived quantum memory, the smaller the splitting the better. These stability results promise that even imperfectly engineered Hamiltonians should have an acceptably small splitting of the ground state degeneracy. Finally, a constant spectral gap means that when the temperature of the system is such that kT<<γ, thermal excitations are suppressed exponentially by a Boltzmann factor. The stability results show that the cooling requirements for the quantum memory do not increase with the system size.
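That last point is just the usual Boltzmann suppression, with the crucial feature that the \gamma appearing in it does not shrink as the lattice grows:
\[ p_{\rm excited} \;\sim\; e^{-\gamma/k_B T} \;\ll\; 1 \qquad \text{whenever } k_B T \ll \gamma . \]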
Ah, but now we have opened a can of worms by mentioning temperature… The stability (or lack thereof) of topological quantum phases at finite temperature is a fascinating topic which is the focus of much ongoing research, and perhaps it will be the subject of a future post. But for now, congratulations to Michalakis and Pytel on their interesting new paper.

[1] Of course, Lieb-Robinson bounds continue to hold on arbitrary graphs, it’s just that the bounds don’t seem to be very useful.

Geocentrism Revival

Robert Sungenis is an idiot. Seriously?

A few conservative Roman Catholics are pointing to a dozen Bible verses and the church’s original teachings as proof that Earth is the center of the universe, the view that was at the heart of the church’s clash with Galileo Galilei four centuries ago.

I can confidently speak for all of the quantum pontiffs when I say that we reject the geocentric view of the universe. I never thought I would have to boldly stand up for these beliefs, yet here I am.

… Those promoting geocentrism argue that heliocentrism, or the centuries-old consensus among scientists that Earth revolves around the sun, is a conspiracy to squelch the church’s influence.

This sentence nearly made my head explode.
First of all, heliocentrism is of course not agreed upon as scientific fact. As readers of this blog surely know, General Relativity teaches us that there are infinitely many valid coordinate systems in which one can describe the universe, and we needn’t choose the one with the sun or the earth at the origin to get the physics right (though one or the other might be more convenient for a specific calculation).
Second, you’ve gotta love the form of argument which I affectionately call “argument by conspiracy theory”, in which any evidence against your position is waved away as the work of a secret organization with interests aligned against you. Oh, what’s that? You don’t have any evidence for the existence of this secret society? Well, that simply proves how cunning they are and merely strengthens the argument by conspiracy theory!

“Heliocentrism becomes dangerous if it is being propped up as the true system when, in fact, it is a false system,” said Robert Sungenis, leader of a budding movement to get scientists to reconsider. “False information leads to false ideas, and false ideas lead to illicit and immoral actions — thus the state of the world today.… Prior to Galileo, the church was in full command of the world, and governments and academia were subservient to her.”

So in case you were wondering: yes, this guy is serious. In fact, he is also happy to charge you $50 to attend his conference, or sell you one of several books on the topic, as well as some snazzy merchandise like coffee mugs and t-shirts that say “Galileo was wrong” on the front. (Hint: they don’t say “Einstein was right” on the back.)
To Mr. Sungenis and his acolytes: I implore you. Please just stop. It’s embarrassing for both of us. And if you’re worried about your bottom line, then consider going into climate change denial instead, which I hear is quite lucrative.

Fighting tuberculosis with BB84

Tuberculosis (TB) has been with humans for millennia, infects 1 in 3 people worldwide, and kills almost 2 million people per year. BB84 is everyone’s favorite information-theoretically secure key expansion system, and is secure at bit error rates up to at least 12.9%. So what’s the connection?
TB is treatable, but the treatment involves taking multiple antibiotics daily for 6-9 months (or up to 24 months for drug-resistant strains).  The drugs have painful side effects (think chemotherapy) and most TB symptoms go away after a few months, so it can be hard for people to be motivated to complete the course.  In poor countries, where TB is most common, doctors are in short supply, and have little time for counseling about side effects, or patients might not have access to doctors, and just buy as many pills as they can afford from a pharmacist.  But when people stop treatment early, TB can return in a drug-resistant form, of which the scariest is XDR-TB.
As a result, the WHO-recommended treatment is DOTS (directly observed treatment, short course), in which a health worker watches the patient take all of their pills.  This is effective, though proving this is hard, and implementation is difficult.  The community health workers monitoring patients are paid little or nothing, are often unmonitored, and spend their time in the houses of people with active TB, often without good masks.  So absenteeism and low morale can be problems.  Patients also can find it condescending, disempowering, and stigmatizing, since neighbors can notice the daily community-health-worker visits.
One ingenious alternative is called X out TB.  Patients are given a device that dispenses a strip of paper once every 24 hours.  If a patient is taking their antibiotics, then peeing on the paper will cause a chemical reaction (with a metabolite of the drugs) that reveals a code, which patients send to the local clinic by SMS.  As a result, the clinic can remotely monitor which patients are reliably taking their pills.  Patients in turn are given a reward (cell phone minutes have been popular) for taking their pills every day.
This system seems to be working well in trials, but the presence of the dispenser means that batteries are necessary, and security considerations arise. For example, one could try to open up the dispenser, save a jar of urine and keep dipping strips in it after stopping the pills, or even pour urine inside the dispenser. Imagine that the unfortunate TB patient is actually Eve, who has a dark determination to cheat the system, even at the expense of her own health.
Fortunately, BB84 has already provided an elegant, if not entirely practical, solution to this problem.  The dispenser can be replaced by a numbered series of strips, and the bottle of pills needs to be replaced by a similarly numbered blister pack (for simplicity, the two could be packaged together). On day i, the patient takes pill i and several hours later, pees on strip i. The twist is that there are two types of strips—let’s call them X and Z—and two types of pills, which we will also call X and Z.  These appear the same visually, but have different chemical properties.  Peeing on an X strip after taking an X pill will reveal the code, as will peeing on a Z strip after taking a Z pill.  But if the strip type doesn’t match the pill type, or there are metabolites from both pill types present, then the code will be irrevocably destroyed.
For a patient following instructions, the pill on day i will always match the strip on day i, and so all of the codes will be properly revealed.  But any attempt to reveal codes without matching up pills and strips properly (e.g. peeing on all the strips at once) will inevitably destroy half the codes.  The threshold for rewards could be set at something like 90-95%, which is safely out of range of any cheating strategy, but hopefully high enough to prevent resistance.
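To make the accounting concrete, here is a toy simulation of the scheme. The course length, reward threshold, and function names are invented for illustration; the only rule taken from the scheme above is that a strip reveals its code exactly when the one matching metabolite, and nothing else, is present.

    import random

    DAYS = 180           # length of a treatment course (illustrative choice)
    THRESHOLD = 0.90     # fraction of revealed codes needed for the reward (illustrative)

    def make_course(days=DAYS, seed=None):
        """Randomly assign a basis ('X' or 'Z') to each day; pill i and strip i
        are packaged with the same hidden basis."""
        rng = random.Random(seed)
        return [rng.choice("XZ") for _ in range(days)]

    def strip_reveals(metabolites, strip_basis):
        """A strip reveals its code only if exactly the matching metabolite is present."""
        return metabolites == {strip_basis}

    def honest_patient(course):
        """Takes pill i, then uses strip i: the bases always match, so every code is revealed."""
        return sum(strip_reveals({basis}, basis) for basis in course) / len(course)

    def lazy_cheater(course):
        """Takes only the day-0 pill, saves the urine, and dips every strip in it:
        only the roughly half of strips whose basis matches day 0 reveal a code."""
        metabolites = {course[0]}
        return sum(strip_reveals(metabolites, basis) for basis in course) / len(course)

    if __name__ == "__main__":
        course = make_course(seed=42)
        print(f"honest  : {honest_patient(course):.2%} of codes revealed")
        print(f"cheater : {lazy_cheater(course):.2%} of codes revealed (threshold {THRESHOLD:.0%})")

Running this gives 100% for the honest patient and roughly 50% for the cheater, so any threshold in the 90-95% range cleanly separates the two, just as argued above.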
This scheme has its flaws.  For example, a patient could get a friend to take the pills for them (although this friend would probably suffer the same side effects).  The metabolites might not clear the system quickly enough, in which case honest patients would still invalidate strips sometimes when an X strip/pill is followed by a Z strip/pill or vice versa.  While the original X-out TB approach relied on using metabolites of common TB medications, the BB84 approach would probably want to use pharmacologically inactive additives, and I don’t know if drugs exist that are FDA-approved and have the necessary properties. On the other hand, this enables the additive to have a half-life much shorter than the medicine.  And of course, patients generally want to get better, and are likely to take their pills when given even mild encouragement, monitoring and counselling.  So information-theoretic security might be more than is strictly necessary here.
Can anyone else think of other applications of BB84?  Or other ways to stop TB?

The first SciRate flame war

Maybe it’s not a war, but it is at least a skirmish.
The first shot was fired by a pseudonymous user named gray, who apparently has never scited any papers before and just arrived to bash an author of this paper for using a recommendation engine to… (cue the dramatic music) recommend his own paper!
In an effort to stem this and future carnage, I’m taking to the quantum pontiff bully pulpit. This is probably better suited for the SciRate blog, but Dave didn’t give me the keys to that one.
Since it wasn’t obvious to everyone: SciRate is not a place for trolls to incite flame wars. Use the comments section of this post if you want to do that. (Kidding.) Comments on SciRate should have reasonable scientific merit, such as (at minimum) recommending a paper that was overlooked in the references, or (better) posting questions, clarifications, additional insights, etc. As an example, look at some of the excellent substantive comments left by prolific scirater Matt Hastings, or this discussion.
Nor is SciRate the place for insipid, dull, self-promotional comments and/or gibberish.
Now to make things fun, let’s have a debate in the comments section about the relative merits of introducing comment moderation on SciRate. Who is for it, who is against it, and what are the pros and cons? And who volunteers to do the moderating?
As for “gray” or any other troll out there: if you want to atone for your sins, my quantum confessional booth is always open.

Efficient markets and P vs. NP

According to Phillip Maymin of the NYU Poly Department of Finance and Risk Engineering:

Markets are Efficient if and Only if P = NP

Abstract:
I prove that if markets are efficient, meaning current prices fully reflect all information available in past prices, then P = NP, meaning every computational problem whose solution can be verified in polynomial time can also be solved in polynomial time. I also prove the converse by showing how we can “program” the market to solve NP-complete problems. Since P probably does not equal NP, markets are probably not efficient. Specifically, markets become increasingly inefficient as the time series lengthens or becomes more frequent. An illustration by way of partitioning the excess returns to momentum strategies based on data availability confirms this prediction.

I guess this means that libertarians will be petitioning the Clay institute to collect their million dollars then?