Not Replacing, Enhancing!

Long article on quantum computers appearing on CNNMoney, written for Fortune magazine. My favorite quote from the article:

But quantum computing scientists are surprisingly bullish, for scientists. “This is the most exciting time of my life, and I’m not young,” says Eli Yablonovitch, professor of electrical engineering at UCLA. “We’re looking forward to a direct impact on everybody in the world.”

And the most interesting quote is:

Granted, changing the spin of an electron is a long way from building a circuit out of the same, and history is littered with promising technologies that didn’t pan out. Intel CEO Paul Otellini is one major quantum skeptic, increasingly reluctant to fund R&D for it. Reports of the death of silicon have been greatly exaggerated, he says.

What is interesting, I think, is that the article makes it sound like Paul Otellini thinks that quantum computers are somehow meant to be computers “beyond Moore’s law,” i.e., a technology that moves beyond today’s silicon-based transistor technology. I think this is kind of crazy. The point of quantum computers is not that they will be “beyond Moore” but that they are an entirely new form of computing, beyond the form of computing we are executing today. I’m skeptical of quantum computers as devices “beyond Moore’s law” too, mostly because I figure there is always an overhead factor for quantum error correction above what classical error correction requires. Thus any quantum computer I design, say from the single spin up, could probably be put to use as a classical computer with less overhead needed for error correction. I think this view of quantum computers comes about because an argument made for quantum computing is usually to put up Moore’s law and point out that, if it continues, it will eventually yield atom-sized transistors. This is kind of a silly argument, and it hides the idea that the reason you build a quantum computer is that it is an entirely new form of computation.

Nullius in Verba

In October I get to go to London for the 7th European QIPC Workshop (Quantum Information Processing and Communication), which is being hosted by the Royal Society of London (which is cool, because the original society was founded, IIRC, as a meeting to discuss the ideas of one Sir Francis Bacon!) The theme of this workshop is “Physicists and Computer Scientists Reunite!” Here is a blurb which Scott Aaronson produced after taking a huge verbose statement I wrote about this theme (those computer scientists have fine compression algorithms, I do say):

In the early days of quantum information science, physicists and computer scientists could make great progress by just sitting down at the same table and explaining the basic concerns of their respective fields. But those halcyon days are gone. Today the major discoveries of the mid-nineties have essentially morphed into separate disciplines. One can now work on quantum error correction without knowing anything about, say, quantum complexity or ion traps. Whereas the early days were distinguished by conferences in which computer scientists were forced to listen to physicists and vice versa, today the field is large enough that cross-disciplinary traffic has been reduced to a trickle. Yet quantum information is still just a fifteen-year-old teenager, so it would be astonishing indeed if the borderlands between physics and computer science were already picked clean. What connections remain to be discovered? What recent work in physics should computer scientists know about, and vice versa?
Quantum information scientists of the world, unite!

And indeed the workshop will have an interesting format, with each one-hour time slot covering one topic as seen from a more physics-oriented perspective and from a more computer-science-oriented perspective. It will be interesting to see if the audience can withstand this dual-sided barrage of perspectives!

All You Compute, and All You QFT, Is All Your Life Will Ever Be

I think if I had to choose an official question for quantum computing it would be “where are all the quantum algorithms?” I mean, every time I talk to someone outside of quantum computing it seems like this is the first question they ask (followed closely by “when is a large quantum computer going to be built?”).
Of course this question ignores a lot of hard work put in by a lot of people to come up with new quantum algorithms. There are efficient quantum algorithms for solving Pell’s equation, estimating certain Gauss sums, solving hidden shift problems (more here), and solving certain non-abelian hidden subgroup problems, just for example. But when you mention these algorithms, you invariably find that they are viewed as mere “variants of Shor’s algorithm.” In a sense this is true, but how true is it? Certainly many of the algorithms listed above require a lot more than “just Shor.”
In my mind it is useful to partition Shor’s algorithm for factoring into two parts (pun, pun, pun). One part of the algorithm uses a Fourier transform to efficiently solve a hidden subgroup problem. The second part is realizing that the solution of a hidden subgroup problem can be used to efficiently factor. What most people mean when they say that our quantum algorithms are “just Shor” is that they mimic the first part of the algorithm, i.e., they use a Fourier transform. And indeed, all of the algorithms I list above use, in some manner, quantum Fourier transforms. But they certainly don’t necessarily use them in the same straightforward manner in which they are used in Shor’s algorithm.
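To make the second, purely classical part concrete, here is a minimal sketch of the standard reduction from factoring to order finding. The order_of routine below is just a stand-in for the quantum subroutine (the part that Shor’s Fourier transform performs efficiently); I brute-force it so the sketch runs. It assumes n is an odd composite that is not a prime power.

```python
import math
import random

def order_of(a, n):
    # Stand-in for the quantum subroutine: find the multiplicative order r
    # of a mod n, i.e. the smallest r > 0 with a^r = 1 (mod n). This is the
    # hidden subgroup step that the quantum Fourier transform solves
    # efficiently; the brute-force loop here is exponentially slow but keeps
    # the sketch self-contained.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order_finding(n):
    # The classical half of Shor's algorithm: turn an order-finding oracle
    # into a factoring algorithm (n odd, composite, not a prime power).
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                # lucky guess: a already shares a factor
        r = order_of(a, n)
        if r % 2 == 0:
            y = pow(a, r // 2, n)   # a^(r/2) mod n, a square root of 1 mod n
            if y != n - 1:          # skip the trivial root -1 mod n
                g = math.gcd(y - 1, n)
                if 1 < g < n:
                    return g        # y - 1 shares a nontrivial factor with n

print(factor_via_order_finding(15))  # prints 3 or 5
```

Everything above except order_of is plain classical number theory; swap in a quantum order-finding box and you have Shor.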
Last week I was in San Diego at a meeting on quantum algorithms. During part of the meeting we broke up into small groups, attempted to reason about where the field of quantum algorithms was going, and then regrouped to hear what each group had said. Wim van Dam made the following observation, which I found both funny and very striking. Wim noted that the Toffoli gate and the Hadamard gate are universal for quantum computing (see here for a simple proof by Dorit Aharonov). The Toffoli gate is, of course, a “classical” gate which is universal for classical computation. The Hadamard gate is nothing more than the quantum Fourier transform over the simplest of all groups, the cyclic group with two elements, Z_2. What does this mean? This means, as Wim said, that EVERY QUANTUM ALGORITHM IS NOTHING MORE THAN FOURIER TRANSFORMS AND CLASSICAL COMPUTATION! Ha! I love it!
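If you don’t believe that last identification, it takes a few lines to check; here is a quick sketch comparing the Hadamard matrix against the standard QFT matrix F[j,k] = e^{2πijk/N}/√N with N = 2:

```python
import numpy as np

# The Hadamard gate.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Quantum Fourier transform over Z_N: F[j, k] = exp(2*pi*i*j*k/N) / sqrt(N).
N = 2
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

print(np.allclose(F, H))  # True: the Hadamard is the QFT over Z_2
```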
Of course this points out the absurdity of calling all of the post-Shor algorithms “like Shor.” Certainly they bear a resemblance, but isn’t this resemblance due to the fact that classical computation plus Fourier transforms is all that quantum algorithms will ever be able to achieve? In one manner I ask this question in jest, but in another, more serious manner, I do not think this view is all that crazy. That the universal gate set is just classical computing plus the Hadamard is, I think, a striking result. Might it be a key to understanding all our quantum algorithms and coming up with new ones? I think this is a great question, and since I won’t be teaching until winter term I will have plenty of hours to mull it over!

Wither Funding?

DARPA (the Defense Advanced Research Projects Agency) has, over the years, been an important source of funding for all sorts of important computer science research (it was [D]ARPA funding which helped start the Internet!) For example, DARPA was an early and important source of funding for quantum computing research. Recently, there have been claims that DARPA has shifted its focus from longer-term projects to shorter-term and classified programs. Here is a very interesting article about this controversy and, in particular, about the rebuttal of these charges by DARPA director Tony Tether in front of a House science committee meeting.

Quantum Doctor

Here is an interesting article by Dominik Janzing and Thomas Beth, both from Karlsruhe, Germany. The gist of the article is that if the world were more quantum, doctors would have to learn Bell inequalities!

On the potential influence of quantum noise on measuring effectiveness in clinical trials
ABSTRACT: To test the effectiveness of a drug, the medical researcher can instruct two randomly selected groups of patients to take the drug or not to take it. Each group should comply with the instructions exactly or else the causal effect cannot be identified satisfactorily. This holds true even when those who do not comply are identified afterwards, since latent factors such as patient’s personality can influence both his decision and his physical response. However, one can still give bounds on the effectiveness of the drug depending on the rate of compliance. They are described by 16 inequalities. Remarkably, the proofs for these bounds given in the literature rely on models that represent all relevant latent factors influencing patient’s behavior by hidden classical variables. In a strong analogy to the violation of Bell’s inequality, half of the inequalities fail if patient behavior is influenced by latent quantum processes (e.g. in his nervous system). Quantum effects could fake an increase in the recovery rate by about 13%, although the drug would hurt as many patients as it would help if everyone took it. We show that the other eight inequalities remain true even in the quantum case. We do not present a realistic model showing the above effect. We only point out that the physics of decision-making could be relevant for the causal interpretation of every-day life statistical data. Our derivation of the remaining eight inequalities suggests how to avoid problematic hidden-variable models in classical causal reasoning by representing all latent factors by quantum systems.
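The classical inequalities the abstract refers to come from modeling the latent factors as one of sixteen hidden compliance/response types and optimizing over them. Here is a minimal sketch of that style of hidden-variable bound, in the spirit of the Balke-Pearl linear program; this is my own reconstruction for illustration, not code from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Hidden classical variable types: a compliance type c fixes how assignment z
# determines treatment x, and a response type r fixes how treatment x
# determines outcome y. That gives 4 x 4 = 16 joint latent types.
def x_of(c, z):  # c = 0: never-taker, 1: complier, 2: defier, 3: always-taker
    return (0, z, 1 - z, 1)[c]

def y_of(r, x):  # r = 0: never-recover, 1: helped, 2: hurt, 3: always-recover
    return (0, x, 1 - x, 1)[r]

def ace_bounds(p):
    """Bounds on the average causal effect, given p[z][x][y] = P(x, y | z).

    Assumes p comes from an actual randomized experiment, so the LP is
    feasible for some distribution over the 16 latent types."""
    A_eq, b_eq = [], []
    for z in (0, 1):
        for x in (0, 1):
            for y in (0, 1):
                # Each observed cell equals the total weight of the latent
                # types (c, r) consistent with it.
                A_eq.append([1.0 if x_of(c, z) == x and y_of(r, x) == y
                             else 0.0
                             for c in range(4) for r in range(4)])
                b_eq.append(p[z][x][y])
    # Average causal effect = P(helped) - P(hurt).
    ace = np.array([{1: 1.0, 2: -1.0}.get(r, 0.0)
                    for c in range(4) for r in range(4)])
    lo = linprog(ace, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 16)
    hi = linprog(-ace, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 16)
    return lo.fun, -hi.fun

# Example: perfect compliance, P(Y=1|X=1) = 0.7, P(Y=1|X=0) = 0.5.
p = {0: {0: {0: 0.5, 1: 0.5}, 1: {0: 0.0, 1: 0.0}},
     1: {0: {0: 0.0, 1: 0.0}, 1: {0: 0.3, 1: 0.7}}}
print(ace_bounds(p))  # approximately (0.2, 0.2): the effect is identified
```

With perfect compliance the two bounds coincide; as noncompliance grows the interval widens, and, per the abstract, a quantum latent model can escape half of the resulting inequalities.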

I’m guessing that this won’t help in the battle physicists have when teaching premed students (“Ooooh, but please just give me one more point on this quiz. Pleeeease!”)

Stay At Home Poker Dad

Tired of your job? Good at poker? Do what my friend Ben Foster did: quit your job and play poker online for a living. From the Wired article linked above:

Last summer, Ben Foster quit his $110,000-a-year job as a senior product manager at eBay to play online poker full-time. In the months leading up to his decision, the 28-year-old with a degree in statistics had been tracking his winnings on a detailed spreadsheet. After checking and rechecking the numbers, he came to the conclusion that he could earn more money playing poker than working at eBay. Plus, his wife was pregnant with their second child, and he was needed around the house. Foster decided to become a stay-at-home poker dad.

[As a side note, Ben has indeed taken much money from me at poker. But that was before I learned to play poker. That was when I learned that I just shouldn’t play with him. “Strange game, Dr. Falken, the only way to win is not to play.”]

Beyond the Quantum Event Horizon

The Perimeter Institute, along with the University of Waterloo, together form probably the largest concentration of quantum computing theorists in the world. And they just keep growing! So, in the past few years, I have been calling those two institutions “the black hole of quantum computing.” When I told this to Andrew Landahl at the meeting I was just attending in San Diego, he came up with a great correction to my description: really I should have been calling it “the blackberry hole of quantum computing!”

Beyond DNA?

Just got an email with the following article. A big deal?

Scientists Say They’ve Found a Code Beyond Genetics in DNA
By NICHOLAS WADE
Researchers believe they have found a second code in DNA in addition to the genetic code.
The genetic code specifies all the proteins that a cell makes. The second code, superimposed on the first, sets the placement of the nucleosomes, miniature protein spools around which the DNA is looped. The spools both protect and control access to the DNA itself.
The discovery, if confirmed, could open new insights into the higher order control of the genes, like the critical but still mysterious process by which each type of human cell is allowed to activate the genes it needs but cannot access the genes used by other types of cell.
The new code is described in the current issue of Nature by Eran Segal of the Weizmann Institute in Israel and Jonathan Widom of Northwestern University in Illinois and their colleagues.
There are about 30 million nucleosomes in each human cell. So many are needed because the DNA strand wraps around each one only 1.65 times, in a twist containing 147 of its units, and the DNA molecule in a single chromosome can be up to 225 million units in length.
Biologists have suspected for years that some positions on the DNA, notably those where it bends most easily, might be more favorable for nucleosomes than others, but no overall pattern was apparent. Drs. Segal and Widom analyzed the sequence at some 200 sites in the yeast genome where nucleosomes are known to bind, and discovered that there is indeed a hidden pattern.
Knowing the pattern, they were able to predict the placement of about 50 percent of the nucleosomes in other organisms.
The pattern is a combination of sequences that makes it easier for the DNA to bend itself and wrap tightly around a nucleosome. But the pattern requires only some of the sequences to be present in each nucleosome binding site, so it is not obvious. The looseness of its requirements is presumably the reason it does not conflict with the genetic code, which also has a little bit of redundancy or wiggle room built into it.
Having the sequence of units in DNA determine the placement of nucleosomes would explain a puzzling feature of transcription factors, the proteins that activate genes. The transcription factors recognize short sequences of DNA, about six to eight units in length, which lie just in front of the gene to be transcribed.
But these short sequences occur so often in the DNA that the transcription factors, it seemed, must often bind to the wrong ones. Dr. Segal, a computational biologist, believes that the wrong sites are in fact inaccessible because they lie in the part of the DNA wrapped around a nucleosome. The transcription factors can only see sites in the naked DNA that lies between two nucleosomes.
The nucleosomes frequently move around, letting the DNA float free when a gene has to be transcribed. Given this constant flux, Dr. Segal said he was surprised they could predict as many as half of the preferred nucleosome positions. But having broken the code, “We think that for the first time we have a real quantitative handle” on exploring how the nucleosomes and other proteins interact to control the DNA, he said.
The other 50 percent of the positions may be determined by competition between the nucleosomes and other proteins, Dr. Segal suggested.
Several experts said the new result was plausible because it generalized the longstanding idea that DNA is more bendable at certain sequences, which should therefore favor nucleosome positioning.
“I think it’s really interesting,” said Bradley Bernstein, a biologist at Massachusetts General Hospital.
Jerry Workman of the Stowers Institute in Kansas City said the detection of the nucleosome code was “a profound insight if true,” because it would explain many aspects of how the DNA is controlled.
The nucleosome is made up of proteins known as histones, which are among the most highly conserved in evolution, meaning that they change very little from one species to another. A histone of peas and cows differs in just 2 of its 102 amino acid units. The conservation is usually attributed to the precise fit required between the histones and the DNA wound around them. But another reason, Dr. Segal suggested, could be that any change would interfere with the nucleosomes’ ability to find their assigned positions on the DNA.
In the genetic code, sets of three DNA units specify various kinds of amino acid, the units of proteins. A curious feature of the code is that it is redundant, meaning that a given amino acid can be defined by any of several different triplets. Biologists have long speculated that the redundancy may have been designed so as to coexist with some other kind of code, and this, Dr. Segal said, could be the nucleosome code.
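As an aside, the flavor of such a loose, redundant positioning code is easy to capture in a toy calculation. The sketch below is purely illustrative and is not the authors’ model (which, as I understand it, fits position-dependent dinucleotide statistics from data); it assumes only the reported signature that nucleosome-favoring DNA carries AA/TT/TA dinucleotides recurring with roughly 10-base-pair periodicity across the 147-unit wrap:

```python
NUC_LEN = 147              # base pairs wrapped around one nucleosome spool
PERIOD = 10                # approximate helical repeat of DNA, in base pairs
FLEX = ("AA", "TT", "TA")  # dinucleotides associated with easy bending

def bendability_score(window):
    """Toy score: count flexible dinucleotides in phase with the ~10 bp repeat."""
    return sum(1 for i in range(len(window) - 1)
               if window[i:i + 2] in FLEX and i % PERIOD < 2)

def best_nucleosome_position(seq):
    """Slide a 147 bp window along seq and return the best-scoring offset."""
    return max(range(len(seq) - NUC_LEN + 1),
               key=lambda i: bendability_score(seq[i:i + NUC_LEN]))
```

Because only some of the flexible dinucleotides need to be in phase for a window to score well, many different sequences can encode the same preferred position, which is exactly the wiggle room that would let such a code coexist with the genetic code.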