Quantum CMOS?

If you go back and look at the early days of the semiconductor revolution, you see that there were many ideas for how to build gate sets out of n-type and p-type transistors. One set of ideas was known as NMOS technology and the other was known as CMOS technology. In almost every modern computer chip, CMOS technology currently reigns supreme. One of the main reasons for this has to do with the power dissipation properties of these technologies: CMOS circuits dissipate a lot less energy than NMOS circuits. Roughly speaking, CMOS circuits dissipate energy only when the circuit voltages are being switched, whereas NMOS circuits are often running in a situation where they dissipate energy even in between the switching of the voltages. As you can guess, this makes quite a bit of difference in the power consumption details of these devices.
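A rough way to quantify this, using the standard textbook comparison rather than anything specific to a particular process: a CMOS gate dissipates roughly P ≈ αCV²f (dynamic switching power only), while an NMOS gate with a resistive or depletion-load pull-up dissipates roughly P ≈ αCV²f + I_static·V, because static current flows through the pull-up whenever the output is held low. Here α is the switching activity, C the switched capacitance, V the supply voltage, f the clock frequency, and I_static the pull-up current.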
So what is the equivalent technological detail for a quantum computer? Here the role played by the transistors used to build a circuit is taken over by the fault-tolerant quantum gate. So if we look at fault-tolerant gate constructions and consider their energetics, are there technologies that minimize energy dissipation?

Wowha II

Some days, it just seems that anything I search for gives me interesting results. The previous search was for information on some properties of the golden ratio. Now I just did a search for “harmonic series fractal” which remarkably sent me to the same site and this fun page, which opens with

An open letter to significant scientists begging them to embrace and embed the arrival of self-aware consciousness in the Earth’s magnetic body, by redesigning power systems which now bleed our collective coherence bubble.

Wowha

Wave Mechanics Allow Embedding to Draw You into Gravity Centers: HOW TO INHABIT ANYTHING

Digital, Analog, Deterministic, Probabilistic

After reading Robert Laughlin’s book and thinking about his remarks on quantum computing, I’ve decided I can pinpoint exactly the error which he is making in his disregard for the possibility of building a quantum computer. The mistake, I believe, is basically a category error.
So let’s talk about computers. Computers are inherently physical devices, so all discussion of what is and what is not a computer boils down, at some level, to physics (I’m not taking a reductionist viewpoint here: I include the study of emergent properties of physical systems in what I call physics.) Now there are two distinctions I want to make when I talk about what types of computers we encounter out in the wilds of the laboratory: the first is a description of the state space and the other is a description of the dynamics.
The state space of a classical computer comes in two forms: digital and analog. For digital computers the state space is a discrete, finite space. Such state spaces don’t exist in nature (as far as we know) but are excellent approximations for a wide range of physical devices. Thus the hard drive on your computer can store the value of a bit in the orientation of a ton of magnetic spins, but if we try to store a bit in a single spin we have problems. Every discrete finite space I know of is discrete only after some coarse graining. But surely, you say, quantum mechanics gives discrete state spaces, right? The energy levels of atomic orbitals are discrete. But this is only true if we have an isolated atomic system which does not interact with the rest of the universe. Introduce coupling to an electromagnetic field (acting either as the environment or as a measuring device) and these discrete energy levels “smear out,” meaning in effect that we can only think about them as discrete after we have coarse grained over this smearing. Whether there are truly discrete state spaces in physics (say in the wavefunction of the universe) is, I think, something for which we have little concrete evidence.
Back to computers. On the opposite end of the spectrum from a digital computer, it has been useful to introduce the notion of an analog computer. Here the state space of the computer is taken to be some continuous parameter. Thus, for example, the position of the planet Mercury could be considered a state space variable for our “solar system” computer. If one takes seriously the idea of an analog computer one runs into all sorts of problems. This is because it is easy to couple analog state spaces so that the variables of these systems are manipulated in ways that perform elementary arithmetic on them. And if you take seriously the continuous nature of the state space, this allows you to do amazing things like solve NP-complete (and harder!) problems efficiently on these computers. Of course there is a fatal reason why no one has built such a computer: if we introduce any noise into these variables, or any imprecision into our measurements, this power disappears. (There is still interest in analog computers not, then, for the computational power arising from the continuous nature of their state spaces, but because the transformations one can perform on them are sometimes difficult to perform on a digital computer. There doesn’t seem to be any superpolynomial speedup in any such devices, but one might get more efficient algorithms because the dynamics happens to do something which is not natural for your digital computer.)
OK, so state spaces are digital or analog. And analog state spaces do not lead to robust computing devices. Now we move on to the second category describing classical computers, the dynamics. The first type of dynamics I want to draw out is deterministic evolution. In this case the time evolution of the state space is perfectly fixed by the initial conditions of the state space. So we set up our planets in a particular configuration and Newton’s laws (or general relativity) take over and push forward the analog computation. Similarly, a handful of transistors work together to enact a NAND gate in which the digital data is transformed in a fully deterministic manner. As with the idea of digital data, deterministic evolution is often a consequence of some emergent property of the system (i.e. every once in a while your computer does have a hardware error…but you’ll have to wait around quite a long time for this to happen with modern computers!)
In the distinction between digital and analog, we saw that physics essentially gives us analog state spaces, and that for certain physical systems, when we apply the appropriate coarse graining, we end up with a digital device. What, then, is the dynamics from which deterministic evolution arises for a computer? For classical computers, this dynamics is basically probabilistic evolution. Instead of being able to completely predict the outcome of the evolution of the state space, now we can only predict the probability of the evolution. Sometimes this evolution is so unpredictable (so noisy) that there is no way for such a device to function as anything resembling a deterministic computer. But sometimes this evolution is predictable enough to allow us to construct a robust, effectively deterministic computer from the evolution (and in the case of our real-world computers this is effectively done for us by the physics of the semiconductor transistors.) It even seems that using the inherent probabilistic nature of the evolution can increase the computational power of the device (although this is certainly one of the outstanding problems in computer science: whether randomness really gives one any computational advantage (the BPP=P question).) So sometimes we want the probabilistic nature of our computer to yield effectively deterministic evolution, and at other times we want it to evolve probabilistically, but in a manner where we control the probabilities with a high degree of precision.
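As a toy illustration of how noisy, probabilistic dynamics can be coarse grained into an effectively deterministic computer, here is a minimal sketch (my own example, with an arbitrary noise rate and repetition count): store a logical bit redundantly in many noisy physical bits and read it out by majority vote.

```python
import random

def store_and_read(logical_bit, n_copies=101, flip_prob=0.1):
    """Store a logical bit in n_copies noisy physical bits, then read by majority vote."""
    physical = [logical_bit ^ (random.random() < flip_prob) for _ in range(n_copies)]
    return int(sum(physical) > n_copies / 2)

# Each physical bit is wrong 10% of the time, but the coarse-grained
# (majority-voted) logical bit is essentially never wrong.
trials = 10_000
errors = sum(store_and_read(0) != 0 for _ in range(trials))
print(f"logical error rate over {trials} trials: {errors / trials}")
```

This is just redundancy plus coarse graining, but it is the same basic move that lets probabilistic physics support effectively deterministic computation.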
Notice further that the type of dynamics for a classical computer affects how we describe the state space of the computer. If the evolution is deterministic, we can just keep track of the variables, but once we introduce probabilistic evolution, we need to talk about probability distributions (over a discrete state space in the digital case and over a continuous state space in the analog case.)
So what is the category error which makes Laughlin sceptical of quantum computing? It appears that he is failing to distinguish between dynamics that is not deterministic and analog state spaces. The argument is basically that the amplitudes which appear in quantum theory are an analog state space and thus the computational power of quantum computers is analogous to that of analog classical computers. But this is to confuse the dynamics of a quantum computer (unitary evolution of probability amplitudes) with the state space itself. Among the first discoveries about quantum computers (which I basically credit to Bernstein and Vazirani) was that the amplitudes of quantum states behave, for most purposes, like probabilities and not like an analog state space. Indeed the state space of a quantum system can be either discrete (think energy levels) or continuous (think position of a particle). But since the dynamics is that funny form of evolution called unitary evolution, the description of either of these systems is in terms of probability amplitudes. Confusing the probability amplitudes with the state space is directly analogous to confusing the probability distribution of a classical system with the state space of the system.
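To make this distinction concrete, here is a minimal sketch (my own toy example using a single qubit): the state space is just the two discrete outcomes 0 and 1, while the amplitudes play the role a probability distribution plays classically, as a description used to predict measurement statistics.

```python
import numpy as np

# Discrete state space: the two outcomes, 0 and 1, of a single qubit.
# The *description* of the system is a vector of two complex amplitudes.
state = np.array([1.0, 0.0], dtype=complex)            # start in |0>

# Unitary dynamics (here a Hadamard gate) acts on the amplitudes,
# much as stochastic dynamics would act on a probability distribution.
hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = hadamard @ state

# Measurement statistics come from |amplitude|^2; the outcomes themselves
# are still drawn from the same discrete two-element state space.
probs = np.abs(state) ** 2
samples = np.random.choice([0, 1], size=10, p=probs)
print(probs, samples)
```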
Now just as a probabilistically evolving digital classical computer can be made to act deterministically, we can talk about getting a digital quantum computer (digital meaning the state space is finite and discrete) to act deterministically. Or better yet, just as we sometimes desired probabilistic evolution with a given accuracy, we can get a digital quantum computer to behave as a unitarily evolving system with a given accuracy. This, of course, is what the theory of fault-tolerant quantum computation is designed to do. Of course the world didn’t have to give us the ability to perform fault-tolerant quantum computation, but as far as we can tell it has.
State spaces are different from the dynamics, and hence from the way we describe the system. In fact, this is one of the reasons I cringe when I hear the word “analog” in discussions not just of quantum computers, but also in the biological sciences. The probabilistic nature of biological information processing has got nothing to do with the analog or digital nature of the underlying information (so while concentrations are analog, they should for all intents and purposes be thought of as a finite-size frequentist version of probabilities.)
OK, I’ve clearly written too much on this subject. I’ll leave you be now. But just to finish things off, here is an interesting argument to give to those who believe that quantum computers are analog devices. As I mentioned above, it is known how to use analog computers to solve NP-complete problems efficiently. But it is widely suspected that quantum computers will not be able to solve NP-complete problems efficiently (which in turn is perhaps a reason to think long and hard about whether this is really true: I waver on about a three-year period as to whether I should spend more time attempting to solve NP-complete problems efficiently on a quantum computer.) The question to ask a sceptic, then, is: if quantum computers are analog computers, why can’t they solve NP-complete (or harder) problems? Surely this indicates that there is no direct analogy between analog and quantum computers, or one of the thousands of people working in the field would have discovered such a connection! Well, the argument might not work, but I’d be curious to hear the variety of sceptics’ retorts.

Freeriding on the Back of a Giant While Slapping the Giant Upside the Face

Some days I just simply can’t take it anymore and would like to see nothing more than the banning of all my anti-science brethren from using all of the benefits which science has brought to the world. Don’t believe in science? FINE, don’t use anything science helps discover. I’ll be happy to take the other side of the coin and agree to not benefit from any drug invented by some evolution-doubting creationist or other such creature.
In the local UW rag, “The Daily”:

Is intelligent design that scary?
As Jared Silvia states in “So what if we are monkeys?” (May 5) we indeed are “truly special.”
We are special enough to understand, as legislators in Kansas have grasped, that evolution is not the end-all it proposes to be. These legislators have realized since science will never be able to prove or disprove the evolutionary process, there must be another pertinent argument out there.
Contrary to what Silvia says, the theory of evolution is easy to wrap one’s mind around. It’s easy to look at dinosaurs, fossils and apes and draw conclusions that we are the apex of a long chain of evolution. What is hard is proving this fact. Where are the laws of science (the second law of thermodynamics), the DNA sequences, or the evidence of transitionary species (like the humans with no eyes) that prove evolution is the truth?
The Kansas legislators must have come to the same conclusions a lot of astronomers have come to recently when looking at the galaxies of the universe. These things may have been created by intelligent design.
Science can neither prove nor disprove evolution, so let’s stop the scientific and educational community from favoring only the theory of evolution. Is it such a bad thing to juxtapose one “theory” with another to facilitate a balanced discussion, as we will hopefully see during the future in Kansas?
John Messner, senior, history

When I see drivel like this I always think of my favorite theory which explains the universe. It involves lots of pink elephants, psycho grandmothers, gay sex, and arguments that Jesus was an atheist. And of course, it’s a possible explanation for all the experimental data (you really don’t want me to give the full theory, do you? I’ll bet anyone I can turn any set of four items into an alternative explanation of the theory of evolution.) So please, Mr. Messner, let me teach my theory to all those young little minds and I’ll be glad to teach your intelligent design theory.
And then, to make my day even brighter, I find in Nature:

Academics stress licence threat to US science
Geoff Brumfiel, Washington
Alarms ring over rules for foreign nationals and ‘sensitive’ equipment.
Proposed changes to an obscure set of export rules could derail US research, say academic and industrial groups, who are now frantically trying to raise the alarm among scientists.
The modified rules would require academic researchers from countries including China and India to obtain a government licence before operating a wide range of lab equipment in the United States. In a 22 April letter to university department chairs, Judy Franz, executive officer of the American Physical Society, warned that the changes constitute a “potential threat to research”. And this week, the National Academies are convening a special workshop to inform scientists of the proposed changes.
At issue is a set of rules governing the export of sensitive technologies. Known as the Export Administration Regulations, the rules are meant to limit the transfer of equipment that could advance the military might of ‘countries of concern’ — a list that includes China, India, Pakistan and Russia. The regulations also require researchers from these countries working with some items of equipment to obtain a licence from the US Department of Commerce.
Traditionally, universities have thought themselves exempt from the regulations. But a March 2004 report from the Department of Commerce’s Office of Inspector General, an independent watchdog, argued that the regulations do apply to academic labs. The report also proposed expanding the criteria under which a licence would be required for using controlled equipment, and applying the rules by country of birth rather than country of citizenship (see Nature 431, 615; 2004).
The 45-page equipment list includes common lab apparatus such as lasers and sealed glove boxes for handling hazardous material. Getting a licence for each potential user would overwhelm lab supervisors, warns Dan Mote, president of the University of Maryland, who is scheduled to talk at a National Academies workshop. “This really is potentially devastating,” he says. “It’s quite conceivable that this would just bring work to a halt.”
Industry is also concerned, according to Cynthia Johnson, director of government relations for Texas Instruments, a major US semiconductor manufacturer. Although industrial labs already have to comply with the rules, the proposal to base the regulations on a researcher’s country of birth rather than citizenship could alienate fresh talent, she says.
Department of Commerce officials stress that they are still far from making a final decision about how to modify the rules. “What we are doing is seeking input,” says Peter Lichtenbaum, assistant secretary for export administration.
That is why it is important for researchers to weigh in with their objections, says Arthur Bienenstock, a physicist and dean of research and graduate policy at Stanford University in California. “What the Department of Commerce needs is an honest assessment of what it would mean if the inspector-general’s rules were implemented,” he says. The comment period closes on 27 May.
Nature 435, 4 (5 May 2005).

Yeah! Let’s demolish the research universities! Yeah!
And of course, today, in Kansas, even Toto is spinning in his grave as the Kansas State Board of Education seems to be unaware that there are vast legions of scientists out there who are extending their lives by developing drugs whose validity across species is explicitly sought out using the theory of evolution. Instead they choose to call hearings which are boycotted by scientists and say such amazing things as

“It’s intellectually stimulating,” said board Chairman Steve Abrams, of Arkansas City, one of the three presiding members. “It’s good information.”

Oh, and one more thing. If I see the theory of evolution called the theory of the origin of life one more time, I’m simply, well, I’m simply going to explode.

The History of My Brain

The brain is a remarkable computational device (not material? bah. get your faith out of my office.) A rough estimate of the computational power needed to simulate the brain’s neural activity is around a billion MIPS (millions of instructions per second). When I was growing up, I was fascinated by the possibilities of artificial intelligence. In fact, this forms the basis of my three ways to do theoretical physics:

  1. Do it yourself. This requires lots of work and the ego to believe you can actually figure things out. The benefit is that you can perhaps contribute now. Unfortunately, I don’t believe that this is a long-term solution to doing theoretical physics (see 2 and 3).
  2. Build a computer to do it for you. Or build a computer to speed up certain functions in your brain. How long before we can build computers with vastly greater ability than our mushy brains? Are our brains really operating at some optimal limit for computation? For number-crunching tasks, most definitely not. This solution seems to me the much more realistic long-term way to contribute to theoretical physics. Of course, you may not get the credit for any of this. The computer that did all the work will surely get the lion’s share of the credit. Can a computer win the Nobel prize?
  3. Join the Search for Extraterrestrial Intelligence (SETI). The theory here is that if we discover intelligent aliens they will more than likely be more advanced than we are. Thus they will have used 1 and 2 to learn vastly more about theoretical physics than we currently know. So if you join SETI and SETI succeeds, you will just be able to ask the aliens about advanced theoretical physics. Of course, this assumes that the aliens are nice and not bug-like (because as every fan of sci-fi knows, bug-like aliens are all about killing fleshy humans.)

But back to the topic at hand: simulating the human brain. One problem with an analysis that just looks at the raw computational speed of the human brain is that it disregards the computational complexity inherent in the structure of the brain itself. In particular, the brain is the product of 500 million plus years of evolution (I want to start counting from at least some ancestor which had some sort of nervous system). So the question I’ve always found fascinating is: how large is the overhead that all those years of evolution contribute to producing the structure of the brain? Do we need to simulate all 500 million plus years of evolution, requiring computational power far beyond even our best projections of the computational power of the brain? This is particularly troublesome when you think about the power of evolution: our ancestors have been roaming around the landscape of possible brains for a long, long time. A counter to this is the idea that we don’t need to simulate all of the evolution, but only the developmental biology which leads to a full-grown human. This latter point of view seems less computationally challenging, but still daunting. And I’ll bet that this is the computational hangup for simulating human intelligence: not the raw power of the brain, but the power to reproduce the development of the brain. I wonder if anyone has done any rough calculations on the computational power needed for this latter task.
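For the raw-power side of the estimate, at least, a back-of-envelope is easy. Here is a minimal sketch; the neuron count, synapses per neuron, and update rate are order-of-magnitude figures I am assuming for illustration, and honest estimates vary by a couple of orders of magnitude in either direction:

```python
# Back-of-envelope for the raw computational throughput of the brain.
# All three inputs are order-of-magnitude assumptions, not measurements.
neurons = 1e11               # ~10^11 neurons in a human brain
synapses_per_neuron = 1e3    # ~10^3 synapses per neuron (some estimates say ~10^4)
updates_per_second = 10      # a rough typical firing/update rate, in Hz

ops_per_second = neurons * synapses_per_neuron * updates_per_second
mips = ops_per_second / 1e6  # convert to millions of instructions per second

print(f"{ops_per_second:.0e} synaptic ops/s  ~  {mips:.0e} MIPS")
# ~1e15 ops/s, i.e. ~1e9 MIPS: the "billion MIPS" ballpark quoted above.
```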
As I said, when I was growing up, I was fascinated by the possibility of advanced computer intelligences. I can still remember the frustration of not being able to produce intelligence on my TRS-80 Color Computer II. So I left those dreams behind me, but I promised myself that if I ever thought computers were powerful enough to really start producing intelligence I would return to the field. I guess you shouldn’t be surprised, then, if in thirty years you find me hacking away in some dark basement beside a machine named “Newton.”

The Return of Speed

In Santa Fe, there was nothing quite so painful as going to get a croissant and coffee in the morning only to watch as fifteen minutes was forever lost from my life. Even after I learned that giving exact change would shorten my wait by at least two minutes, the sheer slowness was pretty painful. Which is why it is so wonderful to be back in a university town. Especially a university town in a city. Because the beautiful thing about both of these two settings is the absolute speed with which getting my morning breakfast is achieved. Wam! Bam! Here’s your food! I love it! I’ve even noticed that I’m out of shape in getting my money ready so fast. I’d better practice getting my wallet out faster. Ahhhh….life among the young’uns makes me happy.

Unconventional Convention

Via the Preposterous Universe (must resist bad words about the University of Chicago for denying Sean Carroll tenure) comes the announcement of The Time Traveler Convention to be held at MIT. If anyone sees me there, could they please send me an email and tell me about it? Because I’m in Seattle, and if I do show up at MIT, then this clearly means that I invented a time machine. And if this is true, I’ve got a lot of thinking to do about flux capacitors.

Did We Overgeneralize to the Hidden Subgroup Problem?

The most famous problem in quantum computing is the hidden subgroup problem. In the hidden subgroup problem you are given access to an oracle which computes a function f. This function maps a group G to some set, and comes with a promise: f is constant on left cosets of some subgroup H of G and takes distinct values on distinct cosets. The goal of the hidden subgroup problem is, by querying the function, to determine a generating set for the subgroup H. We say that an algorithm for the hidden subgroup problem is efficient if it uses resources proportional to a polynomial in the logarithm of the size of the group.
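As a concrete toy instance of the promise (my own illustrative example: the cyclic group Z_12 with a hidden subgroup generated by 4), here is a sketch of what such a hiding function looks like:

```python
from itertools import product

N, d = 12, 4                                  # G = Z_12, hidden subgroup H = <4> = {0, 4, 8}
H = {(d * k) % N for k in range(N // d)}

def f(x):
    """Hiding function: constant on each coset x + H, distinct across different cosets."""
    return x % d

# Check that the hidden subgroup promise actually holds for this f.
for x, h in product(range(N), H):
    assert f(x) == f((x + h) % N)                                   # constant on cosets
cosets = {frozenset((x + h) % N for h in H) for x in range(N)}
assert len({frozenset(map(f, c)) for c in cosets}) == len(cosets)   # distinct across cosets
print("promise holds; an HSP algorithm must recover H from queries to f alone")
```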
The reason the hidden subgroup problem is famous is threefold. The first reason is that when G is Abelian, efficiently solving the hidden subgroup problem leads to an efficient algorithm for factoring integers and for the discrete logarithm problem, i.e. to Shor’s algorithm(s). The second reason is that efficiently solving the hidden subgroup problem when G is the symmetric group would lead to an efficient algorithm for the graph isomorphism problem. The third reason is that efficiently solving the hidden subgroup problem when G is the dihedral group would lead to an efficient algorithm for certain shortest-vector problems in lattices. The first of these reasons is a concrete one: a quantum algorithm worked! But for the second and third, no efficient quantum algorithm is known.
Now what I find interesting is that for the second of these reasons, graph isomorphism, one can get by with solving a much more restricted version of the hidden subgroup problem than the full hidden subgroup problem over the symmetric group. The way to do this is to modify the problem to one I call the hidden subgroup conjugacy problem. Call two subgroups H and K conjugate to each other if there exists an element g of the group G such that gHg^{-1}=K. In the hidden subgroup conjugacy problem, instead of identifying the subgroup (by returning a set of generators, for example), we require that you only identify which conjugacy class the subgroup belongs to, i.e. the setup is the same: you are given an f which hides a subgroup H, but now instead of identifying the subgroup you only want to know which conjugacy class the subgroup belongs to. It is easy to see that for the Abelian case this reduces to the hidden subgroup problem: gHg^{-1}=H for all g in this case. For graph isomorphism, the standard reduction to the hidden subgroup problem comes down to distinguishing between the subgroup being the trivial subgroup and the subgroup being an order-two subgroup. So efficiently solving the hidden subgroup conjugacy problem would lead to an efficient algorithm for graph isomorphism. Interestingly, the same does not seem to be true for the reduction relating the hidden subgroup problem to certain shortest-vector problems in lattices, although I haven’t thought hard about this.
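To make the conjugacy notion concrete, here is a small sketch (my own example, with permutations of {0,1,2} represented as tuples) checking that the three order-two subgroups of S_3 generated by transpositions all sit in a single conjugacy class, so the hidden subgroup conjugacy problem would not be asked to tell them apart:

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations written as tuples: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def conjugate_subgroup(g, H):
    """The subgroup g H g^{-1}."""
    return frozenset(compose(compose(g, h), inverse(g)) for h in H)

G = list(permutations(range(3)))          # the symmetric group S_3
e = (0, 1, 2)                             # identity permutation
order_two_subgroups = [
    frozenset({e, (1, 0, 2)}),            # generated by the transposition (0 1)
    frozenset({e, (0, 2, 1)}),            # generated by the transposition (1 2)
    frozenset({e, (2, 1, 0)}),            # generated by the transposition (0 2)
]

# Every pair of these subgroups is related by conjugation by some g in S_3,
# so they are indistinguishable at the level of conjugacy classes.
for H in order_two_subgroups:
    for K in order_two_subgroups:
        assert any(conjugate_subgroup(g, H) == K for g in G)
print("all three order-two subgroups of S_3 lie in one conjugacy class")
```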
So the question I ask is, did we overgeneralize the hidden subgroup problem? Did we generalize ourselves into a problem which is just too hard, while efficiently solving a smaller variant would lead to interesting new quantum algorithms? I leave history to judge.