Over at Information Processing, Steve Hsu has a post about a recent paper (with coauthors R. Buniy and A. Zee), hep-th/0606062: "Discreteness and the origin of probability in quantum mechanics", which is probably of interest to readers who follow quant-ph exclusively (it was cross-posted to quant-ph today.)
The basic idea of the paper is as follows. In the many-worlds interpretation of quantum theory a major challenge is to derive Born's probability rule (which, of course, Born got wrong the first time!) One way that this can be achieved (well, sort of!) is as follows. Consider a world in which we prepare a particular quantum state and measure it in a particular basis. Now repeat this process for the same state and the same measurement. Do this an infinite number of times (more on this later…you know how I hate infinity.) Born's probability rule predicts that the probability of a particular sequence of measurement outcomes is given by [tex]$|\langle s_1|\psi\rangle \langle s_2|\psi\rangle \cdots|^2$[/tex]. In the limit of an infinite number of measurements, the sequences whose outcome fractions are the typical ones dominate this expression (i.e. these are the terms which recover Born's rule.) In other words, the sequences with fractions which don't satisfy Born's rule have vanishing norm. So what do you do? Suppose you simply exclude these "maverick" worlds from the theory. In other words you exclude from the accessible states those with vanishing norm. (There are a bunch of things which bother me about this derivation, but let's just go with it!)
Now the problem addressed in the paper is that, of course, the limit of an infinite number of experiments is pretty crazy. And if you cut off the number of experiments, then you run into the problem that the maverick worlds have small, but non-zero, norm. So why should you exclude them? What is suggested here is that you should exclude them because the Hilbert space of quantum theory isn't quite right (the authors argue that this discreteness comes from considerations involving gravity.) Instead of our normal Hilbert space, the argument is that a discrete set of vectors from the Hilbert space are all that are physically possible. One picture you might have is that of a bunch of nearly equally spaced vectors distributed over the Hilbert space, with unitary evolution for a time T followed by a "snapping" to the nearest of these vectors. How does this discretized Hilbert space (isn't calling it a discrete Hilbert space an oxymoron?) fix the problem of a finite number of measurements? Well you now have a minimal norm on your Hilbert space. States which are too close together appear as one. In other words you can exclude states which have norms which are too small. So, if you put some discretization on your Hilbert space you will recover, almost, Born's rule in the same manner as the infinite limit argument worked. An interesting argument, no?
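Just to make the finite-N version of this concrete, here is a little numerical sketch (my own toy code, not anything from the paper): for a qubit prepared with Born probability p for outcome 0, the total squared norm of all length-N outcome sequences whose frequency of 0s misses p by more than some epsilon is just a binomial tail. It shrinks as N grows, but at any finite N it never quite hits zero, which is exactly the gap the discreteness cutoff is supposed to plug.

```python
# Toy sketch (mine, not the paper's): total squared norm of the "maverick"
# branches -- length-N outcome sequences whose frequency of outcome 0
# deviates from the Born probability p by more than eps.
from math import comb

def maverick_norm(p, N, eps):
    """Sum of |amplitude|^2 over sequences with |k/N - p| > eps."""
    return sum(comb(N, k) * p**k * (1 - p)**(N - k)
               for k in range(N + 1) if abs(k / N - p) > eps)

p, eps = 0.3, 0.05
for N in [10, 100, 1000]:
    print(N, maverick_norm(p, N, eps))
# The maverick norm decays with N but is nonzero for every finite N; the
# paper's discreteness argument supplies a minimal resolvable norm below
# which these branches simply don't exist.
```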
One interesting question I've pondered before is how to make a discretized version of Hilbert space. Take, for example, a qubit. Now you can, as was suggested above, just choose a finite set of vectors and then proceed. But what if you want to avoid the "snapping" to this grid of vectors? Well you might then think that another way to proceed is to have a finite set of vectors and to allow unitary transforms only between these vectors. But when you do this, which unitary evolution you can apply depends on the vector you are applying it to. Nothing particularly wrong about that, but it seems like it might lead to a form of quantum theory with some strange nonlinearities which might allow you to solve hard computational problems (and thus send Scott Aaronson's mind spinning!) So what if you don't want to mess with this linear structure? Well then you're going to have to require that the discrete set of vectors are transformed into each other by a representation of some finite group. Does this really limit us?
Well it certainly does! Take, for instance, our good friend the qubit. An interesting question to ask is what are the finite subgroups of the special unitary transforms on a qubit, SU(2). It turns out that there aren't very many of these finite subgroups. One finite subgroup is simply the cyclic group of order n. Since we are dealing with SU(2) and not SO(3), this is just the set of rotations in SU(2) about a fixed axis by multiples of the angle [tex]$\frac{4 \pi}{n}$[/tex]. Another subgroup is the binary dihedral group, which is just like the cyclic group, but now you add an inversion which corresponds to flipping the equator around which the cyclic group states are cycling. Finally there are the binary tetrahedral group, the binary octahedral group, and the binary icosahedral group, which are SU(2) versions of the symmetries of the correspondingly named platonic solids. And that's it! Those are all the finite subgroups of SU(2)! (There is a very cool relationship between these subgroups and ADE Dynkin diagrams. For more info see John Baez's week 230.) So if we require that we only have a discrete set of states and that the group structure of unitary operations be maintained, this limits us in ways that are very drastic (i.e. that conflict with existing experiments.)
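If you want to see just how restrictive this is, here is a quick numerical check (my own construction, nothing to do with the paper): take SU(2) lifts of the familiar Hadamard and phase gates and close them under multiplication. You end up with only 48 matrices, the binary octahedral group, and that's the whole orbit any state can ever visit under this kind of evolution.

```python
# Quick check (my own construction): close SU(2) lifts of the Hadamard and
# phase gates under multiplication and count the elements.  The result is
# the binary octahedral group, of order 48.
import numpy as np

H = 1j * np.array([[1, 1], [1, -1]]) / np.sqrt(2)               # det = 1
S = np.diag([np.exp(-1j * np.pi / 4), np.exp(1j * np.pi / 4)])  # det = 1
I = np.eye(2, dtype=complex)

def key(U):
    # Hashable fingerprint of a matrix, rounded to kill floating-point noise.
    return tuple(np.round(U.flatten(), 8))

group = {key(I): I}
frontier = [I]
while frontier:
    new = []
    for U in frontier:
        for G in (H, S):
            V = G @ U
            if key(V) not in group:
                group[key(V)] = V
                new.append(V)
    frontier = new

print(len(group))  # 48: the binary octahedral group
```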
So if one is going to take a discrete set of vectors and use only them for quantum theory, it seems that you have to modify the theory in a way which doesn’t preserve the unitary evolution. Can one do this without drastic consequences? I think that is an interesting question.
Not Quite That Wide
An article today in the New York Times describes a cool experiment with "backwards propagating light." It's a cool experiment, but what I love best from the article is the following line:
However, the pulses were in a shape known as Gaussian, which is, in principle, infinite in width, though in practice not quite that wide.
Winner of the understatement of the year?
One Heck of a Lorentz Transform
When I was little I used to wonder if a long time ago a distant alien race had noticed our little planet and set up a gigantic mirror pointing towards the earth such that we could use superpowerful telescopes to look into our planet's past. Mostly I remember thinking that it would be cool if this were true and we could see dinosaurs (that was, I believe, the complete and total extent of my own version of the dinosaur fetish that seems to infect so many children.) Of course I was delighted when I discovered many years later the writings of Jorge Luis Borges, who had quite a fetish for mirrors. A memorable Borges quote on mirrors is from "Tlön, Uqbar, Orbis Tertius":
Then Bioy-Casares recalled that one of the heresiarchs of Uqbar had stated that mirrors and copulation are abominable, since they both multiply the numbers of man.
While this is not a quote I can exactly sympathize with, the logic of mirrors holds, I think, some strange consequences. For example, suppose you want to freeze a moment of your life for future generations to look back upon. Well, simply launch a mirror away from you at as close to the speed of light as you can possibly manage. Then some future generation will be able to use this mirror to look back at this moment. (Of course, how close you can launch this mirror to the speed of light will affect how well you have "frozen" this instant.) Of course, you could just as easily take a picture of the moment. But the mirror trick affords a certain sense of security: as long as no one can launch a mirror faster than yours, your mirror is safe. And this only gets better as time goes on (they need a faster mirror than yours to catch up to yours.) Of course the size of the mirror needed might be a little extravagant, and certainly gets worse as a function of time. Why Kodak hasn't marketed this to cult leaders who wish to preserve their teachings, however, I do not know. 🙂
The reason for this post is completely a function of associative memories: yesterday I was bored, and so I calculated for myself that if you move at approximately 1-10^(-39) percent of the speed of light relative to everyone else, then the event in your frame which has coordinates x=1cm and t=0 would correspond to an event which happened about 14 billion years ago, i.e. back at the big bang. In other words there are reference frames in which what is right next to you happened at the big bang.
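(For the curious, the back-of-the-envelope setup I used is sketched below; it's rough enough that depending on exactly how you set up the numbers you may land on a somewhat different exponent than the one I quoted above.)

```python
# Rough sketch of the estimate (numbers are approximate): how close to c
# must the relative speed be so that the event at x' = 1 cm, t' = 0 in your
# frame corresponds to an event ~14 billion years in the past of the other
# frame?  From t = gamma * beta * x'/c we need gamma ~ c*t/x', and for beta
# near 1, 1 - beta ~ 1/(2 * gamma**2).
c = 3.0e8                      # speed of light, m/s
x_prime = 0.01                 # 1 cm, in meters
t_target = 14e9 * 3.15e7       # ~14 billion years, in seconds

gamma = c * t_target / x_prime
one_minus_beta = 1.0 / (2.0 * gamma ** 2)
print(f"gamma ~ {gamma:.2e},  1 - v/c ~ {one_minus_beta:.2e}")
```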
So what do physicists (or whatever job title describes what I am) do when they get bored? Well it appears to me to be the same things we thought about when we were little. But now they just involve numbers.
Free to Decide
Over at Michael Nielsen’s blog, Michael has a post telling us that he won’t be posting again until August. Personally Michael’s lack of posting scares the bejebus out of me: if he’s not posting, he must be working on some grand research which will make everything I do look even more trivial than before. Michael, you’re scaring me!
Anyway, along with the post Michael posts a comment by UW's John Sidles trying to stir up some debate by asking about a paper by Conway and Kochen, "The Free Will Theorem", quant-ph/0604069. Actually I had heard about this paper a while ago, via some non-arxiv channel (where, I can't remember exactly) and had basically guessed from the brief description I had heard what the paper was about. This is how you know you are getting old and curmudgeonly: you can hear the title of a paper and a description of the results and guess the way in which those results were proven. (There are rumors, which I myself have never verified, that at a certain well known quantum computing research group, the day starts as follows. A little before lunch, the researchers wander in, check their email and look at the day's postings on the arxiv. Now they don't do anything more than read the titles. The research group then proceeds to go to lunch. At lunch they discuss, with great debate, the most interesting papers posted that day. Having never even read the papers! There is a similar story about a certain researcher in quantum computing who, if you tell that researcher a new result, will, within a day, almost always be able to rederive the result for you. Of course, my personal nickname for this person is "The Oracle" and it is tempting to tell "The Oracle" that a certain open problem has been solved, when it has not been solved, and see if (s)he can come up with the answer!)
(A note: throughout this post I will use the words "free will" to describe something which you may or may not agree is related to "free will" as you imagine it. In particular, an object is said to not have free will if its future evolution can be predicted from information in the past lightcone of the object. If it cannot be so predicted with certainty it is then said to possess free will. In fact, I find this definition already interesting and troublesome: can we ever predict anything by only knowing information in our past light cone? How do we know that in the next instant of our evolution a light ray won't hit us and burn us up? Certainly we cannot see such a light ray coming, can we? We can, of course, use physics to explain what happened: but can we use it to predict our future behavior? Of course for the electromagnetic field, we could shield ourselves from such radiation and reasonably assume that we can predict what is occurring. But what about gravity, which can't be shielded? For an account of this type of argument I recommend Wolfgang's comments here.)
Okay, back to the story at hand. What is Conway and Kochen's free will theorem? The basic idea is quite simple. I will explain it in the context of Bell's theorem and the Kochen-Specker theorem, since the authors don't describe it in this manner. Bell's theorem, we know, tells us that there is no local hidden variable theory explaining what quantum theory predicts. The Kochen-Specker theorem is less well known (which leads, in my opinion, the proponents of this different result to suffer a severe inferiority complex in which they constantly try to argue that the KS theorem is more important than Bell's theorem.) What the Kochen-Specker theorem says is that if there is a hidden variable theory of quantum theory, it must be contextual, i.e. the Kochen-Specker theorem rules out non-contextual hidden variable theories. The way I like to think about the Kochen-Specker theorem is as follows: suppose that there are some hidden variables associated with some quantum system. Now if you make a measurement on this system you will get some outcomes with differing probabilities. Sometimes you get outcomes with certainty. You'd like to say that when you perform this measurement, the outcome is actually associated with the value of some real hidden variable. But what the KS theorem tells you is that this is not possible: there is no way that those measurement outcomes are actually associated with the hidden variables in a nice one-to-one manner. What does this have to do with contextuality/non-contextuality? Well the "context" here is which other measurement outcomes you are measuring along with the outcome associated with a particular hidden variable. In non-contextual hidden variable theories, what those other measurement results are doesn't matter: it is those types of theories that the KS theorem rules out.
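To see why spin-1 systems keep showing up in this business, here is a small numerical check of the standard facts the KS construction leans on (ordinary quantum mechanics, not anything from the Conway-Kochen paper): for spin-1, the squared spin components along any three orthogonal directions commute and sum to 2, so in any orthogonal triple the outcomes are some permutation of (1,1,0), and a non-contextual assignment would have to hand out exactly one 0 and two 1s per triple. That is the combinatorial structure the KS proof shows is impossible to satisfy globally.

```python
# Check of the standard spin-1 facts behind Kochen-Specker-type arguments:
# the squared spin components along orthogonal axes commute and sum to
# s(s+1) = 2 times the identity, so in any orthogonal triple the joint
# outcomes are a permutation of (1, 1, 0).
import numpy as np

s2 = 1 / np.sqrt(2)
Sx = s2 * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = s2 * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

Sx2, Sy2, Sz2 = Sx @ Sx, Sy @ Sy, Sz @ Sz

print(np.allclose(Sx2 + Sy2 + Sz2, 2 * np.eye(3)))   # True: they sum to 2
print(np.allclose(Sx2 @ Sy2, Sy2 @ Sx2))             # True: they commute
print(np.round(np.linalg.eigvalsh(Sx2), 6))          # [0. 1. 1.]
```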
(Note: From my personal perspective, I find the KS theorem fascinating, but not as disturbing as Bell's theorem: that "what you measure" determines "what you can learn" is a deep insight, and one that tells us something about the way reality can be described. However it is not that difficult to imagine the universe as a computer in which accessing the memory of the computer depends on the context of your input: i.e. to get ahold of a memory location which holds the value 01001010, you need to query the machine, and it seems perfectly reasonable to me that the machine is set up in a manner such that I can't get all of those bits, since my measurement will only get some of them and the context of the measurement will change some of the other bits. This was basically John Bell's reaction to the Kochen-Specker theorem. Interestingly there is a claim in this Conway and Kochen paper that this loophole has been filled! I have a bit to say about this below. Of course no matter where you come out in this argument, there is no doubt that the KS theorem is DEEP: it tells us that the universe is not a computer whose memory we can gain total access to. And if we can't gain access to this memory, then does the memory have any "reality"?!!)
Well I'm rambling on. Back to the subject at hand, the free will theorem. In the free will theorem, Conway and Kochen set up an experiment in which you take two spin-1 particles and perform measurements on these spins. (Now for those of you in the know, you will already be suspicious that a spin-1 particle was used (the 3 dimensional irrep of SU(2)) as well as an entangled quantum state…sounds like both KS and Bell, doesn't it?) The free will theorem is then:
If the choice of directions in which to perform spin 1 experiments is not a function of the information accessible to the experimenters, then the responses of the particles are equally not functions of the information accessible to them
In other words if we have free will, then particles have free will! How does the theorem get proven? Well basically the proof uses the KS theorem as well as the perfect correlations arising from maximally entangled spin-1 systems. First recall that the KS theorem says that hidden variable theories must be contextual, i.e. if I give you just the measurement directions involved in a measurement, there is no way to map these onto yes/no outcomes in a manner consistent across a set of possible measurements. But suppose, however, that your map to yes/no outcomes (i.e. the particle's response) also depends on a hidden variable representing information in the particle's past light cone, i.e. that the particles have no free will (contrary to hypothesis.) Now because we are dealing with a maximally entangled spin-1 system, two spacelike separated parties, A and B, will obtain the same outcomes whenever they measure along the same direction. So for fixed values of the information in the past of both parties, the particle response should be identical and can only depend on the local measurement direction. But this is not possible when one chooses an appropriate set of directions corresponding to the Kochen-Specker proof. One can thus conclude that we cannot freely choose the measurement directions, i.e. that not all choices of measurements are possible: there must be hidden variables associated with the measurement choice as well. Thus we have shown that the particles having dependence on information in the past light cone implies that the measurement choice must have dependence on information in the past light cone. Having shown the contrapositive, we have shown the free will theorem.
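The perfect correlations invoked in that argument are also easy to check numerically. A quick sketch (again just textbook quantum mechanics, not the paper's construction): build the total-spin-zero state of two spin-1 particles and verify that measuring the squared spin component along the same direction on both sides always gives identical outcomes.

```python
# Sketch of the perfect correlations used in the proof: for the total-spin-0
# state of two spin-1 particles, measuring (a.S)^2 along the same direction
# a on both sides gives identical outcomes with probability 1.
import numpy as np

s2 = 1 / np.sqrt(2)
Sx = s2 * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = s2 * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

# Total-spin-zero state of two spin-1 particles in the Sz basis |m1, m2>,
# with basis ordering m = +1, 0, -1:
singlet = np.zeros(9, dtype=complex)
singlet[0 * 3 + 2] = 1 / np.sqrt(3)    # |+1, -1>
singlet[1 * 3 + 1] = -1 / np.sqrt(3)   # |0, 0>
singlet[2 * 3 + 0] = 1 / np.sqrt(3)    # |-1, +1>

rng = np.random.default_rng(0)
for _ in range(5):
    a = rng.normal(size=3)
    a /= np.linalg.norm(a)
    Sa = a[0] * Sx + a[1] * Sy + a[2] * Sz
    vals, vecs = np.linalg.eigh(Sa @ Sa)
    # Projectors onto the eigenvalue-0 and eigenvalue-1 subspaces of (a.S)^2.
    P0 = sum(np.outer(vecs[:, i], vecs[:, i].conj()) for i in range(3) if vals[i] < 0.5)
    P1 = np.eye(3) - P0
    # Probability that the two sides give different outcomes: always zero.
    p_diff = (singlet.conj() @ (np.kron(P0, P1) + np.kron(P1, P0)) @ singlet).real
    print(round(p_diff, 12))
```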
Now the interesting thing about the free will theorem is that it doesn't tell us whether the universe allows us to have free will or not. It simply says that if we assume some form of free will, then the particles we describe will also have free will. Of course the "free will" we describe here is "independence of (classical) information in the past light cone," so some would object to this definition of "free will." In particular, by this definition, a system which is totally random has free will. But it seems to me that the interesting question about free will is not whether one can have such random systems, but whether one can have a mixture of determined and undetermined evolutions. I mean the fundamental paradox of free will seems to me to be that free will involves a lack of cause for an action, but we want this action to itself have causes. In this respect, the above theorem suffers a bit, in my opinion, from a simplistic version of free will which is too absolutist for my tastes. What I find fascinating is whether we can "quantify" different versions of free will and what such quantifications would tell us about our real world.
Well it seems that I’ve had the free will to ramble on quite a bit in this post. Hopefully you might decide that the subject is interesting enough to choose to read the paper on your own 😉
Forward To the Past
Daniel sends me this Physorg article titled: "Professor Predicts Human Time Travel This Century." (Rule of thumb: never believe any prediction whose time span stretches beyond the retirement age of the person making the prediction!) The physicist in this title is the University of Connecticut's Ronald Mallett and the prediction is based, in some ways, on his paper, "The gravitational field of a circulating light beam," Foundations of Physics 33, 1307 (2003). In this paper Mallett solves Einstein's equations for an infinitely long cylinder of rotating light and finds that this solution contains closed timelike curves. A similar construction of an infinite cylinder of rotating dust also produces closed timelike curves (this solution actually predates Gödel's universe historically, although the closed timelike curves were not pointed out until after Gödel constructed his crazy universe.) I'm always skeptical about these solutions as it is generally the case that there is something unphysical about them. In the above paper's case this appears to be the fact that the solution is not asymptotically flat. It is also not clear that the solution is robust to perturbations or will be valid for a cylinder of finite length (although apparently for the dust case the finite length solution does have closed timelike curves.) But what is fascinating to me is how simple it sometimes appears to be to make general relativistic systems which have closed timelike curves. Okay, simple is perhaps not the best word, and of course the real question is whether it is possible to make solutions which don't have physically bad properties. For some excited crazy optimism along these lines, check out gr-qc/0211051.
Of course in any popular science article about time travel, the question of grandfather violence comes up. Interestingly Mallett deals with the issues of causality for systems with closed timelike curves à la David Deutsch:
“The Grandfather Paradox [where you go back in time and kill your grandfather] is not an issue,” said Mallett. “In a sense, time travel means that you’re traveling both in time and into other universes. If you go back into the past, you’ll go into another universe. As soon as you arrive at the past, you’re making a choice and there’ll be a split. Our universe will not be affected by what you do in your visit to the past.”
Which makes me think that this process should not be called "time travel" but should be called rewind, as in "pushing rewind on the VCR and recording a new program over the old one." When I was growing up I used to wonder what made us think that, if we traveled back in time, there would actually be anything back there. What a bummer to build a time machine, take a trip into the past, and find that you are the only thing in existence in this past!
Finally, the article ends in a very sad manner:
In light of this causal “safety,” it’s kind of ironic that what prompted Mallett as a child to investigate time travel was a desire to change the past in hopes of a different future. When he was 10 years old, his father died of a heart attack at age 33. After reading The Time Machine by H.G. Wells, Mallett was determined to find a way to go back and warn his father about the dangers of smoking.
Which should convince you that physicists too are only human.
Oldschool, Contradiction, and a Strawman
Today, Robert Alicki and Michal Horodecki have an interesting preprint on the arxiv, quant-ph/0603260: "Can one build a quantum hard drive? A no-go theorem for storing quantum information in equilibrium systems." (Now where have I seen that phrase "quantum hard drive" before?) As you can imagine, the paper cuts closely to my own heart, so I thought I'd put some comments on the paper here.
The question the paper addresses is right there in the title: is it possible to store quantum information in a system in a way similar to how a hard drive stores its information in ferromagnetic domains? The authors claim to show that this is not possible. (On a personal note, I cannot read the words "no-go theorem" without thinking of John Bell's famous quote: "what is proved by impossibility proofs is lack of imagination." Of course, this quote is also ironic: Bell is most famous for proving the impossibility of using local hidden variable theories to simulate quantum correlations! And of course I in no way think that Alicki or Horodecki suffer from a lack of creativity, that's for sure!)
The paper is structured into three parts, which I will call oldschool, contradiction, and strawman.
In the oldschool section of the paper the authors discuss why it is possible to store classical information in a ferromagnetic domain. They use the Curie-Weiss (CW) model of a ferromagnet, which is not exactly realistic, but captures (for ferromagnets of high enough dimension) much of the essential physics of real ferromagnets. In the Curie-Weiss model, we have classical spins which point either up or down. The total magnetization is then the sum of all of the spins, counting +1 for spin up and -1 for spin down. The energy of a particular configuration in the CW model is given by the negative of the total magnetization squared times some coupling constant. This means that in the CW model, each spin is coupled to every other spin via a ferromagnetic Ising interaction. Ferromagnetic Ising interactions make it energetically favorable to align the two spins involved in the interaction. Thus the ground state of the CW model is two-fold degenerate: the all-spins-pointing-up and the all-spins-pointing-down states. Further there is a big energy barrier for flipping between these two ground states. In their paper Alicki and Horodecki present the old school argument about why information related to the total magnetization is resistant to the effects of interacting with a thermal environment. They provide a heuristic argument which basically says that the time it will take to mix between the two ground states (or, more precisely, to wash out the magnetization), if you are below the critical temperature, grows rapidly with the number of spins. The argument is very old school: one simply looks at a microscopic dynamics where spins are flipped at a rate proportional to the Boltzmann factor coming from flipping the spin. In such a setting the relaxation looks like a random walk of the magnetization on an energy landscape given by the free energy of the system. One then finds that below a critical temperature the free energy has two minima, and when you do an analysis of the diffusion on this free energy landscape, the time it takes to diffuse between the two minima grows with the number of spins. Okay, all is good and well. This is (basically) the reason why if I encode classical information into a cold enough ferromagnetic system (in high enough dimensions) then the time it will take to destroy this information will be huge. Of course, one might also note that the time is huge, but finite: the states are metastable. Note, however, that this finite time means that eventually the information stored in the magnetization will be thermalized (or so close to thermalized that you can't tell the difference.)
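For readers who like to see this kind of old school argument in action, here is a tiny Monte Carlo sketch (my own toy code, not from the paper): single-spin-flip Metropolis dynamics on the Curie-Weiss model below the critical temperature, started in the all-spins-up state that encodes the stored bit. The number of sweeps before the magnetization changes sign, destroying the bit, grows quickly with the number of spins. (I use the usual mean-field normalization of the coupling, [tex]$H = -J M^2/(2N)$[/tex], so that the critical temperature stays put as N grows.)

```python
# Toy Monte Carlo sketch (mine, not the paper's) of classical information
# storage in the Curie-Weiss model.  With the mean-field normalization
# H = -J * M**2 / (2 * N) the critical temperature is T_c = J; below T_c the
# magnetization encoding the stored bit persists for a time that grows
# quickly with N.
import numpy as np

def sweeps_until_flip(N, T, J=1.0, max_sweeps=20000, seed=1):
    """Metropolis sweeps until the magnetization changes sign."""
    rng = np.random.default_rng(seed)
    spins = np.ones(N)            # encode the bit as "all spins up"
    M = float(N)
    for sweep in range(max_sweeps):
        for _ in range(N):
            i = rng.integers(N)
            dM = -2 * spins[i]
            dE = -J / (2 * N) * ((M + dM) ** 2 - M ** 2)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i] *= -1
                M += dM
        if M < 0:                 # the stored bit has been destroyed
            return sweep + 1
    return max_sweeps             # bit survived the whole run

for N in [20, 40, 80]:
    print(N, sweeps_until_flip(N, T=0.7))
```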
Okay, so much for the oldschool part of the paper. Now onto the meat of the paper. The authors next consider whether similar constructions can be performed for quantum information. Are there quantum systems with metastable states which can be used to store quantum information (a "quantum hard drive"?) Now, of course, given the above argument, what one would like to see is an argument about whether the time it takes to disorder quantum information in a system scales with the size of the system. Unfortunately, the authors do not take this approach but instead go a different route. Their approach is straightforward: they attempt to analyze the mathematical structure of the metastable states. They base their argument on the second law of thermodynamics. One way to phrase the second law of thermodynamics is that due to Kelvin: it is impossible to construct an engine which has no effect other than extracting heat from a reservoir and converting this to an equivalent amount of work. Fine. The authors then rephrase this in terms of what is called "passivity." A quantum state is passive when it is not possible to extract energy from the system in that state by means of an engine's cycle. Now the short of the story about passivity is that for finite systems, completely passive states (meaning that for all n, any n-fold tensor product of the state is passive with respect to n-fold local cyclic evolutions) are Gibbs states. In other words, for finite systems, there is only one completely passive state: the Gibbs state [tex]$\exp[-\beta H]/Z$[/tex]. Fine. The authors then discuss the case of infinite systems, but I will ignore this argument (1) because even in the oldschool part of the paper, the argument was not about infinite lifetimes, only about metastable states, (2) I've never seen an infinite system, nor, unless I become God, do I think I will ever in the future see an infinite system, and (3) even at this stage, I object to their argument, and indeed everything about my objection carries over to the infinite case. Okay, I'll mention the infinite system argument, just for completeness, but really I think the above objections should be kept in mind. The infinite system argument is that one can show that the completely passive states of an arbitrary (finite or infinite) system form a simplex. So now the authors argue that since these states form a simplex, i.e. are convex combinations of a finite number of extremal states, these completely passive states (finite or infinite) cannot be used to store quantum information. So you see, even for finite systems, where there is just one completely passive state, the Gibbs state, the argument will be the same.
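To unpack "passivity" a little: a state is passive if no cyclic Hamiltonian process, i.e. no unitary kick, can extract work from it, which for a finite system means [tex]$\mathrm{Tr}(H U \rho U^\dagger) \geq \mathrm{Tr}(H \rho)$[/tex] for every unitary U. Here is a quick numerical sanity check of my own (not from the paper) that the Gibbs state has this property while a generic non-equilibrium state does not:

```python
# Sanity check (my own) of passivity: for the Gibbs state no unitary can
# lower the energy expectation, i.e. Tr(H U rho U^dag) >= Tr(H rho) for all
# U, while a generic non-equilibrium state gives up work to some unitary.
import numpy as np

rng = np.random.default_rng(0)
d, beta = 4, 1.0

A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2                    # a random Hermitian "Hamiltonian"
w, V = np.linalg.eigh(H)
gibbs = V @ np.diag(np.exp(-beta * w)) @ V.conj().T
gibbs /= np.trace(gibbs).real

psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
nonequilibrium = np.outer(psi, psi.conj())  # a generic pure state

def max_extracted_work(rho, trials=2000):
    """Largest energy drop found over random unitary kicks."""
    e0 = np.trace(H @ rho).real
    best = 0.0
    for _ in range(trials):
        B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
        U, _ = np.linalg.qr(B)               # a (roughly Haar) random unitary
        best = max(best, e0 - np.trace(H @ U @ rho @ U.conj().T).real)
    return best

print("work extractable from the Gibbs state:   ", max_extracted_work(gibbs))
print("work extractable from a non-Gibbs state: ", max_extracted_work(nonequilibrium))
```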
So what is my objection? Well notice that the wool has been pulled over your eyes in this argument. In the oldschool section the authors argued that if you try to store information in the CW system, then the amount of time it takes for this information to be destroyed grows with the system size. In other words, if you start the system out in a non-Gibbs state storing the information, then it will take a long time to thermalize into the Gibbs state. This is what is important for storing information. Notice in particular that despite the fact that thermodynamic equilibrium is the end result of the dynamics considered, the process described is from a non-equilibrium state to the equilibrium state. In the contradiction section of the paper the authors argue that the Gibbs state is the only state which does not violate the second law (or, in the infinite case, the states which form a simplex.) But what does this have to do with the process where we attempt to store quantum information in the system and then ask how long it takes to reach the thermal equilibrium where the information is destroyed? As far as I can tell it doesn't have anything to do with it. In fact, if you think about their argument for a while, you will notice that for finite systems they are arguing that the only allowable state is the completely passive Gibbs state. In other words, if you apply their argument for finite systems to the system described in the oldschool section, the conclusion is that there is only one state, and one state cannot store information, i.e. classical hard drives, if they are finite, are not possible!
So what was the fallacy in the argument? It was the idea that the question of whether you can store information (classical or quantum) in a system is one that can be analyzed from equilibrium alone. Certainly the second law, as stated in the Kelvin formulation, holds for states in thermal equilibrium, but we are explicitly not asking about that situation: we are asking about the states encountered in moving from a nonequilibrium situation to an equilibrium situation. And we are interested in timescales, not about final states.
So what is the final section of their paper about? It is about Kitaev's toric code model for storing quantum information. This is the section I call the strawman. In particular, as I have argued before (see my paper here or any of the many powerpoints of my talks where I talk about self-correcting quantum computers), it is well known that Kitaev's model, in equilibrium, cannot store quantum information. The reason for this is the same reason as for the one dimensional Ising model. But we don't care about equilibrium, remember. We care about how long it takes to move to equilibrium. Here is where the energy gap comes into play: Kitaev's basic argument is that the creation of free thermal anyons will be suppressed like the Boltzmann factor [tex]$\exp[-\beta g]$[/tex] where [tex]$g$[/tex] is the gap energy. Thus, if we start the system away from equilibrium with the encoded quantum information, the probability that we will obtain a real thermal excitation is exponentially suppressed if your temperature is below the gap. Of course, every single qubit will be interacting with a bath, so the total rate of production of thermal anyons needs to be below one over the size of the system. Of course for infinite systems this can never be fulfilled. But for finite sized systems it can be fulfilled: note that the temperature only needs to decrease like one over the logarithm of the system size (and the logarithm, even for macroscopic systems, is rarely more than a factor of a hundred.) In the preprint, the authors basically point out that in the equilibrium state there is a constant fraction of real thermal excitations and these can diffuse around the torus to disorder the system: but again the question is what happens when you start out of equilibrium and how long it then takes to return to equilibrium. So I believe that in this section they are attacking basically a strawman. Of course I have my own personal worries about the toric code model and in particular about its fault tolerant properties. You can read about this in my paper on self-correcting quantum memories.
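To put a number on that logarithmic statement: if each of the N qubits independently nucleates a free anyon at a rate suppressed by the Boltzmann factor, then asking for fewer than about one excitation across the whole memory gives roughly [tex]$N \exp[-\beta g] < 1$[/tex], i.e. [tex]$k_B T < g/\ln N$[/tex]. A back-of-the-envelope sketch of this arithmetic (mine, in arbitrary units):

```python
# Back-of-the-envelope arithmetic (mine): demanding fewer than about one
# thermal anyon across N qubits, each suppressed by exp(-g/T), gives
# N * exp(-g/T) < 1, i.e. T < g / ln(N).  The required temperature drops
# only logarithmically with the size of the memory.
import numpy as np

g = 1.0                                  # energy gap, arbitrary units (k_B = 1)
for N in [10**3, 10**6, 10**12, 10**23]:
    print(f"N = {N:.0e}:  need T below about {g / np.log(N):.3f} (in units of the gap)")
```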
Gravitomagnetic London Moment?
Two papers (experiment: gr-qc/0603033, theory: gr-qc/0603032), an ESA press release, and blog posts (Uncertain Principles, Something Similar, and Illuminating Science) today are all about a recent experiment performed by Martin Tajmar (ARC Seibersdorf Research GmbH, Austria) and Clovis de Matos (ESA-HQ, Paris) which they claim shows a gravitomagnetic effect in a laboratory experiment. Not only do they claim that they observe a gravitomagnetic effect, but that the effect comes not from the standard general relativity mechanism, but instead from their own theory which has massive photons and massive graviphotons (which is what the authors call the carriers of the force which arises when one linearizes gravity and obtains equations which resemble the Maxwell equations, i.e. spin 1 instead of spin 2)! Color me skeptical.
The experiment is actually fairly straightforward. Just take a superconducting ring and spin it up. The authors then look for a gravitomagnetic field which can be measured by nearby accelerometers. Now the gravitomagnetic field strength that standard general relativity predicts for this setup is about 30 orders of magnitude lower than the effect they measure. But when they run this experiment they indeed do find accelerations in their accelerometers for superconducting rings (and none for non-superconducting rings.) The authors then interpret this effect as confirming evidence of their theory which invokes "graviphotonic" masses. If this is correct, then this is an astounding result: not only does it detect a gravitomagnetic field, but it also is a field which is not arising from standard general relativity. Wowzer.
Of course you can color me skeptical. As Chad Orzel points out, the signal they are talking about is only about 3 times as strong as their noise. Now when you look at one of their runs, i.e. figure 4 of gr-qc/0603033, the peaks look pretty good, no? Well figure 4b is a little strange: the gravitomagnetic effect appears to occur before the acceleration. Okay, a bit strange, but a single run proves nothing, right? Okay, what about figure 5? Ignore the temperature dependence for now, but would you have picked out the peaks that they picked out? Okay, so these things make me a little uneasy. Okay, so surely they did a lot of runs and tried to get some statistics on the effect. Indeed, they did something like this. This is figure 6. And this is what makes the paper frustrating: "Many measurements were conducted over a period from June to November 2005 to show the reproducibility of the results. Fig. 6 summarizes nearly 200 peaks of in-ring and above-ring tangential accelerations measured by the sensor and angular acceleration applied to the superconductors as identified e.g. in Fig 4 with both electric and air motor." Why is this frustrating? Well because I have no clue how they analyzed their runs and obtained the tangential accelerations. Were the peaks extracted by hand? (And what is the angular acceleration? Is it the average acceleration?) Argh, I can't tell from the paper.
Well certainly this is a very interesting set of papers (the theory paper is pretty crazy: if I get a chance I’ll try to go through it in some detail and post on it.) I would personally love to know more about the experiment. I spent this afternoon pondering if I could think up a way in which they get the effect they do, but I’m certainly no expert on this stuff, and might be missing something totally obvious.
(Note also that there is another claim of interesting gravitational effects around superconductors made by Podkletnov. For a history of this “anti-gravity” effect see Wikipedia. Since the Podkletnov experiment is considered controversial, the authors of the above article make sure to point out that they are not observing the effect Podkletnov claimed.)
The Cosmic Computer
Seth Lloyd has a new book out, Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos. I just picked up a copy and flipped to this amusing anecdote:
…When Shannon showed his new formula for information to the mathematician John von Neumann and asked him what the quantity he had just defined should be called, von Neumann is said to have replied “H.”
“Why H?” asked Shannon.
“Because that’s what Boltzmann called it,” said von Neumann…
That crazy von Neumann.
APS TGQI Best Student Paper Awards
Congrats to the two winners of the first Best Student Paper Awards for the APS Topical Group on Quantum Information, Concepts and Computation: Michael Garrett (Calgary) and Chris Langer (NIST, Boulder). (What, you've already seen this announcement? Congrats! What, you've not seen this announcement? Must be because you're not a member of the topical group. Maybe you should join? Then again who am I to say what you should do!) The awards are sponsored by the Perimeter Institute (theory) and the Institute for Quantum Computing (experiment) and come with a $500 award, but more importantly with fabulous fame! (Hey, no jokes about the "American Physical Society" awards in quantum computing both being sponsored by Canadian institutions.)
Here is the abstract from Michael Garrett’s talk:
9:36AM U40.00007 Stochastic One-Way Quantum Computing with Ultracold Atoms in Optical Lattices, MICHAEL C. GARRETT, University of Calgary, DAVID L. FEDER, University of Calgary — The one-way model of quantum computation has the advantage over conventional approaches of allowing all entanglement to be prepared in a single initial step prior to any logical operations, generating the so-called cluster state. One of the most promising experimental approaches to the formation of such a highly entangled resource employs a gas of ultracold atoms confined in an optical lattice. Starting with a Mott insulator state of pseudospin-1/2 bosons at unit filling, an Ising-type interaction can be induced by allowing weak nearest-neighbor tunneling, resulting in the formation of a cluster state. An alternate approach is to prepare each spin state in its own sublattice, and induce collisional phase shifts by varying the laser polarizations. In either case, however, there is a systematic phase error which is likely to arise, resulting in the formation of imperfect cluster states. We will present various approaches to one-way quantum computation using imperfect cluster states, and show that the algorithms are necessarily stochastic if the error syndrome is not known.
and here is the abstract from Chris Langer’s talk
8:48AM U40.00003 Robust quantum memory using magnetic-field-independent atomic qubits, C. LANGER, R. OZERI, J. D. JOST, B. DEMARCO, A. BEN-KISH, B. BLAKESTAD, J. BRITTON, J. CHIAVERINI, D. B. HUME, W. M. ITANO, D. LEIBFRIED, R. REICHLE, T. ROSENBAND, P. SCHMIDT, D. J. WINELAND — Scalable quantum information processing requires physical systems capable of reliably storing coherent superpositions for times over which quantum error correction can be implemented. We experimentally demonstrate a robust quantum memory using a magnetic-field-independent hyperfine transition in 9Be+ atomic ion qubits at a field B = 0.01194 T. Qubit superpositions are created and analyzed with two-photon stimulated-Raman transitions. We observe the single physical qubit memory coherence time to be greater than 10 seconds, an improvement of approximately five orders of magnitude from previous experiments. The probability of memory error for this qubit during the measurement period (the longest timescale in our system) is approximately 1.4 × 10^−5, which is below the fault-tolerance threshold for common quantum error correcting codes.
Bits, Bits, Wherefore Art Thou Bits?
Black holes evaporate via the process of Hawking radiation. If we take the initial pure state describing a body which will collapse and form a black hole which can evaporate, then it would appear that this pure state will evolve, after the complete evaporation, to a state which is mixed. Since it is doing this without leaving any remnant for quantum information to hide in, this would seem to violate the unitarity of quantum theory, which does not allow a closed system pure state to evolve to a closed system mixed state. Similarly, if we consider an evaporating black hole, there are spacelike surfaces which contain both the collapsing body and the outgoing Hawking radiation: therefore if we somehow believe that the outgoing Hawking radiation contains quantum information which we have thrown into the black hole, this process must have cloned this information and thus violated the linearity of quantum theory. This reasoning has been hounding theoretical physicists for ages and is known as the black hole information paradox.
A few years ago, Horowitz and Maldacena proposed a solution to this problem (see hep-th/0310281, "The black hole final state".) The basic idea of their solution is to use boundary conditions at the black hole singularity to fix this problem. Actually the basic idea of their solution is to use post-selected quantum teleportation. How does this work? Well consider the initial pure state describing the collapsing matter. In addition to this collapsing matter, there will be the Unruh state of the infalling and outgoing Hawking radiation. Now it works out that this Unruh state is basically a maximally entangled quantum state. So what Horowitz and Maldacena propose is that, in order to get the pure state describing the collapsing matter "out" of the black hole, one need only "teleport" this information by using the Unruh radiation. Indeed, we can consider such teleportation: perform a measurement of the pure state describing the collapsing matter and the state of the infalling Hawking radiation in the appropriate (generalized) Bell basis. Now, of course, in order to complete the teleportation procedure we need to send the result of this measurement to the outgoing radiation and apply the appropriate unitary rotation to get the pure state of the collapsing matter outside of the black hole. But, of course, we can't do this: we can't send the classical information from inside the black hole to outside. So what Horowitz and Maldacena propose is that instead of performing the full teleportation, a particular result is post-selected for this teleportation. And this result will be the one where you do nothing in the teleportation protocol. In other words, they postulate that the black-hole singularity acts like a measurement in which one always gets a particular outcome. This is the "final state" projection postulate.
This is a very nice way to try to avoid the black-hole information paradox. One reason is that it seems to put all the craziness at the black-hole singularity, and none of the craziness elsewhere, where we would expect our current ideas about physics to be rather robustly true. But does this really solve the problem? Gottesman and Preskill, in hep-th/0311269, argue that it does not. What Gottesman and Preskill point out is that the collapsing matter and the infalling Unruh radiation will, in general, interact with each other. In this case, the problem is that this interaction can cause the information to no longer be faithfully reproduced outside of the black hole. The problem is that this interaction amounts to post-selection onto a state which is no longer maximally entangled: the post-selection will then fail at producing the appropriate teleported copy of the state outside of the black hole. Sometimes, in fact, this interaction can completely destroy the effect Horowitz and Maldacena are attempting to achieve (if it disentangles the post-selected state.) This does not bode well for this post-selected answer to the black-hole information paradox.
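Here is a toy numerical sketch of the Horowitz-Maldacena mechanism with the Gottesman-Preskill interaction thrown in (my own code, and I make no claim that the averaging convention matches the calculation in Lloyd's paper discussed below): teleport a qubit through a maximally entangled pair, post-select on the "do nothing" outcome, and allow an interaction U between the collapsing matter and the infalling radiation. With U equal to the identity the state escapes perfectly; a generic U drags the fidelity down.

```python
# Toy sketch (mine) of the Horowitz-Maldacena final-state proposal with a
# Gottesman-Preskill interaction U between the collapsing matter (A) and the
# infalling radiation (B).  B and C (the outgoing radiation) start maximally
# entangled; the "final state" post-selects A,B onto that same maximally
# entangled state.  With U = I the input reappears on C; a generic U does
# not reproduce it faithfully.
import numpy as np

d = 2                             # a single qubit of "collapsing matter"
Phi = np.eye(d) / np.sqrt(d)      # maximally entangled state, as a matrix Phi[b, c]

def postselected_output(psi, U):
    """Unnormalized state on C after the final-state projection."""
    state = np.einsum('a,bc->abc', psi, Phi)                          # |psi>_A |Phi>_BC
    state = np.einsum('abxy,xyc->abc', U.reshape(d, d, d, d), state)  # U acting on A,B
    return np.einsum('ab,abc->c', Phi.conj(), state)                  # project onto <Phi|_AB

def fidelity(psi, U):
    out = postselected_output(psi, U)
    out /= np.linalg.norm(out)
    return abs(np.vdot(psi, out)) ** 2

def haar_unitary(n, rng):
    B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    Q, R = np.linalg.qr(B)
    return Q * (np.diag(R) / np.abs(np.diag(R)))   # phase fix for the Haar measure

rng = np.random.default_rng(2)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

print("U = identity:", fidelity(psi, np.eye(d * d)))   # 1.0: perfect escape
fids = [fidelity(psi, haar_unitary(d * d, rng)) for _ in range(2000)]
print("average fidelity over random interactions:", np.mean(fids))
```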
So is this the end? Well it is never the end! One might try to show that the amount of information destroyed in the Gottesman-Preskill scheme is small. Now this, in some sense, would be comforting: who would notice a single qubit missing in a world so large? On the other hand, while this might be comforting, it would certainly cause consternation among those of us who would like an explanation which is satisfying without giving up even an ounce of information. This question, of the amount of information lost, is addressed, in part, in the recent paper "Almost Certain Escape from Black Holes in Final State Projection Models" by Seth Lloyd, Physical Review Letters, 061302 (2006).
Consider the Horowitz and Maldacena scheme as modified by Gottesman and Preskill, with an interaction between the infalling radiation and the collapsing matter state described by U. Lloyd calculates the fidelity of this scheme, averaged over all unitaries U according to the Haar measure (this fidelity is the overlap between the initial pure collapsing matter state and the outgoing state after the Horowitz-Maldacena post-selection.) Lloyd finds that this fidelity is 0.85…. In the paper, Seth argues that this is indeed large enough to indicate that most of the information survives. So, I ask, is 0.85… fidelity really satisfying? I would argue that it is not, even if I accept that averaging over the unitaries makes any sense at all. Why? Well suppose that you daisy chain this procedure. Then it seems that your average fidelity could be made as small as desired. Thus while you might argue that there is only a small loss of unitarity for random unitaries, there are physical processes for which this loss of unitarity is huge. This seems to be one of the lessons of quantum theory: destroying only a little bit of information is hard to do without bringing down the whole wagon. Daniel Gottesman says, I think, exactly this in this New Scientist article. Even Lloyd, at the end of his article, hedges his bets a bit:
Final-state projection will have to await experimental and theoretical confirmation before black holes can be used as quantum computers. It would be premature to jump into a black hole just now.
What about you? Would you jump into a black hole with the fidelity that we could put you back together of 0.85… on average?