Holography Oversold?

Warning: this post is about a subject I know a tiny, tiny bit about. I suspect I will have to update it once I get irate emails pointing out my horrible misunderstandings.
Roman Buniy and Stephen Hsu (both from the University of Oregon…quack, quack…the mascot of UofO is the Duck!) cross-listed an interesting paper to quant-ph today: hep-th/0510021, “Entanglement entropy, black holes and holography.” (Steve posted about it on his blog.) As many of you know, the idea of holography is that the number of degrees of freedom of a region of our universe scales with the surface area of the region, not its volume. This strange conjecture is extremely interesting, and bizarre, because it raises all sorts of questions about how such theories work (I especially have problems thinking about locality in such theories, but hey, that’s just me.) One line of evidence for the holographic principle comes from black hole physics: one can formulate a thermodynamics for black holes, and this thermodynamics gives an entropy for a black hole which is proportional to the area of its horizon. Another piece of evidence is the AdS/CFT correspondence, which shows an equivalence between a certain quantum gravity theory in an anti-de Sitter universe and a conformal field theory on the boundary of this space: i.e. quantum gravity in this space can be described by a theory on the surface of the space, a holographic theory, so to speak. Indeed, the fact that certain string theories contain black holes with a holographic number of degrees of freedom is taken as evidence that string theory might be consistent with our universe.
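To get a feel for why the area scaling is so striking, here is a quick back-of-the-envelope sketch of my own (not from the paper), just plugging standard constants into the Bekenstein-Hawking formula S/k_B = A/(4 l_p²) for a solar-mass black hole:

```python
import math

# Physical constants (SI units)
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J s
M_sun = 1.989e30    # solar mass, kg

# Schwarzschild radius and horizon area of a solar-mass black hole
r_s = 2 * G * M_sun / c**2
A = 4 * math.pi * r_s**2

# Bekenstein-Hawking entropy in units of k_B: S/k_B = A / (4 l_p^2)
l_p2 = G * hbar / c**3          # Planck length squared
S_over_kB = A / (4 * l_p2)

print(f"r_s = {r_s / 1000:.1f} km, S/k_B = {S_over_kB:.2e}")
```

The answer is of order 10⁷⁷, vastly more entropy than the sun itself carries, and it comes entirely from the horizon area: that is the kind of fact the holographic principle is trying to explain.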
What Buniy and Hsu suggest in their paper is that the holographic bound is not a bound on the degrees of freedom of our theory of the universe, but should instead be thought of as a bound on the entropy of a region in the presence of gravity. They point out that if you take gravity away, then the number of degrees of freedom scales like the volume (although, if you take the ground state of a local quantum field theory, this particular state has an entropy which scales like the area: the volume-scaling states Buniy and Hsu consider are therefore necessarily not ground states of such theories. But this doesn’t mean that they don’t exist or that we can’t construct them.) They then argue that if, on the other hand, you require that the region not undergo gravitational collapse, this requirement precludes such states, and indeed leaves you with states whose entropy scales like the area. What Buniy and Hsu seem to be arguing, then, is that while one does obtain entropies which scale like the area from these arguments about black holes, this doesn’t imply that the degrees of freedom of the underlying theory must scale as the area.
One might wonder whether there is a difference between the entropy scaling like the area and the degrees of freedom scaling like the area. Well, certainly there would be for an underlying theory of quantum gravity: presumably different degrees of freedom can be accessed which give the same area scaling, but which represent fundamentally different physical settings. So, for example, I can access some of these degrees of freedom, and as long as I don’t create a black hole, these degrees of freedom will be as real for me as they can be. But if I try to access them in such a manner that I create a black hole, I will only see an effective number of degrees of freedom proportional to the area of the black hole.
Which is all very interesting. Just think: maybe one of the greatest achievements of string theory, deriving holographic bounds, actually ends up being a step in the wrong direction. And, no, I’m not wishing this fate upon string theory. I wish no fate upon any theory: I just want to understand what nature’s solution is.

Nature Physics

The first issue of Nature Physics is out (this is where the article by Brassard on information and quantum theory appeared.) From the editor’s opening letter:

Authors may be pleased to know that manuscripts can be submitted to Nature Physics not only in Microsoft Word, but in LaTeX too.

Which caused me to almost fall out of my chair laughing. Welcome to the modern world, Nature publishing.

Super Solid Helium?

Yesterday I went to a condensed matter seminar on “super solid Helium” by Greg Dash. What, you ask, is super solid Helium? Well, certainly you may have heard of superfluids. When you take Helium-4 and cool it down, somewhere around 2 Kelvin the liquid He4 (assuming the pressure is not too high, so that it does not solidify) makes a transition to a new state of matter, the superfluid: a liquid which doesn’t have any viscosity. Well, actually I think what happens is that you get a state of matter which has one component which is superfluid and another which is normal. The superfluid component, along with having no viscosity, also has effectively infinite thermal conductivity, so it’s impossible to set up a temperature gradient in a superfluid. He3 also forms a superfluid at cold enough temperatures, but it does this at a much lower temperature, around a few thousandths of a Kelvin. The mechanisms for superfluidity in these two systems are different: in He4 it is Bose condensation of the He4 atoms themselves (which are bosons), while in He3 it is Bose condensation of pairs of He3 atoms, which act as composite particles with Bose statistics (the mechanism is similar to the role Cooper pairs play in superconductivity.)
So what is super solid Helium? Well, the theoretical conjecture is that if you take a solid, this solid has vacancies (i.e. it’s not perfect: there are places in the lattice where the solid is missing atoms), and it is these vacancies which can form a Bose condensate at low enough temperature. So the idea behind super solid Helium is that you have a highly pressurized chunk of cold Helium, and below a certain temperature the vacancies in the solid all condense into the same state. Thus in such a substance the vacancies should flow without resistance through the solid. (I say Helium here, but it could possibly occur in other substances as well.)
But the question is, does such a mechanism actually occur? Over the years, various experiments were performed looking for super solids, and no one saw any evidence of this strange phase of matter. Then in 2004, Eun-Seong Kim and Moses Chan of Pennsylvania State University performed experiments in which they claimed to have observed super solid Helium.
The basic idea behind the experiment is pretty simple: if you take a superfluid and try to spin it, it will be much easier to spin because of the lack of viscosity of the fluid. So consider a torsional pendulum (a pendulum which, instead of swinging back and forth like a normal pendulum, is a disk attached to a rod; the disk is rotated by an angle and this rotation angle then oscillates like a pendulum). Start it oscillating and cool the system from above the superfluid transition temperature to below it: the system will all of a sudden become easier to spin, i.e. its moment of inertia will decrease. This will result in an increase in the oscillation frequency of the torsion pendulum. So what Kim and Chan did was take highly pressurized Helium, so that it was solid, and put it on such a torsional pendulum. And at around one tenth of a Kelvin, Kim and Chan observed exactly this decrease in the moment of inertia!
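The size of the effect is easy to estimate: a torsion oscillator resonates at f = (1/2π)√(κ/I), so if some fraction of the helium decouples, I drops and f goes up. Here is a little sketch with numbers I made up purely for illustration (they are not Kim and Chan’s actual apparatus values):

```python
import math

# Illustrative numbers only -- not Kim and Chan's actual apparatus values.
kappa = 1.0e-3   # torsion constant of the rod, N m / rad
I_cell = 1.0e-7  # moment of inertia of the empty cell, kg m^2
I_He = 1.0e-9    # moment of inertia of the solid helium, kg m^2
rho_s = 0.01     # fraction of the helium assumed to decouple below the transition

def freq(I):
    # resonant frequency of a torsion oscillator: f = (1/2pi) * sqrt(kappa / I)
    return math.sqrt(kappa / I) / (2 * math.pi)

f_normal = freq(I_cell + I_He)                # above the transition: all the He rotates
f_super = freq(I_cell + (1 - rho_s) * I_He)   # below: a fraction rho_s decouples

print(f"fractional frequency shift: {(f_super - f_normal) / f_normal:.2e}")
```

The point of the sketch is just that even a tiny decoupled fraction shows up as a clean, measurable upward shift in the resonant frequency, which is why the torsion oscillator is such a sensitive probe.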
Now of course, science isn’t just about one team observing something and then everybody sitting back and saying “yes, that must be super solid Helium.” Instead what happens is that (1) theorists get all up in arms trying to figure out if there are alternative explanations for the experiment and begin thinking about how to test these explanations, and (2) experimentalists design experiments to duplicate the result or to make complementary confirmations of other properties super solid Helium would exhibit. The talk I went to yesterday was about some of the theoretical ideas for alternative explanations of the results of Kim and Chan, as well as some discussion of their more recent results. Interestingly, I’d say that right now there is a stalemate: the alternative explanations have problems explaining the experimental results, but newer experiments also exhibit effects which are harder to fit with what the theory of super solid Helium would predict (in particular an experiment, which I did not understand very well, that attempted to verify the existence of the super solid phase in a different manner: one of those complementary confirmations seemed to fail.) What was nice to hear was that a different experimental group is gearing up to repeat the experiment of Kim and Chan. So maybe soon we will have either a confirmation of the effect seen by Kim and Chan, or no confirmation, and then the fun of trying to figure out what the heck is causing the effect they saw.
Science in action. Ain’t it beautiful?

Nobel Prize in Chemistry 2005

The Nobel prize in Chemistry this year goes to Robert H. Grubbs (Caltech), Richard R. Schrock (MIT), and Yves Chauvin (Institut Francais du Petrole) for the development of metathesis. Massive misspelling of “Caltech” runs amok among the world’s newspapers.

Is Geometry the Key?

Gilles Brassard has a very nice article in the Commentary section of Nature Physics, “Is information the key?” From the abstract:

Quantum information science has brought us novel means of calculation and communication. But could its theorems hold the key to understanding the quantum world at its most profound level? Do the truly fundamental laws of nature concern — not waves and particles — but information?

The article is well worth reading.
The basic question asked in the paper is whether or not it is possible to derive quantum theory from basic rules about information plus a little bit more. Thus, for instance, one can ask whether it is possible to derive quantum theory by assuming things like no superluminal communication plus no bit commitment (two properties of quantum theory as we understand it today.) To date, there have been some very nice attempts to move this task forward. In particular, Brassard mentions the work of Bub, Clifton and Halvorson, which is very nice. However, my beef with all such derivations I’ve seen so far is that their assumptions are too strong. For example, in the Bub et al. work, they assume the theory must be described within a C*-algebraic framework. And this assumption just hides too much for me: it basically amounts to assuming the linearity of the theory, and doesn’t really shed light on why quantum theory should act in this manner. Linearity, for me, is basically the question “why amplitudes and not probabilities?” This, I find, is a central quantum mystery (well, not a mystery, but something I’d like to see a reason given for, in the same way that if I had been around in 1900, I would have wanted to see an explanation for the Lorentz transformations, which is what Einstein, so beautifully, gave.) On the other hand, the fact that one can make these assumptions and derive quantum theory or quantum-like behavior is extremely suggestive, and I would be happy if lots of people started thinking about this question.
Personally, I’ve already gone through a stage where I thought this might be the approach to understanding quantum theory, and moved on to another stage (just as I have, throughout my life, loved every single interpretation. I even have a strong memory of sitting in a car on a vacation with my family, probably when I was in high school, where I distinctly remember understanding the many-worlds interpretation of quantum theory! On the other hand, I also have a vivid memory of waking up one night while I was an undergrad at Caltech with an absolutely impossible-to-disprove reason for why there is evil in the world. Strange feelings, those. “The moment of clarity faded like charity does.”) In work I did with Ben Toner, we showed a protocol for simulating the correlations produced by projective measurements on a singlet using shared randomness and a single bit of communication. For a long time, Ben and I wondered whether we could derive this protocol from more basic assumptions. For example, is there a simple game for which the protocol with one bit of communication is the best strategy (this game, without the communication, also being best solved by bare, unaided quantum theory?) Of course, one can always define a game such that these correlations and the protocol are optimal, but that is cheating: we wanted a simple game to go along with our simple protocol. Alas, we could never find such a game, or a more basic set of principles from which to derive our protocol. As a funny note, we called the game we were searching for “Warcraft.” And when we were looking for a game whose optimal strategy would yield all of quantum theory, we called it simply “The Game.” What is “The Game” at which quantum theory is the optimal strategy?
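For the curious, the protocol itself is simple enough to check by Monte Carlo. As stated in quant-ph/0304076: Alice and Bob share two independent uniformly random unit vectors λ1, λ2; Alice outputs A = -sgn(â·λ1) and sends the single bit c = sgn(â·λ1)·sgn(â·λ2); Bob outputs B = sgn(b̂·(λ1 + cλ2)). A quick sketch (the naming is mine) verifying that this reproduces the singlet correlation -â·b̂:

```python
import math
import random

def rand_unit():
    # uniform random unit vector on the sphere via normalized Gaussians
    while True:
        v = [random.gauss(0.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(x * x for x in v))
        if n > 1e-12:
            return [x / n for x in v]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def sgn(x):
    return 1 if x >= 0 else -1

def singlet_round(a_hat, b_hat):
    # one round: shared randomness is the pair of unit vectors (l1, l2)
    l1, l2 = rand_unit(), rand_unit()
    A = -sgn(dot(a_hat, l1))                       # Alice's +/-1 outcome
    c = sgn(dot(a_hat, l1)) * sgn(dot(a_hat, l2))  # the single communicated bit
    B = sgn(dot(b_hat, [l1[i] + c * l2[i] for i in range(3)]))  # Bob's outcome
    return A * B

random.seed(1234)
theta = math.pi / 3
a_hat = [0.0, 0.0, 1.0]
b_hat = [math.sin(theta), 0.0, math.cos(theta)]
N = 200_000
est = sum(singlet_round(a_hat, b_hat) for _ in range(N)) / N
# quantum singlet prediction: <AB> = -cos(theta)
print(f"estimated correlation: {est:.3f}")
```

Run it for any pair of measurement axes and the estimated correlation lands on -cos(θ) up to statistical noise, which is exactly the geometry I keep going on about below.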
After working on “The Game” and related ideas, I’ve become less convinced that this is the proper approach to take. Why? Well, mostly due to the structure of the protocol we came up with for simulating the singlet quantum correlations. The important observation about this protocol, I now believe, is its geometric nature. If you look at the protocol (quant-ph/0304076), what is interesting about it, to me, is its beautiful geometry. I should mention that recently a reformulation of our protocol appeared, quant-ph/0507120 by Degorre, Laplante, and Roland, which is very nice and also demonstrates how simple the geometry involved in the protocol is. So why do I focus on this geometric aspect? Well, because I think that the ultimate explanation for why quantum theory is the way it is must, in some way, provide answers to the question of hidden variables in quantum theory (prejudice number 1), and the only way I know to get around the obstacles to hidden variables is to muck with the topology of spacetime (prejudice number 2); thus an attempt to understand our protocol in terms of changes in the topology of spacetime is the proper route to take. However, I haven’t yet succeeded in recasting our protocol in these terms. Perhaps some crazy wunderkind out there can easily see how to do this! (Alternatively, I wouldn’t be surprised if my prejudice number two were somehow replaced with particularly strange reasoning about the nature of time. Here I am thinking somewhat of the transactional interpretation of quantum theory.)
Anyway, understanding why quantum theory is the way it is, is either one of the greatest mysteries of physics, or a dead end that will never lead to any experiments. I hope for the former, but can’t quite convince myself that it isn’t the latter.

A New Optimized Blog

Via Michael Nielsen’s blog, I’ve learned that Scott Aaronson has started a blog: Shtetl-Optimized. Since Scott is essentially crazy (that’s a compliment, peoples) I’m really looking forward to his posts. Plus he said he will talk about the Simpsons, and heck, who doesn’t like talking about the Simpsons?

Nobel Physics Prize 2005

The physics Nobel prize has been announced. This year’s winners are John Hall (University of Colorado), Theodor Hänsch (Max Planck Institute in Garching) and Roy Glauber (Harvard): the first two for experimental work in high-precision laser spectroscopy, and the latter for theoretical work in quantum optics. Sweet! I note with some delight that Ted Hänsch’s research group currently oversees work which is strongly motivated by the quest to build quantum information processing devices. Which reminds me of something I like to say to my experimental friends in quantum information science: “First one to a quantum computer gets a Nobel prize!”

Shor Broke Our Toys

Rod Van Meter has an interesting post up today on his blog titled Stop the Myth: QKD doesn’t fix what Shor broke. Along similar lines, Rod points to the paper quant-ph/0406147 by Kenneth G. Paterson, Fred Piper, and Ruediger Schack, “Why Quantum Cryptography?”
Here is my take on what Rod and quant-ph/0406147 are arguing (but I’d recommend reading what they say, because my expertise on quantum cryptography goes to zero in the limit of the people who know what they are talking about going to some number greater than four.) Mostly I’ve just cobbled this together from what Rod said and my own personal misunderstandings. So you should read Rod’s post first, and then laugh at my silly rehashing of the main points.
To simplify life, let’s break secure communication into two steps: the first is authentication and the second is key exchange. In most applications we first authenticate our channel, and then we do key exchange over this authenticated channel. Now, Shor’s algorithm broke the most popular public key cryptosystems (those based on the difficulty of factoring and of the discrete log.) These public key cryptosystems were often used for both authentication and key exchange. Thus Shor broke both of these steps when they were built on those cryptosystems.
OK, now what does quantum key distribution (QKD) do? Well, what it does is, given an authenticated channel, it allows you to create secret keys whose security is based on quantum theory being a correct description of nature. But how does one establish this authentication? Because Shor broke the most widely used public key cryptosystems, we need to establish it with some other technique. One way to do this is for the parties to share some small secret keys which they can then use to establish the authenticated channel. So the point Rod and others are making is that QKD doesn’t provide an authentication scheme whose security is based on quantum theory. Rod thinks that physicists haven’t known about this or realized its importance, which may be true, as I can’t read other physicists’ brains or listen to all they say, but I certainly think everyone working in quantum cryptography realizes this (and even some people who don’t like me 😉 ) QKD is thus best viewed not as a total solution to unconditional security, but as a way to take a small secret key and use it to create a huge shared secret key. The security of the result is based on (1) the security of the authentication scheme and (2) the laws of quantum theory. To put this into simple words, I would say that QKD fixes only part of what Shor broke.
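To make the “small shared secret key for authentication” point concrete: authentication whose security doesn’t rest on factoring being hard can be done with universal hashing, where a shared secret is consumed to tag each message. Here is a toy sketch of my own (a single-block, one-time tag over Z_p; real Wegman-Carter style schemes are more involved, and the parameters here are purely illustrative):

```python
import secrets

# Toy one-time message tag via an affine (strongly universal) hash over Z_p.
# Parameters are illustrative; real information-theoretic MACs differ in detail.
p = 2**61 - 1  # a Mersenne prime

def keygen():
    # shared secret: the pair (a, b), to be used for ONE message only
    return (secrets.randbelow(p), secrets.randbelow(p))

def tag(key, m):
    # tag of message m (an integer mod p): t = a*m + b mod p
    a, b = key
    return (a * (m % p) + b) % p

def verify(key, m, t):
    return tag(key, m) == t

key = keygen()
m = 42                            # the message to authenticate
t = tag(key, m)
assert verify(key, m, t)          # legitimate tag accepted
assert not verify(key, m + 1, t)  # tampered message rejected (w.h.p.)
```

An eavesdropper who sees one (message, tag) pair can forge a tag for a different message only by guessing, with success probability 1/p, and that guarantee uses no computational assumptions at all: it just burns shared secret key, which is exactly the resource QKD is good at replenishing.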
Another important point: in the above paragraph I’m talking about using QKD to distribute keys at a sufficiently high rate that they can be used as one-time pads for communication. Right now, many QKD systems do not operate at speeds fast enough to achieve this for reasonable communication rates. Certainly getting to these high speeds is one of the grand challenges in experimental QKD! Currently, I am told, many QKD systems sold use the keys they generate as part of another layer of cryptography (like 3DES or AES). This does provide security, but weakens it: one could conceivably break these other systems. (One would need to do this for every key the QKD link creates, so as QKD rates go up and up, conceivably one should be able to make the security higher and higher, until eventually you will just want to use the key as a pad.)
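And “use the key as a pad” really is as simple as it sounds, just XOR, which is why the key rate has to match the message rate. A minimal sketch (the key here is ordinary random bytes standing in for QKD output):

```python
import secrets

def otp(key: bytes, data: bytes) -> bytes:
    # One-time pad: XOR each data byte with a key byte.
    # The key must be at least as long as the data and NEVER reused.
    assert len(key) >= len(data)
    return bytes(k ^ d for k, d in zip(key, data))

# Pretend this key came out of a QKD session (here: just random bytes).
message = b"attack at dawn"
key = secrets.token_bytes(len(message))

ciphertext = otp(key, message)
recovered = otp(key, ciphertext)  # XOR is its own inverse
print(recovered)  # b'attack at dawn'
```

The pad gives information-theoretic secrecy, but it eats one key byte per message byte, which is precisely why today’s slow QKD links fall back on AES-style ciphers instead.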
But, like I said, my understanding of this is murky, so go read Rod’s blog and check out quant-ph/0406147! Oh, and for all you lurking quantum cryptologists, I’d love to hear your take on all of this.