A New Optimized Blog

Via Michael Nielsen’s blog, I’ve learned that Scott Aaronson has started a blog: Shtetl-Optimized. Since Scott is essentially crazy (that’s a compliment, peoples) I’m really looking forward to his posts. Plus he said he will talk about the Simpsons, and heck, who doesn’t like talking about the Simpsons?

Nobel Physics Prize 2005

The physics Nobel prize has been announced. This year’s winners are John Hall (University of Colorado), Theodor Hänsch (Max Planck Institute in Garching), and Roy Glauber (Harvard): the first two for experimental work in high precision laser spectroscopy, and the latter for theoretical work in quantum optics. Sweet! I note with some delight that Ted Hänsch’s research group currently oversees work strongly motivated by the goal of building quantum information processing devices. Which reminds me of something I like to say to my experimental friends in quantum information science: “First one to a quantum computer gets a Nobel prize!”

Shor Broke Our Toys

Rod Van Meter has an interesting post up today on his blog titled Stop the Myth: QKD doesn’t fix what Shor broke. Along similar lines, Rod points to the paper quant-ph/0406147 by Kenneth G. Paterson, Fred Piper, and Ruediger Schack, “Why Quantum Cryptography?”
Here is my take on what Rod and quant-ph/0406147 are arguing (but I’d recommend reading what they say, because my expertise in quantum cryptography goes to zero in the limit where the number of people who actually know what they’re talking about grows past four). Mostly I’ve just cobbled this together from what Rod said and my own personal misunderstandings. So you should read Rod’s post first, and then laugh at my silly rehashing of the main points.
To simplify life, let’s break secure communication into two steps: authentication and key exchange. In most applications we first authenticate our channel and then do key exchange over this authenticated channel. Now, Shor’s algorithm broke the most popular public key cryptosystems (those based on the difficulty of factoring and discrete log). These public key cryptosystems were often used for both authentication and key exchange. Thus Shor broke both steps when they were built on those cryptosystems.
OK, now what does quantum key distribution (QKD) do? Given an authenticated channel, it allows you to create secret keys whose security is based on quantum theory being a correct description of nature. But how does one establish the authentication? Because Shor broke the most widely used public key cryptosystems, we need to establish it with some other technique. One way is for the parties to share a small secret key which they can then use to establish the authenticated channel. So the point Rod and others are making is that QKD doesn’t provide an authentication scheme whose security is based on quantum theory. Rod thinks that physicists haven’t known about this or realized its importance, which may be true, as I can’t read other physicists’ brains or listen to them all, but I certainly think everyone working in quantum cryptography realizes this (and even some people who don’t like me 😉 ) QKD is thus best viewed not as a total solution to unconditional security, but as a way to take a small shared secret key and use it to create a huge shared secret key. The security of the result rests on (1) the security of the authentication scheme and (2) the laws of quantum theory. To put it in simple words: QKD fixes only part of what Shor broke.
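To make the division of labor concrete, here is a minimal Python sketch of just the classical authentication half, assuming the parties already share a small secret. It uses HMAC, a symmetric primitive that Shor’s algorithm doesn’t break; real QKD deployments typically use information-theoretically secure Wegman–Carter authentication instead, and the key and messages below are my own hypothetical stand-ins, not any actual protocol.

```python
import hmac
import hashlib

# Hypothetical pre-shared secret: the small key Alice and Bob must
# already share in order to authenticate their classical channel.
PRESHARED_KEY = b"small-secret-established-out-of-band"

def tag(message: bytes) -> bytes:
    """Tag a message using only symmetric crypto (HMAC-SHA256).

    Symmetric MACs are not broken by Shor's algorithm, which is why
    a small pre-shared key suffices for this step.
    """
    return hmac.new(PRESHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    """Accept a message from the classical channel only if its tag checks out."""
    return hmac.compare_digest(tag(message), received_tag)

# Alice sends, say, her QKD basis choices over the classical channel;
# a man in the middle can't forge tags without the pre-shared key.
msg = b"basis choices: +x+xx+..."
assert verify(msg, tag(msg))
assert not verify(b"tampered message", tag(msg))
```

The QKD layer then runs over this authenticated channel, growing the small pre-shared secret into a large one.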
Another important point: in the above paragraph I’m talking about using QKD to distribute keys at a rate high enough to use them as one-time pads for communication. Right now, many QKD systems do not operate at speeds fast enough to achieve this for reasonable communication rates. Getting to these high speeds is certainly one of the grand challenges in experimental QKD! Currently, I am told, many of the QKD systems sold use the keys they generate as part of another layer of cryptography (like 3DES or AES). This does provide security, but weaker security: one could conceivably break these other ciphers, though one would need to do so for each key the QKD layer refreshes. So as QKD rates go up and up, conceivably one can rekey more and more often, making the security higher and higher, until eventually you will just want to use the key as a pad.
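For contrast, “using the key as a pad” means nothing more than the following (a toy sketch, not a QKD implementation; the security is unconditional only if the key is truly random, at least as long as the message, and never reused):

```python
import secrets

def otp(message: bytes, key: bytes) -> bytes:
    """XOR a message with a same-length key: for a one-time pad,
    encryption and decryption are the same operation."""
    assert len(key) >= len(message), "pad must be at least message length"
    return bytes(m ^ k for m, k in zip(message, key))

# Pretend this key came out of a QKD session at a high enough rate.
message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # stand-in for QKD output

ciphertext = otp(message, key)
assert otp(ciphertext, key) == message  # XORing again recovers the plaintext
```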
But, like I said, my understanding of this is murky, so go read Rod’s blog and check out quant-ph/0406147! Oh, and for all you lurking quantum cryptologists, I’d love to hear your take on all of this.

Got Security?

John Preskill points me to the Workshop on Classical and Quantum Information Security, to be held December 15-18, 2005 at Caltech:

This workshop will bring together researchers from a variety of backgrounds who work on different aspects of classical and quantum information security. Participants will strive to identify issues and problems of common interest that can be effectively addressed by pooling their expertise.

Looks like a very broad spectrum of researchers will be attending, which should make for a very interesting workshop. Overcoming communication barriers is often the first step in interesting new research directions, and I’m sure one hope of the workshop is to aid this difficult process.
Which reminds me of a documentary I saw a few years ago which tried to identify why certain schools were much better at producing (not buying 😉 ) Nobel prizes. What the documentary argued was that the common theme among the top Nobel-producing institutes was that they all had very loose definitions of what counted as “in a particular discipline.” Or, the way I like to think about it: the energy landscape on the space of different research programs had very shallow minima at the top Nobel-producing institutes, whereas at most other schools these wells were very deep…sometimes even infinite square wells! This is, of course, an oversimplification, but I’m glad to see that Caltech (highest Nobel prize count per member of the community that I know of) is hosting a workshop which so well embodies this spirit.

Marilyn >> Newton!

Hawking interview in the Guardian. Funny:

“If you could go back in time, who would you rather meet, Marilyn Monroe or Isaac Newton?” and after 10 minutes he says in that voice that makes the blandest statement sound profound: “Marilyn. Newton seems to have been an unpleasant character.”

The interview reminds me of attending a physics talk by Hawking at Caltech. What was awesome was that after the talk, there was a question from the audience. And then, because it takes Hawking a long time to compose his answer, the bigwig professors at Caltech began to debate what exactly Hawking’s answer should be. Wow, very fun stuff to witness! Then after about 15 minutes came Hawking’s answer. We all waited in anticipation for that famous digitized voice. His answer to the question: “No.”

Dirac >> Feynman?

Information Processing points to a review by Freeman Dyson in the New York Review of Books of Perfectly Reasonable Deviations from the Beaten Track: The Letters of Richard P. Feynman.
What I find interesting in the article is Dyson’s claim that Paul Dirac was a greater genius than Richard Feynman. Of course, judging “greater genius” seems about as silly as worrying about whether it is a “double backward doggy dare” or a “triple backwards over-the-top doggy dare.” With this caveat in place, however, I just have to say “huh?”
In my mind Dirac has three claims to genius: his derivation of the Dirac equation, his work on magnetic monopoles (showing that the existence of a single such monopole could explain why charge comes in discrete units), and his work unifying the early differing approaches to quantum theory. Feynman, in my mind, has four or more claims to genius: his derivation of the path integral formulation of quantum theory, his space-time approach to solving the problems of quantum electrodynamics, his work on the theory of superfluidity (showing the importance of quantum theory at a “macroscopic” level), and his model of weak decay (work with Gell-Mann which was also done independently by George Sudarshan and Robert Marshak). So in my mind, I put Feynman just above Dirac (what, you mean you don’t have your own personal ordering of geniuses?)
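To unpack the monopole claim a bit: Dirac showed that if even a single monopole of magnetic charge $g$ exists anywhere, consistency of quantum mechanics forces every electric charge $e$ to satisfy (in Gaussian units)

\[ e\,g = \frac{n \hbar c}{2}, \qquad n \in \mathbb{Z}, \]

so one monopole anywhere in the universe quantizes electric charge everywhere, in integer multiples of $\hbar c / 2g$.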
And, after thinking about it for a while (too much time, perhaps!) I think I can guess why Dyson puts Dirac above Feynman (oh, to be a physicist known by your last name alone!) I believe the reason is that Dyson was originally a mathematician. Feynman’s work is filled with the sort of raw physical insight that physicists love and admire. Sure, making the path integral rigorous is a pain in the rear, but it works! In Dirac’s work, on the other hand, we find a clear mathematical beauty: the Dirac equation and the magnetic monopole are motivated more by arguments of symmetry than by any appeal to a physicist’s “calculate and run” methodology (indeed, the latter is not even known to correspond to experimental reality!)
So who is the greater genius? Well I “double dog dare you” to come up with reasons that Dirac is a greater genius than Feynman.
Update: See the comments for some fun back and forth. OK, in my head I really put Dirac and Feynman at the same level. What I find interesting is how one’s background influences this (silly) debate. If you are a particle theorist, I bet Dirac>Feynman. If you went to Caltech as an undergrad, I bet you have Feynman>Dirac. Ah, the ways theorists waste away their days.

Angel Dust, Ozone, Wack, and Rocket Fuel

Course I’m sitting in on this term (will probably have to miss two lectures, drats!):

CSE 533: The PCP Theorem and Hardness of Approximation, Autumn 2005
Instructors:
Venkatesan Guruswami and Ryan O’Donnell
Meeting times: MW 10:30-11:50 at Loew Hall, Room 222
Course Description
The PCP Theorem states that there is a certain format for writing mathematical proofs, with the following property: a verifier can randomly spot-check only a *constant* number of bits of the proof and yet be convinced of its correctness or fallaciousness with extremely high probability. Further, proofs in this format need only be polynomially longer than they are in “classical” format, and can be produced in polynomial time from the classical proof.
The proof of this theorem in the early 1990’s was a great triumph of theoretical computer science, and it became even more important in the mid-1990’s when it was shown to be extremely powerful in proving NP-hardness results for *approximate* solutions to optimization problems.
Unfortunately, the known proof of the PCP theorem was extremely complex and also decentralized; even complete courses devoted to it rarely finished all of the proof details. However in April 2005, Irit Dinur completely changed the PCP landscape by giving a self-contained proof taking only a dozen or so pages. (See http://eccc.uni-trier.de/eccc-reports/2005/TR05-046/index.html)
In this course we will prove the PCP Theorem and also give some of its consequences for hardness of approximation. Topics will be a mix of older work from the 1990’s and very recent results on the cutting edge of research, such as:
Interactive proofs
The Parallel Repetition Theorem
Hardness of approximation for linear equations, clique size, maximum cut, set cover, and lattice problems
Fourier analysis of boolean functions
The Unique Games conjecture

Hopefully I will have time to blog about some of this, as it is probably some of the most beautiful stuff in computer science. Even if it does share a name with a drug.
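Just to give a flavor of the spot-checking idea, here is a hypothetical Python toy: treat a claimed satisfying assignment for a 3-SAT instance as the “proof,” and have the verifier sample only a few random clauses instead of reading everything. To be clear, this naive encoding is not the PCP proof format — a proof can be wrong in a single clause and this verifier will almost never notice — and the whole content of the theorem is that proofs can be re-encoded, with only polynomial blowup, so that any false claim corrupts a constant fraction of the proof bits.

```python
import random

# A 3-SAT clause is three literals: i > 0 means variable i,
# i < 0 means the negation of variable i.
Clause = tuple[int, int, int]

def clause_satisfied(clause: Clause, proof: dict[int, bool]) -> bool:
    """Evaluate one clause under the claimed assignment (the "proof")."""
    return any(proof[abs(lit)] == (lit > 0) for lit in clause)

def spot_check(clauses: list[Clause], proof: dict[int, bool],
               samples: int = 10) -> bool:
    """Read only a constant number of spots in the proof:
    accept iff every randomly sampled clause is satisfied."""
    return all(clause_satisfied(random.choice(clauses), proof)
               for _ in range(samples))

# Tiny instance: (x1 or x2 or not x3) and (not x1 or x2 or x3)
clauses = [(1, 2, -3), (-1, 2, 3)]
good_proof = {1: True, 2: True, 3: False}
print(spot_check(clauses, good_proof))  # True: every sampled clause checks out
```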