No arXiv support yet, but Zotero is a cool Firefox reference-organizer tool to keep an eye on. Without arXiv support it's pretty much useless to me right now, but playing around with it on other sites makes me think it could be extremely valuable.
Quantum Percent Sign?
Okay, so when we talk about classical systems, our description of the configuration of the different states is given by a probability. So I might say my bit is a mixture of 50% 0 and 50% 1. Now when we move to quantum theory we no longer have probabilities but instead have complex amplitudes. But what symbol am I supposed to use for this? My state is a mixture of 1/sqrt(2) q% 0 and 1/sqrt(2) q% 1? Maybe we should invent a new symbol which is the % sign but with the slash the other direction? Or turn those 0s in the % sign into “q”s?
QIP 2007 Deadlines Approaching
Michael Nielsen passes along an email detailing the fact that the deadlines for QIP 2007, to be held in Brisbane, Australia, are fast approaching:
QIP 2007
The tenth QIP (Quantum Information Processing) Workshop is to be held in
Brisbane, Australia, from January 30 through February 3, 2007. QIP covers
theoretical aspects of quantum information science, including quantum
computing, quantum cryptography, and quantum information theory.
The deadline for abstract submission for contributed talks (long and
short) and for posters is 4 November, 2006.
The deadline for early bird registration is 24 November, 2006.
Some partial support will be available for students and postdocs
(see the website).
Full details are available at the workshop website:
http://qipworkshop.org/
Links to past QIP workshops (including programs) are available at the
website. Note that this year’s program will follow a similar format to
QIP 2006, with approximately 10 invited talks, and 30 contributed talks.
Hope to see you in Brisbane in 2007!
I have to decide whether I’m going to be able to make it this year as I’m teaching Data Structures next term. Two years in a row missing QIP sounds really bad to me.
Siskiyou Daily News Police Reports
I grew up in a small town. Whenever I go home, one of the first things I do is turn to the Police Department report in the local newspaper. Why? To see if my friends have ended up in jail? Nope. For gems like this
Possible dead male in wheelchair. Emergency personnel were dispatched. Officier advised, subject was fine, wheelchair was dead. Wheelchair and subject was transported to his residence.
or, how about this,
Report of a dog causing a traffic problem in the area. Officer arrived and located the dog. After a game of fetch the dog was loaded into the car taken to the kennels.
or, maybe this
Report of a man carrying dead turkey down Main Street. Turkey determined to be recent kill and hunting season open.
What's wrong? House ran away? Dog on fire?
Why visit this blog? For such stellar content as Talking Dogs.
Green Eggs and Green Bacon
Apparently I’m running for the public regulation commission in Albuquerque, NM (spotted by UNM quantum scientists eating at the delicious El Patio, those lucky bastards.)
Free internet for everyone! So you can waste your time reading this blog, of course 🙂
A PR Battle Worth Fighting?
Yoinked from the comments of my post laugh therapy, John Preskill weighs in with a wise remark:
…But actually it is nice, for those of us who may have come to take the theory of quantum fault tolerance for granted, to be reminded of how truly remarkable and marvelous it is. This paper does not lay a glove on the theory. Even so, let’s be careful not to be too smug. We sure have a long way to go toward turning the theory into practice.
Indeed! My first reaction is always to act like I’m a book critic, and to crank up my hyperbole meter to overdrive. But I certainly agree with John that we should not be too smug. To destroy a line from a baseball movie, “Until we build it, they won’t come.” Indeed to me the best critique of quantum error correction is simply “you haven’t done it yet” to which I can only nod my head in agreement and then run over to the experimentalists and cheer them along.
But John’s comment got me thinking (again) about the relationship quantum computing theory has with the physics community. Certainly I don’t think there has been much of a change in the hiring practices of U.S. physics departments when it comes to quantum computing theorists. In two words: “not good.” And I wonder if perhaps one of the reasons for this is that the central message of the threshold(s) theorem(s) has not penetrated into physics. Indeed, in my mind, the threshold theorem for quantum computation is essentially a statement about a new phase of many-body quantum systems. But to many physicists, I’ll bet that the result, if they’ve heard anything about it at all, sounds more like a strange engineering/computer science result, and the inclusion of the word “theorem” sets off their antimathematical radar detection system.
In some ways what I’m saying is that it feels like we’ve lost the public relations battle in publicizing the significance of the threshold theorem to physics departments. Perhaps part of this is because the language used to describe the theorem is so often devoid of the terms physicists would like to see. Indeed, when I talk about the threshold theorem I always, always immediately transport myself into computer science speak. But that doesn’t mean that there isn’t a beautiful way to cast the result in terms of the physics of many-body quantum systems. Should we be making fault tolerance more accessible to physicists? Maybe this is a PR battle we should be trying harder to win!
Okay this is strange. Just as I was about to post this, an email popped into my inbox about the APS March meeting:
TGQI is also organizing a tutorial on Quantum Error Correction and Fault-Tolerant Quantum Computation, which will be given on Sunday, March 4, with Daniel Gottesman of the Perimeter Institute as the instructor. To attend a tutorial, you must pre-register for the Meeting.
Sounds like a good way to convert some physics skeptics!
Will the Real Dave Bacon Please Stand Up?
Update: As Greg points out the way this thing works is pretty dang silly. For me it gives me 108 with my name (10 if I use Dave, and 98 if I use David). Anecdotally this seems off. Why? Well I grew up in a town of just over 6000 and there were two (2!) people with my name. One of them was even my age and had the same middle initial. In fact that’s how my name got changed to Dave from David. The teachers needed some way to distinguish us and so I got Dave and he got David. So 2/6000 times 300 million = 100000. Mmm…argument by anecdote.
Not as Bad as the Best Expert
Tuesday I went to a cool talk by Sanjeev Arora on the multiplicative weight method and, because I should be working on a different calculation, I thought I’d write up a description of a basic version of this algorithm (Mmm…I love the smell of procrastination in the morning.)
Suppose that there is a stock exchange where each day you can bet on whether a stock will go up or down that day. Of course you want to bet correctly on this stock every day so as to earn the maximum amount of moolah. But you don’t know much about stocks and are a bit lazy. You do, however, have access to a group of [tex]$n$[/tex] experts. Every day you get to learn these experts’ predictions for that day’s movement of the stock price. Now of course, since these are experts, you’d like to do as well as the best of these experts. So what should you do? It seems that knowing what the experts predict shouldn’t help you perform as well as the best of these experts, right? I mean, how would you know who the best of these experts is?
Well, it turns out that there is a way to guarantee performing almost as well as the best expert in this game. This is the heart of the multiplicative weight algorithm. Begin by assigning a weight to every expert. Since we don’t know anything at the time of our first bet, we might as well assign a weight of one to every expert. Now intuitively what we are going to do is to increase the relative weight of experts when they correctly predict the outcome of a day’s movement. Thus we will update our weights as follows. If an expert got the prediction correct, then we do nothing to the weight. If, on the other hand, the expert was wrong, then we multiply the expert’s corresponding weight by [tex]$1-\epsilon$[/tex], where [tex]$0 < \epsilon < \frac{1}{2}$[/tex] is a small number which we will discuss later. Okay, so the weights will now begin to favor the experts who are right more often. Now how should we exploit this in our own playing of the game? Well, each expert will have a prediction for the movement of the stock. What we do is take a weighted majority of these predictions. That is, if the weight of the experts predicting the stock will go up is more than half the total weight of all experts, then we will bet on up. If the weight of the experts predicting the stock will go down is more than half the total weight of all experts, then we will bet on down. A pretty simple algorithm, no? As we play the game, we change the weights of the experts and bet depending on what the weighted majority of them predict. That's among the simplest strategies we could imagine…so how well does it work?
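Here is a minimal sketch of this weighted-majority strategy in code (my own toy version, not anything from Sanjeev's talk; I'm arbitrarily encoding “up” as +1 and “down” as -1):

```python
import random


def weighted_majority(predictions, outcomes, epsilon=0.25):
    """Play the weighted-majority game.

    predictions: one list per round, each holding the n experts'
                 predictions (+1 = up, -1 = down).
    outcomes:    the actual movement (+1 or -1) for each round.
    Returns (our mistake count, list of each expert's mistake count).
    """
    n = len(predictions[0])
    weights = [1.0] * n          # start every expert at weight one
    our_mistakes = 0
    expert_mistakes = [0] * n
    for preds, outcome in zip(predictions, outcomes):
        # Bet with the weighted majority (ties arbitrarily go to "down").
        up_weight = sum(w for w, p in zip(weights, preds) if p == +1)
        our_bet = +1 if up_weight > sum(weights) / 2 else -1
        if our_bet != outcome:
            our_mistakes += 1
        # Multiply each wrong expert's weight by (1 - epsilon).
        for i, p in enumerate(preds):
            if p != outcome:
                expert_mistakes[i] += 1
                weights[i] *= 1 - epsilon
    return our_mistakes, expert_mistakes
```

Feeding in a sequence of expert predictions and true outcomes then returns our mistake count alongside each expert's, so you can check the guarantee against the best expert directly.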
Now here is the cool thing. Suppose that we play the game [tex]$t$[/tex] times. Let [tex]$m(t)$[/tex] denote the number of times we, using the above algorithm, guess incorrectly (our number of mistakes). Further, let [tex]$m_i(t)$[/tex] denote the number of times expert [tex]$i$[/tex] guesses incorrectly. Then we claim that, for all experts, the above algorithm satisfies [tex]$m(t) \leq \frac{2 \log n}{\epsilon} + 2(1+\epsilon) m_i(t)$[/tex]. We'll prove this in a second, but note that since this holds for all experts (for all [tex]$i$[/tex]), it holds in particular for the best expert. In other words, the number of mistakes we make is (ignoring for the moment the first term) at most roughly twice that of the best expert! Pretty cool, no?
How do you prove this? Well, it's fairly straightforward. Note that the weight of the [tex]$i$[/tex]th expert after playing [tex]$t$[/tex] times is [tex]$w_i(t)=(1-\epsilon)^{m_i(t)}$[/tex]. Define the “potential” function [tex]$\phi(t)=\sum_i w_i(t)$[/tex] (so that [tex]$\phi(1)=n$[/tex].) Now what happens to the potential function as a function of time? Well, the potential decreases whenever at least one expert is wrong. What we are interested in is how it decreases when we make a mistake in our guess, i.e. when by following the weighted majority we are wrong. In this case, at least half of the total weight is multiplied by [tex]$1-\epsilon$[/tex]. Thus in this case the potential is multiplied by a factor of at most [tex]$\frac{1}{2}+\frac{1}{2}(1-\epsilon)=1-\frac{\epsilon}{2}$[/tex]. Thus if we have made [tex]$m(t)$[/tex] mistakes at time [tex]$t$[/tex], the potential at that time must satisfy [tex]$\phi(t) \leq n\left(1-\frac{\epsilon}{2}\right)^{m(t)}$[/tex]. Now since the weights remain positive, the potential function is always at least as great as any one of its constituent weights: [tex]$\phi(t) \geq w_i(t)$[/tex] for all [tex]$i$[/tex], so that [tex]$w_i(t) \leq n\left(1-\frac{\epsilon}{2}\right)^{m(t)}$[/tex]. Putting this together with [tex]$w_i(t)=(1-\epsilon)^{m_i(t)}$[/tex], we thus find that [tex]$(1-\epsilon)^{m_i(t)} \leq n\left(1-\frac{\epsilon}{2}\right)^{m(t)}$[/tex]. Taking the log of this expression we obtain [tex]$m_i(t) \log(1-\epsilon) \leq \log n + m(t) \log\left(1-\frac{\epsilon}{2}\right)$[/tex], and using [tex]$x \leq -\log(1-x) < x+x^2$[/tex] for [tex]$0<x<\frac{1}{2}$[/tex], we obtain our desired expression [tex]$m(t) \leq \frac{2 \log n}{\epsilon} + 2(1+\epsilon) m_i(t)$[/tex].
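In case that last step goes by too fast, here is one way to spell out the final algebra (just unpacking the standard bounds on the logarithm; nothing new): starting from [tex]$m_i(t) \log(1-\epsilon) \leq \log n + m(t) \log\left(1-\frac{\epsilon}{2}\right)$[/tex], rearrange to get [tex]$-m(t) \log\left(1-\frac{\epsilon}{2}\right) \leq \log n - m_i(t) \log(1-\epsilon)$[/tex]. Since [tex]$x \leq -\log(1-x) < x+x^2$[/tex] for [tex]$0<x<\frac{1}{2}$[/tex], the left side is at least [tex]$m(t)\frac{\epsilon}{2}$[/tex] and the right side is at most [tex]$\log n + m_i(t)(\epsilon+\epsilon^2)$[/tex]. So [tex]$m(t)\frac{\epsilon}{2} \leq \log n + m_i(t)(\epsilon+\epsilon^2)$[/tex], and dividing both sides by [tex]$\frac{\epsilon}{2}$[/tex] gives [tex]$m(t) \leq \frac{2 \log n}{\epsilon} + 2(1+\epsilon) m_i(t)$[/tex], as claimed.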
So what use is this multiplicative weight algorithm? Well, that was the subject of Sanjeev's talk, which described all sorts of cool applications of this algorithm, including boosting in machine learning theory, approximately solving semidefinite programs, approximately solving fractional packing and covering problems, and online convex optimization. The interested reader is directed to this survey paper.
Pre-Dewiging Pictures
Last week, after going to Innsbruck, I attended the QIPC Workshop in London. The workshop was held at the Royal Society of London and the theme was “Physicists and Computer Scientists Unite!” Scott Aaronson and I were asked to open up the workshop, and, because we are two entirely shameless people, we decided that the best way to do this was, given the locale, to replace the debate between physicists and computer scientists with an opening debate between Newton and Leibniz. Scott has posted our opening dialogue here and the talk which followed it here. And, even more importantly, and to most embarrass ourselves, I now present to you incriminating evidence (thanks Viv!) that Scott and I are indeed crazy enough to mock these two great scientists by wearing the appropriate attire:
If you look closely at the following picture, you’ll see that Scott is on the ground groveling before Newton:
And finally, here we make fun of a cookie name