When Two Zeros Are Not Zero: The Strange Lives of Quantum Cell Phones

A result of much quantum coolness out today: arXiv:0807.4935 (scirate): “Quantum Communication With Zero-Capacity Channels” by Graeme Smith and Jon Yard. Strange things are going on when we try to use our quantum cell phones, it seems. Quantum cell phones, what the hell? Read on…

You know the situation. You’re standing in line to get your morning coffee and bagel, and you get a call from your boss: “Hey Pontiff Dude, what’s your bank account number? I need to have it so that I can deposit this large bonus into your account and if I don’t do this within a few seconds, you won’t get the dinero.” “My account number is [sound of cell phone noise]”. “Crap!” Yep, you know the situation: you’ve just encountered a noisy communication channel. Your cell phone, for whatever reason (but most probably due to the cell phone company or maybe that jammer installed by the coffee place owner who is sick of you jabbering in line), cut out the information you wanted to transmit. You tried to send your account number: “11235813” but this information came out on the other end, to your boss, as “bbbbbbbb!” (It could have been worse: it could have come out as “you stink boss”) How annoying. And now you have to go back to being poor.
Noisy communication channels are a communicator’s worst enemy. Sending information down a line (like the tubes out of which the Internet is made) is supposed to get that information to its destination. When this fails, we are not happy campers. Of course, when you think a bit about this, you might begin to wonder whether there are ways to get around this problem. And, of course, you already know such a method, and you’ve almost certainly used it on a cell phone: “Can you hear me now? Can you hear me now?” Suppose that your cell phone behaves in a particular manner: for every phrase you utter, it has a one-quarter chance of correctly transmitting this phrase, and a three-quarter chance of destroying the message, so the receiver on the other end hears nothing. This is not good if your life depends on getting that message to the other end of the line. How can we get around this?
Simple: just repeat yourself. The probability that both of your messages will be destroyed is now three fourths times three fourths, which is nine sixteenths. Or, turning this around, the probability of successfully transmitting at least one of your phrases is now seven sixteenths, which is greater than your original success probability of one fourth. By using the channel twice you can boost your probability of successfully transmitting information!
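If you want to check this arithmetic yourself, here’s a quick back-of-the-envelope sketch in Python (the numbers are just the toy cell phone example above, nothing from the paper):

```python
# Probability that at least one of k repeated messages gets through,
# when each use of the channel destroys the message with probability p_destroy.
def success_probability(p_destroy: float, k: int) -> float:
    """Chance that at least one of k independent tries survives."""
    return 1 - p_destroy ** k

# One use of the flaky cell phone: 1/4 chance of success.
print(success_probability(0.75, 1))  # 0.25
# Two uses: 1 - (3/4)^2 = 7/16.
print(success_probability(0.75, 2))  # 0.4375
```

Keep repeating and the failure probability shrinks geometrically, which is the basic trick behind the “can you hear me now?” strategy.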
This general setting, of transmitting information down a channel which behaves in a probabilistic manner, not just destroying information but also scrambling it up (so that when you say “Bacon” the person at the other end of the line hears “Cbdpo”), is the subject of the field known as information theory. And one of the oldest and most important results in this field was given by Claude Shannon in his article A Mathematical Theory of Communication, published in 1948 (amusingly, an expanded version of this article appeared, with coauthor Warren Weaver, as a book in 1963 titled “The Mathematical Theory of Communication”. From “A” to “The” in fifteen years: the speed of scientific dogmafication!) One of the things that Shannon did in this paper was to define the capacity of a communication channel. The capacity, measured in bits per channel use, gives the rate at which you can noiselessly send bits down a communication line with a vanishingly small probability of failure, given repeated access to the channel. Thus a capacity of, say, 1 would indicate that you have a noiseless channel, while a capacity of, say, one half would indicate that if you want to send n bits down the line you need 2n channel uses (note this statement holds in the limit of large n if you want the probability of failure to “go away.”)
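As it happens, our flaky cell phone above is essentially a classical erasure channel, and for a couple of textbook channels Shannon’s capacity has simple closed forms (these formulas are standard information theory, not anything derived in this post): the binary erasure channel with erasure probability p has capacity 1 − p, and the binary symmetric channel that flips each bit with probability p has capacity 1 − H(p), where H is the binary entropy. A small sketch:

```python
from math import log2

def binary_entropy(p: float) -> float:
    """H(p) = -p log2(p) - (1-p) log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def erasure_capacity(p_erase: float) -> float:
    """Shannon capacity of a binary erasure channel, in bits per use."""
    return 1 - p_erase

def bsc_capacity(p_flip: float) -> float:
    """Shannon capacity of a binary symmetric channel, in bits per use."""
    return 1 - binary_entropy(p_flip)

# The flaky cell phone: 3/4 of everything is erased.
print(erasure_capacity(0.75))  # 0.25 bits per use: four uses per reliable bit
# A bit flipped with probability 1/2 is pure noise: zero capacity.
print(bsc_capacity(0.5))       # 0.0
```

The capacity-one-half example from the text works the same way: half a bit per use means two uses per reliable bit, in the large-n limit.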
What Shannon did was not just define the capacity of a channel, but he gave a nice simple formula for it. And what quantum information theory people have been struggling to do for a long time is to come up with the analogous notion for the situation where instead of sending classical information, you use particles that have quantum properties, and send this quantum information noiselessly down a quantum communication channel. In other words, quantum information theorists have been trying to build a “quantum Shannon theory.”
Now the story here gets complicated and long, but we don’t need to know too much about it, except that recently there has been a lot of good work on defining a quantum capacity for a channel, going back to the work of Schumacher and Nielsen, who defined a notion called coherent information way back in 1996, and culminating in some nice more recent work by Shor and Devetak (following up on some work of Lloyd) defining the quantum capacity of a channel. Okay, great, so there is a sort of “quantum Shannon theory” which defines a quantum channel capacity.
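For the curious, here is my sketch of what those definitions look like in symbols (the post doesn’t need them, and you should check the references for the precise statements). The coherent information of a channel compares what gets through to what leaks to the environment, and the Lloyd-Shor-Devetak quantum capacity is its regularized optimization:

```latex
% Coherent information of channel N on input state rho,
% where N^c is the complementary channel (the leak to the environment):
I_c(\rho, \mathcal{N}) = S\bigl(\mathcal{N}(\rho)\bigr) - S\bigl(\mathcal{N}^c(\rho)\bigr)

% The quantum capacity regularizes this over many parallel uses:
Q(\mathcal{N}) = \lim_{n \to \infty} \frac{1}{n} \max_{\rho} \, I_c\bigl(\rho, \mathcal{N}^{\otimes n}\bigr)
```

That limit over n parallel uses is exactly where the trouble starts: unlike Shannon’s classical formula, it is not known how to compute this in general, and it is why questions about combining channels are so delicate.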
Now enter onto the scene arXiv:0807.4935 by Graeme Smith and Jon Yard. One of the most basic properties of the capacity that Shannon defined for classical communication channels is that it is additive. What does this mean? Well, suppose that you have a channel which has a capacity C1 and another channel which has a capacity C2. Then you can ask: what if I use these two channels in parallel, i.e. I get to send information down each of the lines at the same time for every message? What is the capacity of this new parallel channel? The additivity of the capacity that Shannon defined tells us that the answer to this question is C1+C2: i.e. the capacities simply add. This is a nice feature of the capacity: it means that the capacity of a channel is independent, in a nice way, of what other channels you have available. By combining channels you don’t get some magic boost in the capacity: they just add up.
In particular, think about what this means for communication channels which have zero capacities: i.e. the always malfunctioning cell phone. If I have one cell phone that has zero capacity and another cell phone with zero capacity, I can’t put them together to transmit information. Sounds pretty obvious. But, uh oh, what about the quantum version of this statement? What Graeme Smith and Jon Yard claim to have shown in this paper (I’m not certain I grok the details of their argument yet, but I’m working on it!) is that the analogous statement for the quantum channel capacity isn’t true. There exist quantum channels that, when used individually, have zero capacity for transmitting quantum information, but, when used together, have a nonzero capacity for transmitting quantum information.
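In symbols, the claimed failure of additivity (what people have taken to calling “superactivation”) is simply this, where the two channels on the left are each individually useless:

```latex
Q(\mathcal{N}_1) = 0, \qquad Q(\mathcal{N}_2) = 0,
\qquad \text{but} \qquad Q(\mathcal{N}_1 \otimes \mathcal{N}_2) > 0
```

As I understand it, the construction pairs two channels whose zero capacities come from very different reasons (one is a private “Horodecki-type” channel, the other a symmetric channel), but check the paper for the actual details, since I’m still working through them myself.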
Or, putting it another way, there exist quantum cell phones which, used individually, cannot transmit quantum information noiselessly, but which, used together, can noiselessly transmit quantum information. How strange is that?
(This post brought to you by Pear, maker of the new qPhone. Get one now before the company’s wavefunction collapses onto higher prices.)

4 Replies to “When Two Zeros Are Not Zero: The Strange Lives of Quantum Cell Phones”

  1. I’m especially intrigued by their statement

    Perhaps each channel transfers some different, but complementary kind of quantum information.

    since transfer of complementary (classical) information is the way I tend to approach questions of channel capacity and the like. (The other option roughly being to show that the complementary channel, i.e. the one going to the environment, doesn’t transmit any information whatsoever.) Though I guess the erasure channel doesn’t transmit much of anything…

  2. She sells cell phones by the cellar door.
    See, the cells she sells are surely Ma Bells.
    Since she sells cell phones by the cellar door,
    Why the hell are these cells not wave-functional any more?
