From quant-ph/0606131:
How many copies are needed for state discrimination?
Authors: Aram W. Harrow, Andreas Winter
Comments: 1 page, submitted to QCMC’06, answer is O(log # of states)
O brave new quantum world!
Over at Information Processing, Steve Hsu has a post about a recent paper of his (with coauthors R. Buniy and A. Zee), hep-th/0606062: “Discreteness and the origin of probability in quantum mechanics,” which will probably be of interest to readers who follow only quant-ph (it was cross-posted to quant-ph today.)
The basic idea of the paper is as follows. In the many-worlds interpretation of quantum theory, a major challenge is to derive Born’s probability rule (which, of course, Born got wrong the first time!) One way that this can be achieved (well, sort of!) is as follows. Consider a world in which we prepare a particular quantum state and measure it in a particular basis. Now repeat this process for the same state and the same measurement. Do this an infinite number of times (more on this later…you know how I hate infinity.) Born’s probability rule predicts that the probability of a particular sequence of measurement outcomes is given by [tex]$|\langle s_1|\psi\rangle \langle s_2|\psi\rangle \cdots|^2$[/tex]. In the limit of an infinite number of measurements, the sequences whose outcome fractions are typical dominate this expression (i.e. the sequences which recover Born’s rule.) In other words, the sequences with fractions which don’t satisfy Born’s rule have vanishing norm. So what do you do? Suppose you simply exclude these “maverick” worlds from the theory. In other words, you exclude from the accessible states those with vanishing norm. (There are a bunch of things which bother me about this derivation, but let’s just go with it!)
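To make the weight counting in that last step concrete (this is just standard law-of-large-numbers bookkeeping, nothing specific to the paper): write [tex]$p_k = |\langle s_k|\psi\rangle|^2$[/tex] for the Born probability of outcome [tex]$k$[/tex]. The combined squared norm of all length-[tex]$N$[/tex] sequences in which outcome [tex]$k$[/tex] occurs [tex]$n_k$[/tex] times is the multinomial weight [tex]$\binom{N}{n_1,\ldots,n_d}\prod_k p_k^{n_k}$[/tex], and as [tex]$N \rightarrow \infty$[/tex] this weight concentrates entirely on sequences with [tex]$n_k/N \approx p_k$[/tex]; everything else (the maverick worlds) carries a total squared norm that goes to zero.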
Now the problem addressed in the paper is that, of course, the limit of an infinite number of experiments is pretty crazy. And if you cut off the number of experiments, then you run into the problem that the maverick worlds have small, but non-zero, norm. So why should you exclude them? What is suggested here is that you should exclude them because the Hilbert space of quantum theory isn’t quite right (the authors argue that this discreteness arises from considerations involving gravity.) Instead of our normal Hilbert space, the argument is that a discrete set of vectors from the Hilbert space is all that is physically possible. One picture you might have is that of a bunch of nearly equally spaced vectors distributed over the Hilbert space, with unitary evolution for a time T followed by a “snapping” to the nearest of these vectors. How does this discretized Hilbert space (isn’t calling it a discrete Hilbert space an oxymoron?) fix the problem of a finite number of measurements? Well, you now have a minimal norm on your Hilbert space. States which are too close together appear as one. In other words, you can exclude states whose norms are too small. So, if you put some discretization on your Hilbert space, you recover Born’s rule, almost, in the same manner as the infinite-limit argument worked. An interesting argument, no?
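Just to make the “evolve then snap” picture concrete, here is a toy sketch in Python (my own illustration, not the paper’s construction: the grid is just a random net of qubit states, and the snapping rule, maximum overlap, and time step are my own assumptions):

import numpy as np

rng = np.random.default_rng(0)

# A toy "discretized" qubit: a fixed finite net of allowed state vectors.
# (Random states here, purely for illustration.)
grid = [v / np.linalg.norm(v)
        for v in rng.normal(size=(200, 2)) + 1j * rng.normal(size=(200, 2))]

def snap(psi, grid):
    """Return the allowed state with the largest overlap |<g|psi>| with psi."""
    return max(grid, key=lambda g: abs(np.vdot(g, psi)))

# Unitary evolution for a time step, then snap to the nearest allowed state.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
psi = np.array([1.0, 0.0], dtype=complex)
for _ in range(10):
    psi = snap(U @ psi, grid)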
One interesting question I’ve pondered before is how to make a discretized version of Hilbert space. Take, for example, a qubit. Now you can, as was suggested above, just choose a finite set of vectors and then proceed. But what if you want to avoid the “snapping” to this grid of vectors? Well, you might then think that another way to proceed is to have a finite set of vectors and to allow only those unitary transforms which map these vectors among themselves. But when you do this, which unitary evolutions you can apply depends on the vector you are applying them to. There’s nothing particularly wrong with that, but it seems like it might lead to a form of quantum theory with some strange nonlinearities which might allow you to solve hard computational problems (and thus send Scott Aaronson’s mind spinning!) So what if you don’t want to mess with this linear structure? Well, then you’re going to have to require that the discrete set of vectors be transformed into each other by a representation of some finite group. Does this really limit us?
Well, it certainly does! Take, for instance, our good friend the qubit. An interesting question to ask is: what are the finite subgroups of the special unitary transforms on a qubit, SU(2)? It turns out that there aren’t very many of these finite subgroups. One finite subgroup is simply the cyclic group of order n. Since we are dealing with SU(2) and not SO(3), this is just the set of rotations in SU(2) about a fixed axis by multiples of the angle [tex]$\frac{4\pi}{n}$[/tex]. Another subgroup is the binary dihedral group, which is just like the cyclic group, but now you add an element which flips the axis about which the cyclic group rotates. Finally there are the binary tetrahedral group, the binary octahedral group, and the binary icosahedral group, which are the SU(2) versions of the symmetries of the correspondingly named Platonic solids. And that’s it! Those are all the finite subgroups of SU(2)! (There is a very cool relationship between these subgroups and ADE Dynkin diagrams. For more info see John Baez’s week 230.) So if we require that we only have a discrete set of states and that the group structure of unitary operations be maintained, this limits us in ways that are very drastic (i.e. that conflict with existing experiments.)
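If you want to play with this, here is a little Python sketch (numpy assumed; the brute-force closure routine and the choice of generators are mine, just for illustration). It checks the counting above: a rotation by [tex]$\frac{4\pi}{n}$[/tex] generates a cyclic group of order n, and the determinant-one rescalings of the Hadamard and phase gates generate a group of 48 elements, the binary octahedral group.

import numpy as np
from itertools import product

def multiplicative_closure(generators, max_size=1000):
    """Close a set of unitary matrices under multiplication (brute force)."""
    def key(U):
        # Round entries to deduplicate numerically identical group elements.
        return tuple(np.round(U, 6).flatten())
    group = {key(g): g for g in generators}
    frontier = list(generators)
    while frontier:
        new = []
        for a, b in product(frontier, list(group.values())):
            for c in (a @ b, b @ a):
                k = key(c)
                if k not in group:
                    group[k] = c
                    new.append(c)
        frontier = new
        if len(group) > max_size:
            raise RuntimeError("closure is not a small finite group")
    return list(group.values())

# Cyclic subgroup: rotation about the z axis by 4*pi/n has order exactly n.
n = 5
Rz = np.diag([np.exp(-2j * np.pi / n), np.exp(2j * np.pi / n)])
print(len(multiplicative_closure([Rz])))  # prints 5

# Binary octahedral group: SU(2) lifts (determinant +1) of Hadamard and phase.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.diag([1, 1j])
print(len(multiplicative_closure([1j * H, np.exp(-1j * np.pi / 4) * S])))  # prints 48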
So if one is going to take a discrete set of vectors and use only them for quantum theory, it seems that one has to modify the theory in a way which doesn’t preserve the unitary evolution. Can one do this without drastic consequences? I think that is an interesting question.
Posted, without comment:
quant-ph/0606017:
Title: Can measuring entanglement be easy?
Authors: S.J. van Enk
Comments: No
One of the cool talks at the Northwest APS meeting I attended a little over a week ago was a talk by Mark Beck from Whitman College on implementing Hardy’s test of local realism in an undergraduate lab. I sure wish I’d had this lab when I was an undergraduate (as it is, what I most remember is a lab in which we made a high-temperature superconductor…mostly, unfortunately, because of the ungodly amount of time we spent trying to get the stuff to superconduct!) There aren’t many labs where you can get your hands on an experiment related to quantum information processing, are there? In fact, the only other one I know of is the Junior Lab at MIT, where they do an NMR quantum computing experiment. Anyone know of any other undergraduate labs which are relevant to quantum computing?
For those of you local to Seattle, Scott Aaronson, keeper of the complexity zoo, will be giving a talk this Thursday:
Event: Colloquium, 05/25/2006 11:30 am, Gates Commons, CSE 691
Speaker: Scott Aaronson (University of Waterloo)
Talk: The Learnability of Quantum States
Abstract: Using ideas from computational learning theory, I’ll show
that “for most practical purposes,” one can learn a quantum state
using a number of measurements that grows only linearly with the
number of qubits n. By contrast, traditional quantum state tomography
requires a number of measurements that grows exponentially with n.
I’ll then give two applications of this learning theorem to quantum
computing: first, the use of trusted classical advice to verify
untrusted quantum advice, and second, a new simulation of quantum
one-way protocols.
Even though there exists an algorithm to “learn” a quantum state after
a small number of measurements, that algorithm might not be efficient
computationally. As time permits, I’ll discuss ongoing work on how to
exploit that fact to copy-protect and obfuscate quantum software.
From an article entitled “Bright Outlook for Queensland Nanotechnology Alliance”:
…their efforts are focused on making a significant impact into solving many of the holy grails, such as clean energy, personalised medicine and quantum computing.
Well, I’m not sure if quantum computing is a holy grail, but I’m pretty sure quantum computing researchers all know and laugh at “The Holy Grail.”
Okay, we will officially call this week “D-wave week.” According to an article here (update: for a link that may work for all browsers, see here), D-wave just secured another round of financing, to the tune of $14,000,000.
The KITP is hosting a conference on Topological Phases and Quantum Computation which starts today. The talks should be appearing online this week here. Wish I could be there, but hopefully most of the talks will appear online, allowing the teaching-bound like me to follow the conference from afar.
Lately I have been pondering how I would teach quantum computing to computer science undergraduates. Imagine the class is made up of juniors and seniors majoring in computer science or computer engineering. How would you teach such a class?
My first thought is that I would attempt to structure the class more around programming and simulating quantum circuits than around the more abstract course that one normally sees in quantum computing. Certainly this would have the advantage of playing to the students’ existing strengths (leaving out, unfortunately, the theory students. But if they are really destined to be theory students they should try to take the graduate-level course, no?) It would also give them hands-on access to the abstract ideas of quantum computing. What I’d really like to use for this would be something akin to a hardware description language combined with a simulation and synthesis tool. It would also be awesome if a CAD tool could be developed. However, I haven’t seen anything quite resembling this in the quantum simulation world, though I’d certainly be happy to find out that such programs exist.
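Even without such a tool, the simulation side is easy to get students started on. Here is a bare-bones statevector simulator in Python (my own sketch, assuming numpy; the gate and function names are just illustrative), the sort of thing a first assignment could have students write and then extend with more gates, measurement, and noise:

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def apply_single(state, gate, target, n_qubits):
    """Apply a one-qubit gate to qubit `target` (qubit 0 is the most significant bit)."""
    op = np.array([[1.0]])
    for q in range(n_qubits):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

def apply_cnot(state, control, target, n_qubits):
    """Apply a CNOT by permuting basis amplitudes directly."""
    new = state.copy()
    for i in range(len(state)):
        if (i >> (n_qubits - 1 - control)) & 1:
            new[i] = state[i ^ (1 << (n_qubits - 1 - target))]
    return new

# Prepare the Bell state (|00> + |11>)/sqrt(2) on two qubits.
state = np.zeros(4, dtype=complex)
state[0] = 1.0
state = apply_single(state, H, target=0, n_qubits=2)
state = apply_cnot(state, control=0, target=1, n_qubits=2)
print(np.round(state, 3))  # amplitude 0.707 on |00> and |11>, zero elsewhere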
Which would you rather have, breadth or depth? Suppose I give you the choice between the following two directions for experimental research in quantum computing over the next five years: either a few (say ten to thirty) qubits with extremely low decoherence and high control precision (perhaps even high enough to perform quantum error correction), or a huge number of qubits (say hundreds to thousands to millions) with fairly high decoherence and poor control. Assume that in both cases fabrication can be done with a fairly high degree of sophistication. Of these two options, which would you prefer to see in the near future?