Are we ready for Venture Qapital?

From CNET, and via Matt Leifer, comes news of a new venture capital firm known as the Quantum Wave Fund. According to their website:

Quantum Wave Fund is a venture capital firm focused on seeking out early stage private companies with breakthrough quantum technology. Our mission is to help these companies capitalize on their opportunities and provide a platform for our investors to participate in the quantum technology wave.

The CNET article clarifies that “quantum technology” means “Security, new measurement devices, and new materials,” which seems about right for what we can expect to meaningfully commercialize in the near term. In fact, two companies (ID Quantique and MagiQ) are already doing so. However, I think it is significant that ID Quantique’s first listed product uses AES-256 (though it can be upgraded to use QKD), and that MagiQ’s product list first describes technologies like waveform generation and single-photon detection before advertising its QKD technology at the bottom of the page.
It’ll be interesting to see where this goes. Already it has exposed several areas of my own ignorance. For example, I learned from the internet that VCs typically want to get their money back in 10-12 years, which gives an estimate of how near-term a technology must be to attract investment. Another area about which I know little, but which is harder to google, is exactly what sort of commercial applications exist for the many technologies related to quantum information, such as precision measurement and timing. This question is, I think, going to be an increasingly important one for all of us.

Throwing cold water on the Quantum Internet

There has been a lot of loose talk lately about a coming “Quantum Internet”. I was asked about it recently by a journalist and gave him this curmudgeonly answer, hoping to redirect some of the naive enthusiasm:
…First let me remark that “quantum internet” has become a bit of a buzzword that can lead to an inaccurate idea of the likely role of quantum information in a future global information infrastructure.  Although quantum concepts such as qubit and entanglement have revolutionized our understanding of the nature of information, I believe that the Internet will remain almost entirely classical, with quantum communication and computation being used for a few special purposes, where the unique capabilities of quantum information are needed.  For other tasks, where coherent control of the quantum state is not needed, classical processing will suffice and will remain cheaper, faster, and more reliable for the foreseeable future.  Of course there is a chance that quantum computers and communications links may some day become so easy to build that they are widely used for general purpose computing and communication, but I think it highly unlikely.
Would the quantum internet replace the classical one, or would the two somehow coexist?
As remarked above, I think the two would coexist, with the Internet remaining mostly classical. Quantum communication will be used for special purposes, like sharing cryptographic keys, and quantum computing will be used in those few situations where it gives a significant speed advantage (factoring large numbers, some kinds of search, and the simulation of quantum systems), or for the processing of inherently quantum signals (say from a physics experiment).
Would quantum search engines require qubits transmitted between the user’s computer and the web searcher’s host? Or would they simply use a quantum computer performing the search on the host machine, which could then return its findings classically?
It’s not clear that quantum techniques would help search engines, either in transmitting the data to the search engine, or in performing the search itself. Grover’s algorithm (where coherent quantum searching gives a quadratic speedup over classical searching) is less applicable to the large databases on which search engines operate, than to problems like the traveling salesman problem, where the search takes place not over a physical database, but over an exponentially large space of virtual possibilities determined by a small amount of physical data.
On the other hand, quantum techniques could play an important supporting role, not only for search engines but other Internet applications, by helping authenticate and encrypt classical communications, thereby making the Internet more secure. And as I said earlier, dedicated quantum computers could be used for certain classically-hard problems like factoring, searches over virtual spaces, simulating quantum systems, and processing quantum data.
When we talk about quantum channels do we mean a quantum communication link down which qubits can be sent and which prevents them decohering, or are these channels always an entangled link? …
A quantum channel of the sort you describe is needed, both to transmit quantum signals  and to share entanglement. After entanglement has been shared, if one has a quantum memory, it can be stored and used later in combination with a classical channel to transmit qubits.  This technique is called quantum teleportation (despite this name, for which I am to blame, quantum teleportation cannot be used for transporting material objects).
But could we ever hope for quantum communication in which no wires are needed – but entanglement handles everything?
The most common misconception about entanglement is that it can be used to communicate—transmit information from a sender to a receiver—perhaps even instantaneously. In fact it cannot communicate at all, except when assisted by a classical or quantum channel, neither of which communicate faster than the speed of light. So a future Internet will need wires, radio links, optical fibers, or other kinds of communications links, mostly classical, but also including a few quantum channels.
How soon before the quantum internet could arrive?
I don’t think there will ever be an all-quantum or mostly-quantum internet. Quantum cryptographic systems are already in use in a few places, and I think can fairly be said to have proven potential for improving cybersecurity. Within a few decades I think there will be practical large-scale quantum computers, which will be used to solve some problems intractable on any present or foreseeable classical computer, but they will not replace classical computers for most problems. I think the Internet as a whole will continue to consist mostly of classical computers, communications links, and data storage devices.
Given that the existing classical Internet is not going away, what sort of global quantum infrastructure can we expect, and what would it be used for?  Quantum cryptographic key distribution, the most mature quantum information application, is already deployed over short distances today (typically < 100 km).   Planned experiments between ground stations and satellites in low earth orbit promise to increase this range several fold.  The next and more important stage, which depends on further progress in quantum memory and error correction,  will probably be the development of a network of quantum repeaters, allowing entanglement to be generated between any two nodes in the network, and, more importantly, stockpiled and stored until needed.  Aside from its benefits for cybersecurity (allowing quantum-generated cryptographic keys to be shared  between any two nodes without having to trust the intermediate nodes) such a globe-spanning quantum repeater network will have important scientific applications, for example allowing coherent quantum measurements to be made on astronomical signals over intercontinental distances.  Still later, one can expect full scale quantum computers to be developed and attached to the repeater network.  We would then finally have achieved a capacity for fully general processing of quantum information, both locally and globally—an expensive, low-bandwidth quantum internet if you will—to be used in conjunction with the cheap high-bandwidth classical Internet when the unique capabilities of quantum information processing are needed.
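To put a number on the quadratic Grover speedup mentioned in the interview above, here is a minimal statevector sketch in Python/numpy (my own illustration, not part of the interview). It represents the “database” as an array of amplitudes, applies the oracle and diffusion steps directly, and counts oracle queries:

```python
import numpy as np

def grover_search(n_items, marked):
    """Statevector simulation of Grover search over n_items entries.

    Returns (oracle_queries, probability of measuring the marked item).
    Classically you'd expect ~n_items/2 queries on average; Grover needs
    only about (pi/4) * sqrt(n_items).
    """
    # Start in the uniform superposition over all database indices.
    state = np.full(n_items, 1.0 / np.sqrt(n_items))
    queries = int(round(np.pi / 4 * np.sqrt(n_items)))
    for _ in range(queries):
        state[marked] *= -1.0               # oracle: flip the marked amplitude
        state = 2.0 * state.mean() - state  # diffusion: reflect about the mean
    return queries, float(state[marked] ** 2)

queries, prob = grover_search(1024, marked=42)
print(queries, round(prob, 3))  # 25 oracle calls vs ~512 expected classically
```

For 1024 items the marked entry is found with near-certainty after just 25 oracle calls. Note what the code also makes plain: the whole database must live in superposition, which is exactly why the speedup matters more for exponentially large virtual search spaces than for a disk-bound web index.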
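The teleportation protocol described above — a shared Bell pair plus two classical bits — is compact enough to verify in a few lines of numpy (again my own sketch, not from the interview). Alice entangles the unknown qubit with her half of the Bell pair and measures; Bob’s X/Z corrections, conditioned on her two classical bits, recover the state in every measurement branch:

```python
import numpy as np

# Single-qubit gates.
I2 = np.eye(2)
H = np.array([[1.0, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0.0, 1], [1, 0]])
Z = np.array([[1.0, 0], [0, -1]])
# CNOT with the control on the more-significant qubit of the pair.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def teleport(psi):
    """Teleport |psi> from qubit 0 to qubit 2 using a shared Bell pair.

    Returns Bob's (qubit 2) state for each of Alice's four possible
    measurement outcomes, after his conditional X/Z corrections.
    """
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
    state = np.kron(psi, bell)                  # qubit 0 = psi, qubits 1&2 = Bell pair
    # Alice: CNOT from qubit 0 onto qubit 1, then Hadamard on qubit 0.
    state = np.kron(CNOT, I2) @ state
    state = np.kron(H, np.kron(I2, I2)) @ state
    # Rows index Alice's two measured bits (m0 m1); columns are Bob's qubit.
    branches = state.reshape(4, 2)
    results = []
    for m in range(4):
        m0, m1 = m >> 1, m & 1
        bob = branches[m]
        if m1:            # classical bit m1 arrives -> Bob applies X
            bob = X @ bob
        if m0:            # classical bit m0 arrives -> Bob applies Z
            bob = Z @ bob
        results.append(bob / np.linalg.norm(bob))
    return results

psi = np.array([0.6, 0.8])
for bob in teleport(psi):
    assert np.allclose(bob, psi)  # Bob recovers |psi> in every branch
print("teleported")
```

Note that before Alice’s two classical bits arrive, Bob’s four branches are all equally likely and tell him nothing — which is precisely the point made above about entanglement being unable to communicate on its own, or faster than light.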

Now that's what I call self-correcting

credit: University of Illinois

Apparently this is the year for breakthroughs in self-correcting computer hardware. After hearing about Jeongwan Haah’s new self-correcting topological quantum memory at QIP, I just learned, via BBC News, about a new type of (classical) self-correcting circuit: one which heals itself when one of its wires cracks! The full paper can be found here, and it is mostly quite readable. The basic idea is to start with a standard classical wire made of (for example) gold. The researchers sprinkled the wire with tiny capsules filled with a metal alloy (Ga-In) which is liquid at room temperature and has high conductivity. Then they bent the circuit board until it cracked, breaking the wire, and hence the circuit. Within milliseconds, the capsules also broke, the cracks filled with the liquid metal, and conductivity was restored. Self-correcting, indeed!
One thing I didn’t understand is how the liquid metal stays in the cracks. I guess that at the scale they are working at, the surface tension alone is sufficient to keep the liquid metal in place?

dabacon.job = "Software Engineer";

Some news for the remaining five readers of this blog (hi mom!): After over a decade practicing the fine art of quantum computing theorizing, I will be leaving my position in the ivory (okay, you caught me, really it’s brick!) tower of the University of Washington to take a position as a software engineer at Google starting in the middle of June. That’s right…the Quantum Pontiff has decohered! **groan** Worst quantum-to-classical joke ever!
Of course this is a major change, and not one that I have made lightly. There are many things I will miss about quantum computing, and among them are all of the people in the extended quantum computing community whom I consider not just colleagues but also good friends. I’ve certainly had a blast, and the only things I regret in this first career are things like, oh, not finding an efficient quantum algorithm for graph isomorphism. But hey, who doesn’t wake up every morning regretting not making progress on graph isomorphism? Who!?!? More seriously, for anyone who is considering joining quantum computing, please know that it is an extremely positive field, with funny, amazingly brilliant, and just plain fun people everywhere you look. It is only a matter of time before a large quantum computer is built, and who knows, maybe I’ll see all of you quantum computing people again in a decade when you need to hire a classical-to-quantum software engineer!
Of course, I’m also completely and totally stoked for the new opportunity that working at Google will provide (and no, I won’t be doing quantum computing work in my new job.) There will definitely be much learning and hard work ahead for me, but it is exactly those things that I’m looking forward to. Google has had a tremendous impact on the world, and I am very much looking forward to being involved in Google’s great forward march of technology.
So, onwards and upwards my friends! And thanks for all of the fish!

Katamari Damacy Any Website

If you know what Katamari Damacy is, then you will love http://kathack.com.
(The script was created by University of Washington students Alex Leone, David Nufer, and David Truong for the 2011 Yahoo HackU contest. See, dear physicists, the benefits of living in a computer science department 🙂 )

Book: The Myths of Innovation

Last week I picked up a copy of The Myths of Innovation by Scott Berkun. It’s a short little book, clocking in at 256 pages, paperback. The subject is, well, read the damn title of the book, silly! Berkun picks apart the many myths that exist around innovation: epiphany, lone inventors, and many of the stories we tell ourselves after the fact about the messy process of innovation. It’s probably fair to say that none of the insights provided by Berkun is all that shocking, but with them collected in one place you really get the point that we tell ourselves a lot of funny stories about innovation. My first thought upon reading the book was “oh, this book is for curmudgeons!” But upon reflection, perhaps it is exactly the opposite: curmudgeons will already know many of the myths and be curmudgeonly about them; it is the non-curmudgeonly among you who need to read the book 🙂
But one point that Berkun makes is something I heartily concur with: that laughter can be a sign that innovation is occurring (dear commenter who is about to comment on the causal structure of this claim, please reread this sentence.) As a grad student in Berkeley I participated in a 24-hour puzzle scavenger hunt around nearly all of the SF Bay Area. At each new location a puzzle/brainteaser would be given whose solution indicated the next location in the hunt. At many of these locations we would start working on the puzzle and someone would suggest something really crazy about it: “hmmm, I bet this has something to do with semaphore,” because, well, the chess board colors are semaphore colors. And we would all laugh. Then someone would think to actually check the idea that we all laughed about. And inevitably it would be the key to solving the damn puzzle. After a few stops we noticed this, and so anytime someone said something we would laugh at, we’d have to immediately follow up on the idea 🙂 But this makes complete sense: insight or innovation occurs when we are, by definition, pushing the limits of what is acceptable. And laughter is often our best “defense” in these situations. Further, laughter has a strong improv component: the structure of what is funny requires you to accept the craziness behind the joke and run with it. Who knows where a joke may take you (as opposed to this paragraph, which is going nowhere, and is about to end.)
Finally I wish every reviewer of papers and grants would read this book and especially the reviewers who said one of my grant applications was just too speculative for the committee’s taste 😉
And a note to myself for when I get a bad review about something I really think is the bee’s knees: reread this book.

De Took Er DataBs Jrbs!

Over at Daily Speculations, Alan Corwin writes about database programming jobs that will never return. The gist of Alan’s piece is that database tools are now so turn-key and so easy to use that those who were trained to build their own database code by hand are unlikely to see those jobs return. He ends his article by noting: “For my friends in the programming community, it means that there are hard times ahead.”
Turn the page.
Here is a report from UCSD on “Hot Degrees for College Graduates 2010.” 3 of the top 5 are computer science related, and number 3 is “Data Mining.”
Now I know that database programming does not equal data mining. But it is interesting to contrast these two bits of data (*ahem*), especially given the dire prediction at the end of Alan Corwin’s article. Beyond my tinkering with iPhone apps, simulations for my research, and scirate, I’m definitely not a professional programmer. But I am surrounded by students who go on to be professional programmers, many of them immensely successful (as witnessed by the alumni I have met.) And when I talk to my CS students about job prospects, they are far from doom and gloom. So how to reconcile these two views?
Well, I think what is occurring here is simply that those who view themselves as the set of tools and languages they use to get their jobs done misunderstand what the role of a programmer should be. There are many variations on this theme, but one place to find a view of the programmer as something other than a person defined by their skill set is The Programmers’ Stone. And indeed, in this respect, I think a good CS degree resembles a good physics degree. Most people who come out of physics programs don’t list on their resume: “Expert in E&M, quantum theory, and statistical physics.” The goal of a good physics program is not to teach you the facts and figures of physics (which are, anyway, easily memorized), but to teach you how to solve new problems in physics. For computer science this will be even more severe, as it is pretty much guaranteed that the tools you use today will change over the next few years.
So doom and gloom for programmers? Only time will tell, of course, but I suspect this answer is a strong function of what kind of programmer you are. And by kind I don’t mean a prefix like “Java” or “C++”.
(And yes, I realize that this is an elitist position, but I just find the myth of the commodity programming job an annoying misrepresentation of why you should get a degree in computer science.)
Update: more here.

Steve Ballmer Talk at UW March 4, 2010

Today Microsoft CEO Steve Ballmer spoke at the University of Washington in the Microsoft Atrium of the Computer Science & Engineering department’s Paul Allen Center. As you can tell from that first sentence, UW and Microsoft have long had very tight connections. Indeed, perhaps the smartest thing the UW ever did was, upon catching two kids using its computers, not to call the police but to end up giving them access to those computers. I like to think that all the benefit$ UW has gotten from Microsoft are a great big karmic kickback for the enlightened sense of justice dished out by the UW.
Todd Bishop from Tech Flash provides good notes on what was in Ballmer’s talk. Ballmer was, as I’d heard, entertaining and loud. Our atrium is six stories high, with walkways overlooking it which were all packed: a “hanging room only” crowd, as Ballmer called it. The subject of his talk was “cloud computing,” which makes about 25 percent of people roll their eyes, 25 percent get excited, and the remaining 50 percent look up at the sky and wonder where the computer is. His view was *ahem* the view of cloud computing from a high altitude: what it can be, could be, and should be. Microsoft, Ballmer claimed, has 70 percent of its 40K+ workforce somehow involved in the cloud, and that number will reach 90 percent soon. This seems crazy high to me, but reading between the lines, what it really said to me is that Microsoft has *ahem* inhaled the cloud and is pushing hard on the model of cloud computing.
But what I found most interesting was the contrast between Ballmer and Larry Ellison. If you haven’t seen Ellison’s rant on cloud computing, here it is:

Ellison belittles cloud computing, and rightly points out that in some sense cloud computing has been around for a long time. Ballmer, in his talk, said nearly the same thing. Paraphrasing, he said something like “you could call the original internet back in 1969 the cloud.” He also said something to the effect that the word “cloud” may only have a short lifespan as a word describing this new technology. But what I found interesting was that Ballmer, while acknowledging the limits of the idea of cloud computing, also argued for a much more expansive view of this model. Indeed, as opposed to Ellison, for whom server farms equal cloud computing, Ballmer essentially argues for a version of “cloud computing” far broader than any definition you’ll find on Wikipedia. What I love about this is that it is, in some ways, a great trick to create a brand out of cloud computing. Sure, tech wags everywhere have their view of what is and is not new in the recent round of excitement about cloud computing. But the public doesn’t have any idea what this means. Love them or hate them, Microsoft is clearly pushing to move the “cloud” into something that consumers, while not understanding one iota of how it works, want. Because everything Ballmer described, every technology they demoed, was “from the cloud,” Microsoft is pushing, essentially, a branding of the cloud. (Start snark. The scientist in you will, of course, revolt at such an idea, but fear not, fellow scientist: your lack of ability to live with imprecision and incompleteness is what keeps your little area of expertise safe and sound and completely firewalled from being exploited by the useful outside world. End snark.)
So, while Ellison berates, Ballmer brands. Personally I suspect Ballmer’s got the better approach…even if Larry’s got the bigger yacht. But it will be fun to watch the race, no matter what.