Swimming in a Sea of Qubits

Yesterday I was at Bell Labs for a one-day meeting on quantum computing. Bell Where, the young ones ask? You know, the place where the transistor was invented! (Can you name the three who won the Nobel Prize in Physics for the invention of the transistor? How many people that you meet walking down the street can name any of these three?)
Amazingly, this meeting was covered by local media: see here. Any investors might be interested in the last few paragraphs of the article:

At least one audience member was impatient.
Jan Andrew Buck heads Princeton Group International, which backs biotech ventures. He said he is itching for a bare-bones quantum computer for plotting complicated routes and schedules.
“I think I can get a squeaky, scratchy quantum computer to market in two or three years,” Buck said. All he needs, he said, are investors with deep pockets and short deadlines.

Now I consider myself an optimist, but I think Jan Andrew Buck has just out-optimismed even my cheery outlook.
Back to the topic at hand: the meeting was fun! The first two talks, by David DiVincenzo and Isaac Chuang, were interesting in that they both made arguments about whether the "sea of qubits" type of architecture for a quantum computer is really feasible. Loosely, the idea of a sea of qubits is to have, say, a dense two-dimensional grid of qubits which you control with nearest-neighbor interactions. One difficulty with this approach is that if you have a dense sea of qubits, it is hard to imagine how to get all of the elements you need to control these qubits from a classical controller outside the quantum computer to each individual qubit. This is particularly worrisome for some solid state qubits, where high density is often needed in order to get controllable, strong two-qubit interactions (like, say, in some quantum dot approaches), but it applies to many other types of implementations as well.
David DiVincenzo talked about work he performed with Barbara Terhal and Krysta Svore on the threshold for two-dimensional spatial layouts (see quant-ph/0604090). Because the cost of this spatial layout turned out not to be huge, and in light of his work on a particular superconducting qubit implementation at IBM, David reconsidered in his talk whether the sea of qubits is really that bad of a problem. He concluded by discussing how techniques developed for making three-dimensional circuitry could perhaps be used to overcome the sea of qubits problem. Ike, on the other hand, talked about the issues of designing a quantum computer made out of ions, where the problem of getting your classical control in may not be as severe (other talks focused on the MEMS mirror arrays which will be used to control, in parallel, many thousands of ion trap qubits). Ike was one of the original people to point out the difficulties with the "sea of qubits" idea, and I can't help but think that the reason he started working on ion traps and not solid state implementations was in some part motivated by this problem.
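To get a feel for one of the costs a strictly nearest-neighbor sea of qubits imposes, here is a rough back-of-the-envelope sketch in Python. The grid sizes and the uniformly-random-gate model are my own illustrative assumptions, not numbers from the talks or from quant-ph/0604090: the point is just that a two-qubit gate between distant qubits on the grid has to be preceded by a chain of SWAPs, and the average number of SWAPs grows with the linear size of the grid.

import random

def swaps_needed(a, b):
    # SWAPs required to bring qubit a adjacent to qubit b on a 2D grid:
    # Manhattan distance minus one (zero if they are already neighbors).
    (r1, c1), (r2, c2) = a, b
    return max(abs(r1 - r2) + abs(c1 - c2) - 1, 0)

def average_swap_overhead(side, samples=20000):
    # Average SWAP count for a two-qubit gate between uniformly random
    # sites on a side x side grid (Monte Carlo estimate).
    total = 0
    for _ in range(samples):
        a = (random.randrange(side), random.randrange(side))
        b = (random.randrange(side), random.randrange(side))
        total += swaps_needed(a, b)
    return total / samples

for side in (4, 8, 16, 32):
    print(f"{side}x{side} grid: ~{average_swap_overhead(side):.1f} SWAPs per random two-qubit gate")

The overhead grows only like the side length of the grid, which is the sense in which the spatial-layout cost in the threshold work turned out to be "not huge": it eats into the threshold but does not kill fault tolerance.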
To me, the debate about what the architecture for a future quantum computer will look like is very interesting, mostly because this debate has to do, I think, with quantum computing people taking very seriously what "scalable" means. I personally can't stand the word "scalable." Why? Well, mostly because it is put in front of every single proposal in which the authors can reasonably imagine some far-fetched way to scale their proposed quantum system up. Call me jaded. But what is fun to watch is that, now that there is serious discussion of many-qubit quantum computers, the real difficulties of scalability are beginning to emerge. Scalability is about money. Scalability is about ease. Scalability is about architecture. Which physical implementation will scale up to our future quantum computer, and what will the architecture of this computer look like? Depending on the day you ask me, you'll get a different answer. Which, I suppose, is one of the reasons why I remain but a lowly theorist…too scared to jump on the bandwagon I trust the most.

8 Replies to “Swimming in a Sea of Qubits”

  1. Okay, to be completely immodest for a moment, check out Sec. 7.2 (in the Quantum Multicomputer chapter) of my freshly-minted Ph.D. thesis at quant-ph/0607065, titled "An Engineer's Definition of Scalability". I use the following definition (with a few pages of explanation):
    Above all, it must be possible, physically and economically, to grow the system through the region of interest. Addition of physical resources must raise the performance of the system by a useful amount (for all important metrics of performance, such as calculation speed or storage capacity), without excessive increases in negative features (e.g., failure probability).
    Nobody would call a technology scalable, for example, if each qubit cost a million dollars, if each gate covered an optical bench two meters by four meters, or if the clock speed were so slow that it took a century or more to run algorithms on interesting problem sizes. (A trivial back-of-the-envelope version of that last point is sketched below.)
    Turn up your nose at such a pedestrian definition if you wish, but I’m with Buck — I want to build a minimally useful machine ASAP.
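    Here is the back-of-the-envelope sketch promised above, in Python. The gate counts and gate times are purely hypothetical placeholders, not estimates for any real device or any particular algorithm; the only point is that wall-clock time is gate count times time per gate.

    SECONDS_PER_YEAR = 3.156e7

    def runtime_years(logical_gates, seconds_per_gate):
        # Wall-clock time for a serial run: (number of gates) x (time per gate).
        return logical_gates * seconds_per_gate / SECONDS_PER_YEAR

    # Hypothetical placeholder numbers, chosen only to show how quickly "slow" adds up.
    for gates in (1e9, 1e12):
        for seconds_per_gate in (1e-6, 1e-3, 1.0):
            print(f"{gates:.0e} gates at {seconds_per_gate:.0e} s/gate: "
                  f"{runtime_years(gates, seconds_per_gate):.2g} years")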

  2. P.S. The genesis of that definition was a working group on architecture at the Kochi Summer School, Sept. 2005, organized by Yoshi Yamamoto and friends. Ike was there, and gave us excellent lectures on fault tolerance. Personally, I learned an incredible amount in two weeks.

  3. First, a question (and pardon any ignorance – I’m not all that familiar with QC architecture): what is the link between entropy (both Shannon and otherwise – and that’s another discussion entirely) and scalability in quantum computers? It seems to me a link ought to exist – entropy playing the role of limiting or guiding factor.
    As for the Bell Labs guys, even fewer people are likely to know that John Bardeen is the only person to have won the Nobel Prize in Physics twice (there are others who have won two prizes, but not both in physics).

  4. Ah, yes, those nasty “technical considerations.” Incidentally, physics is my third career – my first was engineering. And now I’m a theorist…
    Anyway, what I was thinking was this: in a standard classical sense, statistical mechanics tells us that the multiplicity function becomes sharper for larger systems (and entropy is really just a more convenient way of measuring multiplicity). Quantum mechanically we usually look at density operators, but depending on the type of ensemble, this can still be represented as a function of the number of particles present in a given population (essentially it's not much of a stretch from the classical interpretation) – hence the scalability relation. (A quick numerical illustration of the sharpening is below.)
    And keep in mind that not only is physics my third career, but quantum information is my third sub-field within physics, hence I’m playing a little catch-up, particularly on the architecture end of things. 🙂
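    Here is the quick numerical illustration mentioned above, in Python. It is just N independent two-state spins with a fair-coin multiplicity, a toy model not tied to any particular qubit architecture, and the 1% window and the values of N are arbitrary choices: the fraction of the multiplicity sitting within one percent of the peak grows toward one as N grows, while the relative width falls off like 1/sqrt(N).

    from math import comb, sqrt

    def fraction_near_peak(N, window=0.01):
        # Fraction of all 2**N spin configurations whose "magnetization" lies
        # within +/- window*N of the peak of the binomial multiplicity function.
        lo, hi = int(N / 2 - window * N), int(N / 2 + window * N)
        near = sum(comb(N, k) for k in range(lo, hi + 1))
        return near / 2 ** N

    for N in (100, 1000, 10000):
        print(f"N={N}: fraction within 1% of peak = {fraction_near_peak(N):.3f}, "
              f"relative width ~ 1/(2*sqrt(N)) = {1 / (2 * sqrt(N)):.4f}")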

    There is certainly a connection between error correction and entropy. In particular, the entropy created by the environment (or by a lack of efficient control of your system) on your quantum computer is, during quantum error correction, dumped back into the environment. This can be made quantitative: I recommend section 12.4 in Nielsen and Chuang. (A toy version of the bookkeeping is sketched below.)
    But, and this is part of my point about scalability: for too long, scalability has simply meant that fault-tolerant quantum computation is "possible." I think the issue is a lot more like Rod phrases it: it has to do with technical considerations.
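    Here is the toy sketch, in Python: the three-qubit bit-flip code under independent bit flips with probability p, a cartoon of the bookkeeping rather than the general statement in Nielsen and Chuang. The syndrome measurement carries off nearly all of the entropy the channel dumps onto the three physical qubits, leaving much less behind on the encoded qubit, and the smaller p is, the more complete the cleanup.

    from math import log2

    def H(probs):
        # Shannon entropy, in bits, of a probability distribution.
        return -sum(q * log2(q) for q in probs if q > 0)

    def bitflip_code_bookkeeping(p):
        # Entropy the bit-flip channel dumps onto the three physical qubits:
        dumped_in = 3 * H([p, 1 - p])
        # After syndrome measurement and majority-vote correction, the logical
        # qubit is flipped only if two or more physical qubits flipped:
        p_logical = 3 * p**2 * (1 - p) + p**3
        left_over = H([p_logical, 1 - p_logical])
        # Distribution over the four syndrome outcomes; this is what gets
        # measured and discarded, i.e. dumped back into the environment:
        syndrome = [(1 - p)**3 + p**3] + 3 * [p * (1 - p)]
        return dumped_in, left_over, H(syndrome)

    for p in (0.01, 0.05, 0.1):
        dumped_in, left_over, carried_off = bitflip_code_bookkeeping(p)
        print(f"p={p}: in={dumped_in:.3f} bits, left on logical qubit={left_over:.4f}, "
              f"carried off by syndrome={carried_off:.3f}")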

  6. This is an embarrassing, cruel fact I'm about to relay, but when I'm asked who the Nobel Prize for the invention of the transistor went to, I always think of these three people: John Bardeen, the-jerk-who-isn't-Bardeen, and some-guy-who-also-isn't-Bardeen.
    Shameful, I know.
