Acronyms Beyond NISQ

NISQ is a term coined by John Preskill[1] circa 2018 and stands for “Noisy Intermediate-Scale Quantum”. The term is meant to describe quantum computers that are not just toy few-qubit systems, but systems of a slightly larger scale. This “slightly larger” is a bit hard to define, but roughly most people take it as what is achievable with a quantum computer that does not use error correction. Or in other words, the “intermediate” means roughly “what you can do with the natural fidelities of your qubits” (with a fudge factor for those who want to plug their nose and use error mitigation).

Now this is a blog, so I will give my opinion. And that is that the word intermediate in NISQ drives me nuts. In part because it is vague (intermediate between what?), but more because the word itself is a disaster. Intermediate comes to us from Latin, being a combination of inter, meaning “between”, and medius, meaning “middle”. But this is silly: how can there be a middle without being between? It’s like saying “middle middle”. Whenever I hear NISQ I am reminded of this bastard doubling, and I start working on a time machine to go back and make etymological corrections (good short story idea: a society of time travelers whose sole goal is to bring reason to the etymological record).

A more interesting question than my own personal hangups about word origins is what we should call what exists on the other side of intermediate. My friend Simone Severini has used the term LISQ, which stands for “Logical Intermediate-Scale Quantum”[2]. The idea, as I understand it, is to use this term to refer to the era where we start to construct the first error-corrected quantum devices. In particular, it is the place where, instead of using raw physical qubits, one uses some logical encoding to build the basic components of the quantum computer. (<high horse>Of course, all qubits are encoded; there is always a physical Hamiltonian with a much larger Hilbert space at work, and what we call a qubit subsystem is a good approximation, but it is always an approximation.</high horse>) I am excited that we are indeed seeing the ideas of quantum error correction being used, but I think this term obscures that what is important is not that a qubit uses error correction, but how well it does so.

I want to propose a different acronym. Of course, I will avoid the use of that annoying term intermediate. But more importantly, I think we should use a term that is more quantitative. In that vein I propose, in fine rationalist tradition, that we use the metric system! In particular, the quantity that is most important for a quantum computer is really the number of quantum gates or quantum instructions that one can execute before things fall apart (due to effects like decoherence, coherent imprecision, a neutral atom falling out of its trapping potential, or a cataclysmic cosmic ray shower). Today’s best-performing quantum computations have gotten signal out of their machines while reaching somewhere near the 1,000 gate/instruction level[3]. We can convert this to a metric prefix, and we get the fine “Kilo-Instruction Scale Quantum”. Today’s era is not the NISQ era, but the KISQ era.

And as we start to move up the scale by using error correction (or somehow finding natural qubits with incredible raw fidelities), we then start to enter the regime where, instead of being able to run a thousand instructions, we can run a million instructions. This will be the “Mega-Instruction Scale Quantum” era, or MISQ era. And I mean, how cool will that be? Who doesn’t love to say the word Mega? (Just don’t drawl your “e” or you might stumble into politics.) Then we can continue on in this vein:

  • 10^3 instructions = KISQ (kilo) = NISQ
  • 10^6 instructions = MISQ (mega)
  • 10^9 instructions = GISQ (giga)
  • 10^12 instructions = TISQ (tera) ← Shor’s algorithm lives around here
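
Since these labels are just thresholds on an instruction budget, here is a toy Python sketch of the mapping (my own illustration, nothing official; the cutoffs and names are simply the powers of ten from the list above):

```python
import math

# Metric-prefix acronyms and the order of magnitude of instructions
# you can execute before things fall apart (per the list above).
PREFIXES = [(3, "KISQ"), (6, "MISQ"), (9, "GISQ"), (12, "TISQ")]

def era(instructions: int) -> str:
    """Return the xISQ label for a given executable-instruction budget."""
    exponent = math.log10(instructions)
    label = "pre-KISQ"  # fewer than ~10^3 instructions: toy-system territory
    for threshold, name in PREFIXES:
        if exponent >= threshold:
            label = name
    return label

print(era(1_000))      # KISQ -- roughly where today's machines sit
print(era(2_500_000))  # MISQ
print(era(10**12))     # TISQ -- roughly where Shor's algorithm lives
```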

An objection to this approach is that I’ve replaced the word intermediate with the word instruction, and while we gain the removal of the “middle middle”, we now have the vague term instruction. The word origin of instruction is a topic for another day, but roughly it is a combination of “in” and “to pile up”, so I would argue it doesn’t have as silly an etymology as intermediate. But more to the point, an “instruction” has only an imprecise meaning for a quantum computer. Is it the number of one- and two-qubit gates? What about measurements and preparations? Why are we ignoring qubit count or gate speed or parallelism? How do we quantify it for architectures that use resource states? To define this is to fall down the rabbit hole of benchmarks of quantum computers[4]. Benchmarking is great, but it always reminds me of a saying my grandfather used to tell me: “In this traitorous world, nothing is true or false, all is according to the color of the crystal through which you look”. Every benchmark is a myopia, ignoring subtleties for the sake of quantitative precision. And yes, people will fudge any definition of instruction to fit the strengths of their quantum architecture (*ahem* algorithmic qubits *ahem*). But terms like NISQ are meant to label gross eras, and I think it’s okay to have this ambiguity.
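
To make that ambiguity concrete, here is a toy tally (the circuit numbers below are made up purely for illustration, not from any real machine) showing how the same circuit lands at rather different “instruction” counts depending on what you choose to count:

```python
# Made-up tallies for a hypothetical circuit, purely to show how much the
# "instruction count" moves depending on the counting convention.
circuit = {
    "one_qubit_gates": 600,
    "two_qubit_gates": 300,
    "measurements": 67,
    "preparations": 67,
}

conventions = {
    "two-qubit gates only": circuit["two_qubit_gates"],
    "all gates": circuit["one_qubit_gates"] + circuit["two_qubit_gates"],
    "gates + measurements + preparations": sum(circuit.values()),
}

for name, count in conventions.items():
    print(f"{name:38s} -> {count} instructions")
```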

One thing I do like about using the metric prefixes is that they highlight a particularly pressing problem. While it has been a real challenge to find NISQ algorithms that are “practical” (whatever that means[5]), an equally pressing problem is what sort of quantum algorithms will be achievable in the MISQ era. The places where we have the most confidence in the algorithmic advantage offered by quantum computers, simulation and experimental math algorithms (like factoring), lie above GISQ and probably in the TISQ region. Roughly, what we need are linear-time quantum algorithms, so that for instance sizes that are just becoming non-trivial (say a thousand), the total spacetime volume is roughly this size squared, which puts it at about a million instructions. And while there has been work on algorithms in this regime, I would not say that we confidently have algorithms we know will be practically useful in MISQ. And this MISQ/GISQ gap is extremely scary!
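
As a back-of-envelope sketch of that last point (assuming, purely for illustration, that a “linear-time” algorithm on n qubits has depth proportional to n, so its spacetime volume is roughly n × n):

```python
# Rough spacetime-volume estimate for a "linear-time" algorithm, under the
# illustrative assumption that circuit depth scales linearly with qubit count.
def spacetime_volume(n_qubits: int, depth_per_qubit: float = 1.0) -> float:
    depth = depth_per_qubit * n_qubits  # linear-time assumption
    return n_qubits * depth             # instructions ~ width x depth

n = 1_000
print(f"{n} qubits with linear depth -> ~{spacetime_volume(n):.0e} instructions")
# ~1e+06 instructions, i.e. squarely MISQ territory
```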

So long live the NISQ era! And onward and up to MISQ and beyond!

  1. “Quantum Computing in the NISQ era and beyond”, Preskill, arXiv:1801.00862
  2. “Bye NISQ. Hello LISQ?”, Simone Severini, LinkedIn post
  3. As an example, “Phase transition in Random Circuit Sampling” by the Google group (of which I’m a member) shows a signal for circuits with 32 cycles and 67 qubits, arXiv:2304.11119
  4. A prominent benchmark is Quantum Volume, defined in “Validating quantum computers using randomized model circuits” by Cross, Bishop, Sheldon, Nation, and Gambetta, arXiv:1811.12926. This is a fine benchmark, modulo that, because executives at BigCos apparently can be fooled by log versus linear scales, they really should have taken the log of the quantity they use to define the Quantum Volume.
  5. My own personal opinion is that current claims of “quantum utility” are an oversell, or what we nowadays call quantum hype, but that is a subject for a beer at a quantum beer night.

Qafblog

“We are in a box,” says me.

“Do you see some radium hooked up to a crazy steampunk device with skulls and crossbones and yellow, definitely yellow, but maybe also neon green or wavy blue?” says Qubitslets.

“No I think we put ourselves in the box,” says me. “I don’t see any radium.”

“Maybe we’re in that branch of the wave function where we’ve transmogrified ourselves into a simulation. And we’re in a box because those post-capitalists are too damn cheap to simulate us outside of a small goddamn box. Just like new Seattle townhouses. We’re in the cheap Android version of a tech bro’s afterlife. Do you see brass?”

“No brass,” says me, “but there is something growing in the corner.”

“A tunnel?” asks Qubitslets, “Remember that tunnel we were digging to the moon? I’ve been thinking about the replica symmetry breaking structure of our many tunnel passages to the moon. Maybe we should leverage quantum effects? Jesus, did I just say that? Have I been infected with Goldman Socks ibank spin 3/2s disease? At least I’m a fermion I guess. Not like those collective Marxist bosons over at Google Vultures.”

“I will remind you of the last time we used quantum effects,” says me. “We ended up changing the vacuum state from suck to blow. Luckily it was an unstable equilibrium, and because we are particle theorists and not cosmologists, we didn’t have to think ‘equilibrium of the Higgs-Anderson-Goldstone field with respect to what?’, so the universe just relaxed back to its current vacuum. I sure do miss the werewolves from that old vacuum, though. No, the thing growing in the corner is in a jar.”

“Crap,” says Qubitslets, “we’ve been quarantined!”

Suddenly the Medium Cryostat materializes from The Void. “There is no virus and if there were viruses they would be foreign viruses with foreign RNA and a sheath of foreignness so tricky in its conformal structure we would need a wall to stop it. Because there is no virus, which there certainly isn’t, we must build walls around us and between us and under us. The Great Walling must begin. And we must stay behind our walls and only go out to visit our grandma if she is in a nursing home but we can only do that if we test grandma for foreign viruses, which don’t exist, and, of course we must stay six feet from any particle in the universe. A foreign virus has a Compton wavelength of six feet, I am told.”

“In light of us not being afraid of viruses, because they don’t exist, we will need to plan for how The Great Economy can survive the foreign viruses, if they existed. Stock buybacks were insurance claims for the future antibodies the corporations of The Great Economy would need during The Great Walling, so we should cash out their claims. Because of the uniform structure of economic strata across our Great Country, we can use the base minimum wage to pay the minimum required to sustain minimal subsistence for those impacted by the viruses. If the viruses existed.”

“But we must also not forget what made this a Great Country, again. Never forget the resilience of our people to the scientific method, again. Of the possibility of our citizens being able to think in terms of counterfactuals, again. And of our dedication to spring break and Easter services and a Latin homily about licentious spring breakers, which no, does not arouse the Medium Cryostat. Amen. Oops, I mean Again.”

“And because we are a generous Medium Cryostat, and we know that living behind walls is hard (but necessary because maybe viruses), we shall provide to every home everywhere a jar of Sourdough Levain. Lactobacilli and yeast for all, because nothing says that you aren’t scared of invisible microscopic viruses like cooking with a self-reproducing jar of sticky goo.”

“Hooray!” says Qubitslets, “we get to bake bread!”

“Did I ever tell you about the time my girl forgot to feed the starter,” says me, “and the starter died, and in the tears that fell into the hooch that was all that remained, I could see that the relationship was over by studying the way the waves spread and reflected off the walls of the jar?”

“I bet we can study the exponential growth of the yeast and use that to model viruses,” says Qubitslets. “As physicists we know that only simple models that use physics concepts can be used for public health decisions.”

“Or maybe in the replication of the yeast, we’ll discover that we are just cellular automata or hypergraphs transforming under some crowdsourced update rule.”

“But no brass,” says Qubitslets.

“Yeah, no brass.”

** This post is a tribute to the best blog that ever was, ever will be, and ever could be. Fafblog, you are greatly missed.

The open access wars

Vox has just published an excellent article on open access scientific publishing, “The open access wars”. While that is too provocative a title, the article still manages to give an accurate assessment of the current state of play. Although I like the article, I can’t help but nitpick a few points.

From the article,

“We spoke with executives at both Elsevier and Springer Nature, and they maintain their companies still provide a lot of value in ensuring the quality of academic research.”

This is false. Publishers do not add any significant value in ensuring the quality of the academic research. Peer reviewers do that, and even then not consistently. True, the publishers facilitate finding peer reviewers, but this has nothing to do with the actual publishing of the research. The role of the journal itself (sans peer review) is just to market and advertise the research, not to ensure quality. The journal also provides a venue for signaling for the prestige-smitten researcher. It is a personal value judgement how much these other things matter, but they certainly don’t impact the quality of the research.

Organizing the peer review (as opposed to actually reviewing) is the job of a middleman: it may provide a service, but it doesn’t add value to the product and it only drives up prices. This is why non-profit overlay journals like Quantum only cost between €0 and €200 to publish. The average cost per submission for hosting a preprint on the arXiv is less than $7.

Which brings me to my next point: I was also a little disappointed that the article failed to mention arxiv.org. They do mention the rise of preprints, but the only site they mention is prepubmed.org, which apparently didn’t come online until 2007. By contrast, the oldest arXiv preprint that I’m aware of is Paul Ginsparg’s notes on conformal field theory, which were posted in 1988!

That might be the oldest timestamp, but the arXiv only started having regular preprint service in 1991. Still, this means that essentially all physics research in the last 25+ years is available for free online. In practice, this means that any time you need a physics paper, you simply find the associated preprint and read that instead of the journal version. This is especially convenient for a field like quantum information, where all but a handful of papers are available on the arXiv.

Any article on open access should lead with a discussion of the arxiv. It’s one of the most important science-related developments of the last 30 years.

Quantum in the wild

Sometimes quantum appears out of nowhere when you least expect it.

From the September 2, 2018 edition of the New York Times Magazine.

Un-renunciation

Can pontiffs un-retire (un-renunciate)? I mean, I retired from being a pontiff way before it was cool. But now the sweet siren call of trying to figure out whether there is really a there there for noisy intermediate-scale quantum devices has called me back. I think it may be time to start doing a little bit of quantum pontificating again. My goal, as always, will be to bring down the intellectual rigor among quantum computing blogs. And to show you pictures of my dog Imma, of course.
Cue bad joke about unitary dynamics and quantum recurrences in 3, 2, 1, 0, 1, 2, 3, …
 

Cosmology meets Philanthropy — guest post by Jess Riedel

My colleague Jess Riedel recently attended a conference  exploring the connection between these seemingly disparate subjects, which led him to compose the following essay.–CHB
[Image: Impact event]

People sometimes ask me how my research will help society.  This question is familiar to physicists, especially those of us whose research is connected to everyday life only… shall we say… tenuously.  And of course, this is a fair question from the layman; tax dollars support most of our work.
I generally take the attitude of former Fermilab director Robert R. Wilson.  During his testimony before the Joint Committee on Atomic Energy in the US Congress, he was asked how discoveries from the proposed accelerator would contribute to national security during a time of intense Cold War competition with the USSR.  He famously replied “this new knowledge has all to do with honor and country but it has nothing to do directly with defending our country except to help make it worth defending.”
Still, it turns out there are philosophers of practical ethics who think a few of the academic questions physicists study could have tremendous moral implications, and in fact might drive key decisions we all make each day. Oxford philosopher Nick Bostrom has in particular written about the idea of “astronomical waste“.  As is well known to physicists, the universe has a finite, ever-dwindling supply of negentropy, i.e. the difference between our current low-entropy state and the bleak maximal entropy state that lies in our far future.  And just about everything we might value is ultimately powered by it.  As we speak (or blog), the stupendously vast majority of negentropy usage is directed toward rather uninspiring ends, like illuminating distant planets no one will ever see.
These resources can probably be put to better use.  Bostrom points out that, assuming we don’t destroy ourselves, our descendants likely will one day spread through the universe.  Delaying our colonization of the Virgo Supercluster by one second forgoes about 10^{16} human life-years. Each year, on average, an entire galaxy, with its billions of stars, is slipping outside of our cosmological event horizon, forever separating it from Earth-originating life.  Maybe we should get on with it?
But the careful reader will note that not everyone believes the supply of negentropy is well understood or even necessarily fixed, especially given the open questions in general relativity, cosmology, quantum mechanics, and (recently) black holes.  Changes in our understanding of these and other issues could have deep implications for the future.  And, as we shall see, for what we do tomorrow.
On the other side of the pond, two young investment analysts at Bridgewater Associates got interested in giving some of their new disposable income to charity. Naturally, they wanted to get something for their investment, and so they went looking for information about what charity would get them the most bang for their buck.  But it turned out that not too many people in the philanthropic world seemed to have good answers.  A casual observer would even be forgiven for thinking that nobody really cared about what was actually getting done with the quarter trillion dollars donated annually to charity.  And this is no small matter; as measured by just about any metric you choose—lives saved, seals unclubbed, children dewormed—charities vary by many orders of magnitude in efficiency.
This prompted them to start GiveWell, now considered by many esteemed commentators to be the premier charity evaluator.  One such commentator is Princeton philosopher Peter Singer, who proposed the famous thought experiment of the drowning child.  Singer is also actively involved with a larger movement that these days goes by the name “Effective Altruism”.  Its founding question: If one wants to accomplish the most good in the world, what, precisely, should one be doing?
You won’t be surprised that there is a fair amount of disagreement on the answer.  But what might surprise you is how disagreement about the fundamental normative questions involved (regardless of the empirical uncertainties) leads to dramatically different recommendations for action.    
A first key topic is animals.  Should our concern about human suffering be traded off against animal suffering? Perhaps weighted by neural mass?  Are we responsible for just the animals we farm, or the untold number suffering in the wild?  Given Nature’s fearsome indifference, is the average animal life even worth living?  Counterintuitive results abound, like the argument that we should eat more meat because animal farming actually displaces much more wild animal suffering than it creates.
Putting animals aside, we will still need to balance “suffering averted”  with “flourishing created”.  How many malaria deaths will we allow to preserve a Rembrandt?  Very, very bad futures controlled by totalitarian regimes are conceivable; should we play it safe and blow up the sun?
But the accounting for future people leads to some of the most arresting ideas.  Should we care about people any less just because they will live in the far future?  If their existence is contingent on our action, is it bad for them to not exist?  Here, we stumble on deep issues in population ethics.  Legendary Oxford philosopher Derek Parfit formulated the argument of the “repugnant conclusion”.  It casts doubt on the idea that a billion rich people living sustainably for millennia on Earth would be as ideal as you might initially think.
(Incidentally, the aim of such arguments is not to convince you of some axiomatic position that you find implausible on its face, e.g. “We should maximize the number of people who are born”.  Rather, the idea is to show you that your own already-existing beliefs about the badness of letting people needlessly suffer will probably compel you to act differently, if only you reflect carefully on it.)
The most extreme end of this reasoning brings us back to Bostrom, who points out that we find ourselves at a pivotal time in history. Excepting the last century, humans have existed for a million years without the ability to cause our own extinction.  In probably a few hundred years—or undoubtedly in a few thousand—we will have the ability to create sustainable settlements on other worlds, greatly decreasing the chance that a calamity could wipe us out. In this cosmologically narrow time window we could conceivably extinguish our potentially intergalactic civilization through nuclear holocaust or other new technologies.  Even tiny, well-understood risks like asteroid and comet strikes (probability of extinction event: ~10^{-7} per century) become seriously compelling when the value of the future is brought to bear. Indeed, between 10^{35} and 10^{58} future human lives hang in the balance, so it’s worth thinking hard about.
So why are you on Facebook when you could be working on Wall Street and donating all your salary to avert disaster? Convincingly dodging this argument is harder than you might guess.  And there are quite a number of smart people who bite the bullet.