API is an abbreviation for “Application Programming Interface.” Roughly speaking, an API is a specification of a software component in terms of the operations one can perform with that component. For example, a common kind of API is the set of methods supported by an encapsulated bit of code, a.k.a. a library (a library could have the purpose of “drawing pretty stuff on the screen”; its API is then the set of commands like “draw a rectangle,” along with a specification of how you pass parameters to that method, how rectangles overlay one another, etc.). Importantly, the API specifies how the library functions in a way that is independent of the library’s inner workings (though this wall is often broken in practice). Another common kind of API arises when a service exposes remote calls that can be made to manipulate and perform operations on that service. For example, Twitter supports an API for reading and writing Twitter data. This latter example, of a service exposing a set of calls that can manipulate data stored on a remote server, is particularly powerful, because it allows one to gain access to that data through simple access to a communication network. (As an interesting aside, see this rant for why APIs are likely key to some of Amazon’s success.)
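To make that concrete, here is what a sliver of such a drawing library’s API might look like. This is a minimal sketch in Python, and every name in it is hypothetical:

```python
# A toy "drawing pretty stuff on the screen" library. The API is the
# public surface: the methods, their parameters, and the promised
# behavior (e.g., later shapes overlay earlier ones), not the internals.
class Canvas:
    def __init__(self, width, height):
        self.width = width
        self.height = height
        self._shapes = []  # internal detail, deliberately not part of the API

    def draw_rectangle(self, x, y, width, height, color="black"):
        """Draw a rectangle at (x, y); later calls overlay earlier ones."""
        self._shapes.append(("rect", x, y, width, height, color))


canvas = Canvas(640, 480)
canvas.draw_rectangle(10, 10, 100, 50, color="blue")
```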
As you might guess (see, for example, my latest flop, Should Papers Have Unit Tests?), I like smooshing together disparate concepts and seeing what comes out the other side. Thinking about APIs led me to the question “What if papers had APIs?”
In normal settings academic papers are considered to be relatively static objects. Sure, papers on the arXiv, for example, have versions (some more than others!), and there are efforts like Living Reviews in Relativity, where review articles are updated by their authors. But in general papers exist as fixed, “complete” works. In programming terms we would say that they are “immutable.” So if we consider the question of exposing an API for papers, one might think this would just be a read-only API. And indeed this form of API exists for many journals, and also for the arXiv. These “paper APIs” allow one to read information, mostly metadata, about a paper.
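For instance, a read-only call against the arXiv’s API might look like the minimal Python sketch below. The http://export.arxiv.org/api/query endpoint is real; the particular identifier is just an arbitrary example:

```python
# Fetch metadata for one paper from the arXiv API (an Atom feed) and
# print its title: a read-only "paper API" call.
import urllib.request
import xml.etree.ElementTree as ET

url = "http://export.arxiv.org/api/query?id_list=quant-ph/9705052"  # any arXiv id works
with urllib.request.urlopen(url) as response:
    feed = ET.fromstring(response.read())

ns = {"atom": "http://www.w3.org/2005/Atom"}
for entry in feed.findall("atom:entry", ns):
    print(entry.find("atom:title", ns).text)
```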
But what about a paper API that allows mutation? At first glance this heresy is rather disturbing: allowing calls from outside a paper to change the content of that paper seems dangerous. It also isn’t clear what benefit could come from this. With, I think, one exception. Citations are the currency of academia (last I checked they were still, however, not fungible with bitcoins). But citations really only go in one direction (with exceptions for simultaneous works): you cite a paper whose work you build upon (or whose work you demonstrate is wrong, etc.). What if a paper exposed a reverse citation index? That is, if I put my paper on the arXiv, then, when you write your paper showing how my paper is horribly wrong, you could make a call to my paper’s API that mutates my paper, adding links to your paper. Of course, this seems crazy: what is to stop rampant spamming of reverse citations, especially by *ahem* cranks? Here it seems one could implement a simple approval system for the receiving paper. If this were done on some common system, you could then expose the mutated paper either (a) with only approved mutations or (b) with unapproved mutations as well (or one could go “social” on this problem and allow voting on the changes).
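A minimal sketch of what such a mutable, approval-gated reverse-citation API could look like; to be clear, everything here is hypothetical, invented names with no real service behind them:

```python
# A hypothetical paper record that accepts reverse-citation mutations,
# holds them in a pending queue, and exposes approved or unapproved views.
class Paper:
    def __init__(self, arxiv_id):
        self.arxiv_id = arxiv_id
        self.approved = []
        self.pending = []

    def add_reverse_citation(self, citing_id, location=None):
        """Mutation call: a citing paper registers itself, optionally
        pointing at a specific location (section, equation) in this paper."""
        self.pending.append((citing_id, location))

    def approve(self, citing_id):
        """Author moderation: promote a pending reverse citation."""
        for entry in list(self.pending):
            if entry[0] == citing_id:
                self.pending.remove(entry)
                self.approved.append(entry)

    def reverse_citations(self, include_unapproved=False):
        """View (a): approved mutations only; view (b): pending ones too."""
        return self.approved + (self.pending if include_unapproved else [])


mine = Paper("quant-ph/0000001")  # placeholder id
mine.add_reverse_citation("quant-ph/0000002", location="Sec. III, Eq. (7)")
mine.approve("quant-ph/0000002")
```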
What benefit would such a system confer? In some ways it would make more accessible something we all use: the “cited by” index of services like Google Scholar. One difference is that it could be more precise: while Scholar provides a list of relevant papers, an API that exposed the ability to add links to specific locations in a paper (something like the hypothetical location argument in the sketch above) could arguably produce better reverse citations (because, frankly, the weakness of the cited-by indices is their lack of specificity).
What else might a paper API expose? I’m not convinced this isn’t an interesting question to ponder. Thanks for reading another wacko mashup episode of the Quantum Pontiff!
When I Was Young, I Thought It Would Be Different….
When I was in graduate school (back before the earth cooled) I remember thinking the following thoughts:
- Quantum computing is a new field filled with two types of people: young people dumb enough to not know they weren’t supposed to be studying quantum computing, and old, tenured people who understood that tenure meant that they could work on what interested them, even when their colleagues thought they were crazy.
- Younger people are less likely to have overt biases against women. By this kind of bias I mean things like the math professor at Caltech who told one of my friends that women were bad at spatial reasoning (a.k.a. jerks). Maybe these youngsters even had less hidden bias?
- Maybe, then, because the field was new, quantum computing would be a discipline in which the proportion of women was higher than the typical rates of its parent disciplines, physics and computer science?
In retrospect, like most of the things I have thought in my life, this line of reasoning was naive.
Reading Why Are There So Few Women In Science in the New York Times reminded me of these thoughts from my halcyon youth, and made me dig through the last few QIP conferences to get one snapshot (note that I say just one, internet comment trolls) of the state of women in the quantum computing (theory) world:
Year | Speakers | Women Speakers | Percent |
---|---|---|---|
2013 | 41 | 1 | 2.4 |
2012 | 43 | 2 | 4.7 |
2011 | 40 | 3 | 7.5 |
2010 | 39 | 4 | 10.3 |
2009 | 40 | 1 | 2.5 |
Personally, I find it hard to read these numbers and not feel a little disheartened.
Why I Left Academia
TL;DR: scroll here for the pretty interactive picture.
Over two years ago I abandoned my post at the University of Washington as an assistant research professor studying quantum computing and started a new career as a software developer for Google. Back when I was a denizen of the ivory tower I used to daydream that when I left academia I would write a long “Jerry Maguire”-esque piece about the sordid state of the academic world, about my lot in that world, and about how unfair and f**ked up it all is. But maybe with less Tom Cruise. You know the text: the standard rebellious view of all young rebels stuck in the machine (without any mirror). The song “Mad World” has a lyric that I always thought summed up what it would feel like to leave and write such a blog post: “The dreams in which I’m dying are the best I’ve ever had.”
But I never wrote that post. Partially this was because every time I thought about it, the content of that post seemed so run-of-the-mill boring that I feared my friends who read it would never come visit me again; the story of why I left really is not that exciting. Partially because writing a post about why “you left” is about as “you”-centric as you can get, and yes, I realize I have a problem with ego-centric ramblings. Partially because I have been busy learning a new career and writing a lot (omg, a lot) of code. And partially because the notion of “why” is one that I, as a card-carrying ex-physicist, cherish, and I knew that I could not possibly do justice to giving a decent “why” explanation.
Indeed: what would a “why” explanation for a life decision like the one I faced even look like? For many years, when I thought about this, I would simply think “well, it’s complicated; how can I ever explain it?” There are, of course, many different components that you weigh when considering such a decision. But then what do you do with them? Does it make sense to think of them as probabilities? “I chose to go to Caltech, 50 percent because I liked physics, and 50 percent because it produced a lot of Nobel Prize winners.” That does not seem very satisfying.
Maybe the way to do it is to phrase the decision in terms of the probabilities I was assigning before making it. “The probability that I’ll be able to contribute something to physics will be 20 percent if I go to Caltech versus 10 percent if I go to MIT.” But despite what some economists would like to believe, there ain’t no way I ever made most decisions via explicit calculation of my subjective odds. Thinking about decisions in terms of what an actor feels each option would do to his or her chances of success feels better than blindly attaching probabilities to components of a decision, but it also seems like a lie, attributing math where something else is at play.
So what would a good description of the model be? After pondering this for a while I realized I was an idiot (for about the eighth time that day; it was a good day). The best way to describe how my brain was working is, of course, nothing short of my brain itself. So here, for your amusement, is my brain (sorry, only tested in Chrome). Yes, it is interactive.
4 Pages
Walk up to a physicist at a party (we could add a conditional about the amount of beer consumed by the physicist at this point, but that would be redundant; it is a party, after all) and say to him or her “4 pages.” I’ll bet you that 99 percent of the time the physicist’s immediate response will be the three words “Physical Review Letters.” PRL, a journal of the American Physical Society, is one of the top journals for a physicist to publish in, signaling to the mating masses whether you are OK and qualified to be hired as faculty at (insert your college name here). I jest! (As an aside, am I the only one who reads what APS stands for and wonders why I have to see the doctor to try out for high school tennis?) In my past life, before I passed away as Pontiff, I was quite proud of the PRLs I’d been lucky enough to have helped with, including one that has some cool integrals, and another that welcomes my niece into the world.
Wait, what?!? Yes, in “Coherence-Preserving Quantum Bits” the acknowledgements include a reference to my brother’s newborn daughter. Certainly I know of no other paper where such an acknowledgement to a beloved family member is given. The other interesting bit about that paper is that we (okay, probably you can mostly blame me) originally entitled it “Supercoherent Quantum Bits.” PRL, however, has a policy about new words coined by authors, and, while we almost made it to the end without the referee or editor noticing, they made us change the title because “supercoherent” would be a new word. Who would have thought that being a PRL editor meant you had to be a defender of the lexicon? (Good thing Ben didn’t include qubits in his title.)
Which brings me to the subject of this post. This is a cool paper. It shows that a very nice quantum error correcting code due to Bravyi and Haah admits a transversal (all at once now, comrades!) controlled-controlled-phase gate, and that this, combined with another transversal gate (everyone’s fav, the Hadamard) and fault-tolerant quantum error correction, is universal for quantum computation. This gives a way to perform fault-tolerant quantum computing without magic state distillation, which is exciting for those of us who hope to push the quantum computing threshold through the roof, with resources available to even a third-world quantum computing company.
What does this have to do with PRL? Well, this paper has four pages. I don’t know if it is going to be submitted or has already been accepted at PRL, but it has that marker that sets off my PRL radar, bing bing bing! And now here is an interesting thing I found in this paper. The awesome, amazing, very cool code in this paper is defined via its stabilizer generators:
IIIIIIIXXXXXXXX, IIIIIIIZZZZZZZZ,
IIIXXXXIIIIXXXX, IIIZZZZIIIIZZZZ,
IXXIIXXIIXXIIXX, IZZIIZZIIZZIIZZ,
XIXIXIXIXIXIXIX, ZIZIZIZIZIZIZIZ.
This takes up a whopping 4 lines of the article. The disclaimer in the acknowledgements, meanwhile, reads:
The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/NBC, or the U.S. Government.
Now I’m not some come-of-age tea party enthusiast who yells at the government like a coyote howls at the moon (I went to Berkeley damnit, as did my parents before me.) But really, have we come to a point where the god-damn disclaimer on an important paper is longer than the actual definition of the code that makes the paper so amazing?
Before I became a ghost pontiff, I had to raise money from many different three-, four-, and five-letter agencies. I’ve got nothing but respect for the people who work the jobs that help supply funding for large research areas like quantum computing. In fact, I personally think we probably need even more people to execute on the civic duty of getting funding to the most interesting and most transformative long- and short-term research projects. But really? A disclaimer longer than the code the paper is about? Disclaiming what, exactly? Erghhh.
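As a coda: assuming my transcription of the generators above is faithful, here is a minimal Python sketch checking that they at least pairwise commute, as stabilizer generators must:

```python
# Sanity check: same-type Pauli strings always commute; an X-type and a
# Z-type string commute iff their supports overlap on an even number of
# qubits. So it suffices to check the X-type/Z-type pairs.
x_type = [
    "IIIIIIIXXXXXXXX",
    "IIIXXXXIIIIXXXX",
    "IXXIIXXIIXXIIXX",
    "XIXIXIXIXIXIXIX",
]
z_type = [s.replace("X", "Z") for s in x_type]

def support(pauli):
    """Indices where the Pauli string acts nontrivially."""
    return {i for i, p in enumerate(pauli) if p != "I"}

assert all(
    len(support(x) & support(z)) % 2 == 0
    for x in x_type
    for z in z_type
), "some generators anticommute"
print("all 8 generators commute")
```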
Taken to School
Here is a fine piece of investigative journalism about a very widespread scam that is plaguing academia. Definitely worth a watch.
How people in science see each other
US Quantum Computing Theory CS Hires?
I’m trying to put together a list of people who do theoretical quantum computing and have been hired into CS departments at United States universities over the last decade. So the requirements I’m looking for are: (a) hired into a tenure-track position at a US university with at least fifty percent of their appointment in CS, (b) hired after 2001, and (c) they would say their main area of research is quantum information science theory.
Here is my current list:
- Scott Aaronson (MIT)
- P. Oscar Boykin (University of Florida, Computer Engineering)
- Amit Chakrabarti (Dartmouth)
- Vicky Choi (Virginia Tech)
- Hang Dinh (Indiana University South Bend)
- Sean Hallgren (Penn State)
- Alexei Kitaev (Caltech)
- Andreas Klappernecker (Texas A&M)
- Igor Markov (Michigan)
- Yaoyun Shi (Michigan)
- Wim van Dam (UCSB)
- Pawel Wocjan (UCF)
Apologies to anyone I’ve missed! So who have I missed? Please comment!
Update: Steve asks for a similar list for physics departments. Here is my first stab at such a list…though it’s a bit harder, because the line between a quantum computing theorist and, say, an AMO theorist who studies systems that might become quantum computers is difficult to draw.
Physicists, quantum computing theory:
- Lorenza Viola (Dartmouth)
- Stephen van Enk (Oregon)
- Alexei Kitaev (Caltech)
- Paolo Zanardi (USC)
- Mark Byrd (Southern Illinois University)
- Luming Duan (Michigan)
- Kurt Jacobs (UMass Boston)
- Peter Love (Haverford)
- Jon Dowling (LSU)
I’m sure I missed a lot here; please help me fill it in.
Hoisted From the Comments: Funded Research
A while back Michael Nielsen posted a comment in one of my blog posts that I’ve been thinking a lot about lately:
Re your last two paragraphs: a few years ago I wrote down a list of the ten papers I most admired in quantum computing. So far as I know, not a single one of them was funded, except in the broadest possible sense (e.g., undirected fellowship money, that kind of thing). Yet the great majority of work on quantum computing is funded projects, often well funded. My conclusion was that if you’re doing something fundable, then it’s probably not very interesting. (This applies less so to experimental work.)
This, of course, is quite a depressing idea: that the best work is funded, at best, indirectly by the powers that be. But it hadn’t occurred to me until much more recently that I, as someone who regularly applies for funding, can do something about this problem: “My good ideas (all two of them)? Sorry, Mr. Funding Agency, I’m not going to let you fund them!” And there is a bonus: if you submit something to an agency and they won’t fund it, well, you can live under the illusion that what you are doing might make the list of really important research.
Actually, I’m very proud of one research proposal I wrote that got rejected. The reviewers said “this work raises interesting questions” and then “but it’s just too crazy for us.” I mean, it sucks to get rejected, but if you’re getting rejected because you’re just too crazy, well, then at least you’re eccentric! (A similar story was my dream of becoming a ski bum after getting my Ph.D. in theoretical physics. I mean, anyone can be a liftie, but a liftie with a degree in physics? Now that would set you apart! Lifties with Ph.D.s in physics, please leave a note in the comment section of this blog 🙂 )
"I sound my lonely trumpet in the dark trying to relax at the edge of precipice which once again faces me"
Steve sends me this gem, arXiv:0905.1039. The title of this blog post is a line from the paper:
Citation entropy and research impact estimation
Z.K. Silagadze
A new indicator, a real valued $s$-index, is suggested to characterize a quality and impact of the scientific research output. It is expected to be at least as useful as the notorious $h$-index, at the same time avoiding some its obvious drawbacks. However, surprisingly, the $h$-index is found to be quite a good indicator for majority of real-life citation data with their alleged Zipfian behaviour for which these drawbacks do not show up. The style of the paper was chosen deliberately somewhat frivolous to indicate that any attempt to characterize the scientific output of a researcher by just one number always has an element of a grotesque game in it and should not be taken too seriously. I hope this frivolous style will be perceived as a funny decoration only.
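If you want to play along at home: the notorious h-index is nearly a one-liner, and a citation entropy in the spirit of the title would presumably be the Shannon entropy of the citation distribution. A minimal sketch follows; note that the paper’s actual s-index definition may well differ from this guess:

```python
# The h-index, plus a Shannon entropy of a citation distribution.
# Caveat: this entropy is only a guess at the paper's ingredient, not
# its actual s-index formula.
import math

def h_index(citations):
    """Largest h such that at least h papers have >= h citations."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked) if c >= i + 1)

def citation_entropy(citations):
    """Shannon entropy (in nats) of the normalized citation distribution."""
    total = sum(citations)
    probs = [c / total for c in citations if c > 0]
    return -sum(p * math.log(p) for p in probs)

papers = [10, 8, 5, 4, 3]      # hypothetical citation counts
print(h_index(papers))          # -> 4
print(citation_entropy(papers))
```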
I wonder if this will provoke a response from my dear friend the Sad Physicist?
Portrait of a Reviewer as a Young Man
Science is dynamic. Sometimes this means that science is wrong; sometimes it means that science is messy. Mostly it is very self-correcting, given the current state of knowledge: at any given time the body of science knows a lot, but any piece of it can be overturned when new evidence comes in. What we produce through all of this, however, at the end of the day, are polished journal articles. Polished journal articles.
Every time I think about this disparity, I wonder why the different versions of a paper, the referee reports, the author responses, and all the editorial reviews aren’t part of the scientific record. In an age where online archiving of such data is a minor cost, why is so much of the review process revealed only to the authors, the referees, and the editors?