Defenders of the traditional publishing model for medicine say that health-related claims need to be vetted by a referee process. But this vetting carries heavy costs. In quantum information, one might know the proof of a theorem (e.g., the Quantum Reverse Shannon Theorem) for years without publishing it, but one would rarely publish using data that is itself secret. Unfortunately, publishing on secret data is the norm in public health. It’s ironic that the solution to the 100-year-old Poincaré conjecture was posted on arxiv.org and rapidly verified, while research on fast-moving epidemics like H5N1 (bird flu) is delayed so that scientists who control grants can establish priority.
All this is old news. But what I hadn’t realized is that the rest of science needs not only arxiv.org, but also scirate.com. Here is a recent and amazing, but disturbingly common, example of scientific fraud. A series of papers was published with seemingly impressive results, and huge, expensive clinical trials were planned on the basis of those papers, while other researchers were privately having trouble replicating the results, or even making sense of the plots. But when they raised their concerns, here’s what happened (emphasis added):
In light of all this, the NCI expressed its concern about what was going on to Duke University’s administrators. In October 2009, officials from the university arranged for an external review of the work of Dr Potti and Dr Nevins, and temporarily halted the three trials. The review committee, however, had access only to material supplied by the researchers themselves, and was not presented with either the NCI’s exact concerns or the problems discovered by the team at the Anderson centre. The committee found no problems, and the three trials began enrolling patients again in February 2010.
As with the Schön affair, there was an almost comical number of lies, including a fake “Rhodes scholarship in Australia” (which you haven’t heard of because it doesn’t exist) on the CV of one of the researchers. But what if they had lied only slightly more cautiously?
By contrast, with scirate.com, refutations of mistaken papers can be quickly crowdsourced. If you know non-quantum scientists, go forth and spread the open-science gospel!
Hmm… apparently this post was badly timed!
Quantum scientists need scirate.com too – but sadly it seems to be down 🙁 What’s the story?
A slightly off-topic question, assuming that Dave B is reading this: does anyone know when scirate will be back up again?
While I agree wholeheartedly with your argument for doing open science, I don’t understand why the conclusion is that science needs scirate. I guess I think of “open science” and “rating papers” as very different things. For one thing, I like the former but not the latter.
Besides, is the lack of web tools really an obstacle to open science? When people want to do open science, they seem to do a fine job with wikis/blogs/arxiv/etc. The problem is that very few want to do it, and that’s where I imagine the real obstacle is: the current system of academic hiring/tenure/grant-giving/prestige/etc. incentivizes hoarding ideas/data and working in secret.
Of course it’s not just the technical tool that’s necessary, but also the associated culture. Still, these somewhat go together. Before arxiv.org, people always _could_ self-publish on their websites, and could maintain their publication lists there with cross-references for all of their coauthors, etc. But in hindsight, arxiv.org seems to have really enabled something useful. Similarly, scirate.com is (was) used in ways that people were technically capable of before, but never in practice got around to.
Why do you think that “rating science” (using tools like scirate) is not a good idea?
@Aram: I understand what you are saying, but I guess I’m still wondering what it is that scirate does for open science (besides allowing comments, which is a useful but not exactly revolutionary functionality)?
@Michal: I have yet to hear a convincing argument for why it is a good thing; on the other hand, it’s easy to come up with arguments for why it might be detrimental (e.g., it could exacerbate groupthink and “following fashion,” which I think are real problems in academia).
The best reason I can come up with to support scirate is that reading through the arxiv daily to find the couple of papers that may matter for research in our field is not easy to do. If you open this process of discovery to everyone, then someone with more time that day, or sharper skills at evaluating research, will get the ball rolling. It is a huge service to be able to look at what the community has considered important over the past several months. I am sure there are other reasons why scirate is awesome, and also ways it can be (and has been) misused at times. In particular, like every voting scheme, it represents the opinions of those who vote, not of the whole community – even if the whole community follows what’s happening. For example, I use scirate all the time, but haven’t signed up to rate a paper yet. I plan to sign up as soon as the hacking ends.
While I agree with Aram that a Scirate-like site is a helpful thing to have, Scirate itself is (was?) more than a bit unpolished – limited features, ugly design, no moderation and, last but not least, a lack of critical mass. If a site like that is to be successful in the future, it should incorporate lessons learnt from the StackExchange model (mathoverflow, cstheory and theoreticalphysics, which is about to enter its public beta phase) – how to gain and maintain a critical mass of users, how to keep the level of comments/answers satisfactory, etc.
With the current volume of scientific output, filtering is badly needed. Today, the expertise needed to separate significant results from minor ones is distributed among hundreds of experts and is inaccessible to the scientific community at large (apart, maybe, from a dozen blogs, like Terry Tao’s). For example, if I want to know which recent results in a given field, say random geometry, are important, I ask one of the experts at hand, if there are any, or go to a conference and listen to “gossip”. Whoever manages to bring the technical side and the appropriate incentives together into a workable system that distributes exactly this kind of knowledge will have, IMHO, an impact comparable to that of the arXiv.
@Marcin: A few of us are considering helping Dave with a reworking of scirate to reflect the features you discuss. It seems that you feel pretty strongly about this, so it would be wonderful if you joined the process. Dave practically took the site down in order to ask for help in reworking it. In closing, I want to thank Dave for providing this invaluable service, free of charge, while still contributing to science more than most of us 🙂 Thank you, Dave.