Imagine a parallel universe where it's Saturday night, and I finish writing an article to post to Heureka Labs on Sunday morning. But before hitting publish, I must first send the article to a ghostly editor of this site's publishing platform, who will read the blog post and then decide whether it's interesting enough. If it is, they send it to a few of you readers who volunteered to give me feedback (e.g. is it really Saturday night?; do we actually have a publishing problem?; why didn't you include my problem?). Then the editor reads your comments and decides whether the post is still worth publishing. I am then invited to respond to your comments with the overall goal of enhancing the article (e.g. yes, it is Saturday night; the overwhelming evidence points to a publishing problem; and your problem is outside the scope of this article). After another cycle of review and comments, the editor makes a decision on whether to publish. If this parallel universe existed, dear readers, you'd never read an article here, and I would be stuck in publishing limbo. Unfortunately, this absurd universe describes the academic publishing system.
To appreciate the absurdity, let me add a little more color to this world. To publish a paper, assuming I have done all of the experiments and have a paper written, the first step is to shop the paper around. This means reaching out to editors and making pre-submission inquiries to gauge possible interest. No matter how sound the science is, if an editor is not interested in the paper, it will not be published there. So you need to find a home for your paper. After you find a possible home and a marginally excited editor, the paper is reviewed by a larger group of editors. At this stage, the paper could easily be editorially rejected: "Thanks, but no thanks."
If an editor agrees to send a paper out for review, then usually three, but sometimes four or five, reviewers will be asked to provide feedback on your work. These reviewers are not professional reviewers; they are scientists who are asked to volunteer their time to review the work. Reviewing a paper takes anywhere from 1 to 3 hours, so a single review is not a heavy lift. However, in my first year, I reviewed almost 30 papers. Each of these papers gets sent back at least once, so I performed roughly 60 reviews that first year. Continuing this math, I had a paper to review every single week of the year. Remarkably, there is no compensation for this work; instead, this pro bono work is chalked up to experience, résumé building, or community service. If I am going to publish a paper in this system, I will need three colleagues to review my work. In exchange, I am expected to review the work of three other colleagues; and so the system continues. A small but growing movement is trying to change the culture to pay for reviews. But candidly, the incentives are misaligned. Why would a for-profit journal begin to pay reviewers when historically this has been done for free? Why would reviewers, scientists who need the experience or want to establish relationships with editors, start charging fees for a service that an entire pool of other reviewers will perform for free?
If the editor sends the manuscript out for review, the reviewers send comments back to the editor. At this stage, the editors deliberate amongst themselves and ultimately make a decision... which is to reject. No one has papers accepted on the first submission. Instead, the best you can hope for is a rejection with the opportunity to resubmit. At this stage, the paper's authors must choose to either follow the advice of the reviewers, who left detailed instructions on what they would like to see done, or start the process again with a new editor and a new set of reviewers. Editors can help determine the priority of experiments, as well as ascertain the likelihood of ultimate acceptance. Generally, editors are approachable, especially those with whom you have relationships, meaning you have reviewed papers for them previously. Do you see how the system works now?
The editorial decision for the first round comes 2 to 6 months later. If you're lucky enough to get a revise-and-resubmit, you then have another 6 to 12 months of experiments before turning the paper around and resubmitting it. The process continues until the paper is either accepted or ultimately rejected, in which case the whole process starts again. From first submission to ultimate acceptance, typically more than a year has passed. For a young graduate student, our current publishing system adds at least a year to the length of their program. Moreover, this arcane process adds at least a year before your findings are shared with the scientific community. In the field of drug discovery, a delay in sharing information about a new drug target or a new mechanism of disease means a delay in translating that information into therapeutic potential.
Sharing information and allowing others to build on it is the fundamental way that science progresses. For those who are trying to unravel the mysteries of science, your findings cannot be built upon until they are shared. Now take this single case and amplify it across the entire scientific ecosystem, and you have a process that slows the pace of scientific discovery.
How can we share our discoveries more quickly so that they can be replicated, built upon, and translated into impact?
Let’s consider software development as an analogy. A software developer writes code and then “publishes” that code, similar to a scientist publishing their findings. But first, they ask for the code to be reviewed before it is merged into the main codebase. If a software engineer wants to add a new feature, they ask someone within their organization to review the code to make sure it is sound; if the project is open source, they ask the community. Imagine if code review took 1 to 2 years; that feature would never be integrated into the current software. Any software company that published code the way science publishes findings would quickly fail.
Software engineering thrives on the concept of composability. Composability means that shared code can be integrated and built upon by other engineers. In fact, this is the entire reason that open-source software development exists. The code I write can be built upon by you and shared back with me, and I can continue to build upon it, ad infinitum. Whether code snippets or entire open-source libraries, composability enables developers to build, share, build more, share more, and so on. Incidentally, this is also how scientific discovery works. I discover something and share it with the world, allowing someone else to take that discovery and build upon it. In turn, they share their findings with the world, and the cycle continues. Scientific discovery is composability.
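To make the composability idea concrete, here is a minimal Python sketch. The three "authors" and the function names are hypothetical illustrations, not from any real library: each layer reuses the shared layer beneath it instead of rewriting it, just as one discovery builds on another.

```python
def normalize(text):
    """Author A publishes a simple text normalizer."""
    return text.strip().lower()

def tokenize(text):
    """Author B builds on A's shared function without rewriting it."""
    return normalize(text).split()

def word_counts(text):
    """Author C composes B's tokenizer into a frequency counter."""
    counts = {}
    for word in tokenize(text):
        counts[word] = counts.get(word, 0) + 1
    return counts

# Each function is usable on its own, and each new one is built from
# the ones already shared -- build, share, build more, share more.
print(word_counts("  To be or not to be  "))
```

None of the three authors needed permission from the others; publishing the code was enough for the next person to build on it.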
The primary theme of the most recent series of problem articles [funding, approach] is to highlight the current problems in the scientific ecosystem and then envision how technology might help overcome those problems. So the last question in this final article: can technology fix the problems with scientific publishing?
- Gatekeepers. One of the major problems with publishing is gatekeepers, who come in the form of editors but also reviewers. Editors determine what they are or are not interested in publishing, and human interests ebb and flow as frequently as modern-day news cycles (see Exhibit A). Reviewers are gatekeepers, too. If a reviewer doesn’t deem your experiments, your topic, or your approach relevant, then you are done. A single disagreeable reviewer out of three (or sometimes four or five) means that your paper will not have reviewer consensus, and it will therefore likely be rejected. At a time when more papers are submitted than could possibly be published, editors need a reason, or a way, to triage these papers. Lack of consensus is a death knell.
- Accessibility. A second major problem is accessibility. Scientists need access to read and know the work du jour, and the community needs access for scientific literacy. Single paywalled articles can go for $20-$200 for access; monthly subscriptions can be 10X more. Multiply this across many papers and many journals, and we have an accessibility problem.
- Cycle length. Scientific discovery is like software composability, and because of this, cycle length and cycle frequency matter. Just as with code, the quality of the thing you share matters. Poorly written code that is buggy, broken, or irrelevant will not be used and built upon. Similarly, a scientific discovery that is buggy (an artifact), broken (wrong), or irrelevant (niche) will also not be used and built upon. Reducing cycle length is not about cutting corners to go faster; it's about removing the inefficiencies that delay sharing.
All of this could be fixed with a modern publishing system that removes gatekeepers, increases accessibility, and adds efficiencies to reduce cycle length. Like blogs, such as this one. Why can’t scientists publish what they know? What they discover? Why the need for scientific gatekeepers? This site and the articles I write are permissionless; I didn’t ask anyone before publishing. You all, the readers of these articles, determine their utility and composability. Will you incorporate these ideas into your thinking and build upon them? Now candidly, this idea is not without challenges (quality, searchability, etc.). But science, like software, is self-correcting, and modern websites have search engines and optimization.
We call a published scientific manuscript a paper, but it's been many years since these were printed on paper. The antiquated system for scientific publishing should take lessons from open-source software development. A system that allows scientists to publish, correct, share, and update would accelerate the entire scientific ecosystem. The ethos of this site is permissionless sharing of what I'm learning by writing articles and publishing them; a future in which information is free, decentralized, and open is better for science, and for society.