Questions in Scientific Publishing:
An Interview with Floyd Bloom
A few weeks ago, News&Views sat down with Floyd Bloom, the speaker at this week's commencement and former chair of the Scripps Research Department of Neuropharmacology, to discuss a subject that is of interest to any graduating Ph.D. student—scientific publishing.
Bloom has seen both sides of scientific publishing. He is the author of more than two dozen books and monographs, more than 400 original scientific articles, more than 250 solicited articles and reviews, and a large number of editorials for both scientific and general audiences. He has served on the editorial boards of a number of peer-reviewed scientific journals, including Neuropharmacology, the British Journal of Pharmacology, Alcohol and Alcoholism, and the Journal of Neuroscience. He was co-editor of the journal Regulatory Peptides from 1978 to 1990, and he currently serves as editor-in-chief of the journal Brain Research. From May 1995 until June 2000 he was the editor-in-chief of Science Magazine.
He was asked about issues ranging from the effect of the internet on scientific communication to the future of journals in an open access environment.
News&Views: When you became the Editor-in-chief of Science in early 1995, the magazine had not yet gone online. That wasn't even 10 years ago, and now it's hard to imagine any journal not having an online presence. What are some of the ways the web has changed scientific communication among scientists?
The main thing is that you don't read issues of journals anymore. You read the fine grain of the articles that your search engine (like Google or PubMed) finds for you when you enter search terms. One of the joys of library scholarship in the past was that when you went to the library to find an article, you usually found two that were more interesting than the one you thought you wanted. That doesn't happen much anymore. Things are moving at a faster pace, and the search engines take you right where you want to go. That satisfies your immediate curiosity, but it is a very piecemeal way of becoming aware of the literature.
You also read more reviews now than you used to because there is so much more detail than there used to be. When I was president of the Society for Neuroscience, I probably knew everybody in the society. Now, I go to a meeting where there are 35,000 people. I can't know everyone—I don't know 10 percent of those people. The same thing is true for the literature. When my field was starting, I was the person who was starting it. I knew everything there was to know about it. [Now], you can't feel certain that you know everything there is to know about any topic—no matter how finely you chop it.
The internet is swamped with information. What we lack is the ability to extract from that whirlpool of information—to find the essence of the knowledge or the criteria by which you would decide what your next experiment should be if you want to solve this problem. That's what we don't have because you are overloaded all the time—and behind.
The other thing is that the internet [puts pressure] on journals. A scientist submits by internet and gets manuscripts to review by internet. The authors who send their materials knowing they are going to be reviewed by internet… will want an immediate answer. The human beings who are in between still take as long as they ever did to do the reviewing. The number of reviewers who are competent to help an editor decide about the substance of an article doesn't increase. What happens is that we spread them out, and end up with a shallow base of knowledge across a wide range of topics. It has imposed a lot of stress on the system.
There are a number of journals in other fields, such as physics, where they immediately post pre-peer-reviewed material upon receipt. Do you think that this is something that is going to happen in biology?
I don't think the mindset of biologists is the same as the mindset of physicists. Particle physicists already know the results of most experiments that are going to be reported because they're done on these huge shared instruments. You have to submit a grant to get time on the shared device, so you tell them what you are going to do. Then you have to report on what you found, so they already know what you found. The fact that they would make a pre-print of a thing that everybody knew about to start with is non-bothersome. They don't have any proprietary interest [at stake, either]—it's another particle. But, in biology, it's not that way. The life sciences are highly competitive, and people want to have their data appear in a high-publicity, high-impact journal, but they don't want it to be known before then, because they have seven competitors trying to do the same experiment.
I am a strong believer in peer review. I go through maybe 100 articles a week for Brain Research. When I get the papers back from the authors, they routinely say, "I thank the reviewers for their helpful and stimulating comments. The paper is now much better than when I sent it in the first time." When you are reading your own stuff, you don't realize that what's clear to you is not necessarily clear to everybody who reads it. Facts that are obvious to you may lack the controls needed to make them obvious to somebody else. Good reviewers pick that kind of stuff up.
You also have a third angle, which is society-owned journals versus commercial-owned journals. The NIH's stated position is that all research supported by NIH grants should be [available] to the public within 12 months of their acquisition. I think that's going to put an unbearable stress on the system of the future. And, there is the concept of "garbage in, garbage out." The fact that you posted your research in an NIH-accessible place doesn't mean that it is right or accurate. [Probably the majority] of articles that come in for review have intrinsic inaccuracies in them that were overlooked by the authors.
Unreviewed material posted online is a waste of time to read. I wouldn't spend my limited time reading things that have not been vetted. Insisting on that is like the way they used to insist, in the publicly funded genome projects, that all the sequences obtained be posted immediately. They would spend hours, days, and months going back and correcting the stuff that was up there incorrectly, on which people had based experiments and couldn't get the answers. So, it's a nice thing to say, it's a nice chest-thumping move to make, but I don't think it helps science.
What is the effect of the NIH's mandate on journals? What will the journals provide to recapture their subscriber base?
I think it's going to be extracted wisdom. I think that the more free information is, the less valuable it is—especially when you are flooded with it all the time. You can't take the time to read 60 articles when you can look at one overview of a field and see what the ten most interesting hot points are today.
For that reason, when I was at Science we started the knowledge-environment concept, where you could essentially browse the titles in the areas that interested you without having to pick up 60 journals. Essentially, an automated robot scans the literature of HighWire-published journals, looks for the keywords, and says, "This belongs in category 14-27 of Cell Biology." So, if I want to tune in and find everything there is on mitochondria or on nerve cells in vertebrates or invertebrates, every time I log in there, it's going to tell me what's new since the last time I saw it. It gives me back some of that serendipitous browsing that I got from the physical experience of going to the library.
Can you talk a little about how these knowledge environments arose?
We were looking for some premium service we could offer readers that would make their individual subscription valuable to them. When you poll readers of Science and ask, "What did you read in the last issue?" eighty percent will recall something from up front [in the journal] and one article from the back. But that up-front part is not captured when you go online—you are only looking at the articles in the back. It's very hard to get even to the perspectives, which give you the information you need to understand the significance of the articles. So, that up-front material gives the individual a much better magazine than just the scientific journal part in the back.
The idea was that we couldn't publish everything—Science rejects more than 90% of what it gets. But, we can call attention to things. That is why we have "Editor's Choice" in the magazine now. This is what the editors who are surveying their specialty fields found in another journal—sometimes rejected by us, but published someplace else—that readers want to know about. This gives readers a way of having Science magazine be their portal to the area of research they want to study. All these features were developed in that way.
Another technology that the web offers is the chat room discussion. What can these offer to scientific publishing?
In my opinion, they don't work for two reasons. First, the people who disagree don't want to put their disagreement in the public arena and sign their name to it. And second, like the dilemma of the library browsing, the internet focuses you so much that you don't have time to get into that chat unless you have a question.
What's useful for me as a user are support forums, like those that exist for computer software. I can go [to the forum] and say, "Has anyone faced this problem? Every time I make this buffer, something precipitates." That's useful. But to comment on the primary literature in most chat rooms, it seems to me, would not be successful. It's a Catch-22—the people who have the time to do that are not the people whose opinions you want to know.
You have written a lot of editorials in your day. Why is editorializing from scientists important?
There were many years [when] I didn't read the editorials at all. But, the further along in my career I went, the more it seemed to me that the kind of wisdom I had would be better used in developing policies to improve the state of my field for those who will remain active after I am gone. That's what an editorial can do—it can call for action, it can propose a possible solution, it can elicit commentary on other proposed solutions or criticize them.
They are generally not factually focused—they are policy-focused: Given these facts, this is what needs to happen in order for progress to occur. An editorial written by somebody whom you don't know doesn't have much impact. So, if you have the name and are willing to stick your neck out, then you can make people aware of certain positions that they might not otherwise have been aware of. Because an editorial might be read by 2 million people, it's a great bully pulpit.
Over the span of your career, what changes have you seen in scientific publishing?
There are more journals, and, because of the power of the tools that have been developed, individuals are able to get better data faster. So there are more publications per active scientist per year. The modern methods we have for probing Mother Nature and for collecting data are just better than anything that has ever existed. You don't have to keep repeating experiments because the data is crystal clear and your variances between experiments are reduced. We're able to pick up signals that were buried in the noise before and gain insights that we couldn't before.
So, people are publishing more, people are doing better science, and people have less time to read the science that other people are doing.
There are also more scientists I would imagine.
I think considerably so.
What about the average number of publications per person per year?
Now, even with computers, it's hard to write more than a good paper a month. But if you have a team of good collaborators, each of them can write a paper a month, and then you can get your name on 100 papers a year.
Along with the increase in publication, has there been an increase in the desire to publish?
Yes—and to publish in the higher impact places.
Back when the first scientific journals were started in the seventeenth century, there were cases where people didn't have a desire to publish—like Isaac Newton.
Darwin is another example. He kept all his stuff to himself until somebody else came out with it.
Well, you can't get your grants renewed if you don't publish. You can't get your promotion if you don't publish in good places. Survival requires that you publish.
This is the idea people always talk about—publish or perish.
Once you're on the hierarchical ladder of an academic department, the criteria for being promoted to the next higher level require publication. So, the "perish" would come at the time of the tenure decision. You not only have to have enough publications, but enough good publications. It exists everywhere.
The concept didn't come from science departments so much as from humanities departments—where you had to write books, not papers. I am on the Board of Trustees of Washington University and I chair the equivalent of what we call our Appointments and Promotions Committee, so we are the last level to approve promotion. In art departments, you will see people whose only criterion is that they have had four shows; in the philosophy department, maybe they've written three books. Those are their publications. But people will have read those books and reviewed them in prominent places.
Send comments to: firstname.lastname@example.org