I read a series of commentaries a couple of weeks ago in the Dutch newspaper ‘de Volkskrant’ on the functioning of the national research council NWO. As science funding generally works the same way across countries, most of the discussion could be of interest to people outside the Netherlands too. Lamenting the lack or arbitrariness of grant allocation is a common theme and tends to be relatively boring, but thinking about how ‘the system’ could work better is always an interesting and worthwhile endeavor. Willem Trommel, Professor in Public Policy and Governance at the Free University in Amsterdam, wrote a short opinion piece outlining some suggestions for a better way of allocating research grants, which I found interesting (article in Dutch here).
Very briefly: funding is down and only a small percentage of proposals get funded. As a result, scientists spend more time writing grants and less time on actual research, often playing it safe by proposing less exciting projects (that cannot be shot down by reviewers) and getting demotivated because even very good grants have only a relatively small chance of getting funded. A single grant can be crucial to a researcher’s career: it might allow a person to start a lab, get another job (or keep a job), or retain lab members who would otherwise have to be let go because there is no money for contract renewal. There is also a downward spiral effect, as not having had a grant makes it more difficult to get the next one. In short: it is all about grants. Having a process in place that rewards the best proposals is crucial to promoting good science and necessary for the fair treatment of scientists.
Unfortunately, I have lost the link, but there is evidence that grant reviewers are pretty good at sorting the crappy proposals from the pretty good (top 20%) ones. However, this is not so relevant, as there is only money for a few of the very good grants in the pool anyway. What matters is: are reviewers good at ranking excellent research proposals? Differences in the valuations of excellent grants become increasingly small and increasingly prone to all kinds of biases. Whether a grant gets funded could depend on the opinion of just one of three reviewers (it could be rated excellent, say an A+, A+, A rating, or merely very good, say an A+, A+, B rating).
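To make that knife-edge concrete, here is a minimal sketch in Python. The grade-to-number mapping and the funding cutoff are entirely hypothetical (real panels use their own scales), but it shows how a single reviewer switching from A to B can push an otherwise identical proposal below the pay line.

```python
# Hypothetical grade scale and pay line, purely illustrative.
GRADE = {"A+": 9, "A": 8, "B": 7}
CUTOFF = 8.5  # assumed funding cutoff on the same scale

def mean_score(grades):
    """Average the three reviewers' grades on the numeric scale."""
    return sum(GRADE[g] for g in grades) / len(grades)

excellent = mean_score(["A+", "A+", "A"])  # two A+ and one A
very_good = mean_score(["A+", "A+", "B"])  # identical, except one reviewer gave a B

# One reviewer's single-step difference decides funded vs. not funded.
print(excellent >= CUTOFF, very_good >= CUTOFF)
```

Under these made-up numbers the first proposal averages about 8.67 and clears the cutoff, while the second averages about 8.33 and does not, even though two of the three reviewers rated them identically.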
Grant writers might not write perfect grants, but grant reviewers might not write perfect grant reviews either. Reviewers sometimes misunderstand an experiment or an interpretation. They might not be a fan of a particular research area. They might want to see research that is safer. They might want to see research that is more daring. They might have a grant deadline themselves and not take enough time to properly read and assess a grant. More sinisterly, they might actively dislike the grant proposer, or worry about competition, and slightly, but significantly, mark a grant down. Even if I am exaggerating, and grant reviewers are generally unbiased, knowledgeable, unrushed and fair, there is evidence that they are still not very good at their job. (Remember, grant writers and grant reviewers are generally the same pool of people, so I am not specifically dissing grant reviewers but scientists in general.) A study by clinical scientists, eloquently discussed in a post by Jalees Rehman at his SciLogs blog, measured whether there was a good positive correlation between the ranking of grant proposals that had been awarded and the scientific impact of the papers coming out of those grants. The conclusion was that reviewer ranking was not a predictor of subsequent scientific success.
Trommel proposes an initial reviewing round to separate the top grants from the not-so-top and outright bad ones, and then a lottery to decide which of the top grants get funded. I think this is a brilliant idea for several reasons. First, it saves money and time; researchers lose time not only writing grants but also assessing them. It would also speed up the funding process. Second, I reckon that, in general, researchers would be less disappointed at having written a top grant that did not get funded through random bad luck than through a *&%^%£ reviewer wrongfully shooting it down. You would still be able to report back to the university that you were in the top bracket as a measure of scientific excellence (although note that this could also be done in the current system). Third, this system would promote more daring research proposals (a very good proposal could not be downweighted by risk-averse reviewers). Fourth, there would be fewer other forms of personal reviewer bias. The one negative aspect I can think of is that doing away with a more detailed reviewing process would minimize useful reviewer criticism, because, let’s be fair, there are plenty of constructive reviews too.
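The two-stage scheme can be sketched in a few lines of Python. Everything here is illustrative (the score scale, the triage threshold and the budget are my assumptions, not anything from Trommel's piece), but it shows the shape of the mechanism: reviewers only decide who enters the pool, and chance decides the rest.

```python
import random

def allocate(proposals, scores, threshold, budget, seed=None):
    """Two-stage allocation sketch: triage by review score, then lottery.

    proposals and scores are parallel lists; threshold is the minimum
    score needed to enter the lottery pool; budget is how many grants
    there is money for. All names and values are illustrative.
    """
    # Stage 1: reviewers separate the top grants from the rest.
    pool = [p for p, s in zip(proposals, scores) if s >= threshold]
    # Stage 2: a lottery picks the winners from the top pool.
    rng = random.Random(seed)
    rng.shuffle(pool)
    return sorted(pool[:budget])

# Example: six proposals, four clear the triage threshold, money for two.
funded = allocate(["a", "b", "c", "d", "e", "f"],
                  [9.1, 8.7, 6.0, 9.4, 8.9, 5.2],
                  threshold=8.5, budget=2, seed=42)
print(funded)
```

Note that in this sketch the fine-grained ranking among the four qualifying proposals simply does not exist: any of them is equally likely to be drawn, which is exactly the point of the proposal.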
For a discussion of some other ways of potentially improving the grant allocation system, see this post on the Dynamic Ecology blog by Brian McGill. Note that I have not searched for earlier (and probably better) posts on this same subject; please let me know of any relevant posts and I will link to them. I am interested to hear what you think!