Saturday, June 22, 2013

Like Farts in the Wind

I recently heard some interesting secondhand advice from Matt Welsh, whose blog has plenty of advice/rants/raves for tenure-track junior faculty. Matt ended up leaving academia for Google, but with regard to writing grants and getting funding, he said, "Focus on writing papers, send out proposals like farts in the wind"; basically, spam granting agencies without a lot of thought.

For some reason this really bothers me. It effectively concludes that grantsmanship and actual intellectual merit cannot be accurately measured by the review process, that funding is effectively a crapshoot, and that you just need to play the game lots and lots of times in order to get funded. It's succumbing to the view that the folks reviewing your proposals won't know much about your area or understand its significance. Papers, on the other hand, tend to be reviewed by researchers closer to your area of expertise, and that is where you need to spend your time polishing and honing your message.

Now, after many rounds of NSF review, perhaps I need to stop deluding myself and conclude the same. But I think the low funding rate has a lot to do with the perception of randomness in the review process: many meritorious proposals simply cannot be funded at the current rate (roughly 20% across NSF as a whole, though it varies widely by program).

Perhaps it's time to get "gassy."

Friday, June 14, 2013

From the horse's mouth to the fly (on the wall)'s ear: phrases overheard at an NIH section meeting and what a beginning investigator can glean from them

This is a continuation of yesterday's blog post about a recent NIH study section meeting. I was there as an Early Career Reviewer -- a great opportunity to learn about the process and listen in on the discussions. While most reviewers are assigned many more, I was assigned only 4 applications to review. This gave me plenty of time to listen to what folks were saying and really pay attention to the conversation about each application being discussed. Each application gets between 10 and 30 minutes of discussion (depending on time of day, enthusiasm or discordance among reviewers, etc.). Not much can be generalized across all the applications we reviewed. However, in the R01 category, I think several general pieces of advice can be gleaned -- at least in the GVE (Genetic Variation and Evolution) context -- with regard to what works and what doesn't (FYI, R01s are those large, major awards; after you get one of these, you are no longer a "new investigator").

Note: Although the names of panelists serving on an NIH study section are made public, I am keeping the identity of the folks who uttered these phrases confidential, and I've paraphrased any comments that would reveal the application under discussion.

"What exactly is this proposal aiming to do?"
Make sure this is crystal clear from your aims, as written.  Not so good at conveying this information in prose?  Use a graphic, if you have to!  It can be helpful to sit down and actually think about every step of the project, the kind of data it will produce, and what you will need to analyze the data.  Present that in your application (with appropriate citations or letters of support, if necessary).  

"Most of my enthusiasm for this proposal comes from the papers cited therein, not from the proposal itself" and  "I could not figure out how it could be important to do X"
If you are excited about your project, find ways to engage your reader as well. In your writing, convey your excitement about your work -- this will be infectious (in a good way). Even if you are working on what you think is an important system, remember that your reviewers are coming from a relatively broad audience and they may not agree! My PhD advisor Colleen Cavanaugh always said we should aim to sell our work to our extended family -- if you can convince your uncle that he should fund you, you can convince anyone.

"This is a system I really wanted to love...but"
It's not enough to rely on the "cool" factor of your system or how sexy the topic is in the literature. Reviewers are intelligent folks with the background necessary to find -- and expose -- the gaping holes in your experimental design, background, and understanding. It may help to have a colleague read the proposal ahead of time (yes, that means writing it ahead of time).

"This is a fishing expedition without any hypothesis"
While reviewers recognize that hypothesis-generating aims are important, without any framework of expectations it is difficult to ascertain whether or not your strategy will work. It is always useful to consider the kinds of data your project would produce (even if exploratory) and how those data would be analyzed. This allows you to present potential hypotheses based on expected results. For example (and this is a purely fictional example), instead of "Describing the microbiome associated with subway seats," try "Do commuters carry their microflora to work?" Although the study is still exploratory, this framing allows a reviewer to see where you might take the project and the potential for downstream hypothesis testing.

A few final tidbits for my fellow New Investigators -- those who have never been awarded a major grant from the NIH:

Be adventurous: "safe" projects and problems may not yield the highest scores
Be enthusiastic without appearing naive: don't over-interpret the literature 
Don't be afraid to involve collaborators or consultants: unlike your well-established colleagues, you are as yet untested. Like it or not, you have much to prove. You DO NOT KNOW ALL. It is a good idea to ask for letters of collaboration or support from folks in areas where you haven't published, or for techniques you plan to learn or use for the first time. Even if you think you know what you are doing, if you don't have a proven track record (read: publication record) of doing it, you should consider a letter of support.
Proofread...and then proofread again...and then have someone else proofread: Grantsmanship, although not scored, can make reviewers angry. If they have a hard time understanding your aims because of writing issues, it can only detract from their overall impression of your application.


So get to it! New R01s are due October 5th (renewals July 5th).

Thursday, June 13, 2013

NIH is not spelled "NSF"


I've often wondered what it is like on an NIH study section, and how it differs from an NSF panel, so I happily agreed to serve this past week on my first Genetic Variation and Evolution (GVE) study section.  

What is an NIH study section like? 

Well, on the face of it, much like an NSF panel. Before you come to the meeting, you are given a set of applications to review, and, as at NSF, your review of each application is based on certain criteria that you are asked to comment on. In an NSF review, you score the proposal (Excellent, Very Good, Good, Fair, or Poor) and comment on two criteria, "Intellectual Merit" and "Broader Impacts" (with strengths and weaknesses for each). You also give a summary statement at the end of your review that should reflect your score.

For the NIH, the scored criteria are "Significance", "Investigator", "Innovation", "Approach", and "Environment". Unlike the NSF, the NIH has tried to make these scores quantitative and comparable across study sections. So, scores of 1-3 are "good" or "high" (contrary to the numerical trend), scores of 4-6 are "average" (an application may be of high importance but weaknesses bring down the overall impact, or the topic is of moderate importance), and scores of 7-9 are reserved for applications with serious problems or of low or no importance. Each of the criteria is scored this way (so you can get a sense of whether the panel didn't like the approach or found the significance lacking). You also write statements reflecting the strengths and weaknesses for each, and you write an "overall impact" section, akin to the summary statement at NSF. Finally, you provide your "overall" score; this is what will be used to rank the applications during the study section meeting.

You submit your reviews electronically, like at NSF. One big deviation from the NSF panel review is that after the deadline, you actually get to see the reviews of others and alter your own scores and reviews. This was interesting to me for two reasons: I wondered if folks would generally get bullied into worsening their scores (numerically higher, at NIH) or improving them (numerically lower). I also wondered if clearly good or clearly bad proposals would get unanimously consistent scores. The Scientific Review Officer also assumes a role at this point, guiding folks to match the text of their reviews with their scores. For example, if you wrote only negative comments, why did you give the application a "3"? Or, if you had only glowing things to say about the proposal, why did you rank it a "5"?

Then we all arrive at a hotel at a predetermined location. Like NSF panels, a relatively large group (25 of us in the case of GVE) meets in one room over a few days to discuss a large set of proposals. Unlike at NSF, NIH study sections don't discuss all applications! Let me repeat that, in case you've never applied to the NIH: if you score poorly, your application won't be discussed. The only feedback you will get is from the written reviews and those scores (which, granted, can be significant). Many colleagues have told me that if you are not discussed, it is very likely that you will never get that grant funded. That is because of the "two strikes" rule at NIH -- you can only submit the same application twice. What's the likelihood you'll go from not-discussed to discussed or funded on your one remaining submission, without the feedback a discussion actually provides? Who decides what gets discussed? The Scientific Review Officer, and this is largely based on how the applications scored. However, anyone on the study section can "rescue" an application from "triage" by asking for it to be discussed.

There's a wonderful aspect to the triaging of "bad" applications: as a reviewer, it means you don't have to rush through a huge pile of proposals, waste time and energy on clearly poor applications, or spend over two days stuck in a room away from your work/family/friends/etc. However, as an applicant, I rather like that NSF provides pretty substantial feedback even on applications that are not at the top of the heap. Although I know some NSF panels do effectively "triage" poor-scoring proposals, most applications get discussed at NSF, while a large fraction is triaged at NIH.

Ok, so now we've submitted our reviews, we've looked over the reviews of our colleagues, and we are in the conference room bright and early awaiting the start of the section. What happens?

- Application discussion order generally follows scores (best up first)
- An application to be discussed is brought up 
- Everyone takes a minute to read the abstract and aims 
- Assigned reviewers present their preliminary scores
- Each reviewer discusses strengths and weaknesses
- The application is opened up to discussion by all
- Human subjects, vertebrate animals, and biohazards are discussed
- Assigned reviewers are asked again what their scores will be (these often change after discussion). These scores provide a voting "range": if the three reviewers scored your application overall as 2, 4, and 5, then everyone on the panel can vote within that range (2-5), or they have to raise their hand and announce that they will vote out of the range (whether above or below remains private). Although you could imagine a situation where folks feel odd about voting out of range, or are somehow peer-pressured into the range, this didn't seem to be the case on the GVE panel at all. In addition, you don't have to say why you are voting out of range, simply that you are.
- Everyone casts a vote
- The budget and other non-scoring criteria are discussed

Remember: The only people reading your full proposal are those assigned to review it!!! That is, only THREE people. Unless they go out of their way to download and read the full proposal (unlikely), everyone else ONLY READS THE ABSTRACT AND AIMS!! Make those pages good, folks! Make them readable, make them stand on their own, and make them capture the imagination.

Come back tomorrow for some nuggets of advice I've gleaned from serving on this panel (albeit from a currently unfunded beginning investigator).