NIH Extramural Research & Funding News

NIH Extramural Nexus

Working with Human Subjects? New Human Subjects System Replaces NIH’s Inclusion Management System

May 21, 2018 - 4:26pm

As of June 9, 2018, a new Human Subjects System will replace the Inclusion Management System (IMS) currently used for reporting participant sex/gender, race, and ethnicity information for NIH grants. The new system consolidates human subject information submitted in applications and progress reports and will be used for all human subject-related post-submission updates as of its release on June 9.

Recipients will be able to access the system through the Human Subjects link available both on the eRA Commons Status page and in the RPPR. Investigators and signing officials will be able to make study updates or corrections (including just-in-time or off-cycle updates) through the new system.

Be prepared! Complete any in-progress enrollment records to ensure their migration to the new system. Records that have not been submitted by June 8, 2018 will not be available in the Human Subjects System and will need to be re-entered.

The new system also supports submission of participant-level sex/gender, race, ethnicity and age data using a CSV file to populate the Inclusion Enrollment Report.
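For teams preparing such an upload, here is a minimal sketch of generating a participant-level CSV. The column names and values are purely illustrative; the actual headers and accepted values are defined by NIH's inclusion enrollment template, so consult the official format before submitting.

```python
import csv
import io

# Hypothetical participant-level records. The field names below are
# illustrative only; the real column headers and accepted values are
# defined by the NIH inclusion enrollment CSV template.
participants = [
    {"Race": "Asian", "Ethnicity": "Not Hispanic or Latino",
     "Gender": "Female", "Age": "34", "Age Unit": "Years"},
    {"Race": "White", "Ethnicity": "Hispanic or Latino",
     "Gender": "Male", "Age": "41", "Age Unit": "Years"},
]

# Write the records to CSV text that could be saved and uploaded.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=list(participants[0]))
writer.writeheader()
writer.writerows(participants)
csv_text = buffer.getvalue()
print(csv_text)
```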

Learn more about the transition to the Human Subjects System by reading the guide notice, checking out the website, or by watching a video or viewing other resources on our system training page.

Categories: NIH

Now Available: Delegate Tasks When Working on Interim or Final RPPRs

May 21, 2018 - 2:36pm

The interim Research Performance Progress Report (I-RPPR) and final RPPR (F-RPPR) are submitted online through eRA Commons in the same format as the annual RPPR. We have often been asked whether work on the I-RPPR and/or F-RPPR can be delegated. We are happy to announce that, yes, it is now possible to delegate work on the I-RPPR and F-RPPR to anyone with the Assistant (ASST) role.

 

Read more in the eRA Items of Interest.  Want more updates on NIH electronic submission and eRA Commons straight to your mailbox? Subscribe today!


NIH Announces Stipend and Benefit Increases for National Research Service Award Recipients

May 16, 2018 - 7:44am

We are pleased to announce that stipends will be increased for those supported by Ruth L. Kirschstein National Research Service Awards (NRSAs). As a result, approximately 15,000 NRSA training grant appointees and fellows, spanning career stages from undergraduate students through postdoctoral researchers, will receive a two percent stipend increase for Fiscal Year 2018. Please see the recently released NIH Guide Notice NOT-OD-18-175 for the specific new stipend levels.

Advisory groups at NIH, including those focused on physician-scientists, recommended that NIH adopt a practice of regular stipend increases. They also recognized that postdoctoral researchers on research project grants, such as R01 awards, typically receive better benefits than postdoctoral trainees or fellows receiving NRSA support. It was also noted that inadequate benefits may deter postdocs from applying for fellowships or accepting a slot on a training grant.

In addition to the stipend increases for postdocs, NIH will support increases in Training Related Expenses provided with Institutional Training grants. Institutional Allowances awarded with fellowships will also be increased to support enhanced benefits, particularly health insurance.

To this end, we have noted recommendations from the National Academies of Sciences, Engineering, and Medicine regarding NRSA stipend increases for post-docs. We are currently reviewing these and other recommendations from their recently released report which recommends ways in which NIH may continue enhancing the future of the biomedical workforce.

Please continue to monitor the NIH Research Training website for more information as it becomes available on this and related policies.


Reach Out to NIH Staff – We’re Here to Help

May 9, 2018 - 3:25pm

We had the pleasure of interacting with over 900 applicants and grantees at last week’s NIH Regional Seminar on Program Funding and Grants Administration in Washington, DC. A recurring theme in many presentations was the importance of reaching out to NIH staff throughout the grant application and award process.

Most folks know to call the eRA Service Desk when they run into issues with ASSIST or eRA Commons. But do you know where to go for other support? The best people to talk with about the scientific or administrative information in your particular application or award are in the NIH Institute or Center that may fund the grant. Our resource on Contacting Staff at the NIH Institutes and Centers will help you understand the roles of NIH program officials, scientific review officers, and grants management officials; when to contact them; and where to find their contact information.


The Issue that Keeps Us Awake at Night

May 4, 2018 - 11:49am

The most important resource for the successful future of biomedical research is not buildings, instruments, or new technologies; it is the scientists doing the work. By now, it is no longer news that biomedical researchers are stressed, stressed by a hypercompetitive environment that is particularly destructive for early- and mid-career investigators. Yet those are the researchers who, if we don’t lose them, will comprise the next generation of leaders and visionaries.

Almost 10 years ago, the National Institutes of Health (NIH) took steps to improve funding opportunities for “early stage investigators,” those who were 10 years or less from their terminal research degree or clinical training. Those steps helped, but many stakeholders have concluded that more is needed. Those stakeholders include members of Congress, who included a “Next Generation Researchers Initiative” (NGRI) in the 2016 21st Century Cures Act. The act asked NIH to support a comprehensive study by the National Academies of Sciences, Engineering, and Medicine (NASEM) on policies affecting the next generation of researchers and to take into consideration the recommendations made in their report. The National Academies began the study in early 2017 and completed it in April 2018.

Meanwhile, NIH has initiated steps to fund more early stage investigators and to improve opportunities for stable funding among investigators who, though funded, still faced unstable prospects. NIH also convened a special Advisory Committee to the Director (ACD) Working Group focused on the Next Generation Researchers Initiative, with members drawn from all career stages, from graduate student through senior faculty.

The NASEM NGRI panel recently released a long-awaited report, “The Next Generation of Biomedical and Behavioral Sciences Researchers: Breaking Through.” The report includes a detailed summary of previous reports and their recommendations, along with a data-driven description of the biomedical research workforce “landscape.” The report offers a number of recommendations that deserve close attention. The NASEM report presents its assessment and recommendations within a multi-actor systems context: “Many stakeholders tend to hold the federal government responsible for this system, placing blame for failures at the feet of NIH, the principal funder of biomedical research. Doing so, however, obscures the important role that other organizations, particularly universities, must play in developing and implementing solutions.” We welcome the chance to work with other stakeholders to find those solutions.

The NASEM panel also calls for greater degrees of data transparency and communications from all stakeholders. It notes that “a lack of comprehensive and easily available data about the biomedical research system itself has impaired progress.”  Therefore, “biomedical research institutions should collect, analyze, and disseminate comprehensive data on outcomes, demographics, and career aspirations of biomedical pre- and postdoctoral researchers using common standards and definitions.” Last December, in a welcome development, the recently formed “Coalition for Next Generation Life Science” announced that 10 major institutions would disseminate data that would help students and early-career researchers make better-informed decisions. These data include information on admissions, enrollment, degree completion rates and time, time spent in post-doctoral research fellowships, and jobs held by former graduate students and postdoctoral researchers.

Like the NASEM NGRI committee members, the ACD Working Group on NGRI is thinking in a systems-oriented, data-driven manner. The Working Group is also wrestling with the issue that keeps us awake at night: how to make well-informed strategic investment decisions to nurture and further diversify the biomedical research workforce in an environment filled with high-stakes opportunity costs. If we are going to support more promising early career investigators, and if we are going to nurture meritorious, productive mid-career investigators by stabilizing their funding streams, the money will have to come from somewhere. That will likely mean some belt-tightening in other quarters, which is rarely welcomed by those whose belts are being taken in by a notch or two.

The NIH looks forward to integrating the recommendations of the NASEM NGRI report with the preliminary recommendations of the ACD NGRI Working Group in June, and their final report in December. We pledge to do everything we can to incorporate those recommendations, along with those of the NASEM panel, in our ongoing efforts to design, test, implement, and evaluate policies that will assure the success of the next generation of talented biomedical researchers.


Open Mike Perspective: Healthy Skepticism when Focusing Solely on Surrogate Endpoints in Clinical Research

May 3, 2018 - 5:29am

I recently wrote an essay for the NIH’s Science, Health, and Public Trust series to encourage a healthy bit of skepticism about clinical studies that solely involve surrogate endpoints (e.g., changes in “biomarkers” such as blood cholesterol levels or findings on an electrocardiogram).

To make my point, I described experiences with a well-known cardiovascular trial, one that focused on the risk of sudden death among heart attack survivors. In the essay, I encourage readers not to assume that treating a surrogate endpoint will automatically treat the underlying condition. Sometimes it does. But sometimes it may not, and it may even cause more harm than good.

As I mention in the essay, one way to deal with this problem is to conduct more trials that focus on clinical endpoints. We should consider conducting them in innovative, efficient, cost-effective ways, such as using pre-existing large-scale databases or “registries.” I am open to hearing other thoughts and approaches with the potential to address this challenge.

The NIH’s Science, Health, and Public Trust series is intended to provide perspectives, tools, and resources to improve the quality and usefulness of information about science and health for the public. It seeks to share strategies and best practices that might contribute to public understanding of the nature of biomedical research and its role in health.


Looking for an NIH Program Official in Your Research Area?

April 16, 2018 - 12:03pm

For years researchers have used the Matchmaker feature in NIH RePORTER to identify NIH-funded projects similar to their supplied abstracts, research bios, or other scientific text. Matchmaker was recently enhanced to make it just as easy to identify NIH program officials whose portfolios include projects in your research area.

After entering your scientific text (up to 15,000 characters), Matchmaker will analyze the key terms and concepts to identify up to 500 similar projects. Those projects will continue to show on the Projects tab with handy charts to visualize the results and quickly filter identified projects by Institute/Center, Activity Code, and Study Section. A new Program Official tab identifies the program officials associated with the matched projects and includes its own filters for Institute/Center and Activity Code. From the list of program officials you are one click away from their contact information and matched projects in their portfolios. Never before has it been so easy to answer the question “Who at NIH can I talk to about my research?”


“Cover Letters and their Appropriate Use” Podcast Now Available

April 16, 2018 - 11:52am

Ever wonder what you should and shouldn’t put in a grant application cover letter? Dr. Cathleen Cooper, director of the Division of Receipt and Referral in NIH’s Center for Scientific Review, explains just that in the latest addition to our “All About Grants”  podcast series – “Cover Letters and Their Appropriate Use” (MP3, Transcript).

All About Grants podcast episodes are produced by the NIH Office of Extramural Research and designed for investigators, fellows, students, research administrators, and others curious about the application and award process. The podcast features NIH staff members who talk about the ins and outs of NIH funding and provide insights on grant topics from those who live and breathe the information. Listen to more episodes via the All About Grants podcast page, through iTunes, or by using our RSS feed in your podcast app of choice.


Impact of Teams Receiving NIH Funding

April 4, 2018 - 12:31pm

Almost 11 years ago, Stefan Wuchty, Benjamin Jones, and Brian Uzzi (all of Northwestern University) published an article in Science, “The Increasing Dominance of Teams in Production of Knowledge.” They analyzed nearly 20 million papers published over five decades, along with 2.1 million patents, and found that across all fields the number of authors per paper (or patent) steadily increased, that teams were coming to dominate individual efforts, and that teams produced more highly cited research.

In a Science review paper published a few weeks ago, Santo Fortunato and colleagues offered an overview of the “Science of Science.”  One of their key messages was that “Research is shifting to teams, so engaging in collaboration is beneficial.”

I thought it would be worth exploring this concept further using NIH grants. For this post, data were acquired using iSearch, an NIH portfolio analysis tool. This platform provides easy access to carefully curated, extensively linked datasets of global grants, patents, publications, clinical trials, and approved drugs.

One way of measuring team size is to count the number of co-authors on published papers. Figure 1 shows box-and-whisker plots of author counts for 1,799,830 NIH-supported papers published between 1995 and 2017.  The black diamonds represent the means.  We can see from these data that the author counts on publications resulting from NIH support have steadily increased over time (mean from 4.2 to 7.4, median from 4 to 6).

Figure 2 shows corresponding data for 765,851 papers that were supported only by research (R) grants; in other words, none cited support from program project (P), cooperative agreement (U), career development (K), training (T), or fellowship (F) awards. We see a similar pattern in which author counts have increased over time (mean from 4.0 to 6.2, median from 4 to 5). Also of note is a drift of the mean away from the median, reflecting an increasingly skewed distribution driven by a subset of papers with large numbers of authors.
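The gap between mean and median can be reproduced with a toy example. This sketch (using made-up author counts, not the NIH data) shows how a handful of large-team papers pulls the mean well above the median in a right-skewed distribution:

```python
from statistics import mean, median

# Toy author counts: mostly small teams plus a few very large consortia.
# Not real NIH data; purely for illustrating skew.
author_counts = [3, 4, 4, 5, 5, 6, 6, 7, 8, 40, 120]

print(mean(author_counts))    # pulled upward by the large-team papers (~18.9)
print(median(author_counts))  # robust to the outliers (6)
```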

Next, let’s look at corresponding data for papers that received support from at least one P grant (N=498,790) or at least one U grant (N=216,600) in Figures 3 and 4 respectively. As we can see, there are similar patterns emerging that were seen for R awards.

Figure 5 focuses on 277,330 R-, P-, or U-supported papers published between 2015 and 2017 and shows author counts for papers supported by R grants only (49%), P grants only (11%), U grants only (8%), R and P grants (16%), R and U grants (7%), and P and U grants (9%). The patterns are not surprising: author counts are higher for papers supported by P and U grants, likely because these are large, multi-component activities inherently involving many researchers. Even among papers supported only by R grants, though, the clear majority involve multiple authors.

Finally, in Figure 6 we show a scatter plot (with a generalized additive model smoother) of relative citation ratio (RCR) according to author count for NIH-supported papers published in 2010. As a reminder, RCR is a metric that uses citation rates to measure influence at the article level. Consistent with previous literature, an increased author count is associated with higher citation influence; in other words, the more authors on a paper, the more likely it is to be influential in its field.

Summarizing these findings:

  • Consistent with prior literature, we see that NIH-funded extramural research, including research funded by R grants, produces mostly multi-author papers, with increasing numbers of authors per paper over time. These findings are consistent with the growing importance of team science.
  • Mechanisms designed to promote larger-scale team science (mainly P and U grants) generate papers with greater numbers of authors.
  • Greater numbers of authors are associated with greater citation influence.

It is important to understand that, even in this competitive funding environment, research is shifting to teams. And when we look more closely at the impact of the shift, we see that collaboration is proving to move science forward in important ways. How big should teams be? Some recent literature suggests that small teams are more likely than large teams to produce disruptive papers. A few years ago, my colleagues published a paper on the NIH-funded research workforce; they found that the average team size was 6. Is this optimal? We don’t know.

There is much more for us to look at in terms of the role of team science in NIH supported research. In the meantime, it’s great to see more confirmation that scientific collaboration is truly beneficial to moving science forward.


Do Reviewers Read References? And If So, Does It Impact Their Scores?

March 30, 2018 - 9:33am

In March 2017, we wrote about federal funders’ policies on interim research products, including preprints. We encouraged applicants and awardees to include citations to preprints in their grant applications and progress reports. Some of your feedback pointed to the potential impact of this new policy on the peer review process.

Some issues will take a while to explore as preprints become more prevalent. But some we can dig into immediately. For example, how do references cited in an application impact review? To start to address this question, we considered another one as well: do peer reviewers look at references, either those cited by applicants or others, while evaluating an application? We had heard anecdotes, ranging from “Yes, I always do” to “No, I don’t need to,” but we didn’t have data one way or the other. And if reviewers do check references, how does it impact their understanding and scoring of an application?

So, together with colleagues from the NIH Center for Scientific Review (CSR), we reached out to 1,000 randomly selected CSR reviewers who handled applications for the January 2018 Council Round. An equal number of chartered (i.e., permanent) and temporary reviewers (n=500 each) were solicited to participate over a three-week period, from November 16 to December 8, 2017.

Our survey focused on the last grant where they served as primary reviewer. Specifically, we asked if they looked up any references that were either included in the application (i.e. internal references), and if they also looked up any that were not included in the application (i.e. external references). Depending on their answers to each of these questions, we also proceeded to ask certain respondents follow-up questions to better understand their initial feedback. We felt it would be interesting to know, for example, how reading the paper or abstract impacted their understanding of the application and their score.

We received 615 responses (62% of total), including 306 chartered members and 309 temporary members.  Figure 1 shows the responses related to if they looked up references, either internal or external to the application.  Most reviewers answered yes – particularly for internal references.

Figure 2 goes a bit deeper. As a secondary question, we asked whether the references affected reviewers’ understanding of the applications. The clear majority said yes: Figure 2 shows that most reviewers (~85%) found the references improved their understanding.

Next, we learned that of those reviewers that checked references, about 2/3 reported that the references affected their scoring for the application (Figure 3). References reviewers found on their own (external references) seemed slightly more influential.  Figure 4 shows references could impact the score in either direction.  References cited in the application were slightly more likely to improve scores than worsen them, and external references were slightly more likely to make scores worse than improve them.

Nearly half of the respondents even provided additional comments for us to consider.  Here is a sampling of their thoughts:

  • “References are of immense value.”
  •  “I look up references to judge the quality of the [principal investigator’s] work in relation to the rest of the field, to learn about the field in general, and to delve into specific questions that might be key to evaluation of the application.  This could result in changes to the score in either direction.”
  • “References are useful and sometimes critical.”

This experience was very enlightening. We were pleased to learn that most reviewers do look up references as part of their work in the peer review process, though preprints, at least for now, are too rarely cited in applications to have a clear impact. Further, both chartered and temporary reviewers shared similar perspectives on looking up references, which they noted often affects their understanding of the applications and the resulting scores. Finally, they indicated that references internal to applications often lead reviewers to improve their scores. We may need to revisit this survey as preprints and other interim products become more common.

Overall, this survey demonstrates, yet again, the time and care NIH reviewers spend on applications. They work hard for all of us: NIH, applicants, and the American public. I am personally grateful to all of them.

I would like to acknowledge Neil Thakur of the NIH Office of Extramural Research, as well as Mary Ann Guadagno, Leo Wu, Huong Tran, Cheng Zhang, Lin Yang, Chuck Dumais, and Richard Nakamura of the NIH Center for Scientific Review, for their work on this project.


Celebrating Women’s History Month: Scientist Spotlight

March 27, 2018 - 8:07am

Women’s History Month quiz question (and no “Googling” allowed): Who was Joan Procter?

I didn’t know either until a few months ago, when I learned that my colleague, Dr. Patricia Valdez, wrote a children’s book called “Joan Procter, Dragon Doctor.” Alfred A. Knopf published Patricia’s book, illustrated by Felicita Sala, a few weeks ago, on March 13, 2018. Critics have already acclaimed the work: Publishers Weekly, in a starred review, wrote, “Valdez paints a portrait of a unique woman whose love for reptiles developed into a gratifying career.”

So, who was Joan Procter? She was born in London in 1897 and had a rather unusual childhood. By age 10 she had developed a fascination with reptiles: she read voraciously about them and kept a pet lizard. At age 16 she brought a pet crocodile to school. And when she was ready to graduate from high school, she went to work for Dr. George Boulenger, a curator at the British Museum. Intestinal ailments prevented her from going to college, but fortunately Dr. Boulenger recognized her dedication and genius, taking her under his wing.

Procter presented her first scientific paper by age 19 (later published as Procter JB, On the variations of the pit viper Lachesis atrox. Proc Zool Soc London 1918:163-182), and in her early 20s she took over as curator when Dr. Boulenger retired. Later she became reptile curator at the London Zoo, where she oversaw the building of a new reptile house and conducted internationally recognized scientific work on reptile and amphibian taxonomy. She developed many innovative veterinary procedures.

Perhaps she is best known for her work with two Komodo Dragons — Sumba and Sumbawa, the first live dragons to come to a European zoo.  She demonstrated that in many respects these “fierce lizards” could be quite gentle; she was famous for taking Sumbawa to tea parties with children, to scientific conferences, and on walks around the zoo, steering him with his tail.

Sadly, Procter died at age 34 of her intestinal ailments.  Shortly before she passed away, the University of Chicago granted her an honorary Doctor of Science.

I thoroughly enjoyed Patricia’s book – and I already know of a number of children and parents in my neighborhood who have too. This book is the first of what may well be a series by Dr. Valdez highlighting the achievements of female scientists. I certainly hope the follow-on books come to fruition. We can all benefit from hearing these stories, and inspiring our children to fulfill their dreams.


Make Your Voice Heard! We Want Your Ideas to Reduce Administrative Burden in Research with Laboratory Animals

March 15, 2018 - 8:27am

NIH has for many years been concerned about the increasing burden researchers face in applying for, reporting on, and complying with requirements of federally funded research grants; indeed, it is called out in our strategic plan as an area to address. Today, as we continue to implement the 21st Century Cures Act, NIH is requesting public feedback on some proposed approaches to reduce administrative burden on investigators’ use of laboratory animals in biomedical research (NOT-OD-18-152 and Federal Register Notice 2018-05173). Together with our colleagues at the U.S. Department of Agriculture (USDA) and the Food and Drug Administration (FDA), we are looking for constructive and thoughtful feedback on this topic from individuals, research institutions, professional societies, animal advocacy organizations, and other interested parties. Input will be accepted electronically during a 90-day comment period, through June 12, 2018.

Through your participation, we hope to gain insights into how we can best improve the coordination and harmonization of regulations and policies with respect to research with laboratory animals. This call will help shed further light on where the community feels that regulations and policies are inconsistent, overlapping, or unnecessarily duplicative.

Using animals in research is critical to scientific understanding of biomedical systems leading to useful drugs, therapies, and cures. It is important to note that, even as we strive to identify ways to reduce administrative burden on our supported investigators, we simultaneously aim to maintain the highest standards of integrity and credibility within the biomedical research enterprise. This further extends to NIH continuing to ensure the greatest commitment to the welfare of laboratory animals involved in our supported research endeavors.

As part of examining existing regulations, staff within NIH, USDA, and FDA have conducted listening sessions on the topic and diligently reviewed published materials aimed at reducing the burden investigators face. Such resources span findings from a workshop held last April, National Academies of Sciences recommendations from 2016, the National Science Board’s considerations from 2014, and a survey about faculty workload published in 2012.

Ideas have been collected and analyzed for their relationship to existing statutes, regulations, and policies, and as potential approaches to implement in support of the 21st Century Cures Act requirement to reduce regulatory burden on investigators in their use of animals. Some examples include:

  • Allow investigators to submit protocols for Institutional Animal Care and Use Committee continuing review using a risk-based methodology.
  • Allow institutional annual reporting to the NIH Office of Laboratory Animal Welfare (OLAW) and USDA on the same reporting schedule and as a single report through a shared portal.
  • Harmonize the guidance from NIH and USDA to reduce duplicative considerations of alternatives to painful and distressful procedures.
  • Provide a minimum 60-day comment period for new OLAW policy guidance.

We hope to hear from you during this process. Insights from the community are critical to helping us refine the final recommendations and implementation plans and to ensuring that they appropriately reduce administrative burden while maintaining our long-standing commitment to the humane care and use of animals in research.


Principal Investigators, Delegate!

March 12, 2018 - 1:26pm

Did you know that eRA Commons allows principal investigators to grant others at their institution permission to help with some grants administration tasks? You might want to consider whether delegating any or all of these tasks is right for you.

All you need is another Commons user with the right role. Learn how!


How Many Researchers, Revisited: A Look at Cumulative Investigator Funding Rates

March 7, 2018 - 11:57am

In May 2016, we posted a blog on “How Many Researchers” NIH supports.  We cited the findings of a University of Wisconsin workshop, which concluded that the biomedical research enterprise suffers from two core problems: too many scientists vying for too few dollars and too many post-docs seeking too few faculty positions.  We also noted that NIH leadership and others were increasingly interested in describing the agency’s portfolio not only in terms of the numbers of awards and dollars (as we do each year in our “By the Numbers” reports), but also in terms of the numbers of researchers those awards support.  Today we show updated figures on how many researchers are vying for NIH support and how many are successful.

Figure 1 shows the number of unique applicants and awardees for Research Project Grants (RPGs).  The green (triangle) and red (circle) lines (and points) refer to unique people who applied for and received funding as a principal investigator on at least one RPG.  As before, the number of applicants is based on a 5-year window – that is, the number of applicants seeking funding in 2017 includes scientists who submitted applications in 2017, 2016, 2015, 2014, and 2013.  We see that the number of applicants increased substantially from 2003 (when the NIH doubling ended) to 2015, but since that time has slightly decreased (from a peak of 87,885 in 2015 to 87,567 in 2017).

The number of awardees increased, particularly in the past 3 years (see Figure 2; from 27,761 in 2015 to 29,835 in 2017). As before, we used these two variables to generate a “cumulative investigator rate”: the ratio of unique awardees in any given year to the number of unique applicants over the trailing 5-year window. This allows us to capture a broader view of NIH-supported scientists seeking funding over a window of time. The cumulative investigator rate has increased since 2014, from 31.3% to 34.1%, as shown in Figure 1 (blue line with squares) and Figure 3, reversing a long-standing downward trend, but it remains much lower than in 2003, when it was 44.9%.
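The calculation described above can be sketched in a few lines. This toy example (with made-up investigator IDs, not actual NIH data) pools unique applicants over the trailing 5-year window and divides the year's unique awardees by that pool:

```python
# Made-up applicant IDs per fiscal year; each set holds unique investigators.
applicants_by_year = {
    2013: {"a", "b", "c"},
    2014: {"b", "d"},
    2015: {"c", "e", "f"},
    2016: {"a", "g"},
    2017: {"b", "h"},
}
# Made-up unique awardees in the target year.
awardees_2017 = {"b", "c", "g"}

# Cumulative investigator rate: unique awardees in 2017 divided by
# unique applicants over the 2013-2017 window.
window = set().union(*(applicants_by_year[y] for y in range(2013, 2018)))
rate = len(awardees_2017) / len(window)
print(f"{rate:.1%}")  # 3 awardees / 8 unique applicants = 37.5%
```

The key design point is deduplication: an investigator who applies in several years of the window counts once in the denominator, which is what makes this a person-based metric rather than an application-based one.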

Figure 4 shows corresponding data for R01 equivalents (R01, R37, RF1, and – for FY 2017 only – DP2 awards).  Since 2015, the number of unique R01 awardees increased from 20,726 to 22,100, only slightly higher than in 2003, when it was 21,447.  The number of unique R01 applicants has declined slightly, from a peak of 59,926 in 2014 to 58,669 in 2017, but remains much higher than in 2003, when it was 46,166.  The 5-year cumulative investigator rate increased from 34.8% in 2014 to 37.6% in 2017, again reversing a long-standing trend, but still much lower than in 2003, when it was 46.5%.

Figures 5 and 6 show corresponding data for R21 and P01 awards.  For R21 awards, the patterns are similar to those for R01 equivalents, though the cumulative investigator rates are much lower (11.6% for R21s compared to 37.6% for R01 equivalents). The numbers of P01 applicants and awardees continue to decline (Figure 6).

An important caveat is that the data shown in these figures focus on principal investigators, not all scientists and personnel involved with NIH research awards.  Those data are more difficult to capture, though we have reported on those previously and anticipate taking another look before too long.

Looking at these data, it appears that:

  • We are still very much in a state of hyper-competition, though the severity may be a bit less than it was a few years ago.
  • The number of unique applicants appears to be stabilizing after many years of uninterrupted growth.  This may reflect a previously described decline in the number of postdoctoral fellows.

The cumulative investigator rate, a person-based metric that looks at how many individual investigators are seeking funding over 5-year windows, has reversed its longstanding decline.  This recent improvement (though the rate is still far lower than it was at the end of the NIH doubling) may reflect increased extramural research budgets as well as increased interest in programs that focus on funding and supporting a larger number of independent investigators.

Categories: NIH

FY 2017 By the Numbers

March 7, 2018 - 9:43am

We recently released our annual web reports, success rates, and the NIH Data Book with updated numbers for fiscal year 2017. Looking at data across both competing and non-competing awards, NIH supports approximately 2,500 organizations.  In 2017, about 640 of these organizations received funding for competing Research Project Grants (RPGs), involving over 11,000 principal investigators.

The average size of RPGs increased by over 4%, from $499,221 in FY 2016 to $520,429 in FY 2017. Similarly, the average size of R01-equivalent awards increased by over 5%, from $458,287 to $482,395.
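Those percentage increases follow directly from the averages; a quick arithmetic check (the helper name here is ours, for illustration):

```python
# Percent increase in average award size, FY 2016 -> FY 2017 (figures from the post).
def pct_increase(old: float, new: float) -> float:
    return (new - old) / old * 100

print(f"RPGs: {pct_increase(499_221, 520_429):.1f}%")             # → 4.2%
print(f"R01-equivalents: {pct_increase(458_287, 482_395):.1f}%")  # → 5.3%
```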

As we continue to monitor our annual progress in supporting more research, we see a noticeable shift in 2017, driven by an increase in the cost of research from commitments in competing and non-competing grants and by a redistribution of funds due to special initiatives.  A total of 54,005 competing RPG applications were reviewed in 2017, a slight decrease (0.4%) from fiscal year 2016.  Of these, 31,221 were for R01-equivalent grants (as a reminder, R01-equivalents are mostly R01s, but also include activity codes for similar independent RPG programs, such as the R37 MERIT award).

We made 10,123 competing research project grant (RPG) awards, a 2.4% decrease from fiscal year 2016, but we funded more RPG dollars in fiscal year 2017, as reflected in the increase in average RPG size. Though fewer R01 grants were awarded in fiscal year 2017, R01 funding increased by over $18 million compared to the prior fiscal year. We awarded more than $29 million more through the High Priority, Short-Term Project Award (R56), a mechanism that provides interim support to meritorious applications pending an R01 award. The RF1, a Multi-Year Funded RPG program, was used significantly more in FY 2017: funding increased by nearly $217 million, more than twice the amount we funded in 2016, a large part of which comprised NIA’s funding for Alzheimer’s research. For the R61 activity code, launched in fiscal year 2016 in lieu of the R21 for exploratory and/or developmental research, we spent almost $10 million more than in the prior fiscal year. We also awarded over $225 million more for a few other RPG activity codes, including U01, U19, UG3, and UC4, which support investigator-specific research interests, exploratory and/or developmental research, high-impact research, and research infrastructure cooperative agreements. These additional funds included some toward NCI’s Cancer Moonshot initiative.

The success rate for competing FY 2017 RPG applications was 18.7%, compared to 19.1% in FY 2016. The FY 2017 success rate for competing R01-equivalent applications was also slightly lower than the prior year (19.3%, compared with 19.96% in FY 2016). Success rates remain far below the 30% levels we saw 15–20 years ago, during the NIH doubling; these low success rates reflect the hypercompetitive environment we continue to face.  For those interested, you can also review success rate data over the years on NIH RePORT, by NIH funding Institute or Center or by grant type and activity code.
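As a sanity check, a success rate here is essentially competing awards divided by applications reviewed in the same fiscal year (a simplification of NIH's official definition, which also adjusts for resubmissions); the FY 2017 counts reported in this post reproduce the published rates:

```python
# Success rate ≈ competing awards / applications reviewed, same fiscal year.
# (NIH's official calculation also adjusts for within-year resubmissions.)
def success_rate(awards: int, applications: int) -> float:
    return awards / applications

print(f"RPGs: {success_rate(10_123, 54_005):.1%}")            # → 18.7%
print(f"R01-equivalents: {success_rate(6_041, 31_221):.1%}")  # → 19.3%
```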

The table below highlights these and some additional numbers from the 2017 fiscal year as well the two prior fiscal years.

 

|  | 2014 | 2015 | 2016 | 2017 |
| --- | --- | --- | --- | --- |
| Research Project Grants |  |  |  |  |
| Number of research project grant (RPG) applications | 51,073 | 52,190 | 54,220 | 54,005 |
| Number of new or renewal (competing) RPG awards | 9,241 | 9,540 | 10,372 | 10,123 |
| Success rate of RPG applications | 18.1% | 18.3% | 19.1% | 18.7% |
| Average size of RPGs | $472,827 | $477,786 | $499,221 | $520,429 |
| Total NIH funding to RPGs (competing and non-competing) | $15,635,912,476 | $15,862,012,059 | $17,137,754,907 | $18,321,187,243 |
| R01-equivalents |  |  |  |  |
| Number of R01-equivalent grant applications | 27,502 | 28,970 | 30,106 | 31,221 |
| Number of new or renewal (competing) R01-equivalent awards | 5,163 | 5,467 | 6,010 | 6,041 |
| Success rates for R01-equivalent applications | 18.8% | 18.9% | 19.96% | 19.3% |
| Average size of R01-equivalent awards | $427,083 | $435,525 | $458,287 | $482,395 |
| Total NIH funding to R01-equivalents (competing and non-competing) | $10,238,888,890 | $10,279,687,172 | $11,077,251,191 | $11,960,007,850 |

Categories: NIH

Requesting Your Input on the Draft NIH Strategic Plan for Data Science

March 5, 2018 - 12:06pm

To capitalize on the opportunities presented by advances in data science, the National Institutes of Health (NIH) is developing a Strategic Plan for Data Science. This plan describes NIH’s overarching goals, strategic objectives, and implementation tactics for promoting the modernization of the NIH-funded biomedical data science ecosystem. As part of the planning process, NIH has published a draft of the strategic plan today, along with a Request for Information (RFI) to seek input from stakeholders, including members of the scientific community, academic institutions, the private sector, health professionals, professional societies, advocacy groups, patient communities, and other interested members of the public.

On behalf of Dr. Jon Lorsch, Director of the National Institute of General Medical Sciences and co-chair of the NIH Scientific Data Council, which is overseeing development of the Strategic Plan for Data Science, I encourage your comments and suggestions. Responses should be submitted via an online form by April 2, 2018.

Categories: NIH

After My Application is Submitted, Can I Include a Copy or Citation of a Preprint as Post-submission Materials?

March 2, 2018 - 1:53pm

No. Preprints are not included in the list of allowable post-submission materials because they do not fall into the category of unanticipated events.

Post-submission materials are not intended to correct oversights or errors discovered after submission of the application, but rather allow applicants the opportunity to respond to unforeseen events.

See NOT-OD-17-066 and our post-submission policy FAQs for more information on NIH’s post-submission materials policy.

Categories: NIH

May I Submit Citations of Newly-received Issued Patents Relevant to My Application as Post-submission Materials?

March 2, 2018 - 11:37am
Yes. Citations of newly-received issued patents are allowable post-submission materials, because issuance of the patent is not in the control of the investigator.

A citation of a patent must include the names of the inventors, the patent title, the issued patent number (including country designation, e.g., US for the USA), the filing date, and the date the patent was issued:

 Smith, Samuel S., and John J. Jones. Method of citing issued patents. US Patent #,###,###, filed December 31, 2015, and issued December 27, 2016.

The AOR and PI may submit citations of issued patents as post-submission materials. However, copies of patent applications or patents, or any other materials related to a patent application or granted patent, will not be accepted as post-submission materials unless specified in the Funding Opportunity Announcement (FOA) for which the application was submitted or in a special Guide Notice.

See NOT-OD-17-066 and our post-submission policy FAQs for more information.

 

Categories: NIH

xTRACT Anticipated to be Required in Fiscal Year 2020

February 22, 2018 - 12:47pm

In October 2015, eRA introduced xTRACT as an electronic system within eRA Commons for creating research training data tables and tracking trainee outcomes. xTRACT permits users to leverage data already in eRA Commons to pre-populate training tables with trainee names, institution information, award information, etc., which can be used both in new application submissions and for progress reports [the Research Performance Progress Report (RPPR)].

While use of xTRACT is not currently required, it is anticipated to be required as of FY 2020 for certain types of training grant applications. RPPRs due on or after October 1, 2019, and all applications submitted on or after January 25, 2020, will be required to use xTRACT for training data tables for T32, TL1, T90/R90, and T15 applications and progress reports (see NIH Guide Notice NOT-OD-18-133).

In anticipation of this transition, and based on feedback from users, eRA has added new features to xTRACT in recent months to reduce the burden on grantees and improve the quality of data in the tables.  For instance, many institutions noted that options to upload data in batches would be very helpful, since they rely on their internal systems for tracking.  xTRACT now includes options to batch upload participating faculty information, batch upload trainee and student information, and upload institutional non-NIH funding sources (past and present), saving institutions from duplicative data entry. In addition, institutions will in the future be able to export data from xTRACT and feed it back to their internal systems using the XML data attached to the training table PDF.

xTRACT also provides the ability to copy data entered for the prior year’s RPPR or renewal forward into the current year’s submission.

We urge grantees to get familiar with xTRACT today so the transition will be easier.  A number of resources are available on the xTRACT Overview page, including specific instructions on bulk upload of trainee data, upload of funding sources, and batch upload of participating faculty.

Categories: NIH

How do you define a “study” for the purposes of providing information on the PHS Human Subject and Clinical Trial form and registering in ClinicalTrials.gov?

February 5, 2018 - 1:31pm
Our application instructions direct applicants to submit a study record for each protocol. When in doubt, NIH supports lumping several aims or hypotheses into a single study record, to the extent that makes sense for your research. Have other questions about the new PHS Human Subjects and Clinical Trials Information form or NIH clinical trial policies? Find more FAQs and their answers at grants.nih.gov.
Categories: NIH