
Rapid, responsive, relevant (R3) research: a call for a rapid learning health research enterprise


Our current health research enterprise is painstakingly slow and cumbersome, and its results seldom translate into practice. The slow pace of health research contributes to findings that are less relevant and potentially even obsolete. To produce more rapid, responsive, and relevant research, we propose approaches that increase relevance via greater stakeholder involvement, speed research via innovative designs, streamline review processes, and create and/or better leverage research infrastructure. Broad stakeholder input integrated throughout the research process can both increase relevance and facilitate study procedures. More flexible and rapid research designs should be considered before defaulting to the traditional two-arm randomized controlled trial (RCT), but even traditional RCTs can be designed for more rapid findings. Review processes for grant applications, IRB protocols, and manuscript submissions can be better streamlined to minimize delays. Research infrastructures such as rapid learning systems and other health information technologies can be leveraged to rapidly evaluate new and existing treatments, and alleviate the extensive recruitment delays common in traditional research. These and other approaches are feasible but require a culture shift among the research community to value not only methodological rigor, but also the pace and relevance of research.


Despite increasing demands to produce timely and relevant research findings, our traditional research process remains painstakingly slow. Randomized efficacy trials take approximately 5.5 years from the initiation of enrollment to publication [1], and 7 years or longer after adding the time from grant application submission to enrollment initiation. Extensive follow-up periods for relevant outcomes such as morbidity/mortality, as well as delays in participant recruitment and publication, can extend this period to a decade or longer. During this time, scientific and technological advances may make the eventual findings less relevant or even obsolete. For illustration, Figure 1 shows a few of the salient consumer technologies introduced during a typical seven-year clinical trial. These recent advances in consumer technologies are most impactful for mobile and wireless health research [2], but many less mainstream scientific and medical technological advances also occur while clinical trials are underway. For example, one explanation for the recently reported negative results of the SAMMPRIS trial was that stent technologies and surgical procedures had advanced substantially since study initiation [3, 4].

Figure 1. Consumer technology advances missed during a typical RCT published in 2012.

This protracted period from concept to publication is further exacerbated by the slow and limited uptake of research findings into practice. Balas and Boren have estimated that it takes approximately 17 years from concept to evidence implementation for the 14% of evidence that progresses to implementation [5]. A recent report from the President’s Council of Advisors on Science and Technology (PCAST) estimated that 3,000 treatments are in development and concluded that major upgrades are needed in the research system to evaluate these treatments [6]. An Institute of Medicine report on clinical trials states that “recognition is growing that the clinical trials enterprise in the United States faces substantial challenges impeding the efficient and effective conduct of clinical research to support the development of new medicines and evaluate existing therapies [7].” Clearly, our current research enterprise is too slow, inefficient, and cumbersome to meet the rapidly evolving demand.

What is needed is a "rapid-learning research system" that integrates researchers, funders, health systems, practitioners, and community partners to ask clinically relevant questions, use efficient and innovative research designs, and leverage rich, longitudinal data sets from millions of patients. To begin progress toward such a system, we describe approaches to make research more rapid, responsive, and relevant (R3), organized into four sections: 1) stakeholder engagement, 2) design, 3) review, and 4) infrastructure (see Table 1). We offer these approaches as a starting point for a dialogue among the health research community to challenge our current cumbersome research enterprise and to consider these or other means of maintaining scientific rigor while speeding the process by which more responsive and relevant findings are produced.

Table 1 Issues in promoting rapid research by stage of research

Relevance and stakeholder engagement

Broad stakeholder engagement involving patients, providers, health plans, policy makers and other relevant stakeholders may seem counterintuitive as a strategy to speed research, but this time investment has the potential to improve the recruitment and retention of study participants, thus increasing the pace of conducting the study. More importantly, stakeholder engagement increases the likelihood that findings will be relevant to stakeholders and more readily adopted into practice, thereby making the overall research pipeline more efficient. These “evaluability assessments” [8] or participatory approaches are considered key to facilitating the adoption of research findings by practitioners [9]. The Patient Centered Outcomes Research Institute (PCORI), for example, is creating public advisory groups and soliciting patient input on specific comparative effectiveness questions that are relevant to practitioners and stakeholders [10].

An ongoing relationship between researchers, healthcare providers, health plans, and patients is critical to a better, faster research system. Clinical trial recruitment is a major problem with about 90% of US trials failing to meet enrollment goals [7]. The NIH Health Care System Collaboratory (HCSC) offers an important resource for rapid research. Like the HMO Research Network and the VA QUERI program, the HCSC will make available opportunities to conduct large-scale studies within well-organized healthcare delivery systems [11]. Research embedded in organized delivery systems and networks enhances not only research relevance, but also facilitates recruitment, retention, study start-up, operations, data capture, and integration into practice.

An accelerated research system can also use information technologies to speed the process of seeking and obtaining stakeholder feedback. Consistent with a citizen-scientist model [12], a virtual network of various stakeholders can participate and provide feedback throughout the research process using online surveys, virtual meetings, and social media systems [13]. Via innovative technologies, stakeholder feedback can be obtained efficiently to increase research relevance and responsiveness, even for researchers who are not fully integrated into the practice setting or community where their research findings are likely to be translated into practice.

Rapid research designs

Traditional study designs and procedures are well-established and rigorous, but notoriously slow and costly. This laborious process typically begins with pilot trials that, we posit, have limited benefit, are used inappropriately to estimate effect sizes [14], and often prematurely concretize a less than optimal intervention. Instead, we recommend replacing the traditional pilot trial with a more flexible, iterative intervention testing and optimization approach, analogous to the agile software development process that places a premium on failing early to succeed later [15]. For example, N-of-1 trial designs provide intervention development flexibility. With the increasing availability of intensive longitudinal data from wireless sensors and mobile devices, N-of-1 trials can be rapidly implemented and provide results congruent with a more personalized medicine approach [16], and Bayesian analyses of a series of such trials [17] may provide sufficient evidence of generalizability to limit the need for a larger trial.
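The pooling step can be sketched concretely. The following is a minimal illustration, not the specific method of the cited work: the per-patient effect estimates, standard errors, and between-patient standard deviation are invented, and a simple conjugate normal-normal Bayesian model stands in for a full hierarchical analysis.

```python
def pool_n_of_1(effects, ses, tau=0.5, prior_mean=0.0, prior_var=100.0):
    """Combine per-patient treatment effects from a series of N-of-1 trials.

    Assumes each trial i yields effect_i ~ Normal(mu, se_i^2 + tau^2),
    where tau is the between-patient SD, with a vague Normal prior on the
    population mean mu. Returns the posterior mean and SD of mu.
    """
    precision = 1.0 / prior_var          # posterior precision accumulator
    weighted_sum = prior_mean / prior_var
    for y, s in zip(effects, ses):
        v = s * s + tau * tau            # marginal variance for this trial
        precision += 1.0 / v
        weighted_sum += y / v
    return weighted_sum / precision, (1.0 / precision) ** 0.5

# Hypothetical per-patient effects (e.g., symptom-score change, drug A vs B)
effects = [1.2, 0.8, 1.5, 0.3, 1.1]
ses = [0.4, 0.5, 0.6, 0.4, 0.5]
mu, sd = pool_n_of_1(effects, ses)
print(f"population effect: {mu:.2f} +/- {sd:.2f}")
```

The precision-weighted pooling shrinks individual estimates toward a common mean, which is the intuition behind using a series of N-of-1 trials as evidence of generalizability.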

Intervention optimization designs such as fractional factorial and sequential multiple assignment research trials (SMART) are particularly valuable when the intervention development questions involve combinations or sequences of intervention components [18]. Dynamic system models have also been used to optimize treatments [19]. Some optimization approaches may take more time than the traditional pilot trial, but the pace of the overall research enterprise will be improved by more quickly discarding or modifying interventions that are unlikely to be found effective in larger and more expensive trials.

Even within the traditional RCT, researchers have a number of design options that can increase efficiency. Trials that use within-group designs, in which participants serve as their own controls, can speed the research process by reducing the number of participants needed to detect outcomes, and can often simplify study procedures as well. The Minimal Intervention Needed for Change (MINC) standard [20] provides a pragmatic comparison anchor across studies for comparative effectiveness research. The VA is adopting "point-of-care" randomization, which computer-randomizes patients to different treatments and then uses adaptive algorithms to change the allocation of new patients as evidence accumulates [21]. Recent technological advances also make it possible to conduct "automated RCTs" in which enrollment, random assignment, intervention delivery, and outcome assessment are fully automated. To fully realize the potential of automated RCTs and other rapid learning systems, however, the nature of and procedures for informed consent need to be resolved.
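Adaptive allocation of this kind can be sketched as follows. This is a generic Thompson-sampling illustration with invented response rates, not the actual algorithm of the VA point-of-care trial.

```python
import random

def thompson_assign(successes, failures, rng=random):
    """Pick an arm by sampling each arm's Beta(1+s, 1+f) posterior and
    taking the argmax, so allocation drifts toward the better-performing
    arm as evidence accumulates (Thompson sampling)."""
    draws = [rng.betavariate(1 + s, 1 + f)
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=lambda i: draws[i])

# Simulate a two-arm point-of-care style trial with hypothetical true
# response rates of 0.30 vs 0.55, and count assignments per arm.
random.seed(0)
true_p = [0.30, 0.55]
wins, losses, assigned = [0, 0], [0, 0], [0, 0]
for _ in range(500):
    arm = thompson_assign(wins, losses)
    assigned[arm] += 1
    if random.random() < true_p[arm]:
        wins[arm] += 1
    else:
        losses[arm] += 1
print("assignments per arm:", assigned)
```

Over the simulated trial, most later patients end up allocated to the stronger arm, which is the ethical and efficiency rationale for adaptive designs.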

Follow-up periods also can be shortened or segmented. Results can be analyzed at the point where the maximal benefit of the intervention is hypothesized to occur. Longer-term outcomes can then be modeled from these results, or one investigator can remain blinded to the initial findings, conduct the follow-up portion of the study, and publish the follow-up results separately.

In addition to improving the efficiency of RCTs, we also need to consider alternative designs that may be more appropriate to the research question and provide more rapid and relevant answers. A range of within-subject and quasi-experimental designs such as interrupted time series [22], stepped wedge [23], and regression discontinuity [24] may have less internal validity than the RCT, but offer a number of advantages [25]. For example, these quasi-experimental approaches facilitated participation of the major Minnesota health insurers in the DIAMOND depression treatment program [26]. These designs may be particularly appropriate for evaluating treatments already adopted in practice.
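As an illustration of how such quasi-experimental designs can yield a rapid estimate from routinely collected data, here is a minimal segmented-regression sketch for an interrupted time series; the monthly outcome series and effect sizes are invented for illustration.

```python
import numpy as np

def segmented_its(y, t0):
    """Segmented regression for an interrupted time series:
    y_t = b0 + b1*t + b2*post_t + b3*(t - t0)*post_t + e_t,
    where post_t indicates periods at or after the interruption t0.
    Returns the estimated level change (b2) and slope change (b3)."""
    t = np.arange(len(y), dtype=float)
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    b, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return b[2], b[3]

# Hypothetical monthly outcome: near-flat baseline around 50, then the
# program drops the level by ~8 and adds a downward trend at month 12.
rng = np.random.default_rng(1)
t = np.arange(24)
y = (50 + 0.1 * t - 8 * (t >= 12) - 0.5 * (t - 12) * (t >= 12)
     + rng.normal(0, 0.8, 24))
level, slope = segmented_its(y, t0=12)
print(f"level change: {level:.1f}, slope change: {slope:.2f}")
```

Because the pre-intervention segment serves as the counterfactual, such an analysis can be run as soon as a modest number of post-intervention periods accrue, without enrolling a separate control group.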

Rapid review processes

It takes 9 to 11 months from NIH grant submission to funding [27]. If an application is revised and resubmitted, and assuming a six-month revision period, it can take two years from initial submission to the award of a revised (A1) NIH grant. During this time, science and technology continue to advance; research partnerships, especially with non-research stakeholders, must be maintained; and the research questions may become less relevant or timely.

Grant review and funding processes could be streamlined in a number of ways. For the Recovery Act Challenge Grants [28], a flexible, two-stage review process reduced the time from receipt to funding to five months. Rapid review processes are already used by the NIH for time-sensitive natural experiments [29, 30]. In response to the SARS (Severe Acute Respiratory Syndrome) outbreak, the Canadian Institutes of Health Research issued a funding announcement that drew 18 submissions within 2 weeks; these were reviewed, and four were approved for funding, within 10 days of submission [31]. These examples clearly indicate that it is possible to review and fund research applications quickly when necessary, and that such rapid review systems should be considered for a broader range of research, including timely and pressing clinical and public health questions.

The grant application review process could facilitate more rapid research not only through more efficient review, but also by placing a greater premium on rapid and innovative research designs. Despite the addition of innovation as an NIH review criterion, a recent study of grant applications revealed that novelty is associated with a 4.5 percentile-point drop in evaluation scores, and that feasibility concerns did not contribute substantially to this "novelty penalty" [32]. Innovative designs, especially those that speed the pace of research relative to traditional designs, should be rewarded, not penalized.

Institutional Review Boards (IRBs) also should consider streamlining review procedures. Slowness of research should itself be considered a risk, both to study participants, who may continue in their assigned treatment even as newer treatments become available, and to the broader public, for whom answers to relevant research questions are delayed. Revisions to the Common Rule are anticipated to allow for a more flexible and rapid review process [33].

Online and open access publication practices have greatly reduced the time from acceptance to publication [34], but could be further improved by a better or more incentivized process for recruiting reviewers and obtaining timely reviews. As Green noted, new technologies for publication, systematic reviews, and dissemination of evidence-based guidelines reduce the time from research findings to practitioner adoption, but the publication and dissemination process should continue to be reviewed to further reduce the time lapses between the various stages of dissemination and implementation [9].

Infrastructure for rapid research

Improving our research infrastructure has the potential not only to speed the pace of research, but also to increase its rigor and relevance. The health system has lagged decades behind other sectors in IT implementation [35]. As a result, health research has been severely constrained by a data-poor environment in which acquiring needed research data is expensive, difficult, and time-consuming. Since the rapid-learning health system and learning healthcare system concepts were advanced in 2007 [36], major investments have been made in databases and learning networks to take advantage of the research potential of electronic health records. It is now possible to conduct some studies in weeks or months instead of years. The FDA Mini-Sentinel system accesses 125 million patient records to generate several studies per week on drug safety questions [37]. Large biobanks are now coming online at Kaiser Permanente [38], the Veterans Health Administration [39], the ENCODE network [40], and the UK Biobank [41]. Using these and other patient databases, researchers have been able to assess the unintended effects of treatments [42] and produce outcome findings comparable to RCTs [43].

Large future investments are now being considered that could offer extraordinary opportunities for researchers and a faster, more efficient infrastructure for rapid learning research. The NIH Director has proposed a new national patient-oriented research system with electronic health records databases, including genomics, for 20–30 million patients [44], and PCORI recently released a funding announcement to support development of the National Patient Centered Clinical Research Network [10]. The Big Data to Knowledge (BD2K) initiative [45] also provides the opportunity to leverage these large data sets for rapid research.

Researchers can accelerate the collective pace of learning with greater attention to reporting comparable data. There have been a number of efforts to encourage the use of common data elements [46]. Standardized outcome data are particularly problematic for patient-reported outcomes. To address this problem, there have been consensus measurement efforts such as PhenX [47] as well as efforts to co-calibrate various patient-reported outcome measures on a single metric [48].

The National Research Council report on Precision Medicine calls for a new science commons and national learning system that will revolutionize biomedical research, clinical care, and public health [49]. One of the benefits of such a system is that researchers can more readily target drug approval studies to predicted high-response populations, and cut years from the drug research process. Gleevec, an anti-cancer drug, was approved in a trial of only 54 patients because nearly all showed benefit [6]. With targeted therapeutics and research, it took only four years from target discovery to drug approval for the lung cancer drug Xalkori [50]. Rapid learning systems of large patient populations appear to provide the infrastructure to rapidly evaluate treatments.


We have outlined a number of actions to enhance relevance, streamline design, speed review, and use new research infrastructures to make research more rapid, relevant, and responsive to the 21st century demands on health research. This transformation to a rapid research learning system will require a concerted effort by research funders, academic institutions, healthcare systems, researchers, and a variety of practice, community, and policy stakeholders to produce a culture change among the health research community. Will we continue to use limited funds to support the currently slow and cumbersome research enterprise that produces costly results that may not be relevant or easily translated into practice, or are we willing to pursue alternative approaches that in other research disciplines have produced rapid and relevant improvements?

The rationale and opportunities for such a culture change have never been greater, nor rapid answers more needed. Our recommendations to speed research are undoubtedly incomplete, and we invite the research community to contribute additional recommendations to increase the speed and relevance of the research enterprise. This call to streamline and speed the research process is also likely to be met with skepticism, especially among those who fear that methodological rigor might be compromised in the quest for greater efficiency. We believe that the recommendations outlined in this paper can be achieved without compromising scientific rigor, and any effort to streamline research should be judged on its methodological soundness. We are convinced, however, that the currently dominant health research paradigm is too slow and inefficient to address today's challenges, and that we must produce a more rapid, responsive, and relevant research enterprise.


  1. Ioannidis JP: Effect of the statistical significance of results on time to completion and publication of randomized efficacy trials. JAMA 1998, 279:281–286. doi:10.1001/jama.279.4.281
  2. Nilsen W, Kumar S, Shar A, Varoquiers C, Wiley T, Riley WT, Pavel M, Atienza AA: Advancing the science of mHealth. J Health Commun 2012, 17:5–10.
  3. Chimowitz MI, Lynn MJ, Derdeyn CP, Turan TN, Fiorella D, for the SAMMPRIS Trial Investigators: Stenting versus aggressive medical therapy for intracranial arterial stenosis. N Engl J Med 2011, 365:993–1003.
  4. Qureshi AI: Interpretations and implications of the prematurely terminated Stenting and Aggressive Medical Management for Preventing Recurrent Stroke in the Intracranial Stenosis (SAMMPRIS) trial. Neurosurgery 2012, 70:E264–E268. doi:10.1227/NEU.0b013e318239f318
  5. Balas EA, Boren SA: Managing clinical knowledge for health care improvement. In Yearbook of Medical Informatics. Stuttgart: Schattauer; 2000.
  6. President's Council of Advisors on Science and Technology (PCAST): Report to the President on Propelling Innovation in Drug Discovery, Development, and Evaluation.
  7. Weisfeld N, English RA, Claiborne AB: Envisioning a Transformed Clinical Trials Enterprise in the United States. Washington, DC: Institute of Medicine, National Academies Press; 2012.
  8. Leviton LC, Khan LK, Rog D, Dawkins N, Cotton D: Evaluability assessment to improve public health policies, programs, and practices. Annu Rev Public Health 2010, 31:213–233. doi:10.1146/annurev.publhealth.012809.103625
  9. Green LW: Making research relevant: if it is an evidence-based practice, where's the practice-based evidence? Fam Pract 2008, 25:i20–i24. doi:10.1093/fampra/cmn055
  10. Patient-Centered Outcomes Research Institute.
  11. NIH HCS Collaboratory.
  12. Silvertown J: A new dawn for citizen science. Trends Ecol Evol 2009, 24:467–471. doi:10.1016/j.tree.2009.03.017
  13. Yao JT, Liu WN: Web-based dynamic Delphi: a new survey instrument.
  14. Leon AC, Davis LL, Kraemer HC: The role and interpretation of pilot studies in clinical research. J Psychiatr Res 2011, 45:626–629. doi:10.1016/j.jpsychires.2010.10.008
  15. Gary K, Enquobahrie A, Ibanez L, Cheng P, Yaniv Z: Agile methods for open source safety-critical software. Softw Pract Exp 2011, 41:945–962. doi:10.1002/spe.1075
  16. Lillie EO, Patay B, Diamant J, Issell B, Topol EJ, Schork NJ: The N-of-1 clinical trial: the ultimate strategy for individualizing medicine? Per Med 2011, 8:161–173. doi:10.2217/pme.11.7
  17. Zucker DR, Ruthazer R, Schmid CH: Individual (N-of-1) trials can be combined to give population comparative treatment effect estimates: methodologic considerations. J Clin Epidemiol 2010, 63:1312–1323. doi:10.1016/j.jclinepi.2010.04.020
  18. Collins LM, Murphy SA, Strecher V: The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): new methods for more potent eHealth interventions. Am J Prev Med 2007, 32:S112–S118. doi:10.1016/j.amepre.2007.01.022
  19. Rivera DE, Pew MD, Collins LM: Using engineering control principles to inform the design of adaptive interventions: a conceptual introduction. Drug Alcohol Depend 2007, 88:S31–S40.
  20. Gierisch JM, DeFrank JT, Bowling JM, Rimer BK, Matuszewski JM: Finding the minimal intervention needed for sustained mammography adherence. Am J Prev Med 2010, 39:334–344. doi:10.1016/j.amepre.2010.05.020
  21. Fiore LD, Brophy M, Ferguson RE, D'Avolio L, Hermos JA: A point-of-care clinical trial comparing insulin administered using a sliding scale versus a weight-based regimen. Clin Trials 2011, 8:183–195. doi:10.1177/1740774511398368
  22. Linden A, Adams JL: Applying a propensity score-based weighting model to interrupted time series data: improving causal inference in programme evaluation. J Eval Clin Pract 2011, 17:1231–1238. doi:10.1111/j.1365-2753.2010.01504.x
  23. Brown CA, Lilford RJ: The stepped wedge trial design: a systematic review. BMC Med Res Methodol 2006, 6:54–62. doi:10.1186/1471-2288-6-54
  24. Shadish WR, Galindo R, Wong VC, Steiner PM, Cook TD: A randomized experiment comparing random and cutoff-based assignment. Psychol Methods 2011, 16:179–191.
  25. Glasgow RE, Magid DJ, Beck A, Ritzwoller D, Estabrooks PA: Practical clinical trials for translating research to practice: design and measurement recommendations. Med Care 2005, 43:551–557. doi:10.1097/01.mlr.0000163645.41407.09
  26. Solberg LI, Glasgow RE, Unutzer J, Jaeckels N, Oftedahl G: Partnership research: a practical trial design for evaluation of a natural experiment to improve depression care. Med Care 2010, 48:576–582. doi:10.1097/MLR.0b013e3181dbea62
  27. National Institutes of Health Office of Extramural Research: Grants Review Process.
  28. Recovery Act Limited Competition: NIH Challenge Grants in Health and Science Research (RC1).
  29. Time-Sensitive Obesity Policy and Program Evaluation (R01).
  30. Rapid Assessment Post-Impact of Disaster (R21).
  31. Singh B: Innovation and challenges in funding rapid research responses to emerging infectious diseases: lessons learned from the outbreak of severe acute respiratory syndrome. Can J Infect Dis Med Microbiol 2004, 15:167–170.
  32. Boudreau KJ, Guinan EC, Lakhani KR, Riedl C: The novelty paradox and bias for normal science: evidence from randomized medical grant proposal evaluations.
  33. Human Subjects Research Protections: Enhancing Protections for Research Subjects and Reducing Burden, Delay, and Ambiguity for Investigators.
  34. Gupta KC: What does the marriage of Open Access and online publication bring? AIDS Res Ther 2004, 1:1.
  35. Bender MW, Mitwalli AH, Van Kuiken SJ: What's holding back online medical data. McKinsey Quarterly.
  36. Etheredge LM: A rapid-learning health system. Health Aff 2007, 26:w107–w118. doi:10.1377/hlthaff.26.2.w107
  37. Moores K, Gilchrist B, Carnahan R, Abrams T: A systematic review of validated methods for identifying pancreatitis using administrative data. Pharmacoepidemiol Drug Saf 2012, 21(Suppl 1):194–202.
  38. Kaiser Permanente Research Program on Genes, Environment, and Health (RPGEH).
  39. VA Specimen Research and Biobanking Program.
  40. The ENCODE Project: ENCyclopedia Of DNA Elements.
  41. UK Biobank.
  42. Hippisley-Cox J, Coupland C: Unintended effects of statins in men and women in England and Wales: population-based cohort study using the QResearch database. BMJ 2010, 340:c2197. doi:10.1136/bmj.c2197
  43. Tannen RL, Weiner MG, Xie D: Use of primary care electronic medical record database in drug efficacy research on cardiovascular outcomes: comparison of database and randomized controlled trial findings. BMJ 2009, 338:b81. doi:10.1136/bmj.b81
  44. Collins F: A Vision for a National Patient-Centered Research Network.
  45. NIH Undertakes "Big Data" Initiatives.
  46. NIH Common Data Element Research Portal.
  47. Hamilton CM, Strader LC, Pratt JG, Maiese D, Hendershot T: The PhenX Toolkit: get the most from your measures. Am J Epidemiol 2011, 174:253–260. doi:10.1093/aje/kwr193
  48. Noonan VK, Cook KF, Bamer AM, Choi SW, Kim J, Amtmann D: Measuring fatigue in persons with multiple sclerosis: creating a crosswalk between the Modified Fatigue Impact Scale and the PROMIS Fatigue Short Form. Qual Life Res 2012, 21:1123–1133. doi:10.1007/s11136-011-0040-3
  49. National Research Council: Toward Precision Medicine: Building a New Knowledge Network for Biomedical Research and a New Taxonomy of Disease. Washington, DC: National Academies Press; 2011.
  50. Sollano JA: Delivering Value Through Personalized Medicine: An Industry Perspective.


Author information
Corresponding author

Correspondence to William T Riley.

Additional information

Competing interests

The authors declare that they have no competing interests. Preparation of this manuscript was supported in part by a grant from The Robert Wood Johnson Foundation to George Washington University to the third author. The funders had no role in this manuscript. The opinions expressed are those of the authors and do not necessarily reflect those of the Robert Wood Johnson Foundation or the National Cancer Institute.

Authors’ contributions

WTR led the writing and preparation of this paper. REG, LE, and APA all contributed sections to this paper, and revised and edited drafts of the paper. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Riley, W.T., Glasgow, R.E., Etheredge, L. et al. Rapid, responsive, relevant (R3) research: a call for a rapid learning health research enterprise. Clin Trans Med 2, 10 (2013).
