Building the Evidence Base of Interventions
The focus of this quarter’s newsletter is improving our understanding of how the evidence base of interventions is built. We aim to clarify concepts of evidence and support a comprehensive vision for using research evidence in developing, evaluating, implementing, and sustaining evidence-based interventions (EBIs), with the ultimate goal of improving public health and well-being at scale.
Evidence comes in multiple forms.
Brownson and colleagues (2022) describe evidence across three broad domains:
- Type 1: Evidence on etiology (i.e., the study of the causes of diseases/behavioral problems) and burden (the impact of disease/behavioral problem on a population);
- Type 2: Evidence on effectiveness of interventions; and
- Type 3: Evidence on dissemination and implementation (D&I).
The authors explain that the three types of evidence are often not linear, but interconnected, iterative, and overlapping—they shape one another (e.g., if we have limited type 2 evidence then the ability to apply type 3 evidence is hampered).
Meanwhile, several versions of an evidence continuum have been developed to clarify and define standards of the “best available research evidence.” Most examples focus on what Brownson and colleagues (2022) would qualify as “type 1” and “type 2” evidence.
For example, Colorado is one of many states that has created its own evidence framework. Colorado’s Evidence Continuum is used to strengthen the state’s budget investment choices, encourage innovation, and expand the use of evaluation to understand whether programs “work.” The continuum includes five ascending steps, each representing a stage of building and assessing program information.
- Step 1: Program design, which requires a logic model showing how program activities should cause the desired changes.
- Step 2: Identify outputs, which refers to observable measures of service delivery provided (e.g., number of individuals served).
- Step 3: Assess outcomes, such as conducting a pre- and posttest involving individuals who participate in the program.
- Step 4: Attain initial evidence, which involves examining outcomes for individuals who receive the program (i.e., treatment group) compared to those who do not receive the program.
- Step 5: Attain causal evidence, which often includes conducting one or more randomized controlled trials (RCTs) – or experiments in which researchers randomly assign participants to a treatment group or a control group.
According to Colorado’s framework, as programs move up the five steps of the evidence continuum by using more advanced research designs, studies with higher levels of precision offer decision-makers greater confidence that investments could lead to intended outcomes. These research levels include:
- “Theory-informed” at the lowest level, denoting advice from program providers, experts, and user satisfaction surveys.
- “Evidence-informed” in the middle, indicating preliminary evaluation with a before and after (pre-test and posttest) design.
- “Proven” at the highest level, requiring one RCT or two or more comparison studies with strict statistical controls.
Interventions with a Promising, Model, or Model Plus rating on the Blueprints registry are in the “proven” stage of Colorado’s evidence continuum.
Another framework for considering the evidence base of an EBI evolved from a work group formed in 2013 by the National Institutes of Health to address dissemination and implementation research. Reported on in Brown et al. (2017), this translational evidence pipeline is summarized as follows:
- Evidence begins with basic research to inform the development of a new program. (Type 1 Evidence)
- This program is evaluated in the efficacy stage, in which a highly trained research team typically delivers the program with careful monitoring and supervision to ensure high fidelity. Efficacy trials can answer only whether a program could work under tightly controlled conditions. (Type 2 Evidence)
- Next is the effectiveness stage, where developers are less involved. Whereas efficacy trials tell us whether a program could work under ideal conditions, effectiveness trials focus on how the program performs when delivered in practice and in community settings. (Type 2 Evidence)
- Dissemination and implementation (D&I) research represents a distinct stage that occurs after efficacy and effectiveness trials. This stage of evidence building aims to improve the adoption, appropriate adaptation, delivery, and sustainment of EBIs by providers, clinics, organizations, communities, and systems of care. (Type 3 Evidence)
Moullin and colleagues (2019) describe the EPIS (Exploration, Preparation, Implementation, Sustainment) framework for D&I research, which unfolds in four phases:
- Exploration, which refers to whether a service delivery system (e.g., health care, social service, school) or community organization would find a particular EBI useful, given its outer context (e.g., service system, federal policy, funding) and inner context (e.g., organizational climate, provider experience).
- Preparation, which refers to putting into place the collaborations, policies, funding, supports, and processes needed across the multilevel outer and inner contexts to introduce a new EBI into a setting once stakeholders decide to adopt it. In this phase, adaptations to the service system, service delivery organizations, and the clinical/preventive intervention itself are considered and prepared.
- Implementation, which refers to the support processes that are developed to recruit, train, monitor, and supervise facilitators to deliver the EBI with fidelity and, if necessary, to adapt systematically to the local context.
- Sustainment, which refers to how EBI providers maintain the program, especially after the initial funding period has ended.
The traditional translational pipeline reported on in Brown et al. (2017) is commonly used by the National Institutes of Health (NIH) and other research-focused organizations to move scientific knowledge from basic and other preintervention research to efficacy and effectiveness trials and to a stage that reaches the public.
Synthesizing Evidence Frameworks
We created Figure 1 (below) to summarize these different forms of evidence into one diagram.
Figure 1 – Summary of Evidence Frameworks
A challenge we face is advising the field on whether Blueprints-certified interventions will work with adaptation (versus whether they need to be implemented exactly as designed in diverse communities).
Our Spring 2023 newsletter provided guidance on research-informed adaptation strategies.
The question of whether an EBI applies to a specific population and/or community, however, involves a set of scientific considerations. These include balancing fidelity to the original EBI with the adaptations needed for replication and scale-up (see #7 in Figure 1), as well as determining when it may be necessary to “start from scratch” (i.e., shifting from #7 to #1 of Figure 1) by developing, evaluating, and scaling a new intervention rather than refining or adapting an existing EBI (i.e., #8 of Figure 1).
We touched on this issue in a talk, Addressing Health Equity and Social Justice within Prevention Registries, which is recorded and uploaded on the Blueprints website.
In consultation with the Blueprints advisory board, Blueprints staff seek guidance from an EBI’s Program Developers/Owners and evaluators to conduct a qualitative analysis of the program’s core components (i.e., activities directly related to the program’s theory of change, which proposes the mechanisms by which the program works). This analysis determines whether a well-evaluated program (i.e., one that meets Blueprints standards) is the same as, or an adapted version of, the original well-evaluated program.
Below are examples of Blueprints-certified programs that have undergone this review, resulting in the decision for Blueprints to certify an adapted version of the original program.
This newsletter covers a lot of terminology used to describe how the evidence base of EBIs is generated. In addition, we aimed to connect the traditional evidence pipeline of developing and evaluating a program with the newer evaluation field of dissemination and implementation research. While these terms and frameworks can be confusing, what is most important is that decisions are informed by evaluation and research findings generated using rigorous methods.
And as always, thank you for your continued interest in and support of Blueprints.
Pamela Buckley, PhD
PI, Blueprints Initiative
Karl G. Hill, PhD
Board Chair and Co-PI
- Brown, C. H., Curran, G., Palinkas, L. A., Aarons, G. A., Wells, K. B., Jones, L., … Cruden, G. (2017). An overview of research and evaluation designs for dissemination and implementation. Annual Review of Public Health, 38, 1-22. https://doi.org/10.1146/annurev-publhealth-031816-044215
- Brownson, R.C., Shelton, R.C., Geng, E.H., & Glasgow, R. E. (2022). Revisiting concepts of evidence in implementation science. Implementation Science, 17, 26. https://doi.org/10.1186/s13012-022-01201-y
- Moullin, J. C., Dickson, K. S., Stadnick, N. A., Rabin, B., & Aarons, G. A. (2019). Systematic review of the Exploration, Preparation, Implementation, Sustainment (EPIS) framework. Implementation Science, 14, 1. https://doi.org/10.1186/s13012-018-0842-6
Blueprints for Healthy Youth Development is developed and managed by the University of Colorado Boulder, Institute of Behavioral Science, with current support from Arnold Ventures and former funding from the Annie E. Casey Foundation. Each intervention included in the Blueprints database has been reviewed carefully by an independent advisory panel that looked at research on the intervention’s impact, practical focus and potential for implementation in public systems.
Featured Model Program
Multisystemic Therapy® (MST®)
Blueprints Certified: 1997
Ages Served: Early Adolescence (12-14) – Middle School, Late Adolescence (15-18) – High School
Program Outcomes: Close Relationship with Parents, Conduct Problems, Delinquency and Criminal Behavior, Externalizing, Internalizing, Mental Health – Other, Positive/Prosocial Behavior, Prosocial with Peers, Violence
Goal and Target Population: A juvenile crime prevention program designed to improve the real-world functioning of youth by changing their natural settings – home, school, and neighborhood – in ways that promote prosocial behavior while decreasing antisocial behavior.
Learn more about Multisystemic Therapy® (MST®)
Featured Promising Program
SNAP® (Stop Now And Plan) Boys
Blueprints Certified: 2023
Ages Served: Late Childhood (5-11) – K/Elementary
Program Outcomes: Antisocial-aggressive Behavior, Anxiety, Conduct Problems, Delinquency and Criminal Behavior, Depression, Emotional Regulation, Externalizing, Internalizing
Goal and Target Population: A cognitive behavioral multi-component training program designed to reduce antisocial behavior and/or police contact among boys at risk for such engagement by decreasing the factors that make children susceptible to continued delinquency and strengthening the protective factors of the parents, the child, and the family structure.
Learn more about SNAP® Boys
Blueprints Interventions in the News
Relevant Articles and Helpful Resources
In case you missed them, here are a few news articles and web postings that discuss Blueprints and/or feature some of our Blueprints Model/Model Plus and Promising Programs:
- We are excited to announce the launch of the Annie E. Casey Foundation’s Evidence2Success Tool Kit, which is aimed at helping communities of every size, from small rural school districts to large metropolitan areas, gather data and align funding to improve outcomes for young people and families. The tool kit offers community-tested tools, strategies, and technical assistance to help educators, policymakers, and organizations better understand and address social and emotional issues, such as substance use and abuse, bullying, and more, that impact children and their families. Blueprints is listed as a resource available via the tool kit. Find out more in the Tool Kit announcement.
- Congratulations to our PI Dr. Pamela Buckley on receiving the 2023 Society for Prevention Research (SPR) Nan Tobler Award, which is given to an individual or a team of individuals for contributions to the summarization or articulation of the empirical evidence relevant to prevention. SPR noted how Dr. Buckley’s leadership in the use of research generated from the Blueprints registry has contributed to the importance of ensuring that evidence-based interventions are not only based on sound science but are also representative of diverse populations, readily accessible, and transparently presented.
- Justin Milner is the new Executive Vice President of Evidence and Evaluation at Arnold Ventures (AV), Blueprints’ current funder. In this Q&A posted to AV’s website, Mr. Milner mentions Blueprints by name as a good resource for finding which programs have proven effective in rigorous evaluations.
- Larimer County’s three school districts will use some of Colorado’s opioid settlement funds to ramp up school-based drug use prevention via a grant earned by Colorado State University’s Prevention Research Center. The effort will involve implementing the Blues Program, a Blueprints Model program shown to decrease student substance use and mental health challenges, including depression. The Blues Program developers will train personnel from all three districts in fall 2023. Find out more about the Larimer County drug prevention efforts.
- The Wyman Center recently received a $1.4 million grant to continue its work supporting teen mental health and preventing teen pregnancy with two Blueprints-certified Promising programs: Wyman’s Teen Connection Project and Wyman’s Teen Outreach Program.
- The Blueprints-certified Model+ Accelerated Study in Associate Programs (ASAP) was recently featured on PBS NewsHour.