Identifying proven programs is the core of the work we do at Blueprints for Healthy Youth Development. To fulfill our mission, our team evaluates hundreds of interventions each year to identify those that produce positive outcomes. Interventions certified by Blueprints are rated at one of three levels: Promising, Model, or Model Plus.
- Promising interventions meet the minimum standard of effectiveness.
- Model interventions meet a higher standard and provide greater confidence in the program’s capacity to change behavior and targeted outcomes.
- Model Plus interventions meet an additional standard of independent replication.
Only Model and Model Plus programs are considered ready for implementation at scale.
This section of our website provides a closer look at what it means for a program to be certified by Blueprints. Our standards are described below.
Promising programs meet all of the following minimum standards.

Intervention Specificity
- The intended recipients of the intervention are clearly identified. The relevant sociodemographic characteristics (age, gender, ethnic group, socio-economic status, urban/suburban/rural residence) of those targeted by the intervention are stated. If the intended participants have been screened based upon some characteristic(s) (e.g., a risk condition, protective factor status, a minimum level of the study outcome, or some personal or family attribute), these screening criteria and the screening process must be fully described. All inclusion or exclusion criteria for program participation must be noted.
- The outcomes of the intervention are clearly specified and are one of the Blueprints outcomes.
- The intervention’s theoretical rationale or logic model is discussed, explaining how the intervention is expected to have a positive effect on intended outcomes and whether/how changes in risk or protective factors will affect the specified outcome(s). It must be clear how the intervention is expected to achieve the desired change in outcomes.
- There is documentation of the intended intervention structure, content and delivery process. A clear description of the planned intervention is reported, such as what service, activity or treatment is provided, to whom, by whom, over what period, with what intensity and frequency, and in what setting. This can include (a) the content of the intervention (e.g., information, advice, training, money, advocacy), (b) the nature of the provider (e.g., social worker, teacher, psychologist, volunteer), (c) the duration of the intervention (e.g., 3 hours, 6 weeks, a school year), (d) the length of participation at each session/contact (e.g., 2 hours), (e) the frequency of sessions/contacts (e.g., daily, weekly, monthly), (f) the setting (e.g., school, community center, health clinic) and (g) the mode of delivery (e.g., group‐based, one‐to‐one). In the case of a multi‐component intervention – for example, one that has components for children only, parents only, and children and parents together – it is necessary for each component to be described in these terms.
The Blueprints intervention specificity standard is a screening standard and is determined by Blueprints staff before an intervention is submitted to the Blueprints Advisory Board for review of evaluation quality and intervention impact.
Evaluation Quality and Intervention Impact

Blueprints requires that the intervention be evaluated by at least one randomized controlled trial (RCT) or by two quasi-experimental design (QED) evaluations. In addition, the Blueprints assessment of the quality of the intervention evaluation relies on a set of key principles that define strong experimental designs.
- The process of assignment to the intervention and control groups should be clearly defined, and what participants received in the intervention (i.e., the treatment group) and the nature of the control or comparison condition (i.e., those who did not receive the intervention) should also be described. The study should specify how the unit of assignment (e.g., individual, classroom, school, etc.) was defined at a level appropriate to the intervention.
- The timing of participant consent should be clearly defined, and consent rates should be reported by condition. Consent is the process by which a participant is informed about all aspects of the evaluation in order to make a voluntary decision to participate in the study. If the unit of assignment is individuals, consent should be obtained before assignment so that knowledge of assignment does not influence consent. If the unit of assignment is clusters (e.g., classrooms or schools), consent should come before randomization at the cluster level but not necessarily at the individual level. For example, principals should have consented before knowing the assignment of their school. However, students need not consent beforehand, as such a procedure is impractical. It is still important to report differences across conditions in individual-level consent rates.
- The study should clearly describe the sample size (overall and by treatment and control conditions) at each stage of data gathering (e.g., at randomization/assignment/matching, at baseline, and at every follow-up) so that attrition (i.e., the loss of participants over time) is reported clearly.
- The study should assess at least one behavioral outcome (e.g., drug use), not only measures of knowledge, attitudes or intentions that may not reflect actual behavior (e.g., measures of attitudes toward drug use). In addition, outcome measures should be distinct from the content of the intervention; that is, they should not be measures of what was specifically delivered during the intervention.
- Outcome measures should not be rated solely by the person or people delivering the intervention but should be rated or assessed independently. For example, a school‐based intervention to reduce children’s antisocial behavior would not meet this criterion if (a) teachers implemented the intervention and (b) the measure of antisocial behavior relied solely on those same teachers’ ratings. Self-reports by youth participants in a study are generally considered independent, however.
- The study should use valid and reliable outcome measures that are appropriate for the intervention population of focus and desired behavioral outcomes. The data collection method should be specified (e.g., observation, self‐report, interview, administrative records, teacher report, etc.), and information on reliability and validity of these measures from the study sample or other citations should be presented.
- Analyses should follow an “intent‐to‐treat” protocol in which investigators attempt to include all participants assigned to each study condition in their analysis, regardless of amount of participation in the intervention. For example, once a participant is randomized to a condition, they should remain in that condition for analysis even if they never received the intended intervention, received only part of the intended intervention, or “crossed over” into the other condition during the study.
- The methods used to analyze results should be appropriate given the data being analyzed and the purpose of the analysis. Statistical models should control for baseline measures of the outcome (or a valid proxy of the outcome measure) and for sociodemographic characteristics as appropriate (see the baseline equivalence criterion below). The intervention condition should be modeled at the level of assignment, with adjustment for clustering when the cluster is the unit of assignment and the individual is the unit of analysis.
- Baseline equivalence of the randomized/matched/assigned sample (i.e., before any subjects drop out) should be established. Analysis of baseline differences should indicate equivalence between the intervention and comparison/control groups. Tests for baseline differences between conditions should be performed at the individual level and, for clustered trials, also at the cluster level. If significant or large differences exist, they should be controlled for in the analyses. If such differences are numerous, the randomization may have been compromised.
- There is no evidence of differential attrition. That is, attrition should not substantially alter the composition of the randomized/matched/assigned conditions. Statistical tests should show that attrition has little relationship to the sociodemographic characteristics and baseline outcomes of dropouts and completers. Tests for differential attrition should be performed at the individual level and, for clustered trials, also at the cluster level.
- The evaluation should show consistent, beneficial effects relative to the number of outcomes assessed, on samples that are not small (e.g., a pilot study) or narrowly defined (e.g., drawn from only one clinic or school).
- There is evidence of a consistent and statistically significant positive impact on a Blueprints outcome in a preponderance of studies that meet Blueprints evaluation quality criteria. There should be a statement of (a) the population and any specific subgroups with whom the intervention has been demonstrated to be effective and (b) any relevant conditions under which the effectiveness was found to vary (e.g., relating to setting, levels of risk or features of implementation). It is desirable that effect sizes (e.g., Cohen’s d, Odds Ratio, or Hedges’ g) or differences in proportions between intervention and control groups be reported, along with the significance levels of those differences, or that it is possible to calculate the effect size from the data reported (i.e., means and standard deviations or proportions for intervention and control groups are provided).
- There must be no evidence of the intervention having a statistically significant harmful main effect on participants in relation to any of the Blueprints outcomes from studies that meet Blueprints evaluation quality criteria. Harmful effects may be permissible in areas that are not critically important for Blueprints outcomes; for example, a program that significantly and substantially lowers actual teen pregnancy but has a slight negative effect on attitudes towards sex could still qualify.
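The effect-size criterion above notes that an effect size should be calculable from reported means, standard deviations, and sample sizes. The short sketch below illustrates that calculation for Cohen's d and Hedges' g; it is an illustration only, not part of the Blueprints standards, and all numbers in it are hypothetical.

```python
import math

def cohens_d(m_t, sd_t, n_t, m_c, sd_c, n_c):
    """Standardized mean difference between treatment and control,
    dividing by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (m_t - m_c) / pooled_sd

def hedges_g(m_t, sd_t, n_t, m_c, sd_c, n_c):
    """Cohen's d with the small-sample bias correction."""
    d = cohens_d(m_t, sd_t, n_t, m_c, sd_c, n_c)
    correction = 1 - 3 / (4 * (n_t + n_c) - 9)
    return d * correction

# Hypothetical published results: treatment mean 10 (SD 4, n = 50)
# versus control mean 8 (SD 4, n = 50).
d = cohens_d(10, 4, 50, 8, 4, 50)   # (10 - 8) / 4 = 0.5
g = hedges_g(10, 4, 50, 8, 4, 50)   # slightly smaller than d
```

Because the correction factor is always below 1, Hedges' g is slightly smaller than Cohen's d, which matters mainly for the small samples the criteria caution against.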
The review of programs on evaluation quality and intervention impact is completed by the Blueprints Advisory Board based upon preliminary reviews and the recommendation of Blueprints staff. Programs must meet all the criteria for evaluation quality and intervention impact to be certified as Promising, Model or Model Plus by the Blueprints Advisory Board.
Dissemination Readiness

Promising programs must also meet the following dissemination readiness standards:
- There are explicit processes for ensuring the intervention reaches the intended recipients.
- There is a clear description of the activities of the intervention, ideally supported by training materials, protocols and explicit implementation procedures. Ideally, there are also materials and instructions that specify the intervention content and guide its implementation (such as a manual or series of manuals specifying in detail what the intervention comprises; levels of formal training or qualifications for those delivering the intervention; and/or training and technical assistance provided by certified trainers/technical assistance providers).
- The financial resources required to deliver the intervention are specified, where possible. Ideally, there is a description of costs associated with implementing the program, including start-up costs; intervention implementation costs; intervention implementation support costs, such as technical assistance and training; and costs associated with fidelity monitoring and evaluation. A breakdown of costs for these separate components is provided where appropriate.
- There is reported information on the human resources required to deliver the intervention, where possible. Ideally, there is a description of staff resources needed to deliver the intervention, including required staff ratios, the required level of qualifications and skills for staff, and the amount of staff time allocated for implementation (to cover training, preparation, delivery, supervision and travel).
- The program that was evaluated is still available. The version of the intervention that met Blueprints evaluation quality and intervention impact criteria is currently available for sites wishing to implement it; for example, it has an up-to-date website, and program materials can be ordered.
The dissemination readiness standard is not considered until an intervention has met Blueprints intervention specificity, evaluation quality and intervention impact standards. Blueprints staff make this determination in consultation with the intervention developer or evaluator. This often requires obtaining information not readily available in existing publications.
Model Programs

Model programs meet the following two standards in addition to all of those required for certification as a Promising program.
- There are two well-conducted RCT evaluations, or one high-quality RCT and one high-quality QED evaluation. These evaluations must meet all the methodological requirements spelled out in the Promising evaluation quality criteria.
- There is a minimum of one long-term follow-up (at least 12 months following completion of the intervention) on at least one outcome measure indicating that results are sustained after leaving the intervention. Data on sustainability must be available for both treatment and control groups. For interventions that are designed to extend over many years (e.g., Alcoholics Anonymous), evidence that effects are sustained after several years of participation in the program, even though participation is continuing, will be accepted as evidence of sustainability.
Model Plus Programs
Model Plus programs are Model programs that meet one additional standard.
Independent Research Team
In at least one high-quality study demonstrating desired outcomes, authorship, data collection, and analysis have been conducted by a researcher who is neither a current nor a past member of the developer’s research team and who has no financial interest in the intervention.