Identifying proven programs is the core of the work we do at Blueprints for Healthy Youth Development. To fulfill our mission, our team evaluates hundreds of interventions each year with the aim of identifying those that produce positive outcomes. Interventions certified by Blueprints are rated as Promising, Model or Model Plus.
– Promising interventions meet the minimum standard of effectiveness.
– Model interventions meet a higher standard and provide greater confidence in the program’s capacity to change behavior and targeted outcomes.
– Model Plus interventions meet an additional standard of independent replication.
Only Model and Model Plus programs are ready for scale.
This section of our website provides a closer look at what it means for a program to be certified by Blueprints. Our standards are described below.
Promising programs meet all the following minimum standards:
- The intended recipients of the intervention are clearly identified. The relevant sociodemographic characteristics (age, gender, ethnic group, socio‐economic status, urban/suburban/rural residence) of those targeted by the intervention are stated. If the intended participants have been screened based upon some characteristic(s) (e.g., a risk condition, protective factor status, a minimum level of the study outcome, or some personal or family attribute), these screening criteria and the screening process must be fully described. All inclusion or exclusion criteria for program participation must be noted.
- The outcomes of the intervention are clearly specified and are one of the Blueprints outcomes.
- The intervention’s theoretical rationale or logic model is discussed, explaining how the intervention is expected to have a positive effect on intended outcomes and whether/how changes in risk or protective factors will affect the specified outcome(s). It must be clear how the intervention is expected to achieve the desired change in outcomes.
- There is documentation of the intended intervention structure, content and delivery process. A clear description of the planned intervention is reported, such as what service, activity or treatment is provided, to whom, by whom, over what period, with what intensity and frequency, and in what setting. This can include (a) the content of the intervention (e.g., information, advice, training, money, advocacy), (b) the nature of the provider (e.g., social worker, teacher, psychologist, volunteer), (c) the duration of the intervention (e.g., 3 hours, 6 weeks, a school year), (d) the length of participation at each session/contact (e.g., 2 hours), (e) the frequency of sessions/contacts (e.g., daily, weekly, monthly), (f) the setting (e.g., school, community center, health clinic) and (g) the mode of delivery (e.g., group‐based, one‐to‐one). In the case of a multi‐component intervention – for example, one that has components for children only, parents only, and children and parents together – it is necessary for each component to be described in these terms.
The Blueprints intervention specificity standard is a screening standard, determined by Blueprints staff before an intervention is submitted to the Blueprints Advisory Board for review of evaluation quality and intervention impact.
Evaluation Quality and Intervention Impact
The intervention must be evaluated in at least one randomized controlled trial (RCT) or two quasi‐experimental design (QED) evaluations in which all of the criteria listed below are met.
- The process of assignment to the intervention and control groups should be clearly defined, and what participants received in the intervention (i.e., the treatment group) and the nature of the control condition (i.e., those who did not receive the intervention) should also be described. The study should specify how the unit of assignment (e.g., individual, classroom, school) was defined at a level appropriate to the intervention.
- The intervention must assess at least one behavioral outcome (e.g., drug use), and not only measures of knowledge, attitudes or intentions that may not reflect actual behavior (e.g., measures of attitudes toward drug use). In addition, outcome measures must be independent of the content of the intervention; that is, they are not measures of what was specifically delivered during the intervention.
- Outcome measures are not rated solely by the person or people delivering the intervention. At least one Blueprints outcome showing a statistically significant positive impact is rated or assessed independently from the person(s) delivering the intervention. For example, a school‐based intervention to reduce children’s antisocial behavior would not meet this criterion if (a) teachers implemented the intervention and (b) the measure of antisocial behavior relied solely on those same teachers’ ratings.
- The study should clearly describe the sample size (overall and by treatment and control conditions) at each stage of data gathering (e.g., at randomization, at baseline, at follow-up). Attrition (i.e., the loss of participants over time) is reported clearly.
- The study should use valid and reliable measures that are appropriate for the intervention population of focus and desired behavioral outcomes. For standardized measures, the name of the measure, its reported reliability and validity (e.g., alpha coefficient and scale length), and a reference to a suitable source containing this information is required. Administrative data and archival indicators (e.g., rates of arrest, detention, school suspension, grade retention, etc.) should be described in detail and their source indicated. In both cases, the measure should be appropriate for the construct in question and the population to whom it is applied. The data collection method should be specified (e.g., observation, self‐report, interview, archival search, teacher report, etc.).
- Analysis is based on “intent‐to‐treat.” This standard requires evidence that investigators attempted to include all participants assigned to each study condition in their analysis, regardless of amount of participation in the intervention. For example, once a participant is randomized to a condition, they should remain in that condition for analysis even if they never received the intended intervention, received only part of the intended intervention, or “crossed over” into the other study condition during the study.
- There are appropriate statistical analyses. The methods used to analyze results are appropriate given the data being analyzed and the purpose of the analysis. For example, the statistical method should be suitable for the type of data used (categorical, ordinal, ratio, etc.) and capable of answering the research question (testing for differences in averages between groups, predicting categorical or linear outcomes, etc.). Statistical models should control for baseline measures of the outcome measure (or a valid proxy of the outcome measure) and sociodemographic characteristics (see the baseline equivalence criterion below). The treatment condition should be modeled at the level of assignment with adjustment for clustering when the cluster is the unit of assignment.
- Baseline equivalence of the randomized/matched sample (i.e., before any subjects drop out) is established. Analysis of baseline differences indicates equivalence between intervention and comparison/control groups. There should be no statistically significant differences between intervention and control groups on pretest measures of the outcome (or a valid proxy of the outcome measure) or sociodemographic characteristics. If there are significant differences, they need to be controlled for in the analyses. If such differences are numerous and large, randomization/matching is likely compromised, and the study would be considered seriously flawed.
- There is no evidence of significant differential attrition. This requires results of statistical tests examining (a) differences in baseline outcomes and sociodemographic characteristics of participants who drop out of the study compared to participants who remain in the study (i.e., attriters and completers), and (b) whether baseline outcomes and sociodemographic characteristics differed between attriters in the treatment group compared to attriters in the control group.
- The evaluation shows consistent, beneficial effects relative to the number of outcomes assessed, on samples that are neither small (e.g., a pilot study) nor narrowly defined (e.g., drawn from a single clinic or school).
- There is evidence of a consistent and statistically significant positive impact on a Blueprints outcome in a preponderance of studies that meet Blueprints evaluation quality criteria. There should be a statement of (a) the population and any specific subgroups with whom the intervention has been demonstrated to be effective and (b) any relevant conditions under which the effectiveness was found to vary (e.g., relating to setting, levels of risk or features of implementation). It is desirable that effect sizes (e.g., Cohen’s d, Odds Ratio, or Hedges’ g) or differences in proportions between intervention and control groups be reported, along with the significance levels of those differences, or that it is possible to calculate the effect size from the data reported (i.e., means and standard deviations or proportions for intervention and control groups are provided).
- There must be no evidence of the intervention having a statistically significant harmful main effect on participants in relation to any of the Blueprints outcomes from studies that meet Blueprints evaluation quality criteria. Harmful effects may be permissible in areas that are not critically important to Blueprints outcomes; for example, a program that significantly and substantially lowers actual teen pregnancy but has a slight negative effect on attitudes toward sex could still qualify.
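The intent‐to‐treat principle among the criteria above can be illustrated with a minimal sketch. The records and field names below are hypothetical; the point is only that outcomes are averaged by assigned condition, regardless of whether a participant actually received the intervention or crossed over.

```python
# Hypothetical trial records: assigned condition, whether the intervention
# was actually received, and a binary outcome (illustrative values only).
records = [
    {"assigned": "treatment", "received": True,  "outcome": 1},
    {"assigned": "treatment", "received": False, "outcome": 0},  # no-show: still analyzed as treatment
    {"assigned": "control",   "received": False, "outcome": 0},
    {"assigned": "control",   "received": True,  "outcome": 1},  # crossover: still analyzed as control
]

def itt_group_means(records):
    """Intent-to-treat: average outcomes by *assigned* condition,
    ignoring whether participants actually received the intervention."""
    means = {}
    for arm in ("treatment", "control"):
        outcomes = [r["outcome"] for r in records if r["assigned"] == arm]
        means[arm] = sum(outcomes) / len(outcomes)
    return means
```

A per‐protocol analysis, by contrast, would drop the no‐show or reclassify the crossover, which is what the intent‐to‐treat standard guards against.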
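A baseline equivalence check of the kind required above is commonly a two‐sample t‐test on pretest measures. The sketch below computes Welch's t statistic from scratch; the sample values are hypothetical, and a real analysis would also report degrees of freedom and a p‐value.

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / na + var_b / nb)

# Hypothetical pretest scores for intervention and control groups
intervention = [3.1, 2.8, 3.4, 3.0, 2.9, 3.2]
control = [3.0, 3.1, 2.7, 3.3, 2.9, 3.0]
t = welch_t(intervention, control)
# A |t| well below conventional critical values suggests no significant
# baseline difference; a formal test would consult the t distribution.
```

The same comparison applied to attriters versus completers, and to attriters across conditions, is how the differential attrition criterion is typically examined.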
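Where a study reports only group means and standard deviations, an effect size such as Cohen's d can be recovered directly, as the impact criterion above notes. A minimal sketch with hypothetical reported values:

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_var = ((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2)
    return (mean_t - mean_c) / math.sqrt(pooled_var)

# Hypothetical reported values: treatment M=4.2, SD=1.1, n=150;
# control M=4.8, SD=1.2, n=145 (lower scores are the desired direction here)
d = cohens_d(4.2, 1.1, 150, 4.8, 1.2, 145)
```

Hedges' g applies a small‐sample correction to the same quantity; either allows reviewers to compare the magnitude of impact across studies.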
The review of programs on evaluation quality and intervention impact is completed by the Blueprints Advisory Board based upon preliminary reviews and the recommendation of Blueprints staff. Programs must meet all the criteria for evaluation quality and intervention impact to be certified as Promising, Model or Model Plus by the Blueprints Advisory Board.
Dissemination Readiness
- There are explicit processes for ensuring the intervention reaches the intended recipients.
- There is a clear description of the activities of the intervention. Ideally, there are also training materials, protocols and explicit implementation procedures that specify the intervention content and guide its delivery (such as a manual or series of manuals specifying in detail what the intervention comprises; levels of formal training or qualifications for those delivering the intervention; and/or training and technical assistance provided by certified trainers/technical assistance providers).
- The financial resources required to deliver the intervention are specified, where possible. Ideally, there is a description of costs associated with implementing the program, including start‐up costs; intervention implementation costs; intervention implementation support costs, such as technical assistance and training; and costs associated with fidelity monitoring and evaluation. A breakdown of costs for these separate components is identified where appropriate.
- There is reported information on the human resources required to deliver the intervention, where possible. Ideally, there is a description of staff resources needed to deliver the intervention, including required staff ratios, the required level of qualifications and skills for staff, and the amount of staff time allocated for implementation (to cover training, preparation, delivery, supervision and travel).
- The program that was evaluated is still available. The version of the intervention that met Blueprints evaluation quality and intervention impact criteria is currently available for sites wishing to implement it – for example, that it has an up‐to‐date website and program materials can be ordered.
The dissemination readiness standard is not considered until an intervention has met Blueprints intervention specificity, evaluation quality and intervention impact standards. Blueprints staff make this determination in consultation with the intervention developer or evaluator. This often requires obtaining information not readily available in existing publications.
Model Programs
Model Programs meet the following two standards in addition to all of those required for certification as a Promising Program.
There are two well-conducted RCTs or one high-quality RCT and one high-quality QED evaluation. These evaluations must meet all the methodological requirements spelled out in the Promising evaluation quality criteria.
There is a minimum of one long‐term follow‐up (at least 12 months following completion of the intervention) on at least one outcome measure indicating that results are sustained after leaving the intervention. Data on sustainability must be available for both treatment and control groups. For interventions that are designed to extend over many years (e.g., Alcoholics Anonymous), evidence that effects are sustained after several years of participation in the program, even though participation is continuing, will be accepted as evidence of sustainability.
Model Plus Programs
Model Plus programs are Model programs that meet one additional standard.
Independent Research Team
In at least one high-quality study demonstrating desired outcomes, authorship, data collection, and analysis have been conducted by a researcher who is neither a current nor a past member of the developer’s research team and who has no financial interest in the intervention.