Blueprints Standards

Identifying proven programs is the core of the work we do at Blueprints for Healthy Youth Development. To fulfill our mission, our team evaluates hundreds of interventions each year to identify those that produce positive outcomes. Interventions certified by Blueprints are rated as Promising, Model, or Model Plus.

Promising interventions meet the minimum standard of effectiveness.
Model interventions meet a higher standard and provide greater confidence in the program’s capacity to change behavior and targeted outcomes.
Model Plus interventions meet an additional standard of independent replication.

Only Model and Model Plus programs are ready for scale.

This section of our website provides a closer look at what it means for a program to be certified by Blueprints. Our standards are described below.

Promising Programs


Promising programs meet all the following minimum standards:

Evaluation Quality

A. The intervention has been evaluated by at least one randomized controlled trial (RCT) OR two quasi-experimental (QED) evaluations (an initial quasi-experimental evaluation and a replication) in which all of the criteria listed below are met.

A1. Assignment to the intervention is at a level appropriate to the intervention (i.e., individual, classroom, school, etc.). This requires specification of the unit of allocation (e.g., individual, class, school). For example, if it is a school-based intervention, schools should be assigned to different conditions (not students or classrooms), whereas if it is a single-site evaluation of a community-based parent training program, families should be assigned to the different conditions.

A2. There is use of valid and reliable measures that are appropriate for the intervention population of focus and desired outcomes. For standardized measures, the name of the measure, its reported reliability and validity (e.g., alpha coefficient and scale length), and a reference to a suitable source containing this information are required. Administrative and archival indicators (e.g., rates of arrest, detention, school suspension, grade retention) should be described in detail and their source indicated. In both cases, the measure should be appropriate for the construct in question and the population to whom it is applied. The data collection method should be specified (e.g., observation, self-report, interview, archival search, teacher report, etc.).

A3. Analysis is based on ‘intent-to-treat’. This standard requires evidence that investigators attempted to include all participants assigned to each study condition in their analysis, regardless of their level of participation. For example, once a participant is randomized to a condition, they should remain in that condition for analysis even if they never received the intended intervention, received only part of it, or ‘crossed over’ into the other study condition during the study.
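To make the principle concrete, here is a minimal sketch (hypothetical data and column names, using pandas) of how an intent-to-treat comparison differs from a per-protocol one:

    import pandas as pd

    # Hypothetical trial records. Under intent-to-treat, each participant
    # stays in the condition they were randomized to, even if they never
    # attended ("none") or crossed over into the other condition.
    df = pd.DataFrame({
        "assigned": ["treatment", "treatment", "control", "control"],
        "received": ["treatment", "none", "control", "treatment"],
        "outcome":  [12.0, 15.0, 18.0, 11.0],
    })

    # Intent-to-treat: group by the ASSIGNED condition.
    print(df.groupby("assigned")["outcome"].mean())

    # Grouping by "received" instead would be a per-protocol analysis,
    # which discards the protection randomization provides and would
    # not satisfy this criterion.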

A4. There are appropriate statistical analyses. The methods used to analyze results are appropriate given the data being analyzed and the purpose of the analysis. For example, the statistical method should be suitable for the type of data used (categorical, ordinal, ratio / parametric or non‐parametric, etc.) and capable of answering the research question (testing for differences in averages between groups, predicting categorical or linear outcomes, etc.). Statistical models should control for baseline differences on outcome measures and demographic characteristics. Treatment condition should be modeled at the level of assignment (or deviations from that strategy should be justified statistically).
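As one illustration of an analysis meeting these requirements, the sketch below (simulated data and hypothetical variable names, using the statsmodels formula API) estimates a treatment effect while controlling for the baseline outcome and a demographic covariate. With individual-level assignment, ordinary least squares is appropriate; group-level assignment would instead call for multilevel models or cluster-robust standard errors.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "treated": rng.integers(0, 2, n),   # 0 = control, 1 = intervention
        "pretest": rng.normal(50, 10, n),   # baseline score on the outcome
        "age": rng.integers(11, 15, n),     # demographic covariate
    })
    # Simulate a posttest with a modest program effect.
    df["posttest"] = df["pretest"] + 3 * df["treated"] + rng.normal(0, 5, n)

    # ANCOVA-style model: treatment effect adjusted for baseline and age.
    model = smf.ols("posttest ~ treated + pretest + age", data=df).fit()
    print(model.params["treated"], model.pvalues["treated"])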

A5. Baseline equivalence of the randomized/matched sample (i.e., before any subjects drop out) is established. Analysis of baseline differences indicates equivalence between intervention and comparison/control groups. There should be no statistically significant differences between intervention and control groups on pretest measures of the outcome or demographic characteristics. If there are significant differences, they need to be controlled for in the analyses. If such differences are numerous and large, randomization/matching is likely compromised and the study would be considered seriously flawed.
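A typical check of this criterion is a simple significance test on each pretest and demographic measure for the full randomized sample. The sketch below (simulated scores, using scipy) compares groups on one continuous pretest measure; categorical demographics would be compared with a chi-square test:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Pretest scores for everyone randomized, before any dropout.
    treatment_pre = rng.normal(50, 10, 120)
    control_pre = rng.normal(50, 10, 115)

    # A non-significant difference (p >= .05) is consistent with
    # baseline equivalence on this measure.
    t_stat, p_value = stats.ttest_ind(treatment_pre, control_pre)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")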

B. There is a clear statement of the demographic characteristics of the population participating in the study. This description should include the age, gender, race/ethnicity and when possible, socio‐economic status and community characteristics (rural, suburban and urban) of participants. Any discrepancy from the intervention’s intended population should be noted and justified.

C. There is documentation of what participants received in the intervention condition(s), and any significant departures from the intervention as designed must be described. The nature of the control condition should also be described. This is critical for determining if the evaluation study delivered the intervention with fidelity and if it is a replication of a given program or a new program. It should also be noted whether all participants in a given condition received the same intervention. Measures of intervention fidelity (including both dosage and a quantitative measure of quality), and results of data collected using these measures, should be reported.

D. There is no evidence of significant differential attrition. This requires results of statistical tests examining (a) differences in baseline outcomes and sociodemographic characteristics of participants who drop out of the study compared to participants who remain in the study, and (b) baseline outcomes and sociodemographic characteristics of the “analysis sample” – meaning participants with complete data in the treatment group compared to participants with complete data in the control group.  
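The two tests this criterion asks for can be sketched as follows (simulated data and hypothetical column names, using pandas and scipy): comparison (a) contrasts dropouts with completers on a baseline measure, and comparison (b) contrasts the treatment and control completers who make up the analysis sample:

    import numpy as np
    import pandas as pd
    from scipy import stats

    rng = np.random.default_rng(2)
    df = pd.DataFrame({
        "group": rng.choice(["treatment", "control"], 200),
        "pretest": rng.normal(50, 10, 200),
        "completed": rng.choice([True, False], 200, p=[0.85, 0.15]),
    })

    # (a) Dropouts vs. completers on the baseline outcome.
    _, p_a = stats.ttest_ind(df.loc[df["completed"], "pretest"],
                             df.loc[~df["completed"], "pretest"])

    # (b) Treatment vs. control within the analysis sample (completers).
    completers = df[df["completed"]]
    _, p_b = stats.ttest_ind(
        completers.loc[completers["group"] == "treatment", "pretest"],
        completers.loc[completers["group"] == "control", "pretest"])

    print(f"(a) dropouts vs. completers: p = {p_a:.3f}")
    print(f"(b) analysis-sample balance: p = {p_b:.3f}")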

E. Outcome measures are not dependent on the unique content of the intervention. Outcome measures must be independent of the content of the intervention; that is, they are not measures of what was specifically delivered during the intervention. 

F. Outcome measures are not rated solely by the person or people delivering the intervention. At least one Blueprints outcome is rated or assessed independently from the person(s) delivering the intervention. For example, a school‐based intervention to reduce children’s antisocial behavior would not meet this criterion if (a) the measure of antisocial behavior relied solely on teacher ratings and (b) those same teachers implemented the intervention.   

Intervention Impact

A. There is evidence of a consistent and statistically significant positive impact on a Blueprints outcome in a preponderance of studies that meet Blueprints evaluation quality criteria. There should be a statement of (a) the population and any specific subgroups with whom the intervention has been demonstrated to be effective and (b) any relevant conditions under which the effectiveness was found to vary (e.g., relating to setting, levels of risk or features of implementation). It is desirable that effect sizes (e.g., Cohen’s d, odds ratio, or Hedges’ g) or differences in proportions between intervention and control groups be reported, along with the significance levels of those differences, or that it is possible to calculate the effect size from the data reported (means and standard deviations or proportions for intervention and control groups).
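When only summary statistics are published, the standardized mean difference can be recovered directly from them. The sketch below (hypothetical numbers) computes Cohen’s d from group means, standard deviations, and sample sizes, then applies the usual small-sample correction to obtain Hedges’ g:

    import math

    def cohens_d(m1, sd1, n1, m2, sd2, n2):
        """Standardized mean difference from reported summary statistics."""
        pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                              / (n1 + n2 - 2))
        return (m1 - m2) / pooled_sd

    def hedges_g(d, n1, n2):
        """Cohen's d with the small-sample bias correction."""
        return d * (1 - 3 / (4 * (n1 + n2) - 9))

    # Hypothetical posttest results: intervention vs. control.
    d = cohens_d(m1=54.0, sd1=9.5, n1=120, m2=50.0, sd2=10.0, n2=115)
    print(f"Cohen's d = {d:.2f}, Hedges' g = {hedges_g(d, 120, 115):.2f}")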

B. There is an absence of iatrogenic effects for intervention participants (this includes all sub-groups and behavioral outcomes). There must be no evidence of the intervention having a statistically significant harmful effect on participants, either as a whole or in sub-groups, in relation to any of the Blueprints outcomes from studies that meet Blueprints evaluation quality criteria. Harmful effects may be permissible in areas that are not critically important for Blueprints outcomes; for example, a program might significantly and substantially lower actual teen pregnancy but have a slight negative effect on attitudes towards sex.

The review of programs on evaluation quality and evaluation impact is completed by the Blueprints Advisory Board based upon preliminary reviews and the recommendation of Blueprints staff. Programs must meet all of the criteria for these two standards to be certified as Promising or Model by the Blueprints Advisory Board.

Intervention Specificity

A. The intended subjects or clients to receive the intervention are clearly identified. The relevant demographic characteristics (age, gender, ethnic group, socio-economic status, urban/suburban/rural residence) of those targeted by the intervention must be stated. If the intended subjects or clients are those who have been screened based upon some characteristic (e.g., a risk condition, protective factor status, a minimum level of the study outcome, or some personal or family attribute), these screening criteria and the screening process must be fully described. All inclusion or exclusion criteria for program participation must be noted.

B. The outcomes of the intervention are clearly specified and are one of the Blueprints outcomes. The outcomes identified are either specific Blueprints outcomes or outcomes that logically fit into one of the five Blueprints developmental outcome areas. The expected direction of change (increase or decrease) also must be noted.  

C. The intervention’s theoretical rationale or logic model is discussed, explaining how the intervention is expected to have a positive effect on intended outcomes and whether/how changes in risk or protective factors will affect the specified outcome(s). It must be clear how the intervention is expected to achieve the desired change in outcomes.

D. There is documentation of the intended intervention structure, content and delivery process. A clear description of the planned intervention is required, including what service, activity or treatment is provided, to whom, by whom, over what period, with what intensity and frequency, and in what setting. This should include (a) the content of the intervention (e.g., information, advice, training, money, advocacy), (b) the nature of the provider (e.g., social worker, teacher, psychologist, volunteer), (c) the duration of the intervention (e.g., 3 hours, 6 weeks, a school year), (d) the length of participation at each session/contact (e.g., 2 hours), (e) the frequency of sessions/contacts (e.g., daily, weekly, monthly), (f) the setting (e.g., school, community center, health clinic) and (g) the mode of delivery (e.g., group-based, one-to-one). In the case of a multi-component intervention – for example, one that has components for children only, parents only, and also children and parents together – it is necessary for each component to be described in these terms.

The Blueprints specificity standard is a screening standard: it is applied by Blueprints staff before an intervention is submitted to the Blueprints Advisory Board for review of evaluation quality and impact.

Dissemination Readiness

A. There are explicit processes for ensuring the intervention reaches the intended recipients.

B. There is a clear description of the activities of the intervention, and ideally there are training materials, protocols and explicit implementation procedures. There are materials and instructions that specify the intervention content and guide the implementation of the intervention. Ideally, this includes a manual or series of manuals specifying in detail what the intervention comprises; the levels of formal training or qualifications required of those delivering the intervention; and training and technical assistance provided by certified trainers/technical assistance providers.

C. The financial resources required to deliver the intervention are specified, where possible. Ideally, there is a description of costs associated with implementing the program, including: start-up costs; intervention implementation costs; intervention implementation support costs, such as technical assistance and training; and costs associated with fidelity monitoring and evaluation. When appropriate, a breakdown of costs for these separate components is provided.

D. There is reported information on the human resources required to deliver the intervention. Ideally, we are looking for a description of staff resources needed to deliver the intervention, including required staff ratios, the required level of qualifications and skills for staff, and the amount of time they will need to allocate (to cover delivery, training, supervision, preparation and travel).

E. The program that was evaluated is still available. The version of the intervention that met Blueprints evaluation quality and intervention impact criteria is currently available for sites wishing to implement it – for example, it has an up-to-date website and program materials can be ordered. Our readiness standard is not considered until an intervention has met Blueprints specificity, evaluation quality and intervention impact standards. Blueprints staff make this determination in consultation with the intervention developer or evaluator. This often requires obtaining information not readily available in existing publications.

Model Programs


Model Programs meet the following two standards in addition to all of those required for certification as a Promising Program.

Evaluation Quality

A. There are two well-conducted RCTs or one RCT and one QED evaluation. These evaluations must meet all the methodological requirements spelled out in the Promising evaluation quality criteria (A-F).

Intervention Impact

A. There is a minimum of one long‐term follow‐up (at least 12 months following completion of the intervention) on at least one outcome measure indicating that results are sustained after leaving the intervention. Data on sustainability must be available for both treatment and control groups. For interventions that are designed to extend over many years (e.g., Alcoholics Anonymous), evidence that effects are sustained after several years of participation in the program, even though participation is continuing, will be accepted as evidence of sustainability.

Model Plus Programs


Model Plus programs are Model programs that meet one additional standard.

Independent Replication

In at least one high-quality study demonstrating desired outcomes, authorship, data collection, and analysis have been conducted by a researcher who is neither a current nor a past member of the developer’s research team and who has no financial interest in the program.

Contact

Blueprints for Healthy Youth Development
University of Colorado Boulder
Institute of Behavioral Science
UCB 483, Boulder, CO 80309

Email: blueprints@colorado.edu


Blueprints for Healthy Youth Development is currently funded by Arnold Ventures (formerly the Laura and John Arnold Foundation) and historically has received funding from the Annie E. Casey Foundation and the Office of Juvenile Justice and Delinquency Prevention.