Reasons for Non-Certification

Blueprints for Healthy Youth Development uses rigorous criteria to review all articles and reports that contain evaluation information for a given intervention. The only requirement for review is that the study describes an intervention evaluated with a Randomized Controlled Trial (RCT) or Quasi-Experimental Design (QED).

Randomized Controlled Trial (RCT)

A research design in which participants are randomly assigned to treatment or control conditions by the researchers. If conditions are the same at baseline, and nothing changes except the intervention, posttest differences (outside of chance error) can be attributed to the intervention.

Quasi-Experimental Design (QED)

A research design in which participants are not randomly assigned to conditions.

Useful when randomization is impractical or unethical.

However, without random assignment to ensure the two conditions were the same at baseline, QEDs cannot conclusively attribute posttest differences to the intervention.

In addition to rating certified interventions, Blueprints also rates non-certified interventions on how well they meet its criteria for strong designs and meaningful impacts. These non-certified ratings, and the reasons interventions receive them, are listed below.

Not Dissemination Ready
Several interventions meet Blueprints standards for intervention specificity, impact, and evaluation quality but fail to meet the dissemination readiness standard necessary for Blueprints certification. We consider the following criteria when determining whether an intervention is “dissemination ready.” The intervention has the necessary:

  • organizational capability,
  • manuals,
  • training,
  • technical assistance, and
  • other support required for implementation with fidelity in communities and public service systems.

If an intervention has a high-quality evaluation design but does not meet some or all of the criteria listed above, it will receive a “not dissemination ready” rating – meaning it has met the criteria for evaluation quality (as determined by the Blueprints Advisory Board) but has not yet met the dissemination readiness criteria.

Inconclusive Evidence
This rating is given to interventions supported by only one quasi-experimental study that meets Blueprints evaluation quality standards; certification requires one randomized controlled trial or two quasi-experimental studies that meet those standards. More often, however, studies receive an inconclusive rating because of limitations related to measurement or analysis. Reasons for an inconclusive rating by Blueprints include one or more of the following:

Not controlling for baseline outcomes

Baseline Outcomes – Study variables measured before the intervention program begins, which can then be used to compare the difference between treatment and control conditions in terms of change from before to after the intervention.

Missing or incomplete tests for baseline equivalence

Baseline Equivalence – Whether the treatment and control conditions are equivalent on demographic characteristics, outcomes, and other study variables before the intervention program begins.
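The baseline-equivalence check described above can be sketched in code. This is an illustrative example, not Blueprints’ official procedure; the standardized mean difference and the 0.25 rule of thumb are common conventions in evaluation research, and all data values here are made up.

```python
# Illustrative sketch: checking baseline equivalence between treatment and
# control groups using the standardized mean difference (SMD).
import numpy as np

def standardized_mean_difference(treatment, control):
    """Difference in group means divided by the pooled standard deviation."""
    t, c = np.asarray(treatment, float), np.asarray(control, float)
    pooled_sd = np.sqrt(((len(t) - 1) * t.var(ddof=1) +
                         (len(c) - 1) * c.var(ddof=1)) / (len(t) + len(c) - 2))
    return (t.mean() - c.mean()) / pooled_sd

# Made-up baseline outcome scores for the two conditions.
baseline_treat = [10.2, 11.5, 9.8, 10.9, 11.1]
baseline_ctrl = [10.0, 11.2, 10.1, 10.7, 11.4]

# A common (assumed) rule of thumb: |SMD| below about 0.25 suggests the
# groups were roughly equivalent before the intervention began.
smd = standardized_mean_difference(baseline_treat, baseline_ctrl)
print(f"SMD at baseline: {smd:.3f}")
```

The same check would typically be repeated for each demographic characteristic and baseline outcome, not just one variable.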

Failing to report information on attrition

Attrition – The loss of participants over the course of an intervention and/or study, resulting in a reduced sample size relative to the initial sample.

Missing or incomplete tests for differential attrition

Differential attrition – Attrition that is selective. Blueprints examines differential attrition at two levels:

  1. Whether those who do not complete the study (“attritors”) differ systematically from those who are retained in the study (“completers”) in terms of demographic characteristics, baseline outcomes, and other study variables.
  2. Whether the “completers” in the treatment and control conditions are still equivalent on baseline characteristics and outcomes after attrition (using research language, this means that baseline equivalence for the analysis sample is achieved).
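The two levels of differential attrition described above can be sketched as follows. This is a hypothetical illustration with made-up data, not Blueprints’ analysis code; the standardized mean difference is one common way to quantify group differences.

```python
# Hypothetical sketch of the two differential-attrition checks:
# (1) attritors vs. completers, (2) baseline equivalence among completers.
import numpy as np

def smd(a, b):
    """Standardized mean difference between two groups."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) +
                      (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled

# Made-up baseline scores; `completed` flags who stayed in the study.
baseline = np.array([12., 14., 11., 15., 13., 10., 16., 12.])
condition = np.array(["T", "T", "T", "T", "C", "C", "C", "C"])
completed = np.array([True, True, False, True, True, True, False, True])

# Level 1: do attritors differ systematically from completers at baseline?
overall = smd(baseline[~completed], baseline[completed])

# Level 2: among completers only, are treatment and control still
# equivalent at baseline (i.e., equivalence holds for the analysis sample)?
between = smd(baseline[completed & (condition == "T")],
              baseline[completed & (condition == "C")])

print(f"attritors vs completers SMD: {overall:.2f}")
print(f"T vs C among completers SMD: {between:.2f}")
```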

Inconsistent, weak, or unreliable program effects

e.g., few significant beneficial program effects relative to the number of tests

Small or narrowly defined samples

e.g., feasibility or pilot studies; relying on only one clinic or school

Insufficient Evidence
Studies of an intervention rated as Insufficient by Blueprints may have one or more of the following methodological weaknesses:

Designs with no control group

Control group – A group of individuals who do not receive the intervention. The characteristics of this group should match those of the treatment group (or those who receive the intervention) as closely as possible.

Designs with limited or no matching of the control group to the treatment group in a quasi-experimental evaluation

e.g., participants or clusters non-randomly assigned to the treatment and control groups are not matched on baseline characteristics, or are matched on only a few baseline characteristics

Using only measures of knowledge, attitudes, or intentions that may not reflect actual behavior

e.g., measures of attitudes toward drug use but not actual drug use; measures of knowledge of parenting but not parenting behaviors

Not using intent-to-treat analysis

Intent-to-treat – An analysis that includes all available data from participants based on their assignment to the treatment or control conditions, regardless of whether the participant received or fully complied with the treatment.
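The contrast between intent-to-treat and analyzing only compliers can be sketched in a few lines. This is a hypothetical illustration with made-up records, not an analysis from any Blueprints review.

```python
# Hypothetical sketch: intent-to-treat (ITT) keeps every participant in the
# group they were assigned to, even those who never completed the program.
import numpy as np

# Made-up records: (assigned group, completed the program?, outcome score)
records = [
    ("T", True, 8.0), ("T", False, 5.0), ("T", True, 9.0),
    ("C", True, 4.0), ("C", True, 6.0), ("C", False, 5.0),
]

# ITT: average outcomes by *assignment*, ignoring compliance entirely.
itt_t = np.mean([y for g, done, y in records if g == "T"])
itt_c = np.mean([y for g, done, y in records if g == "C"])

# A per-protocol analysis would instead drop non-compliers, which can bias
# the estimate because dropouts are rarely a random subset.
pp_t = np.mean([y for g, done, y in records if g == "T" and done])

print(f"ITT effect estimate: {itt_t - itt_c:.2f}")
```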

Not adjusting for clustering when the cluster was the unit of assignment

Clustering occurs in data that are organized at different levels – that is, individual units (e.g., students) are clustered within aggregate units (e.g., schools).
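One common way to see why the clustering adjustment matters is the design effect, which describes how much standard errors must be inflated when whole clusters are assigned. This sketch uses the standard textbook formula; the ICC and cluster-size values are made up for illustration.

```python
# Illustrative sketch: the design effect for cluster-assigned designs.
# DEFF = 1 + (m - 1) * ICC, where m is the average cluster size and ICC is
# the intraclass correlation (how similar students within a school are).
def design_effect(icc, avg_cluster_size):
    """Variance inflation factor when the cluster is the unit of assignment."""
    return 1.0 + (avg_cluster_size - 1) * icc

# Even a small ICC matters with large clusters: ICC = 0.05, 25 students/school.
deff = design_effect(icc=0.05, avg_cluster_size=25)
print(f"Design effect: {deff:.2f}")  # standard errors inflate by sqrt(DEFF)
```

An analysis that treats the 25 students per school as independent would understate its standard errors and overstate statistical significance, which is why Blueprints requires the adjustment.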

Not including any independently-measured behavioral outcomes

Outcome measures are considered independent if they are free from bias due to a strong incentive for a positive intervention outcome.

Self-report surveys, behaviors coded by a researcher “blind” to conditions, and other observable, objective outcomes (e.g., recidivism, medical diagnosis) are all examples of independent measures.

Examples of non-independent measures include teacher ratings of child behaviors following a classroom-based intervention delivered by the teacher, and parent ratings of child behaviors following a parenting intervention administered to parents.

Contact

Blueprints for Healthy Youth Development
University of Colorado Boulder
Institute of Behavioral Science
UCB 483, Boulder, CO 80309

Email: blueprints@colorado.edu

Sign up for Newsletter

If you are interested in staying connected with the work conducted by Blueprints, please share your email to receive quarterly updates.

Blueprints for Healthy Youth Development is currently funded by Arnold Ventures (formerly the Laura and John Arnold Foundation) and historically has received funding from the Annie E. Casey Foundation and the Office of Juvenile Justice and Delinquency Prevention.