Guidelines for Becoming a Blueprints-Certified Program

The Review Process

The Blueprints mission is to identify evidence-based prevention and intervention programs that are effective in reducing antisocial behavior and promoting a healthy course of youth development and adult maturity. Of the many well-intentioned programs available, however, very few have robust evidence demonstrating their effectiveness. Blueprints promotes only those programs with the strongest scientific support. It does so by providing a list of Blueprints-certified Promising, Model and Model+ programs that potential users can adopt with confidence.

Many program developers who recognize the value of Blueprints certification ask how to get their program approved or recommended. Blueprints has a well-deserved reputation for high standards: it conducts a rigorous review of all the scientific evidence of program effectiveness, and relatively few programs meet the Blueprints standard. To aid developers, we summarize the approval process and the common reasons programs fail to win approval.

First, Blueprints selects programs to review in a variety of ways. Most come to our attention through published scientific studies that we find in comprehensive searches of the evaluation literature, databases, and journals. We also review programs at the request of program developers. The only requirement is that the program have evidence of its effectiveness from a scientific study.

Second, a program review begins with an internal evaluation by Blueprints staff. Staff write a detailed description of all the studies that evaluate the program, summarizing the methodological strengths and weaknesses of the studies along with the evidence that subjects participating in the program improve. The Blueprints Director then selects programs that meet requirements for strong methods and evidence to undergo a review by the Blueprints Advisory Board, a distinguished panel of methodological experts in the field of youth development. See names and credentials.

Third, the Board reads the articles and the Blueprints summary evaluation, discusses the strengths and weaknesses of the program studies, and makes a decision to approve, ask for more information, or reject. Programs that provide more information then go back to the Board for re-review. Approved programs are designated as Promising, Model or Model+, depending on the extensiveness of the evidence. Check out the criteria for these ratings.

Fourth, for those programs approved by the Board, the Blueprints Director gathers information on readiness of the program for dissemination to users. Programs are recommended only if they have published materials available for use by others and can provide training and technical assistance for users needing help. Evidence-based programs produce better outcomes when implemented with fidelity, and fidelity is best achieved when implementation supports are available.

Fifth, approved programs that are ready to be disseminated are added to the Blueprints website as Promising, Model or Model+ programs. The site makes it easy to search for programs that fit a user’s needs.

Common Methodological Problems

Most programs fail to receive Blueprints certification because their scientific evaluations have major limitations. These limitations mean that apparently positive results could stem not from participation in the program but from design problems in the evaluation. The most obvious problems are the lack of a control group (nonparticipants against whom program participants are compared) or the failure to assign participants to the intervention and control groups randomly. Randomized controlled trials do the most to ensure that control and intervention groups are identical at the start of a program in all ways except that one receives the intervention and the other does not. Some quasi-experimental designs, in which participants are statistically matched to nonparticipants on key sociodemographic and/or behavioral measures, come close to the quality of randomized trials in ensuring group equivalence at the start of a program.
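To illustrate one common matching approach, here is a minimal Python sketch of propensity-score estimation, in which each subject's probability of participating is modeled from baseline characteristics and participants are then matched to nonparticipants with similar scores. The file and column names are hypothetical, and Blueprints does not prescribe this or any other particular matching method.

import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("observational_data.csv")  # hypothetical dataset
covariates = ["age", "income", "baseline_outcome"]  # assumed column names

# Model the probability of participation from baseline characteristics;
# matching participants to nonparticipants with similar propensity
# scores approximates the group equivalence that randomization provides.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["participated"])
df["propensity"] = ps_model.predict_proba(df[covariates])[:, 1]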

Even randomized controlled trials and quasi-experimental designs can face serious problems, however. Programs approved by Blueprints must take appropriate steps to overcome these common problems. We cannot offer a comprehensive list of the methodological problems that the Board considers in its program reviews, but we can highlight ones that often prevent Blueprints certification.

Biased Measures. The best outcome measures are obtained from independent sources. Researchers who rate subjects participating in their programs or mothers in parenting programs who rate their child’s behavior are prone to bias. Strong studies have more objective measures based on subject self-reports, blind ratings, or external data sources. Additionally, the study should collect outcome data for treatment and control subjects in the same way and at the same time, and report the intervention’s effects on all outcomes the study measured and not just the positive ones.

Limited Outcomes. The outcomes to be measured should be behavioral and not simply attitudes or intervening risk and protective factors that are closely related to program content. Success in changing attitudes or intervening factors does not always translate into success in changing the ultimate behavioral outcomes of interest to Blueprints. For instance, a program designed to reduce drug use should measure actual drug use patterns and not just intentions or attitudes towards drugs.

Dropping Subjects. Believing that only subjects who complete an intervention should be evaluated, researchers sometimes drop non-participants. However, this approach selects only the best subjects and leads to biased results. Strong studies use an “intent-to-treat” approach that analyzes all subjects in the condition of their original assignment.
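As a minimal sketch in Python, an intent-to-treat comparison simply groups subjects by their original random assignment, regardless of whether they completed the program; the file and column names here are hypothetical.

import pandas as pd
from scipy import stats

df = pd.read_csv("trial_data.csv")  # hypothetical dataset

# Compare outcomes by ORIGINAL random assignment, keeping dropouts and
# non-completers in the condition to which they were assigned.
treated = df.loc[df["assigned"] == "treatment", "outcome"]
control = df.loc[df["assigned"] == "control", "outcome"]

t_stat, p_value = stats.ttest_ind(treated, control, nan_policy="omit")
print(f"Intent-to-treat effect: t = {t_stat:.2f}, p = {p_value:.3f}")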

Non-Equivalent Groups. Even in randomized trials, studies need to examine the baseline equivalence of the intervention and control groups. Statistical tests should show few significant differences between the groups before the intervention begins on all sociodemographic and outcome measures.
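A minimal Python sketch of such a baseline equivalence check follows, looping a simple t-test over each baseline measure. In practice every sociodemographic and outcome measure collected at baseline would be tested; the variable names here are hypothetical.

import pandas as pd
from scipy import stats

df = pd.read_csv("trial_data.csv")  # hypothetical dataset
baseline_vars = ["age", "income", "baseline_outcome"]  # assumed column names

for var in baseline_vars:
    treated = df.loc[df["assigned"] == "treatment", var]
    control = df.loc[df["assigned"] == "control", var]
    t_stat, p = stats.ttest_ind(treated, control, nan_policy="omit")
    flag = "DIFFERS" if p < 0.05 else "ok"
    print(f"{var:20s} t = {t_stat:6.2f} p = {p:.3f} {flag}")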

Differential Attrition. Most studies lose subjects over time, but if the loss differs between the intervention and control groups, it introduces potential bias. Strong studies track subjects by condition at baseline and at each follow-up to ensure that participants lost to attrition do not differ across conditions, baseline measures, or the interaction of condition by baseline measures.
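One way to run such a check, sketched minimally in Python below, is to model a binary dropout indicator on condition, a baseline measure, and their interaction; a significant condition term or interaction signals differential attrition. The column names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trial_data.csv")  # hypothetical dataset
df["treat"] = (df["assigned"] == "treatment").astype(int)

# "dropped" is a 0/1 indicator of loss at follow-up; treat * baseline_outcome
# expands to both main effects plus their interaction.
attrition_model = smf.logit("dropped ~ treat * baseline_outcome", data=df).fit()
print(attrition_model.summary())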

Incorrect Level of Analysis. Researchers often randomize at one level, such as the school level, but conduct analyses at a different level (e.g., students). When individuals are part of larger groups or organizations that are randomized, it violates the assumption made in tests of statistical significance that the observations are independent. To avoid overstating the statistical significance of the results, multilevel models, robust standard errors, or other adjustments should be used.
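As one minimal sketch of such an adjustment, the Python example below clusters standard errors on the unit of randomization (here, hypothetically, schools), so that significance tests do not treat students within the same school as independent observations; a multilevel model would be an equally valid alternative.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trial_data.csv")  # hypothetical dataset
df["treat"] = (df["assigned"] == "treatment").astype(int)

# OLS with cluster-robust standard errors, grouped by the randomized unit.
model = smf.ols("outcome ~ treat", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school_id"]}
)
print(model.summary())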

Inconsistent or Weak Program Benefits. Most studies examine program effects on multiple outcomes, often including a wide range of measures, risk and protective factors, and behaviors. To minimize the role chance may play in multiple statistical tests, Blueprints looks for consistent results. Although it is hard to provide a simple formula to define consistency, the Board looks for robust and reliable benefits that are not limited to a small portion of the outcome measures and that hold across time points and subgroups. Ideally, the program benefits are strong enough that they will show when the program is used with other samples and in different contexts.
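When many outcomes are tested, a multiple-comparison adjustment helps separate consistent effects from chance findings. The minimal Python sketch below applies the Benjamini-Hochberg procedure to a set of illustrative placeholder p-values; Blueprints does not mandate any particular correction.

from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.020, 0.049, 0.210, 0.380]  # one placeholder per outcome test
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

for raw, adj, sig in zip(p_values, p_adjusted, reject):
    print(f"raw p = {raw:.3f}  adjusted p = {adj:.3f}  significant: {sig}")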

Contact

Blueprints for Healthy Youth Development
University of Colorado Boulder
Institute of Behavioral Science
UCB 483, Boulder, CO 80309

Email: blueprints@colorado.edu

Blueprints for Healthy Youth Development is currently funded by Arnold Ventures (formerly the Laura and John Arnold Foundation) and historically has received funding from the Annie E. Casey Foundation and the Office of Juvenile Justice and Delinquency Prevention.