Blueprints Conference announces 2020 keynote speakers

Hundreds of professionals focused on effective programs to promote healthy development for youth and families are expected to convene in Colorado to hear international experts on best practices, research and public policy.

Boulder, CO — Blueprints for Healthy Youth Development, based at the University of Colorado Boulder, is hosting the 2020 Blueprints Conference on April 27-29, 2020, at The Westin Westminster in Colorado. Approximately 750 professionals from around the world are expected to convene to hear from international experts, including Jon Baron, vice president of evidence-based policy at Arnold Ventures, and Cindy Guy, vice president of research, evaluation, evidence and data at The Annie E. Casey Foundation. Baron and Guy will participate in a keynote panel discussing their organizations’ strategies for promoting social programs that work.

This year’s conference theme is “Action to Outcome: The Rising Results of Blueprints.” The purpose is to bridge the gap between research and practice by convening evaluators, prevention experts, program designers, policymakers, community leaders, advocates, practitioners and funders to share ideas and learn about experimentally proven interventions (EPIs) designed to prevent problem behavior and enhance positive development in youth, adults and families.

The conference will also include a keynote panel on public sector investments in EPIs, highlighting states that have successfully scaled up interventions certified by Blueprints. Conference planners hope the panel will stimulate conversations about how states can build systems that support the delivery of EPIs. Panelists will share lessons learned, barriers to success and efforts to work through those barriers.

Blueprints is currently accepting proposals for breakout session speakers. Information on conference sessions and application instructions can be found on the conference website. Submissions are due by October 4. 

Sponsorship and exhibit opportunities during the conference are available for Blueprints model-plus, model and promising programs, as well as agencies that implement and promote Blueprints interventions. Qualified organizations can find more information on the conference website.

ABOUT BLUEPRINTS FOR HEALTHY YOUTH DEVELOPMENT

Blueprints for Healthy Youth Development is a project within the Institute of Behavioral Science at the University of Colorado Boulder that provides a comprehensive registry of experimentally proven interventions effective in reducing antisocial behavior and promoting a healthy course of youth development and adult maturity. Blueprints’ process for certifying interventions is widely recognized as the most rigorous in use. Blueprints hosts a biennial conference that disseminates science-based information on programs proven, using high scientific standards, to promote positive behaviors such as academic achievement, emotional well-being, physical health and positive relationships. Blueprints-certified interventions also target the prevention of negative behaviors, including crime, violence, delinquency and substance abuse. Follow Blueprints on Facebook and Twitter or visit blueprintsprograms.org to learn more.


PROSPER

PROSPER (Promoting School-Community-University Partnerships to Enhance Resilience) is a delivery system that fosters implementation of evidence-based youth and family interventions, complete with ongoing needs assessments, monitoring of implementation quality and partnership functions, and evaluation of intervention outcomes. The partnership includes (1) state-level university researchers and Extension-based program directors, (2) a prevention coordinator team typically based in the Cooperative Extension System (CES), and (3) local community strategic teams, consisting of a CES team leader, a representative from the public elementary/secondary school systems who serves as a co-leader, representatives of local human service agencies and other relevant service providers, and other community stakeholders, such as youths and parents. 

As PROSPER teams develop, they involve other stakeholders who can positively influence program recruitment, program implementation, and sustainability (such as individuals from various church groups, parent groups, businesses, law enforcement agencies, and/or the media). The local strategic teams receive technical support from the university-level and CES prevention coordinator team members, who attend the local team meetings. This technical assistance is proactive, meaning contact is made with local team members frequently (weekly or biweekly) in order to actively engage in collaborative problem solving. Once formed, the local team selects evidence-based, universal-level family-focused and school-based programs to implement with middle school youth and their families in the local school district.

In an evaluation of PROSPER, 28 school districts from Iowa and Pennsylvania were recruited to participate in a randomized, cohort-sequential design involving two cohorts. Communities were blocked on school district size and geographic location and then randomly assigned to either the treatment (14 districts) or control (14 districts) condition. The family intervention was delivered in the sixth-grade year, while the school-based intervention was delivered in the seventh-grade year. Combined, the interventions lasted 1.5 years. Spoth et al. (2007) reported significant program effects, after both the family- and school-focused interventions were delivered, for PROSPER youth relative to controls on lifetime use of gateway drugs (cigarettes, alcohol, marijuana) and illicit drugs. PROSPER youth were also less likely than controls to initiate use of marijuana, inhalants, methamphetamines, and ecstasy. In a one-year follow-up, Redmond et al. (2009) found that substance use expectancies and association with antisocial peers were significantly improved. In addition, child-to-mother affective quality, parent-child activities, and family environment improved at posttest and the one-year follow-up. Spoth et al. (2015) reported significantly fewer conduct problem behaviors, relative to the control group, up to five years after the intervention ended.

In terms of cost-benefit analysis, the Washington State Institute for Public Policy (December 2018) reports $1.57 in measured benefits per $1 spent implementing PROSPER.

References:

Spoth, R., Redmond, C., Shin, C., Greenberg, M., Clair, S., & Feinberg, M. (2007). Substance-use outcomes at 18 months past baseline: The PROSPER community-university partnership trial. American Journal of Preventive Medicine, 32(5), 395-402.

Redmond, C., Spoth, R. L., Shin, C., Schainker, L. M., Greenberg, M. T., & Feinberg, M. (2009). Long-term protective factor outcomes of evidence-based interventions implemented by community teams through a community-university partnership. Journal of Primary Prevention, 30, 513-530.

Spoth, R. L., Trudeau, L. S., Redmond, C. R., Shin, C., Greenberg, M. T., Feinberg, M. E., & Hyun, G. (2015). PROSPER partnership delivery system: Effects on adolescent conduct problem behavior outcomes through 6.5 years past baseline. Journal of Adolescence, 45, 44-55.

Read the Program Fact Sheet

Return to Blueprints Bulletin Issue 10. August 2019.

Treatment Foster Care Oregon

The Treatment Foster Care Oregon (TFCO) program was developed as an alternative to institutional, residential, and group care placement for adjudicated teenagers with histories of chronic and severe criminal behavior. The two main goals of TFCO are to create opportunities for youth to successfully live in a family setting and to simultaneously help parents (or another long-term family resource) provide effective parenting. The rationale for TFCO is that adolescent adjustment is enhanced when parents are able to effectively supervise their teenager, follow through with consequences when necessary, and promote positive involvement in school and other normative activities.

Community foster families are recruited, trained, and closely supervised to provide TFCO-placed adolescents with treatment and intensive supervision at home, in school, and in the community; clear and consistent limits with follow-through on consequences; positive reinforcement for appropriate behavior; a relationship with a mentoring adult; separation from delinquent peers along with access to prosocial peers; and an environment that supports daily school attendance and homework completion. 

TFCO utilizes a behavior modification program based on a three-level point system by which the youth are provided with structured daily feedback. As youth accumulate points, they are given more freedom from adult supervision. Youth are provided weekly meetings with an individual therapist who provides support and assists in teaching skills needed to relate successfully to adults and peers. Family therapy sessions help parents prepare for the youth’s return home and help them become more effective at supervising, encouraging, supporting, and following through with consequences. Case managers closely supervise and support the youths and their foster families through daily phone calls. 

Throughout the six- to nine-month placement in foster homes, there is an emphasis on teaching interpersonal skills and on participation in positive social activities including sports, hobbies, and other forms of recreation. Aftercare services remain in place for as long as the parents want, but typically last about one year.

The initial evaluation certified by Blueprints involved three articles (Chamberlain, 1997; Chamberlain et al., 1996; and Eddy et al., 2004) in which 79 boys, who were mandated into out-of-home care by the juvenile court, were randomly assigned to treatment (n = 37) or control (n = 42) between 1991 and 1995. The boys were 12 to 17 years old, had an average of 13 previous arrests and 4.6 prior felonies, and half had committed at least one crime against a person. After one year, TFCO boys, relative to controls, were incarcerated 60% fewer days, had fewer arrests, reported less drug use, and were less likely to run away from their program. TFCO boys also reported less tobacco, marijuana, and other drug use at 18 months. After two years, TFCO boys had fewer violent offense referrals and self-reported violent offenses than controls.

A second Blueprints-certified study (Leve et al., 2005 and Chamberlain et al., 2007) involved girls who were mandated to community-based, out-of-home care because of problems with chronic delinquency. Girls were 13-17 years of age at baseline and were recruited only if they had at least one criminal referral in the prior 12 months, were not currently pregnant, and were placed in out-of-home care within 12 months following referral. Girls were randomly assigned to TFCO (n = 37) or group care (n = 44). TFCO girls, relative to controls, experienced fewer days in locked settings, fewer criminal referrals, lower caregiver-reported delinquency, and more time on homework at 12 months post-baseline. The reductions in days spent in locked settings and criminal referrals remained at 24 months post-baseline, along with reductions in self-reported delinquency.

The third Blueprints-certified evaluation (Kerr et al., 2009) included two consecutively run randomized controlled trials involving the girls from the second study plus an additional 85 girls selected based upon the same eligibility criteria. In total, the trials included 166 girls with 81 randomized to treatment and 85 to control. Results indicated that the odds of girls in group care (control) becoming pregnant were 2.44 times those of girls in TFCO. Reporting on the same sample, Rhoades et al. (2014) found that girls randomly assigned to TFCO when they were 13-17 years old reported significant decreases in drug use over a 2-year period in young adulthood (7-9 years after the study began), while those assigned to group care did not report significant decreases in drug use during this time.

TFCO has been adapted to meet the needs of other populations, including adolescents with severe emotional and behavioral problems referred by mental health and child welfare systems, youth with developmental disabilities who also have a history of sexual acting out, and preschoolers. These adaptations have not yet been thoroughly evaluated.

TFCO is a cost-effective alternative to group or residential treatment, incarceration, and hospitalization for adolescents who have problems with chronic antisocial behavior, emotional disturbance, and delinquency. The Washington State Institute for Public Policy estimates a return of $1.85 for every dollar invested. 

References:

Chamberlain, P. (1997, April). The effectiveness of group versus family treatment settings for adolescent juvenile offenders. Paper presented at the Society for Research on Child Development Symposium, Washington, D.C.

Chamberlain, P., Ray, J., & Moore, K. (1996). Characteristics of residential care for adolescent offenders: A comparison of assumptions and practices in two models. Journal of Child and Family Studies, 5, 285-297.

Eddy, J., Whaley, R., & Chamberlain, P. (2004). The prevention of violent behavior by chronic and serious male juvenile offenders: A 2-year follow-up of a randomized clinical trial. Journal of Emotional and Behavioral Disorders, 12(1), 2-8.

Leve, L. D., Chamberlain, P., & Reid, J. B. (2005). Intervention outcomes for girls referred from juvenile justice: Effects on delinquency. Journal of Consulting and Clinical Psychology, 73(6), 1181-1185.

Chamberlain, P., Leve, L. D., & DeGarmo, D. S. (2007). Multidimensional Treatment Foster Care for girls in the juvenile justice system: 2-year follow-up of a randomized clinical trial. Journal of Consulting and Clinical Psychology, 75(1), 187-193.

Kerr, D. C. R., Leve, L. D., & Chamberlain, P. (2009). Pregnancy rates among juvenile justice girls in two randomized controlled trials of Multidimensional Treatment Foster Care. Journal of Consulting and Clinical Psychology, 77(3), 588-593.

Read the Program Fact Sheet

Return to Blueprints Bulletin Issue 10. August 2019.

Achieving Successful Implementation of a Blueprints Program

A Letter from Our Director

There is an age-old saying, “It is not what you do, but how well you do it that counts.” This statement is a gentle reminder that we tend to emphasize the outcome of our efforts over the process. Much attention has focused on identifying effective research-based programs. In contrast, there has been much less awareness of the factors needed to successfully implement such programs. In other words, we now know what to implement, but we know very little about how.

The importance of the process of implementation cannot be overstated. In fact, as the adage suggests, the process of implementation influences the product. While programs are often thought of as a uniform set of elements that are provided to clients in a consistent manner, there can be great variability in the manner in which programs are delivered. For example, it is likely that adopting sites will vary in their level of support from key staff; organizational capacity to plan for and support the program; and availability of capable, trained staff to conduct the program. Deficits in these areas may undermine program effectiveness. Therefore, it becomes clear that improving the health and well-being of youth requires us to pay close attention to how programs are implemented.

Although schools have become a primary locus of prevention efforts, the National Study of Delinquency Prevention in Schools concluded that the quality of school prevention activities is generally poor, and prevention activities are not being implemented with sufficient strength and fidelity to produce a measurable difference in the desired outcomes (Gottfredson, et al., 2000).

These types of findings underscore the need for greater attention to the quality of implementation. Strategies used to help teachers implement with high fidelity (e.g., yearly training, onsite visits, classroom observations) and lessons learned from a large-scale Blueprints dissemination project of the LifeSkills Training program (featured in this Newsletter) are highlighted below.

The Center for the Study and Prevention of Violence (CSPV) was awarded corporate funding to disseminate and monitor fidelity of a Blueprints-certified Model Plus program, LifeSkills Training (LST). The 2016-2017 academic year marked the first year of a three-year funding cycle, and LST was delivered to 129,051 middle school students from 148 school districts across 14 states. Throughout LST implementation, CSPV provides yearly teacher training, training of trainers, and sustainability training. Process evaluation data are collected to monitor and reinforce program fidelity through annual site visits, classroom observations, and site personnel surveys. Site visits entail semi-structured interviews with LST instructors and school administrators as well as observations of LST lessons. The goal of site visits is to identify and rectify challenges during implementation and prevent these issues from recurring. Additionally, throughout program delivery, on-site classroom observers conduct intermittent, unscheduled observations using a standardized worksheet that assesses the thoroughness and quality of each LST lesson. To ensure reliability of observations, observers are trained by CSPV prior to program implementation and complete a co-observation with CSPV during the site visit. Lastly, after the curriculum is delivered, teachers provide feedback through an online questionnaire pertaining to implementation fidelity, dosage, and student response.

Using these methods, in 2016-2017, CSPV obtained favorable results with regard to program fidelity. Ninety-three percent of eligible students within the grant-funded school districts received LST. Moreover, the majority of instructors reported implementing the entire core curriculum (84%) and delivering LST at least once per week for consecutive weeks (88%) in accordance with fidelity guidelines. In addition, observation data revealed that instructors covered an average of 87% of key lesson points and objectives in the portion of the lesson taught in a given session. Furthermore, classroom observers indicated that instructors amply incorporated the interactive teaching techniques recommended by the program developer, including discussion (40% of the time, on average), behavioral rehearsal/practice (21%), and coaching/demonstration (13%).

These findings demonstrate that practices exist to assuage concerns about the fidelity of prevention program delivery. Monitoring of implementation by an outside agency helps maintain program momentum and quality implementation, which ultimately results in successful outcomes.

Gottfredson, G.D., Gottfredson, D.C., & Czeh, E.R. (2000). Summary: National study of delinquency prevention in schools. Ellicott City, MD: Gottfredson Associates, Inc.

Return to Blueprints Bulletin Issue 6. April 2018.

Join Us for the 7th Biennial Blueprints Conference

A Letter from Our Director

There’s excitement in the air as we approach the date of our 7th biennial Blueprints Conference. This is a fantastic opportunity for persons engaged in youth violence prevention and healthy youth development to come together to network, share information, and learn from experts in the field. The overarching goal of the conference is to provide information on evidence-based programs and guidance and tools to help consumers implement these programs successfully.

Keynote presenters include Delbert Elliott, Founding Director of the Center for the Study and Prevention of Violence, and Director of the Blueprints Initiative. Dr. Elliott is a renowned advocate for the widespread use of evidence-based programs. In his farewell address to our conference, he will discuss the major challenges facing registries of prevention programs and practices and proposed strategies for addressing these challenges.

James Mercy, Director, Division of Violence Prevention, Centers for Disease Control and Prevention, will discuss the cross-cutting impact that successful violence prevention can have on a broad range of health and social challenges such as mental illness, chronic disease, and even some infectious diseases such as HIV.

David Hawkins, Emeritus Endowed Professor of Prevention and Founding Director of the Social Development Research Group, University of Washington, will outline seven essential actions that could unleash the power of prevention and substantially reduce the prevalence of behavioral health problems and inequality in health and behavior outcomes in the U.S. population.

Forty-four 90-minute workshops will cover broad topical areas such as Blueprints programs, implementation supports, financing and sustaining programs, and effective leadership and policy strategies. Preconferences will include day-long presentations by the purveyors of Communities That Care and Functional Family Therapy (featured below). Other preconference sessions will focus on prevention strategies for addressing the opioid epidemic, how to understand the evidence in evidence-based programs, and issues related to implementing LifeSkills Training (a middle school drug and violence prevention program).

Please join us on April 30 for the preconference and May 1-2, 2018, for the main conference in Westminster, Colorado, for this unique experience that unites a broad spectrum of participants from policy, program development, research, and implementation. We promise that you will not be disappointed!

Please register for the conference here.

Return to Blueprints Bulletin Issue 5. January 2018.

Why Maintain a Rigorous Standard of Evidence – Lessons Learned

A Letter from Our Director

While the term evidence-based is commonly used, the actual level of scientific evidence supporting a given program varies widely, often lacking any credible research standards. Throughout the 20 years of the Blueprints project, the most important lesson that we have learned, and have strived to impart, is that the standards for recognizing an evidence-based program must be high in order to maintain the public’s confidence.

In the earliest years of the project, Blueprints identified a model program, Quantum Opportunities, based upon a multi-site evaluation. This evaluation examined outcomes in each of the national sites participating in the study, and the demonstration appeared successful. Later, a large multi-site replication by the Department of Labor found only a few, largely inconsistent effects, and some of the primary behavioral outcomes were negative at one of the replicating sites. This program was subsequently removed from the Blueprints list.

A similar situation occurred with another program, CASASTART, a comprehensive case management strategy for preventing drug use and delinquency among small groups of high-risk adolescents living in highly distressed neighborhoods. This program was listed on Blueprints as promising based upon one successful large-scale randomized trial conducted by the Urban Institute, but a later randomized controlled evaluation conducted by the Blueprints team found no beneficial effects and some iatrogenic effects. The program was subsequently retracted by its developers and removed from the Blueprints list. However, many agencies already using the program were left to decide whether to continue or abandon it. As can be imagined, there was much confusion about how to handle the unexpected news.

In the examples above, the original trials were conducted rigorously with random assignment, yet later studies failed to replicate their findings. This reinforces the need for replication as part of the evidentiary standard. Replication is required to meet the Blueprints Model criteria, and independent replication (conducted by researchers independent of the program developers) is required for Model Plus. Although Blueprints Promising programs do not require replication if their success was demonstrated in a randomized controlled trial, we suggest that these programs not be taken to scale in national initiatives, although they may be adopted in local settings.

These examples reinforce the reasoning for maintaining a high standard. We cannot afford to take programs to scale prior to rigorous testing. When programs that have not been adequately tested are added to registries and later found to be ineffective, this shakes the confidence of the public in our ability to identify evidence-based programs. While there is always a chance of such instances occurring, to accept programs with a low standard of evidence only increases the odds. The way to avoid such confusion is for all registries and funders to endorse standards of the highest quality. Programs must demonstrate in high-quality evaluations evidence of a beneficial change in targeted outcomes, sustained effects, multiple site replication, and other prescribed factors before going to scale. Blueprints can help decision makers understand the difference between programs that only claim to be evidence-based and those that meet the true scientific standard.

Return to Blueprints Bulletin Issue 4. November 2017.

Major Reasons Programs Fail to Meet Blueprints Criteria

A Letter from Our Director

I’m often asked why a particular program did not meet Blueprints criteria. Most programs fail to receive Blueprints certification because their scientific evaluations have major limitations. These limitations are such that apparently positive results could emerge not from participation in the program but from design problems in the evaluation. The most obvious problems are not having a control group (in which participants are compared to nonparticipants) or not assigning participants to the intervention and control groups randomly. Randomized controlled trials do the most to ensure that control and intervention groups are identical at the start of a program in all ways except that one receives the intervention and the other does not. Some quasi-experimental designs, in which participants are statistically matched to non-participants on key sociodemographic and/or behavioral measures, come close to the quality of randomized trials in ensuring group equivalence at the start of a program.

Even randomized controlled trials and quasi-experimental designs can face serious problems, however. Programs approved by Blueprints must take appropriate steps to overcome these common problems. We cannot offer a comprehensive list of methodological problems that the Board considers in their program reviews but can highlight ones that often prevent Blueprints certification.

Some of the Most Common Methodological Problems

Biased Measures. The best outcome measures are obtained from independent sources. Researchers who rate subjects participating in their programs or mothers in parenting programs who rate their child’s behavior are prone to bias. Strong studies have more objective measures based on subject self-reports, blind ratings, or external data sources. Additionally, the study should collect outcome data for treatment and control subjects in the same way and at the same time, and report the intervention’s effects on all outcomes the study measured and not just the positive ones.

Limited Outcomes. The outcomes to be measured should be behavioral and not simply attitudes or intervening risk and protective factors that are closely related to program content. Success in changing attitudes or intervening factors does not always translate into success in changing the ultimate behavioral outcomes of interest to Blueprints. For instance, a program designed to reduce drug use should measure actual drug use patterns and not just intentions or attitudes towards drugs.

Dropping Subjects. Believing that only subjects who complete an intervention should be evaluated, researchers sometimes drop non-participants. However, this approach selects only the best subjects and leads to biased results. Strong studies use an “intent-to-treat” approach that analyzes all subjects in the condition of their original assignment.
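
The intent-to-treat logic can be sketched in a few lines of Python. This is a hypothetical simulation, not Blueprints data or a Blueprints tool: when dropout is related to baseline severity, a completers-only comparison keeps only the best-off treatment subjects and overstates the benefit, while analyzing everyone as originally assigned preserves the randomized comparison.

```python
import random

random.seed(0)

# Hypothetical trial: 200 subjects randomized to treatment or control.
# Baseline severity predicts both dropout and the outcome, so excluding
# non-completers selects the best-off treatment subjects.
subjects = []
for i in range(200):
    severity = random.gauss(0, 1)
    arm = "treatment" if i % 2 == 0 else "control"
    # Simulated true program effect: treatment lowers the outcome by 0.5.
    outcome = severity - (0.5 if arm == "treatment" else 0.0) + random.gauss(0, 0.5)
    # High-severity treatment subjects are more likely to drop out.
    completed = not (arm == "treatment" and severity > 0.5 and random.random() < 0.7)
    subjects.append((arm, outcome, completed))

def mean(xs):
    return sum(xs) / len(xs)

# Completers-only contrast: treatment dropouts are excluded, biasing the result.
pp_effect = (mean([o for a, o, c in subjects if a == "treatment" and c])
             - mean([o for a, o, c in subjects if a == "control"]))

# Intent-to-treat contrast: every subject analyzed as originally assigned.
itt_effect = (mean([o for a, o, _ in subjects if a == "treatment"])
              - mean([o for a, o, _ in subjects if a == "control"]))

print(f"completers-only estimate: {pp_effect:.2f}")
print(f"intent-to-treat estimate: {itt_effect:.2f}")
# The completers-only estimate is more negative (a larger apparent benefit)
# than the ITT estimate, which keeps the full randomized comparison intact.
```

In this simulation the completers-only estimate looks better than the intent-to-treat estimate only because the hardest cases were dropped from the treatment arm, which is exactly the bias the ITT approach avoids.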

Non-Equivalent Groups. Even in randomized trials, studies need to examine the baseline equivalence of the intervention and control groups. Statistical tests should show few significant differences between the groups before the intervention begins on all sociodemographic and outcome measures.

Differential Attrition. Most studies face loss of subjects, but if the loss differs between the intervention and control groups, it introduces potential bias. Strong studies track subjects by condition at baseline and at each follow-up measure to ensure participants lost to attrition do not differ across conditions, baseline measures, or the interaction of condition by baseline measures.
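
A simple version of this check can be sketched as follows. The numbers are hypothetical and the 10-percentage-point flag threshold is purely illustrative, not a Blueprints rule:

```python
import random

random.seed(2)

# Hypothetical trial: compare follow-up retention by condition.
# Attrition here is made deliberately unequal so the check flags it.
n = {"treatment": 150, "control": 150}
followed = {"treatment": 0, "control": 0}
for arm, loss_rate in (("treatment", 0.05), ("control", 0.35)):
    for _ in range(n[arm]):
        if random.random() > loss_rate:   # subject retained at follow-up
            followed[arm] += 1

for arm in ("treatment", "control"):
    attrition = 1 - followed[arm] / n[arm]
    print(f"{arm}: attrition {attrition:.0%}")

# Flag if retention differs by more than 10 percentage points across arms.
gap = abs(followed["treatment"] / n["treatment"] - followed["control"] / n["control"])
print("differential attrition flagged:", gap > 0.10)
```

A strong study would go further than this overall-rate comparison and also compare baseline measures of retained versus lost subjects within each condition.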

Incorrect Level of Analysis. Researchers often randomize at one level, such as the school level, but conduct analyses at a different level (e.g., students). When individuals are part of larger groups or organizations that are randomized, it violates the assumption made in tests of statistical significance that the observations are independent. To avoid overstating the statistical significance of the results, multilevel models, robust standard errors, or other adjustments should be used.
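
As an illustration (simulated data, not drawn from any Blueprints review), the sketch below compares the standard error from a naive student-level analysis with one computed at the school level, the level at which randomization occurred:

```python
import math
import random

random.seed(1)

# Hypothetical trial: 20 schools randomized (10 treatment, 10 control),
# 30 students per school. Students within a school share a school-level
# shock, so the 600 student observations are not independent.
schools = []
for s in range(20):
    arm = "treatment" if s < 10 else "control"
    school_effect = random.gauss(0, 0.8)          # shared within-school shock
    students = [school_effect + random.gauss(0, 1) for _ in range(30)]
    schools.append((arm, students))

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

treat_students = [x for a, xs in schools if a == "treatment" for x in xs]
ctrl_students = [x for a, xs in schools if a == "control" for x in xs]

# Naive student-level analysis: treats all 600 observations as independent.
naive_se = math.sqrt(var(treat_students) / len(treat_students)
                     + var(ctrl_students) / len(ctrl_students))

# Analysis at the level of randomization: one mean per school, 20 observations.
treat_means = [mean(xs) for a, xs in schools if a == "treatment"]
ctrl_means = [mean(xs) for a, xs in schools if a == "control"]
cluster_se = math.sqrt(var(treat_means) / len(treat_means)
                       + var(ctrl_means) / len(ctrl_means))

print(f"naive student-level SE:    {naive_se:.3f}")
print(f"school-level (correct) SE: {cluster_se:.3f}")
# The naive SE is far too small, which overstates statistical significance.
```

Aggregating to cluster means is the simplest correction; multilevel models or cluster-robust standard errors achieve the same goal while retaining student-level covariates.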

Inconsistent or Weak Program Benefits. Most studies examine program effects on multiple outcomes, often including a wide range of measures, risk and protective factors, and behaviors. To minimize the role chance may play in multiple statistical tests, Blueprints looks for consistent results. Although it is hard to provide a simple formula to define consistency, the Board looks for robust and reliable benefits that are not limited to a small portion of the outcome measures and hold across time points and subgroups. Ideally, the program benefits are strong enough that they will show when the program is used with other samples and in different contexts.

Return to Blueprints Bulletin Issue 3. July 2017.

Blueprints Process for Selecting Programs

Blueprints’ standards for program certification are recognized as the most rigorous in the field. Many registries solicit nominations and review only those studies submitted by the program developers, potentially omitting studies with null results. Blueprints not only reviews nominated programs, but also performs an exhaustive search of the literature each month to identify programs addressing outcomes of interest (delinquency, substance use, emotional and physical well-being, academic success, and positive relationships). All program evaluations are considered in creating a detailed write-up covering the program description, target audience, risk and protective factors, evaluation methodology, outcomes, generalizability, and limitations of each evaluation. Blueprints staff conduct an initial review of the program. Programs that pass this screening advance to an external review conducted by the Blueprints Advisory Board.

Blueprints Programs must meet the following criteria:

  1. Evaluation quality. Studies must be of sufficient methodological quality to confidently attribute results to the program.
  2. Intervention impact. The preponderance of evidence from the high-quality evaluations indicates significant positive change in intended outcomes that can be attributed to the program, and there is no evidence of harmful effects.
  3. Intervention specificity. The program description clearly identifies the intended outcome, specific risk and/or protective factors targeted to produce this change in outcome, the population for which the program is intended, and how intervention components work to produce this change.
  4. Dissemination readiness. The program is available for dissemination and has the organizational capability, manuals, training, technical assistance, and other support required for implementation with fidelity in communities and public service systems. Cost information and monitoring tools also are available.

Programs meeting the four criteria above are rated as Promising, Model, or Model Plus. Promising programs meet the minimum standard of effectiveness. Model and Model Plus programs meet a higher standard and provide greater confidence in the capacity to change targeted behavior and developmental outcomes.

  • Promising Programs: Minimum of one high-quality randomized controlled trial (RCT) or two high-quality quasi-experimental evaluations.
  • Model Programs: Minimum of two high-quality RCTs, or one high-quality randomized controlled trial plus one high-quality quasi-experimental evaluation. Additionally, positive intervention impact sustained for a minimum of 12 months after the program intervention ends.
  • Model Plus Programs: Model criteria, plus independent replication conducted by a researcher who is neither a current nor a past member of the program developer’s research team and who has no financial interest in the program.

Return to Blueprints Bulletin Issue 2. May 2017.

Promoting Health Among Teens! (Comprehensive)

The program is a 12-hour HIV/sexually transmitted infection (STI) and pregnancy-prevention intervention for African American teens that offers optional booster sessions and one-on-one meetings for up to two years following program completion. It aims to increase 6th and 7th grade students’ knowledge of HIV and STIs, strengthen behavioral beliefs supporting abstinence and condom use, and build skills to negotiate abstinence, use condoms, and negotiate condom use.

Read the Program Fact Sheet

Return to Blueprints Bulletin Issue 3. July 2017.

Parent Management Training (PMT)

Parent Management Training is a 14-week parent-training intervention that is designed to improve oppositional/defiant behaviors in children and adolescents. The program is delivered to parents and children in 12 consecutive weekly sessions lasting 75 minutes, with the 13th session occurring 2 weeks later. Parents learn to be more consistent and contingent in their behavior management practices, including use of clear and direct commands, differential attention, contingent reinforcement, response cost, and time-out from reinforcement. The parent-child joint sessions allow parents to practice new strategies in the clinical setting.

Read the Program Fact Sheet

Return to Blueprints Bulletin Issue 3. July 2017.