Developing a pool of high-quality interventions is essential to address the problem of child abuse and neglect. Equally important is understanding how best to replicate, sustain, and integrate these programs into an effective system of care. Unfortunately, in child abuse and neglect as in other areas of health, mental health, and social services, a wide gap exists between available evidence-based interventions and practices and effective methods for their dissemination, implementation, and sustainment. This is a critical concern because the potential public health benefit of these interventions will be severely limited or unrealized if they are not implemented and sustained effectively in usual-care practice, be it in child welfare, mental health, substance abuse, or primary health care settings (Balas and Boren, 2000). Indeed, the success of efforts to improve services designed to support the well-being of children and families is influenced as much by the process used to implement innovative practices as by the practices selected for implementation (Aarons and Palinkas, 2007; Fixsen et al., 2009; Greenhalgh et al., 2004; Palinkas and Aarons, 2009; Palinkas et al., 2008). It is increasingly recognized that investment in the development of interventions without attention to how they align with service systems, organizations, providers, and consumers results in poor application of evidence-based practices.
Indeed, once evidence-based practices are taken to scale, the outcomes and effect sizes documented in their initial clinical trials often are not replicated. One reason for this is that complex interventions frequently are simplified over time in ways that impact key program objectives and strategies (Mildon and Shlonsky, 2011). Poor implementation has been cited as the reason for weakened effects in programs addressing conduct problems (Lee et al., 2008), learning delays (Hagermoser Sanetti and Kratochwill, 2009), crime prevention (Welsh et al., 2010), home visiting (Matone et al., 2012), and various child welfare reforms (Daro and Dodge, 2009). If replicating an evidence-based intervention does not produce a corresponding replication of impact, the intervention cannot be expected to reduce the incidence of the problem it was designed to address. Unless incidence is significantly reduced, the dramatic cost savings purported to follow major investments in high-quality treatment and prevention services may not materialize.
As evidence-based practices move from controlled settings to a real-world context, tension arises between remaining rigidly faithful to the original model and adapting it to local circumstances and needs (Backer, 2001; Bauman et al., 1991). Although adaptation may or may not be a deliberate choice, some form of adaptation is likely to be the rule rather than the exception in community care (Aarons et al., 2012). Ideally, such adaptation does not change the core elements of evidence-based practices, that is, those required elements that fundamentally define the nature of the practices and produce their main effects (Backer, 2001; Bauman et al., 1991; Cardona et al., 2009; Gandelman and Rietmeijer, 2004; Harshbarger et al., 2006; McKleroy et al., 2006; Veniegas et al., 2009).
Understanding when and how to alter a program in ways that enhance rather than diminish its effects represents a major social service challenge. Since the 1993 NRC report was issued, significant research has been conducted on how to define the concept of program fidelity, understand the role of race and culture in determining when and how to adapt evidence-based practices, identify those factors that facilitate or compromise the replication of evidence-based practices with fidelity, and clarify how research can be incorporated into the overall program planning process. In addition, increased attention is being paid to the costs of interventions relative to their overall impact, resulting in an increased demand for more consistent and comparable methods of quantifying and tracking program expenditures and their long-term impacts on public budgets. This section summarizes this body of research and identifies those areas in need of additional study.
Fidelity as a Strategy for Enhancing Impact
At the most basic level, faithfully replicating programs that have been found effective in rigorous experimental studies is believed to result in a higher likelihood of achieving desired outcomes than replicating programs that lack a strong evidentiary base (Fixsen et al., 2005). Investing in direct service programs with a proven track record offers policy makers a hedge on their investment and provides increased confidence that outcomes also can be replicated. Central to this hypothesis, however, is ensuring that sites replicating a model maintain fidelity to its original design and intent.
As replication of evidence-based programs becomes more commonplace, it is increasingly important to design and implement frameworks for defining program fidelity, as well as data management systems that can track the implementation process at the level of specificity needed to ensure consistent replication. Researchers use several theoretical frameworks to define fidelity and address issues of appropriate modification. In summarizing work in this area, Carroll and colleagues (2007) identify five elements of implementation fidelity: (1) adherence to the service model as specified by the developer, (2) service exposure or dosage, (3) the quality or manner in which services are delivered, (4) participants’ response or engagement, and (5) understanding of essential program elements not subject to adaptation or variation.
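To make the five elements concrete, the sketch below shows one way a data management system might capture them in a per-session record. This is a minimal illustration, assuming hypothetical field names, rating scales, and thresholds; Carroll and colleagues (2007) define the elements conceptually and do not prescribe a data format.

```python
# Hypothetical per-session fidelity record covering the five elements of
# implementation fidelity summarized by Carroll et al. (2007). Field names,
# scales, and thresholds are illustrative assumptions, not a published spec.
from dataclasses import dataclass

@dataclass
class FidelitySessionRecord:
    provider_id: str
    session_date: str              # ISO date, e.g., "2013-05-01"
    adherence: float               # (1) share of manual-specified steps delivered, 0-1
    dosage_minutes: int            # (2) exposure: session length in minutes
    delivery_quality: int          # (3) supervisor-rated quality of delivery, 1-5
    participant_engagement: int    # (4) rated participant response, 1-5
    core_components: set           # (5) essential, non-adaptable elements covered

    def meets_thresholds(self, required_core: set,
                         min_adherence: float = 0.8,
                         min_minutes: int = 45) -> bool:
        """Check this session against illustrative fidelity thresholds."""
        return (self.adherence >= min_adherence
                and self.dosage_minutes >= min_minutes
                and required_core <= self.core_components)
```

Records of this kind could be aggregated across sessions and providers to yield the level of specificity described above, though any real system would need to reflect the particular model's manual and supervision standards.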
The rise of implementation science and the need to replicate and scale up evidence-based programs with fidelity across a range of disciplines have led to the development of a number of frameworks identifying an array of factors that should be considered to ensure that replication is faithful to both the structure and intent of the original model (Bagnato et al., 2011; Berkel et al., 2011; Damschroder and Hagedorn, 2011; Dane and Schneider, 1998; Gearing et al., 2011; Hagermoser Sanetti et al., 2011). These factors include an appropriate target population, staff skills and training, supervision, caseloads, curriculum, and service dosage and duration, as well as the manner in which services are provided and participants are engaged in the service delivery process. Maintaining fidelity is especially important in practice-based research networks and learning collaboratives because it allows networks to gauge outcomes that can be used to make necessary practice and science improvements. Attention to these factors is necessary both in the initial planning process and throughout implementation.
Evidence-Based Treatments and Culturally Diverse Populations
The importance of cultural processes in shaping human functioning is increasingly being recognized. It is therefore critical to understand whether child abuse and neglect interventions are effective with ethnic minority youth who are at risk for or have experienced child abuse or neglect. A number of scholars have argued that culture matters in the development and testing of prevention and intervention strategies, as well as in the replication and adaptation of evidence-based practices for distinct populations or groups (e.g., Barrera et al., 2011; Bernal et al., 2009; Lau, 2006). According to this perspective, the culturally related processes underlying parenting and sociocultural risks that can lead to or exacerbate abuse and neglect must be considered to ensure the social validity and practical application of an intervention (Lau, 2006).
Another body of literature comprises evaluations of evidence-based interventions with ethnic minority youth and families, focusing on such questions as (1) Are evidence-based interventions effective for ethnic minority youth?, (2) Do minority youth benefit more when interventions are responsive to their cultural context?, and (3) Is there evidence for either culturally specific or culturally adapted youth interventions? (Huey and Polo, 2008, 2010). This literature is still in its infancy. As discussed earlier in this chapter, the extant literature shows that evidence-based interventions delivered to African American and Latino youth can be effective (for additional discussion of this issue, see Huey and Polo, 2008, 2010). These interventions target a range of concerns, including anxiety-related problems, attention-deficit hyperactivity disorder (ADHD), conduct problems, depression, substance use problems, trauma-related problems, and mixed/comorbid problems. Of note, only four interventions have shown effectiveness with ethnic minority youth across multiple trials: CBT, MST, interpersonal therapy (IPT), and brief strategic family therapy (BSFT). In addition to these interventions targeting mental health and adjustment problems, a child welfare intervention targeting American Indian parents (Chaffin et al., 2012b) has shown effectiveness. Evidence-based interventions appear to work about equally well for African American and Latino youth as for European American youth, with no consistent evidence that ethnicity moderates treatment effects (Huey and Polo, 2008).
Although most of the interventions investigated in these studies did not explicitly target ethnic minority youth who were abused and neglected, those interventions that did explicitly include this population yielded similar findings regarding effectiveness, moderation, and the impact of cultural adaptation. However, the discussion of cultural elements in reports on evidence-based interventions varies considerably (Huey and Polo, 2008), which may impede understanding of the impact of cultural adaptation; in particular, reporting of the development and evaluation of many culturally adapted interventions is characterized by a relative lack of theory and conceptual framing. Thus, more research is needed to test key assumptions and hypotheses regarding minority youth and the effectiveness of interventions.
A critical gap in this literature is that evidence-based interventions have been tested primarily with African American and Latino youth; with few exceptions, little is known about the effectiveness of evidence-based interventions with Asian American and American Indian youth. For example, there have been few studies of the effectiveness of home visiting models that involve structured, protocol-driven approaches with families in tribal communities (Del Grosso et al., 2012). One noteworthy effort is the randomized controlled trial of Family Spirit, a family-strengthening home visiting program delivered by paraprofessionals and modeled on Healthy Families America, which found significant increases in mothers' child care knowledge and involvement (Walkup et al., 2009).
To illustrate these issues, interventions targeting American Indian and Alaska Native families and communities need to take account of their history, culture, and tribal diversity (DeBruyn et al., 2001; Weaver, 2003). Thus, addressing child abuse and neglect and trauma among these populations presents unique opportunities to develop culturally sensitive interventions that align with traditional circular and contextual world views and to adapt or enhance evidence-based practices through authentic practitioner-researcher partnerships (Poupart et al., 2009; Spicer et al., 2012). One prominent example, Project Making Medicine, provides training in the clinical treatment of child physical and sexual abuse based on a cultural adaptation of TF-CBT. Entitled Honoring Children, Mending the Circle, the curriculum features an indigenous orientation to well-being and the use of traditional healing practices. Cultural adaptations to family preservation approaches involve using genograms, wraparounds, talking circles, kinship care, healing ceremonies, and traditional adoptions with Native families. The adapted intervention also incorporates tribal elders and extended family in the use of specific cultural approaches, such as storytelling, sweat lodges, feasts, and use of Native languages (Bigfoot and Funderburk, 2011). The effectiveness of these adaptations of clinical tools and interventions merits further research.
In sum, the field of evidence-based interventions for cultural minority populations is still developing. Research is needed on understudied populations, as well as on key assumptions, hypotheses, and implementation issues of culturally adapted evidence-based interventions. Guidelines on when to consider making a cultural adaptation and how specifically to do so would provide important support for the field. Lau (2006) offers an evidence-based approach to making such decisions. Her framework calls for the selective identification of target problems and/or communities for which adaptations are appropriate. More specifically, populations that face unique sociocultural contexts of risk or resilience that differ from those targeted by the original evidence-based intervention may be appropriate candidates for cultural adaptation. When it is determined that cultural adaptation is warranted, Lau further suggests a data-based approach to decisions on the adaptations to implement. Surface-structure adaptations (Resnicow et al., 2000) (e.g., language translation, use of videos or books that depict a cultural group, interventionists who share the same cultural background as target families) are designed to make interventions more accessible, whereas deep-structure changes are designed to make interventions more effective and target underlying cultural values.
One example of this data-based approach is Guiando a Niños Activos (Guiding Active Children, or GANA), a cultural adaptation of PCIT for Mexican American families (McCabe et al., 2005). A multistep process was used, including a review of the clinical literature on Mexican American families; identification of known barriers to treatment access and effectiveness; use of focus groups; and interviews with Mexican American mothers, fathers, and therapists to learn how PCIT could be modified to be more culturally effective. The process culminated in an expert panel review of the intervention (Lau, 2006). Another example, from the National Child Traumatic Stress Network, addressed the treatment and service needs of traumatized Latino children and families through the creation of adaptation guidelines for practitioners and researchers. These guidelines address micro- and macro-level domains related to child abuse and neglect, including assessment, provision of therapy, communication and linguistic competence, cultural values, immigration/documentation, child welfare/resource families, service utilization and case management, diversity among Latinos, research, therapist training and support, organizational competence, system challenges, and policy (Workgroup on Adapting Latino Services, 2008). In a related effort, child welfare staff were trained to implement a systems of care approach, an existing evidence-based framework, to improve practice and service delivery for immigrant Latino children at the system level (Dettlaff and Rycraft, 2010).
In such efforts, it is important to attend to the theoretical, implementation, and evaluation issues involved. Perhaps the data-based framework articulated by Lau (2006) can help inform a more rigorous articulation of the circumstances in which evidence-based interventions should be culturally adapted and of the methods that should be used to evaluate the adapted interventions.
The Implementation Process
Since the 1993 NRC report was issued, significant work has been done on how to define and monitor the program implementation process itself and on the critical factors related to higher-quality implementation and sustainability. Consensus exists on key factors, such as availability of funding; leadership in implementation efforts; ongoing consultation and training, especially in the early implementation phases; and the need to address the impact of staff turnover. In many cases, however, research on these factors is lacking (Aarons et al., 2009a). Consensus also exists that multicomponent implementation strategies are needed, as many different factors need to be addressed in sequence or in tandem for effective implementation that sustains public health impact (Ferlie and Shortell, 2001; Fixsen et al., 2009; Glisson and Schoenwald, 2005; Grimshaw et al., 2001; Grol and Grimshaw, 1999).
Implementation frameworks have been developed to expand and distill theories, structures, and processes into manageable approaches for understanding and identifying key facilitators of and barriers to effective implementation. Most of these frameworks provide useful guidance for implementation research and practice, but their particular tenets and assumptions require further empirical testing to determine whether they actually lead to more effective implementation (Aarons et al., 2011). Implementation researchers typically test components of models (e.g., technology-assisted coaching, organizational improvement) rather than more comprehensive implementation and scale-up strategies. Notable exceptions include studies of system-level implementation in the context of child welfare, such as the use of community development teams to scale up multidimensional treatment foster care in multiple counties (Chamberlain et al., 2012) and the use of interagency collaborative teams to scale up SafeCare across an entire large county.
To support program fidelity, effective and efficient measurement methods that can be readily utilized in usual care settings are needed (Schoenwald et al., 2011). In addition, there must be a feedback system coupled with supportive quality improvement or coaching to help providers maintain fidelity (Aarons et al., 2012). In many cases, however, little ongoing attention is paid to fidelity once an intervention has been implemented. Delivery of an intervention without attention to its fidelity fails to ensure that services are effective.
Efforts have been made to integrate fidelity assessment for psychosocial interventions in systems that involve child abuse and neglect; however, these efforts may or may not be part of implementation studies. One effectiveness trial found that incorporating ongoing coaching to direct service providers in the delivery of a child neglect intervention supported service efficacy (Chaffin et al., 2012a). This statewide trial was also examined in an implementation study that found benefits for organizations and service teams in reduced provider burnout and turnover. There is also increasing interest in the use of technology to support real-time fidelity assessment.
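As a purely illustrative example of what such a feedback system might compute, the sketch below averages each provider's recent adherence scores and flags providers for targeted coaching when the average drifts below an assumed threshold. The data, threshold, and window are hypothetical, and this is not a description of any deployed system.

```python
# Minimal fidelity feedback loop: average each provider's recent adherence
# scores and flag providers for coaching below an assumed threshold.
from collections import defaultdict
from statistics import mean

def flag_for_coaching(sessions, threshold=0.80, window=5):
    """sessions: (provider_id, adherence) tuples in chronological order."""
    by_provider = defaultdict(list)
    for provider_id, adherence in sessions:
        by_provider[provider_id].append(adherence)
    return sorted(provider for provider, scores in by_provider.items()
                  if mean(scores[-window:]) < threshold)

# Hypothetical data: provider "B" averages 0.70 recently and is flagged.
sessions = [("A", 0.92), ("B", 0.75), ("A", 0.88), ("B", 0.65), ("B", 0.70)]
print(flag_for_coaching(sessions))  # -> ['B']
```

In practice, such a flag would trigger the kind of supportive quality improvement or coaching contact described above, rather than a purely punitive response.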
It is important to recognize that many program implementation efforts have occurred in the context of funded research studies. Outside research funding often covers the costs associated with initial monitoring and documentation of the implementation process, including the collection and analysis of participant-level data to document service dosage, duration, and content. In some cases, study subjects have been paid for their participation in the program and may have received reimbursement for child care or transportation expenses related to their participation. As evidence-based practices move from the research venue to standard practice, some entity must pay these costs.
Increasingly, evidence-based practice models are factoring into their per-participant cost projections those expenses associated with initial and ongoing training for direct service staff, supervisory standards, and data reporting requirements. State agencies or community-based service providers seeking to implement these models are required to cover these costs as part of purchasing the program. It remains unclear whether these program-driven standards will be sufficient to sustain program fidelity and quality over time and to achieve the level of participant engagement required to replicate outcomes.
Integration of Research into Practice
Most implementation plans for evidence-based practices include methods for transferring research evidence from the program developers to potential users. Some of these models focus explicitly on the use of research evidence (Honig and Coburn, 2008; Kennedy, 1984; Nutley et al., 2007); in other cases, the use of research evidence is embedded in broader processes of innovation, including the dissemination and implementation of evidence-based practices (Fixsen et al., 2005; Greenhalgh et al., 2004).
Many of these models represent typologies of research use. For instance, several researchers have distinguished between an instrumental model, in which “use” consists of making a decision that research evidence is assumed to inform, and a conceptual model, in which “use” consists of thinking about the evidence. Whereas the central feature of the instrumental model is the decision, the central feature of the conceptual model is the human information processor. Hence, the instrumental model focuses on the outcome of using evidence, while the conceptual model focuses on the process of using evidence (Kennedy, 1984).
Conceptual models of evidence acknowledge that the use of research evidence to make or support decisions is often a collective endeavor rather than an activity performed by any individual decision maker (Spillane et al., 2001). This collective endeavor involves the utilization of social capital (Honig and Coburn, 2008; Spillane et al., 2001), social networks (Valente, 1995; Valente et al., 2003), and the exchange of knowledge or information between researchers and practitioners and within networks of practitioners (Lomas, 2000; Mitton et al., 2007; Nutley et al., 2007).
Preliminary research (Palinkas et al., 2012) conducted on leaders in child welfare, mental health, and juvenile justice systems implementing multidimensional treatment foster care (Chamberlain et al., 2007) found that published information (journal articles, treatment manuals, Internet searches) was the most frequently accessed source of information on evidence-based practices, followed by local experts and knowledgeable personal contacts. Feasibility of implementation was the primary criterion used to evaluate this evidence. However, further research is needed to identify components of feasibility that may drive implementation decisions.
Capacity to Identify Costs and Cost-Effectiveness Across Approaches
Policy makers, program administrators, and researchers increasingly acknowledge the importance of understanding the costs, cost-effectiveness, and returns on investment of child abuse and neglect programs.
Policy makers want information on costs and how they compare with outcomes of interest for determining how to allocate scarce resources; program administrators want to identify which programs to implement; and researchers are interested in economic evaluation because it makes their program evaluations more comprehensive (Corso and Lutzker, 2006; Courtney, 1999). The demand for economic analysis is evident in strategic planning being developed at the federal level. In the Centers for Disease Control and Prevention’s research plan for injury and violence prevention, for example, a top priority is to describe the use and impact of service delivery as well as the costs of interventions for child abuse and neglect. (Corso and Filene, 2009, p. 78)
Assessment of the economic costs of implementing an intervention is called programmatic cost analysis. The process involves the systematic collection, categorization, and analysis of intervention delivery costs, including those entailed during the preimplementation (developing the program delivery infrastructure) and implementation (delivering the program) phases (Corso and Filene, 2009). A standardized methodology for determining costs for child abuse and neglect interventions does not currently exist, although guidelines available in other fields could be applied (Foster et al., 2003, 2007; Haddix et al., 2003; Yates, 2009). To address this need, efforts are under way at the Children’s Bureau within the Administration for Children and Families to develop a manual on how to conduct programmatic cost analyses specifically within the child welfare community.
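As a simple illustration of what a programmatic cost analysis involves, the sketch below totals hypothetical delivery costs under the two phases described above and derives a cost per family served. Every category and dollar figure is invented for illustration and is not drawn from any actual program or from the forthcoming Children's Bureau manual.

```python
# Illustrative programmatic cost analysis: categorize intervention delivery
# costs by phase and compute cost per family served. All figures hypothetical.
preimplementation = {                  # building the delivery infrastructure
    "staff_training": 40_000,
    "curriculum_licensing": 15_000,
    "data_system_setup": 25_000,
}
implementation = {                     # delivering the program for one year
    "provider_salaries": 300_000,
    "supervision_and_coaching": 60_000,
    "participant_supports": 20_000,    # e.g., child care, transportation
    "fidelity_monitoring": 30_000,
}

families_served = 250
total_cost = sum(preimplementation.values()) + sum(implementation.values())
print(f"Total cost: ${total_cost:,}")                                   # $490,000
print(f"Cost per family served: ${total_cost / families_served:,.0f}")  # $1,960
```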
Once the costs of a program have been determined, they can be compared with a program’s expected and realized short- and long-term outcomes. This comparison of costs with outcomes is referred to as economic evaluation and includes a number of analyses, such as benefit-cost analysis and return on investment, whereby outcomes are valued in monetary terms, and cost-effectiveness analysis, whereby outcomes are valued in natural units, such as cases of child abuse and neglect prevented or improvements in quality of life. Although some guidelines for conducting economic evaluations do exist for community-level interventions in general (Haddix et al., 2003; Shiell et al., 2008), the literature is sparse on how specifically to conduct economic evaluations of family and child development interventions.
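To make these distinctions concrete, the sketch below computes three common summary measures from hypothetical cost and outcome streams: a benefit-cost ratio and a return on investment, in which outcomes are monetized, and a cost-effectiveness ratio, in which the outcome is expressed in natural units (here, cases of child abuse and neglect prevented). The present-value discounting is standard arithmetic; every input figure is invented.

```python
# Illustrative economic evaluation. Present-value discounting is standard;
# all cost, benefit, and outcome figures are hypothetical.
def present_value(stream, rate=0.03):
    """Discount a year-indexed stream of values to present value."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(stream))

costs    = [490_000, 410_000, 410_000]   # program costs in years 0-2
benefits = [0, 500_000, 1_200_000]       # monetized outcomes in years 0-2
cases_prevented = 60                     # outcome in natural units

pv_costs, pv_benefits = present_value(costs), present_value(benefits)
print(f"Benefit-cost ratio:      {pv_benefits / pv_costs:.2f}")   # ~1.27
print(f"Return on investment:    {(pv_benefits - pv_costs) / pv_costs:.1%}")
print(f"Cost per case prevented: ${pv_costs / cases_prevented:,.0f}")
```

A benefit-cost ratio above 1 indicates that discounted monetized benefits exceed discounted costs under the assumptions chosen; the cost-effectiveness ratio avoids monetizing outcomes but requires a comparator to interpret.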
Despite the need for information on the economic cost and impact of implementing child and family development or child abuse and neglect prevention programs, few cost analyses (Corso and Filene, 2009) or economic evaluations have been conducted in this area since the 1993 NRC report was issued (Barlow et al., 2007; Dalziel and Segal, 2012; DePanfilis et al., 2008; Karoly et al., 1998; McIntosh et al., 2009; Olds, 1993). More studies have focused specifically on economic evaluation of interventions designed to improve outcomes for children at risk for or currently involved in the child welfare system (these studies are systematically reviewed and summarized by Goldhaber-Fiebert and colleagues [2011]).
Remaining challenges to conducting programmatic cost analysis and economic evaluation in the fields of child abuse and neglect intervention and child welfare include the need for (1) the development and consistent use of standardized methodology for assessing program costs; (2) multisite assessment of programs in which program-, provider-, and community-level variables may impact program-level costs and outcomes; (3) better tools for assessing the impact of child abuse and neglect on health-related quality of life, which is an important outcome measure in economic evaluations within other health fields; (4) assessment of the long-term costs of child abuse and neglect to determine the potential benefits of prevention and successful child welfare services; and (5) the development and use of model-based economic evaluations to support decision making within the child welfare system (Goldhaber-Fiebert et al., 2011).
The Bottom Line
As policy makers place greater emphasis on evidence-based decision making and the implementation of programs that have been proven effective through rigorous evaluation, research will be needed to understand how these high-quality interventions are replicated, adapted to diverse populations, and incorporated into the overall service delivery system. At present, little is known about the most effective strategies for ensuring that evidence-based practices are replicated with fidelity to their intent and structural elements. Central here is determining which service attributes are most essential to achieving the desired impacts and therefore should not be altered and which can or should be modified to address the needs of specific subpopulations. Equally important is understanding the costs associated with the emphasis on replicating with fidelity in terms of (1) monitoring the service delivery process; (2) providing the required levels of supervision and infrastructure support, including the development of data collection systems; and (3) determining how the data will be integrated into subsequent practice and policy decisions.
Finding: Despite a growing body of theoretical and applied research in the area, a wide gap exists between available evidence-based interventions and practices for treating and preventing child abuse and neglect and methods of effective dissemination, implementation, and sustainment of those interventions. It is increasingly recognized that investment in developing interventions alone, without attention to how they align with service systems, organizations, providers, and consumers, results in poor application of evidence-based practices. Therefore, more research is needed to support the translation of model programs for effective use in real-world settings.
Finding: Little is known about the most effective strategies for ensuring that evidence-based interventions are replicated with fidelity to their intent and structural elements. Further research is needed to determine which service attributes are most essential to achieving the desired impacts and therefore should not be altered and which can or should be modified to address the needs of specific subpopulations.
Finding: More research is needed on the development of evidence-based interventions for cultural minority populations, with a particular focus on understudied populations. Also needed is research that carefully examines key assumptions, hypotheses, and implementation issues of culturally adapted evidence-based interventions. Guidelines on when to consider making a cultural adaptation and what the specific adaptation should be would provide important support to the field.
Finding: Significant advances have been achieved in how the program implementation process itself is defined and monitored and in the identification of critical factors related to higher-quality implementation and sustainability. Consensus exists on key factors, but in many cases, research on these factors is lacking. Consensus also exists that multicomponent implementation strategies are needed to address the challenges of effective implementation.
Finding: Despite the need for information on the economic cost and impact of implementing child and family development or child abuse and neglect prevention programs, few studies have conducted programmatic cost analyses or economic evaluations in this area. This type of research is needed to guide policy makers and program administrators.