Education Drivers
Dimensions of Treatment Integrity Overview
Historically, treatment integrity has been defined as implementation of an intervention as planned (Gresham, 1989). More recently, treatment integrity has been reimagined as multidimensional (Dane & Schneider, 1998). In this conceptualization, there are four dimensions relevant to practice: (a) exposure (dosage), (b) adherence, (c) quality of delivery, and (d) student responsiveness.
Exposure (dosage) refers to the amount (frequency and duration) of an intervention a student is receiving. For example, a student’s supplemental reading support may call for the intervention to occur three times per week for 30 minutes per session. If the delivery is less frequent or sessions are shorter, then the student’s exposure to the intervention is less than optimal and the outcome may be compromised.
Adherence is the most commonly measured dimension of treatment integrity (Sanetti, Chafouleas, Christ, & Gritter, 2009). It is the extent to which those responsible for implementing an intervention are doing so as prescribed. Most interventions are multicomponent packages and in some instances very complex. Adherence is usually measured by recording whether each specific feature of an intervention occurred as planned.
Quality of delivery is the degree to which the implementation is executed with enthusiasm and sincerity. This dimension is underrepresented in the scholarly literature primarily because of its subjective nature; however, it is an important facet of treatment integrity and warrants more research. Consider the following: Many interventions for challenging behavior include praising students when they are behaving appropriately. Some teachers are effusive with their praise and vary it in many ways so that it does not become rote. Others may praise in a very monotone and rote manner. These differences in the way praise is delivered are likely to influence the impact of the intervention even if both individuals who are praising are doing so with 100% adherence to the intervention protocol.
Student responsiveness is the degree to which the student is engaged during the intervention. This dimension is a bit controversial. Some argue that it should not be a part of treatment integrity measures because it is a measure of student behavior and measures of treatment integrity should reflect what adult educators are doing. The counterargument is that even with high integrity for exposure, adherence, and quality of delivery, it is possible that the student’s lack of engagement with the intervention may negatively impact the intervention. For example, a student receiving an intervention to improve fluency in basic math may minimally participate in instruction even though the intervention is implemented with high integrity across all other dimensions of treatment integrity. This poor participation may be a function of placing the student in the instructional program at his or her failure level. Conversely, a student placed in the instructional program at his or her mastery level might not be engaged because the instruction is boring. Student responsiveness to an intervention can be an important indicator of the appropriateness of the instructional program.
Each of these dimensions can influence the impact of an intervention, but it is also important to be mindful of the interaction among variables. Consider the previously mentioned intervention protocol that calls for a student to receive supplemental reading support three times a week for 30 minutes each session. Both of these measures are part of the exposure dimension. If the student receives only one session per week and that session lasts for 30 minutes, then he or she is exposed to the intervention a third of the time prescribed. Similarly, the student could receive the reading intervention three times a week but for only 10 minutes each session. The student is still exposed to the intervention a third of the prescribed time. To complicate matters, even if the instructor perfectly implements the intervention during the session (adherence), the outcome is likely to be degraded because exposure was limited. This example highlights the importance of measuring all of the dimensions of treatment integrity and not just adherence. If adherence alone is assessed and the outcome is less than desired, the intervention might be judged ineffective even though adherence was high. Failure to consider all dimensions of treatment integrity can result in errors in decision making.
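The dosage arithmetic above can be expressed as a simple ratio of delivered to prescribed intervention time. A minimal Python sketch, offered only as an illustration: the function name and the default three-sessions-of-30-minutes schedule are hypothetical, not part of any published protocol.

```python
# Illustrative sketch: quantifying the exposure (dosage) dimension as the
# fraction of prescribed intervention time actually delivered per week.
# The function name and default schedule are hypothetical examples.

def exposure_ratio(sessions_delivered, minutes_per_session,
                   sessions_prescribed=3, minutes_prescribed=30):
    """Return delivered minutes divided by prescribed minutes."""
    delivered = sessions_delivered * minutes_per_session
    prescribed = sessions_prescribed * minutes_prescribed
    return delivered / prescribed

# One 30-minute session instead of three: one third of the prescribed dose.
print(round(exposure_ratio(1, 30), 2))  # 0.33
# Three 10-minute sessions: also one third of the prescribed dose.
print(round(exposure_ratio(3, 10), 2))  # 0.33
```

As the example shows, two very different delivery patterns can produce the same shortfall in exposure, which is why frequency and duration both need to be recorded.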
Citations
Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18(1), 23–45.
Gresham, F. M. (1989). Assessment of treatment integrity in school consultation and prereferral intervention. School Psychology Review, 18(1), 37–50.
Sanetti, L. M. H., Chafouleas, S. M., Christ, T. J., & Gritter, K. L. (2009). Extending use of direct behavior rating beyond student assessment: Applications to treatment integrity assessment within a multitiered model of school-based intervention delivery. Assessment for Effective Intervention, 34(4), 251–258.
Publications
Treatment integrity is a core component of data-based decision making (Detrich, 2013). The usual approach is to consider student data when making decisions about an intervention; however, if there are no data about how well the intervention was implemented, then meaningful judgments cannot be made about effectiveness.
To produce better outcomes for students, two things are necessary: (1) effective, scientifically supported interventions and (2) implementation of those interventions with high integrity. Typically, much greater attention has been given to identifying effective practices. This review focuses on features of high-quality implementation.
Detrich, R. (2014). Treatment integrity: Fundamental to education reform. Journal of Cognitive Education and Psychology, 13(2), 258-271.
Schools are often expected to implement innovative instructional programs. Most often these initiatives fail because what we know from implementation science is not considered as part of implementing the initiative. This chapter reviews the contributions implementation science can make for improving outcomes for students.
Detrich, R. Innovation, Implementation Science, and Data-Based Decision Making: Components of Successful Reform. In M. Murphy, S. Redding, & J. Twyman (Eds.), Handbook on Innovations in Learning, 31. Charlotte, NC: Information Age Publishing.
Reform efforts tend to come and go very quickly in education. This paper makes the argument that the sustainability of programs is closely related to how well those programs are implemented.
Detrich, R., Keyworth, R. & States, J. (2010). Treatment Integrity: A Fundamental Unit of Sustainable Educational Programs. Journal of Evidence-Based Practices for Schools, 11(1), 4-29.
Strategies designed to increase treatment integrity fall into two categories: antecedent-based strategies and consequence-based strategies.
Detrich, R., States, J., & Keyworth, R. (2017). Approaches to Increasing Treatment Integrity. Oakland, CA: The Wing Institute.
Historically, treatment integrity has been defined as implementation of an intervention as planned (Gresham, 1989). More recently, treatment integrity has been reimagined as multidimensional (Dane & Schneider, 1998). In this conceptualization, there are four dimensions relevant to practice: (a) exposure (dosage), (b) adherence, (c) quality of delivery, and (d) student responsiveness. It is important to understand that these dimensions do not stand alone but rather interact to impact the ultimate effectiveness of an intervention. It is important for educators to assess all dimensions of treatment integrity to ensure that an intervention is being implemented as intended.
Detrich, R., States, J., & Keyworth, R. (2017). Dimensions of Treatment Integrity Overview. Oakland, CA: The Wing Institute.
The usual approach to determining if an intervention is effective for a student is to review student outcome data; however, this is only part of the task. Student data can only be understood if we know something about how well the intervention was implemented. Student data without treatment integrity data are largely meaningless because without knowing how well an intervention has been implemented, no judgments can be made about the effectiveness of the intervention. Poor outcomes can be a function of an ineffective intervention or poor implementation of the intervention. Without treatment integrity data, there is a risk that an intervention will be judged as ineffective when, in fact, the quality of implementation was so inadequate that it would be unreasonable to expect positive outcomes.
Detrich, R., States, J., & Keyworth, R. (2017). Treatment Integrity in the Problem Solving Process. Oakland, CA: The Wing Institute.
This paper discusses common elements of successfully sustaining effective practices across a variety of disciplines.
Fixsen, D. L., Blase, K. A., Duda, M., Naoom, S. F., & Van Dyke, M. (2010). Sustainability of evidence-based programs in education. Journal of Evidence-Based Practices for Schools, 11(1), 30-46.
This paper examines a range of education failures: common mistakes in how new practices are selected, implemented, and monitored. The goal is not a comprehensive listing of all education failures but rather to provide education stakeholders with an understanding of the importance of vigilance when implementing new practices.
States, J., & Keyworth, R. (2020). Why Practices Fail. Oakland, CA: The Wing Institute. https://www.winginstitute.org/roadmap-overview
Student achievement scores in the United States remain stagnant despite repeated attempts to reform the education system. New initiatives promising hope arise, only to disappoint after being adopted, implemented, and quickly found wanting. The cycle of reform followed by failure has had a demoralizing effect on schools, making new reform efforts problematic. These efforts frequently fail because implementing new practices is far more challenging than expected and requires that greater attention be paid to how initiatives are implemented. Treatment integrity is increasingly recognized as an essential component of effective implementation in an evidence-based education model that produces results, and inattention to treatment integrity is seen as a primary reason new initiatives fail. The question remains, what strategies can educators employ to increase the likelihood that practices are implemented as designed? The Wing Institute overview on the topic of Treatment Integrity Strategies examines the essential practice elements indispensable for maximizing treatment integrity.
States, J., Detrich, R. & Keyworth, R. (2017). Overview of Treatment Integrity Strategies. Oakland, CA: The Wing Institute. http://www.winginstitute.org/effective-instruction-treatment-integrity-strategies.
Presentations
Treatment integrity is a core component of data-based decision making (Detrich, 2013). The usual approach is to consider student data when making decisions about an intervention; however, if there are no data about how well the intervention was implemented, then meaningful judgments cannot be made about effectiveness.
This book is compiled from the proceedings of the sixth summit entitled “Performance Feedback: Using Data to Improve Educator Performance.” The 2011 summit topic was selected to help answer the following question: What basic practice has the potential for the greatest impact on changing the behavior of students, teachers, and school administrative personnel?
States, J., Keyworth, R. & Detrich, R. (2013). Introduction: Proceedings from the Wing Institute’s Sixth Annual Summit on Evidence-Based Education: Performance Feedback: Using Data to Improve Educator Performance. In Education at the Crossroads: The State of Teacher Preparation (Vol. 3, pp. ix-xii). Oakland, CA: The Wing Institute.
We reviewed parametric analyses of treatment integrity levels published in 10 behavior analytic journals through 2017. We discuss the general findings from the identified literature as well as directions for future research.
Brand, D., Henley, A. J., Reed, F. D. D., Gray, E., & Crabbs, B. (2019). A review of published studies involving parametric manipulations of treatment integrity. Journal of Behavioral Education, 28(1), 1-26.
Treatment fidelity data (descriptive and statistical) are critical to interpreting and generalizing outcomes of intervention research. Despite recommendations for treatment fidelity reporting from funding agencies and researchers, past syntheses have found treatment fidelity is frequently unreported in educational interventions and fidelity data are seldom used to analyze its relation to student outcomes.
Capin, P., Walker, M. A., Vaughn, S., & Wanzek, J. (2018). Examining how treatment fidelity is supported, measured, and reported in K–3 reading intervention research. Educational Psychology Review, 30(3), 885–919.
Collecting treatment integrity data is critical for (a) strengthening internal validity within a research study, (b) determining the impact of an intervention on student outcomes, and (c) assessing the need for implementation supports. Although researchers have noted the increased inclusion of treatment integrity data in published articles, there has been limited attention to how treatment integrity is assessed.
Collier-Meek, M. A., Fallon, L. M., & Gould, K. (2018). How are treatment integrity data assessed? Reviewing the performance feedback literature. School Psychology Quarterly, 33(4), 517.
Dane and Schneider propose treatment integrity as a multi-dimensional construct and describe five dimensions that constitute the construct.
Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18(1), 23–45.
To prevent academic failure and promote long-term success, response-to-intervention (RtI) is designed to systematically increase the intensity of delivering research-based interventions. Interventions within an RtI framework must not only be effective but also be implemented with treatment fidelity and delivered with the appropriate level of treatment intensity to improve student mathematics achievement.
DeFouw, E. R., Codding, R. S., Collier-Meek, M. A., & Gould, K. M. (2019). Examining dimensions of treatment intensity and treatment fidelity in mathematics intervention research for students at risk. Remedial and Special Education, 40(5), 298-312.
The impact of an intervention is influenced by how well it fits into the context of a classroom. This paper suggests a number of variables to consider and how they might be measured prior to the development of an intervention.
Detrich, R. (1999). Increasing treatment fidelity by matching interventions to contextual variables within the educational setting. School Psychology Review, 28(4), 608-620.
To produce better outcomes for students, two things are necessary: (1) effective, scientifically supported interventions and (2) implementation of those interventions with high integrity. Typically, much greater attention has been given to identifying effective practices. This review focuses on features of high-quality implementation.
Detrich, R. (2014). Treatment integrity: Fundamental to education reform. Journal of Cognitive Education and Psychology, 13(2), 258-271.
Presentation by the Wing Institute with the following goals: make the case that treatment integrity monitoring is a necessary part of service delivery; describe the dimensions of treatment integrity; suggest methods for increasing treatment integrity; and place treatment integrity within a systems framework.
Detrich, R. (2015). Treatment integrity: A wicked problem and some solutions. Missouri Association for Behavior Analysis 2015 Conference. http://winginstitute.org/2015-MissouriABA-Presentation-Ronnie-Detrich
Over the last fifty years, there have been many educational reform efforts, most of which have had a relatively short lifespan and failed to produce the promised results. One possible reason is that, for the most part, these innovations have been poorly implemented. In this chapter, the author proposes a data-based decision making approach to assuring high quality implementation.
Detrich, R. Innovation, Implementation Science, and Data-Based Decision Making: Components of Successful Reform. In M. Murphy, S. Redding, & J. Twyman (Eds.), Handbook on Innovations in Learning, 31. Charlotte, NC: Information Age Publishing.
In this conceptualization of treatment integrity, there are four dimensions relevant to practice: (a) exposure (dosage), (b) adherence, (c) quality of delivery, and (d) student responsiveness. It is important to understand that these dimensions do not stand alone but rather interact to impact the ultimate effectiveness of an intervention.
Detrich, R., States, J., & Keyworth, R. (2017). Dimensions of Treatment Integrity Overview. Oakland, CA: The Wing Institute.
The usual approach to determining if an intervention is effective for a student is to review student outcome data; however, this is only part of the task. Student data can only be understood if we know something about how well the intervention was implemented. Student data without treatment integrity data are largely meaningless because without knowing how well an intervention has been implemented, no judgments can be made about the effectiveness of the intervention. Poor outcomes can be a function of an ineffective intervention or poor implementation of the intervention. Without treatment integrity data, there is a risk that an intervention will be judged as ineffective when, in fact, the quality of implementation was so inadequate that it would be unreasonable to expect positive outcomes.
Detrich, R., States, J., & Keyworth, R. (2017). Treatment Integrity in the Problem Solving Process. Oakland, CA: The Wing Institute.
For the best chance of producing positive educational outcomes for all children, two conditions must be met: (a) adopting effective empirically supported (evidence-based) practices and (b) implementing those practices with sufficient quality that they make a difference (treatment integrity).
Detrich, R., States, J., & Keyworth, R. (2017). Overview of Treatment Integrity. Oakland, CA: The Wing Institute.
This study evaluated the impact of public feedback in RtI team meetings on the quality of implementation. Feedback improved poor implementation and maintained high levels of implementation.
Duhon, G. J., Mesmer, E. M., Gregerson, L., & Witt, J. C. (2009). Effects of public feedback during RTI team meetings on teacher implementation integrity and student academic performance. Journal of School Psychology, 47(1), 19-37.
The first purpose of this review is to assess the impact of implementation on program outcomes, and the second purpose is to identify factors affecting the implementation process.
Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41(3–4), 327–350.
This paper summarizes survey results about the acceptability of different methods for monitoring treatment integrity and performance feedback.
Easton, J. E., & Erchul, W. P. (2011). An Exploration of Teacher Acceptability of Treatment Plan Implementation: Monitoring and Feedback Methods. Journal of Educational & Psychological Consultation, 21(1), 56-77. Retrieved from http://www.tandfonline.com/doi/abs/10.1080/10474412.2011.544949?journalCode=hepc20.
This paper discusses common elements of successfully sustaining effective practices across a variety of disciplines.
Fixsen, D. L., Blase, K. A., Duda, M., Naoom, S. F., & Van Dyke, M. (2010). Sustainability of evidence-based programs in education. Journal of Evidence-Based Practices for Schools, 11(1), 30-46.
This is a comprehensive literature review of the topic of Implementation examining all stages beginning with adoption and ending with sustainability.
Fixsen, D. L., Naoom, S. F., Blase, K. A., & Friedman, R. M. (2005). Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, National Implementation Research Network.
To help states and districts make informed decisions about the PD they implement to improve reading instruction, the U.S. Department of Education commissioned the Early Reading PD Interventions Study to examine the impact of two research-based PD interventions for reading instruction: (1) a content-focused teacher institute series that began in the summer and continued through much of the school year (treatment A) and (2) the same institute series plus in-school coaching (treatment B).
Garet, M. S., Cronen, S., Eaton, M., Kurki, A., Ludwig, M., Jones, W., ... Zhu, P. (2008). The impact of two professional development interventions on early reading instruction and achievement. NCEE 2008-4030. Washington, DC: National Center for Education Evaluation and Regional Assistance.
Many students with autism spectrum disorder (ASD) receive behavioral interventions to improve academic and prosocial functioning and remediate current skill deficits. Sufficient treatment integrity is necessary for these interventions to be successful.
Gould, K. M., Collier-Meek, M., DeFouw, E. R., Silva, M., & Kleinert, W. (2019). A systematic review of treatment integrity assessment from 2004 to 2014: Examining behavioral interventions for students with autism spectrum disorder. Contemporary School Psychology, 23(3), 220-230.
Technical issues involved in the measurement of treatment integrity are discussed, including specification of treatment components, deviations from treatment protocols and amount of behavior change, and psychometric issues in assessing treatment integrity.
Gresham, F. M. (1989). Assessment of treatment integrity in school consultation and prereferral intervention. School Psychology Review, 18(1), 37-50.
The concept of treatment fidelity (integrity) is important across a diversity of fields that are involved with providing treatments or interventions to individuals. Despite variations in terminology across these diverse fields, the concern that treatments or interventions are delivered as prescribed or intended is of paramount importance to document that changes in individuals’ functioning (medical, nutritional, psychological, or behavioral) are due to treatments and not from uncontrolled, extraneous variables.
Gresham, F. M. (2016). Features of fidelity in schools and classrooms: Constructs and measurement. In Treatment fidelity in studies of educational intervention (pp. 30-46). Routledge.
The concept of treatment integrity is an essential component to data-based decision making within a response-to-intervention model. Although treatment integrity is a topic receiving increased attention in the school-based intervention literature, relatively few studies have been conducted regarding the technical adequacy of treatment integrity assessment methods.
Gresham, F. M., Dart, E. H., & Collins, T. A. (2017). Generalizability of Multiple Measures of Treatment Integrity: Comparisons Among Direct Observation, Permanent Products, and Self-Report. School Psychology Review, 46(1), 108-121.
This study reviewed all intervention studies published between 1980-1990 in Journal of Applied Behavior Analysis in which children were the subjects of the study. The authors found that treatment integrity was reported in only 16% of the studies.
Gresham, F. M., Gansle, K. A., & Noell, G. H. (1993). Treatment integrity in applied behavior analysis with children. Journal of Applied Behavior Analysis, 26(2), 257–263.
The authors reviewed three learning disabilities journals between 1995-1999 to determine what percent of the intervention studies reported measures of treatment integrity. Only 18.5% reported treatment integrity measures.
Gresham, F. M., MacMillan, D. L., Beebe-Frankenberger, M. E., & Bocian, K. M. (2000). Treatment integrity in learning disabilities intervention research: Do we really know how treatments are implemented? Learning Disabilities Research & Practice, 15(4), 198–205.
This review examines the treatment integrity data of literacy interventions for students with emotional and/or behavioral disorders (EBD). Findings indicate that studies focusing on literacy interventions for students with EBD included clear operational definitions and data on treatment integrity to a higher degree than have been found in other disciplines.
Griffith, A. K., Duppong Hurley, K., & Hagaman, J. L. (2009). Treatment integrity of literacy interventions for students with emotional and/or behavioral disorders: A review of literature. Remedial and Special Education, 30(4), 245-255.
This study evaluated the differences in estimates of treatment integrity obtained by measuring different dimensions of it.
Hagermoser Sanetti, L. M., & Fallon, L. M. (2011). Treatment Integrity Assessment: How Estimates of Adherence, Quality, and Exposure Influence Interpretation of Implementation. Journal of Educational & Psychological Consultation, 21(3), 209-232.
Determining whether or not the treatment or innovation under study is actually in use, and if so, how it is being used, is essential to the interpretation of any study. The concept of Levels of Use of the Innovation (LoU) permits an operational, cost-feasible description and documentation of whether or not an innovation or treatment is being implemented.
Hall, G. E., & Loucks, S. F. (1977). A developmental model for determining whether the treatment is actually implemented. American Educational Research Journal, 14(3), 263-276.
This study examines adoption and implementation of the US Department of Education's new policy, the “Principles of Effectiveness,” from a diffusion of innovations theoretical framework. In this report, we evaluate adoption in relation to Principle 3: the requirement to select research-based programs.
Hallfors, D., & Godette, D. (2002). Will the “principles of effectiveness” improve prevention practice? Early findings from a diffusion study. Health Education Research, 17(4), 461–470.
This article discusses the current focus on using teacher observation instruments as part of new teacher evaluation systems being considered and implemented by states and districts.
Hill, H., & Grossman, P. (2013). Learning from teacher observations: Challenges and opportunities posed by new teacher evaluation systems. Harvard Educational Review, 83(2), 371-384.
Used a direct observation-based approach to identify behavioral conditions in sending (i.e., special education) and in receiving (i.e., regular education) classrooms and to identify targets for intervention that might facilitate mainstreaming of behavior-disordered (BD) children.
Hoier, T. S., McConnell, S., & Pallay, A. G. (1987). Observational assessment for planning and evaluating educational transitions: An initial analysis of template matching. Behavioral Assessment.
The purposes of this manuscript are to propose core features that may apply to any practice or set of practices that proposes to be evidence-based in relation to School-wide Positive Behavior Support (SWPBS).
Horner, R. H., Sugai, G., & Anderson, C. M. (2010). Examining the evidence base for school-wide positive behavior support. Focus on Exceptional Children, 42(8), 1.
“Contextual fit” is based on the premise that the match between an intervention and local context affects both the quality of intervention implementation and whether the intervention actually produces the desired outcomes for children and families.
Horner, R., Blitz, C., & Ross, S. (2014). The importance of contextual fit when implementing evidence-based interventions. Washington, DC: U.S. Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation. https://aspe.hhs.gov/system/files/pdf/77066/ib_Contextual.pdf
This piece describes the widely held perception among education leaders that we already know how to help teachers improve, and that we could achieve our goal of great teaching in far more classrooms if we just applied what we know more widely.
Jacob, A., & McGovern, K. (2015). The mirage: Confronting the hard truth about our quest for teacher development. Brooklyn, NY: TNTP. https://tntp.org/assets/documents/TNTP-Mirage_2015.pdf.
The Good Behavior Game (GBG) is a well-documented group contingency designed to reduce disruptive behavior in classroom settings. However, few studies have evaluated the GBG with students who engage in severe problem behavior in alternative schools, and there are few demonstrations of training teachers in those settings to implement the GBG.
Joslyn, P. R., & Vollmer, T. R. (2020). Efficacy of teacher‐implemented Good Behavior Game despite low treatment integrity. Journal of Applied Behavior Analysis, 53(1), 465–474.
This book provides research as well as case studies of successful professional development strategies and practices for educators.
Joyce, B. R., & Showers, B. (2002). Student achievement through staff development. ASCD.
Considers design issues and strategies raised by comparative outcome studies, including the conceptualization, implementation, and evaluation of alternative treatments; assessment of treatment-specific processes and outcomes; and evaluation of the results. It is argued that addressing these and other issues may increase the yield from comparative outcome studies and may attenuate controversies regarding the adequacy of the demonstrations.
Kazdin, A. E. (1986). Comparative outcome studies of psychotherapy: Methodological issues and strategies. Journal of Consulting and Clinical Psychology, 54(1), 95.
The authors proposed a preliminary feedback intervention (FI) theory (FIT) and tested it with moderator analyses. The central assumption of FIT is that FIs change the locus of attention among three general and hierarchically organized levels of control: task learning, task motivation, and meta-task (including self-related) processes.
Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254.
This book examines the use of video recording to improve teacher performance. The book shows how every classroom can easily benefit from setting up a camera and hitting “record.”
Knight, J. (2013). Focus on teaching: Using video for high-impact instruction. (Pages 8-14). Thousand Oaks, CA: Corwin.
This paper examines school-based experimental studies with individuals aged 0 to 18 years published between 1991 and 2005. Only 30% of the studies provided treatment integrity data. Nearly half of the studies (45%) were judged to be at high risk for treatment inaccuracies.
McIntyre, L. L., Gresham, F. M., DiGennaro, F. D., & Reed, D. D. (2007). Treatment integrity of school‐based interventions with children in the Journal of Applied Behavior Analysis 1991–2005. Journal of Applied Behavior Analysis, 40(4), 659–672.
Fidelity of treatment in outcome research refers to confirmation that the manipulation of the independent variable occurred as planned. Verification of fidelity is needed to ensure that fair, powerful, and valid comparisons of replicable treatments can be made.
Moncher, F. J., & Prinz, R. J. (1991). Treatment fidelity in outcome studies. Clinical Psychology Review, 11(3), 247–266.
This study investigated treatment fidelity in social work research. The authors systematically reviewed all articles published in five prominent social work journals over a 5-year period.
Naleppa, M. J., & Cagle, J. G. (2010). Treatment fidelity in social work intervention research: A review of published studies. Research on Social Work Practice, 20(6), 674-681.
This study compared the effects of discussing issues of implementation challenges and performance feedback on increasing the integrity of implementation. Performance feedback was more effective than discussion in increasing integrity.
Noell, G. H., & Witt, J. C. (2000). Increasing intervention implementation in general education following consultation: A comparison of two follow-up strategies. Journal of Applied Behavior Analysis, 33(3), 271.
Noell, G. H., Witt, J. C., Gilbertson, D. N., Ranier, D. D., & Freeland, J. T. (1997). Increasing teacher intervention implementation in general education settings through consultation and performance feedback. School Psychology Quarterly, 12(1), 77.
This book looks at how new ideas spread via communication channels over time. Such innovations are initially perceived as uncertain and even risky. To overcome this uncertainty, most people seek out others like themselves who have already adopted the new idea. Thus the diffusion process typically takes months or years. But there are exceptions: use of the Internet in the 1990s, for example, may have spread more rapidly than any other innovation in the history of humankind.
Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York, NY: Free Press.
The purpose of this chapter is to explain the role of treatment integrity assessment within the “implementing solutions” stage of a problem-solving model.
Sanetti, L. H., & Kratochwill, T. R. (2005). Treatment integrity assessment within a problem-solving model. Assessment for intervention: A problem-solving approach, 314-325.
This paper reviews options for treatment integrity measurement emphasizing how direct behavior rating technology might be incorporated within a multi-tiered model of intervention delivery.
Sanetti, L. M. H., Chafouleas, S. M., Christ, T. J., & Gritter, K. L. (2009). Extending use of direct behavior rating beyond student assessment. Assessment for Effective Intervention, 34(4), 251-258.
Both student outcomes and treatment fidelity data are necessary to draw valid conclusions about intervention effectiveness. Reviews of the intervention outcome literature in related fields, and prior reviews of the school psychology literature, suggest that many researchers failed to report treatment fidelity data.
Sanetti, L. M. H., Charbonneau, S., Knight, A., Cochrane, W. S., Kulcyk, M. C., & Kraus, K. E. (2020). Treatment fidelity reporting in intervention outcome studies in the school psychology literature from 2009 to 2016. Psychology in the Schools, 57(6), 901-922.
Over the past two decades, the role of school psychologists internationally has shifted from a more narrow focus on assessment to a broader emphasis on problem solving and delivering intervention services via consultation. Defining interventions is important for replication and translation of practice. Further, to make valid, data-based decisions about intervention effectiveness, school psychologists need to consider student outcomes in light of treatment integrity data.
Sanetti, L. M. H., Dobey, L. M., & Gallucci, J. (2014). Treatment integrity of interventions with children in School Psychology International from 1995–2010. School Psychology International, 35(4), 370-383.
The authors reviewed all intervention studies published in the Journal of Positive Behavior Interventions from 1999 to 2009 to determine the percentage that reported a measure of treatment integrity. Slightly more than 40% did so.
Sanetti, L. M. H., Dobey, L. M., & Gritter, K. L. (2012). Treatment integrity of interventions with children in the Journal of Positive Behavior Interventions from 1999 to 2009. Journal of Positive Behavior Interventions, 14(1), 29-46.
The authors reviewed four school psychology journals from 1995 to 2008 to estimate the percentage of intervention studies that reported some measure of treatment integrity. About 50% did so.
Sanetti, L. M. H., Gritter, K. L., & Dobey, L. M. (2011). Treatment integrity of interventions with children in the school psychology literature from 1995 to 2008. School Psychology Review, 40(1), 72-84.
A commonly used research design in applied behavior analysis involves comparing two or more independent variables. There were some consistencies across the reviewed studies: half produced equivalent outcomes across comparisons, and most employed an alternating treatments or multi-element single-subject design to compare teaching methodologies.
Shabani, D. B., & Lam, W. Y. (2013). A review of comparison studies in applied behavior analysis. Behavioral Interventions, 28(2), 158-183.
This literature review was conducted to evaluate the current state of evidence supporting communication interventions for individuals with severe intellectual and developmental disabilities. Many researchers failed to report treatment fidelity or to assess basic aspects of intervention effects, including generalization, maintenance, and social validity.
Snell, M. E., Brady, N., McLean, L., Ogletree, B. T., Siegel, E., Sylvester, L., ... & Sevcik, R. (2010). Twenty years of communication intervention research with individuals who have severe intellectual and developmental disabilities. American Journal on Intellectual and Developmental Disabilities, 115(5), 364-380.
This book is written for school administrators, staff developers, behavior specialists, and instructional coaches to offer guidance in implementing research-based practices that establish effective classroom management in schools. The book provides administrators with practical strategies to maximize the impact of professional development.
Sprick et al. (2010). Coaching Classroom Management: Strategies & Tools for Administrators & Coaches. Pacific Northwest Publishing.
This paper examines a range of education failures: common mistakes in how new practices are selected, implemented, and monitored. The goal is not a comprehensive listing of all education failures but rather to provide education stakeholders with an understanding of the importance of vigilance when implementing new practices.
States, J., & Keyworth, R. (2020). Why Practices Fail. Oakland, CA: The Wing Institute. https://www.winginstitute.org/roadmap-overview
Inattention to treatment integrity is a primary factor in failure during implementation. Treatment integrity is defined as the extent to which an intervention is executed as designed, and the accuracy and consistency with which the intervention is implemented.
States, J., Detrich, R., & Keyworth, R. (2017). Treatment Integrity Strategies. Oakland, CA: The Wing Institute. https://www.winginstitute.org/effective-instruction-treatment-integrity-strategies
This study compared indirect training and direct training methods as a means of impacting levels of treatment integrity. Direct training methods produced better outcomes.
Sterling-Turner, H. E., Watson, T. S., & Moore, J. W. (2002). The effects of direct training and treatment integrity on treatment outcomes in school consultation. School Psychology Quarterly, 17(1).
The present study was conducted to investigate the relationship between training procedures and treatment integrity.
Sterling-Turner, H. E., Watson, T. S., Wildmon, M., Watkins, C., & Little, E. (2001). Investigating the relationship between training type and treatment integrity. School Psychology Quarterly, 16(1), 56.
A review of 20 experimental shared book reading (SBR) interventions using questioning strategies with preschool children was conducted. The studies were analyzed in terms of their quality, focus, and the questioning strategies employed. Although the studies raised few methodological concerns, treatment fidelity and the replicability of the reported interventions are flagged as issues needing attention in future research.
Walsh, R. L., & Hodge, K. A. (2018). Are we asking the right questions? An analysis of research on the effect of teachers’ questioning on children’s language during shared book reading with young children. Journal of Early Childhood Literacy, 18(2), 264-294.
In a randomized controlled study, Head Start teachers were assigned either to an intervention group that received intensive, ongoing professional development (PD) or to a comparison group that received the “business as usual” PD provided by Head Start. The PD intervention provided teachers with conceptual knowledge and instructional strategies that support young children’s development of vocabulary, alphabet knowledge, and phonological sensitivity.
Wasik, B. A., & Hindman, A. H. (2011). Improving vocabulary and pre-literacy skills of at-risk preschoolers through teacher professional development. Journal of Educational Psychology, 103(2), 455.
The relationships among independent variables and three measures of treatment integrity were evaluated.
Wickstrom, K. F., Jones, K. M., LaFleur, L. H., & Witt, J. C. (1998). An analysis of treatment integrity in school-based behavioral consultation. School Psychology Quarterly, 13(2), 141.
Treatment integrity is essential for the implementation of interventions in schools as it determines the accuracy or consistency with which different components of a treatment are implemented. There are no current standards regarding the best practices in treatment integrity measurement; however, higher integrity is associated with enhanced student outcomes.
Wilson, E. (2017). Generalizability of multiple measures of treatment integrity: An empirical replication.
This study evaluated the effects of performance feedback on increasing the quality of implementation of interventions by teachers in a public school setting.
Witt, J. C., Noell, G. H., LaFleur, L. H., & Mortenson, B. P. (1997). Teacher use of interventions in general education settings: Measurement and analysis of the independent variable. Journal of Applied Behavior Analysis, 30(4), 693.