This book is compiled from the proceedings of the sixth summit entitled “Performance Feedback: Using Data to Improve Educator Performance.” The 2011 summit topic was selected to help answer the following question: What basic practice has the potential for the greatest impact on changing the behavior of students, teachers, and school administrative personnel?
States, J., Keyworth, R., & Detrich, R. (2013). Introduction: Proceedings from the Wing Institute's Sixth Annual Summit on Evidence-Based Education: Performance Feedback: Using Data to Improve Educator Performance. In Education at the Crossroads: The State of Teacher Preparation (Vol. 3, pp. ix-xii). Oakland, CA: The Wing Institute.
A 10-year comparison of graduates from 4- and 5-year teacher education programs at the same institution revealed significant differences between graduates of the two programs. Limitations of the study and alternative explanations for these differences are discussed.
Andrew, M. D. (1990). Differences between graduates of 4-year and 5-year teacher preparation programs. Journal of Teacher Education, 41, 45–51.
This study evaluated institutional sustainability of the Early Risers “Skills for Success” conduct problems prevention program.
August, G. J., Bloomquist, M. L., Lee, S. S., Realmuto, G. M., & Hektner, J. M. (2006). Can evidence-based prevention programs be sustained in community practice settings? The Early Risers’ advanced-stage effectiveness trial. Prevention Science, 7(2), 151-165.
Differential reinforcement of appropriate behavior is an important skill for classroom teachers. This study examined the use of performance feedback to increase the rate of differential reinforcement by pre-service teachers.
Auld, R. G., Belfiore, P. J., & Scheeler, M. C. (2010). Increasing pre-service teachers' use of differential reinforcement: Effects of performance feedback on consequences for student behavior. Journal of Behavioral Education, 19(2), 169-183.
In this paper, the author surveys 117 published and unpublished studies spanning more than 25 years, structuring the literature review around a conceptual model of program implementation stages.
Backer, T. E. (2001). Finding the balance: Program fidelity and adaptation in substance abuse prevention: A state-of-the-art review. Rockville, MD: Center for Substance Abuse Prevention.
The modified Research, Development, and Diffusion (RD&D) model, as exemplified by change agents in federal organizations, was examined as a viable strategy for disseminating social program innovations.
Blakely, C. H., Mayer, J. P., Gottschalk, R. G., Schmitt, N., Davidson, W. S., Roitman, D. B., & Emshoff, J. G. (1987). The fidelity-adaptation debate: Implications for the implementation of public sector social programs. American Journal of Community Psychology, 15(3), 253-268.
This paper reviews the promise of fidelity measures for advancing the research and practice of one area of mental health services, namely, psychiatric rehabilitation, and provides a historical context for the development of fidelity measurement.
Bond, G. R., Evans, L., Salyers, M. P., Williams, J., & Kim, H. W. (2000). Measurement of fidelity in psychiatric rehabilitation. Mental Health Services Research, 2(2), 75-87.
This study examines the importance of implementation integrity for problem-solving teams (PSTs) and response-to-intervention models. The study hypothesized that providing performance feedback, which has consistently been shown to increase implementation integrity, to PSTs would enhance the procedural integrity of the process.
Burns, M. K., Peters, R., & Noell, G. H. (2008). Using performance feedback to enhance implementation fidelity of the problem-solving team process. Journal of School Psychology, 46(5), 537-550.
This paper discusses the search for a "magic metric" in education: an index or number that would be generally accepted as the most efficient descriptor of a school's performance within a district.
Celio, M. B. (2013). Seeking the Magic Metric: Using Evidence to Identify and Track School System Quality. In Performance Feedback: Using Data to Improve Educator Performance (Vol. 3, pp. 97-118). Oakland, CA: The Wing Institute.
The purpose of this study is to examine the effects of feedback on treatment integrity for implementing behavior support plans.
Codding, R. S., Feinberg, A. B., Dunn, E. K., & Pace, G. M. (2005). Effects of immediate performance feedback on implementation of behavior support plans. Journal of Applied Behavior Analysis, 38(2), 205-219.
The authors examined the extent to which program integrity (i.e., the degree to which programs were implemented as planned) was verified and promoted in evaluations of primary and early secondary prevention programs published between 1980 and 1994.
Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18(1), 23-45.
No Child Left Behind (NCLB) as federal policy required the use of scientifically based instruction to improve educational outcomes. It is a long way from policy to impacting what actually happens in a classroom. In this chapter, the author reviews what is known from implementation science to assure that policy becomes practice.
Detrich, R. (2008). From policy to practice: Evidence-based practice and IDEIA. In E. L. Grigorenko (Ed.), Educating individuals with disabilities: IDEIA 2004 and beyond (p. 85). Springer Publishing Company.
Over the last fifty years, there have been many educational reform efforts, most of which have had a relatively short lifespan and failed to produce the promised results. One possible reason is that these innovations have, for the most part, been poorly implemented. In this chapter, the author proposes a data-based decision-making approach to assuring high-quality implementation.
Detrich, R. (2013). Innovation, implementation science, and data-based decision making: Components of successful reform. In M. Murphy, S. Redding, & J. Twyman (Eds.), Handbook on innovations in learning (p. 31). Charlotte, NC: Information Age Publishing.
This article reviews implementation issues in prevention trials and specifically highlights the study of implementation in the 34 programs determined to be effective in a recent review conducted by the Prevention Research Center for the Center for Mental Health Services.
Domitrovich, C. E., & Greenberg, M. T. (2000). The study of implementation: Current findings from effective programs that prevent mental disorders in school-aged children. Journal of Educational and Psychological Consultation, 11(2), 193-221.
The National Council on Teacher Quality (NCTQ) review examines teacher preparation programs' progress in adopting the necessary components of evidence-based reading instruction. The report continues the effort of two previous reports, offering educators a look at trends in programs' progress toward providing this essential training.
Drake, G., et al. (2020). Teacher Prep Review: Program Performance in Early Reading Instruction. National Council on Teacher Quality. https://www.nctq.org/dmsView/NCTQ_2020_Teacher_Prep_Review_Program_Performance_in_Early_Reading_Instruction
A brief overview of findings from the Blueprints for Violence Prevention replication initiative is presented, identifying factors that enhance or impede successful implementation of these programs.
Elliott, D. S., & Mihalic, S. (2004). Issues in disseminating and replicating effective prevention programs. Prevention Science, 5(1), 47-53.
Part One provides the reader with information essential to understanding not only the content of the sections that follow but also the wealth of material that exists in the literature on program evaluation. Part Two introduces different approaches to evaluation to enlarge the reader's understanding of the diversity of choices that evaluators and stakeholders make in undertaking evaluation.
Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2003). Program Evaluation: Alternative Approaches and Practical Guidelines.
This paper discusses common elements of successfully sustaining effective practices across a variety of disciplines.
Fixsen, D. L., Blase, K. A., Duda, M., Naoom, S. F., & Van Dyke, M. (2010). Sustainability of evidence-based programs in education. Journal of Evidence-Based Practices for Schools, 11(1), 30-46.
This is a comprehensive literature review of implementation, examining all stages from adoption through sustainability.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, National Implementation Research Network.
This article presents the Fidelity of Implementation Rating System (FIMP), an observation-based measure assessing competent adherence to the Oregon model of Parent Management Training (PMTO).
Forgatch, M. S., Patterson, G. R., & DeGarmo, D. S. (2005). Evaluating fidelity: Predictive validity for a measure of competent adherence to the Oregon model of parent management training. Behavior Therapy, 36(1), 3-13.
This commentary offers a framework for building capacity for responsiveness to intervention and then considers how the articles constituting this special issue address the various questions posed in the framework.
Fuchs, L. S., & Fuchs, D. (2006). A framework for building capacity for responsiveness to intervention. School Psychology Review, 35(4), 621.
The authors have been evaluating the impact of five principal preparation programs in the United States on student outcomes. This information should be considered as one aspect of preparation program improvement and accountability. The study team lays out its recommendations in this policy paper.
George W. Bush Institute & Education Reform Initiative. (2016). Developing Leaders: The Importance--and the Challenges--of Evaluating Principal Preparation Programs. Retrieved from https://gwbcenter.imgix.net/Resources/gwbi-importance-of-evaluating-principal-prep.pdf
This study reviewed the reporting rates of treatment integrity for school based interventions.
Gresham, F. M., & Gansle, K. A. (1993). Treatment integrity of school-based behavioral intervention studies: 1980-1990. School Psychology Review, 254.
This study reviewed all intervention studies published between 1980 and 1990 in the Journal of Applied Behavior Analysis in which children were the subjects. The authors found that treatment integrity was reported in only 16% of the studies.
Gresham, F. M., Gansle, K. A., & Noell, G. H. (1993). Treatment integrity in applied behavior analysis with children. Journal of Applied Behavior Analysis, 26(2), 257-263.
The authors reviewed three learning disabilities journals published between 1995 and 1999 to determine what percentage of the intervention studies reported measures of treatment integrity. Only 18.5% reported treatment integrity measures.
Gresham, F. M., MacMillan, D. L., Beebe-Frankenberger, M. E., & Bocian, K. M. (2000). Treatment integrity in learning disabilities intervention research: Do we really know how treatments are implemented? Learning Disabilities Research & Practice, 15(4), 198-205.
This paper suggests a model for selecting interventions that match the context of classrooms.
Harn, B., Parisi, D., & Stoolmiller, M. (2013). Balancing fidelity with flexibility and fit: What do we really know about fidelity of implementation in schools? Exceptional Children, 79(2), 181-193.
In this review, we explore the extent to which researchers evaluating the efficacy of Tier 2 elementary reading interventions within the framework of Response to Intervention reported on fidelity of implementation and alignment of instruction between tiers. The review finds, however, that researchers frequently neglect to report on fidelity of intervention in Tier 1, potentially limiting the claims that can be made about the efficacy of subsequent Tier 2 intervention.
Hill, D. R., King, S. A., Lemons, C. J., & Partanen, J. N. (2012). Fidelity of implementation and instructional alignment in response to intervention research. Learning Disabilities Research & Practice, 27(3), 116-124.
The purpose of this study was to compare the effects of constant time delay delivered with high procedural fidelity to constant time delay delivered with high procedural fidelity on all variables except delivery of the controlling prompt.
Holcombe, A., Wolery, M., & Snyder, E. (1994). Effects of two levels of procedural fidelity with constant time delay on children's learning. Journal of Behavioral Education, 4(1), 49-73.
Response to Intervention (RtI) has gained increased attention with the reauthorization of the Individuals with Disabilities Education Improvement Act. Since RtI was introduced at the policy level as a mechanism for use in the learning disability identification process, much of the implementation work has focused on this application.
Keller‐Margulis, M. A. (2012). Fidelity of implementation framework: A critical need for response to intervention models. Psychology in the Schools, 49(4), 342-352.
The present study used cross-sectional data from 1,438 schools to examine relations between fidelity self-assessment and team-based fidelity measures in the first 4 years of implementation of School-Wide Positive Behavioral Interventions and Supports (SWPBIS). Results showed strong positive correlations between fidelity self-assessments and a team-based measure of fidelity at each year of implementation.
Khoury, C. R., McIntosh, K., & Hoselton, R. (2019). An investigation of concurrent validity of fidelity of implementation measures at initial years of implementation. Remedial and Special Education, 40(1), 25-31.
In 1990, the Commonwealth of Pennsylvania implemented a statewide instructional support team (IST) process to provide prereferral assessment and intervention for at-risk students in 500 school districts. The current study examined the academic performance of students affected by this process as contrasted with other at-risk students who did not have access to it.
Kovaleski, J. F., Gickling, E. E., Morrow, H., & Swank, P. R. (1999). High versus low implementation of instructional support teams: A case for maintaining program fidelity. Remedial and Special Education, 20(3), 170-183.
The authors conducted a comprehensive review of research to identify the impact of coaching on changes in preservice and in-service teachers’ implementation of evidence-based practices. They identified a total of 13 studies from the 20 years of literature they searched.
Kretlow, A. G., & Bartholomew, C. C. (2010). Using coaching to improve the fidelity of evidence-based practices: A review of studies. Teacher Education and Special Education, 33(4), 279-299.
Assessing fidelity of implementation of school-based interventions is a critical factor in successful implementation and sustainability. The Tiered Fidelity Inventory (TFI) was developed as a comprehensive measure of all three tiers of School-Wide Positive Behavioral Interventions and Supports (SWPBIS) and is intended to measure the extent to which the core features of SWPBIS are implemented with fidelity.
Massar, M. M., McIntosh, K., & Mercer, S. H. (2019). Factor validation of a fidelity of implementation measure for social behavior systems. Remedial and Special Education, 40(1), 16-24.
This study examined whether children receiving one year of early intensive behavioral intervention (EIBI; N = 35) would make larger gains in adaptive behaviors than a group of children receiving treatment as usual (TAU; N = 24).
McEachin, J. J., Smith, T., & Lovaas, O. I. (1993). Outcome in adolescence of autistic children receiving early intensive behavioral treatment. American Journal of Mental Retardation, 97, 359-372.
Full and durable implementation of school-based interventions is supported by regular evaluation of fidelity of implementation. Multiple assessments have been developed to evaluate the extent to which schools are applying the core features of school-wide positive behavioral interventions and supports (SWPBIS).
McIntosh, K., Massar, M. M., Algozzine, R. F., George, H. P., Horner, R. H., Lewis, T. J., & Swain-Bradway, J. (2017). Technical adequacy of the SWPBIS tiered fidelity inventory. Journal of Positive Behavior Interventions, 19(1), 3-13.
Several reliable and valid fidelity surveys are commonly used to assess Tier 1 implementation in School-Wide Positive Behavioral Interventions and Supports (SWPBIS); however, differences across surveys complicate consequential decisions regarding school implementation status when multiple measures are compared. Compared with other measures, the PBIS Self-Assessment Survey (SAS) was more sensitive to differences among schools at higher levels of implementation. Implications for SWPBIS research and fidelity assessment are discussed.
Mercer, S. H., McIntosh, K., & Hoselton, R. (2017). Comparability of fidelity measures for assessing tier 1 school-wide positive behavioral interventions and supports. Journal of Positive Behavior Interventions, 19(4), 195-204.
Fidelity of treatment in outcome research refers to confirmation that the manipulation of the independent variable occurred as planned. Verification of fidelity is needed to ensure that fair, powerful, and valid comparisons of replicable treatments can be made.
Moncher, F. J., & Prinz, R. J. (1991). Treatment fidelity in outcome studies. Clinical Psychology Review, 11(3), 247-266.
This study evaluated the effects of performance feedback on the implementation of a classroom intervention.
Mortenson, B. P., & Witt, J. C. (1998). The use of weekly performance feedback to increase teacher implementation of a prereferral academic intervention. School Psychology Review, 613-627.
This literature review highlights central elements of the residency component of school leader preparation programs by aggregating the results of studies conducted on existing principal preparation programs.
New York City Leadership Academy. (2013). Literature Review of Principal Preparation Programs. Retrieved from https://www.nycleadershipacademy.org/wp-content/uploads/2018/06/residency-design-literature-review.pdf
This study compared the effects of discussing issues of implementation challenges and performance feedback on increasing the integrity of implementation. Performance feedback was more effective than discussion in increasing integrity.
Noell, G. H., & Witt, J. C. (2000). Increasing intervention implementation in general education following consultation: A comparison of two follow-up strategies. Journal of Applied Behavior Analysis, 33(3), 271.
This study evaluated the effects of consultation and performance feedback in a public school setting, examining the treatment integrity with which general education teachers implemented a reinforcement-based intervention designed to improve the academic performance of elementary school students.
Noell, G. H., Witt, J. C., Gilbertson, D. N., Ranier, D. D., & Freeland, J. T. (1997). Increasing teacher intervention implementation in general education settings through consultation and performance feedback. School Psychology Quarterly, 12(1), 77-88.
Education researchers are being asked to conduct rigorous, scientifically based studies of K–12 curriculum interventions; measuring fidelity of implementation and empirically relating it to outcomes (the chief rationale for this review) is therefore necessary to ensure internal and external validity. The results of this review indicate that there are too few studies to guide researchers on how fidelity of implementation to core curriculum interventions can be measured and related to outcomes, particularly within efficacy and effectiveness studies, where the requirements for fidelity measures differ.
O’Donnell, C. L. (2008). Defining, conceptualizing, and measuring fidelity of implementation and its relationship to outcomes in K–12 curriculum intervention research. Review of Educational Research, 78(1), 33-84.
This article describes early aspects of the nationwide implementation of an evidence‐based program (EBP) in Norway and the design for studying program fidelity over time.
Ogden, T., Forgatch, M. S., Askeland, E., Patterson, G. R., & Bullock, B. M. (2005). Implementation of parent management training at the national level: The case of Norway. Journal of Social Work Practice, 19(3), 317-329.
We sought to identify, examine, and summarize empirical literature focused on early childhood behavior interventions examined using single-case research designs (SCDs) and published between 2001 and 2018. The findings of the current review suggest promoting implementation fidelity through implementation support to improve social validity outcomes, providing guidelines for the timing and frequency of social validity assessment, and developing social validity assessment tools designed to assess each of the social validity dimensions.
Park, E. Y., & Blair, K. S. C. (2019). Social validity assessment in behavior interventions for young children: A systematic review. Topics in Early Childhood Special Education, 39(3), 156-169.
The authors reviewed all of the intervention studies published in the Journal of Applied Behavior Analysis between 1968 and 1980 to determine the percentage of studies that reported measures of treatment integrity. The overall average across those years was 16%.
Peterson, L., Homer, A. L., & Wonderlich, S. A. (1982). Integrity of independent variables in behavior analysis. Journal of Applied Behavior Analysis, 15, 477-492.
First Step to Success is an empirically supported intervention for young elementary school students that can be implemented by public school teachers with training. This study evaluated the effects of coaching feedback on teachers who did not effectively implement First Step following training.
Rodriguez, B. J., Loman, S. L., & Horner, R. H. (2009). A preliminary analysis of the effects of coaching feedback on teacher implementation fidelity of First Step to Success. Behavior Analysis in Practice, 2(2), 11-21.
The authors reviewed all intervention studies published in the Journal of Positive Behavior Interventions between 1999 and 2009 to determine the percentage of those studies that reported a measure of treatment integrity. Slightly more than 40% reported a measure of treatment integrity.
Sanetti, L. M. H., Dobey, L. M., & Gritter, K. L. (2012). Treatment integrity of interventions with children in the Journal of Positive Behavior Interventions from 1999 to 2009. Journal of Positive Behavior Interventions, 14(1), 29-46.
If educational programs are to be effective, they must be implemented with sufficient integrity to assure benefits. To have a significant impact on schools, solutions must also be scalable. This study evaluated the effects of using existing school personnel to provide performance feedback to teachers regarding the quality of implementation.
Sanetti, L. M. H., Fallon, L. M., & Collier-Meek, M. A. (2013). Increasing teacher treatment integrity through performance feedback provided by school personnel. Psychology in the Schools, 50(2), 134-150.
The authors reviewed four school psychology journals published between 1995 and 2008 to estimate the percentage of intervention studies that reported some measure of treatment integrity. About 50% reported a measure of treatment integrity.
Sanetti, L. M. H., Gritter, K. L., & Dobey, L. M. (2011). Treatment integrity of interventions with children in the school psychology literature from 1995 to 2008. School Psychology Review, 40(1), 72-84.
A critical review of reading programs requires objective and in-depth analysis. For this reason, the authors offer the following recommendations and procedures for analyzing critical elements of programs.
Simmons, D. C., & Kame’enui, E. J. (2003). A consumer’s guide to evaluating a core reading program grades K-3: A critical elements analysis. Retrieved December 19, 2006.
Science education reform publications have laid out guidelines for implementing changes suggested to improve students' understanding of science. In keeping with these suggestions, several research groups have created, piloted, and implemented curricular programs in schools in the hope of increasing students' science content understanding as well as their complex reasoning skills.
Songer, N. B., & Gotwals, A. W. (2005, April). Fidelity of implementation in three sequential curricular units. Paper presented at the annual meeting of the American Educational Research Association, Montreal, Canada.
We report findings from a quasi-experimental evaluation of the recently implemented US$5,000 retention bonus program for effective teachers in Tennessee’s Priority Schools. We estimate the impact of the program on teacher retention using a fuzzy regression discontinuity design by exploiting a discontinuity in the probability of treatment conditional on the composite teacher effectiveness rating that assigns bonus eligibility.
Springer, M. G., Swain, W. A., & Rodriguez, L. A. (2016). Effective teacher retention bonuses: Evidence from Tennessee. Educational Evaluation and Policy Analysis, 38(2), 199-221.
In this article, the authors address the following questions: How does the level of on-site technical assistance affect student outcomes? Do teachers’ fidelity of treatment implementation and their perceptions of school climate mediate effects on student performance?
Stein, M. L., Berends, M., Fuchs, D., McMaster, K., Sáenz, L., Yen, L., ... & Compton, D. L. (2008). Scaling up an early reading program: Relationships among teacher support, fidelity of implementation, and student performance across different sites and years. Educational Evaluation and Policy Analysis, 30(4), 368-388.
Treatment fidelity reporting practices are described for journals that published general and special education intervention research with high impact factors from 2005 through 2009. The authors reviewed research articles, reported the proportion of intervention studies that described fidelity measurement, detailed the components of fidelity measurement reported, and determined whether the components of fidelity reported differed based on the research design, the type of intervention, or the number of intervention sessions.
Swanson, E., Wanzek, J., Haring, C., Ciullo, S., & McCulley, L. (2013). Intervention fidelity in special and general education research journals. The Journal of Special Education, 47(1), 3-13.
This study examined the fidelity of problem-solving implementation by multidisciplinary teams (MDTs) in 227 schools and the relationship to student outcomes.
Telzrow, C. F., McNamara, K., & Hollinger, C. L. (2000). Fidelity of problem-solving implementation and relationship to student performance. School Psychology Review, 29(3), 443.
Intended for state officials involved in the assessment and approval of university and other programs to train future school principals, this report describes five design principles for effective program evaluation.
UCEA and New Leaders (2016). Improving state evaluation of principal preparation programs. Retrieved from: www.sepkit.org
Implementation fidelity is often thought of as a necessary condition to achieve internal validity and as having a relation to student outcomes. To examine the nature of this relation, we reviewed reading intervention studies for students in K-12 in which measures of implementation fidelity were included in final data analysis.
van Dijk, W., Lane, H., & Gage, N. A. (2019). The relation between implementation fidelity and students' reading outcomes: A systematic review of the literature.
This study evaluated the effects of performance feedback on increasing the quality of implementation of interventions by teachers in a public school setting.
Witt, J. C., Noell, G. H., LaFleur, L. H., & Mortenson, B. P. (1997). Teacher use of interventions in general education settings: Measurement and analysis of the independent variable. Journal of Applied Behavior Analysis, 30(4), 693.