Implementation integrity is one of the most critical factors in ensuring that evidence-based practices are effective. Educators are always in search of methods that will increase implementation integrity. One of the most researched approaches is performance feedback; much less evaluated is the combination of goal setting and feedback. Criss and colleagues (2022) conducted a systematic review of the effects of goal setting combined with performance feedback, identifying 22 studies that met the inclusion criteria. Overall, the results suggest that goal setting combined with performance feedback is an efficacious method for increasing implementation integrity. Additionally, specific features of how goals were set and how feedback was delivered likely contributed to better outcomes.
Interestingly, teacher-created goals resulted in better outcomes than goals developed by a consultant alone or collaboratively by the teacher and consultant. In most studies, goals were established during baseline. Goal setting produced moderate to large effect sizes in randomized clinical trials and moderate success estimates in single-case design studies. Providing feedback on progress toward goals was most effective, closely followed by discussing goals during feedback sessions. Setting goals during baseline with no further discussion produced the weakest effects.
Practices related to feedback that seemed to increase effectiveness were also identified. Data presented visually yielded high success estimates in single-case studies and moderate to large effect sizes in randomized clinical trials. Verbal, written, and emailed written feedback all produced more moderate, but still meaningful, effects.
This review cannot disentangle how much of these outcomes was attributable to feedback and how much to goal setting. Regardless, educators can feel confident that combining the two strategies can increase implementation integrity. The specific methods used for providing feedback will depend on available resources. Ideally, feedback would be tied to teacher-created goals and include a visual display of progress toward those goals. When resources do not allow meeting individually with each teacher, it may be necessary to send each teacher written and visual feedback on progress toward their goals.
Citation:
Criss, C. J., Konrad, M., Alber-Morgan, S. R., & Brock, M. E. (2022). A Systematic Review of Goal Setting and Performance Feedback to Improve Teacher Practice. Journal of Behavioral Education, 1-22.
In the review above, we discussed the effectiveness of First Step Next. Based on the available evidence, it is an effective intervention for preschool through third-grade students. The next question for school leaders is how cost-effective it is. In other words, how much do they pay for the obtained outcomes? School financial resources are limited, so educators must be cost-conscious, and conducting cost-effectiveness analyses is the next evolution in the evidence-based practice movement in education. Regardless of effectiveness, an intervention with a poor cost-benefit ratio is not likely to be adopted. The authors of this study evaluated cost-effectiveness using data from an efficacy study of First Step Next. Cost-effectiveness was calculated for the combined intervention of First Step Next and homeBase, for First Step Next alone, and for homeBase alone. It was evaluated for students classified as having ADHD, students classified as having Conduct Disorder, and students with comorbid ADHD and Conduct Disorder. Treatment effectiveness was defined as movement from the clinical range into the borderline or normative range, or from the borderline range into the normative range, post-intervention.
The combined intervention of First Step Next and homeBase was the most cost-effective. The combined package cost more to implement but produced a greater return on investment than First Step Next alone or homeBase alone. First Step Next alone was the next most cost-effective, and homeBase was the least cost-effective. In terms of treating the clinical syndromes addressed in this study, it was most expensive to produce improvement in the comorbid condition of ADHD and Conduct Disorder, followed by Conduct Disorder, and then ADHD.
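To make the cost-effectiveness logic concrete, the metric can be expressed as cost per student who responds to treatment. The sketch below uses entirely hypothetical dollar amounts and response counts (not figures from Kuklinski et al.); it only illustrates how the most expensive package can still have the best ratio when it moves more students out of the clinical range.

```python
# Hypothetical illustration of cost-effectiveness as cost per responder.
# All dollar amounts and responder counts below are invented for this
# sketch; they are NOT the figures reported in the study.

def cost_per_responder(total_cost, n_responders):
    """Cost-effectiveness ratio: dollars spent per student who moved
    from the clinical range into the borderline/normative range."""
    return total_cost / n_responders

# Suppose a school serves 20 students under each condition (hypothetical):
combined = cost_per_responder(total_cost=40_000, n_responders=16)  # $2,500
fsn_alone = cost_per_responder(total_cost=25_000, n_responders=9)  # ~$2,778
homebase = cost_per_responder(total_cost=15_000, n_responders=4)   # $3,750

# The combined package costs the most in absolute terms but the least
# per successful outcome, mirroring the ordering reported in the study.
print(combined, fsn_alone, homebase)
```

Lower values are better on this metric, which is why a higher absolute cost does not by itself rule out the best return on investment.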
This study highlights the complexity of decision-making for school leaders. The combined package of First Step Next and homeBase is the most expensive intervention but produces the greatest return on investment. It is not always possible for school leaders to offer the multi-component intervention because parents may refuse to participate or district policies may prohibit home-based services. School leaders will still achieve a reasonable return on their investment by offering First Step Next alone. Adding to the complexity of decision-making is the differential cost-effectiveness across the clinical populations. School leaders will get the greatest return on investment when addressing ADHD. Providing First Step Next to address problems associated with the comorbid condition of ADHD and Conduct Disorder is more expensive, thus worsening the cost-benefit ratio. First Step Next may still be more cost-effective than other interventions developed for this population, but comparisons with other treatments were not conducted in this analysis.
These data should be taken with a “grain of salt.” They were derived from a large-scale efficacy study, which tends to be more expensive because researchers are so closely involved in implementing the intervention. Efficacy studies also usually produce greater effects than implementations that rely on usual school resources. Under typical school conditions, the outcomes would likely not be as strong, but the costs may also be lower, so the cost-benefit ratio may be approximately the same. These analyses are a welcome addition to the evidence base for specific interventions. It would be beneficial to have cost-effectiveness data for an intervention across different contexts and populations.
Citation:
Kuklinski, M. R., Small, J. W., Frey, A. J., Bills, K., & Forness, S. R. (2022). Cost Effectiveness of School and Home Interventions for Students with Disruptive Behavior Problems. Journal of Emotional and Behavioral Disorders, 10634266221120521.
The COVID pandemic required educators to move rapidly from in-person classes to remote learning. Many educators had little experience with online instruction, especially at the scale necessary to educate all students. The question became: What impact would this rapid change in instructional approach have on student achievement? Education Week has reported the results of the National Assessment of Educational Progress (NAEP) testing from the spring of 2022. The previous testing period was the spring of 2020, just before the pandemic resulted in the widespread closure of schools and the shift to remote learning. Comparing data from 2020 with scores from 2022 clearly shows that COVID had a significant negative impact on education nationally.
Overall, math scores dropped by 7 points, the first decline in math scores in the fifty years the NAEP has assessed academic achievement. Reading scores fell by 9 points, the largest drop since 1990. White and Black students both saw decreases in math and reading; however, the size of the declines differed substantially. In math, White students’ scores fell by 5 points, while Black students’ scores dropped 13 points; the gap between these two groups grew from 25 points to 33 points between 2020 and 2022. Students across all regions of the country had lower math scores; every region except the West also had lower reading scores.
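The widening of the Black-White math gap follows directly from the differential declines; a quick check of the arithmetic reported above:

```python
# Arithmetic check of the NAEP math gap figures reported above.
white_drop = 5    # White students' math decline, 2020-2022 (points)
black_drop = 13   # Black students' math decline, 2020-2022 (points)
gap_2020 = 25     # Black-White math gap in 2020 (points)

# When both groups decline, the gap grows by the difference in declines.
gap_2022 = gap_2020 + (black_drop - white_drop)
print(gap_2022)  # 33, matching the reported 2022 gap
```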
Figure 1 describes changes across achievement levels in math between 2020 and 2022. Students in the lower-performing groups were most adversely affected by the pandemic. For example, students at the 90th percentile in 2020 dropped 3 points by 2022, while students at the 25th percentile dropped 11 points.
Figure 1: Changes in math scores for nine-year-old students across achievement levels from 2020-2022. Image from National Center for Education Statistics.
The data in Figure 2 reflect the changes in reading across achievement levels. The pattern is the same as in math: the pandemic had a more pronounced negative impact on students in the lower-performing groups.
Figure 2: Changes in reading scores for nine-year-old students across achievement levels from 2020-2022. Image from National Center for Education Statistics.
It is unclear what accounts for the differences across the achievement groups, but one possible contributing factor was access to support for remote learning. For students scoring at the 75th percentile or higher, 83% reported having access to a desktop, laptop, or tablet all the time during remote learning. For students at the 25th percentile or lower, only 58% reported that they had the same access. These data are suggestive and do not necessarily reflect a causal relation. Considerably more research is needed to establish a causal role.
Please read the complete report in Education Week for more data regarding the impact of the COVID pandemic.
Citation:
Schwartz, S. (2022, September 1). Students’ Math and Reading Plummet, Erasing Gains, National Assessment Finds. Education Week.
School-Based Interventions for Middle School Students With Disruptive Behaviors: A Systematic Review of Components and Methodology. Middle school students are more likely than elementary or high school students to be disruptive (Erikson & Gresham, 2019). This presents difficult problems for classroom teachers trying to provide instruction and maintain order. An estimated 2.5 hours of instruction are lost to disruptive behavior each week (Education Advisory Board, 2019). This level of disruption also contributes to teacher turnover, with 39% of teachers reporting that disruptive behavior was one of the primary reasons for resigning (Bettini et al., 2020).

To address these challenges, Alperin and colleagues (2021) completed a systematic review to identify programs with a positive effect on disruptive behavior and the characteristics of those programs. They identified 51 studies that met their inclusion criteria. Of those 51 studies, 40 specified the function of behavior (gaining attention or escaping demands) that the program addressed; 16 included a home-based component, with 7 providing parent training; 22 of the interventions had a manual guiding implementation; and, encouragingly, 42 assessed intervention implementation. Effect sizes were computed for seven studies that involved class-wide intervention strategies and ranged from small to large. Fourteen studies evaluated skill acquisition for small groups or individuals, with effect sizes again ranging from small to large. Seven studies evaluated reinforcement-based interventions for small groups or individuals and reported effect sizes ranging from small to large. Two studies evaluated interventions targeting escape from demands for small groups or individuals, and both reported large effect sizes.

The data from this review are important because they can guide educators seeking to reduce the disruptive behavior of middle school students. Ultimately, educators will have to consider the contextual fit of each of these interventions for the settings in which they work. This review narrows the range of options to those with some demonstrated level of effectiveness rather than leaving the educator to choose from all available options.
Citation: Alperin, A., Reddy, L. A., Glover, T. A., Bronstein, B., Wiggs, N. B., & Dudek, C. M. (2021). School-Based Interventions for Middle School Students With Disruptive Behaviors: A Systematic Review of Components and Methodology. School Psychology Review, 1-26.
Scaling and Disseminating Brief Bullying Prevention Programming: Strengths, Challenges, and Considerations. One of the persistent problems in education and other human service disciplines is the research-to-practice gap (some would call it a chasm). In an effort to disseminate an effective bullying prevention program (Free2B), Leff and colleagues applied the logic of Diffusion of Innovations (Rogers, 2003). This logic proposes that innovations are more likely to be adopted if they (1) have a relative advantage over current practices, (2) are easy to use, (3) are compatible with the values, beliefs, and experiences of users, (4) can be implemented on a trial basis before large-scale implementation, and (5) allow others to observe implementation and its effects. Leff and colleagues followed these recommendations in implementing the Free2B anti-bullying program in 40 middle schools. The authors concluded that it was an attractive alternative to many anti-bullying programs because the intervention was delivered during a school assembly that schools were already providing, so it required no additional time allocation. Additionally, the video format made delivery very easy compared with school-wide programs that are more time and resource intensive. Students reported that it addressed important topics. Prior to implementation, Leff and colleagues presented pilot data to key stakeholders at the state’s Office of Safe Schools, who were able to leverage adoption by schools across the state. In addition to measuring adoption, they also measured the impact on students and found positive effects across all measures.
Citations: Leff, S. S., Waasdorp, T. E., Paskewich, B. S., & Winston, F. K. (2020). Scaling and Disseminating Brief Bullying Prevention Programming: Strengths, Challenges, and Considerations. School Psychology Review, 1-15.
Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York: Free Press.
Overview of Professional Judgment. Educators make many decisions regarding services for students. Even when there is abundant evidence to guide their decisions, educators must use their judgment about what is appropriate in a given situation. Only rarely does the available evidence perfectly match the service context of concern to the educator. To bridge the gap between research and local circumstance, the educator must make a series of judgments, such as defining the problem, determining which evidence is relevant, and deciding which features of the local context are likely to require adaptations to the selected evidence-based intervention. Professional judgment is a cornerstone of evidence-based practice, as are the best available evidence, stakeholder values, and the context in which services are provided. In this definition of evidence-based practice, the integration of these variables influences decisions. No one cornerstone can be substituted for the others; judgment must be informed and constrained by the best available evidence, stakeholder values, and context.
Principal Evaluation. The field of principal evaluation, while gaining increased research interest in recent years, lags behind teacher evaluation in terms of the conclusions that can be drawn about effective practice. Prior to Race to the Top and ESEA waivers, principal evaluation was implemented inconsistently, and evaluation systems lacked valid and reliable instruments, had a tenuous relationship with leadership standards, failed to include measures of student/school outcomes, and had mixed purposes as to their intended use (e.g., sometimes as formative information to help principals improve, other times as summative information to make personnel decisions). However, today’s evaluation systems have evolved to incorporate multiple measures of principal performance that evaluate principals on research-based principles of effective leadership, often include student outcomes (which remains controversial), and are used both to help principals improve and to hold them accountable for their performance. Ongoing and more frequent observations, often conducted by the principal supervisor, who frequently also serves as a coach/mentor and directs the principal toward needed professional learning, show promise as an effective practice. Using the results from principal evaluations for personnel decisions, such as offering incentives through pay-for-performance programs, yields mixed results and warrants further research attention.
Citation: Donley, J., Detrich, R., States, J., & Keyworth, R. (2021). Principal Evaluation. Oakland, CA: The Wing Institute. https://www.winginstitute.org/quality-leadership-principal-evaluation
Teacher Preparation Program Models Overview. Teacher preparation began in the mid-19th century with the normal school, a 2-year course of study that prepared candidates for teaching. This model remained largely unchanged until the early 20th century, when universities created the undergraduate model, which currently predominates. Teacher candidates are required to spend 4 years obtaining a bachelor’s degree built around a prescribed course of education study. A second, relatively recent modification is the 5-year credential model, which requires candidates to obtain a bachelor’s degree before completing a 5th year of instruction in teaching. The driving force behind this postgraduate model was the belief that teachers were not respected; it was assumed that a post-bachelor’s certificate or graduate degree would confer greater esteem on the profession. This model is offered across the country and is mandated for all new teachers in California. A third option, the alternative credential (AC) model, arose as a solution to teacher shortages. The AC model is distinct from the traditional models in that candidates receive formal preparation coursework while already employed in the classroom. Currently, little evidence exists to support the superiority of any one model over the others.
Teacher Preparation: Instructional Effectiveness Overview. Discussions of teacher preparation generally focus on content (what to teach) rather than pedagogy (how to teach). Teacher training has changed little in 100 years. Preparation programs rely on lectures supplemented with 8 weeks of student teaching under minimal university oversight. Lecturing persists for various reasons: it requires nominal effort, instructors have greater control over what is presented, and assessing mastery of the material is easy using tests and papers. However, there are significant disadvantages to lecturing. Listening to a lecturer and answering questions during the lecture are very different from being able to perform skillfully in a real-world setting. Research shows that the most effective training of complex skills occurs when the training follows the “I do,” “we do,” “you do” paradigm. This model introduces skills through lectures and discussions in tandem with demonstrating the skills (I do). This is followed by learners practicing the skills alongside a coach (we do), and finally the student teacher performing independently with feedback from the coach (you do). Research suggests it is only when coaching is added to the mix that skills are fully mastered and used effectively in the classroom.
Citation: Cleaver, S., Detrich, R., States, J. & Keyworth, R. (2021). Teacher Preparation: Instructional Effectiveness. Oakland, CA: The Wing Institute. https://www.winginstitute.org/pre-service-teacher-instructional-effectiveness.
Misconceptions about data-based decision making in education: An exploration of the literature. Research on data-based decision making has proliferated around the world, fueled by policy recommendations and the diverse data that are now available to educators to inform their practice. Yet, many misconceptions and concerns have been raised by researchers and practitioners. This paper surveys and synthesizes the landscape of the data-based decision-making literature to address the identified misconceptions and then to serve as a stimulus to changes in policy and practice as well as a roadmap for a research agenda.
Citation: Mandinach, E. B., & Schildkamp, K. (2021). Misconceptions about data-based decision making in education: An exploration of the literature. Studies in Educational Evaluation, 69, 100842.