Decide, Implement, Monitor

The Traditional Education Model

For more than 40 years, the American education system has been challenged by questions of effectiveness. As far back as the launch of Sputnik in 1957, doubts about the effectiveness of our schools have captured the public’s attention. Throughout most of the history of the American education system, teachers selected educational interventions based primarily on their own professional judgment, the professional opinions of colleagues, and sales pitches from publishers of curricula (Weiss, Knapp, Hollweg, & Burrill, 2001). Only during the past 25 years has the adoption of rigorous methods of research on educational interventions been embraced on a large scale, and only with the passage of No Child Left Behind in 2001 was an evidence-based model of education codified. Unfortunately, the complexities of successfully applying an evidence-based model of education have produced less than stunning results and left educators asking: How do we proceed from here?

Evidence-Based Roadmap

The Problem

Poorly selected interventions, ill suited to the needs of education practitioners and students, are frequently adopted (Yeh, 2007). Shoddy implementation has left the education landscape strewn with practices that were supported by sound research yet failed miserably when taken to scale. Too many programs with inadequate or no research backing have achieved widespread acceptance and persist in schools despite poor results (West & O'Neal, 2004).

A Roadmap

These failures in U.S. schools have left principals, teachers, and parents jaded, skeptical, and cynical about the capacity of an empirical approach to solve their problems. The purpose of this Roadmap is to address the shortcomings of past attempts at education reform and to overcome these obstacles by providing educators with well-researched practices that have been vetted in training settings as well as in the real world. The Roadmap also translates the often-challenging technical terminology of research into user-friendly language that educators can apply to meet the needs of their unique settings. Finally, the Roadmap recognizes that even the most effective interventions are not useful unless they can be implemented and sustained as designed. To accomplish this, the Roadmap relies on three basic steps: decision making, implementation, and monitoring.

Decision Making (How do we decide?): Many variables go into a decision about which intervention to adopt and implement. An evidence-based decision-making framework emphasizes three components: (1) best available evidence, (2) client values, and (3) professional judgment.

  1. Best available evidence (How do we select practices in the real world?): One of the cornerstones of evidence-based education is basing decisions on the best available evidence. This is often interpreted to mean only the highest quality evidence (Slocum et al., 2014); however, for many decisions there is little or no high-quality evidence to guide the decision. In these instances, it is better for practitioners to base their decisions on the best available evidence rather than abandon evidence as a basis for decisions.
  • The term “best available evidence” implies that evidence falls on a continuum. If we accept this notion, then there will almost always be some evidence, even if less than perfect, to guide decision making. Regardless of the quality of the evidence, it is very likely that practitioners will have to make some inferential leaps based on the particular problems they are trying to solve.

     In judging evidence, it’s important to look at both efficacy and effectiveness studies.

  • Efficacy research (What works?): An efficacy study does not try to determine whether a promising intervention will work for all students in the real world; instead, it tests the intervention with a small group of students in a controlled setting under ideal conditions, often in a small, elite school associated with a university. In a rigorous, high-quality study, the research team implements and closely supervises the intervention to determine whether it is potentially beneficial. Whether the intervention will work in the real world is the subject of effectiveness research.
  • Effectiveness research (When does it work?): This type of research is conducted in real-world settings, such as public schools, in which teacher skills, student backgrounds, and school resources vary greatly. The research is carried out with existing personnel within the school, and the researcher operates at arm’s length. The purpose of effectiveness research is to see if educators can expect positive outcomes across students of different ages and backgrounds, and if the results are worth the cost of implementation.
  2. Client values (How do we select interventions that respect the needs of the consumer?): Client values can inform an intervention’s goals and methods. Failing to include the student and his or her family in the decision-making process can result in goals and interventions that are inconsistent with the family’s value system.
  3. Professional judgment (How do we put it all together?): The inclusion of professional judgment as the third cornerstone of an evidence-based decision-making framework recognizes that judgment is necessary, and not just inevitable, at every step in the process of solving a problem. Judgment is required to determine which evidence is the best available, how relevant that evidence is to a specific situation, how to adapt the intervention to fit the setting in which it is occurring, and whether the intervention is sufficiently effective as implemented or whether changes are necessary to improve benefit.

Implementation (How do we make it work?): This second critical feature of the Roadmap addresses all relevant variables so that an intervention can be successfully adopted and sustained in a particular setting. Successful implementation requires careful consideration of the goals of intervention, adequate resources for the intervention, training and support for those responsible for the intervention, and a method for evaluating the impact of the intervention and making rapid adjustments as needed to improve benefit.

Monitoring (Is it working?): Because no intervention will be universally effective, frequent monitoring of effects is necessary so that decisions can be made about how to proceed. When monitoring is infrequent, an ineffective intervention may be left in place too long, wasting resources and time.

It is also essential to monitor how well the intervention is implemented (treatment integrity). That is the only way to know whether an intervention is truly ineffective or whether it was implemented so poorly that benefit could not reasonably be expected. Knowing about the quality of implementation allows practitioners to make data-informed judgments about the effects of an intervention. Making judgments about effects in the absence of implementation data amounts to guessing and, in some instances, leads to discontinuing interventions that would have been effective if implemented properly.

Citations

 

Chingos, M. M., & Whitehurst, G. J. (2011, May 11). Class size: What research says and what it means for state policy. Washington, DC: Brown Center on Education Policy, Brookings Institution.

 

Slocum, T. A., Detrich, R., Wilczynski, S. M., Spencer, T. D., Lewis, T., & Wolfe, K. (2014). The evidence-based practice of applied behavior analysis. The Behavior Analyst, 37, 41–56.

 

Weiss, I. R., Knapp, M. S., Hollweg, K. S., & Burrill, G. (Eds.). (2001). Investigating the influence of standards: A framework for research in mathematics, science, and technology education. Washington, DC: National Academies Press.

 

West, S. L., & O'Neal, K. K. (2004). Project D.A.R.E. outcome effectiveness revisited. American Journal of Public Health, 94(6), 1027–1029.

 

Yeh, S. S. (2007). The cost-effectiveness of five policies for improving student achievement. American Journal of Evaluation, 28(4), 416–436.