Client Values Overview

 

Introduction

Implementing effective, evidence-based practices in educational settings is essential for increasing academic skills and positive behaviors and decreasing undesirable behaviors. When evaluating the effectiveness of an intervention, it is crucial to consider not only the direct, objective outcomes but also how well the intervention aligns with community values. Methods for assessing this alignment, or social validity, have been described in the literature (e.g., Carroll & St. Peter, 2014; Kamps et al., 1998; Wolf, 1978). A key component of social validity is treatment acceptability (e.g., Harrison et al., 2016).

Social validity can be measured indirectly, through questionnaires or interviews, or directly, by offering treatment choices or observing if an intervention continues to be used. The extent to which social validity outcomes are reported in published studies has also been explored. Ultimately, the results of social validity assessments should inform any changes to the intervention.

Social Validity

Wolf (1978) indicated that objective measurement of behavioral outcomes should be supplemented with subjective measurement of social significance, acceptability, and helpfulness. In other words, society should evaluate the work of interventionists on three levels:

  1. Goals. Are the goals of the intervention socially important? Approaching consumers, such as representatives of the community or the clients themselves, can help determine which problems are socially significant.
  2. Procedures. Are the intervention procedures acceptable? Considerations include ethics, cost, practicality, and fairness.
  3. Effects. Are the results of the intervention satisfactory? This includes unplanned outcomes or side effects. Only the consumer can determine if the intervention was helpful.

Assessment of social validity should be ongoing and used to refine intervention goals, procedures, and measures. When discrepancies between objective and subjective data occur, further investigation is warranted. To minimize potential coercion or undue influence on subjective responses, Wolf recommended educating respondents about all treatment options and making responses anonymous; in other words, “...we must establish that set of conditions under which people can be assumed to be the best evaluators of their own treatment needs, procedural preferences, and posttreatment satisfaction” (p. 212). Researchers may also need to reevaluate whether their objective measures capture what matters most to consumers.

Wolf’s questions about social validity referenced “consumers,” but who are consumers? Schwartz and Baer (1991) identified four subtypes of consumers:

  1. Direct consumers are the primary recipients of the intervention. In education settings, direct consumers are usually students, although they can also be teachers.
  2. Indirect consumers are significantly affected by the behavior change produced by the intervention, but they are not the direct recipients. They could be individuals who seek out the intervention (e.g., purchase an intervention manual, hire a consultant); therefore, their satisfaction is critical to its sustained use.
  3. Members of the immediate community are individuals who regularly interact with direct and indirect consumers and therefore are indirectly affected by the intervention. 
  4. Members of the extended community are individuals who do not interact with direct and indirect consumers but are part of their community.

Table 1 shows examples of two interventions and identifies possible consumers at each of the four levels.

Table 1

 

Sample Levels of Consumers for Two Interventions

Intervention 1: Praise-note system to increase prosocial behaviors among middle school students

  Direct consumers: Middle school students
  Indirect consumers: The students’ teachers and parents; school administrators who commissioned the intervention
  Members of the immediate community: Bus drivers, after-school program staff
  Members of the extended community: District superintendent, school board members, taxpayers

Intervention 2: Training program to increase use of praise and decrease reprimands by elementary school teachers

  Direct consumers: Elementary school teachers
  Indirect consumers: The teachers’ students, paraprofessionals working in the teachers’ classrooms, school administrators who commissioned the intervention
  Members of the immediate community: Students’ parents, teachers’ family members
  Members of the extended community: District superintendent, school board members, taxpayers

 

Treatment Acceptability

The second question of social validity—Are the intervention procedures acceptable?—may be evaluated as an independent construct. Components of treatment acceptability include suitability, perceived benefit, and convenience (Harrison et al., 2016). State et al. (2017) evaluated 336 high school teachers’ perceptions of the acceptability and feasibility of several interventions for students with emotional and behavioral challenges. The interventions included strategies that were classwide (expectations and routines, opportunities to respond, positive student-teacher interactions) and individual (accommodations, de-escalation, organizational and study skills). The teachers completed questionnaires in which they were asked to rate whether each intervention was acceptable and feasible and, if not, to indicate why; they were also asked to rank the interventions by priority of implementation. Teacher perceptions were measured before, during, and after implementation.

The study yielded several illuminating findings. Although individualized interventions, such as study skills and organizational skills, were most frequently recommended by project facilitators, these interventions were least preferred by teachers because of the time required to implement them. Before, during, and after implementation, teachers indicated a strong preference for classwide interventions. When asked to prioritize interventions, teachers most frequently chose positive student-teacher interactions.

Methods of Assessment

Social validity and treatment acceptability can be assessed through a variety of means depending on the consumers participating in the assessment. Three methods of assessment are questionnaires and interviews, direct choice, and treatment integrity and maintenance. 

Questionnaires and Interviews

When assessing social validity with typically developing adults (e.g., teachers, parents, school administrators), questionnaires and interviews are used most frequently. Frey et al. (2010) assessed social validity of schoolwide positive behavior supports (SWPBS) with indirect consumers and members of the immediate community. The primary assessment method was the focus group interview. Participants included indirect consumers (participating teachers, or teachers implementing SWPBS) and members of the immediate community (e.g., nonparticipating teachers, family service workers, disability liaisons). Two facilitators asked questions based on an open-ended interview protocol and encouraged discussion. Eight focus group interviews were conducted, each with 6 to 14 participants and lasting between 45 and 60 minutes. All interviews were audio recorded and transcribed. The researchers also collected teacher survey data and conducted classroom observations to contextualize the focus group findings.

The results identified program strengths, areas of concern, impressions of outcomes, and suggested changes. Participants liked the fact that participation in the intervention was not mandatory. Concerns included lack of time and resources, inconsistency of implementation across locations, and poor internal communication. For outcomes, participants shared examples of students demonstrating social-emotional skills. Some were optimistic that behavior change would occur in time; others were skeptical about long-term effectiveness, citing other causes of challenging behavior, such as poverty. Suggestions included modifying the intervention for students with diverse needs (e.g., younger children, students with disabilities), supplying additional professional development opportunities, and providing opportunities to engage with other teachers and reflect on experiences. Notably, teachers were much more enthusiastic about learning from one another than from “experts.” This finding aligns with literature on effective dissemination through collaboration with community leaders (Detrich, 2018; Rogers, 2010).

Questionnaires and interviews can be used with children. Kamps et al. (1998) evaluated social validity of peer-mediated interventions for children with disabilities in public school settings. Peer-mediated interventions involved encouraging typically developing students to engage with students with disabilities, including group tutoring, social skills groups, and special buddies (i.e., peer assistants). The interventions involved students in kindergarten through fifth grade. The researchers conducted interviews and administered questionnaires with the typically developing peers. Interviews were conducted with one peer at a time (with a school staff member also present) and lasted approximately 15 minutes. Interview questions included “What did you like most/least about the groups?” and “Was this a good activity for (child with disability)?” For the satisfaction surveys, the procedures were modified slightly based on the age of the students participating. For kindergarten through second-grade students, the survey was administered in a group setting. The classroom teacher or experimenter read each question aloud, and students were asked to circle a happy face if they agreed with the statement, a neutral face if they neither agreed nor disagreed, and a sad face if they disagreed. Third- through fifth-grade students completed the survey independently, rating each statement on a Likert scale of 1 to 5 (1 = strongly disagree, 5 = strongly agree). Students completed the surveys in 20 to 30 minutes.

Students shared largely positive comments about the groups, some specific to working with children with disabilities (e.g., “It’s fun to work with them”; “they need people to play with”) and some general (e.g., “Working together with friends”; “getting to share stuff”). Students also gave suggestions of additional activities they would like to do in the groups, such as “sing more songs,” “could have gone outside,” and “more games and different types.” Survey responses indicated that most students liked participating in the groups (80%) and would like to do it again (88%). They also largely agreed that they liked the procedures, such as playing games (90%) and earning stickers (76%). The researchers measured social interactions between the children with disabilities and typically developing peers, and found that social interactions universally increased after the programs were implemented.
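
For readers who tabulate such survey data themselves, the brief sketch below illustrates one way responses like these might be summarized as percent agreement per item. It is written in Python with hypothetical data; the mapping of face ratings and 1–5 Likert ratings onto agreement categories is an assumption for illustration, not the scoring rule used by Kamps et al. (1998).

```python
# Minimal sketch (hypothetical data): tallying satisfaction-survey responses
# into percent agreement per item. Face ratings from younger students are
# mapped onto agree/neither/disagree; 1-5 Likert ratings from older students
# are collapsed so that 4-5 counts as agreement (an illustrative assumption).

from collections import defaultdict

FACE_TO_SCORE = {"happy": "agree", "neutral": "neither", "sad": "disagree"}

def likert_to_score(rating: int) -> str:
    if rating >= 4:
        return "agree"
    if rating == 3:
        return "neither"
    return "disagree"

def percent_agreement(responses):
    """responses: list of (item, raw_response) tuples; raw_response is a
    face label for K-2 students or a 1-5 integer for grade 3-5 students."""
    agree = defaultdict(int)
    total = defaultdict(int)
    for item, raw in responses:
        score = FACE_TO_SCORE[raw] if isinstance(raw, str) else likert_to_score(raw)
        total[item] += 1
        agree[item] += score == "agree"
    return {item: 100 * agree[item] / total[item] for item in total}

# Hypothetical responses to two survey items
sample = [("liked the groups", "happy"), ("liked the groups", 5),
          ("liked the groups", 2), ("earning stickers", 4)]
print(percent_agreement(sample))
# {'liked the groups': 66.66..., 'earning stickers': 100.0}
```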

Direct Choice Measures

Questionnaires and interviews may not be effective in assessing social validity with individuals with developmental disabilities or limitations in language skills. As these individuals are frequent consumers of educational interventions, seeking their input is critical. Hanley (2010) outlined a process for objectively measuring social validity by presenting direct consumers (often children or adults with disabilities) with a choice of which intervention they would like to experience. Hanley et al. (1997) associated three switches of different colors with three treatment options. First, the researchers taught the participants what each switch represented by prompting them to press the switch and then implementing the corresponding intervention. Subsequently, the researchers presented participants with the three switches and allowed them to choose which one they wanted to press (see Figure 1). Participants consistently chose the option of asking for teacher attention.

Figure 1 

 

Sample Arrangement of Objective Social Validity Assessment 

Blue switch: Teacher provided attention if the participant asked for it

Red switch: Teacher provided attention periodically, regardless of the participant’s behavior

White switch: Teacher did not provide attention

(Adapted from Hanley, 2010)

In a more recent example, Huntington and Schwartz (2022) used a video-based preference assessment to evaluate the social validity of treatment options designed to increase on-task behavior and decrease inappropriate behavior. The participants were three male students diagnosed with autism and/or ADHD in Grades 4 through 6. The intervention options included scheduled breaks, rewards for on-task behavior, choices of activities, and requesting attention or assistance from teachers. The researchers collaborated with classroom staff to develop intervention procedures in alignment with the school culture and classroom schedule. The researchers presented each participant with videos modeling three or four intervention options, answered any questions the participant had, and then asked the participant to choose which one they wanted to do in their classroom. To assess consistency of responding, each participant was presented with the choices 6 to 12 times. Two participants consistently chose scheduled breaks, and one participant consistently chose rewards for on-task behavior.
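
The logic of such repeated choice trials can be summarized simply: tally each participant’s choices and report the most frequently selected option along with the proportion of trials on which it was chosen. The sketch below (Python, with hypothetical data; the option labels and summary format are illustrative, not taken from Huntington and Schwartz, 2022) shows one way this might be done.

```python
# Minimal sketch (hypothetical data): summarizing repeated choice trials.
# Each participant chooses among intervention options across several trials;
# the most frequently chosen option and its proportion indicate preference
# and consistency of responding.

from collections import Counter

def summarize_choices(trials):
    """trials: list of option labels chosen across repeated presentations."""
    counts = Counter(trials)
    option, n = counts.most_common(1)[0]
    return {"preferred": option,
            "consistency": n / len(trials),
            "counts": dict(counts)}

# One hypothetical participant presented with the choice eight times
participant_trials = ["scheduled breaks"] * 7 + ["rewards for on-task behavior"]
print(summarize_choices(participant_trials))
# {'preferred': 'scheduled breaks', 'consistency': 0.875, 'counts': {...}}
```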

Treatment Integrity and Maintenance

Another way to assess acceptability with those implementing the intervention is to observe if they continue to use it. Allinder and Oats (1997) explored the relation between acceptability and implementation of curriculum-based measurement (CBM; a system of frequently assessing student progress, graphing results, and making data-based instructional decisions) for mathematics with 22 elementary special education teachers. To assess acceptability, the researchers asked teachers to rate several statements on a 6-point Likert scale (1 = strongly disagree, 6 = strongly agree), such as “I would suggest the use of CBM to other teachers.” Based on these results, the researchers separated the participants into “high acceptability” and “low acceptability” groups. To assess implementation, the researchers scored the teachers on five aspects: (1) number of CBM tests administered, (2) ambitiousness of the goal set by the teacher, (3) number of times the teacher increased the goal, (4) number of times the teacher made instructional changes, and (5) timing of changes made. The results indicated that teachers in the high acceptability group administered more frequent CBM tests and set more ambitious goals.
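
As an illustration of how acceptability ratings might be turned into groups like these, the following sketch computes each teacher’s mean rating and applies a median split. The sample data and the median-split rule are assumptions for illustration; Allinder and Oats (1997) may have used a different grouping criterion.

```python
# Minimal sketch (hypothetical data): forming "high" and "low" acceptability
# groups from 6-point Likert ratings. The median split is an illustrative
# assumption, not necessarily the grouping rule used in the original study.

from statistics import mean, median

teacher_ratings = {          # teacher -> ratings of acceptability statements (1-6)
    "T1": [6, 5, 6, 6],
    "T2": [3, 2, 4, 3],
    "T3": [5, 5, 4, 6],
    "T4": [2, 3, 3, 2],
}

means = {t: mean(r) for t, r in teacher_ratings.items()}
cutoff = median(means.values())
groups = {t: ("high" if m > cutoff else "low") for t, m in means.items()}
print(means)
print(groups)   # e.g., {'T1': 'high', 'T2': 'low', 'T3': 'high', 'T4': 'low'}
```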

Anderson and Daly (2013) examined how providing teachers with choices of interventions affected treatment integrity and student disruptive behavior. The participants were two middle school teachers, one elementary school teacher, and three students for whom each teacher had requested consultation due to disruptive classroom behavior. The researchers alternated between instructing the teachers to implement an intervention recommended by an expert and giving the teachers a choice of several intervention options. Both conditions resulted in decreases in disruptive behavior by all three students, but treatment integrity was consistently higher when teachers were given a choice of intervention components. 

Combining Assessment Methods

As an intervention likely will have multiple consumers at various levels (Schwartz & Baer, 1991), a combination of methods is often necessary to fully assess social validity and treatment acceptability. Carroll and St. Peter (2014) assessed social validity of behavior intervention plans (BIPs) using three methods: student choice, teacher report, and program maintenance. 

The student choice assessment used two different BIPs, each associated with picture cards. Initially, teachers exposed participants to each BIP by presenting the accompanying picture card, briefly explaining the BIP, and then implementing the BIP. Next, the teachers presented both picture cards and asked participants to choose one. The intervention associated with the card the participant chose was implemented for that day. The researchers repeated this process for five consecutive school days. For the teacher report, teachers were asked to complete open-ended questionnaires about both BIPs and the choice procedure. Researchers conducted a follow-up observation 1 month later to assess intervention maintenance; they observed whether teachers implemented the BIP chosen most frequently by each student during the choice assessment and whether they implemented it correctly. 

The results of the three assessments were largely consistent. All participants reliably chose one BIP over the other, and the teachers found both BIPs and the choice procedure acceptable. The teachers also continued to implement the student-chosen BIPs at the 1-month follow-up.

Incorporating Client Values in Empirical Literature

Although experimental research typically focuses on objective results, assessing social validity and treatment acceptability is a critical component of studies conducted in applied settings. Silva et al. (2020) explored the extent to which treatment acceptability was assessed and reported in school psychology literature published between 2005 and 2017. Out of 268 evaluated articles, 108 (40%) reported an assessment of treatment acceptability. Acceptability was most often assessed with teachers (74.07% of articles), but also with students (59.26%) and parents (22.22%). Self-report was the most frequently used acceptability measure (98.15% of articles). Approximately equal numbers of studies used a published, validated measure (44.34%) versus a measure developed by the researchers (44.34%); 11.37% of articles reported both a published measure and a researcher-developed measure.

Several different published measures were used across the included studies, but the three most frequently used measures were the Behavior Intervention Rating Scale (BIRS; 19.44% of articles), the Children’s Intervention Rating Profile (CIRP; 19.44%), and the Intervention Rating Profile–15 (IRP-15; 18.52%). The researchers also discovered several statistically significant differences among key variables. Studies that evaluated interventions delivered one-on-one were more likely to assess acceptability than those delivered in small group settings. Studies that evaluated individual and classwide interventions were more likely to assess acceptability than studies that evaluated schoolwide interventions. Studies that evaluated interventions targeting behavioral skills were more likely to report acceptability than those targeting academic skills or mental health outcomes. Interestingly, studies that also assessed treatment integrity were more likely to assess acceptability than those that did not assess treatment integrity.

Snodgrass et al. (2018) analyzed the prevalence of social validity assessment in single-case design studies published in six special education journals (Exceptional Children, Journal of Learning Disabilities, American Journal of Intellectual and Developmental Disabilities, Research in Autism Spectrum Disorders, Journal of Intellectual Disability Research, and Research in Developmental Disabilities) between 2005 and 2016. The analysis included 429 articles. The researchers evaluated each publication in three phases: (1) Was a social validity assessment of any kind conducted?, (2) Did the social validity assessment evaluate the three key components of social validity (goals, procedures, outcomes)?, and (3) How scientifically rigorous were the social validity assessments; that is, did the assessment follow the six steps of the scientific method? (See Table 2.)

Table 2

Evaluating Scientific Rigor of Social Validity Assessments 

1. Was a research question about social validity posed?

2. Was background on the intervention’s social validity included in the literature review?

3. Did the authors make a prediction about social validity of the intervention (overt or implied)?

4. What methods were used to test the prediction?

5. Were analysis procedure(s) documented?

6. Were results reported?

(Adapted from Snodgrass et al., 2018)

Out of 429 articles, 115 (26.81%) included some kind of social validity assessment. Of these, 28 articles included all three components of social validity in their assessment (6.52% of all articles, 24.35% of articles that evaluated social validity). No articles included all six steps of the scientific method in their social validity assessment; 11 articles included five of the six steps, and 15 included four of the six. The most commonly omitted step was documenting an analysis procedure. 
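
A simple way to apply the Table 2 checklist is to record which of the six steps each article’s social validity assessment satisfied and then count them. The sketch below (Python, with invented article records) is a rough illustration of this kind of tallying, not the coding procedure actually used by Snodgrass et al. (2018).

```python
# Minimal sketch (hypothetical data): scoring social validity assessments
# against the six scientific-method steps listed in Table 2.

RIGOR_STEPS = [
    "research question posed",
    "background in literature review",
    "prediction made",
    "methods described",
    "analysis procedure documented",
    "results reported",
]

# Invented records: which steps each article's assessment satisfied
articles = {
    "Article A": {"research question posed", "methods described", "results reported"},
    "Article B": set(RIGOR_STEPS) - {"analysis procedure documented"},  # 5 of 6
}

for name, steps_met in articles.items():
    missing = [s for s in RIGOR_STEPS if s not in steps_met]
    print(f"{name}: {len(steps_met)}/6 steps; missing: {missing}")
```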

The researchers reported additional characteristics of the social validity assessments. The assessment was most frequently in the form of a questionnaire (71.43%). Direct participants were the most frequent respondents (67.86%), although this was not clearly defined—as discussed above, this could refer to direct consumers (those directly receiving the intervention) or indirect consumers (those implementing the intervention). In addition, social validity was most frequently assessed after the intervention (96.43%).

Refining Interventions With Social Validity Findings

In empirical literature, social validity is frequently assessed at the end of a study. Schwartz and Baer (1991) argued that social validity assessments must not only gather information about the acceptability and importance of interventions, but also use that information to improve the intervention: “It is important to begin the analysis of why some programs are liked and others disliked, so eventually social validity assessment can become a calculated prediction rather than an empirically assessed early warning or endorsement” (p. 191).

Strain et al. (2012) provided several examples of assessing social validity throughout the intervention development process and using the findings to modify the intervention. Prevent-Teach-Reinforce (PTR) is a model designed to aid school teams in developing behavior support plans. The model is based on the evidence-based strategies of positive behavioral supports, but “...a distinguishing characteristic of the PTR model is that it incorporates all of Wolf’s (1978) dimensions of social validity” (p. 186). Input from consumers is vital to each step in the process, from identifying goals to selecting from a menu of intervention options.

One part of the assessment process involves teachers collecting information about student behavior. In the initial PTR model, teachers were asked to fill out a form requiring open-ended written responses. After teachers provided feedback that this task took too much time, the researchers modified the form to include closed-ended questions with checkboxes. The researchers also used multiple methods of social validity assessment. Participating teachers were asked to rate statements, such as “How willing are you to carry out the PTR behavior plan?”, on a 5-point Likert scale (5 indicated the most positive rating; the average rating was 4.8). When directly measuring teachers’ fidelity in implementing the program, the researchers found that over 80% of teachers implemented it accurately.

Conclusions and Implications

Evaluating social validity and treatment acceptability, or how an intervention aligns with the values of a community, is a critical component of evidence-based practice. The three key questions of social validity are (1) Are the goals of the intervention important?, (2) Are the procedures acceptable?, and (3) Are the outcomes significant? Treatment acceptability focuses on the second of these questions and addresses suitability, perceived benefits, and convenience. Social validity should be assessed across categories of consumers, including those who are directly and indirectly impacted by the intervention.

The numerous methods of assessing social validity include interviews, questionnaires, and measurement of treatment integrity and maintenance. For individuals with limited language skills, social validity can be assessed objectively by providing choices of treatment options and measuring which option is selected most frequently.

Although attention to social validity and treatment acceptability in published literature has increased, widespread adoption is still needed. Ideally, these crucial constructs should be assessed before, during, and after the intervention with a variety of methods and with multiple consumers. The results of social validity assessments should then be used to refine intervention components, allowing for optimal alignment with client values.

References

Allinder, R. M., & Oats, R. G. (1997). Effects of acceptability on teachers’ implementation of curriculum-based measurement and student achievement in mathematics computation. Remedial and Special Education, 18(2), 113–120. https://doi.org/10.1177/074193259701800205

Anderson, M., & Daly, E. J. (2013). An experimental examination of the impact of choice of treatment components on treatment integrity. Journal of Educational and Psychological Consultation, 23(4), 231–263. https://doi.org/10.1080/10474412.2013.845493  

Carroll, R. A., & St. Peter, C. C. (2014). Methods for assessing social validity of behavioral intervention plans for children with attention deficit hyperactivity disorder. Acta de Investigación Psicológica, 4(3), 1642–1655.

Detrich, R. (2018). Rethinking dissemination: Storytelling as a part of the repertoire. Perspectives on Behavior Science, 41(2), 541–549. https://doi.org/10.1007/s40614-018-0160-y

Frey, A. J., Park, K. L., Browne-Ferrigno, T., & Korfhage, T. L. (2010). The social validity of program-wide positive behavior support. Journal of Positive Behavior Interventions, 12(4), 222–235. https://doi.org/10.1177/1098300709343723

Hanley, G. P. (2010). Toward effective and preferred programming: A case for the objective measurement of social validity with recipients of behavior-change programs. Behavior Analysis in Practice, 3(1), 13–21.

Hanley, G. P., Piazza, C. C., Fisher, W. W., Contrucci, S. A., & Maglieri, K. A. (1997). Evaluation of client preference for function‐based treatment packages. Journal of Applied Behavior Analysis, 30(3), 459–473.

Harrison, J. R., State, T. M., Evans, S. W., & Schamberg, T. (2016). Construct and predictive validity of social acceptability: Scores from high school teacher ratings on the School Intervention Rating Form. Journal of Positive Behavior Interventions, 18(2), 111–123. https://doi.org/10.1177/1098300715596135

Huntington, R. N., & Schwartz, I. S. (2022). The use of stimulus preference assessments to determine procedural acceptability for participants. Journal of Positive Behavior Interventions, 24(4), 325–336. https://doi.org/10.1177/10983007211042651

Kamps, D. M., Kravits, T., Gonzalez Lopez, A., Kemmerer, K., Potucek, J., & Garrison Harrell, L. (1998). What do peers think? Social validity of peer-mediated programs. Education and Treatment of Children, 21(2), 107–134. https://www.jstor.org/stable/42899525

Rogers, E. M. (2010). Diffusion of innovations. Simon and Schuster.

Schwartz, I. S., & Baer, D. M. (1991). Social validity assessments: Is current practice state of the art? Journal of Applied Behavior Analysis, 24(2), 189–204. https://doi.org/10.1901/jaba.1991.24-189

Silva, M. R., Collier-Meek, M. A., Codding, R. S., & DeFouw, E. R. (2020). Acceptability assessment of school psychology interventions from 2005 to 2017. Psychology in the Schools, 57(1), 62–77. https://doi.org/10.1002/pits.22306

Snodgrass, M. R., Chung, M. Y., Meadan, H., & Halle, J. W. (2018). Social validity in single-case research: A systematic literature review of prevalence and application. Research in Developmental Disabilities, 74, 160–173. https://doi.org/10.1016/j.ridd.2018.01.007

State, T. M., Harrison, J. R., Kern, L., & Lewis, T. J. (2017). Feasibility and acceptability of classroom-based interventions for students with emotional/behavioral challenges at the high school level. Journal of Positive Behavior Interventions, 19(1), 26–36. https://doi.org/10.1177/1098300716648459

Strain, P. S., Barton, E. E., & Dunlap, G. (2012). Lessons learned about the utility of social validity. Education and Treatment of Children, 35(2), 183–200. https://www.jstor.org/stable/42900154

Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11(2), 203–214.

Publications

TITLE
SYNOPSIS
CITATION
No Child Left Behind, Contingencies, and Utah’s Alternate Assessment.

This paper discusses the contingencies that create opportunities and obstacles for the use of effective educational practices in a state-wide system.

Hager, K. D., Slocum, T. A., & Detrich, R. (2007). No Child Left Behind, Contingencies, and Utah’s Alternate Assessment. Journal of Evidence-Based Practices for Schools, 8(1), 63–87.

The Evidence-Based Practice of Applied Behavior Analysis

Applied behavior analysis emphasizes being scientifically based. In this paper, we discuss how the core features of evidence-based practice can be integrated into applied behavior analysis.

Slocum, T. A., Detrich, R., Wilczynski, S. M., Spencer, T. D., Lewis, T., & Wolfe, K. (2014). The Evidence-Based Practice of Applied Behavior Analysis. The Behavior Analyst, 37(1), 41-56.

Evidence-based Practice: A Framework for Making Effective Decisions

Evidence-based practice is characterized as a framework for decision-making integrating best available evidence, clinical expertise, and client values and context. This paper reviews how these three dimensions interact to inform decisions.

Spencer, T. D., Detrich, R., & Slocum, T. A. (2012). Evidence-based practice: A framework for making effective decisions. Education and Treatment of Children, 35(2), 127-151.

 

Presentations

TITLE
SYNOPSIS
CITATION
A Systematic Approach to Data-based Decision Making in Education: Building School Cultures

This paper examines the critical practice elements of data-based decision making and strategies for building school cultures to support the process.

Keyworth, R. (2009). A Systematic Approach to Data-based Decision Making in Education: Building School Cultures [Powerpoint Slides]. Retrieved from 2009-campbell-presentation-randy-keyworth.

Building a Data-based Decision Making Culture through Performance Management

This paper examines the issues, challenges, and opportunities of creating a school culture that uses data systematically in all of its decision making.

Keyworth, R. (2009). Building a Data-based Decision Making Culture through Performance Management [Powerpoint Slides]. Retrieved from 2008-aba-presentation-randy-keyworth.

A Systematic Approach to Data-based Decision Making in Education

Systematic data-based decision making is critical to ensure that educators are able to identify, implement, and troubleshoot evidence-based interventions customized to individual students and needs.

Keyworth, R. (2010). A Systematic Approach to Data-based Decision Making in Education [Powerpoint Slides]. Retrieved from 2010-hice-presentation-randy-keyworth.

Contingencies for the Use of Effective Educational Practices: Developing Utah’s Alternate Assessment

This paper discusses the contingencies that create opportunities and obstacles for the use of effective educational practices in a state-wide system.

Slocum, T. (2006). Contingencies for the Use of Effective Educational Practices: Developing Utah’s Alternate Assessment [Powerpoint Slides]. Retrieved from 2006-wing-presentation-tim-slocum.

Research Based Dissemination: Or Confessions of a Poor Disseminator
This paper shares research on what makes ideas "stick" (gain acceptance, maintain) within a culture and provides an acronym based on the results: SUCCESS (simple, unexpected, concrete, credible, emotional, involve stories).
Cook, B. (2014). Research Based Dissemination: Or Confessions of a Poor Disseminator [Powerpoint Slides]. Retrieved from 2014-wing-presentation-bryan-cook.
If We Want More Evidence-based Practice, We Need More Practice-based Evidence
This paper discusses the importance, strengths, and weaknesses of using practice-based evidence in conjunction with evidence-based practice.
Cook, B. (2015). If We Want More Evidence-based Practice, We Need More Practice-based Evidence [Powerpoint Slides]. Retrieved from 2015-wing-presentation-bryan-cook.
The Four Assumptions of the Apocalypse
This paper examines the four basic assumptions for effective data-based decision making in education and offers strategies for addressing problem areas.
Detrich, R. (2009). The Four Assumptions of the Apocalypse [Powerpoint Slides]. Retrieved from 2009-wing-presentation-ronnie-detrich.
Evidence-based Practice for Applied Behavior Analysts: Necessary or Redundant
Evidence-based practice has been described as a decision making framework. This presentation describes the features and challenges of this perspective.
Detrich, R. (2015). Evidence-based Practice for Applied Behavior Analysts: Necessary or Redundant [Powerpoint Slides]. Retrieved from 2013-aba-presentation-ronnie-detrich-tim-slocum-teri-lewis-trina.
Workshop: Evidence-based Practice of Applied Behavior Analysis.
Evidence-based practice is a decision-making framework that integrates best available evidence, professional judgement, and client values and context. This workshop described the relationship across these three dimensions of decision-making.
Detrich, R. (2015). Workshop: Evidence-based Practice of Applied Behavior Analysis. [Powerpoint Slides]. Retrieved from 2015-missouriaba-workshop-presentation-ronnie-detrich.
TITLE
SYNOPSIS
CITATION
Effects of Acceptability on Teachers' Implementation of Curriculum-Based Measurement and Student Achievement in Mathematics Computation

Acceptability is a proxy measure of how well an intervention fits into the context of the intervention setting.

Allinder, R. M., & Oats, R. G. (1997). Effects of Acceptability on Teachers’ Implementation of Curriculum-Based Measurement and Student Achievement in Mathematics Computation. Remedial & Special Education, 18(2), 113. Retrieved from http://psycnet.apa.org/index.cfm?fa=search.displayRecord&UID=1997-03796-005

Effects of Behavior Support Team Composition on the Technical Adequacy and Contextual Fit of Behavior Support Plans

Benazzi and colleagues examined the contextual fit of interventions when they were developed by different configurations of individuals.

Benazzi, L., Horner, R. H., & Good, R. H. (2006). Effects of Behavior Support Team Composition on the Technical Adequacy and Contextual Fit of Behavior Support Plans. Journal of Special Education, 40(3), 160-170.

Misconceptions about data-based decision making in education: An exploration of the literature

Research on data-based decision making has proliferated around the world, fueled by policy recommendations and the diverse data that are now available to educators to inform their practice. Yet, many misconceptions and concerns have been raised by researchers and practitioners. This paper surveys and synthesizes the landscape of the data-based decision-making literature to address the identified misconceptions and then to serve as a stimulus to changes in policy and practice as well as a roadmap for a research agenda.

Mandinach, E. B., & Schildkamp, K. (2021). Misconceptions about data-based decision making in education: An exploration of the literature. Studies in Educational Evaluation, 69, 100842.

Implementation, context and complexity

Implementation of an intervention always occurs in a specific context. This paper considers the complexity that context contributes to implementation science.

May, C. R., Johnson, M., & Finch, T. (2016). Implementation, context and complexity. Implementation Science, 11(1), 141.

Building Capacity to Implement and Sustain Effective Practices to Better Serve Children

This article provides an overview of contextual factors across the levels of an educational system that influence implementation.

Schaughency, E., & Ervin, R. (2006). Building Capacity to Implement and Sustain Effective Practices to Better Serve Children. School Psychology Review, 35(2), 155-166. Retrieved from http://eric.ed.gov/?id=EJ788242

The Evidence-based Practice of Applied Behavior Analysis

The paper describes the relationship among the three cornerstones of evidence-based practice, including context.

Slocum, T. A., Detrich, R., Wilczynski, S. M., Spencer, T. D., Lewis, T., & Wolfe, K. (2014). The Evidence-based Practice of Applied Behavior Analysis. The Behavior Analyst, 37, 41-56.

Evidence-based Practice: A Framework for Making Effective Decisions.

Evidence-based practice is a decision-making framework.  This paper describes the relationships among the three cornerstones of this framework.

Spencer, T. D., Detrich, R., & Slocum, T. A. (2012). Evidence-based Practice: A Framework for Making Effective Decisions. Education & Treatment of Children (West Virginia University Press), 35(2), 127-151.

TITLE
SYNOPSIS
Cambridge Center for Behavioral Studies
The mission of the organization is to advance the scientific study of behavior and its humane application to the solution of practical problems in the home, school, community, and the workplace.
Center for Research and Reform in Education (CRRE)

CRRE is a research center whose major goal is to improve the quality of education through high-quality research and evaluation studies and the dissemination of evidence-based research.

Cochrane Collaboration
Cochrane is an independent network of health practitioners, researchers, patient advocates and others, responding to the challenge of making the vast amounts of evidence generated through research useful for informing decisions about health.
Current Controlled Trials - Medicine
This is an example from medicine of dissemination of evidence-based practices.
Daniel Willingham - Web Site
Daniel Willingham’s website helps those interested in education find practical, helpful information on what works and what doesn’t. His videos are of special interest.
Journal of Contemporary Clinical Trials
Contemporary Clinical Trials is an international journal that publishes manuscripts pertaining to the design, methods and operational aspects of clinical trials.
Logical Positivism
An overview of Logical Positivism and its impact on science and the issue of verifiability.
National Education Policy Center
The mission of the National Education Policy Center is to produce and disseminate high-quality, peer-reviewed research to inform education policy discussions.
Positive Behavioral Interventions and Supports (PBIS)

The Technical Assistance Center on PBIS provides support to states, districts, and schools to establish, scale up, and sustain the PBIS framework.
