The present paper makes the case for systematic assessment and evaluation in clinical practice. The purpose of systematic evaluation is to enhance client care and to improve the basis for drawing inferences about treatment and therapeutic change.
Kazdin, A. E. (1993). Evaluation in clinical practice: Clinically sensitive and systematic methods of treatment delivery. Behavior Therapy, 24(1), 11-45.
We analyze the relationship between inequality and economic growth from two directions. The first part of the survey examines the effect of inequality on growth. The second part analyzes several mechanisms whereby growth may increase wage inequality, both across and within education cohorts.
Aghion, P., Caroli, E., & Garcia-Penalosa, C. (1999). Inequality and economic growth: The perspective of the new growth theories. Journal of Economic Literature, 37(4), 1615-1660.
This paper examines the benefits and challenges inherent in using randomized clinical trials and quasi-experimental designs in the field of education research.
Angrist, J. D. (2003). Randomized trials and quasi-experiments in education research. NBER Reporter Online, (Summer 2003), 11-14.
This study evaluated the use of the N. S. Jacobson et al. criteria for clinical significance in psychotherapy data analysis.
Ankuta, G. Y., & Abeles, N. (1993). Client satisfaction, clinical significance, and meaningful change in psychotherapy. Professional Psychology: Research and Practice, 24(1), 70-74.
Objective: To review alternative treatments (Tx) for Attention-Deficit/Hyperactivity Disorder (ADHD), that is, treatments other than psychoactive medication and behavioral/psychosocial Tx, for the November 1998 National Institutes of Health (NIH) Consensus Development Conference on ADHD.
Arnold, L. E. (1999). Treatment alternatives for attention-deficit/hyperactivity disorder (ADHD). Journal of Attention Disorders, 3(1), 30-48.
This chapter reviews a set of behavioral science findings derived from the November 1993 NIDA Technical Review, “Reviewing the Behavioral Science Knowledge Base on Technology Transfer.” This is not intended to be a complete recapitulation of the arguments and conclusions drawn by the authors of the 14 papers presented in this monograph.
Backer, T. E., & David, S. L. (1995). Synthesis of behavioral science learnings about technology transfer. NIDA Research Monograph, 155, 262-279.
In this article, we attempt to distinguish between the properties of moderator and mediator variables at a number of levels.
Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173.
In this study, the authors used meta-analytic procedures to test one possible factor contributing to the attenuation of effects: structural inequalities between placebo and active treatments.
Baskin, T. W., Tierney, S. C., Minami, T., & Wampold, B. E. (2003). Establishing specificity in psychotherapy: A meta-analysis of structural equivalence of placebo controls. Journal of Consulting and Clinical Psychology, 71(6), 973.
The author discusses what he believes to be the promise of behavior analysis for education, its influence on the role of the school psychologist, and what educators can do if they choose to pursue the leads offered by behavioral psychologists.
Bijou, S. W. (1970). What psychology has to offer education—now. Journal of Applied Behavior Analysis, 3(1), 65.
This research evaluated the outcomes of a school psychology training practicum by replicating intervention-based service delivery procedures established in prior research.
Bonner, M., & Barnett, D. W. (2004). Intervention-based school psychology services: Training for child-level accountability; preparing for program-level accountability. Journal of School Psychology, 42(1), 23-43.
Observational data collected in ecologically valid measurement contexts are likely to be influenced by contextual factors irrelevant to the research question. Using multiple sessions and raters often improves the stability of scores for variables from such contexts.
Bruckner, C. T., Yoder, P. J., & McWilliam, R. A. (2006). Generalizability and decision studies: An example using conversational language samples. Journal of Early Intervention, 28(2), 139-153.
This article describes an evaluation of a prisoner-run delinquency prevention program at Hawaii's major prison.
Buckner, J. C., & Chesney-Lind, M. (1983). Dramatic cures for juvenile crime: An evaluation of a prisoner-run delinquency prevention program. Criminal Justice and Behavior, 10(2), 227-247.
As pressure increases for the demonstration of effective treatment for children with mental disorders, it is essential that the field has an understanding of the evidence base. To address this aim, the authors searched the published literature for effective interventions for children and adolescents and organized the findings into this review.
Burns, B. J., Hoagwood, K., & Mrazek, P. J. (1999). Effective treatment for mental disorders in children and adolescents. Clinical Child and Family Psychology Review, 2(4), 199-254.
Over many decades, educators have developed countless interventions and theories about how to create lasting change. Implementation research is the endeavor to understand whether, and how, these educational efforts accomplish their goals when put into practice.
Century, J., & Cassata, A. (2016). Implementation research: Finding common ground on what, how, why, where, and who. Review of Research in Education, 40(1), 169-215.
Eight comprehensive chapters cover the common problems of disruptive behavior, anxiety, sleep disorders, nocturnal enuresis, encopresis, habit disorders (such as tics and thumbsucking), the treatment of pain and, finally, helping children adhere to medical regimens. The book describes diagnosis and treatment, with an emphasis on practicality.
Christophersen, E. R., & Mortweet, S. L. (2001). Treatments that work with children: Empirically supported strategies for managing childhood problems. Washington, DC: American Psychological Association.
This guide presents the tools therapists need to incorporate outcomes measurement effectively and meaningfully into everyday clinical work.
Clement, P. W. (1999). Outcomes and incomes: How to evaluate, improve, and market your psychotherapy practice by measuring outcomes. Guilford Press.
The effects of changes in depression-relevant cognition were examined in relation to subsequent change in depressive symptoms for outpatients with major depressive disorder randomly assigned to cognitive therapy (CT; n = 32) versus pharmacotherapy only (NoCT; n = 32).
DeRubeis, R. J., Evans, M. D., Hollon, S. D., Garvey, M. J., Grove, W. M., & Tuason, V. B. (1990). How does cognitive therapy work? Cognitive change and symptom change in cognitive therapy and pharmacotherapy for depression. Journal of Consulting and Clinical Psychology, 58(6), 862-869.
This article explored developmental and intervention evidence relevant to iatrogenic effects in peer-group interventions. Longitudinal research revealed that "deviancy training" within adolescent friendships predicts increases in delinquency, substance use, violence, and adult maladjustment.
Dishion, T. J., McCord, J., & Poulin, F. (1999). When interventions harm: Peer groups and problem behavior. American Psychologist, 54(9), 755.
This comprehensive textbook is an essential primer for all practitioners and students who are grappling with the new age of evidence-based practice. The contributors explore some of the complex challenges in implementing EBPs, and highlight the meaningful opportunities that are inherent in this paradigm shift.
Drake, R. E., Merrens, M. R., & Lynde, D. W. (Eds.). (2005). Evidence-based mental health practice: A textbook. New York, NY: W. W. Norton & Co.
This article discusses the use of randomized controlled trials as required by the Department of Education in evaluating the effectiveness of educational practices.
EDUC, A. R. O. (2005). Can randomized trials answer the question of what works?
The authors argue that important evidence about best practice comes from case-based research, which builds knowledge in a clinically useful manner and complements what is achieved by multivariate research methods.
Edwards, D. J., Dattilio, F. M., & Bromley, D. B. (2004). Developing evidence-based practice: The role of case-based research. Professional Psychology: Research and Practice, 35(6), 589.
Cover, copy, compare (CCC) has been used with success to improve spelling skills. This study adds to existing research by completing an analysis of the rewriting component of the intervention. The impact of varying the number of times a subject copied a word following an error was examined with four elementary age students.
Erion, J., Davenport, C., Rodax, N., Scholl, B., & Hardy, J. (2009). Cover-copy-compare and spelling: One versus three repetitions. Journal of Behavioral Education, 18(4), 319-330.
This book analyzes the findings of a treatment program which integrated antisocial and delinquent youths into prosocial peer groups in a suburban community center in St. Louis.
Feldman, R. A., Caplinger, T. E., & Wodarski, J. S. (1983). The St. Louis conundrum: The effective treatment of antisocial youths. Englewood Cliffs, NJ: Prentice-Hall.
In this paper we will review some of the examples from industrial innovation and dissemination, provide some data on replications of the Achievement Place/Teaching-Family Model over 20 years, and try to share some of the philosophical, practical, and technological guidelines we have come to accept.
Fixsen, D. L., & Blase, K. A. (1993). Creating new realities: Program development and dissemination. Journal of Applied Behavior Analysis, 26(4), 597-615.
Predictions are made in the form of if-then statements. If reliable predictions define science and testing predictions is the work of scientists, then implementation science is a science to the extent that 1) predictions are made and 2) those predictions are tested in practice.
Fixsen, D. L., Van Dyke, M., & Blase, K. A. (2019). Implementation science: Fidelity predictions and outcomes. Available on the "Resources" tab of the Active Implementation website.
The standard reference in the field, this acclaimed work synthesizes findings from hundreds of carefully selected studies of mental health treatments for children and adolescents.
Fonagy, P., Cottrell, D., Phillips, J., Bevington, D., Glaser, D., & Allison, E. (2014). What works for whom? A critical review of treatments for children and adolescents. Guilford Publications.
This article introduces a special section on stepped care models in psychotherapy, which address resource allocation issues in the context of prevalent disorders.
Haaga, D. A. F. (2000). Introduction to the special section on stepped care models in psychotherapy. Journal of Consulting and Clinical Psychology, 68(4), 547-548.
This study examines adoption and implementation of the US Department of Education's new policy, the "Principles of Effectiveness", from a diffusion of innovations theoretical framework. In this report, we evaluate adoption in relation to Principle 3: the requirement to select research-based programs.
Hallfors, D., & Godette, D. (2002). Will the “principles of effectiveness” improve prevention practice? Early findings from a diffusion study. Health Education Research, 17(4), 461–470.
Hattie's book is designed as a meta-meta-study that collects, compares, and analyzes the findings of many previous studies in education. The result of 15 years of research, Visible Learning synthesizes more than 800 meta-analyses, encompassing more than 50,000 studies and covering more than 80 million pupils. Hattie focuses on schools in the English-speaking world, but most aspects of the underlying story should be transferable to other countries and school systems as well. He uses the statistical measure effect size to compare the impact of many influences on students' achievement, e.g., class size, holidays, feedback, and learning strategies, and builds a story about the power of teachers, feedback, and a model of learning and understanding.
Hattie, J. (2008). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York, NY: Routledge.
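As a point of reference, the effect size d that Hattie aggregates is commonly defined as a standardized mean difference: the gap between two group means expressed in units of their pooled standard deviation (a minimal definition, not a quotation from the book).

```latex
% Standardized mean difference (Cohen's d), the kind of effect size
% used to compare influences on achievement.
d = \frac{\bar{X}_{\text{intervention}} - \bar{X}_{\text{comparison}}}{s_{\text{pooled}}}
```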
This book examines the use of randomized controlled trial (RCT) studies in education.
Hilton, M., & Towne, L. (Eds.). (2004). Implementing randomized field trials in education: Report of a workshop. National Academies Press.
A statistics textbook appropriate for graduate students and researchers conducting quasi-experimental design and analysis.
Hyman, R. (1982). Quasi-experimentation: Design and analysis issues for field settings [Book review]. Journal of Personality Assessment, 46(1), 96-97.
This essay discusses issues and concerns that too many research findings may be false. The paper examines reasons a study may prove inaccurate including: the study power and bias, the number of other studies on the same question, and the ratio of true to no relationships. Finally, it considers the implications these problems create for conducting and interpreting research.
Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.
A comprehensive, easily digestible introduction to research methods for undergraduate and graduate students. Readers will develop an understanding of the multiple research methods and strategies used in education and related fields, including how to read and critically evaluate published research and how to write a proposal, construct a questionnaire, and conduct empirical research.
Johnson, R. B., & Christensen, L. (2019). Educational research: Quantitative, qualitative, and mixed approaches. Sage Publications.
This paper is a review of primary research investigating the Feingold hypothesis which suggests diet modification as an efficacious treatment for hyperactivity.
Kavale, K. A., & Forness, S. R. (1983). Hyperactivity and diet treatment: A meta-analysis of the Feingold hypothesis. Journal of Learning Disabilities, 16(6), 324-330.
This chapter traces the history of behavior modification as a general movement. Individual conceptual approaches and techniques that comprise behavior modification are obviously important in tracing the history, but they are examined as part of the larger development rather than as ends in their own right.
Kazdin, A. E. (1982). History of behavior modification. In International handbook of behavior modification and therapy (pp. 3-32). Springer, Boston, MA.
The previous articles in this special section make the case for the importance of evaluating the clinical significance of the therapeutic change, present key measures and innovative ways in which they are applied, and more generally provide important guidelines for evaluating therapeutic change.
Kazdin, A. E. (1999). The meanings and measurement of clinical significance. Journal of Consulting and Clinical Psychology, 67(3), 332-339.
The review by Sheldrick et al. evaluates treatments for children and adolescents with conduct disorder and whether they produce clinically significant change.
Kazdin, A. E. (2001). Almost clinically significant (p < .10)? Clinical Psychology: Science and Practice, 8(4), 455-462.
The focus of this chapter is on psychotherapy research and a call for research on mechanisms of therapeutic change.
Kazdin, A. E. (2006). Mechanisms of Change in Psychotherapy: Advances, Breakthroughs, and Cutting-Edge Research (Do Not Yet Exist).
In this article, we discuss the importance of studying mechanisms, the logical and methodological requirements, and why almost no studies to date provide evidence for why or how treatment works.
Kazdin, A. E., & Nock, M. K. (2003). Delineating mechanisms of change in child and adolescent therapy: Methodological issues and research recommendations. Journal of Child Psychology and Psychiatry, 44(8), 1116-1129.
The authors developed a methodological basis for investigating how risk factors work together. Better methods are needed for understanding the etiology of disorders, such as psychiatric syndromes, that presumably are the result of complex causal chains.
Kraemer, H. C., Stice, E., Kazdin, A., Offord, D., & Kupfer, D. (2001). How do risk factors work together? Mediators, moderators, and independent, overlapping, and proxy risk factors. American Journal of Psychiatry, 158(6), 848-856.
This paper describes an analytic framework to identify and distinguish between moderators and mediators in RCTs when outcomes are measured dimensionally.
Kraemer, H. C., Wilson, G. T., Fairburn, C. G., & Agras, W. S. (2002). Mediators and moderators of treatment effects in randomized clinical trials. Archives of General Psychiatry, 59(10), 877-883.
This paper by a What Works Clearinghouse (WWC) panel provides an overview of single-case designs (SCDs), specifies the types of questions that SCDs are designed to answer, and discusses the internal validity of SCDs. The panel then proposes standards to be implemented by the WWC.
Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation. What Works Clearinghouse.
The present study attempted to examine the causal relationships among changes in automatic thoughts, dysfunctional attitudes, and depressive symptoms in a 12-week group cognitive behavior therapy (GCBT) program for depression.
Kwon, S. M., & Oei, T. P. (2003). Cognitive change processes in a group cognitive behavior therapy of depression. Journal of Behavior Therapy and Experimental Psychiatry, 34(1), 73-85.
This bestselling resource presents authoritative thinking on the pressing questions, issues, and controversies in psychotherapy research and practice today.
Lambert, M. J., Garfield, S. L., & Bergin, A. E. (2004). Handbook of psychotherapy and behavior change. New York: John Wiley & Sons.
In the present correlational study of 199 treated adolescents, the authors used a multitrait-multimethod analysis to examine psychometrically measured pathology change (pre- and postassessment of symptoms and functioning), consumer satisfaction, and perceived improvement reported by multiple informants.
Lambert, W., Salzer, M. S., & Bickman, L. (1998). Clinical outcome, consumer satisfaction, and ad hoc ratings of improvement in children's mental health. Journal of Consulting and Clinical Psychology, 66(2), 270-278.
The chapter focuses on the historically perceived poor methodological rigor and low scientific credibility of most educational/psychological intervention research.
Levin, J. R., & Kratochwill, T. R. (2012). Educational/psychological intervention research circa 2012. Handbook of Psychology, Second Edition, 7.
Despite increased attention to methodological rigor in education research, the field has focused heavily on experimental design and not on the merit of replicating important results. The present study analyzed the complete publication history of the current top 100 education journals ranked by 5-year impact factor and found that only 0.13% of education articles were replications. Contrary to previous findings in medicine, but similar to psychology, the majority of education replications successfully replicated the original studies. However, replications were significantly less likely to be successful when there was no overlap in authorship between the original and replicating articles. The results emphasize the importance of third-party, direct replications in helping education research improve its ability to shape education policy and practice.
Makel, M. C., & Plucker, J. A. (2014). Facts are more important than novelty: Replication in the education sciences. Educational Researcher, 43(6), 304–316.
This meta-analysis examined the efficacy of bibliotherapy. Bibliotherapy treatments were compared with control groups and with therapist-administered treatments.
Marrs, R. W. (1995). A meta-analysis of bibliotherapy studies. American Journal of Community Psychology, 23(6), 843-870.
In a large sample of children from the general population this research found no association between parent, teacher, and self-reports of ADDH behaviors and a history of allergic disorders (asthma, eczema, rhinitis, and urticaria) at ages 9 or 13 years.
McGee, R., Stanton, W. R., & Sears, M. R. (1993). Allergic disorders and attention deficit disorder in children. Journal of Abnormal Child Psychology, 21(1), 79-88.
The authors developed a fidelity index of program implementation for assertive community treatment (ACT). In Study 1, 20 experts rated the importance of 73 elements proposed as critical ACT ingredients and indicated ideal model specifications for each element.
McGrew, J. H., Bond, G. R., Dietzen, L., & Salyers, M. (1994). Measuring the fidelity of implementation of a mental health program model. Journal of Consulting and Clinical Psychology, 62(4), 670-678.
This paper examines school-based experimental studies, published between 1991 and 2005, involving participants aged 0 to 18 years. Only 30% of the studies provided treatment integrity data, and nearly half (45%) were judged to be at high risk for treatment inaccuracies.
McIntyre, L. L., Gresham, F. M., DiGennaro, F. D., & Reed, D. D. (2007). Treatment integrity of school-based interventions with children in the Journal of Applied Behavior Analysis 1991–2005. Journal of Applied Behavior Analysis, 40(4), 659–672.
A variety of studies are examined from the standpoint of information theory. It is shown that the unaided observer is severely limited in the amount of information he can receive, process, and remember. However, it is also shown that by the use of various techniques, e.g., use of several stimulus dimensions, recoding, and various mnemonic devices, this informational bottleneck can be broken.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81.
This article covers current efforts by the National Institute of Mental Health to bridge the gap between treatment research and clinical practice. Included are discussions of problems with the current research portfolio and new efforts in expanding the research portfolio, innovative methodological research, and expansion of training programs.
Norquist, G., Lebowitz, B., & Hyman, S. (1999). Expanding the frontier of treatment research. Prevention & Treatment, 2(1). Article ID 1a.
The relationship of client satisfaction to outcome was investigated for adult outpatients (N = 152) from 3 urban community mental health centers. Clients completed a problem self-rating and the Brief Symptom Inventory (BSI) at intake, 10 weeks later, and 5 months later.
Pekarik, G., & Wolff, C. B. (1996). Relationship of satisfaction to symptom change, follow-up adjustment, and clinical significance. Professional Psychology: Research and Practice, 27(2), 202-208.
This award-winning twelve-volume reference covers every aspect of the ever-fascinating discipline of psychology and represents the most current knowledge in the field. This ten-year revision now covers discoveries based in neuroscience, clinical psychology's new interest in evidence-based practice and mindfulness, and new findings in social, developmental, and forensic psychology.
Pianta, R. C., Hamre, B., Stuhlman, M., Reynolds, W. M., & Miller, G. E. (2003). Handbook of psychology: Educational psychology.
This article reviews empirically supported comprehensive treatments for young children with autism, a severe, chronic developmental disorder that results in significant lifelong disability for most persons, few of whom ever achieve an independent and typical lifestyle.
Rogers, S. J. (1998). Empirically supported comprehensive treatments for young children with autism. Journal of Clinical Child Psychology, 27(2), 168-179.
Current systems for listing empirically supported therapies (ESTs) provide recognition to treatment packages, many of them proprietary and trademarked, without regard to the principles of change believed to account for their effectiveness.
Rosen, G. M., & Davison, G. C. (2003). Psychology should list empirically supported principles of change (ESPs) and not credential trademarked therapies or other treatment packages. Behavior Modification, 27(3), 300-312.
Two messages are conveyed in the report: Mental health is fundamental to health, and mental disorders are real health conditions. The Surgeon General's report summarizes the office's detailed review of more than 3,000 research articles, plus first-person accounts from individuals who have been afflicted with mental disorders.
Satcher, D. (2000). Mental health: A report of the Surgeon General--Executive summary. Professional Psychology: Research and Practice, 31(1), 5-13.
This article suggests the routine use of replications in field studies.
Schafer, W. D. (2001). Replication: A design principle for field research. Practical Assessment, Research & Evaluation, 7(15), 1-7.
The challenges of specifying a complex and individualized treatment model and measuring fidelity thereto are described, using multisystemic therapy (MST) as an example.
Schoenwald, S. K., Henggeler, S. W., Brondino, M. J., & Rowland, M. D. (2000). Multisystemic therapy: Monitoring treatment fidelity. Family Process, 39(1), 83-103.
This study evaluated the impact of intensive behavioral treatment on the development of young autistic children.
Sheinkopf, S. J., & Siegel, B. (1998). Home-based behavioral treatment of young children with autism. Journal of Autism and Developmental Disorders, 28(1), 15-23.
This paper outlines the best practices for researchers and practitioners translating research to practice as well as recommendations for improving the process.
Shriver, M. D. (2007). Roles and responsibilities of researchers and practitioners for translating research to practice. Journal of Evidence-Based Practices for Schools, 8(1), 1-30.
For many centuries, the professor was the primary source of information, the font of knowledge. Books were nonexistent or scarce, as they still are today in developing countries of the world, and information was passed orally from teacher to pupil. The didactic lecture is an effective method for conveying information from one person to a larger number of students, but, as most of us have experienced, simply telling information to someone does not ensure that learning takes place.
Silverthorn, D. U. (2006). Teaching and learning in the interactive classroom. Advances in Physiology Education, 30(4), 135-140.
Horticulture as Therapy shows how plants and plant products can be used to improve people's cognitive, physical, psychological, and social functioning.
Simson, S., & Straus, M. (1997). Horticulture as therapy: Principles and practice. CRC Press.
A recent large-scale evaluation of Reading Recovery, a supplemental reading program for young struggling readers, supports previous research that found it to be effective. In a four-year, federally funded project involving almost 3,500 students in 685 schools, students generally benefited from the intervention. Students receiving Reading Recovery get supplemental services in a 1:1 instructional setting for 30 minutes, five days a week, from an instructor trained in Reading Recovery. In the study reported here, students who received Reading Recovery showed effect sizes of .35-.37 relative to a control group across a number of measures of reading. These are moderate effect sizes and correspond to roughly a 1.5-month increase in skill relative to the control group. Even though the research supports the efficacy of the intervention, it also raises questions about its efficiency. The participating schools each served only about five students with the program, and the estimated cost per student has ranged from $2,000 to $5,000. These figures raise questions about the wisdom of spending this much money per student for growth of about a month and a half.
Sirinides, P., Gray, A., & May, H. (2018). The Impacts of Reading Recovery at Scale: Results From the 4-Year i3 External Evaluation. Educational Evaluation and Policy Analysis, 0162373718764828.
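As a rough sketch of the efficiency question raised above (using only the figures quoted in the annotation; the variable names are illustrative), the implied cost per month of extra reading growth works out as follows:

```python
# Back-of-the-envelope cost-effectiveness sketch for Reading Recovery,
# based on the figures quoted above: $2,000-$5,000 per student for
# roughly 1.5 months of additional reading growth relative to controls.
cost_per_student_range = (2000, 5000)  # reported cost range, in dollars
months_of_extra_growth = 1.5           # approximate gain vs. control group

for cost in cost_per_student_range:
    per_month = cost / months_of_extra_growth
    print(f"${cost:,} per student -> about ${per_month:,.0f} per month of extra growth")
```

At the quoted figures, that is roughly $1,300 to $3,300 per month of additional growth, which is the arithmetic behind the annotation's efficiency concern.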
Since 1980, 12 peer-reviewed outcome studies (nine on behavior analytic programs, one on Project TEACCH, and two on Colorado Health Sciences) have focused on early intervention for children with autism. Mean IQ gains of 7-28 points were reported in studies of behavior analytic programs, and gains of 3-9 points in studies on TEACCH and Colorado.
Smith, T. (1999). Outcome of early intervention for children with autism. Clinical Psychology: Science and Practice, 6(1), 33-49.
A meta-analysis of Direct Instruction (DI) curricula, reviewing research published between 1966 and 2016, reports that DI curricula produced moderate to large effect sizes across the curriculum areas of reading, math, language, and spelling. The review is notable because it covers a much larger body of DI research than past reviews and spans a wide range of experimental designs (from single-subject to randomized trials). In all, 328 studies were reviewed and almost 4,000 effects were considered. Given the variability in research designs and the breadth of the effects considered, the findings suggest that DI curricula produce robust results. There was very little decline in effects during maintenance phases, and greater exposure to the curricula resulted in greater effects.
Stockard, J., Wood, T. W., Coughlin, C., & Rasplica Khoury, C. (2018). The effectiveness of direct instruction curricula: A meta-analysis of a half century of research. Review of Educational Research, 88(4), 479-507.
The purpose of this article is to describe how effective practices are incorporated into an approach termed schoolwide positive behavior supports (SWPBS).
Sugai, G., & Horner, R. H. (2010). School-wide positive behavior support: Establishing a continuum of evidence based practices. Journal of Evidence-Based Practices for Schools, 11(1), 62-83.
At the request of David Barlow, President of Division 12, and under the aegis of Section III, this task force was constituted to consider methods for educating clinical psychologists, third-party payors, and the public about effective psychotherapies.
Task Force on Promotion and Dissemination of Psychological Procedures, Division of Clinical Psychology, American Psychological Association. (1995). Training in and Dissemination of Empirically-Validated Psychological Treatments: Report and Recommendations. The Clinical Psychologist, 48, 3-23.
This paper examines the types of research to consider when evaluating programs, how to know what "evidence" to use, and continuums of evidence (quantity of the evidence, quality of the evidence, and program development).
Twyman, J. S., & Sota, M. (2008). Identifying research-based practices for response to intervention: Scientifically based instruction. Journal of Evidence-Based Practices for Schools, 9(2), 86-101.
Simple experimental studies randomize study participants into two groups: a treatment group that includes participants who receive the offer to participate in a program or intervention, and a control group that includes participants who do not receive that offer. Such studies primarily address questions about the program impacts on the average outcomes of participants.
Unlu, F., Bozzi, L., Layzer, C., Smith, A., Price, C., & Hurtig, R. (2016). Linking implementation fidelity to impacts in an RCT (pp. 108-137). Routledge.
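For reference, a minimal sketch of the estimator implied here (not notation from the chapter itself): the impact estimate in such a two-group randomized study is the difference between the average outcomes of the group offered the program and the control group.

```latex
% Estimated average impact of the program offer: mean outcome of the
% treatment group minus mean outcome of the control group.
\widehat{\text{Impact}} = \bar{Y}_{\text{treatment}} - \bar{Y}_{\text{control}}
```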
The text and graphics contained in this annual report to Congress were developed primarily from data from the Office of Special Education Programs (OSEP) Data Analysis System (DANS). DANS is a repository for all the data mandated by the Individuals with Disabilities Education Act (IDEA) to be collected from states annually.
US Department of Education. (1998). Twentieth annual report to Congress on the implementation of the Individuals with Disabilities Education Act.
The unrealistically high rate of positive results within psychology has increased the attention to replication research. However, researchers who conduct a replication and want to statistically combine the results of their replication with a statistically significant original study encounter problems when using traditional meta-analysis techniques. The original study’s effect size is most probably overestimated because it is statistically significant, and this bias is not taken into consideration in traditional meta-analysis. We have developed a hybrid method that does take the statistical significance of an original study into account and enables (a) accurate effect size estimation, (b) estimation of a confidence interval, and (c) testing of the null hypothesis of no effect. We analytically approximate the performance of the hybrid method and describe its statistical properties. By applying the hybrid method to data from the Reproducibility Project: Psychology (Open Science Collaboration, 2015), we demonstrate that the conclusions based on the hybrid method are often in line with those of the replication, suggesting that many published psychological studies have smaller effect sizes than those reported in the original study, and that some effects may even be absent. We offer hands-on guidelines for how to statistically combine an original study and replication, and have developed a Web-based application (https://rvanaert.shinyapps.io/hybrid) for applying the hybrid method.
van Aert, R. C. M., & van Assen, M. A. L. M. (2018). Examining reproducibility in psychology: A hybrid method for combining a statistically significant original study and a replication. Behavior Research Methods, 50(4), 1515-1539.
The Innovation Journey presents the results of a major longitudinal study that examined the process of innovation from concept to implementation of new technologies, products, processes, and administrative arrangements.
Van de Ven, A. H., Polley, D. E., Garud, R., & Venkataraman, S. (1999). The innovation journey. New York: Oxford University Press.
This book offers concrete examples from educational testing to illustrate the importance of empirically and logically scrutinizing the evidence used to make education policy decisions. Wainer uses statistical evidence to show why some of the most widely held beliefs in education may be wrong.
Wainer, H. (2011). Uneducated guesses: Using evidence to uncover misguided education policies. Princeton University Press.
The Child Task Force report represents an important initial step toward identifying empirically supported treatments for children and adolescents. The authors offer both praise and critique, suggesting a number of ways the task force process and product may be improved.
Weisz, J. R., & Hawley, K. M. (1998). Finding, evaluating, refining, and applying empirically supported treatments for children and adolescents. Journal of Clinical Child Psychology, 27(2), 206-216.
This article addresses the gap between clinical practice and the research laboratory. We focus on the issue as it relates specifically to interventions for children and adolescents.
Weisz, J. R., Donenberg, G. R., Han, S. S., & Weiss, B. (1995). Bridging the gap between laboratory and clinic in child and adolescent psychotherapy. Journal of Consulting and Clinical Psychology, 63(5), 688.
The study does suggest that "more is not always better" (L. Bickman, 1996), but more of what? Little is known about the specific interventions that were combined to form the Fort Bragg system of care, so the study does not really reveal what failed or what needs to be changed.
Weisz, J. R., Han, S. S., & Valeri, S. M. (1997). More of what? Issues raised by the Fort Bragg study. American Psychologist, 52(5), 541-545.
The Society of Clinical Psychology's task forces on psychological intervention developed criteria for evaluating clinical trials, applied those criteria, and generated lists of empirically supported treatments. Building on this strong base, the task force's successor, the Committee on Science and Practice, now pursues a three-part agenda.
Weisz, J. R., Hawley, K. M., Pilkonis, P. A., Woody, S. R., & Follette, W. C. (2000). Stressing the (other) three Rs in the search for empirically supported treatments: Review procedures, research quality, relevance to practice and the public interest. Clinical Psychology: Science and Practice, 7(3), 243-258.
This slide show presents what evidence-based education (EBE) is and what its goals are.
Whitehurst, G. J. (2002). Evidence-based education (EBE) [Slide presentation]. Washington, DC.