Types Of Evidence

All Research

TITLE
SYNOPSIS
CITATION
Creating Single-Subject Design Graphs in Microsoft Excel 2007
The article provides a task analysis for constructing various types of commonly used single-subject design graphs in Microsoft Excel.
Dixon, M. R., Jackson, J. W., Small, S. L., Horner-King, M. J., Lik, N. M. K., Garcia, Y., & Rosales, R. (2009). Creating single-subject design graphs in Microsoft Excel™ 2007. Journal of Applied Behavior Analysis, 42(2), 277-293.
Some recommendations for the reporting of quantitative studies

This editorial offers recommendations, with examples, of a series of elements that may significantly contribute to demonstrating the robustness of quantitative results. It is not a methodological guide but rather a reminder of some basic principles for reporting quantitative research.

López, X., Valenzuela, J., Nussbaum, M., & Tsai, C. C. (2015). Some recommendations for the reporting of quantitative studies. Computers & Education, 91(C), 106-110.

Understanding Bias Due to Measures Inherent to Treatments in Systematic Reviews in Education

This paper contrasts effect sizes in What Works Clearinghouse and Best Evidence Encyclopedia reading and math reviews to explore the degree to which these measures produce different estimates.

Slavin, R. E., & Madden, N. A. (2008). Understanding bias due to measures inherent to treatments in systematic reviews in education. Paper presented at the annual meeting of the Society for Research on Effective Education, Crystal City, VA.

Randomized Trials and Quasi-Experiments in Education Research
This paper examines the benefits and challenges inherent in using randomized clinical trials and quasi-experimental designs in the field of education research.
Angrist, J. D. (2003). Randomized trials and quasi-experiments in education research. NBER Reporter Online, (Summer 2003), 11-14.
The Core Analytics of Randomized Experiments for Social Research
This paper examines the elements of randomized experiments for social research.
Bloom, H. S. (2006). The core analytics of randomized experiments for social research.
Randomized, Controlled Trials, Observational Studies, and the Hierarchy of Research Designs
A study comparing the treatment-effect estimates produced by randomized controlled trials with those produced by observational studies.
Concato, J., Shah, N., & Horwitz, R. I. (2000). Randomized, controlled trials, observational studies, and the hierarchy of research designs. New England Journal of Medicine, 342(25), 1887-1892.
Can Randomized Trials Answer the Question of What Works?
This article discusses the use of randomized controlled trials as required by the Department of Education in evaluating the effectiveness of educational practices.
EDUC, A. R. O. (2005). Can randomized trials answer the question of what works?.
Statistical Significance and Effect Size: Two Sides of a Coin
This paper suggests that statistical significance testing and effect size are two sides of the same coin; they complement each other but do not substitute for one another.
Fan, X. (1999). Statistical Significance and Effect Size: Two Sides of a Coin.
New Federal Policy Favors Randomized Trials in Education Research
This is an article from the Chronicle of Higher Education discussing the pros and cons of randomized controlled trials in education.
Glenn, D. (2005). New federal policy favors randomized trials in education research. The Chronicle of Higher Education. Retrieved March 25, 2005.
Implementing Randomized Field Trials in Education: Report of a Workshop
This book examines the use of randomized controlled trial (RCT) studies in education.
Hilton, M., & Towne, L. (Eds.). (2004). Implementing Randomized Field Trials in Education: Report of a Workshop. National Academies Press.
The Use of Single-Subject Research to Identify Evidence-Based Practice in Special Education
The defining features of single-subject research are presented, the contributions of single-subject research for special education are reviewed, and a specific proposal is offered for using single-subject research to document evidence-based practice.
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71(2), 165-179.
Single-Case Research: Documenting Evidence-based Practice
Making the case that single-subject design can and should be accepted as an alternative to randomized controlled trials in determining efficacy of practices.
Horner, R. University of Oregon.
Single-Case Research Designs: Methods for Clinical and Applied Settings
This book describes methods for designing, conducting, and evaluating single-case experimental research in clinical and applied settings.
Kazdin, A. E. (2011). Single-case research designs: Methods for clinical and applied settings. Oxford University Press.
Single-Case Designs for Educational Research
This book provides a thorough summary of information about the use of single-subject experimental designs in educational research
Kennedy, C. H. (2005). Single-case designs for educational research. Pearson/Allyn & Bacon.
Single-case research design and analysis: New directions for psychology and education.
This book provides a thorough summary of information about the use of single-subject experimental designs.
Kratochwill, T. R., & Levin, J. R. (1992). Single-case research design and analysis: New directions for psychology and education. Lawrence Erlbaum Associates, Inc.
Single-Case Design Technical Documentation
Single-case design has made important contributions to identifying effective educational practices. Until recently, there were no standards for evaluating the quality and quantity of studies across a topic area. These standards were developed for the Institute of Education Sciences.
Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M. & Shadish, W. R. (2010). Single-case designs technical documentation. Retrieved from What Works Clearinghouse website: http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf.
What Works Clearinghouse: Single-Case Design Technical Documentation
This paper by a What Works Clearinghouse panel provides an overview of single-case designs (SCDs), specifies the types of questions that SCDs are designed to answer, and discusses the internal validity of SCDs. The panel then proposes standards to be implemented by the WWC.
Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation. What Works Clearinghouse.
Comparing Results of Systematic Reviews: Parallel Reviews of Research on Repeated Reading.
This paper demonstrates that different well-accepted methods for reviewing research on repeated readings produce different results.
O’Keeffe, B. V., Slocum, T. A., Burlingame, C., Snyder, K., & Bundock, K. (2012). Comparing results of systematic reviews: Parallel reviews of research on repeated reading. Education and Treatment of Children, 35(2), 333-366.
Combining estimates of effect size
This chapter is an in-depth examination of methods for combining effect sizes in literature synthesis, along with useful advice for interpreting the results of a meta-analysis.
Shadish, W. R., & Haddock, C. K. (2009). Combining estimates of effect size. The Handbook of Research Synthesis and Meta-analysis, 257-277.
The state of the science in the meta-analysis of single-case experimental designs
This is a review of the issues and methods for conducting a meta-analysis of single-case design research studies.
Shadish, W. R., Rindskopf, D. M., & Hedges, L. V. (2008). The state of the science in the meta-analysis of single-case experimental designs. Evidence-Based Communication Assessment and Intervention, 2(3), 188-196. doi:10.1080/17489530802581603
Effects of Sample Size on Effect Size in Systematic Reviews in Education
This study uses data from elementary and secondary mathematics program evaluations to explore the effects of sample size on effect size in education.
Slavin, R.E., & Smith, D. (2009). The relationship between sample sizes and effect sizes in systematic reviews in education. Educational Evaluation and Policy Analysis, 31 (4), 500-506.
Evaluating the validity of systematic reviews to identify empirically supported treatments
Systematic reviews are a process for assessing the quality of the literature to determine whether a particular practice has met criteria for being empirically supported. As with any assessment process, there are issues of validity. The concepts and methodological tools of measurement validity can be applied to systematic reviews to identify their strengths and weaknesses.
Slocum, T. A., Detrich, R., & Spencer, T. D. (2012). Evaluating the validity of systematic reviews to identify empirically supported treatments. Education and Treatment of Children, 35(2), 201-233.
Randomized Trials Flourish in Developing Countries
This article reviews issues regarding the use of randomized trials in developing countries.
Viadero, D. (2006). Randomized Trials Flourish in Developing Countries. Education Week, Retrieved October 30, 2006.