Formative Assessment
States, J., Detrich, R. & Keyworth, R. (2017). Overview of Formative Assessment. Oakland, CA: The Wing Institute. http://www.winginstitute.org/student-formative-assessment.
For teachers, few skills are as important or powerful as formative assessment (also known as progress monitoring and rapid assessment). This process of frequent and ongoing feedback on the effects of instruction gives teachers insight on when and how to adjust instruction to maximize learning. The assessment data are used to verify student progress and act as indicators to adjust interventions when insufficient progress has been made or a particular concept has been mastered (VanDerHeyden, 2013). For the past 30 years, formative assessment has been found to be effective in typical classroom settings. The practice has shown power across student ages, treatment durations, and frequencies of measurement, as well as with students with special needs (Hattie, 2009).
Another important assessment tool commonly used in schools that should not be confused with formative assessment is summative assessment. Formative assessment and summative assessment play important but very different roles in an effective model of education. Both are integral in gathering information necessary for maximizing student success, but they differ in important ways (see Figure 1).
Summative assessment evaluates the overall effectiveness of teaching at the end of a class, end of a semester, or end of the school year. This type of assessment is used to determine at a particular time what students know and do not know. It is most often associated with standardized tests such as state achievement assessments but is also commonly used by teachers to assess the overall progress of students in determining grades (Geiser & Santelices, 2007). Since the advent of No Child Left Behind, summative assessment has increasingly been used to hold schools and teachers accountable for student progress, and its use is likely to continue under the Every Student Succeeds Act.
In contrast, formative assessment is a practical diagnostic tool for routinely determining student progress. Formative assessment allows teachers to quickly ascertain if individual students are progressing at acceptable rates and provides insight into when and how to modify and adapt lessons, with the goal of making sure all students are progressing satisfactorily.
Comparing Formative Assessment and Summative Assessment
Figure 1. Comparing two types of assessment
Both formative assessment and summative assessment are essential components of information gathering, but they should be used for the purposes for which they were designed.
Figure 2 offers a data display examining the relative impact of formative assessment and summative assessment (the latter in the form of high-stakes testing). Research shows a clear advantage for formative assessment in improving student performance.
Figure 2. Comparison of formative assessment and summative assessment impact on student achievement
Research consistently lists formative assessment in the top tier of variables that make a difference in improving student achievement (Hattie, 2009; Marzano, 1998). In 1986, Fuchs and Fuchs conducted the first comprehensive quantitative examination of formative assessment, finding an average weighted effect size of 0.70 on student achievement, rising to an impressive 0.90 when teachers applied decision rules. Figure 3 provides the effect size of formative assessment, gleaned from multiple studies over more than 40 years of research on the topic.
Figure 3. Effect size of formative assessment
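The effect sizes cited throughout this overview are standardized mean differences. As a minimal illustration (not from the source; all scores below are invented), Cohen's d can be computed as the difference between group means divided by the pooled standard deviation:

```python
# Illustrative sketch: computing a standardized effect size (Cohen's d)
# from treatment and control group scores. The data are invented examples,
# not figures from any study cited in this overview.
import statistics

def cohens_d(treatment, control):
    """Difference in group means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

treatment = [82, 90, 75, 88, 85, 92]  # hypothetical post-test scores
control = [70, 78, 74, 80, 72, 76]
print(round(cohens_d(treatment, control), 2))
```

An effect size of 0.70, for instance, means the average treated student outscored the average control student by 0.70 standard deviations.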
At its core, formative assessment uses feedback to improve student performance. It furnishes teachers with indicators of each student’s progress, which can be used to determine when and how to adjust instruction to maximize learning. Feedback is ranked at or near the top of practices known to significantly raise student achievement (Kluger & DeNisi, 1996; Marzano, Pickering, & Pollock, 2001; Walberg, 1999). It is not surprising that data-based decision-making approaches such as response to intervention (RtI) and positive behavior interventions and supports (PBIS) depend heavily on formative assessment.
Another important feature of well-designed formative assessment is the incorporation of grade-level norms into the assessment process. Grade-level norms are a valuable yardstick enabling teachers to more efficiently compare each student’s performance against normed standards (McLaughlin & Shepard, 1995). In addition to allowing teachers to determine whether a student met or missed a target, grade-level norms offer teachers a clear picture of whether students are meeting important goals in the standards and quickly identify struggling students who need more intensive support.
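A norm comparison of the kind described above can be sketched in a few lines. This is a hypothetical illustration, not from the source: the benchmark value, the 90% margin, and the student scores are all invented.

```python
# Illustrative sketch: flagging students whose scores fall below a
# hypothetical grade-level norm on a weekly reading probe. The benchmark
# and student data are invented for illustration only.

GRADE_NORM_WCPM = 107  # hypothetical words-correct-per-minute benchmark

students = {"Ana": 112, "Ben": 88, "Cara": 104, "Dev": 61}

def needs_support(score, norm, margin=0.9):
    """Flag students scoring below 90% of the grade-level norm."""
    return score < norm * margin

flagged = [name for name, score in students.items()
           if needs_support(score, GRADE_NORM_WCPM)]
print(flagged)  # students needing more intensive support
```

The margin is a policy choice; a school might instead flag students below the 25th percentile of the norming sample.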
The Fuchs and Fuchs (1986) meta-analysis added considerably to the knowledge base by identifying the essential practice elements that increase the impact of ongoing formative assessment. Black and Wiliam (1998) estimate that the impact is equivalent to raising student achievement in an average nation such as the United States to that of the top five nations. As can be seen in Figure 4, Fuchs and Fuchs reported that the impact of formative assessment is significantly enhanced by the cumulative effect of three practice elements. The practice begins with collecting data weekly (0.26 effect size). When teachers interact with the collected data by graphing them, the effect size increases to 0.70. Adding decision rules to aid teachers in analyzing the graphed data increases the effect size to 0.90.
Figure 4. Impact of formative assessment on student achievement
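The decision-rule element that Fuchs and Fuchs found so powerful can be illustrated with a small sketch. Everything here is an assumption for illustration: the aimline parameters, the three-consecutive-points-below rule, and the weekly scores are invented, not taken from the source.

```python
# Illustrative sketch of a data-based decision rule applied to weekly
# formative assessment scores. The rule (three consecutive points below
# the expected aimline triggers an instructional change) and all numbers
# are hypothetical.

def aimline(week, start=40.0, weekly_gain=1.5):
    """Expected score in a given week under a hypothetical growth goal."""
    return start + weekly_gain * week

def decide(scores, run_length=3):
    """Return 'adjust instruction' if the last `run_length` weekly scores
    all fall below the aimline; otherwise 'continue instruction'."""
    recent = list(enumerate(scores))[-run_length:]
    if all(score < aimline(week) for week, score in recent):
        return "adjust instruction"
    return "continue instruction"

weekly_scores = [41, 43, 42, 43, 44, 44]  # hypothetical weekly probe data
print(decide(weekly_scores))
```

The value of such a rule is that it removes guesswork: the graphed data, not impressions, dictate when a change is warranted.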
Why Is Formative Assessment Important?
Much has been said about the importance of selecting evidence-based practices for use in schools. One of the most common failures in building an evidence-based culture is overreliance on selecting interventions and underreliance on managing the interventions (VanDerHeyden & Tilly, 2010). Adopting an evidence-based practice, although an important first step, does not guarantee that the practice will produce the desired results. Even if every action leading up to implementation is flawless, if the intervention is not implemented as designed, it will likely fail and learning will not occur (Detrich, 2014). A growing body of research is now available to help teachers identify and overcome obstacles to implementing practices accurately (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Witt, Noell, LaFleur, & Mortenson, 1997). Formative assessment and treatment integrity checks constitute the basic tool kit enabling schools to avoid or quickly remedy failures during implementation.
The fact is, not all practices produce positive outcomes for all students. In medicine, all patients do not respond positively to a given treatment. The same holds true in education: Not all students respond identically to an education intervention. Given the possibility that even good practices may produce poor outcomes, it is incumbent on educators to monitor student progress frequently. Formal and routine sampling of student performance significantly reduces the likelihood that struggling students will fall through the cracks.
Common informal sampling methods such as having students answer questions by raising their hands are not sufficient. It is imperative that teachers have a clear understanding of each student’s progress toward mastery of standards. This is important not just for the lesson at hand but also for future success. A systematically planned curriculum builds on learned skills across a school year. Skills learned in one assignment are very often the foundation skills needed for success in subsequent lessons. Today’s failure may increase the possibility of failure tomorrow. For example, students who fall behind in reading by the third grade have been found to have poorer academic success, including a significantly greater likelihood of dropping out of high school (Celio & Harvey, 2005; Lesnick, Goerge, Smithgall, & Gwynne, 2010).
It is only through ongoing monitoring that problems can be identified early and adjustments made to teaching strategies to ensure greater success for all students. In this way, formative assessment guides teachers on when and how to improve instructional delivery and make effective adjustments to the curriculum. This is necessary for helping struggling students as well as adapting instruction for gifted students.
In addition to formative assessment’s notable impact on achievement is its impressive return on investment compared with other popular reform practices. In a cost-effectiveness analysis of frequently adopted education interventions, Yeh (2007) found that formative assessment (which he referred to as rapid assessment) outperformed other common reform practices. He found the advantage for formative assessment striking compared with a 10% increase in spending, vouchers, charter schools, or high-stakes testing (see Figure 5).
Figure 5. Return on investment of common education interventions
The Figure 5 data display compares Yeh’s (2007) and The Wing Institute’s cost-effectiveness analyses of formative assessment with six common structural interventions.
Yeh compared the cost and outcomes of alternative practices to aid education decision makers in selecting economical and productive choices (Levin, 1988; Levin & McEwan, 2002). Educational cost-effectiveness analyses are designed to assess key educational outcomes, such as student achievement relative to the monetary resources needed to achieve worthy results. Cost-effectiveness analyses provide a practical and systematic architecture that permits educators to more effectively compare the real impact of interventions.
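The basic arithmetic of such a cost-effectiveness comparison can be sketched as an effectiveness-per-dollar ratio. The effect sizes and per-student costs below are invented placeholders for illustration, not Yeh’s figures.

```python
# Illustrative sketch of a cost-effectiveness comparison: achievement gain
# (in standard-deviation units) per dollar spent per student. All numbers
# are hypothetical placeholders, not results from Yeh (2007).

interventions = {
    "formative assessment": {"effect_size": 0.90, "cost_per_student": 100.0},
    "10% spending increase": {"effect_size": 0.10, "cost_per_student": 1000.0},
    "class size reduction": {"effect_size": 0.20, "cost_per_student": 800.0},
}

def effectiveness_cost_ratio(effect_size, cost_per_student):
    """Standard-deviation units of achievement gained per dollar spent."""
    return effect_size / cost_per_student

# Rank interventions from most to least cost-effective.
ranked = sorted(interventions.items(),
                key=lambda kv: effectiveness_cost_ratio(**kv[1]),
                reverse=True)
for name, v in ranked:
    print(f"{name}: {effectiveness_cost_ratio(**v):.5f} SD per dollar")
```

Ranking by this ratio, rather than by effect size alone, is what lets decision makers see which approaches produce the greatest benefit for the dollars invested.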
Although the structural interventions identified in Figure 5 are designed to address an array of differing issues impacting schools, a fair comparison can be made because all the interventions aim to improve student achievement. In the end, decision makers need to know which approaches produce the greatest benefit for the dollars invested. A given practice may be very effective, but if it costs more than the resources available for implementation, the practice is of little use to the average school.
Summary
It is clear from years of rigorous research that formative assessment produces important results. It is also true that ongoing assessment carried out through the school year is necessary for teachers to grasp when and how to adjust instruction and curriculum to meet the various needs of struggling students as well as gifted students. Finally, cost-effectiveness research reveals that formative assessment is not only effective, but one of the most cost-effective interventions available to schools for boosting student performance.
Citations
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74.
Bloom, B. S. (1976). Human characteristics and school learning. New York, NY: McGraw-Hill.
Celio, M. B., & Harvey, J. (2005). Buried treasure: Developing a management guide from mountains of school data. Seattle, WA: University of Washington, Center on Reinventing Public Education.
Detrich, R. (2014). Treatment integrity: Fundamental to education reform. Journal of Cognitive Education and Psychology, 13(2), 258–271.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature (FMHI Publication No. 231). Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, the National Implementation Research Network.
Fuchs, L. S. & Fuchs, D. (1986). Effects of systematic formative evaluation: A meta-analysis. Exceptional Children, 53(3), 199–208.
Geiser, S., & Santelices, M. V. (2007). Validity of high-school grades in predicting student success beyond the freshman year: High-school record vs. standardized tests as indicators of four-year college outcomes (Research and Occasional Paper Series CSHE. 6.07). Berkeley, CA: University of California, Berkeley, Center for Studies in Higher Education.
Haller, E. P., Child, D. A., & Walberg, H. J. (1988). Can comprehension be taught? A quantitative synthesis of “metacognitive” studies. Educational Researcher, 17(9), 5–8.
Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York, NY: Routledge.
Kavale, K. A. (2005). Identifying specific learning disability: Is responsiveness to intervention the answer? Journal of Learning Disabilities, 38(6), 553–562.
Kluger, A. N., & DeNisi, A. S. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284.
Lesnick, J., Goerge, R., Smithgall, C., & Gwynne, J. (2010). Reading on grade level in third grade: How is it related to high school performance and college enrollment? Chicago, IL: Chapin Hall at the University of Chicago, 1, 12.
Levin, H. M. (1988). Cost-effectiveness and educational policy. Educational Evaluation and Policy Analysis, 10(1), 51–69.
Levin, H. M., & McEwan, P. J. (Eds.). (2002). Cost-effectiveness and educational policy. Larchmont, NY: Eye on Education.
Marzano, R. J. (1998). A theory-based meta-analysis of research on instruction. Aurora, CO: Mid-Continent Regional Educational Laboratory.
Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement. Alexandria, VA: Association for Supervision and Curriculum Development.
McLaughlin, M. W., & Shepard, L. A. (1995). Improving education through standards-based reform. A report by the National Academy of Education Panel on Standards-Based Education Reform. Palo Alto, CA: Stanford University Press.
Scheerens, J., & Bosker, R. J. (1997). The foundations of educational effectiveness. Oxford, UK: Pergamon.
VanDerHeyden, A. (2013). Are we making the differences that matter in education? In R. Detrich, R. Keyworth, & J. States (Eds.), Advances in evidence-based education: Vol 3. Performance feedback: Using data to improve educator performance (pp. 119–138). Oakland, CA: The Wing Institute. http://www.winginstitute.org/uploads/docs/Vol3Ch4.pdf
VanDerHeyden, A. M., & Tilly, W. D. (2010). Keeping RtI on track: How to identify, repair and prevent mistakes that derail implementation. Horsham, PA: LRP Publications.
Walberg H. J. (1999). Productive teaching. In H. C. Waxman & H. J. Walberg (Eds.), New directions for teaching, practice, and research (pp. 75–104). Berkeley, CA: McCutchen.
Witt, J. C., Noell, G. H., LaFleur, L. H., & Mortenson, B. P. (1997). Teacher use of interventions in general education settings: Measurement and analysis of the independent variable. Journal of Applied Behavior Analysis, 30(4), 693–696.
Yeh, S. S. (2007). The cost-effectiveness of five policies for improving student achievement. American Journal of Evaluation, 28(4), 416–436.
Annotated Bibliography
Learning About Learning: What Every New Teacher Needs to Know
This paper examines teacher education textbooks for discussion of research-based strategies that every teacher candidate should learn in order to promote student learning and retention.
Learning About Learning: What Every New Teacher Needs to Know. Retrieved from http://www.nctq.org/dmsView/Learning_About_Learning_Report.
Introduction: Proceedings from the Wing Institute’s Sixth Annual Summit on Evidence-Based Education: Performance Feedback: Using Data to Improve Educator Performance.
This book is compiled from the proceedings of the sixth summit entitled “Performance Feedback: Using Data to Improve Educator Performance.” The 2011 summit topic was selected to help answer the following question: What basic practice has the potential for the greatest impact on changing the behavior of students, teachers, and school administrative personnel?
States, J., Keyworth, R. & Detrich, R. (2013). Introduction: Proceedings from the Wing Institute’s Sixth Annual Summit on Evidence-Based Education: Performance Feedback: Using Data to Improve Educator Performance. In Education at the Crossroads: The State of Teacher Preparation (Vol. 3, pp. ix-xii). Oakland, CA: The Wing Institute.
Assessment and classroom learning
This is a review of the literature on classroom formative assessment. Several studies show firm evidence that innovations designed to strengthen the frequent feedback that students receive about their learning yield substantial learning gains.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74.
Human characteristics and school learning
This paper theorizes that variations in learning and the level of learning of students are determined by the students' learning histories and the quality of instruction they receive.
Bloom, B. (1976). Human characteristics and school learning. New York: McGraw-Hill.
Formative assessment strategies for every classroom: An ASCD action tool (2nd ed.)
The best formative assessment involves both students and teachers in a recursive process. It starts with the teacher, who models the process for the students. At first, the concept of what good work "looks like" belongs to the teacher. The teacher describes, explains, or demonstrates the concepts or skills to be taught, or assigns student investigations—reading assigned material, locating and reading materials to answer a question, doing activities or experiments—to put content into students' hands.
Brookhart, S. M. (2010). Formative assessment strategies for every classroom: An ASCD action tool. ASCD.
Effective Teaching: What Is It and How Is It Measured?
This paper examines how to measure teacher performance and the practices necessary for increasing teacher trust in systems designed to effectively measure performance.
Cantrell, S., & Scantlebury, J. (2011). Effective teaching: What is it and how is it measured? Effective Teaching as a Civil Right, 28.
Buried Treasure: Developing a Management Guide From Mountains of School Data
This report provides a practical “management guide,” for an evidence-based key indicator data decision system for school districts and schools.
Celio, M. B., & Harvey, J. (2005). Buried Treasure: Developing A Management Guide From Mountains of School Data. Center on Reinventing Public Education.
Treatment Integrity: Fundamental to Education Reform
To produce better outcomes for students, two things are necessary: (1) effective, scientifically supported interventions, and (2) those interventions implemented with high integrity. Typically, much greater attention has been given to identifying effective practices. This review focuses on features of high-quality implementation.
Detrich, R. (2014). Treatment integrity: Fundamental to education reform. Journal of Cognitive Education and Psychology, 13(2), 258-271.
Implementation Research: A Synthesis of the Literature
This is a comprehensive literature review of the topic of Implementation examining all stages beginning with adoption and ending with sustainability.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature (FMHI Publication No. 231). Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, National Implementation Research Network.
Effects of Systematic Formative Evaluation: A Meta-Analysis
This meta-analysis investigated the effects of formative evaluation procedures on student achievement. The data source was 21 controlled studies, which generated 96 relevant effect sizes, with an average weighted effect size of .70. The magnitude of the effect of formative evaluation was associated with publication type, data-evaluation method, data display, and use of behavior modification. Implications for special education practice are discussed.
Fuchs, L. S., & Fuchs, D. (1986). Effects of systematic formative evaluation: A meta-analysis. Exceptional Children, 53(3), 199–208.
Validity of High-School Grades in Predicting Student Success beyond the Freshman Year: High-School Record vs. Standardized Tests as Indicators of Four-Year College Outcomes
High-school grades are often viewed as an unreliable criterion for college admissions, owing to differences in grading standards across high schools, while standardized tests are seen as methodologically rigorous, providing a more uniform and valid yardstick for assessing student ability and achievement. The present study challenges that conventional view. The study finds that high-school grade point average (HSGPA) is consistently the best predictor not only of freshman grades in college, the outcome indicator most often employed in predictive-validity studies, but of four-year college outcomes as well.
Geiser, S., & Santelices, M. V. (2007). Validity of high-school grades in predicting student success beyond the freshman year: High-school record vs. standardized tests as indicators of four-year college outcomes (Research & Occasional Paper Series CSHE.6.07). Berkeley, CA: University of California, Berkeley, Center for Studies in Higher Education.
Dealing with Flexibility in Assessments for Students with Significant Cognitive Disabilities
Alternate assessment and instruction is a key issue for individuals with disabilities. This report presents an analysis, by assessment system component, to identify where and when flexibility can be built into assessments.
Gong, B., & Marion, S. (2006). Dealing with Flexibility in Assessments for Students with Significant Cognitive Disabilities. Synthesis Report 60. National Center on Educational Outcomes, University of Minnesota.
What teacher preparation programs teach about K–12 assessment: A review.
This report provides information on the preparation provided to teacher candidates by teacher training programs so that they can fully use assessment data to improve classroom instruction.
Greenberg, J., & Walsh, K. (2012). What Teacher Preparation Programs Teach about K-12 Assessment: A Review. National Council on Teacher Quality.
Can comprehension be taught? A quantitative synthesis of “metacognitive” studies
This quantitative review examines 20 studies to establish an effect size of .71 for the impact of “metacognitive” instruction on reading comprehension.
Haller, E. P., Child, D. A., & Walberg, H. J. (1988). Can comprehension be taught? A quantitative synthesis of “metacognitive” studies. Educational researcher, 17(9), 5-8.
Visible learning: A synthesis of over 800 meta-analyses relating to achievement
Hattie’s book is designed as a meta-meta-study that collects, compares, and analyzes the findings of many previous studies in education. Hattie focuses on schools in the English-speaking world, but most aspects of the underlying story should be transferable to other countries and school systems as well. Visible Learning is nothing less than a synthesis of more than 50,000 studies covering more than 80 million pupils. Hattie uses the statistical measure effect size to compare the impact of many influences on students’ achievement, e.g., class size, holidays, feedback, and learning strategies.
Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York, NY: Routledge.
Learning from teacher observations: Challenges and opportunities posed by new teacher evaluation systems
This article discusses the current focus on using teacher observation instruments as part of new teacher evaluation systems being considered and implemented by states and districts.
Hill, H., & Grossman, P. (2013). Learning from teacher observations: Challenges and opportunities posed by new teacher evaluation systems. Harvard Educational Review, 83(2), 371-384.
A Longitudinal Examination of the Diagnostic Accuracy and Predictive Validity of R-CBM and High-Stakes Testing
The purpose of this study is to compare different statistical and methodological approaches to standard setting and determining cut scores using R-CBM and performance on high-stakes tests.
Hintze, J. M., & Silberglitt, B. (2005). A longitudinal examination of the diagnostic accuracy and predictive validity of R-CBM and high-stakes testing. School Psychology Review, 34(3), 372.
Identifying Specific Learning Disability: Is Responsiveness to Intervention the Answer?
Responsiveness to intervention (RTI) is being proposed as an alternative model for making decisions about the presence or absence of specific learning disability. The author argues that many questions about RTI remain unanswered and that radical changes in proposed regulations are not warranted at this time.
Kavale, K. A. (2005). Identifying specific learning disability: Is responsiveness to intervention the answer? Journal of Learning Disabilities, 38(6), 553–562.
Proceedings from the Wing Institute’s Fifth Annual Summit on Evidence-Based Education: Education at the Crossroads: The State of Teacher Preparation
This article shared information about the Wing Institute and demographics of the Summit participants. It introduced the Summit topic, sharing performance data on past efforts of school reform that focused on structural changes rather than teaching improvement. The conclusion is that the system has spent enormous resources with virtually no positive results. The focus needs to be on teaching improvement.
Keyworth, R., Detrich, R., & States, J. (2012). Introduction: Proceedings from the Wing Institute’s Fifth Annual Summit on Evidence-Based Education: Education at the Crossroads: The State of Teacher Preparation. In Education at the Crossroads: The State of Teacher Preparation (Vol. 2, pp. ix–xxx). Oakland, CA: The Wing Institute.
Formative Assessment: A Meta-Analysis and a Call for Research
This meta-analysis examines the impact of formative assessment.
Kingston, N., & Nash, B. (2011). Formative assessment: A meta-analysis and a call for research. Educational Measurement: Issues and Practice, 30(4), 28–37.
Reading on grade level in third grade: How is it related to high school performance and college enrollment.
This study uses longitudinal administrative data to examine the relationship between third- grade reading level and four educational outcomes: eighth-grade reading performance, ninth-grade course performance, high school graduation, and college attendance.
Lesnick, J., Goerge, R., Smithgall, C., & Gwynne, J. (2010). Reading on grade level in third grade: How is it related to high school performance and college enrollment. Chicago: Chapin Hall at the University of Chicago, 1, 12.
Cost-effectiveness and educational policy.
This book provides a summary of approaches to measuring the fiscal impact of practices in education and educational policy.
Levin, H. M., & McEwan, P. J. (2002). Cost-effectiveness and educational policy. Larchmont, NY: Eye on Education.
A Theoretical Framework for Data-Driven Decision Making
The purpose of this paper is to provide a model for more effective data-driven decision making in classrooms, schools, and districts.
Mandinach, E. B., Honey, M., & Light, D. (2006, April). A theoretical framework for data-driven decision making. Paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA.
A Theory-Based Meta-Analysis of Research on Instruction.
This research synthesis examines instructional research in a functional manner to provide guidance for classroom practitioners.
Marzano, R. J. (1998). A theory-based meta-analysis of research on instruction. Aurora, CO: Mid-Continent Regional Educational Laboratory.
Classroom Instruction That Works: Research Based Strategies For Increasing Student Achievement
This is a study of classroom management on student engagement and achievement.
Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement. Alexandria, VA: Association for Supervision and Curriculum Development.
Improving education through standards-based reform.
This report offers recommendations for the implementation of standards-based reform and outlines possible consequences for policy changes. It summarizes both the vision and intentions of standards-based reform and the arguments of its critics.
McLaughlin, M. W., & Shepard, L. A. (1995). Improving education through standards-based reform. A report by the National Academy of Education Panel on Standards-Based Education Reform. Stanford, CA: National Academy of Education, Stanford University.
The Foundations of Educational Effectiveness
This book looks at research and theoretical models used to define educational effectiveness with the intent on providing educators with evidence-based options for implementing school improvement initiatives that make a difference in student performance.
Scheerens, J., & Bosker, R. J. (1997). The foundations of educational effectiveness. Oxford, UK: Pergamon.
Formative assessment: A systematic review of critical teacher prerequisites for classroom practice.
Formative assessment has the potential to support teaching and learning in the classroom. This study reviewed the literature on formative assessment to identify prerequisites for effective use of formative assessment by teachers. The review sought to address the following research question: What prerequisites need to be in place for teachers to use formative assessment in their classroom practice?
Schildkamp, K., van der Kleij, F. M., Heitink, M. C., Kippers, W. B., & Veldkamp, B. P. (2020). Formative assessment: A systematic review of critical teacher prerequisites for classroom practice. International Journal of Educational Research, 103, 101602.
Effective Teachers Make a Difference
This analysis examines the available research on effective teaching, how to impart these skills, and how to best transition teachers from pre-service to classroom with an emphasis on improving student achievement. It reviews current preparation practices and examines the research evidence on how well they are preparing teachers.
States, J., Detrich, R., & Keyworth, R. (2012). Effective teachers make a difference. In Education at the Crossroads: The State of Teacher Preparation (Vol. 2, pp. 1–46). Oakland, CA: The Wing Institute.
Keeping RTI on track: How to identify, repair and prevent mistakes that derail implementation
Keeping RTI on Track is a resource to assist educators in overcoming the biggest problems associated with false starts or implementation failure. Each chapter in this book calls attention to a common error, describing how to avoid the pitfalls that lead to false starts, how to determine when you're in one, and how to get back on the right track.
VanDerHeyden, A. M., & Tilly, W. D. (2010). Keeping RTI on track: How to identify, repair and prevent mistakes that derail implementation. Horsham, PA: LRP Publications.
Productive teaching
This literature review examines the impact of various instructional methods
Walberg H. J. (1999). Productive teaching. In H. C. Waxman & H. J. Walberg (Eds.) New directions for teaching, practice, and research (pp. 75-104). Berkeley, CA: McCutchen Publishing.
Teacher use of interventions in general education settings: Measurement and analysis of the independent variable
This study evaluated the effects of performance feedback on increasing the quality of implementation of interventions by teachers in a public school setting.
Witt, J. C., Noell, G. H., LaFleur, L. H., & Mortenson, B. P. (1997). Teacher use of interventions in general education settings: Measurement and analysis of the independent variable. Journal of Applied Behavior Analysis, 30(4), 693–696.
The Cost-Effectiveness of Five Policies for Improving Student Achievement
This study compares the effect size and return on investment of five policies: rapid assessment, increased spending, voucher programs, charter schools, and increased accountability.
Yeh, S. S. (2007). The cost-effectiveness of five policies for improving student achievement. American Journal of Evaluation, 28(4), 416-436.
Measurably superior instruction means close, continual contact with the relevant outcome data: Revolutionary!
The chapter looks at the critical importance of how to effectively measure performance to achieve the greatest impact.
Bushell, D., & Baer, D. M. (1994). Measurably superior instruction means close, continual contact with the relevant outcome data: Revolutionary! Behavior analysis in education: Focus on measurably superior instruction, 3-10.
Synthesis of research on reviews and tests.
This study looks at the use of properly spaced reviews and tests as a practice that can dramatically improve classroom learning and retention.
Dempster, F. N. (1991). Synthesis of Research on Reviews and Tests. Educational Leadership, 48(7), 71-76.
Using Data-Based Inquiry and Decision Making To Improve Instruction.
This study examines six schools that used a data-based inquiry and decision-making process to improve instruction.
Feldman, J., & Tung, R. (2001). Using Data-Based Inquiry and Decision Making To Improve Instruction. ERS Spectrum, 19(3), 10-19.
Using Student Achievement Data to Support Instructional Decision Making
The purpose of this practice guide is to help teachers and administrators use student achievement data to make instructional decisions.
Hamilton, L., Halverson, R., Jackson, S. S., Mandinach, E., Supovitz, J. A., & Wayman, J. C. (2009). Using Student Achievement Data to Support Instructional Decision Making. IES Practice Guide. NCEE 2009-4067. National Center for Education Evaluation and Regional Assistance.
Effective Behavior Support: A Systems Approach to Proactive School-wide Management
This study describes Effective Behavioral Support, a systems approach to enhancing the capacity of schools to adopt and sustain use of effective processes for all students.
Lewis, T. J., & Sugai, G. (1999). Effective Behavior Support: A Systems Approach to Proactive Schoolwide Management. Focus on Exceptional Children, 31(6), 1-24.
Making sense of data-driven decision making in education.
This paper uses research to show how schools and districts are analyzing achievement test results and other types of data to make decisions to improve student success.
Marsh, J. A., Pane, J. F., & Hamilton, L. S. (2006). Making sense of data-driven decision making in education. Santa Monica, CA: RAND Corporation.
Measuring reading comprehension and mathematics instruction in urban middle schools: A pilot study of the Instructional Quality Assessment
The purpose of this research is to investigate the reliability and potential validity of ratings from the Instructional Quality Assessment.
Matsumura, L. C., Slater, S. C., Junker, B., Peterson, M., Boston, M., Steele, M., & Resnick, L. (2006). Measuring Reading Comprehension and Mathematics Instruction in Urban Middle Schools: A Pilot Study of the Instructional Quality Assessment. CSE Technical Report 681. National Center for Research on Evaluation, Standards, and Student Testing (CRESST).
Data-based Decision Making in Education.
This book scrutinizes research from seven countries to answer the following questions: Why is data use important in schools? How does policy influence data use? Which factors enable effective data use? What are the effects of data use?
Schildkamp, K., Lai, M. K., & Earl, L. (2013). Data-based Decision Making in Education. Dordrecht, Netherlands: Springer.
Involving teachers in data-driven decision making: Using computer data systems to support teacher inquiry and reflection.
This paper outlines effective practices such as accountability reporting and user-friendly data access in the use of student data.
Wayman, J. C. (2005). Involving teachers in data-driven decision making: Using computer data systems to support teacher inquiry and reflection. Journal of Education for Students Placed at Risk, 10(3), 295-308.