Categories for Decision Making
March 23, 2020
The Adoption of Curricula in K-12 Schools: An Exploratory Qualitative Analysis. This exploratory qualitative study investigated how school districts engage in the process of adopting curricula for use in grades K-12 and what factors influence administrators when making adoption decisions. The author and a graduate student used a semi-structured interview protocol to interview 21 building- and district-level administrators employed by an economically and geographically diverse sample of school districts in the United States. After completing the interviews, the author and four researchers employed thematic analysis to analyze the data. Results suggest that the curriculum adoption process varies between school districts and, for some, from one curriculum adoption to the next. Most respondents reported engaging in at least one of the following activities during the adoption process: gathering information, initial screening, engaging committees, reviewing potential programs, piloting, and obtaining approval. The factors that influence administrators’ adoption decisions fall into four categories: alignment, need, evidence, and aspects of programs. Based on the data obtained in this study, the author proposes a sequence of activities to follow during a curriculum adoption.
Citation: Rolf, K. (2020). The Adoption of Curricula in K-12 Schools: An Exploratory Qualitative Analysis. Utah State University. https://drive.google.com/file/d/1O_rvmZKGE8rCf_nVTdOwgy4AVk-Gw6hH/view
Link: https://drive.google.com/file/d/1O_rvmZKGE8rCf_nVTdOwgy4AVk-Gw6hH/view
March 12, 2020
What Works Clearinghouse: Procedures Handbook, Version 4.1. The WWC systematic review process offers educators and policymakers a mechanism to ensure consistent, objective, and transparent standards and procedures for assessing the impact of practices and interventions. The revised procedures handbook includes the following changes: (1) Removal of the “substantively important” designation; (2) Addition of standard error calculations for all effect sizes; (3) Addition of single-case design (SCD) procedures for synthesizing SCD study findings using design-comparable effect sizes; (4) Addition of methods to estimate effects from regression discontinuity designs (RDDs); (5) Clarification of the decision rules determining the use of difference-in-differences effect sizes; (6) Synthesis of studies within intervention reports using a fixed-effects model; (7) Modification of the intervention report effectiveness rating; and (8) Revised levels of evidence in practice guides.
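Change (6) concerns how findings from multiple studies are combined within an intervention report. The sketch below illustrates the general idea of a fixed-effects, inverse-variance synthesis with hypothetical effect sizes; it is not the WWC’s implementation, which applies additional adjustments (such as corrections for clustering) not shown here.

```python
# Minimal sketch of a fixed-effects synthesis via inverse-variance
# weighting. The effect sizes and standard errors are hypothetical,
# not drawn from any WWC intervention report.

def fixed_effects_synthesis(effects, standard_errors):
    """Pool study effect sizes, weighting each by 1 / SE^2."""
    weights = [1.0 / se**2 for se in standard_errors]
    total_weight = sum(weights)
    pooled = sum(w * g for w, g in zip(weights, effects)) / total_weight
    pooled_se = (1.0 / total_weight) ** 0.5
    return pooled, pooled_se

# Three hypothetical studies of one intervention
g, se = fixed_effects_synthesis([0.21, 0.35, 0.10], [0.08, 0.12, 0.10])
print(f"pooled effect = {g:.2f}, SE = {se:.2f}")
```

Note that more precise studies (smaller standard errors) dominate the pooled estimate, which is the defining property of inverse-variance weighting.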
Citation: What Works Clearinghouse. (2020). What Works Clearinghouse: Procedures Handbook, Version 4.1. Princeton, NJ: What Works Clearinghouse. https://files.eric.ed.gov/fulltext/ED602035.pdf
Link: https://files.eric.ed.gov/fulltext/ED602035.pdf
March 6, 2020
What do surveys of program completers tell us about teacher preparation quality? Over the past twenty years, educators, policymakers, and the public have increasingly expressed interest in finding out which teacher preparation programs (TPPs) produce the best teachers. One tool offered to identify exemplary pre-service training is the satisfaction survey of graduates. A 2019 teacher survey finds, “only 30 percent of general education teachers feel ‘strongly’ that they can successfully teach students with learning disabilities, and only 50 percent believe those students can reach grade-level standards.” Such surveys highlight a misalignment between the intended outcomes of teacher preparation and the actual worth of the training teacher candidates receive. Given the potential importance of teacher surveys, it is imperative that policymakers and teacher educators better understand the efficacy of polling for providing program accountability and information for improving TPP performance.
This study provides a large-scale examination of whether new teachers’ perceptions of the quality of their TPP training are associated with, and predictive of, quality instruction. The study finds that perceptions of TPP quality are modestly associated with the effectiveness and retention of first- and second-year teachers. New teachers who perceived their training as supportive in critical skills were more productive on the job and more likely to remain in teaching after their first year in the classroom. Perceptions of supportive preparation were associated with extensive training in establishing orderly and positive classroom learning environments, communicating high expectations for students, and forming supportive relationships with all students. Teachers who received training in classroom management were better able to develop strategies for addressing conduct issues that arise on the job. This evidence suggests that TPPs should consider ways to enhance candidates’ opportunities to master classroom management, build relationships with students, and set high expectations for student success.
Citation: Bastian, K. C., Sun, M., & Lynn, H. (2018). What Do Surveys of Program Completers Tell Us About Teacher Preparation Quality? Journal of Teacher Education, 0022487119886294.
Link: https://journals.sagepub.com/doi/abs/10.1177/0022487119886294
March 4, 2020
Does Peer Assessment Promote Student Learning? A Meta-Analysis. Peer assessment has become a popular education intervention. In peer assessment, a student’s work is evaluated by a peer rather than by the teacher. Extensive research is available on the reliability and validity of peer assessment ratings, but few studies have examined the impact of peer assessment on student outcomes. This meta-analysis examines the effect sizes found in 58 studies and finds a positive effect of peer assessment on student outcomes. The study also examines the specific elements that make up the practice, including rater training, rating format, rating criteria, and the frequency of peer assessment, to identify those with the greatest impact on student performance. The most critical factor of those examined is rater training.
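The effect sizes pooled in a meta-analysis like this one are typically standardized mean differences. As a point of reference, here is a minimal sketch of Hedges’ g, a common such measure, computed from hypothetical group statistics (not data from the 58 studies):

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference with a small-sample correction."""
    # Pooled standard deviation across treatment and control groups
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    d = (mean_t - mean_c) / pooled_sd           # Cohen's d
    correction = 1 - 3 / (4 * (n_t + n_c) - 9)  # Hedges' correction
    return d * correction

# Hypothetical scores: a peer-assessment class vs. a comparison class
print(round(hedges_g(78.4, 74.1, 10.2, 9.8, 35, 33), 2))  # ~0.42
```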
Citation: Li, H., Xiong, Y., Hunter, C. V., Guo, X., & Tywoniw, R. (2020). Does peer assessment promote student learning? A meta-analysis. Assessment & Evaluation in Higher Education, 45(2), 193-211.
Link: https://www.researchgate.net/profile/Hongli_Li4/publication/333571244_Does_peer_assessment_promote_student_learning_A_meta-analysis/links/5d276f9d92851cf4407a70c2/Does-peer-assessment-promote-student-learning-A-meta-analysis
February 21, 2020
The Current Controversy About Teaching Reading: Comments for Those Left with Questions After Reading the New York Times Article. This commentary by Daniel Willingham discusses the current knowledge base on effective reading instruction in the context of a recent New York Times article on the topic. For over twenty years, the core components of effective reading instruction (phonics, phonemic awareness, vocabulary, fluency, and comprehension) have been available to educators. Despite ample evidence, a large number of teacher preparation programs do not adequately train teachers in the best available evidence, with many relying on an approach known as “balanced literacy.” Balanced literacy was offered as a compromise to end the conflict between those advocating for phonics instruction and those promoting immersion in relevant texts designed to motivate students’ learning. In practice, when balanced literacy is implemented, phonics instruction is frequently left out of the curriculum. Willingham concludes that decoding is the most thoroughly researched aspect of reading, that its efficacy is well documented, and that it is about time educators took advantage of this work.
Citation: Willingham, D. (2020). The Current Controversy About Teaching Reading: Comments for Those Left with Questions After Reading the New York Times Article. University of Virginia: Daniel Willingham-Science & Education. http://www.danielwillingham.com/daniel-willingham-science-and-education-blog
Link: http://www.danielwillingham.com/daniel-willingham-science-and-education-blog
New York Times Article: An Old and Contested Solution to Boost Reading Scores: Phonics. https://www.nytimes.com/2020/02/15/us/reading-phonics.html
January 23, 2020
Why is this question important? Given the limited resources available for the education of children, it is important to select the interventions with the greatest impact that we can afford. Using Stuart Yeh’s effectiveness-cost ratio formula, a rough comparison can be drawn between class-size reduction and other educational interventions.
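A minimal sketch of such a comparison, assuming Yeh’s ratio of effect size (in standard deviations of achievement) to cost per student; the figures below are hypothetical placeholders, not Yeh’s published estimates.

```python
# Effectiveness-cost ratio: effect size (SD of achievement) per dollar
# spent per student. All values are hypothetical placeholders.

def effectiveness_cost_ratio(effect_size_sd, cost_per_student):
    """Higher ratios indicate more achievement gained per dollar."""
    return effect_size_sd / cost_per_student

interventions = {
    "class-size reduction": (0.10, 1500.0),       # hypothetical
    "rapid formative assessment": (0.30, 100.0),  # hypothetical
}
for name, (effect, cost) in interventions.items():
    ratio = effectiveness_cost_ratio(effect, cost)
    print(f"{name}: {ratio:.5f} SD per dollar per student")
```

Even with rough inputs, ratios of this kind make it easy to see when an inexpensive intervention with a modest effect outperforms, per dollar, an expensive one with a similar or smaller effect.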
Citation: Yeh, S. S. (2007). The Cost-Effectiveness of Five Policies for Improving Student Achievement. American Journal of Evaluation, 28(4), 416-436.
Link: https://www.winginstitute.org/how-does-class-size
January 15, 2020
Overview of Value-Added Research in Education: Reliability, Validity, Efficacy, and Usefulness. Value-added modeling (VAM) is a statistical approach that provides quantitative performance measures for monitoring and evaluating schools and other aspects of the education system. VAM comprises a collection of complex statistical techniques that use standardized test scores to estimate the effects of individual schools or teachers on student performance. Although the VAM approach holds promise, serious technical issues have been raised regarding VAM as a high-stakes instrument in accountability initiatives. The key question remains: Can VAM estimates derived from standardized test scores serve as a proxy for teaching quality? To date, research on the efficacy of VAM is mixed: one body of research supports VAM, while another suggests that model estimates are unstable over time and subject to bias and imprecision. A second concern is the sole use of standardized tests as the measure of student performance. Despite these valid concerns, VAM has been shown to be valuable in performance improvement efforts when used cautiously in combination with other measures of student performance, such as end-of-course tests, final grades, and structured classroom observations.
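To make the core idea concrete, here is a toy sketch of one common VAM form, a covariate-adjustment model: regress current test scores on prior scores plus teacher indicators, and read each teacher’s estimated contribution off the indicator coefficients. This uses simulated data and a deliberately stripped-down model; operational VAM systems add student and classroom covariates, multiple years of scores, and shrinkage estimators.

```python
# Toy covariate-adjustment value-added model on simulated data.
# Not any specific operational VAM; illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_students, n_teachers = 300, 10
teacher = rng.integers(0, n_teachers, n_students)  # teacher assignment
prior = rng.normal(0, 1, n_students)               # prior-year score
true_va = rng.normal(0, 0.2, n_teachers)           # true teacher effects
current = 0.7 * prior + true_va[teacher] + rng.normal(0, 0.5, n_students)

# Design matrix: prior score plus one indicator column per teacher
dummies = (teacher[:, None] == np.arange(n_teachers)).astype(float)
X = np.column_stack([prior, dummies])
coef, *_ = np.linalg.lstsq(X, current, rcond=None)

estimated_va = coef[1:] - coef[1:].mean()  # center teacher effects at zero
print(np.round(estimated_va, 2))
```

The instability the summary mentions shows up readily in sketches like this one: re-simulating with a new random seed (a stand-in for a new cohort of students) shifts the estimated teacher effects.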
Citation: Cleaver, S., Detrich, R. & States, J. (2020). Overview of Value-Added Research in Education: Reliability, Validity, Efficacy, and Usefulness. Oakland, CA: The Wing Institute. https://www.winginstitute.org/staff-value-added.
Link: https://www.winginstitute.org/staff-value-added
December 16, 2019
On the Reality of Dyslexia. This paper assesses research on the topic of dyslexia. Willingham’s piece is a response to comments by literacy researcher Dick Allington questioning the legitimacy of the label “dyslexia.” Answering this question is more than an academic exercise: a clearer understanding of dyslexia is crucial if educators are to understand why 10% of students struggle to master reading, a skill essential to success in academic learning. Willingham reviews the etiology of the disorder and concludes that the ability to read is the product of the home environment, instruction at school, and the child’s genetics. Dyslexia is a problem in the child’s ability to successfully master the skills of reading and is closely related to fluency in language. Dyslexia is not like measles, where you are either ill or you aren’t; it is more like high blood pressure, where individuals fall along a bell curve. This continuum view is consistent with the hypothesis that the disorder arises from a complex interaction among multiple causes. Although it does not have a single source, dyslexia can be successfully remediated through evidence-based language and reading instruction.
Citation: Willingham, D. (2019). On the Reality of Dyslexia. Charlottesville, VA. http://www.danielwillingham.com/daniel-willingham-science-and-education-blog/on-the-reality-of-dyslexia
Link: http://www.danielwillingham.com/daniel-willingham-science-and-education-blog/on-the-reality-of-dyslexia
November 14, 2019
Using Resource and Cost Considerations to Support Educational Evaluation: Six Domains. Assessing cost along with the effectiveness of an initiative is common in public policy decision making but is frequently missing in education decision making. Understanding the cost-effectiveness of an intervention is essential if educators are to maximize its impact given limited budgets. Education is full of examples of practices, such as class-size reduction and accountability through high-stakes testing, that produce minimal results while consuming significant resources. It is vital for those making critical decisions to understand which practice best meets the needs of a school and its students and can be implemented with the available resources. The best way to do this is through a cost-effectiveness analysis (CEA).
A CEA requires an accurate estimate of all the added resources needed to implement the new intervention. Costs commonly associated with education interventions include added personnel, professional development, classroom space, technology, and expenses to monitor effectiveness. The second essential ingredient of a CEA is the selection of a practice supported by research; in the past twenty years, the quality and quantity of research supporting different education practices have increased significantly. A CEA then compares the extra expenditures required to implement the new intervention, relative to current practice, against targeted education outcomes such as standardized test scores, graduation rates, or student grades (a rough sketch of this comparison appears below).
The focus of this essay is on which economic methods can complement and enhance impact evaluations. The authors propose the use of six domains to link intervention effectiveness to the best technique needed to determine which practice is the most cost-effective choice. The six domains outlined in the paper are outcomes, treatment comparisons, treatment integrity, the role of mediators, test power, and meta-analysis. This paper provides examples of how analyzing the costs associated with these domains can complement and augment practices in evaluating research in the field of education.
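As referenced above, here is a minimal sketch of the incremental comparison at the heart of a CEA, tallying the added cost of a new intervention and dividing by the added effect. The cost categories follow the list in the summary; every figure is a hypothetical placeholder.

```python
# Incremental cost-effectiveness: extra dollars per student divided by
# extra achievement gained (in SD). All figures are hypothetical.

def incremental_ce_ratio(new_cost, old_cost, new_effect, old_effect):
    """Dollars per student for each additional SD of achievement."""
    return (new_cost - old_cost) / (new_effect - old_effect)

# Added resources for the new intervention, per student per year
new_cost = sum([
    400.0,  # added personnel
    120.0,  # professional development
    60.0,   # classroom space
    80.0,   # technology
    40.0,   # monitoring effectiveness
])

ratio = incremental_ce_ratio(new_cost, old_cost=250.0,
                             new_effect=0.25, old_effect=0.10)
print(f"${ratio:,.0f} per student per additional SD of achievement")
```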
Citation: Belfield, C. R., & Brooks Bowden, A. (2019). Using Resource and Cost Considerations to Support Educational Evaluation: Six Domains. Educational Researcher, 48(2), 120-127.
Link: https://edre.uark.edu/_resources/pdf/er2018.pdf
October 24, 2019
Small class sizes for improving student achievement in primary and secondary schools: a systematic review. This Campbell Collaboration systematic review examines the impact of class size on academic achievement, summarizing findings from 148 reports from 41 countries. Reducing class size is viewed by many educators as an essential tool for improving student performance and is especially popular among teachers, but smaller classes come at a steep cost, and education policymakers see increasing class size as a way to control education budgets. Despite the real policy and practice implications, research on the educational effects of class-size differences on student performance is mixed. This meta-analysis suggests at best only a small impact on reading achievement and finds a small negative effect on mathematics. Given that class-size reduction is costly and minimally effective, aren’t there better solutions that are cost-effective, benefit students, and help teachers succeed in a very challenging profession?
Citation: Filges, T., Sonne‐Schmidt, C. S., & Nielsen, B. C. V. (2018). Small class sizes for improving student achievement in primary and secondary schools: a systematic review. Campbell Systematic Reviews, 14(1), 1-107.
Link: https://onlinelibrary.wiley.com/doi/full/10.4073/csr.2018.10