The Morning Report
Get the news and information you need to take on the day.
If you just glanced at headlines in the Los Angeles Times and the Washington Post, you might be a bit confused. The Post declares, “Researchers fault L.A. Times methods in analysis of Calif. teachers.” The Times says, “Separate study confirms many Los Angeles Times findings on teacher effectiveness.”
Well, which is it? You might remember the story in question was one of the biggest and most controversial education stories last year. The Times published ratings for teachers based on what’s known as a “value-added” analysis comparing how their students scored on standardized tests from one year to the next.
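To make the concept concrete, here is a toy sketch of the general idea behind value-added scoring. This is not the Times' actual statistical model (which was more sophisticated and, as the Colorado researchers argue, still left out factors); the teacher names and scores below are hypothetical.

```python
# Toy illustration of the value-added idea: a teacher's score is the
# average amount by which their students' year-over-year test gains
# beat the districtwide average gain. Hypothetical data only.

def value_added(records):
    """records: list of (teacher, prior_year_score, current_year_score)."""
    district_gain = sum(c - p for _, p, c in records) / len(records)
    by_teacher = {}
    for teacher, prior, current in records:
        # Residual gain: how much this student's growth exceeded the norm.
        by_teacher.setdefault(teacher, []).append((current - prior) - district_gain)
    return {t: sum(g) / len(g) for t, g in by_teacher.items()}

# Hypothetical records: (teacher, last year's score, this year's score)
data = [
    ("Smith", 600, 640), ("Smith", 580, 625),
    ("Jones", 610, 620), ("Jones", 590, 605),
]
print(value_added(data))  # → {'Smith': 15.0, 'Jones': -15.0}
```

Even this toy version hints at the researchers' concern: with small classes and noisy tests, teachers near the district average can easily be misclassified.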
Now a new study out of the University of Colorado has pointed out some weaknesses in the Los Angeles Times analysis. As the Times explained (and emphasized in its headline), the report found dramatic differences in how elementary school teachers affected their students' test scores, echoing the newspaper's own findings. But, as the newspaper noted in a smaller, secondary headline, the study also took issue with parts of that analysis:
But they also said they found evidence of imprecision in the Times analysis that could lead to the misclassification of some teachers, especially among those whose performance was about average for the district.
The authors largely confirmed The Times’ findings for the teachers classified as most and least effective. But the authors also said that slightly more than half of all English teachers they examined could not be reliably distinguished from average.
The Washington Post reported that researchers found the Times analysis left out factors that could affect teacher ratings, such as school demographics and how classmates might influence scores. It also noted that the Colorado researchers “were funded by a policy center with some backing from teachers unions,” which were vehemently opposed to releasing the data. The Post did a nice job of boiling down why all this matters:
The back and forth on the Los Angeles ratings underscores that many practical questions remain about value-added analysis even as its influence grows. Among them: How to judge teachers whose students do not take standardized tests in reading and math? How to account for class placement and team instruction? How to account for the many advantages of affluent students and the many disadvantages of the poor?
For more on the debate over value-added analysis and how to use data in schools, you can also check out my earlier article about how local schools are using similar information to alter the way they teach — but not the way they evaluate teachers:
The San Diego Unified school board, which is strongly backed by the teachers union, has panned the idea of rating teachers with test scores, saying it reduces teaching to test prep. The district doesn’t calculate scores for teachers … Yet San Diego Unified has embraced the idea of using similar information to help schools improve. It wants to measure schools by how much they help each child grow. Some schools are already crunching test scores to measure growth. They don’t use the calculations to rate teachers; they use them to study what gets good results.
Update: You can now read the full report online. The researchers also issued a fact sheet arguing that the Los Angeles Times article mischaracterized the study's findings and that the Colorado study “confirms very few of the Times’ conclusions.”
Please contact Emily Alpert directly at firstname.lastname@example.org or 619.550.5665 and follow her on Twitter: twitter.com/emilyschoolsyou.