Heather Horn at The Atlantic has a round-up. From the original Los Angeles Times report:
Seeking to shed light on the problem, The Times obtained seven years of math and English test scores from the Los Angeles Unified School District and used the information to estimate the effectiveness of L.A. teachers — something the district could do but has not.
The Times used a statistical approach known as value-added analysis, which rates teachers based on their students’ progress on standardized tests from year to year. Each student’s performance is compared with his or her own in past years, which largely controls for outside influences often blamed for academic failure: poverty, prior learning and other factors.
Though controversial among teachers and others, the method has been increasingly embraced by education leaders and policymakers across the country, including the Obama administration.
In coming months, The Times will publish a series of articles and a database analyzing individual teachers’ effectiveness in the nation’s second-largest school district — the first time, experts say, such information has been made public anywhere in the country.
This article examines the performance of more than 6,000 third- through fifth-grade teachers for whom reliable data were available.
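The value-added idea the Times describes — predict each student's score from their own prior-year score, then credit the teacher with the average over- or under-shoot — can be sketched in a few lines. This is a toy illustration under simplifying assumptions (one prior-year score, plain least squares, no shrinkage or demographic controls); the Times' actual model is more elaborate, and the function names here (`fit_line`, `value_added`) are mine, not from any real system.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def value_added(students):
    """students: list of (prior_score, current_score, teacher_id).

    Fits one prediction line across all students, then returns each
    teacher's mean residual: how far their students landed, on average,
    above or below the score predicted from prior performance.
    """
    xs = [prior for prior, _, _ in students]
    ys = [current for _, current, _ in students]
    a, b = fit_line(xs, ys)
    residuals = {}
    for prior, current, tid in students:
        residuals.setdefault(tid, []).append(current - (a + b * prior))
    return {tid: sum(r) / len(r) for tid, r in residuals.items()}

# Toy data: teacher A's students beat their predicted scores,
# teacher B's fall short of them.
students = [(50, 55, 'A'), (60, 66, 'A'), (50, 48, 'B'), (60, 57, 'B')]
print(value_added(students))  # → {'A': 4.0, 'B': -4.0}
```

Because each student serves as his or her own baseline, a teacher of low-scoring students can still show a large positive effect — which is the "controls for outside influences" claim in the excerpt above.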
The United Teachers of Los Angeles was quick to blast the Times for its report, according to SCPR:
Unfair, said a statement released by the United Teachers of Los Angeles today. “It is the height of journalistic irresponsibility to make public these deeply flawed judgments about a teacher’s effectiveness,” it said. “The database will cause chaos at school sites, as parents scramble to get their children into classes taught by teachers labeled as ‘effective’ by a newspaper — not by education professionals,” UTLA said, emphasizing the word “newspaper” in italics. The union said the result is a public, incomplete and inaccurate picture of a teacher’s effectiveness.
Chad Aldeman at The Quick and The Ed:
Have pity on the individual teachers for this public outing, but, at the same time, don’t blame the Times for what they’re doing. The teachers union has pressured the district against using value-added measures in teacher performance evaluations, and only now are they moving forward together. The district has been complicit for years, and then took the easy way out and gave the data to a newspaper. And, in an ironic twist of fate, the newspaper could publish the value-added results precisely because they were not part of teacher personnel files. Those are private and cannot be released publicly.
In contrast, Tennessee has been using a value-added model since the late 1980s, and every year since the mid-1990s every single eligible teacher has received a report on their results. When these results were first introduced, teachers were explicitly told their results would never be published in newspapers and that the data might be used in evaluations. In reality, they had never really been used in evaluations until the state passed a law last January requiring the data to make up 35 percent of a teacher’s evaluation. This bill, and 100% teacher support for the state’s Race to the Top application that included it, was a key reason the state won a $500 million grant in the first round.
Tennessee is a good comparison, because here is a place with longstanding, low-stakes use of the data. The data will now have much higher stakes attached to it, but there wasn’t nearly the acrimony that’s happening now in LA. That’s because, to a large extent, LAUSD has sat on this information for so long without doing anything with it. Kudos to the intrepid reporter for digging it out and making a story of it, but the fact that it’s been buried for so long and is only seeing the light of day in this manner has made it that much more controversial. LAUSD could’ve avoided all the headache by doing something with the data themselves years ago. That should’ve started with letting the teachers see their own data, because they are interested in it. The teachers quoted in the Times articles and the 2,000+ teacher requests the newspaper has received since the story’s release suggest that teachers do want to know how they perform on these measures.
Instead of a methodical process where teachers slowly become used to seeing their data and therefore comfortable with its use, LA now has a situation where many people are unfamiliar and uncomfortable with the data at the same time there’s suddenly pent-up demand from teachers, parents, and the public to see it.
When researchers show distributions of scores, they often show error bands to indicate “the inherent imprecision,” as Felch, Strong, and Smith wrote. For example, see the figure from a 2000 paper by Kenneth Rowe on value-added measures.

The point here is that showing imprecision is easy to do in a way that is professionally competent. Is that what the L.A. Times shows in its database? Here’s the chart for one teacher:
There are two graphing sins here: dequantification and an implication that the estimate for the teacher is infinitely accurate (or at least as accurate as the center of the diamond images). I don’t know what the Times editors and reporters thought they were doing by eliminating a scale, but this doesn’t remove the central problem of visually implying that the estimate of effectiveness is precise. Instead, it commits the sin of dequantification. To borrow from Edward Tufte, is the L.A. Times’ publication of these figures an act of reporting or finger-painting?
It also raises significant questions about the response to Jay Matthews. Was the Times deliberately trying to fudge what they were intending to do with the graphs, or are they really so incompetent an organization that they don’t have people who know how to design statistical figures and also didn’t check such a high-stakes display with people who do this professionally?
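The claim that "showing imprecision is easy" is quite literal: the error band around a teacher's estimate is one standard-error computation away from the point estimate itself. A minimal sketch, assuming a simple normal approximation on the teacher's mean residual (the function name is hypothetical, and this is not necessarily the interval construction Rowe's figure used):

```python
import math

def effect_with_band(residuals, z=1.96):
    """Return (estimate, low, high): a teacher's mean residual plus an
    approximate 95% interval. With a typical class of 20-30 students
    the band is wide -- which is exactly what a bare point marker,
    like the Times' diamond, hides from the reader."""
    n = len(residuals)
    mean = sum(residuals) / n
    var = sum((r - mean) ** 2 for r in residuals) / (n - 1)  # sample variance
    se = math.sqrt(var / n)  # standard error of the mean
    return mean, mean - z * se, mean + z * se

# Four students' residuals (points above/below their predicted scores):
est, lo, hi = effect_with_band([3.0, 5.0, 4.0, 4.0])
print(est, lo, hi)  # a +4-point estimate with a band spanning roughly 3.2 to 4.8
```

Plotted, those three numbers are a dot with whiskers. Dropping the whiskers, and the scale, is the "dequantification" being objected to here.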
Sara Mead at Education Week:
The reality is that, even as value-added student test score data has emerged as the center of current debates over teacher evaluation, it’s only available and relevant for a fraction of the teachers in our public schools today. There is currently no value-added data for kindergarten and early elementary teachers, teachers in non-core subjects, or high school teachers in most places. My brother-in-law, who teaches middle school band and drama, and sister, who teaches high school composition and literature, do not have value-added data.
Some critics see this as an argument against new teacher evaluation systems that incorporate data on student performance. I see it the opposite way: The way we currently evaluate teachers is deeply flawed, not helpful to them or students, and there are lots of things we could do to move towards a more effective system of evaluating and developing teachers. Where we have value-added data as a source of information to inform teacher evaluations, we should use it. But since it’s only available for a subset of teachers, and therefore only a small piece of any meaningful solution to teacher evaluation, we shouldn’t let debate over value-added or the various methodologies derail the broader effort to create better ways of evaluating teachers’ effectiveness and using that data to inform professional development and staffing decisions.

We also shouldn’t pretend–as I sometimes fear my reform colleagues do–that value-added data is some kind of panacea that provides perfect information about teacher effectiveness. And we should put a lot more effort into developing and using validated and reliable observational tools, such as the Classroom Assessment Scoring System (CLASS), that look at teacher classroom behaviors and measure the extent to which teachers are implementing behaviors linked to improved student outcomes. (I’m even more concerned that the observational rubrics many districts and states will put into place under their proposed evaluation systems have not yet been validated than I am with any of the issues related to use of value-added data.)
Jack Shafer at Slate:
These conclusions are so sensible, so obvious, so intuitive that only a union official or education bureaucrat could possibly dispute them. Oh, the Economic Policy Institute took its shot, calling teacher assessment based on standardized-test results just “one piece of information” used in a “comprehensive evaluation.”
By doing something LAUSD should have done in the first place, the Times has shamed the cowardly school district into performing its own “value-added analysis” of the data. So far, so good. But what does the school district intend to do with these scores? Release them to the public? No. It’s going to dispense them confidentially to teachers in the fall. For all the good that will do parents and teachers, why doesn’t the school district play ostrich and dig a big hole in Playa del Rey and bury the scores?
Let’s hope the Times stays on this story—and that it or some other publication uses the California Public Records Act to publish these new, LAUSD-generated scores. If you can’t grade the graders, whom can you grade?