An excellent and unexpected article appeared in the business section of the New York Times on November 5, written by Eduardo Porter.
Despite bipartisan rhetoric about “closing the achievement gap” and giving every child an equal chance “regardless of zip code,” the evidence suggests that this is empty blather. What really matters is which schools get the best funding.
The United States is one of the few advanced nations where schools serving better-off children usually have more educational resources than those serving poor students, according to research by the Organization for Economic Cooperation and Development. Among the 34 O.E.C.D. nations, only in the United States, Israel and Turkey do disadvantaged schools have lower teacher/student ratios than schools serving more privileged students.
Andreas Schleicher, who runs the O.E.C.D.’s international educational assessments, put it to me this way: “The bottom line is that the vast majority of O.E.C.D. countries…
View original post 338 more words
Except when it doesn’t.
Seems like a fair trade, no?
Let the data orgy begin!
NAEP data have been released and I anticipate almost as much time and money will be wasted on the data as has been wasted on administering the tests, scoring the tests, and creating the handy web link to all that data—notably the predictable link to gaps. [For the record, most of these data charts can be prepared without any child ever taking tests; just use the socioeconomic data on each child and extrapolate.]
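The bracketed claim is easy to sketch. The toy model below is purely illustrative — the SES index, the slope, and the intercept are all made up — but it shows how a familiar "gap" chart can be generated from socioeconomic data alone, with no child ever taking a test.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch: 1,000 students whose only recorded attribute is a
# standardized socioeconomic index (SES). No test is administered.
n = 1000
ses = rng.normal(0.0, 1.0, n)

# Assumed linear extrapolation (slope and intercept invented for the sketch):
# higher SES -> higher predicted score.
predicted_score = 250 + 25 * ses

# Bin students into SES quartiles and chart the mean predicted score per bin,
# reproducing the usual gap bar chart without a single test item.
quartile = np.digitize(ses, np.quantile(ses, [0.25, 0.5, 0.75]))
gap_chart = [predicted_score[quartile == q].mean() for q in range(4)]
print([round(m, 1) for m in gap_chart])
```

By construction the quartile means rise monotonically with SES — which is the point: the shape of the chart is baked into the socioeconomic inputs before any test is scored.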
Take a moment and scroll through the gray space between myriad groups in both math and reading.
There, enjoy it?
While you’re at it, look at the historical gaps between males and females in the SAT.
Males on average outscore females in reading and math (though females outscore males in writing, the one section of the SAT that doesn’t count for anything anywhere, hmmmm).
The problem, of course, is…
View original post 612 more words
In case you hadn’t noticed evidence is mounting of a massive value-added and growth score train wreck. I’ve pointed out previously on this blog that there exist some pretty substantial differences in the models and estimates of teacher and school effectiveness being developed in practice across states for actual use in rating, ranking, tenuring and firing teachers – and rating teacher prep programs – versus the models and data that have been used in high profile research studies. This is not to suggest that the models and data used in high profile research studies are ready for prime time in high stakes personnel decisions. They are not. They reveal numerous problems of their own. But many if not most well-estimated, carefully vetted value-added models used in research studies a) test alternative specifications including use of additional covariates at the classroom and school level, or include various “fixed effects” to better…
View original post 1,514 more words
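The specification sensitivity the excerpt describes can be sketched in a few lines. Everything below is simulated and hypothetical — the data-generating numbers, the bare-bones "value added" estimator, and the choice of a school-level covariate are assumptions for illustration, not any state's actual model — but it shows how the same teachers get different ratings when the model specification changes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate students nested in teachers nested in schools, where school
# context is correlated with the school's socioeconomic profile.
n_schools, teachers_per_school, n_students = 5, 4, 30
n_teachers = n_schools * teachers_per_school
rows = []
for s in range(n_schools):
    ses_shift = rng.normal(0.0, 10.0)              # school-level SES shift
    context = 0.5 * ses_shift + rng.normal(0, 3)   # school context effect
    for t in range(teachers_per_school):
        t_effect = rng.normal(0.0, 2.0)            # "true" teacher effect
        for _ in range(n_students):
            prior = 250 + ses_shift + rng.normal(0, 15)
            score = prior + context + t_effect + rng.normal(0, 10)
            rows.append((s * teachers_per_school + t, s, prior, score))

teacher = np.array([r[0] for r in rows])
school = np.array([r[1] for r in rows])
prior = np.array([r[2] for r in rows])
score = np.array([r[3] for r in rows])
school_mean_prior = np.array(
    [prior[school == s].mean() for s in range(n_schools)]
)[school]

def value_added(X):
    # OLS residuals averaged per teacher: a bare-bones "value added" score.
    beta, *_ = np.linalg.lstsq(X, score, rcond=None)
    resid = score - X @ beta
    return np.array([resid[teacher == t].mean() for t in range(n_teachers)])

ones = np.ones_like(prior)
spec_a = value_added(np.column_stack([ones, prior]))            # prior only
spec_b = value_added(np.column_stack([ones, prior,
                                      school_mean_prior]))      # + school covariate
# The two specifications rate the same teachers differently.
print(not np.allclose(spec_a, spec_b))
```

Under spec A, school context leaks into each teacher's estimate; adding the school-level covariate in spec B absorbs part of it and shifts the ratings — which is exactly why the post argues that untested specifications are not ready for high-stakes personnel decisions.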