Attorney Tyler Cramer is guest blogging today about how schools could link test scores to educators and what some of the complications might be in doing so. This is his second post for the day; if you missed his first one, you can check it out here. Send him your comments, questions and counterarguments. — Emily Alpert

Maybe you haven’t heard, but the 2009 Standardized Testing and Reporting (STAR) Program results were released yesterday. Yet again this year, I cringed when the California Department of Education stated that the results from the California Standards Test program “show California students overall continue to make steady academic progress.” I only wish it were so. Unfortunately, no one knows.

Why? It’s actually rather simple. Take a hypothetical K-5 elementary school with, say, 600 students, or 100 kids in each grade. When the CDE uses the word “progress,” it’s comparing that school’s 2009 results against its 2008 results.

Wait a second, though. The STAR test is only given in grades 2 through 11, so right there we take the 200 kindergartners and 1st graders out of the calculation.

Now, think about it. Of the 400 kids tested in grades 2 through 5 in 2008, 100 were 5th graders who graduated. That’s 25 percent of the tested population who aren’t there for the 2009 test. Add to that student mobility, especially in under-resourced communities, where another 20 percent or more of students move to another school every year. So at the very best you’re comparing two populations composed of at least 45 percent different kids.
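The arithmetic above can be sketched in a few lines. This is purely illustrative, using the post’s hypothetical numbers (400 tested students, 100 graduating 5th graders, 20 percent mobility applied to the whole tested group):

```python
# Hypothetical year-over-year comparison from the post; all numbers
# are the post's illustrative assumptions, not real school data.

tested_2008 = 400        # grades 2-5, 100 kids per grade
graduated = 100          # 2008's 5th graders leave after the test
mobility_rate = 0.20     # rough share of students who change schools yearly

movers = int(tested_2008 * mobility_rate)            # 80 students
remaining = tested_2008 - graduated - movers         # 220 students

# Fraction of the 2008 test-takers who are simply gone by the 2009 test:
different = 1 - remaining / tested_2008
print(f"At least {different:.0%} of the compared kids are different")
# -> At least 45% of the compared kids are different
```

And that 45 percent is a floor: it ignores the new 2nd graders and transfer students who enter the 2009 pool, which push the overlap down further.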

Do apples and oranges come to mind?

“Progress,” “gain,” “growth” and the like can only be measured when you are testing the proficiencies of the same students over time, i.e., pre- and post-testing. Assessment systems that do that are referred to as “longitudinal data systems.” More importantly, good longitudinal data systems can isolate the effects of instructional inputs.

For example, assume that every student in our fictional elementary school ended 2nd grade in 2008 with a test score that showed he or she was performing at the 50th percentile in math. Also assume there is no student mobility, and that exactly the same 100 kids go on to 3rd grade at the same school, split into five classes of 20 students each.

Each class has the same distribution of demonstrated learning abilities, but 20 lucky students were randomly assigned to Mrs. Mathy, another 20 to Mr. Remarc, and the remaining 60 to three other teachers. After the 2009 STAR Program CST was administered, Mrs. Mathy’s 20 students each scored at the 65th percentile (a 15-percentile gain). Mr. Remarc’s students, however, scored at the 45th percentile (a five-percentile drop). The other 60 stayed at the 50th percentile.
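This is exactly the kind of calculation a longitudinal system does: match each student’s new score to that same student’s old score, then average the gains by teacher. Here is a minimal sketch using the post’s made-up numbers; the teacher names and the data layout are illustrative assumptions:

```python
# Hypothetical student records: same 100 kids tracked from 2008 ("pre")
# to 2009 ("post"), grouped by their randomly assigned 3rd-grade teacher.
students = (
    [{"teacher": "Mrs. Mathy",  "pre": 50, "post": 65}] * 20 +
    [{"teacher": "Mr. Remarc",  "pre": 50, "post": 45}] * 20 +
    [{"teacher": "Three others", "pre": 50, "post": 50}] * 60
)

# Collect each student's own gain, bucketed by teacher.
gains = {}
for s in students:
    gains.setdefault(s["teacher"], []).append(s["post"] - s["pre"])

for teacher, g in gains.items():
    print(f"{teacher}: average gain of {sum(g) / len(g):+.0f} percentile points")
# -> Mrs. Mathy: average gain of +15 percentile points
# -> Mr. Remarc: average gain of -5 percentile points
# -> Three others: average gain of +0 percentile points
```

The key point is that the comparison is within each student, not between this year’s class and last year’s different class.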

If you ask me, Mrs. Mathy should be rewarded for the outstanding progress made by her students. That’s the power of longitudinal data.
