I truly believe that it is possible to have a standardized test that does at least a decent job of measuring student achievement. That being said, I have yet to see one that does.
I snapped a quick pic of a CAPT (Connecticut's standardized test of choice) practice sheet that was left sitting by the copying machine on Friday.¹
My favorite part of this? It includes the little bubble-it-in grid. Not because this particular worksheet gets scanned; it's just practice so students know how to fill in bubbles. As if there's nothing more important in our students' lives than learning these valuable life skills (Objective A.12.34: Students will display proper usage of No. 2 pencils and bubbling technique).
These tests always seem to be trying to trick students. The question asks for the answer to the nearest gallon. Doing the math without rounding gives you an answer of 12,990.6542 gallons. The bubble grid includes space for decimals. How many students will put in 12,990.65 and get it marked incorrect? What are they supposed to bubble in? 12,990? Would that get marked wrong because the last two decimal places aren't filled in? 12,990.00? That's technically incorrect,² but I can see how a 15-year-old who is really trying to follow directions to a "T" would answer that way.
What knowledge is this question testing? At first it seems to be a question about proportions (40 gal. sap : 1 gal. syrup), but then it throws this whole gallons-into-quarts thing in at the end. Thus the question only tells us whether a student understands both the conversion and the proportion; it can't determine whether they understand one but not the other. In other words, the test doesn't determine what a student actually knows with any degree of accuracy.
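To make that entanglement concrete, here's a minimal sketch. Only the 40:1 ratio and the standard quarts-per-gallon factor come from the question; the sap quantity is a made-up example number. The point is that the question quietly chains two separable skills, so one wrong bubble can't tell us which skill failed.

```python
QUARTS_PER_GALLON = 4   # standard US liquid measure
SAP_PER_SYRUP = 40      # 40 gal of sap : 1 gal of syrup, per the question

def syrup_gallons_from_sap(sap_gallons):
    """Proportion concept: gallons of sap -> gallons of syrup."""
    return sap_gallons / SAP_PER_SYRUP

def gallons_from_quarts(quarts):
    """Conversion concept: quarts -> gallons."""
    return quarts / QUARTS_PER_GALLON

# A student could fail either step independently, but a single bubbled-in
# final number collapses both steps into one right-or-wrong verdict.
print(syrup_gallons_from_sap(1000))  # 25.0 (hypothetical sap quantity)
print(gallons_from_quarts(100))      # 25.0
```

A question that actually diagnosed understanding would score the two steps separately.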
Furthermore, how important is it for students to memorize conversion factors? Especially in Imperial Volume Units? I can barely keep those straight (and have little reason to). Anytime I really need to convert these units, I pull up Google and use their handy unit conversion tool.
What's the big deal?
This isn't a problem unique to Connecticut. It's a general problem that is pervasive throughout the high-stakes standardized testing world. How can these tests accurately determine what students know if they're poorly written? How can districts be told they're failing their students if the instrument used to determine that students aren't learning has serious validity problems? How can the entire education system in the United States buy into these tests as the best way to measure success?
Who are the people that write these tests? Do they read the questions they've written?
¹ Sorry for the poor-quality images of the tests; they were taken with my camera phone.
² The reason 12,990.00 is incorrect is that it implies the measurement is accurate to the nearest one-hundredth of a gallon, which is a higher degree of accuracy than can be ascertained from the given information. In fact, the correct answer should be 13,000 gallons, because the total dollar value is given as the entirely vague "about $556,000." This implies that the final answer can only be accurate to the nearest thousand. Thus ends the quick & dirty lesson on significant figures.
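For the curious, the three rounding choices a rule-following student faces can be checked with Python's built-in round(), where a negative second argument rounds to tens, hundreds, thousands. (The unrounded value is the one from the question; everything else is just arithmetic.)

```python
raw = 12990.6542  # the unrounded answer from the question

nearest_gallon   = round(raw)       # 12991: what "nearest gallon" literally asks for
two_decimals     = round(raw, 2)    # 12990.65: what the decimal bubbles invite
nearest_thousand = round(raw, -3)   # 13000.0: what the precision of "about $556,000" justifies

print(nearest_gallon, two_decimals, nearest_thousand)
```

Three defensible answers to one question is not a great sign for the question.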