OP-ED | Behind the CMT Wizard’s Curtain
Connecticut’s standardized testing system ranks and labels public school students, schools, and districts in a way that purports to both evaluate student performance and identify students’ academic strengths and weaknesses. However, behind the Wizard’s curtain lie a lot of flying monkeys – flawed calculations that do very little to identify which skills students have and which they need to improve.
Consider the 2013 Connecticut Mastery Test, which was administered to all public school students in grades 3 through 8 in mathematics, reading, and writing. To simplify the discussion, consider only the most general case – students who took a common grade-level form of just these three tests, with no accommodations or exclusions.
Step 1. Translate each individual score into a scaled score of Advanced/Goal (100 points), Proficient (67 points), Basic (33 points), or Below Basic (0 points).
Step 2. Average all three scores to create an Individual Performance Index (Student IPI).
Step 3. Average all Student IPIs in a school to create a School Performance Index (SPI). Use the SPI, among other “indicators,” to label a school as Excelling, Progressing, Transitioning, Review, or Turnaround, not to mention a School of Distinction or a Focus school. And apply these fudge factors “to schools differently.” [Connecticut’s Accountability System: FAQ, p. 6]
Step 4. Average all Student IPIs in a district to create a District Performance Index (DPI).
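The four-step arithmetic above is simple enough to sketch in a few lines of code. The following is an illustrative reconstruction only, not the Education Department's actual method or software: the band names and point values come from Step 1, and everything else is the averaging described in Steps 2 through 4. The student and school figures are hypothetical.

```python
# Illustrative sketch of the CMT index arithmetic in Steps 1-4.
# Band-to-points mapping per Step 1; all else is plain averaging.
# A reconstruction for illustration, NOT the Department's actual code.

BAND_POINTS = {
    "Advanced/Goal": 100,
    "Proficient": 67,
    "Basic": 33,
    "Below Basic": 0,
}

def student_ipi(bands):
    """Step 2: average the three scaled scores (math, reading, writing)."""
    return sum(BAND_POINTS[b] for b in bands) / len(bands)

def performance_index(student_ipis):
    """Steps 3-4: average Student IPIs across a school (SPI) or district (DPI)."""
    return sum(student_ipis) / len(student_ipis)

# A hypothetical student at Goal in math, Proficient in reading, Basic in writing:
ipi = student_ipi(["Advanced/Goal", "Proficient", "Basic"])
print(round(ipi, 1))  # 66.7 -- reported to a tenth, though the inputs are coarse bands

# A hypothetical three-student school:
spi = performance_index([ipi, 100.0, 33.3])
print(round(spi, 1))
```

Note what the sketch makes plain: three coarse performance bands go in, and a number "precise" to a tenth of a point comes out, with the subject-level detail averaged away at every step.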
While the CMT results do provide information elsewhere (Subject IPIs) about student strengths and weaknesses in subject areas, the indexing and categorizing of test results into IPIs, SPIs, and DPIs aggregates so much data as to make these figures meaningless for that purpose. As an analogy, a Grade Point Average for a student, a school, or a district is not very useful in assessing teaching or learning in any subject area. And, as with a GPA, one might question the precision of CMT results reported to the nearest tenth of a point when none of the input data was so precise.
Above and beyond the simple four-step case described above, the actual CMT arithmetic involves adding apples and oranges. Students in grades 5 and 8 take an additional test in science. Other students are exempted, or evaluated using different instruments and a different 100-50-0 scoring scale. Even more Wizard-like, the state Education Department “analyzed district-wide data and applied the results of those analyses to schools without tested grades.” [FAQ, p. 6]
To be sure, the Education Department cautions us that the “SPI should be interpreted not relative to the performance of other schools but relative to that particular school’s ability to make its annual performance improvement targets.” [FAQ, p. 5] However, it also states that the “index scores allow for appropriate peer comparisons among schools for accountability purposes, but may have limited diagnostic value.” [FAQ, p. 3] Which is it – a constructive system that aims to measure the progress of teaching and learning in a school or district, or a potentially destructive system that aims to rank schools and districts against each other in a race to some “top”?
Moreover, there are many ways to “game” this system, even legitimately. Consider the Electoral College system, in which presidential candidates concentrate their efforts on the states with the most electoral votes. Similarly, a school system might concentrate its efforts to raise these Wizard numbers on the students it considers to have the most academic potential, to the detriment of those it considers to have the least – a harm compounded if the system misjudges which students belong to which group.
Could we do better?
Connecticut is spending a considerable amount of money on educational assessment and, in particular, on out-of-state consultants. Unfortunately, the IPIs, SPIs, and DPIs give us, in texting lingo, too much information (TMI) without the information that matters.
Improving student learning is too important an undertaking to spend its funding on consultants who provide guidance to state and local administrators. The best guidance is already available from our public school teachers, who have the brains, hearts, and courage – and no flying monkeys – to collaborate on efforts to identify the basic materials, innovative ideas, or pilot programs that would promote the best teaching and learning in Connecticut, based on meaningful assessment results that identify those skills that need improvement. The IPIs, SPIs, and DPIs do not contribute to that worthy goal.
Margaret Cibes is a retired math and statistics teacher. She’s a contributor to the Media Clips department of the Mathematics Teacher journal and the Chance News wiki.