OP-ED | Behind the CMT Wizard’s Curtain

by Margaret Cibes | Feb 18, 2014 11:39am
Posted to: Opinion

Connecticut’s standardized testing system ranks and labels public school students, schools, and districts in a way that purports to both evaluate student performance and identify students’ academic strengths and weaknesses. However, behind the Wizard’s curtain lie a lot of flying monkeys – flawed calculations that do very little to identify which skills students have and which they need to improve.

Consider the 2013 Connecticut Mastery Test, which was administered to all public school students in grades 3 through 8 in mathematics, reading, and writing. To simplify the discussion, consider only the most common case – students who took a common grade-level form of just these three tests, with no accommodations or exclusions.

The Arithmetic

Step 1. Translate each individual score into a scaled score of Advanced/Goal (100 points), Proficient (67 points), Basic (33 points), or Below Basic (0 points).

Step 2. Average all three scores to create an Individual Performance Index (Student IPI).

Step 3. Average all Student IPIs in a school to create a School Performance Index (SPI).  Use the SPI, among other “indicators,” to label a school as Excelling, Progressing, Transitioning, Review, or Turnaround, not to mention a School of Distinction or a Focus school.  And apply these fudge factors “to schools differently.” [Connecticut’s Accountability System: FAQ, p. 6]

Step 4. Average all Student IPIs in a district to create a District Performance Index (DPI).
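
To make the indexing concrete, here is a minimal Python sketch of the four steps, assuming only the 100/67/33/0 point values given above. The function names and sample data are mine, for illustration; the Department’s actual computation layers on the additional indicators and adjustments described in the FAQ.

```python
# A minimal sketch of the four-step CMT arithmetic described above.
# The 100/67/33/0 point values come from the article; the function
# names and sample data are illustrative, not the Department's.

SCALE = {"Advanced/Goal": 100, "Proficient": 67, "Basic": 33, "Below Basic": 0}

def student_ipi(levels):
    """Steps 1 and 2: scale each subject's level, then average the three."""
    scores = [SCALE[level] for level in levels]
    return sum(scores) / len(scores)

def performance_index(students):
    """Steps 3 and 4: the SPI (school) and DPI (district) are the same
    operation, an average of Student IPIs over a different population."""
    ipis = [student_ipi(levels) for levels in students]
    return round(sum(ipis) / len(ipis), 1)  # reported to the nearest tenth

# One hypothetical student: Goal in math, Proficient in reading, Basic in writing.
print(student_ipi(["Advanced/Goal", "Proficient", "Basic"]))  # 66.666...

# A hypothetical three-student "school": coarse 0/33/67/100 inputs
# emerge as a tenth-of-a-point index.
school = [
    ["Advanced/Goal", "Advanced/Goal", "Proficient"],
    ["Proficient", "Basic", "Basic"],
    ["Below Basic", "Basic", "Proficient"],
]
print(performance_index(school))  # 55.6
```

Note how three layers of averaging wash out exactly the subject-by-subject detail a diagnostic report would need.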

While the CMT results do provide information elsewhere (Subject IPIs) about student strengths and weaknesses in subject areas, the indexing and categorizing of test results into IPIs, SPIs, and DPIs aggregates so much data as to make these figures meaningless for that purpose. As an analogy, a Grade Point Average for a student, a school, or a district is not very useful in assessing teaching or learning in any subject area. And, as with a GPA, one might question the precision of CMT results reported to the nearest tenth of a point when none of the input data was so precise.

Beyond the simple four-step case just described, the actual CMT arithmetic involves adding apples and oranges. Students in grades 5 and 8 take an additional test in science. Other students are exempted, or evaluated using different instruments and a different 100-50-0 scoring scale. Even more Wizard-like, the state Education Department “analyzed district-wide data and applied the results of those analyses to schools without tested grades.”  [FAQ, p. 6]
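
The mixed scales make even the averaging suspect. A hedged sketch, again with illustrative numbers: two students who each score one level above the bottom of their respective instruments contribute very different points to the same index.

```python
# Illustrative only: the two scoring scales named in the article.
standard_scale = [0, 33, 67, 100]   # Below Basic .. Advanced/Goal
alternate_scale = [0, 50, 100]      # the different 100-50-0 scale

# "One level above the bottom" is worth 33 points on one instrument
# but 50 on the other, yet both feed the same averaged index.
mixed = [standard_scale[1], alternate_scale[1]]
print(sum(mixed) / len(mixed))  # 41.5 -- a point value on neither scale
```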

Horse race

To be sure, the Education Department cautions us that the “SPI should be interpreted not relative to the performance of other schools but relative to that particular school’s ability to make its annual performance improvement targets.” [FAQ, p. 5]  However, it also states that the “index scores allow for appropriate peer comparisons among schools for accountability purposes, but may have limited diagnostic value.”  [FAQ, p. 3]  Which is it – a constructive system that aims to measure the progress of teaching and learning in a school or district, or a potentially destructive system that aims to rank schools and districts against each other in a race to some “top”?

Moreover, there are many ways to “game” this system, even legitimately. Consider the Electoral College, in which presidential candidates concentrate their efforts on the states with the most electoral votes. By analogy, a school system might concentrate its efforts to raise these Wizard numbers on the students considered to have the most academic potential, to the detriment of those considered to have the least – a harm compounded if the system misidentifies which students belong to which group.

Could we do better?

Connecticut is spending a considerable amount of money on educational assessment, and, in particular, on out-of-state consultants. Unfortunately, in texting lingo, the IPIs, SPIs, and DPIs provide too much information without the information.

Improving student learning is too important an undertaking to focus funding on consultants who provide guidance to state and local administrators. The best guidance is already available from our public school teachers, who have the brains, hearts, and courage – and no flying monkeys – to collaborate on efforts to identify the basic materials, innovative ideas, or pilot programs that would promote the best teaching and learning in Connecticut, based on meaningful assessment results that identify those skills that need improvement. The IPIs, SPIs, and DPIs do not contribute to that worthy goal.

Margaret Cibes is a retired math and statistics teacher. She’s a contributor to the Media Clips department of the Mathematics Teacher journal and the Chance News wiki.


(5) Comments

posted by: Historian | February 18, 2014  2:19pm

Just to the south, the state of New York has been giving Regents exams to all junior and senior high school students for over fifty years: three-hour exams in English, math, the sciences, and languages. I took them and spent months studying previous years’ exams to prepare. All this current blah blah about testing students, and not a word about this New York practice and how it was used to maintain one of the best statewide school systems – yes, best! – as I found out applying to several colleges, who advised my parents that a high school Regents diploma meant something in their acceptance process.
  I am sure other states also gave annual statewide tests. Why are there no “studies” of the impact of those tests? We are being bombarded by all sorts of special interests for and against the current hot-button education BS – all without any historical basis – which, apparently, lies just to the south of us…

posted by: friedrich5 | February 18, 2014  6:46pm

Historian, very well said!
The educational hierarchy in Hartford has been and continues to be focused on making changes that they feel can make them famous… forget about getting real analysis of what is successful.

posted by: Historian | February 18, 2014  9:55pm

Thank you…

posted by: Peg | February 19, 2014  11:29am

Note also that the GPA – unlike the CMT – has never, in my 40+ years of teaching, been used as a measure of my teaching, but rather as a measure of my students’ learning. Classes differ widely from year to year in their academic abilities, interests, and other factors that affect student performance. One would expect “average” GPAs, or IPIs, to exhibit the same variation, independent of who the teacher is.

posted by: Historian | February 19, 2014  3:10pm

GPA and CMT, etc.? Any single test will reflect the students’ learning and, assuming a consistent student body, the individual teachers’ performance based on the group’s performance from year to year. With some allowance for variability in study material, that should establish a standard of variability over a period of years.
  At the most basic level, let us take a standard group and teach them to add. Individuals will perform differently, but on the whole there will be a consistent group performance level from year to year for the same subject.
  Frankly, I would assume a decrease in annual grades simply as teacher fatigue grows with repetitive teaching over the years.