Wednesday, February 08, 2017

We Designed Evaluation This Way, Says UFT

I was pretty surprised at the answer I got to my question on MOSL the other night at UFT Executive Board. When I have questions, I write them down in advance because I cannot take notes while I myself am speaking. I prefaced my question by saying I would understand if UFT tried but could not negotiate a reasonable settlement with the DOE. Yet that's not the answer I got.

I had been to a MOSL committee meeting that day, and I was pretty surprised that we were expected to make an irrevocable decision about how teachers were rated without having all the relevant information handy. Here's most of my question:

Why are we supposed to make the irrevocable course-level MOSL decision independent of the teacher level, with no current knowledge of what choices or mandates will be available for teacher-level decisions? Wouldn't it make more sense if we knew what both factors were at the time of the first choice? Wouldn't that help us to make the best possible decisions for our members? 

I had expected to hear that we did the best we could, but that the DOE was intractable and unreasonable. Yet I heard that it was designed this way deliberately, perhaps so as to give more time to make decisions. Yet we don't have a whole lot of time to make decisions. In fact, we have just a couple of weeks.

Maybe my notes aren't so good, but I also recall hearing a defense of the single measure on which we're now judged for the course level. For the last few years there have been the much ballyhooed multiple measures, state and local, but now they're gone and our sole course level measure is the state test. I heard how this was somehow an improvement, and how this was simpler and somehow tied into the matrix.

I don't see that, though. The junk science measure could just as easily have been an amalgam of state and local measures, and could just as easily have translated into the miraculous matrix. In my school, last year, we tied everyone to group measures wherever possible. We tied as few people as possible to groups of their own students. We did this for several reasons.

One reason, of course, is that there is no validity to tying teachers to test scores. This view is supported by thinkers like Diane Ravitch, Carol Burris, and Leonie Haimson. In case that's not enough for you, it's also supported by the American Statistical Association, which found that teachers account for only 1-14% of the variability in student test scores. And for my Unity friends, it's also supported by AFT President Randi Weingarten, who famously declared, "VAM is a sham."

You never know about groupings. Some teachers may be particularly good at teaching repeater classes, but students who've already proven capable of failure are not necessarily a fair measure of how good any teacher is. And as many of us know, there may be a supervisor or two out there who will assign classes out of sheer malice and vindictiveness. None of this, evidently, influenced leadership when it negotiated this system.

So now, if you teach a course that terminates in a Regents exam, there will be nothing to mitigate your course-level junk science measurement. This is a significant change. In my school, for example, we tried to balance the junk science with large group measurements. We were successful in that there was minimal teacher-to-teacher variation in the junk science portion of our ratings. While many of us went from highly effective to effective, some of us went from ineffective to developing. I may have bitched about moving down from HE, but I came to see the benefits of being drawn to the middle.

Me, I'm an ESL teacher. I will therefore be judged on the NYSESLAT exam, a mishmosh of nonsense that changes each and every year. While I have learned a lot about Hammurabi's code by asking a whole lot of students a whole lot of questions about it, I question whether this test measures the language acquisition it's my job to promote. And I certainly do not teach to this test. First of all, I generally have no idea what will be on it. More importantly, I know it was revised to be more Common Corey, for reasons that baffle me utterly. The fact is my kids have distinctly different English needs than those of kids born here. That NY State willfully chooses to ignore this does not mean I will neglect teaching kids the nuts and bolts of American English.

Last year, along with the rest of my department, I was rated well on the NYSESLAT, but I have no earthly notion as to why. It's ridiculous that we are expected to simply sit around and hope for the best on measures that are pivotal in whether or not we get to keep our careers.

There is a fundamental unfairness in this system. That is, everyone who does NOT teach a course attached to a state exam may be rated on group measures. Now we could make it "fair" by, say, tying an art teacher to the results of some random Regents math class, but just because the system sucks for me is no reason to make it suck for everyone. In my building, it's likely we will continue to attach teachers to group measures wherever possible. At worst, we'll perhaps attach teachers to their own departments where it's appropriate. That way, maybe, science teachers have a stake in whether or not they choose to tutor science students.

Me, I find multiple errors in the UFT negotiation process. I rate leadership ineffective. Thankfully for them, they don't spend one single solitary moment fretting over member opinion, as everyone with whom they speak has signed a loyalty oath and reaffirms the notion that everything is wonderful no matter what.

Ironically, for the future of our union, therein lies the fundamental problem.