In December, we had an English Language Arts benchmark, preparing for the real test in February. We were told initially we would be testing all four core subjects, only to learn at the last minute that we would test only ELA. Since we already had the days set aside in our lesson plans, the math department decided to use that time, albeit in regular classes, to complete a benchmark (the 2004 release test).
Despite the less-than-ideal conditions, our students did a good job. Their scores were more than twice those of my students at the same point last year, which I thought was a huge improvement. I took it to mean my students were better prepared coming in, and that I did a better job this fall semester than I had the previous year. I also took it to mean my students would do better when the real test came around in April than they had the previous year. I was glad the testing was over, and that we could move forward when we started the spring semester in January.
Thus my surprise when I was told that we would be taking four days in January for more benchmark testing. I argued in the department chair meeting that we didn't need another test because we had just taken one, and that our scores had improved from the previous year. What else did we need to prove? Unfortunately, not only did no one speak up to support me, but my co-chair actually spoke out against me!
She argued that the students and new, inexperienced teachers needed practice being in the testing environment. Both of those arguments are completely ludicrous. Our students have been taking these tests every year they've been in school! As for the teachers, their role is limited to reading from a script and then watching the kids take the test to make sure they don't cheat--that's it. We're not exactly asking them to read Gravity's Rainbow, okay?
Then I had to endure a department meeting where almost everyone declared how the December benchmark wasn't accurate at all. Really? Then why did we plan our entire spring semester based on the results from that benchmark? Why were we taking all of our planning periods to calculate results and write reports on students' performance by objective and overall if the test was so meaningless? Even worse, some of them declared that scores would go up. Who cares? If you are in this business for test scores, it's time to find another profession. If you're obsessed to the point that you feel all this benchmark testing is necessary, you've lost sight of our purpose as teachers.
I was concerned that the students were stressed and starting to get burnt out on testing, and that we were losing four complete days of instructional time. I wondered why we would once again spend an entire day sitting in the same classroom when most students were done with their test before lunch. My worst fear, of course, is that someone will want to do this again before the real test--and I will absolutely raise hell to prevent that from happening.
I haven't yet seen the results, but I would venture to guess they've increased just enough for everybody to give themselves a pat on the back. Maybe that will give me the opportunity to, you know, teach something.
Of course, benchmark testing alone does not constitute madness. Additional symptoms include:
- Students being pulled from electives 2-3 times a week at minimum
- Constant TAKS-style multiple choice testing in the classroom, meaning fewer open-ended, problem-solving, or higher-order thinking questions
- Teachers giving up their conference period to "tutor" a subject they're not certified to teach
- More tests to come (presumably)
- My department wanted me to give my students 20-40 question multiple choice tests every week.
- The principal instituted something called a "power schedule", where classes were cut to 35 minutes and the last 2 hours of the day were spent doing test prep (again, teachers teaching subjects they're not certified in).
- Number of full school days spent taking practice tests: 15.
Yes, we did not meet Adequate Yearly Progress (AYP) goals last year because one of our subgroups, our 10th grade Limited English Proficiency (LEP) students, had a low passing rate. We met standards in all other areas. Instead of focusing on this one group, everyone is feeling the wrath. By the way--this year's 10th grade LEP students performed far better than their predecessors did on their 9th grade exams--which is usually the best predictor of their future success on these tests.
I didn't write about this to vent (okay, maybe a little), but to ask a few critical questions that we as educators need to answer:
- How can we fix the tests themselves? A recent commentary in the Austin American-Statesman foresees a future where Texas eliminates the TAKS and replaces it with small, periodic, online assessments on specific objectives.
- How can we prepare students for standardized tests without teaching to the test? What I've learned about the Rio Grande Valley is that school districts here have no idea how to do this. Districts create strategies focused solely on increasing passing rates, and when they're successful, everyone else copies them. They'll often implement conflicting or redundant ideas because it worked somewhere else, making things more difficult for teachers and students.
- How can we prepare students for college with so much focus on standardized tests? With so many standards to teach, and tests often based mostly on material covered in previous years, there's little time to give them what they need to succeed in college. It's no surprise that in Texas, half of students entering college need to take remedial courses (among Hispanic students, it's 63%).
- How does a dedicated teacher survive in such an environment? There are reasons most teachers leave the profession within five years, and this system is one of the biggest.