Last week, teachers in our region joined me to score the NYS Assessment in Elementary Social Studies. This test includes, along with multiple-choice questions and open-ended responses called "constructed response questions" (CRQs), one of my favorite writing tasks: the Document Based Question (DBQ). Truly - it is a favorite. In addition to having students write (one of my favorite pastimes), it asks them to think like historians. Well - at least when the task is well written.
Scoring this assessment can at times be very subjective. Who am I kidding - it is incredibly subjective! While leading the teachers through the rubric, the anchor papers, and the use of holistic (vs. analytic) scoring, I ask teachers to write their "pet peeves" on a piece of paper as a reminder that we are looking for specific elements of writing when scoring this assessment. We can't let our pet peeves distract us, and we can't get caught up in what the students did not do. Instead, for the purposes of this assessment, we need to look for what they were able to do in this one moment in time.
Despite my best efforts, there are always some complaints about how the essays are scored. Claims of bias - because the papers came from a small/large/poor/affluent school district, or because the student had an IEP/poor handwriting/no clue how to answer the question but tried really hard - often surface when teachers discover how other teachers scored THEIR kids. I try to make sure we all share the same understanding of the rubrics and anchor papers, but I can't score every paper myself. I aim for the greatest possible consistency and hope for the best each and every year.
It seemed serendipitous to read "Standardized You Say? Confessions of a Scorer" in my latest issue of EdWeek on the heels of this scoring adventure. The article surfaced my deepest fears about scoring - particularly when I read want ads in the paper asking for certified teachers to score NYS assessments or hear a state ed official tout the benefits of "electronic distributive scoring." No matter what measures we put in place, scoring student writing will always be subjective.
Interestingly, while looking for new blogs to add to my reader (because 104 simply is not enough), I came across a post about a third-grade teacher using automated essay grading with her class. What struck me most was not that the automated scoring was such a hit, but that the process the teacher described was a perfect example of what several colleagues and I have been discussing for weeks now: formative assessment.
The tool may have made the record keeping easier, but it didn't change the practice - that much is evident from the description of her class. That teacher likely did all of the things she describes and attributes to the scoring program before ever using it - just in a different way. The fact that her students gave scores very similar to the program's shows that they understood not only the rubric but also their own strengths and weaknesses as writers. Not an easy feat!
Which made me think about what we are assessing in the first place. If we truly want students to do well on the DBQ, we need to teach them how to read and interpret documents, make connections, and develop a thesis statement that uses the documents as support. And we might have to do that without actually writing for a bit! In my classroom, I scaffolded students' DBQ writing. After years of reading poorly written essays and getting frustrated at the smallest of writing issues, I realized that I couldn't solve the problem by throwing more DBQs at them. Instead, I needed to pick apart what they needed to do and start S-L-O-W-L-Y teaching them how to do those things, step by step. I spent less time grading poor work, and they spent less time creating it. Instead, we grew together as their skills and confidence grew.
I've been working to share my hard-won lessons with others so they don't have to walk the same path, but I realized that the power of my learning came from those mounting frustrations and failures as a writing teacher. It was then that I realized I needed to become a better writer myself in order to help my students become better writers. My scoring will always be different from yours - my pet peeves are not your pet peeves, and my expectations are not yours. But if they are aligned to a larger goal given to us by the state, and if we communicate that goal to the students, we can expect more in our classrooms and get proficient results on a once-a-year assessment - no matter who scores it.