Monday, November 24, 2008

On Scoring Writing

Last week, teachers in our region joined me to score the NYS Assessment in Elementary Social Studies. This test includes, along with multiple-choice questions and open-ended responses called "constructed response questions" (CRQs), one of my favorite writing tasks: the Document Based Question (DBQ). Truly - this is one of my favorite writing tasks. In addition to having students write (one of my favorite pastimes), it also asks them to think like historians. Well - at least when the task is well written.

Scoring this assessment can at times be very subjective. Who am I kidding - it is incredibly subjective! While leading the teachers through the rubric, the anchor papers, and the use of holistic (vs. analytic) scoring, I ask them to put their "pet peeves" in writing on a piece of paper to remind them that we are looking for different elements of writing in scoring this assessment. We can't let our pet peeves distract us or let ourselves get caught up in what the students did not do. Instead, for the purposes of this assessment, we need to look for what they were able to do in this one moment in time.

Despite my best efforts, there are always some complaints about how the essays are scored. Claims of bias - because the papers came from a small/large/poor/affluent school district, or because the student had an IEP/poor handwriting/no clue how to answer the question but tried really hard - often surface when teachers discover how other teachers scored THEIR kids. I try to be sure that we all have the same understanding of the rubrics and anchor papers, but I can't score them all myself. I aim for the greatest possible consistency and hope for the best each and every year.

It seemed serendipitous to read "Standardized You Say? Confessions of a Scorer" in my latest issue of EdWeek on the heels of this scoring adventure. The article surfaced my deepest fears about scoring - particularly when I read want ads in the paper asking for certified teachers to score NYS assessments, or hear a state ed official tout the benefits of "electronic distributive scoring." No matter what measures we put into place, scoring student writing will always be subjective.

Interestingly, while looking for new blogs to add to my reader (because 104 simply is not enough), I came across this post about a third-grade teacher using automated essay grading with her class. What struck me most was not that the automated scoring was such a hit, but that the process the teacher described was a perfect example of what several colleagues and I have been discussing for weeks now: formative assessment.

The tool may have made the record keeping easier, but it didn't change the practice - that much is evident from the description of her class. That teacher likely did all of the things she describes and attributes to the scoring program before using it - just in a different way. For her students to give scores very similar to the program's shows that they understood not only the rubric, but also their own strengths and weaknesses in writing. Not an easy feat!

Which made me think about what we are assessing to begin with. If we truly want students to do well on the DBQ, we need to teach them how to read and interpret documents, make connections, and develop a thesis statement using the documents as support. And we might have to do that without actually writing for a bit! In my classroom, I scaffolded the students' writing of DBQs. After years of reading poorly written essays and getting frustrated at the smallest of writing issues, I realized that I couldn't solve the problem by throwing more DBQs at them. Instead, I needed to pick apart what they needed to do and start S-L-O-W-L-Y teaching them how to do those things, step by step. I spent less time grading poor work; they spent less time creating it. Instead, we grew together as their skills and confidence grew.

I've been working to share my hard-won lessons with others to spare them from walking the same path, but I realized that the power of my learning came from those mounting frustrations and failures as a writing teacher. That was when I understood I needed to become a better writer myself in order to help them become better writers. My scoring will always be different from yours - my pet peeves are not your pet peeves, my expectations are not yours. But if they are aligned to a larger goal given to us by the state, and if we communicate it to the students, we can expect more in our classrooms and get proficient results on a once-a-year assessment - no matter who scores it.

3 comments:

Linda704 said...

Great post! I love this: "teachers to put their 'pet peeves' in writing on a piece of paper to remind them ... we need to look for what they were able to do in this one moment in time." I may try it next time I sit and score papers with teachers.

Angela said...

Posting "pet peeves" and looking for what kids were able to DO is a great strategy, Theresa. It's something ELA folks can be asked to do in another 8 weeks as well. Thank you for sharing.

Incidentally--I love that you love DBQs and writing. Wouldn't the world be a better place if everyone could face down this assessment with your energy and positivity?

It is a great opportunity for kids to show what they know. Very true. This post should be required reading for all scorers.

Kate said...

AWESOME post, T. I, too, love the idea of asking teachers to record their pet peeves about scoring before they actually score. I will ask my ELA teachers to do the same in the next few weeks. I also think it's important for scorers to understand that the state sides with the student regarding what they do well. Oftentimes, teachers haggle over giving a paper a 3 instead of a 4 rather than simply reviewing the anchor 4 and determining what would make the paper before them not measure up. I say, if you're trying to decide whether it's a 3 or a 4, go 4 as the state would. Teachers often have a hard time because their classroom standards are set higher than the state's - a good thing in classroom practice, but when it comes to scoring against a state rubric, the confusion begins and calibration is often lost. Getting ALL teachers on the same page is so difficult! I don't know the easy answer here, but I am thankful that you have found a meaningful way to address the issue and try to get teachers to understand the importance of the task at hand. Thanks for sharing!