
Friday, October 4, 2013

Each According To Their Needs

             
I don’t know why I’m apprehensive about next week. I know I am right, and I don’t fear failure. I am beyond resilient, and yet I really don’t want to mess this up. I am exercising my weakest teaching tools and exorcising my biggest professional demons simultaneously. In a word, I am organized. My room is still neat, if messy at the edges, but my pedagogy has never been this focused before.
Last blog, I wrote about data mining, but I wasn't specific. In 2002, when I took a course in differentiated instruction from Carol Ann Tomlinson, one of the tenets of “getting it right” was to let your choices be data-driven. A decade ago, my data was tainted because I was using only my own assessments and observations, along with information like individual student learning styles, IEPs, and 504 considerations. We did not have the advantage of web-based assessment engines like studyisland.com or PVAS at the time.
We are more than halfway through the first marking period, and that is part of my uneasiness with my new endeavor. It could not be helped, but I wish I had started sooner. I wasn't going to attempt to develop individual improvement plans, alternative lessons, or flexible groupings based upon my (shrinking) gut. (Guess who is reversing his middle-aged spread. Fodder for another blog. Down 15# and counting!) So, for my initial plan, I used my students’ first-quarter benchmark scores from Study Island and my observations of the writing samples I had collected so far from journal entries and short responses to prompts.
I wanted to know a few details to start. The first question: “Who passed and who failed?” While gathering my answer, I discovered, as I mentioned in an earlier blog, that my students were all over the map. There wasn't even a clear trend among the advanced, proficient, basic, or below-basic scores. Strengths and weaknesses varied between individuals and across class groupings. With that inconsistency noted, I developed a new question: “What are my students’ strengths and weaknesses?”
Study Island separated its questions into four domains and rated students in each by their scores: Language, Writing, Reading, and Informational Texts. A few of my students showed proficiency, and even mastery, in two or more domains, yet still scored “basic” because one domain score was low enough to drop their overall result to failing.
I admit that, based upon past eighth-grade performances, I was really looking for the data to validate my assumptions about this cohort of students. I expected to see that everyone was weak with reading informational texts. Instead, many students were not only proficient in informational texts but even scored perfectly in that domain. Still, the area remains a concern, because this form of reading is a weakness for many of my students, and those students are going to need more time learning how to attack the genre.
The second tenet of DI that I learned was to use the data, rather than the behavior and interpersonal dynamics among my students, to inform my decisions about lessons and groupings. As you may expect, some students had multiple weaknesses, but I have disregarded some of them this month because parts of the benchmark were a pretest of material that has not yet been rolled out and will be introduced later in the curriculum.

Our school has already committed to remediation and enrichment for all students who need it, regardless of IEP, 504, or other legally binding markers. The data showed me that even the advanced scorers still needed remediation in their weak areas (some were under 60%). The 2012-2013 PVAS data just went live in school this week. That was too late to help me achieve my goals for the first marking period, but the data may paint a different picture than the results from our September testing. Since I want to see all my students grow, I need to remediate all of their weaknesses. The benchmark domain markers were a good place to start, and since students will be tested each quarter, the data will allow me to track progress quarter by quarter.