
Measuring Achievement…of What?

 

 

Part 1: Big Picture

 

What data matters?

Special Educators love data.  The parent attorneys and advocates whom I have encountered over the last five years seem to revel in numbers, test scores, and “concrete measures of progress.”  Traditionally, annual IEP Goals are written to answer “who will do what by when.”  The “will do what” element of an IEP goal often becomes a convoluted description of numbers, trials, and percentages.  Example: “Given a set of 10 problems involving multiplication and division of fractions, Johnny will solve them with 80% accuracy on 3 of 5 trials over two consecutive school weeks.”  In other words, over the last two weeks of his IEP cycle, Johnny is given 5 fraction problem sets, and if he solves 3 of those 5 sets with 80% accuracy, we celebrate.  Johnny has met his goal and we have data to prove it!  Data in the world of Special Education is perceived to be information that is easy to quantify, measure, summarize, and present in a succinct manner.  Data shows what students can do and what skills they have learned.
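To make the arithmetic behind a goal like Johnny’s concrete, here is a minimal sketch in Python (the scores and the function name are hypothetical, invented purely for illustration) of how the mastery criterion might be checked:

```python
# A minimal sketch (hypothetical data) of the mastery criterion described
# above: 80% accuracy on at least 3 of 5 trials.

def met_goal(trial_scores, accuracy_threshold=0.80, trials_needed=3):
    """Return True if enough trials meet the accuracy threshold."""
    passing = [score for score in trial_scores if score >= accuracy_threshold]
    return len(passing) >= trials_needed

# Johnny's five problem sets (10 problems each), recorded as fractions correct
scores = [8/10, 9/10, 7/10, 10/10, 6/10]
print(met_goal(scores))  # True: three trials at or above 80%
```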

 

But I argue that this view of data leaves a lot of stones unturned.  Whether or not a student can solve a set of fraction problems may be one element of the complex canvas of a student’s experience, but it does not tell the whole story.  I believe that data is anything that helps us better understand what we most care about as educators.  And what we care about differs dramatically from teacher to teacher and school to school.  Before we can talk about data and what data matters, we must define our beliefs about the purpose of schooling – what should students learn and why?  This question is complex and veers rather outside the scope of this assignment, but I think it’s worth briefly addressing to establish a context for my work.  I believe that schools should prepare students to lead passionate, productive lives after high school.  I believe that schools should teach students to think, to solve real-world problems, and to intricately understand their strengths and challenges as learners and as community members.  I think that schools should foster a sense of responsibility in students for the health of their community and their world.  Finally, I believe that schooling should hold students accountable for deep learning within a variety of disciplines, including the humanities, arts, sciences, and mathematics.

 

My personal values about education shape my views on what data I care most about as a leader.  I am interested in data that tells stories about students’ thinking and experiences.  I want to know if students can apply their knowledge and skills to real-world issues, work collaboratively and independently, and demonstrate tenacity and resilience as learners and community members.  I want to know if students are engaging deeply with rich content across the academic disciplines.  I’m interested in data that tells us about who attends the school, who stays, who goes, and why.  I’m also interested in data that provides information about what students do with their lives after high school: do they attend and graduate from a two- or four-year college or university?  Do they identify a career or trade that they love, and can they hold down a job?  Do they participate actively in their community as citizens?  All of these questions and areas of interest inform my perception of data in the world of Inclusion and Special Education - I believe that IEP Goals and measures of progress should help us answer all of the questions above.

 

How do we measure student learning?

When exploring how to measure student learning, a few important understandings have surfaced in my review of some of the literature on this topic.  A good place to start is an exploration of the purpose of measuring student learning.  In his book Emotional Intelligence, Daniel Goleman writes, “We should spend less time ranking children and more time helping them to identify their natural competencies and gifts, and cultivate those. There are hundreds of ways to succeed and many, many different abilities that will help you get there” (Goleman, 1994, p. 37).  I appreciate Goleman’s perspective and agree that our time is far better spent helping students identify their strengths than it is putting them through endless assessments for the purpose of sorting and ranking them.  Similarly, in her piece “Performance-Based Assessment and Educational Equity,” Linda Darling-Hammond argues that we should use assessments “in ways that serve teaching and learning, rather than sorting and selecting” (Darling-Hammond, 1994, p. 9).  Both Goleman and Darling-Hammond question the way many educators currently use the information we glean about student learning (often for the purpose of sorting and ranking students and schools), and they make the excellent point that this information should be utilized instead to support student growth and to improve teaching and instruction!

 

Darling-Hammond also asserts that “test-based decisionmaking has driven instruction toward lower order cognitive skills.  This shift has created incentives for pushing low scorers into special education, consigning them to educationally unproductive remedial classes, holding them back in grades, and encouraging them to drop out” (Darling-Hammond, 1994, p. 8).  I believe that in California, standardized testing has created, as Ben Daley puts it, a “relentless emphasis on memorizing lists of soon-to-be-forgotten facts” (Daley, 2009, p. 1).  The tunnel-visioned focus on excelling on these tests has indeed driven instruction toward rote, menial skills and has caused many schools to focus on work that has little authentic value.  In Special Education we continue to use IQ and “cognitive” testing as a means of measuring student capacity and worth.  Indeed, to qualify for a “Specific Learning Disability,” a student must have a “severe discrepancy” between cognitive functioning and academic performance.  But this narrow conception of student “ability” leaves so much important and valuable information out of the equation.  Thus, many of the current common assessment practices in education (standardized testing and special education qualification, for example) fall short of capturing meaningful information and cause us to focus on lower-order cognitive skills.

 

I would much prefer to measure richer, more meaningful information about student learning and skills, and like Goleman and Darling-Hammond, I think this information should be used not to rank and sort students, but to help students cultivate their own sense of purpose and to help teachers improve their instructional practice!  Robert Sternberg’s WICS model presents a really valuable framework for examining how to go about measuring learning and skills that actually matter.  He devised a way to measure analytical skills, creative skills, and practical skills for the purpose of evaluating college applicants to Tufts University (Sternberg, 2009, p. 3).  Students were asked to write creative stories, perform “situational-judgment inventories,” fill out “commonsense questionnaires,” and complete both multiple-choice and open-ended measures (Sternberg, 2009, p. 4).  Sternberg utilized a very rich and detailed set of rubrics to evaluate student performance on these measures.  The use of well-developed and detailed rubrics is a concrete means of more objectively evaluating student performance, and more importantly it “goes beyond traditional models that emphasize memory and analytical learning and, as a result, enables all students to capitalize on their strengths and to compensate or correct for their weaknesses” (Sternberg, 2009, p. 6).  I deeply value and appreciate assessment measures that ask students to go beyond memorization of facts and instead address a range of skills and mindsets that are important for school and for life.

 

When it comes to sharing student learning data, I think that college acceptance and graduation rates (disaggregated by socioeconomic status, ethnicity, and gender) are meaningful.  I also think there is a place for “value added” data (perhaps more for internal than for external sharing) to show whether individual students are making progress over time compared with where they started.
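To illustrate what I mean by “value added” data, here is a minimal sketch, assuming hypothetical fall and spring scores for each student (the names and numbers are invented):

```python
# A hypothetical "value added" view: each student's growth from his or her
# own starting point, rather than a single ranked snapshot.
fall_scores   = {"Student A": 42, "Student B": 78, "Student C": 55}
spring_scores = {"Student A": 61, "Student B": 80, "Student C": 70}

for student, fall in fall_scores.items():
    spring = spring_scores[student]
    print(f"{student}: {fall} -> {spring} (growth: {spring - fall:+d})")
```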

 

How do we measure the quality of our schools?

If I were to re-imagine California’s Academic Performance Index, I would take into account the following “measures” of quality for schools:
 

  • College acceptance and completion data.

  • Attrition data, particularly about which students leave and why.

  • Data that measures next steps for students who don’t attend college (enrollment in a trade school, for example, or the ability to obtain and maintain a job in a field of interest).

  • Data that measures the four key academic mindsets that are components of deeper learning: “I belong in this academic community; I can succeed at this; My ability and competence grow with my effort; This work has value for me” (Farrington, 2013, pp. 3-6).  This data could be obtained from student-reported surveys (a minimal sketch of how such survey data might be aggregated appears after this list).

  • Academic assessments that truly support and capture meaningful student learning.  I would propose the use of the kinds of rubrics that Sternberg describes in his work, teacher comments, student reflection and self-assessment, grades, and performance-based assessments.

  • Data that reveals pre-Special Education referral practices and Special Education referral data.

  • Performance on California’s new Smarter Balanced Common Core assessments.
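To illustrate how student-reported survey data on these four mindsets might be aggregated, here is a minimal sketch (the rating scale, mindset labels, and numbers are all hypothetical):

```python
# Hypothetical aggregation of a student-reported mindset survey, where each
# student rates agreement (1-5) with a statement keyed to each mindset.
from statistics import mean

responses = [  # one dict per student; all numbers are illustrative
    {"belonging": 4, "can_succeed": 5, "growth": 3, "value": 4},
    {"belonging": 2, "can_succeed": 3, "growth": 4, "value": 5},
    {"belonging": 5, "can_succeed": 4, "growth": 4, "value": 3},
]

for mindset in ("belonging", "can_succeed", "growth", "value"):
    average = mean(r[mindset] for r in responses)
    print(f"{mindset}: school-wide average {average:.2f} out of 5")
```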

 

How do we support colleagues in using data to inform practice?

I would like to facilitate a collegial culture where data is used to effectively guide instruction and decision making.  I like the three phases described in Data Wise (“prepare, inquire, and act”) because they strike me as a powerful way to think about using data to make decisions (Boudett, City, and Murnane, 2005, p. 3).  I would like to guide Inclusion Specialists to engage in cycles of inquiry, action, and reflection in tandem with classroom teachers, with a focus on equitable access to learning for all students at HTH.  I think students’ IEP goals might create a solid framework for this work, so long as the goals are authentically connected with the work that is actually going on inside of the classroom.

 

Part 2: PITP Experience

For my PITP, I facilitated a review of exit card data from a recent Professional Development meeting of all HTH Inclusion Specialists.  The data we reviewed was a compilation of responses from the Inclusion Specialists about their ideas for our shared Inclusion website.  Though there is plenty of data I feel more passionate about than this particular set, I felt compelled to analyze the exit card responses because this action was an important next step for my action research.  It is very important to me that our Inclusion website is a resource that actually gets utilized and helps to improve our practice and instruction, and thus, I want to make sure that our community’s vision is honored and included in its development.

 

In reviewing the video of my facilitation, I am reflecting on how challenging it is to look at qualitative data (in this case, people’s “stream of consciousness” responses to a question) in a way that feels efficient and productive.  After receiving feedback from colleagues, I took all of the responses from the exit card survey and spaced them evenly in a Word document so that each individual’s responses were clearly separated.  I debated doing my own analysis and trying to group and categorize people’s responses, but I worried that I would apply my own subjective lens to this, and so I decided to give the “raw” data to the group so that we could categorize it together.  I am glad that I made this choice, because it changed the outcome of our work.  For example, at minute 38 we discuss someone’s idea to include “common IEP language.”  I say, “I don’t understand what this means!”  And Nabia explains exactly what the person meant, because she too wanted to have “boilerplate” language that we could easily access when filling out paperwork.  I am taking away the understanding that when trying to identify common themes and big ideas within a broad set of qualitative data, it’s important to have multiple perspectives and lenses during the interpretation process.

 

When watching the video, there were a few moments that made me cringe.  First, at minute 2:23, I said to the group, “I’ve read through this many times myself and each time I just get overwhelmed by it.”  Why did I say this?!  It set a negative tone for the group by making it seem like our task was overwhelming, when in fact we addressed each person’s response systematically by charting on the whiteboard, and it went very smoothly!  I think I was feeling apologetic that the data was two pages long and not easy to interpret, but I could have instead said something like, “I chose not to analyze or group the data myself because I didn’t want to miss anything or apply my own subjective lens.  That’s why I’m asking us to look at this together as systematically as we can.”  I also had some strong facilitation moments, such as when I moved the conversation forward by saying, “Ok, so what I’m hearing is…” (14:21) and “Let’s table that for now and focus instead on…” (16:34).

 

The feedback I received from the other group members was positive.  One said, “…you had a clear focus and objective for the meeting, which was to look at the global headings and see if there was anything missing or that needed to change.”  Neither participant had any ideas about how to change the layout of the data, and both shared that they appreciated reading people’s first-hand narratives about our Inclusion website.  In the future, I might go about gathering feedback from exit cards a little differently.  Instead of asking a broad, open-ended question (“What ideas do you have for the content of our Inclusion website?”), I might instead ask more pointed and specific questions (“What content would you be most likely to access and utilize on a daily basis?”).  This way, I would end up with data that is easier to interpret and better guides our process.

 

Sometimes the information about student learning that is easiest to quantify and analyze is actually the least meaningful.  I may be able to determine in five minutes whether Johnny can add fractions, but finding out whether he can apply his knowledge to something meaningful in the real world may take far longer.  I envision a not-so-distant world where meaningful classroom assessment practices are directly utilized within the IEP process, and where data is utilized not to rank and sort, but to improve learning and instruction.

 

 

References

 

Boudett, K., City, E., & Murnane, R. (2005). Data Wise: A Step-by-Step Guide to Using Assessment Results to Improve Teaching and Learning. Cambridge, MA: Harvard Education Press.

 

Farrington, C. (2013). Academic Mindsets as a Critical Component of Deeper Learning. Chicago, IL: Consortium on Chicago School Research.

 

Daley, B. (2009). Water, Water Everywhere, and Not a Drop to Drink. UnBoxed, 4. Retrieved from http://www.hightechhigh.org/unboxed/isssue4/water_water_everywhere/

 

Darling-Hammond, L. (1994). Performance-Based Assessment and Educational Equity. Harvard Educational Review, 64(1).

 

Goleman, D. (1994). When Smart is Dumb. In Emotional Intelligence (pp. 33-45). New York: Bantam Books.

 

Sternberg, R. (2009). Wisdom, Intelligence and Creativity Synthesized: A New Model for Liberal Education. School Administrator, 66(2), 10-11.
