Benjamin Bloom launched a half-century of research that has consistently reported the very high effect size of formative assessment. Bloom found that, following high-quality initial instruction, teachers’ administration of a formative assessment helps identify precisely what students have learned well and where they still need additional work (Bloom, Hastings, & Madaus, 1971). These assessments provide feedback to both students and teachers on where students are in their learning journey. Of course, Bloom’s research on assessments that inform future teaching and learning also demonstrated the efficacy of response to intervention, the larger system that he was designing and studying. Consider the term ‘response’ within response to instruction and response to intervention: we gather feedback on how students are responding, and we use that feedback to respond to the needs that emerge.
Tom Guskey (2005), assessment expert and preeminent Bloom scholar, makes this connection explicit when describing the most common sense use of assessment:
“A far better approach, according to Bloom, would be for teachers to use their classroom assessments as learning tools, and then to follow those assessments with a feedback and corrective procedure. In other words, instead of using assessments only as evaluation devices that mark the end of each unit, Bloom recommended using them as part of the instructional process to diagnose individual learning difficulties (feedback) and to prescribe remediation procedures (correctives).”
The notion of assessment that provides feedback is not new and it’s connected to the systems of supports that all students deserve and that some students need to learn at high levels.
Formative assessment means that we gather evidence that informs where we are and what we still need to do; the evidence provides us feedback and allows us to provide feedback. Consider Michael Fullan on this topic:
“Assessment for learning…when done well…is one of the most powerful, high-leverage strategies for improving student learning that we know of. Educators collectively at the district and school levels become more skilled and focused at assessing, disaggregating, and using student achievement as a tool for ongoing improvement.” (Fullan, 2004, p. 71).
For assessment, or evidence-gathering, to be good for more than assigning a grade – for it to be a “tool for ongoing improvement” – we need to accept student performance on assessments as feedback and we need to intentionally use results from assessments to provide feedback to students.
Lev Vygotsky researched the significance of meeting students where they are, as did David Ausubel 50 years later. Vygotsky defined a student’s zone of proximal development (ZPD) as:
“The distance between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem-solving under adult guidance, or in collaboration with more capable peers” (Vygotsky, 1978, p. 86).
We can only determine these distances and locate a student’s ZPD by gathering feedback regarding where they are right now.
John Hattie and Helen Timperley (2007) make direct connections between evidence gathering and feedback:
Feedback has no effect in a vacuum; to be powerful in its effect, there must be a learning context to which feedback is addressed. It is but part of the teaching process and is that which happens second – after a student has responded to initial instruction – when information is provided regarding some aspect(s) of the student’s task performance. It is most powerful when it addresses faulty interpretations, not a total lack of understanding. Under the latter circumstance, it may even be threatening to a student (p. 82).
In common sense terms, our feedback to students should answer three questions: Where are they going? How are they going? Where to next? The work of Hattie and Timperley is considered the definitive study of this critical and common sense idea:
There are major implications from this review of feedback for assessment in the classroom. Assessment can be considered to be activities that provide teachers and/or students with feedback…Such a definition places emphasis on devising assessment tasks that provide information and interpretations about the discrepancy between current status and the learning goals at any of three levels: about tasks, about the processes or strategies to understand the tasks, and about the regulation, engagement, and confidence to become more committed to learn. This contrasts with the more usual definition of assessment, an activity used to assess students’ levels of proficiency. This usual definition places more emphasis on the adequacy of scores (and less on the interpretation of these scores). There are many ways in which teachers can deliver feedback to students and for students to receive feedback from teachers, peers, and other sources. The implication is not that we should automatically use more tests. Rather, for students, it means gaining information about how and what they understand and misunderstand, finding directions and strategies that they must take to improve, and seeking assistance to understand the goals of the learning. For teachers, it means devising activities and questions that provide feedback to them about the effectiveness of their teaching, particularly so they know what to do next. Assessments can perform all these feedback functions, but too often, they are devoid of effective feedback to students or to teachers (p. 101-102).
It’s common sense: Students give us feedback on how we’ve done in teaching; we then give students feedback on how they’re going. And then, we’re ready with supports, based on that feedback, to continue the learning.