
Making Feedback Visible: Four Levels Experiment

This quick brain-dump is based on ideas from Hattie’s Visible Learning for Teachers, Wiliam’s Embedded Formative Assessment and the pdf of The Power of Feedback (Hattie & Timperley) linked below. 

I spent much of today trying to grade a large project (Describing the Motion of the Rokko Liner, our local train), which was assessed for MYP Sciences criteria D, E, F. Based on some of our Student Learning Goal work on helping students cope with data presentation and interpretation, the lab had been broken into stages (almost all completed in-class), spread across A4 and A3 paper and GoogleDocs in Hapara.

Hattie & Timperley, Four Levels of Feedback. Click for the pdf of ‘The Power of Feedback.’ The image appears on page 87.

The result: a lot of visible learning, in that I could keep track of each student, see their work in progress and comment where needed. A lot of verbal feedback was given along the way, with some worked examples for students. Breaking the large assignment into stages helped keep it authentic and manageable for students, with some days dedicated to individual strands of the assessment criteria.

The challenge: a Frankenstein’s monster of a grading pile, part paper, part digital and all over the place. After trying to put comments on the various bits of paper and Google Docs I gave up, realising that I would be there for many hours and that potentially very little would be read carefully by students or actioned in the next assignment. I turned to Hattie (and Wiliam). Visible Learning for Teachers has a very useful section on feedback (d=0.73, though formative assessment is d=0.9), and so I spent some time making up the following document, with the aim of getting all the feedback focused and in one place for students.

It is based on the four levels of feedback: task-level, process-level, self-regulation and self. In each of the first three sections I have check-boxed a few key items, based on things I am looking for in particular in this task and the common advice that I will give based on a first read through the pile. A couple of boxes will be checked for each student as specific areas for improvement, with the ‘quality’ statements explained in person. There is space under each for personal comments where needed. I fudged the ‘self’ domain a bit for the purpose of student synthesis of the feedback they are given – really making it a reflective space, geared towards the positive after the preceding three sections of constructive commentary.

Once I got the sheets ready, I chugged through the grading, paying closest attention to the descriptors in the rubric, the task-specific instructions to students and then the points for action. However, I put very little annotation directly on the student work, instead focusing on this coversheet. It was marginally quicker to grade overall than the same task would have been normally, but the feedback this time is more focused. The double-sided sheet was given to them in class, attached to the paper components of their work, with the feedback facing out and the rubrics with grades hidden behind. This is a deliberate attempt to put feedback first. We spent about 25 minutes explaining and thinking through this in class.

Importantly, students were given time to think carefully about why certain notes had been made and boxes checked on their sheet. I asked them to respond to the feedback in the ‘self’ section, and make additional notes in the three sections of task-level, process-level and self-regulation. In discussion with individual students, we identified which were most pertinent – for some higher-achieving students they can take action in more detail at the task level, whereas others need to focus more on self-regulation. At the end of the lesson, the sheets and work were collected back, so I can read the feedback and use this to inform next teaching of lab skills.

The purpose of all this is to make it explicit where they need to focus their efforts for the next time, without having to wade through pages of notes. It hopefully serves to make the “discrepancy between the current and desired” performance manageable, and a sea of marking on their work will not help with this. I will need to frame this carefully with students – some need work on many elements, but I will not check or note them, instead focusing on the few that are most important right now. Incidentally, it also allows me to more quickly spot trends and potentially form readiness groupings based on clusters of students needing work on individual elements in the following lab.

At the end of the task I asked students for feedback on the process. They generally found the presentation of feedback in this way easier to manage than sifting through multiple multimedia components, and will keep this document as a reference for next time. A couple of higher-achieving students asked for more detailed feedback by section in their work, which is something I can do on request, rather than perhaps by default; I know these students will value and take action on it.

Here’s the doc embedded. If it looks as ugly on your computer as it does on mine, click here to open it.

If you’ve used something like this, or can suggest ways to improve it without letting it grow beyond one side per section, I’d love to hear your thoughts in the comments or on Twitter. I’ll add to the post once I’ve done the lesson with the students.

UPDATE (2 December): Feedback-first, peer-generated

Having read that adding grades to feedback weakens the effect of the feedback, I’ve been thinking about ways to get students to pay more attention to the feedback first. For this task, a pretty basic spring extension data-processing lab, I checked the labs over the weekend and wrote down the scores on paper. In class I put students in groups of three and asked them to share the GoogleDoc of the lab with their partners. They then completed a feedback circle, using the coversheet below to identify specific areas for improvement and checking them. If they could suggest an improvement (e.g. a better graph title), they could add this as a comment.

This took about 15-20 minutes, after which students completed the process-level and self-regulation sections and returned the form to me, before continuing with the day’s tasks. Before the next class, I’ll add their grades to the form (rubrics are on the reverse of the copy I gave students) and log them in Powerschool. Delaying communication of the grade this way should, I hope, have helped students engage more effectively with the feedback – I learned last week that making changes in Powerschool resulted in automatic emails to students.

I was wary of doing this first thing on a Monday, but the kids were great and enjoyed giving and receiving feedback from peers. Of course some goofed off a little, but they were easy to get back on track. For the high-flyers who enjoyed the method less the first time, this gave them a chance to really pick through each other’s work to give specific feedback for improvement.

Here is the document:

……….o0O0o……….

The Power of Feedback (pdf). John Hattie & Helen Timperley, University of Auckland. DOI: 10.3102/003465430298487

Author: Stephen

Director of Learning & MYP Coordinator at Canadian Academy, Kobe, Japan. Formerly MYP HS Science & IBDP Bio teacher and missing it terribly. Twitterist (@sjtylr), dad and bloggerer.

6 thoughts on “Making Feedback Visible: Four Levels Experiment”

  1. Stephen
    An important part that I missed earlier was putting the feedback first and the rubric/grade later. It gels with an interesting article shared on Twitter a month ago: http://www.ets.org/Media/Research/pdf/RR-08-30.pdf on the influence of feedback vs. praise/grade. Also good is http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CC0QFjAA&url=http%3A%2F%2Fwww.dylanwiliam.org%2FDylan_Wiliams_website%2FPapers_files%2FECER%252098%2520paper.doc&ei=URNeUra6Io6X0AWnxIGAAw&usg=AFQjCNGHECUsKAVbCxfm_rIA1aVBm5lWmA&sig2=dFcUBb8o9Tm5L2sHreNfKA&bvm=bv.54176721,d.d2k
    ugly link, I know, but it downloads as a word doc.

    thanks as always for your thoughts and experiences.
    regards
    Simon Underhill
    @simunderhill

    • Thanks Simon!

      I used the ETS paper in the ‘UPDATE’ above. Experimenting with getting students to engage more in feedback is becoming pretty interesting. We also have a TLC around it here now.

      Thanks,

      Stephen

  2. Hi Stephen,

    Finally had a chance to look over this. Do you see the task feedback as just an unpacking of the rubric? Do you think it would be just as effective if, instead of a general descriptor rubric, you provided an unpacked rubric? i.e. Level 1–2: students attempted to identify the variables; Level 3–4: students identified most of the variables; students identified all of the variables and explained how they were being manipulated.

    Although, the way it is set up now, it lets students know what they need to work on, yet saves you the time of having to differentiate between descriptor levels for a task-specific rubric.

    Did you give this sheet out with the instructions? It would seem the task-specific feedback would help guide students on what to do…presumably to gain a top mark. Is it possible for students to do all of the tasks yet not get a top mark?

    Do you see the process level and self regulation being the same for every assessment task or would you change it for every assessment?

    I am also wondering if the process level and self regulation feedback can be connected to the ATL and therefore give you tangible evidence of how a student is performing in these.

    I like this idea and want to try it out with my class! Thanks for sharing.

    • Hi Scott,

      The elements you mention first are part of the task-specific clarifications in the design of the assessment, and students have this from the beginning. I have comment databases set up in Turnitin for this purpose, but the more I read about it all (and ask students) the more I realise that they’re not paying attention to that level of feedback. We can bias ourselves towards task-level feedback (what to fix) without giving students the process to fix it.

      So this feedback sheet is a supplement after the fact (or in the process). It needs to be focused, clear and allow for ‘the gap’ to appear crossable, without overwhelming students. They can go deeper into the comments on the work if they need to, or ask to go through it with me.

      If a student attempts all parts of the task outlined in the MYP rubrics and clarified at a task-specific level they are not guaranteed a top grade. Their work needs to meet the descriptors.

      Depending on the focus of the unit/lesson I might change out some of the common statements, but there is always going to be room to add written comments for students too. Process- and self-regulation will evolve over time, I think.

      They could connect to ATL, yes – if I can add some elements of this without burying kids in information I will do once the ATL guides are digested.

      Thanks!

      Stephen

  3. Reblogged this on i-Biology and commented:

    I’ve recently been trying to update and enhance teaching practices through a more careful consideration of Hattie’s learning impacts and through the MYP assessment criteria. This is an experiment in giving feedback in a more evidence-based manner.

  4. Pingback: Growth Mindsets in Differentiation & Feedback | i-Biology | Reflections
