Ripples & Reflections

"Learning is about living, and as such is lifelong." Elkjaer.



Capturing the Criteria & “Zooming In”

 

Shortlink to this resource: is.gd/mypassess

After some parent-teacher conferences recently, I was asked to show all of the MYP assessment criteria together and realised I couldn’t find a single-reference, quick overview of the MYP assessment objectives and criteria that met our needs.

Here is an attempt to put the big ideas and rubrics together in one place, so that colleagues can quickly see vertical and horizontal articulation and connections, and so that parents have a resource to hand to help them understand assessment.

You might find it useful.

To make your own copy, click “File –> Make a copy”.


 ……….o0O0o………..

Disclaimers:

  • This involved a lot of clicking and is bound to have some errors. Big thanks to Mitsuyo-san, our data secretary, who helped with this. 
  • Descriptors in bold did not make it across from text to spreadsheet. Use original descriptors in student assignments.
  • This is intended only as an overview of the programme. Teachers must exercise caution with this, and default to the published guides on the OCC for assessment rubrics, clarifications, rules and guidance.

………o0O0o……….

Edit: added 3 May 2017

Why the green bands? 

In each of the subject-area bands, you’ll find the Level 5-6 row accented with green. This is part of something I’m trying to work on with colleagues and students in terms of zooming into the objectives-level of assessment, and was something I used in #HackTheMYP.

The basic idea is this: 

  1. In a typical 4-band rubric, the third band is ‘meets objectives’: the rows below it are approaching and the row above it is exceeding.
    • Try it: add up the scores for a student who earns all 5s, all 6s, or a combination of the two. What does the total come to when you apply it to the 1-7 conversion chart? This is the kid who meets the outcomes of our core curriculum (a quick sketch of this calculation follows the list).
  2. When we focus only on the top-band descriptors we may inadvertently end up doing one of two things:
    • Causing students to get stressed by default as they’re aiming for the ‘exceptional’ descriptors first. “The gap” between where they are and want to be is too big; or,
    • Falsely making our core expectations for all students fit the 7-8 band, thus leaving nowhere to go from there – creating a “low ceiling” and no room for extension into genuinely meeting those top descriptors.
  3. If we zoom into the 5-6 band first – in task design and as a student – we are able to set an appropriate expectation for all learners, see how and where to scaffold and support those who need it, and provide a “high ceiling” for innovation, application, analysis, synthesis, etc.
  4. It should then become easier to create the task-specific clarifications. If we can clearly describe the 5-6 “core” band first, we should then make sure that the levels above and below can be really clearly distinguished. In my experience, this is easier than starting at the top and working back.
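As a quick illustration of the “try it” exercise in point 1, here is a minimal Python sketch, assuming four criteria each scored 0-8. The boundary table is illustrative only; always use the published MYP grade boundaries rather than these numbers.

```python
# A minimal sketch of the "try it" exercise above: sum four criterion levels
# (each 0-8) and convert the total to a 1-7 grade. The boundary table here is
# illustrative only; always use the published MYP grade boundaries.
ILLUSTRATIVE_BOUNDARIES = [
    (1, 1, 5), (2, 6, 9), (3, 10, 14), (4, 15, 18),
    (5, 19, 23), (6, 24, 27), (7, 28, 32),
]

def to_final_grade(criterion_levels):
    """Convert four criterion levels (0-8 each) into a 1-7 grade."""
    total = sum(criterion_levels)
    for grade, low, high in ILLUSTRATIVE_BOUNDARIES:
        if low <= total <= high:
            return grade
    return 1  # a total of 0 falls below the first band

print(to_final_grade([5, 5, 5, 5]))  # total 20 -> grade 5 on this table
print(to_final_grade([6, 6, 6, 6]))  # total 24 -> grade 6
```

On a table like this one, the student working consistently in the 5-6 band lands on a solid 5 or 6 overall: exactly the “meets the outcomes of our core curriculum” point above.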

If you’ve tried this idea (or similar), how did it go?

……….o0O0o……….

For a similar discussion and great resources, but in an SBG context, check out Jennifer Gonzalez’s (@cultofpedagogy) posts on the “single point rubric”:



Standardization: Cycling away from the moderation “event”

A quick post to share a resource, based on some of our work at CA. I love cycle diagrams and was thinking about the process of moderation, planning and the challenges of effective collaboration when there are grades (and a big pile of ‘done’ grading) at stake. 

If you’ve ever tried to ‘moderate’ work that’s based on two or more teachers’ hours of effort in grading, you’ll recognise the challenge. The proposal here is to reframe standardization as a cycle – various points of entry to working together on a common understanding of assessment – so that teachers align their assessment standards more closely. Post-hoc moderation events may tend towards defense of our own grading work; who wants to go back and change all that work?

Do you think you could put the cycle to work in your own context? 



The IMaGE of an International School

It’s crunch time for my MA International Education studies at the University of Bath, with a big literature review in progress and some data collection coming up, aiming to submit by the summer break. As much as I’ve loved the study, I’m looking forward to reclaiming some balance. 

………o0O0o……..

My plan for the dissertation is to update and pilot-test my web-chart of the international dimension of a school, aiming to tackle the challenge of defining a nebulous concept through visualisation, based on self-reporting, to generate “the IMaGE of an international school“. (IMaGE = international mindedness and global engagement). The small-scale case-study will generate an IMaGE for my own school, and the pilot study will help evaluate the usefulness of the visualisation and metrics.


A sample of the web chart in use, with the IMaGE showing the evolution of a school or a change in perception. The eight radials are still under development, and there will be descriptors for each in the final project. At first glance, where would you rate your own school? What do you see, think, or wonder about the results of this (imaginary) school?
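If you are curious how such a web chart might be produced, here is a rough matplotlib sketch. The dimension names and scores below are invented placeholders, not the project’s actual radials or data.

```python
# A rough sketch of drawing an IMaGE-style web chart with matplotlib.
# The dimension names and scores are invented placeholders, not the
# project's actual radials or data.
import numpy as np
import matplotlib.pyplot as plt

dimensions = ["Curriculum", "Language", "Community", "Service",
              "Staffing", "Governance", "Student body", "Partnerships"]
profile_2015 = [3, 4, 2, 5, 3, 2, 4, 3]  # self-reported, e.g. on a 0-6 scale
profile_2017 = [4, 5, 3, 5, 4, 3, 5, 4]

angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False).tolist()
angles += angles[:1]  # repeat the first angle to close the polygon

fig, ax = plt.subplots(subplot_kw={"polar": True})
for label, profile in [("2015", profile_2015), ("2017", profile_2017)]:
    values = profile + profile[:1]
    ax.plot(angles, values, label=label)
    ax.fill(angles, values, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions)
ax.set_title("IMaGE of an (imaginary) international school")
ax.legend()
plt.show()
```

Overlaying two self-reported profiles like this is one way to show evolution over time or differences in perception between groups.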

The idea of trying to evaluate or measure the ‘internationalisation’ of a school is not new: we already have metrics, practices or handbooks from various organisations, including the IB, CIS, ISA, ACE, OECD. This project aims to learn from, adapt and distil these qualities into an accessible tool that will generate a ‘visual definition’ for a school, as a starting point for further investigation.

Although some of the ideas within the chart have evolved a lot since the initial idea in 2012 (and I have found many more studies), here is the original assignment.


How NOT to be ignorant about the world.

Hans Rosling, TED.com

Another great Hans Rosling TED Talk, this time with his son, Ola.

Here, dealing with misconceptions, bias, ignorance of global issues and a little formative assessment*, they discuss how we can be better informed about the world, with a fact-based world view… and how we could (eventually) perform better than chimps on a global issues quiz. I have blogged about how this might be used in IBTOK or science classes on i-Biology.

……….o0O0o……….

Should a fact-based world view be the core curriculum of an international school?

Early in the talk, Ola recognises the influence of students’ early biases and outdated curricula on their world view, and how these are compounded by an ill-informed media. Through their project, they are trying to measure these misconceptions and propose a ‘global knowledge certificate’ that candidates (or organisations) might use to stay informed, to be competitive and to think about the future.

It seems to me that the fact-based world view would make for an excellent set of content-knowledge standards for an international school, and might pair nicely with the IB programmes as we seek to create knowledgeable young inquirers who seek to make a positive difference to the world around them. How can they achieve this if they are learning outdated concepts of development or using stereotypes to paint the world in an ugly shade of ill-informed?

Hattie’s meta-analyses note that the power of prior learning (including prior mis-learning or misconception) has a very high impact on students’ future learning (d=0.67). As we generate scopes and sequences for courses or set up units of inquiry, should we be looking to the research not only on misconceptions in our own content domain but in global literacy in order to give students the tools they need to inquire in a changing and often-misunderstood world?

Is globally-literate the same as internationally-minded?

It is hard to define international-mindedness, though we can recognize it in our own settings. We might observe the behaviours of a globally-engaged student (or teacher), and might use assessments of students’ fact-based world views as a measure of their international-mindedness. By this measure, a globally-focused national school might be a more effective ‘international school’ than a more narrowly-focused overseas expatriate school.**

You can read about the ignorance project here on CNN, or find more classroom resources (including a world-view card game) on Gapminder’s education page. The Guardian also has a selection of global development quizzes, which you can take for fun or in class.

……….o0O0o……….

*making great use of the audience-response clicker system pioneered by Eric Mazur.

**this is part of the idea of my web-chart of the IMaGE (IM and Global Engagement) of a school in my MA work.



Faculty PD: Assessment Principles & Practices (and a stretched golf metaphor)

Over the last couple of weeks we’ve put a lot of work into developing Wednesday afternoon PD sessions for middle and high-school faculty on Assessment Principles and Practices. We’ve chosen to do this now, as this is an easy entry point for work on MYP: Next Chapter and it is always valuable PD to think about, evaluate and strengthen our practices. It builds on a lot of the good work that CA has been doing in recent years to improve assessment.

The inspiration for the theme came from Ken O’Connor’s (@kenoc7) blog post on “23 reasons why golf is better than classroom assessment and grading,” as well as some of Rick Wormeli’s (@RickWormeli) great series of videos on Assessment in the Differentiated Classroom. The aim was to emphasise the importance of the connection between the objectives of the unit and the assessment tasks, resulting in strong, worthwhile assessment taking place. As part of the discussion we connected the objectives of the MYP subjects to their respective assessment criteria and strands.

So far we have completed two of three (or more) sessions on this, with the first being a general overview, including revisiting our Assessment Policy and a Socrative Space Race, as well as sharing with colleagues from different departments. The second session focused on scaffolding tasks and was kicked off with Wormeli’s provocative talk on redos and retakes, before some exemplary teachers showed their scaffolding and student-support tools to colleagues. The second half of the session was devoted to further developing our own tasks; in later sessions we’ll evaluate these and think more carefully about what to do with ‘broken’ assessments and how to make best use of learning data.

The curriculum team, including Tony (@bellew), LizD (@lizdk), LizA and me, were impressed by the feedback given in a one-minute essay between sessions and by the quality of collegial conversations taking place. It is clear that CA has come a long way in assessment philosophy and practices in recent years. I am grateful to be in a place where we can work towards progress, share our practice and improve together as a faculty.

Here is the presentation. Apologies for stretching the golf metaphor to breaking point, but I also wanted to use it as a way to model the use of CC images from Flickr. I’m trying to find a happy medium between attractive ‘presentation zen’ for the PD sessions and functional informational flipbooks for teachers to refer to and use in their later work as they’re embedded in the faculty guide.



The Gradebook I Want

As the MYP sciences roll into the Next Chapter and we mull over the new guides, objectives and assessment criteria, we have the opportunity to reflect on our assessment practices. The IB have provided a very clear articulation between course objectives and performance standards (see image), which should make assessment and moderation a more efficient process.

There is a clear connection between the objectives of the disciplines and their assessment descriptors in all subjects in MYP: Next Chapter.

Underpinning these objectives, however, are school-designed content and skills standards. These are left up to schools to articulate so that the MYP can work in any educational context. This is great, though it does leave the challenge of essentially tracking two (or more) sets of standards in parallel: the MYP objectives and the internal (or state) content-level standards. In a unit about sustainable energy systems, for example, I might have 15-20 content-level, command term-based assessment statements, each of which could be assessed against any (or many) of the multiple strands of the four MYP objectives.
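To make that bookkeeping concrete, here is a small sketch of how the many-to-many mapping might be represented. The statements and strand codes are invented for illustration; real ones come from the unit planner and the subject guides.

```python
# A rough sketch of tracking a unit's content-level statements alongside the
# MYP objective strands each might evidence. Statements and strand codes are
# invented for illustration; real ones come from the unit planner and guides.
unit_standards = {
    "Outline the energy transfers in a wind turbine": ["A.i"],
    "Analyse efficiency data for two energy sources": ["B.iii", "C.iv"],
    "Evaluate the impacts of a national energy policy": ["D.ii", "D.iii"],
}

def strands_covered(statements):
    """Which MYP strands does a task built on these statements give evidence for?"""
    return sorted({strand for s in statements for strand in unit_standards[s]})

print(strands_covered([
    "Outline the energy transfers in a wind turbine",
    "Analyse efficiency data for two energy sources",
]))  # ['A.i', 'B.iii', 'C.iv']
```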

As I read more about standards-based grading (or, more recently, standards-based learning on Twitter), I become more dissatisfied with the incumbent on-schedule practices that preside over grading and assessment. I want students to be able to demonstrate mastery of both the MYP objectives and the target content/skills, but I am left with questions:

  • If they score well on a task overall but miss the point on a couple of questions/content standards, have they really demonstrated mastery? How can I ensure that they have mastered both content and performance standards?
  • If they learn quickly from their mistakes and need another opportunity to demonstrate their mastery of a single content-level standard (or performance-level standard), do they need to do the whole assignment again? What if time has run out or there is no opportunity to do it again?
  • As we move through the calendar in an effort to cover curriculum and get enough assessment points for a best-fit, are we moving too superficially across the landscape of learning?
  • More importantly, is the single number – their grade – for the task, a true representation of what they know and can do? How can I present this more clearly, to really track growth?

My aim with all this is to encourage a classroom of genuine inquiry (defined as critical, reflective thought), in which I know that students have effectively learned a solid foundation of raw materials (the ‘standards’, if you will), upon which they can ask deeper questions, make more authentic connections and evaluate, analyse and synthesise knowledge. 

Luckily, we have Rick Wormeli videos for reference. Here he is on retakes, redos, etc.; it is worth watching (and provocative). There is another part as well. If you haven’t seen them yet, go watch them before reading the rest of this post (the videos are better, TBH).

……….o0O0o…………

What do I do already? 

  • Lots of formative assessment: practice, descriptors on worksheets, online quizzes.
    • In each of these, there is a rubric connecting it to the MYP objectives.
      • Each question is labeled with the descriptor level and strand (e.g. L1-2a, L5-6b, c).
      • I don’t usually give a grade, though do check the work. Students should be able to cross-reference the questions with the descriptors, carry out their own ‘best fit’ and determine the grade they would get if they so desire. This puts feedback first.
    • Learning tasks usually include target content standards
  • Drafting stages through Hapara/GoogleDocs to keep track of work and give comments as we go
  • An emphasis on self-assessment against performance descriptors and content-level standards (and goals for improvement or revision).
  • I use command terms all the time: every sheet, question, lesson where possible.
  • Set deadlines with students where possible and am flexible where needed.
  • In some cases reschedule assessment or follow up with interview or retake (but not as standard practice). As Wormeli says above (part 2): “at teacher discretion.”
  • Track student learning at the criterion-level (MYP objectives), though with current systems (Powerschool), not in great detail at the objective strands (descriptors) level (e.g. A.i, A.ii, A.iii).
  • I do tests over two lessons, giving out a core section in the first and collecting and checking it between classes. In the next session, this is supplemented with extra questions that should allow students to take at least one more step up. For example, a student struggling with Level 3-4 questions gets more opportunities to reach that level, whereas another who has shown competency gets the next level(s) up (a rough sketch of this selection follows the list).
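Here is a loose Python sketch of that supplementary-question logic. The bands and question bank are invented placeholders for the idea, not an actual item bank.

```python
# A loose sketch of the two-lesson test idea: after checking the core section,
# pull supplementary questions that give each student at least one step up.
# The bands and question bank below are invented placeholders.
BANDS = ["1-2", "3-4", "5-6", "7-8"]
question_bank = {band: [f"Q{band}-{n}" for n in range(1, 4)] for band in BANDS}

def follow_up_questions(demonstrated_band):
    """More practice at the demonstrated band, plus the next band up if there is one."""
    i = BANDS.index(demonstrated_band)
    selection = list(question_bank[demonstrated_band])
    if i + 1 < len(BANDS):
        selection += question_bank[BANDS[i + 1]]
    return selection

print(follow_up_questions("3-4"))  # more 3-4 practice plus a first look at 5-6
```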

What do I want to do? 

  • I want to also be able to effectively track every student’s growth in the content standards and develop deeper skills in inquiry (critical reflective thinking).
  • Develop a system for better tracking learning against the individual strands within each criterion (e.g. A.i, A.ii, A.iii).
  • Better facilitate development of student mastery, allowing us to move further away from scheduled lessons and into more effective differentiation and pacing.

What would help? 

I would really like a standards-based, MYP-aligned, content-customisable gradebook and feedback system that is effective in at least three dimensions:

  • Task-level entry, recording levels for each task, where each task might produce multiple scores, including:
    • Various target content-level standards
    • MYP objective strands at different levels of achievement
  • It would need to allow for retake/redo opportunities for any and all standards that need to be redone – not necessarily whole assessment tasks. 
  • It would have to focus student learning on descriptors and standards, not on the numbers, in order to help them move forwards effectively. Students would need to be able to access it and make sense of it intuitively so that they could decide their own next steps even before I do.
  • It would be super-duper if the system could produce really meaningful report cards that focus on growth over the terminal nature of semester grading (a speculative sketch of the underlying data model follows below).
What would a three-dimensional gradebook look like?
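Nothing I know of does all of this off the shelf, but as a thought experiment, here is a speculative Python sketch of the data model such a gradebook might need. Every name and structure is invented; the point is that each attempt is recorded against an individual standard or strand, so a retake updates only the evidence it concerns rather than the whole task.

```python
# A speculative sketch of the data model a "three-dimensional" gradebook might
# need. Every name and structure here is invented; the point is that each
# attempt is recorded against an individual standard or strand, so a retake
# updates only the evidence it concerns rather than the whole task.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Evidence:
    task: str         # e.g. "Energy systems test"
    target: str       # a content standard or an MYP strand such as "B.ii"
    level: int        # achievement level demonstrated for that target
    attempt: int = 1  # retakes and redos add later attempts for the same target

class Gradebook:
    def __init__(self):
        self._records = defaultdict(list)  # student name -> list of Evidence

    def record(self, student, evidence):
        self._records[student].append(evidence)

    def current_levels(self, student):
        """Most recent evidence per target, so a redo updates only that standard."""
        latest = {}
        for ev in sorted(self._records[student], key=lambda e: e.attempt):
            latest[ev.target] = ev.level
        return latest

gb = Gradebook()
gb.record("Kai", Evidence("Energy systems test", "B.ii", 4))
gb.record("Kai", Evidence("Retake: graphing questions only", "B.ii", 6, attempt=2))
print(gb.current_levels("Kai"))  # {'B.ii': 6}
```

Reporting growth then becomes a question of how the history of attempts per target is summarised, rather than a single terminal number per task.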

Here is Rick again, describing another approach to a 3D gradebook:



Making Feedback Visible: Four Levels Experiment

This quick brain-dump is based on ideas from Hattie’s Visible Learning for Teachers, Wiliam’s Embedded Formative Assessment and the pdf of The Power of Feedback (Hattie & Timperley) linked below. 

I spent much of today trying to grade a large project (Describing the Motion of the Rokko Liner, our local train), which was assessed for MYP Sciences criteria D, E, F. Based on some of our Student Learning Goal work on helping students cope with data presentation and interpretation, the lab had been broken into stages (almost all completed in-class), spread across A4 and A3 paper and GoogleDocs in Hapara.

Hattie & Timperley, Four Levels of Feedback. Click for the pdf of ‘The Power of Feedback’; the figure is on page 87.

The result: a lot of visible learning, in that I could keep track of each student, see their work in progress and comment where needed. A lot of verbal feedback was given along the way, with some worked examples for students. Breaking the large assignment into stages helped keep it authentic and manageable for students, with some days dedicated to individual strands of the assessment criteria.

The challenge: a Frankenstein’s monster of a grading pile, part paper, part digital and all over the place. After trying to put comments on the various bits of paper and Google Docs I gave up, realising that I would be there for many hours and that potentially very little would be read carefully by students or actioned in the next assignment. I turned to Hattie (and Wiliam). Visible Learning for Teachers has a very useful section on feedback (d=0.73, though formative assessment is d=0.9), and so I spent some time making up the following document, with the aim of getting all the feedback focused and in one place for students.

It is based on the four levels of feedback: task-level, process-level, self-regulation and self. In each of the first three sections I have check-boxed a few key items, based on things I am looking for in particular in this task and the common advice that I will give based on a first read through the pile. A couple of boxes will be checked for each student as specific areas for improvement, with the ‘quality’ statements explained in person. There is space under each for personal comments where needed. I fudged the ‘self’ domain a bit for the purpose of student synthesis of the feedback they are given – really making it a reflective space, geared towards the positive after the preceding three sections of constructive commentary.
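For anyone wondering how such a coversheet could be assembled rather than hand-built, here is a minimal sketch with an invented comment bank (the real coversheet is a document, not generated code). It marks the checked items at the first three levels and leaves the ‘self’ section open for the student’s response.

```python
# A minimal sketch with an invented comment bank; the real coversheet is a
# document, not generated code. Checked items mark each student's priorities.
FEEDBACK_BANK = {
    "Task level": ["Graph axes need labels and units",
                   "Conclusion should refer directly to the data"],
    "Process level": ["Plan the results table before collecting data",
                      "Check for outliers before averaging trials"],
    "Self-regulation": ["Use the rubric to self-check before submitting"],
}

def build_coversheet(student, checked):
    """Return the coversheet text with this student's checked items marked."""
    lines = [f"Feedback for {student}"]
    for level, items in FEEDBACK_BANK.items():
        lines.append(f"\n{level}:")
        for item in items:
            mark = "[x]" if item in checked.get(level, []) else "[ ]"
            lines.append(f"  {mark} {item}")
    lines.append("\nSelf (student response): ____________________")
    return "\n".join(lines)

print(build_coversheet("Aya", {"Task level": ["Graph axes need labels and units"]}))
```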

Once I got the sheets ready, I chugged through the grading, paying attention most closely to the descriptors in the rubric, the task-specific instructions to students and then the points for action. However, I put very little annotation directly on the student work, instead focusing on this coversheet. It was marginally quicker to grade overall than the same task would have been normally, but the feedback this time is more focused. The double-sided sheet was given to them in class, attached to the paper components of their work, with the feedback facing out and the rubrics with grades hidden behind. This is a deliberate attempt to put feedback first. We spent about 25 minutes explaining and thinking through this in class.

Importantly, students were given time to think carefully about why certain notes had been made and boxes checked on their sheet. I asked them to respond to the feedback in the ‘self’ section, and to make additional notes in the three sections of task-level, process-level and self-regulation. In discussion with individual students, we identified which were most pertinent: some higher-achieving students can take action in more detail at the task level, whereas others need to focus more on self-regulation. At the end of the lesson, the sheets and work were collected back, so I can read their responses and use them to inform the next teaching of lab skills.

The purpose of all this is to make it explicit where they need to focus their efforts for the next time, without having to wade through pages of notes. It hopefully serves to make the “discrepancy between the current and desired” performance manageable, and a sea of marking on their work will not help with this. I will need to frame this carefully with students – some need work on many elements, but I will not check or note them, instead focusing on the few that are most important right now. Incidentally, it also allows me to more quickly spot trends and potentially form readiness groupings based on clusters of students needing work on individual elements in the following lab.

At the end of the task I asked students for feedback on the process. They generally found the presentation of feedback in this way easier to manage than sifting through multiple multimedia components, and will keep this document as a reference for next time. A couple of higher-achieving students asked for more detailed feedback by section in their work, which is something I can do on request rather than by default; I know these students will value and take action on it.

Here’s the doc embedded. If it looks as ugly on your computer as it does mine, click here to open it.

If you’ve used something like this, or can suggest ways to improve it without taking it over one side per section, I’d love to hear your thoughts in the comments or on Twitter. I’ll add to the post once I’ve done the lesson with the students.

UPDATE (2 December): Feedback-first, peer-generated

Having read that adding grades to feedback weakens the effect of the feedback, I’ve been thinking about ways to get students to pay more attention to the feedback first. For this task, a pretty basic spring extension data-processing lab, I checked the labs over the weekend and wrote down the scores on paper. In class I put students in groups of three and asked them to share the GoogleDoc of the lab with their partners. They then completed a feedback circle, using the coversheet below to identify specific areas for improvement and checking them. If they could suggest an improvement (e.g. a better graph title), they could add this as a comment.

This took about 15-20 minutes, after which students completed the process-level and self-regulation sections and returned the form to me, before continuing with the day’s tasks. Before the next class, I’ll add their grades to the form (rubrics are on the reverse of the copy I gave students) and log them in Powerschool. Delaying communication of the grade this way should, I hope, have helped students engage more effectively with the feedback – I learned last week that making changes in Powerschool resulted in automatic emails to students.

I was wary of doing this first thing on a Monday, but the kids were great and enjoyed giving and receiving feedback from peers. Of course some goofed off a little, but they were easy to get back on track. For the high-flyers who enjoyed the method less the first time, this gave them a chance to really pick through each other’s work and give specific feedback for improvement.

Here is the document:

……….o0O0o……….

The Power of Feedback (pdf): Hattie, J. & Timperley, H. (2007). Review of Educational Research, 77(1), 81-112. DOI: 10.3102/003465430298487