Ripples & Reflections

"Learning is about living, and as such is lifelong." — Elkjaer

1 Comment

Webb’s DOK4 & Transfer

Last weekend I took part in a fabulous Bold Moves Curriculum Mapping Bootcamp, by Dr. Marie Alcock at ISKL. I was there to think about next steps for curriculum planning at CA, and it was a great opportunity to pick the brains of a true expert (and get lots done). I like the bootcamp model for PD: short, focused and with the opportunity to take immediate action with great feedback from colleagues in similar positions.

Through one of the discussions about high-quality assessment, Marie dug into Webb’s Depth of Knowledge (DOK) framework. It’s not a “wheel” of command terms as is often (and incorrectly) presented, but a way of framing how deeply students need to know and use information, skills and concepts. Similarly, DOK is not the same as Bloom’s Taxonomy, and is not a pyramid or a hierarchy of knowledge that “peaks” at DOK4. It can be accessed from any of the other three levels, and effectively sits in parallel.

For a decent explainer of how DOK levels work, see this by Erik Francis for ASCD Edge – I used his DOK descriptors in the teacher plansheet tool.

In practical terms, as explained by Marie, students should be able to access DOK4 from any one of the other DOK levels. This is important, as DOK4 can essentially act as a filter for transfer.

How else can the student use the knowledge, skills and content at this level? 

This means that in curriculum and task design and differentiation, teachers can set up situations for all students to pull their learning (even if only at a recall/DOK1 level) through to DOK4 by applying it in a new context – as long as it is the same skill/target. For example, this might mean taking a scientific skill and applying it to a new experiment, or a writing technique applied to a new genre. This is knowledge augmentation. MYP teachers will see the immediate connections here to higher-order objectives in the criteria.


Teacher Plansheet: A Practical Use

Transfer is a notoriously difficult skill to teach, even though it is included in the ATL framework, and so I sketched up this planning tool (pdf) in the hope that it can visualise how DOK4 can be used as a filter to make transfer explicit. Follow the arrows as you think about putting a target standard or learning outcome to work. What level (DOK1-2-3) is expected of the student? How else (DOK4) could it be used? For some excellent, practical resources on applying DOK in the various disciplines, check out Dr. Karin Hess’s Cognitive Rigor and DOK rubrics and resources.


Transferring the Transfer: Thinking Collaboratively

How else might this tool be put to use? Here are some quick thoughts on how this might work with the collaboration of the relevant experts or coaches in the school.

  • Technology Integration: using the DOK4 filter as an opportunity to amplify and transform (RAT model) the learning task (but still meet objectives).
  • Service Learning: In moving from “doing service” to service learning, could this be used to help frame students’ focus on planning, or post-service reflection? As students learn about issues of significance, how can they put it to work through transfer to meaningful action? As they reflect on their learning, can they connect new and existing disciplinary knowledge?
  • Interdisciplinary Learning: How can students take their learning and use it meaningfully in a context that requires transfer between disciplines?





Capturing the Criteria & “Zooming In”


Shortlink to this resource:

Update Sept. 5 2017 based on edits summarized in this update from the IB.

After some parent-teacher conferences recently, I was asked to show all of the MYP assessment criteria together and realised I couldn’t find something that met our needs for a single-reference, quick overview of the MYP assessment objectives and criteria.

Here is an attempt to put the big ideas and rubrics together in one place, so that colleagues can quickly see vertical and horizontal articulation and connections, and so that parents have a resource to hand to help understand assessment.

You might find it useful.

To make your own copy, click “File –> Make a copy”.




  • This involved a lot of clicking and is bound to have some errors. Big thanks to Mitsuyo-san, our data secretary, who helped with this. 
  • Descriptors in bold did not make it across from text to spreadsheet. Use original descriptors in student assignments.
  • This is intended only as an overview of the programme. Teachers must exercise caution with this, and default to the published guides on the OCC for assessment rubrics, clarifications, rules and guidance.


Edit: added 3 May 2017

Why the green bands? 

In each of the subject-area bands, you’ll find the Level 5-6 row accented with green. This is part of something I’m trying to work on with colleagues and students in terms of zooming into the objectives-level of assessment, and was something I used in #HackTheMYP.

The basic idea is this: 

  1. As a model of a 4-band rubric, we typically see the third band as ‘meets objectives’. This means that the rows below are approaching and those above are exceeding.
    • Try it: add up the scores for all 5, all 6 or a combination thereof. What does it come to when you apply the total to the 1-7 conversion chart? This is the kid that meets the outcomes of our core curriculum. 
  2. When we focus only on the top-band descriptors we may inadvertently end up doing one of two things:
    • Causing students to get stressed by default as they’re aiming for the ‘exceptional’ descriptors first. “The gap” between where they are and where they want to be is too big; or,
    • Falsely making our core expectations for all students fit the 7-8 band, thus leaving nowhere to go from there – creating a “low ceiling” and no room for extension into genuinely meeting those top descriptors.
  3. If we zoom into the 5-6 band first – in task design and as a student – we are able to set an appropriate expectation for all learners, see how and where to scaffold and support those who need it, and provide a “high ceiling” for innovation, application, analysis, synthesis, etc.
  4. It should then become easier to create the task-specific clarifications. If we can clearly describe the 5-6 “core” band first, we should then make sure that the levels above and below can be really clearly distinguished. In my experience, this is easier than starting at the top and working back.
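The “add up the scores” exercise in point 1 can be sketched in a few lines of Python. The grade boundaries below are placeholders for illustration only – always use the published conversion chart in the current MYP guide, as boundaries are subject to change:

```python
# Illustrative sketch of the "add up the band scores" exercise.
# BOUNDARIES is a placeholder conversion chart (total -> 1-7 grade);
# check the published MYP guide for the real one.

BOUNDARIES = [
    (range(1, 6), 1),    # total 1-5   -> grade 1
    (range(6, 10), 2),   # total 6-9   -> grade 2
    (range(10, 15), 3),  # total 10-14 -> grade 3
    (range(15, 19), 4),  # total 15-18 -> grade 4
    (range(19, 24), 5),  # total 19-23 -> grade 5
    (range(24, 28), 6),  # total 24-27 -> grade 6
    (range(28, 33), 7),  # total 28-32 -> grade 7
]

def convert(criterion_levels):
    """Sum four criterion levels (0-8 each) and map the total to a 1-7 grade."""
    total = sum(criterion_levels)
    for band, grade in BOUNDARIES:
        if total in band:
            return grade
    return 0  # a total of 0 falls outside the illustrative bands

# A student in the 'meets objectives' 5-6 band on all four criteria:
print(convert([5, 5, 5, 5]))  # total 20
print(convert([6, 6, 6, 6]))  # total 24
```

With these illustrative boundaries, a student scoring 5s or 6s across the board lands squarely in the middle of the 1-7 scale – the student who meets the outcomes of the core curriculum, with room above for genuine extension.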

If you’ve tried this idea (or similar), how did it go?


For a similar discussion and great resources, but in an SBG context, check out Jennifer Gonzalez’s (@cultofpedagogy) posts on the “single point rubric”:

Leave a comment

Standardization: Cycling away from the moderation “event”

A quick post to share a resource, based on some of our work at CA. I love cycle diagrams and was thinking about the process of moderation, planning and the challenges of effective collaboration when there are grades (and a big pile of ‘done’ grading) at stake. 

If you’ve ever tried to ‘moderate’ work that’s based on two or more teachers’ hours of effort in grading, you’ll recognise the challenge. The proposal here is to reframe standardization as a cycle – various points of entry to working together on a common understanding of assessment – so that teachers align their assessment standards more closely. Post-hoc moderation events may tend towards defence of our own grading work; who wants to go back and change all that work?

Do you think you could put the cycle to work in your own context? 


The IMaGE of an International School

It’s crunch time for my MA International Education studies at the University of Bath, with a big literature review in progress and some data collection coming up, aiming to submit by the summer break. As much as I’ve loved the study, I’m looking forward to reclaiming some balance. 


My plan for the dissertation is to update and pilot-test my web-chart of the international dimension of a school, aiming to tackle the challenge of defining a nebulous concept through visualisation, based on self-reporting, to generate “the IMaGE of an international school” (IMaGE = international mindedness and global engagement). The small-scale case-study will generate an IMaGE for my own school, and the pilot study will help evaluate the usefulness of the visualisation and metrics.

Web8Sample (2)

A sample of the web chart in use, with the IMaGE showing the evolution of a school or a change in perception. The eight radials are still under development, and there will be descriptors for each in the final project. At first glance, where would you rate your own school? What do you see, think, wonder about the results of this (imaginary) school?

The idea of trying to evaluate or measure the ‘internationalisation’ of a school is not new: we already have metrics, practices or handbooks from various organisations, including the IB, CIS, ISA, ACE, OECD. This project aims to learn from, adapt and distil these qualities into an accessible tool that will generate a ‘visual definition’ for a school, as a starting point for further investigation.

Although some of the ideas within the chart have evolved a lot since the initial idea in 2012 (and I have found many more studies), here is the original assignment.





1 Comment

How NOT to be ignorant about the world.

Another great Hans Rosling TED Talk, this time with his son, Ola.

Dealing with misconceptions, bias, ignorance of global issues and a little formative assessment*, they discuss how we can be better informed about the world, with a fact-based world view… and how we could (eventually) perform better than chimps on a global-issues quiz. I have blogged about how this might be used in IB TOK or science classes on i-Biology.


Should a fact-based world view be the core curriculum of an international school?

Early in the talk, Ola recognises the influence of early bias and outdated curricula on students’ world views – and how these are compounded by an ill-informed media. Through their project, they are trying to measure these misconceptions and propose a ‘global knowledge certificate’ that candidates (or organisations) might use to stay informed, to be competitive and to think about the future.

It seems to me that the fact-based world view would make for an excellent set of content-knowledge standards for an international school, and might pair nicely with the IB programmes as we seek to create knowledgeable young inquirers who seek to make a positive difference to the world around them. How can they achieve this if they are learning outdated concepts of development or using stereotypes to paint the world in an ugly shade of ill-informed?

Hattie’s meta-analyses note that the power of prior learning (including prior mis-learning or misconception) has a very high impact on students’ future learning (d=0.67). As we generate scopes and sequences for courses or set up units of inquiry, should we be looking to the research not only on misconceptions in our own content domain but in global literacy in order to give students the tools they need to inquire in a changing and often-misunderstood world?

Is globally-literate the same as internationally-minded?

It is hard to define international-mindedness, though we can recognize it in our own settings. We might observe the behaviours of a globally-engaged student (or teacher), and might use assessments of students’ fact-based world-views as a measure of their international-mindedness. To this end, a globally-focused national school might be a more effective ‘international school’ than a more narrowly-focused overseas expatriate school.**

You can read about the ignorance project here on CNN, or find more classroom resources (including a world-view card game) on Gapminder’s education page. The Guardian also has a selection of global development quizzes, which you can take for fun or in class.


*making great use of the audience-response clicker system pioneered by Eric Mazur.

**this is part of the idea of my web-chart of the IMaGE (IM and Global Engagement) of a school in my MA work.


Faculty PD: Assessment Principles & Practices (and a stretched golf metaphor)

Over the last couple of weeks we’ve put a lot of work into developing Wednesday afternoon PD sessions for middle and high-school faculty on Assessment Principles and Practices. We’ve chosen to do this now, as this is an easy entry point for work on MYP: Next Chapter and it is always valuable PD to think about, evaluate and strengthen our practices. It builds on a lot of the good work that CA has been doing in recent years to improve assessment.

The inspiration for the theme came from Ken O’Connor’s (@kenoc7) blog post on “23 reasons why golf is better than classroom assessment and grading,” as well as some of Rick Wormeli’s (@RickWormeli) great series of videos on Assessment in the Differentiated Classroom. The aim was to emphasise the importance of the connection between the objectives of the unit and the assessment tasks, resulting in strong, worthwhile assessment taking place. As part of the discussion we connected the objectives of the MYP subjects to their respective assessment criteria and strands.

So far we have completed two of three (or more) sessions on this, with the first being a general overview, including revisiting our Assessment Policy and a Socrative Space Race, as well as sharing with colleagues from different departments. The second session focused on scaffolding tasks and was kicked off with Wormeli’s provocative talk on redos and retakes, before having some exemplary teachers show their scaffolding and student support tools to colleagues. The second half of the session was devoted to further developing our own tasks, and in later sessions we’ll evaluate these and think more carefully about what to do with ‘broken’ assessments and how to make best use of learning data.

The curriculum team, including Tony (@bellew), LizD (@lizdk), LizA and I were impressed by the feedback given in a one-minute essay between sessions and in the quality of collegial conversations taking place. It is clear that CA has come a long way in assessment philosophy and practices in recent years.  I am grateful to be in a place where we can work towards progress, share our practice and improve together as a faculty.

Here is the presentation. Apologies for stretching the golf metaphor to breaking point, but I wanted to also use it as a way to model use of CC images from Flickr. I’m trying to find a happy medium here between attractive ‘presentation zen’ for the PD sessions and functional informational flipbooks for teachers to refer to and use in their later work as they’re embedded in the faculty guide.

1 Comment

The Gradebook I Want

As the MYP sciences roll into the Next Chapter and we mull over the new guides, objectives and assessment criteria, we have the opportunity to reflect on our assessment practices. The IB have provided a very clear articulation between course objectives and performance standards (see image), which should make assessment and moderation a more efficient process.

There is a clear connection between the objectives of the disciplines and their assessment descriptors in all subjects in MYP: The Next Chapter.

Underpinning these objectives, however, are school-designed content and skills standards. These are left up to schools for articulation so that the MYP can work in any educational context and this is great, though it does leave the challenge of essentially tracking two sets of standards (or more) in parallel: the MYP objectives and the internal (or state) content-level standards. In a unit about sustainable energy systems, for example, I might have 15-20 content-level, command term-based assessment statements, each of which could be assessed against any (or many) of the multiple strands for each of four MYP objectives.

As I read more about standards-based grading (or more recently standards-based learning on Twitter), I become more dissatisfied with the incumbent on-schedule assessment practices that preside over grading and assessment. I want students to be able to demonstrate mastery of both the MYP objectives and the target content/skills, but I am left with questions:

  • If they score well on a task overall but miss the point on a couple of questions/content standards, have they really demonstrated mastery? How can I ensure that they have mastered both content and performance standards?
  • If they learn quickly from their mistakes and need another opportunity to demonstrate their mastery on a single content-level standard (or performance-level standard), do they need to do the whole assignment again? What if time has run out or there is no opportunity to do it again?
  • As we move through the calendar in an effort to cover curriculum and get enough assessment points for a best-fit, are we moving too superficially across the landscape of learning?
  • More importantly, is the single number – their grade – for the task, a true representation of what they know and can do? How can I present this more clearly, to really track growth?

My aim with all this is to encourage a classroom of genuine inquiry (defined as critical, reflective thought), in which I know that students have effectively learned a solid foundation of raw materials (the ‘standards’, if you will), upon which they can ask deeper questions, make more authentic connections and evaluate, analyse and synthesise knowledge. 

Luckily we have Rick Wormeli videos for reference. Here he is on retakes, redos, etc., and it is worth watching (and provocative). There is another part, as well. If you haven’t seen them yet, go watch them before reading the rest of this post (the videos are better, TBH).


What do I do already? 

  • Lots of formative assessment: practice, descriptors on worksheets, online quizzes.
    • In each of these, there is a rubric connecting it to MYP Objectives
      • Each question is labeled with the descriptor level and strand (e.g. L1-2a, L5-6b, c).
      • I don’t usually give a grade, though do check the work. Students should be able to cross-reference the questions with the descriptors, carry out their own ‘best fit’ and determine the grade they would get if they so desire. This puts feedback first.
    • Learning tasks usually include target content standards
  • Drafting stages through Hapara/GoogleDocs to keep track of work and give comments as we go
  • An emphasis on self-assessment against performance descriptors and content-level standards (and goals for improvement or revision).
  • I use command terms all the time: every sheet, question, lesson where possible.
  • Set deadlines with students where possible and am flexible where needed.
  • In some cases reschedule assessment or follow up with interview or retake (but not as standard practice). As Wormeli says above (part 2): “at teacher discretion.”
  • Track student learning at the criterion-level (MYP objectives), though with current systems (Powerschool), not in great detail at the objective strands (descriptors) level (e.g. A.i, A.ii, A.iii).
  • I do tests over two lessons, giving out a core section in the first, collecting and checking in-between classes. In the next session, this is supplemented with extra questions that should allow students to take at least one more step up. For example, a student struggling with Level 3-4 questions would get more opportunities to get to that level, whereas another who has shown competency will get the next level(s) up.

What do I want to do? 

  • I want to also be able to effectively track every student’s growth in the content standards and develop deeper skills in inquiry (critical reflective thinking).
  • Develop a system for better tracking learning against the individual strands within each criterion (e.g. A.i, A.ii, A.iii).
  • Better facilitate development of student mastery, allowing us to move further away from scheduled lessons and into more effective differentiation and pacing.

What would help? 

I would really like a standards-based, MYP-aligned, content-customisable gradebook and feedback system that is effective in at least three dimensions:

  • Task-level entry, to record levels for each task, each of which might produce multiple scores, including:
    • Various target content-level standards
    • MYP objective strands at different levels of achievement
  • It would need to allow for retake/redo opportunities for any and all standards that need to be redone – not necessarily whole assessment tasks. 
  • It would have to focus student learning on descriptors and standards, not on the numbers, in order to help them move forwards effectively. Students would need to be able to access it and make sense of it intuitively so that they could decide their own next steps even before I do.
  • It would be super-duper if the system could produce really meaningful report cards that focus on growth over the terminal nature of semester grading.
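To make the wish-list above concrete, here is a minimal sketch of what such a three-dimensional record might look like as a data structure: every piece of evidence is indexed by (student, task, standard), where a “standard” can be either a content standard or an MYP objective strand (e.g. B.ii). All names here are hypothetical illustrations, not any existing gradebook’s API:

```python
from collections import defaultdict
from dataclasses import dataclass, field

# Hypothetical sketch of a 3D gradebook: evidence indexed by
# (student, task, standard). Retakes add new evidence for a single
# standard rather than erasing or redoing the whole task.

@dataclass
class Evidence:
    task: str
    standard: str         # content standard or MYP strand, e.g. "B.ii"
    level: int            # achievement level shown for that standard
    descriptor: str = ""  # feedback-first: words before numbers
    attempt: int = 1      # retakes append evidence; history is kept

@dataclass
class Gradebook:
    records: dict = field(default_factory=lambda: defaultdict(list))

    def record(self, student, evidence):
        self.records[student].append(evidence)

    def retake(self, student, task, standard, level, descriptor=""):
        """Re-assess a single standard without redoing the whole task."""
        attempts = [e for e in self.records[student]
                    if e.task == task and e.standard == standard]
        self.record(student, Evidence(task, standard, level,
                                      descriptor, len(attempts) + 1))

    def latest(self, student, standard):
        """Most recent evidence for one standard -- a growth snapshot."""
        hits = [e for e in self.records[student] if e.standard == standard]
        return hits[-1] if hits else None

gb = Gradebook()
gb.record("aiko", Evidence("Energy test", "B.ii", 4, "states a hypothesis"))
gb.retake("aiko", "Energy test", "B.ii", 6, "explains a testable hypothesis")
print(gb.latest("aiko", "B.ii").level)  # most recent evidence: level 6
```

Because each standard carries its own evidence trail, a report could show growth per strand over time rather than a single terminal number per task – closer to the descriptor-first, retake-friendly system described above.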

What would a three-dimensional gradebook look like?

Here is Rick again, describing another approach to a 3D gradebook: