Incremental progression

Getting a full picture of a learner’s strengths across the Routemap takes some time, especially if evidence is being gathered from a range of curricular activities and if learner responses are being checked and revisited. Gradually, however, a first Routemap will begin to emerge and can be summarised on a ‘Routemap planner’ by shading boxes where evidence seems consistent with the suggestions in the Assessment Booklet.

At this point, the teacher is confronted with what might be called the ‘digital dilemma’ of assessment: a box is either shaded/ticked or it is not, whilst in the real world a skill, or a developmentally significant landmark, does not simply ‘switch on’ in this manner but emerges gradually and with nuance. ‘Lateral progression’ – progression within RfL boxes – is as important as progress down the Routemap, and in simply shading/ticking a box as ‘achieved’ there is a danger that such progression will be overlooked in the future.

For these and other reasons, teachers in special education have always favoured ‘incremental’ scales of one sort or another (usually consisting of three or four elements in addition to ‘no achievement’). Perhaps the simplest approach, adopted by many users of the RfL materials, is to shade each box in a different style to represent the degree of achievement suggested by the evidence gathered to date.


A simple 3-part incremental scale through shading
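The record behind such shading can also be kept digitally. Below is a minimal sketch of one way to do this; the box names and scale labels are purely illustrative and are not drawn from the RfL materials themselves.

```python
# Illustrative only: a 3-part incremental scale (plus 'no evidence yet'),
# recorded as a value per Routemap box instead of a shading style.
SCALE = {
    0: "no evidence yet",   # box left unshaded
    1: "emerging",          # lightest shading
    2: "developing",        # medium shading
    3: "consistent",        # full shading
}

# Hypothetical planner entries: each box carries its current scale value.
planner = {
    "Notices stimulus": 3,
    "Responds consistently to one stimulus": 2,
    "Anticipates within familiar routines": 1,
}

for box, score in planner.items():
    print(f"{box}: {SCALE[score]}")
```

Keeping the underlying values (rather than only the shading) means the same record can later be summarised or aggregated, as discussed below.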

‘Scoring’ evidence using an incremental scale helps to demonstrate progression in the degree of skill and quality of response shown by the learner, but it can also reflect the extent to which the teacher feels that the evidence itself is sufficient to support a definitive judgement. The primary purpose of giving an incremental score (e.g. to boxes on a Routemap planner) is to clarify the degree to which current evidence reflects developmental landmarks, and thus the next actions to be taken (assessment for learning). However, such judgements can be used for other purposes.

When adopted throughout a school, information from incremental scores can also be used to inform school improvement. A Routes for Learning Excel spreadsheet designed by Elson (2015) at Ifield School in Gravesend and distributed through the DfE-hosted SLD-Forum used a four-part scale to obtain summative information from learners’ Routemaps.
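The kind of summative aggregation such a spreadsheet might perform can be sketched as follows; the learner names, scores and four-part scale here are invented for illustration and do not reproduce Elson’s actual spreadsheet.

```python
from collections import Counter

# Hypothetical data: each learner's Routemap boxes scored on a 4-point scale.
scores = {
    "Learner A": [1, 2, 3, 4, 4, 2],
    "Learner B": [1, 1, 2, 3, 2, 1],
}

# Tally how many boxes sit at each point of the scale, per learner
# and for the cohort as a whole - the sort of summary a school might
# use to inform improvement planning.
cohort = Counter()
for learner, boxes in scores.items():
    tally = Counter(boxes)
    cohort.update(tally)
    print(learner, dict(sorted(tally.items())))

print("Cohort:", dict(sorted(cohort.items())))
```

Repeating the tally each term would show movement between scale points over time, which is the summative information described above.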

Incremental scales have also been used as a means of generating ‘quantitative’ data for external accountability. Hogg (2014) describes using a four-point scale “to enable quantitative data to be collected” from RfL. In this study, the intervention’s success was judged primarily by the fact that “Ofsted were satisfied with this means of tracking progress; the school was inspected in March 2013 and was graded outstanding”.

Much more sophisticated incremental scales have also been developed. The Continuum of Skill Development [CSD] (Sissons, 2010) and the Engagement Scale (Carpenter et al., 2011) are two examples. As their names suggest, the focus of each is very different, but there are similarities. Both are conceived primarily as approaches to measuring progression towards pre-defined goals. Instructions for the Engagement Scale, for instance, invite the teacher to “select an activity for which the student has a low engagement that you want to increase”, whilst the CSD is described as a tool “to evaluate progress against learning intentions”. In each case, a single activity or learning intention is given a score.

The process of gathering evidence for an RfL box might include some examples relating to learning intentions (e.g. to notice a specified stimulus under specified conditions), but the incremental rating will relate to multiple activities and situations, not just one (as illustrated below).



All of the incremental scales discussed so far have some similarities of structure, as the table below demonstrates:

A comparison of the structure of the incremental scales discussed

Elson and Hogg both use broadly similar 4-point scales, whilst the CSD and the Engagement Scale both have four summary zones. In all the scales, the first two increments are associated with degrees of partial success, whilst the third represents a more robust level of achievement. (In the third zone of the CSD, for instance, “the learner performs independently”, “the skill is reliably repeated” and is “frequently… demonstrated in different settings or contexts”.) The fourth indicator in each system reflects completion, generalisation and consolidation.

An evaluation of the Engagement Profile and Scale by Chalaye and Male (2014) found wide variation in the overall scores awarded by different staff when viewing video footage of two children with PMLD, as well as significant differences according to the staff members’ previous experience of those with PMLD and of each child in particular. This perhaps indicates that the Engagement Scale may not be the best scale to use with RfL.

There has been no similar evaluation of the CSD. Imray and Hinchcliffe (2014) welcome the CSD’s potential for providing “430 markers of progression” when used alongside RfL, whilst Hogg specifically rejects it as “too cumbersome”. However, a ‘hybrid’ approach is perhaps possible: the CSD can provide a high level of detail when measuring progression, particularly in relation to single, tightly defined goals or activities, but when a judgement needs to be made about a range of evidence drawn from different activities and contexts (as described above), four-point summary zones or four-part descriptors are likely to be more useful and manageable.


