What principles can be defined for fair assessment of learners with significant additional learning needs (ALN)? The following are my personal views about this:
Assessment imposes an identity upon the person being assessed. We should ensure it is always that of a learner with potential to progress.
When scientists investigate how light propagates (travels), it can be shown conclusively that it travels as a wave. However, if the experiment is conducted differently, light can be shown just as conclusively to be a stream of particles. In a similar way, the frameworks and tools used to assess a learner can show them either to be failing to learn (some examples are shown below) or to be progressing along a learning pathway.
How assessment has characterised those with significant ALN at different periods:
Up to 1903: ‘fools’
Up to 1944: ‘idiots’, ‘imbeciles’
Up to 1970: ‘ineducable’
Up to 2000: ‘below Level 1’ (UK)
Now: ‘working towards Outcome 1’ (Wales)
Any framework used for assessment should account for all learners
One reason why the examples given above fail to do justice to, or respect the dignity of, those with significant ALN is that the assessment framework used has been constructed with an arbitrary starting point (e.g. an IQ of 80, or ‘Level 1’ = the start of (valued) learning). This makes it inevitable that many learners will be excluded. Indeed, Tomlinson (1982) argues that the label ‘ineducable’ precisely ensured that such children could not ‘disrupt’ schools of the time. Pritchard (1968) more benignly argues that society and its policy makers were simply seeking to do their best for these children.
Like Dylan Thomas’s Under Milk Wood, assessment frameworks need to ‘begin at the beginning’. Schools will never learn to be ‘inclusive’ – and I’m not talking here about where teaching is located – unless the nation’s systems (in this case its assessment framework) provide a model for this.
Fair assessment does not measure learners against other learners
No two children are the same, although at a similar age they may show similarities in what they can do and how they go about doing it.
However, learners with significant ALN are not at all similar to children who are developing ‘typically’. Therefore, assessment frameworks and tools based entirely on age-related norms or ‘expectations’ can only characterise them negatively, as well as obscure learning that is significant when viewed in their own terms (as Wolf-Schein, 1998, has pointed out).
Arguably, the paradigm which compares learners with each other demands that some proportion of children be excluded. This might be deliberate (with more, or less, benign intent), or simply due to a lack of imagination – even today, the vast majority of the population (including many policy makers) will never have entered a special school and are therefore highly unlikely to consider fully the needs of its children.
Nor are these children or young people, particularly those with profound and multiple learning difficulties (PMLD), at all like each other – there is no ‘norm’ for PMLD – so it is not useful to judge what they can do solely according to ‘levels’ which position them somewhere in a hierarchy with other learners (e.g. ‘P’ Scales).
Assessment should recognise the inter-connectivity of learning and the fact that learners follow many diverse learning pathways
Learners are not all the same. It is ‘normal’ that learners bring different sets of personal characteristics to the learning table and that their subsequent learning pathways differ significantly. Although it may be convenient to consider development and learning in terms of domain-specific (subject-specific) hierarchies, in the real world “a person develops along a web of multiple strands and … different people develop along different pathways or webs” (Ayoub and Fisher, 2006). Assessment frameworks need to be flexible enough to account for learners who do not or cannot visit every sub-step, and should value lateral progression as much as vertical.
Assessment is analogue not digital
A digital clock clicks along in discrete units: it shows either the current minute or the one that follows, nothing in between; whereas on an analogue clock the hands are in continuous motion. Unfortunately, the quest for quantifiable ‘data’ favours digital assessment, with hierarchies of yes/no items. Checklists and tick boxes have been ubiquitous in the recent history of assessment.
But at what point do we call a skill completely ticked when, arguably, it can always be refined, consolidated and applied more widely? (For instance, I’m currently reasonably able to hit a ball with a racquet, but could I compete with Novak Djokovic?)
‘Analogue’ assessment seeks to capture progression as a learner moves from tentative, supported early attempts through to mastery. It records evidence of learning (most often in narrative or video formats) in order to explore what it means. Then it places this alongside a scale of mastery to interpret its current status.
Learners’ achievements do not fit neatly into pre-defined ‘levels’
Learning is ‘scruffy’ (to borrow a term from Penny Lacey). No matter how hard we try to confine it for our convenience within the ruled lines of a hierarchy (e.g. when we attempt to ‘best fit’ it into ‘Outcome 1’ or ‘P4’), real learning insists on flowing into adjacent levels. We need frameworks and tools which can capture the range within which learners are succeeding – rather than rule much of it out. The way that a learner’s achievement is spread tells us much that might inform future learning.
The primary purpose of assessment should be to inform and improve future learning
Many of the less desirable aspects of assessment touched upon above result from the tyranny of external accountability and the need to sort learners. Assessment of learning to satisfy these narrow purposes distracts from what really matters – especially for learners who will never be chosen in the sorting game. You can have reams of assessment data for accountability purposes, but if you cannot demonstrate that it has an impact on learning, those holding you to account may conclude that it is worthless.
Fair assessment which focuses on improving learning (assessment for learning) can nevertheless generate data that both informs school improvement and provides what inspectorates and other external parties require, but it needs to do this for individuals in their own terms (what has been called ‘ipsative’ assessment).
We should seek to identify each learner’s strengths, along with the challenges they face on their learning journey. From a clearly identified starting point we need to measure the progress they make and seek to understand their learning in its own terms. For this we need frameworks which are fit for purpose, supported by tools which can help us navigate. These are presented elsewhere on this website.
Above all we need to have a belief that the journey and the distance travelled are more important than reaching (or failing to reach) any preconceived destination(s).