This is an exploratory discussion document. I wrote it to try to understand more clearly the current situation. I am open to persuasion and will take on board evidence-based amendments.


The transition from old to new

The original policy design for levels-based assessment was simple and elegant. This helped ensure its longevity throughout a period of constant policy churn. (I was present when it was first described at a meeting of TGAT, over 28 years ago.)

But it was eventually overloaded by a bolt-on superstructure of short-term progress targets. These, combined with the pressures of high-stakes accountability, brought about the problems delineated by the Commission on assessment without levels.

But the downsides of removing levels, entirely ignored by the Commission, are too readily swept aside.

There were substantial advantages, not least for parents, in having a single national system applied consistently throughout their children’s education, regardless of age, stage or location. Everyone was familiar with the broad framework, even if they did not understand the subsequent agglomeration of detail.

Twitter attests to teachers’ widespread confusion and uncertainty as they transition to the new regime. This disquiet is compounded because details of the statutory assessment regime continue to emerge piecemeal.

This must be attributable in part to imperfect project planning or rethinking ‘on the hoof’, but substantive elements of the new system cannot be pinned down until after the first set of tests has been taken in May 2016.

Several commentators have expressed concern at the vacuum created by dispensing with the old framework before (or should that be without?) providing a replacement. The principles of effective change management have not been rigorously applied.

Many schools do not know what to do with the freedoms they have been given. A substantial proportion is reported to be continuing with levels, or a modified version of them.

If teachers are confused, most parents will face a very steep learning curve indeed.

Those interested in how the new system will support high attainers ought to be more worried than they seem that effective top end ‘stretch and challenge’ could be sacrificed on the altar of mastery.

In principle (if not sufficiently often in practice) the old levels-based system helped schools to set suitably demanding expectations of their high attainers.

When combined with a sophisticated understanding of top-end differentiation it encouraged them to blend acceleration (faster pace), extension (depth) and enrichment (breadth) in the right combinations to suit each learner’s needs.

The new orthodoxy of mastery depends exclusively on depth, forswearing breadth and especially faster pace on the grounds that, in the past, teachers too often succumbed to a perverse incentive to accelerate, regardless of the security of prior learning.

This is not consistent with the national curriculum. It also substitutes a new risk for the old one – that ‘all depth and no pace’, as opposed to the national curriculum’s endorsement of ‘depth before pace’, will cause too many high attainers to bump too often against unnecessarily restrictive age-related performance ceilings.

Measuring attainment through statutory assessment

According to the original plan, teacher assessment, informed by tests, was to have remained the default method of measuring attainment at the end of KS1, for reading, writing and maths alike.

In the case of writing, though, the grammar, punctuation and spelling (GPS) test does not correspond directly with the teacher assessment, so the relationship is more oblique.

However this is now being rethought (see below).

Tests alone are paramount at the end of KS2, though – for the time being at least – statutory teacher assessment is still separately reported alongside.

KS2 writing is again an exception, following the KS1 precedent in being TA-led and only indirectly associated with the GPS test.

Statutory teacher assessment relies on teachers’ judgements against common national standards, each comprising a set of ‘pupil can’ statements, all of which the learner must satisfy. The ‘best fit’ notion has been discarded, so reducing the extent of professional discretion.

The standards have been published in interim teacher assessment frameworks which apply for 2016 only. This is an example of piecemeal implementation. We do not know why they are interim, or how they may be expected to change.

An ‘expected standard’ is common to each framework.

Where teacher assessment is dominant – at KS1 and for KS2 writing – the interim frameworks also include two other levels: ‘working towards the expected standard’ and ‘working at greater depth within the expected standard’.

It is not possible to be assessed as ‘working beyond the expected standard’. But an additional ‘working below’ rating will be used for reporting purposes, to accommodate those children not entered for the tests.

Teachers must apply one of the three available standards to all those being assessed. Their judgements should be informed by scaled scores achieved in the corresponding tests.

At the end of KS2, pupils’ attainment in reading and maths will be judged solely by these scaled scores derived from the relevant tests. Indeed, all statutory tests, for both KS1 and KS2, will use scaled scores and the scale applied is expected to be common to all of them.

The expected national standard on this scale – the level that all learners are expected to attain – will always be 100, regardless of the test or key stage.

There is as yet no guidance about how other scaled scores will correspond to the teacher assessment standards. Nor can the standard associated with a scaled score of 100 be set. These depend on the outcomes of the 2016 tests. It follows that, for AY2015/16, any predicted KS2 outcome will be guesswork.

Each scale must stretch from a point significantly below 100 to a point significantly above. Early illustrations suggested a span from 80 to 130.

Pupils at the bottom of the scale will significantly undershoot the expected national standard. According to the ARA guidance those not expected to reach the standard by May 2016 should be excluded. Others working below the expected standard will be entered at their teachers’ discretion.

Since the expected national standard will be more demanding than it used to be, broadly equivalent to an old 2b at KS1 and a 4b at KS2, substantial proportions of the 2016 cohort are certain to score well below 100 in each test.

Those at the top of the scale will have demonstrated the fullest possible knowledge, skills and understanding permitted by the test: they will have achieved full marks.

Because the standard is more demanding, there is some extra headroom at the top end. Moreover, top-level performance is most likely to be depressed by teachers’ limited familiarity with the most demanding new material in the programmes of study.

But, despite this, the highest attainers will be scraping against an age-related performance ceiling.

The initial consultation document suggested these new KS2 tests would contain questions at least as demanding as those in the old level 6 tests. But those drew substantially on the KS3 programmes of study, so raising the age-related ceiling, enabling the highest attainers to receive due credit for moving through the curriculum at a faster pace.

All questions in the new KS2 sample tests are confined to the KS2 programme of study. Consistent with the interim teacher assessment frameworks, the highest attaining pupils may gain some credit for ‘working at greater depth within the expected standard’; they will gain none for working beyond that standard.

Such an outcome can only be recorded through in-school summative assessment. Since this will not be reported or used for accountability purposes there is a strong temptation for schools to dispense with it altogether.

When it comes to reporting, schools will need to publish ‘the percentage of pupils who achieve a high score in all areas at the end of key stage 2’. One assumes this means reading, writing and maths combined, but further details remain elusive.


Measuring progress through statutory assessment

The end point for measuring progress in the primary sector is fixed, but the baseline will change several times in the next few years:

  • For those completing KS2 from 2016 to 2019 it will be the old-style KS1 assessments undertaken four years earlier.
  • For those completing KS2 in 2020 and 2021 it will be the new-style KS1 assessments undertaken in 2016 and 2017 respectively.
  • For those completing KS2 in 2022, schools that have opted into the new reception baseline in 2015 can choose between that and a KS1 baseline, according to which gives the best result.
  • For those completing KS2 in 2023 or succeeding years, only the new reception baseline will be available. But it will not be compulsory – schools not using it will be judged on attainment measures alone.

A weekend press briefing in early November heralded an imminent Ministerial speech announcing a wholesale review of primary assessment and the appointment of a working group to determine the best way forward.

Instantly though, Ministers took to Twitter insisting that the only issue on the table would be firming up the KS1 baseline for the purpose of measuring progress.



Confusion reigned



The speech itself confirmed this narrower focus. Secretary of State Morgan restated the significance of progress measures:

‘My focus is on ensuring we can be confident every child is making the progress we know they can.’

The new element was described thus:

‘…to be really confident that students are progressing well through primary school, we will be looking at the assessment of pupils at age seven to make sure it is as robust and rigorous as it needs to be.

We’ll be working with headteachers in the coming months on how we get this right, holding schools to account and giving them full credit for the progress they achieve.’

There was no explanation of why the original decision to stick with tests informed by teacher assessment now needs unpicking.

This report from June 2015 suggested the problem arose from the decision to allow a teacher assessed reception baseline option:

‘A senior source told TES that “loading up” on two sets of teacher-assessed data to measure progress was deemed to be problematic. “Nick Gibb is looking at the idea of scrapping teacher assessment in KS1 tests entirely in favour of having reported tests. It is because there is a difficulty with using teacher assessment for progress, plus they want to reduce teachers’ workload,” the source said.

“The issue is that you can’t measure progress accurately with teacher assessment, and there are incentives for schools to depress pupils’ scores to show that progress is being made.”’

The obvious trade-off here would be to sacrifice the reception baseline in return for test-led assessment at KS1.

The Government’s commitment to a reception baseline may be undermined by anticipated comparability issues between the three different assessments. One option would be to dispense with the one driven by teacher assessment, but that has been chosen by the large majority of settings. It would be more straightforward to abandon the reception baseline entirely.

Details of the progress calculation were not expected until early 2016, but official channels subsequently suggested they would appear ‘shortly’. It seems likely that this issue with the KS1 baseline will push us back to the original timetable.

The methodology should broadly mirror the value-added approach developed for assessing progress between KS2 and KS4.

Each pupil’s scaled score in each test will be compared against the results achieved by other pupils nationally with the same prior attainment. Pupils will be allocated to prior attainment groups for this purpose. We do not know how many – deciles perhaps?

The progress measure will most likely be calculated by subtracting the average scaled score for the relevant group from the actual score a pupil has achieved.

So if a high attainer achieves a scaled score of 125, while the average for the corresponding national group is 120, his progress score is +5. Conversely, if the high attainer’s scaled score is 115 his progress score is -5.
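The arithmetic just described can be sketched in a few lines of Python. To be clear, the decile grouping, the group names and the national group averages below are all invented for illustration – the real prior attainment groups and averages cannot exist until the 2016 results are in:

```python
# Illustrative sketch of the likely progress calculation.
# The group averages are invented; real figures depend on
# the 2016 national test results.

# Hypothetical average KS2 scaled score for each prior attainment group
group_average = {
    "group_1": 92.0,    # lowest prior attainment group
    "group_5": 104.0,
    "group_10": 120.0,  # highest prior attainment group
}

def progress_score(scaled_score, prior_attainment_group):
    """Pupil's scaled score minus the national average score
    for pupils with the same prior attainment."""
    return scaled_score - group_average[prior_attainment_group]

# The worked example from the text: a high attainer whose
# national comparison group averages 120 on the test
print(progress_score(125, "group_10"))  # +5: above-average progress
print(progress_score(115, "group_10"))  # -5: below-average progress
```

Note that the sign of the result says nothing about the absolute standard reached – only about progress relative to pupils with similar starting points.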

Most pupils’ progress scores will be single-digit positive or negative integers but, assuming the scale is similar to the illustrative version discussed above, a few outliers will have scores of +/-20.

It will not be unusual for a comparatively low-attaining learner to record a positive progress score without achieving the expected standard. Many more learners will achieve the expected standard, even significantly exceed it, but nevertheless record a negative progress score.

These individual scores for each assessment will be provided to schools, but need not be reported to parents or shared with pupils (though they should be made available to parents who request them).

The scores for each of reading, writing and maths will be aggregated across all pupils in the school’s KS2 cohort. (It is unclear as yet how writing – as opposed to the GPS test outcome – can be included in the calculation.)

These three aggregated progress measures will be published on schools’ websites and used in the primary floor standards.

The average aggregate score needed to exceed the floor standards will be the national median score for all schools, which will almost certainly be below zero, although this too will not be established until the 2016 tests have been completed.
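A minimal sketch of how that floor comparison might work, assuming (since neither is yet confirmed) that a school's aggregate in each subject is the mean of its pupils' individual progress scores, and that the floor threshold is the national median of those school aggregates; all figures are invented:

```python
from statistics import mean, median

# Invented per-pupil progress scores in one subject for a small cohort
school_scores = [3.0, -1.5, 0.0, 2.5, -4.0, 1.0]

# School-level aggregate: mean progress score across the cohort
school_aggregate = mean(school_scores)

# Invented aggregates for other schools nationally; the floor
# threshold is taken here to be the national median, as suggested
national_aggregates = [-2.1, -0.8, -0.3, 0.4, 1.2]
floor_threshold = median(national_aggregates)

above_floor = school_aggregate > floor_threshold
print(school_aggregate, floor_threshold, above_floor)
```

On this construction a school can clear the floor with a slightly negative aggregate, because the threshold itself sits below zero.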


Designing internal assessment 

Under the new regime schools can determine their own in-school formative and summative assessment schemes. They can set their own grading scales and standards – whether numeric or descriptive – to identify interim attainment levels and progress towards the end-of-key-stage outcomes.

These ought to fit with end of key stage testing and teacher assessment, but no restrictions have been placed on their development to ensure this.

Curriculum-based models seem popular, built around the division of programmes of study into termly units. In-school attainment and progress can be tied to lock-step completion of these units in chronological order.

Some schools seem to want to restrict themselves to this ‘granular’ approach to assessment, focused exclusively on the assimilation of specified content.

Others understandably want to be able to assess overall progress, to tell parents whether their children are on track for specified end-of-key-stage outcomes. This requires some sort of scale, either numeric or descriptive.

Some have pronounced the death of the ‘flight path’ approach to learner progress but there is nothing to prevent it reappearing after the 2016 tests, no longer criterion-referenced admittedly, but based on the typical attainment trajectories of learners with similar starting points.

It will be interesting to see whether pressure to reintroduce ‘expected progress’ for each learner as an external accountability measure can be resisted in the longer term.

Some schools are developing their own systems in-house, either individually or collaboratively, while others are buying in a commercial solution, choosing amongst an ever-expanding field of suppliers. There is no quality assurance; caveat emptor applies.

Regardless of the system adopted, schools should resist the temptation to dispense with ‘working beyond’ judgements, even though such achievement is no longer recognised in the statutory assessment arrangements. This ought to be assessable within the key stage if not at the end.

Aggressive marketing of mastery, some of it undertaken by government-funded agencies, is influencing system design and the choices made by schools.

There is strong pressure in the system, particularly in maths, to adopt a mastery-driven pedagogy that rejects all progress through faster pace. This subverts the ‘depth before pace’ guidance in the national curriculum to an ultra-conservative ‘depth instead of pace’ approach.

The Commission’s report sets out broad principles to inform schools’ independent decision-making, but this is set alongside its own paean to mastery.

But schools are entirely free to reject such pressure, even though it features in the report of the Commission on Assessment without Levels, because it cuts directly across the right they have been granted to adopt inclusive internal assessment systems that satisfy all their learners’ needs.

The assessment package must be sophisticated enough to register different kinds of progress, including the consolidation of existing learning, working at greater depth within the existing standard, enrichment activity (breadth) and, as discussed above, pushing beyond age-related expectations as a consequence of working at a faster pace. 

Of course, opportunities to accelerate must wait until the learner is secure in their prior learning, and must not pre-empt opportunities to work at greater depth. But, equally, acceleration should not be given inferior status, assumed to have negligible value to the learner, or treated as a last resort.

Success of this kind can and should count, even at the end of a key stage. Some enterprising assessment organisation should be extending choice in the market place by certificating high attainers’ KS3 achievement at the end of KS2, so encouraging secondary schools to take account of it on transition and adjust their teaching accordingly.

This would provide a neat counterpoint to Conservative manifesto pledges to introduce KS2 test retakes during KS3 for those ‘who do not reach the required standards’.


What about inspection?

Ofsted have given cast-iron assurances that they will give schools no credit for adopting any particular assessment approach, including that advocated by the apologists for mastery. The free market is king; the cheerleaders may be ignored.

The inspection handbook sets out a balanced position, attaching equal value to exceeding age-related expectations and to studying age-related material in greater depth:

‘In scrutinising pupils’ work, inspectors will consider how well:

  • pupils are making good progress towards meeting or exceeding the expected attainment for their age, as set out in the school’s own curriculum and assessment policies
  • pupils are set challenging goals, given their starting points, and are making good progress towards meeting or exceeding these
  • pupils are gaining and consolidating knowledge, understanding and skills
  • pupils, including the most able, do work that deepens their knowledge, understanding and skills, rather than simply undertaking more work of the same difficulty or going on to study different content.’

I would have preferred the final section to read ‘…or before going on…’, since that is more consistent with the national curriculum.

The critical point is that exceeding age-related expectations cannot be assumed to be synonymous with ‘working at greater depth within the expected standard’. There should also be in-built flexibility to raise the performance ceiling for high attainers who are ready and capable of working beyond that standard.



October 2015