Over the past two months, I’ve been writing the assessment policy for my new school. It was inspired very much by Making Good Progress, but also contains ideas from several policies that teachers kindly shared with me on Twitter.
I have a two-hour INSET slot on assessment at the start of next term. One of my ideas is to share this policy with our teachers, ask them for their views and work on improving it as a team.
Before I do that, it would be great if anyone would help me by making any suggestions for improvement.
It will cover all subjects (as we currently only have one or two teachers in each), so I wonder if it lacks flexibility, as it’s written very much from my maths teaching perspective.
Here you go:
GES Assessment Policy
Formative assessment refers to practices used by teachers which assess their pupils’ progress in order that the teacher and pupil can plan their future learning more effectively.
Teachers should use assessment:
- to determine pupils’ prior learning
- to check on pupils’ understanding
- as a form of retrieval practice to improve memory
- to correct factual and literacy errors / poor effort
- to modify teaching and decide whether or not to move on
In order to achieve these aims, they may use the following forms of assessment.
Individual Verbal Feedback
Verbal feedback should be specific and positive (‘do this’, rather than ‘don’t do that’) and should ensure that the responsibility to improve the work remains with the pupil. Teachers should spend a brief amount of time giving feedback to a pupil; if feedback takes longer, then further instruction is needed and the teacher should assess whether a reteach of the topic/concept is required.
Questioning
Questioning has two main purposes:
- sharing knowledge and understanding around the class;
- checking on the knowledge and understanding of the class.
In the first case, teachers may accept ‘hands up’, but the majority of the time teachers should use named questions. This enables teachers to assess whether pupils have understood the topic they are teaching, rather than just hearing from the highest-attaining pupils.
It is wise to ask a question, pause for the whole class to think, then target the question towards an individual, to encourage all pupils to engage in thinking.
Critiquing Good Work
Teachers photograph a good piece of work and project it on the board. The class discusses the strengths of the work and how it can be improved.
Mini-Whiteboards
Teachers ask questions and pupils write their responses on mini-whiteboards, which they hold up for the teacher to see.
Pupils should generally wait and show their answers at the same time, so that they are not encouraged to rush.
Complex tasks should be broken down into smaller steps for this activity.
Multiple Choice Questions
A teacher presents a multiple choice question and pupils can hold up a number of fingers to indicate their answer.
The wrong answers should ideally contain common misconceptions.
Higher-attaining pupils can be encouraged to think through what misconceptions might lead to the other answers given.
Self and Peer Assessment
If trying to address misconceptions, self-assessment is often more effective, as pupils learn from their own mistakes more readily than from the mistakes of others.
Peer assessment can have the advantage of pupils being exposed to and learning from the ideas of others, but this may be more effectively managed through the strategy of ‘critiquing good work’.
Low-Stakes Tests
Tests are an effective way to encourage pupils to retrieve information and test their understanding. In studies, pupils learn most from such tests if they are low-stakes (self-marked, with no negative consequences for poor performance) or even no-stakes (the teacher doesn’t find out the score).
Such tests should not be assigned a percentage or grade. The focus should be on what the test tells the pupil and teacher about the next steps required for the pupils to improve.
Short, specific tasks usually provide better formative information than complex tasks, because they help to highlight the exact misconceptions of a pupil.
Homework
Teachers are encouraged to look through all homework, make brief notes and give ‘whole class feedback’ the following lesson.
Teachers are encouraged to make notes about common errors and add them to the schemes of learning in order to address these potential pitfalls when the topic is delivered in the future.
If teachers wish to give individual written feedback on homework, it should not come in the form of a grade, but should comment on what is specifically good about the work and give one or two suggestions for improvement.
These suggestions may come in the form of follow-up tasks (possibly one of the 5Rs: see appendix 1).
Ideally, the teacher should check that these follow-up tasks have been completed, but we do not want to encourage an endless cycle which burdens teachers with unmanageable workload. The follow-up tasks should be more work for the pupil than for the teacher.
Teachers should encourage pupils to look back on previous feedback before completing future tasks.
Summative Assessment
The main purpose of summative assessment is to give all stakeholders (pupils, teachers and parents) an idea of how pupils are performing and progressing over time.
In order for this to happen, the results of such assessment need to be reliable and valid, and to communicate shared meaning.
A twenty-minute test will not give a reliable picture of a pupil’s knowledge and understanding of an entire subject. Homework is not a reliable indicator of performance, as the level of time, effort and assistance sought can vary significantly.
In order to be as reliable as possible, summative assessments should be long, and ideally set over several different days to allow for pupils having a ‘bad day’.
A test on the conditional tense in Spanish will probably not be a valid indicator of how well a pupil will perform in GCSE Spanish. Similarly, the quality of a long-term project will not be a valid indicator for a subject which is assessed by examination.
Tests will be valid if they sample from a large and wide-ranging proportion of the expected knowledge and understanding for a pupil of that age.
A raw score (e.g. 21 / 30) or percentage (e.g. 53%) does not communicate shared meaning because there is no common basis of understanding. It is not clear to pupils or parents, and to some extent even teachers, whether 21/30 or 53% is a ‘good’ score, nor what ‘good’ even means in this context. In order to communicate shared meaning, summative assessment results should be scaled appropriately.
How do we apply these principles at GES?
Each year group takes part in an extended period of exams towards the end of the academic year.
In years 7 and 8, these tests are sat in classrooms within the normal school timetable.
In year 9 and above, these tests are sat in an exam hall over the course of one week. Revision periods are allocated between exams.
Where a department demonstrates that examination is not the most reliable predictor of GCSE success, flexibility will be given as to the method of assessment used.
Each subject is tested at least twice, with the length of exam being related to the number of lessons taught in each subject and the age of the pupils involved.
Pupils in years 7 and 8 can expect at least 2 hours of tests in Maths, English and Science. Pupils in year 9 and above can expect to take at least 3 hours of tests in these subjects. The length of these tests helps make them a reliable indicator of a pupil’s performance.
These tests will aim to cover as much of the material taught up to that point as possible, in order to make them as valid an assessment as possible.
Results for each subject will be provided as a standardised score, such that the average score for the year group in each subject is 100 and the standard deviation is 20. This helps us to compare pupils’ performance between subjects and from year to year. See appendix 2 for more detail on this process.
In years 9 and 10, there will also be an indication of what a score of 70, 100 and 130 might mean in terms of a ‘working towards’ GCSE grade. These will be produced using CEM data, alongside assessments from national comparative judgement assessments in English and Maths. This will help to communicate shared meaning to parents, without giving the false impression that we can accurately predict grades at this stage.
In year 11, mock exams will be sat in February and the grades will be reported to parents, alongside that term’s progress report from teachers.
As a school, we only use summative assessment once per year because, in order to be reliable and valid, the tests must take up a significant amount of potential teaching time. We also feel that formative assessment is more important for pupils’ learning; summative tests are not easy to use formatively as they include complex tasks which require a variety of knowledge and skills, making it less clear to the teacher which of these are lacking.
As a result, teachers are discouraged from using summative assessment at other times of the year.
Reporting to Parents
Assessment of Effort
We use the following effort descriptors:
- Listens carefully during whole-class discourse.
- Works hard during individual tasks in class.
- Collaborates well with peers.
- Completes homework carefully and on time.
- Asks questions to clarify or probe as appropriate.
The score for each criterion is on the following scale:
- Almost always
Autumn Term
Pupils self-assess their effort before teachers assess it.
Teachers meet with parents and pupils. They discuss the effort assessments and agree upon one or two targets for the pupil to work on.
Pupils create a Google document with their targets and share it with their tutor, who makes sure they know what they need to do in order to meet them.
Spring Term
Teachers write brief comments on how each pupil is working towards the targets they set in the autumn term. These comments should be aimed at the pupils and hence written in the second person.
They are sent to parents, pupils and tutors, who discuss them with the pupils.
Summer Term
Pupils self-assess and teachers separately assess pupils’ effort.
Teachers meet with pupils and parents to discuss the effort assessments and progress towards their targets. During this meeting, targets are revised if appropriate.
Pupils update their target sheet and discuss this with their tutors, particularly focussing on targets that have remained from the autumn term.
Teachers mark the end of year assessments and work with the head of assessment to convert the scores into scaled scores, which are then reported to parents.
Appendix 2: Standardised Scores
Let’s say Jamie scores 75% on an English test and 60% on a science test.
It appears at first that he’s doing better in English, but this does not take account of the difficulty of the test.
It could be that the class average in English was 80% and the average in science was 50%. Then Jamie is actually below average for English and above average for science. Pupils intuitively know this, which is why they want to ask their peers how they did after results of a test are delivered.
There is also a more subtle issue: the results of some tests may be more spread out than those of others.
To account for differing averages and spread, we can standardise the scores in the following way:
The standardised score in every test will have an average of 100 and a ‘spread’ (standard deviation) of 20. In the example of Jamie’s test results above, his English score may (it would depend on the spread of results) be standardised to 93 and his science score to 120.
This will allow his tutor and parents to compare these results fairly: he can’t use the classic excuse “but everyone did badly in English”.
Next year, he will receive another science score on the same standardised measure. Let’s say this is 115. In this case, we should be careful not to assume that he has done worse this year than last, or made less than average progress in science. If, however, his score is 90 in science in the second year, this significant drop is probably worth investigating.
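For anyone curious about the mechanics, the standardisation described above can be sketched in a few lines of Python. This is only an illustration, not our actual reporting tool, and the cohort scores below are invented for the example:

```python
from statistics import mean, pstdev

def standardise(score, cohort_scores, target_mean=100, target_sd=20):
    """Rescale a raw score so that the cohort average maps to 100
    and one standard deviation of the cohort maps to 20 points."""
    z = (score - mean(cohort_scores)) / pstdev(cohort_scores)
    return round(target_mean + target_sd * z)

# Invented cohort with average 50 and standard deviation 10:
cohort = [40, 60, 40, 60]
print(standardise(60, cohort))  # a pupil one SD above average scores 120
print(standardise(40, cohort))  # a pupil one SD below average scores 80
```

In practice the head of assessment would apply this to the whole year group’s results in each subject, which is what allows scores to be compared between subjects and from year to year.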
This system is not perfect:
It does not allow us to compare the performance of departments or teachers, but we don’t believe that we should use test results to do this.
It doesn’t give students an idea of how they’re doing nationally. This issue is tackled in the feedback policy by relating standardised scores to GCSE grades.
Note that we are only talking about summative tests here, in which the aim is to “track pupils’ attainment and progress, to give them, their teachers and parents an idea of how they might perform in future external exams”.
Formative tests, which form the vast majority of testing, should not be analysed in this way and pupils should be discouraged from comparing their performance to each other.