
CELTA Now, DELTA Later? Assessment Literacy in Teacher Training

By Elena Kapshutar

The CELTA is a highly practical entry-level teaching qualification. The face-to-face version lasts four weeks and seeks to pack in everything the future teacher will need, but inevitably some topics receive only a superficial treatment. One of these is assessment literacy: how trainees justify claims about level and needs, and then monitor and evaluate learning. While pointing trainees towards further reading is usually enough on the CELTA, the gap between implicit and explicit knowledge of the subject becomes clear on the DELTA, where assessment principles are explicit and examinable: purposes, validity, reliability, impact, and practicality.

This article suggests a light-touch way to make that hidden assessment thinking more visible on the CELTA through simple routines embedded in input sessions, teaching practice (TP) feedback, and assignments. And if you’re not a CELTA trainer? The same practical suggestions can help any teacher trainer bring a touch more assessment literacy into the lives of their teachers.

What do we mean by assessment literacy?

In early teacher training, assessment literacy is the ability to use basic assessment thinking to make sound teaching decisions. For this article, I define it simply as making cautious claims about learner level or performance, backed by clear evidence and criteria, and using that evidence to guide planning and evaluation. On the CELTA, trainees usually know some proficiency frameworks (CEFR, IELTS, CLB), but often struggle with how strongly to make a claim, what evidence supports it, and how to link that evidence to lesson aims and post-lesson evaluation.

Assessment literacy in the current CELTA syllabus

A useful starting point for trainers is to treat assessment literacy not as extra content but as a thread we can surface within what the CELTA already assesses.

The CELTA syllabus is organised around five topic areas, and the assessment framework includes planning and teaching plus four written assignments. The assessment aims include assessing learner needs, and the course explicitly expects monitoring and evaluation of learning (Cambridge English, 2024).

Here is a CELTA-aligned map of where assessment literacy is already implied:

Topic 1 (Learners and context). When candidates discuss learner profiles and needs, assessment literacy shows up as evidence quality: what data do we have – observation, learner output, diagnostic tasks, institutional placement results – and what inferences are we making from that data?

Topic 2 (Language analysis and awareness). When candidates anticipate problems and select language points, assessment literacy shows up as diagnosis: what errors are systematic, what are slips, and what would demonstrate improvement?

Topic 3 (Skills). Trainees can run a skills lesson smoothly – lead-in, pre-task, task, post-task, feedback – but still be unsure what would show that learners actually improved. The key here is linking the aim to something observable: does your final task give you evidence of the specific listening/reading subskill you set out to develop?

Topic 4 (Planning and resources). This is where assessment literacy becomes concrete: defining outcomes and deciding what observable behaviour would indicate progress.

Topic 5 (Teaching skills and professionalism). The CELTA explicitly expects effective monitoring and evaluation. The practical move is to monitor with a clear focus – e.g., one language point or one interaction pattern – so that what you notice feeds directly into feedback and next steps.

Why DELTA Module 1 can feel so demanding

The CELTA does build assessment-related habits, even if they are not always named as such. The problem shows up later on the DELTA, especially Module 1, where candidates have to talk about assessment explicitly and accurately. Things that were handled through routine classroom decisions on the CELTA – needs analysis, monitoring, evaluating learning – become examinable concepts that must be defined, compared, and applied.

Module 1 is clear about this focus: it expects candidates to understand the role and methods of assessment, including purposes of assessment (diagnostic, formative, summative), and core principles such as validity, reliability, impact, and practicality, as well as to evaluate common assessment techniques (Cambridge English, 2022).

So when teachers start M1 preparation, they often face three simultaneous challenges:

  • Volume and breadth: there is a great deal to cover, and assessment is only one of several large areas.
  • Lack of a clear roadmap: many candidates don’t know where to begin.
  • A gap in habits: they may be excellent teachers but not used to framing classroom decisions in assessment terms.

This is why assessment literacy at the CELTA level can pay off later: it builds habits of evidence and criteria that transfer directly into M1 thinking.

Bringing assessment literacy into CELTA without turning it into an assessment course

The CELTA already trains candidates to notice, respond to, and justify decisions. The only change I am arguing for is this: we make the evidence part more explicit. I suggest we do that not with extra theory, but with small routines built into what we already do: input, planning support, TP feedback, and the written assignments.

Below are three low-effort ways I use, and have seen other tutors use, to do this:

  1. Replace level statements with evidence statements.

One recurring CELTA habit is jumping to labels: “They’re B2”, “They need listening”, “They can’t use past tenses”. Such labels are not wrong at the CELTA stage, but they often go unsupported. So I ask candidates to write one sentence of evidence before they write one sentence of interpretation.

Trainer prompt:

  • What did you actually hear/see in learner output that led you to that conclusion?
  • What else would you need to be confident?

Quick activity (8-10 minutes): give trainees five common level/needs statements from plans or assignments (anonymised). For each one, they add a line saying what they actually saw/heard that supports it, and then rewrite the statement in a more cautious, evidence-based way. The point is to get them used to separating what they observed from what they’re concluding.

Why it’s useful: this one change raises the quality of CELTA written work such as learner profiles, lesson rationales, and post-lesson evaluations, and helps with the transition to DELTA Module 1. It also looks good as part of the lesson planning pack that many teachers have to produce before and/or after a formal observation at their school.

  2. Finish skills lessons with clear proof.

Skills lessons can be well-paced and well-staged, but the last stage sometimes ends up as a generic follow-up that doesn’t really show whether the target subskill has improved. A useful habit is to plan the final stage as a proof point: one small piece of learner output that lets the teacher check the aim.

Trainer prompt:

  • What exactly will learners do at the end that shows progress with the target subskill?
  • What will you collect or listen for (one concrete thing)?
  • Does your last stage produce that, or is it just one more activity?

Quick activity (8-10 minutes): give trainees 3-4 common CELTA skills aims and ask them to write a matching proof point task for each (one sentence per task). Then compare: does the task actually show the subskill?

Example aims and proof points:

  • Reading: identify stance → learners underline two phrases that signal stance and justify choice.
  • Listening: listen for detail → learners correct a short inaccurate summary using details from the audio.

Why it’s useful: it makes skills lesson aims more concrete, improves lesson evaluation (was the main aim achieved fully?), and gives clearer material for feedback.

  3. Monitor with one focus.

Trainees often monitor to help learners complete the task by offering timely scaffolding, but monitoring is also the stage when the teacher gathers information for feedback and next steps. The simple shift is: one monitoring focus per key stage, and a plan for what to do with what is noticed.

Trainer prompt:

  • During this stage, what are you listening/looking for exactly? (one thing)
  • What will you do with it afterwards? (one decision)
Quick activity (6-8 minutes): hand out a short lesson plan (or use a trainee’s plan). Pick two stages (e.g., the main task and the follow-up). For each stage, trainees write one line under the stage title:

  • Monitor for: (one specific thing to listen/look for)
  • Use it for: (what they’ll do with what they notice, e.g., delayed feedback, regrouping, quick extra practice)

Why it’s useful: it makes monitoring purposeful and stops feedback being vague (“monitor more closely”). It also supports cleaner choices in the lesson: what to correct, what to recycle, and what to practise next.

A one-page checklist to use with your CELTA trainees (or your own teachers!)

Use this when writing learner profiles, justifying aims, planning the end of a lesson, or evaluating learning.

  • Claim: What am I saying about learners (level / need / problem / progress)?
  • Evidence: What did I see / hear / collect that supports this?
  • Confidence: How sure can I be based on that evidence? What’s missing?
  • Success: What would “better” look like (one clear indicator)?
  • Proof point: Which stage will produce evidence of improvement?
  • Monitoring: What will I listen/look for, and what will I do with it afterwards?

Conclusion

The CELTA should not turn into a course on language testing. However, it already requires trainees to make assessment-related decisions: identifying needs and level, monitoring, and evaluating learning. A realistic improvement is to make the thinking behind those decisions more explicit through a few small routines and shared questions about claims, evidence, and criteria.

This matters for teachers who go on to the DELTA. But it also helps teachers working in schools where their responsibilities might include placing students in the right lesson through assessments of writing or oral interviews. Teachers planning their formally observed lessons each semester would also appreciate a better understanding of this topic, as writing lesson aims is something that never seems to come easily – either on the CELTA or long afterwards.

References

Cambridge English (2024) CELTA: Syllabus and assessment guidelines. Cambridge.
Cambridge English (2022) DELTA: Syllabus specifications. Cambridge.

Biography


Dr Elena Kapshutar is a teacher educator and assessment specialist based in Ontario, Canada. She is a Cambridge CELTA and Delta M1/M3 tutor, and works as a speaking and writing assessor for Cambridge and Oxford. She develops ELT, DaF and FLE materials for major publishers and supports teachers with practical, evidence-based approaches to assessment and classroom decision-making.
