How scoring works

A clear explanation of what we do, what we do not claim, and how to use the feedback well.

1) Rubrics are adapted from published descriptors

Our criteria are adapted from the style of published exam and curriculum descriptors to provide structured feedback. This means we use the same categories of criteria; it does not mean each result will match an official human marking outcome.

2) Score scaling is transparent

Different criteria can have different maximum marks (for example 3, 5, or 6). To display criteria consistently, we scale each criterion score to a 0-10 display scale:

scaled = (raw score / criterion max) * 10

Example: a 2/6 criterion is displayed as about 3.3/10.
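The scaling above can be sketched as a one-line function (a minimal illustration; the function name and rounding to one decimal place are our own choices for display, not part of any official marking scheme):

```python
def scaled_score(raw: float, criterion_max: float) -> float:
    """Scale a raw criterion score onto the 0-10 display scale,
    rounded to one decimal place for display."""
    return round(raw / criterion_max * 10, 1)

# A 2/6 criterion is displayed as 3.3/10:
print(scaled_score(2, 6))  # → 3.3
```

Because the scaling is linear, the displayed value preserves the ratio of the raw score to the criterion maximum, so criteria with different maximum marks remain directly comparable.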

3) Results are for learning support

AI scoring can vary slightly between runs. The intended use is to identify strengths, gaps, and next edits for the current draft. It is a practice feedback tool for parents, tutors, and students, not a replacement for formal, high-stakes assessment decisions.

4) How to get the most value

  • Use one draft at a time and focus on the actionable comments.
  • Revise and resubmit to check whether specific issues have improved.
  • Use rubric scores as a guide, and teacher feedback as the final authority.