AI assessment · AI feedback · education policy · teacher feedback · academic integrity · writing assessment

What the University of Melbourne Gets Right About AI and Assessment

A practical review of the University of Melbourne's guidance on using AI for assessment and feedback, and what it means for schools, tutors, and families.

Kids Writing · 6 March 2026

The University of Melbourne's guidance on using AI for student assessment and feedback is one of the clearest higher-education statements we have seen on a question many schools are still struggling with:

Where should AI sit in the feedback process, and where should human judgment remain non-negotiable?

Their answer is sensible: AI can assist with feedback and evaluation, but staff remain responsible for the judgment, the feedback, and the final academic decision.

That is exactly the right starting point.

The strongest idea in the policy

The best line in the guidance is not a technical one. It is a professional one:

AI can support assessment, but it should not replace the educator's judgment.

That matters because assessment is not just an administrative task. It is part of the relationship between teacher and student. A mark is not only a number. Good assessment tells a learner:

  • what they did well
  • what they misunderstood
  • what they should improve next
  • how confidently a teacher stands behind that advice

If an educator outsources that whole process to an AI tool, the feedback may become faster, but it also risks becoming thinner, less accountable, and less trusted.

Melbourne's guidance gets that balance right. It does not treat AI as forbidden. It treats AI as assistive.

What the policy gets right about grading

The document says that the sole use of generative AI tools to allocate marks or grades is not appropriate.

That is a strong and necessary line.

Why? Because grading is a high-stakes judgment. Even when an AI system is useful, it can still:

  • misread nuance
  • overstate confidence
  • apply a rubric inconsistently
  • be influenced by prompt design in ways staff do not fully notice

Using AI to suggest observations is one thing. Using it to determine a student's final mark on its own is another.

For schools and universities, that distinction is crucial.

The policy is especially strong on risk

A lot of AI guidance focuses only on cheating. This one is broader and better. It also addresses:

  • University intellectual property
  • copyright-protected material
  • student intellectual property
  • the danger of uploading sensitive content into open third-party tools

That is a major strength.

In practice, one of the biggest mistakes institutions make is treating AI as if it were just another website. It is not. Once teaching materials, rubrics, model responses, or student work are pasted into the wrong system, privacy and ownership problems appear very quickly.

Melbourne's guidance correctly points staff toward secure internal tools and enterprise-approved systems where possible.

Transparency and opt-out are the right default

Another strong point: students should be clearly told which tools are being used, how they are being used, and how they can opt out.

This is good policy for two reasons.

First, it protects trust. Students should not have to guess whether a machine helped shape the feedback they received.

Second, it protects legitimacy. If AI is being used in assessment-related workflows, the institution should be able to explain the process clearly and defend it.

Too many AI rollouts fail because they are framed as efficiency upgrades first and educational decisions second. Transparency reverses that.

Where this guidance is most useful

The document is most persuasive when applied to feedback-first workflows, such as:

  • expanding short teacher comments into fuller draft feedback
  • using rubric-aligned analysis to identify likely strengths and weaknesses
  • reviewing code quality or essay structure before a final human judgment
  • helping educators scale formative feedback without pretending the tool is the assessor

This is the zone where AI can genuinely help.

If AI helps a teacher move from "awkward expression" to a clearer explanation of why a paragraph is weak, that can improve the student experience.

If AI helps surface patterns across a class so the teacher can reteach a concept, that can improve instruction.

If AI shortens feedback turnaround while the teacher still reviews and owns the output, that is a meaningful gain.

What schools and tutors can learn from it

Even though this guidance is written for university staff, the core ideas translate well to K-12 education, tutoring, and at-home learning.

Here is the practical version:

  • Let the student do the thinking first.
  • Use AI to give structured, rubric-based feedback on the student's draft.
  • Keep a human adult responsible for the judgment.
  • Use AI feedback to guide revision, not to replace authorship.
  • Protect privacy by being careful where student work is uploaded.

That model is much healthier than using AI as a ghostwriter.

One limitation: the policy is careful, but not very concrete

If there is one weakness in the document, it is that it stays at the governance level more than the classroom level.

That is understandable for an institutional policy page. But many educators still need clearer examples of:

  • what a good prompt-review workflow looks like
  • how much teacher editing is enough before feedback is sent
  • which tasks are low-risk enough to automate partially
  • how to explain AI-assisted feedback to students in plain language

The guidance is strong on principles, then, but still leaves a lot of implementation work to local teams.

The bigger lesson

The University of Melbourne is right to resist two bad extremes:

  • "AI should never be used in assessment"
  • "AI can now handle assessment for us"

Both positions are too simplistic.

The more useful position is:

AI can help educators give better, faster, more consistent feedback, but the educator must remain accountable for the judgment.

That is the line more institutions should adopt.

Bottom line

This is a thoughtful piece of guidance.

It is cautious without being anti-innovation. It is open to practical gains, but firm about human responsibility, privacy, and academic trust. Most importantly, it recognizes that assessment is not just a workflow to optimise. It is a teaching act.

That is exactly why AI belongs in the support role, not the final-authority role.


This article was researched and written by the Kids Writing team with AI assistance for structure and drafting. All facts, exam criteria, and recommendations are based on published official sources.
