Tuesday, April 28, 2026

Evaluating large language models for AI-assisted grading: a framework and case study in higher education - Yago Saez, Luis Mario Garcia, Asuncion Mochon & Pedro Isasi, Nature

This article presents an empirical evaluation of six state-of-the-art large language models for grading student assignments in a university-level course on data analytics and machine learning. The study compares the models' grades and feedback against those of human instructors, using statistical and semantic measures of agreement. The results show that DeepSeek-R1 aligned most closely with human evaluations in both grading accuracy and feedback quality. Beyond this case study, the article contributes a replicable framework for systematically benchmarking LLMs in higher-education assessment, covering model selection, prompt design, evaluation measures, and cost analysis. Because the framework is model-agnostic, it remains applicable as new models emerge, giving educators and researchers a transferable methodology for evaluating AI-assisted grading.
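The statistical agreement measures the study relies on can be illustrated with a short sketch. The grade values below are invented for illustration, and mean absolute error and Pearson correlation are shown as representative measures; the article's actual datasets and metric choices may differ.

```python
# Hypothetical comparison of LLM-assigned grades with human grades
# using two common statistical agreement measures. All numbers here
# are made up for illustration.
from statistics import mean

human_grades = [8.5, 6.0, 9.0, 7.5, 5.0, 8.0]
llm_grades   = [8.0, 6.5, 9.5, 7.0, 5.5, 8.5]

def mae(a, b):
    """Mean absolute error: average grading gap per assignment."""
    return mean(abs(x - y) for x, y in zip(a, b))

def pearson(a, b):
    """Pearson correlation: do the two graders rank students similarly?"""
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sd_a = sum((x - ma) ** 2 for x in a) ** 0.5
    sd_b = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sd_a * sd_b)

print(f"MAE: {mae(human_grades, llm_grades):.2f}")
print(f"Pearson r: {pearson(human_grades, llm_grades):.3f}")
```

A low MAE indicates the model's absolute scores track the instructor's, while a high correlation indicates it ranks submissions consistently with the instructor even if its scale is shifted.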
