Deploying language model-based assessment support technology in a computer science degree: How do the academics feel about it?

Yee-King, Matthew and Fiorucci, Andrea. 2025. 'Deploying language model-based assessment support technology in a computer science degree: How do the academics feel about it?'. In: IEEE Global Engineering Education Conference (IEEE EDUCON), Queen Mary University of London, United Kingdom, 22-25 April 2025. [Conference or Workshop Item]

We present two contrasting case studies in which we used large language model (LLM) technology to support critical elements of our work on a large-scale online undergraduate computer science degree. First, we used semantic embeddings to identify student-student collusion in exam answers. Second, we used LLMs to generate starter drafts for exam question papers. We gathered academic staff responses to the two systems through structured interviews, and we describe and apply a novel, LLM-powered inductive thematic analysis methodology to tag and identify themes in those interviews. All analysis was carried out on locally hosted language models.
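As a concrete illustration of the first system, the sketch below shows one common way to screen for collusion with semantic embeddings: flag pairs of exam answers whose embeddings sit unusually close together. This is a minimal sketch under stated assumptions, not the paper's implementation; the sentence-transformers model, the flag_similar_answers helper, and the 0.9 threshold are all illustrative, as the abstract does not specify the models or criteria used.

```python
# Hypothetical embedding-based collusion screen. Model choice and
# threshold are illustrative assumptions, not the paper's configuration.
from itertools import combinations

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small model that can run locally


def flag_similar_answers(answers: dict[str, str], threshold: float = 0.9):
    """Return (student, student, score) triples whose answers embed suspiciously close."""
    ids = list(answers)
    # Normalised embeddings make the dot product equal to cosine similarity.
    vectors = model.encode([answers[i] for i in ids], normalize_embeddings=True)
    sims = util.cos_sim(vectors, vectors)  # pairwise cosine similarity matrix
    return [
        (ids[a], ids[b], float(sims[a][b]))
        for a, b in combinations(range(len(ids)), 2)
        if sims[a][b] >= threshold
    ]


suspects = flag_similar_answers({
    "student_a": "A hash table maps keys to values using a hash function...",
    "student_b": "A hash table maps keys onto values via a hash function...",
    "student_c": "Binary search repeatedly halves the sorted search interval...",
})
print(suspects)  # flagged pairs go to a human reviewer, not an automatic verdict
```

Consistent with the human-oversight theme reported below, a screen like this would only surface candidate pairs for academic review rather than issue verdicts.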

We identified 26 themes, some shared across the two systems and others unique to one. The academics were largely comfortable with the use of LLM technology in assessment: the exam generator helped to kick-start exam writing, and the collusion detection tool found otherwise invisible cases. Academics emphasised the need for human oversight of such systems, but were prepared to use them because they perceived that they improved the efficiency of exam processes.


IEEE_educon_25_LLMs_in_BScCS.pdf (Accepted Version)
