New modes of assessment are needed to evaluate the authenticity of student learning in an artificial intelligence (AI) world. In mid-2023, we piloted a new assessment type: a collaborative group multimedia assessment with AI allowance. The aim of the research study was to explore the experiences of students using AI in a multimedia assessment. We further aimed to determine whether these use cases changed student perceptions of the ways AI can be used in learning and assessment.
Methods
Students enrolled in a capstone, interdisciplinary Pharmacology unit (n = 40) were included in an exploratory, qualitative case study. Thematic analysis, using an AI role-based conceptual framework, was applied to logbooks documenting the assessment process to explore student perceptions of AI use prior to and during their projects.
Results
AI was initially perceived by students as having a personal tutor-style role, which aligned with the taxonomy roles of Arbiter (49%), Oracle (41%) and Quant (10%). In contrast to their earlier perceptions, AI was used only in a limited manner in the early stages of the assessment, for idea generation in the role of Oracle (86%) or for data analysis as a Quant (14%) (n = 14 cases across 5 groups). No student group used AI to generate written text for the final assessment.
Discussion
The tension between perceived and actual use of AI reflects the uncertainty students face when AI is permitted within assessments. Clear guidance for educators and students about how to assess the AI-supported learning process is needed to ensure the integrity of the assessment system.