Quality Insights speakers address the impacts of AI
Date: 26 February 2025
QAA's Quality Insights Conference opened on 26 February with a keynote address from Deakin University's Professor Phillip Dawson, which advanced a practical – and structural – approach to the development of assessment strategies in the age of AI (you can read our full report of Phillip's presentation here).
Phillip's presentation was followed by a session exploring how academic assessment and feedback practices are being reconsidered to address the challenges and opportunities presented by the proliferation of generative artificial intelligence.
Observing that "artificial intelligence poses a fundamental threat to the validity of academic assessments", colleagues from the University of Reading reported on an experiment whereby they had blind-tested assessors with AI-generated submissions.
Their research found that 97 per cent of these submissions went undetected as the products of artificial intelligence. They even tended to secure upper-class grades, "outperforming student submissions across the board", with the exception of final module assessments.
The team from Reading's Generative AI Working Group proceeded to detail the strategies they have developed to address this challenge, underpinned by a category framework which maps appropriate uses of artificial intelligence in assessment.
"There are a host of reasons why people have ethical concerns," said Dr Siân Lindsay. "The important thing is not just to ignore it or hope it goes away,"
A team from Loughborough University then explained how they have been using artificial intelligence to assist teaching staff in generating feedback on students' formative assessments – "without replacing academic judgment" – to "enhance engagement and support formative learning", complementing rather than superseding the essential role of lecturers in these processes.
Their research has shown that students have been generally happy with the quality and timeliness of AI-generated feedback, while noting that they wouldn't want or expect this approach to become a substitute for their interactions with real-life teaching staff.
The final panel of the opening day of Quality Insights 2025 considered how assessment design might underpin academic integrity in the era of artificial intelligence.
Chaired by QAA's Dr Nick Watmough, the panel featured Dr Ruth Stoker (Director of Strategic Teaching & Learning at the University of Huddersfield), Dr Justin Tonra (Academic Integrity Officer at the University of Galway), Dr Mike Perkins (Head of the Centre for Research & Innovation at British University Vietnam) and Dr Annie Hughes (Head of Learning & Teaching Enhancement at Kingston University) – who discussed AI earlier this month in the QAA Blog.
Mike questioned the idea that ed tech companies' AI-detection tools had ever represented a "saviour from the heavens", arguing that "these services aren't accurate enough" to determine decisions on students' futures. Rather than seeking to outlaw these technologies entirely, he suggested, we need to move towards defining their appropriate uses.
He proposed that we need frameworks to help us determine permissible levels of AI use in relation to different modules and assessments – and that unrestricted use wouldn't always be appropriate.
"I don't want to go to the doctor's in 20 years' time and find that the doctor who's treating me got their degree through ChatGPT," he said.
Ruth explained that her own research in this field has also revealed a range of student uses of AI – and a range of student perspectives on its ethical use.
She observed that some students had expressed the view that they weren't studying for three years so that Gen AI could earn their degree certificate for them – and that some even felt that the use of generative AI had diminished their learning: "After using ChatGPT so many times [one said], it's difficult for me to write 20 lines in a row."
She also noted that AI can homogenise expression – "the voice of the student might be lost in translation" – and that relying on AI for language learning might prove self-defeating, stifling rather than supporting acquisition.
"We need to consider what we want students to learn and how our assessments test those learning outcomes," she said.
Annie agreed that these are key questions for educators to consider.
"If curating the written word isn't a higher skill, what kind of higher skills are we trying to engender in our students?" she asked.
Annie described the opportunities-based framework she and her colleagues had developed in response to the growing use of generative artificial intelligence, and echoed Phillip Dawson's earlier comments on the need to assure the validity of assessments when reviewing their design.
She also raised concerns about the use of AI in, for example, institutional planning decisions or in helping admissions teams make decisions about student applications. She emphasised that AI tools are "not created by neutral actors" and stressed that both staff and students must be properly trained in their possible applications.
Justin – whose disciplinary background is in literature – spoke of developing opportunities in his own teaching practice for students to move beyond traditional forms of written assessment: for example, by promoting group work and even having his students talk with chatbots about poetry. His students then critiqued these dialogues in preparation for their final assessment, which took the form of an in-person human conversation.
"Being able to ask the correct questions is evidence that a student has met the learning outcomes," he said.
He concluded that we should focus on deterring the misuse of these technologies, on strengthening assessment security and on reinforcing discussions with our students about the nature and value of academic integrity.
The conference's first day also included presentations focused on the promotion of cultures of compassion, belonging, inclusivity, resilience and support in higher education from teams based at De Montfort University, the University of Greenwich, the University of Leeds, the University of Liverpool, Nottingham Trent University, Ulster University and University College London.
Meanwhile, members of the quality teams at The Open University and Queen's University Belfast introduced their technology-enhanced and risk-based approaches to annual monitoring and external examiner processes – and colleagues from the University of Greenwich, the University of Lincoln, Manchester Metropolitan University and Royal Holloway considered the opportunities and challenges of shared modules.