
Conference keynote addresses challenges of AI

Date: 26 February 2025
More than 1,000 delegates have registered to attend our Quality Insights Conference 2025, which is taking place on 26-27 February.

QAA's annual spring conference opened today with a keynote address delivered by Deakin University's Professor Phillip Dawson, an internationally renowned expert in assessment, artificial intelligence and academic integrity.

Phillip discussed the importance of assessment for learning and emphasised that all of our understandings of integrity take place and develop in social contexts. 

Indeed, Phillip began by admitting that he himself had "contract-cheated" in his childhood – by getting his mum to help him with a piece of homework. He flagged up an episode of Peppa Pig which addresses this perennial problem. (At this point, he thanked a delegate in the online chat who had declared that, by contrast, Bluey would of course never cheat.)

He stressed however that higher education is having to move fast to catch up with the challenges posed to traditional modes of assessment by the emergence and proliferation of generative AI. 

"Assessment needs to change for a time of AI," he said. "There's a need for assessment to change in a world where AI can do the things that we used to assess."

He emphasised the importance of students' engagement with AI being ethical, active and critical.

"Forming trustworthy judgments about student learning in a time of AI requires multiple, inclusive and contextualised approaches to assessment," he said. "If we don't have trustworthy judgments about what our graduates can do, then society is let down." 

He proposed a set of approaches to underpin the development of assessment strategies relevant and appropriate to the age of AI: "Assessments should emphasise appropriate, authentic engagements with AI; a systemic approach to programme assessment that aligns with disciplines and qualifications; and the process of learning. We need assessment that uncovers what students are doing. Assessments should emphasise opportunities for students to work appropriately with each other and AI – and emphasise security at meaningful points across a programme to inform decisions about progression and completion – finding the key elements and investing in securing those points."

He emphasised that we must prioritise the validity of our assessment practices. "Validity matters more than cheating," he said. "Are we assessing the thing that we think we're assessing? Are we assessing what we need to assess? Validity is the main thing."

He recommended a strategy of what he called "reverse scaffolding" – a rule whereby students can only use AI to help them do things when they have already demonstrated that they can do those things without it. He also commended ongoing engagement with industry developments in order to ensure that we are promoting a process of "authentic assessment that represents our students' futures".

He concluded by arguing that discursive changes offering advice to students (such as traffic light systems) just aren't enough: "You can't address this problem of AI purely through talk. You need action. We need structural changes. We can't pretend that guidance to students will be effective in securing assessment. Don't set restrictions that can't be enforced. It's a fantasy land to think you can do this discursive change to assessment and that will do it." 

He suggested that we need to assess how our students apply and operationalise knowledge: "The times of assessing what people know are gone."

Asked whether we should simply train our students in how to use AI, he answered that when he's on a plane he's happy for the pilot to make use of all the tech – but he also expects the pilot to be able to fly the plane if all that tech fails.

"There's something to be said about having the ability to do something without the technology," he said.