6 February 2025

How AI may make us rethink the purpose of HE

 




Dr Annie Hughes is Head of the Learning and Teaching Enhancement Centre at Kingston University London. Dr Tim Linsey is Head of Technology Enhanced Learning at Kingston University London. Annie and Tim lead a QAA-funded Collaborative Enhancement Project exploring the opportunities AI offers to higher education and the graduates of the future.

 

Two previous blogs from our Collaborative Enhancement Project have begun to interrogate the challenges that generative AI poses to how we define our purpose and measure quality in higher education.


These are big questions that I'd like to start to explore a little further here.

Universities began as hubs of intellectual thought and knowledge-sharing, latterly becoming places of innovation and scientific discovery as teaching became integrated with research. Whilst there is no doubt that technological innovations from the printing press to the internet have transformed access to knowledge, universities are still viewed as the pinnacle of intellectual endeavour. Academic writing has been privileged over other forms of writing and viewed as more legitimate than other modes of expression, including oral traditions.

However, the ability to write academically has for centuries been limited to a small group of people – academics themselves and others with access to that educational privilege and cultural capital (very often those whose parents and grandparents also enjoyed the benefits of university educations). 

With developments in generative AI, the ‘skill’ of academic writing becomes ostensibly obsolete as a measure of intellectual excellence. The intellectual argument remains sacrosanct, but the ability to express it in acceptable and legitimate ways becomes more easily accessible. 

Generative AI can, then, facilitate a level playing field which, immediately and pragmatically, offers to democratise the academy. That, at least, is the conceivable potential.

But at what cost? So far, what generative AI has, for the most part, brought to the party is an opportunity to express perspectives and ideas in ways which primarily reflect the norms and expectations of academic writing and formal discourse, and it continues to privilege the textual.

But, if generative AI leads to the diminished value of the art of academic writing, it may open the door to restoring our understanding of the importance of dialogue, of dialogic and dynamic processes of teaching, learning and assessment which empower the diversity and distinctiveness of individual voices and the interrelation of those voices.

 

Implications for assessment practices

When we're marking students' work – conventionally at least – we're not just marking their knowledge and understanding of a subject, but also their ability to communicate it in an appropriate way – traditionally through an essay or report.

But, if machine ‘intelligence’ can write an essay in an academic style, or a report in a corporate voice, that is identikit in terms of expression and tone, then why should we value, recognise and reward that ability as a higher skill? When we have AI to do it for us, it appears it's no longer something we need to be able to do for ourselves. Similarly, if generative AI can conduct the basic research and generate ideas for us, why would we do it ourselves?

Under such conditions, it shouldn't be the end product, the writing, that gets the credit: it should be the student's ability to use the tool critically – not letting it think for them, but using it to create a text which they can then evaluate, edit and hone.

Now that generative AI can perform these functions for us, we can accelerate the trajectory of learning and assessment to focus on developing future skills: transferable graduate skills such as problem-solving, critical thinking, self-awareness and collaborative work, to name but a few. This renewed focus on what we might see as the core values of learning should surely be seen as a positive thing.

Generative AI will allow us to continue to move away from content-led assessment models towards active and dialogic forms of learning which adopt authentic and multimodal modes of assessment, opening up a world of possibilities for how we do higher education (better) by focusing on the genuinely higher (human) skills that cannot be so simply replicated by machines. 

Generative AI shouldn't be there to think for students. It should free them up to think for themselves.

Universities' policies and practices need to take these opportunities into account, just as teaching staff need to be supported to develop curricula, learning and teaching practices and assessment design fit for an AI-assisted world, and to support their students in using the technology.

 

Big questions

So, in its simplest analysis, we can argue that generative AI is a leveller, supporting students to translate their critical subject knowledge and understanding into a mode of discourse recognised as valid by the academic establishment – regardless of their cultural, social, national and linguistic backgrounds.

But doesn't that eventually raise the question of why we should need to do that in the first place?

Generative AI can certainly empower. It can support students to produce work in styles that were not previously easily available to them.

But by embracing this writing tool are we continuing to accelerate the tendency for a digitally determined realm of discursive homogeneity? Might we be making all those diverse voices start to sound the same – and even to say the same things?

Generative AI can give access to the skills to allow multiple voices to be heard, but it doesn't address the fundamental problem that academia and society still expect all those voices to sound a particular way. 

If we value a diversity of perspectives and forms of expression (and appreciate that those perspectives are intrinsically linked with their modes of discourse), then we may recognise the risk that AI (by valorising, prioritising and perpetuating a dominant mode of academic discourse) is reinforcing traditional knowledge hierarchies. As such, perhaps all it is doing is broadening access to a traditional set of tools, so that students' real and authentic voices can assume or be transformed into – or be silenced and subsumed by – the authority of academia's homogeneous and hegemonic voice.

Does this great leap forwards then represent a process of democratisation or of assimilation?

We've been learning for decades the benefits of recognising and promoting the inherent and enriching value of the diversity of student experience in our classrooms – and we've therefore done our best to ensure that everything from academic regulations to assignment briefs is written in an accessible language and style. In the interests of inclusivity, we've purged unnecessarily academic writing from our own student-facing communications – yet we still expect our students themselves to write that way, or to use machines to help them do so.

There has always been the counterargument that it's our job to give each individual student the tools they need to succeed. We can indulge in as much blue-sky thinking as we like. But the bottom line is that our students expect us to help them develop the skills they'll need in order to do well ‘out there’ – not in an egalitarian utopia but in the actual society in which we live.

We can perhaps console ourselves by thinking that our students will, in years to come, use the skills that higher education has given them to make our unjust world a better place. But is that not just kicking the proverbial can down the road?

Should we instead be focused on the promotion of diversity and its inherent value to learning? Universities claim to be engines of social mobility and social justice, but in many ways they merely replicate society's hierarchies of privilege and disadvantage.

 

Where do we go from here?

Artificial intelligence helps people do what needs doing – and this should be embraced. We also know that it replicates biases. It can provide individuals with extraordinary opportunities, but it can also perpetuate the injustices which pervade society, while having as-yet-unknown impacts upon the global environment.

Yet there is evidently an opportunity here to use the time that generative AI frees up to critically interrogate its outputs, in order to move towards a deeper understanding of our academic disciplines – their inherent biases and what forms of knowledge are currently silenced. 

In the end, then, we might usefully acknowledge that generative AI has the potential to do both great good and great harm. 

What is clear is that it gives us significant opportunities to revitalise how higher education is delivered and assessed. What higher education can most helpfully do is embrace generative AI appropriately, supporting students to use this technology effectively whilst developing rigorously critical perspectives on its outputs. This will support students to recognise and challenge the broader tensions, contradictions and hypocrisies inherent in all our systems – digital, social, political and academic – building their higher skills for the future.

We cannot be certain what kind of future generative AI will bring. That uncertainty may keep some of us awake at night. But it's also that uncertainty which keeps us thinking and learning.