Every month Cafe Scientifique hosts a free talk at The Drill on an area of science, followed by a Q&A and general discussion. Michael Rowe, Associate Professor in Digital Innovation at the School of Health and Social Care at the University of Lincoln, presented this month’s talk on Artificial Intelligence and its potential applications and complications in higher education. Here I’ve summarised Professor Rowe’s main points, along with my own thoughts, in a post that I hope you enjoy as much as I enjoyed the talk.

Professor Rowe wasted no time in jumping into the society-changing potential of deep learning AI such as OpenAI’s GPT-3, speculating that areas we tend to consider off-limits to this kind of technology, such as creativity and empathy, may actually be achievable by these new programs. I was quickly sold on the idea of AI creativity, if only because I have seen the discussion (if any Twitter interaction could possibly be described as such) flaring up online about whether using this technology to create art is tantamount to stealing. This line of argument may seem strange to those unfamiliar with the process by which this artificial knowledge is produced, so now is probably a good time to explain.

AI programs are trained on information relevant to their purpose, known as training data. For example, a language model such as GPT-3 is fed huge bodies of text, such as books or Wikipedia articles, and uses algorithms far too complex for my understanding to recognise patterns in that text, until it can draw on those patterns to respond to prompts. ChatGPT is a public chatbot built on top of the GPT language model that you can try for yourself here: https://openai.com/blog/chatgpt
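To make that a little more concrete, here is a minimal sketch of how a developer might send a prompt to a GPT-style model through OpenAI’s Python library. The model name, prompts and exact call details are illustrative assumptions rather than anything Professor Rowe demonstrated, and the API may have changed since this was written.

```python
# Illustrative sketch only: prompting a GPT-style model via OpenAI's Python library.
# The model name and prompts are assumptions, not part of the talk.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a helpful tutor."},
        {"role": "user", "content": "Explain photosynthesis in two sentences."},
    ],
)

# The model's reply is just text generated from patterns in its training data.
print(response.choices[0].message.content)
```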

Since these models use pre-existing media to generate content, one can see the argument for AI-generated art constituting theft; however, such a topic really warrants an article of its own due to its complex nature. Something that can be said, as Professor Rowe pointed out, is that the process AI models undertake is not too dissimilar to the one used by humans: our creativity is born out of our observations of the world and of other art, after which we aggregate this information, pull it through our internal filters and produce a piece of art at the other end. The morality of the process aside, it seems that AI is capable of what could be generally understood as creativity. The neural networks used are, after all, modelled on the biological neural networks found in the brains of humans and other animals, and in one experiment in which people were asked to compare AI-produced music with music created by humans, the general consensus was that the AI music was more “artistic”.

The case for AI mastering empathy is a little harder to make, as it is harder to quantify. It is very likely that an AI model could emulate empathy by sifting through data covering how to recognise somebody’s mental state and how to respond to it, probably even better than humans can. The problem here is that empathy is in some ways more of a moral concept; if a person acts empathetically to somebody else, but internally feels no emotional connection, are they actually displaying empathy? Is it possible for an AI to actually feel anything? Do androids dream of electric sheep?

Regardless, should AI continue in the direction it is headed, it will eventually be capable of creativity and empathy that, to an outside observer, would be indistinguishable from “the real thing”. Professor Rowe warns that if we point to places where AI cannot go, we will eventually find that space being taken too; ChatGPT has even passed the US Medical Licensing Exam. He explains that GPT-3 is 90% cheaper than previous versions of the software, with APIs already released to allow its use with other software and hardware, suggesting that it might not be too long until ChatGPT and similar programs work their way into every aspect of our lives.

The power of AI, in terms of what it is potentially capable of and how widespread it is likely to become, has caused concern for some. The training data used are all produced by humans, being the only such data available, and since we are not perfect, neither is GPT. In an age where misinformation is constantly highlighted, guardrails have been engineered into the user-facing ChatGPT in an attempt to curtail the negative consequences of some of GPT’s inaccuracies; after all, AI software of this kind does not care about the truth, it simply attempts to provide an answer to a given prompt. These guardrails are not perfect either: there have been claims of political bias as well as successful attempts to bypass them. However, the OpenAI team are constantly updating these protocols, quickly patching some of the most well-known workarounds, and are clearly cognisant of their own potential biases, as demonstrated by the work they have put in to reduce and remove those biases when called out.

So, how does this factor into higher education? The most obvious cause for concern is ChatGPT’s ability to write essays and other projects for students. Professor Rowe had already demonstrated just how capable the AI model is, and how quickly it is developing; if it is even still possible to detect whether or not a piece of work was written by AI, this will not be the case for long. Could extra guardrails be put in place to stop students from exploiting this new technology? Could the same programs be used to detect which work is AI-generated, leading to an AI arms race between students and faculty? This would certainly be difficult, as any regulation would move forward at a snail’s pace compared to the speed at which AI is developing.

Professor Rowe proposes a more nuanced solution: that students and academics should not be pitted against each other, but should instead use this new technology collaboratively. After all, there is no outrage or concern about university staff using ChatGPT to create lesson plans, as they are trusted to use the software responsibly. What if AI were used to enhance the university experience?

Two main factors were highlighted in this talk as possible barriers to students achieving their potential academically. The first of these was the allocation of resources and the limits thereof; it would be completely unfeasible for each student to have their own personal tutor. Not with AI, though. Students can chat to programs like ChatGPT and get quick, accurate and tailored information, a service that historically only the teacher could provide in a one-on-one setting. Professor Rowe even hypothesised that one day this would replace the university model as it stands today, especially given the ability of AI to mimic empathy.

The second main barrier for students was their life circumstances. Studies show that only around 3% of students cheat in their work, and that this is mainly due to socioeconomic factors. The time and resources that could be spent preventing students from using this software might be better spent helping students who are struggling, and finding new and interesting ways to incorporate AI into the education system. Professor Rowe summarised with a message that applies not only to higher education but to all areas of life that might be affected by these coming developments: our goal cannot be to beat AI, but should instead be to use it to become more human.

Professor Michael Rowe can be found on Twitter @michael_rowe. The next talk, ‘Clinical research in the back of an ambulance’, will take place on the 18th of April at The Drill.