Students, you need to use AI in your assignments

Graphic created using ChatGPT 4o with the prompt “draw a picture for the blog that follows, including diverse students”, and the whole text of the blog.

A new teaching semester has started, and most of my students were surprised by how strongly I encourage them to use AI in their assignments (at least in my modules), which means that there are still some (many) teachers around telling them that the use of AI should be avoided and that it would, or may, amount to cheating.

Both in February and April 2023, at the inaugural Technology Enhanced Learning Community of Practice event and at BILETA 2023 respectively, when newspaper front pages were still presenting large language models as the end of literacy, I insisted on the need to adapt assessment to the rise and rise of AI in general and of large language models in particular. Many of my colleagues were up in arms to the chants of “Institutionalised cheating!” (in different words, though), claiming that the already high proportion of cheaters in Higher Education would become astronomical. I simply replied that the vast majority of my students were not cheaters, and asked whether they would pass a student who submitted work with numbers as fabricated as the ones they were citing, numbers that contradicted all the literature and data available at university level. Some didn’t like my question; none replied, but all got the message.

The central issue is that we are not at the dawn of the AI age; we are in the morning of it, and those who cannot master it will be replaced by AI. So, students need to know how to use it, while understanding that if AI alone can write their assignments, the market will not need them because, at the individual level in certain jobs, AI is cheaper than an employee. The challenge is to produce work that uses AI but goes beyond it, and that is where teachers (and professions’ regulators) come into the picture.

The question is not whether a particular AI tool can pass some country’s examination, but whether the bar examination is a valid method to assess whether someone is ready to be a lawyer, to give a blunt example. And the same applies to almost every module/class/course, or whatever name subjects are given in different institutions.

It has become clear that AI is extremely good at many things and surpasses humans at many others, but it is no match for human intelligence. This somehow seems to be missing in certain discussions about AI and copyright: if you cannot distinguish between human and AI-produced work, you may need to rethink the concept of originality instead of insisting on formalities that reality will render obsolete very soon, like the courts’ repeated mantra of “no human, no copyright”. Accordingly, instead of trying to stop the incoming waves with a bucket, many of us need to get up from the lounge chair, leave the comfort of the beach, and learn to surf.

For my first seminar of Business Law, the task is “Using ChatGPT or similar, answer the questions given at the end of Lecture 1. Prepare to discuss”, and I explained to the students that they would have to deconstruct the answers given by the LLM and justify, support or refute them, with particular attention paid to hallucinations.

Part of the module’s assessment used to be a self-reflective journal, in which students critically reflected on some area of law and on the learning process that took them from where they stood in relation to it before the start of the module to where they stand at its end. Now, the same task consists of asking an LLM to critically analyse a particular area of law, explaining what prompts they used to commission the task and why those were the appropriate prompts, justifying, supporting or refuting the AI’s critique, and explaining how their semester’s learning process allowed them to do so.

And there is much more to come…
