Students, you need to use AI in your assignments

Graphic created using ChatGPT 4o with the prompt “draw a picture for the blog that follows, including diverse students”, together with the whole text of the blog.

A new teaching semester has started, and most of my students were surprised by how strongly I encouraged them to use AI in their assignments (at least in my modules), which means that there are still some (many) teachers around telling them that the use of AI should be avoided and that it would, or at least may, amount to cheating.

Both in February and April 2023, at the inaugural Technology Enhanced Learning Community of Practice event and at BILETA 2023 respectively, when newspaper front pages were still presenting large language models as the end of literacy, I insisted on the need to adapt assessment to the rise and rise of AI in general and of large language models in particular. Many of my colleagues rose up in arms, chanting “Cheating institutionalisation!” (in different words, though) and claiming that the already high proportion of cheaters in Higher Education would become astronomical. I simply replied that the vast majority of my students were not cheaters, and asked whether they would pass a student who submitted work with numbers as fabricated as the ones they were citing, numbers that contradicted all the literature and available data at university level. Some didn’t like my question; none replied, but all got the message.

The central issue is that we are not at the dawn of the AI age; we are well into its morning, and those who cannot master AI will be replaced by it. Students therefore need to know how to use it, while understanding that if AI alone can write their assignments, the market will not need them, because at the individual level in certain jobs AI is cheaper than an employee. The challenge is to produce work that uses AI but goes beyond it, and that is where teachers (and the professions’ regulators) come into the picture.

The question is not whether a particular AI tool can pass some country’s examination, but whether, to give a blunt example, the bar examination is a valid method of assessing whether someone is ready to be a lawyer. And the same applies to almost every module/class/course, or whatever name subjects are given in different institutions.

It has become clear, and this somehow seems to be missing in certain discussions about AI and copyright (if you cannot distinguish between human- and AI-produced work, you may need to rethink the concept of originality instead of insisting on formalities that reality will render obsolete very soon, like the courts’ repeated mantra of “no human, no copyright”), that AI is extremely good at many things and surpasses humans at many others, but it is no match for human intelligence. Accordingly, instead of trying to stop the incoming waves with a bucket, many of us need to get up from the lounge chair, leave the comfort of the beach, and learn to surf.

For my first Business Law seminar, the task is “Using ChatGPT or similar, answer the questions given at the end of Lecture 1. Prepare to discuss”, and I explained to the students that they would have to deconstruct the answers given by the LLM and justify, support or refute them, paying particular attention to any hallucinations.

Part of the module’s assessment used to be a self-reflective journal, in which students had to critically reflect on some area of law and on the learning process that took them from where they stood in relation to it before the start of the module to where they stand at its end. Now the same task consists of asking an LLM to critically analyse a particular area of law, explaining which prompts they used to set the task and why those were the appropriate ones, justifying, supporting or refuting the AI’s critique, and explaining how their learning over the semester allowed them to do so.

And there is much more to come…

Algorithmic systems and sustainability

After more than a year without even opening this almost twenty-year-old blog, several changes in my private and professional life mean that I will return to this old pastime. I have decided to spend less time on planes and in managerial roles in Higher Education, and more on research, teaching and engagement activities, which means more time to write (with, of course, some academic and policy-related travelling).

Last year we were somewhat in awe of the rapid development of AI, although one could argue that what we saw was just a very fast adoption of a particular type of algorithmic system, generative AI, and even that type of algorithmic system has been part of people’s lives for quite a bit longer than a year and a half.

However, it is true that the irruption of generative AI and Large Language Models made algorithms a super-hot issue, so much so that the whole IT law field seems to have been swamped by AI discussions, as if there were not much else to talk about. But while the different scenarios and the obvious challenges that algorithmic systems present to the law seemed to create a quick consensus (really?) on the need to regulate them, the usual tendency of lawyers, law academics, judges and policymakers to focus on whatever requires the least modification of the current legal status quo has left important (fundamental) areas of law outside the analysis and/or the regulatory frenzy. One of them is the dilemmatic relationship between algorithmic systems and sustainability, which will have deep effects both on the environment and on the businesses operating in the AI field.

The argument has been that the sustainability and climate change implications of AI are common to any technological and economic activity and that, at best, there should be a generic sustainability legal framework applying to all of them, not one specific to AI. The counterarguments are various and can be made from different angles. From the sectoral point of view, the same could be said of the oil, cement and transport industries, yet there is a growing body of discussion and case-law holding that their situation is not a generic one, even when generic rules are applied to them. If we focus on the substantive issues and on emissions, the old view that a large enough difference in degree implies a difference in kind seems to apply squarely here: an activity that emits substantially more than others, or for which vast greenhouse gas emissions are intrinsic to its functioning, does not share common characteristics with just any technological and economic activity. Algorithmic systems fall into this category, and regulating them with a focus on sustainability and climate change is essential.

In the coming days I will start to dissect why and how that is true, together with the potential application of current rules that are already being used to deal with other heavy-emitting industries.