Students, you need to use AI in your assignments

Graphic created using ChatGPT 4o with the prompt “draw a picture for the blog that follows, including diverse students”, and the whole text of the blog.

A new teaching semester has started, and most of my students were surprised by how strongly I encouraged them to use AI for their assignments (at least in my modules), which means that there are still some (many) teachers around telling them that the use of AI should be avoided and that it would, or at least might, amount to cheating.

In both February and April 2023, at the inaugural Technology Enhanced Learning Community of Practice event and at BILETA 2023 respectively, when newspaper front pages were still presenting large language models as the end of literacy, I insisted on the need to adapt assessment to the rise and rise of AI in general and large language models in particular. Many of my colleagues rose up in arms to chants of “Cheating institutionalisation!” (in different words, though), claiming that the already high proportion of cheaters in Higher Education would become astronomical. I simply replied that the vast majority of my students were not cheaters, and asked whether they would pass a student who submitted work with numbers as fabricated as the ones they were citing, numbers that contradicted all the literature and available data at university level. Some didn’t like my question, none replied, but all got the message.

The central issue is that we are not at the dawn of the AI age; we are well into its morning, and those who cannot master it will be replaced by AI. So students need to know how to use it, while understanding that if AI alone can write their assignments, the market will not need them, because, at the individual level in certain jobs, AI is cheaper than an employee. The challenge is to produce work that uses AI but goes beyond it, and that is where teachers (and professions’ regulators) come into the picture.

The question is not whether a particular AI tool can pass some country’s examination, but whether the bar examination is a valid method to assess whether someone is ready to be a lawyer, to give a blunt example. And the same applies to almost every module/class/course or whatever name subjects are given in different institutions.

It has become clear that AI is extremely good at many things and surpasses humans in many others, but it is no match for human intelligence. That point somehow seems to be missing in certain discussions about AI and copyright: if you cannot distinguish between human- and AI-produced work, you may need to rethink the concept of originality instead of insisting on formalities that reality will render obsolete very soon, like the courts’ repeated mantra of “no human, no copyright”. Accordingly, instead of trying to stop the incoming waves with a bucket, many of us need to get up from the lounge chair, leave the comfort of the beach, and learn to surf.

For my first seminar of Business Law, the task is “Using ChatGPT or similar, answer the questions given at the end of Lecture 1. Prepare to discuss”, and I explained to the students that they would have to deconstruct the answers given by the LLM and justify, support or refute them, paying particular attention to hallucinations.

Part of the module’s assessment used to be a self-reflective journal, where students needed to critically reflect upon some area of law and upon the learning process that took them from where they stood in relation to it before the start of the module to where they stand at its end. Now, the same task consists of asking an LLM to critically analyse a particular area of law, explaining which prompts they used to set the task and why those were the appropriate prompts, justifying, supporting or refuting the AI’s critique, and explaining how their learning over the semester allowed them to do so.

And there is much more to come…

The AI letter (and the fallacy of not shooting the messenger)

Several days have passed since a group of academics and business people released a letter asking for a moratorium on AI development and deployment. As presented, the letter is too little, too late, from a group of people that includes some of those with the least legitimacy to say things like “[s]uch decisions must not be delegated to unelected tech leaders”. Really? Do we need to quote 1990s Lessig again when referring to a letter signed by some of those who fought so hard to be free from any form of regulation by elected leaders that their responsibility for the current situation, astonishingly ignored by some of the less cynical signatories, cannot be overstated?

Some of those signing the letter, both business people making billions and academics getting chairs and winning prizes, are directly responsible for the “profound change in the history of life on Earth” created by technologies that have already contributed significantly to environmental catastrophe, the greatest income inequality ever, the growth of undemocratic movements and the continuous degradation of average intelligence. Are they serious when they say “dramatic economic and political disruptions (especially to democracy) that AI will cause”? Do they know that what was then the most robust democracy in the world elected a misogynistic fraudster thanks, first, to the more progressive groups ignoring masses of people, and then to the capture of those masses by the immoral, with the help of the technologies the signatories sell or helped develop with their papers? Have they heard about mass manipulation by the media, exacerbated by technology, and its impact on Brexit? Do they know about the use of current technologies in the rise and rise of autocratic tendencies around the world? (And, no, the use of social platforms in the Arab Spring does not show the democratic impact of technological developments… even a broken clock gives the correct time twice a day, and in that case the current scenario is not that promising.) So, “will cause”?! Sorry to say it, but the correct phrase is “has already caused”, and many of the signatories were, and are, part of it, not forgetting that many before them had already pointed out the potentially harmful impact of AI.

Furthermore, whenever someone in the past 30 years pointed to the need to either stop or regulate some technological developments until “we are confident that their effects will be positive and their risks will be manageable”, many of those signing this letter would respond, fallaciously, that such a moratorium or regulation would stifle innovation, plus a plethora of nonsense that, via billions spent on lobbyists and lawyers, and via prestigious papers, they managed to infuse at all levels, making sure that they got their billions and their prizes. What happens now? Wasn’t technological innovation more important than almost everything, so that they lobbied and argued to be allowed to reign over society as the robber barons reigned during the twilight of the Wild West?

Here is where many (most) will say “focus on the message and not on the messenger”, which is a fallacy that needs to be sent to the Averno along with many other phrases produced or used by, coincidentally, many signatories of the letter. A metaphorical example will elucidate the paramount importance of the identity and past behaviour of the messenger, even for correct, reasonable and 100% true messages (which the letter is not). Let’s assume that a flock of sheep is desperately looking for water; lost, thirsty and near starvation, when a wolf comes along and tells them the exact location of a water well, which happens to be correct. Should the sheep focus on the message, or should they check who it is coming from? Has the wolf suddenly found a fondness for every form of life and decided to make sure that the sheep survive, or is the wolf leading them to the well where the pack is waiting to slaughter them all? Or is it just making sure that they survive a little longer so that the pack has a longer-term supply of meat? So, yes, the message is important, but many times the messenger is just as important, and with this letter there are plenty of examples to show that the sudden interest in “the clear benefit of all, and [to] give society a chance to adapt” is indeed very sudden for many of the signatories.

Just as an example, since the letter has been widely promoted using one of its signatories’ names: will Tesla pause all sales of cars until all their systems “implement a set of shared safety protocols […] that are rigorously audited and overseen by independent outside experts” and that “[…] systems adhering to them are safe beyond a reasonable doubt”? Will the academics return their chairs and prizes, and refrain from publishing papers, until they “are confident that [the] effects [of what they are producing] will be positive and their risks will be manageable”, again, “beyond a reasonable doubt”? Until the day such a pause is implemented, the signature of the mentioned letter by some signatories seems, at best, shameless. Some may say that this situation is different, and/or that when they were developing the theories and technologies that led to today’s situation and today’s AI they didn’t know where they might lead, but that only reinforces the point: they should have known, or the billions and the chairs and the prizes need to be given back. It is impossible not to remember when a number of scientists, many of whom had been part of the nuclear weapons programme, signed a letter committing to raise awareness among other scientists and the public of the dangers that nuclear weapons pose to humanity… did such a huge number of Nobel laureates and distinguished scientists not know about the dangers when they were designing nuclear weapons? What did they think they were designing them for? A science fair?!

The letter has a few good points; but as the technological moratorium is unlikely to happen (and many signatories have zero moral authority to request one), the focus should be on a clear and strong regulatory framework, because, as some ignotus academic said in 2018 (down on page 66), “[t]he challenge is not technical; it is sociopolitical”, and the same academic had already said back in 2008 that “current law and principles are ill-equipped to deal with further radical changes in technology, which could imply the need of a more proactive approach to legal developments”.

In conclusion, as ChatGPT likes to say, it is not time for billionaires and respected academics to sign letters, but to put their money where their mouths are and focus their work on the actual achievement of a robust regulatory system, for example by using their lobbying power and money to that effect instead of doing exactly the opposite, so we can all “enjoy a long AI summer” instead of having hordes of people suffering sunburn.

ChatGPT, the Skynet moment, and Judgement day without steel robots

There was a time when we imagined that the end of times would be marked by steel robots crushing the humans who tried to disconnect them, or enslaving them as a source of energy, but while science fiction has provided plenty of accurate predictions of things to come, it seems that the end of society as we know it may come from something less muscular and more subtle. We are going through days when it is almost impossible to open any social media site or publication without bumping into a discussion about the uses, benefits and problems of freely available natural language processing algorithmic systems that seem to create texts about almost anything better than humans do.

Leaving aside that it is actually not true that the available systems create text better than humans, as they are quite basic, rigid and plagued with errors, let’s assume for a moment that they are indeed better than humans at creating those texts. There is a pervasive mistake in making the algorithm the centre of the discussion and the hero of the imagined save-all AI. As many know and have pointed out, the centre, the middle and the periphery of everything that AI can do is the DATA (yes, with capital letters!). There is plenty of writing about the ownership and privacy of such data and, therefore, of the resulting text (or image), but the crucial issue, the one with the capability of creating a judgement-day scenario, is the quality of the data.

There is some truth in the statements that AI is not biased per se, but there is even more truth in the fact that, by using biased data, AI can replicate and reinforce the original bias, so much so that it can convert it into the new reality, accepted as unbiased. The same applies to any form of AI, including descriptive, predictive and prescriptive, where the description, the prediction and the prescription are based on data that is biased, or incomplete, or plainly wrong, or any combination of those.
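To make that bias-reinforcement loop concrete, here is a minimal, purely illustrative Python sketch; the groups, approval rates and numbers are all invented, not taken from any study or real system. A “model” that simply learns the historical approval rate per group reproduces the historical gap, and once its own decisions are fed back as new training data, that gap is locked in as the supposedly unbiased ground truth.

```python
# Toy example (invented data): a "model" that learns only the historical
# approval rate per group. Group A was approved ~80% of the time, group B
# only ~20%, for otherwise identical candidates. The model reproduces the
# gap, and retraining on its own output keeps the gap in place.
import random

random.seed(42)

# Biased historical data: (group, approved)
history = ([("A", random.random() < 0.8) for _ in range(1000)]
           + [("B", random.random() < 0.2) for _ in range(1000)])

def train(data):
    """Learn one number per group: the observed approval rate."""
    rates = {}
    for group in ("A", "B"):
        decisions = [approved for g, approved in data if g == group]
        rates[group] = sum(decisions) / len(decisions)
    return rates

def predict(rates, group):
    """Approve with the probability the group was approved in the past."""
    return random.random() < rates[group]

model = train(history)
print("learned approval rates:", model)  # roughly 0.8 for A, 0.2 for B

# Feedback loop: the model's own decisions become tomorrow's training data.
for _ in range(5):
    new_cases = [(g, predict(model, g)) for g in ("A", "B") for _ in range(500)]
    history += new_cases
    model = train(history)
print("after retraining on own output:", model)  # the gap does not go away
```

The point of the toy is the feedback loop, not the particular rule learned: nothing in the code is prejudiced, yet the output faithfully perpetuates whatever prejudice the data carried in.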

Let’s use Facebook as an example, and the way it shows users news and advertising in their feeds. There are many “studies” saying that with XX “likes” Facebook knows what your preferences are, and that with YY of them it can predict what you like and want better than you can yourself, but that is not entirely true. It is more accurate to say that, based on what you actually like, Facebook shows you news and ads that are close to it, but usually with a tendency to move towards what the advertiser wants you to like, in such a subtle manner that YY likes later you are actually liking what Facebook or the advertiser wanted you to like in the first place. Do you really think that millions of Americans woke up one day and, just out of dislike of Hillary, decided to vote for a misogynistic, fraudulent, lying megalomaniac like Trump? If you believe that, you are not getting the gravity of the Cambridge Analytica scandal and how, by knowing people’s actual preferences, an algorithm can start crawling-pegging their interests until they are far removed from, and even the opposite of, what they originally wanted (yes, some of those who originally and sincerely abhorred the idea of a sitting US president having an affair with an intern and then lying under oath were the same people who then supported a woman-grabbing “friend” of a paedophile who knowingly tried to subvert the basis of their democracy).
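For the crawling-peg drift described above, a toy simulation may help; the parameters are made up and this claims nothing about how Facebook’s actual systems work, only how a long series of tiny, individually unremarkable nudges can move someone’s taste a long way.

```python
# Toy simulation (invented numbers) of the "crawling peg" dynamic: each week
# the platform shows content only slightly closer to the advertiser's target
# than the user's current taste, and the user's taste drifts a little toward
# what was shown.
user = 0.0     # user's actual taste on an arbitrary 0-10 scale
target = 10.0  # where the advertiser wants the user's taste to end up
step = 0.4     # shown content is at most this much beyond the user's taste
pull = 0.5     # fraction by which the user drifts toward what they are shown

for week in range(1, 53):
    shown = user + min(step, target - user)  # always close to current taste
    user += pull * (shown - user)            # taste drifts toward what is shown
    if week % 13 == 0:
        print(f"week {week:2d}: taste = {user:.2f}")

# After a year of tiny nudges the user's taste sits near the target, even
# though no single recommendation ever felt far from what they already liked.
```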

Now let’s imagine that what the algorithm shows you is not your “friends’” news or ads, but your whole consumption of news, data, science and information, written especially for you; that every time you want to know something, you have a system that does not give you a link to a page but gives you a text stating what actually “is” (in the ontological sense). Although it sounds great and the possibilities seem endless, yes, you guessed it: it all depends on the quality of the data to which the system has access. Currently most AI systems are “fed” data to train them so they know how to behave in certain situations, even if in a biased form, but the ultimate goal is to unleash them on the wealth of data constituted by the Internet, mainly because that is where the money is. As you are already guessing, this is where the judgement-day moment may arise, the moment when Skynet is connected to the Internet and the machines kill everyone around them. No killing here, but the effects can also be daunting.

If social media and its algorithmic exacerbation of an unethical and unprofessional press have led to massive manipulation, making people choose social self-harm at the levels we have seen with Brexit, Trump, Bolsonaro, different forms of Chavism and a long list of people choosing what will damage them and their community, just imagine if all the information they receive, everything that they “write” or ask the system to write, came from this data-tainted algorithmic description, prediction and prescription. A different post, coming soon, is needed on the issue of data quality and the role of the established press in misinformation campaigns, but it is clear that we need to start discussing that, beyond how bad AI chatbots are supposed to be for essays as a form of assessment, there is a prospect of real social dissolution through misinformation and manipulation at a scale not seen until now.