ChatGPT, the Skynet moment, and Judgement day without steel robots

There was a time when we imagined that the end of times would be marked by steel robots crushing the humans who tried to disconnect them, or enslaving us as a source of energy. But while science fiction has provided plenty of accurate predictions of things to come, it seems that the end of society as we know it may come from something less muscular and more subtle. We are going through days when it is almost impossible to open any social media site or publication without bumping into a discussion about the uses, benefits and problems of freely available natural language processing systems that seem to create texts on almost anything better than humans do.

Leaving aside that it is actually not true that the available systems create text better than humans, as they are quite basic, rigid and plagued with errors, let’s assume for a moment that they are indeed better than humans at creating those texts. There is a pervasive mistake in making the algorithm the centre of the discussion, and the hero of the imagined all-saving AI. As many know and have pointed out, the centre, the middle and the periphery of everything that AI can do is the DATA (yes, with capital letters!). There is plenty of writing about the ownership and privacy of such data and, therefore, of the resulting text (or image), but the crucial issue, the one with the capability of creating a judgment-day scenario, is the quality of the data.

There is some truth in the statements that AI is not biased per se, but there is even more truth in the fact that, by using biased data, AI can replicate and reinforce the original bias, so much so that it can convert it into the new reality, accepted as unbiased. The same applies to any form of AI, including descriptive, predictive and prescriptive, where the description, the prediction and the prescription are based on data that is biased, or incomplete, or plainly wrong, or a combination of any of those.
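This replicate-and-reinforce loop can be sketched with a deliberately simplified toy (the groups, approval rates and "model" are all invented for illustration, not any real system): a system trained on biased historical decisions reproduces the bias, and when its own decisions are fed back as fresh training data, the bias hardens into an absolute rule.

```python
import random

random.seed(0)

# Toy historical data: applicants from group A were approved 70% of the
# time, group B only 30% of the time, regardless of merit (the bias).
data = [("A", 1 if random.random() < 0.7 else 0) for _ in range(1000)] + \
       [("B", 1 if random.random() < 0.3 else 0) for _ in range(1000)]

def train(rows):
    """'Model' = approval rate per group, learned straight from the rows."""
    rates = {}
    for g in ("A", "B"):
        outcomes = [y for x, y in rows if x == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return rates

model = train(data)

# The model faithfully reproduces the historical bias; if its own
# thresholded decisions are then fed back as new training data, the
# gap between the groups becomes total.
decisions = [(g, 1 if model[g] > 0.5 else 0) for g, _ in data]
model2 = train(decisions)

print(model)   # roughly {'A': 0.7, 'B': 0.3} — the original bias
print(model2)  # {'A': 1.0, 'B': 0.0} — the bias is now absolute
```

Nothing in the code "hates" group B; the unfairness lives entirely in the data it was handed, which is exactly the point.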

Let’s use as an example Facebook and the way it shows users news and advertising in the feed. There are many “studies” saying that with XX number of “likes” Facebook knows what your preferences are, and that with YY number of them it can predict what you like and want better than you can yourself, but that is not entirely true. It is more accurate to say that, based on what you actually like, Facebook shows you news and ads that are close to it, but usually with a tendency to move towards what the advertiser wants you to like, in such a subtle manner that YY likes later you are actually liking what Facebook or the advertiser wanted you to like in the first place. Do you really think that millions of Americans woke up one day and, just out of their dislike of Hillary, decided to vote for a misogynistic, fraudster, liar and megalomaniac like Trump? If you believe that, you are not getting the gravity of the Cambridge Analytica scandal and how, by knowing the actual preferences of people, an algorithm can start crawling-pegging their interests until they are somewhat remote from, and even the opposite of, what they originally wanted (yes, some of those who were originally and sincerely abhorred by the idea of a sitting US president having an affair with an intern and then lying under oath were the same who then supported a women-grabbing “friend” of a paedophile, who knowingly tried to subvert the basis of their democracy).
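The crawling-peg mechanism can be illustrated with a toy feedback loop (every number and parameter here is hypothetical, and this is emphatically not Facebook's actual algorithm): the feed shows content slightly shifted from the user's current taste towards an advertiser's target, and the user's taste adapts towards what is shown. Each individual nudge is tiny; their accumulation is not.

```python
# Toy model: a user's taste is a point in [0, 1]. Each step, the feed
# shows content nudged a little from the current taste towards an
# advertiser's target, and the taste adapts towards what was shown.
def drift(user_taste, target, nudge=0.1, adaptation=0.5, steps=50):
    history = [user_taste]
    for _ in range(steps):
        shown = user_taste + nudge * (target - user_taste)  # subtle shift
        user_taste += adaptation * (shown - user_taste)     # taste adapts
        history.append(user_taste)
    return history

h = drift(user_taste=0.2, target=0.9)
print(round(h[0], 2), round(h[-1], 2))  # starts at 0.2, ends near the target
```

No single step moves the user more than a sliver from where they already are, which is why the drift is so hard to notice from the inside.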

Now let’s imagine that it is not “friends’” news or ads that the algorithm is showing you, but your whole consumption of news, data, science and information, written especially for you; that every time you want to know something, you have a system that does not give you a link to a page but gives you a text saying what actually “is” (in the ontological sense). Although it sounds great and the possibilities seem endless, yes, you guessed it: it all depends on the quality of the data to which the system has access. Currently most AI systems are “fed” data to train them so they know how to behave in certain situations, even in a biased form, but the ultimate goal is to unleash them on the wealth of data constituted by the Internet, mainly because that is where the money is. As you are already guessing, here is where the judgement-day moment may arise, that moment when Skynet is connected to the Internet and the machines kill everyone around them. No killing here, but the effects can be just as daunting.

If social media and its algorithmic exacerbation of an unethical and unprofessional press have led to massive manipulation, leading people to choose social self-harm at the levels we’ve seen with Brexit, Trump, Bolsonaro, different forms of Chavism and a long list of people choosing what will damage them and their community, just imagine if all the information they receive, everything that they “write” or ask the system to write, comes from this data-tainted algorithmic description, prediction and prescription. A separate, soon-coming post is needed on the issue of data quality and the role of the established press in misinformation campaigns, but it is clear that we need to start discussing that, besides how bad AI chats are supposed to be for essays as a form of assessment, there is a prospect of real social dissolution by misinformation and manipulation at a scale not seen until now.

ICT, farming and law

The population of the planet is going to grow from the current 7.7 billion to 9.7 billion in 2050 and nearly 11 billion by 2100, meaning, on the one hand, higher pressure on the availability of land for agriculture and, on the other, the need for greater agricultural production for food, raw materials and energy. Global climate change due to human activity and environmental degradation implies that extending the agricultural frontiers by further depleting existing forests is not an option. Smart farming consists of a suite of technologies rather than a single technology, and its global market stood at nearly 5 billion US dollars in 2016, expected to reach 16 billion US dollars by 2025. AI can be used to process that data, forecasting production output and anomalies for better distribution, financial planning and mitigation; smart sensors can collect the vast amounts of data that feed those forecasts; driverless machinery can perform different tasks around the clock, with replicable precision and even in adverse environmental conditions; drones are being used to gather data and control both crop and animal production; geographic information systems allow farmers to increase production by mapping and projecting fluctuations in environmental factors; and digital veterinary applications include telemedicine, trackers, wearable, monitoring and identification devices, and visual and sound recording. The use of these technologies in agricultural production raises a range of legal issues, some of which currently have a clear definition and others that might need some adaptation and reform. Artificial intelligence in agriculture attracts all the legal issues currently attributed to artificial intelligence in general, including contractual and data issues, with some that might have a specific impact on agricultural production.
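To make the forecasting-and-anomaly side of this concrete, here is a minimal sketch (the sensor readings and the threshold are invented, not drawn from any real system) of flagging an outlier reading against its recent baseline — the kind of automated judgement whose wrong answers the contracts discussed below would have to allocate liability for:

```python
import statistics

# Hypothetical daily soil-moisture readings (%) from one field sensor;
# the jump at the end could be a burst pipe or a failing sensor.
readings = [31.2, 30.8, 31.5, 30.9, 31.1, 30.7, 31.3, 45.0]

baseline = readings[:-1]               # earlier readings as the baseline
mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)

def is_anomaly(value, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from baseline."""
    return abs(value - mean) / sd > threshold

print(is_anomaly(readings[-1]))  # True  — flag the spike for review
print(is_anomaly(31.0))          # False — an ordinary reading
```

Even a sketch this small shows where the legal questions enter: who answers for a harvest decision taken on the strength of that flag if the sensor, not the field, was at fault?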
The main use of systems based on algorithms is to forecast different scenarios, using current and past data to find patterns, and, for example, there might be legal uncertainty when management decisions lead to severe variations in agricultural output, due to the lack of transparency and accountability found in some AI. The vast amount of data produced by sensors in fields and on animals leads to the need to retailor agricultural contracts to identify the different responsibilities and limitations of liability arising from the negative consequences of wrong decisions based on faulty data. At the same time, some of the data may result in the identification of the producers, which would attract the whole set of data protection laws to data that seems unrelated to them, in ways not foreseen by legislators. Furthermore, due to the sensitivity of the data collected, the security of that data needs to be a clear legal requirement, not only at the contractual level but also with some public safety and market transparency considerations. The use of mechatronics, drones, geographic information systems and the whole set of digital veterinary applications brings back the issues of privacy and of liability, both for malfunction and, more importantly, for undesirable production results, adding a strong need for cybersecurity and a set of regulatory compliance requirements, which may bring into question some fundamental rights issues. For example, can a farmer use technology based on drones near an airport? If not, would the airport operator or the state compensate the farmer for the potential losses or lack of profits? What are the security requirements for veterinary applications that have the potential to put unhealthy products on the consumer market through third-party malign interference? These are a few of the many issues raised by ICT use in agricultural production, all of which deserve further analysis, so keep an eye on Electromate.