The AI letter (and the fallacy of not shooting the messenger)

Several days have passed since a group of academics and business people released a letter asking for a moratorium on AI development and deployment. As presented, the letter is too little, too late, from a group that includes some of the people with the least legitimacy to say things like “[s]uch decisions must not be delegated to unelected tech leaders”. Really? Do we need to quote 1990s Lessig again when referring to a letter signed by some of those who fought so hard to remain free of any form of regulation by elected leaders? Their responsibility for the current situation, astonishingly ignored by some of the less cynical signatories, cannot be overstated.

Some of those signing the letter, both business people making billions and academics getting chairs and winning prizes, are directly responsible for the “profound change in the history of life on Earth” created by technologies that have already contributed significantly to environmental catastrophe, the greatest income inequality ever recorded, the growth of undemocratic movements and the continuous degradation of average intelligence. Are they serious when they say “dramatic economic and political disruptions (especially to democracy) that AI will cause”? Do they know that what was then the most robust democracy in the world elected a misogynistic fraudster, thanks first to the more progressive groups ignoring masses of people and then to the capture of those masses by the immoral, with the help of the very technologies the signatories sell or helped develop with their papers? Have they heard about mass manipulation by the media, exacerbated by technology, and its impact on Brexit? Do they know about the use of current technologies in the rise and rise of autocratic tendencies around the world? (And, no, the use of social platforms in the Arab Spring does not show the democratic impact of technological developments; even a broken clock gives the correct time twice a day, and in that case the current scenario is not that promising.) So, “will cause”?! Sorry to say it, but the correct phrase is “it has already caused”, and many of the signatories were, or are, part of it, not forgetting that many before them had already pointed out the potentially harmful impact of AI.

Furthermore, whenever someone in the past 30 years pointed to the need to stop or regulate some technological developments until “we are confident that their effects will be positive and their risks will be manageable”, many of those signing this letter would respond, fallaciously, that such a moratorium or regulation would stifle innovation, plus a plethora of nonsense that, via billions spent on lobbyists, lawyers and prestigious papers, they managed to infuse at all levels, making sure that they got their billions and their prizes. What happens now? Wasn’t technological innovation more important than almost everything else, so they lobbied and argued to be allowed to reign over society as the robber barons reigned during the twilight of the Wild West?

Here is when many (most) will say, “focus on the message and not on the messenger”, which is a fallacy that needs to be consigned to the Averno along with many other phrases produced or used by, coincidentally, many signatories of the letter. A metaphorical example will elucidate the paramount importance of the identity and past behaviour of the messenger, even with correct, reasonable and 100% true messages (which the letter is not). Let’s assume that a flock of sheep is desperately looking for water: lost, thirsty and near starvation. A wolf comes along and tells them the exact location of a water well, and the information happens to be correct. Should the sheep focus on the message, or should they check who it is coming from? Has the wolf suddenly found a fondness for every form of life and decided to make sure that the sheep survive? Is the wolf leading them to the well where the pack is waiting to slaughter them all? Or is it just making sure that they survive a little longer so the pack has a longer-term provision of meat? So, yes, the message is important, but many times the messenger is just as important, and with this letter there are plenty of examples showing that the sudden interest in “the clear benefit of all, and [to] give society a chance to adapt” is indeed very sudden for many of the signatories.

Just as an example, since the letter has been widely promoted using the name of one of Tesla’s owners: will Tesla pause all sales of cars until all its systems “implement a set of shared safety protocols […] that are rigorously audited and overseen by independent outside experts” and those “[…] systems adhering to them are safe beyond a reasonable doubt”? Will the academics return their chairs and prizes, and refrain from publishing papers, until they “are confident that [the] effects [of what they are producing] will be positive and their risks will be manageable”, again, “beyond a reasonable doubt”? Until the day such a pause is implemented, the signing of the letter by some signatories seems, at best, shameless. Some may say that this situation is different, and/or that when they were developing the theories and technologies that led to today’s situation and today’s AI they didn’t know where it might lead, but that only reinforces the point: they should have known, or the billions and the chairs and prizes need to be given back. It is impossible not to remember when a number of scientists, many of whom had been part of the nuclear weapons program, signed a letter committing to raise awareness among other scientists and the public of the dangers that nuclear weapons pose to humanity… if such a huge number of Nobel laureates and distinguished scientists did not know about the dangers when they were designing nuclear weapons, what did they think they were designing them for? A science fair?!

The letter does make a few good points; but as the technological moratorium is unlikely to happen (and many signatories have zero moral authority to request one), the focus should be on a clear and strong regulatory framework, because, as some ignotus academic said in 2018 (down on page 66), “[t]he challenge is not technical; it is sociopolitical”, and the same academic had already said, back in 2008, that “current law and principles are ill-equipped to deal with further radical changes in technology, which could imply the need of a more proactive approach to legal developments”.

In conclusion, as ChatGPT likes to say, it is not time for billionaires and respected academics to sign letters, but to put their money where their mouths are and focus their work on the actual achievement of a robust regulatory system, for example by using their lobbying power and money to that effect instead of doing exactly the opposite, so we can all “enjoy a long AI summer” instead of having hordes of people suffering sunburns.

ICT, farming and law

The population of the planet is going to grow from the current 7.7 billion to 9.7 billion in 2050 and nearly 11 billion by 2100, meaning, on the one hand, higher pressure on the availability of land for agriculture and, on the other, a need for greater agricultural production for food, raw materials and energy. Global climate change due to human activity, together with environmental degradation, implies that extending the agricultural frontier by further depleting existing forests is not an option.

Smart farming consists of a suite of technologies rather than a single technology, and its global market stood at nearly 5 billion US dollars in 2016, expected to reach 16 billion US dollars by 2025. Smart sensors can collect vast amounts of data from fields and animals; AI can process that data to forecast production output and anomalies for better distribution, financial planning and mitigation; driverless machinery can perform different tasks around the clock, with replicable precision, even in adverse environmental conditions; drones are being used to gather data and to monitor both crop and animal production; geographic information systems allow farmers to increase production by mapping and projecting fluctuations in environmental factors; and digital veterinary applications include telemedicine, trackers, wearables, monitoring and identification devices, and visual and sound recording.

The use of these technologies in agricultural production raises a range of legal issues, some of which currently have a clear definition and others that might need some adaptation and reform. Artificial intelligence in agriculture attracts all the legal issues currently being raised about artificial intelligence in general, including contractual and data issues, some of which have a specific impact on agricultural production. The main use of algorithm-based systems is to forecast different scenarios, using current and past data to find patterns (a toy sketch at the end of this post illustrates the idea), and there might be legal uncertainty, for example, when management decisions lead to severe variations in agricultural output, due to the lack of transparency and accountability found in some AI. The vast amount of data produced by sensors in fields and on animals leads to the need to retailor agricultural contracts to identify the different responsibilities and limitations of liability arising from the negative consequences of wrong decisions based on faulty data. At the same time, some of the data may result in the identification of the producers, which would attract the whole set of data protection laws to data that seems unrelated to them, in ways not foreseen by legislators. Furthermore, due to the sensitivity of the data collected, data security needs to be a clear legal requirement, not only at the contractual level but also on public safety and market transparency grounds.

The use of mechatronics, drones, geographic information systems and the whole set of digital veterinary applications brings back the issues of privacy and of liability, for both malfunction and, more importantly, undesirable production results, adding a strong need for cybersecurity and a set of regulatory-compliance requirements, which may bring some fundamental rights into question. For example, can a farmer use drone-based technology near an airport? If not, should the airport operator or the state compensate the farmer for the potential losses or lost profits? What are the security requirements for veterinary applications that could put unhealthy products on the consumer market through third-party malign interference? These are a few of the many issues raised by the use of ICT in agricultural production, all of which deserve further analysis, so keep an eye on Electromate.
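As a postscript to the forecasting point above, here is a minimal, self-contained Python sketch of the kind of pattern-finding such systems perform: flagging anomalous sensor readings against a rolling baseline. The data, window size and threshold are hypothetical, purely for illustration, and this is not any real farm-management product; deployed systems rely on far more opaque models, which is exactly where the transparency and liability questions bite.

    from statistics import mean, stdev

    def flag_anomalies(readings, window=5, k=2.0):
        """Return indices of readings more than k standard deviations
        away from the rolling mean of the preceding `window` readings."""
        anomalies = []
        for i in range(window, len(readings)):
            history = readings[i - window:i]
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(readings[i] - mu) > k * sigma:
                anomalies.append(i)
        return anomalies

    # Hypothetical hourly soil-moisture percentages; the spike at index 7
    # could be a burst pipe, a faulty sensor or malicious data injection,
    # and the algorithm cannot tell which: the liability question in a nutshell.
    soil_moisture = [31.0, 30.6, 30.9, 31.2, 30.8, 31.1, 30.7, 48.5, 31.0, 30.9]
    print(flag_anomalies(soil_moisture))  # prints [7]

Even in this fully transparent toy, whether index 7 counts as an actionable anomaly depends on a threshold someone chose; in a black-box model, not even that choice is visible to the parties trying to allocate responsibility.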