The New US AI Action Plan, or losing the race you declare

The Trump Administration just released America’s AI Action Plan, a bold, sweeping roadmap to secure what it defines as “unquestioned and unchallenged global technological dominance.” Framed as an existential race against geopolitical rivals like China, the plan sets out to transform every major sector of American life (industry, national security, infrastructure, education) through Artificial Intelligence. It is unapologetically ambitious, deregulatory, and ideologically driven, although these last traits, with their clear anti-science rhetoric, may prevent it from achieving its stated aims.

The document is riddled with contradictions, selective interpretations of freedom, and a startling disregard for the pressing global challenge of sustainability. Yet, beneath the rhetoric and nationalist framing, there are pockets of pragmatic proposals, especially in sector-specific AI deployments, workforce development, and open source AI infrastructure, that deserve serious engagement.

At its core, the AI Action Plan reads like a manifesto for accelerationism without brakes. The opening pages reject previous efforts at cautious regulation, like Biden’s Executive Order 14110, and embrace full-speed deployment of AI, unburdened by red tape, environmental considerations, or ethical frameworks. The plan’s repeated insistence on removing regulatory barriers casts oversight itself as a threat, particularly oversight related to misinformation, diversity, climate change, and human rights. Paradoxically, based purely on ideology, the Office of Science and Technology Policy is tasked not with strengthening public-interest safeguards but with rescinding rules deemed ideological or anti-innovation.

This deregulatory zeal extends to infrastructure. Environmental protections under NEPA, the Clean Air Act, and the Clean Water Act are portrayed as inconvenient obstacles to building the data centres and energy systems AI needs. Climate considerations are not just omitted, they are actively scrubbed from public standards, with an explicit instruction to eliminate references to climate change from NIST frameworks. While this framing may excite Silicon Valley libertarians, and others poised to profit from unrestrained business activities, it raises the question of what kind of AI ecosystem the US will be building if the very values that ensure justice, accountability, and environmental sustainability are excised from its foundation.

One of the starkest contradictions in the plan is its call to defend freedom of speech in AI systems, followed immediately by a directive to suppress content or models that reflect so-called social engineering agendas or woke values. That is, according to the drafters of the policy, freedom of speech is guaranteed by prohibiting speech, which is the equivalent of organising free orgies to promote virginity.

For instance, developers must ensure that their systems are “free from top-down ideological bias”, a phrase used to justify banning government procurement of AI that acknowledges diversity, equity, climate change, or structural inequality. This narrow conception of objectivity suggests that any model reflecting progressive or globally accepted norms is inherently suspect. Accordingly, the Action Plan’s version of freedom seems to operate on a one-way street. It welcomes open dialogue, unless that dialogue challenges the current administration’s values. The implications for academic freedom, AI ethics research, and inclusive policymaking are profound, all of which are paramount for sustained innovation.

Perhaps the most glaring omission is the complete lack of any serious engagement with sustainability. Despite dedicating an entire pillar to AI infrastructure, including data centres, semiconductors, and the national grid, there is not a single reference to sustainable development goals, carbon emissions, or green AI. Instead, the plan explicitly promotes the expansion of energy-intensive infrastructure while celebrating the abandonment of “radical climate dogma”. The phrase “Build, Baby, Build” is invoked as a national imperative, with energy consumption framed only as a barrier to be bulldozed through.

This omission is especially concerning given the growing global awareness that AI, particularly large-scale models, can have a significant carbon footprint. The EU AI Act and many national strategies now link AI policy with broader climate objectives, not forgetting that global investment in the low-carbon energy transition reached $2.1 trillion in 2024. America’s plan, by contrast, treats environmental sustainability as a politically inconvenient distraction, and risks leaving the US out of the innovation fuelled by those funds. This leaves the US not only misaligned with international efforts, but also vulnerable to long-term economic and environmental risks.

However, amid the ideological rhetoric and the toddler-like phrases, there are components of the Action Plan that are thoughtfully constructed and potentially transformative, especially where the focus shifts from populism and geopolitics to sectoral applications and innovation ecosystems.

The plan calls for targeted AI adoption strategies in critical sectors such as healthcare, manufacturing, agriculture, national security, and scientific research. It supports regulatory sandboxes and domain-specific AI Centres of Excellence, mechanisms that can help scale safe and effective innovation in complex environments.

Initiatives to modernise healthcare with AI tools, apply AI in advanced manufacturing, and support biosecurity research show a clearer understanding of AI’s potential for real-world impact. If implemented with inclusive governance, these initiatives could significantly enhance productivity and resilience in key sectors, although, as presented, they risk leaving outside the funding pool those who focus on the environmental impact of their investments.

The Plan’s provisions to retrain and upskill American workers also seem well conceived, recognising the labour market disruption AI may cause and proposing concrete steps, from expanding apprenticeships and AI literacy in technical education to tax incentives for employer-sponsored training. The establishment of an AI Workforce Research Hub could, if well supported, provide crucial data and forward-looking analysis on job displacement, wage effects, and emerging skill demands. It remains to be seen how the need for serious research will be balanced against the constant attacks on some of the world’s top research institutions.

The Plan’s strong endorsement of open-weight and open-source models may be one of its most forward-looking elements. These models are essential for academic research, governmental transparency, and innovation outside the Big Tech ecosystem: unlike closed-source systems that concentrate power, open models allow more equitable access and experimentation.

Furthermore, the commitment to build a sustainable National AI Research Resource (NAIRR) infrastructure and improve financial mechanisms for compute access, especially for researchers and startups, is a rare bright spot. It signals an intention to diversify the AI innovation ecosystem but, again, it might collide with the White House’s constant defunding of, and battles with, serious research institutions.

Finally, the Plan’s third pillar, international diplomacy and AI security, seeks to export “the full American AI stack” to like-minded nations while isolating rivals, particularly China. The aim is to create a global alliance built around US-developed hardware, standards, and systems. Here the plan may hit hard against reality, as the constant undermining of diplomatic principles and rules by the US government, and the growing global lack of trust in America’s commitment to an international system based on rules, may result in countries looking for solutions elsewhere.

Without shared values of sustainability, fairness, and rights-based governance, will the world want what America is selling? The EU, Canada, Brazil, and other global actors are increasingly anchoring AI governance in democratic accountability, inclusive participation, and climate-conscious design. An American AI regime defined by deregulation and cultural exclusion may find limited traction outside its ideological bubble.

Ideology is the foundation of thinking, but when it replaces thinking, it may lead plans to work against their expected results, and some aspects of America’s AI Action Plan might be a good example of that.

The fallacy of the AI debate in academic papers


In the midst of the hand-wringing over the use of artificial intelligence in academic writing, one fundamental truth seems to be getting lost: scientific papers exist to advance knowledge, not to pass a style audit or some sort of forensic analysis by not-too-keen peer reviewers. Whether a paper was written with the help of AI, edited by a colleague, or typed out manually at 2 a.m. is entirely irrelevant if the work it communicates is original, rigorous, and contributes meaningfully to the field.

And yet, paradoxically, many journals and peer reviewers remain locked in a self-defeating contradiction: they claim to defend the sanctity of originality in research, while simultaneously enforcing rigid, performative standards of academic prose and citation that actively discourage innovation and insight. In doing so, they create a culture where the form and appearance of science are privileged over function and actual scientific progress, where how a paper is written matters more than what it says.

The Real Purpose of a Scientific Paper

At its core, a scientific paper serves one purpose: to communicate the findings of a research process that either advances a theoretical understanding or offers a practical solution. The writing is the vehicle, not the destination.

We do not ask whether a microscope or the statistical software used in the analysis tainted the “authenticity” of a result, while few reviewers, if any, actually check whether the presented statistics add up. Yet we ask that of writing tools like AI, even when they are used merely to structure or polish a piece, or to help the writer better articulate complex ideas. The growing fixation on how a text is generated often obscures a more critical question: does the research move the field forward? We have reached the summit of ridiculousness by even questioning (and wasting journals’ space on) whether an abstract was written by AI (let me be as plain as possible: if you are not writing your abstracts using AI, the time consumed in writing an abstract should be discounted from your salary!).

Original Thought vs. Citation Performance

This problem is compounded by a deep contradiction in the peer review process. On one hand, reviewers and editors bemoan the lack of originality in submissions, urging authors to offer novel perspectives. On the other, they often reject papers that stray too far from the citation-dense orthodoxy of academic writing, particularly when those papers provide fresh, compelling insights (they even call them “opinion pieces”). While it is important to have a robust literature review to show the state of the art in a particular field, not all research should be about the literature; otherwise, we might as well leave the summarising to AI.

If a paper dares to deviate from the one-citation-per-sentence model, or synthesises across disciplines in a way that doesn’t fit the journal’s rigid schema, it risks almost certain rejection, not on scientific grounds but on stylistic ones disguised as science. This pressure has only intensified in the AI era, where any sign of syntactic uniformity is now suspiciously scrutinised, as if clarity were a symptom of machine authorship, which in addition discriminates against multilingual writers, who tend to use more flourished language and some of the words usually employed by AI. Not forgetting that so-called AI uses probabilistic analysis based on what it has “learned” from previous papers, so if it uses the word “nuance” a lot, as I have done for years, it is because “nuance” has been used consistently in previous, mostly non-AI papers; using it now would therefore be highly probable even when not writing with AI. What has probably changed is that those now producing that important research on integrity had never read or counted those words in papers before, for a variety of reasons, likely stylistic or because that was not their type of science (one based mainly on citations and not necessarily on original thought).

The AI Moral Panic

The moral panic surrounding AI in academia is understandable but misdirected. Concerns about plagiarism, ghostwriting, and the erosion of critical thinking are valid, but they aren’t new, they’re just taking a new form, and the idea that they have now increased substantially needs to be proved with the same rigour that those doing that research demand from everyone else. The value of AI, like any tool, depends entirely on how it is used, and claiming that a change in the use of certain words implies the prevalence of some form of academic dishonesty is far less rigorous and more unscientific than many (most?) papers written with the aid of AI.

Using AI to fabricate results is fraud, plain and simple; using it simply to summarise what others have done and presenting that as original is wrong, no discussion about that (although it seems to be preferred by some reviewers). But using AI to help articulate a novel, original contribution is no different from using grammar-checking software or a thesaurus. Rejecting a paper on the mere suspicion that “AI helped with the wording” is akin to rejecting a paper because the figures were “too polished”; it is a non sequitur.

Reclaiming the Purpose of Academic Publishing

We need to return to a basic question in academic publishing: is this paper advancing the field? Is it offering a solution to a real theoretical or practical problem? Does it demonstrate methodological integrity? Is it grounded in evidence, even if it doesn’t reference every single author who ever touched the topic?

If the answer is yes, then the prose style, the citation density, and the grammatical polish are secondary at best. Reviewers should be encouraged to focus on the substance, not the scaffolding, although that clearly requires real effort from peer reviewers, instead of simply counting the number of references. Furthermore, the argument that writing properly is part of serious science or academia is as ridiculous as the one heard decades ago, when having good handwriting was also seen as a requirement to be a scholar.

At the end of the day, AI is not the enemy of science; rigid, anti-innovative gatekeeping is. Let’s not mistake performance for insight, or style for substance, as the health of scientific inquiry depends on our ability to recognise and reward originality and rigour, no matter what tools helped communicate them.

(the subtitles were suggested by AI 😉 although the ideas are sadly mine)

Students, you need to use AI in your assignments

Graphic created using ChatGPT 4o with the prompt “draw a picture for the blog that follows, including diverse students”, and the whole text of the blog.

A new teaching semester has started, and most of my students were surprised by my over-encouragement for them to use AI in their assignments (at least in my modules), meaning that there are still some (many) teachers around telling them that the use of AI should be avoided and that it would, or may, be cheating.

Both in February and April 2023, at the inaugural Technology Enhanced Learning Community of Practice event and BILETA 2023 respectively, when newspaper covers were still presenting large language models as the end of literacy, I insisted on the need to adapt assessment to the rise and rise of AI in general and large language models in particular. Many of my colleagues rose up in arms to chants of “Cheating institutionalisation!” (with different words, though), claiming that the already high proportion of cheaters in Higher Education would become astronomical. I simply replied that the vast majority of my students were not cheaters, and asked whether they would pass a student who submitted work with numbers as fabricated as those they were citing, which contradicted all the literature and available data at university level. Some didn’t like my question; none replied, but all got the message.

The central issue is that we are not at the dawn of the AI age, we are in the morning of it, and those who cannot master it will be replaced by AI. So, students need to know how to use it, while understanding that if AI alone can write their assignments, the market will not need them because, at the individual level in certain jobs, AI is cheaper than an employee. The challenge is to produce work that uses AI but goes beyond it, and that is where teachers (and professions’ regulators) come into the picture.

The question is not whether a particular AI tool can pass some country’s examination, but whether the bar examination is a valid method to assess whether someone is ready to be a lawyer, to give a blunt example. And the same applies to almost every module/class/course or whatever name subjects are given in different institutions.

It has become clear, and this somehow seems to be missing in certain discussions about AI and copyright (if you cannot distinguish between human- and AI-produced work, you may need to rethink the concept of originality instead of insisting on formalities that reality will render obsolete very soon, like the courts’ repeated mantra of “no human, no copyright”), that AI is extremely good at many things and surpasses humans in many others, but it is no match for human intelligence. Accordingly, instead of trying to stop the incoming waves with a bucket, many of us need to get up from the lounge chair, leave the comfort of the beach, and learn to surf.

For my first seminar of Business Law, the task is “Using ChatGPT or similar, answer the questions given at the end of Lecture 1. Prepare to discuss”, and I explained to the students that they will have to deconstruct the LLM’s answers and justify, support or refute them, with particular attention paid to the hallucinations.

Part of the module’s assessment used to be a self-reflective journal, where students needed to critically reflect upon some area of law and the learning process that took them from where they were in relation to it before the start of the module to where they are at the end of it. Now, the same task consists of asking an LLM to critically analyse a particular area of law, explaining what prompts they used to direct the task and why those were the appropriate prompts, justifying, supporting or refuting the AI critique, and explaining how their semester’s learning process allowed them to do so.

And there is much more to come…

AI and environmental damage

In the previous entry, the issue of the environmental and climate change impact of AI use and development was presented as important and in need of urgent treatment by policy makers (who are squarely ignoring it in most proposals for AI regulation). Those impacts are real, considerable and multifaceted, involving major energy consumption, resource depletion, and a variety of other ecological consequences.

Training large AI models requires immense computational power and therefore large quantities of energy. Just as an example, training a single model like GPT-3 can consume 1,287 MWh of electricity, resulting in about 502 metric tons of CO2 emissions, which is comparable to the annual emissions of dozens of cars. But while the energy consumed during the training phase is significant, considerably more energy is used during the inference phase, where models are deployed and used in real-world applications. There have been interesting attempts to justify such use, mainly by comparing pears not with apples but with scissors, but they seem to ignore the fact that the general human emissions with which the AI ones are compared will be there regardless of the activity, so the improper use of AI adds emissions without subtracting many. In a world where the development and deployment of AI is bound to keep growing at bubble-like rates, this means that the location of data centres plays a crucial role in determining the carbon footprint of AI, as data centres are expected to double their energy consumption by 2026 (if 170 pages is too much to read, simply go to page 8). Data centres powered by renewable energy sources have a lower carbon footprint than those in regions reliant on fossil fuels, and there is an argument for making such use compulsory.
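As a rough back-of-the-envelope check of that comparison, here is a minimal sketch: the training figures are the ones quoted above, while the per-car number is my own assumption, roughly the often-quoted EPA estimate of about 4.6 metric tons of CO2 per typical passenger car per year.

```python
# Sanity check of the GPT-3 training figures quoted above.
# 1,287 MWh and ~502 tCO2 come from the text; the per-car figure is an assumption
# (roughly the EPA's often-quoted ~4.6 tCO2 emitted per passenger car per year).
training_energy_mwh = 1287
training_emissions_t = 502
car_emissions_t_per_year = 4.6  # assumed, not from the cited study

implied_grid_intensity = training_emissions_t * 1000 / training_energy_mwh  # kg CO2 per MWh
car_equivalents = training_emissions_t / car_emissions_t_per_year

print(f"Implied grid intensity: ~{implied_grid_intensity:.0f} kg CO2/MWh")
print(f"Roughly the annual emissions of ~{car_equivalents:.0f} passenger cars")
```

Depending on the per-car figure one assumes, the result lands somewhere between several dozen and roughly a hundred cars; the exact number matters less than the fact that a single training run already sits at that scale, before a single inference query has been counted.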

From the resource depletion and e-waste point of view, AI hardware, including GPUs and specialised chips, requires rare earth elements and other minerals. The extraction and processing of these materials can lead to environmental degradation and biodiversity loss. AI is currently being used to find ways to replace those rare earth elements, but even then, as AI technology evolves, older hardware becomes obsolete, contributing to a steep increase in the amount of electronic waste. Besides the global inequalities generated by the mountains of e-garbage currently dumped in developing countries, e-waste contains hazardous substances like lead, mercury, and cadmium, which can contaminate soil and water if not properly managed.

A less obvious but equally significant impact is water usage. Training AI models requires abundant amounts of water for cooling data centres, with some studies claiming that the water consumed during the training of algorithmic models is equivalent to the water needed to produce hundreds of electric cars.

To add to the energy and resource consumption, the uncontrolled, improperly regulated widespread use of AI can have severe ecological impacts, particularly, and paradoxically, in activities where the proper use of AI could minimise them. Failing to make sustainability a key aspect of algorithm design, training and AI deployment may lead to situations where it is more profitable to carry on with environmentally harmful AI-driven activities, like the overuse of pesticides and fertilisers that harms soil and water quality and reduces biodiversity, not to mention that AI-based applications like delivery drones and autonomous vehicles can disrupt wildlife and natural ecosystems without providing much benefit (beyond increasing the already fat profits of a few).

All this supports the idea that AI regulation must address sustainability issues and not leave them to general environmental legislation, because it is important to know who owns what AI produces, but only if we still have a planet on which to enjoy those works…

Algorithmic systems and sustainability

After more than a year without even opening this almost twenty-year-old blog, several changes in my private and professional life mean that I will return to this old pastime. I have decided to spend less time on planes and in managerial roles in Higher Education, and more on research, teaching and engagement activities, meaning more time to write (with, of course, some academic and policy-related travelling).

Last year we were somehow in awe of the rapid development of AI, although one could argue that what we saw was just a very fast adoption of a particular type of algorithmic system, generative AI, while even that type of algorithmic system has been part of people’s lives for quite a bit longer than a year and a half.

However, it is true that the irruption of generative AI and Large Language Models made algorithms a super-hot issue, so much so that the whole IT law field seems to have been swamped by AI discussions and there seems to be not much else to talk about. But if the different scenarios and the obvious challenges that algorithmic systems present to the law quickly created a consensus (really?) on the need to regulate them, the usual tendency of lawyers, law academics, judges and policy makers to focus on whatever lets them modify the current legal status quo the least has resulted in important (fundamental) areas of law being left outside the analysis and/or regulatory frenzy. One of them is the dilemmatic relationship between algorithmic systems and sustainability, which will have deep effects both on the environment and on the businesses operating in the AI field.

The argument has been that the sustainability and climate change implications of AI are common to any technological and economic activity and that, at best, there should be a generic sustainability legal framework that applies to all of them, not specifically to AI. The counterarguments are various and can be made from different angles. From the sectoral point of view, the same could be said of the oil, cement and transport industries, yet there is a growing body of discussion and case-law saying that their situation is not a generic one, even when generic rules are being applied to them. If we focus on the substantive issues and on emissions, the old view that a big enough difference in degree implies a change in kind seems to apply squarely here: something that emits substantially more than other activities, or for which vast greenhouse gas emissions are intrinsic to its functioning, does not share common characteristics with just any technological and economic activity. Algorithmic systems are in this category, and regulating them with a focus on sustainability and climate change is essential.

In the coming days I will start to dissect the why and how that is true, coupled with the potential application of current rules, which are being used to deal with other heavy-emitter industries.

The AI letter (and the fallacy of not shooting the messenger)

Several days have passed since a group of academics and business people released a letter asking for a moratorium on AI development and deployment. As presented, the letter represents too little, too late from a group of people that includes some of those with the least legitimacy to say things like “[s]uch decisions must not be delegated to unelected tech leaders”. Really? Do we need to quote 1990s’ Lessig again when referring to a letter signed by some of those who fought so hard to be free from any form of regulation by elected leaders that their responsibility for the current situation, astonishingly ignored by some of the less cynical signatories, cannot be overstated?

Some of those signing the letter, both business people making billions and academics getting chairs and winning prizes, are directly responsible for the “profound change in the history of life on Earth” created by technologies that have already contributed significantly to environmental catastrophe, the greatest-ever income inequality, the growth of undemocratic movements and the continuous degradation of average intelligence. Are they serious when they say “dramatic economic and political disruptions (especially to democracy) that AI will cause”? Do they know that the then most robust democracy in the world elected a misogynistic fraudster thanks first to the more progressive groups ignoring masses of people, and then to the capture of those masses by the immoral, with the help of the technologies the signatories sell or helped develop with their papers? Have they heard about mass manipulation by the media exacerbated by technology, and its impact on Brexit? Do they know about the use of current technologies in the rise and rise of autocratic tendencies around the world (and, no, the use of social platforms in the Arab Spring does not show the democratic impact of technological developments… even a broken clock gives the correct time twice a day, and in that case the current scenario is not that promising)? So, “will cause”?! Sorry to say it, but the correct phrase is “it has already caused”, and many of the signatories were/are part of it, not forgetting that many before them had already pointed out the potential harmful impact of AI.

Furthermore, whenever someone in the past 30 years pointed to the need to either stop or regulate some technological developments until “we are confident that their effects will be positive and their risks will be manageable”, many of those signing this letter would respond fallaciously that such a moratorium or regulation would stifle innovation, plus a plethora of nonsense that, via billions spent on lobbyists and lawyers, and prestigious papers, they managed to infuse at all levels, making sure that they got their billions and their prizes. What happens now? Wasn’t technological innovation more important than, almost, everything, so they lobbied and argued to be allowed to reign over society as the robber barons reigned during the twilight of the Wild West?

Here is when many (most) will say, “focus on the message and not on the messenger”, which is a fallacy that needs to be sent to the Averno along with many other phrases produced or used by, coincidentally, many signatories of the letter. A metaphorical example will elucidate the paramount importance of the identity and past behaviour of the messenger, even with correct, reasonable and 100% true messages (which the letter is not). Let’s assume that a flock of sheep is desperately looking for water; lost, thirsty and near starvation, they meet a wolf that tells them the exact location of a water well, which happens to be correct. Should the sheep focus on the message, or should they check who it is coming from? Has the wolf suddenly found a fondness for every form of life and decided to make sure that the sheep survive, or is the wolf leading them to the well where the pack is waiting to slaughter them all? Or is it just making sure that they survive a little longer so the pack has a longer-term supply of meat? So, yes, the message is important, but many times the messenger is just as important, and with this letter, there are plenty of examples to show that the sudden interest in “the clear benefit of all, and [to] give society a chance to adapt” is indeed very sudden for many of the signatories.

Just as an example, since the letter has been widely promoted using one of the business owners’ names: will Tesla pause all sales of cars until all their systems “implement a set of shared safety protocols […] that are rigorously audited and overseen by independent outside experts” and until “[…] systems adhering to them are safe beyond a reasonable doubt”? Will the academics return their chairs and prizes, and refrain from publishing papers until they “are confident that [the] effects [of what they are producing] will be positive and their risks will be manageable”, again, “beyond a reasonable doubt”? Until the day such a pause is implemented, the signature of the mentioned letter by some signatories seems, at best, shameless. Some may say that this situation is different, or that when they were developing the theories and technologies that led to today’s situation and today’s AI they didn’t know where it might lead, but that only reinforces the point: they should have known, or the billions and the chairs and prizes need to be given back. It is impossible not to remember when a number of scientists, many of whom had been part of the nuclear weapons programme, signed a letter committing to raise awareness among other scientists and the public of the dangers that nuclear weapons posed to humanity… did such a huge number of Nobel laureates and distinguished scientists not know about the dangers when they were designing nuclear weapons? What did they think they were designing them for? A science fair?!

The letter has a few good points; but as the technological moratorium is unlikely to happen (and many signatories have zero moral authority to request one), the focus should be on a clear and strong regulatory framework, because as some ignotus academic said in 2018 (down on page 66), “[t]he challenge is not technical; it is sociopolitical”, and the same academic had already said back in 2008 that “current law and principles are ill-equipped to deal with further radical changes in technology, which could imply the need of a more proactive approach to legal developments”.

In conclusion, as ChatGPT likes to say, it is not time for billionaires and respected academics to sign letters, but to put their money where their mouths are and focus their work on the actual achievement of a robust regulatory system, for example by using their lobbying power and money to that effect instead of doing exactly the opposite, so we can all “enjoy a long AI summer” instead of having hordes of people suffering sunburn.

ChatGPT, the Skynet moment, and Judgement day without steel robots

There was a time when we imagined that the end of times would be marked by steel robots crushing the humans that tried to disconnect them, or enslaving them as a source of energy, but while science fiction has provided plenty of accurate predictions of things to come, it seems that the end of society as we know it may come from something less muscular and more subtle. We are going through days when it is almost impossible to open any social media site or publication without bumping into a discussion about the uses, benefits and problems of freely available Natural Language Processing algorithmic systems that seem to create texts about almost anything better than humans do.

Leaving aside that it is actually not true that the available systems create text better than humans, as they are quite basic, rigid and plagued with errors, let’s assume for a moment that they are indeed better than humans at creating those texts. There is a pervasive mistake in making the algorithm the centre of the discussion, and the hero of the imagined save-all AI. As many know and have pointed out, the centre, the middle and the periphery of everything that AI can do is the DATA (yes, with capital letters!). There is plenty of writing about the ownership and privacy of such data and, therefore, of the resulting text (or image), but the crucial issue, the one with the capability of creating a judgement-day scenario, is the quality of the data.

There is some truth in the statements that AI is not biased per se, but there is even more truth in the fact that, by using biased data, AI can replicate and reinforce the original bias, so much so that it can turn it into the new reality, accepted as unbiased. The same applies to any form of AI, including descriptive, predictive and prescriptive, where the description, the prediction and the prescription are based on data that is biased, or incomplete, or plainly wrong, or a combination of all or any of those.

Let’s use as an example Facebook and the way it shows users news in the feed and advertising. There are many “studies” that say that with XX number of “likes” Facebook knows what your preferences are, and that with YY of them it can predict what you like and want better than you yourself can, but that is not entirely true. It is more accurate to say that, based on what you actually like, Facebook shows you news and ads that are close to it, but usually with a tendency to move towards what the advertiser wants you to like, in such a subtle manner that YY likes later you are actually liking what Facebook or the advertiser wanted you to like in the first place. Do you really think that millions of Americans woke up one day and, just out of their dislike of Hillary, decided to vote for a misogynistic, fraudster, liar and megalomaniac like Trump? If you believe that, you are not getting the gravity of the Cambridge Analytica scandal and how, by knowing people’s actual preferences, an algorithm can start crawling-pegging their interests until they are somewhat remote from, and even the opposite of, what they originally wanted (yes, some of those who were originally sincerely appalled by the idea of a sitting US president having an affair with an intern and then lying under oath were the same people who then supported a women-grabbing “friend” of a paedophile who knowingly tried to subvert the basis of their democracy).
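To make that crawling-peg dynamic concrete, here is a minimal, purely hypothetical sketch; it is a toy model, not Facebook’s actual system, and every variable and number in it is invented for illustration:

```python
# Toy model of the drift described above: a feed that always shows content slightly
# closer to an advertiser's target than the user's current taste, so the taste itself
# is pulled along one small step at a time. All values are invented for illustration.
user_preference = 0.0    # user's starting position on an imaginary 0-1 preference axis
advertiser_target = 1.0  # where the advertiser would like the user to end up
nudge = 0.1              # how far each shown item sits beyond the user's current taste
adaptation = 0.5         # how much the user's taste shifts towards what they are shown

for likes in range(1, 31):
    shown = min(user_preference + nudge, advertiser_target)    # content just beyond current taste
    user_preference += adaptation * (shown - user_preference)  # taste drifts towards what is shown
    if likes % 10 == 0:
        print(f"after {likes} likes: preference = {user_preference:.2f}")
```

Run it and the “user” ends up near the advertiser’s target after a few dozen likes, without any single step ever feeling like a jump, which is precisely the subtlety described above.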

Now let’s imagine that it is not “friends’” news or ads that the algorithm is showing you, but your whole consumption of news, data, science and information, written specially for you; that every time you want to know something you have a system that does not give you a link to a page but gives you a text saying what actually “is” (in the ontological sense). Although it sounds great and the possibilities seem endless, yes, you guessed it, it all depends on the quality of the data to which the system has access. Currently most AI systems are “fed” data to train them so they know how to behave in certain situations, even in a biased form, but the ultimate goal is to unleash them on the wealth of data constituted by the Internet, mainly because that is where the money is. As you are already guessing, here is where the judgement-day moment may arise, the moment when Skynet is connected to the Internet and the machines kill everyone around them. No killing here, but the effects can be daunting too.

If social media and its algorithmic exacerbation of an unethical and unprofessional press have led to massive manipulation, making people choose social self-harm at the levels we have seen with Brexit, Trump, Bolsonaro, different forms of Chavism and a long list of people choosing what will damage them and their communities, just imagine if all the information they receive, everything they “write” or ask the system to write, comes from this data-tainted algorithmic description, prediction and prescription. A different, soon-coming post is needed on the issue of data quality and the role of the established press in misinformation campaigns, but it is clear that we need to start discussing that, beyond how bad AI chats are supposed to be for essays as a form of assessment, there is a prospect of real social dissolution through misinformation and manipulation at a scale not seen until now.