European tech sovereignty, or the problem of public goods running on private code

I recently submitted a paper on artificial intelligence, carbon markets and sustainability governance in the European Union, in which I repeatedly return to a structural problem that goes far beyond climate policy and sits at the heart of current debates about European tech sovereignty. Public goods are increasingly governed through private digital infrastructures. This is not a marginal technical issue. It is a constitutional one.

Debates on digital sovereignty often focus on platforms, clouds or communications tools. The deeper issue sits one layer below, in the infrastructures of verification, trust and coordination that now underpin core public functions, from justice and public administration to climate governance and sustainability reporting. When these infrastructures are privately designed, operated and controlled, the state does not merely outsource services; it delegates power.

This is particularly visible in carbon markets and sustainability reporting, where systems of measurement, reporting and verification have become the backbone of regulatory effectiveness. Emissions are no longer governed primarily through inspections or permits, but through data pipelines. Sensors, platforms, algorithms, standards and registries translate physical reality into data, data into compliance and compliance into legal and economic consequences. Whoever controls that translation layer exercises a form of functional sovereignty.

Private digital infrastructures are often presented as neutral, efficient and scalable. In practice, they embed choices about methodologies, thresholds, defaults and visibility. In carbon markets, this means deciding what counts as a real reduction, how uncertainty is treated, how anomalies are detected and which data are considered authoritative. These are not technical details. They are normative decisions with distributive effects.
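To make these choices concrete, consider how they look once written into software. The sketch below is purely illustrative: the constants, names and rules are hypothetical and not drawn from any real MRV system, but they show how a methodology’s apparently technical parameters quietly decide who gets credited for what.

```python
# Illustrative MRV-style verification sketch. All thresholds, names and
# rules here are hypothetical, chosen only to show where normative
# decisions hide inside "technical" configuration.
from dataclasses import dataclass

UNCERTAINTY_DISCOUNT = 0.15  # normative choice: flat haircut for measurement uncertainty
ANOMALY_THRESHOLD = 3.0      # normative choice: deviation beyond which data is distrusted

@dataclass
class SensorReading:
    site_id: str
    tonnes_co2: float
    zscore: float  # deviation from the site's historical pattern

def verified_reduction(baseline_tonnes: float, readings: list[SensorReading]) -> float:
    """Translate raw measurements into a legally consequential number."""
    # Choice 1: anomalous readings are silently excluded rather than investigated.
    accepted = [r for r in readings if abs(r.zscore) < ANOMALY_THRESHOLD]
    measured = sum(r.tonnes_co2 for r in accepted)
    # Choice 2: uncertainty is handled by a flat discount, not a confidence interval.
    reduction = (baseline_tonnes - measured) * (1 - UNCERTAINTY_DISCOUNT)
    # Choice 3: negative results are floored at zero, making over-emission invisible here.
    return max(reduction, 0.0)
```

Change any of these constants and credits, revenues and compliance outcomes shift. Whoever controls that configuration exercises precisely the functional sovereignty described above.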

The problem is not the involvement of private actors. European regulation has long relied on hybrid governance. The problem is that the architecture itself is frequently opaque, fragmented and weakly accountable. In sustainability governance, this has translated into inconsistent MRV standards, low quality carbon credits, greenwashing risks and a widening gap between regulatory ambition and implementation capacity. Similar patterns appear whenever public objectives depend on proprietary systems governed by foreign legal orders or commercial incentives.

Recent European moves away from non European digital services in the public sector are often framed as geopolitical reactions, but they are also responses to a rule of law risk. When public institutions rely on infrastructures they do not control, continuity of service, data integrity and institutional autonomy become contingent.

This risk is particularly acute in sustainability governance. Climate policy is cumulative and long term. It depends on historical data, methodological consistency and institutional memory. Vendor lock in, opaque algorithms or sudden changes in service conditions can undermine not only efficiency but also legal certainty and democratic legitimacy. A carbon market that cannot credibly verify emissions is not merely inefficient. It is normatively hollow.

What follows from this is not technological isolationism. The European response, increasingly visible in policy, is more structural. Certain digital systems must be treated as regulatory public goods. This includes MRV infrastructures, digital identity, trust services, registries and core data spaces. The objective is not that the state builds everything itself, but that the rules of the system, its interoperability, auditability and ultimate control remain public, and for that there must be certainty that they are subject to European law.

This is where the European digital rulebook matters. Instruments such as data protection law, the artificial intelligence framework, digital identity rules and cybersecurity obligations are often criticised as burdens, although their deeper function is infrastructural. They aim to ensure that innovation occurs within an environment where public values such as rights, accountability and proportionality are not external constraints but design parameters.

In the sustainability domain, this logic is already visible in the gradual move away from purely private carbon standards toward European level certification, registries and verification rules, and the lesson generalises to other tech domains. If the objective is public, the infrastructure cannot be entirely private.

The real risk for Europe is not that it regulates too much, but that it confuses speed with progress. Innovation built on fragile and unaccountable infrastructures ultimately erodes trust, and trust is itself an economic and political asset. By insisting that core digital infrastructures serving public purposes remain governed as such, Europe is not rejecting innovation. It is redefining its conditions and, paradoxically, making itself more pro innovation by providing legal certainty, which the ill conceived omnibus package is eroding.

Digital sovereignty, understood this way, is not about borders in cyberspace. It is about ensuring that when public goods run on code, that code is embedded in a legal and institutional architecture compatible with European constitutional principles. This approach is slower and more demanding, but it is substantially more resilient, and in a world where governance increasingly happens through systems rather than statutes, resilience may be Europe’s most underestimated innovation strategy.

Resilience, AI, and the sustainability rift revealed at Davos

When the World Economic Forum convenes each year in the Swiss Alps, it performs the paradox of a gathering of global elites claiming to represent the world even as they expose its fractures. In 2026, that paradox was laid bare by a tale of two speeches, one by Canadian Prime Minister Mark Carney that mourned the decline of a rules based global order and called for a cooperative response, and another by a U.S. president whose rhetoric reflected a very different, often unilateral, logic of power. The contrast between these visions mirrors the tensions now surfacing across the technological and environmental frontiers where AI and sustainability intersect.

Carney’s address was striking not simply for its sharp geopolitical diagnosis, that the orders of the past are fracturing under the strain of great power rivalry, but because it explicitly linked this rupture to the need for collective responses grounded in values, sovereignty, and sustainable development rather than coercion. He challenged the Davos audience to stop invoking an illusion of rules based stability and instead build alliances that reflect the realities of geopolitical competition and ecological crisis.

In contrast, and much commented upon in the global press, other speeches and reactions at Davos captured a very different mood, represented by competition over strategic assets like Greenland, tariff threats, and rhetorical appeals to national ascendancy rather than shared stewardship. This tension, between cooperation and competitive assertion, has direct implications for how humanity manages two of the defining challenges of our era: the governance of AI and the sustainability of our planetary systems.

AI, sustainability, and the fragility of resilience

In their idealised forms, AI and efforts to decarbonise our economies might seem complementary, with intelligent systems optimising energy use, accelerating climate modelling, and enhancing adaptive capacity across sectors. Yet at Davos, the debate revealed that the very governance structures which will shape AI deployment and climate action are also splitting along familiar economic and geopolitical lines.

On the AI front, discussions underscored a stark asymmetry: while leaders in the Global North emphasise competitiveness, AI sovereignty, and market advantage, much of the Global South worries about infrastructural dependency, digital extraction, and corporate control. From the vantage of emerging and middle powers, AI is less a tool of empowerment than a conduit through which external norms and economic leverage are exercised. This divergence mirrors long standing concerns that sustainability commitments are too often decoupled from the very power asymmetries that produce environmental harm in the first place.

The sustainability dimension was itself present in panels emphasising the environmental costs of AI, from energy intensive data centers to the material extraction required for high performance computing hardware. Leaders at Davos spoke of human centered AI and structural responses like AI taxation and safety nets for displaced labour, yet such frameworks remain aspirational and unevenly backed by policy. Where AI is subsidised without ecological accounting, efficiency gains risk becoming externalising mechanisms: improvements on paper that shift costs onto communities, ecosystems, and future generations.

A fractured global response to shared risk

The rift revealed at Davos reflects deeper structural tensions: technological competition versus collective stewardship, national sovereignty versus transnational governance, and short term advantage versus long term ecological and economic viability. Carney’s call for middle powers to act together is, at its heart, a plea for a world that sees cooperation not as a luxury but as a survival strategy.

This is precisely where resilience must be rethought. Resilience cannot be merely adaptive; it must be transformative. It must take into account the uneven capacities of countries and communities to shape the trajectory of both AI and sustainability transitions. A Global South community that cannot contest algorithmic systems or influence data governance will find itself bearing not just technological dependencies but also environmental burdens and economic marginalisation.

Carney’s framing, that the old order is gone and a new one must be built consciously, cooperatively, and sustainably, resonates with broader debates about the limits of adaptation. Resilience without justice is hollow, as adaptation that ignores structural inequities will reproduce vulnerability rather than dissipate it.

Toward a new architecture of governance

A truly sustainable and resilient future demands coherence between AI governance, climate policy, and global economic structures, and Europe has shown how it can be done. This requires at least four shifts:

  • Distributed Agency: AI systems must be designed and governed as public infrastructures, not proprietary black boxes controlled by a handful of hyperscale firms and powerful states. This is essential for equitable resilience.
  • Ecological Accountability: AI’s environmental footprint, from hardware lifecycles to energy demand, must be integrated into governance frameworks, not treated as an afterthought in efficiency narratives.
  • Transnational Cooperation: The fractures evident at Davos remind us that unilateral power politics undermine collective action. Shared risks, from climate tipping points to AI displacement and digital exclusion, cannot be managed by any one state acting alone.
  • Capacity Building in the Global South: Resilience will be meaningful only if countries with fewer resources can shape rules rather than have them imposed, whether in trade, technology, or climate finance.

Governance as the central frontier

The Davos narratives of 2026, from geopolitical tensions over Greenland to calls for new alliances, highlight the fundamental truth that we are living through a structural rupture, not a smooth transition. The futures of AI governance and sustainability are not separate questions; they are interlocked because both challenge the assumptions of unilateral power, short term gain, and technocratic neutrality.

To cultivate resilience in this era is not to adapt quietly to what is coming, but to forge systems of governance that reflect human values, ecological limits, and shared vulnerability. As global discussions fragment along lines of power and interest, the risks of fragmentation grow. But so does the possibility that communities, cities, nations, and coalitions, especially those historically marginalised, will insist on governance that is inclusive, sustainable, and just.

Our resilience, and that of the planet, depends on nothing less.

What Europe’s Policy Reversals Mean for Sustainability, Business and AI

As Europe begins 2026, the continent finds itself at a crossroads in the governance of sustainability, technology and industry. Policymakers across the European Union and the United Kingdom are increasingly embracing deregulatory reforms, promoted as necessary to enhance competitiveness, stimulate investment and ease administrative burdens on business. Yet these reforms, when examined together, reveal a structural shift away from the sustainability frameworks that have shaped corporate accountability, environmental protection and long term innovation strategies over the past decade. This shift is more than a matter of regulatory calibration; it reflects a political economy in which deregulation is treated as an end rather than a means.

Recent policy changes, from the weakening of the EU’s sustainability reporting regime and shifts in nuclear regulation, to the potential rollback of the 2035 internal combustion engine ban and pressure to relax AI governance frameworks, suggest a broader reorientation. The cumulative effect is to elevate short term economic calculations over long term resilience and systemic stewardship.

1. The Retreat from Sustainability Reporting

Just last week, the European Council and the European Parliament agreed to significantly simplify the Corporate Sustainability Reporting Directive (CSRD) and the Corporate Sustainability Due Diligence Directive (CSDDD). Under the revised framework, only companies with over 1,000 employees and €450m in annual turnover remain in scope for mandatory sustainability reporting, while due diligence obligations now apply only to firms with more than 5,000 employees and €1.5bn in turnover. Moreover, mandatory climate transition plans and certain reporting requirements were eliminated, and a large proportion of smaller businesses were exempted from the rules entirely. This retreat removes approximately 90% of companies from CSRD’s scope and 70% from CSDDD’s remit, dramatically shrinking the regulatory perimeter of corporate accountability.
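Expressed as code, the shrunken perimeter is strikingly simple. A minimal sketch, assuming the thresholds quoted above and treating the employee and turnover criteria as cumulative (an illustrative reading, not legal advice):

```python
# Hypothetical scope check based on the revised thresholds described above.
# Function names and the cumulative reading of the criteria are assumptions.

def in_csrd_scope(employees: int, turnover_eur: float) -> bool:
    """Mandatory sustainability reporting under the revised CSRD."""
    return employees > 1_000 and turnover_eur > 450_000_000

def in_csddd_scope(employees: int, turnover_eur: float) -> bool:
    """Due diligence obligations under the revised CSDDD."""
    return employees > 5_000 and turnover_eur > 1_500_000_000

# A 4,000-employee firm with EUR 2bn turnover must still report,
# but owes no due diligence at all:
print(in_csrd_scope(4_000, 2_000_000_000))   # True
print(in_csddd_scope(4_000, 2_000_000_000))  # False
```

The gap between the two functions is where whole categories of supply chain accountability now fall out of scope.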

What was initially designed to standardise environmental, social and governance (ESG) disclosures now risks becoming an optional add-on. The scaling back of reporting thresholds reduces transparency and weakens the incentives for firms to integrate sustainability into core business strategies. Rather than equipping investors and stakeholders with reliable data on climate risk, supply chain impacts and human rights performance, the revised regime favours voluntary approaches, an outcome that benefits larger firms with entrenched reporting capacities but leaves rising enterprises and mid sized suppliers in a regulatory limbo.

2. Deregulation in High-Risk Sectors

The United Kingdom’s efforts to streamline nuclear regulation similarly illustrate the risks of deregulation in domains where environmental and safety stakes are high. Recent proposals to simplify planning, environmental and safety oversight for nuclear projects have drawn criticism for sidelining ecological expertise and reducing the scope of environmental assessments. While proponents argue that regulatory fragmentation has contributed to high costs and delays, critics warn that diminishing safety and environmental safeguards could erode public trust and undermine long term energy sustainability.

Similar tensions are visible across energy policy more broadly. Though the EU has prioritised energy grid upgrades and infrastructure resilience in recent years, the broader deregulatory frame risks reducing environmental assessment to procedural formality rather than substantive governance, especially when energy transitions intersect with local ecological concerns.

3. The Combustion Engine Backtrack

Shortly before Christmas 2025, yesterday to be precise, the European Commission announced a major shift in its automotive climate policy, proposing to ease the 2035 ban on new internal combustion engine (ICE) vehicles, following intense pressure from Germany, Italy and major automakers. Under the original rule, all new cars and light vans sold in the EU from 2035 were to emit zero tailpipe CO₂. The revised plan now targets a 90% reduction in CO₂ emissions from 2021 levels by 2035, instead of a full zero emission mandate, and allows continued sales of plug-in hybrids and vehicles powered by synthetic fuels or non-food biofuels.
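The numerical difference between the two regimes is easy to make concrete. A back-of-envelope sketch, using a hypothetical 2021 fleet average figure purely for illustration:

```python
# Illustrative arithmetic only; the 95 g/km baseline is a hypothetical
# stand-in for the 2021 fleet average, not an official figure.

baseline_2021_g_per_km = 95.0
original_2035_target = 0.0                                 # zero tailpipe CO2
revised_2035_target = baseline_2021_g_per_km * (1 - 0.90)  # 90% cut from 2021

print(f"Original 2035 target: {original_2035_target:.1f} g/km")
print(f"Revised 2035 target:  {revised_2035_target:.1f} g/km")  # ~9.5 g/km of headroom
```

That residual ten percent of headroom is precisely what keeps plug-in hybrids and synthetic fuel vehicles sellable after 2035.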

The retreat from a hard combustion engine ban comes amid headwinds for the European auto industry, slower than expected electric vehicle (EV) adoption, intense competition from Chinese EV manufacturers, and rising costs for infrastructure and battery supply. Automakers have lobbied vigorously for flexibility, arguing that plug-in hybrids, biofuels and alternative compliance schemes are necessary to preserve jobs and industrial capacity.

Environmental advocates and many EV-focused companies, including Volvo, have criticised the shift as a setback for Europe’s climate leadership, a potential drag on investment in electrification, and as a move to hide the sector’s inefficiencies and poor business choices. Critics argue that diluting the target undermines regulatory predictability and could leave Europe lagging in the rapidly growing global EV market, especially as China accelerates battery vehicle deployment and U.S. policy oscillates between incentives and rollbacks.

From a sustainability perspective, this reversal illustrates how deregulatory pressures can reshape climate policy itself, not merely loosen reporting obligations or reduce paperwork, but recalibrate the very targets that define long term decarbonisation pathways.

4. AI Governance in a Deregulatory Era

Artificial intelligence poses similar governance challenges. AI technologies increasingly permeate business operations, supply chain optimisation, resource allocation and sustainability analytics. Their environmental footprint, particularly through energy intensive model training and data centre operations, is substantial, and their social impact, from labour displacement to bias amplification, is profound. Effective governance is essential to ensure that AI contributes to sustainability rather than undermines it.

Yet political pressure, particularly emanating from competitors with more permissive regulatory regimes and from large corporations’ lobbying, has pushed Europe toward weaker AI oversight. The result is a tension between the EU’s original risk based AI governance framework and the deregulatory narrative that frames oversight as antithetical to innovation. In practice, well designed regulation can enable innovation by providing legal certainty and aligning technological development with societal values; the absence of regulation often results in fragmented standards, ethical harms and competitive disadvantage.

5. Competitive Pressures and Policy Drift

Across sectors, the deregulatory narrative shares a common rationale: regulation is portrayed as a barrier to competitiveness. Those who seek licence to profit above all else, and to externalise their costs, have succeeded in equating regulation with sovietisation, when the truth is far from it. Whether in sustainability reporting, automotive emissions targets, nuclear licensing or AI oversight, the same fallacious claim resurfaces: regulatory simplification will catalyse growth. But this logic is flawed when it conflates short term cost reduction with strategic competitiveness. True competitiveness for businesses, particularly in the 21st century, depends on resilience, innovation rooted in environmental and social performance, and the ability to operate within predictable, transparent policy frameworks.

European firms have historically outperformed competitors in regulated spaces precisely because regulation provided structure for investment in long-term capabilities, from vaccines to aerospace and advanced manufacturing. Regulatory retreat does not inherently create advantage; it creates uncertainty.

In recent years, Europe delivered two of the COVID-19 vaccines that enabled the global economy to restart (and those invented outside Europe also had substantial, if not complete, state support via a pro innovation regulated environment), created the World Wide Web, and fielded aerospace technologies that continue to outperform global competitors. Airbus’ consistent lead over Boeing in deliveries, bolstered by its sustained investment in sustainable aviation and hydrogen propulsion, illustrates how regulated environments can support innovation more effectively than more permissive systems dominated by short term financial priorities, which end in inefficiencies created by the continuous diversion of funds and energy into damage control.

In defence technology, European capabilities such as the Meteor missile demonstrate innovation at the technological frontier, which is being adopted by other countries. In quantum communications, Europe is building coordinated sovereign capabilities, exemplified by the Eagle-1 satellite, which aims to provide secure continental networks based on quantum key distribution. These advancements are neither accidental nor the product of deregulation. They arise from structured governance, sustained investment and regulatory clarity.

Reframing Regulation as Sustainability Infrastructure

Europe’s recent policy shifts reflect more than political compromise; they signal a broader philosophical shift that elevates short term competitive narratives over the systemic goals of sustainability, transparency and innovation governance. Deregulation is not inherently harmful, but when it diminishes accountability frameworks, erodes environmental targets and reduces regulatory certainty, it undermines well-being, investor confidence and climate action.

Sustainability is not an add-on to economic policy. It is economic policy, a structural condition for resilience, competitiveness and societal stability in a world defined by the climate crisis, technological disruption and demographic change. To preserve Europe’s sustainability leadership, policymakers must recognise regulation not as a burden but as essential infrastructure, a basis on which responsible business, robust markets and trustworthy technology can thrive in the decades ahead.

The New US AI Action Plan, or losing the race you declare

The Trump Administration just released America’s AI Action Plan, a bold, sweeping roadmap to secure what it defines as “unquestioned and unchallenged global technological dominance.” Framed as an existential race against geopolitical rivals like China, this plan sets out to transform every major sector of American life, industry, national security, infrastructure, education, through Artificial Intelligence. It is unapologetically ambitious, deregulatory, and ideologically driven, although these last features, with their clear anti science rhetoric, may keep it from achieving its stated aims.

The document is riddled with contradictions, selective interpretations of freedom, and a startling disregard for the pressing global challenge of sustainability. Yet, beneath the rhetoric and nationalist framing, there are pockets of pragmatic proposals, especially in sector-specific AI deployments, workforce development, and open source AI infrastructure, that deserve serious engagement.

At its core, the AI Action Plan reads like a manifesto for accelerationism without brakes. The opening pages reject previous efforts at cautious regulation, like Biden’s Executive Order 14110, and embrace full-speed deployment of AI, unburdened by red tape, environmental considerations, or ethical frameworks. The plan’s repeated insistence on removing regulatory barriers casts oversight itself as a threat, particularly oversight related to misinformation, diversity, climate change, and human rights. Paradoxically, based purely on ideology, the Office of Science and Technology Policy is tasked not with strengthening public interest safeguards but with rescinding rules deemed ideological or anti innovation.

This deregulatory zeal extends to infrastructure. Environmental protections under NEPA, the Clean Air Act, and the Clean Water Act are portrayed as inconvenient obstacles to building the data centres and energy systems AI needs. Climate considerations are not just omitted, they are actively scrubbed from public standards, with an explicit instruction to eliminate references to climate change from NIST frameworks. While this framing may excite Silicon Valley libertarians, and others poised to profit from unrestrained business activities, it raises the question of what kind of AI ecosystem the US will be building if the very values that ensure justice, accountability, and environmental sustainability are excised from its foundation.

One of the starkest contradictions in the plan is its call to defend freedom of speech in AI systems, followed immediately by a directive to suppress content or models that reflect so-called social engineering agendas or woke values. That is, according to the drafters of the policy, freedom of speech is guaranteed by prohibiting speech, which is the equivalent of organising free orgies to promote virginity.

For instance, developers must ensure that their systems are “free from top-down ideological bias”, a phrase used to justify banning government procurement of AI that acknowledges diversity, equity, climate change, or structural inequality. This narrow conception of objectivity suggests that any model reflecting progressive or globally accepted norms is inherently suspect. Accordingly, the Action Plan’s version of freedom seems to operate on a one-way street. It welcomes open dialogue, unless that dialogue challenges the current administration’s values. The implications for academic freedom, AI ethics research, and inclusive policymaking are profound, and all of these are paramount for sustained innovation.

Perhaps the most glaring omission is the complete lack of any serious engagement with sustainability. Despite dedicating an entire pillar to AI infrastructure, including data centres, semiconductors, and the national grid, there is not a single reference to sustainable development goals, carbon emissions, or green AI. Instead, the plan explicitly promotes the expansion of energy intensive infrastructure while celebrating the abandonment of “radical climate dogma”. The phrase “Build, Baby, Build” is invoked as a national imperative, with energy consumption framed only as a barrier to be bulldozed through.

This omission is especially concerning given growing global awareness that AI, particularly large scale models, can have significant carbon footprints. The EU AI Act and many national strategies now link AI policy with broader climate objectives, not forgetting that the global investment in the low carbon energy transition reached $2.1 trillion in 2024. America’s plan, by contrast, treats environmental sustainability as a politically inconvenient distraction, and risks leaving the US out of the innovation fuelled by those funds. This leaves the U.S. not only misaligned with international efforts, but also vulnerable to long term economic and environmental risks.

However, amid the ideological rhetoric and the toddler-like phrases, there are components of the Action Plan that are thoughtfully constructed and potentially transformative, especially where the focus shifts from populism and geopolitics to sectoral applications and innovation ecosystems.

The plan calls for targeted AI adoption strategies in critical sectors such as healthcare, manufacturing, agriculture, national security, and scientific research. It supports regulatory sandboxes and domain specific AI Centres of Excellence, mechanisms that can help scale safe and effective innovation in complex environments.

Initiatives to modernise healthcare with AI tools, apply AI in advanced manufacturing, and support biosecurity research show a clearer understanding of AI’s potential for real world impact. If implemented with inclusive governance, these initiatives could significantly enhance productivity and resilience in key sectors, although, as presented, they risk leaving outside the funding pool those who focus on the environmental impact of their investments.

The Plan’s provisions to retrain and upskill American workers also seem well conceived, recognising the labour market disruption AI may cause and proposing concrete steps, from expanding apprenticeships and AI literacy in technical education to tax incentives for employer-sponsored training. The establishment of an AI Workforce Research Hub could, if well supported, provide crucial data and forward looking analysis on job displacement, wage effects, and emerging skill demands. It remains to be seen how the need for serious research will be balanced against the constant attacks on some of the world’s top research institutions.

The Plan’s strong endorsement of open weight and open source models may be one of its most forward looking elements. These models are essential for academic research, governmental transparency, and innovation outside the Big Tech ecosystem as, unlike closed source systems that concentrate power, open models allow more equitable access and experimentation.

Furthermore, the commitment to build a sustainable National AI Research Resource (NAIRR) infrastructure and improve financial mechanisms for compute access, especially for researchers and startups, is a rare bright spot. It signals an intention to diversify the AI innovation ecosystem but, again, it might collide with the White House’s constant defunding of, and battles with, serious research institutions.

Finally, the Plan’s third pillar, international diplomacy and AI security, seeks to export “the full American AI stack” to likeminded nations while isolating rivals, particularly China. The aim is to create a global alliance built around U.S. developed hardware, standards, and systems. Here the plan may collide hard with reality, as the constant undermining of diplomatic principles and rules by the US government, and the growing global distrust of American commitment to an international system based on rules, may result in countries looking for solutions elsewhere.

Without shared values of sustainability, fairness, and rights based governance, will the world want what America is selling? The EU, Canada, Brazil, and other global actors are increasingly anchoring AI governance in democratic accountability, inclusive participation, and climate conscious design. An American AI regime defined by deregulation and cultural exclusion may find limited traction outside its ideological bubble.

Ideology is a foundation of thinking, but when it replaces thinking it can produce plans that work against their own stated aims, and some aspects of America’s AI Action Plan might be a good example of that.

AI and environmental damage

In the previous entry, the issue of the environmental and climate change impact of AI use and development was presented as important and in need of urgent treatment by policymakers (who are squarely ignoring it in most AI regulation proposals). Those impacts are real, considerable and multifaceted, involving major energy consumption, resource depletion, and a variety of other ecological consequences.

Training large AI models requires immense computational power, leading to the use of large quantities of energy. Just as an example, training a single model like GPT-3 can consume 1287 MWh of electricity, resulting in about 502 metric tons of CO2 emissions, which is comparable to the annual emissions of dozens of cars. But while the energy consumed during the training phase is significant, considerably more energy is used during the inference phase, where models are deployed and used in real-world applications. There have been interesting attempts to justify such use, mainly by comparing pears not with apples but with scissors, but they overlook the fact that the general human emissions against which AI emissions are compared will be there regardless of the activity, so the improper use of AI adds emissions without subtracting much. In a world where the development and deployment of AI is bound to keep growing at bubble-like rates, this implies that the location of data centres plays a crucial role in determining the carbon footprint of AI, as they are bound to double their energy consumption by 2026 (if 170 pages is too much to read, simply go to page 8). Data centres powered by renewable energy sources have a lower carbon footprint compared to those in regions reliant on fossil fuels, and there is an argument for making such use compulsory.
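The arithmetic behind those figures is worth making explicit, because it shows why data centre location matters so much. A minimal sketch; the grid intensities are illustrative round numbers, with the middle value back-calculated from the 1287 MWh and 502 tonne figures cited above:

```python
# Emissions = energy consumed x carbon intensity of the grid supplying it.
# Intensities below are illustrative; only the middle one is implied by
# the GPT-3 figures cited in the text (502 t / 1287 MWh ~ 0.39 t/MWh).

training_energy_mwh = 1287

grid_intensity_t_per_mwh = {
    "fossil-heavy grid": 0.70,
    "average grid (implied above)": 0.39,
    "renewable-heavy grid": 0.05,
}

for grid, intensity in grid_intensity_t_per_mwh.items():
    print(f"{grid}: {training_energy_mwh * intensity:,.0f} t CO2")
```

On a renewable-heavy grid the same training run emits roughly an order of magnitude less, which is the core of the argument for making renewable-powered data centres compulsory.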

From the resource depletion and e-waste point of view, AI hardware, including GPUs and specialized chips, requires rare earth elements and other minerals. The extraction and processing of these materials can lead to environmental degradation and biodiversity loss. AI is currently being used to find ways to replace those rare earth elements, but even then, as AI technology evolves, older hardware becomes obsolete, contributing to a steep increase in the amount of electronic waste. Besides the global inequalities generated by the mountains of e-garbage currently dumped in developing countries, e-waste contains hazardous substances like lead, mercury, and cadmium, which can contaminate soil and water if not properly managed.

A less obvious but equally significant impact is water usage. Training AI models requires abundant amounts of water for cooling data centres, with some studies claiming that the water consumed during the training of algorithmic models is equivalent to the water needed to produce hundreds of electric cars.

To add to the energy and resource consumption, the uncontrolled, improperly regulated widespread use of AI can have severe ecological impacts, particularly and paradoxically in activities where proper use of AI could minimise them. Not making sustainability a key aspect of algorithm design, training and AI deployment may lead to situations in which it is more profitable to carry on with environmentally harmful AI driven activities, like the overuse of pesticides and fertilizers, harming soil and water quality and reducing biodiversity, not to mention that AI-based applications like delivery drones and autonomous vehicles can disrupt wildlife and natural ecosystems without delivering much benefit (beyond increasing the already fat profits of a few).

All this supports the idea that AI regulation must address sustainability issues directly, rather than leave them to general environmental legislation, because it is important to know who owns what AI produces, but only if we still have a planet where we can enjoy those works…