What Europe’s Policy Reversals Mean for Sustainability, Business and AI

As Europe begins 2026, the continent finds itself at a crossroads in the governance of sustainability, technology and industry. Policymakers across the European Union and the United Kingdom are increasingly embracing deregulatory reforms, promoted as necessary to enhance competitiveness, stimulate investment and ease administrative burdens on business. Yet these reforms, examined together, reveal a structural shift away from the sustainability frameworks that have shaped corporate accountability, environmental protection and long-term innovation strategies over the past decade. This shift is more than a matter of regulatory calibration; it reflects a political economy in which deregulation is treated as an end rather than a means.

Recent policy changes, from the weakening of the EU’s sustainability reporting regime and shifts in nuclear regulation, to the potential rollback of the 2035 internal combustion engine ban and pressure to relax AI governance frameworks, suggest a broader reorientation. The cumulative effect is to elevate short term economic calculations over long term resilience and systemic stewardship.

1. The Retreat from Sustainability Reporting

Just last week, the European Council and the European Parliament agreed to significantly simplify the Corporate Sustainability Reporting Directive (CSRD) and the Corporate Sustainability Due Diligence Directive (CSDDD). Under the revised framework, only companies with over 1,000 employees and €450m in annual turnover remain in scope for mandatory sustainability reporting, while due diligence obligations now apply only to firms with more than 5,000 employees and €1.5bn in turnover. Moreover, mandatory climate transition plans and certain reporting requirements were eliminated, and a large proportion of smaller businesses were exempted from the rules entirely. This retreat removes approximately 90% of companies from CSRD’s scope and 70% from CSDDD’s remit, dramatically shrinking the regulatory perimeter of corporate accountability.
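The revised scoping rules amount to two simple threshold tests. As an illustrative sketch (the function and field names are my own; the thresholds are the ones reported above):

```python
def in_scope(employees: int, turnover_eur: float) -> dict:
    """Check a company against the revised EU thresholds (illustrative only).

    CSRD reporting:      more than 1,000 employees and EUR 450m turnover.
    CSDDD due diligence: more than 5,000 employees and EUR 1.5bn turnover.
    """
    return {
        "csrd_reporting": employees > 1_000 and turnover_eur > 450e6,
        "csddd_due_diligence": employees > 5_000 and turnover_eur > 1.5e9,
    }

# A 2,000-employee firm with EUR 600m turnover must still report,
# but now falls outside the due diligence regime entirely.
print(in_scope(2_000, 600e6))
```

The asymmetry between the two thresholds is what leaves mid-sized suppliers in the regulatory limbo described above: large enough to report, too small to owe due diligence.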

What was initially designed to standardise environmental, social and governance (ESG) disclosures now risks becoming an optional add-on. The scaling back of reporting thresholds reduces transparency and weakens the incentives for firms to integrate sustainability into core business strategies. Rather than equipping investors and stakeholders with reliable data on climate risk, supply chain impacts and human rights performance, the revised regime favours voluntary approaches, an outcome that benefits larger firms with entrenched reporting capacities but leaves rising enterprises and mid-sized suppliers in a regulatory limbo.

2. Deregulation in High-Risk Sectors

The United Kingdom’s efforts to streamline nuclear regulation similarly illustrate the risks of deregulation in domains where environmental and safety stakes are high. Recent proposals to simplify planning, environmental and safety oversight for nuclear projects have drawn criticism for sidelining ecological expertise and reducing the scope of environmental assessments. While proponents argue that regulatory fragmentation has contributed to high costs and delays, critics warn that diminishing safety and environmental safeguards could erode public trust and undermine long term energy sustainability.

Similar tensions are visible across energy policy more broadly. Though the EU has prioritised energy grid upgrades and infrastructure resilience in recent years, the broader deregulatory frame risks reducing environmental assessment to procedural formality rather than substantive governance, especially when energy transitions intersect with local ecological concerns.

3. The Combustion Engine Backtrack

Shortly before Christmas 2025 (yesterday, to be precise), the European Commission announced a major shift in its automotive climate policy, proposing to ease the 2035 ban on new internal combustion engine (ICE) vehicles, following intense pressure from Germany, Italy and major automakers. Under the original rule, all new cars and light vans sold in the EU from 2035 were to emit zero tailpipe CO₂. The revised plan now targets a 90% reduction in CO₂ emissions from 2021 levels by 2035, instead of a full zero-emission mandate, and allows continued sales of plug-in hybrids and vehicles powered by synthetic fuels or non-food biofuels.
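The gap between the two regimes is easy to quantify. A minimal sketch, assuming a hypothetical 2021 fleet-average baseline of 115 g CO₂/km (an illustrative placeholder, not an official figure):

```python
def fleet_target(baseline_g_per_km: float, reduction: float) -> float:
    """New-car fleet CO2 target after a fractional cut from the baseline year."""
    return baseline_g_per_km * (1 - reduction)

baseline = 115.0  # hypothetical 2021 fleet-average value, g CO2/km (placeholder)
print(round(fleet_target(baseline, 1.00), 1))  # original 2035 rule: zero tailpipe CO2
print(round(fleet_target(baseline, 0.90), 1))  # revised rule: ~11.5 g/km of headroom
```

That residual headroom, roughly a tenth of the baseline, is what keeps plug-in hybrids and synthetic-fuel vehicles sellable after 2035 under the revised plan.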

The retreat from a hard combustion engine ban comes amid headwinds for the European auto industry: slower-than-expected electric vehicle (EV) adoption, intense competition from Chinese EV manufacturers, and rising costs for infrastructure and battery supply. Automakers have lobbied vigorously for flexibility, arguing that plug-in hybrids, biofuels and alternative compliance schemes are necessary to preserve jobs and industrial capacity.

Environmental advocates and many EV-focused companies, including Volvo, have criticised the shift as a setback for Europe’s climate leadership, a potential drag on investment in electrification, and a move to hide the sector’s inefficiencies and poor business choices. Critics argue that diluting the target undermines regulatory predictability and could leave Europe lagging in the rapidly growing global EV market, especially as China accelerates battery vehicle deployment and U.S. policy oscillates between incentives and rollbacks.

From a sustainability perspective, this reversal illustrates how deregulatory pressures can reshape climate policy itself, not merely loosen reporting obligations or reduce paperwork, but recalibrate the very targets that define long term decarbonisation pathways.

4. AI Governance in a Deregulatory Era

Artificial intelligence poses similar governance challenges. AI technologies increasingly permeate business operations, supply chain optimisation, resource allocation and sustainability analytics. Their environmental footprint, particularly through energy-intensive model training and data centre operations, is substantial, and their social impact, from labour displacement to bias amplification, is profound. Effective governance is essential to ensure that AI contributes to sustainability rather than undermining it.

Yet political pressure, particularly from competitors with more permissive regulatory regimes and from large corporations’ lobbying, has pushed Europe toward weaker AI oversight. The result is a tension between the EU’s original risk-based AI governance framework and the deregulatory narrative that frames oversight as antithetical to innovation. In practice, well-designed regulation can enable innovation by providing legal certainty and aligning technological development with societal values; the absence of regulation often results in fragmented standards, ethical harms and competitive disadvantage.

5. Competitive Pressures and Policy Drift

Across sectors, the deregulatory narrative shares a common rationale: regulation is portrayed as a barrier to competitiveness. Those who seek licence to profit above all else, and to externalise their costs, have succeeded in equating regulation with sovietisation, when the truth is far from it. Whether in sustainability reporting, automotive emissions targets, nuclear licensing or AI oversight, the same fallacious claim resurfaces: regulatory simplification will catalyse growth. But this logic conflates short-term cost reduction with strategic competitiveness. True competitiveness for businesses, particularly in the 21st century, depends on resilience, innovation rooted in environmental and social performance, and the ability to operate within predictable, transparent policy frameworks.

European firms have historically outperformed competitors in regulated spaces precisely because regulation provided structure for investment in long-term capabilities, from vaccines to aerospace and advanced manufacturing. Regulatory retreat does not inherently create advantage; it creates uncertainty.

In recent years, Europe delivered two of the COVID-19 vaccines that enabled the global economy to restart (and those invented outside Europe also benefited from substantial, if not complete, state support via a pro-innovation regulated environment), created the World Wide Web, and fielded aerospace technologies that continue to outperform global competitors. Airbus’ consistent lead over Boeing in deliveries, bolstered by its sustained investment in sustainable aviation and hydrogen propulsion, illustrates how regulated environments can support innovation more effectively than more permissive systems dominated by short-term financial priorities, which end in inefficiencies created by the continuous diversion of funds and energy into damage control.

In defence technology, European capabilities such as the Meteor missile demonstrate innovation at the technological frontier, now being adopted by other countries. In quantum communications, Europe is building coordinated sovereign capabilities, exemplified by the Eagle-1 satellite, which aims to provide secure continental networks based on quantum key distribution. These advancements are neither accidental nor the product of deregulation. They arise from structured governance, sustained investment and regulatory clarity.

Reframing Regulation as Sustainability Infrastructure

Europe’s recent policy shifts reflect more than political compromise; they signal a broader philosophical shift that elevates short term competitive narratives over the systemic goals of sustainability, transparency and innovation governance. Deregulation is not inherently harmful, but when it diminishes accountability frameworks, erodes environmental targets and reduces regulatory certainty, it undermines well-being, investor confidence and climate action.

Sustainability is not an add-on to economic policy. It is economic policy, a structural condition for resilience, competitiveness and societal stability in a world defined by the climate crisis, technological disruption and demographic change. To preserve Europe’s sustainability leadership, policymakers must recognise regulation not as a burden but as essential infrastructure, a basis on which responsible business, robust markets and trustworthy technology can thrive in the decades ahead.

The New US AI Action Plan, or Losing the Race You Declare

The Trump Administration just released America’s AI Action Plan, a bold, sweeping roadmap to secure what it defines as “unquestioned and unchallenged global technological dominance.” Framed as an existential race against geopolitical rivals like China, the plan sets out to transform every major sector of American life, industry, national security, infrastructure, education, through Artificial Intelligence. It is unapologetically ambitious, deregulatory, and ideologically driven, although these last traits, together with the plan’s clear anti-science rhetoric, may prevent it from achieving its stated aims.

The document is riddled with contradictions, selective interpretations of freedom, and a startling disregard for the pressing global challenge of sustainability. Yet, beneath the rhetoric and nationalist framing, there are pockets of pragmatic proposals, especially in sector-specific AI deployments, workforce development, and open source AI infrastructure, that deserve serious engagement.

At its core, the AI Action Plan reads like a manifesto for accelerationism without brakes. The opening pages reject previous efforts at cautious regulation, like Biden’s Executive Order 14110, and embrace full-speed deployment of AI, unburdened by red tape, environmental considerations, or ethical frameworks. The plan’s repeated insistence on removing regulatory barriers casts oversight itself as a threat, particularly oversight related to misinformation, diversity, climate change, and human rights. Paradoxically, and based purely on ideology, the Office of Science and Technology Policy is tasked not with strengthening public interest safeguards but with rescinding rules deemed ideological or anti-innovation.

This deregulatory zeal extends to infrastructure. Environmental protections under NEPA, the Clean Air Act, and the Clean Water Act are portrayed as inconvenient obstacles to building the data centres and energy systems AI needs. Climate considerations are not just omitted, they are actively scrubbed from public standards, with an explicit instruction to eliminate references to climate change from NIST frameworks. While this framing may excite Silicon Valley libertarians, and others poised to profit from unrestrained business activities, it raises the question of what kind of AI ecosystem the US will be building if the very values that ensure justice, accountability, and environmental sustainability are excised from its foundation.

One of the starkest contradictions in the plan is its call to defend freedom of speech in AI systems, followed immediately by a directive to suppress content or models that reflect so-called social engineering agendas or woke values. That is, according to the drafters of the policy, freedom of speech is guaranteed by prohibiting speech, which is the equivalent of organising free orgies to promote virginity.

For instance, developers must ensure that their systems are “free from top-down ideological bias”, a phrase used to justify banning government procurement of AI that acknowledges diversity, equity, climate change, or structural inequality. This narrow conception of objectivity suggests that any model reflecting progressive or globally accepted norms is inherently suspect. Accordingly, the Action Plan’s version of freedom seems to operate on a one-way street. It welcomes open dialogue, unless that dialogue challenges the current administration’s values. The implications for academic freedom, AI ethics research, and inclusive policymaking are profound, all of which are paramount for sustained innovation.

Perhaps the most glaring omission is the complete lack of any serious engagement with sustainability. Despite dedicating an entire pillar to AI infrastructure, including data centres, semiconductors, and the national grid, there is not a single reference to sustainable development goals, carbon emissions, or green AI. Instead, the plan explicitly promotes the expansion of energy-intensive infrastructure while celebrating the abandonment of “radical climate dogma”. The phrase “Build, Baby, Build” is invoked as a national imperative, with energy consumption framed only as a barrier to be bulldozed through.

This omission is especially concerning given growing global awareness that AI, particularly large scale models, can have significant carbon footprints. The EU AI Act and many national strategies now link AI policy with broader climate objectives, not forgetting that the global investment in the low carbon energy transition reached $2.1 trillion in 2024. America’s plan, by contrast, treats environmental sustainability as a politically inconvenient distraction, and risks leaving the US out of the innovation fuelled by those funds. This leaves the U.S. not only misaligned with international efforts, but also vulnerable to long term economic and environmental risks.

However, amid the ideological rhetoric and the toddler-like phrases, there are components of the Action Plan that are thoughtfully constructed and potentially transformative, especially where the focus shifts from populism and geopolitics to sectoral applications and innovation ecosystems.

The plan calls for targeted AI adoption strategies in critical sectors such as healthcare, manufacturing, agriculture, national security, and scientific research. It supports regulatory sandboxes and domain specific AI Centres of Excellence, mechanisms that can help scale safe and effective innovation in complex environments.

Initiatives to modernise healthcare with AI tools, apply AI in advanced manufacturing, and support biosecurity research show a clearer understanding of AI’s potential for real world impact. If implemented with inclusive governance, these initiatives could significantly enhance productivity and resilience in key sectors, although, as presented, they risk excluding from the funding pool those who focus on the environmental impact of their investments.

The Plan’s provisions to retrain and upskill American workers also seem well conceived, recognising the labour market disruption AI may cause and proposing concrete steps, from expanding apprenticeships and AI literacy in technical education, to tax incentives for employer-sponsored training. The establishment of an AI Workforce Research Hub could, if well supported, provide crucial data and forward-looking analysis on job displacement, wage effects, and emerging skill demands. It remains to be seen how the need for serious research will be balanced against the constant attacks on some of the world’s top research institutions.

The Plan’s strong endorsement of open-weight and open source models may be one of its most forward-looking elements. These models are essential for academic research, governmental transparency, and innovation outside the Big Tech ecosystem: unlike closed source systems that concentrate power, open models allow more equitable access and experimentation.

Furthermore, the commitment to build a sustainable National AI Research Resource (NAIRR) infrastructure and improve financial mechanisms for compute access, especially for researchers and startups, is a rare bright spot. It signals an intention to diversify the AI innovation ecosystem but, again, it might collide with the White House’s constant defunding of, and battles with, serious research institutions.

Finally, the Plan’s third pillar, international diplomacy and AI security, seeks to export “the full American AI stack” to likeminded nations while isolating rivals, particularly China. The aim is to create a global alliance built around U.S. developed hardware, standards, and systems. Here the plan may hit hard against reality: the constant undermining of diplomatic principles and rules by the US government, and the growing global distrust of America’s commitment to a rules-based international system, may result in countries looking for solutions elsewhere.

Without shared values of sustainability, fairness, and rights based governance, will the world want what America is selling? The EU, Canada, Brazil, and other global actors are increasingly anchoring AI governance in democratic accountability, inclusive participation, and climate conscious design. An American AI regime defined by deregulation and cultural exclusion may find limited traction outside its ideological bubble.

Ideology is the foundation of thinking, but when it replaces thinking it may lead to plans that work against their own expected results, and some aspects of America’s AI Action Plan might be a good example of that.

AI and environmental damage

In the previous entry, the environmental and climate change impact of AI use and development was presented as important and in urgent need of attention from policymakers (who are squarely ignoring it in most proposals for AI regulation). Those impacts are real, considerable and multifaceted, involving major energy consumption, resource depletion, and a variety of other ecological consequences.

Training large AI models requires immense computational power, leading to the use of large quantities of energy. Just as an example, training a single model like GPT-3 can consume 1287 MWh of electricity, resulting in about 502 metric tons of CO2 emissions, comparable to the annual emissions of roughly a hundred cars. But while the energy consumed during the training phase is significant, considerably more energy is used during the inference phase, where models are deployed and used in real-world applications. There have been creative attempts to justify such use, mainly by comparing pears not with apples but with scissors, but these obviate the fact that the general human emissions used for comparison will occur regardless of the activity, so the improper use of AI adds emissions without subtracting many. In a world where the development and deployment of AI is bound to keep growing at bubble-like rates, the location of data centres plays a crucial role in determining the carbon footprint of AI, as data centres are expected to double their energy consumption by 2026 (if 170 pages is too much to read, simply go to page 8). Data centres powered by renewable energy sources have a lower carbon footprint than those in regions reliant on fossil fuels, and there is an argument for making such use compulsory.
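The cited GPT-3 figures imply a grid carbon intensity of roughly 0.39 kg CO₂ per kWh, which makes the point about data centre location concrete. A rough sketch (the per-grid intensity values below are illustrative round numbers, not measurements):

```python
def emissions_tonnes(energy_mwh: float, intensity_kg_per_kwh: float) -> float:
    """CO2 emissions in metric tonnes (kg/kWh is numerically equal to t/MWh)."""
    return energy_mwh * intensity_kg_per_kwh

# Intensity implied by the GPT-3 figures cited above: 502 t over 1287 MWh.
implied = 502 / 1287  # t/MWh, i.e. kg/kWh
print(round(implied, 2))  # -> 0.39

# The same training run on different grids (illustrative intensities, kg CO2/kWh):
for grid, intensity in [("coal-heavy", 0.80), ("EU average", 0.25), ("hydro/nuclear", 0.03)]:
    print(f"{grid}: {emissions_tonnes(1287, intensity):.0f} t CO2")
```

Identical workloads can differ by more than an order of magnitude depending on the grid, which is precisely the argument for siting, or mandating, data centres on low-carbon power.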

From the resource depletion and e-waste point of view, AI hardware, including GPUs and specialized chips, requires rare earth elements and other minerals, and the extraction and processing of these materials can lead to environmental degradation and biodiversity loss. AI is currently being used to find ways to replace those rare earth elements, but even so, as AI technology evolves, older hardware becomes obsolete, contributing to a steep increase in the amount of electronic waste. Besides the global inequalities generated by the mountains of e-garbage currently dumped in developing countries, e-waste contains hazardous substances like lead, mercury, and cadmium, which can contaminate soil and water if not properly managed.

A less obvious but equally significant impact is water usage. Training AI models requires abundant amounts of water for cooling data centres, with some studies claiming that the water consumed during the training of algorithmic models is equivalent to the water needed to produce hundreds of electric cars.

On top of energy and resource consumption, the uncontrolled, improperly regulated widespread use of AI can have severe ecological impacts, particularly, and paradoxically, in activities where proper use of AI could minimize them. Failing to make sustainability a key aspect of algorithm design, training and AI deployment may lead to situations in which it is more profitable to carry on with environmentally harmful AI-driven activities, like the overuse of pesticides and fertilizers, harming soil and water quality and reducing biodiversity, not to mention that AI-based applications like delivery drones and autonomous vehicles can disrupt wildlife and natural ecosystems without bringing much benefit (beyond increasing the already fat profits of a few).

All this supports the idea that AI regulation must address sustainability issues directly rather than leave them to general environmental legislation: it is important to know who owns what AI produces, but only if we still have a planet on which to enjoy those works…