Artificial Intelligence (AI) has increasingly been framed as a solution to environmental challenges, contributing to sustainability efforts such as charting methane emissions or mapping sand dredging. Yet its development and lifecycle pose significant environmental risks. This paradox exists in a policy landscape in which Big Tech wields growing power, actively creating an environment conducive to its own long-term objectives of expansion and influence. Recent developments in US tech policy, with the Trump administration taking a more deregulatory stance, risk deepening global power imbalances by further entrenching US tech dominance. Nations in the Global South already face disadvantages in engaging with global governance platforms and multinational corporations, and weak AI regulation may exacerbate these vulnerabilities.
Environmental toll of the AI lifecycle
Despite the need for more transparency and research into AI’s environmental impacts, existing estimates highlight the severity of its effects. Training a single large language model (LLM), for example, emits approximately 300,000 kg of carbon dioxide, equivalent to around 125 round-trip flights between Beijing and New York. A single LLM query requires about 2.9 watt-hours of electricity, whereas a conventional internet search requires only 0.3 watt-hours. Moreover, the rise of generative AI has increased electricity demand, reflected in the surging demand for data centres: their number worldwide has risen from 500,000 in 2012 to 8 million, with energy consumption doubling every four years over that period. Data centres also require large amounts of water for cooling and electricity generation. Estimates project that AI-driven global water demand could reach 4.2-6.6 billion cubic metres in 2027, more than 50 percent of the UK’s total water use in 2023. Additionally, although the exact volume of electronic waste generated by AI remains unclear, only 22 percent of e-waste is currently disposed of in an environmentally sound way.
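As a rough sanity check, the figures above can be combined in a short back-of-envelope calculation. This is only a sketch: every constant below is an estimate quoted in this section, not an independently verified measurement.

```python
# Back-of-envelope checks of the figures cited above.
# All inputs are the estimates quoted in the text, not verified data.

LLM_QUERY_WH = 2.9      # electricity per LLM query (watt-hours)
SEARCH_QUERY_WH = 0.3   # electricity per conventional web search (watt-hours)

# An LLM query uses roughly ten times the electricity of a web search.
energy_multiplier = LLM_QUERY_WH / SEARCH_QUERY_WH
print(f"LLM query vs. web search: {energy_multiplier:.1f}x")

TRAINING_CO2_KG = 300_000    # CO2 from training one LLM (kg)
ROUND_TRIP_FLIGHTS = 125     # stated Beijing-New York round-trip equivalent
co2_per_flight = TRAINING_CO2_KG / ROUND_TRIP_FLIGHTS
print(f"Implied CO2 per round trip: {co2_per_flight:,.0f} kg")

# Projected AI-related water demand in 2027 (billion cubic metres).
AI_WATER_LOW, AI_WATER_HIGH = 4.2, 6.6
# If even the low estimate exceeds 50 percent of the UK's 2023 water use,
# UK use must be below twice that low estimate.
uk_water_upper_bound = AI_WATER_LOW * 2
print(f"Implied upper bound on UK 2023 water use: {uk_water_upper_bound:.1f} bn m^3")
```

These ratios (a roughly tenfold energy gap per query, and 2,400 kg of CO2 per implied flight) follow directly from the article's own numbers and make the scale comparisons easier to audit.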
Finally, the growing demand for graphics processing unit (GPU) chips increases the demand for minerals and metals. Many of these overlap with the materials mined for the transition to a low-carbon economy, suggesting that AI-driven extraction of minerals and rare earth elements could compound the environmental impacts already associated with this sector.
Big Tech in Policymaking
Besides the environmental cost of individual AI models, the continuous expansion of AI and the projected surge in data centres to support it have rendered Big Tech a major force in global energy consumption. This rapid growth, and the heightened environmental risk that comes with it, is facilitated by a policy landscape in which Big Tech exerts significant influence. Big Tech’s accumulation of wealth, cross-border interests and global influence has reached such an extent that sovereign states have begun to treat it like a sovereign actor; Denmark’s appointment of the first-ever Tech Ambassador is one example. Big Tech’s influence extends to identifying social issues and defining ‘problems’: it controls information on digital platforms, influences media and generates content through tools like GenAI. It also provides ‘solutions’ to these epistemic challenges as digital tools become increasingly embedded in infrastructural, social and governmental systems. Societal structures increasingly characterised by technical solutionism legitimise both Big Tech as a constituency in policymaking and the technologies it offers, reinforcing its position. Increased affordability and accessibility, moreover, make these digital solutions ever more feasible to implement.
Additionally, tech companies have become important actors in the political sphere in response to the threat of (over-)regulation. The mechanisms they use to influence the political dimension of policymaking include consultations with governments, lobbying, and the funding of universities, think tanks and experts. The effect of this political capital is evident in the 2024 EU AI Act, where lobbying and exercises of discursive and consultative power helped soften the conditions of the proposed act. For example, in a joint statement, BSA | The Software Alliance urged EU institutions to leave General Purpose AI out of the Act’s scope, warning that including it would be ‘detrimental’ to development and would ‘hamper’ innovation.
Big Tech’s influence across problem shaping, policy formation and political processes raises the question of whose interests are being served. Innovation-centric narratives underestimate Big Tech’s influence within the policy landscape, overlooking the possibility that innovation can be a means to other ends, such as influence or profit. Google, for example, used anti-competitive practices to stifle innovation and preserve its dominance of the internet search market. OpenAI’s request for a copyright exemption further illustrates how the argument for innovation can mask underlying motives of market dominance and influence: such an exemption could stifle innovation and creative freedom, especially for smaller players in the field and across sectors.
Less regulation, then, does not necessarily lead to greater innovation or to social, political or environmental benefits. Big Tech primarily seeks to shape an environment that supports its own expansion and growing influence. Effective regulation can steer the current trajectory of AI development towards addressing environmental challenges in ways that Big Tech is unwilling or unable to pursue on its own.
Initiatives in the Global South
The extractive character of AI development extends beyond environmental harm and is reflected in the asymmetrical concentration of power between the Global North and the Global South. Resource-rich but infrastructure-poor states in the Global South in particular risk environmental degradation whilst benefitting minimally from AI-driven economic growth. Paired with increased reliance on foreign technology and proprietary models for AI deployment, this power asymmetry mirrors historical patterns of technological and economic dependence.
In response, regulatory initiatives are emerging in the Global South. The African Union’s Continental Artificial Intelligence Strategy acknowledges AI’s transformative potential whilst encouraging governments to leverage existing laws to address challenges posed by AI. It emphasises the need to consider environmental risks in AI governance, calling for a multi-tiered approach to mitigate these risks and ensure equal distribution of benefits, while highlighting the need for regional cooperation.
The UAE Cabinet has approved the ‘UAE’s International Stance on Artificial Intelligence Policy’, which is grounded in six core principles, sustainability notably among them. However, without binding regulations on sustainable AI, rather than merely AI for sustainability, there is a risk of outsourcing the environmental burden rather than addressing it. This can already be observed in the UAE’s investment in data centres abroad, driven by energy constraints as its domestic data centres reach capacity. The UAE’s engagement in international fora to establish global AI sustainability standards may represent a step towards addressing these cross-border challenges.
The need for international cooperation is further highlighted in the draft Brazil AI Act, which explicitly links AI to sustainable development. The draft not only proposes environmental protection and sustainable development as foundational AI principles but also urges that these principles guide AI’s development, implementation and use.
Other domestic initiatives reveal economic opportunities that also advance sustainability goals. Kenya balances sustainability and economic benefit through a clear regulatory framework in the renewable energy sector: its regulations for geothermal projects, combined with incentives and partnerships with development-finance institutions, enabled Microsoft and United Arab Emirates-based AI firm G42 to build a geothermal-powered data centre. Similarly, Malaysia’s Corporate Renewable Energy Supply Scheme (CRESS) connects corporates with clean energy providers, giving data-centre developers direct access to renewable energy and attracting investments by Google and Oracle that are expected to contribute over US$9.5 billion to Malaysia’s economy by 2030.
Recommendations
These recommendations advocate regulatory frameworks that address Big Tech’s influence on policymaking, mitigate the risks of rapid expansion and of extractive activities related to AI development, and promote sustainable practices.
- Establish context-specific regulatory frameworks to align AI development and deployment with specific social and environmental goals.
- Implement sustainability reporting to improve transparency and promote accountability and data-driven decision-making.
- Foster regional and international cooperation through coalition-driven efforts to strengthen common values and norms surrounding sustainable AI development and address the power asymmetry observed in the decision-making sphere.
- Leverage existing legal frameworks to establish a multi-tiered governance approach and shape domestic AI narratives.
- Incentivise sustainable AI through regulatory frameworks that harmonise initiatives with environmental goals whilst promoting economic growth, as demonstrated by emerging initiatives in Kenya and Malaysia.
Dewi de Weerdt is a postgraduate student at the University of Essex, pursuing their degree in Human Rights Theory and Practice.