
The Hidden Cost of AI: Energy, Ethics, and the Future of Responsible Innovation

In an interview with the BBC, Professor Gina Neff of Queen Mary University of London warned that ChatGPT is “burning through energy,” and that the data centres used to power it consume more electricity in a year than 117 countries.


Revelations like this force us to reckon with the environmental cost of AI to the planet, and, by extension, to humanity.

ChatGPT released its upgraded image generator only recently. According to a TechCrunch article citing Brad Lightcap, who oversees day-to-day operations and global deployment at OpenAI, more than 130 million users have generated over 700 million images since the upgraded image generator launched in ChatGPT on March 25.

That figure was only two weeks old at the time of writing and may well have passed a billion by now.
At the pace at which humanity is adopting AI for both important and mundane activities, the toll on power generation may soon become unbearable. As more people integrate AI into their daily lives, from simple tasks to complex decision-making, the demand for energy-intensive data processing will only continue to rise.

This begs the question:

What are the responsible applications of AI?

Should AI use be restricted by regulations?

Here’s what we think:

AI’s Growth is Outpacing Our Energy Capacity

The Environmental Toll of Large Language Models

The scale at which large language models (LLMs) like GPT-4 and Gemini are being developed and deployed is staggering, not just in terms of innovation but also environmental cost. According to a University of Massachusetts Amherst study covered in MIT Technology Review, training a single AI model can emit as much carbon as five cars over their entire lifetimes. And that’s just the training phase. Once deployed, the inference process, which involves running the model to generate outputs, burns even more electricity.

For context, Meta’s LLaMA-2 model required over 1,000 GPUs running for weeks. Each GPU draws roughly 300–500 watts, and many of these models are deployed across thousands of servers worldwide. The energy consumption of AI is no longer a back-end issue. It’s a planetary one.
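To make those numbers concrete, here is a back-of-envelope estimate of the GPU energy involved, using only the figures above (1,000 GPUs at 300–500 watts each, running for a few weeks). The run lengths are illustrative assumptions, not measured values, and the result covers GPU power only, excluding cooling and networking overhead.

```python
def training_energy_mwh(num_gpus: int, watts_per_gpu: float, weeks: float) -> float:
    """Total GPU energy in megawatt-hours for a training run."""
    hours = weeks * 7 * 24                      # weeks -> hours
    watt_hours = num_gpus * watts_per_gpu * hours
    return watt_hours / 1_000_000               # Wh -> MWh

# Illustrative low and high scenarios for a LLaMA-2-scale run:
low = training_energy_mwh(1_000, 300, 3)   # ~151 MWh
high = training_energy_mwh(1_000, 500, 4)  # ~336 MWh

print(f"Estimated GPU energy: {low:.0f}-{high:.0f} MWh")
```

Even this conservative sketch lands in the hundreds of megawatt-hours for a single training run, before counting data-center cooling or the ongoing cost of inference.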

Water, GPUs, and the Supply Chain

Beyond electricity, LLMs require significant water resources to keep data centers cool. In 2024 alone, Google’s data centers used nearly 6 billion gallons of water for cooling, much of it in water-stressed areas.

The compute demand also inflates the price and supply strain of critical components like GPUs. This supply chain creates cascading challenges: e-waste, rising energy costs, and increased pressure on national grids—particularly in the Global South, where infrastructure is already under strain.

What Does Responsible AI Look Like?

When AI Solves Real Problems

Responsible AI isn't just about what it can do. It's about why it should. Take AI in healthcare diagnostics, climate modeling, or precision agriculture: domains where machine learning augments human capacity to solve high-impact problems.

In Kenya, for instance, organisations like Omdena are championing the cause by launching projects that use machine vision tools to help farmers detect crop disease early, preventing food insecurity.

These are high-utility applications, where the compute cost is justified by public benefit.

When AI Just Adds Load

On the flip side, we’re also seeing a flood of low-utility use cases: AI-written junk blogs, deepfake videos, spam bots in e-commerce, and automated content mills that clog the web. These don’t just waste computing energy; they erode trust in digital ecosystems. The problem isn’t novelty. It’s scale without utility.

Africa’s Case for Context-Driven AI

Why Infrastructure Constraints Demand Smarter Models

In regions like sub-Saharan Africa, where data infrastructure is still emerging, the push for AI needs a leaner, context-sensitive approach. You can’t simply lift-and-shift Western-trained models into African markets. Not only are the datasets often irrelevant, but the compute costs become prohibitive.

This is why AI in Africa must prioritize efficiency, local relevance, and sustainability, not just capability.

Pastel Africa’s Lean-AI Approach with Sigma

One example is Pastel Africa, which builds enterprise AI solutions tailored for African institutions. Instead of deploying bloated models, their approach emphasizes smaller, interpretable, and use-case specific models that deliver real value without unnecessary overhead.

Sigma, Pastel’s flagship product for fraud detection, is built with resource-aware deployment in mind, balancing explainability, speed, and minimal hardware demands. It’s a quiet testament to what responsible AI design looks like when done right.

What the Path Forward Requires

Regulation, Audits, and Energy Standards

We can’t continue deploying models without AI governance frameworks. The EU AI Act is a strong step forward, classifying use cases by risk and requiring model documentation standards and impact assessments. A similar framework adapted for Africa could go a long way in ensuring ethical AI deployment here.

Governments and tech firms should also report the energy footprint of their models. This would include emissions reporting alongside a commitment to renewables where possible.

Context-Specific Models and Edge Optimization

We also need to reimagine the obsession with bigger models. Instead, let’s invest in context-specific models that are trained on smaller, local datasets and run efficiently on edge devices, especially in underconnected regions. There's no need to deploy a billion-parameter model to sort receipts in places like Lagos when a 50MB transformer will do.
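The arithmetic behind that "50MB transformer" claim is simple: a model's file size is roughly its parameter count times the bytes stored per weight. The sketch below illustrates this relationship; the precisions shown (32-bit floats versus 8-bit quantized weights) are common choices, not a specification of any particular model.

```python
def params_that_fit(size_mb: float, bytes_per_param: int) -> int:
    """Roughly how many parameters fit in a model file of size_mb
    at a given numeric precision (bytes per weight)."""
    return int(size_mb * 1_000_000 / bytes_per_param)

# A 50MB model budget at different precisions:
fp32_params = params_that_fit(50, 4)  # 32-bit floats: ~12.5M parameters
int8_params = params_that_fit(50, 1)  # 8-bit quantized: ~50M parameters

print(f"fp32: ~{fp32_params:,} params, int8: ~{int8_params:,} params")
```

Tens of millions of parameters is more than enough capacity for a narrow task like document classification, which is why quantized, task-specific models are such a good fit for edge deployment in underconnected regions.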

Redefining Incentives for Innovation

Lastly, let’s rethink what innovation means. Today’s incentive structures reward complexity, scale, and hype. But what if innovation was rewarded for its restraint, sustainability, and alignment with human needs?

It’s time we measure AI not just by capability but by consequence.

Conclusion

Restraint Is the Real Innovation

There’s a reason the phrase “just because we can doesn’t mean we should” resonates more than ever in the AI discourse.

We are on the cusp of an age where the cost of progress may exceed the benefit—not because AI is inherently bad but because its deployment has outpaced our discipline. If unchecked, the environmental and ethical toll of AI may undermine its own promise.

The future belongs to the builders who ask better questions. And to those who, like Pastel Africa, quietly build solutions that solve real problems, respect real limitations, and still push the envelope.

Want to see what responsible AI looks like in action? Explore our work at Pastel Africa.