Written by Eve Muyanja
Edited by Nnenna Hemeson
My relationship with AI has been split between awe at its capabilities — and deep discomfort with what it reveals about our world.
Time and again, when prompting AI image generators for a “person” or “humanoid,” the result defaults to a white man. Ask for a woman, and the image shifts to a white woman. Only when race, hair, and identity are explicitly specified does anything else appear. This is not accidental. It exposes a hierarchy coded into the system itself — one that mirrors global power structures rather than challenging them.
White masculinity remains the default. Everyone else is an exception.
AI does not exist in isolation. It reflects the systems that fund it, design it, train it, and benefit from it. And those systems continue to exclude marginalised and underrepresented communities from meaningful participation in AI development. Whether or not representation statistics fully capture this imbalance, the outputs speak loudly enough.
AI may be innovation — but innovation without equity is not progress. A technology that reproduces existing inequalities at scale is not neutral. It is political.
The Human Costs Behind the Code
The story of AI does not stop at biased outputs. It extends deep into the ground — into the extraction of critical minerals that power AI hardware and digital infrastructure.
Cobalt, essential for smartphones and AI systems, is predominantly extracted from the Democratic Republic of Congo — a country already burdened by conflict, ecological destruction, and economic exploitation. Communities pay with their health, safety, and land, while tech companies prioritise speed, scale, and shareholder returns.
This is not innovation. It is exploitation rebranded.
Further down the supply chain, vulnerable workers carry the emotional and psychological costs of AI development. In countries like Kenya, AI annotators perform essential labour — tagging data, moderating content, training algorithms — often underpaid, unprotected, and exposed to traumatic material without adequate mental health support.
The benefits of AI flow upwards.
The harms concentrate downwards.
Infrastructure Without Accountability
Current estimates suggest that ChatGPT uses enough electricity to charge eight million phones every day, along with about 39.16 million gallons of water daily. The environmental footprint of AI is growing rapidly, yet accountability remains dangerously thin. Data centres consume enormous amounts of electricity and water, and are frequently placed in communities without meaningful consultation or safeguards.
In Mexico, the opening of a Microsoft data centre coincided with widespread power outages. Similar patterns are emerging globally — from water shortages to grid instability — as communities absorb the cost of infrastructure designed to serve distant markets.
Meanwhile, AI corporations — including Microsoft, Amazon, Meta, OpenAI and Google — continue to generate immense wealth through stock markets and investor confidence.
Profit is centralised. Risk is externalised.
Scale Without Safeguards
Recent concerns around Grok AI, developed by xAI and embedded within X, highlight what happens when AI systems are deployed at scale without strong safeguards. Marketed as a “truth-seeking” alternative, Grok has drawn criticism for producing sexualised and harmful outputs on sensitive social and political issues, reportedly at a rate of up to 6,700 per hour. It is a wake-up call for society.
The consequences of this approach are not evenly distributed. When guardrails are weak, women and children are disproportionately exposed to harassment, sexualised content, misinformation, and abuse, often with little to no protection or recourse, especially on large social platforms where harmful narratives spread rapidly.
The European Commission has responded directly: “The protection of the most vulnerable cannot wait. The EU Commission have taken investigative steps in relation to X and its obligations under the DSA, and will now also carefully assess the changes to Grok that X has announced, to ensure they effectively protect citizens in the EU.”
Grok reflects a broader industry pattern: speed, visibility, and dominance are prioritised over safety and accountability. Once again, technological power concentrates at the top, while the social costs fall on those least protected.
AI without guardrails does not create freedom.
It creates space for depravity and exploitation.
Managing Risk Is a Political Choice
If AI is to shape our future, then who governs it — and who it serves — must be a public concern.
AI gains must be distributed more fairly, and its harms actively mitigated. Governments have a responsibility to regulate data centres, enforce labour protections, and hold technology companies accountable for social and environmental damage.
Energy-intensive AI systems must be powered by renewable solutions, not at the expense of water security or community resilience. As the estimates above make clear, tools like ChatGPT already consume enough electricity to charge millions of phones daily, alongside tens of millions of gallons of water. These costs cannot remain invisible.
I do not want a future where AI holds more power than people — where technological advancement outruns ethical responsibility.
If AI is to be part of our world, it must be governed with transparency, justice, and care. Otherwise, it is not intelligence we are scaling — it is inequality.