When India Said No to Bill Gates


The Moment Morality Reclaimed the Stage at the India AI Impact Summit 2026


Reports say the chair sat empty. That was the thing people noticed first — before the speeches, before the pledges, before the Delhi Declaration. An empty seat in the front row of the Bharat Mandapam in New Delhi, where Bill Gates was supposed to sit on the morning of February 19, 2026. I imagine the placard was still there — perhaps someone had forgotten to remove it, or perhaps someone had left it deliberately, the way you might leave a candle burning in a window for a guest who is not coming home.

Hours earlier, the organizers had made their decision. Gates would not deliver his keynote. The Gates Foundation offered the kind of explanation that explains nothing: the withdrawal was “to ensure the focus remains on the AI Summit’s key priorities.” But reports confirmed what the diplomatic language concealed. Summit organizers had expressed discomfort at giving Gates the stage after details of his past association with Jeffrey Epstein resurfaced in documents released by the U.S. Department of Justice. At a summit whose first principle was morality, the organizers decided that the messenger could not be allowed to overshadow the message.

Not with fury. Not with public denunciation. With something quieter and, in its way, more devastating — a refusal.

I keep returning to that refusal. There is something in it that speaks to a shift larger than one man, one summit, one cancelled speech. Something about where the world’s moral center of gravity is migrating, and what that migration means for the most consequential technology humanity has ever built.


The Summit That Changed the Conversation

The Isha Upanishad — possibly the oldest philosophical text still read by living humans — opens with a verse that should trouble anyone building artificial intelligence: Ishavasyam idam sarvam — all this, whatever moves in this moving world, is pervaded by the divine. Do not covet, for whose is wealth? Whose is this data? Whose is this intelligence? The Upanishad’s answer: it belongs to no one, and therefore it belongs to all. That verse has been recited in temples for three thousand years. But until New Delhi, I had never seen it become foreign policy.

Three AI summits preceded this one. Bletchley Park in 2023. Seoul in 2024. Paris in 2025. Each convened by wealthy nations, each circling the same anxious orbit — safety, alignment, existential risk. All questions asked by people who already possess the technology and fear what it might become. The fear of the powerful is a particular kind of fear. It worries about control.

India asked a different question.

With dozens of country delegations, multiple heads of state, and Sundar Pichai and Sam Altman and Dario Amodei all present, Prime Minister Modi unveiled the MANAV framework — an acronym that happens to be the Hindi word for “human.” Four of its five pillars cover governance, sovereignty, access, and legitimacy. But the first — the foundation on which the others rest — is morality. Not safety. Not competitive advantage. Morality.

That distinction matters more than it appears to. When the West speaks of AI safety, it speaks the language of risk management — containment, guardrails, red lines. Risk is a financial concept. You manage it. You hedge it. You price it into the model. But when India placed morality at the foundation, it was speaking the language of dharma — right action as a precondition for building anything at all. The Delhi Declaration that emerged from the summit is the first major global AI governance blueprint to come from the developing world. And its most radical feature is not any single policy, but its insistence that the question of AI is, at root, a question about what kind of civilization we are choosing to build.


The Karma of Decisions Made Today

Karma is not punishment. The word has been domesticated in the West, turned into a bumper sticker about parking spaces and petty revenge. In Hindu philosophy, karma is consequence — every action sets in motion a chain of effects that ripples forward through time, touching lives not yet born.

This is the frame through which we must examine what is happening right now with artificial intelligence.

When a foundation model is trained on data harvested from developing nations — their languages, their medical records, their agricultural knowledge, the patterns of how their grandmothers speak — and that model is then sold back to those same nations at a premium, what karma is being created? The Delhi Declaration named this pattern with a precision that made several CEOs in the room visibly uncomfortable: “AI extractivism.” Data flows north. Value flows north. The Global South provides the raw material; the Global North provides the finished product. The colonial pattern, wearing new clothes. Different century. Same structure. Same direction of extraction.

Consider one example among hundreds: Indian hospitals generate vast troves of medical imaging data — X-rays, CT scans, pathology slides — that global AI companies use to train diagnostic models. Those models are then licensed back to Indian healthcare systems at prices that public hospitals cannot afford. The data was Indian. The intelligence built on it is not. The patients whose bodies trained the algorithm will never benefit from it.

Who bears the karmic weight of that design?

The Srimad Bhagavatam answered this question centuries before anyone built a large language model: Etāvaj janma-sāphalyam dehinām iha dehiṣu / prāṇair arthair dhiyā vācā śreya-ācaraṇam sadā — “It is the duty of every living being to perform welfare activities for the benefit of others with his life, wealth, intelligence, and words.” Life. Wealth. Intelligence. Words. Artificial intelligence touches all four. The verse is not a suggestion. It is an obligation — and it runs in the opposite direction from extraction.

In the Islamic tradition, amanah — trust, stewardship — places a sacred obligation on those who hold power. Human beings are khalifah, stewards of creation, not its owners. Technology is a trust held for all. The Maqasid al-Shariah prioritize the preservation of life, intellect, progeny, wealth, and faith. Artificial intelligence touches every single one.

What connects dharma and amanah — traditions separated by geography, language, and three thousand years of divergent history — is a shared understanding that power without moral purpose is not neutral. It corrodes. And the consequences of how we wield new capabilities are measured not in quarters or election cycles but in generations.


The Shadow and the Light

Gates did not speak. But his absence filled the room the way silence fills a cathedral — you could feel it pressing against the walls.

Here was one of the most influential technologists alive — absent not because a rival blocked him or a regulator intervened, but because the gravitational pull of moral accountability had become, finally, stronger than the gravitational pull of prestige. Details of his association with Epstein, resurfacing in Department of Justice releases weeks earlier, cast a shadow that philanthropy could not dispel. At a summit whose first principle was morality, the contradiction was untenable.

This is not a verdict on Gates. Verdicts are for courts to render.

What matters is what India’s response reveals about the moment we inhabit. The Shiva Purana tells the story of the Samudra Manthan — the churning of the cosmic ocean by gods and demons together, seeking the nectar of immortality. But before the nectar rose to the surface, something else emerged: halahala, a poison so potent it threatened to destroy all of creation. No one wanted it. No one had planned for it. And it was Shiva who stepped forward and drank the poison, holding it in his throat so that creation could survive. They call him Neelkanth — the blue-throated god.

AI is the churning. The nectar is real — extraordinary capability, medical breakthroughs, efficiencies that could lift millions. But the poison is real too — surveillance, displacement, extraction, the quiet erasure of languages and livelihoods that no quarterly report measures. The question the Samudra Manthan poses to our moment is simple and terrible: who drinks the poison? Right now, the Global South provides the data, the labour, the raw material. The Global North extracts the nectar. That is not Shiva’s sacrifice. That is involuntary extraction dressed in the language of innovation.

What India attempted in Delhi is something closer to Shiva’s actual role — stepping forward, voluntarily, to hold the poison so that creation can continue. Building a moral framework is thankless, slow, and commercially unrewarded. No quarterly earnings call celebrates it. But someone must do it, or the nectar is worthless.

The Bhagavad Gita warns: Ahankara vimudhatma kartaham iti manyate — the one deluded by ego thinks, “I am the doer.” For decades, the technology industry operated under precisely this delusion. Innovation was its own moral credential. Disruption was its own justification. Build the extraordinary tool, and nobody asks too many questions about the hands that built it.

That bargain is breaking.

When Altman and Amodei sat in the same hall where Gates was meant to stand, the subtext was impossible to miss. The age of the tech titan as moral authority — the age when building something brilliant granted you immunity from scrutiny — that age is ending. The Delhi Declaration’s insistence on “glass box” safety rules and accountable governance suggests that whatever comes next will be written not by individuals but by institutions. Not by genius but by consensus. Not by the Global North alone, but by a wider, more complicated, more honest civilization.


Two Worlds, One Question

For the United States, AI is infrastructure and arsenal. The U.S. delegation, led by White House science and technology director Michael Kratsios, said it plainly: Washington “totally rejects” centralized oversight of AI. The American vision is sovereignty through strength. Build faster. Export aggressively. Let market forces sort out who benefits. The “American AI Exports Program,” the “Tech Corps” modeled on the Peace Corps, the financing mechanisms through the World Bank — these are instruments of projection.

For India and much of the Global South, AI is sustenance and sovereignty. When Union Minister Ashwini Vaishnaw described India’s strategy as “frugal, sovereign, and scalable,” he was articulating something that Silicon Valley has no vocabulary for. Frugal — not every problem requires a trillion-parameter model. Sovereign — data belongs to the nation that generates it. Scalable — the technology must reach the farmer in Bihar, not just the developer in Bangalore.

Between these two visions sits the question the Delhi Declaration tried to answer. Who does AI serve?

The developing world knows extraction — centuries of it, dependency, imposed frameworks. But India has already built something different. Its Digital Public Infrastructure, its Aadhaar identity system, its UPI payment architecture — these are not copies of Western platforms. They are indigenous innovations serving over a billion people. When India speaks of sovereign AI, it speaks from experience. Not aspiration.

And yet the Delhi Declaration acknowledged an uncomfortable truth: foundation models train across borders, cloud infrastructure is multinational, and the desire for sovereignty must negotiate — daily, painfully — with the reality of interdependence.

This tension will define the next decade. Not safety versus acceleration. But extraction versus stewardship.


The Actors and the Inheritance

Consider who was in the room. Pichai, Altman, Amodei, Macron, the UN Secretary-General. The builders, the regulators, the diplomats. And consider who was absent. Jensen Huang of Nvidia withdrew, citing unforeseen circumstances. Gates withdrew under the weight of a moral reckoning. Two absences. Two different reasons. But the same lesson — the age of AI will be shaped not only by those who show up, but by the standards we apply to those who seek the stage.

The investment pledges — over $250 billion by the summit’s close — are staggering. Reliance committed $110 billion over seven years. Microsoft extended $50 billion across the Global South. But pledges are not principles. Capital follows incentives. And India’s own corporate champions are not exempt from the moral framework the summit articulated — they carry governance questions of their own. The test of Delhi’s sincerity will be whether that standard applies inward as fiercely as it applies outward, and whether the framework becomes the incentive structure itself.

The technologists in that room are making choices whose consequences will outlast their careers, their companies, perhaps their nations. These are not technical decisions. They are karmic ones. The karma of a generation is not erased by the good intentions of the next. It must be lived through.


The Question That Matters

When the organizers in Delhi chose to remove a billionaire’s keynote rather than compromise the moral premise of their gathering, they made a choice that every spiritual tradition recognizes. Principle over prestige. Integrity over convenience.

Will it hold? Will the Delhi Declaration’s language of morality and stewardship survive contact with the realpolitik of hundreds of billions in pledges, the lobbying of corporations whose quarterly earnings depend on unfettered data access, the sheer velocity of a technology whose capabilities seem to double every eighteen months?

The Katha Upanishad says two paths approach every person: shreyas, the good, and preyas, the pleasant. The wise choose the good. The foolish choose the pleasant. AI acceleration is preyas — profitable, intoxicating. AI morality is shreyas — difficult, demanding of a patience that quarterly earnings do not reward. Delhi chose shreyas. Whether that choice endures is the open question of our time.

But here is what the spiritual traditions teach, across every geography and era: the karmic weight of a decision does not depend on its immediate outcome. It depends on the intention behind it. And the intention declared in New Delhi — before we discuss what AI can do, let us discuss what AI should do — is one that bends the arc toward accountability.

The Christian tradition says: by their fruits you shall know them. Islam says: actions are judged by intentions. Both are saying what the Upanishads have said for three millennia — that the measure of power is not what it builds but whom it serves. That question has never been more urgent than it is now.

For every allocator deploying capital into AI, the Delhi Declaration offers a simple due diligence question that no financial model captures: does this investment create stewards, or does it create extractors? That question belongs in every investment memo, every board deck, every sovereign mandate written from this point forward.

The question is not whether AI will reshape civilization. It already is. The question is whether the civilization that emerges will be one that future generations recognize as worthy of their inheritance. That is the question the empty chair in New Delhi asked louder than any keynote could. And the people in that room will answer it.

Not with algorithms. Not with investment pledges. Not with keynote addresses.

With their actions.

Karma is God’s perfect law.


Nazem Alkudsi, CFA, is the founder of @LongArcNews. A former CEO in the Abu Dhabi sovereign wealth ecosystem and three-decade veteran of institutional investing, he writes about capital, power, and civilizational patterns.

