The slogans are always the easiest part.
Prime Minister Narendra Modi stood before the crowd at the latest AI Impact Summit and delivered exactly what the branding consultants ordered: a vision of "welfare and happiness for all." It’s a nice thought. It’s the kind of sentiment you’d find on a motivational poster in a breakroom where the coffee machine has been broken for six months. This time, though, the fix on offer isn’t a new coffee filter; it’s "human-centric progress" powered by silicon and a staggering amount of electricity.
We’ve seen this script before. A world leader walks onto a stage, avoids the terrifying technical jargon, and promises that the robots aren't coming for our jobs—they’re coming to make us smile. The rhetoric is clean. The reality is messy, expensive, and smells faintly of ozone.
The core of the pitch is simple. India wants to skip the awkward "tech-bros-breaking-things" phase of AI development and jump straight to the part where algorithms fix healthcare, education, and rural poverty. Modi isn't just talking about chatbots that can write mediocre poetry in Hindi. He’s talking about sovereign AI—a government-backed, data-hungry machine designed to manage the lives of 1.4 billion people.
It sounds noble. It also sounds like the ultimate administrative shortcut.
But let’s look at the friction. Building a "human-centric" AI isn’t just about writing better code; it’s about the hardware. India recently greenlit a $1.2 billion "IndiaAI" mission, and the single largest slice of that cash is earmarked for GPUs, the expensive, power-hungry chips that Nvidia sells for the price of a luxury sedan. You don’t get happiness for free. You buy it at a massive markup from Santa Clara, California.
There’s a glaring trade-off here that nobody in the front row wants to talk about. To make AI "human-centric" in a country as vast as India, you need data. A lot of it. You need the digital footprints of farmers in Punjab, street vendors in Bangalore, and students in Bihar. The "welfare" promised by the state usually comes with a side of total legibility. If the government is going to use AI to tailor services to your exact needs, it has to know exactly who you are, what you eat, and how you spend your time. It’s a "happiness" that requires you to be permanently pinned to the digital board.
The cynical view is that "human-centric" is just a polite way of saying "state-managed."
The summit was filled with talk about the "democratization of technology." A fine dream. But technology isn’t a democracy; it’s an oligarchy of compute power. If you don’t own the chips and you don’t own the data centers, you’re just a tenant in someone else’s cloud. Modi’s push for a homegrown stack is an attempt to break that lease, but the bill is coming due. You can’t build a digital utopia on a budget, and you certainly can’t do it without making some uncomfortable choices about privacy and dissent.
It’s easy to promise welfare. It’s harder to explain what happens when the "human-centric" algorithm denies a pension because of a glitch in a facial recognition database. Or when the "happiness" predicted by a model doesn't account for the fact that people actually quite like their privacy.
We’re told this AI will be a "force for good." A tool for the common man. It’s the same line we heard about social media in 2011, and we all know how that turned out. Now, the stakes are higher. We aren't just talking about sharing photos; we’re talking about the fundamental plumbing of a nation.
The summit ended with the usual rounds of applause and a flurry of press releases. The delegates went back to their hotels, and the politicians went back to their offices to figure out how to pay for the next cluster of H100s. The promise hangs in the air like smog over New Delhi: a future where the machine knows what’s best for you before you do.
If this is the new "human-centric" reality, you have to wonder which humans they’re actually talking about.
