Silicon Valley loves a pinky swear.
There’s something almost touching about the tech industry’s obsession with the honor system. We’ve seen it with data privacy, we’ve seen it with social media moderation, and now we’re seeing it with the apocalypse-adjacent threat of rogue artificial intelligence. The latest person to join the "just trust them" chorus is S. Krishnan, the Secretary of India’s Ministry of Electronics and Information Technology (MeitY).
Speaking on the heels of the latest global AI summits, Krishnan suggested that voluntary commitments from tech giants might actually do more heavy lifting than rigid, top-down laws. His logic? The tech is moving too fast. By the time a bureaucrat finishes a draft of a regulation, the model they’re trying to restrain has already been replaced by something twice as hungry and four times as opaque.
It sounds pragmatic. It sounds nimble. It’s also a total farce.
We’re currently living through a gold rush where the shovels are Nvidia H100 GPUs that cost $30,000 a pop and the stakes are, depending on who you ask, the future of human labor or the end of reality as we know it. In this environment, asking a trillion-dollar company to voluntarily slow down for the sake of "safety" is like asking a shark to consider the feelings of a seal.
The New Delhi Declaration and the various communiqués coming out of these summits are essentially high-end scrapbooks. They’re filled with nice sentiments about "human-centric" tech and "ethical deployment." But Krishnan’s insistence that these voluntary pacts are superior to hard regulation misses the fundamental friction of the industry. Compliance is a cost center. Innovation is a profit center. When the two collide—and they collide every single morning at 9:00 AM—the profit center wins.
Let’s look at the actual trade-offs. To make an AI model "safe," you have to hobble it. You have to spend millions on "red teaming," hiring humans to try to trick the bot into saying something racist or building a bomb. You have to filter datasets, which costs time and compute power. If Company A decides to be a good citizen and spends six months stress-testing its new large language model, and Company B just ships its raw, unhinged version to grab market share, Company A loses. Voluntary commitments don’t fix that. They just make the losers feel better about losing.
Krishnan isn’t wrong about the speed. The legislative process in any democracy is a slow, grinding machine. It wasn’t built for a world where software updates every Tuesday. But there’s a difference between "laws are hard to write" and "let’s just let the companies grade their own homework."
MeitY’s stance reflects a broader anxiety in India. The country doesn’t want to regulate itself out of the race; it wants to be a global hub for AI, not just a back office for data labeling. If the government comes down too hard with the "Digital India Act" or any other legislative hammer, it risks spooking the investors currently pouring billions into the local ecosystem. So it opts for the "voluntary" route. It’s a way to look like a leader in AI ethics without actually making anyone do anything they don’t want to do.
It’s the "coffee shop" test of governance. If you can’t explain the consequences of breaking a rule over a latte, the rule doesn’t exist. Right now, if OpenAI or Google or some scrappy startup in Bengaluru breaks their "voluntary commitment" to AI safety, what happens? Do they get a sternly worded letter? Does S. Krishnan unfollow them on X?
There is no price tag on a broken promise.
We’ve seen this movie before. In the early 2010s, we were told that social media companies would voluntarily protect our data and curate our feeds for the "public good." We saw how that worked out. It led to a mental health crisis, the erosion of local news, and the weaponization of misinformation. But hey, they signed some very pretty declarations back then, too.
The reality is that "voluntary" is just another word for "optional." And in a sector where the compute costs alone can burn through a billion dollars in a quarter, nobody is going to opt for the expensive, slow, safe route unless they’re forced to. Krishnan can talk about the flexibility of commitments all he wants, but flexibility is also what allows things to snap when they’re under pressure.
At some point, we have to stop treating these tech giants like precocious children and start treating them like the massive, profit-driven utilities they are. Until there’s a specific, painful penalty for cutting corners, these summit declarations are just expensive ways to kill trees.
If the best defense we have against the risks of AI is the collective pinky swear of five CEOs who are all trying to bankrupt each other, are we actually being governed, or are we just being managed?
