It’s a slideshow. Seven slides of pure, unadulterated hubris.
We’ve all seen the deck by now. Some McKinsey-adjacent think tank or a venture capital firm with too much dry powder releases a series of high-gloss charts. They want to show us the future. They want to show us how the world changes when the machines start doing the thinking. But if you look past the gradients and the rounded sans-serif fonts, the picture isn't nearly as pretty as the marketing suggests.
First, there’s the death of the "starter" career. The graphics show a "shift in labor dynamics," which is corporate-speak for firing every junior copywriter, coder, and paralegal in the building. Why pay a twenty-two-year-old $60,000 a year to learn the ropes when a Large Language Model can produce a "good enough" first draft for the price of a monthly API subscription? The ladder is being pulled up. We’re trading the next generation of experts for a slight bump in this quarter’s margins. It’s a gamble that assumes senior talent just spawns out of thin air without ever having been a trainee.
Then comes the hardware. The charts suggest we’ll all be wearing some form of AI-infused jewelry—pins, glasses, or pendants that whisper into our ears. They don't mention the friction. They don't mention that the Humane AI Pin was a $699 paperweight that overheated on a simple phone call. The "change" here isn't a hands-free utopia. It’s a world where every piece of tech you own has a battery life of four hours and requires a $24-a-month subscription just to tell you the weather. Your glasses will have a Terms of Service agreement longer than Ulysses.
Third on the list: The "Dead Internet." This isn't a theory anymore; it’s the new reality. The graphics show a surge in "content volume." That’s a polite way of saying the web is being flooded with synthetic sludge. SEO-optimized gibberish. AI-generated images of soldiers with six fingers. It’s a feedback loop. Machines are now training on the garbage produced by other machines. Finding a genuine human thought on the first three pages of a Google search is starting to feel like an archeological dig.
Fourth, we have the "Energy Pivot." This one is actually honest, if you know how to read the data. To keep these models running, Big Tech is scouting for nuclear reactors. Microsoft is trying to resurrect Three Mile Island. The trade-off is simple: we get a chatbot that can write a mediocre haiku about brunch, and in exchange, the tech industry consumes enough water to drain a mid-sized lake and enough electricity to power a small nation. The carbon footprint of a single high-end training run is a number so large it stops being a statistic and starts being a threat.
Fifth is the "Personalized Reality." The slides depict a world where your AI knows you better than your mother does. It curates your news, your shopping, and your social interactions. But intimacy sold as a service is just surveillance with a friendlier face. If the "change" is that we never have to talk to a person we disagree with, we haven't solved polarization. We’ve just built digital padded cells and called them "user-centric experiences."
Sixth: War at machine speed. The graphics for this section are usually sterile. Blue icons moving across a map. They talk about "autonomous defense systems." In reality, we’re looking at drones that make kill-chain decisions in milliseconds. The friction here isn't technical; it's moral. We’re handing the keys to a black box that can’t feel regret and can’t be held accountable in a Hague courtroom. It’s efficient. It’s also terrifying.
Finally, there’s the "Post-Truth Economy." Deepfakes aren't just for making celebrities say dumb things anymore. They’re for making your CFO think you’re on a Zoom call asking for a $25 million wire transfer. The "change" here is the total erosion of the "seeing is believing" principle. Once the cost of faking reality drops to zero, the value of the truth becomes prohibitively expensive.
The deck ends with a slide about "growth." It always does. But as you click through the last few pages of these glossy projections, you have to wonder about the people who made them. They seem so certain that more is always better. More data, more speed, more automation. They never stop to ask if we actually like the version of the world they’re building.
Are we actually solving problems, or are we just building a more expensive way to be lonely and confused?
