Why Bill Gates is wrong about AI and 3 things he needs to realize


Bill Gates has been making the rounds lately to promote his new memoir, Source Code, and sharing his vision of rapid and massive transformation of our lives over the next decade by artificial intelligence. He pitched a world where "Intelligence will be completely free" in an interview with Harvard professor Arthur Brooks, with ubiquitous and universally available AI tutors and doctors that outmatch most educators and medical practitioners.

He went even further in a recent appearance on The Tonight Show. When Jimmy Fallon somewhat nervously asked him if we'll still need humans, Gates quipped, "Not for most things."

Look, I admire Gates’ enthusiasm. He is clearly invested in a future where AI extends opportunity to underserved populations and pushes the limit of what people are capable of, which is great. It's a little absurd to believe that AI will replace most human roles within a decade, though, and doubly so for positions built on human-to-human interaction. That's more than optimism; it drifts into the kind of AI-style hallucination that undermines adoption in precisely those fields.

AI Limits

For one thing, current AI models aren't completely ready for what he describes. Yes, large language models behind tools like ChatGPT and Gemini are impressive when it comes to mimicking conversation, writing code, and even imitating human painters. But the illusion of competence hides a laundry list of unresolved issues. AI still makes mistakes, sometimes hilarious ones, but it's not so funny when you fail a test or get misdiagnosed.

Anyone who’s spent more than ten minutes with a chatbot has probably watched it veer off into at least some nonsense, whether confidently inventing facts or suggesting you eat rocks. These aren’t just glitches. They’re systemic quirks that stem from the way these models work, using statistical pattern recognition without real understanding.

Even the companies building this stuff are quietly worrying they’re running out of quality training data. Once you’ve consumed the entire publicly available internet, you hit diminishing returns. It’s like trying to get smarter by rereading the same old textbooks; you might sharpen some things, but you won’t have new insights. Without breakthroughs in how we train and structure AI models, we may be closer to an awkward plateau than the exponential curve that Gates's future would require.

Human touch

Even if AI gets way better, it still won’t be human. That’s not just sentimental—it’s functional. So many jobs that Gates suggests AI could “solve” rely on things no machine has: a childhood, a body, a lifetime of subtle emotional calibrations.

Yes, AI is getting better at reading and employing emotional nuance, but I remain skeptical that it could match even an above-average human teacher or doctor, let alone the best of them. Could an AI earn a teenager’s trust when they’d rather be literally anywhere else but in a lesson, or sit with a patient in pain and make them feel heard? Maybe, but not in ten years.

What makes for competent logistics planning, customer service, human resource management, and so many other roles is the ability to balance human needs, motivations, and unpredictability. AI can help in all of these fields. It already does. It can write reports, crunch numbers, and flag anomalies. In some cases, it can outperform humans. But replacing the entire role is like assuming that because an AI can paint in the style of Van Gogh, it could also have survived his mental illness, navigated 19th-century Paris, and invented post-impressionism. It’s not just about output—it’s about the messy, lived-in process behind it.

A deep reservoir of subtle, emotional intelligence is baked into any human career involving other humans. Gates seems to think this can be simulated convincingly enough to make no difference. I’m not so sure.

AI suspicions

This brings me to my last point: even if AI could match or beat human performance in nearly everything, it doesn’t mean people will want that. Let’s not forget that we’re a species with many members who enjoy small talk with baristas even when there’s a self-checkout option. Most people value other humans for more than just the mechanical aspect of their profession, especially in areas like medicine, education, and caregiving.

On The Tonight Show, Gates joked that no one wants to watch robots play baseball, and he’s right. But he stops short of realizing that many people won’t want only robots to teach their kids how to play. Not because the robots aren't technically competent, but because we still prefer the flawed but relatable experience of other humans.

Sure, I'd love a hyper-precise machine with a well-trained AI to perform micro-surgery on me, but there had better be a human surgeon overseeing its work and watching for anything beyond the machine's narrow focus.

AI futures

Gates’s suggestions aren't bad ideas if applied correctly. He’s absolutely right that AI can help extend access to critical services in places that don’t currently have enough teachers or doctors. The part of his vision where AI becomes a helpful assistant for everyone, filling gaps and enhancing what humans already do, is something I'd love to see come true. That said, the leap from “AI can help” to “AI will do everything” is a dangerous oversimplification of both technology and humanity.

So yes, AI is going to change the world. It already has. But not in the clean, utopian, humans-on-vacation way that Gates imagines. It’s going to be messier. Slower. Full of unexpected detours and stubbornly human resistance. People often like their teachers and trust their doctors. They might let AI help, but they won’t give up that human touch without a fight.
