The limits of AI in 2025
Summary
I reflect on the accelerating power, and the persistent limits, of artificial intelligence. From storytelling to software, the future of work needs AI tools that augment human abilities rather than automate them. If we lose sight of this nuance, we risk falling prey to the Turing Trap.
I recently returned from a longish break, fulfilling a lifelong dream to see snow leopards in the wild. The time in the high Himalayas, thin air and light-headedness notwithstanding, gave me space to reflect on many aspects of life and work. These days, it feels like we’re all AI professionals, and my job title includes AI, so I quite literally earn my keep by thinking about such technology. It’s hard to pick a book about AI because the tech moves so fast, but one of the books I read during my break was Co-intelligence by Ethan Mollick. It’s a delightful read because it steers clear of the tech specifics and focuses more on the possibilities. Part two - the heart of the book - has chapters with the following titles.
AI as a person
AI as a creative
AI as a coworker
AI as a tutor
AI as a coach
AI as our future
As you’ll notice, each chapter addresses AI as a companion. Indeed, this companionship is where most AI applications focus today. But as I’ve written before, there’s a looming (a pun I’ll explain shortly) cloud of job losses that we also associate with AI. In fact, present-day AI eliminates a job precisely when that job is routine, non-novel and acceptably risky knowledge work.
The conditions that influence AI-led job losses
With every iteration of foundation models, AI capabilities are advancing at a frenetic pace. For example, look at Ethan Mollick’s article on how image-generation capabilities have improved in a matter of months. Mockups and combinations of text and imagery are now easier to produce. AI’s ability to understand creative briefs has improved as well. Does this disrupt design agencies that deliver visuals based on someone else’s creative vision? Well, absolutely! Designing a standalone visual is routine, non-novel and acceptably risky. However, arriving at the right creative vision is risky enough that most companies will want humans to do the job. And the idea itself must be novel enough to stand out amid the flood of AI-generated content.
This idea of which capabilities AI can replace brings me to the present-day limitations of AI. At work, I often share my updates as highly visual, recorded presentations. Over the years, I’ve developed a presentation style inspired by Garr Reynolds and Nancy Duarte. While both their books have been around for over 15 years, it’s fair to say that neither has eliminated death by PowerPoint. For better or for worse, my presentations often stand out in the usual crowd of slideuments. Anyway, before I went on holiday, I shared one of these visual updates, and the video garnered some curiosity amongst colleagues. One colleague asked me:
“What AI did you use to create this video?”
The fact is that despite being an “AI dude”, I use very little AI for my visual stories. Maybe I’ll ask it for help in getting started with a visual concept or for some feedback on my storyline, and it almost always generates my closed captions. But mostly, it’s the familiar grind of creating custom visuals, adding my voice to the narrative and piecing together animations so one concept flows into another. I can use AI as a production assistant but not as the director. Not yet.
And here’s where I see the dichotomy between what I think AI should do and the efforts of several AI entrepreneurs. Roy Bahat of Bloomberg Beta explains this dichotomy through three categories of AI applications - looms, slide rules and cranes.
Looms aim to replace people who do a particular job. Think about the non-novel, routine and acceptably risky knowledge work here. This metaphor should explain the earlier pun in this article.
Slide rules make people more effective - think of chat assistants, agents and software development tools like Replit or Cursor.
Cranes go beyond the capabilities and energies of any one person. Bahat gives the example of large-scale translations, or how AI provides researchers with instant access to predicted models of proteins they’re studying, so they can speed up their experiments and develop cures faster than ever.
The different categories of AI applications according to Roy Bahat
Bahat believes there’s an undue focus on looms and slide rules — and not enough on cranes. He refers to Erik Brynjolfsson of Stanford, who defines the Turing trap as:
“the false premise that the ultimate goal of AI should be to imitate humans”
At work and in society, the Turing trap is a significant pitfall that undermines and underutilises human potential. Brynjolfsson says,
“Too much focus on creating artificial humans can lead us into a trap where we sacrifice the real benefits of machines—namely, their potential to complement humans and do things we can’t.”
I don’t need a loom to replace me as a presenter and a storyteller. But yes, I’d like it to extend my capabilities like never before, by synthesising large volumes of data, finding relevant content from massive repositories, remixing ideas across sources and acting as a crane for my creative vision.
So, as I settle back into the world of work and AI, I wonder whether this fascination with the Turing test, and the consequent Turing trap we’re falling into, must end. Should the world of work focus more on augmentation than automation? Can we someday embrace the limits of AI instead of pushing them? Will the world of work be more enriching, and will society be more equitable, with more cranes than looms? I don’t have answers, but I’m certain this isn’t the last time these questions will be front and centre for me.