> AI will replace ALL developers by 2027. Not a question of if, but when. Thread 🧵👇
> My $997 prompt engineering masterclass will change your life. Last 3 spots!!
> If you're not using AI for EVERYTHING you're already obsolete. Wake up.

> Spent 3 days debugging hallucinations in prod. Here's what I learned about guardrails →
> AI cut our boilerplate time by 60%. But we still write all architecture decisions by hand.
> Tried 4 approaches to AI-assisted code review. 2 failed badly. Here's what actually worked.
There’s a phrase I keep coming back to: “the playbook.”
Not a literal book. Not a framework some consultant is selling. It’s simpler than that. It’s the idea that when it comes to AI, the only opinions worth your time come from people who are in the trenches. Building things. Shipping things. Breaking things and fixing them at midnight. Not the ones giving keynotes about a future they’ve never touched.
The playbook is: listen to the builders. Ignore everyone else.
The Noise Problem
Open any social media platform and you’ll find thousands of people talking about AI. Posting threads. Recording podcasts. Selling courses. Hosting webinars about “the future of work.”
Most of them have never built anything with AI.
They’ve used ChatGPT to write a LinkedIn caption. They’ve watched a demo video. They’ve read an article and formed a strong opinion based on someone else’s experience. And now they’re positioning themselves as authorities.
This is the noise. And it’s deafening.
The problem isn’t that they’re wrong about everything. Some of their observations are fine. The problem is they’re operating without feedback loops. They’re theorizing without testing. Predicting without building. And when you take advice from someone who hasn’t tested their ideas against reality, you’re building on sand.
What “Building” Actually Means
Let me be specific about what I mean by “building,” because the word gets thrown around loosely.
Building doesn’t mean:
- Prompting ChatGPT and calling the output yours
- Using a no-code tool once and tweeting about it
- Watching someone else build on a livestream
- Reading release notes and forming opinions
- Selling a course on a tool you used for a week
Building means:
- Writing code (or working closely with someone who does) that integrates AI into real workflows
- Shipping products that real people use, then dealing with what breaks
- Spending hours debugging why the model hallucinated in production
- Making architectural decisions about where AI belongs and where it doesn’t
- Iterating based on actual user feedback, not theoretical use cases
- Dealing with the boring parts: rate limits, costs, latency, edge cases, error handling
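The boring parts are also where most of the actual code lives. A minimal sketch of one of them, retrying past a provider's rate limit with exponential backoff. `RateLimitError` and the flaky call below are stand-ins for illustration, not any real SDK's API:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the 429-style error a real model API would raise."""

def call_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry a flaky API call, doubling the wait each attempt plus jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; let the caller deal with it
            # base * 2^attempt, with up to 2x jitter so retries spread out
            time.sleep(base_delay * 2 ** attempt * (1 + random.random()))

# Usage: a fake call that hits the rate limit twice, then succeeds.
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError
    return "ok"

print(call_with_backoff(flaky_call, base_delay=0.01))  # prints "ok"
```

None of this is glamorous, and none of it shows up in a keynote. It shows up the first time production traffic hits the provider's limits.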
The distinction matters because building gives you something theory never can: contact with reality. When you build, you learn what works and what doesn’t. You develop intuition that can’t be faked. You earn opinions instead of borrowing them.
The Gap Between Theory and Practice
Here’s what I’ve noticed from actually building with AI every day.
The theorists say: “AI will replace developers.”
The builders know: AI makes good developers faster. It doesn’t replace the thinking. It accelerates the doing. The people who understand architecture, system design, and debugging are using AI to ship at 3x speed. The people who can’t code are generating apps they can’t maintain (I wrote a whole post about this with Lovable).
The theorists say: “AI can do everything now.”
The builders know: AI is incredible at certain tasks and terrible at others. It can generate boilerplate in seconds. It can refactor code faster than I can read it. But it can’t hold context across a complex system. It can’t make judgment calls about product direction. It can’t understand why your user is frustrated. There are clear boundaries, and the only way to find them is to push against them daily.
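Pushing against those boundaries daily tends to leave guardrails behind in the code. Here's a hedged sketch of one such guard, validating structured model output before anything downstream trusts it; the required fields are invented for illustration:

```python
import json

REQUIRED_FIELDS = {"title", "summary"}  # hypothetical schema for this sketch

def parse_model_output(raw: str) -> dict:
    """Reject model output that isn't valid JSON with the expected fields,
    instead of letting a hallucinated shape flow downstream."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model did not return JSON: {exc}") from exc
    if not isinstance(data, dict):
        raise ValueError("model returned JSON, but not an object")
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"model output missing fields: {sorted(missing)}")
    return data

good = parse_model_output('{"title": "Hi", "summary": "A post."}')
```

The check itself is trivial. Knowing that you need it, and where, is the intuition you only get from watching unvalidated output break something.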
The theorists say: “You need to learn prompt engineering.”
The builders know: The tools are moving so fast that the best “prompt engineering” is understanding your problem deeply and communicating it clearly. That’s not a new skill. That’s the same skill good developers and communicators have always had. The fancy prompt templates people sell will be irrelevant in six months.
The theorists say: “This changes everything overnight.”
The builders know: Change is real but gradual. You integrate AI into one workflow. You measure the results. You adjust. You expand to the next workflow. It’s not a revolution that happens on a Tuesday. It’s a compounding advantage that builds over months of consistent application.
How to Spot a Builder
So how do you filter signal from noise? Here’s what I look for.
They show their work. Builders share what they’ve built, not what they think about what someone else built. They post screenshots, demos, code snippets, results. They talk about specific problems they solved, not abstract possibilities.
They talk about failures. Anyone can share wins. Builders talk about what went wrong. The feature that didn’t work. The integration that fell apart. The approach they abandoned. If someone only shares success stories, they’re either lying or not building anything complex enough to fail.
They have nuanced opinions. “AI is amazing” is not an opinion. “AI is great for X but terrible for Y, and here’s what I learned the hard way” is an opinion earned through experience. Builders live in the gray areas because that’s where reality lives.
They change their minds. Someone who held one opinion six months ago and holds a different one now isn’t inconsistent. They’re learning. Builders update their views because they keep getting new data. The people who’ve been saying the same thing for two years are the ones who stopped learning.
They’re specific. Vague claims are a red flag. “AI is transforming industries” tells you nothing. “We reduced our content production time by 60% by using Claude for first drafts and having a human editor do final passes” tells you everything. Specificity comes from experience. Vagueness comes from imagination.
They’re humble about what they don’t know. The loudest voices in AI are often the least experienced. Builders have been humbled by the technology enough times to know they don’t have all the answers. They’ll tell you “I’m not sure about that” because they’ve learned that certainty is expensive when you’re wrong.
Why This Matters Right Now
We’re in an unusual window. The tools are new enough that real expertise is rare but accessible. You don’t need a PhD or a massive budget. You need curiosity and a willingness to build.
But this window creates a problem: the gap between genuine expertise and performed expertise is almost invisible to an outsider. Someone with a polished website and confident delivery can seem more credible than someone who’s been shipping code every day but doesn’t have a personal brand.
That means people are making real decisions (what to learn, what to build, what tools to adopt, what to invest in) based on the opinions of people who’ve never tested those opinions against reality.
The cost of following bad AI advice is real:
- You waste months learning the wrong tools
- You build on platforms that won’t scale
- You adopt workflows that look impressive in demos but fall apart in practice
- You miss the approaches that actually work because they’re not flashy enough for a keynote
The Builder’s Advantage
Here’s the thing about building: it compounds.
Every project teaches you something the tutorials can’t. Every bug reveals a limitation the marketing materials won’t mention. Every shipped feature gives you calibration that no amount of reading provides.
After months of building with AI, you develop a sense for it. You know when to trust the output and when to rewrite it. You know which tasks to hand to AI and which to do yourself. You know the difference between AI-assisted work and AI-generated garbage.
That intuition is the real playbook. And you can’t buy it. You can’t learn it from a course. You can only earn it by doing the work.
The Uncomfortable Truth
Most of the people talking about AI right now will have moved on to the next trend in a year. They were talking about crypto before this. NFTs before that. They’ll be talking about whatever comes next.
The builders will still be here. Still shipping. Still learning. Still refining their understanding through the only method that works: contact with reality.
That’s who I listen to. That’s who I learn from. That’s who I want to surround myself with.
Not the keynote speakers. Not the course sellers. Not the thread writers who’ve never opened a terminal.
The builders.
How to Apply This
If you’re trying to navigate the AI landscape, here’s the filter:
- Before taking someone’s AI advice, ask: “What have they built?” If the answer is “content about AI” and nothing else, move on.
- Follow the builders. Find people who share their actual work. Their actual results. Their actual failures. Learn from their specifics, not someone else’s abstractions.
- Become a builder yourself. Pick something small. Build it with AI. Ship it. The understanding you gain from one real project will outweigh everything you’ve read about AI combined.
- Stay skeptical of hype. If something sounds too good to be true, it probably is. The real capabilities of AI are impressive enough without exaggeration.
- Update your beliefs. Build, learn, adjust. Repeat. The people who are most right about AI are the ones who’ve been wrong the most times and updated accordingly.
The playbook isn’t complicated. It’s not a secret. It’s the same advice that’s been true in every industry, for every technology, forever.
Listen to the people who are building. Not the ones who are performing.
The signal is in the work. It’s always been in the work.
Currently: building things, breaking things, learning things. The usual.