LLMs don’t replace expertise — they amplify it
Benedict Evans recently noted that an LLM-generated biography of him might be wrong and unhelpful - but if he used it, edited it, and guided it, it could be very useful.
That’s the key: these tools aren’t autonomous agents; they’re amplifiers. They don’t tell “the truth, the whole truth and nothing but the truth.” Recognising hallucinations, gaps, and wrong assumptions takes expertise — and intent.
We’ve seen this pattern before: overtrusting sources that merely sound authoritative.
The risk is mistaking syntactic fluency for understanding. A coherent, confident answer can feel authoritative even when it’s subtly wrong. These models are designed to sound trustworthy, and that makes them easy to overtrust.
It’s like reading a respected newspaper, then noticing how flawed the reporting is on a subject you know well. Once you see the errors, gaps, and distortions, you begin to question the rest. The same is true with LLMs: the illusion of competence is strongest where your own knowledge is weakest.
This is particularly acute in software, where “vibe coding” - generating plausible-looking code without deep understanding - may work for quick fixes but becomes a liability at scale. Tools can assist, but they don’t substitute for deliberate design or maintainable engineering.
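To make that concrete, here is a small, hypothetical sketch (in Python, not taken from any real model output) of how plausible code can hide a subtle flaw: both functions answer “give me the current UTC timestamp”, both run, and both read as reasonable, yet the first quietly returns the wrong value on any machine whose clock isn’t set to UTC. The function names are mine, chosen for illustration.

```python
from datetime import datetime, timezone

def utc_timestamp_plausible() -> float:
    """Looks correct, and an assistant will happily produce it."""
    # Subtle flaw: utcnow() returns a *naive* datetime, and .timestamp()
    # interprets naive datetimes as local time, so on any non-UTC machine
    # the result is silently shifted by the local UTC offset.
    return datetime.utcnow().timestamp()

def utc_timestamp_reviewed() -> float:
    """The version an experienced reviewer would insist on."""
    # An aware datetime carries its timezone, so .timestamp() yields the
    # true Unix epoch value regardless of where the code runs.
    return datetime.now(timezone.utc).timestamp()

if __name__ == "__main__":
    # On a UTC host the two agree; elsewhere they diverge by whole hours.
    print(utc_timestamp_plausible(), utc_timestamp_reviewed())
```

Spotting that difference takes exactly the kind of domain expertise these tools are supposed to amplify.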
Used wisely, LLMs are powerful scaffolding. But for now, the agent - the one doing the thinking - is still you.