LLMs and Expertise

LLMs don’t replace expertise — they amplify it

Posted by hossg on June 11, 2025 · 2 mins read

Benedict Evans recently noted that an LLM-generated biography of him might be wrong and unhelpful, but that if he used it, edited it, and guided it, it could be very useful.

That’s the key: these tools aren’t autonomous agents; they’re amplifiers. They don’t tell “the truth, the whole truth and nothing but the truth.” Recognising hallucinations, gaps, and wrong assumptions takes expertise — and intent.

We’ve seen this pattern before:

  • Chess and Go players now train with AI, not against it, and human strategy has evolved dramatically as a result.
  • Radiologists using AI are more accurate than either AI or humans alone.
  • In software, LLMs can generate boilerplate and scaffolding fast, but they don’t reason about architecture, design trade-offs, or long-term supportability.

The risk is mistaking syntactic fluency for understanding. A coherent, confident answer can feel authoritative even when it’s subtly wrong. These models are designed to sound trustworthy, and that makes it easy to overtrust.

It’s like reading a respected newspaper, then noticing how flawed the reporting is on a subject you know well. Once you see the errors, gaps, and distortions, you begin to question the rest. The same is true with LLMs: the illusion of competence is strongest where your own knowledge is weakest.

This is particularly acute in software, where “vibe coding” - generating plausible-looking code without deep understanding - may work for quick fixes, but becomes a liability at scale. Tools can assist, but they don’t substitute for deliberate design or supportable engineering.

Used wisely, LLMs are powerful scaffolding. But for now, the agent - the one doing the thinking - is still you.