Computers have an amazing new capability — LLMs — and we can all see that they are important and powerful.
But what isn't so clear is how to use them in practice. How do we harness their potential to help us do the things we care about?
And one thing is for sure: chat is not the answer.
When we think about other capabilities — like the CPU, or the GPU — a pattern emerges: these things became much more capable over their lifetimes, partly because they scaled (Moore's Law), but also because we built amazing abstractions for working with them.
For example, in 1955, one line of code was a physical card with holes in it. In 2020 a single developer could ship a full application in a weekend using React, SQLite, and a cloud platform. That's not because developers got smarter. It's because each generation built abstractions that let the next generation ignore the layer below.
With these abstractions you stopped thinking about memory addresses. Then you stopped thinking about memory allocation. Then you stopped thinking about servers. Each abstraction didn't just save time — it let you think about different, higher-level problems. Seventy years of abstractions led to a many-thousand-fold increase in programmer capability.
Well, today we are just about at the punch card level for LLMs. Everyone's hand-crafting prompts, manually managing context, duct-taping agent harnesses together, and writing system prompts that start with "You are a helpful assistant who MUST NOT" and go downhill from there. Thousand-fold productivity leaps are waiting for us — we just need to build the right abstractions.
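To make "punch card level" concrete, here is a minimal sketch of what that hand-crafting looks like in practice: assembling a prompt string by hand and evicting old conversation turns until it fits a budget. Everything here is illustrative — the system prompt, the character budget, and the shape of the history are all made up for the example, and whatever API client you actually call would sit below this.

```python
# A sketch of the "punch card era" workflow: a hand-assembled prompt
# string plus hand-rolled context management. All names and limits here
# are hypothetical, not any particular vendor's API.

SYSTEM_PROMPT = "You are a helpful assistant who MUST NOT ..."

def build_prompt(history, question, max_chars=4000):
    """Manually manage context: drop the oldest turns until the prompt fits."""
    while True:
        body = "\n".join(history + [f"User: {question}"])
        prompt = f"{SYSTEM_PROMPT}\n\n{body}"
        if len(prompt) <= max_chars or not history:
            return prompt
        history = history[1:]  # evict the oldest turn and try again

# Simulate a long conversation that no longer fits the budget.
history = [f"User: question {i}\nAssistant: answer {i}" for i in range(100)]
prompt = build_prompt(history, "What's next?")
print(len(prompt) <= 4000)  # prints True: the hand-rolled window held
```

This is exactly the layer an abstraction should eventually make invisible, the way garbage collectors made `malloc` bookkeeping invisible.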
But what are they?
It's hard to figure out with all the hype trains zipping around. I remember the first time I used LangChain. I thought — this is kinda cool, but it feels like a weekend hack, and it's harder to use than not to use. And I remember MCP, which I still think has the distinction of being the worst-explained piece of technology this decade. (If you ever have trouble sleeping, just watch a random sampling of YouTube videos on the topic.)
I had a big "aha" a few months ago in the form of Claude Code. It is an abstraction — maybe a programming language or tool or system — that gave me a major multiplier in my capability. It did this by elegantly connecting ideas that were already floating around (agents, tool use, vibe coding, and the raw LLM capabilities of reasoning and understanding human expression) and turning them into a step-change in productivity.
And if you haven't used Claude Code I think you should. It sounds really lame at first — a CLI!?! — but it is sublime. And it is for so much more than coding.
What do you think? Are abstractions going to be relevant in the LLM era? Or are raw LLMs going to get so smart so fast that it turns out chat is the only abstraction we need?