There Are Two Very Different Ways to Use AI. Most People Only Know One.
A few weeks ago I found myself asking a chatbot to explain why Magnus Carlsen is so dominant in chess. Not because I needed to know for work. Just curiosity, on a Sunday afternoon. And it was genuinely good. Clear explanations of his endgame precision, his psychological approach, his ability to grind positions most grandmasters would accept as draws.
That kind of interaction is what most people mean when they talk about using AI. You ask something, it explains something. It is useful the same way a very well-read friend is useful, except available at 11pm and never tired of your questions.
But over the past few months I have started using AI in a completely different way. And the difference between the two has started to matter more than I expected.
I write at samyongzhi.com. The whole site was actually built using Claude Code. That part I had figured out. But the content side of things (writing drafts, proofreading, translating posts into Indonesian, figuring out the right angle for a piece) I was still doing in Claude’s chat interface or ChatGPT. Then copying the result. Pasting it into the right place. Running back to Claude Code to do the technical bits. A lot of switching between tools and windows.
What I realised recently is that none of that switching was necessary. Claude Code can handle the content work too, as long as the context is there. The memory files I had set up meant it already knew my writing style, my audience, the structure of my posts, the Indonesian conventions I follow for the translated versions. It did not need me to paste any of that in. It was already sitting there.
Publishing a single post involves more steps than most people realise. Here is what the actual process looks like:
1. Write the draft.
2. Translate it into Indonesian, because my wife is Indonesian and a meaningful portion of my audience is too.
3. Update the site’s codebase with the new files.
4. Push those changes to GitHub, which automatically triggers the live site to update.
5. Sync the post to Notion, which I use as a personal archive and reference.
Five steps. Each one involves different files, different tools, or different systems. Work through a chat interface and you still do most of the moving yourself. The AI gives you outputs. It does not actually move anything.
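The five steps above can be sketched as one ordered pipeline. Everything in this sketch is illustrative: the helper names (`translate_to_indonesian`, `push_to_github`, `sync_to_notion`) and the `posts/` layout are assumptions, stand-ins for a model call, git commands, and the Notion API. The point is the shape, a single sequence an agent can walk end to end.

```python
from pathlib import Path

def translate_to_indonesian(text: str) -> str:
    # Hypothetical helper: in practice this would be a model call.
    return "[id] " + text

def push_to_github(repo: Path) -> None:
    # Hypothetical helper: in practice, git add/commit/push,
    # which triggers the live site to rebuild.
    pass

def sync_to_notion(title: str, body: str) -> None:
    # Hypothetical helper: in practice, a Notion API call.
    pass

def publish_post(draft: Path, site_repo: Path) -> list[str]:
    """Walk the five steps in order; return the steps completed."""
    done = []

    body = draft.read_text()                      # 1. the finished draft
    done.append("draft")

    body_id = translate_to_indonesian(body)       # 2. translate
    done.append("translate")

    posts = site_repo / "posts"                   # 3. update the codebase
    (posts / draft.name).write_text(body)
    (posts / ("id-" + draft.name)).write_text(body_id)
    done.append("files")

    push_to_github(site_repo)                     # 4. push; deploy triggers
    done.append("push")

    sync_to_notion(draft.stem, body)              # 5. archive in Notion
    done.append("notion")

    return done
```

A chat interface hands you the output of step 1 or 2 and leaves steps 3 through 5 to you; an agent runs the whole function.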
The analogy that helped me understand the difference: using a chat AI is like briefing a contractor in a meeting room. You describe the job, they give you a plan or a draft, you take it back to the worksite yourself. Using an agent is like having someone sit at your desk. They can open your files, see the actual state of things, and just do the next step.
Same underlying capability. Completely different experience.
For my publishing workflow, this meant I could describe what I needed and watch it work through the whole sequence. Draft, translate, create the right files, push to GitHub, update Notion. Not perfectly every time. But continuously, without me copying and pasting between tools.
There was one moment that made the shift very concrete for me.
Claude Code kept pausing mid-workflow to ask for my permission before taking each action. Reasonable behaviour, honestly. But I had stepped away from my laptop for a bit, came back, and found the process completely stopped, waiting for me to approve the next step.
Out of curiosity, I asked Claude Code how to solve this. It explained that it could update its own settings. And it walked me through the options, from asking permission for almost everything to operating with much more autonomy.
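For reference, Claude Code reads permission rules from a project settings file (`.claude/settings.json`). A rough sketch of what the more autonomous end of that spectrum might look like; treat the specific rules here as illustrative rather than a recipe, and check the current documentation for exact syntax:

```json
{
  "permissions": {
    "allow": [
      "Edit",
      "Bash(git add:*)",
      "Bash(git commit:*)",
      "Bash(git push:*)"
    ],
    "deny": [
      "Bash(rm -rf:*)"
    ]
  }
}
```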
I sat with that for a moment. I had just asked a tool how to give itself more freedom to act. And it answered directly, laid out the tradeoffs, and let me decide.
That is not how I think about chatbots. That is something different.
My daughters are seven. They use AI the way most kids their age do when they encounter it: asking questions, getting answers, moving on. The same way I used Google as a kid, just faster and more conversational.
But the tools they will actually use as adults are going to look much more like what I described above. Not “ask it something and get an answer,” but “describe a goal and let it work through the steps.”
The skill that matters for that is not technical. It is not knowing how to code or understanding how language models work. It is knowing how to break a goal into clear steps. How to describe what you actually want, precisely enough that something can act on it. How to review what came back and judge whether it is actually right.
That is problem decomposition. That is critical thinking applied to outputs you did not produce yourself. We can teach that now, in contexts that have nothing to do with AI, long before our kids need to direct an agent through a complex workflow.
Most people’s mental model of AI is built around chat interfaces, because that is what most people have spent time with. The tools have moved further than the mental models have.
Worth paying attention to which one you are reaching for, and whether it actually fits the task.