Agents
-
What Is an AI Agent, Actually?
We need some actual definitions. The word “agent” is getting slapped onto every product and service, and marketers aren’t doing anybody any favors as they SEO-optimize for the new agentic world we live in. There’s a huge range in what these things can actually do. Here is my attempt at clarity.
The Spectrum of AI Capabilities
Chatbot / Assistant — This is a single conversation with no persistent goals and no tool use. You ask it questions, it answers from a knowledge base. Think of the little chat widget on a product page that helps you find pricing info or troubleshoot a common issue. It talks with you, and that’s about it.
LLM with Tool Use — This is what you get when you open “agent mode” in your IDE. Your LLM can read files, run commands, edit code. A lot of IDE vendors call this an agent, but it’s not really one. It’s a language model that can use tools when you ask it to. The key difference: you are still driving. You give it a task, it does that task, you give it the next one.
Agent — Given a goal, it can plan and execute multi-step workflows autonomously. By “workflow” I mean a sequence of actions that depend on each other: read a file, decide what to change, make the edit, run the tests, fix what broke, repeat. It has reasoning, memory, and some degree of autonomy in completing an objective. You don’t hand it step-by-step instructions. You describe what you want done, and it figures out how to get there.
Sub-Agent — An agent that gets dispatched by another agent to handle a specific piece of a larger task. If you’ve used Claude Code or Cursor, you know what I’m talking about. The main agent kicks off a sub-agent to go research something, review code, or run tests in parallel while it keeps working on the bigger picture. The sub-agent has its own context and tools, but it reports back to the parent. It’s not a separate autonomous agent with its own goals. It’s more like delegating a subtask.
Multi-Agent System — Multiple independent agents coordinating together, either directly or through an orchestrator. The key difference from sub-agents: these agents have their own goals and specialties. They negotiate, hand off work, and make decisions independently. Think of a system where one agent monitors your infrastructure, another handles incident response, and a third writes the postmortem. Each agent operates autonomously but stays aware of the others.
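To make the tool-use vs. agent distinction concrete, here’s a minimal sketch of that autonomous loop. In a real system the “decide” step would be an LLM call; here it’s a hard-coded policy so the example runs on its own, and every name (the stub tools, the buggy file contents) is hypothetical.

```python
# Minimal agent loop: observe, plan, act, repeat until the goal is met.
# All tools are stubs; STATE stands in for the filesystem.

STATE = {"code": "def add(a, b): return a - b"}   # stub: buggy file

def edit_file(path, new_text):
    STATE["code"] = new_text                      # stub: apply the edit

def run_tests():
    return "return a + b" in STATE["code"]        # stub: pass once fixed

def decide(goal, tests_pass):
    # Stand-in for the model's reasoning step.
    if not tests_pass:
        return ("edit_file", "def add(a, b): return a + b")
    return ("done", None)

def agent_loop(goal, max_steps=5):
    for step in range(max_steps):
        tests_pass = run_tests()                  # observe
        action, arg = decide(goal, tests_pass)    # plan
        if action == "done":
            return f"goal met after {step} edit(s)"
        edit_file("math.py", arg)                 # act
    return "gave up"

print(agent_loop("fix add() so the tests pass"))
```

The point of the loop structure: you never told it *which* edit to make or *when* to stop. You stated the goal, and the observe/plan/act cycle got there on its own.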
So How Is Something Like OpenClaw Different From a Chatbot?
A chatbot is designed to talk with you, similar to how you’d just talk with an LLM directly. OpenClaw is designed to work for you. It has agency. It can take actions. It’s more than just a conversation.
Obviously, how much it can do depends on what skills and plugins you enable, and what degree of risk you’re comfortable with. But here’s the interesting part: it’s proactive. It has a heartbeat mechanism that keeps it running continuously in the background. It’ll automatically check on things or take action on a schedule you specify, without you having to prompt it.
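The heartbeat idea can be sketched as a plain scheduler loop. To be clear, this is not OpenClaw’s actual API; the interval and the check functions below are hypothetical stand-ins for whatever skills you’ve enabled.

```python
import time

# Heartbeat sketch: the agent wakes on a fixed interval, runs its
# scheduled checks, and acts without the user prompting it.

def check_inbox():
    return "2 new messages flagged for follow-up"

def check_calendar():
    return "standup in 15 minutes"

SCHEDULED_CHECKS = [check_inbox, check_calendar]   # hypothetical skills

def heartbeat(beats=3, interval=0.01):
    log = []
    for _ in range(beats):
        for check in SCHEDULED_CHECKS:
            log.append(check())     # proactive: no prompt needed
        time.sleep(interval)        # wait for the next beat
    return log

results = heartbeat()
print(f"{len(results)} check results over 3 beats")
```

The design choice worth noticing: the user appears nowhere in the loop. That’s the line between a chatbot, which only acts inside a conversation, and an agent with a heartbeat.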
A Few Misconceptions Worth Clearing Up
OpenClaw is just one specific framework for building and orchestrating agents, but the misconceptions around it apply broadly.
“Agents have to run locally.” That’s how OpenClaw works, sure. But plenty of enterprise agents run invisibly in the background on infrastructure you never see. Your agent doesn’t need to live on your laptop.
“Agents need a chat interface.” Because you can talk to an agent, people assume you must have a chat interface for it to be an agent. But by definition, agents don’t require a conversation. They can just run in the background doing things. No chat window needed.
“Sub-agents are just function calls.” This one trips up developers. When your agent spawns a sub-agent, it’s not the same as calling a function. The sub-agent gets its own context window, its own reasoning loop, its own tool access. It can make judgment calls the parent didn’t anticipate. That’s fundamentally different from passing arguments to a function and getting a return value.
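A toy contrast, under loudly hypothetical names: the function maps arguments to a return value and nothing else, while the sub-agent carries its own context and its own (stubbed) loop over its own tools.

```python
# A plain function call: arguments in, value out, no state, no loop.
def word_count(text):
    return len(text.split())

# A sub-agent sketch: its own context window, its own tool access,
# its own reasoning loop (stubbed here as "run every tool in order").
class SubAgent:
    def __init__(self, task, tools):
        self.task = task
        self.context = []          # its own context window
        self.tools = tools         # its own tool access

    def run(self):
        for tool in self.tools:    # its own loop; the parent only
            self.context.append(tool(self.task))  # sees the report
        return {"task": self.task, "findings": self.context}

# Hypothetical tools the parent delegated along with the task.
def search_docs(q):
    return f"found 3 docs about {q!r}"

def summarize(q):
    return f"drafted a summary of {q!r}"

report = SubAgent("flaky test in CI", [search_docs, summarize]).run()
print(word_count("just a value"), "|", report["findings"][-1])
```

The parent never sees the intermediate steps, only the report that comes back. That opacity, plus the sub-agent’s freedom inside its own loop, is what a function call doesn’t have.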
Why Write This Down
I mainly wrote this for myself. I keep running into these terms and need a mental model to put them in context as I think about building agentic systems and decide what level of capability I actually need for a given problem. The process of writing it down makes those decisions somewhat easier.
-
Spent some time with Google Antigravity today and I think I’m starting to get it. Built a few agents to test it out. The agent manager stuff is genuinely interesting and seems useful. The planning features (spec-driven development) though? Not sold on those yet.