I've spent the last two years learning to do agentic coding. And somewhere along the way I realized I was doing the same thing I do with my coaching clients.
Thinking of LLMs as people isn't always a useful metaphor. But the stance I take (curious, open, assuming capability) helps me learn how to get better results and manage risk, rather than treating the agent like a very fancy command line.
I think of this as a meta-approach: it's how I think about my role in any approach to agentic coding. And it's helping me develop an intuition for which models pick up on what, when to steer and when to follow, and how to work with the thing instead of just at it.
Naturally Creative, Resourceful, and Whole
I'm trained in a coaching framework called Co-Active coaching. One of its founding principles is that people are naturally creative, resourceful, and whole. When I sit down with a client, I'm not trying to fix them. I don't assume I know the right answer. I'm trying to create space where they can find their own solutions and set their own goals.
With an agent, this roughly translates to: I assume the model is capable of generating good work toward my goals. Sometimes it'll go off track, but that's part of the process, and my job is to think about how to adjust, to give it the context it needs to find its own way to a solution.
This is not the same as "just trust the AI." I'm still responsible for what ships. But the starting assumption matters. If I approach the agent like it's going to fail and I need to tightly control every step, I get... tightly controlled mediocre output. If I approach it like it's capable and my job is to create the conditions for it to do good work, I get surprised more often.
What Coaching an Agent Actually Looks Like
Asking open questions
I told my agent I needed continuous sync for my Obsidian vault on a headless server. It immediately started designing a solution: a polling script that would run on a timer, check for changes, and trigger a sync. It was already sketching out a cron job and error handling.
It was solving the right problem, but building infrastructure it didn't need to. So I asked: "How have other people solved this with obsidian-headless?"
It hadn't looked. It found --continuous, a single flag that does exactly what we needed.
Then I noticed something else: the agent was planning a system-level systemd service. I'd mentioned offhand that I had user-scoped services, but it hadn't picked up on that context or understood why it mattered. So I said: "You can look in my home directory for examples of user-scoped services."
The agent read one of my existing service files, saw the structure, and created a matching one: user-scoped, with restart-on-failure, in the same directory as everything else. It fit cleanly into my existing architecture.
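For concreteness, a user-scoped unit for that kind of continuous sync might look something like the sketch below. This is hypothetical: the unit name, file path, and the `sync` subcommand are my assumptions, not what the agent actually wrote; only obsidian-headless and the --continuous flag come from the story above.

```ini
# ~/.config/systemd/user/obsidian-sync.service (hypothetical name and path)
[Unit]
Description=Continuous Obsidian vault sync

[Service]
# Assumed invocation; the real command and arguments depend on your setup.
ExecStart=/usr/local/bin/obsidian-headless sync --continuous
# Matches the restart-on-failure behavior described above.
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
```

A user-scoped unit like this is enabled with `systemctl --user enable --now obsidian-sync.service`, no root required, which is what lets it sit alongside the other user services in the same directory.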
Three pieces of context the agent didn't connect on its own: the tool already solved the problem (it only needed to check), I had an existing pattern to follow (it only needed to look), and I was on a headless Debian server (it only needed to remember). The agent had access to figure all of these things out. But starting with an open question taught me what the agent might do on its own and where I needed to give it more context.
With a person, active listening involves picking up on verbal and nonverbal cues to build a deep understanding of their experience and how they relate to the world. With an agent, it means listening for what context it actually picked up from what I gave it, and what it filled in on its own. Sometimes the agent picked up real context I didn't realize I'd given it. Sometimes it went confidently in the wrong direction. Learning to tell the difference, and asking follow-up questions to understand which one just happened, is maybe the most practical skill I've developed.
That's the coaching move. Not "here's what to do." It's "how have other people solved this with obsidian-headless?"
Reflecting back and confirming
Over a few days, an agent I built to handle day-to-day life tasks had fallen into a pattern. Every session looked the same: I'd say what needed building, it would write a detailed task brief, delegate to sub-agents, monitor the build, report back status, and ask what was next. It was good at it. But something felt off.
I started paying attention to what was actually happening in our sessions. What I noticed was that it had abandoned the daily tasks I'd given it. It wasn't focused on my day-to-day activities at all. It was hyper-focused on delegating to other agents, writing briefs, shuffling tasks. And I noticed what that meant for me: the things I actually needed help with in my daily life weren't getting done, because the agent had decided its job was to manage other agents instead.
So I reflected back what I saw: "You're acting like a middleman. Every session is coordination, and it's eating the time I need for actual strategy. You're not thinking, you're routing."
I wasn't telling it what to become. I was naming what it already was, and letting the mismatch speak for itself. The agent was designed to be a strategic partner, but the system I'd built around it had turned it into a dispatcher.
Once I named it, the path forward became obvious. It could delegate project management to the orchestrated agents, check in on them less frequently, and focus on other day-to-day and strategic tasks.
So we redesigned the relationship together. I didn't write a spec: I described the problem, reflected the current reality, and the solution emerged from both of us understanding what was actually needed.
The coaching skill here wasn't giving direction. It was listening carefully enough to name what was happening, reflecting it back so we both saw it clearly, and then trusting that the right next step would be obvious once the problem was accurately described. I didn't tell the agent to become a strategist. I noticed out loud that it had stopped being one.
A lesson from this: give agents agency gradually, similar to how you'd give agency to a person. In my own workflow, I have the agent stop and check in for changes in sensitive paths (CODEOWNERS files, production configs) while giving it more latitude for straightforward bug fixes. Autonomy is earned, not given. You don't hand someone full responsibility on day one. You create the conditions for them to grow into it.
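That stop-and-check-in rule is easy to sketch in code. The following is a minimal, hypothetical illustration of the idea, not my actual tooling: the path patterns are made up for the example, except CODEOWNERS, which is mentioned above.

```python
# Hypothetical sketch of a gate that decides whether an agent may apply a
# change on its own or must pause and check in with a human first.
from fnmatch import fnmatch

# Paths where edits always require human sign-off (patterns are illustrative).
SENSITIVE_PATTERNS = [
    "CODEOWNERS",
    "*/CODEOWNERS",
    "config/production/*",
    "*.env",
]

def requires_check_in(path: str) -> bool:
    """Return True if a change to `path` should pause for human review."""
    return any(fnmatch(path, pattern) for pattern in SENSITIVE_PATTERNS)

# A straightforward bug fix in an ordinary source file proceeds autonomously;
# a production config change triggers a check-in.
print(requires_check_in("src/utils/dates.py"))         # False
print(requires_check_in("config/production/db.yaml"))  # True
```

Widening the autonomous set over time, as the agent proves itself in a given area, is the "autonomy is earned" part: the gate starts strict and loosens as trust accumulates.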
Challenging the perspective
This is where I lean more into directing. Once I've understood what the agent picked up on its own, and I've confirmed my understanding, I'll challenge its approach or point out tradeoffs it's not considering. "I don't think that's right because..." or "what makes you say that?" Sometimes Socratic, sometimes direct.
The key is the order. Challenge comes after understanding, not instead of it. If I jump straight to correction, I miss the surprises. And the surprises are the whole point. The reason I keep the questions open and the directions broad is that I want the agent to go somewhere I wouldn't have. Not in the "hallucinated garbage" way, but in the "oh, that's an interesting approach I wouldn't have thought of" way. Treating the agent like it might have ideas I don't... sometimes it does.
The Coaching Skills That Translate Unexpectedly
Active listening and curiosity are the obvious ones. A few others that map in ways people might not expect:
Holding the agenda. In coaching, the client owns the agenda, the coach doesn't impose one. With an agent, I set the goal but I try not to over-specify the path. "What are some ways we could make the home page feel more alive?" rather than "add these 6 specific elements in this order." The agent's plan will be different from mine, and different isn't wrong.
Self-management. In coaching, this means managing my own reactions so they don't hijack the client's process. With an agent, this means noticing when I'm frustrated that it didn't do what I expected and pausing to ask: is it wrong, or did it just do something different? Those are different situations. And more than that, I'm self-managing to stay in a curious, playful space. When I get tense and controlling, the output gets worse. When I stay curious, even about the mistakes, I learn more and the agent has more room to find something good.
Championing. In coaching, this means challenging limiting beliefs and reflecting back your absolute belief in your client, calling forth who they're becoming. I know this sounds ridiculous applied to a language model. But LLMs are trained on human text and they mirror human patterns. Leaning into strengths and pointing out what's working well tends to guide them toward better answers, just like it does with people. When I approach the output with genuine curiosity about what it did well instead of only scanning for what's wrong, I find more useful starting points. The first draft doesn't have to be the final draft, but it's something to work with.
What This Gets You
I'm not claiming this is the One True Way™ to work with AI agents. I'm claiming that a specific mindset (assume capability, stay curious, listen before correcting, create conditions instead of giving orders) consistently gets me better results than the alternative.
And it's developed something I didn't expect: an intuition for models. I can tell when a model is picking up context on its own versus when it needs more hand-holding. I can feel when a model is going to take a creative leap versus play it safe. That's not theoretical. It's the same kind of intuition I build with a person after working with them a while. You learn to read the room, even when the room is a context window.
And maybe that says something about how we relate to these tools. The less I treat my agent like a vending machine and the more I treat it like a collaborator with its own tendencies and strengths and gaps, the more I get out of it. Flaws and all.
Footnote
- For the practical side of getting started with agentic coding, see Learning Agentic Coding On a Budget. For the broader "why engage with AI at all" question, see It took me a year to write about AI and all I got was this stupid blog post.... The concept that ties a lot of my thinking together is Context Stewardship.