AI coding is a skill. You have to decide how much context to keep in your brain versus hand off to the AI. You can waste your time thinking about the wrong problem because you failed to delegate. Or you can give yourself a headache when the AI coder doesn’t get it.

I think about it as a spectrum of human-to-AI context. At the highest layers, we humans own all the context. We operate here when our specific value-add matters, and also in the many cases where AI coders aren’t that smart yet. At the lowest layers, we decide the context isn’t worth our attention and let the AI take over.

Let’s walk through the layers from most human to least human in the loop.

Most removed layer – ChatGPT dialog

(foundational education, high-level reflection, like talking to a colleague at another company)

Here you formulate a “Stack Overflow question” and paste it into ChatGPT. You’re engaging in Socratic dialog, trying to take apart a problem. This layer is useful because asking good questions is a skill: it requires YOU to reformulate the relevant context into a succinct question. This layer is like talking to a colleague at a different company about your problem. From them, you would get a less-biased perspective. Indeed, perspective is the key value here. It’s often like therapy: laying out problems from first principles, stating your assumptions explicitly, and thinking step by step about how to explain a problem helps solve the problem itself.

Claude Code / Codex ask knowledge questions

(ask questions about your code, like talking to a colleague at your company)

At this next layer deeper, we assume the project is in context. We ask questions like “in our project, see over here, we’re repeating the HTML header; in golang / fiber, how do we use a shared layout?”. This layer is like talking to a trusted colleague: they can give you direct answers that take the project’s context into account. You might get a quick answer, but you may not get the deep, foundational rethinking of things we sometimes need. The lower amount of human context means your brain isn’t forced to reformulate the context into a question anyone could understand. You trade off learning for convenience. This is fine; there is a lot we don’t need to waste our time learning deeply. Sometimes we need to just get shit done.
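To make that concrete, here’s roughly the kind of answer you’d hope to get back: Fiber’s html template engine supports shared layouts, where a {{embed}} placeholder in the layout file is replaced by the rendered page. A minimal sketch, with file names and the route being my own illustration rather than anything from the original question:

```go
package main

import (
	"log"

	"github.com/gofiber/fiber/v2"
	"github.com/gofiber/template/html/v2"
)

func main() {
	// Load templates from ./views; layouts/main.html holds the shared
	// HTML header and a {{embed}} placeholder for each page's body.
	engine := html.New("./views", ".html")

	app := fiber.New(fiber.Config{
		Views:       engine,
		ViewsLayout: "layouts/main", // default layout for every Render call
	})

	app.Get("/", func(c *fiber.Ctx) error {
		// Renders views/index.html inside views/layouts/main.html
		return c.Render("index", fiber.Map{"Title": "Home"})
	})

	log.Fatal(app.Listen(":3000"))
}
```

The layout file just needs a {{embed}} line where each page’s content should be injected, and the repeated header lives there once instead of in every view.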

Claude Code / Codex suggest changes

(get proposed changes along with explanations, copy-paste them in at our discretion, like asking a coworker for suggested changes)

Next we don’t even try that hard to learn. We assume the project is in context, the AI proposes changes, and we selectively copy-paste. We get specific, targeted changes proposed to us, incorporate them, and decide change by change when to trust the AI. I prefer this when it’s not clear I can trust the model to understand the programming language or framework, but I still want direct guidance on a task. Honestly, outside of the very core languages (HTML, Python, JavaScript, TypeScript), this is my preferred workflow.

Claude Code / Codex do the actual coding

(like delegating a task to a junior coworker)

You have an overconfident coding toddler full of the world’s programming knowledge. Give them a task, with strong guardrails so they don’t destroy your codebase. This is the maximum amount of trust, and it trades away the most learning for you, the human. It’s particularly useful in a core set of technologies like Python, HTML, JavaScript, etc. But beyond the “top 5 GitHub languages,” proceed carefully. I personally use this a lot for narrow changes to HTML, as I’m not a web designer or frontend programmer. However, I expect abilities here to get smarter, and my trust might increase. You can tune the guardrails to set how much latitude the AI gets versus the human.
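As one concrete way to set those guardrails, Claude Code reads a per-project settings file with permission rules. A minimal sketch of a .claude/settings.json; the specific rules are my own illustration, and the exact schema may differ by version, so check the docs:

```json
{
  "permissions": {
    "allow": [
      "Bash(go test ./...)",
      "Bash(git diff:*)"
    ],
    "deny": [
      "Bash(rm:*)",
      "Read(.env)"
    ]
  }
}
```

A tight deny list plus a short allow list keeps the overconfident toddler away from the power tools; you can loosen it as your trust grows.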

Human vs AI context (and trust!)

At each layer deeper, you personally need to process less context in your brain, letting the model own that context instead. You decide, depending on your situation, when the context is important for you to understand and when you trust the AI to just handle it. It’s like running a company. Is it important for you to get in the weeds on every detail? Or can you trust your AI coworker to do its work, however imperfectly?


Enjoy softwaredoug in training course form!

I hope you join me at Cheat at Search with LLMs to learn how to apply LLMs to search applications. Check out this post for a sneak preview.
