Brittany Ellich · tutorial · 13 min read

Start where you are: A practical guide to building with AI

The best practices for building with AI haven't been written yet, and that's actually exciting. This post breaks down a layered approach to AI-assisted development, from chat to coding agents to agent fleets, with practical tips for getting started no matter where you are.

A former manager of mine recently reached out and asked if I’d come talk to his team about how I use AI in my day-to-day work. I loved this because 1) I was super flattered and 2) I got to prep something super loose and easy. It wasn’t a marketing pitch or a conference talk, it was just a conversation with a team of engineers who wanted to figure out how to actually use these tools.

I put together a presentation for his team and I wanted to turn it into a blog post because I think the framework is useful for anyone trying to figure out where to start (or where to go next) when learning to build with AI.

A few caveats before I jump in: I’m a software engineer. I’m not an AI researcher, I don’t work on GitHub Copilot, and I’m not speaking on behalf of my company. I’m not here to market anything. My suggestions are tool-agnostic when they can be, though most of my experience is with Copilot, so that’s where my examples tend to come from. Use whatever tool works for you.

Meeting you where you are

Before getting into the practical stuff, I think it’s worth acknowledging that people are in very different places with AI right now. I break it down into three rough buckets:

AI Enthusiasts are already building with AI daily and actively keeping up with new releases. AI Skeptics may have tried it a while ago but wrote it off, or they’re curious but not yet convinced. And AI Vegans are opting out on moral grounds, not opposed to the learning, but opposed to the tool.

All of these are valid positions. Some people are excited, and that’s awesome. Some people are really tired of being marketed to, and that’s completely understandable (honestly, same). Some people are abstaining for reasons that are also valid. I wrote about this dynamic more in AI Has an Image Problem and Living in the Inflection Point.

Here’s the thing I want to emphasize, though: this is a really good time to get good at this. The best practices for building with AI haven’t been written yet. Builders like you and me get to write them. This is an incredible time to jump in and be part of the story that figures this whole thing out.

The framework I use to describe the different ways to build with AI is layered. Most people start with one thing, get comfortable, and then add the next. Let’s walk through it.

Layer 1: Synchronous AI

Synchronous AI is working directly with AI tools in real time. This is where most people start, and it’s the easiest on-ramp.

There are three main interfaces here: Chat (like Copilot Chat, Claude, or ChatGPT), Integrated Development Environment (IDE) integrations (like the Copilot extension in VS Code, Claude Code in VS Code, or Cursor), and Command Line Interface (CLI) tools (like Copilot CLI or Claude Code). IDE and CLI tools cover basically the same ground and come down to individual preference, though the CLI has been getting a lot of love lately, so I see far more CLI demos than IDE demos.

Chat is the simplest entry point. It replaces Stack Overflow and Google searches for quick questions. The key to doing it well is providing lots of context. Instead of a Google search like “for loop Golang,” you can ask: “What’s the most efficient way to write a for loop for iterating through an array in Go?” You get a better, more tailored answer.
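For that Go question, the kind of answer you'd hope to get back looks something like this. This is a minimal sketch (the slice contents are made up), but it shows the idiomatic `range` form a good chat answer would point you toward:

```go
package main

import "fmt"

func main() {
	// range is the idiomatic way to iterate a slice or array in Go.
	// It yields the index and a copy of the element at that index.
	nums := []int{10, 20, 30}
	for i, n := range nums {
		fmt.Printf("index %d: value %d\n", i, n)
	}

	// If you only need the values, discard the index with _.
	sum := 0
	for _, n := range nums {
		sum += n
	}
	fmt.Println("sum:", sum) // prints "sum: 60"
}
```

The point isn't the loop itself; it's that a context-rich question gets you an answer tailored to your actual use case instead of a generic snippet.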

IDE and CLI tools are where the real development assistance happens. The key skills here are context management, using custom agents and skills, and the “plan then implement” workflow.

Context management

This is probably the most important thing to get right. Context management means giving the AI the right information about your project: not too much and not too little. In practice, this means maintaining a file like copilot-instructions.md (for GitHub Copilot) or claude.md (for Claude Code) in your repository. Or even better, a more open standard like agents.md (detailed here). These files contain things like: “In this repository, we want unit tests for every individual function and integration tests for larger functionality,” or “In this Go project, we prefer table-driven tests,” or “Make sure to run go fmt before committing any code.” It’s simple, but it makes a huge difference in the quality of what the tools produce.
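As a concrete example, here's what a minimal agents.md might contain for a Go project like the one described above. The specific rules are illustrative, not prescriptive; the point is short, unambiguous instructions:

```markdown
# Agent instructions

## Testing
- Write unit tests for every individual function and integration
  tests for larger functionality.
- Prefer table-driven tests for Go code.

## Before committing
- Run `go fmt ./...` and `go vet ./...` on any changed packages.
```

A file like this gets loaded into the agent's context automatically, so you stop repeating the same guidance in every prompt.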

Custom agents vs. skills

These two concepts are gaining a lot of traction and they’re easy to confuse, so here’s how I think about the difference.

Custom agents (Copilot | Claude) provide a subset of context that’s useful for specific situations. Think: “You’re a documentation expert, use the Diataxis framework” for writing docs. Or: “This legacy code is written with jQuery, here’s the context to know about it for when a fix is happening there.” You don’t need that context all the time, but when you do, it’s really helpful.
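The exact file format varies by tool, but as one hedged example, a Claude Code subagent is a markdown file with YAML frontmatter. A documentation-expert agent might look roughly like this (the name and prompt are made up for illustration):

```markdown
---
name: docs-expert
description: Use for writing or reviewing documentation.
---

You are a documentation expert. Structure all docs using the
Diataxis framework: tutorials, how-to guides, reference, and
explanation. Keep tutorials task-oriented and reference pages
exhaustive but terse.
```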

Skills (Copilot | Claude) are for completing a repetitive task. Think: “Here are the steps to integrate a new SKU into the billing platform” or “These are the steps to create a new version of this API.” If you find yourself doing the same thing over and over, a skill is probably what you want. There’s a great Anthropic skills repo and a GitHub community Copilot skills collection to get you started.
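Skills follow a similar file convention. In Anthropic's format, a skill is a folder containing a SKILL.md with frontmatter plus step-by-step instructions. A sketch for the API-versioning example might look like this (all names and steps are hypothetical):

```markdown
---
name: create-api-version
description: Steps to create a new version of the public API.
---

1. Copy the current version directory (e.g. `v2/`) to the next
   version number.
2. Update the router to register the new version's routes.
3. Mark any removed endpoints as deprecated in the old version.
4. Update the changelog and regenerate the API reference docs.
```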

Plan then implement

This is the most popular way I see people working synchronously with AI tools right now, and it’s the workflow I recommend to anyone getting started. It’s straightforward: you start in plan mode. “I need to add a button to do X, help me plan this feature.” The AI explores the codebase, adds context, and gets clarity on requirements before building anything. You review the plan, make adjustments, and then say: “Please implement this.” Plan mode is powerful because it keeps you in the driver’s seat. You’re not just hoping the AI does the right thing, you’re reviewing the approach and approving it or correcting it before any code gets written.
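In practice, the exchange looks something like this (the feature and details are invented for illustration):

```markdown
**Plan prompt:** “I need to add a CSV export button to the reports
page. Don't write any code yet. Explore the codebase and propose a
plan, including which files you'd touch and any open questions.”

**Review:** Read the plan and correct wrong assumptions, e.g. “the
export logic should live in the existing report service, not a new
module.”

**Implement prompt:** “The plan looks good with that change. Please
implement it, running the tests as you go.”
```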

Layer 2: Asynchronous AI

This is where things get interesting, and where the mental shift gets harder. Asynchronous AI means delegating tasks to coding agents and code review agents. Instead of working alongside the AI in real time, you’re assigning work and checking in on the results later. I’ve found this is one of the most difficult skills for folks to learn, particularly if they’ve never worked in a lead or manager role where writing up requirements for someone else to implement is routine.

Coding agents let you assign an issue to tools like Copilot, Claude, or Codex agents and let them work on it. Code review agents like Copilot Code Review or CodeRabbit automatically review code written by a person or by AI. And before you ask: yes, there is value in having AI review AI-written code. You can even use a different model than the one that wrote the code for a fresh perspective.

The hardest part about this layer isn’t the tool. It’s letting go of control over the task.

What makes a good agent task

This is a judgment skill that you’ll only learn by doing, but the general rule is: small, well-scoped tasks. “Improve test coverage in this file” or “Fix the order in which these things load” are great agent tasks. “Rewrite this entire codebase in Rust” is not. A tip that’s helped me a lot: use AI to help you write the agent tasks, or to break larger tasks into smaller ones. I wrote more about this in A Software Engineer’s Guide to Agentic Software Development, and there’s a great post on the GitHub blog called WRAP Up Your Backlog that goes deeper on writing good issues for agents.
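For comparison, here's roughly what a well-scoped agent issue looks like, written as if for someone brand new to the codebase. The package paths and coverage numbers are invented for illustration:

```markdown
## Improve test coverage for the parser package

**Context:** `internal/parser` has low coverage. The exported
functions `ParseHeader` and `ParseBody` have no tests.

**Task:** Add table-driven unit tests covering the happy path plus
malformed input for both functions. Don't change any parser logic.

**Done when:** `go test ./internal/parser/...` passes and coverage
for the package has meaningfully improved.
```

Notice that it states the context, bounds the change, and defines done. That's most of what an agent needs.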

Layer 3: Agent fleets

This is the bleeding edge right now (as of February 2026, so it’s subject to change): orchestrating lots of agents at once. Tools like fleet mode in GitHub Copilot and agent orchestration in Claude Code let you delegate a large task to an agent that then breaks it into smaller pieces and delegates those to sub-agents. I’m going to be honest: I don’t really know the best practices here yet, and I don’t think most people do. I rarely have a need for this myself and don’t know many folks doing it regularly. From what I’ve heard from those who do, it seems to go hand-in-hand with a future where teams stop reviewing individual lines of code entirely, which is a big shift that’s still playing out. This is on the roadmap for most of us, but if you’re not there yet, that’s okay.

Becoming AI fluent

This last section is a grab bag of practical tips I’ve collected from my own experience and from watching others navigate this space.

Be mindful of security

This one is important. A lot of AI tools, especially MCP (Model Context Protocol) servers, present some real security risks. MCP is essentially an API for your agent that lets it communicate with other tools like Google Calendar, Notion, and so on. MCP servers are really easy to set up, which is both their strength and their risk. Don’t hand your information over to something unless you understand what it’s doing.

Model selection

You can spend a lot of time trying to optimize for the best model, and honestly, the landscape is changing so fast that any specific recommendation I make will probably be outdated soon. My current setup as of writing this is: the latest Claude Opus for plans and orchestration, the latest Claude Sonnet for implementation and running agents, and the latest GPT Codex for code review. The general principle is to use the most recent frontier model available, since that will be the most up-to-date. If you have other suggestions, though, I’m down to hear them!

Prompting

There’s been a lot of early documentation around writing the “right” prompts. Honestly? My guideline is simple: write the issue as though you were making it for someone brand new to the codebase. That’s it. If you can describe what you want clearly enough for a new teammate to understand, you can describe it clearly enough for an AI agent.

Context window management

As your conversation with an AI tool gets longer, quality can decline as you approach the context limit. It’s often better to start a new chat or summarize one that’s getting long and start fresh. Using too many MCPs can also eat into your context limit. Be aware of how much is left in your context window and ask your agent to summarize what it was doing and start a new chat when you reach a breakpoint.

Rethinking how to use your time

Using AI effectively across all these layers means a LOT of context switching. Protect your brain. Break up your day accordingly. I’ve found it helpful to time-box between “asynchronous development” (delegating to agents) and “synchronous development” (deep focus work). This graphic shows what I mean. I try to make sure I have ample time still for synchronous work, typically for things that are more difficult or need more exploration, and then asynchronous time where I’m pushing code through with a coding agent.

[Image: a time-blocked schedule with 30-minute increments for agentic tasks, such as “triage issues to Copilot”, and 2-hour blocks of “development time”]

Go build something

Here’s my biggest piece of advice: build something. This can be literally anything. It can be work-related, but it doesn’t have to be; it just has to be something that gets you excited. Start from nothing and use AI tools the entire time. Starting with a new codebase is the fastest way to learn because you don’t have to worry about production systems, legacy code, or code review. You can just experiment, and that’s how you’ll learn. Just like with “tutorial hell,” you’re not going to learn anything by following someone else’s tutorial. You’ll only REALLY learn this by doing it.

Here’s how I start projects now: I set up a Zoom meeting with myself, turn on the transcript, talk out loud and brain dump everything I want the tool to do, give that transcript to Claude or Copilot, and ask: “Please turn this brain dump into a list of features that I can use to build this application.” It sounds ridiculous but it works incredibly well. I wrote about this process in I Guess I’m AI Pilled Now? if you want to see it in action.

Some project ideas to get you started:

  • a custom to-do list app
  • an AI personal assistant
  • a meeting summarizer
  • a morning briefing assistant
  • a personal changelog or brag doc
  • an automated standup bot
  • a notification triage assistant

You probably have a computer in front of you right now. You can create an app for an audience of one (yourself!) and run it locally. If you want some starting points, check out the Marvin template for creating a personal assistant agent, or my own Command Center for inspiration on what a personal productivity app can look like. Another tool I’ve found really helpful for the product/PRD side is ChatPRD. (Just be mindful about security with any of these!)

Share with your team

AI is better with friends. Build tools together, share them, talk about what’s working and what isn’t. Things are moving quickly and many minds are better than one. Build tools that the entire team can use. Share the things that you’re learning with your coworkers. The folks I’ve learned the most from are the ones sharing tips in Slack channels and showing each other cool things they’ve figured out. No matter where you are in your journey, share the thing you’re doing and see if it resonates with anyone else, or if there’s a different way to do it. As I’ve mentioned previously, this is the absolute best time to be focusing on your relationships and human interactions. Those are the things that AI tools can’t replicate, and it’s what will keep you competitive in the job market when everyone can build with AI, and folks will remember if you were someone who helped others learn right now.

TL;DR

If there are three things to take away from this, they’re:

  1. Start where you are. Do some synchronous work first, then try asynchronous work, then try orchestration. Layer things on one at a time. Try not to feel overwhelmed or “behind”; this is all still very, very new. You’re in the right place at the right time, and the key is to just start.
  2. Go build something, today. It’s easier to start from scratch and do something you’re excited about outside of work. But go build a thing. That’s the only way you’ll learn!
  3. Share with your team. It’s a weird time… help everyone out by learning together!

The best practices for AI-assisted development haven’t been written yet. We’re the ones who get to write them, and that’s both very intimidating and very cool! All you have to do is start where you are.


If you want to keep up with AI happenings, here are some resources I’ve found helpful: the How I AI podcast for ideas on what others are building, Last Week in AI for news on releases and hardware, and our own Overcommitted podcast where we interview folks building in AI. Bluesky has also been great for AI conversation recently.
