The Right Way to Use AI Agents for Coding

For the last 6+ months I’ve been using GitHub Copilot Pro almost every day across multiple projects. I’ve also played with agent-style workflows (multi-step planning, notes during development, summaries at the end), and I’m consistently impressed with how far LLMs have come for real shipping code.

There’s a lot of noise about “the next coding agent.” Cursor pops up in every discussion. Personally, I’m very happy with Microsoft’s GitHub Copilot setup and haven’t felt a strong need to switch. My bet is that Microsoft will keep making it better and more tightly integrated with the developer ecosystem we already use.

Below is exactly how I get the most out of Copilot Pro and agent-style coding—what works, what to avoid, and the small habits that compound into big productivity wins.

My simple framework: Context → Review → Test/Verify

1) Provide rich context (this is the multiplier)

Agents are only as good as the context you give them. When I want quality, I front-load the brief:

  • Point to the right files/folders. Tell the agent where the relevant components, services, routes, schemas, and tests live.

  • Select code blocks before asking for edits. (Inline selection = less hallucination.)

  • Include screenshots for UI issues (layout bugs, CSS/DOM quirks, breakpoints). Visuals eliminate ambiguity.

  • Link docs for any external API/SDK you’re using (Stripe, Firebase, Shopify, etc.). Agents do far better when they can cross-reference the contract you’re coding against.

  • Give acceptance criteria. One or two bullet points on what “done” means—e.g., “passes these cases,” “handles 429 retries,” “SSR-safe; no window usage.”
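
To make that last point concrete, here’s what a criterion like “handles 429 retries” might look like once implemented. This is a hypothetical TypeScript sketch; the fetchWithRetry name and the backoff constants are my own illustration, not from any particular SDK:

```typescript
// Hypothetical sketch: one way to satisfy "handles 429 retries".
// Retries a fetch with exponential backoff when the server rate-limits us.
async function fetchWithRetry(
  url: string,
  init?: RequestInit,
  maxRetries = 3,
): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url, init);
    if (res.status !== 429) return res; // success, or an error we won't retry
    if (attempt === maxRetries) return res; // out of retries: surface the 429
    // Honor Retry-After when the server sends it; otherwise back off exponentially.
    const retryAfter = Number(res.headers.get("Retry-After"));
    const delayMs = retryAfter > 0 ? retryAfter * 1000 : 2 ** attempt * 500;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error("unreachable"); // the loop always returns first
}
```

Criteria at this level of specificity (“respect Retry-After, cap at N attempts”) turn a vague request into a diff you can actually review.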

2) Review everything it writes (including the notes)

Treat the agent like a senior intern: fast, helpful, not infallible.

  • Read the plan/notes it generates. This is where you catch wrong assumptions early.

  • Skim diffs before applying. Look for over-eager refactors, missing null checks, or forgotten edge cases.

  • Ask for a short summary of what changed and why. This makes your commit message and PR description basically write themselves.

  • Commit to Git frequently so you can easily roll back if a change goes sideways.
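
To show the kind of thing diff-skimming catches: agents often assume data is always present. A hypothetical before/after in TypeScript (the User shape is invented for illustration):

```typescript
interface User {
  profile?: { address?: { city?: string } };
}

// Over-eager agent version: `user.profile.address.city` throws a TypeError
// whenever the user has no profile yet.
// Reviewed version: optional chaining plus a default keeps the sad path safe.
function cityOf(user: User): string {
  return user.profile?.address?.city ?? "Unknown";
}

cityOf({}); // "Unknown" instead of a crash
```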

3) Manually verify the result (don’t skip this)

I always do quick, targeted checks:

  • Run the tests (or ask Copilot to add a couple if coverage is thin).

  • Click through the UI on the exact flows described in the acceptance criteria.

  • Try the sad paths (timeouts, 401/403, empty data, mobile width).

  • Check performance: if the agent added “convenience” awaits in hot paths, fix those.
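
That last point deserves an illustration. A common agent habit is awaiting independent calls one at a time, which serializes work that could run concurrently. A hypothetical TypeScript sketch (loadUser and loadOrders are stand-ins for whatever your services actually expose):

```typescript
// Stand-ins for real service calls (hypothetical).
const loadUser = async (id: string) => ({ id, name: "Ada" });
const loadOrders = async (id: string) => [{ orderId: "o1", userId: id }];

// Agent's "convenience" version: the second request waits on the first.
async function slowDashboard(id: string) {
  const user = await loadUser(id);
  const orders = await loadOrders(id);
  return { user, orders };
}

// Reviewed version: the calls are independent, so run them concurrently.
async function fastDashboard(id: string) {
  const [user, orders] = await Promise.all([loadUser(id), loadOrders(id)]);
  return { user, orders };
}
```

In a request handler that runs thousands of times a day, that one-line change roughly halves the latency of the data fetch.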

Those three steps—context, review, verify—are the backbone.

If you’re starting from scratch, describe the architecture and goals up front and have the agent prepare a plan before it writes any code. So far I’ve mostly worked on existing projects where the architecture was already in place, so the AI had clear patterns to follow.

“Cursor vs Copilot?” My take

Cursor is getting a lot of buzz and I respect what they’re building. My personal workflow lives inside GitHub Copilot Pro and the GitHub/Microsoft ecosystem, and it’s been reliable, fast, and deeply integrated with the tooling I already use.

For me that’s what matters. If your team is all-in on VS Code + GitHub, Copilot is the path of least resistance.

Need help integrating AI into your web stack?

We design, build, and ship modern websites and web apps (Angular, Shopify, Squarespace, custom stacks) with AI features baked in—from chat and summarization to analytics and automation.

Let’s talk: marian@abzglobal.net, or see our Contact page.

Sorca Marian

Founder, CEO & CTO of Self-Manager.net & abZGlobal.net | Senior Software Engineer

https://self-manager.net/