Best Practices for Working with AI Agents: A Verification-Driven Approach

Working effectively with AI agents requires a fundamental shift in how we approach development. While AI can generate vast amounts of code instantly, the primary challenge is no longer authorship, but verification. Modern software engineering with AI is less about crafting the "perfect prompt" and more about maintaining a disciplined, step-by-step process.

Here is a practical guide to interacting effectively with AI agents, supported by concrete examples.

1. The Mindset Shift: Engineering Over Prompting

In the AI era, your core value shifts from typing speed to three essential competencies: Problem Definition, Decomposition, and Verification.

2. Precise vs. Imprecise: The Power of Constraints

Your prompts must be highly precise when it comes to rules, constraints, and edge cases. An imprecise prompt like "validate the username" invites the model to guess; a precise one spells out allowed characters, length limits, and what should happen on failure. Ambiguity is the enemy of secure AI-generated code.
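To make this concrete, here is a minimal sketch of what a precisely constrained request might produce. The specific rules (length, character set, first character) are illustrative assumptions, not taken from the article; the point is that every rule in the prompt maps to a checkable line of code.

```python
import re

# Illustrative rules, stated explicitly up front (as the prompt should do):
# - 3 to 16 characters long
# - starts with a lowercase letter
# - contains only lowercase letters, digits, and underscores
USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,15}$")

def validate_username(name: str) -> bool:
    """Return True only if `name` satisfies every stated rule."""
    return bool(USERNAME_RE.fullmatch(name))
```

Because the constraints were enumerated rather than implied, each one can be verified independently: `validate_username("alice_01")` passes, while `"1alice"` (bad first character) and `"ab"` (too short) are rejected.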

3. Short vs. Long Prompts: The Iterative Workflow

Instead of writing one massive prompt, the most effective strategy is iterative prompting. Start with a structured, medium-length prompt to define the goal and constraints, then transition to short, highly focused commands to build and refine the output incrementally.
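As a sketch of that workflow, the snippet below models one structured opening prompt followed by short, focused refinements. The prompt wording is entirely hypothetical and only illustrates the shape of the conversation.

```python
# Hypothetical iterative prompting session: one structured opening
# prompt, then short follow-up commands that each refine the output.

initial_prompt = """\
Goal: a function that parses strict ISO-8601 dates from user input.
Constraints:
- Pure Python, standard library only.
- Invalid input must raise ValueError, never return None.
Example: "2024-01-31" -> date(2024, 1, 31)
"""

follow_ups = [
    "Strip surrounding whitespace before parsing.",
    "Reject dates before 1900 with a clear error message.",
    "Now write three test cases for invalid input.",
]

# Each follow-up builds on the previous answer instead of
# restating the whole specification from scratch.
for step in follow_ups:
    print(step)
```

The opening prompt carries the goal, constraints, and an example mapping; everything after that stays short because the context is already established.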

4. The 7-Step Verification Loop: Trust but Verify

Never assume the agent's first output is flawless. You must treat AI-generated code like code from a stranger—useful, but untrusted until proven by tests. Fundamentals matter more than ever: security, data flow, and edge-case thinking are your primary tools.

To ensure quality and maintain control, adopt this repeatable 7-Step Iterative Loop:

  1. Define the Goal: State the objective in one clear sentence.
  2. Establish Rules: List the non-negotiable technical constraints (what must be true).
  3. Provide Examples: Define the exact expected Input → Output mappings.
  4. Identify Edge Cases: List "weird" or bad situations the system needs to handle.
  5. Request a "Small Piece": Ask for a specific function or small unit of logic, not the whole system.
  6. Demand Tests: Require the AI to provide runnable assertions to prove its logic.
  7. Iterate: Treat failing tests as data. Use them as a "flashlight" to refine your next prompt and fix ambiguities in your rules.
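The middle of the loop can be sketched in a few lines of code. Assuming the "small piece" requested in step 5 is a strict date parser (a hypothetical example, not from the article), steps 3, 4, and 6 become runnable assertions:

```python
from datetime import date

def parse_iso_date(text: str) -> date:
    """Parse a strict YYYY-MM-DD string; raise ValueError otherwise.
    The 'small piece' of step 5, kept deliberately narrow."""
    return date.fromisoformat(text.strip())

# Steps 3 and 6: Input -> Output mappings expressed as runnable assertions.
assert parse_iso_date("2024-01-31") == date(2024, 1, 31)
assert parse_iso_date(" 2024-01-31 ") == date(2024, 1, 31)

# Step 4: a "weird" input the rules must cover.
try:
    parse_iso_date("2024-13-01")  # month 13 does not exist
except ValueError:
    pass
else:
    raise AssertionError("invalid month should raise ValueError")
```

If any assertion fails, that failure is the "flashlight" of step 7: it points at the exact rule or example that was ambiguous in the previous prompt.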

Revision #1
Created 2026-03-13 11:13:11 UTC by Carsten
Updated 2026-03-13 11:23:35 UTC by Carsten