Five phases. Verification gates. The PREVC protocol ensures agents prepare before they act, research before they build, and verify before they commit. Every time.
Most AI coding tools work the same way: you send a prompt, the agent generates code, you hope for the best. There's no structure. No verification. No accountability.
- … of AI-generated code contains bugs that pass initial visual inspection
- … uses deprecated or hallucinated APIs not present in the actual codebase
- … verification gates between "generate code" and "merge to production"
One-shot prompting treats AI like a vending machine: insert request, receive output. No context about your architecture. No check against your ADRs. No test suite run. No human review gate. Just code, delivered with false confidence.
"Should work." — the two most dangerous words in software development.
Every task. Every agent. Every time. Five phases, each with an explicit gate that must pass before the next phase begins.
1. Prepare: read specs, check the codebase, plan the approach. Context before code.
2. Research: study patterns, read ADRs, understand constraints and dependencies.
3. Execute: implement with TDD. Write the failing test first. Create branch + PR.
4. Verify: run tests, type-check, lint, self-review. Evidence before claims.
5. Commit: clean commit, PR description linked to the spec. Human reviews and merges.
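The gated loop those five phases describe can be sketched in a few lines of Python. Everything here — the `Phase` type, the toy phase bodies, the gate checks — is a hypothetical illustration of the pattern, not EnGenAI's actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Phase:
    name: str
    run: Callable[[dict], dict]    # does the phase's work, returns updated context
    gate: Callable[[dict], bool]   # must pass before the next phase may begin

def run_prevc(phases: list[Phase], context: dict) -> dict:
    for phase in phases:
        context = phase.run(context)
        if not phase.gate(context):
            raise RuntimeError(f"Gate failed after {phase.name}: stop, do not proceed")
    return context

# Toy phases: each gate checks for evidence that the phase actually produced something.
phases = [
    Phase("Prepare",  lambda c: {**c, "plan": ["write failing test", "implement"]},
                      lambda c: bool(c.get("plan"))),
    Phase("Research", lambda c: {**c, "constraints": ["use argon2, not bcrypt"]},
                      lambda c: bool(c.get("constraints"))),
    Phase("Execute",  lambda c: {**c, "tests_written": True, "branch": "feat/auth"},
                      lambda c: c.get("tests_written", False)),
    Phase("Verify",   lambda c: {**c, "tests_pass": True, "lint_clean": True},
                      lambda c: c["tests_pass"] and c["lint_clean"]),
    Phase("Commit",   lambda c: {**c, "pr_open": True},   # a human merges, not the agent
                      lambda c: c.get("pr_open", False)),
]

result = run_prevc(phases, {})
print(result["branch"])  # → feat/auth
```

The point of the structure: a failed gate halts the pipeline instead of letting unverified work flow into the next phase.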
Watch an agent work through a real task — building an authentication endpoint — using each PREVC phase. Navigate with Prev/Next or click any phase directly.
Read the task spec, check existing code patterns, and identify affected files before writing a single line.
No working from memory. Every task has a plan, a log, and notes. Agents can be interrupted, resumed, or handed off — and pick up exactly where they left off. Nothing lives only in a session's context window.
- Plan: what needs to happen and in what order
- Log: what happened, with timestamps and evidence
- Notes: why decisions were made, not just what was done
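A minimal sketch of that persistence idea: append every event to the task's files on disk, so a fresh agent resumes from the record rather than from a session's memory. The file name and helper functions here are illustrative assumptions, not EnGenAI's real layout:

```python
import json
import tempfile
import time
from pathlib import Path

def log_event(task_dir: Path, message: str) -> None:
    """Append a timestamped entry so any agent can reconstruct what happened."""
    task_dir.mkdir(parents=True, exist_ok=True)
    entry = {"ts": time.strftime("%Y-%m-%dT%H:%M:%S"), "event": message}
    with (task_dir / "log.jsonl").open("a") as f:
        f.write(json.dumps(entry) + "\n")

def resume(task_dir: Path) -> list[dict]:
    """A fresh agent reloads the full history instead of trusting its context window."""
    log = task_dir / "log.jsonl"
    if not log.exists():
        return []
    return [json.loads(line) for line in log.read_text().splitlines()]

task = Path(tempfile.mkdtemp()) / "task-auth-endpoint"
log_event(task, "Prepare: read spec, identified affected files")
log_event(task, "Execute: wrote failing test for POST /login")
history = resume(task)
print(len(history))  # → 2, both entries survive a "reboot"
```

Because the log lives on disk, interrupting the agent loses nothing: the next agent (or the same one, restarted) replays the log and picks up at the last recorded step.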
The difference between shipping and wishing.
The one-shot lottery

- "Hey AI, build me a login system": the agent generates code in one shot. No planning. No context. No constraints.
- No research, no verification: the agent doesn't know your stack or your ADRs.
- Hallucinated APIs, wrong stack: it used bcrypt when you specified argon2.
- No tests, no documentation: coverage 0%, docs none.
- Merged directly without review: SQL injection in prod. Fun weekend.
Structured execution with gates

1. Prepare: read specs, check existing code, plan approach
2. Research: study patterns, read ADRs, understand constraints
3. Execute: implement with TDD, create branch, write tests first
4. Verify: run tests, type-check, lint, self-review output
5. Commit: clean commit message, PR description, request review

Human reviews, approves, and merges. Agents cannot merge. Only you can.
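The human-only merge rule can be made mechanical rather than procedural. A sketch of such a gate — the `merge_pr` function, the `agent:`/`human:` actor convention, and the PR fields are all hypothetical:

```python
class MergeBlocked(Exception):
    """Raised when a merge attempt fails a gate."""

def merge_pr(pr: dict, actor: str) -> dict:
    # Gate 1: agents are structurally unable to merge, regardless of PR state.
    if actor.startswith("agent:"):
        raise MergeBlocked("Agents cannot merge. Only a human can.")
    # Gate 2: verification evidence must exist (tests, type-check, lint).
    if not pr.get("checks_passed"):
        raise MergeBlocked("Verify gate failed: tests and lint must pass first.")
    # Gate 3: an explicit human approval is required.
    if not pr.get("human_approved"):
        raise MergeBlocked("A human review approval is required before merge.")
    return {**pr, "merged": True, "merged_by": actor}

pr = {"title": "Add auth endpoint", "checks_passed": True, "human_approved": True}
merged = merge_pr(pr, "human:reviewer")  # passes all three gates
```

In practice the same effect is usually enforced at the platform level, e.g. with branch protection rules that require a passing check suite and an approving review.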
Before any major decision, EnGenAI agents run the reboot test. Five questions that ensure no agent operates on stale context or false assumptions.
1. Current phase in the task plan
2. Remaining phases and what they require
3. The original project objective, restated
4. Key findings from research and execution
5. Actions already completed this session
If an agent cannot answer all five questions with evidence from its task files, it stops and reloads context before proceeding. No guesswork. No drift.
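One way to mechanize the reboot test is as a checklist over the task state: every question must have a non-empty, evidence-backed answer on file, or the agent reloads before deciding anything. The field names below are hypothetical illustrations:

```python
# The five reboot-test questions, keyed by the task-state field that answers each.
REBOOT_QUESTIONS = {
    "current_phase":     "Where am I in the task plan?",
    "remaining_phases":  "What phases remain, and what does each require?",
    "objective":         "What is the original project objective?",
    "key_findings":      "What did research and execution turn up?",
    "completed_actions": "What have I already done this session?",
}

def reboot_test(task_state: dict) -> bool:
    """Pass only if every question has a non-empty answer in the task files."""
    return all(task_state.get(field) for field in REBOOT_QUESTIONS)

def before_major_decision(task_state: dict, reload_context) -> dict:
    # Failing any question means: stop, re-read the task files, then proceed.
    if not reboot_test(task_state):
        task_state = reload_context()
    return task_state

state = {
    "current_phase": "Verify",
    "remaining_phases": ["Commit"],
    "objective": "Ship the authentication endpoint",
    "key_findings": ["spec requires argon2"],
    "completed_actions": ["wrote failing test", "implemented handler"],
}
```

An agent with a hollow context window fails the check and reloads; one with complete, on-file answers proceeds. That is the whole anti-drift mechanism in five fields.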
Once you know how agents execute, see how you configure which AI model powers each one.