Context Engineering

Structured Execution,
Not Random Prompting

Five phases. Verification gates. The PREVC protocol ensures agents prepare before they act, research before they build, and verify before they commit. Every time.

The Problem With Random Prompting

Most AI coding tools work the same way: you send a prompt, the agent generates code, you hope for the best. There's no structure. No verification. No accountability.

60% of AI-generated code contains bugs that pass initial visual inspection.

40% uses deprecated or hallucinated APIs not present in the actual codebase.

0 verification gates between "generate code" and "merge to production".

One-shot prompting treats AI like a vending machine: insert request, receive output. No context about your architecture. No check against your ADRs. No test suite run. No human review gate. Just code, delivered with false confidence.

"Should work." — the two most dangerous words in software development.

The PREVC Protocol

Every task. Every agent. Every time. Five phases, each with an explicit gate that must pass before the next phase begins.

P · Prepare: Read specs, check codebase, plan approach. Context before code.

R · Research: Study patterns, read ADRs, understand constraints and dependencies.

E · Execute: Implement with TDD. Write the failing test first. Create branch + PR.

V · Verify: Run tests, type-check, lint, self-review. Evidence before claims.

C · Commit: Clean commit, PR description linked to spec. Human reviews and merges.
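The five phases above amount to a gated pipeline: each phase runs its action, then its gate must pass before the next phase begins. A minimal sketch of that control flow, not EnGenAI's actual implementation; the `Phase` class and the gate callbacks are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Phase:
    name: str
    action: Callable[[], None]   # the work done in this phase
    gate: Callable[[], bool]     # must return True before advancing

def run_prevc(phases: list[Phase]) -> str:
    """Run phases in order; halt at the first failed gate."""
    for phase in phases:
        phase.action()
        if not phase.gate():
            return f"HOLD at {phase.name}"  # never advance past a failed gate
    return "DONE"

# Toy run: every gate passes except Verify, so Commit never executes.
log: list[str] = []
phases = [
    Phase("Prepare",  lambda: log.append("read specs"),     lambda: True),
    Phase("Research", lambda: log.append("read ADRs"),      lambda: True),
    Phase("Execute",  lambda: log.append("write tests"),    lambda: True),
    Phase("Verify",   lambda: log.append("run test suite"), lambda: False),
    Phase("Commit",   lambda: log.append("open PR"),        lambda: True),
]
result = run_prevc(phases)
```

Because the Verify gate returns False, the run stops with "HOLD at Verify" and the Commit action never fires.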

Step Through an Example

Watch an agent work through a real task — building an authentication endpoint — using each PREVC phase.

P · Prepare: Read the task spec, check existing code patterns, and identify affected files before writing a single line.

Task: Build user authentication endpoint

The 3-File Pattern

No working from memory. Every task has a plan, a log, and notes. Agents can be interrupted, resumed, or handed off — and pick up exactly where they left off. Nothing lives only in a session's context window.

tasks/BUILD_AUTH_ENDPOINT/
├── TASK_PLAN.md (phases, checkboxes, decisions)
│     · Current phase and sub-tasks with checkboxes
│     · Architecture decisions made during planning
│     · Phase gate criteria (GO / HOLD / KILL)
├── TASK_LOG.md (session actions, test results)
│     · Timestamped record of every action taken
│     · Test output: passed / failed counts, errors
│     · TASK_STATUS blocks for session handoff
└── TASK_NOTES.md (research, findings, rationale)
      · Research findings from ADRs and docs
      · Why decisions were made (not just what)
      · Edge cases, gotchas, patterns discovered

3 files tracked. 0 orphan context. Pick up exactly where you left off.
PLAN: What needs to happen and in what order.

LOG: What happened, with timestamps and evidence.

NOTES: Why decisions were made, not just what was done.
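The 3-file pattern can be bootstrapped with a small scaffold so every new task starts with the trio on disk. A minimal sketch assuming the `tasks/<TASK_NAME>/` layout shown in the tree above; the template headings inside each file are hypothetical, not a fixed spec.

```python
import tempfile
from pathlib import Path

# Hypothetical starter templates; section headings are illustrative only.
TEMPLATES = {
    "TASK_PLAN.md":  "# Task Plan\n\n## Phase: Prepare\n- [ ] Read spec\n- [ ] Check codebase\n",
    "TASK_LOG.md":   "# Task Log\n\n<!-- timestamped actions and test results -->\n",
    "TASK_NOTES.md": "# Task Notes\n\n<!-- research findings and rationale -->\n",
}

def scaffold_task(root: Path, task_name: str) -> list[Path]:
    """Create the plan/log/notes trio so any session can resume from disk."""
    task_dir = root / "tasks" / task_name
    task_dir.mkdir(parents=True, exist_ok=True)
    created = []
    for name, body in TEMPLATES.items():
        path = task_dir / name
        if not path.exists():        # never clobber an in-progress task
            path.write_text(body)
        created.append(path)
    return created

files = scaffold_task(Path(tempfile.mkdtemp()), "BUILD_AUTH_ENDPOINT")
```

The existence check matters: re-running the scaffold on an interrupted task must leave the agent's accumulated plan, log, and notes untouched.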

Vibe Coding vs PREVC

The difference between shipping and wishing.

Vibe Coding

The one-shot lottery

1. "Hey AI, build me a login system"

2. Agent generates code in one shot. No planning. No context. No constraints.

3. No research, no verification. Agent doesn't know your stack or ADRs.

4. Hallucinated APIs, wrong stack. Used bcrypt when you specified argon2.

5. No tests, no documentation. Coverage: 0%. Docs: none.

6. Merged directly without review. SQL injection in prod. Fun weekend.

60% bug rate · 40% deprecated APIs · 0 verification gates

PREVC Protocol

Structured execution with gates

P · Prepare: Read specs, check existing code, plan approach.

R · Research: Study patterns, read ADRs, understand constraints.

E · Execute: Implement with TDD, create branch, write tests first.

V · Verify: Run tests, type-check, lint, self-review output.

C · Commit: Clean commit message, PR description, request review.

Human reviews, approves, and merges. Agents cannot merge. Only you can.

5 verification gates · TDD enforced · Human-in-the-loop
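The Verify gate is mechanical: run every check, and only an all-green run unlocks Commit. A sketch of such a gate runner; the specific commands (`pytest`, `mypy`, `ruff`) are assumptions, so substitute your own stack's test, type-check, and lint tools.

```python
import subprocess

# Assumed toolchain; swap in whatever your project actually uses.
CHECKS = [
    ("tests",      ["pytest", "-q"]),
    ("type-check", ["mypy", "."]),
    ("lint",       ["ruff", "check", "."]),
]

def verify() -> dict[str, bool]:
    """Run every check and record pass/fail per check."""
    results = {}
    for name, cmd in CHECKS:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results[name] = proc.returncode == 0
    return results

def gate(results: dict[str, bool]) -> str:
    # Evidence before claims: only an all-green run may proceed to Commit.
    return "GO" if all(results.values()) else "HOLD"
```

One failing check holds the entire task: `gate({"tests": True, "type-check": False, "lint": True})` returns "HOLD".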

The 5-Question Reboot Test

Before any major decision, EnGenAI agents run the reboot test. Five questions that ensure no agent operates on stale context or false assumptions.

1. Where am I? Current phase in the task plan.

2. Where am I going? Remaining phases and what they require.

3. What's the goal? The original project objective, restated.

4. What have I learned? Key findings from research and execution.

5. What have I done? Actions already completed this session.

If an agent cannot answer all five questions with evidence from its task files, it stops and reloads context before proceeding. No guesswork. No drift.
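The reboot test reduces to a precondition check: five questions, each of which must have a non-empty, evidence-backed answer drawn from the task files. A hedged sketch; the question keys and return values here are hypothetical, not a documented EnGenAI API.

```python
# Hypothetical question keys mirroring the five questions above.
REBOOT_QUESTIONS = [
    "where_am_i",           # current phase in the task plan
    "where_am_i_going",     # remaining phases and what they require
    "whats_the_goal",       # original project objective, restated
    "what_have_i_learned",  # key findings from research and execution
    "what_have_i_done",     # actions already completed this session
]

def reboot_test(answers: dict[str, str]) -> str:
    """PROCEED only when every question has a non-empty answer; otherwise reload context."""
    missing = [q for q in REBOOT_QUESTIONS if not answers.get(q, "").strip()]
    return "PROCEED" if not missing else f"RELOAD_CONTEXT: missing {missing}"

# An agent mid-Execute with no recorded learnings must stop and reload.
stale = {
    "where_am_i": "Execute, step 2 of 4",
    "where_am_i_going": "Verify, then Commit",
    "whats_the_goal": "Build user authentication endpoint",
    "what_have_i_learned": "",
    "what_have_i_done": "Created branch, wrote failing test",
}
result = reboot_test(stale)
```

The blank "what have I learned" answer triggers a context reload: no guesswork, no drift.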

Next: Model Configuration

Once you know how agents execute, see how you configure which AI model powers each one.