## Why it matters
Good AI coding tasks are small enough to implement and small enough for a developer to review.
Structure AI coding tasks as small, reviewable stories with a user, desired behavior, explicit scope, non-goals, constraints, acceptance criteria, and a review plan. Hal works best when the agent receives bounded work and the developer can inspect the resulting diff before continuing.
Before writing the prompt, ask: can a developer review the output in one sitting? If not, split the task.
Risky tasks often involve authentication, payments, migrations, permissions, security-sensitive code, broad refactors, or many unrelated files. Use the AI coding task splitter when scope is unclear.
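The one-sitting test and the risky-area list above can be expressed as a rough heuristic. This is a minimal Python sketch, assuming a review budget measured in changed files and lines; the function name, area names, and thresholds are illustrative and not part of Hal:

```python
# Illustrative "split or keep" heuristic. The thresholds and risky-area
# names are assumptions for this sketch, not values Hal actually uses.
RISKY_AREAS = {"auth", "payments", "migrations", "permissions"}


def should_split(changed_files: list[str], changed_lines: int) -> bool:
    """Return True when a task's expected diff exceeds a one-sitting review."""
    touches_risky = any(
        area in path for path in changed_files for area in RISKY_AREAS
    )
    too_broad = len(changed_files) > 10 or changed_lines > 400
    # Risky code earns a tighter budget: even a moderate diff should be split.
    return too_broad or (touches_risky and changed_lines > 100)
```

For example, a 150-line change touching `src/auth/login.py` would be flagged for splitting, while a 40-line change to one UI component would not.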
## Goal
What user-visible or developer-visible outcome should change?
## Scope
What files, systems, or behavior may the agent touch?
## Non-goals
What should the agent avoid?
## Acceptance criteria
What observable conditions must be true?
## Constraints
What commands, conventions, data rules, or edge cases matter?
## Review plan
What should a developer inspect before merge?
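Filled in for a small story (the display-name example from the table of safer stories), the template might read as follows. Every file name, limit, and detail here is invented for illustration:

```markdown
## Goal
A signed-in user can save a display name on the settings page.

## Scope
The settings form component, its validation helper, and their tests.

## Non-goals
No avatar upload, no other profile fields, no unrelated schema changes.

## Acceptance criteria
- A name of 1–50 characters saves and persists across reload.
- An empty or too-long name shows an inline validation error.
- New tests cover both paths and pass.

## Constraints
Follow existing form conventions; do not touch authentication code.

## Review plan
Inspect the diff, run the test suite, and manually save and reload a name.
```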
| Broad request | Safer AI coding story |
|---|---|
| Add account settings. | Add one saved display-name field with validation and tests. |
| Improve onboarding. | Add one empty-state CTA and confirm the first-run route still works. |
| Refactor billing. | Extract one pure formatting helper without changing payment behavior. |
```shell
hal plan "describe one bounded product change"   # draft the plan
hal convert                                      # convert it into stories
hal validate                                     # check story structure
hal run                                          # implement and report
```
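The commands above can be gated so implementation only starts once validation passes. This sketch uses a stub `hal` shell function so the flow runs anywhere; delete the stub when the real binary is on your PATH, and note that it assumes `hal validate` exits non-zero on failure, which these docs do not confirm:

```shell
# Stub standing in for the real `hal` CLI so this sketch is runnable;
# remove it when the actual binary is installed.
hal() { echo "hal $*"; return 0; }

hal plan "add a saved display-name field with validation and tests"
hal convert

# Gate implementation on validation: a vague story becomes a vague diff,
# so stop and tighten the task instead of running it.
if hal validate; then
  hal run
else
  echo "Tighten unclear stories before implementing." >&2
fi
```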
If validation highlights unclear stories, tighten the task before implementation. A vague story usually becomes a vague diff.
| Hal helps with | Hal does not replace |
|---|---|
| Planning, conversion, validation, running, reporting, archive state, and reviewable loops. | Product judgment, security review, QA, merge approval, or production ownership. |
Before relying on Hal in production, check the source repository, install guide, release notes, and supported engine documentation.
## FAQ

Short answers before you put this into an agent workflow.

**What makes an AI coding task well structured?**
It has a clear user, outcome, scope, non-goals, constraints, acceptance criteria, and a review plan.

**How small should a task be?**
Small enough that a developer can inspect the resulting diff, tests, reports, and unresolved criteria in one review session.
Keep exploring the pieces of a reviewable coding loop:

- Run a small Hal loop from PRD planning through validation, implementation, and human review without handing over a broad feature.
- Use a PRD to give AI coding agents clearer scope, acceptance criteria, constraints, and review checkpoints before implementation starts.
- Review AI-generated code by checking the requirement, changed files, tests, reports, edge cases, and unexpected edits before merging.