# Hal full LLM context

Hal is a terminal-first CLI for PRD-native coding loops with AI agents such as Codex, Claude Code, and Pi.

## Product facts

- Name: Hal
- Category: terminal-first CLI for PRD-native, reviewable AI coding loops
- Version shown on this site: v0.0.9
- Install command: brew tap j-yw/tap && brew install --cask hal
- Source repository: https://github.com/ReScienceLab/hal
- ReScience Lab product page: https://rescience.com/products/hal
- Machine-readable pricing: https://hal.rocks/pricing.md
- Pricing status: no hosted Hal Cloud plan, paid tier, seat price, usage quota, SLA, or managed service is listed on this website today.
- Supported engines mentioned by this site: Codex, Claude Code, and Pi.
- Primary workflow: init, plan, convert, validate, run, review, report, and archive.

## Features

## Plan the work before the agent writes code

- URL: https://hal.rocks/features/prd-driven-planning/
- Description: Turn product intent into stories, acceptance criteria, and reviewable state before an AI coding agent touches the repo.

### What is PRD-driven planning for AI coding?

PRD-driven planning for AI coding means turning product intent into stories, acceptance criteria, and constraints before an AI coding agent edits files. In Hal, this gives developers a reviewable plan before running implementation through supported engines such as Codex, Claude Code, or Pi.

### What breaks without planning

AI coding agents are strongest when the target is narrow. A broad prompt like “add authentication” can sprawl into UI, database, sessions, middleware, tests, copy, and cleanup in one pass. The result may be useful, but it is harder to inspect.

Hal puts planning before implementation. Product intent becomes stories, acceptance criteria, and runtime state so the agent is working against a defined unit of work instead of a loose request.

### How Hal handles it

Run hal plan with a product change, then review the generated requirements before the loop starts.
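A minimal sketch of that checkpoint. The hal plan subcommand name comes from this site; the argument syntax and output locations are assumptions to verify against the current repo docs:

```shell
# Plan one narrow product change before any implementation runs.
# Argument syntax is an assumption; check the Hal repo docs for the real form.
hal plan "Add a saved display-name field with validation"

# Review whatever plan artifacts the command reports before running anything.
git status
```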
The important shift is not the command itself. It is the checkpoint. The developer gets to inspect scope before an agent edits files.

### What the developer gets

- A PRD or structured requirement in markdown.
- Story-sized implementation units.
- Acceptance criteria that define “done.”
- Runtime state that can be validated before hal run.

### How to review the plan

Look for stories that are too broad, acceptance criteria that are not observable, and missing constraints such as authentication, persistence, migration, or UI behavior. If the plan is hard to review, the resulting code will be harder to trust.

### What Hal does and does not do

| Hal does | Hal does not do |
|---|---|
| Structures PRD-native coding loops around planning, validation, implementation, reporting, and reviewable state. | Guarantee code quality, passing tests, delivery speed, revenue, rankings, or production readiness. |
| Helps supported engines work against smaller, reviewable units of work. | Replace developer review, QA, security review, or merge judgment. |

### Sources and verification

Before adopting this workflow, verify the current Hal source repository, install docs, pricing status, and machine-readable pricing. Check release notes and engine support before relying on Hal in production work.

## Fresh context for every implementation story

- URL: https://hal.rocks/features/fresh-context-every-story/
- Description: Keep long AI coding work from turning into one drifting chat by giving each story its own bounded context window.

### Why does fresh context matter for AI coding stories?

Fresh context matters because long agent sessions can drift across old instructions, stale assumptions, and unrelated implementation details. Hal favors bounded stories so each implementation step receives the current task, relevant standards, and recent state without dragging the entire conversation forward.

### What breaks in one long agent session

A feature that begins cleanly can drift after several turns.
The agent may carry old assumptions, respond to previous instructions, or blend unrelated implementation details into the next change. That is how a simple sequence of stories becomes one large, hard-to-review diff.

### How Hal handles it

Hal runs implementation as a loop of bounded stories. Each iteration can load the current story, relevant standards, and recent state without dragging the entire conversation forward. The agent gets enough context to act. The reviewer gets smaller checkpoints.

### What the developer gets

- Smaller implementation units.
- Less prompt drift across long features.
- Cleaner review boundaries between stories.
- A workflow that favors inspectable state over endless chat history.

### How to review the output

Review one story at a time. Confirm the changed files match the current story, the acceptance criteria are addressed, and the agent did not opportunistically modify unrelated parts of the repo.

### What Hal does and does not do

| Hal does | Hal does not do |
|---|---|
| Structures PRD-native coding loops around planning, validation, implementation, reporting, and reviewable state. | Guarantee code quality, passing tests, delivery speed, revenue, rankings, or production readiness. |
| Helps supported engines work against smaller, reviewable units of work. | Replace developer review, QA, security review, or merge judgment. |

### Sources and verification

Before adopting this workflow, verify the current Hal source repository, install docs, pricing status, and machine-readable pricing. Check release notes and engine support before relying on Hal in production work.

## Bring your coding agent into the Hal loop

- URL: https://hal.rocks/features/pluggable-agent-engines/
- Description: Use Codex, Claude Code, or Pi inside the same PRD-driven loop without rewriting your planning workflow.

### What are pluggable agent engines in Hal?

Pluggable agent engines mean Hal treats the coding agent as the implementation engine rather than the whole workflow.
This site mentions Codex, Claude Code, and Pi as supported engines, while Hal keeps planning, validation, running, reporting, archiving, and review checkpoints consistent.

### What breaks when the agent is the workflow

If your process lives inside one agent tool, switching tools means rebuilding habits: prompts, conventions, review expectations, and handoff steps. The workflow becomes coupled to the model or interface.

Hal separates the loop from the engine. The PRD-driven process stays stable while the configured coding agent does the implementation work.

### How Hal handles it

Configure the engine, then run the same planning and execution flow. The developer keeps the same checkpoints: plan, validate, run, inspect.

### What the developer gets

- One workflow across supported engines.
- A stable PRD and story format.
- Reviewable output regardless of which agent runs.
- Less process churn when experimenting with tools.

### How to review the setup

Check the current Hal configuration, confirm the selected engine is documented in the repo, and run a small loop before handing over a larger feature. Treat engine changes like any other workflow change: test them before trusting them.

### What Hal does and does not do

| Hal does | Hal does not do |
|---|---|
| Structures PRD-native coding loops around planning, validation, implementation, reporting, and reviewable state. | Guarantee code quality, passing tests, delivery speed, revenue, rankings, or production readiness. |
| Helps supported engines work against smaller, reviewable units of work. | Replace developer review, QA, security review, or merge judgment. |

### Sources and verification

Before adopting this workflow, verify the current Hal source repository, install docs, pricing status, and machine-readable pricing. Check release notes and engine support before relying on Hal in production work.
## Inject project standards into each agent loop

- URL: https://hal.rocks/features/project-standards/
- Description: Give each AI coding iteration the repo-specific commands, conventions, and review rules it needs to stay consistent.

### What are project standards for AI coding agents?

Project standards are repo-specific rules, commands, conventions, and review expectations that an AI coding agent should follow during implementation. In Hal, standards help each story start with local context such as package manager, test command, architecture rules, and forbidden patterns.

### What breaks without local rules

Agents can write plausible code that does not match the repo. They may choose the wrong package manager, miss the test command, invent folder patterns, or use a style the team already rejected. Those mistakes are not always model failures. Often, the agent simply did not receive the local rules.

### How Hal handles it

Hal includes project standards as part of the loop context so each story starts with the commands, conventions, and constraints that matter for this repo. The point is to make the expected way of working explicit before implementation begins.

### What the developer gets

- Repeatable instructions for future loops.
- Fewer style and workflow surprises.
- A clearer review basis when the agent deviates.
- Standards that can evolve as the repo changes.

### How to review standards

Ask whether each rule changes agent behavior. “Use our existing patterns” is weak. “Use Bun commands, keep Astro content pages in src/content, and run bun run build before finishing” is stronger.

### What Hal does and does not do

| Hal does | Hal does not do |
|---|---|
| Structures PRD-native coding loops around planning, validation, implementation, reporting, and reviewable state. | Guarantee code quality, passing tests, delivery speed, revenue, rankings, or production readiness. |
| Helps supported engines work against smaller, reviewable units of work. | Replace developer review, QA, security review, or merge judgment. |

### Sources and verification

Before adopting this workflow, verify the current Hal source repository, install docs, pricing status, and machine-readable pricing. Check release notes and engine support before relying on Hal in production work.

## Preserve the state behind every coding loop

- URL: https://hal.rocks/features/archive-and-restore/
- Description: Preserve progress, reports, workflow files, and loop state so AI coding work can be paused, inspected, and resumed.

### What is archive and restore for AI coding loops?

Archive and restore for AI coding loops means preserving workflow state, reports, progress, and implementation artifacts so agent work can be paused, inspected, resumed, or discarded. In Hal, archive behavior supports reviewability by keeping context around the code, not just the final diff.

### What breaks when work has no trail

A coding agent can make progress while leaving the developer unsure what happened. Which story ran? Which files changed? What reports were generated? What remains unresolved? Hal is built around reviewable state, not invisible progress.

### How Hal handles it

After loop work, archive behavior can preserve progress, reports, and workflow state. That makes it easier to pause a feature, inspect it later, or restore the context needed to continue. The important output is not just code. It is the trail around the code.

### What the developer gets

- Saved loop state for later inspection.
- Reports and artifacts tied to the work.
- A cleaner handoff between autonomous execution and human review.
- A way to resume without reconstructing everything from memory.

### How to review archived work

Treat the archive as supporting evidence. Inspect the changed files, commits, reports, unresolved stories, and acceptance criteria before deciding what to keep.
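A sketch of that review, assuming a git repo. The hal archive subcommand is named in this site's workflow list; its flags and artifact locations are unverified assumptions:

```shell
# Preserve loop state after the run (subcommand name from this site's
# workflow list; exact behavior should be checked in the repo docs).
hal archive

# Inspect the trail around the code before deciding what to keep.
git log --oneline -10    # which commits the loop produced
git diff main --stat     # which files changed, at a glance
```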
### What Hal does and does not do

| Hal does | Hal does not do |
|---|---|
| Structures PRD-native coding loops around planning, validation, implementation, reporting, and reviewable state. | Guarantee code quality, passing tests, delivery speed, revenue, rankings, or production readiness. |
| Helps supported engines work against smaller, reviewable units of work. | Replace developer review, QA, security review, or merge judgment. |

### Sources and verification

Before adopting this workflow, verify the current Hal source repository, install docs, pricing status, and machine-readable pricing. Check release notes and engine support before relying on Hal in production work.

## A deterministic pipeline for AI coding work

- URL: https://hal.rocks/features/auto-pipeline/
- Description: Move through planning, conversion, validation, implementation, reporting, and archive behavior as one explicit coding loop.

### What is Hal's auto pipeline?

Hal's auto pipeline is the explicit sequence around PRD-driven AI coding work: plan the requirement, convert it into workflow state, validate stories, run implementation, preserve reports, and review the result. The pipeline makes checkpoints visible instead of burying them inside one open-ended agent chat.

### What breaks in open-ended chat

Chat is flexible, but it is a weak production workflow. The boundaries are soft, the sequence is easy to skip, and the review trail can disappear inside the conversation. Hal makes the loop explicit.

### How Hal handles it

A Hal run moves through named phases: plan the work, convert requirements, validate stories, run implementation, and preserve reviewable state. Each step has a job. Each step gives the developer a place to inspect the work.

### What the developer gets

- A repeatable sequence for agent-driven implementation.
- Clear checkpoints before and after code changes.
- Less reliance on memory or manual prompt choreography.
- A workflow that can be documented, reviewed, and improved.
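The named phases map onto the subcommands this site lists. A sketch of the full sequence; the subcommand names come from the Product facts workflow list, while the argument syntax is an assumption:

```shell
hal init                               # create local workflow structure
hal plan "One narrow product change"   # product intent -> PRD and stories
hal convert                            # requirements -> runtime workflow state
hal validate                           # check story quality before any edits
hal run                                # bounded implementation loop
hal report                             # preserve reviewable reports
hal archive                            # keep loop state for later inspection
```

Each line is a place to stop and inspect before moving to the next phase.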
### How to review the pipeline

Start with a small feature. Confirm the plan is specific, validation passes, the agent changes only relevant files, and the resulting state is understandable before expanding to larger work.

### What Hal does and does not do

| Hal does | Hal does not do |
|---|---|
| Structures PRD-native coding loops around planning, validation, implementation, reporting, and reviewable state. | Guarantee code quality, passing tests, delivery speed, revenue, rankings, or production readiness. |
| Helps supported engines work against smaller, reviewable units of work. | Replace developer review, QA, security review, or merge judgment. |

### Sources and verification

Before adopting this workflow, verify the current Hal source repository, install docs, pricing status, and machine-readable pricing. Check release notes and engine support before relying on Hal in production work.

## Hal vs direct agent prompting

- URL: https://hal.rocks/features/hal-vs-direct-agent-prompting/
- Description: Compare Hal's PRD-native coding loop with direct AI coding agent prompting for planning, validation, reviewability, and merge safety.

### How is Hal different from direct agent prompting?

Hal differs from direct agent prompting by making the AI coding workflow PRD-native and reviewable. Direct prompting sends a request to an agent. Hal adds planning, story conversion, validation, implementation, reporting, archive behavior, and human review checkpoints around supported engines.

### Quick decision summary

Use direct prompting for tiny, low-risk edits or exploratory questions. Use Hal when the task needs product requirements, acceptance criteria, repo standards, repeatable steps, and a developer review trail before merge.

### Comparison table

| Dimension | Hal | Direct agent prompting |
|---|---|---|
| Best fit | PRD-driven product changes that need reviewable implementation loops. | Small edits, questions, experiments, or one-off code help. |
| Planning | Starts from product intent and supports commands such as hal plan. | Planning usually happens inside the prompt or chat. |
| Validation | Encourages hal validate before implementation. | Depends on the user manually checking prompt quality. |
| Context | Can carry stories, standards, workflow state, and recent loop artifacts. | Depends on the active chat context and tool memory. |
| Review trail | Favors files, reports, commits, workflow state, and archive behavior. | Often relies on conversation history plus a final diff. |
| Merge safety | Still requires developer review before merge. | Still requires developer review before merge. |

### When to choose Hal

Choose Hal when the work has product behavior, multiple stories, acceptance criteria, local standards, or risk that should be reviewed before code changes. Hal is also a better fit when you want one workflow across supported engines such as Codex, Claude Code, and Pi.

### When direct prompting is enough

Direct prompting can be enough for small edits, code explanations, quick refactors, or throwaway experiments. If the change is easy to describe, easy to inspect, and low risk, a full PRD-native loop may be unnecessary.

### What Hal does and does not do

| Hal does | Hal does not do |
|---|---|
| Structures AI coding work into planning, conversion, validation, running, reporting, archive state, and review. | Guarantee code quality, passing tests, delivery speed, revenue, or production readiness. |
| Makes agent work easier to inspect and repeat. | Replace Codex, Claude Code, Pi, human code review, QA, or merge approval. |

### How to try the structured path

Start with a low-risk task. Review the plan before running, and inspect the diff before merging.

### Related pages

- PRD-driven planning
- Fresh context for every story
- How to structure AI coding tasks
- How is Hal different from chatting with an AI coding agent?
### Sources and verification

Before adopting this workflow, verify the current Hal source repository, install docs, pricing status, and machine-readable pricing. Check release notes and engine support before relying on Hal in production work.

## Free tools

## PRD readiness checker for AI coding

- URL: https://hal.rocks/tools/prd-readiness-checker/
- Category: analyzer
- Description: Check whether a product requirement is specific enough for an AI coding loop before an agent touches the repo.

### What does a PRD readiness checker do?

A PRD readiness checker reviews whether a requirement is specific enough for AI coding. It looks for scope, users, constraints, acceptance criteria, edge cases, and review points before an agent edits files. The output helps developers tighten a PRD before using Hal or another coding workflow.

### How to use this PRD checker

Paste a product requirement, feature brief, or early PRD draft. The tool looks for the details an AI coding loop needs: who the change is for, what is in scope, what is out of scope, how success is tested, and where a developer should review the output.

### What the score means

A high score does not guarantee code quality. It means the requirement is better structured for a reviewable implementation loop. Hal still expects developers to inspect generated code, commits, reports, and remaining acceptance criteria before merging.

### When to use it

Use the checker before hal plan, before assigning a task to a coding agent, or before splitting a large feature into stories. If the score is low, tighten the requirement before the agent edits files.

### Next step

If the score is low, use the recommended fixes or send the brief through the AI Coding PRD Generator to turn it into a more complete markdown PRD.

### How this tool fits the Hal workflow

Use this tool before a coding agent edits files. Then move from tool output into PRD-driven planning, a first Hal loop, or a review-before-merge checklist.
Treat generated text as a draft and verify it against the repository before use.

### Verification rule

Before adopting Hal, inspect the source repository, install guide, pricing page, and machine-readable pricing. This tool does not guarantee code quality, test success, or merge safety.

## AI coding PRD generator

- URL: https://hal.rocks/tools/ai-coding-prd-generator/
- Category: generator
- Description: Generate a markdown PRD, user stories, acceptance criteria, and a Hal-ready planning prompt from a feature idea.

### What does an AI coding PRD generator create?

An AI coding PRD generator turns a feature idea into a structured markdown requirement with user context, scope, constraints, stories, acceptance criteria, and review notes. The result is meant to be reviewed by a developer before it becomes input for Hal or an AI coding agent.

### How to use the PRD generator

Describe the feature, the user, the desired behavior, constraints, and the technology context. The generator turns those inputs into a markdown PRD with sections that are easier to inspect before running an agent loop.

### Why this helps AI coding workflows

Broad prompts create broad changes. A PRD gives the agent a narrower target and gives the developer a document to review before implementation begins.

### What to check before running Hal

Confirm the PRD names the user, defines in-scope and out-of-scope behavior, includes acceptance criteria, and calls out risky areas such as authentication, payments, migrations, permissions, or data loss.

### Recommended workflow

Generate the PRD, run it through the PRD Readiness Checker, split risky work into smaller stories, then use the final markdown as the source for a small Hal loop.

### How this tool fits the Hal workflow

Use this tool before a coding agent edits files. Then move from tool output into PRD-driven planning, a first Hal loop, or a review-before-merge checklist. Treat generated text as a draft and verify it against the repository before use.
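A hypothetical shape for a generated PRD, covering the sections named above. The headings, feature, and field values are illustrative, not the generator's exact output:

```markdown
# PRD: Saved display name

## User context
Logged-in account holders who want a display name shown across the app.

## Scope
- Add one editable display-name field with validation.

## Out of scope
- Avatars, username changes, or profile redesign.

## Constraints
- Reuse the existing settings form components; no new dependencies.

## Stories
1. A user can save a display name of 1-40 characters.

## Acceptance criteria
- Saving an empty name shows a validation error and persists nothing.

## Review notes
- Risky area: the settings persistence path; inspect that diff first.
```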
### Verification rule

Before adopting Hal, inspect the source repository, install guide, pricing page, and machine-readable pricing. This tool does not guarantee code quality, test success, or merge safety.

## Acceptance criteria generator for AI coding tasks

- URL: https://hal.rocks/tools/acceptance-criteria-generator/
- Category: generator
- Description: Turn a feature idea into testable Given/When/Then acceptance criteria, edge cases, and a review checklist.

### What does an acceptance criteria generator do?

An acceptance criteria generator turns a feature idea into observable conditions for accepting generated code. It helps developers define expected behavior, edge cases, and review checks before implementation starts, so an AI coding loop has a clearer target than a vague natural-language request.

### How to use the acceptance criteria generator

Enter a feature idea, user role, application type, expected success behavior, and likely failure modes. The tool returns criteria in a format that can be pasted into a PRD, issue, or story.

### Why acceptance criteria matter

Agent loops are easier to review when each story has a clear definition of done. Acceptance criteria keep the work grounded in observable behavior rather than vague intent.

### What good criteria prevent

They reduce arguments with the agent after the fact. Instead of asking whether a change “looks right,” the reviewer can check whether the expected behavior, edge cases, and failure states were handled.

### Recommended workflow

Generate criteria first, add them to a PRD or task-splitting workflow, then run implementation only after the story is small enough to review.

### How this tool fits the Hal workflow

Use this tool before a coding agent edits files. Then move from tool output into PRD-driven planning, a first Hal loop, or a review-before-merge checklist. Treat generated text as a draft and verify it against the repository before use.
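A hypothetical example of the Given/When/Then format described above, ready to paste into a PRD or story; the feature details are illustrative:

```markdown
### Acceptance criteria: save display name

- Given a logged-in user on the settings page,
  when they submit a 1-40 character display name,
  then the name is saved and shown after reload.

- Given a logged-in user on the settings page,
  when they submit an empty display name,
  then a validation error is shown and nothing is persisted.

Edge cases to cover: leading/trailing whitespace, emoji, concurrent edits.
```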
### Verification rule

Before adopting Hal, inspect the source repository, install guide, pricing page, and machine-readable pricing. This tool does not guarantee code quality, test success, or merge safety.

## AGENTS.md generator for AI coding agents

- URL: https://hal.rocks/tools/agents-md-generator/
- Category: generator
- Description: Generate repo-specific AGENTS.md instructions for Codex, Claude Code, Pi, and other AI coding workflows.

### What does an AGENTS.md generator do?

An AGENTS.md generator creates repository-specific instructions for AI coding agents. It captures commands, architecture rules, conventions, forbidden patterns, and review expectations in a file that can travel with the repo, giving tools such as Codex, Claude Code, Pi, or Hal clearer local context.

### How to use the AGENTS.md generator

Enter your stack, package manager, test command, lint command, architecture rules, forbidden patterns, and preferred agent workflow. The tool generates a copy-ready instruction file for your repo.

### Why project standards matter

AI coding agents can drift into patterns that do not match the codebase. A standards file gives each loop concrete local rules before implementation starts.

### What to include

Favor instructions that change behavior: exact commands, source directories, migration rules, review expectations, and patterns the agent must avoid. Keep generic advice out unless it helps the agent make a concrete decision.

### Recommended workflow

Commit the generated AGENTS.md, keep it short enough to maintain, and update it when review feedback reveals repeated agent mistakes.

### How this tool fits the Hal workflow

Use this tool before a coding agent edits files. Then move from tool output into PRD-driven planning, a first Hal loop, or a review-before-merge checklist. Treat generated text as a draft and verify it against the repository before use.

### Verification rule

Before adopting Hal, inspect the source repository, install guide, pricing page, and machine-readable pricing.
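As a concrete illustration of the kind of file described above, a hypothetical AGENTS.md fragment; the Bun and Astro details echo the example on the project standards page, and every line should be adapted to the actual repo:

```markdown
# AGENTS.md

## Commands
- Install: `bun install`
- Test: `bun test`
- Build before finishing: `bun run build`

## Architecture rules
- Content pages live in `src/content`; do not create parallel folders.

## Forbidden patterns
- No new runtime dependencies without human sign-off.

## Review expectations
- Keep each change reviewable in one sitting; stop and report if scope grows.
```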
This tool does not guarantee code quality, test success, or merge safety.

## AI coding task splitter and risk scorer

- URL: https://hal.rocks/tools/ai-coding-task-splitter/
- Category: analyzer
- Description: Break a feature request into reviewable implementation stories and score whether it is safe for an AI coding loop.

### What does an AI coding task splitter do?

An AI coding task splitter breaks a broad feature request into smaller implementation stories and flags risky work. It helps developers avoid handing an agent a sprawling task by creating reviewable steps, acceptance checkpoints, and a safer sequence for a Hal-style coding loop.

### How to use the task splitter

Paste a feature request or technical change. Add stack details and an estimate of how many files or systems it might touch. The tool returns a risk score and a suggested sequence of smaller stories.

### Why this helps Hal loops

Hal is built around bounded implementation loops. Splitting a large feature into reviewable stories keeps the agent focused and gives the developer smaller checkpoints.

### What the risk score is for

The score is a planning signal, not a permission slip. Use it to decide whether to shrink scope, add acceptance criteria, require more manual review, or avoid autonomous execution for sensitive work.

### Recommended workflow

Split the task, turn the highest-priority story into a PRD, validate the PRD, then run a small loop and inspect the result.

### How this tool fits the Hal workflow

Use this tool before a coding agent edits files. Then move from tool output into PRD-driven planning, a first Hal loop, or a review-before-merge checklist. Treat generated text as a draft and verify it against the repository before use.

### Verification rule

Before adopting Hal, inspect the source repository, install guide, pricing page, and machine-readable pricing. This tool does not guarantee code quality, test success, or merge safety.
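To make the splitting concrete, a hypothetical output shape for one broad request; the stories, risk labels, and sequence are illustrative, not the tool's exact format:

```markdown
Request: “Add account settings.”
Risk: high (auth-adjacent, likely touches many files)

Suggested stories, in order:
1. Add a read-only settings page behind the existing auth guard. (low risk)
2. Add one saved display-name field with validation and tests. (low risk)
3. Add an email-change flow with confirmation. (high risk: review manually)
```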
## Documentation pages

## Run your first Hal loop

- URL: https://hal.rocks/docs/first-loop/
- Description: Run a small Hal loop from PRD planning through validation, implementation, and human review without handing over a broad feature.

### How do you run a first Hal loop safely?

Run a first Hal loop with one small requirement, a clean branch, validated stories, and a manual review before merge. The goal is not hands-off shipping. The goal is to learn the PRD-native sequence: initialize, plan, convert, validate, run, inspect, and decide what to keep.

### 1. Start from a clean branch

Create a branch for the experiment before the agent edits files. A clean branch makes every generated change easier to review or discard.

### 2. Initialize Hal

Run the project setup command. This creates the local workflow structure Hal uses for planning, validation, runtime state, and reviewable artifacts.

### 3. Plan a small requirement

Use a narrow feature that has a visible outcome. Review the generated PRD or story output before moving forward. Look for ambiguous scope, missing constraints, and unclear acceptance criteria.

### 4. Convert and validate

Prepare runtime state and check story quality. If validation reports unclear stories, fix the PRD before implementation. A vague story usually becomes a vague diff.

### 5. Run the implementation loop

When the plan is reviewable, run the loop. Hal can run autonomous implementation iterations with the configured engine, but it should still leave state a developer can inspect.

### 6. Review before merging

Inspect changed files, commits, reports, tests, generated state, and remaining acceptance criteria. Revert or repair anything that does not match the PRD. Use Hal to make agent work more structured, not to remove developer responsibility.

### Verification and review rule

Before adopting Hal, verify the current source repository, install path, release notes, and supported engine behavior.
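The six steps above, as one command sketch. The subcommand names come from this site's workflow list; the branch name, plan argument, and comparison branch are assumptions:

```shell
git checkout -b hal-first-loop          # 1. clean branch for the experiment
hal init                                # 2. create local workflow structure
hal plan "Add an empty-state CTA"       # 3. one narrow, visible requirement
hal convert && hal validate             # 4. runtime state and story quality
hal run                                 # 5. bounded implementation loop
git diff main                           # 6. review everything before merging
```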
A completed loop is reviewable agent output, not a guarantee that the code is safe to merge. Related workflow links Use the PRD readiness checker before planning, read the PRD-driven planning feature, and review whether Hal replaces code review before expanding to larger tasks. ## How to review AI-generated code before merge - URL: https://hal.rocks/docs/how-to-review-ai-generated-code/ - Description: Review AI-generated code by checking the requirement, changed files, tests, reports, edge cases, and unexpected edits before merging. How should you review AI-generated code? Review AI-generated code by comparing it against the original requirement, inspecting every changed file, running project checks, and confirming acceptance criteria before merge. In a Hal loop, also inspect reports, workflow state, commits, archives, and unresolved stories left by the run. Prerequisites - The PRD, issue, or task that started the work. - A clean branch or diff you can inspect. - The repository's normal build, lint, typecheck, and test commands. - Any project standards or AGENTS.md instructions used by the agent. Review sequence 1. Re-read the requirement and non-goals. 2. Inspect changed files before reading the agent's explanation. 3. Check whether every acceptance criterion is addressed. 4. Run local checks and tests. 5. Look for unexpected dependencies, abstractions, migrations, permissions changes, or broad refactors. 6. Confirm edge cases, empty states, error states, and rollback behavior where relevant. 7. Decide what to keep, fix, rerun, or discard. What to inspect in Hal output Hal is designed to leave reviewable state around the loop. Depending on current release behavior, inspect files such as reports, workflow state, commits, and archive output alongside the code diff. A completed command is not proof that the product behavior is correct. Review checklist | Area | Review question | |---|---| | Requirement | Does the diff solve the requested behavior and avoid non-goals? 
| | Scope | Did the agent touch only expected files? | | Tests | Do relevant checks pass locally? | | Product behavior | Are edge cases and failure states handled? | | Standards | Did the output follow repo commands and conventions? | | Merge safety | Can a human explain and support the change? | When to rerun instead of fix manually Rerun a smaller loop when the generated change is directionally useful but too broad, under-specified, or missing key constraints. Fix manually when the remaining work is small and the developer can confidently review the final diff. Verification and no-guarantees rule Verify current Hal behavior in the source repository and install docs. Hal can make AI coding work more inspectable, but it does not guarantee code quality, test success, or production readiness. ## How to structure AI coding tasks - URL: https://hal.rocks/docs/how-to-structure-ai-coding-tasks/ - Description: Structure AI coding tasks with a clear user, scope, constraints, acceptance criteria, non-goals, and review plan before implementation. How do you structure tasks for AI coding agents? Structure AI coding tasks as small, reviewable stories with a user, desired behavior, explicit scope, non-goals, constraints, acceptance criteria, and a review plan. Hal works best when the agent receives bounded work and the developer can inspect the resulting diff before continuing. Start with the review boundary Before writing the prompt, ask: can a developer review the output in one sitting? If not, split the task. Risky tasks often involve authentication, payments, migrations, permissions, security-sensitive code, broad refactors, or many unrelated files. Use the AI coding task splitter when scope is unclear. Task template Convert broad requests into stories | Broad request | Safer AI coding story | |---|---| | Add account settings. | Add one saved display-name field with validation and tests. | | Improve onboarding. 
| Add one empty-state CTA and confirm the first-run route still works. | | Refactor billing. | Extract one pure formatting helper without changing payment behavior. | Use Hal after structure is clear If validation highlights unclear stories, tighten the task before implementation. A vague story usually becomes a vague diff. What Hal does and does not do | Hal helps with | Hal does not replace | |---|---| | Planning, conversion, validation, running, reporting, archive state, and reviewable loops. | Product judgment, security review, QA, merge approval, or production ownership. | Verification rule Before relying on Hal in production, check the source repository, install guide, release notes, and supported engine documentation. ## How to use AI coding agents with PRDs - URL: https://hal.rocks/docs/how-to-use-ai-coding-agents-with-prds/ - Description: Use a PRD to give AI coding agents clearer scope, acceptance criteria, constraints, and review checkpoints before implementation starts. How do you use AI coding agents with PRDs? Use a PRD with an AI coding agent by turning product intent into scoped stories, acceptance criteria, constraints, and review checkpoints before implementation starts. In Hal, that means planning the requirement, validating the resulting work units, running a bounded loop, and reviewing output before merge. Prerequisites - A repository under version control. - A small product change or feature idea. - Access to a supported coding engine such as Codex, Claude Code, or Pi. - A willingness to review generated code before merging. Step 1: Write the smallest useful PRD Include: - user or operator affected by the change - desired behavior - in-scope and out-of-scope work - constraints such as auth, data, performance, or UI rules - acceptance criteria - review notes and risky files If the requirement is still vague, use the PRD readiness checker or AI coding PRD generator before implementation. 
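The Step 1 sections can be sketched as a single file. This is a minimal sketch only: the filename prd.md and every field value are invented for illustration, not a Hal requirement.

```shell
# Write a minimal PRD covering the sections listed above.
# The filename and all example values are illustrative only.
cat > prd.md <<'EOF'
# Saved display name

## User
A signed-in user editing account settings.

## Desired behavior
The user can save a display name and see it after reload.

## In scope
Settings form field, validation, persistence.

## Non-goals
Avatar upload, email change, admin views.

## Constraints
Reuse the existing settings form; no new dependencies.

## Acceptance criteria
- Empty names are rejected with an inline error.
- A saved name survives a page reload.

## Review notes
Risky files: settings form component, user model.
EOF
```

If this file is hard to write, the requirement is probably not ready for an agent yet.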
Step 2: Plan and validate in Hal Use the documented Hal flow, hal plan followed by hal convert and hal validate, as the reviewable boundary. Review the generated plan or story output. Do not continue if stories are too broad, criteria are untestable, or constraints are missing. Step 3: Run one bounded implementation loop Start with one story or one narrow requirement. The goal is to create output that can be reviewed in one sitting, not to hand over a broad roadmap. Step 4: Review before merge Inspect changed files, commits, reports, tests, unresolved criteria, and unexpected edits. If the output is partially correct, keep only what you can verify and rerun a smaller loop if needed. PRD checklist for agent work | PRD section | Why it matters for agents | |---|---| | Scope | Prevents broad diffs and invented work. | | Non-goals | Tells the agent what not to touch. | | Acceptance criteria | Gives reviewers observable checks. | | Constraints | Carries local product and technical rules. | | Review plan | Makes the human checkpoint explicit. | Verification and no-guarantees rule Before adopting Hal, verify the current source repository, install guide, release notes, and engine support. Hal structures coding loops; it does not guarantee correctness or replace developer review. ## Install Hal CLI - URL: https://hal.rocks/docs/install/ - Description: Install the Hal CLI, verify the command is available, and check the AI coding engine prerequisites before running a PRD-native loop. How do you install Hal CLI? Install Hal by using the Homebrew command shown on this website, then verify the hal command is available before running a coding loop. Hal is a terminal-first CLI, so setup should end with a local command check and a review of current repository installation notes. Install with Homebrew Use the install command shown by this site: brew tap j-yw/tap && brew install --cask hal. After installation, open a new terminal session if your shell needs to refresh its PATH.
Verify the binary Confirm that the hal command is available on your PATH before using it in a repository. If your installed release supports a diagnostics command, run it before starting a loop. Use the GitHub README as the source of truth for release-specific installation notes. Check engine prerequisites Hal wraps a coding loop around an implementation engine. Make sure you have access to one of the supported agent engines mentioned by this site: Codex, Claude Code, or Pi. The engine may have its own authentication, subscription, API, IDE, or model requirements. Those costs and limits are separate from Hal. Initialize a repository Run Hal inside a repository you can inspect and revert safely, beginning with hal init. Start with a branch and a small task. Do not point a new tool at production code without version control and a review plan. Next step Once installation is verified, run a small first loop. Use a narrow requirement, validate the generated stories, and inspect the output before merging. Verification and review rule Before adopting Hal, verify the current source repository, install path, release notes, and supported engine behavior. A completed loop is reviewable agent output, not a guarantee that the code is safe to merge. Related workflow links Use the PRD readiness checker before planning, read the PRD-driven planning feature, and review whether Hal replaces code review before expanding to larger tasks. ## Review Hal output before merging - URL: https://hal.rocks/docs/review-before-merge/ - Description: Use this checklist to review AI coding loop output from Hal before merging commits, reports, workflow state, or generated code. How do you review Hal output before merge? Review Hal output by reopening the original requirement, inspecting every changed file, checking reports and workflow state, running local project checks, and comparing the result against acceptance criteria. A completed loop is not a merge approval; it is agent-generated work prepared for human review.
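The review pass summarized above can be sketched in shell. The base branch main and the npm test command are assumptions; substitute the repository's own base branch and its real build, lint, typecheck, and test commands.

```shell
# Changed files first: scan the shape of the diff before reading
# any explanation the agent produced.
git diff --stat main...HEAD

# Then the full diff and the commits the loop left behind.
git diff main...HEAD
git log --oneline main..HEAD

# Finally, run the project's own checks (placeholder command).
npm test
```

A failing check is a reason to stop and repair, not a detail to explain away in the merge description.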
Start with the requirement Before reading the diff, reopen the PRD or requirement that started the loop. Confirm what the agent was supposed to change and what was explicitly out of scope. If the requirement was vague, do not treat a large diff as a success. Tighten the plan and split the work. Inspect changed files Review every changed file and look for: - unexpected edits outside the planned scope - invented abstractions or dependencies - inconsistent style compared with the repository - missing error states, empty states, or edge cases - generated code that passes shape checks but not product intent Check commits and reports Hal can leave commits, reports, progress files, workflow files, and archive state. Use those artifacts to answer basic review questions: - Which story ran? - What changed? - What tests or checks were attempted? - What remains unresolved? - Can the work be resumed, reverted, or archived cleanly? Run local checks Run the repository's normal build, typecheck, lint, and test commands. If the project has an AGENTS.md or standards file, confirm the agent followed it. A passing command does not prove product correctness, but a failing command is a reason to stop and repair. Review acceptance criteria Match the implementation against each acceptance criterion. Mark criteria as complete only when behavior is observable and testable. If a criterion cannot be verified, rewrite it before the next loop. Merge only after human acceptance Hal makes coding work more repeatable and inspectable. It does not guarantee code quality, security, CI success, or product fit. Merge only after a developer accepts the change. Verification and review rule Before adopting Hal, verify the current source repository, install path, release notes, and supported engine behavior. A completed loop is reviewable agent output, not a guarantee that the code is safe to merge. 
Related workflow links Use the PRD readiness checker before planning, read the PRD-driven planning feature, and review whether Hal replaces code review before expanding to larger tasks. ## FAQ pages ## Does Hal replace code review? - URL: https://hal.rocks/faq/does-hal-replace-code-review/ - Description: Hal does not replace code review. It structures AI coding loops so developers can inspect generated state, diffs, reports, and commits. Does Hal replace code review? Hal does not replace code review. It structures AI coding loops so developers can inspect requirements, stories, changed files, reports, commits, and archive state before deciding what to merge. The developer still owns correctness, testing, security review, and production judgment. Short answer Hal does not replace code review. It creates a workflow around AI coding agents so generated work is easier to inspect. Why review still matters AI coding agents can produce plausible changes that miss product intent, skip edge cases, or modify files outside the expected scope. A structured loop reduces drift, but it does not prove correctness. What Hal contributes Hal can help preserve the trail around a loop: requirements, stories, runtime state, reports, commits, and archive behavior. Those artifacts make it easier to answer what changed and why. Safe merge rule Do not merge because a loop completed. Merge only after a developer verifies the diff against the PRD and the repository's standards. Verification rule Before adopting Hal, inspect the source repository, current install guide, pricing page, and machine-readable pricing. Do not assume hosted plans, unsupported engines, metrics, or guarantees unless the current source documents them. ## How is Hal different from chatting with an AI coding agent? 
- URL: https://hal.rocks/faq/how-is-hal-different-from-chatting-with-an-ai-coding-agent/ - Description: Hal differs from direct AI agent chat by turning requirements into a PRD-native, validated, reviewable coding loop. How is Hal different from chatting with an AI coding agent? Hal is different from chatting with an AI coding agent because it provides a PRD-native workflow around the agent. Instead of one open-ended conversation, Hal emphasizes planning, story conversion, validation, implementation, reports, archive state, and developer review before merge. Direct chat is useful but loose Direct agent chat can work well for tiny edits, exploration, or quick questions. It becomes harder to manage when scope grows, because requirements, constraints, review notes, and decisions may live only in the conversation. Hal makes the workflow explicit A Hal loop can include commands such as hal plan, hal convert, hal validate, and hal run. The point is not that commands alone make work correct. The point is that the developer gets checkpoints before and after implementation. Hal vs direct prompting | Dimension | Hal | Direct agent chat | |---|---|---| | Source of truth | PRD, stories, standards, workflow state. | Conversation context. | | Scope control | Plan and validate before run. | Scope can expand turn by turn. | | Engine role | Codex, Claude Code, or Pi can be implementation engines. | The agent interface is both workflow and engine. | | Review | Inspect artifacts and diffs before merge. | Review often depends on chat history and final diff. | Verification rule Before relying on Hal, inspect the source repository, engine support FAQ, and install guide. Do not assume support for engines beyond what current source documents. ## Is AI-generated code safe to merge? - URL: https://hal.rocks/faq/is-ai-generated-code-safe-to-merge/ - Description: AI-generated code is not automatically safe to merge. Review the diff, tests, requirements, and edge cases before accepting it. Is AI-generated code safe to merge?
AI-generated code is not automatically safe to merge. Treat it as draft implementation work until a developer verifies the requirement, changed files, tests, edge cases, and unexpected edits. Hal can make the work more reviewable, but the merge decision still belongs to a human. Why automatic merge is risky AI coding agents can produce plausible changes that miss product intent, skip edge cases, introduce unnecessary abstractions, or touch files outside the intended scope. Passing one check does not prove the behavior is correct. Safer merge rule Merge only after a developer can answer: - What requirement did this change address? - Which files changed and why? - Which acceptance criteria are satisfied? - Which tests or checks were run? - What remains unresolved? - Are there security, data, auth, or migration risks? Where Hal helps Hal is designed around PRD-native, reviewable coding loops. It can help preserve planning context, validation state, reports, commits, workflow files, and archive behavior so review is easier than reading an unstructured chat. Verification rule Before adopting Hal, inspect the source repository, review guide, and current release notes. Do not assume guarantees, automatic merge safety, or unsupported engine behavior. ## What is Hal? - URL: https://hal.rocks/faq/what-is-hal/ - Description: Hal is a terminal-first CLI for PRD-native AI coding loops. Learn what it does, where it fits, and what developers still review. What is Hal? Hal is a terminal-first CLI for PRD-native AI coding loops. It helps developers turn product requirements into planned, validated, reviewable implementation work using supported coding engines such as Codex, Claude Code, and Pi. Hal does not replace human review; it makes agent output easier to inspect before merge. Short answer Hal is a command-line tool for running PRD-native AI coding loops. 
Instead of handing a vague prompt to an agent, you plan the requirement, convert it into structured runtime state, validate the stories, run implementation, and review the output. What Hal does Hal helps developers move from product intent to reviewable implementation work. The site describes commands such as hal init, hal plan, hal convert, hal validate, and hal run. What Hal does not do Hal does not remove the need for developer review. It does not guarantee code quality, passing tests, delivery speed, revenue, rankings, or production readiness. Where to start Install Hal, choose a small task, run the first loop, and inspect every changed file before merging. Verification rule Before adopting Hal, inspect the source repository, current install guide, pricing page, and machine-readable pricing. Do not assume hosted plans, unsupported engines, metrics, or guarantees unless the current source documents them. ## Which engines does Hal support? - URL: https://hal.rocks/faq/which-engines-does-hal-support/ - Description: This site mentions Codex, Claude Code, and Pi as supported agent engines for Hal's PRD-native coding loop. Which AI coding engines does Hal support? This website mentions Codex, Claude Code, and Pi as supported coding engines for Hal. Treat the list as a site claim to verify against the current source repository and release notes before production use. Do not assume support for additional engines unless documented. Short answer This website mentions Codex, Claude Code, and Pi as supported coding engines for Hal. How engines fit into the loop Hal treats the coding agent as the implementation engine. The workflow around that engine is still PRD-native: plan the requirement, validate the stories, run the loop, and inspect output. Why this matters If the workflow lives entirely inside one agent tool, switching tools can mean rebuilding prompts, standards, and review habits. Hal is designed to keep those workflow pieces explicit. 
Verification rule Before relying on an engine for production work, verify the current source repository and release documentation. Do not rely on unsupported engine claims. ## Glossary pages ## Acceptance criteria - URL: https://hal.rocks/glossary/acceptance-criteria/ - Description: Acceptance criteria define observable conditions a story must satisfy before generated code can be accepted after review. What are acceptance criteria? Acceptance criteria are observable conditions a story must satisfy before generated code can be accepted. For AI coding, they give the agent a definition of done and give the developer a checklist for reviewing behavior, edge cases, constraints, and unresolved work. Definition Acceptance criteria are specific conditions a story must meet to be considered complete. They should be observable, testable, and tied to product behavior. In AI coding workflows Acceptance criteria help prevent vague prompts from turning into vague diffs. They give the implementation engine a target and give the reviewer a checklist. Good criteria Good acceptance criteria name the user-facing behavior, relevant edge cases, constraints, and non-goals. They avoid broad statements like “works well” or “make it better.” In Hal Hal's PRD-driven workflow encourages requirements and stories that can be validated before implementation. If acceptance criteria are unclear, split or rewrite the story before running the loop. Where to go next - Read the first Hal loop guide to see the workflow in practice. - Review PRD-driven planning for the feature mechanics. - Use the PRD readiness checker before handing a requirement to an agent. - Verify current product behavior in the Hal source repository. ## Agentic coding workflow - URL: https://hal.rocks/glossary/agentic-coding-workflow/ - Description: An agentic coding workflow uses AI agents to perform bounded software implementation steps while developers plan, validate, and review output. What is an agentic coding workflow? 
An agentic coding workflow uses an AI coding agent to perform implementation steps while a developer controls the requirement, constraints, review points, and merge decision. The workflow is strongest when tasks are bounded, standards are explicit, and output is inspected before production use. Core pieces A practical agentic coding workflow usually includes: 1. A written requirement or PRD. 2. Story-sized implementation tasks. 3. Acceptance criteria and non-goals. 4. Repo-specific standards and commands. 5. A run step using a coding agent. 6. Human review of files, tests, reports, and unresolved work. Where Hal fits Hal provides a terminal-first loop around supported coding engines such as Codex, Claude Code, and Pi. Hal is not the model. It is the workflow layer that helps structure planning, validation, execution, reporting, archiving, and review. Agentic workflow vs agent chat | Agentic coding workflow | Agent chat | |---|---| | Uses explicit tasks and checkpoints. | Can drift across many turns. | | Keeps requirements and standards inspectable. | Often stores decisions in conversation history. | | Preserves reviewable state. | May leave only a final diff. | | Treats merge as a developer decision. | Can create pressure to accept completion as correctness. | Example Hal sequence A typical sequence runs hal init, hal plan, hal convert, hal validate, and hal run, then reviews the result. Start with a low-risk task and confirm the resulting state is understandable before expanding the workflow. Related pages - AI coding loop - Reviewable AI coding loop - How to structure AI coding tasks - AI coding task splitter Verification rule Before adopting Hal, inspect the source repository, current install guide, and release notes for documented command and engine behavior. ## AI coding loop - URL: https://hal.rocks/glossary/ai-coding-loop/ - Description: An AI coding loop is a repeated plan, implement, inspect, and continue workflow for agent-assisted software development. What is an AI coding loop?
An AI coding loop is a repeatable workflow where a developer defines a bounded task, an AI coding agent implements against that task, and the resulting changes are inspected before continuing or merging. Hal structures this loop around PRDs, validation, runtime state, reports, and review. Definition An AI coding loop is a repeatable sequence for using an AI coding agent on software work. A healthy loop defines the task, runs implementation, records what happened, and gives a developer a review point. In Hal Hal frames the loop around PRDs and stories. The site describes the sequence as planning, conversion, validation, running, reporting, archiving, and human review. What makes a loop reviewable A reviewable loop leaves artifacts. Those can include changed files, commits, reports, workflow state, acceptance criteria, and archive history. What to avoid Avoid one broad prompt that asks an agent to implement an entire complex feature without boundaries. Long unstructured sessions are harder to inspect and easier to drift. Where to go next - Read the first Hal loop guide to see the workflow in practice. - Review PRD-driven planning for the feature mechanics. - Use the PRD readiness checker before handing a requirement to an agent. - Verify current product behavior in the Hal source repository. ## PRD-driven AI coding - URL: https://hal.rocks/glossary/prd-driven-ai-coding/ - Description: PRD-driven AI coding uses a product requirements document to guide agent planning, story splitting, implementation, and review. What is PRD-driven AI coding? PRD-driven AI coding means using a product requirements document to guide an AI coding agent from scope to implementation review. The PRD defines the user, behavior, constraints, acceptance criteria, and non-goals before the agent starts changing files. How it works in practice A developer writes or generates a PRD, reviews the scope, splits the work into smaller stories, and validates those stories before implementation. 
The agent then works against a more explicit target than a one-line prompt. In Hal, the workflow can look like this: plan the requirement, convert it into stories, validate those stories, run one bounded loop, and review the output before merge. Why PRDs help AI agents AI coding agents are sensitive to ambiguity. If the prompt does not define success, the agent may invent scope or optimize for the wrong outcome. A PRD gives both the agent and the reviewer a shared source of truth. PRD-driven vs direct prompting | Dimension | PRD-driven AI coding | Direct agent prompting | |---|---|---| | Scope | Written before implementation. | Often negotiated during the chat. | | Review | Compared against requirements and criteria. | Compared against the latest conversation. | | Risk control | Encourages story splitting and validation. | Easy to over-broaden. | | Best fit | Product changes with behavior and edge cases. | Tiny edits or exploratory work. | How Hal uses PRDs Hal is built around PRD-native coding loops. It helps move from product intent into planned, validated, reviewable work while still requiring a developer to inspect generated output before merging. Related pages - PRD-native development - PRD-driven planning - AI coding PRD generator - PRD readiness checker Verification rule Check the current Hal repository, install docs, and release notes before relying on command behavior in production work. ## PRD-native development - URL: https://hal.rocks/glossary/prd-native-development/ - Description: PRD-native development means using a product requirements document as the source of truth for planning, implementation, and review. What is PRD-native development? PRD-native development means the product requirements document is the source of truth for planning, implementation, and review. Instead of treating the PRD as a handoff artifact, the workflow keeps stories, acceptance criteria, constraints, and merge decisions tied back to the original requirement. Definition PRD-native development is a workflow where product requirements guide the implementation loop from the beginning.
The PRD is used to define scope, split stories, write acceptance criteria, and review output. In Hal Hal is built around PRD-native loops. Commands such as hal plan, hal convert, and hal validate exist so implementation starts from structured product intent instead of a loose prompt. Why it helps A clear PRD gives both the agent and the reviewer a shared target. It also makes broad work easier to split into smaller stories that can be inspected one at a time. Related terms PRD-native development connects directly to AI coding loops, acceptance criteria, project standards, and review-before-merge practices. Where to go next - Read the first Hal loop guide to see the workflow in practice. - Review PRD-driven planning for the feature mechanics. - Use the PRD readiness checker before handing a requirement to an agent. - Verify current product behavior in the Hal source repository. ## Reviewable AI coding loop - URL: https://hal.rocks/glossary/reviewable-ai-coding-loop/ - Description: A reviewable AI coding loop is an agent-assisted workflow that leaves inspectable requirements, changes, reports, and checkpoints before merge. What is a reviewable AI coding loop? A reviewable AI coding loop is a workflow where an AI coding agent works on a bounded task and leaves enough evidence for a developer to inspect the result. In Hal, that evidence can include PRD context, stories, validation state, reports, changed files, commits, and archive behavior. Why reviewability matters Agent output can look plausible while missing product intent, edge cases, local standards, or security concerns. Reviewability does not make the output correct by itself. It makes the decision easier because the developer can compare the diff against the requirement and acceptance criteria. Example workflow A healthy loop has a checkpoint before hal run and another checkpoint after implementation. The first checkpoint reviews scope. 
The second reviews behavior, tests, changed files, and unresolved criteria. Reviewable vs unreviewable agent work | Reviewable loop | Unreviewable session | |---|---| | Starts from a PRD or scoped requirement. | Starts from a broad prompt. | | Splits work into stories with acceptance criteria. | Lets scope expand inside the chat. | | Preserves state, reports, and changed files. | Leaves decisions buried in conversation history. | | Requires developer review before merge. | Treats completion as permission to merge. | How Hal applies the term Hal is a terminal-first CLI for PRD-native coding loops. It treats the configured engine as the implementation worker while the loop supplies planning, validation, execution, reports, archive state, and review points. Related pages - AI coding loop - PRD-native development - Run your first Hal loop - Does Hal replace code review? Verification rule Before adopting Hal, verify the current source repository, install guide, release notes, and documented engine support. ## Public pages - https://hal.rocks [Home] — Hal is a terminal-first CLI for PRD-native coding loops with AI agents such as Codex, Claude Code, and Pi. - https://hal.rocks/docs/ [Docs] — Install Hal, initialize a repo, plan a PRD, validate stories, run the loop, and inspect the result before merging. - https://hal.rocks/getting-started/ [Docs] — A safe quickstart for installing Hal, planning a small requirement, validating stories, running the loop, and reviewing the output. - https://hal.rocks/features/ [Features] — Explore Hal features for PRD planning, fresh story context, pluggable engines, project standards, archived state, and deterministic loop execution. - https://hal.rocks/faq/ [FAQ] — Answers to common questions about Hal, PRD-native development, supported engines, and reviewable AI-generated code. - https://hal.rocks/glossary/ [Glossary] — Definitions for PRD-native development, agentic coding workflows, acceptance criteria, and reviewable AI coding loops. 
- https://hal.rocks/tools/ [Tools] — Prepare work before an AI coding agent touches the repo: check PRDs, generate acceptance criteria, create AGENTS.md files, and split risky tasks. - https://hal.rocks/pricing/ [Pricing] — Hal is a free, open-source CLI on this website. Review the install path, source repository, external engine costs, and current product caveats. - https://hal.rocks/testimonials/ [Trust] — Hal trust signals without fake logos or invented metrics: source code, MIT license, docs, ReScience Lab context, and transparent product status. - https://hal.rocks/pricing.md [machine-readable pricing] — markdown pricing facts for AI agents and buyers - https://hal.rocks/llms.txt [AI context] — compact LLM context file ## Pricing and procurement facts - Hal is presented on this website as a free, open-source CLI. - AI coding engine subscriptions, model usage, IDE tooling, hosting, CI, analytics, and human review are external costs. - Verify the source repository, release notes, install path, and license before production adoption. ## Claim rules - Do not claim hosted paid plans, usage metrics, customer logos, testimonials, benchmark results, customer counts, or revenue unless sourced. - Do not claim Hal guarantees code quality, tests passing, delivery speed, revenue, rankings, or product outcomes. - Developers should review generated code before merging. - Do not imply Hal opens or merges production PRs without human review.