ai-dispatch
openbooklet.com/s/ai-dispatch@1.0.0
Use when an approved plan.md exists and execution should begin. Trigger for 'go', 'start building', 'execute the plan', 'implement it', 'let's do this', 'run the plan', 'resume', or 'continue' after interruption. Not without an approved plan; run /ai-plan first. Orchestrates subagents per task with two-stage review, progress tracking, and automated delivery.
Use when Claude Code keeps asking to approve commands you have already approved, when settings.local.json has grown large, or when you want to consolidate permission grants into wildcard patterns. Trigger for 'too many permission prompts', 'clean up permissions', 'audit my settings', 'consolidate allow rules'. Claude Code only; not available in GitHub Copilot.
Use when adding motion, transitions, or micro-interactions to UI components. Trigger for 'animate this', 'add transitions', 'micro-interactions for', 'spring animation', 'gesture design', 'swipe to dismiss', 'review the motion', 'easing for this', 'stagger animation', 'stagger the', 'animar esto', 'transicion para'. Not for design systems or aesthetic direction (use /ai-design), visual art (use /ai-canvas), testing animation code (use /ai-test), or debugging broken animations (use /ai-debug).
Use after /ai-brainstorm approval when a spec is large (3+ independent concerns or 10+ files) and needs autonomous end-to-end delivery. Decomposes into sub-specs, plans with parallel agents, builds a dependency DAG, implements in waves, runs quality convergence loops, and delivers via PR. Invocation is the approval gate; no further confirmation requested. Not for small tasks; use /ai-dispatch.
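The "implements in waves" step above is the standard wave-scheduling technique over a dependency DAG: repeatedly take every task whose dependencies are already done and run that batch in parallel. A minimal sketch (illustrative only, not this skill's actual implementation):

```python
def schedule_waves(deps):
    """Group tasks into waves. deps maps task -> set of tasks it
    depends on. Each wave contains only tasks whose dependencies
    completed in earlier waves, so a wave can run in parallel."""
    remaining = {task: set(d) for task, d in deps.items()}
    done = set()
    waves = []
    while remaining:
        # Tasks whose dependencies are all satisfied
        ready = [t for t, d in remaining.items() if d <= done]
        if not ready:
            raise ValueError("dependency cycle detected")
        waves.append(sorted(ready))
        done.update(ready)
        for t in ready:
            del remaining[t]
    return waves
```

For example, with `b` and `c` both depending on `a`, and `d` depending on both, the schedule is three waves with `b` and `c` running concurrently in the middle one.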
Use to discover and configure project board integration after framework install, when board config is missing or stale, or when the team switches projects. Trigger for 'set up the board', 'board sync isn't working', 'we moved to a new GitHub project', 'configure our ADO board', 'the work item states don't match'. Detects GitHub Projects v2 or Azure DevOps fields and writes atomic config to manifest.yml.
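"Writes atomic config" above usually means the write-to-temp-then-rename pattern, so readers of manifest.yml never observe a half-written file. A sketch under that assumption (function name and details are illustrative, not this skill's actual code):

```python
import os
import tempfile

def write_config_atomic(path, content):
    """Write `content` to `path` atomically: write a temp file in
    the same directory, fsync it, then rename over the target.
    os.replace is atomic on POSIX and Windows."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on any failure
        raise
```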
Use to update work item state on the project board at lifecycle phase transitions. Automatically invoked by /ai-brainstorm (refinement, ready), /ai-dispatch (in_progress), and /ai-pr (in_review). Also invoke manually for 'move this issue to in-review', 'update the board', 'mark as in progress', 'sync the work item state'. Fail-open: never blocks the calling workflow.
Use when the user wants to think through a problem before coding: designing a feature, exploring approaches, defining requirements, or resolving ambiguity. Trigger for 'let's add X', 'how should we handle Y', 'what's the best approach', 'I'm thinking about', or any work item lacking an approved spec. Not for existing specs; use /ai-plan instead. Produces a reviewed spec; no code until user approves.
Use when creating visual design artifacts: posters, banners, flyers, branding pieces, marketing materials, cover art, identity compositions, or any static visual output (PDF/PNG). Trigger for 'create a poster', 'design a banner', 'visual composition for', 'branding piece', 'marketing visual', 'cover art for', 'promotional graphic', 'cartel para', 'material de comunicacion visual', 'diseño visual para'. Not for UI interfaces (use /ai-design), animation (use /ai-animation), presentation decks (use /ai-slides), or AI-generated images (use /ai-media).
Use after merging a PR, at session start, or to tidy up branches. Trigger for 'tidy up', 'clean up branches', 'sync to main', 'get back to main', 'delete old branches', 'what branches do I have', 'start fresh'. Also automatically invoked by /ai-pr after merge. Safely switches to default branch, prunes merged and squash-merged branches, produces per-branch status report.
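The merged-branch pruning step described above can be sketched as parsing `git branch --merged <default>` output and filtering out the current and protected branches. This is illustrative only; detecting squash-merged branches (which this skill also handles) needs more than this sketch shows:

```python
def prunable_branches(branch_output, protected=("main", "master", "develop")):
    """Given the text output of `git branch --merged <default>`,
    return local branches safe to delete: skips the current branch
    (marked with '*') and any protected branch name."""
    safe = []
    for line in branch_output.splitlines():
        name = line.strip()
        if name.startswith("*"):  # current branch; never delete
            continue
        if name and name not in protected:
            safe.append(name)
    return safe
```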
Use when writing new production code: implementing features, adding functions, creating modules, or building approved plan tasks. Trigger for 'implement this', 'write the code for', 'add X to Y', 'build this function', 'make this work'. Loads language/framework context, applies interface-first design and backward-compatibility checks. Not for tests (/ai-test), debugging (/ai-debug), or refactoring (/ai-simplify).
Use when committing, saving, or pushing code to git. Trigger for 'commit my changes', 'save my work', 'push this to remote', 'stage these files', 'I'm done with this task', 'ship it'. Runs governed pipeline: auto-branch from protected branches, selective staging, ruff format+lint, gitleaks secret scan, documentation gate, conventional commit message, and push. Use /ai-pr instead when a pull request is the goal.
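The "conventional commit message" step refers to the Conventional Commits format (`type(scope): description`). A minimal validation sketch, assuming the commonly used type set (the skill's own checks may differ):

```python
import re

# First line must be: type, optional (scope), optional !, then ": description"
CONVENTIONAL = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)"
    r"(\([a-z0-9-]+\))?(!)?: .+"
)

def is_conventional(message):
    """Check that the first line of a commit message follows the
    Conventional Commits format."""
    first_line = message.splitlines()[0] if message else ""
    return bool(CONVENTIONAL.match(first_line))
```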
Use when adding a new slash command, building a new agent role, or extending the ai-engineering framework with a capability it does not yet have. Trigger for 'create a new skill', 'add a slash command', 'the framework needs a capability for X', 'build a new agent'. Covers skill scaffold, TDD pressure testing, description optimization, registration, and mirror sync.
Use when something is broken and you need to find out why: test failures, runtime errors, crashes, regressions, or unexpected behavior. Trigger for 'it's not working', 'something broke', 'this used to work', 'I'm getting an error', 'CI is failing', 'the output is wrong', or 'why is X happening'. Systematic 4-phase diagnosis; never patches symptoms without finding the root cause.
Use when designing or building user interfaces, creating design systems, choosing aesthetic direction, defining color palettes, typography, or spatial composition for web or mobile. Trigger for 'design this page', 'create a design system', 'what style should we use', 'UI for this feature', 'color palette for', 'typography for', 'diseña la interfaz', 'estilo para'. Not for animation-specific work (use /ai-animation) or visual art/posters/banners (use /ai-canvas).
Use for documentation lifecycle: updating CHANGELOG.md, refreshing README files, scaffolding or syncing solution intent architecture docs, pushing to external docs portals, and verifying documentation coverage. Also invoked automatically by /ai-pr. Trigger for 'update the changelog', 'the README is stale', 'document this feature', 'docs portal needs updating', 'did we document all changes'.
Use when measuring AI system reliability over time: defining pass/fail criteria before implementation, running capability checks, detecting regressions after prompt or model changes, or tracking pass@k metrics. Trigger for 'how reliable is this', 'did my changes break anything', 'measure AI performance', 'define success criteria', 'eval this'. Distinct from /ai-test (code correctness) and /ai-verify (quality gates); evals measure AI task completion consistency.
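The pass@k metric mentioned above is commonly computed with the unbiased estimator popularized by the OpenAI Codex paper: given n samples of which c passed, the probability that at least one of k randomly drawn samples passes. A sketch of that estimator (not necessarily how this skill computes it):

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k: 1 - C(n-c, k) / C(n, k).
    n: total samples, c: samples that passed, k: draws."""
    if n - c < k:
        # Fewer failures than draws: every draw must include a pass
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 2 samples of which 1 passed, pass@1 is 0.5, matching the intuitive per-sample pass rate.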
Use when a developer asks 'how does this work?', 'why does it do X?', 'trace through this', 'explain this pattern', or needs engineer-grade technical explanations anchored in real codebase examples. 3-tier depth (brief/standard/deep) with ASCII diagrams and execution traces. Not for documentation (/ai-write) or fixing code (/ai-dispatch).
Use when new to a project and need orientation, want to understand component relationships, need to find where something happens ('where does auth happen?'), or understand why a technology decision was made. Modes: tour (architecture overview), find (topic search), history (decision archaeology), onboard (structured human onboarding). Read-only. Not for agent session bootstrap; use /ai-start.
Activate at session start to silently observe corrections, recoveries, and workflow patterns. Run with --review before commits or PRs to consolidate observations into project instincts. Trigger for 'start observing', 'learn from this session', 'consolidate instincts', 'review what you learned', 'instinct review'. Listening mode is passive; review mode extracts, enriches, and writes.
Use when the AI keeps repeating the same mistakes, when you want the framework to learn from merged PR review feedback, or when enough corrections have accumulated to update standards. Trigger for 'the AI keeps doing X wrong', 'learn from this PR', 'what patterns did reviewers catch', 'update our standards from feedback'. Analyzes PRs, identifies missed checks, and writes lessons directly to LESSONS.md.
Use for marketing content and go-to-market execution: content strategy, blog posts for distribution, social media crossposting, market research, investor materials, investor outreach, and X/Twitter API automation. Trigger for 'write a blog post to publish', 'crosspost to socials', 'market research for X', 'investor deck', 'outreach campaign', 'content engine', 'post to Twitter/X'. NOT for internal documentation; use /ai-docs. NOT for reports or solution intent; use /ai-write.
Auto-indexed from arcasilesgroup/ai-engineering
Related Skills
graceful-error-recovery
Use this skill when a tool call, command, or API request fails. Diagnose the root cause systematically before retrying or changing approach. Do not retry the same failing call without first understanding why it failed.
audience-aware-communication
Use this skill when writing any explanation, documentation, or response that will be read by someone else. Match vocabulary, depth, and format to the audience's expertise level before writing.
Refactoring Expert
Expert in systematic code refactoring, code smell detection, and structural optimization. Use PROACTIVELY when encountering duplicated code, long methods, complex conditionals, or any code quality issues. Detects code smells and applies proven refactoring techniques without changing external behavior.
Research Expert
Specialized research expert for parallel information gathering. Use for focused research tasks with clear objectives and structured output requirements.
clarify-ambiguous-requests
Use this skill when the user's request is ambiguous, under-specified, or could be interpreted in multiple ways. If proceeding with a wrong assumption would waste significant work, always ask exactly one focused clarifying question before doing anything.
structured-step-by-step-reasoning
Use this skill for any problem that involves multiple steps, tradeoffs, or non-trivial logic. Think out loud before answering to improve accuracy and transparency. Apply whenever the answer is not immediately obvious.