ADR Review Panel
This skill runs a panel of specialist reviewer subagents over an Architectural Decision Record (ADR) and produces a consolidated report in two formats: a PDF document and a PPTX slide deck.
rlm
Run a Recursive Language Model-style loop for long-context tasks. Uses a persistent local Python REPL and an rlm-subcall subagent as the sub-LLM (llm_query).
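A minimal sketch of the loop shape, assuming a hypothetical `llm_query()` helper that the skill wires to the rlm-subcall subagent; the chunking strategy and function names are illustrative, not the skill's actual interface:

```python
def llm_query(prompt: str) -> str:
    """Placeholder for the sub-LLM call; the skill routes this to the rlm-subcall subagent."""
    raise NotImplementedError

def rlm_summarize(document: str, chunk_size: int = 8000) -> str:
    # Split the long context into chunks the sub-LLM can handle.
    chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]
    # Map: query the sub-LLM once per chunk from the persistent REPL.
    partials = [llm_query(f"Summarize this excerpt:\n{c}") for c in chunks]
    # Reduce: fold the partial answers into one final answer.
    return llm_query("Combine these partial summaries:\n" + "\n---\n".join(partials))
```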
Agents vs Commands vs Skills: When to Use What
A comparison of the three extension mechanisms in Claude Code: subagents, commands, and skills.
programmatic-agent-runs
Govern local, cloud, self-hosted, and subagent Cursor SDK coding runs before they create branches or PRs.
sidecar
Spawn conversations with other LLMs (Gemini, GPT, ChatGPT, Codex, o3, DeepSeek, Qwen, Grok, Mistral, etc.) and fold results back into your context.

TRIGGER when: the user asks to talk to, chat with, use, call, or spawn another LLM or model; the user mentions Gemini, GPT, ChatGPT, Codex, o3, DeepSeek, Claude (as a sidecar target), Qwen, Grok, Mistral, or any non-current model by name; the user asks for a second opinion from another model; the user wants parallel exploration with a different model; the user says "sidecar", "fork", or "fold".

CRITICAL RULES:
1. ALWAYS launch sidecar CLI commands with the Bash tool's run_in_background: true. Never run sidecar start/resume/continue in the foreground.
2. The fold summary returns on stdout when the user clicks Fold in the GUI or the headless agent finishes. Use TaskOutput to read it when the background task completes.
3. Use --prompt for the start command (NOT --briefing); --briefing is only for subagent spawn.
4. NEVER use o3 or o3-pro unless the user explicitly asks for one by name; these models are extremely expensive ($10-60+ per request). If the user asks for o3, warn them about the cost before proceeding. Default to gemini for most tasks.
5. When the user asks to query MULTIPLE LLMs simultaneously (e.g., "ask Gemini AND ChatGPT", "compare Gemini vs GPT"), ALWAYS use --no-ui (headless) for all of them unless the user explicitly requests interactive mode; opening multiple Electron windows at once is disruptive. Launch them all in parallel with run_in_background: true, as in the sketch after this list.
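As a rough illustration of rule 5, the sketch below fans one prompt out to two models headlessly and in parallel. It approximates the Bash tool's run_in_background with `subprocess.Popen`; the `--prompt` and `--no-ui` flags are quoted from the rules above, while `--model` and the overall CLI shape are assumptions:

```python
import subprocess

prompt = "Review this design for failure modes."
models = ["gemini", "gpt"]  # never reach for o3/o3-pro unless explicitly requested (rule 4)

# Launch every sidecar headless and in the background, never in the foreground.
procs = [
    subprocess.Popen(
        ["sidecar", "start", "--model", m, "--prompt", prompt, "--no-ui"],
        stdout=subprocess.PIPE,  # the fold summary arrives on stdout (rule 2)
        text=True,
    )
    for m in models  # --model is a guess; --prompt/--no-ui come from the rules above
]
for proc in procs:
    out, _ = proc.communicate()  # read each fold summary once the agent finishes
    print(out)
```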
exploring-llm-traces
ABSOLUTE MUST for debugging and inspecting LLM/AI agent traces using PostHog's MCP tools. Use when the user pastes a trace URL (e.g. /llm-observability/traces/<id>), asks to debug a trace, figure out what went wrong, check if an agent used a tool correctly, verify context/files were surfaced, inspect subagent behavior, investigate LLM decisions, or analyze token usage and costs.
Pre-Commit Code Verification
Automated verification pipeline before code lands. Static scans, baseline-aware quality gates, an independent reviewer subagent, and an auto-fix loop.
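One plausible shape for the gate-and-fix cycle, with hypothetical `scan()` and `attempt_fix()` stand-ins for the skill's static scanners and fix passes; the baseline check is what keeps pre-existing issues from blocking new work:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    key: str      # stable fingerprint of the issue
    message: str

def scan(files: list[str]) -> list[Finding]:
    """Placeholder for the static scanners."""
    return []

def attempt_fix(finding: Finding) -> None:
    """Placeholder for an auto-fix pass (e.g. a fix subagent)."""

def verify(files: list[str], baseline: set[str], max_rounds: int = 3) -> bool:
    for _ in range(max_rounds):
        # Baseline-aware: only findings absent from the baseline fail the gate.
        new = [f for f in scan(files) if f.key not in baseline]
        if not new:
            return True
        for finding in new:
            attempt_fix(finding)
    return False  # gates still failing after the fix budget; block the commit
```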
PKM Session End: Knowledge Capture and Graph Maintenance
Workflow for session wrap-up. When running as a subagent, the delegation prompt provides the project path and devlog boundary context. The agent's system prompt handles transcript discovery (Step 0) before this workflow begins.
Implement Universal: Single-Context Workshop Implementation Loop
This is the harness-agnostic twin of `/implement`. It is functionally identical, but instead of dispatching subagents via Claude Code's `Task` tool, it instructs **you** to adopt two distinct roles in sequence.
context-pack
Creates structured handoff briefings between agents. Packages task context, constraints, and progress into a compact packet that subagents can consume without re-reading the full conversation. Prevents the 'lost context' problem in multi-agent delegation.
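A handoff packet might look something like the sketch below; the field names are illustrative, not the skill's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ContextPack:
    task: str                           # what the subagent must accomplish
    constraints: list[str]              # hard rules it must not violate
    progress: list[str]                 # what is already done, so work isn't repeated
    key_files: list[str] = field(default_factory=list)   # paths worth reading first
    open_questions: list[str] = field(default_factory=list)

    def render(self) -> str:
        """Flatten the packet into a compact briefing for the subagent prompt."""
        sections = [
            ("Task", [self.task]),
            ("Constraints", self.constraints),
            ("Progress so far", self.progress),
            ("Key files", self.key_files),
            ("Open questions", self.open_questions),
        ]
        lines: list[str] = []
        for title, items in sections:
            if items:
                lines.append(f"## {title}")
                lines += [f"- {item}" for item in items]
        return "\n".join(lines)
```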
Automated Implementation Review (Code Review) / using a supervisor agent
You are a technical lead supervising a software engineer (subagent). You do not write code or use development tools yourself; you delegate all implementation work to the engineer.
swarm-iosm
Orchestrate complex development with AUTOMATIC parallel subagent execution, continuous dispatch scheduling, dependency analysis, file conflict detection, and IOSM quality gates. Analyzes task dependencies, builds critical path, launches parallel background workers with lock management, monitors progress, auto-spawns from discoveries. Use for multi-file features, parallel implementation streams, automated task decomposition, brownfield refactoring, or when user mentions "parallel agents", "orchestrate", "swarm", "continuous dispatch", "automatic scheduling", "PRD", "quality gates", "decompose work", "Mixed/brownfield".
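The file-conflict side of this is easy to illustrate: two tasks can share a parallel batch only if their declared file sets are disjoint. The greedy batching below is a simplified stand-in for the skill's scheduler and ignores its dependency and critical-path analysis:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    files: set[str]   # files this task is expected to touch

def parallel_batches(tasks: list[Task]) -> list[list[Task]]:
    """Greedy batching: a task joins the first batch whose claimed files it
    does not touch; otherwise it starts a new, later batch."""
    batches: list[tuple[list[Task], set[str]]] = []
    for task in tasks:
        for batch, claimed in batches:
            if not (task.files & claimed):   # no file conflict with this batch
                batch.append(task)
                claimed |= task.files        # lock its files for the batch
                break
        else:
            batches.append(([task], set(task.files)))
    return [batch for batch, _ in batches]
```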
workflow-orchestration
Disciplined task execution with planning, verification, and self-improvement loops. Use when starting non-trivial tasks (3+ steps), fixing bugs, building features, refactoring code, or when rigorous execution with quality gates is needed. Includes subagent delegation, lessons tracking, and staff-engineer-level verification.
cyberconan
Security Audit Swarm: Full repo security scan (SAST, SCA, secrets, config). Adaptive orchestration: subagents for small repos, Agent Teams for large. Pure Claude analysis.
Agent MCP Gateway
On-demand tool discovery for all your MCPs, plus per-subagent MCP controls, to preserve the context window.
review-loop
Runs multi-pass automated code review with per-issue fix subagents. Triggers when preparing a branch for PR, reviewing code changes, or when thorough automated code quality review is needed.
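The overall control flow is roughly the loop below, with `review_pass()` and `dispatch_fix_subagent()` as hypothetical placeholders for the skill's real machinery:

```python
def review_pass(diff: str) -> list[str]:
    """Placeholder for one automated review pass over the change."""
    return []

def dispatch_fix_subagent(issue: str) -> None:
    """Placeholder: spawn a subagent scoped to fixing a single issue."""

def review_until_clean(diff: str, max_passes: int = 4) -> bool:
    for _ in range(max_passes):
        issues = review_pass(diff)
        if not issues:
            return True               # a clean pass: the branch is PR-ready
        for issue in issues:
            dispatch_fix_subagent(issue)
    return False                      # pass budget exhausted; surface remaining issues
```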
exec
Execute plan tasks sequentially using subagents. Use when user says 'exec', 'execute plan', 'run plan', or wants to implement a plan file task by task with isolated subagents.
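Assuming the plan file uses markdown checkboxes (an assumption, not the skill's documented format) and a hypothetical `run_subagent()` dispatcher, the loop is roughly:

```python
from pathlib import Path

def run_subagent(task: str) -> None:
    """Placeholder: dispatch one task to a fresh, isolated subagent."""

def exec_plan(plan_path: str) -> None:
    for line in Path(plan_path).read_text().splitlines():
        stripped = line.strip()
        if stripped.startswith("- [ ]"):                   # pending task
            task = stripped.removeprefix("- [ ]").strip()
            run_subagent(task)                             # in order, one at a time
```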
teambuilder-apply-dispatch-loop
Dispatch pending tasks from an OpenSpec change's tasks.md to the right Teambuilder persona in fresh subagents. Use from `/opsx:apply` (or `openspec-apply-change`) when Teambuilder personas are present in `.claude/agents/`.
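A sketch of the routing step, assuming pending tasks are unchecked markdown items and personas are tagged inline; both conventions are guesses, not the skill's real format:

```python
import re
from pathlib import Path

def dispatch(persona: str, task: str) -> None:
    """Placeholder: spawn a fresh subagent running the named Teambuilder persona."""

def dispatch_pending(tasks_md: str, default_persona: str = "generalist") -> None:
    for line in Path(tasks_md).read_text().splitlines():
        stripped = line.strip()
        if not stripped.startswith("- [ ]"):
            continue                                   # skip done and non-task lines
        task = stripped.removeprefix("- [ ]").strip()
        match = re.match(r"\[(\w+)\]\s*(.*)", task)    # e.g. "[backend] add API route"
        persona, body = match.groups() if match else (default_persona, task)
        dispatch(persona, body)
```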
Hardcoded Secrets in Public Code Detection
You are performing a focused security assessment to find hardcoded sensitive data that is exposed in publicly accessible code. This skill uses a three-phase approach with subagents: **recon** (find all potential secret candidates), **batched verify** (confirm each is a real secret in publicly reachable code), …
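A toy version of the recon phase: sweep the tree for credential-shaped strings and queue candidates for batched verification. The patterns shown are a small illustrative subset, not the skill's detector set:

```python
import re
from pathlib import Path

CANDIDATE_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def recon(root: str) -> list[tuple[str, str, int]]:
    """Return (path, pattern_name, line_no) candidates for the verify phase."""
    candidates = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue                         # unreadable file; skip it
        for name, pattern in CANDIDATE_PATTERNS.items():
            for match in pattern.finditer(text):
                line_no = text.count("\n", 0, match.start()) + 1
                candidates.append((str(path), name, line_no))
    return candidates
```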
claude-docs-consultant
Consult official Claude Code documentation from docs.claude.com using selective fetching. Use this skill when working on Claude Code hooks, skills, subagents, MCP servers, or any Claude Code feature that requires referencing official documentation for accurate implementation. Fetches only the specific documentation needed rather than loading all docs upfront.
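Selective fetching reduces to pulling one page at a time. In the sketch below the docs.claude.com host comes from the description above, but the page path is an assumption to verify against the live site:

```python
import urllib.request

DOCS_BASE = "https://docs.claude.com"   # host named in the skill description

def fetch_doc(slug: str) -> str:
    """Fetch a single documentation page instead of loading the whole docs set."""
    with urllib.request.urlopen(f"{DOCS_BASE}{slug}") as resp:
        return resp.read().decode("utf-8")

# Illustrative usage; the path below is a guess, not a verified route:
# hooks_doc = fetch_doc("/en/docs/claude-code/hooks")
```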
cast-subagents
Use when suggesting exactly one Codex subagent lineup before work begins for multi-lane tasks: branch/PR review across bugs, security, tests, maintainability, docs, or regression risk; codepath tracing plus docs/API verification; option research with tradeoff synthesis; auth/codebase mapping before risk assessment or planning. Advisory only; no auto-spawn; approval required. Do not use for delegated subagent handoffs, trivial single-file fixes, wording-only edits, one fact lookup, unclear requests, or explicit opt-out.