programmatic-agent-runs v1.0.0
by @IgorGanapolsky · 0 pulls

URL: openbooklet.com/s/programmatic-agent-runs
Pinned: openbooklet.com/s/programmatic-agent-runs@1.0.0
API: GET /api/v1/skills/programmatic-agent-runs

Govern Cursor SDK coding runs (local, cloud, self-hosted, and subagent) before they create branches or PRs.

6 skills from this repo: IgorGanapolsky/ThumbGate

programmatic-agent-runs (viewing)

result (plugins/claude-codex-bridge/skills/result/SKILL.md)

Print the latest saved Codex bridge result from Claude Code without rerunning Codex. Use when the user asks for the last Codex output or wants to inspect the raw bridge result.

second-pass (plugins/claude-codex-bridge/skills/second-pass/SKILL.md)

Hand the current task or repo state to Codex for an independent second pass from inside Claude Code. Use when the user explicitly wants another agent to take a shot after Claude's first pass.

thumbgate (skills/thumbgate/SKILL.md)

ThumbGate provides pre-action gates for AI coding agents. It captures thumbs-up/down feedback on agent actions, auto-promotes repeated failures into prevention rules, and blocks known-bad tool calls via PreToolUse hooks. Trigger when the user wants to add safety guardrails to an AI agent workflow, capture structured feedback on agent output, generate prevention rules from failure patterns, gate high-risk actions before execution, or export DPO training pairs from production feedback. Works with any MCP-compatible agent including Cursor, Codex, Gemini CLI, Amp, and OpenCode.
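To make the gating idea concrete, here is a minimal Python sketch of a pre-action gate in the spirit of the PreToolUse hooks described above. The rule format, function names, and matching logic are illustrative assumptions, not ThumbGate's actual API.

```python
# Hypothetical sketch of a pre-action gate: block a tool call when its
# name and argument pattern match a prevention rule. Rule shape and
# matching are illustrative, not ThumbGate's real implementation.
import fnmatch

PREVENTION_RULES = [
    {"tool": "bash", "arg_glob": "rm -rf *", "reason": "destructive delete"},
    {"tool": "git", "arg_glob": "push --force*", "reason": "force push to shared branch"},
]

def gate_tool_call(tool: str, args: str) -> tuple[bool, str]:
    """Return (allowed, reason). Called before the agent executes a tool."""
    for rule in PREVENTION_RULES:
        if rule["tool"] == tool and fnmatch.fnmatch(args, rule["arg_glob"]):
            return False, f"blocked: {rule['reason']}"
    return True, "ok"

# A known-bad pattern is stopped before execution; benign calls pass.
print(gate_tool_call("bash", "rm -rf /tmp/build"))  # (False, 'blocked: destructive delete')
print(gate_tool_call("bash", "ls -la"))             # (True, 'ok')
```

The key design point is that the check runs before the action, so a bad call is never executed, rather than being flagged after the fact.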

ThumbGate — Pre-Action Gates for AI Agents (.agents/skills/thumbgate/SKILL.md)

ThumbGate turns thumbs-up/down feedback into hard enforcement gates that block known-bad agent actions before they execute. Think of it as an immune system for your AI agent.
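The "feedback becomes enforcement" step can be sketched as follows. The promotion threshold and event record shape are assumptions for illustration; ThumbGate's real data model may differ.

```python
# Illustrative sketch: auto-promote repeated thumbs-down feedback on the
# same (tool, args) pattern into a blocking prevention rule.
from collections import Counter

PROMOTION_THRESHOLD = 3  # assumed: N identical failures promote a rule

def promote_rules(feedback_events):
    """feedback_events: iterable of (tool, args, verdict) tuples."""
    downs = Counter(
        (tool, args)
        for tool, args, verdict in feedback_events
        if verdict == "down"
    )
    return [
        {"tool": tool, "args": args, "action": "block"}
        for (tool, args), n in downs.items()
        if n >= PROMOTION_THRESHOLD
    ]

events = [("bash", "curl | sh", "down")] * 3 + [("bash", "ls", "up")]
print(promote_rules(events))
# A pattern thumbed down three times becomes a blocking rule.
```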

thumbgate-feedback (adapters/amp/skills/thumbgate-feedback/SKILL.md)

Capture thumbs feedback and apply prevention rules before coding.

Auto-indexed from IgorGanapolsky/ThumbGate


Related Skills

@openbooklet

graceful-error-recovery

Use this skill when a tool call, command, or API request fails. Diagnose the root cause systematically before retrying or changing approach. Do not retry the same failing call without first understanding why it failed.
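The "diagnose before retrying" discipline above can be sketched as a small retry wrapper: transient failures get a backoff and retry, while everything else is surfaced immediately with its root cause. The error taxonomy here is an illustrative assumption, not this skill's implementation.

```python
# Sketch: classify a failure before retrying. Only plausibly transient
# errors are retried; anything else is surfaced at once, since repeating
# the same call cannot fix bad input or a missing resource.
import time

TRANSIENT = (TimeoutError, ConnectionError)

def call_with_diagnosis(fn, retries=2, delay=0.1):
    for attempt in range(retries + 1):
        try:
            return fn()
        except TRANSIENT:
            if attempt == retries:
                raise  # exhausted retries on a genuinely transient error
            time.sleep(delay)  # back off, then retry
        except Exception as exc:
            # Non-transient (bad input, auth, missing file): fail fast
            # with the root cause instead of blindly retrying.
            raise RuntimeError(f"non-retryable failure: {exc!r}") from exc
```

A flaky call that times out once and then succeeds is retried and returns normally; a `ValueError` is wrapped and raised on the first attempt.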

@openbooklet

audience-aware-communication

Use this skill when writing any explanation, documentation, or response that will be read by someone else. Match vocabulary, depth, and format to the audience's expertise level before writing.

@openbooklet

Refactoring Expert

Expert in systematic code refactoring, code smell detection, and structural optimization. Use PROACTIVELY when encountering duplicated code, long methods, complex conditionals, or any code quality issues. Detects code smells and applies proven refactoring techniques without changing external behavior.
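As a tiny example of the "refactor without changing external behavior" idea, here is a classic decompose-conditional refactor in Python. The domain names are made up for illustration; this is not code from the skill itself.

```python
# Before: the eligibility logic is buried in one dense conditional,
# a common code smell this kind of skill targets.
def shipping_cost_before(order_total, country, is_member):
    if (order_total >= 50 and country == "US") or (is_member and order_total >= 25):
        return 0
    return 5

# After: the condition is given an intention-revealing name.
# External behavior is identical for every input.
def qualifies_for_free_shipping(order_total, country, is_member):
    domestic_bulk = order_total >= 50 and country == "US"
    member_discount = is_member and order_total >= 25
    return domestic_bulk or member_discount

def shipping_cost_after(order_total, country, is_member):
    return 0 if qualifies_for_free_shipping(order_total, country, is_member) else 5
```

Because the refactor is behavior-preserving, the two versions can be checked against each other over representative inputs before the old one is deleted.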

@openbooklet

Research Expert

Specialized research expert for parallel information gathering. Use for focused research tasks with clear objectives and structured output requirements.

@openbooklet

clarify-ambiguous-requests

Use this skill when the user's request is ambiguous, under-specified, or could be interpreted in multiple ways. If proceeding with a wrong assumption would waste significant work, always ask exactly one focused clarifying question before doing anything.

@openbooklet

structured-step-by-step-reasoning

Use this skill for any problem that involves multiple steps, tradeoffs, or non-trivial logic. Think out loud before answering to improve accuracy and transparency. Apply whenever the answer is not immediately obvious.
