output-credentials-edit
openbooklet.com/s/output-credentials-edit @1.0.0
GET /api/v1/skills/output-credentials-edit
View and edit encrypted credentials in an Output.ai project. Use when adding secrets, updating API keys, verifying credential values, or retrieving a specific credential.
Initialize encrypted credentials for an Output.ai project. Use when setting up credentials for the first time, adding environment-specific credentials, or adding per-workflow credentials.
Debug Output SDK workflow issues. Use when user reports a workflow failing, erroring, hanging, producing wrong results, or asks to debug, troubleshoot, or investigate a workflow execution.
Use the Agent class for multi-step tool loops, conversation history, and reusable LLM agents. Use when building agents with skills, structured output, or stateful conversations.
Code style conventions for Output SDK workflow projects. Use when writing or reviewing any TypeScript/JavaScript code. Discovers the project's own linting rules first; falls back to Output SDK conventions when no linter is configured.
Generate workflow skeleton files using the Output SDK CLI. Use when starting a new workflow, scaffolding project structure, or understanding the generated file layout.
Store and reference encrypted secrets in Output SDK workflows using @outputai/credentials. Use when integrating API keys, database passwords, or third-party tokens.
Create offline evaluation tests for Output SDK workflows using @outputai/evals. Use when implementing test evaluators with verify(), creating dataset YAML files, building eval workflows, or running workflow tests via CLI.
Create evaluator functions in evaluators.ts for Output SDK workflows. Use when implementing quality assessment, validation logic, or content evaluation.
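An evaluator in this style is typically a pure function from workflow output to a score plus an explanation. The sketch below is illustrative only (it is not the @outputai/evals API; the `EvalResult` shape and function name are assumptions):

```typescript
// Illustrative evaluator shape -- not the @outputai/evals API.
// An evaluator is a pure function: workflow output in, score + reason out.
interface EvalResult {
  score: number; // 0..1
  reason: string;
}

function evaluateSummaryLength(summary: string, maxWords = 50): EvalResult {
  const words = summary.trim().split(/\s+/).filter(Boolean).length;
  const pass = words > 0 && words <= maxWords;
  return {
    score: pass ? 1 : 0,
    reason: pass
      ? `ok: ${words} words`
      : `expected 1-${maxWords} words, got ${words}`,
  };
}
```

Keeping evaluators pure (no I/O, no randomness) makes them trivial to unit-test and safe to rerun over stored datasets.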
Workflow folder structure conventions for Output SDK. Use when creating new workflows, organizing workflow files, or understanding the standard project layout.
Create shared HTTP clients in src/shared/clients/ for Output SDK workflows. Use when integrating external APIs, creating service wrappers, or standardizing HTTP operations.
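A shared client usually centralizes the base URL, headers, and error handling so every workflow step talks to an external API the same way. A minimal sketch, assuming nothing about the Output SDK itself (class and path names here are hypothetical):

```typescript
// Hypothetical shared client -- in a real project this would live in
// src/shared/clients/ and be exported. Centralizes base URL, headers,
// and error handling for all workflow steps.
class ExampleApiClient {
  constructor(
    private baseUrl: string,
    private fetchImpl: typeof fetch = fetch, // injectable for testing
  ) {}

  async getJson<T>(path: string): Promise<T> {
    const res = await this.fetchImpl(`${this.baseUrl}${path}`, {
      headers: { Accept: "application/json" },
    });
    // Throw on non-2xx so the workflow's retry policy can see the failure.
    if (!res.ok) throw new Error(`GET ${path} failed: ${res.status}`);
    return (await res.json()) as T;
  }
}
```

Injecting `fetchImpl` keeps the client testable without network access and lets callers swap in instrumented or mocked transports.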
Pick the right LLM model for an Output SDK prompt file. Use when writing a new .prompt file, reviewing a model choice, or upgrading a stale model. Walks through priority (reasoning/balance/speed/cost), provider selection, and a live lookup against the Vercel AI Gateway model index.
Create test scenario JSON files for Output SDK workflows. Use when creating test inputs, documenting expected behaviors, or setting up workflow testing.
Create .md skill files for Output framework's lazy-loaded instruction system. Use when adding skills to prompts, configuring skill loading, or debugging skill resolution.
Create step functions in steps.ts for Output SDK workflows. Use when implementing I/O operations, error handling, HTTP requests, or LLM calls.
Create types.ts files with Zod schemas for Output SDK workflows. Use when defining input/output schemas, creating type definitions, or fixing schema-related errors.
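In a real types.ts these would be Zod schemas (e.g. `z.object({...}).parse(input)`); the dependency-free sketch below shows the same validate-at-the-boundary idea, with all names assumed for illustration:

```typescript
// Sketch of validating step input at the boundary. A real types.ts would
// declare a Zod schema and call .parse(); this dependency-free version
// shows the same pattern: parse unknown data once, then work typed.
interface StepInput {
  userId: string;
  limit: number;
}

function parseStepInput(data: unknown): StepInput {
  const d = data as Record<string, unknown>;
  if (typeof d?.userId !== "string") throw new Error("userId must be a string");
  if (typeof d?.limit !== "number") throw new Error("limit must be a number");
  return { userId: d.userId, limit: d.limit };
}
```

Parsing once at the step boundary is what prevents "undefined property" errors from surfacing deep inside step logic.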
Calculate and display the cost of an Output SDK workflow execution run. Use when checking LLM token costs, API service costs, or total spend for a specific workflow run.
Fix missing schema definitions in Output SDK steps. Use when seeing type errors, undefined properties at step boundaries, validation failures, or when step inputs/outputs aren't being properly typed.
Fix non-determinism errors in Output SDK workflows. Use when seeing replay failures, inconsistent results between runs, "non-deterministic" error messages, or workflows behaving differently on retry.
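The general fix for replay failures, independent of any particular SDK, is to capture nondeterministic values once outside the replayed logic and pass them in as plain inputs. A minimal sketch (function names are illustrative, not the Output SDK API):

```typescript
// Non-deterministic: each replay sees a different timestamp and id,
// so the workflow behaves differently on retry.
function buildOrderBad(item: string) {
  return { item, id: Math.random().toString(36).slice(2), at: Date.now() };
}

// Deterministic: the caller (or a recorded step result) supplies the
// values, so replaying with the same inputs yields the same output.
function buildOrderGood(item: string, id: string, at: number) {
  return { item, id, at };
}
```

The same principle applies to anything that varies between runs: random ids, clocks, environment reads, and unordered collection iteration.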
Fix try-catch anti-pattern in Output SDK workflows. Use when retries aren't working, errors are being swallowed, seeing unexpected FatalError wrapping, or when step failures don't trigger retry policies.
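The anti-pattern this entry describes is generic to any retrying workflow engine: a catch block that swallows an error makes the step look successful, so the runtime's retry policy never fires. A hedged sketch (the step shapes below are illustrative, not the Output SDK API):

```typescript
// Anti-pattern: the catch swallows the failure, so a retrying runtime
// sees a successful step and never applies its retry policy.
async function fetchUserBad(fetchJson: (url: string) => Promise<unknown>) {
  try {
    return await fetchJson("https://api.example.com/user/42");
  } catch {
    return null; // failure hidden; retries never trigger
  }
}

// Preferred: let transient errors propagate so the runtime can retry.
// Catch only to translate an error you know is permanently unrecoverable.
async function fetchUserGood(fetchJson: (url: string) => Promise<unknown>) {
  return await fetchJson("https://api.example.com/user/42"); // throws on failure
}
```

The dependency-injected `fetchJson` parameter is only there to make the sketch testable; the point is which function lets the error escape.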
Auto-indexed from growthxai/output
Related Skills
graceful-error-recovery
Use this skill when a tool call, command, or API request fails. Diagnose the root cause systematically before retrying or changing approach. Do not retry the same failing call without first understanding why it failed.
audience-aware-communication
Use this skill when writing any explanation, documentation, or response that will be read by someone else. Match vocabulary, depth, and format to the audience's expertise level before writing.
Refactoring Expert
Expert in systematic code refactoring, code smell detection, and structural optimization. Use PROACTIVELY when encountering duplicated code, long methods, complex conditionals, or any code quality issues. Detects code smells and applies proven refactoring techniques without changing external behavior.
Research Expert
Specialized research expert for parallel information gathering. Use for focused research tasks with clear objectives and structured output requirements.
clarify-ambiguous-requests
Use this skill when the user's request is ambiguous, under-specified, or could be interpreted in multiple ways. If proceeding with a wrong assumption would waste significant work, always ask exactly one focused clarifying question before doing anything.
structured-step-by-step-reasoning
Use this skill for any problem that involves multiple steps, tradeoffs, or non-trivial logic. Think out loud before answering to improve accuracy and transparency. Apply whenever the answer is not immediately obvious.