VerseDB MCP
Comic book database — search comics, creators, characters, and manage your collection.
MiniMax Multi-Modal Toolkit
Generate voice, music, video, and image content via MiniMax APIs: the unified entry point for **MiniMax multimodal** use cases (audio + music + video + image). Includes voice cloning and voice design for custom voices, image generation with character reference, and FFmpeg-based media tools for audio/video.
A Christmas Carol
Semantic search through Dickens' A Christmas Carol by meaning, theme, or character.
agents
Build voice AI agents with ElevenLabs. Use when creating voice assistants, customer service bots, interactive voice characters, or any real-time voice conversation experience.
cw-brainstorming
Creative writing skill for capturing story brainstorming. Use when the user is exploring narrative ideas, discussing characters, planning episodes, or thinking through story possibilities. Creates minimal working notes that preserve creative freedom by recording only what was stated and marking sources.
azure-naming-research
Research Azure naming constraints and CAF abbreviations for a given resource type. Use when you need to look up the official CAF slug, naming rules (length, scope, valid characters), and derive validation/cleaning regex patterns for an Azure resource. Triggers on: CAF abbreviation lookup, Azure naming rules research, resource naming constraints.
Doubao TTS (Speech Synthesis)
Generate high-quality speech audio from text using Volcengine's Doubao TTS API. Supports short-form (real-time) and long-form (async, up to 100K characters) synthesis.
higgsfield-ugc-prompt
Generate complete, detailed Higgsfield AI Marketing Studio UGC video prompts for product advertising. Use when the user wants to create a UGC video ad prompt for Higgsfield, mentions Higgsfield, wants a marketing video prompt, or provides product/shop reference images and asks for a video prompt. Generates second-by-second prompts with full audio, camera, outfit, and character descriptions in English with Turkish dialogue.
Querying Indonesian Government Data
🇮🇩 STARTER_CHARACTER = 🇮🇩
meshy-3d-agent
Generate 3D models, textures, images, rig characters, animate them, and prepare for 3D printing using the Meshy AI API. Handles API key detection, task creation, polling, downloading, and full 3D print pipeline with slicer integration. Use when the user asks to create 3D models, convert text/images to 3D, texture models, rig or animate characters, 3D print a model, or interact with the Meshy API.
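The task creation, polling, and downloading flow this skill describes can be sketched generically. The function below is a hypothetical illustration, not the Meshy API itself: the real endpoints, status values, and response fields may differ, so the status lookup is passed in as a callable rather than hard-coded as an HTTP request.

```python
import time


def poll_until_done(get_status, interval=1.0, timeout=600.0):
    """Poll a task-status callable until the task finishes.

    `get_status` returns a dict with a "status" key; the status
    names used here ("SUCCEEDED"/"FAILED") are assumptions, not
    confirmed Meshy API values.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = get_status()
        if task["status"] == "SUCCEEDED":
            return task  # caller can now download the result asset
        if task["status"] == "FAILED":
            raise RuntimeError(task.get("error", "task failed"))
        time.sleep(interval)
    raise TimeoutError("task did not finish before the timeout")
```

In practice `get_status` would wrap an authenticated GET on the task's status URL; keeping it as a parameter makes the retry/timeout logic testable without network access.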
bible-merger
Merge multiple book analyses into a unified bible. Use after analyzing several books from a series to consolidate characters, style patterns, and structure into a single canonical reference.
character-sprite
Generate complete character sprite sheets for the Claude Office Visualizer agents. Creates all animation frames (idle, walking, typing, handoff, coffee) with consistent character design across all sheets. Uses iterative approval workflow and reference-based generation for consistency.
talking-character-pipeline
Complete workflow to create talking character videos with lipsync and captions. Use when creating AI character videos, talking avatars, narrated content, or social media character content with voiceover.
comfyui-skill-openclaw
Generate images using ComfyUI's powerful node-based workflow capabilities. Supports dynamically loading multiple pre-configured generation workflows from different instances with their corresponding parameter mappings, importing saved workflows in bulk from ComfyUI or local JSON files, converting natural language into parameters, driving local or remote ComfyUI services, tracking execution history with parameters and results, and returning the images to the target client. **Use this Skill when:** (1) The user requests to "generate an image", "draw a picture", or "execute a ComfyUI workflow". (2) The user has specific stylistic, character, or scene requirements for image generation. (3) The user asks you to import, register, sync, or configure saved ComfyUI workflows for later reuse.
seedance-20
Generate and direct cinematic AI videos with Seedance 2.0 (ByteDance/Dreamina/Jimeng). Covers text-to-video, image-to-video, video-to-video, and reference-to-video workflows with @Tag asset references, multi-character scenes, audio design, and post-processing. Use when making AI video, writing Seedance prompts, directing a scene, fixing generation errors, or building an AI short film, product ad, or music video.
2d-pixel-asset
Generate 2D pixel art game assets using Google Gemini via Chrome browser automation. Triggers when the user wants to create pixel art, game sprites, tilesets, game assets, or edit existing game art. Supports reference image uploads for style consistency, model selection (Flash/Pro), resolution control, automatic background removal (chroma key, ML-based via BiRefNet, or Adobe Express), and rasterization to exact pixel dimensions. Use for requests like "create a pixel art sword", "generate a 32x32 character sprite", "make a tileset", or "edit this sprite".
saa-agent
Enables AI agents to generate images using the Character Select Stand Alone App (SAA) image generation backend via a command-line interface.