swift-mlx-lm
MLX Swift LM - Run LLMs and VLMs on Apple Silicon using MLX. Covers local inference, streaming, wired memory coordination, tool calling, LoRA fine-tuning, embeddings, and model porting.
Codex MCP Go
MCP server wrapping Codex CLI for stdio-based tool calls.
narsil
Use narsil-mcp code intelligence tools effectively. Use when searching code, finding symbols, analyzing call graphs, scanning for security vulnerabilities, exploring dependencies, or performing static analysis on indexed repositories.
Doctree Mcp
BM25 search + tree navigation over markdown docs for AI agents. No embeddings, no LLM calls.
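Since the entry above names BM25 as its ranking function, here is a minimal sketch of the standard Okapi BM25 score. Everything below uses the textbook formula and its usual defaults (k1 = 1.5, b = 0.75); nothing is taken from Doctree Mcp's actual implementation.

```python
import math

def bm25_score(query_terms, doc_terms, doc_freq, n_docs, avg_len,
               k1=1.5, b=0.75):
    """Okapi BM25 score of one document for a bag-of-words query.
    doc_freq maps each term to the number of documents containing it."""
    score = 0.0
    dl = len(doc_terms)  # this document's length in terms
    for t in query_terms:
        tf = doc_terms.count(t)  # term frequency in this document
        if tf == 0:
            continue
        df = doc_freq.get(t, 0)
        # Smoothed inverse document frequency
        idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
        # Saturating term-frequency component with length normalization
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * dl / avg_len))
    return score
```

A document containing a query term scores above zero; one without it scores exactly zero, which is what lets a BM25 index rank markdown sections without any embeddings or LLM calls.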
langsmith-fetch
Debug LangChain and LangGraph agents by fetching execution traces from LangSmith Studio. Use when debugging agent behavior, investigating errors, analyzing tool calls, checking memory operations, or examining agent performance. Automatically fetches recent traces and analyzes execution patterns. Requires langsmith-fetch CLI installed.
Io.Github.GavMason/Ani Mcp
A smart MCP server for AniList that gets your anime/manga taste - not just API calls.
PromptSpeak Governance
Pre-execution governance for AI agents. Validates tool calls before they execute.
X402 Api
DeFi data API for AI agents — pay-per-call via x402/USDC on Base
learned-codex-proxy-tool-calls-parse
Parse ChatGPT Codex backend SSE for tool_calls (content_part.added, output_item.done, function_call_arguments.delta/done) and return OpenAI Chat Completions format. Use when implementing or fixing tool call parsing in a Codex reverse proxy (OpenAI-compatible /v1/chat/completions) that consumes Responses API-style SSE.
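The entry above names the SSE event types it handles; a proxy's core job is folding `function_call_arguments.delta` fragments and completed `output_item.done` items into an OpenAI-style `tool_calls` array. The event names come from the description, but the payload shapes below are illustrative assumptions, not the real Codex wire format.

```python
import json

def fold_tool_calls(events):
    """Fold Responses-API-style SSE events into a Chat Completions
    tool_calls list. Payload field names (call_id, delta, item) are
    assumed for illustration."""
    arguments_by_id = {}
    tool_calls = []
    for ev in events:
        if ev["type"] == "function_call_arguments.delta":
            # Argument JSON arrives as string fragments; concatenate per call id.
            cid = ev["call_id"]
            arguments_by_id[cid] = arguments_by_id.get(cid, "") + ev["delta"]
        elif (ev["type"] == "output_item.done"
              and ev["item"]["type"] == "function_call"):
            # A completed item carries the function name and call id.
            item = ev["item"]
            tool_calls.append({
                "id": item["call_id"],
                "type": "function",
                "function": {
                    "name": item["name"],
                    "arguments": arguments_by_id.get(item["call_id"], ""),
                },
            })
    return tool_calls
```

The key invariant is that argument deltas are buffered per `call_id` and only attached once the item is done, so interleaved calls do not corrupt each other's JSON.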
Io.Github.CryptoAPIs Io/Mcp Signer
MCP server for local transaction signing on EVM, UTXO, Tron, and XRP — no API calls needed
Io.Github.Sns45/Better Call Claude
Voice Calls, SMS, and WhatsApp for Claude Code with cross-channel context sharing.
mcpwall
iptables for MCP — blocks dangerous tool calls, scans for secrets, logs everything.
ATXP — Agent Payment & Wallet Infrastructure
Agent wallet, email, phone, and 41 paid tools: search, image/video/music gen, SMS, calls.
agent-estimation
Accurately estimate AI agent work effort using the agent's own operational units (tool-call rounds) instead of human time. Use when asked to estimate, scope, plan, or evaluate how long a coding task will take. Prevents the common failure mode where agents anchor to human developer timelines and massively overestimate. Outputs a structured breakdown with round counts, risk factors, and a final wallclock conversion.
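The round-based estimation idea above (count tool-call rounds, apply risk factors, convert to wallclock only at the end) can be sketched as below. The parameter names and the 30-seconds-per-round figure are assumptions for illustration, not values from the agent-estimation skill itself.

```python
def estimate_wallclock(rounds_by_phase, seconds_per_round=30.0,
                       risk_multiplier=1.0):
    """Estimate agent effort in its own operational units (tool-call
    rounds), then convert to wallclock time as the final step.
    rounds_by_phase maps a phase name to an estimated round count."""
    total_rounds = sum(rounds_by_phase.values())
    adjusted = total_rounds * risk_multiplier  # pad for identified risks
    return {
        "rounds": total_rounds,
        "adjusted_rounds": adjusted,
        "wallclock_minutes": adjusted * seconds_per_round / 60.0,
    }

# Hypothetical breakdown for a small coding task
est = estimate_wallclock({"explore": 6, "edit": 10, "test": 8},
                         risk_multiplier=1.25)
```

Keeping the estimate in rounds until the last line is what prevents anchoring to human developer timelines: the risk factor scales rounds, not hours.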
Ani Mcp
A smart MCP server for AniList that gets your anime/manga taste - not just API calls.
sidecar
Spawn conversations with other LLMs (Gemini, GPT, ChatGPT, Codex, o3, DeepSeek, Qwen, Grok, Mistral, etc.) and fold results back into your context. TRIGGER when: user asks to talk to, chat with, use, call, or spawn another LLM or model; user mentions Gemini, GPT, ChatGPT, Codex, o3, DeepSeek, Claude (as a sidecar target), Qwen, Grok, Mistral, or any non-current model by name; user asks to get a second opinion from another model; user wants parallel exploration with a different model; user says "sidecar", "fork", or "fold". CRITICAL RULES: (1) ALWAYS launch sidecar CLI commands with Bash tool's run_in_background: true. Never run sidecar start/resume/continue in the foreground. (2) The fold summary returns on stdout when the user clicks Fold in the GUI or the headless agent finishes. Use TaskOutput to read it when the background task completes. (3) Use --prompt for the start command (NOT --briefing). --briefing is only for subagent spawn. (4) NEVER use o3 or o3-pro unless the user explicitly asks for it by name. These models are extremely expensive ($10-60+ per request). If the user asks for o3, warn them about the cost before proceeding. Default to gemini for most tasks. (5) When the user asks to query MULTIPLE LLMs simultaneously (e.g., "ask Gemini AND ChatGPT", "compare Gemini vs GPT"), ALWAYS use --no-ui (headless) for all of them unless the user explicitly requests interactive. Opening multiple Electron windows at once is disruptive. Launch them all in parallel with run_in_background: true.
XRay-Vision
AI-powered codebase analysis — call graphs, security, dead code, complexity. 150+ tools.
Io.Github.Ai Aviate/Better Notion
Operate Notion with a single Markdown document — read, create, and update pages in one call.
add-atomic-chat-tool
Add Atomic Chat MCP server so the container agent can call local models served by the Atomic Chat desktop app via its OpenAI-compatible API.
Auto.dev
Automotive data for AI agents — via MCP tools, CLI commands, SDK methods, or direct API calls.
mcpc
Use mcpc CLI to interact with MCP servers - call tools, read resources, get prompts. Use this when working with Model Context Protocol servers, calling MCP tools, or accessing MCP resources programmatically.
Api Governance
API governance for AI agents. Detects breaking changes, scores blast radius, blocks unsafe calls.
Openapi Dynamic
Load OpenAPI 2.x/3.x specs and expose generic tools to discover and call multiple APIs.
Io.Github.BigJai/Codemunch Pro
Code indexing MCP: 15 tools, 10 languages, hybrid search, call graphs, O(1) retrieval.
Mcp Server
MCP server for Lightning Tools — pay-per-call AI tools via Bitcoin Lightning
coderlm
Primary tool for all code navigation and reading in supported languages (Rust, Python, TypeScript, JavaScript, Go, Java, Scala, SQL). Use instead of Read, Grep, and Glob for finding symbols, reading function implementations, tracing callers, discovering tests, and understanding execution paths. Provides tree-sitter-backed indexing that returns exact source code — full function bodies, call sites with line numbers, test locations — without loading entire files into context. Use for: finding functions by name or pattern, reading specific implementations, answering 'what calls X', 'where does this error come from', 'how does X work', tracing from entrypoint to outcome, and any codebase exploration. Use Read only for config files, markdown, and unsupported languages.
mcp2cli
Turn any MCP server, OpenAPI spec, or GraphQL endpoint into a CLI. Use this skill when the user wants to interact with an MCP server, OpenAPI/REST API, or GraphQL API via command line, discover available tools/endpoints, call API operations, or generate a new skill from an API. Triggers include "mcp2cli", "call this MCP server", "use this API", "list tools from", "create a skill for this API", "graphql", or any task involving MCP tool invocation, OpenAPI endpoint calls, or GraphQL queries without writing code.
Unity Bridge
Interact with Unity Editor via file-based IPC. Always read `params/{tool}.json` before calling a tool to get accurate parameters.
second-thought
Use when an idea, decision, design, plan, response, tool call, or action is about to be accepted or executed, especially during brainstorming, convergence, and pre-execution where shallow agreement or goal drift may occur.
pensieve
Project knowledge base and workflow router. knowledge/ caches previously explored file locations, module boundaries, and call chains for direct reuse; decisions/maxims are established architectural decisions and coding standards -- follow, don't re-debate; pipelines are reusable workflows; short-term/ holds new conclusions temporarily, promoted or deleted upon expiry. Use self-improve to capture new insights after completing tasks. Provides five tools: init, upgrade, migrate, doctor, self-improve.
AgentBuilders
Deploy full-stack web apps with database, file storage, auth, and RBAC via a single API call.
mcpx
Use this skill when interacting with MCP servers via CLI. Prefer mcpx over direct MCP SDK/protocol calls for tool discovery, schema inspection, invocation, and Unix-style output composition.
eino
Eino LLM/AI application development framework assistant (Golang). Use when the user needs to: (1) Build AI agents, (2) Create LLM applications, (3) Implement tool calling, (4) Build multi-agent systems, (5) Create workflows with Graph/Compose, (6) Implement streaming, (7) Human-in-the-loop patterns, or any other Eino framework development tasks. Triggers on phrases like "Eino 开发" (Eino development), "创建 Agent" (create an Agent), "LLM 应用" (LLM application), "AI Agent", "Eino 框架" (Eino framework), "构建智能体" (build an agent).
Mcp
Connect AI assistants to Warpmetrics — query runs, calls, costs, and outcomes.
Aegis — AI Agent Governance
Policy-based governance for AI agent tool calls. YAML policy, approval gates, audit logging.
Io.Github.Shin Bot Litellm/Litellm Mcp
Give AI agents access to 100+ LLMs. Call any model, compare outputs.
Agent Reach — Usage Guide
Upstream tools for 13+ platforms. Call them directly.
ide-index-mcp
Guide for using JetBrains IDE Index MCP tools for code navigation, refactoring, and analysis. TRIGGER: When ANY of these MCP tools are available in the current session: ide_find_references, ide_find_definition, ide_find_class, ide_find_file, ide_search_text, ide_diagnostics, ide_index_status, ide_sync_files, ide_refactor_rename, ide_move_file, ide_type_hierarchy, ide_call_hierarchy, ide_find_implementations, ide_find_symbol, ide_find_super_methods, ide_file_structure, ide_refactor_safe_delete, ide_reformat_code, ide_build_project, ide_read_file, ide_get_active_file, ide_open_file. Use when performing code navigation (find usages, go to definition, find class), code analysis (diagnostics, type hierarchy, call hierarchy), refactoring (rename, move, safe delete, reformat), or searching code (text search, symbol search, file search). Prefer IDE tools over grep/find/sed for ALL semantic code operations.
Deepseek
MCP server for DeepSeek AI with chat, reasoning, sessions, function calling, and cost tracking
mcp-gateway
Discover and call tools from configured MCP servers. Use when external capabilities are needed beyond built-in workbook tools.
Io.Github.Agentc22/X402engine Mcp
50+ pay-per-call APIs for AI agents — images, LLM, code, crypto, travel & more via x402
Code Pathfinder
Code intelligence MCP server: call graphs, type inference, and symbol search for Python/Go.
Io.Github.Xie38388/Looply
Looply AI call center MCP Server - manage calls, leads, contacts, and analytics.
ai-chat-debug
Debug the VM agent pipeline — Phone → Agent-Proxy → VM (mobile) — and the Desktop Claude Agent Bridge. Use when AI chat messages don't come through, AI stops unexpectedly, second query times out, EPIPE errors, session confusion, tool calls failing, or agent bridge crashes. Triggers: 'AI chat not working', 'message not coming through', 'AI stopped', 'agent bridge', 'EPIPE', 'nodeNotFound', 'agent chat timeout', 'second query', 'query not responding', 'VM agent', 'agent proxy'.
policy
Author MCP tool-call policy rules without hand-editing access.json
Kai
Semantic code intelligence — call graphs, dependencies, impact analysis, and test coverage.
agent-design-review
Designs, reviews, and iterates on LLM agents and agent-like workflows. Use when asked to "design an agent", "review this agent", "improve our system prompt", "optimize prompts for caching", "improve tool calling", "reduce hallucinated tool calls", "add structured outputs", "decide if this should be multi-agent", "reduce false positives", "tune agent thresholds", or "build evals for this agent". Covers architecture choice, cache-friendly prompt templates, tool and schema design, runtime loops, trust boundaries, and eval-driven iteration.
rig
Build LLM-powered applications with Rig, the Rust AI framework. Use when creating agents, RAG pipelines, tool-calling workflows, structured extraction, or streaming completions. Covers all providers with a unified API.
Io.Github.JustinBeckwith/Gongio Mcp
MCP server for Gong.io - access calls, transcripts, and users
Magpipe
Manage AI voice agents, calls, SMS, contacts, and phone numbers via Magpipe.