llm

Skills tagged with #llm

@VictoryInTech
MCP

TokenOracle

Hosted MCP server for LLM cost estimation, model comparison, and budget-aware routing.
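As a sketch of what cost estimation and budget-aware routing involve, the following is a hypothetical estimate-and-route helper; the model names, prices, and function names are illustrative assumptions, not TokenOracle's actual data or API:

```python
# Illustrative sketch only: model names and per-token prices are made
# up, not TokenOracle's real data or API.
PRICES = {  # USD per 1M tokens: (input, output)
    "model-small": (0.25, 1.25),
    "model-large": (3.00, 15.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

def route_cheapest(models: list[str], input_tokens: int, output_tokens: int) -> str:
    """Budget-aware routing: choose the cheapest candidate model."""
    return min(models, key=lambda m: estimate_cost(m, input_tokens, output_tokens))
```

A real server would pull live pricing and add quality constraints; the core arithmetic stays this simple.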

#mcp #llm
VictoryInTech/TokenOracle-MCP
19d ago
0
@study8677

agent-repo-init

One-click initialization of a multi-agent repository from the Antigravity template. Use this skill when users want to scaffold a new project quickly (`quick` mode) or with runtime defaults (`full` mode) including LLM provider profile, MCP toggle, swarm preference context, sandbox type, and optional git init.

study8677/antigravity-workspace-template+4 more
19d ago
1.0K · 0
@jamierpond

yapi — LLM Skill Guide

yapi is a CLI-first, git-friendly API client. You define requests in YAML files and run them from the terminal. No GUI, no accounts, no state — just files and a binary.

jamierpond/yapi
18d ago
109 · 0
@HZYAI
MCP

RAGScore

Generate QA datasets & evaluate RAG systems. Privacy-first, any LLM, local or cloud.

#mcp #github #ai #rag #llm
HZYAI/RagScore
19d ago
0
@ankitpal181
MCP

io.github.ankitpal181/toon-parse-mcp

MCP server that reduces LLM context by removing code comments and converting data formats to TOON
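Comment stripping of this kind can be illustrated for Python source with the standard tokenize module; this sketch shows the general technique only, not toon-parse-mcp's implementation (which also converts data formats to TOON):

```python
import io
import tokenize

def strip_comments(source: str) -> str:
    """Drop COMMENT tokens and rebuild the source: a rough illustration
    of shrinking code before it enters an LLM context window."""
    tokens = [
        (tok.type, tok.string)
        for tok in tokenize.generate_tokens(io.StringIO(source).readline)
        if tok.type != tokenize.COMMENT
    ]
    # 2-tuples put untokenize in compatibility mode; spacing changes,
    # but the result is still valid Python.
    return tokenize.untokenize(tokens)
```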

#mcp #github #llm
ankitpal181/toon-parse-mcp
19d ago
0
@K-Dense-AI

Hugging Science

Hugging Science is a curated, LLM-friendly index of scientific datasets, models, blog posts, and interactive demos for ML researchers. Use it when a scientific ML question lands in front of you — it's much higher signal than generic search and the entries are pre-filtered for quality and openness.

K-Dense-AI/scientific-agent-skills
5d ago
20.0K · 0
@prism-php

developing-with-prism

Guide for developing with Prism, a Laravel package for integrating LLMs. Use when working with Prism features including text generation, structured output, embeddings, image generation, audio processing, streaming, tools/function calling, or any LLM provider integration (OpenAI, Anthropic, Gemini, Mistral, Groq, xAI, DeepSeek, OpenRouter, Ollama, VoyageAI, ElevenLabs). Activate for any Prism-related development task.

prism-php/prism
18d ago
2.3K · 0
@aplaceforallmystuff
MCP

Mcp Pihole

Pi-hole v6 MCP server: manage DNS blocking, stats, and whitelists/blacklists.

#mcp #github #llm
aplaceforallmystuff/mcp-pihole
19d ago
0
@alonw0

llm-docs-optimizer

Optimize documentation for AI coding assistants and LLMs. Improves docs for Claude, Copilot, and other AI tools through c7score optimization, llms.txt generation, question-driven restructuring, and automated quality scoring. Use when asked to improve, optimize, or enhance documentation for AI assistants, LLMs, c7score, Context7, or when creating llms.txt files. Also use for documentation quality analysis, README optimization, or ensuring docs follow best practices for LLM retrieval systems.

alonw0/llm-docs-optimizer
18d ago
52 · 0
@joesaby
MCP

Doctree Mcp

BM25 search + tree navigation over markdown docs for AI agents. No embeddings, no LLM calls.
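The BM25 ranking such a server relies on can be sketched in pure Python with no embeddings or model calls; this is a textbook Okapi BM25, not doctree-mcp's actual code:

```python
import math
from collections import Counter

def bm25_scores(query: str, docs: list[str], k1: float = 1.5, b: float = 0.75) -> list[float]:
    """Rank documents for a query with Okapi BM25: pure keyword
    statistics, no embeddings and no LLM calls."""
    tokenized = [doc.lower().split() for doc in docs]
    N = len(tokenized)
    avgdl = sum(len(toks) for toks in tokenized) / N
    df = Counter()                          # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)                  # term frequency in this doc
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return scores
```

`bm25_scores("bm25 markdown", docs)` returns one score per document; the highest-scoring document is the best match.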

#mcp #github #ai #search #llm
joesaby/doctree-mcp
19d ago
0
@Astro-Han

karpathy-llm-wiki

Use when building or maintaining a personal LLM-powered knowledge base. Triggers: ingesting sources into a wiki, querying wiki knowledge, linting wiki quality, 'add to wiki', 'what do I know about', or any mention of 'LLM wiki' or 'Karpathy wiki'.

Astro-Han/karpathy-llm-wiki
19d ago
247 · 0
@mcp-registry
MCP

Mcp Server

AI agent tools: web search, browser, 400+ LLMs, image gen, TTS, phone verify. Pay-per-use.

#mcp #github #api #ai #search #browser
19d ago
0
@cybertronai

anti-slop-guide

Use when drafting, editing, or reviewing any prose to detect and remove AI writing patterns including overused vocabulary (delve, tapestry, landscape), formulaic structures (binary contrasts, rule of three), throat-clearing openers, business jargon, and other LLM tells

cybertronai/SutroYaro+5 more
18d ago
6 · 0
@MetriLLM
MCP

io.github.metrillm/metrillm

Benchmark local LLM models — speed, quality & hardware fitness verdict from any MCP client

#mcp #github #llm
MetriLLM/metrillm
19d ago
0
@Arize-ai

phoenix-cli

Debug LLM applications using the Phoenix CLI. Fetch traces, analyze errors, review experiments, inspect datasets, and query the GraphQL API. Use when debugging AI/LLM applications, analyzing trace data, working with Phoenix observability, or investigating LLM performance issues.

Arize-ai/openinference+1 more
18d ago
885 · 0
@shubhamekapure
MCP

Social Search Mcp

Deep social media search for LLMs: Facebook, Reddit, LinkedIn, Instagram & more.

#mcp #github #search #llm
shubhamekapure/social-search-mcp
19d ago
0
@selvage-lab
MCP

io.github.selvage-lab/selvage

An LLM-based code review MCP server with AST-powered smart context extraction

#mcp #github #llm
selvage-lab/selvage
19d ago
0
@cerebrixos-org
MCP

Tuning Engines

Domain-specific LLM fine-tuning — sovereign models trained on your data, zero infrastructure.

#mcp #github #ai #llm
cerebrixos-org/tuning-engines-cli
19d ago
0
@martinopiaggi

Video Summarizer

Transcribe and summarize videos from YouTube, local files, Google Drive, and Dropbox using any OpenAI-compatible LLM provider via the CLI.

martinopiaggi/summarize
18d ago
151 · 0
@alo-exp

/ai-llm-safety — AI/LLM Safety Design Enforcement

Every system that involves LLM agents, tool use, or prompt construction MUST treat AI safety as a first-class constraint. Prompt injection is the SQL injection of the AI era — and it's harder to fix after deployment.

alo-exp/silver-bullet+46 more
11d ago
5 · 0
@TheSandemon
MCP

Kaito Query Service

AI LLM with Gemini, MiniMax, Replicate, OpenRouter. Vision, search, code review. USDC on Base.

#mcp #github #ai #search #llm
TheSandemon/sand-gallery
19d ago
0
@daxaur

AI / LLM Tools

daxaur/openpaw+31 more
18d ago
78 · 0
@brainqub3

rlm

Run a Recursive Language Model-style loop for long-context tasks. Uses a persistent local Python REPL and an rlm-subcall subagent as the sub-LLM (llm_query).
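A minimal sketch of such a loop, with llm_query stubbed in place of the rlm-subcall subagent, might look like:

```python
def llm_query(prompt: str) -> str:
    """Stub sub-LLM; in the real skill this calls the rlm-subcall subagent."""
    return f"summary of {len(prompt)} chars"

def rlm_loop(long_context: str, chunk_size: int = 1000) -> str:
    """Root loop: hold state in a persistent namespace, delegate each
    chunk to the sub-LLM, then ask it to combine the partial results."""
    env = {"notes": []}                      # persistent REPL-style state
    for start in range(0, len(long_context), chunk_size):
        chunk = long_context[start:start + chunk_size]
        env["notes"].append(llm_query(chunk))
    return llm_query("combine: " + " | ".join(env["notes"]))
```

The point of the pattern is that the root model never holds the full context at once; only chunk-sized pieces and accumulated notes cross the sub-LLM boundary.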

brainqub3/claude_code_RLM
18d ago
364 · 0
@can1357

semantic-compression

Aggressively remove grammatical scaffolding LLMs reconstruct while preserving meaning-carrying content. Output may be fragments. Use when compressing text for prompts, reducing token count, preparing context for LLM input, or making documentation more token-efficient. Applies LLM-aware compression rules that delete predictable grammar while preserving semantics.
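A drastically simplified illustration of the idea (the skill's real rules are far richer and LLM-aware) is a filter that drops grammatical scaffolding words:

```python
import re

# Function words an LLM can usually reconstruct from context. This fixed
# list is only an illustration of the compression idea.
SCAFFOLDING = {
    "the", "a", "an", "is", "are", "was", "were", "that", "which",
    "of", "to", "in", "it", "this", "be", "been", "by", "for",
}

def compress(text: str) -> str:
    """Drop scaffolding words, keeping meaning-carrying tokens."""
    words = re.findall(r"\S+", text)
    kept = [w for w in words if w.lower().strip(".,;:!?") not in SCAFFOLDING]
    return " ".join(kept)
```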

can1357/oh-my-pi+1 more
18d ago
2.0K · 0
@GreenSheep01201

design-taste-frontend

Senior UI/UX Engineer. Architect digital interfaces overriding default LLM biases. Enforces metric-based rules, strict component architecture, CSS hardware acceleration, and balanced design engineering.

GreenSheep01201/claw-empire
18d ago
706 · 0
@GraflowAI

graflow-workflow

Create Python workflow pipelines using Graflow with a structured plan-implement-review process. Use when building task graphs, parallel pipelines, LLM workflows, or any Graflow-based automation. Triggers on requests for "workflow", "pipeline", "task graph", "Graflow", or when user wants to build an automated data/AI pipeline.

GraflowAI/graflow
18d ago
36 · 0
@spences10
MCP

io.github.spences10/mcp-turso-cloud

MCP server for integrating Turso with LLMs

#mcp #github #llm
spences10/mcp-turso-cloud
19d ago
0
@alex-feel
MCP

io.github.alex-feel/mcp-context-server

An MCP server that provides persistent multimodal context storage for LLM agents.

#mcp #github #rag #llm
alex-feel/mcp-context-server
19d ago
0
@echology-io
MCP

Decompose

Decompose text into classified semantic units. Authority, risk, attention. No LLM.

#mcp #github #llm
echology-io/decompose
19d ago
0
@jonradoff
MCP

LLM Optimizer

AI brand visibility analytics: visibility scores, optimizations, video, Reddit, and search rankings.

#mcp #ai #search #llm
jonradoff/llmopt
19d ago
0
@Ricky610329
MCP

io.github.ricky610329/mup

MCP server that turns HTML MUP panels into interactive UI tools for LLMs.

#mcp #github #llm
Ricky610329/mup
19d ago
0
@AMD-AGI

magpie

Performs GPU kernel correctness and performance evaluation and LLM inference benchmarking with Magpie. Analyzes single or multiple kernels (HIP/CUDA/PyTorch), compares kernel implementations, runs vLLM/SGLang benchmarks with profiling and TraceLens, and runs gap analysis on torch traces. Creates kernel config YAMLs, discovers kernels in a project, and queries GPU specs. Use when the user mentions Magpie, kernel analyze or compare, HIP/CUDA kernel evaluation, vLLM/SGLang benchmark, gap analysis, TraceLens, creating kernel configs, or discovering GPU kernels.

AMD-AGI/Magpie
18d ago
45 · 0
@viktorbezdek

agent-project-development

This skill should be used when the user asks to "start an LLM project", "design batch pipeline", "evaluate task-model fit", "structure agent project", or mentions pipeline architecture, agent-assisted development, cost estimation, or choosing between LLM and traditional approaches. NOT for evaluating agent quality or building evaluation rubrics (use agent-evaluation), NOT for multi-agent coordination or agent handoffs (use multi-agent-patterns).

viktorbezdek/skillstack+48 more
3d ago
5 · 0
@kc23go
MCP

io.github.kc23go/anybrowse

Converts any URL to clean, LLM-ready Markdown using real Chrome browsers

#mcp #github #browser #llm
kc23go/anybrowse
19d ago
0
@greyhaven-ai

autocontext

Iterative strategy generation and evaluation system. Use when the user wants to evaluate agent output quality, run improvement loops, queue tasks for background evaluation, check run status, or discover available scenarios. Provides LLM-based judging with rubric-driven scoring.

greyhaven-ai/autocontext+1 more
19d ago
664 · 0
@umitkavala
MCP

Mindpm

Persistent project memory for LLMs via SQLite. Never re-explain your project again.

#mcp #github #ai #memory #llm #sqlite
umitkavala/mindpm
19d ago
0
@ABTdomain
MCP

DomainKits

Domain intelligence platform that turns your LLM into a professional domain consultant.

#mcp #github #ai #llm
ABTdomain/domainkits-mcp
19d ago
0
@gdsfactory

gdsfactory Component Designer Skill

This skill lets an LLM agent **generate**, **visualize**, and **iteratively modify** photonic-IC components using the [gdsfactory](https://github.com/gdsfactory/gdsfactory) Python library.

gdsfactory/gdsfactory
18d ago
880 · 0
@gersonfreire
MCP

Simpliq Server

Simpliq Data Proxy MCP Server - Connects LLMs to SQL Databases via Semantic Mapping

#mcp #github #llm
gersonfreire/simpliq
19d ago
0
@SynaLinks

synalinks

Build neuro-symbolic LLM applications with Synalinks framework. Use when working with DataModel, Program, Generator, Module, training LLM pipelines, in-context learning, structured output, JSON operators, Branch/Decision control flow, FunctionCallingAgent, RAG/KAG, or Keras-like LLM workflows.

SynaLinks/synalinks-skills
18d ago
899 · 0
@mcp-registry
MCP

Stockfish MCP

A Stockfish MCP server that lets an LLM play chess against Stockfish

#mcp #github #ai #llm
19d ago
0
@securityscan-api
MCP

SecurityScan

Scan GitHub-hosted AI skills for vulnerabilities: prompt injection, malware, OWASP LLM Top 10.

#mcp #github #api #ai #llm
securityscan-api/securityscan-api
19d ago
0
@sudarshan-koirala

llm-resources

sudarshan-koirala/llm-resources
18d ago
70 · 0
@khromov
MCP

garden.stanislav.svelte-llm/svelte-llm-mcp

An MCP server that provides access to Svelte 5 and SvelteKit documentation

#mcp #llm
khromov/svelte-llm-mcp
19d ago
0
@PyJudge
MCP

io.github.pyjudge/pdf4vllm

PDF reader for vision LLMs. Auto-detects text corruption and switches to image mode.
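The corruption auto-detection can be sketched as a simple heuristic; the threshold and glyph checks below are illustrative guesses, not pdf4vllm's actual logic:

```python
def looks_corrupted(text: str, threshold: float = 0.2) -> bool:
    """Guess whether extracted PDF text is unusable: too many
    replacement glyphs or control characters means fall back to
    rendering pages as images for a vision model."""
    if not text.strip():
        return True                          # empty extraction: use image mode
    bad = sum(
        1 for ch in text
        if ch == "\ufffd" or (ord(ch) < 32 and ch not in "\n\r\t")
    )
    return bad / len(text) > threshold
```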

#mcp #github #llm
PyJudge/pdf4vllm-mcp
19d ago
0
@truera

trulens-evaluation-setup

Configure feedback functions and selectors for TruLens evaluations

#trulens #llm #evaluation #feedback #selectors
truera/trulens+4 more
18d ago
3.2K · 0
@jrenaldi79

sidecar

Spawn conversations with other LLMs (Gemini, GPT, ChatGPT, Codex, o3, DeepSeek, Qwen, Grok, Mistral, etc.) and fold results back into your context.

TRIGGER when: the user asks to talk to, chat with, use, call, or spawn another LLM or model; mentions Gemini, GPT, ChatGPT, Codex, o3, DeepSeek, Claude (as a sidecar target), Qwen, Grok, Mistral, or any non-current model by name; asks for a second opinion from another model; wants parallel exploration with a different model; or says "sidecar", "fork", or "fold".

CRITICAL RULES:
(1) ALWAYS launch sidecar CLI commands with the Bash tool's run_in_background: true. Never run sidecar start/resume/continue in the foreground.
(2) The fold summary returns on stdout when the user clicks Fold in the GUI or the headless agent finishes. Use TaskOutput to read it when the background task completes.
(3) Use --prompt for the start command (NOT --briefing). --briefing is only for subagent spawn.
(4) NEVER use o3 or o3-pro unless the user explicitly asks for it by name. These models are extremely expensive ($10-60+ per request). If the user asks for o3, warn them about the cost before proceeding. Default to gemini for most tasks.
(5) When the user asks to query MULTIPLE LLMs simultaneously (e.g., "ask Gemini AND ChatGPT", "compare Gemini vs GPT"), ALWAYS use --no-ui (headless) for all of them unless the user explicitly requests interactive. Opening multiple Electron windows at once is disruptive. Launch them all in parallel with run_in_background: true.

jrenaldi79/sidecar
18d ago
8 · 0
@scispot-repo
MCP

Mcp

Turn any LLM into your lab assistant: search samples, track experiments, analyze data with AI.

#mcp #ai #search #llm
scispot-repo/scispot-mcp-server
19d ago
0
@muxedai

exploring-llm-traces

ABSOLUTE MUST for debugging and inspecting LLM/AI agent traces using PostHog's MCP tools. Use when the user pastes a trace URL (e.g. /llm-observability/traces/<id>), asks to debug a trace, figure out what went wrong, check whether an agent used a tool correctly, verify context/files were surfaced, inspect subagent behavior, investigate LLM decisions, or analyze token usage and costs.

muxedai/muxed+2 more
18d ago
14 · 0
@mcp-registry
MCP

io.github.othervibes/mcp-as-a-judge

MCP as a Judge: a behavioral MCP that strengthens AI coding assistants via explicit LLM evaluations

#mcp #github #ai #llm
19d ago
0