audio
Unity audio system: AudioMixer groups, snapshots, spatial audio, audio source pooling, per-platform compression.
nap
Context hygiene: compress, prune, archive .squad/ state
Entroly-Daemon: Self-Evolving Daemon. Compress 2M-token repos into a razor-sharp Principal Engineer's context. 95% fewer tokens; built for Cursor, Claude Code, Opus, Codex, GPT & Copilot.
semantic-compression
Aggressively removes grammatical scaffolding that LLMs can reconstruct, while preserving meaning-carrying content. Output may be fragments. Use when compressing text for prompts, reducing token count, preparing context for LLM input, or making documentation more token-efficient. Applies LLM-aware compression rules that delete predictable grammar while preserving semantics.
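The deletion rules this entry describes can be sketched in a few lines. The rule set below is a hypothetical illustration (articles, copulas, and filler phrases), not the skill's actual rule list:

```python
import re

# Hypothetical rule set: words and filler phrases an LLM can reliably
# reconstruct from context. Illustrative only, not the skill's real rules.
SCAFFOLDING = re.compile(
    r"\b(a|an|the|is|are|was|were|that|which|in order to|it should be noted that)\b",
    re.IGNORECASE,
)

def compress(text: str) -> str:
    """Delete predictable grammatical scaffolding; keep meaning-carrying words."""
    stripped = SCAFFOLDING.sub("", text)
    return re.sub(r"\s+", " ", stripped).strip()

before = "It should be noted that the cache is invalidated when the TTL expires."
print(compress(before))  # "cache invalidated when TTL expires."
```

The output is a fragment, as the description warns, but the content words ("cache", "invalidated", "TTL", "expires") survive intact.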
Slimcontext Mcp Server
MCP Server for SlimContext - AI chat history compression tools
memory-bank
Token-efficient persistent memory system for Claude Code that extends your session limits by 3-5x. Layered architecture with progressive loading, compact encoding, branch-aware context, smart compression, session diffing, conflict detection, session continuation protocol, and recovery mode. Activates at session start (if MEMORY.md exists), on "remember this", "pick up where we left off", "what were we doing", "wrap up", "save progress", "don't forget", "switch context", "hand off", "memory health", "save state", "continue where I left off", "context budget", "how much context left", or any session start on a project with existing memory files. This skill solves two problems at once: Claude forgetting everything between sessions, AND sessions hitting context limits too fast. It replaces thousands of wasted re-explanation tokens with a compact, structured memory load that gives Claude full project context in under 2,000 tokens.
fennec-image-compression
Use this skill when asked to compress, resize, or analyze images in Go using the Fennec library, or when modifying the Fennec codebase itself.
Io.Github.Plasmate Labs/Plasmate
Agent-native headless browser. HTML in, Semantic Object Model out. 10x token compression.
node-minify
Compress JavaScript, CSS, HTML, JSON, and image files using node-minify library. Use when: minifying/compressing assets, bundling JS/CSS files, optimizing images (WebP/AVIF), concatenating files, or when user mentions "node-minify", "@node-minify", "minification". Triggers: "minify", "compress JS/CSS", "bundle", "optimize images", "reduce file size".
caveman
Ultra-compressed communication mode. Slash token usage ~75% by speaking like caveman while keeping full technical accuracy. Use when user says "caveman mode", "talk like caveman", "use caveman", "less tokens", "be brief", or invokes /caveman. Also auto-triggers when token efficiency is requested.
libatbus-protocol-crypto
libatbus protocol transport, ECDH key exchange, encryption/compression algorithm negotiation, message pack/unpack, and access token authentication. Use when working with connection_context, handshake flow, cipher algorithms, compression, message framing, or writing crypto-related tests.
Optical Context MCP
Compress OCR-heavy PDFs into dense packed images so agents can work with long visual documents.
token-saver-config
Configure and diagnose token-saver compression settings. Use when the user asks about adjusting compression levels, checking processor status, debugging hook issues, or reviewing savings statistics.
th0th-memory
Mandatory rules for using th0th semantic search, compression, and memory tools. Prioritize th0th tools over native tools (Glob, Grep, Read) to explore and understand code. Triggers on tasks involving code search, context compression, storing decisions, or retrieving project knowledge.
Context Compressor
Strategies for compressing context to maximize token efficiency
octave-compression
Specialized workflow for transforming verbose natural language into semantic OCTAVE structures. REQUIRES octave-literacy to be loaded first
nreki-optimizer
AST-aware context firewall for AI coding agents. Compresses code, validates edits before disk writes, detects blast radius, and enables atomic multi-file refactoring.
Io.Github.Morous Dev/Engram Cc
Universal AI coding assistant memory — session handoff, SLM compression, and semantic retrieval.
ai-coding-workflow
Use when the user wants to work with Codex, Claude Code, or other AI coding agents more efficiently, especially to avoid long-session slowdown, split large tasks into bounded steps, define scope and non-goals, separate diagnosis from implementation, compress context between sessions, or turn a vague coding request into a tighter execution prompt. Also use when the user asks for an AI coding prompt/template, asks why AI coding gets slower later in a task, or wants a repeatable collaboration workflow for medium-to-large codebases.
Io.Github.Octid Io/Osmp
Agentic AI instruction encoding. 60%+ compression over JSON. Inference-free decode. Any channel.
awq-quantization
Activation-aware weight quantization for 4-bit LLM compression with 3x speedup and minimal accuracy loss. Use when deploying large models (7B-70B) on limited GPU memory, when you need faster inference than GPTQ with better accuracy preservation, or for instruction-tuned and multimodal models. MLSys 2024 Best Paper Award winner.
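The core idea behind AWQ's description above is that per-channel activation magnitudes tell you which weight channels matter, and scaling those channels up before quantization shrinks their relative rounding error. A minimal toy sketch (per-tensor round-to-nearest with a hand-picked scale; real AWQ quantizes group-wise and searches the scaling exponent over calibration data):

```python
import numpy as np

def quantize_4bit(w):
    """Symmetric 4-bit round-to-nearest over the whole tensor (toy version;
    AWQ itself quantizes group-wise and searches the scaling exponent)."""
    step = np.abs(w).max() / 7.0            # int4 symmetric levels: -7..7
    return np.round(w / step) * step

# Tiny layer y = x @ W with one *salient* input channel: channel 0 has small
# weights but large activations, so its quantization error dominates the output.
W = np.array([[0.05], [1.0]])               # [in_channels=2, out_features=1]
x = np.array([[100.0, 1.0]])                # channel 0 activations are 100x larger

# Plain RTN: 0.05 rounds all the way to 0 (step = 1/7 ~ 0.143).
y_err_rtn = np.abs(x @ quantize_4bit(W) - x @ W).item()

# Activation-aware scaling: multiply the salient channel up before quantizing,
# fold the inverse scale back afterwards. s here is hand-picked; AWQ derives
# it from per-channel activation statistics.
s = np.array([[10.0], [1.0]])
y_err_awq = np.abs(x @ (quantize_4bit(W * s) / s) - x @ W).item()

print(y_err_rtn, y_err_awq)   # salient-channel error shrinks by roughly s
```

Here RTN loses the salient weight entirely (error 5.0 at the output), while the scaled version keeps it representable and cuts the error by about the scaling factor.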
add-to-leaderboard
Use this skill when the user wants to add a new codec entry to the leaderboard, update leaderboard rankings, or mentions adding someone's compression results.
total-recall
The only memory skill that watches on its own. No database. No vectors. No manual saves. Just an LLM observer that compresses your conversations into prioritised notes, consolidates when they grow, and recovers anything missed. Five layers of redundancy, zero maintenance. ~$0.00/month (using free-tier models). While other memory skills ask you to remember to remember, this one just pays attention.
Token Optimizer Mcp
Intelligent token optimization achieving 95%+ reduction through caching, compression, and 80+ tools
LeanCTX v2.1.1 – Context Intelligence Engine + CCP
LeanCTX is a Rust binary that optimizes LLM context through 21 MCP tools, 90+ shell compression patterns, and tree-sitter AST parsing for 14 languages (TS/JS, Rust, Python, Go, Java, C, C++, Ruby, C#, Kotlin, Swift, PHP). It provides adaptive file reading, incremental deltas, intent detection, cross
Skim MCP Server
MCP server for Skim code transformation. Compresses code 60-95% for LLM context optimization.
Epublys
EPUB/PDF tools: merge, split, compress, convert, edit metadata, validate ebooks.
artifact-type-tailored-context
Compresses artifacts for judge evaluation. Reads a single raw artifact, applies tiered summarization within a token budget, and returns compacted content with metadata. Isolation via a forked context prevents pollution of the agent's context.
omniwire
Control your entire server mesh from chat. Execute commands, transfer files, manage Docker, sync configs, and monitor all your nodes (VPS, Raspberry Pi, laptop, desktop) through one unified interface. 30 MCP tools. Works on any architecture (x64, ARM, Apple Silicon). SSH2 with compression, encrypted config sync, 1Password secrets backend. Just say what you need and your agent runs it across every machine.
Io.Github.Base76 Research Lab/Token Compressor
Compress prompts 40-60% using local LLM + embedding validation. Preserves all conditionals.
Io.Github.Anarkitty1/Semantic Frame
Token-efficient semantic compression for numerical data. 95%+ token reduction.
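Semantic compression of numerical data, as the entry above describes, typically means replacing raw number sequences with a short description an LLM can reason over. A toy illustration of the idea (the function and its output format are invented for this sketch, not the Semantic Frame project's actual encoding):

```python
import statistics

def semantic_frame(values, name="series"):
    """Toy semantic compression for a numeric series: emit a short statistical
    summary instead of the raw numbers. Illustrative only; not the Semantic
    Frame project's real format."""
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    trend = "rising" if values[-1] > values[0] else "falling/flat"
    # Flag points more than 2 standard deviations from the mean as spikes.
    spikes = [i for i, v in enumerate(values) if sd and abs(v - mean) > 2 * sd]
    return (f"{name}: n={len(values)} mean={mean:.3g} sd={sd:.3g} "
            f"range=[{min(values):.3g},{max(values):.3g}] trend={trend} "
            f"spikes_at={spikes}")

data = [10.1, 10.3, 10.2, 55.0, 10.4, 10.6, 10.5, 10.7]
print(semantic_frame(data, "latency_ms"))  # one line instead of eight numbers
```

Eight floats become one compact line; for long series the token reduction grows with length, which is where headline figures like "95%+" come from.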