Skills

All Skills

compression

Skills tagged with #compression

@XeldarAlz

audio

Unity audio system — AudioMixer groups, snapshots, spatial audio, audio source pooling, compression per platform.

XeldarAlz/everything-claude-unity+3 more
15d ago
50
@bradygaster

Skill: nap

> Context hygiene — compress, prune, archive .squad/ state

bradygaster/squad
19d ago
8120
@juyterman1000

Entroly-Daemon: Self-Evolving Daemon. Compresses 2M-token repos into a razor-sharp Principal Engineer's context. 95% fewer tokens; built for Cursor, Claude Code, Opus, Codex, GPT & Copilot.

juyterman1000/entroly
19d ago
1520
@can1357

semantic-compression

Aggressively remove grammatical scaffolding LLMs reconstruct while preserving meaning-carrying content. Output may be fragments. Use when compressing text for prompts, reducing token count, preparing context for LLM input, or making documentation more token-efficient. Applies LLM-aware compression rules that delete predictable grammar while preserving semantics.
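The core move can be sketched in a few lines of Python: delete grammar words an LLM can reconstruct from context and collapse the whitespace. The word list below is an illustrative guess at the technique, not can1357's actual rule set.

```python
import re

# Grammar words an LLM can reliably reconstruct from context.
# This list is illustrative, not the skill's actual rules.
SCAFFOLDING = r"\b(the|a|an|is|are|was|were|be|been|that|which|of)\b"

def compress(text: str) -> str:
    """Strip predictable grammar words, then collapse whitespace."""
    stripped = re.sub(SCAFFOLDING, "", text, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", stripped).strip()

original = "The cache is a layer that sits in front of the database."
print(compress(original))  # "cache layer sits in front database."
```

The output may be fragments, as the description warns, but the meaning-carrying nouns and verbs survive.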

can1357/oh-my-pi+1 more
18d ago
2.0K
@agentailor
MCP

SlimContext MCP Server

MCP Server for SlimContext - AI chat history compression tools

mcp, github, ai
agentailor/slimcontext-mcp-server
19d ago
0
@Nagendhra-web

memory-bank

Token-efficient persistent memory system for Claude Code that extends your session limits by 3-5x. Layered architecture with progressive loading, compact encoding, branch-aware context, smart compression, session diffing, conflict detection, session continuation protocol, and recovery mode. Activates at session start (if MEMORY.md exists), on "remember this", "pick up where we left off", "what were we doing", "wrap up", "save progress", "don't forget", "switch context", "hand off", "memory health", "save state", "continue where I left off", "context budget", "how much context left", or any session start on a project with existing memory files. This skill solves two problems at once: Claude forgetting everything between sessions, AND sessions hitting context limits too fast. It replaces thousands of wasted re-explanation tokens with a compact, structured memory load that gives Claude full project context in under 2,000 tokens.
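Progressive, budget-aware loading can be sketched simply: walk the memory layers in priority order and skip anything that would bust the token budget. The layer names, contents, and the 4-chars-per-token heuristic below are hypothetical, not the skill's actual file layout.

```python
# Hypothetical layer files in priority order; illustrative only.
LAYERS = {
    "MEMORY.md": "Project: acme-api. Goal: ship v2 auth.",
    "decisions.md": "Chose JWT over server sessions for statelessness.",
    "session-log.md": "Refactored the token refresh path. " * 60,
}

def load_memory(budget_tokens: int = 2000) -> str:
    """Load layers in priority order, skipping any that bust the budget."""
    estimate = lambda s: len(s) // 4  # rough 4-chars-per-token heuristic
    parts, spent = [], 0
    for name, body in LAYERS.items():
        cost = estimate(body)
        if spent + cost > budget_tokens:
            parts.append(f"[{name} skipped: over budget]")
            continue
        parts.append(body)
        spent += cost
    return "\n".join(parts)
```

With a tight budget, only the high-priority layers load in full; the rest are marked skipped rather than silently truncated.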

memory, context, persistence, sessions, token-efficiency, branch-aware
Nagendhra-web/memory-bank
18d ago
230
@shamspias

fennec-image-compression

Use this skill when asked to compress, resize, or analyze images in Go using the Fennec library, or when modifying the Fennec codebase itself.

shamspias/fennec
19d ago
640
@plasmate-labs
MCP

io.github.plasmate-labs/plasmate

Agent-native headless browser. HTML in, Semantic Object Model out. 10x token compression.

mcp, github, browser
plasmate-labs/plasmate
19d ago
0
@srod

node-minify

Compress JavaScript, CSS, HTML, JSON, and image files using node-minify library. Use when: minifying/compressing assets, bundling JS/CSS files, optimizing images (WebP/AVIF), concatenating files, or when user mentions "node-minify", "@node-minify", "minification". Triggers: "minify", "compress JS/CSS", "bundle", "optimize images", "reduce file size".

srod/node-minify
18d ago
5160
@JuliusBrussee

caveman

Ultra-compressed communication mode. Slash token usage ~75% by speaking like caveman while keeping full technical accuracy. Use when user says "caveman mode", "talk like caveman", "use caveman", "less tokens", "be brief", or invokes /caveman. Also auto-triggers when token efficiency is requested.

JuliusBrussee/caveman+3 more
18d ago
70
@owent

libatbus-protocol-crypto

libatbus protocol transport, ECDH key exchange, encryption/compression algorithm negotiation, message pack/unpack, and access token authentication. Use when working with connection_context, handshake flow, cipher algorithms, compression, message framing, or writing crypto-related tests.
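The negotiation step described here follows a common handshake pattern: each peer advertises supported algorithms in preference order and the first mutual match wins. The sketch below shows that pattern only; the algorithm names are illustrative, not libatbus's actual identifiers.

```python
def negotiate(ours, theirs):
    """Pick our highest-preference algorithm the peer also supports."""
    their_set = set(theirs)
    for alg in ours:
        if alg in their_set:
            return alg
    return None  # no overlap: handshake fails or falls back

# Hypothetical handshake: pick a cipher and a compression codec.
cipher = negotiate(["xchacha20", "aes-256-gcm"], ["aes-256-gcm", "aes-128-gcm"])
codec = negotiate(["zstd", "lz4", "none"], ["lz4", "none"])
```

Putting the chooser's list first means the initiating peer's preference order decides ties, which is the usual convention in TLS-style negotiations.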

owent/libatbus
18d ago
2340
@ChrBoebel
MCP

Optical Context MCP

Compress OCR-heavy PDFs into dense packed images so agents can work with long visual documents.

mcp, github
ChrBoebel/optical-context-mcp
19d ago
0
@ppgranger

token-saver-config

Configure and diagnose token-saver compression settings. Use when the user asks about adjusting compression levels, checking processor status, debugging hook issues, or reviewing savings statistics.

ppgranger/token-saver
18d ago
880
@S1LV4

th0th-memory

Mandatory rules for using th0th semantic search, compression, and memory tools. Prioritize th0th tools over native tools (Glob, Grep, Read) to explore and understand code. Triggers on tasks involving code search, context compression, storing decisions, or retrieving project knowledge.

S1LV4/th0th
18d ago
1230
@toonight

Context Compressor

Strategies for compressing context to maximize token efficiency

toonight/get-shit-done-for-antigravity+7 more
18d ago
6170
@elevanaltd

octave-compression

Specialized workflow for transforming verbose natural language into semantic OCTAVE structures. Requires octave-literacy to be loaded first.

elevanaltd/octave-mcp+4 more
19d ago
410
@Ruso-0

nreki-optimizer

AST-aware context firewall for AI coding agents. Compresses code, validates edits before disk writes, detects blast radius, and enables atomic multi-file refactoring.

Ruso-0/Nreki
18d ago
90
@Morous-Dev
MCP

io.github.morous-dev/engram-cc

Universal AI coding assistant memory — session handoff, SLM compression, and semantic retrieval.

mcp, github, ai, memory
Morous-Dev/engram-context-continuum
19d ago
0
@JayCRL

ai-coding-workflow

Use when the user wants to work with Codex, Claude Code, or other AI coding agents more efficiently, especially to avoid long-session slowdown, split large tasks into bounded steps, define scope and non-goals, separate diagnosis from implementation, compress context between sessions, or turn a vague coding request into a tighter execution prompt. Also use when the user asks for an AI coding prompt/template, asks why AI coding gets slower later in a task, or wants a repeatable collaboration workflow for medium-to-large codebases.

JayCRL/MobileVC
18d ago
160
@octid-io
MCP

io.github.octid-io/osmp

Agentic AI instruction encoding. 60%+ compression over JSON. Inference-free decode. Any channel.
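The savings over JSON come largely from dropping repeated keys in favor of a fixed field order. The toy positional encoding below illustrates that idea only; it is not OSMP's actual wire format, and the field names are invented.

```python
import json

# Toy positional encoding: a fixed field order replaces JSON keys.
# Illustrative only; not OSMP's actual wire format.
FIELDS = ("action", "target", "priority")

def encode(instr: dict) -> str:
    return "|".join(str(instr[f]) for f in FIELDS)

def decode(line: str) -> dict:
    return dict(zip(FIELDS, line.split("|")))

instr = {"action": "fetch", "target": "logs/app.log", "priority": "high"}
ratio = len(encode(instr)) / len(json.dumps(instr))
print(round(ratio, 2))  # 0.35
```

Decoding needs no inference because the schema is agreed out of band, which is what "inference-free decode" means in practice.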

mcp, github, ai
octid-io/cloudless-sky
19d ago
0
@Orchestra-Research

awq-quantization

Activation-aware weight quantization for 4-bit LLM compression with 3x speedup and minimal accuracy loss. Use when deploying large models (7B-70B) on limited GPU memory, when you need faster inference than GPTQ with better accuracy preservation, or for instruction-tuned and multimodal models. MLSys 2024 Best Paper Award winner.
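The activation-aware idea can be shown in pure Python: scale salient weight channels up before 4-bit rounding and back down after, so channels that see large activations lose less precision. Real AWQ searches per-channel scales over calibration data; the saliency rule and numbers below are illustrative only.

```python
def quantize_4bit(values, step):
    """Symmetric 4-bit grid: integers in [-8, 7] times step."""
    return [max(-8, min(7, round(v / step))) * step for v in values]

def awq_like(weights, act_mags, step=0.1):
    """Scale salient channels before quantizing; rescale after."""
    scales = [2.0 if a > 1.0 else 1.0 for a in act_mags]  # toy saliency rule
    quantized = quantize_4bit([w * s for w, s in zip(weights, scales)], step)
    return [q / s for q, s in zip(quantized, scales)]

weights = [0.04, 0.52, -0.07]
acts = [5.0, 0.2, 0.1]  # channel 0 sees large activations
naive = quantize_4bit(weights, 0.1)
aware = awq_like(weights, acts, 0.1)
# channel 0 error drops from 0.04 (naive) to about 0.01 (activation-aware)
```

The small weight on the salient channel rounds to zero under naive quantization but survives the activation-aware pass, which is the intuition behind AWQ's accuracy preservation.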

Optimization, AWQ, Quantization, 4-Bit, Activation-Aware, Memory Optimization
Orchestra-Research/AI-research-SKILLs+46 more
18d ago
5.0K
@agavra

add-to-leaderboard

Use this skill when the user wants to add a new codec entry to the leaderboard, update leaderboard rankings, or mentions adding someone's compression results.

agavra/compression-golf
19d ago
530
@gavdalf

total-recall

The only memory skill that watches on its own. No database. No vectors. No manual saves. Just an LLM observer that compresses your conversations into prioritised notes, consolidates when they grow, and recovers anything missed. Five layers of redundancy, zero maintenance. ~$0.00/month (using free-tier models). While other memory skills ask you to remember to remember, this one just pays attention.

gavdalf/total-recall
19d ago
2030
@ooples
MCP

Token Optimizer MCP

Intelligent token optimization achieving 95%+ reduction through caching, compression, and 80+ tools
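The caching half of such a reduction can be sketched as a content-addressed store: send a payload once, then refer to it by hash on later turns. This is illustrative of the general idea only, not the token-optimizer-mcp implementation.

```python
import hashlib

# Toy content-addressed cache; illustrative only.
_cache = {}

def pack(content: str) -> str:
    """Return full content the first time, a short reference after."""
    digest = hashlib.sha256(content.encode()).hexdigest()[:12]
    if digest in _cache:
        return f"[cached:{digest}]"  # a handful of tokens instead of thousands
    _cache[digest] = content
    return content  # first occurrence: send in full

big_file = "def handler(req):\n    ...\n" * 500
first = pack(big_file)
second = pack(big_file)
```

Every repeat reference costs a fixed ~20 characters regardless of payload size, which is where headline figures like "95%+ reduction" can come from on repetitive sessions.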

mcp, github
ooples/token-optimizer-mcp
19d ago
0
@yvgude

LeanCTX v2.1.1 — Context Intelligence Engine + CCP

LeanCTX is a Rust binary that optimizes LLM context through 21 MCP tools, 90+ shell compression patterns, and tree-sitter AST parsing for 14 languages (TS/JS, Rust, Python, Go, Java, C, C++, Ruby, C#, Kotlin, Swift, PHP). It provides adaptive file reading, incremental deltas, intent detection, cross

yvgude/lean-ctx
18d ago
230
@mcp-registry
MCP

Skim MCP Server

MCP server for Skim code transformation. Compresses code 60-95% for LLM context optimization.
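One way such a code transformation can work is to keep signatures and docstrings while dropping function bodies. The sketch below uses Python's stdlib `ast` module to show a Skim-like pass; it is not the actual Skim tool.

```python
import ast

def skim(source: str) -> str:
    """Keep signatures and docstrings, replace bodies with `...`.
    A Skim-like transformation for illustration, not the real tool."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            doc = ast.get_docstring(node)
            node.body = [ast.Expr(ast.Constant(doc))] if doc else []
            node.body.append(ast.Expr(ast.Constant(...)))  # elided body marker
    return ast.unparse(tree)

code = '''
def transfer(src, dst, amount):
    """Move funds between accounts."""
    src.balance -= amount
    dst.balance += amount
    audit.log(src, dst, amount)
'''
print(skim(code))
```

The LLM still sees what each function is called, takes, and promises, which is usually enough context for navigation and planning at a fraction of the tokens.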

mcp, github, llm
19d ago
0
@mcp-registry
MCP

Epublys

EPUB/PDF tools: merge, split, compress, convert, edit metadata, validate ebooks.

mcp
19d ago
0
@closedloop-ai

artifact-type-tailored-context

Compresses artifacts for judge evaluation. Reads a single raw artifact, applies tiered summarization within a token budget, and returns compacted content with metadata. Isolation via a forked context prevents pollution of the agent's context.
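Tiered summarization under a budget can be sketched as: try each tier from most to least detailed and return the first that fits. The tier definitions below are illustrative stand-ins for real summarizers, and the token estimate is a rough heuristic.

```python
def tiers(artifact: str):
    first_line = artifact.splitlines()[0] if artifact else ""
    return [
        artifact,                        # tier 0: full raw content
        artifact[: len(artifact) // 4],  # tier 1: leading quarter
        first_line,                      # tier 2: headline only
    ]

def compact(artifact: str, budget_tokens: int) -> dict:
    """Return the most detailed tier that fits the budget, with metadata."""
    estimate = lambda s: len(s) // 4  # rough chars-per-token heuristic
    for level, body in enumerate(tiers(artifact)):
        if estimate(body) <= budget_tokens:
            return {"tier": level, "tokens": estimate(body), "content": body}
    return {"tier": None, "tokens": 0, "content": ""}
```

Returning the tier level and token count as metadata lets the judge know how much detail was sacrificed to fit the budget.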

closedloop-ai/claude-plugins+17 more
3d ago
760
@VoidChecksum

omniwire

Control your entire server mesh from chat. Execute commands, transfer files, manage Docker, sync configs, and monitor all your nodes — VPS, Raspberry Pi, laptop, desktop — through one unified interface. 30 MCP tools. Works on any architecture (x64, ARM, Apple Silicon). SSH2 with compression, encrypted config sync, 1Password secrets backend. Just say what you need and your agent runs it across every machine.

infrastructure, mesh, ssh, devops, servers, vps
VoidChecksum/omniwire
18d ago
60
@base76-research-lab
MCP

io.github.base76-research-lab/token-compressor

Compress prompts 40-60% using local LLM + embedding validation. Preserves all conditionals.

mcp, github, search, llm
base76-research-lab/token-compressor
19d ago
0
@Anarkitty1
MCP

io.github.anarkitty1/semantic-frame

Token-efficient semantic compression for numerical data. 95%+ token reduction.
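For numerical data, large reductions come from replacing a raw series with a compact descriptive frame. The sketch below shows the general idea; the field names and format are illustrative, not the semantic-frame server's actual schema.

```python
import statistics

def semantic_frame(values):
    """Replace a raw numeric series with a compact descriptive summary.
    Illustrative only; not semantic-frame's actual schema."""
    return (f"n={len(values)} min={min(values):g} max={max(values):g} "
            f"mean={statistics.fmean(values):g} "
            f"trend={'up' if values[-1] > values[0] else 'down/flat'}")

readings = [round(20 + i * 0.25, 2) for i in range(400)]  # 400 sensor readings
frame = semantic_frame(readings)
print(frame)
```

A 400-element dump of floats costs thousands of characters; the frame costs under a hundred while keeping what an agent usually needs: range, center, and direction.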

mcp, github
Anarkitty1/semantic-frame
19d ago
0