lazyllm-skill
LazyLLM framework for building multi-agent AI applications. Use when the task mentions LazyLLM or building AI programs for: (1) Flow orchestration - linear, branching, parallel, and loop workflows for complex data pipelines, (2) Model fine-tuning and acceleration - fine-tuning LLMs with LLaMA-Factory/Alpaca-LoRA/Collie and accelerating inference with vLLM/LMDeploy/LightLLM, (3) RAG systems - knowledge-based QA with document retrieval, vectorization, and generation, (4) Agent development - single- and multi-agent systems with tools, memory, planning, and web interfaces. Includes comprehensive code examples for all components.
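The flow-orchestration idea above (composing linear and parallel stages into one pipeline) can be sketched in plain Python. The `pipeline` and `parallel` helpers below are illustrative stand-ins only, not LazyLLM's actual API:

```python
# Plain-Python sketch of linear and parallel flows.
# These helpers are hypothetical; they are NOT LazyLLM's real interface.
def pipeline(*stages):
    """Linear flow: feed each stage's output into the next."""
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

def parallel(*branches):
    """Parallel flow: fan the same input out to every branch."""
    def run(x):
        return tuple(branch(x) for branch in branches)
    return run

clean = lambda s: s.strip().lower()
flow = pipeline(clean, parallel(len, lambda s: s.split()))
print(flow("  Hello LazyLLM  "))  # → (13, ['hello', 'lazyllm'])
```

The same composition pattern extends naturally to branching (choose one branch by a predicate) and loops (re-run a stage until a condition holds).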
wiki-manager
LLM-compiled knowledge base manager for Codex. Use it to initialize, ingest, compile, query, lint, research, and generate outputs from topic-scoped wikis. Activates when the user mentions wiki workflows, knowledge-base management, ingestion, compilation, querying, linting, research, or uses /wiki-style shorthand in a repo with .wiki/, ~/wiki/, or a configured hub path.
claude-memory-kit
Persistent memory system for Claude Code. Your agent remembers everything across sessions and projects. Two-layer architecture: hot cache (MEMORY.md) + knowledge wiki. Safety hooks prevent context loss. /close-day captures your day in one command. Zero external dependencies; runs on your existing subscription.
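The two-layer architecture above (hot cache backed by a knowledge wiki) can be sketched as a read-through lookup. This is a hypothetical illustration of the concept, not the kit's real file layout or code:

```python
# Hypothetical sketch of a two-layer memory lookup:
# check the hot cache first, fall back to the wiki, then promote.
class Memory:
    def __init__(self):
        self.hot = {}   # stands in for MEMORY.md entries (fast layer)
        self.wiki = {}  # stands in for the knowledge wiki (slow layer)

    def recall(self, key):
        if key in self.hot:           # fast path: hot-cache hit
            return self.hot[key]
        value = self.wiki.get(key)    # slow path: wiki lookup
        if value is not None:
            self.hot[key] = value     # promote for the next session
        return value

m = Memory()
m.wiki["project"] = "claude-memory-kit"
print(m.recall("project"))  # → claude-memory-kit
print("project" in m.hot)   # → True (promoted into the hot layer)
```

A command like /close-day would then correspond to flushing the day's hot entries back into the durable wiki layer.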