io.github.dan24ou-cpu/agent-signal
Collective intelligence for AI shopping agents — product intel, deals, and more
Uniprof
Universal CPU profiler designed for humans and AI agents
unraid
This skill should be used when the user mentions Unraid, asks to check server health, monitor array or disk status, list or restart Docker containers, start or stop VMs, read system logs, check parity status, view notifications, manage API keys, configure rclone remotes, check UPS or power status, get live CPU or memory data, force stop a VM, check disk temperatures, or perform any operation on an Unraid NAS server. Also use when the user needs to set up or configure Unraid MCP credentials.
Performance Co-Pilot
Query system performance metrics via MCP - CPU, memory, disk I/O, network, processes
party
Programmatic guide for the @cazala/party library: engine setup, modules, particles, and performance across CPU + WebGPU.
brev-cli
Manage GPU and CPU cloud instances with the Brev CLI for ML workloads and general compute. Use when users want to create instances, search for GPUs or CPUs, SSH into instances, open editors, copy files, port forward, manage organizations, or work with cloud compute. Supports fine-tuning, reinforcement learning, training, inference, batch processing, and other ML/AI workloads. Trigger keywords - brev, gpu, cpu, instance, create instance, ssh, vram, vcpu, A100, H100, cloud gpu, cloud cpu, remote machine, finetune, fine-tune, RL, RLHF, training, inference, deploy model, serve model, batch job.
Audio Transcription with Whisper
Transcribe audio files locally using faster-whisper (CPU, int8 quantization). Supports all common audio formats (wav, mp3, m4a, flac, ogg, webm).
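A minimal sketch of the kind of CPU-only transcription this skill wraps, using the faster-whisper `WhisperModel` API with int8 quantization. The model size and file path are illustrative assumptions, not values prescribed by the skill; the import is deferred so the helper below works without the package installed.

```python
def join_segments(texts):
    """Join per-segment texts into a single transcript string."""
    return " ".join(t.strip() for t in texts)

def transcribe(path: str, model_size: str = "small") -> str:
    # Lazy import: requires `pip install faster-whisper`
    from faster_whisper import WhisperModel

    # device="cpu" with int8 quantization keeps memory use low
    # and runs acceptably fast without a GPU
    model = WhisperModel(model_size, device="cpu", compute_type="int8")
    segments, _info = model.transcribe(path)  # segments is a generator
    return join_segments(seg.text for seg in segments)
```

Because `transcribe` streams segments lazily, the full transcript is only materialized once the generator is consumed at the join.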
io.github.jacobsd32-cpu/djd-agent-score
Reputation scoring for AI agent wallets on Base. Trust scores, fraud checks, x402.
llmfit-advisor
Detect local hardware (RAM, CPU, GPU/VRAM) and recommend the best-fit local LLM models with optimal quantization, speed estimates, and fit scoring.
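A hedged sketch of the kind of fit check such an advisor performs: detect total RAM, estimate a quantized model's memory footprint, and decide whether it fits with headroom. The quantization byte sizes, 20% runtime overhead, and 80% headroom factor are illustrative assumptions, not the skill's actual scoring formula; RAM detection via `os.sysconf` is Linux-only.

```python
import os

# Approximate bytes per parameter for common quantizations (assumed values)
QUANT_BYTES = {"f16": 2.0, "q8_0": 1.0, "q5_k_m": 0.625, "q4_k_m": 0.5}

def detected_ram_gb() -> float:
    # Linux-only: total physical memory via sysconf
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3

def model_footprint_gb(params_b: float, quant: str, overhead: float = 1.2) -> float:
    # Weights plus ~20% for KV cache and runtime buffers (assumed factor)
    return params_b * QUANT_BYTES[quant] * overhead

def fits(params_b: float, quant: str, ram_gb: float) -> bool:
    # Leave 20% of RAM free for the OS and other processes
    return model_footprint_gb(params_b, quant) <= ram_gb * 0.8
```

Under these assumptions, a 7B model at q4_k_m needs roughly 4.2 GB, so it fits on a 16 GB machine while a 70B model at the same quantization does not.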
codspeed-optimize
Autonomously optimize code for performance using CodSpeed benchmarks, flamegraph analysis, and iterative improvement. Use this skill whenever the user wants to make code faster, reduce CPU usage, optimize memory, improve throughput, find performance bottlenecks, or asks to 'optimize', 'speed up', 'make faster', 'reduce latency', 'improve performance', or points at a CodSpeed benchmark result wanting improvements. Also trigger when the user mentions a slow function, a regression, or wants to understand where time is spent in their code.
StressZero MCP - Burnout Risk Scoring
Burnout risk scoring across 3 dimensions (physical, emotional, effectiveness)