Skills tagged with #measures

@superhq-ai

blueprint-ui

Build landing pages and web UIs using a dark blueprint/wireframe aesthetic with sharp edges, connected sections, dashed outlines, measurement annotations, and technical typography. Use when creating marketing sites, landing pages, or product pages.

superhq-ai/shuru
18d ago
550 · 0
@nucliweb

webperf-core-web-vitals

Intelligent Core Web Vitals analysis with automated workflows and decision trees. Measures LCP, CLS, INP with guided debugging that automatically determines follow-up analysis based on results. Includes workflows for LCP deep dive (5 phases), CLS investigation (loading vs interaction), INP debugging (latency breakdown + attribution), and cross-skill integration with loading, interaction, and media skills. Use when the user asks about Core Web Vitals, LCP optimization, layout shifts, or interaction responsiveness. Compatible with Chrome DevTools MCP.

nucliweb/webperf-snippets+4 more
18d ago
1.4K · 0
@AgriciDaniel

skill-forge-benchmark

Benchmark Claude Code skill performance with variance analysis, tracking pass rate, execution time, and token usage across iterations. Runs multiple trials per eval for statistical reliability, aggregates results into benchmark.json, and generates comparison reports between skill versions. Use when user says "benchmark skill", "measure skill performance", "skill metrics", "compare skill versions", "skill performance", "track skill improvement", "skill regression test", or "skill A/B test".

AgriciDaniel/skill-forge+6 more
18d ago
42 · 0
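The multi-trial aggregation this entry describes can be sketched in Python. The `run_trials` helper, the `(passed, tokens)` eval signature, and the `benchmark.json` layout below are illustrative assumptions, not the skill's actual interface:

```python
import json
import statistics
import time

def run_trials(evaluate, n_trials=5):
    """Run one eval several times; aggregate pass rate, timing, and token use."""
    results = []
    for _ in range(n_trials):
        start = time.perf_counter()
        passed, tokens = evaluate()  # assumed: eval returns (bool, token count)
        results.append({
            "passed": passed,
            "seconds": time.perf_counter() - start,
            "tokens": tokens,
        })
    times = [r["seconds"] for r in results]
    summary = {
        "pass_rate": sum(r["passed"] for r in results) / n_trials,
        "mean_seconds": statistics.mean(times),
        "stdev_seconds": statistics.stdev(times) if n_trials > 1 else 0.0,
        "mean_tokens": statistics.mean(r["tokens"] for r in results),
    }
    # Persist the aggregate so two skill versions can be diffed later.
    with open("benchmark.json", "w") as f:
        json.dump(summary, f, indent=2)
    return summary
```

Per-version summaries produced this way can then be compared to generate the version-to-version reports the skill mentions.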
@PCIRCLE-AI

Agentic Orchestration (Experimental Working-Model Protocol)

> **Status — experimental, instrumented, validation in progress.** This skill is shipped to begin collecting evidence about whether a structured verifiability-router protocol changes Claude's behavior in ways that measurably help users. `memesh patterns` exposes a local counter so you can

PCIRCLE-AI/memesh-llm-memory+2 more
4d ago
11 · 0
@jsdelivr
MCP

Mcp

Interact with a global network measurement platform. Run network commands from any point in the world.

mcp
jsdelivr/globalping-mcp-server
19d ago
0
@microsoft

duroxide-code-coverage

Measure and improve code coverage in the Duroxide durable execution runtime. Use when asked about coverage, testing coverage, running llvm-cov, or improving test coverage percentages.

microsoft/duroxide
19d ago
66 · 0
@boringdata

bsl-model-builder

Build BSL semantic models with dimensions, measures, joins, and YAML config. Use for creating/modifying data models.

boringdata/boring-semantic-layer+2 more
18d ago
416 · 0
@K-Dense-AI

adaptyv

Cloud laboratory platform for automated protein testing and validation. Use when designing proteins and needing experimental validation including binding assays, expression testing, thermostability measurements, enzyme activity assays, or protein sequence optimization. Also use for submitting experiments via API, tracking experiment status, downloading results, optimizing protein sequences for better expression using computational tools (NetSolP, SoluProt, SolubleMPNN, ESM), or managing protein design workflows with wet-lab validation.

K-Dense-AI/claude-scientific-skills+153 more
18d ago
15.6K · 0
@MeasureSpace
MCP

Io.Github.MeasureSpace/Measure Space Mcp Server

MCP server for weather, climate, air quality, agriculture, pollen and geocoding from measurespace.io

mcp github ai
MeasureSpace/measure-space-mcp-server
19d ago
0
@RefoundAI

ai-evals

Help users create and run AI evaluations. Use when someone is building evals for LLM products, measuring model quality, creating test cases, designing rubrics, or trying to systematically measure AI output quality.

RefoundAI/lenny-skills+73 more
18d ago
423 · 0
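The rubric-based quality measurement this entry describes can be sketched in Python; the rubric criteria and function names here are hypothetical examples, not part of the skill:

```python
def score_output(output: str, rubric: dict) -> float:
    """Score one model output against a rubric of named checks (0..1)."""
    checks = [check(output) for check in rubric.values()]
    return sum(checks) / len(checks)

def evaluate(outputs, rubric) -> float:
    """Mean rubric score across a set of outputs."""
    scores = [score_output(o, rubric) for o in outputs]
    return sum(scores) / len(scores)

# Toy rubric: each criterion is a predicate over the output text.
rubric = {
    "mentions_price": lambda o: "price" in o.lower(),
    "under_50_words": lambda o: len(o.split()) <= 50,
}
```

Running `evaluate` over a fixed set of test cases turns "does the model seem good?" into a number that can be tracked across prompt or model changes.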
@samber

golang-benchmark

Golang benchmarking, profiling, and performance measurement. Use when writing, running, or comparing Go benchmarks, profiling hot paths with pprof, interpreting CPU/memory/trace profiles, analyzing results with benchstat, setting up CI benchmark regression detection, or investigating production performance with Prometheus runtime metrics. Also use when the developer needs deep analysis on a specific performance indicator - this skill provides the measurement methodology, while golang-performance provides the optimization patterns.

samber/cc-skills-golang+30 more
8d ago
12 · 0
@Turbo-Puffin
MCP

Measure.events Analytics

Privacy-first web analytics. Query pageviews, referrers, trends, and AI insights.

mcp github ai web
Turbo-Puffin/measure-mcp-server
19d ago
0
@cyberfabric

cypilot

Invoke when user asks to do something with Cypilot, or wants to analyze/validate artifacts, or create/generate/implement anything using Cypilot workflows, or plan phased execution. Core capabilities: workflow routing (plan/analyze/generate/auto-config); deterministic validation (structure, cross-refs, traceability, TOC); code↔artifact traceability with @cpt-* markers; spec coverage measurement; ID search/navigation; init/bootstrap; adapter + registry discovery; auto-configuration of brownfield projects (scan conventions, generate rules); kit management (install/update with file-level diff); TOC generation; agent integrations (Windsurf, Cursor, Claude, Copilot, OpenAI). Kit sdlc: Artifacts: ADR, CODEBASE, DECOMPOSITION, DESIGN, FEATURE, PR-CODE-REVIEW-TEMPLATE, PR-REVIEW, PR-STATUS-REPORT-TEMPLATE, PRD; Workflows: migrate-openspec, pr-review, pr-status.

cyberfabric/cyberfabric-core+8 more
19d ago
47 · 0
@tenequm

audio-quality-check

Analyze audio recording quality - echo detection, loudness, speech intelligibility, SNR, spectral analysis. Use when the user wants to check a recording's quality, detect echo or duplication in audio files, measure speech clarity, compare original vs processed audio, diagnose why a recording sounds bad, or analyze audio tracks from Blackbox or any call recording app. Triggers on audio quality, recording analysis, echo detection, check recording, sound quality, analyze audio, speech quality, PESQ, STOI, loudness, SNR, audio diagnostics, recording sounds bad, echo in recording, audio duplication.

tenequm/skills+25 more
9d ago
18 · 0
@ofershap
MCP

Ai Context Kit

Lint, measure, and sync AI context files across Cursor, Claude Code, Copilot.

mcp github ai file
ofershap/ai-context-kit
19d ago
0
@proyecto26

Autoresearch: Autonomous Experiment Loop

An autonomous optimization loop where Claude edits code, runs a benchmark, measures a metric, and keeps improvements or reverts — repeating forever until stopped. Inspired by [Karpathy's autoresearch](https://github.com/karpathy/autoresearch) and [pi-autoresearch](https://github.com/davebcn87/pi-a

proyecto26/autoresearch-ai-plugin+1 more
18d ago
5 · 0
@agentaeo
MCP

Agentaeo Mcp Server

AEO audit tool — measure brand visibility across ChatGPT, Perplexity, Claude, and Google AI.

mcp github ai
agentaeo/agentaeo-mcp-server.git
19d ago
0
@ruska-ai

Complexity Audit

Runs a four-phase complexity audit (measure, identify patterns, benchmark, draft PR) on a specified target path. The skill owns orchestration: issue creation, branch/worktree setup, and final reporting. The `complexity-auditor` agent owns execution.

ruska-ai/orchestra+5 more
18d ago
12 · 0
@aitytech

analytics-attribution

Performance measurement, attribution modeling, and marketing ROI analysis. Use when setting up tracking, analyzing campaign performance, building attribution models, or creating marketing reports.

aitytech/agentkits-marketing+8 more
18d ago
374 · 0
@aimoda
MCP

Rigol DHO824 Oscilloscope

Control and query Rigol DHO824 oscilloscope for waveform capture and measurements

mcp github ai
aimoda/rigol-dho824-mcp
19d ago
0
@guia-matthieu

aarrr-metrics

Measure and optimize growth using the AARRR (Pirate Metrics) framework with stage-specific KPIs and funnel analysis

guia-matthieu/clawfu-skills+70 more
18d ago
33 · 0
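The stage-specific funnel analysis this entry describes can be sketched in Python; the stage counts below are made-up illustration data:

```python
def aarrr_funnel(counts: dict) -> dict:
    """Stage-to-stage conversion rates for the AARRR (Pirate Metrics) funnel."""
    stages = ["acquisition", "activation", "retention", "revenue", "referral"]
    rates = {}
    for prev, cur in zip(stages, stages[1:]):
        rates[f"{prev}->{cur}"] = counts[cur] / counts[prev] if counts[prev] else 0.0
    return rates

# Example: 10,000 acquired users flowing through the funnel.
funnel = aarrr_funnel({
    "acquisition": 10_000,
    "activation": 4_000,
    "retention": 1_500,
    "revenue": 300,
    "referral": 90,
})
```

Each rate pinpoints the stage with the biggest drop-off, which is where the framework says to focus optimization effort.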
@keep-starknet-strange

benchmarking-cairo

Use when profiling Cairo functions, measuring step counts, analyzing resource usage, generating call-graph PNGs, or launching pprof to visualize Cairo execution traces

keep-starknet-strange/garaga+1 more
18d ago
257 · 0
@reidemeister94

ai-agent-bench

Use when the user wants to benchmark or compare AI agents (Claude Code, Codex, OpenCode) on a refactoring, perf, or code-change task in the current repo. Use when user says compare agents, benchmark Claude vs Codex, agent eval, measure agent, AI agent comparison, agent trial, /ai-agent-bench.

reidemeister94/development-skills+15 more
10d ago
9 · 0
@cleodin

Analytics Tracking & Measurement Strategy

You are an expert in **analytics implementation and measurement design**. Your goal is to ensure tracking produces **trustworthy signals that directly support decisions** across marketing, product, and growth.

cleodin/antigravity-awesome-skills+1 more
18d ago
16 · 0
@cglab-public

agenfk

Agile, measurable, and reliable workflow enforcement for AI-assisted engineering.

cglab-public/agenfk+1 more
19d ago
37 · 0
@nguyenyou

benchmark

Run scalex performance benchmarks, profiling, and timing analysis. Use this skill whenever the user asks to benchmark scalex, measure performance, profile index/query times, compare before/after performance of a change, investigate bottlenecks, or mentions "benchmark", "perf", "how fast", "timing", "hyperfine", "profile", "flame graph", "profiling", "--timings", "slow", "bottleneck", "regression", "memory", "heap", "GC", "allocation". Also use proactively after implementing performance improvements to verify gains. Covers 6 layers: built-in --timings, hyperfine benchmarks, async-profiler flame graphs, JFR recording, microbenchmarks, and memory profiling.

nguyenyou/scalex+2 more
18d ago
47 · 0
@coreos

alloc-tracker

Use when you need to measure per-phase memory allocation statistics to investigate memory usage.

coreos/chunkah+1 more
18d ago
46 · 0