Database Schema Reviewer
openbooklet.com/s/database-schema-reviewer
@1.0.0
GET /api/v1/skills/database-schema-reviewer
Reviews database schemas for normalization issues, missing indexes, naming inconsistencies, and scalability risks.
Writes a high-quality CLAUDE.md, .cursorrules, or .windsurfrules file that gives a coding agent the right project context, conventions, and constraints to work effectively.
Designs an eval suite for an LLM agent or pipeline including success metrics, trajectory scoring, LLM-as-judge setup, and regression test cases.
Reads a codebase or system description and produces a clear, structured architecture overview with diagrams.
Takes a long-form article and repurposes it into multiple formats: tweet thread, LinkedIn post, TL;DR, and key quotes.
Designs a secure authentication and authorization flow for any application, covering login, sessions, roles, and edge cases.
Generates a detailed, SEO-optimized blog post outline with H2/H3 structure, key points per section, and a hook.
Compares two versions of a codebase or API and flags all breaking changes with migration hints.
Systematically diagnoses bugs by tracing execution flow and identifying root causes vs symptoms.
Recommends the right caching layer, TTL strategy, and invalidation approach for any application bottleneck.
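To illustrate the TTL strategy this skill reasons about, here is a minimal Python sketch of a cache with lazy per-read expiry; the class name and the injectable clock are our own illustrative choices, not part of the skill:

```python
import time

class TTLCache:
    """Toy cache where every entry expires a fixed TTL after it is set.
    Expired entries are evicted lazily, on the next read."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock  # injectable for testing
        self._store = {}

    def set(self, key, value):
        # Record the value together with its absolute expiry time.
        self._store[key] = (value, self.clock() + self.ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if self.clock() >= expires:
            del self._store[key]  # lazy expiry on read
            return None
        return value
```

A fixed TTL like this trades freshness for simplicity; explicit invalidation on writes is the usual complement when stale reads are unacceptable.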
Maintains and formats a CHANGELOG.md following Keep a Changelog conventions from git history or PR list.
Generates production-ready docker-compose.yml files for any application stack.
Audits a codebase or docs folder and lists everything that is undocumented, outdated, or unclear.
Systematically identifies edge cases and boundary conditions for any function, API, or user flow.
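As a concrete example of the boundary analysis described above, a short Python sketch of classic boundary-value probing for an inclusive integer range (the function name and inclusive-range assumption are ours, for illustration):

```python
def boundary_values(lo, hi):
    """Boundary-value analysis for an inclusive range [lo, hi]:
    probe at, just inside, and just outside each end."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# e.g. for an input validated to lie between 1 and 10:
cases = boundary_values(1, 10)  # [0, 1, 2, 9, 10, 11]
```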
Parses raw error logs and produces a concise, prioritized summary of unique issues with root cause hints.
Diagnoses why tests pass inconsistently and suggests fixes for timing, ordering, and state isolation issues.
Generates conventional commit messages from staged changes or a diff.
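Conventional commit headers follow the `<type>[optional scope]: <description>` shape from the Conventional Commits spec; a hedged Python sketch of composing one (the helper name is illustrative, not part of the skill):

```python
def conventional_commit(ctype, description, scope=None, breaking=False):
    """Compose a Conventional Commits header:
    <type>[optional scope][!]: <description>."""
    scope_part = f"({scope})" if scope else ""
    bang = "!" if breaking else ""  # "!" marks a breaking change
    return f"{ctype}{scope_part}{bang}: {description}"

conventional_commit("fix", "handle empty diff", scope="parser")
# -> "fix(parser): handle empty diff"
```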
Reviews an AI-generated response or LLM application output for factual risks, hallucination patterns, and confidence calibration issues.
Designs a hybrid retrieval pipeline combining dense vector search and BM25 sparse search with reciprocal rank fusion, and explains when to use each configuration.
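Reciprocal rank fusion itself is compact; a minimal sketch assuming two pre-computed rankings (document IDs ordered best-first) and the commonly used constant k = 60 (the document IDs here are hypothetical):

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal rank fusion: each document's score is the sum of
    1 / (k + rank) over every ranking it appears in; higher is better."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense = ["d3", "d1", "d2"]   # hypothetical dense-vector ranking
sparse = ["d1", "d2", "d4"]  # hypothetical BM25 ranking
fused = rrf_fuse([dense, sparse])  # documents in both lists rise to the top
```

Because RRF works only on ranks, it needs no score normalization between the dense and sparse retrievers, which is a common reason to prefer it over weighted score sums.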
Takes a GitHub issue and produces a step-by-step implementation plan with file locations and code changes needed.
Auto-indexed from Notysoty/openagentskills
Related Skills
x-twitter-scraper
X (Twitter) data platform skill — tweet search, user lookup, follower extraction, engagement metrics, giveaway draws, monitoring, webhooks, 19 extraction tools, MCP server.
analytics-engineering
Use this skill when building dbt models, designing semantic layers, defining metrics, creating self-serve analytics, or structuring a data warehouse for analyst consumption. Triggers on dbt project setup, model layering (staging, intermediate, marts), ref() and source() usage, YAML schema definitions, metrics definitions, semantic layer configuration, dimensional modeling, slowly changing dimensions, data testing, and any task requiring analytics engineering best practices.
data-pipelines
Use this skill when building data pipelines, ETL/ELT workflows, or data transformation layers. Triggers on Airflow DAG design, dbt model creation, Spark job optimization, streaming vs batch architecture decisions, data ingestion, data quality checks, pipeline orchestration, incremental loads, CDC (change data capture), schema evolution, and data warehouse modeling. Acts as a senior data engineer advisor for building reliable, scalable data infrastructure.
data-quality
Use this skill when implementing data validation, data quality monitoring, data lineage tracking, data contracts, or Great Expectations test suites. Triggers on schema validation, data profiling, freshness checks, row-count anomalies, column drift, expectation suites, contract testing between producers and consumers, lineage graphs, data observability, and any task requiring data integrity enforcement across pipelines.
real-time-streaming
Use this skill when building real-time data pipelines, stream processing jobs, or change data capture systems. Triggers on tasks involving Apache Kafka (producers, consumers, topics, partitions, consumer groups, Connect, Streams), Apache Flink (DataStream API, windowing, checkpointing, stateful processing), event sourcing implementations, CDC with Debezium, stream processing patterns (windowing, watermarks, exactly-once semantics), and any pipeline that processes unbounded data in motion rather than data at rest.
spreadsheet-modeling
Use this skill when building, auditing, or optimizing spreadsheet models in Excel or Google Sheets. Triggers on formula writing, pivot table creation, dashboard design, data validation, conditional formatting, macro/VBA scripting, Apps Script automation, financial modeling, what-if analysis, XLOOKUP/INDEX-MATCH lookups, array formulas, and workbook architecture. Covers advanced Excel and Google Sheets for analysts, finance professionals, and operations teams.