CSV Data Profiler
openbooklet.com/s/csv-data-profiler@1.0.0
GET /api/v1/skills/csv-data-profiler
Analyzes CSV datasets to produce column-level statistics, missing-value reports, type inference, and data quality scores.
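The listing doesn't include the profiler's implementation, so as a rough illustration only: a minimal column profiler along the lines the blurb describes (naive type inference, missing-value counts, and a simple fill-rate quality score) might look like the sketch below. The function name, the int/float/string inference order, and the fill-rate scoring formula are all assumptions, not the actual skill's behavior.

```python
import csv
import io

def profile_csv(text):
    """Profile a CSV string: per-column type guess, missing count, fill-rate score.

    Illustrative sketch only -- a real profiler would also report
    distributions, cardinality, and outliers.
    """
    rows = list(csv.DictReader(io.StringIO(text)))
    profile = {}
    for col in rows[0].keys():
        values = [r[col] for r in rows]
        missing = sum(1 for v in values if v is None or v.strip() == "")
        present = [v for v in values if v and v.strip()]

        # Naive type inference: try int, then float, else string.
        def kind(v):
            try:
                int(v)
                return "int"
            except ValueError:
                try:
                    float(v)
                    return "float"
                except ValueError:
                    return "string"

        kinds = {kind(v) for v in present} or {"unknown"}
        if "string" in kinds:
            inferred = "string"
        elif "float" in kinds:
            inferred = "float"
        else:
            inferred = next(iter(kinds))

        profile[col] = {
            "inferred_type": inferred,
            "missing": missing,
            # Quality as fill rate: share of non-empty cells in the column.
            "quality": round(1 - missing / len(values), 2),
        }
    return profile

sample = "id,score,name\n1,9.5,Ada\n2,,Grace\n3,7.0,\n"
report = profile_csv(sample)
```

For the sample above, `report["score"]` infers `float` with one missing cell, while `id` comes out as a fully populated `int` column.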
More by quantumboost
TypeScript Refactor Assistant
Guides AI agents through safe TypeScript refactoring patterns including type narrowing, generic extraction, and module decomposition.
Chart Specification Generator
Generates Vega-Lite or Chart.js specifications from data descriptions, choosing optimal chart types for the given data relationships.
Pull Request Review Pipeline
End-to-end PR review: runs code review checklist, generates missing unit tests, then suggests refactoring improvements.
Social Media Repurposer
Transforms long-form content into platform-specific social media posts for Twitter/X, LinkedIn, and Threads.
Editorial Tone Checker
Analyzes text for tone consistency, jargon overuse, passive voice, and adherence to a specified editorial style guide.
Dataset Insight Report
Profiles a CSV dataset, generates analytical SQL queries, and produces chart specifications for key findings.
Related Skills
x-twitter-scraper
X (Twitter) data platform skill: tweet search, user lookup, follower extraction, engagement metrics, giveaway draws, monitoring, webhooks, 19 extraction tools, MCP server.
analytics-engineering
Use this skill when building dbt models, designing semantic layers, defining metrics, creating self-serve analytics, or structuring a data warehouse for analyst consumption. Triggers on dbt project setup, model layering (staging, intermediate, marts), ref() and source() usage, YAML schema definitions, metrics definitions, semantic layer configuration, dimensional modeling, slowly changing dimensions, data testing, and any task requiring analytics engineering best practices.
Database Schema Reviewer
Reviews database schemas for normalization issues, missing indexes, naming inconsistencies, and scalability risks.
data-pipelines
Use this skill when building data pipelines, ETL/ELT workflows, or data transformation layers. Triggers on Airflow DAG design, dbt model creation, Spark job optimization, streaming vs batch architecture decisions, data ingestion, data quality checks, pipeline orchestration, incremental loads, CDC (change data capture), schema evolution, and data warehouse modeling. Acts as a senior data engineer advisor for building reliable, scalable data infrastructure.
data-quality
Use this skill when implementing data validation, data quality monitoring, data lineage tracking, data contracts, or Great Expectations test suites. Triggers on schema validation, data profiling, freshness checks, row-count anomalies, column drift, expectation suites, contract testing between producers and consumers, lineage graphs, data observability, and any task requiring data integrity enforcement across pipelines.
real-time-streaming
Use this skill when building real-time data pipelines, stream processing jobs, or change data capture systems. Triggers on tasks involving Apache Kafka (producers, consumers, topics, partitions, consumer groups, Connect, Streams), Apache Flink (DataStream API, windowing, checkpointing, stateful processing), event sourcing implementations, CDC with Debezium, stream processing patterns (windowing, watermarks, exactly-once semantics), and any pipeline that processes unbounded data in motion rather than data at rest.