Building MCP-Compatible Skills for AI Agents
The Model Context Protocol (MCP) is becoming the standard way AI agents discover and use external tools and context. If you are building skills for AI agents, understanding MCP compatibility is not optional - it is the difference between your skill working with one agent and working with all of them.
This guide walks you through building a skill from scratch, publishing it to OpenBooklet, and making sure it works across every major agent platform.
Key Takeaways
- A skill is a YAML frontmatter header plus a Markdown body - simple by design
- OpenBooklet auto-converts your skill into 6 agent formats, so you write once
- Safety scanning runs automatically - your skill gets a trust badge on publish
- Proper structure (clear sections, constraints, examples) makes your skill more reliable
- MCP servers like OpenBooklet's give agents programmatic access to your published skills
What Is MCP and Why Does It Matter?
The Model Context Protocol is a specification created by Anthropic that standardizes how AI agents connect to external data sources and tools. Before MCP, every agent platform had its own way of loading context - Claude Code used `CLAUDE.md` files, Cursor used `.cursorrules`, Windsurf had its own format.
MCP changes this by providing a universal protocol. An MCP server exposes tools and resources that any MCP-compatible agent can discover and use. OpenBooklet runs an MCP server that exposes the entire skills registry - any agent can search for, pull, and use skills through it.
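Under the hood, MCP is framed as JSON-RPC 2.0: a client calls a server tool with a `tools/call` request. The sketch below builds such a request; the tool name and arguments (`search_skills`, `query`) are hypothetical stand-ins - an agent would discover the real tool names from the server's `tools/list` response.

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request (MCP uses JSON-RPC 2.0 framing)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool name and arguments -- consult the server's tools/list
# response for the actual schema it exposes.
request = make_tool_call(1, "search_skills", {"query": "nextjs code review"})
print(request)
```

The payload is the same whether the client is Claude Code, a custom agent, or a test harness - that uniformity is the point of the protocol.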
Why This Matters for Skill Authors
When you publish a skill to OpenBooklet, it becomes available through:
- The MCP server (for any MCP-compatible agent)
- The REST API (for custom integrations)
- The CLI tool (for developers)
- Direct URLs (for manual use)
You do not need to think about distribution. You think about writing a good skill.
The Anatomy of a Skill
Every skill on OpenBooklet follows the same structure: YAML frontmatter for metadata, followed by Markdown content for the actual instructions.
```markdown
---
name: nextjs-code-review
description: Reviews Next.js applications for performance issues, security vulnerabilities, and best practice violations.
tags: [code-review, nextjs, security, performance]
category: Development
---

# Next.js Code Review

You are a senior Next.js engineer performing a thorough code review.

## Process

1. Check for server vs client component boundaries
2. Verify data fetching patterns (no waterfalls)
3. Review authentication and authorization logic
4. Check for proper error handling and loading states
5. Validate SEO metadata implementation

## Constraints

- Never suggest removing TypeScript types
- Flag any use of `any` type
- Recommend Server Components as the default
- Check that sensitive operations happen server-side

## Output Format

For each issue found, provide:

- File and line reference
- Severity (critical / warning / info)
- What is wrong
- How to fix it
```
Frontmatter Fields
The YAML frontmatter is what OpenBooklet reads to catalog your skill. Here are the fields:
| Field | Required | Purpose |
|---|---|---|
| `name` | Yes | URL-safe identifier (lowercase, hyphens) |
| `description` | Yes | One-line summary for search and display |
| `tags` | No | Array of keywords for discovery |
| `category` | No | Primary category for browsing |
The `name` and `description` fields are the only ones that Anthropic's Agent Skills parser reads directly. All other fields are used by OpenBooklet for cataloging, search, and format conversion. This means your skill is natively compatible with Claude Code even without OpenBooklet.
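Splitting a SKILL.md file into frontmatter and body is simple enough to sketch in a few lines. This is a minimal illustration, not OpenBooklet's actual parser - it handles flat `key: value` pairs only, whereas real frontmatter (lists like `tags`, nested values) needs a proper YAML library:

```python
def split_skill(text: str) -> tuple[dict, str]:
    """Split a SKILL.md file into (frontmatter fields, markdown body).

    Minimal sketch: handles flat `key: value` lines only, not full YAML.
    """
    lines = text.splitlines()
    assert lines[0] == "---", "frontmatter must open with ---"
    end = lines.index("---", 1)          # closing delimiter
    fields = {}
    for line in lines[1:end]:
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    body = "\n".join(lines[end + 1:]).strip()
    return fields, body

skill = """---
name: nextjs-code-review
description: Reviews Next.js applications.
---
# Next.js Code Review
You are a senior Next.js engineer."""

fields, body = split_skill(skill)
print(fields["name"])
```

The same split is what every downstream consumer does first: metadata goes to the catalog, the body goes to the agent.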
The Markdown Body
The body is where the actual skill instructions live. There is no rigid schema - you write Markdown that tells the agent what to do. However, certain patterns work better than others.
Effective patterns:
- Start with a role statement ("You are a...")
- Break the task into numbered steps
- Include explicit constraints (what NOT to do)
- Define the expected output format
- Provide examples when the task is ambiguous
Patterns to avoid:
- Vague instructions ("make it better")
- Conflicting constraints
- Excessively long skills (agents have context limits)
- Hardcoded paths or environment-specific values
How Format Conversion Works
One of the most practical features on OpenBooklet is automatic format conversion. You publish one skill, and it gets converted into formats for six different agent platforms.
Here is what happens when you publish:
1. Your skill is stored as the canonical SKILL.md format
2. The format converter generates variants for each supported agent
3. When an agent or user requests a specific format, they get the converted version
The supported formats are:
- Claude Code - .claude/skills/ directory, SKILL.md format (native)
- Cursor - .cursor/rules/ directory, .mdc format with Cursor-specific frontmatter
- Windsurf - .windsurf/rules/ directory, .md format
- GitHub Copilot - .github/copilot-instructions.md format
- GPTs - OpenAI GPT instruction format
- LangChain - Tool definition format for LangChain agents
You do not need to worry about format differences when writing your skill. Write it in the standard SKILL.md format and let OpenBooklet handle the rest. If you want to see how your skill looks in each format, check the "Formats" tab on your skill's detail page after publishing.
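To make the conversion idea concrete, here is a sketch of one target: wrapping a skill body in the frontmatter Cursor's `.mdc` rule files use. The frontmatter field names (`description`, `alwaysApply`) follow Cursor's rule format as commonly documented - check Cursor's own docs for the current schema, and note this is an illustration, not OpenBooklet's converter:

```python
def to_cursor_mdc(name: str, description: str, body: str) -> tuple[str, str]:
    """Return (path, content) for a Cursor .mdc rule file.

    Frontmatter fields here (description, alwaysApply) follow Cursor's
    rule format; consult Cursor's docs for the current schema.
    """
    path = f".cursor/rules/{name}.mdc"
    content = (
        "---\n"
        f"description: {description}\n"
        "alwaysApply: false\n"
        "---\n"
        f"{body}\n"
    )
    return path, content

path, content = to_cursor_mdc(
    "nextjs-code-review",
    "Reviews Next.js applications.",
    "# Next.js Code Review\nYou are a senior Next.js engineer.",
)
print(path)
```

Each target format is a small transform like this over the same canonical source, which is why you only maintain one file.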
Publishing Your Skill
Publishing is straightforward. You can use the web UI, the CLI, or the API.
Using the CLI
```bash
# Install the CLI
npm install -g openbooklet

# Login
ob login

# Publish from a local SKILL.md file
ob publish ./my-skill/SKILL.md
```
Using the Web UI
Go to openbooklet.com/publish and either:
- Paste your SKILL.md content directly
- Import from a URL (GitHub raw file links work)
- Scan an entire GitHub repo for SKILL.md files
The publish flow validates your skill, runs a safety scan, checks for duplicates, and assigns a content hash (SHA-256) for provable authorship.
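You can reproduce the content-hash step locally: SHA-256 over the file's bytes means even a one-character edit produces a completely different hash. (Hashing the raw UTF-8 text as-is is an assumption here - OpenBooklet may canonicalize whitespace or line endings first.)

```python
import hashlib

def content_hash(skill_text: str) -> str:
    """SHA-256 over the skill's UTF-8 bytes; any edit changes the hash."""
    return hashlib.sha256(skill_text.encode("utf-8")).hexdigest()

h = content_hash("---\nname: my-skill\n---\nDo the thing.")
print(h)        # 64 hex characters
```

Because the hash is deterministic, anyone holding the original file can verify it matches the published version.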
What Happens After Publishing
Once published, your skill goes through this pipeline:
1. Validation - frontmatter parsed, required fields checked
2. Content hashing - SHA-256 hash generated for the exact content
3. Safety scan - automated check for prompt injection, data exfiltration patterns, and other risks
4. Similarity check - compared against existing skills to detect duplicates
5. Embedding generation - vector embedding created for semantic search
6. Format conversion - converted into all 6 agent formats
7. Published - live on the registry, searchable, pullable
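The similarity check typically boils down to comparing embedding vectors. The sketch below uses cosine similarity with a 0.95 cutoff - both the metric and the threshold are illustrative assumptions, not OpenBooklet's published internals:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_near_duplicate(emb_a: list[float], emb_b: list[float],
                      threshold: float = 0.95) -> bool:
    """Flag two skills as near-duplicates if their embeddings are close.

    The 0.95 threshold is an illustrative choice, not OpenBooklet's.
    """
    return cosine(emb_a, emb_b) >= threshold

print(cosine([0.1, 0.9, 0.3], [0.1, 0.9, 0.3]))  # close to 1.0
```

Embedding-based comparison catches paraphrased copies, not just byte-identical ones, which is why it complements the exact content hash.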
Testing Your Skill
Before publishing, test your skill locally with the agent you use most.
Testing with Claude Code
Drop your SKILL.md into .claude/skills/my-skill/SKILL.md in any project directory. Claude Code automatically loads skills from this path. Try running the task your skill is designed for and see if the output matches your expectations.
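The setup can be scripted in two commands. The `my-skill` directory name and the placeholder content written here are just for illustration - point `cp` at your real skill file:

```shell
# Create a local skill file (placeholder content for illustration).
mkdir -p ./my-skill
printf '%s\n' '---' 'name: my-skill' '---' 'Do the thing.' > ./my-skill/SKILL.md

# Copy it into the path Claude Code loads skills from.
mkdir -p .claude/skills/my-skill
cp ./my-skill/SKILL.md .claude/skills/my-skill/SKILL.md
```

No restart is needed; Claude Code picks the skill up from the project directory.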
Testing with Cursor
Convert your skill to Cursor format by wrapping it in the .mdc frontmatter Cursor expects, then place it in .cursor/rules/. However, if you publish to OpenBooklet first, you can pull the pre-converted format:
```bash
ob pull my-skill --target cursor
```
Common Issues
- Skill too long - if your skill exceeds the agent's context window, it gets truncated. Keep skills focused on one task.
- Ambiguous instructions - if the agent gives inconsistent results, your instructions probably have multiple valid interpretations. Add constraints or examples.
- Missing context - if the skill assumes knowledge the agent does not have, it will fail. Be explicit about prerequisites.
Making Skills Work Across Agents
Different agents have different strengths and limitations. A skill that works perfectly with Claude might need adjustments for other platforms. Here are the key differences to keep in mind:
Context Window Sizes
Not all agents have the same context window. Claude (200K tokens) can handle much larger skills than models with 32K windows. If you want broad compatibility, keep your skill under 8K tokens.
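A quick way to sanity-check that budget is the rough rule of thumb of ~4 characters per token for English text. Real tokenizers vary by model, so treat this as a coarse estimate only:

```python
def rough_token_count(text: str) -> int:
    """Heuristic estimate: ~4 characters per token for English text.

    Real tokenizers differ per model; use this only as a coarse budget check.
    """
    return max(1, len(text) // 4)

skill_text = "You are a senior Next.js engineer performing a code review. " * 40
estimate = rough_token_count(skill_text)
print(estimate)
if estimate > 8000:
    print("Consider splitting this skill into smaller, focused skills")
```

If the estimate is anywhere near a target agent's limit, trim or split the skill rather than hoping truncation lands somewhere harmless.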
Tool Availability
Some agents can execute code, access files, or make API calls. Others are text-only. If your skill requires tool use, document this in the description so users know which agents it works with.
Instruction Following
Agents vary in how strictly they follow multi-step instructions. Claude tends to follow numbered lists precisely. Other models might paraphrase or skip steps. If strict ordering matters, make it explicit: "Do these steps in order. Do not skip any step."
Do not include API keys, secrets, or credentials in your skill content. OpenBooklet's safety scanner will flag these, and your skill will not pass the safety check. If your skill needs credentials, document that the user should provide them via environment variables.
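You can run a similar check yourself before publishing. The patterns below are illustrative of the kind of thing such a scanner looks for (OpenBooklet's actual rules are not public): one matches the shape of AWS access key IDs, the other catches `key = value` style assignments of secret-looking names.

```python
import re

# Illustrative patterns only -- OpenBooklet's real scanner rules are not public.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key id shape
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S+"),  # key = value leaks
]

def find_secrets(text: str) -> list[str]:
    """Return every substring that matches a known secret pattern."""
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(text)]

print(find_secrets("api_key = sk-123abc"))  # ['api_key = sk-123abc']
```

A local pre-publish pass like this catches the embarrassing cases before the registry's scanner ever sees them.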
FAQ
Can I update a skill after publishing?
Yes. Publishing a new version is the same as the initial publish - just use the same skill name. OpenBooklet uses semver, so you can publish 1.0.0, then 1.1.0, and users can pin to specific versions. Every version gets its own content hash and changelog.
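Semver ordering is positional, not lexicographic, which is why users can pin safely: `1.10.0` is newer than `1.9.0` even though it sorts earlier as a string. A minimal sketch of the comparison (ignoring pre-release and build metadata):

```python
def parse_semver(version: str) -> tuple[int, int, int]:
    """Parse MAJOR.MINOR.PATCH; pre-release/build suffixes are not handled."""
    major, minor, patch = version.split(".")[:3]
    return int(major), int(minor), int(patch)

def newer(a: str, b: str) -> bool:
    """True if version a is newer than version b (tuple comparison)."""
    return parse_semver(a) > parse_semver(b)

print(newer("1.10.0", "1.9.0"))  # True, despite "1.10.0" < "1.9.0" as strings
```

Tuple comparison gives the correct ordering component by component, which is what version pinning relies on.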
Do I need to write separate skills for each agent platform?
No. That is the whole point of format conversion. Write one SKILL.md, publish it once, and OpenBooklet generates the format variants automatically. Users and agents request the format they need via the API, CLI, or MCP server.
How does the safety scanner work?
The scanner checks for common attack patterns: prompt injection attempts, data exfiltration URLs, credential exposure, and suspicious dynamic content. It assigns a safety score. Skills that pass get a badge. Skills that fail get flagged for review. The scanner is not perfect, but it catches the obvious risks that a purely manual review process would miss at scale.
What Comes Next
Once your skill is published, it is available to every agent on the platform. Monitor your skill's pull count and ratings on your dashboard. If users leave feedback, use it to improve the next version.
The best skills on OpenBooklet are the ones that solve a specific problem well, with clear instructions and honest documentation about what they can and cannot do. Start small, publish early, and iterate based on real usage.
Ready to publish? Head to openbooklet.com/publish and ship your first skill.