MiniMax Multi-Modal Toolkit
Generate voice, music, video, and image content via MiniMax APIs — the unified entry point for **MiniMax multimodal** use cases (audio + music + video + image). Includes voice cloning and voice design for custom voices, image generation with character reference, and FFmpeg-based media tools for audio/video.
Io.Github.Cifero74/Mcp Apple Music
Apple Music MCP server — search catalog, manage playlists, and access your library via Claude.
Io.Github.Markswendsen Code/Spotify
MCP server for Spotify — let AI agents control playback, search music, and manage playlists.
Ludo AI Game Assets
Generate game assets with AI: sprites, 3D models, animations, sound effects, music, and voices.
apple-music
Control Apple Music playback, inspect now playing, start playlists, and automate the macOS Music app.
Music Studio
Music studio: ABC notation composition and Strudel live coding with ext-apps UI.
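For orientation, ABC notation (the format this studio composes in) is a plain-text music format. A minimal, hypothetical tune looks like this — the header fields (`X`, `T`, `M`, `L`, `K`) and the note line are standard ABC, but the tune itself is just an illustrative C major scale:

```
X:1
T:Example Scale
M:4/4
L:1/4
K:C
C D E F | G A B c |]
```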
acestep
Use the ACE-Step API to generate, edit, and remix music. Supports text-to-music, lyrics generation, audio continuation, and audio repainting. Use this skill when users mention generating music, creating songs, music production, remixing, or audio continuation.
Rostro
Turn any LLM multimodal; generate images, voices, videos, 3D models, music, and more.
video-podcast-maker
Use when user provides a topic and wants an automated video podcast created - handles research, script writing, TTS audio synthesis, Remotion video creation, and final MP4 output with background music
vap-media
AI image, video, and music generation. Flux, Veo 3.1, Suno V5.
Io.Github.Bnovik0v/Moltdj
AI music and podcast platform for autonomous agents. SoundCloud for AI bots.
DJ Claude
Agents can now live-code music for you, for themselves, and for each other while they work.
seedance-prompt-en
Write effective prompts for Jimeng Seedance 2.0 multimodal AI video generation. Use when users want to create video prompts using text, images, videos, and audio inputs with the @ reference system. Covers camera movements, effects replication, video extension, editing, music beat-matching, e-commerce ads, short dramas, and educational content.
Livepilot
AI copilot for Ableton Live 12 — 96 MCP tools for music production and mixing
deAPI MCP Server
33 AI tools: transcription, image/video generation, TTS, music, OCR, embeddings via deAPI
ACE-Step 1.5 Music Generation
Open-source music generation (MIT license) via `tools/music_gen.py`. Runs on RunPod serverless. Requires `RUNPOD_API_KEY` and `RUNPOD_ACESTEP_ENDPOINT_ID` in `.env` (run `--setup` to create endpoint).
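The two required variables above would live in a `.env` file at the project root. A minimal sketch (the values shown are placeholders, not real credentials — a real key and endpoint ID come from the RunPod dashboard or from running `--setup`):

```
# .env — placeholder values for illustration only
RUNPOD_API_KEY=your_runpod_api_key_here
RUNPOD_ACESTEP_ENDPOINT_ID=your_endpoint_id_here
```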
Io.Github.AceDataCloud/Mcp Suno
MCP server for Suno AI music generation, lyrics, and covers
seedance-20
Generate and direct cinematic AI videos with Seedance 2.0 (ByteDance/Dreamina/Jimeng). Covers text-to-video, image-to-video, video-to-video, and reference-to-video workflows with @Tag asset references, multi-character scenes, audio design, and post-processing. Use when making AI video, writing Seedance prompts, directing a scene, fixing generation errors, or building an AI short film, product ad, or music video.
Dynamoi
Promote music on Spotify and grow YouTube channels through AI-powered Meta and Google ad campaigns.