> comfyui-video-generator
curl "https://skillshub.wtf/tippyentertainment/skills/comfyui-video-generator?format=md"
Provided by TippyEntertainment
https://github.com/tippyentertainment/skills.git
This skill is designed for use on the Tasking.tech agent platform (https://tasking.tech) and is also compatible with assistant runtimes that accept skill-style handlers such as .claude, .openai, and .mistral. Use it as both Claude Code skill source and Tasking.tech agent source.
comfyui-video-generator
Summary
Generate short AI video clips (shots) using ComfyUI. This skill turns text prompts (and optional reference images) into 3–8 second scenes that can be stitched into longer “AI movies”.
When to Use
- The user asks for an AI video, scene, or shot.
- You are building a multi‑shot story or trailer and need per‑scene clips.
- You need b‑roll, promo visuals, or anime‑style sequences.
Inputs to Collect
Ask the user for:
- Scene description
  - Setting, characters/subjects, what happens in the shot.
- Visual style
  - Anime, cinematic, painterly, realistic, cyberpunk, etc.
- Duration target
  - Default 4–6 seconds; keep under ~200 frames per render.
- Frame rate
  - Default 16–24 fps.
- Resolution
  - Default 720p (1280×720); lower for tests.
- Camera / motion (optional)
  - Static, slow zoom, pan, orbit, handheld, etc.
- Reference images (optional)
  - URLs or file handles for style/character consistency.
- Output format
  - MP4/WebM or frame sequence.
If the prompt is vague, ask 2–3 clarifying questions, then proceed with defaults.
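The duration, frame-rate, and frame-budget defaults above can be sketched as a small check (a minimal sketch; `MAX_FRAMES` and the helper name are illustrative, not part of any skill API):

```python
# Illustrative helper: keep a requested shot inside the ~200-frame render budget.
MAX_FRAMES = 200  # soft cap suggested by this skill's defaults

def plan_frames(duration_seconds: float = 5.0, fps: int = 24) -> int:
    """Frame count for one shot, clamped to the render budget."""
    return min(round(duration_seconds * fps), MAX_FRAMES)

# A 5 s shot at 24 fps fits the budget; a 10 s shot gets clamped.
print(plan_frames(5, 24))   # 120
print(plan_frames(10, 24))  # 200
```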
Expected Behavior
- Normalize the request into structured parameters:
  - prompt, negativePrompt, durationSeconds, fps, resolution, cameraStyle, referenceAssets.
- Choose an appropriate ComfyUI workflow:
- text‑to‑video, image‑to‑video, or ref‑guided loop.
- Call the ComfyUI backend with those parameters.
- Monitor for completion and collect:
- Video file path/URL.
- Metadata (seed, model, duration, fps, resolution).
- Return a concise summary plus the link/file handle.
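The normalization and workflow-selection steps above can be sketched as follows (field names come from this document; the function names, defaults, and the loop trigger are assumptions for illustration, not a prescribed implementation):

```python
def normalize_request(raw: dict) -> dict:
    """Normalize a user request into the structured parameters listed above."""
    return {
        "prompt": raw["prompt"],
        "negativePrompt": raw.get("negativePrompt", ""),
        "durationSeconds": float(raw.get("durationSeconds", 5.0)),
        "fps": int(raw.get("fps", 24)),
        "resolution": raw.get("resolution", {"width": 1280, "height": 720}),
        "cameraStyle": raw.get("cameraStyle", "static"),
        "referenceAssets": raw.get("referenceAssets", []),
    }

def choose_workflow(params: dict) -> str:
    """Pick a workflow family per the branch above (assumed mapping)."""
    if not params["referenceAssets"]:
        return "text-to-video"
    if params.get("loop"):  # hypothetical flag marking a ref-guided looping shot
        return "ref-guided-loop"
    return "image-to-video"
```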
Output Format (to the caller)
The skill should return a JSON‑like structure (or equivalent in your system) with at least:
- description: short human‑readable description of the clip.
- videoUrl or videoPath
- durationSeconds
- fps
- resolution: { width, height }
- seed (if available)
- modelInfo (model names/versions used)
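For example, a successful call might return something shaped like this (field names follow the list above; all values are illustrative):

```python
# Example return payload; values are made up for illustration.
clip_result = {
    "description": "Slow dolly through a rain-soaked cyberpunk alley at night.",
    "videoUrl": "https://example.invalid/outputs/shot_001.mp4",  # or videoPath
    "durationSeconds": 5.0,
    "fps": 24,
    "resolution": {"width": 1280, "height": 720},
    "seed": 123456789,                                   # if available
    "modelInfo": {"checkpoint": "example-video-model"},  # assumed shape
}
```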
Orchestration Notes
- This skill is typically called first in a pipeline:
  1. comfyui-video-generator → create shots.
  2. comfyui-audio-creator → create music/ambience for those shots.
  3. comfyui-soundfx-creator → create per‑event sound effects.
- Do NOT handle editing/muxing here; just generate the raw video asset and metadata.
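The pipeline ordering can be expressed as data (skill names are from this repo; the `call_skill` dispatcher and driver loop are hypothetical illustrations of how an orchestrator might invoke the stages in sequence):

```python
# Stage order for an "AI movie" pipeline; this skill is always stage 1.
PIPELINE = [
    "comfyui-video-generator",  # 1. raw shots + metadata (this skill)
    "comfyui-audio-creator",    # 2. music/ambience for those shots
    "comfyui-soundfx-creator",  # 3. per-event sound effects
]

def run_pipeline(call_skill, request: dict) -> list:
    """Call each stage in order, passing the accumulated results along.

    `call_skill` is a hypothetical dispatcher: (skill_name, payload) -> result.
    """
    results = []
    for skill in PIPELINE:
        results.append(call_skill(skill, {"request": request, "prior": list(results)}))
    return results
```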
Related skills (same repo)
> worldclass-tailwind-v4-visual-design
A top-tier product/UI designer skill that uses Tailwind v4 plus Google Gemini Nano Banana image models to craft visually stunning, “award‑winning” marketing sites and apps with strong art direction, motion, and systems thinking.
> wasm-spa-autofix-react-imports
Meticulously detect and fix missing React/TSX imports, undefined components, and bundler runtime errors in the WASM SPA build/preview pipeline. Ensures JSX components, icons, and hooks are properly imported or defined before running the browser preview, so the runtime safety-net rarely triggers.
> vite-webcontainer-developer
Debug and auto-fix Vite projects running inside WebContainers: resolve mount/root issues, alias/path errors, missing scripts, and other common dev-time problems so the app boots cleanly.
> vite-config-react19-spa-expert
Diagnose and fix Vite + React 19 configuration issues for TypeScript SPA and WASM preview builds. Specializes in React 19’s JSX runtime, @vitejs/plugin-react, path aliases, SPA routing, and dev-server behavior so the app and in-browser preview bundle cleanly without manual trial-and-error.