> comfyui

Node-based graphical interface for Stable Diffusion workflows. Build complex image generation pipelines by connecting nodes visually. Supports custom nodes, ControlNet, LoRA, upscaling, and advanced workflows with full control over the diffusion process.


ComfyUI

Installation

# install.sh — Clone and set up ComfyUI
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI

# Install dependencies (NVIDIA GPU)
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt

# Start the server
python main.py --listen 0.0.0.0 --port 8188
# Visit http://localhost:8188

Model Setup

# setup_models.sh — Download and place models in the correct directories
cd ComfyUI

# SDXL base model
wget -P models/checkpoints/ \
    "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors"

# VAE
wget -P models/vae/ \
    "https://huggingface.co/stabilityai/sdxl-vae/resolve/main/sdxl_vae.safetensors"

# LoRA adapters go in models/loras/
# ControlNet models go in models/controlnet/
# Upscale models go in models/upscale_models/
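ComfyUI only discovers a model if it sits in the right subdirectory, so a small path helper can prevent misplaced downloads. A sketch (the `model_dest` helper and the `MODEL_DIRS` mapping are ours; the directory names mirror the layout above):

```python
from pathlib import Path

# Maps a model kind to its subdirectory under ComfyUI/models/
MODEL_DIRS = {
    "checkpoint": "checkpoints",
    "vae": "vae",
    "lora": "loras",
    "controlnet": "controlnet",
    "upscale": "upscale_models",
}

def model_dest(comfyui_root: str, kind: str, filename: str) -> Path:
    """Return the path where a model file of the given kind belongs."""
    if kind not in MODEL_DIRS:
        raise ValueError(f"unknown model kind: {kind!r}")
    return Path(comfyui_root) / "models" / MODEL_DIRS[kind] / filename

print(model_dest("ComfyUI", "lora", "my_lora.safetensors").as_posix())
# → ComfyUI/models/loras/my_lora.safetensors
```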

API: Queue a Workflow

# queue_prompt.py — Submit a workflow to ComfyUI via the API
import json
import requests
import uuid

COMFYUI_URL = "http://localhost:8188"

# Basic txt2img workflow
workflow = {
    "3": {
        "class_type": "KSampler",
        "inputs": {
            "seed": 42,
            "steps": 25,
            "cfg": 7.5,
            "sampler_name": "euler_ancestral",
            "scheduler": "normal",
            "denoise": 1.0,
            "model": ["4", 0],
            "positive": ["6", 0],
            "negative": ["7", 0],
            "latent_image": ["5", 0],
        },
    },
    "4": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"},
    },
    "5": {
        "class_type": "EmptyLatentImage",
        "inputs": {"width": 1024, "height": 1024, "batch_size": 1},
    },
    "6": {
        "class_type": "CLIPTextEncode",
        "inputs": {
            "text": "A majestic mountain landscape at golden hour, photorealistic, 8k",
            "clip": ["4", 1],
        },
    },
    "7": {
        "class_type": "CLIPTextEncode",
        "inputs": {
            "text": "blurry, low quality, distorted",
            "clip": ["4", 1],
        },
    },
    "8": {
        "class_type": "VAEDecode",
        "inputs": {"samples": ["3", 0], "vae": ["4", 2]},
    },
    "9": {
        "class_type": "SaveImage",
        "inputs": {"filename_prefix": "comfyui_output", "images": ["8", 0]},
    },
}

client_id = str(uuid.uuid4())
response = requests.post(
    f"{COMFYUI_URL}/prompt",
    json={"prompt": workflow, "client_id": client_id},
)
prompt_id = response.json()["prompt_id"]
print(f"Queued prompt: {prompt_id}")
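Because an API-format workflow is plain JSON keyed by node id, varying the prompt, seed, or resolution is ordinary dict mutation. A hypothetical helper over the workflow above (node ids "3", "5", and "6" are the KSampler, EmptyLatentImage, and positive CLIPTextEncode nodes from that dict):

```python
import copy

def make_variant(workflow: dict, *, prompt: str, seed: int,
                 width: int = 1024, height: int = 1024) -> dict:
    """Deep-copy the workflow and swap in a new prompt, seed, and size."""
    wf = copy.deepcopy(workflow)
    wf["6"]["inputs"]["text"] = prompt   # positive CLIPTextEncode
    wf["3"]["inputs"]["seed"] = seed     # KSampler
    wf["5"]["inputs"]["width"] = width   # EmptyLatentImage
    wf["5"]["inputs"]["height"] = height
    return wf

# e.g. queue four seeds of the same prompt:
# for s in range(4):
#     variant = make_variant(workflow, prompt="a foggy harbor at dawn", seed=s)
#     requests.post(f"{COMFYUI_URL}/prompt", json={"prompt": variant, "client_id": client_id})
```

Deep-copying keeps the base workflow reusable across queued variants.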

API: Get Results and Download Images

# get_results.py — Poll for completion and download generated images
import os
import time

import requests

COMFYUI_URL = "http://localhost:8188"

def wait_for_completion(prompt_id: str) -> dict:
    # /history/{prompt_id} stays empty until the workflow finishes executing
    while True:
        response = requests.get(f"{COMFYUI_URL}/history/{prompt_id}")
        history = response.json()
        if prompt_id in history:
            return history[prompt_id]
        time.sleep(1)

def download_images(history: dict, output_dir: str = "./outputs"):
    os.makedirs(output_dir, exist_ok=True)
    for node_id, node_output in history["outputs"].items():
        for image in node_output.get("images", []):
            # Pass filename/subfolder/type as query params so requests
            # URL-encodes them (filenames may contain spaces)
            params = {
                "filename": image["filename"],
                "subfolder": image.get("subfolder", ""),
                "type": image["type"],
            }
            response = requests.get(f"{COMFYUI_URL}/view", params=params)
            response.raise_for_status()
            filepath = os.path.join(output_dir, image["filename"])
            with open(filepath, "wb") as f:
                f.write(response.content)
            print(f"Saved: {filepath}")

# Usage after queuing a prompt
prompt_id = "your-prompt-id"
history = wait_for_completion(prompt_id)
download_images(history)
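When you only need links rather than files (for example, to hand off to a gallery or another service), the same history structure can be mapped to `/view` URLs. A sketch under the same assumptions about the history shape; the function name is ours:

```python
from urllib.parse import urlencode

def image_view_urls(history: dict, base_url: str = "http://localhost:8188") -> list:
    """Map a /history entry to direct /view URLs for each generated image."""
    urls = []
    for node_output in history.get("outputs", {}).values():
        for image in node_output.get("images", []):
            query = urlencode({
                "filename": image["filename"],
                "subfolder": image.get("subfolder", ""),
                "type": image["type"],
            })
            urls.append(f"{base_url}/view?{query}")
    return urls
```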

Custom Nodes (ComfyUI Manager)

# install_manager.sh — Install ComfyUI Manager for easy custom node management
cd ComfyUI/custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager.git

# Restart ComfyUI — Manager button appears in the UI
# Popular custom node packs:
# - ComfyUI-Impact-Pack: Detection, segmentation, inpainting
# - ComfyUI-AnimateDiff: Animation from static images
# - ComfyUI-IPAdapter: Image prompt adapter for style transfer
# - rgthree-comfy: Workflow organization utilities
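Custom nodes themselves are small Python classes following ComfyUI's conventions: an `INPUT_TYPES` classmethod, `RETURN_TYPES`, a `FUNCTION` name, and a module-level `NODE_CLASS_MAPPINGS` export. A minimal skeleton you could drop into `custom_nodes/` (the node name and behavior here are illustrative, not from the source):

```python
class BrightnessOffset:
    """Illustrative custom node: shift image brightness by a constant."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "image": ("IMAGE",),
                "offset": ("FLOAT", {"default": 0.1, "min": -1.0, "max": 1.0}),
            }
        }

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "apply"
    CATEGORY = "image/adjust"

    def apply(self, image, offset):
        # ComfyUI images are torch tensors with values in [0, 1];
        # clamp after shifting so the result stays in range
        return ((image + offset).clamp(0.0, 1.0),)

# ComfyUI discovers nodes through these module-level mappings
NODE_CLASS_MAPPINGS = {"BrightnessOffset": BrightnessOffset}
NODE_DISPLAY_NAME_MAPPINGS = {"BrightnessOffset": "Brightness Offset"}
```

After a restart, the node appears in the UI under the declared `CATEGORY`.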

ControlNet Workflow

# controlnet_workflow.py — Generate images guided by ControlNet (edge detection, depth, pose)
# Note: the ControlNet must match the checkpoint's model family; an SD1.5
# ControlNet (control_v11p_sd15_*) will not work with an SDXL checkpoint.
controlnet_nodes = {
    "10": {
        "class_type": "ControlNetLoader",
        "inputs": {"control_net_name": "control_v11p_sd15_canny.pth"},
    },
    "11": {
        "class_type": "LoadImage",
        "inputs": {"image": "input_image.png"},
    },
    "12": {
        # CannyEdgePreprocessor comes from the comfyui_controlnet_aux custom
        # node pack; core ComfyUI ships a comparable "Canny" node.
        "class_type": "CannyEdgePreprocessor",
        "inputs": {"image": ["11", 0], "low_threshold": 100, "high_threshold": 200},
    },
    "13": {
        "class_type": "ControlNetApply",
        "inputs": {
            "conditioning": ["6", 0],
            "control_net": ["10", 0],
            "image": ["12", 0],
            "strength": 0.8,
        },
    },
}
# Connect node "13" output to KSampler positive conditioning instead of "6"
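Wiring these nodes into the txt2img workflow from earlier is again plain dict surgery. A hypothetical helper that merges the node dicts and repoints the KSampler (node "3" in the earlier workflow) at the ControlNetApply output:

```python
import copy

def apply_controlnet(workflow: dict, controlnet_nodes: dict,
                     sampler_id: str = "3", apply_id: str = "13") -> dict:
    """Merge ControlNet nodes into a workflow and rewire the sampler's
    positive conditioning to the ControlNetApply node's output."""
    wf = copy.deepcopy(workflow)
    wf.update(copy.deepcopy(controlnet_nodes))
    wf[sampler_id]["inputs"]["positive"] = [apply_id, 0]
    return wf
```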

Docker Deployment

# docker-compose.yml — Run ComfyUI in Docker with GPU support
# Requires the NVIDIA Container Toolkit on the host; the top-level
# "version" key is obsolete in Compose v2 and can be omitted.
services:
  comfyui:
    image: ghcr.io/ai-dock/comfyui:latest
    ports:
      - "8188:8188"
    volumes:
      - ./models:/workspace/ComfyUI/models
      - ./output:/workspace/ComfyUI/output
      - ./custom_nodes:/workspace/ComfyUI/custom_nodes
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]

Key Concepts

  • Nodes and links: Visual programming — connect output slots to input slots to build pipelines
  • Workflows: Saved as JSON files — shareable, version-controllable, API-submittable
  • Custom nodes: Extend functionality via Python — community ecosystem via ComfyUI Manager
  • Checkpoints: Model files (.safetensors) placed in models/checkpoints/
  • LoRA: Lightweight fine-tuned adapters loaded alongside base models
  • ControlNet: Guide generation with structural inputs (edges, depth, pose)
  • API-first: Full HTTP API for queuing prompts and retrieving results programmatically
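The LoRA bullet above maps to a single extra node in API-format JSON: a LoraLoader sits between the checkpoint loader and everything that consumed its model/CLIP outputs. A sketch against the earlier txt2img workflow (the LoRA filename and node id "20" are illustrative):

```python
# LoraLoader patches both the model and the CLIP, so downstream nodes must
# reference its outputs instead of the checkpoint loader's.
lora_node = {
    "20": {
        "class_type": "LoraLoader",
        "inputs": {
            "lora_name": "my_style_lora.safetensors",  # illustrative name
            "strength_model": 0.8,
            "strength_clip": 0.8,
            "model": ["4", 0],   # from CheckpointLoaderSimple
            "clip": ["4", 1],
        },
    },
}
# Then rewire: KSampler "model" -> ["20", 0]; both CLIPTextEncode "clip" -> ["20", 1]
```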
