
Implement ElevenLabs text-to-speech and voice cloning workflows. Use when building TTS features, cloning voices from audio samples, or implementing the primary ElevenLabs money-path: voice generation. Trigger: "elevenlabs TTS", "text to speech", "voice cloning elevenlabs", "clone a voice", "generate speech", "elevenlabs voice".


ElevenLabs Core Workflow A — TTS & Voice Cloning

Overview

This skill covers the primary ElevenLabs workflows: (1) text-to-speech with tuned voice settings, (2) Instant Voice Cloning from audio samples, and (3) streaming TTS over WebSocket for real-time applications.

Prerequisites

  • Completed elevenlabs-install-auth setup
  • Valid API key with sufficient character quota
  • For voice cloning: audio recording(s) of the target voice (min 30 seconds, clean audio)

Instructions

Step 1: Advanced Text-to-Speech

import { ElevenLabsClient } from "@elevenlabs/elevenlabs-js";
import { createWriteStream } from "fs";
import { Readable } from "stream";
import { pipeline } from "stream/promises";

const client = new ElevenLabsClient();

async function generateSpeech(
  text: string,
  voiceId: string,
  outputPath: string
) {
  const audio = await client.textToSpeech.convert(voiceId, {
    text,
    model_id: "eleven_multilingual_v2",
    voice_settings: {
      stability: 0.5,          // Lower = more expressive, higher = more consistent
      similarity_boost: 0.75,  // How closely to match the original voice
      style: 0.3,              // Amplify the speaker's style (adds latency if > 0)
      speed: 1.0,              // 0.7 to 1.2 range
    },
    // Optional: enforce language for multilingual model
    // language_code: "en",    // ISO 639-1
  });

  await pipeline(Readable.fromWeb(audio as any), createWriteStream(outputPath));
  console.log(`Generated: ${outputPath}`);
}

// Example: generate speech with a premade voice
await generateSpeech("Welcome to our platform.", "21m00Tcm4TlvDq8ikWAM", "stable.mp3");
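Under load, TTS calls can fail with HTTP 429 when they exceed the plan's concurrency limit. A minimal retry-with-backoff wrapper can absorb these; this is a sketch, not part of the SDK — `withRetry`, `maxAttempts`, and `baseDelayMs` are illustrative names:

```typescript
// Sketch: retry an async call on HTTP 429 with exponential backoff.
// withRetry, maxAttempts, and baseDelayMs are illustrative names,
// not part of the ElevenLabs SDK.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      lastError = err;
      // Only retry rate-limit errors; rethrow everything else.
      if (err?.status !== 429 && err?.statusCode !== 429) throw err;
      // Exponential backoff: 500ms, 1000ms, 2000ms, ...
      await new Promise(r => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}

// Usage (hypothetical): wrap any SDK call.
// const audio = await withRetry(() =>
//   client.textToSpeech.convert(voiceId, { text, model_id: "eleven_multilingual_v2" })
// );
```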

Step 2: Instant Voice Cloning (IVC)

Clone a voice from audio samples using POST /v1/voices/add:

import { createReadStream } from "fs";

async function cloneVoice(
  name: string,
  description: string,
  audioFiles: string[]  // Paths to audio samples
) {
  const voice = await client.voices.add({
    name,
    description,
    files: audioFiles.map(f => createReadStream(f)),
    // Optional: label the voice for organization
    labels: JSON.stringify({ accent: "american", age: "young" }),
  });

  console.log(`Cloned voice created: ${voice.voice_id}`);
  console.log(`Name: ${name}`);

  // Use the cloned voice immediately
  const audio = await client.textToSpeech.convert(voice.voice_id, {
    text: "This is my cloned voice speaking!",
    model_id: "eleven_multilingual_v2",
    voice_settings: {
      stability: 0.5,
      similarity_boost: 0.85,  // Higher for cloned voices to stay close to original
    },
  });

  return { voiceId: voice.voice_id, audio };
}

// Clone from 1-25 audio samples (more = better quality)
await cloneVoice(
  "My Custom Voice",
  "Professional narrator voice",
  ["sample1.mp3", "sample2.mp3"]
);

Step 3: WebSocket Streaming TTS

For real-time applications (chatbots, live narration), use the WebSocket endpoint:

import WebSocket from "ws";

async function streamTTSWebSocket(
  voiceId: string,
  textChunks: string[]
) {
  const modelId = "eleven_flash_v2_5"; // Best for real-time streaming
  const wsUrl = `wss://api.elevenlabs.io/v1/text-to-speech/${voiceId}/stream-input?model_id=${modelId}`;

  const ws = new WebSocket(wsUrl);
  const audioChunks: Buffer[] = [];

  return new Promise<Buffer>((resolve, reject) => {
    ws.on("open", () => {
      // Send initial config (BOS - Beginning of Stream)
      ws.send(JSON.stringify({
        text: " ",  // Space signals BOS
        voice_settings: {
          stability: 0.5,
          similarity_boost: 0.75,
        },
        xi_api_key: process.env.ELEVENLABS_API_KEY,
        // generation_config controls how many chars to buffer
        // before generating each audio chunk
        generation_config: {
          chunk_length_schedule: [120, 160, 250, 290],
        },

      // Stream text chunks
      for (const chunk of textChunks) {
        ws.send(JSON.stringify({ text: chunk }));
      }

      // Send EOS (End of Stream)
      ws.send(JSON.stringify({ text: "" }));
    });

    ws.on("message", (data: Buffer) => {
      const msg = JSON.parse(data.toString());
      if (msg.audio) {
        // Base64-encoded audio chunk
        audioChunks.push(Buffer.from(msg.audio, "base64"));
      }
      if (msg.isFinal) {
        ws.close();
      }
    });

    ws.on("close", () => resolve(Buffer.concat(audioChunks)));
    ws.on("error", reject);
  });
}

// Stream from an LLM response in chunks
const chunks = ["Hello, ", "this is ", "streamed ", "speech!"];
const audio = await streamTTSWebSocket("21m00Tcm4TlvDq8ikWAM", chunks);

Step 4: Voice Management

// List all available voices
async function listVoices() {
  const { voices } = await client.voices.getAll();
  for (const v of voices) {
    console.log(`${v.name} (${v.voice_id}) — ${v.category}`);
    // category: "premade" | "cloned" | "generated"
  }
}

// Get voice settings defaults
async function getVoiceSettings(voiceId: string) {
  const settings = await client.voices.getSettings(voiceId);
  console.log(`Stability: ${settings.stability}`);
  console.log(`Similarity: ${settings.similarity_boost}`);
}

// Update default voice settings
async function updateVoiceSettings(voiceId: string) {
  await client.voices.editSettings(voiceId, {
    stability: 0.6,
    similarity_boost: 0.8,
  });
}

// Delete a cloned voice
async function deleteVoice(voiceId: string) {
  await client.voices.delete(voiceId);
  console.log(`Voice ${voiceId} deleted.`);
}

Voice Cloning Requirements

| Aspect        | Requirement                                      |
|---------------|--------------------------------------------------|
| Audio length  | Minimum 30 seconds total (1+ minute recommended) |
| Audio quality | Clean, no background noise, no music             |
| Format        | MP3, WAV, M4A, FLAC, OGG                         |
| Samples       | 1-25 files per voice                             |
| Languages     | Works across all supported languages             |
| Plan          | Available on all paid plans                      |
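These requirements can be partially enforced with a pre-flight check before uploading. The sketch below validates only file count and extension; checking total duration and noise level would require decoding the audio and is omitted. `validateCloneSamples` is an illustrative helper, not an SDK function:

```typescript
// Sketch: validate voice-cloning samples before calling voices.add.
// Checks only file count and extension; duration/quality checks would
// need an audio decoder and are omitted here.
const SUPPORTED_EXTENSIONS = new Set(["mp3", "wav", "m4a", "flac", "ogg"]);

function validateCloneSamples(paths: string[]): string[] {
  const errors: string[] = [];
  if (paths.length < 1 || paths.length > 25) {
    errors.push(`Expected 1-25 samples, got ${paths.length}`);
  }
  for (const p of paths) {
    const ext = p.split(".").pop()?.toLowerCase() ?? "";
    if (!SUPPORTED_EXTENSIONS.has(ext)) {
      errors.push(`Unsupported format: ${p}`);
    }
  }
  return errors; // an empty array means the batch looks OK
}
```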

Voice Settings Guide

| Setting          | Range   | Low Value Effect        | High Value Effect      |
|------------------|---------|-------------------------|------------------------|
| stability        | 0-1     | More expressive, varied | Consistent, monotone   |
| similarity_boost | 0-1     | More creative deviation | Strictly matches voice |
| style            | 0-1     | Neutral delivery        | Exaggerated emotion    |
| speed            | 0.7-1.2 | Slower speech           | Faster speech          |

Recommended starting points:

  • Narration: stability=0.5, similarity=0.75, style=0.0
  • Conversational: stability=0.4, similarity=0.6, style=0.3
  • Cloned voice: stability=0.5, similarity=0.85, style=0.0
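The starting points above can be captured as a small typed preset map so callers pick a profile by name. `VOICE_PRESETS` is an illustrative constant, not part of the SDK:

```typescript
// Recommended starting points from the guide above, as a typed preset map.
// VOICE_PRESETS is an illustrative name, not part of the ElevenLabs SDK.
interface VoiceSettingsPreset {
  stability: number;
  similarity_boost: number;
  style: number;
}

const VOICE_PRESETS: Record<string, VoiceSettingsPreset> = {
  narration:      { stability: 0.5, similarity_boost: 0.75, style: 0.0 },
  conversational: { stability: 0.4, similarity_boost: 0.6,  style: 0.3 },
  cloned:         { stability: 0.5, similarity_boost: 0.85, style: 0.0 },
};

// Usage (hypothetical): spread a preset into a convert() call.
// await client.textToSpeech.convert(voiceId, {
//   text,
//   model_id: "eleven_multilingual_v2",
//   voice_settings: VOICE_PRESETS.narration,
// });
```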

Error Handling

| Error | HTTP | Cause | Solution |
|---|---|---|---|
| voice_not_found | 404 | Invalid voice_id | List voices first: GET /v1/voices |
| text_too_long | 400 | Over 5,000 chars per request | Split text and use previous_text/next_text for prosody |
| quota_exceeded | 401 | Character limit reached | Check usage, upgrade plan |
| too_many_concurrent_requests | 429 | Exceeds plan concurrency | Queue requests; see concurrency limits |
| invalid_voice_sample | 400 | Bad audio file for cloning | Use clean audio, supported format, 30s+ |
| model_not_supported (WebSocket) | N/A | eleven_v3 not available for WS | Use eleven_flash_v2_5 or eleven_multilingual_v2 |
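For the text_too_long case, a simple chunker that splits on sentence boundaries keeps each request under the 5,000-character limit; the request's previous_text/next_text fields can then carry prosody context between chunks. A sketch, with `chunkText` as an illustrative helper name:

```typescript
// Sketch: split long text into chunks of at most maxLen characters at
// sentence boundaries, so each piece fits in one TTS request.
// chunkText is an illustrative helper, not an SDK function.
// (A single sentence longer than maxLen is emitted as its own
// oversized chunk; splitting mid-sentence is left out for clarity.)
function chunkText(text: string, maxLen = 5000): string[] {
  const sentences = text.match(/[^.!?]+[.!?]*\s*/g) ?? [text];
  const chunks: string[] = [];
  let current = "";
  for (const s of sentences) {
    if (current.length + s.length > maxLen && current) {
      chunks.push(current);
      current = "";
    }
    current += s;
  }
  if (current) chunks.push(current);
  return chunks;
}

// When converting chunks[i], pass chunks[i - 1] as previous_text and
// chunks[i + 1] as next_text so prosody stays continuous across requests.
```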

Next Steps

For speech-to-speech, sound effects, and audio isolation, see elevenlabs-core-workflow-b.
