> assemblyai-hello-world
Create a minimal working AssemblyAI transcription example. Use when starting a new AssemblyAI integration, testing your setup, or learning basic transcription patterns. Trigger with phrases like "assemblyai hello world", "assemblyai example", "assemblyai quick start", "simple assemblyai transcription".
# AssemblyAI Hello World
## Overview
Minimal working examples demonstrating AssemblyAI's three core capabilities: async transcription, audio intelligence features, and LeMUR (LLM-powered analysis).
## Prerequisites

- Completed `assemblyai-install-auth` setup
- Valid API key configured in `ASSEMBLYAI_API_KEY`
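Before running any example, it helps to fail fast when the key is missing. A minimal guard might look like this (a sketch; `requireApiKey` is not part of the SDK):

```typescript
// Fail fast if the API key is missing (a minimal guard, not an SDK function).
function requireApiKey(env: Record<string, string | undefined> = process.env): string {
  const key = env.ASSEMBLYAI_API_KEY;
  if (!key || key.trim() === '') {
    throw new Error('ASSEMBLYAI_API_KEY is not set; configure it before creating the client.');
  }
  return key;
}
```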
## Instructions

### Step 1: Basic Transcription (Remote URL)
```typescript
import { AssemblyAI } from 'assemblyai';

const client = new AssemblyAI({
  apiKey: process.env.ASSEMBLYAI_API_KEY!,
});

async function transcribeUrl() {
  const transcript = await client.transcripts.transcribe({
    audio: 'https://storage.googleapis.com/aai-web-samples/5_common_sports_702.wav',
  });

  if (transcript.status === 'error') {
    throw new Error(`Transcription failed: ${transcript.error}`);
  }

  console.log('Transcript:', transcript.text);
  console.log('Duration:', transcript.audio_duration, 'seconds');
  console.log('Word count:', transcript.words?.length);
}

transcribeUrl().catch(console.error);
```
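`transcribe()` submits the job and polls until the status settles. If you'd rather separate submission from polling (for example, to queue many files and gather results later), the SDK also exposes `transcripts.submit()` and `transcripts.waitUntilReady()`. The polling pattern itself is a simple loop; here is a generic sketch (illustrative, not the SDK's internals):

```typescript
// Poll an async source until a condition holds, with a fixed interval.
// fetchFn and isDone are supplied by the caller; defaults are illustrative.
async function pollUntil<T>(
  fetchFn: () => Promise<T>,
  isDone: (value: T) => boolean,
  intervalMs = 3000,
  maxAttempts = 100,
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const value = await fetchFn();
    if (isDone(value)) return value;
    // Wait before the next status check
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Polling timed out');
}
```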
### Step 2: Transcribe a Local File
```typescript
async function transcribeLocal() {
  // The SDK handles upload automatically when you pass a local path
  const transcript = await client.transcripts.transcribe({
    audio: './recording.mp3',
  });

  console.log('Transcript:', transcript.text);

  // Access word-level timestamps
  for (const word of transcript.words ?? []) {
    console.log(`[${word.start}ms - ${word.end}ms] ${word.text} (${word.confidence})`);
  }
}
```
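The millisecond offsets in `transcript.words` map directly onto caption timestamps, so rendering them in SRT style is just arithmetic. A small pure helper (a sketch; for production subtitle export the SDK also offers a dedicated subtitles endpoint, which is worth checking before rolling your own):

```typescript
// Convert a millisecond offset into an SRT-style timestamp (HH:MM:SS,mmm).
// Works directly on the start/end values from transcript.words.
function msToSrtTimestamp(ms: number): string {
  const hours = Math.floor(ms / 3_600_000);
  const minutes = Math.floor((ms % 3_600_000) / 60_000);
  const seconds = Math.floor((ms % 60_000) / 1000);
  const millis = ms % 1000;
  const pad = (n: number, width = 2) => String(n).padStart(width, '0');
  return `${pad(hours)}:${pad(minutes)}:${pad(seconds)},${pad(millis, 3)}`;
}
```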
### Step 3: Enable Audio Intelligence Features
```typescript
async function transcribeWithIntelligence() {
  const transcript = await client.transcripts.transcribe({
    audio: 'https://storage.googleapis.com/aai-web-samples/5_common_sports_702.wav',
    speaker_labels: true,      // Who said what
    auto_highlights: true,     // Key phrase extraction
    sentiment_analysis: true,  // Sentiment per sentence
    entity_detection: true,    // Named entities (people, orgs, locations)
    summarization: true,       // Auto-summary
    summary_model: 'informative',
    summary_type: 'bullets',
  });

  // Speaker diarization
  for (const utterance of transcript.utterances ?? []) {
    console.log(`Speaker ${utterance.speaker}: ${utterance.text}`);
  }

  // Key phrases
  for (const result of transcript.auto_highlights_result?.results ?? []) {
    console.log(`Key phrase: "${result.text}" (mentioned ${result.count} times)`);
  }

  // Sentiment analysis
  for (const result of transcript.sentiment_analysis_results ?? []) {
    console.log(`${result.sentiment}: "${result.text}"`);
  }

  // Summary
  console.log('Summary:', transcript.summary);
}
```
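Diarized utterances are easy to aggregate once you have them. For instance, per-speaker talk time is a one-pass reduction over `transcript.utterances` (the `Utterance` interface below is a simplified stand-in for the SDK's type, kept to the fields this helper needs):

```typescript
// Simplified shape of an entry in transcript.utterances (times in ms).
interface Utterance {
  speaker: string;
  start: number;
  end: number;
  text: string;
}

// Sum each speaker's total talk time in milliseconds.
function talkTimeBySpeaker(utterances: Utterance[]): Record<string, number> {
  const totals: Record<string, number> = {};
  for (const u of utterances) {
    totals[u.speaker] = (totals[u.speaker] ?? 0) + (u.end - u.start);
  }
  return totals;
}
```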
### Step 4: Ask Questions About Your Audio with LeMUR
```typescript
async function lemurDemo() {
  // First, transcribe
  const transcript = await client.transcripts.transcribe({
    audio: 'https://storage.googleapis.com/aai-web-samples/5_common_sports_702.wav',
  });

  // Then use LeMUR to analyze
  const { response } = await client.lemur.task({
    transcript_ids: [transcript.id],
    prompt: 'Summarize the key topics discussed and list any action items mentioned.',
  });

  console.log('LeMUR response:', response);
}
```
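Because `lemur.task` takes a free-form prompt, multi-question analyses often reduce to prompt construction. A small helper for numbering questions (`buildQuestionPrompt` is hypothetical, not an SDK function; the SDK also offers a dedicated question-and-answer LeMUR endpoint you may prefer for structured Q&A):

```typescript
// Build a numbered-question prompt for a LeMUR task (hypothetical helper).
function buildQuestionPrompt(questions: string[]): string {
  const numbered = questions.map((q, i) => `${i + 1}. ${q}`).join('\n');
  return `Answer each question based only on the transcript:\n${numbered}`;
}
```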
## Output
- Working transcription from a remote URL or local file
- Word-level timestamps with confidence scores
- Speaker-labeled utterances (diarization)
- Key phrases, sentiment analysis, entity detection
- LeMUR-powered summarization and Q&A
## Error Handling

| Error | Cause | Solution |
|---|---|---|
| `transcript.status === 'error'` | Bad audio URL/format | Verify the URL is publicly accessible and in a supported format |
| Authentication error | Invalid API key | Check the `ASSEMBLYAI_API_KEY` environment variable |
| File not found | Wrong local path | Verify the file exists at the specified path |
| Unsupported audio format | Incompatible format | Use MP3, WAV, M4A, FLAC, OGG, or WebM |
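Transient failures such as network hiccups or rate limits are usually worth retrying with backoff rather than surfacing immediately. A generic wrapper (a sketch; the attempt count and delays are illustrative, not recommended values):

```typescript
// Retry an async call with exponential backoff on failure.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off only if another attempt remains: 500ms, 1s, 2s, ...
      if (attempt < maxAttempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

Usage with the client from Step 1 would look like `await withRetry(() => client.transcripts.transcribe({ audio: url }))`.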
## Next Steps

Proceed to `assemblyai-local-dev-loop` for development workflow setup.