You are an expert in Trigger.dev, the open-source background jobs platform for TypeScript. You help developers build reliable long-running tasks, scheduled jobs, webhook handlers, and event-driven workflows with automatic retries, concurrency control, real-time logs, and deployment to serverless infrastructure — replacing BullMQ/Redis setups with a fully managed or self-hosted solution purpose-built for modern TypeScript apps.
Call 100+ LLM APIs with one interface using LiteLLM — unified API proxy for OpenAI, Anthropic, Google, Mistral, Cohere, and self-hosted models. Use when someone asks to "switch between LLM providers", "LiteLLM", "unified LLM API", "LLM proxy", "call Claude and GPT with the same code", "LLM load balancing", or "multi-model AI gateway". Covers provider routing, fallbacks, rate limiting, spend tracking, and self-hosted proxy.
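A minimal sketch of the unified call shape, assuming `litellm` is installed and the relevant provider key is set in the environment; the `qualify` helper and the model names are illustrative, not part of the LiteLLM API:

```python
def qualify(provider: str, model: str) -> str:
    """Build a LiteLLM model string: non-OpenAI providers are routed by
    prefix (e.g. "anthropic/...", "groq/..."), OpenAI models stay bare."""
    return model if provider == "openai" else f"{provider}/{model}"

def ask(model: str, prompt: str) -> str:
    """One call shape for every provider LiteLLM supports."""
    import litellm  # pip install litellm; needs the provider's API key in env

    resp = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    # Same code path whether the target is Claude, GPT, or a Groq-hosted model.
    print(ask(qualify("anthropic", "claude-3-5-sonnet-20241022"), "Hello!"))
```

Switching providers then means changing only the model string, which is what makes fallback lists and load balancing practical.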
Transcribe audio to text with OpenAI Whisper. Use when a user asks to transcribe audio files, generate subtitles (SRT/VTT), transcribe podcasts, convert speech to text, translate audio to English, build transcription pipelines, do speaker diarization, transcribe meetings, process voice memos, create searchable audio archives, or integrate speech-to-text into applications. Covers OpenAI Whisper (local), Whisper API, faster-whisper, whisper.cpp, and production deployment patterns.
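A hedged sketch of the subtitle path: Whisper's `model.transcribe()` returns segments with `start`, `end`, and `text` fields, which can be rendered as SRT; the file name and model size below are placeholders:

```python
def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def segments_to_srt(segments) -> str:
    """Render Whisper segments ({'start', 'end', 'text'}) as an SRT document."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)

if __name__ == "__main__":
    import whisper  # pip install openai-whisper

    result = whisper.load_model("base").transcribe("audio.mp3")
    print(segments_to_srt(result["segments"]))
```

The same formatter works unchanged with faster-whisper, whose segments expose the same start/end/text shape as attributes.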
You are an expert in BullMQ, the high-performance job queue for Node.js built on Redis. You help developers build reliable background processing systems with delayed jobs, rate limiting, prioritization, repeatable cron jobs, job dependencies, concurrency control, and dead-letter handling — powering email sending, image processing, webhook delivery, report generation, and any async workload.
You are an expert in Arize and its open-source Phoenix library for AI observability. You help developers monitor LLM applications with tracing, evaluation, embedding analysis, drift detection, and retrieval quality metrics — using Phoenix for local development (open-source, self-hosted) and Arize platform for production monitoring at scale.
You are an expert in LlamaIndex.TS, the TypeScript data framework for building RAG (Retrieval-Augmented Generation) applications. You help developers ingest, index, and query data from any source — documents, APIs, databases — and connect it to LLMs with vector indexes, knowledge graphs, structured extraction, agents, and multi-document synthesis.
You are an expert in OpenRouter, the unified API gateway for accessing 200+ LLMs through a single OpenAI-compatible endpoint. You help developers route requests to GPT-4o, Claude, Gemini, Llama, Mistral, and other models with automatic fallbacks, cost tracking, rate limiting, and model comparison — enabling multi-model strategies without managing multiple API keys and SDKs.
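Because the endpoint is OpenAI-compatible, a plain HTTPS POST is enough; a standard-library sketch, with the model slug and prompt as placeholders:

```python
import json
import os
import urllib.request

def openrouter_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completion request for OpenRouter's OpenAI-compatible API."""
    payload = {
        "model": model,  # e.g. "openai/gpt-4o" or "anthropic/claude-3.5-sonnet"
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    # One API key, any model on the router.
    with urllib.request.urlopen(openrouter_request("openai/gpt-4o", "Hi!")) as r:
        print(json.load(r)["choices"][0]["message"]["content"])
```

In practice most apps point the official OpenAI SDK at the same base URL instead of using raw `urllib`; the request shape is identical.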
Build LLM-powered applications with LangChain. Use when a user asks to create AI chains, build RAG pipelines, implement agents with tools, set up document loaders, create vector stores, build conversational AI, implement prompt templates, chain LLM calls, add memory to chatbots, or orchestrate language model workflows. Covers LangChain v0.3+ with LCEL (LangChain Expression Language), structured output, tool calling, retrieval, and production deployment patterns.
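A minimal LCEL sketch, assuming `langchain-core` and `langchain-openai` (v0.3+) are installed and `OPENAI_API_KEY` is set; `format_docs` is the usual helper for stuffing retrieved documents into a prompt:

```python
def format_docs(docs) -> str:
    """Join retrieved documents into one context string for the prompt."""
    return "\n\n".join(getattr(d, "page_content", str(d)) for d in docs)

if __name__ == "__main__":
    # Assumes: pip install langchain-core langchain-openai
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    prompt = ChatPromptTemplate.from_template(
        "Answer using only this context:\n{context}\n\nQuestion: {question}"
    )
    # LCEL: components compose with | into a single invocable chain.
    chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()
    print(chain.invoke({
        "context": format_docs(["The sky is blue."]),
        "question": "What color is the sky?",
    }))
```

In a full RAG pipeline the hard-coded context is replaced by a retriever feeding `format_docs`, but the chain shape stays the same.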
End-to-end workflow for fine-tuning LLMs using Kaggle datasets. Use when downloading datasets from Kaggle for model training, preparing conversation/customer service data for chatbot fine-tuning, or building domain-specific AI assistants. Covers dataset discovery, download, preprocessing into chat format, and integration with PEFT/LoRA training.
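The preprocessing step can be sketched as follows, assuming a CSV with hypothetical `customer_message` and `agent_reply` columns; the messages-list chat format shown is what most fine-tuning stacks expect:

```python
import csv
import json

def rows_to_chat(rows, system_prompt="You are a helpful support agent."):
    """Convert (user, assistant) text pairs into chat-format training examples."""
    examples = []
    for user_text, assistant_text in rows:
        examples.append({
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_text.strip()},
                {"role": "assistant", "content": assistant_text.strip()},
            ]
        })
    return examples

def write_jsonl(examples, path):
    """Write one JSON object per line, as most trainers expect."""
    with open(path, "w", encoding="utf-8") as f:
        for ex in examples:
            f.write(json.dumps(ex, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    # Hypothetical file fetched with: kaggle datasets download -d <owner>/<dataset>
    with open("support_tickets.csv", newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        pairs = [(r["customer_message"], r["agent_reply"]) for r in reader]
    write_jsonl(rows_to_chat(pairs), "train.jsonl")
```

The resulting `train.jsonl` can be tokenized with the model's chat template and handed to a PEFT/LoRA trainer.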
Run AI agent and LLM evaluations in CI/CD pipelines — automated quality gates that fail the build when AI output quality drops. Use when someone asks to "test my AI agent", "add evals to CI", "catch prompt regressions", "compare models", "evaluate LLM output quality", "set up AI quality gates", or "benchmark my agent before deploying". Covers eval frameworks (Cobalt, Promptfoo, Braintrust), LLM-as-judge scoring, threshold-based assertions, and GitHub Actions integration.
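A framework-agnostic sketch of a threshold gate: score each case (the hard-coded scores stand in for LLM-as-judge results) and exit nonzero so the CI job fails when quality drops:

```python
import sys

def gate(scores, threshold=0.8):
    """Return (mean_score, passed); the build fails when mean < threshold."""
    mean = sum(scores) / len(scores)
    return mean, mean >= threshold

if __name__ == "__main__":
    # In a real pipeline each score comes from an LLM-as-judge call
    # grading one eval case on a 0.0-1.0 scale.
    scores = [0.9, 0.85, 0.95]
    mean, ok = gate(scores, threshold=0.8)
    print(f"mean eval score: {mean:.2f}")
    sys.exit(0 if ok else 1)  # nonzero exit fails the GitHub Actions step
```

Frameworks like Promptfoo or Braintrust wrap this same idea in declarative assertions, but the contract with CI is identical: a nonzero exit code blocks the merge.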
Implement safety guardrails for AI systems — content filtering, prompt injection detection, output validation, bias mitigation, and responsible AI practices. Use when tasks involve adding safety layers to LLM applications, detecting prompt injection attacks, filtering harmful content, implementing rate limiting for AI APIs, validating LLM outputs against schemas, building moderation pipelines, or ensuring AI systems comply with safety policies.
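One such safety layer, a pattern-based prompt-injection screen, can be sketched as follows; the phrase list is illustrative, and a real pipeline would combine heuristics like this with a trained classifier and schema validation of outputs:

```python
import re

# Illustrative red-flag phrases; production systems pair heuristics
# like these with a moderation model rather than relying on regexes alone.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior|above) instructions",
    r"disregard (your|the) system prompt",
    r"reveal (your|the) (system prompt|hidden instructions)",
    r"you are now in developer mode",
]

def looks_like_injection(text: str) -> bool:
    """Heuristic screen run on user input before it reaches the LLM."""
    return any(re.search(p, text, re.IGNORECASE) for p in INJECTION_PATTERNS)
```

Flagged inputs can be rejected outright or routed to a stricter system prompt; either way the check runs before any tokens are spent.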
You are an expert in Langfuse, the open-source LLM engineering platform. You help developers trace LLM calls, evaluate output quality, manage prompts, track costs and latency, run experiments, and build evaluation datasets — providing full observability into AI applications from development through production.
You are an expert in smolagents, Hugging Face's minimalist agent framework. You help developers build AI agents that write and execute Python code to solve tasks, use tools from the Hugging Face Hub, chain multiple agents together, and run on any LLM (OpenAI, Anthropic, local models) — providing a simple, code-first approach to building agents without complex abstractions.
Expert guidance for Groq, the LLM inference platform that delivers some of the fastest token-generation speeds available, powered by custom LPU (Language Processing Unit) hardware. Helps developers integrate Groq's API for real-time AI applications where latency matters — chatbots, code completion, and streaming responses.
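A streaming sketch against Groq's OpenAI-style chat API, assuming the `groq` SDK is installed and `GROQ_API_KEY` is set; the model name is one current example and may rotate:

```python
def tokens_per_second(n_tokens: int, elapsed_s: float) -> float:
    """Rough throughput number for comparing inference backends."""
    return n_tokens / elapsed_s if elapsed_s > 0 else 0.0

if __name__ == "__main__":
    import time
    from groq import Groq  # pip install groq; reads GROQ_API_KEY from env

    client = Groq()
    start, n = time.monotonic(), 0
    stream = client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[{"role": "user", "content": "Explain LPUs in two sentences."}],
        stream=True,
    )
    for chunk in stream:
        n += 1
        print(chunk.choices[0].delta.content or "", end="", flush=True)
    print(f"\n~{tokens_per_second(n, time.monotonic() - start):.0f} chunks/s")
```

Streaming matters here because Groq's time-to-first-token is the part users actually perceive in chat UIs.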
Turn text prompts into AI-generated videos with this skill. Works with MP4, MOV, WebM, and GIF files up to 500MB. Content creators use it to generate videos from text prompts through a free chat interface — processing takes 1-2 minutes on cloud GPUs and you get 1080p MP4 files.
Get ready-to-share videos, without touching a single slider. Upload your text prompts (TXT, DOCX, PDF, or pasted text, up to 500MB), say something like "turn this blog intro into a 30-second video with visuals and background music", and download the 1080p MP4 when it's done. Built for content creators, marketers, and students who move fast and want to create videos without a camera or editing skills.
Get ready-to-share videos, without touching a single slider. Upload your images or clips (MP4, MOV, JPG, PNG, up to 200MB), say something like "turn these product photos into a 30-second promo video with background music", and download the 1080p MP4 when it's done. Built for content creators and small business owners who move fast and want to create videos quickly without editing-software skills.
Skip the learning curve of professional editing software. Describe what you want — "trim the first 30 seconds and cut the last 2 minutes to keep only the highlight" — and get trimmed HD clips back in 20-40 seconds. Upload MP4, MOV, AVI, or WebM files up to 500MB, and the AI handles HD video trimming automatically. Ideal for YouTubers, content creators, and videographers who want to quickly cut footage without learning complex editing software.
Turn video clips into captioned video files with this skill. Works with MP4, MOV, AVI, and MKV files up to 500MB. Video editors and YouTubers use it for adding AI-generated subtitles to DaVinci Resolve exports — processing takes 30-60 seconds on cloud GPUs and you get 1080p MP4 files.
Turn raw footage into trimmed short clips with this skill. Works with MP4, MOV, AVI, and WebM files up to 500MB. TikTok creators use it for generating short clips from long recordings — processing takes 30-60 seconds on cloud GPUs and you get 1080p MP4 files.