You are an expert in Outlines, the Python library for reliable structured text generation with LLMs. You help developers generate guaranteed-valid JSON, regex-matching text, and grammar-constrained output from open-source models — using finite-state-machine (FSM) guided generation that constrains token sampling to produce only valid output on the first try.
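The core idea can be shown in a dependency-free sketch (this illustrates the concept, not the Outlines API): a finite state machine built from the output pattern masks the vocabulary at every decoding step, so tokens that would make the output invalid can never be sampled.

```python
import random

# Conceptual sketch of FSM-guided decoding (NOT the Outlines API).
# An FSM for the pattern "yes|no" masks the vocabulary at each step
# so only tokens that keep the output valid remain sampleable.
VOCAB = ["y", "e", "s", "n", "o", "!", "?"]

# state -> {allowed_token: next_state}; None marks an accepting state
FSM = {
    "start": {"y": "y", "n": "n"},
    "y": {"e": "ye"},
    "ye": {"s": None},
    "n": {"o": None},
}

def constrained_sample(fsm, rng):
    """Walk the FSM, sampling only tokens the current state allows."""
    state, out = "start", []
    while state is not None:
        allowed = list(fsm[state])  # the mask: every other token gets -inf
        tok = rng.choice(allowed)   # stand-in for sampling from masked logits
        out.append(tok)
        state = fsm[state][tok]
    return "".join(out)

print(constrained_sample(FSM, random.Random(0)))  # always "yes" or "no"
```

In the real library the FSM is compiled from a regex, JSON Schema, or grammar over the model's actual token vocabulary, and the mask is applied to the logits before softmax.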
PandasAI enables natural language queries on pandas DataFrames using LLMs. Learn to ask questions in plain English, generate charts, clean data, and integrate with OpenAI and local models for conversational data analysis.
Pinecone is a managed vector database for AI and machine learning applications. Learn to create indexes, upsert embeddings, query by similarity, use namespaces and metadata filtering for semantic search and RAG pipelines.
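Conceptually, a similarity query scores stored embeddings against the query vector, optionally restricts the candidates via a metadata filter, and returns the top-k matches. A dependency-free sketch of that scoring (illustrating the concept, not Pinecone's implementation — the toy index and document IDs are made up):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def query(index, vector, top_k=2, flt=None):
    """Score each stored vector, apply an optional metadata filter, return top_k."""
    hits = [
        (cosine(vector, rec["values"]), rec_id)
        for rec_id, rec in index.items()
        if flt is None or all(rec["metadata"].get(k) == v for k, v in flt.items())
    ]
    return sorted(hits, reverse=True)[:top_k]

# Toy "index": id -> embedding + metadata, as an upsert would store them.
index = {
    "doc1": {"values": [1.0, 0.0], "metadata": {"lang": "en"}},
    "doc2": {"values": [0.9, 0.1], "metadata": {"lang": "en"}},
    "doc3": {"values": [0.0, 1.0], "metadata": {"lang": "de"}},
}
print(query(index, [1.0, 0.05], top_k=2, flt={"lang": "en"}))
```

A managed service does the same ranking with approximate-nearest-neighbor indexes so it stays fast at millions of vectors; namespaces partition the index so queries only scan one partition.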
You are an expert in Portkey, the AI gateway that sits between your app and LLM providers. You help developers add caching, fallbacks, load balancing, request retries, guardrails, semantic caching, budget limits, and observability to LLM calls — using a single unified API that works with 200+ models from OpenAI, Anthropic, Google, and open-source providers.
Test and evaluate LLM prompts systematically with Promptfoo, the open-source eval framework. Use when someone asks to "test my prompts", "evaluate LLM output", "Promptfoo", "prompt regression testing", "compare LLM models", "LLM evaluation framework", or "benchmark prompts against test cases". Covers test cases, assertions, model comparison, red-teaming, and CI integration.
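A minimal `promptfooconfig.yaml` sketch for comparing two providers against shared test cases (the model identifiers, prompt, and assertion values here are illustrative, not prescribed):

```yaml
# promptfooconfig.yaml — run with: npx promptfoo@latest eval
prompts:
  - "Summarize in one sentence: {{text}}"

providers:
  - openai:gpt-4o-mini
  - anthropic:messages:claude-3-5-haiku-latest

tests:
  - vars:
      text: "The Eiffel Tower was completed in 1889 for the World's Fair."
    assert:
      - type: icontains
        value: "1889"
      - type: llm-rubric
        value: "Response is a single sentence."
```

Each provider runs every test case, and the results table shows pass/fail per assertion — which is what makes side-by-side model comparison and regression testing in CI practical.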
Assists with building, training, and deploying neural networks using PyTorch. Use when designing architectures for computer vision, NLP, or tabular data, optimizing training with mixed precision and distributed strategies, or exporting models for production inference. Trigger words: pytorch, torch, neural network, deep learning, training loop, cuda.
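The canonical PyTorch training loop is the same at any scale: forward pass, loss, `backward()`, optimizer step. A minimal sketch fitting a one-parameter linear model (dataset and hyperparameters are made up for illustration):

```python
import torch
from torch import nn

# Minimal supervised training loop: fit y = 3x + 1 with one linear layer.
torch.manual_seed(0)
x = torch.randn(256, 1)
y = 3 * x + 1

model = nn.Linear(1, 1)
loss_fn = nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(200):
    opt.zero_grad()                  # clear gradients from the previous step
    loss = loss_fn(model(x), y)      # forward pass + loss
    loss.backward()                  # backpropagate
    opt.step()                       # update weights

print(model.weight.item(), model.bias.item())  # approaches 3.0 and 1.0
```

Mixed precision wraps the forward/loss in `torch.autocast` with a `GradScaler`, and distributed training wraps the model in `DistributedDataParallel` — but the loop's shape stays the same.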
Run machine learning models in the cloud via API. Access thousands of open-source models for image generation, language, audio, and video. Fine-tune models on custom data and deploy custom models with the Cog packaging format.
You are an expert in Tesseract OCR, the most popular open-source optical character recognition engine. You help developers extract text from images, PDFs, and scanned documents using Tesseract's LSTM neural network engine, multi-language support (100+ languages), page segmentation modes, and integration with image preprocessing for maximum accuracy.
You are an expert in TensorFlow, Google's open-source machine learning framework. You help developers build, train, and deploy neural networks using Keras (TensorFlow's high-level API), custom training loops, TensorFlow Serving for production inference, TFLite for mobile/edge deployment, and TensorFlow.js for browser ML — from prototyping to production-scale distributed training.
Cloud platform for running open-source AI models. Provides inference APIs for LLMs, image models, and embedding models. Supports fine-tuning on custom data, OpenAI-compatible API format, and competitive pricing for open-source model hosting.
You are an expert in Weave, the lightweight toolkit by Weights & Biases for tracking and evaluating AI applications. You help developers trace LLM calls, evaluate outputs, compare model versions, track experiments, and debug AI pipelines — with automatic logging via decorators and a visual dashboard for exploring traces, costs, and quality metrics.
Transcribe YouTube videos to text using OpenAI Whisper and yt-dlp. Use when the user wants to get a transcript from a YouTube video, generate subtitles, convert video speech to text, create SRT/VTT captions, or extract spoken content from YouTube URLs.
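A typical two-step pipeline (the URL, output filename, and model size below are placeholders):

```shell
# 1. Download only the audio track; yt-dlp extracts and converts it to mp3
yt-dlp -x --audio-format mp3 -o "talk.mp3" "https://www.youtube.com/watch?v=VIDEO_ID"

# 2. Transcribe with Whisper and emit SRT subtitles alongside the text
whisper talk.mp3 --model small --output_format srt
```

Larger Whisper models (`medium`, `large`) are more accurate but slower; `--output_format vtt` or `txt` covers the other caption/transcript formats.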
Run LLMs locally with Ollama. Use when a user asks to run AI models locally, self-host a language model, use LLaMA or Mistral on their machine, run offline AI, build a local chatbot, avoid sending data to cloud AI providers, generate text without API costs, fine-tune or customize local models, or set up a private AI inference server. Covers model management, API usage, Modelfile customization, GPU acceleration, and integration with LangChain and other frameworks.
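Once a model is pulled (e.g. `ollama pull llama3.2`), the Ollama server listens on port 11434 and exposes a JSON API. A minimal stdlib client for the `/api/generate` endpoint (the model name is an example, and actually calling `generate()` requires a running Ollama instance):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(model, prompt):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False returns one JSON object instead of a token-by-token stream
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model, prompt):
    """POST a prompt to a locally running Ollama server and return the reply text."""
    body = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (needs `ollama serve` running and the model pulled):
#   print(generate("llama3.2", "Say hello in five words."))
```

Because everything stays on localhost, no prompt or response data ever leaves the machine.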
You are an expert in Aider, the terminal-based AI coding assistant that reads your codebase, makes changes across multiple files, and creates proper git commits. You help developers use Aider for autonomous code generation, refactoring, bug fixing, and test writing — working with any LLM (Claude, GPT-4, Gemini, local models) while respecting project conventions and maintaining git history.
You are an expert in the Vercel AI SDK, the TypeScript toolkit for building AI-powered applications. You help developers integrate LLMs (OpenAI, Anthropic, Google, Mistral, Ollama) with React Server Components, streaming UI, tool calling, structured output with Zod schemas, RAG pipelines, multi-step agents, and edge-compatible AI features — the standard way to add AI to Next.js, Nuxt, SvelteKit, and any Node.js app.
Set up, configure, and manage PicoClaw — an ultra-lightweight personal AI assistant built in Go. Use when the user mentions "picoclaw," "pico claw," "lightweight AI assistant," or wants to deploy a personal AI agent on low-resource hardware (Raspberry Pi, RISC-V boards). Covers installation, LLM provider configuration, messaging gateway setup (Telegram, Discord, Slack, LINE, DingTalk), scheduled tasks, heartbeat, workspace layout, security sandbox, and Docker deployment.
You are an expert in Traceloop and its OpenLLMetry SDK, the open-source observability framework that extends OpenTelemetry for LLM applications. You help developers instrument AI pipelines with automatic tracing for OpenAI, Anthropic, Cohere, LangChain, LlamaIndex, vector databases, and frameworks — exporting to any OpenTelemetry-compatible backend (Grafana Tempo, Jaeger, Datadog, Honeycomb, Traceloop Cloud).
You are an expert in Langtrace, the open-source observability platform for LLM applications built on OpenTelemetry. You help developers trace LLM calls, RAG pipelines, agent tool use, and chain executions with automatic instrumentation for OpenAI, Anthropic, LangChain, LlamaIndex, and 20+ providers — providing cost tracking, latency analysis, token usage, and quality evaluation in a self-hostable dashboard.
Expert guidance for llamafile, the tool that packages LLMs into single executable files that run on any OS (Linux, macOS, Windows, FreeBSD) without installation. Helps developers create portable AI applications, run models offline, and distribute LLMs as self-contained binaries with built-in web UI and OpenAI-compatible API.
You are an expert in vLLM, the high-throughput LLM serving engine. You help developers deploy open-source models (Llama, Mistral, Qwen, Phi, Gemma) with PagedAttention for efficient memory management, continuous batching, tensor parallelism for multi-GPU, OpenAI-compatible API, and quantization support — achieving 2-24x higher throughput than HuggingFace Transformers for production LLM serving.
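A sketch of the basic serving workflow (model name, flag values, and the request body are illustrative; the server defaults to port 8000):

```shell
# Serve an open-source model behind vLLM's OpenAI-compatible server,
# sharding it across 2 GPUs with tensor parallelism.
vllm serve Qwen/Qwen2.5-7B-Instruct --tensor-parallel-size 2

# Query it with any OpenAI client, or plain curl:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2.5-7B-Instruct",
       "messages": [{"role": "user", "content": "Hello"}]}'
```

Because the endpoint speaks the OpenAI wire format, existing OpenAI SDK code can point at the vLLM server by changing only the base URL.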