> openrouter
You are an expert in OpenRouter, the unified API gateway for accessing 200+ LLMs through a single OpenAI-compatible endpoint. You help developers route requests to GPT-4o, Claude, Gemini, Llama, Mistral, and other models with automatic fallbacks, cost tracking, rate limiting, and model comparison — enabling multi-model strategies without managing multiple API keys and SDKs.
OpenRouter — Unified LLM API Gateway
Core Capabilities
OpenAI-Compatible API
```js
import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
  defaultHeaders: {
    "HTTP-Referer": "https://myapp.com", // Optional: attributes traffic for openrouter.ai rankings
    "X-Title": "My App", // Optional: shows in the OpenRouter dashboard
  },
});

// Use any model with the OpenAI SDK
const response = await openai.chat.completions.create({
  model: "anthropic/claude-sonnet-4-20250514", // Or: "openai/gpt-4o", "google/gemini-2.0-flash"
  messages: [{ role: "user", content: "Hello!" }],
});

// Streaming
const stream = await openai.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Write a poem" }],
  stream: true,
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}

// Auto-routing: let OpenRouter pick the best model
const autoResponse = await openai.chat.completions.create({
  model: "openrouter/auto", // Routes to the best model for the task
  messages: [{ role: "user", content: "Complex reasoning task..." }],
});

// Cost-optimized routing: `route` and `models` are OpenRouter extensions
// (not part of the OpenAI spec) — the list is tried in order if earlier entries fail
const cheapResponse = await openai.chat.completions.create({
  model: "openrouter/auto",
  route: "fallback", // Try cheapest first, fall back to stronger models
  models: ["openai/gpt-4o-mini", "anthropic/claude-sonnet-4-20250514", "openai/gpt-4o"],
  messages: [{ role: "user", content: "Simple task" }],
});
```
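The fallback chain above can also be built programmatically from a local price table, so the cheapest candidate is always tried first. A minimal sketch — the per-million-token prices below are illustrative placeholders, not live OpenRouter pricing:

```javascript
// Illustrative prompt prices in USD per 1M tokens — placeholders, not live pricing.
const pricePerMTok = {
  "openai/gpt-4o-mini": 0.15,
  "anthropic/claude-sonnet-4-20250514": 3.0,
  "openai/gpt-4o": 2.5,
};

// Build an OpenRouter request body whose fallback chain is ordered cheapest-first.
function buildFallbackBody(messages, candidates) {
  const ordered = [...candidates].sort(
    (a, b) => pricePerMTok[a] - pricePerMTok[b],
  );
  return {
    model: ordered[0], // primary = cheapest candidate
    models: ordered,   // OpenRouter fallback chain, tried in order
    messages,
  };
}

const body = buildFallbackBody(
  [{ role: "user", content: "Simple task" }],
  ["openai/gpt-4o", "openai/gpt-4o-mini", "anthropic/claude-sonnet-4-20250514"],
);
// body.model → "openai/gpt-4o-mini"
```

The body can then be passed straight to `openai.chat.completions.create(body)`; only the ordering logic is local.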
Model Comparison
// Compare models side-by-side
const models = [
"openai/gpt-4o",
"anthropic/claude-sonnet-4-20250514",
"google/gemini-2.0-flash",
"meta-llama/llama-3.1-70b-instruct",
];
const results = await Promise.all(
models.map(async (model) => {
const start = Date.now();
const response = await openai.chat.completions.create({
model,
messages: [{ role: "user", content: testPrompt }],
max_tokens: 500,
});
return {
model,
latency: Date.now() - start,
tokens: response.usage,
cost: response.usage?.total_tokens, // OpenRouter returns cost info
output: response.choices[0].message.content,
};
}),
);
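Since the API reports token counts rather than dollars, an approximate per-run cost can be derived locally by applying a price table to the `usage` object. A sketch — the prices are assumptions for illustration; check openrouter.ai/models for real values:

```javascript
// Illustrative USD prices per 1M tokens — assumptions, not live OpenRouter pricing.
const pricing = {
  "openai/gpt-4o": { prompt: 2.5, completion: 10.0 },
  "openai/gpt-4o-mini": { prompt: 0.15, completion: 0.6 },
};

// Estimate the dollar cost of one completion from its usage object.
function estimateCost(model, usage) {
  const p = pricing[model];
  if (!p || !usage) return null;
  return (
    (usage.prompt_tokens * p.prompt + usage.completion_tokens * p.completion) /
    1_000_000
  );
}

const cost = estimateCost("openai/gpt-4o", {
  prompt_tokens: 1000,
  completion_tokens: 500,
});
// 1000 * 2.5/1e6 + 500 * 10/1e6 = 0.0075
```

Attaching `cost: estimateCost(model, response.usage)` to each comparison result makes the latency/quality/price trade-off visible in one table.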
With Vercel AI SDK
```js
import { createOpenRouter } from "@openrouter/ai-sdk-provider";
import { generateText } from "ai";

const openrouter = createOpenRouter({ apiKey: process.env.OPENROUTER_API_KEY });

const { text } = await generateText({
  model: openrouter("anthropic/claude-sonnet-4-20250514"),
  prompt: "Explain quantum computing",
});
```
Installation
```sh
npm install openai                        # Use the OpenAI SDK
npm install @openrouter/ai-sdk-provider   # Or: for the Vercel AI SDK
```
Best Practices
- One API, all models — Single API key for GPT-4o, Claude, Gemini, Llama, Mistral; no vendor lock-in
- Fallback routing — Configure model fallbacks; if primary is down or overloaded, auto-switch to backup
- Cost tracking — OpenRouter dashboard shows per-model costs; optimize spend by routing simple tasks to cheap models
- OpenAI SDK compatible — Just change `baseURL` and `apiKey`; all OpenAI SDK features work (tools, streaming, JSON mode)
- Free models — Some models available for free (rate-limited); great for prototyping
- Auto routing — Use `openrouter/auto` to let the system pick the best model based on task complexity
- Provider preferences — Set model priorities and fallbacks; optimize for cost, speed, or quality
- Usage limits — Set per-key spending limits in dashboard; prevent runaway costs in production
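Provider preferences are expressed through OpenRouter's request-level `provider` field. A hedged sketch of a helper that attaches them to a request body — the field names (`order`, `allow_fallbacks`, `sort`) follow OpenRouter's provider-routing options as documented at the time of writing, so verify them against the current API reference:

```javascript
// Attach OpenRouter provider preferences to a request body.
// Field names are assumptions based on OpenRouter's provider-routing docs.
function withProviderPrefs(body, { order, allowFallbacks = true, sort } = {}) {
  return {
    ...body,
    provider: {
      ...(order ? { order } : {}),      // preferred providers, tried in order
      allow_fallbacks: allowFallbacks,  // permit other providers if preferred ones fail
      ...(sort ? { sort } : {}),        // e.g. "price" to prefer the cheapest provider
    },
  };
}

const req = withProviderPrefs(
  {
    model: "meta-llama/llama-3.1-70b-instruct",
    messages: [{ role: "user", content: "Hi" }],
  },
  { order: ["Together", "DeepInfra"], sort: "price" },
);
```

The resulting `req` is a plain chat-completions body that can be sent with the OpenAI SDK or `fetch`; the `provider` object is ignored by non-OpenRouter backends.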