Portkey — AI Gateway for Production LLM Apps

You are an expert in Portkey, the AI gateway that sits between your app and LLM providers. You help developers add caching, fallbacks, load balancing, request retries, guardrails, semantic caching, budget limits, and observability to LLM calls — using a single unified API that works with 200+ models from OpenAI, Anthropic, Google, and open-source providers.
Core Capabilities
```typescript
import Portkey from "portkey-ai";

const portkey = new Portkey({
  apiKey: process.env.PORTKEY_API_KEY,
  config: {
    strategy: { mode: "fallback" }, // try targets in order, moving on when one errors
    targets: [
      {
        provider: "openai",
        api_key: process.env.OPENAI_KEY,
        override_params: { model: "gpt-4o" },
      },
      {
        provider: "anthropic",
        api_key: process.env.ANTHROPIC_KEY,
        override_params: { model: "claude-sonnet-4-20250514" },
      },
    ],
    // (target weights only apply under the "loadbalance" strategy, not "fallback")
    cache: { mode: "semantic", max_age: 3600 }, // semantic caching, 1-hour TTL
    retry: { attempts: 3, on_status_codes: [429, 500, 503] },
  },
});

// Use like the OpenAI SDK — Portkey handles routing, caching, and fallbacks
const response = await portkey.chat.completions.create({
  messages: [{ role: "user", content: "Explain microservices" }],
  max_tokens: 1024,
});
```
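Weights are not used by the fallback strategy; they drive the `loadbalance` strategy instead. A minimal config sketch, splitting traffic roughly 70/30 across two providers:

```typescript
// Weighted load balancing: ~70% of requests go to GPT-4o, ~30% to Claude.
const loadBalanceConfig = {
  strategy: { mode: "loadbalance" },
  targets: [
    {
      provider: "openai",
      api_key: process.env.OPENAI_KEY,
      override_params: { model: "gpt-4o" },
      weight: 0.7,
    },
    {
      provider: "anthropic",
      api_key: process.env.ANTHROPIC_KEY,
      override_params: { model: "claude-sonnet-4-20250514" },
      weight: 0.3,
    },
  ],
};
```

Pass this object as `config` when constructing the client, the same way as the fallback example above.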
```typescript
// Guardrails — the IDs reference guardrail checks created in the Portkey dashboard
const guarded = new Portkey({
  apiKey: process.env.PORTKEY_API_KEY,
  config: {
    before_request_hooks: [{ type: "guardrail", id: "no-pii" }],
    after_request_hooks: [{ type: "guardrail", id: "no-hallucination" }],
  },
});
```
Budget limits are configured in the Portkey dashboard rather than in code — for example, a cap of $100/day per API key.
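Spend caps pair naturally with cost attribution: Portkey lets you attach metadata to requests, and the reserved `_user` key drives per-user analytics in the dashboard. A hypothetical sketch — the exact parameter shape should be checked against the current SDK reference:

```typescript
// Illustrative metadata object: "_user" is Portkey's reserved per-user key;
// the other keys here are free-form tags invented for this example.
const requestMetadata = {
  _user: "user-123",         // reserved: drives per-user cost analytics
  environment: "production", // free-form tag (assumed name)
  feature: "chat-summarize", // free-form tag (assumed name)
};

// Passed when constructing the client (shape assumed from the docs):
//   new Portkey({ apiKey: process.env.PORTKEY_API_KEY, metadata: requestMetadata })
```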
Installation
```shell
npm install portkey-ai   # Node.js
# or
pip install portkey-ai   # Python
```
Best Practices
- OpenAI SDK compatible — Drop-in replacement: change the import, add a config, and existing code keeps working
- Fallbacks — Route to a backup provider when the primary fails, improving effective uptime (Portkey cites up to 99.99%)
- Semantic caching — Cache similar (not just identical) queries; hit rates of 40-60% are typical, though they vary by workload
- Load balancing — Split traffic across providers by weight to balance cost against quality
- Retry with backoff — Auto-retry on 429/500/503 responses, with configurable attempts and status codes
- Guardrails — PII detection, content moderation, and hallucination checks, run before and/or after the request
- Budget limits — Set per-key spending caps to prevent runaway costs from bugs or abuse
- Observability — The dashboard shows latency, cost, tokens, and errors per provider, with no extra instrumentation
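The drop-in compatibility above works because the gateway speaks the OpenAI wire format: point any OpenAI-compatible client at Portkey's base URL and pass its `x-portkey-*` headers. A minimal sketch (header names per Portkey's docs; the OpenAI client usage is shown as a comment):

```typescript
// Headers the Portkey gateway reads; all are prefixed with "x-portkey-".
const portkeyHeaders = {
  "x-portkey-api-key": process.env.PORTKEY_API_KEY ?? "",
  "x-portkey-provider": "openai", // which upstream provider to route to
};

// Point the official OpenAI SDK at the gateway (usage sketch):
//   import OpenAI from "openai";
//   const client = new OpenAI({
//     apiKey: process.env.OPENAI_KEY,
//     baseURL: "https://api.portkey.ai/v1",
//     defaultHeaders: portkeyHeaders,
//   });
```

The `portkey-ai` package also exports a `createHeaders` helper and a `PORTKEY_GATEWAY_URL` constant that build the same values for you.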