> anth-migration-deep-dive
Migrate to Claude API from OpenAI, Gemini, or other LLM providers. Use when switching from GPT-4 to Claude, migrating from Text Completions, or building a multi-provider abstraction layer. Trigger with phrases like "migrate to claude", "openai to anthropic", "switch from gpt to claude", "multi-provider llm".
Anthropic Migration Deep Dive
Overview
Migration strategies for switching to Claude from OpenAI, Google, or other LLM providers, including API mapping, prompt translation, and multi-provider abstraction.
OpenAI to Anthropic API Mapping
| OpenAI | Anthropic | Notes |
|---|---|---|
| `client.chat.completions.create()` | `client.messages.create()` | Different response shape |
| `model: "gpt-4"` | `model: "claude-sonnet-4-20250514"` | Different model IDs |
| `messages: [{role, content}]` | `messages: [{role, content}]` | Same format |
| `functions` / `tools` | `tools` | Similar, but schema key names differ |
| `function_call` | `tool_choice` | Different naming |
| `response.choices[0].message.content` | `response.content[0].text` | Different access path |
| `stream: true` → yields chunks | `stream: true` → SSE events | Different event format |
| System message in `messages[]` | `system` parameter (separate) | Claude separates the system prompt |
| `n` (multiple completions) | Not supported | Use multiple requests |
| `logprobs` | Not supported | N/A |
Side-by-Side Code Comparison
```python
# === OpenAI ===
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are helpful."},
        {"role": "user", "content": "Hello"},
    ],
    max_tokens=1024,
    temperature=0.7,
)
text = response.choices[0].message.content
```

```python
# === Anthropic ===
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    system="You are helpful.",  # System prompt is separate
    messages=[
        {"role": "user", "content": "Hello"},
    ],
    max_tokens=1024,  # Required (not optional)
    temperature=0.7,
)
text = response.content[0].text
```
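The streaming row in the mapping table hides one of the larger shape differences: OpenAI yields chunks whose text lives at `choices[0].delta.content`, while Anthropic's raw SSE events carry text in `content_block_delta` events. A minimal sketch of normalizing both, using stand-in objects shaped like each SDK's payloads rather than live API calls:

```python
from types import SimpleNamespace

def openai_stream_text(chunks):
    """Yield text from OpenAI chat-completion stream chunks."""
    for chunk in chunks:
        delta = chunk.choices[0].delta
        if delta.content:  # the final chunk's delta.content is None
            yield delta.content

def anthropic_stream_text(events):
    """Yield text from raw Anthropic SSE events."""
    for event in events:
        if event.type == "content_block_delta" and event.delta.type == "text_delta":
            yield event.delta.text

# Stand-in objects mimicking each SDK's stream payloads (no network calls).
fake_openai = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content="Hel"))]),
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content="lo"))]),
]
fake_anthropic = [
    SimpleNamespace(type="message_start", delta=None),
    SimpleNamespace(type="content_block_delta",
                    delta=SimpleNamespace(type="text_delta", text="Hello")),
]

print("".join(openai_stream_text(fake_openai)))        # Hello
print("".join(anthropic_stream_text(fake_anthropic)))  # Hello
```

In practice the Anthropic SDK's `client.messages.stream(...)` context manager exposes a `text_stream` iterator that already yields plain text, so most migrations never need to touch raw events.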
Tool Use Migration
```python
# OpenAI tools format
openai_tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
    },
}]

# Anthropic tools format — flatter structure
anthropic_tools = [{
    "name": "get_weather",
    "description": "Get weather for a city",  # Strongly recommended by Anthropic
    "input_schema": {"type": "object", "properties": {"city": {"type": "string"}}},
}]
```
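Because the two schemas map field-for-field, translating tool definitions can be mechanical. A minimal sketch (the helper name is ours, not part of either SDK):

```python
def openai_tool_to_anthropic(tool: dict) -> dict:
    """Convert one OpenAI function-tool definition to Anthropic's flatter shape."""
    fn = tool["function"]
    return {
        "name": fn["name"],
        # Anthropic strongly recommends a description; fall back to the name.
        "description": fn.get("description", fn["name"]),
        # OpenAI's JSON Schema "parameters" maps directly to "input_schema".
        "input_schema": fn.get("parameters", {"type": "object", "properties": {}}),
    }

openai_tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
    },
}]

anthropic_tools = [openai_tool_to_anthropic(t) for t in openai_tools]
print(anthropic_tools[0]["name"])  # get_weather
```

Note the converter only handles definitions; tool *results* also differ (OpenAI uses a `tool` role message, Anthropic uses a `tool_result` content block in a `user` message) and must be migrated separately.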
Multi-Provider Abstraction
```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str, system: str = "", **kwargs) -> str: ...

class AnthropicProvider(LLMProvider):
    def __init__(self):
        import anthropic
        self.client = anthropic.Anthropic()

    def complete(self, prompt: str, system: str = "", **kwargs) -> str:
        msg = self.client.messages.create(
            model=kwargs.get("model", "claude-sonnet-4-20250514"),
            max_tokens=kwargs.get("max_tokens", 1024),
            system=system,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text

class OpenAIProvider(LLMProvider):
    def __init__(self):
        from openai import OpenAI
        self.client = OpenAI()

    def complete(self, prompt: str, system: str = "", **kwargs) -> str:
        messages = []
        if system:
            messages.append({"role": "system", "content": system})
        messages.append({"role": "user", "content": prompt})
        resp = self.client.chat.completions.create(
            model=kwargs.get("model", "gpt-4"),
            messages=messages,
            max_tokens=kwargs.get("max_tokens", 1024),
        )
        return resp.choices[0].message.content
```
Migration Checklist
- Map model names (GPT-4 → Claude Sonnet, GPT-3.5 → Claude Haiku)
- Move system prompts from `messages[]` to the `system` parameter
- Update response access path (`.choices[0].message.content` → `.content[0].text`)
- Make `max_tokens` explicit (required in Anthropic, optional in OpenAI)
- Update tool definitions to Anthropic format
- Test prompt behavior (Claude may respond differently to the same prompts)
- Update error handling for Anthropic error types
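On the last item: the Anthropic SDK exposes error classes such as `anthropic.RateLimitError` and `anthropic.APIConnectionError` that roughly mirror OpenAI's. One pattern that ports cleanly is a generic backoff wrapper parameterized by exception types; a sketch, demonstrated with a plain `TimeoutError` so it runs without either SDK installed:

```python
import time

def with_retries(fn, retryable=(Exception,), attempts=3, base_delay=0.01):
    """Call fn(), retrying with exponential backoff on the given exception types."""
    for attempt in range(attempts):
        try:
            return fn()
        except retryable:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))

# In production you would pass the SDK's transient errors, e.g.:
#   with_retries(call, retryable=(anthropic.RateLimitError, anthropic.APIConnectionError))

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient")
    return "ok"

print(with_retries(flaky, retryable=(TimeoutError,)))  # ok
```

Non-transient errors (e.g. `anthropic.AuthenticationError`) should stay outside the retryable tuple so misconfiguration fails fast instead of burning retries.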
Next Steps
For advanced debugging, see anth-advanced-troubleshooting.