# Mistral AI API

Mistral AI API — European LLM provider with strong code and reasoning models. Use when you need GDPR-compliant AI inference, code generation with Codestral, multilingual tasks, cost-efficient inference, or a European data-residency option.
## Overview
Mistral AI is a French AI company providing high-quality, cost-efficient language models with EU data residency and GDPR compliance. Their models excel at code generation (Codestral), multilingual tasks, and reasoning. Mistral's API follows OpenAI conventions closely, making integration straightforward.
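Because the REST API mirrors OpenAI's `/v1/chat/completions` shape, a raw request can be built with the standard library alone. A minimal sketch (the payload shape follows the OpenAI convention; the request is only sent when `MISTRAL_API_KEY` is set):

```python
import json
import os
import urllib.request

# Build an OpenAI-style chat completion request against Mistral's endpoint.
payload = {
    "model": "mistral-small-latest",
    "messages": [{"role": "user", "content": "Say hello in French."}],
}
req = urllib.request.Request(
    "https://api.mistral.ai/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
    },
)

# Only send the request when a key is configured.
if os.environ.get("MISTRAL_API_KEY"):
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```

In practice the official SDKs below are more convenient; this only illustrates how close the wire format is to OpenAI's.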
## Setup

```bash
# Python
pip install mistralai

# TypeScript/Node
npm install @mistralai/mistralai

export MISTRAL_API_KEY=...
```
## Available Models

| Model | Context | Best For |
|---|---|---|
| `mistral-large-latest` | 128k | Most capable, complex reasoning |
| `mistral-small-latest` | 128k | Cost-efficient, everyday tasks |
| `codestral-latest` | 256k | Code generation & completion |
| `mistral-embed` | 8k | Text embeddings |
| `open-mistral-nemo` | 128k | Open-weight, edge deployment |
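The table above can be encoded as a small model-selection helper (a hypothetical convenience, not part of the SDK; the task categories are our own labels):

```python
# Hypothetical mapping from task category to the models in the table above.
MODEL_FOR_TASK = {
    "complex_reasoning": "mistral-large-latest",
    "everyday": "mistral-small-latest",
    "code": "codestral-latest",
    "embeddings": "mistral-embed",
    "edge": "open-mistral-nemo",
}

def pick_model(task: str) -> str:
    """Return a sensible default model for a task category,
    falling back to the cost-efficient small model."""
    return MODEL_FOR_TASK.get(task, "mistral-small-latest")

print(pick_model("code"))  # codestral-latest
```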
## Instructions

### Basic Chat Completion (Python)

```python
from mistralai import Mistral

client = Mistral(api_key="your_api_key")  # or omit to read MISTRAL_API_KEY

response = client.chat.complete(
    model="mistral-large-latest",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain the difference between async and sync programming."},
    ],
)

print(response.choices[0].message.content)
print(f"Prompt tokens: {response.usage.prompt_tokens}")
print(f"Completion tokens: {response.usage.completion_tokens}")
```
### TypeScript/Node.js

```typescript
import { Mistral } from "@mistralai/mistralai";

const client = new Mistral({ apiKey: process.env.MISTRAL_API_KEY });

const response = await client.chat.complete({
  model: "mistral-large-latest",
  messages: [{ role: "user", content: "Hello from TypeScript!" }],
});

console.log(response.choices[0].message.content);
```
### Streaming

```python
from mistralai import Mistral

client = Mistral()

stream = client.chat.stream(
    model="mistral-small-latest",
    messages=[{"role": "user", "content": "Write a haiku about programming."}],
)

for event in stream:
    chunk = event.data.choices[0].delta.content
    if chunk:
        print(chunk, end="", flush=True)
print()
```
### Function Calling

```python
import json

from mistralai import Mistral

client = Mistral()

tools = [
    {
        "type": "function",
        "function": {
            "name": "search_products",
            "description": "Search for products in a catalog",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string"},
                    "max_price": {"type": "number"},
                    "category": {"type": "string"},
                },
                "required": ["query"],
            },
        },
    }
]

messages = [{"role": "user", "content": "Find laptops under $1000"}]

response = client.chat.complete(
    model="mistral-large-latest",
    messages=messages,
    tools=tools,
    tool_choice="auto",
)

if response.choices[0].finish_reason == "tool_calls":
    tool_call = response.choices[0].message.tool_calls[0]
    args = json.loads(tool_call.function.arguments)
    print(f"Function: {tool_call.function.name}, Args: {args}")

    # Add the tool result and continue the conversation
    messages.append(response.choices[0].message)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": json.dumps([{"name": "ThinkPad X1", "price": 899}]),
    })
    final = client.chat.complete(model="mistral-large-latest", messages=messages)
    print(final.choices[0].message.content)
```
### JSON Mode

```python
import json

from mistralai import Mistral

client = Mistral()

response = client.chat.complete(
    model="mistral-small-latest",
    messages=[
        {
            "role": "user",
            "content": "Return a JSON object with fields: title, author, year for the book '1984'",
        }
    ],
    response_format={"type": "json_object"},
)

data = json.loads(response.choices[0].message.content)
print(data)  # {"title": "1984", "author": "George Orwell", "year": 1949}
```
### Text Embeddings

```python
import numpy as np

from mistralai import Mistral

client = Mistral()

response = client.embeddings.create(
    model="mistral-embed",
    inputs=["Machine learning is transforming industries.", "AI is the future of technology."],
)

embeddings = [item.embedding for item in response.data]
print(f"Embedding dimension: {len(embeddings[0])}")  # 1024

# Compute cosine similarity between the two embeddings
def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

similarity = cosine_similarity(embeddings[0], embeddings[1])
print(f"Similarity: {similarity:.3f}")
```
### Codestral for Code Completion

```python
from mistralai import Mistral

client = Mistral()

# Fill-in-the-middle (FIM) — Codestral's signature feature:
# the model returns the middle code that connects prompt to suffix.
response = client.fim.complete(
    model="codestral-latest",
    prompt="def fibonacci(n):\n    if n <= 1:\n        return n\n    ",
    suffix="\n\nresult = fibonacci(10)\nprint(result)",
)
print(response.choices[0].message.content)

# Standard code generation
response = client.chat.complete(
    model="codestral-latest",
    messages=[
        {
            "role": "user",
            "content": "Write a Python class for a rate limiter using the token bucket algorithm.",
        }
    ],
)
print(response.choices[0].message.content)
```
## GDPR Compliance Notes

- All API data is processed in EU data centers by default.
- Mistral AI is headquartered in Paris, France, and subject to EU/GDPR jurisdiction.
- For enterprise data-residency guarantees, use Mistral's Azure or GCP deployments.
- No training on user data by default; check your plan's DPA for details.
## Guidelines

- Use `mistral-large-latest` for complex tasks and `mistral-small-latest` for cost savings.
- Codestral is specialized for code and significantly outperforms general models on FIM tasks.
- The `mistral-embed` model produces 1024-dimensional vectors.
- Mistral models have strong multilingual performance, especially in French, Spanish, Italian, German, and Portuguese.
- Function calling requires `tool_choice` to be set; use `"auto"` for model-driven decisions.
- JSON mode requires the system or user prompt to explicitly mention JSON output.