> mcp-client
Connect to and consume MCP servers from applications. Use when: integrating MCP servers into your app, building AI agents that use MCP tools, testing MCP server implementations, or programmatically calling MCP tools and reading resources.
curl "https://skillshub.wtf/TerminalSkills/skills/mcp-client?format=md"MCP Client
Overview
The MCP client SDK lets you connect to any MCP server, discover its capabilities (tools, resources, prompts), and invoke them programmatically. Use this when building AI agents, testing MCP servers, or integrating MCP capabilities into your application.
Instructions
Step 1: Install the SDK
TypeScript:
npm install @modelcontextprotocol/sdk
Python:
pip install mcp
Step 2: Connect to a stdio server
TypeScript:
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
const client = new Client(
  { name: "my-client", version: "1.0.0" },
  { capabilities: {} }
);

const transport = new StdioClientTransport({
  command: "node",
  args: ["path/to/server/dist/index.js"],
  env: { API_KEY: process.env.API_KEY! },
});
await client.connect(transport);
console.log("Connected to MCP server");
Python:
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
server_params = StdioServerParameters(
    command="python",
    args=["path/to/server.py"],
    env={"API_KEY": "your-key"},
)

async def main():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Use session here
            print("Connected to MCP server")

asyncio.run(main())
Step 3: Connect to an SSE server (remote)
TypeScript:
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";
const transport = new SSEClientTransport(
  new URL("http://localhost:3000/sse")
);
await client.connect(transport);
Python:
from mcp.client.sse import sse_client
async with sse_client("http://localhost:3000/sse") as (read, write):
    async with ClientSession(read, write) as session:
        await session.initialize()
Step 4: Discover tools
const toolsResponse = await client.listTools();
console.log("Available tools:", toolsResponse.tools);
// toolsResponse.tools is an array of:
// {
//   name: string,
//   description: string,
//   inputSchema: JSONSchema
// }
for (const tool of toolsResponse.tools) {
  console.log(`- ${tool.name}: ${tool.description}`);
}
Python:
tools = await session.list_tools()
for tool in tools.tools:
    print(f"- {tool.name}: {tool.description}")
Step 5: Call a tool
const result = await client.callTool({
  name: "get_weather",
  arguments: {
    city: "Berlin",
    units: "celsius",
  },
});

// result.content is an array of content blocks
for (const block of result.content) {
  if (block.type === "text") {
    console.log(block.text);
  }
}

// Check for errors
if (result.isError) {
  console.error("Tool returned an error:", result.content);
}
Python:
result = await session.call_tool("get_weather", {"city": "Berlin", "units": "celsius"})
for block in result.content:
    if block.type == "text":
        print(block.text)
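When several call sites only care about the text of a tool result, a small helper keeps them tidy. This is a TypeScript sketch against the simplified block shape shown above, not an SDK utility — the real SDK types also cover image and embedded-resource blocks:

```typescript
// Minimal content-block shape, matching the tool results shown above.
type ContentBlock = { type: string; text?: string };

// Join every text block in a tool result into a single string,
// skipping non-text blocks.
function textFromContent(content: ContentBlock[]): string {
  return content
    .filter((block) => block.type === "text" && typeof block.text === "string")
    .map((block) => block.text)
    .join("\n");
}
```

Used on the weather result above, `textFromContent(result.content)` yields one printable string regardless of how many text blocks the server returned.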
Step 6: List and read resources
// List all available resources
const resources = await client.listResources();
for (const resource of resources.resources) {
  console.log(`- ${resource.uri}: ${resource.description}`);
}

// Read a specific resource
const content = await client.readResource({ uri: "config://app" });
for (const block of content.contents) {
  if (block.mimeType === "application/json") {
    console.log(JSON.parse(block.text as string));
  }
}
Python:
resources = await session.list_resources()
resource_content = await session.read_resource("config://app")
for item in resource_content.contents:
    print(item.text)
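The mime-type check above can be factored out the same way. Below is a hypothetical helper, assuming only the mimeType and text fields shown in the example; it returns the parsed JSON body of the first matching block, or undefined if none exists:

```typescript
// Resource content block shape, matching the readResource example above.
type ResourceContent = { uri?: string; mimeType?: string; text?: string };

// Find the first block with the given MIME type and parse it as JSON.
function jsonFromResource(
  contents: ResourceContent[],
  mimeType = "application/json"
): unknown {
  const block = contents.find((b) => b.mimeType === mimeType && b.text);
  return block ? JSON.parse(block.text as string) : undefined;
}
```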
Step 7: List and use prompts
const prompts = await client.listPrompts();
for (const prompt of prompts.prompts) {
  console.log(`- ${prompt.name}: ${prompt.description}`);
}

const prompt = await client.getPrompt({
  name: "code_review",
  arguments: { language: "TypeScript", focus: "security" },
});
// Use prompt.messages with your LLM
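One way to hand prompt.messages to a chat-style API is to flatten each MCP message's content block into a plain string. A sketch, assuming text-only content (the shapes here are simplified from the MCP spec, and non-text blocks are dropped):

```typescript
// Simplified MCP prompt message: a role plus one content block.
type PromptMessage = {
  role: "user" | "assistant";
  content: { type: string; text?: string };
};

// Flatten MCP prompt messages into plain { role, content } pairs
// for a chat-completions-style API. Non-text blocks are skipped.
function toChatMessages(messages: PromptMessage[]) {
  return messages
    .filter((m) => m.content.type === "text")
    .map((m) => ({ role: m.role, content: m.content.text ?? "" }));
}
```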
Step 8: Build an AI agent with tool use
import Anthropic from "@anthropic-ai/sdk";
const anthropic = new Anthropic();
// Convert MCP tools to Anthropic format
const tools = toolsResponse.tools.map((tool) => ({
  name: tool.name,
  description: tool.description,
  input_schema: tool.inputSchema,
}));

async function runAgent(userMessage: string) {
  const messages: Anthropic.MessageParam[] = [
    { role: "user", content: userMessage },
  ];

  while (true) {
    const response = await anthropic.messages.create({
      model: "claude-opus-4-5",
      max_tokens: 4096,
      tools,
      messages,
    });

    if (response.stop_reason === "end_turn") {
      return response.content;
    }

    // Process tool calls
    const toolUses = response.content.filter((b) => b.type === "tool_use");
    const toolResults = await Promise.all(
      toolUses.map(async (toolUse) => {
        if (toolUse.type !== "tool_use") return null;
        const result = await client.callTool({
          name: toolUse.name,
          arguments: toolUse.input as Record<string, unknown>,
        });
        return {
          type: "tool_result" as const,
          tool_use_id: toolUse.id,
          content: result.content,
        };
      })
    );

    messages.push({ role: "assistant", content: response.content });
    messages.push({ role: "user", content: toolResults.filter(Boolean) });
  }
}
Step 9: Clean up
// Always disconnect when done, even on error paths
await client.close();
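A try/finally wrapper guarantees the disconnect happens even when a tool call throws. The Closable type below is an assumption that requires only a close() method, so any client-like object fits:

```typescript
// Anything with a close() method — the MCP Client qualifies.
type Closable = { close(): Promise<void> | void };

// Run work against a connected client and always close it afterwards,
// whether the work resolves or throws.
async function withClient<C extends Closable, T>(
  client: C,
  work: (client: C) => Promise<T>
): Promise<T> {
  try {
    return await work(client);
  } finally {
    await client.close();
  }
}
```

For example, `await withClient(client, (c) => c.callTool({ name: "get_weather", arguments: { city: "Berlin" } }))` returns the tool result and closes the connection in one expression.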
Examples
Example 1: List all capabilities of an MCP server
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
});
await client.connect(transport);

const [tools, resources, prompts] = await Promise.all([
  client.listTools(),
  client.listResources(),
  client.listPrompts(),
]);

console.log(`Tools: ${tools.tools.map((t) => t.name).join(", ")}`);
console.log(`Resources: ${resources.resources.map((r) => r.uri).join(", ")}`);
console.log(`Prompts: ${prompts.prompts.map((p) => p.name).join(", ")}`);

await client.close();
Example 2: Test a specific tool
async function testTool(toolName: string, args: Record<string, unknown>) {
  const result = await client.callTool({ name: toolName, arguments: args });
  if (result.isError) {
    console.error("FAIL:", result.content);
    return false;
  }
  console.log("PASS:", result.content);
  return true;
}

await testTool("query_database", { sql: "SELECT COUNT(*) FROM users", limit: 1 });
Guidelines
- Always call client.close() after you're done to free resources
- Handle isError: true responses from callTool — these are tool-level errors, not exceptions
- Use listTools() at startup to dynamically discover available tools
- For long-running agents, reconnect if the server disconnects
- Validate tool arguments against the inputSchema before calling to catch errors early
- Use StdioClientTransport for local servers and SSEClientTransport for remote ones
- The env field in StdioClientTransport sets the server process environment explicitly; pass every variable the server needs rather than assuming it inherits your full environment
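The inputSchema validation guideline can be sketched without pulling in a dependency. The check below covers only required keys and primitive type mismatches — a fraction of JSON Schema — so a real validator such as Ajv is the better choice in production:

```typescript
// Just the parts of JSON Schema this check looks at.
type ObjectSchema = {
  type?: string;
  properties?: Record<string, { type?: string }>;
  required?: string[];
};

// Return a list of problems: missing required keys and obvious
// primitive type mismatches. Empty array means the check passed.
function checkArguments(
  schema: ObjectSchema,
  args: Record<string, unknown>
): string[] {
  const errors: string[] = [];
  for (const key of schema.required ?? []) {
    if (!(key in args)) errors.push(`missing required argument: ${key}`);
  }
  for (const [key, value] of Object.entries(args)) {
    const expected = schema.properties?.[key]?.type;
    if (expected && ["string", "number", "boolean"].includes(expected)) {
      if (typeof value !== expected) {
        errors.push(`argument ${key}: expected ${expected}, got ${typeof value}`);
      }
    }
  }
  return errors;
}
```

Run it against tool.inputSchema before callTool and surface the errors to the user (or the model) instead of waiting for the server to reject the call.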