
An In-Depth Guide to the Anthropic SDK

Advanced usage of the Anthropic Python/TypeScript SDKs: streaming, tool use, and multi-turn conversations

Why Use the SDK Directly

LangChain and LlamaIndex are great, but sometimes you don't need that many layers of abstraction. Using the Anthropic SDK directly has several advantages:

  1. Fewer dependencies, lighter footprint
  2. Full control over requests and responses
  3. Access to new features as soon as they ship
  4. Easier debugging and troubleshooting
  5. Less performance overhead

If your application primarily uses Claude models, going straight to the Anthropic SDK is the most direct choice.

Installation and Configuration

Python SDK

pip install anthropic

TypeScript SDK

npm install @anthropic-ai/sdk

Configuring the API Key

# Environment variable (recommended)
export ANTHROPIC_API_KEY="your-api-key"

Or configure it in code:

# Python
import anthropic
client = anthropic.Anthropic(api_key="your-api-key")
// TypeScript
import Anthropic from '@anthropic-ai/sdk';
const client = new Anthropic({ apiKey: 'your-api-key' });

Basic Message Creation

Python

import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Explain what a REST API is in one sentence"}
    ]
)

print(message.content[0].text)
print(f"Input tokens: {message.usage.input_tokens}")
print(f"Output tokens: {message.usage.output_tokens}")

TypeScript

import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic();

const message = await client.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'Explain what a REST API is in one sentence' }
  ],
});

console.log(message.content[0].type === 'text' ? message.content[0].text : '');
console.log(`Input tokens: ${message.usage.input_tokens}`);
console.log(`Output tokens: ${message.usage.output_tokens}`);

System Prompt

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    system="You are a Python expert. Answer in Chinese, and use Python 3.12+ in code examples.",
    messages=[
        {"role": "user", "content": "How do I implement a thread-safe singleton?"}
    ]
)

Parameters

| Parameter | Type | Description |
|---|---|---|
| model | string | Model ID |
| max_tokens | int | Maximum number of output tokens |
| messages | array | List of messages |
| system | string | System prompt |
| temperature | float | Randomness (0-1) |
| top_p | float | Nucleus sampling parameter |
| top_k | int | Top-K sampling |
| stop_sequences | array | Stop sequences |

Streaming Responses

Streaming is critical for user experience: instead of waiting for the full reply to be generated, users see output in real time.

Python Streaming

import anthropic

client = anthropic.Anthropic()

with client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Write a tutorial on Python decorators"}
    ]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)

    # Get the final message (includes usage info) while the stream is still open
    final_message = stream.get_final_message()
    print(f"\n\nTokens: {final_message.usage.output_tokens}")

TypeScript Streaming

import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic();

const stream = client.messages.stream({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'Write a tutorial on Python decorators' }
  ],
});

for await (const event of stream) {
  if (
    event.type === 'content_block_delta' &&
    event.delta.type === 'text_delta'
  ) {
    process.stdout.write(event.delta.text);
  }
}

const finalMessage = await stream.finalMessage();
console.log(`\nTokens: ${finalMessage.usage.output_tokens}`);

Streaming Event Types

| Event Type | Description |
|---|---|
| message_start | Message begins |
| content_block_start | Content block begins |
| content_block_delta | Content delta (text fragment) |
| content_block_stop | Content block ends |
| message_delta | Message delta (stop_reason, etc.) |
| message_stop | Message ends |
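
In Python you can also iterate over the raw events (as the TypeScript example does) rather than using `text_stream`. A minimal sketch of a dispatcher over the event types above; `render_event` is a hypothetical helper name, not part of the SDK:

```python
def render_event(event) -> str:
    """Return the printable text carried by one streaming event ("" if none)."""
    if event.type == "content_block_delta" and event.delta.type == "text_delta":
        return event.delta.text
    if event.type == "message_stop":
        return "\n"  # emit a final newline when the message ends
    return ""

# Inside a stream, this would be used roughly as:
# with client.messages.stream(...) as stream:
#     for event in stream:
#         print(render_event(event), end="", flush=True)
```

This keeps the event handling in one testable function instead of inlining the type checks in the loop.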

Multi-Turn Conversations

Basic Multi-Turn Conversation

import anthropic

client = anthropic.Anthropic()

conversation = []

def chat(user_message: str) -> str:
    conversation.append({
        "role": "user",
        "content": user_message
    })

    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        system="You are a programming tutor. Patiently answer your students' questions.",
        messages=conversation
    )

    assistant_message = response.content[0].text
    conversation.append({
        "role": "assistant",
        "content": assistant_message
    })

    return assistant_message

# Multi-turn conversation
print(chat("What is a closure?"))
print(chat("Can you give me a Python example?"))
print(chat("How are closures related to decorators?"))

Managing Conversation Length

A conversation that grows too long will exceed the context window, so the history needs to be managed:

def manage_conversation(conversation: list, max_messages: int = 20):
    """Cap the history at max_messages, keeping the first message and the most recent ones"""
    if len(conversation) > max_messages:
        # Keep the first message (it may carry important context) plus the most recent ones
        conversation = conversation[:1] + conversation[-(max_messages - 1):]
    return conversation
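
Message count is a crude proxy for context size. A rough sketch that trims to an approximate token budget instead; the 4-characters-per-token ratio is a rough heuristic, not an API guarantee, and `trim_to_budget` is a hypothetical helper:

```python
def trim_to_budget(conversation: list[dict], max_tokens: int = 100_000,
                   chars_per_token: float = 4.0) -> list[dict]:
    """Drop the oldest messages until a rough token estimate fits the budget.

    Messages are dropped in user/assistant pairs so the roles keep alternating.
    """
    def estimate(msgs: list[dict]) -> int:
        # Very rough estimate: total characters divided by chars_per_token
        return int(sum(len(str(m["content"])) for m in msgs) / chars_per_token)

    trimmed = list(conversation)
    while len(trimmed) > 2 and estimate(trimmed) > max_tokens:
        trimmed = trimmed[2:]  # drop one user/assistant pair
    return trimmed
```

Dropping whole pairs matters because the Messages API expects user and assistant turns to alternate.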

def chat_with_management(user_message: str) -> str:
    global conversation
    conversation = manage_conversation(conversation)

    conversation.append({"role": "user", "content": user_message})

    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        system="You are a programming tutor.",
        messages=conversation
    )

    assistant_message = response.content[0].text
    conversation.append({"role": "assistant", "content": assistant_message})
    return assistant_message

Tool Use

Tool use lets Claude call functions you define to fetch information or perform actions.

Defining Tools

import anthropic
import json

client = anthropic.Anthropic()

tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a given city",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "City name, e.g. 'Beijing' or 'Shanghai'"
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "Temperature unit"
                }
            },
            "required": ["city"]
        }
    },
    {
        "name": "search_code",
        "description": "Search the codebase for relevant code",
        "input_schema": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Search keywords"
                },
                "language": {
                    "type": "string",
                    "description": "Filter by programming language"
                }
            },
            "required": ["query"]
        }
    }
]

Handling Tool Calls

def process_tool_call(tool_name: str, tool_input: dict) -> str:
    """Execute a tool call and return its result"""
    if tool_name == "get_weather":
        # A real application would call a weather API here
        return json.dumps({
            "city": tool_input["city"],
            "temperature": 22,
            "condition": "Sunny",
            "humidity": 45
        })
    elif tool_name == "search_code":
        # A real application would search the codebase here
        return json.dumps({
            "results": [
                {"file": "src/auth.py", "line": 42, "content": "def authenticate(...)"},
                {"file": "src/middleware.py", "line": 15, "content": "class AuthMiddleware(...)"}
            ]
        })
    return json.dumps({"error": "Unknown tool"})

def chat_with_tools(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]

    # Initial request
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        tools=tools,
        messages=messages
    )

    # Keep looping as long as Claude asks for tools
    while response.stop_reason == "tool_use":
        # Claude may request several tools in one turn, so handle every tool_use block
        tool_results = []
        for block in response.content:
            if block.type != "tool_use":
                continue

            print(f"Calling tool: {block.name}({json.dumps(block.input, ensure_ascii=False)})")

            # Execute the tool
            tool_results.append({
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": process_tool_call(block.name, block.input)
            })

        # Send the tool results back to Claude
        messages.append({"role": "assistant", "content": response.content})
        messages.append({"role": "user", "content": tool_results})

        # Continue the conversation
        response = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=1024,
            tools=tools,
            messages=messages
        )

    # Return the final text reply
    return response.content[0].text

# Usage
result = chat_with_tools("What's the weather in Beijing today? Is it good for outdoor activities?")
print(result)

Image Input (Vision)

Claude accepts image input and can analyze and understand image content.

Loading an Image from a URL

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "url",
                        "url": "https://example.com/screenshot.png"
                    }
                },
                {
                    "type": "text",
                    "text": "What could be improved in this UI design?"
                }
            ]
        }
    ]
)

Loading an Image from Base64

import base64

with open("screenshot.png", "rb") as f:
    image_data = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/png",
                        "data": image_data
                    }
                },
                {
                    "type": "text",
                    "text": "Please analyze the bug in this code screenshot"
                }
            ]
        }
    ]
)

Supported Image Formats

| Format | MIME Type |
|---|---|
| JPEG | image/jpeg |
| PNG | image/png |
| GIF | image/gif |
| WebP | image/webp |
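
When loading local files, the `media_type` has to match the actual format. A small helper that maps file extensions to the MIME types in the table above; `media_type_for` is a hypothetical helper name:

```python
from pathlib import Path

# Mirrors the table of supported formats
SUPPORTED_MEDIA_TYPES = {
    ".jpg": "image/jpeg",
    ".jpeg": "image/jpeg",
    ".png": "image/png",
    ".gif": "image/gif",
    ".webp": "image/webp",
}

def media_type_for(path: str) -> str:
    """Return the MIME type for a supported image file, or raise ValueError."""
    ext = Path(path).suffix.lower()
    try:
        return SUPPORTED_MEDIA_TYPES[ext]
    except KeyError:
        raise ValueError(f"Unsupported image format: {ext}") from None
```

Failing fast here is preferable to letting the API reject the request after the upload.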

Error Handling

Python Error Handling

import anthropic

client = anthropic.Anthropic()

try:
    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello"}]
    )
except anthropic.APIConnectionError:
    print("Network connection failed; check your network")
except anthropic.RateLimitError:
    print("Too many requests; try again later")
except anthropic.APIStatusError as e:
    print(f"API error: {e.status_code} - {e.message}")

TypeScript Error Handling

import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic();

try {
  const message = await client.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024,
    messages: [{ role: 'user', content: 'Hello' }],
  });
} catch (error) {
  if (error instanceof Anthropic.APIConnectionError) {
    console.error('Network connection failed');
  } else if (error instanceof Anthropic.RateLimitError) {
    console.error('Too many requests');
  } else if (error instanceof Anthropic.APIError) {
    console.error(`API error: ${error.status} - ${error.message}`);
  }
}

Retry Strategy

The SDK ships with automatic retries built in:

# Python - customizing retries
client = anthropic.Anthropic(
    max_retries=3,        # Maximum number of retries (default: 2)
    timeout=60.0          # Timeout in seconds
)
// TypeScript - customizing retries
const client = new Anthropic({
  maxRetries: 3,
  timeout: 60 * 1000,  // milliseconds
});
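
The built-in retries cover transient failures; if you still hit rate limits after they are exhausted, you can layer an application-level backoff on top. A sketch under illustrative assumptions (`with_backoff` and its retry/delay numbers are not SDK features):

```python
import random
import time

def with_backoff(fn, retries: int = 3, base_delay: float = 1.0,
                 retry_on: tuple = (Exception,)):
    """Call fn(), retrying with exponential backoff plus jitter on failure."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except retry_on:
            if attempt == retries:
                raise  # out of retries: re-raise the last error
            # Wait base_delay * 2^attempt, plus up to base_delay of random jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# In practice you would retry only on rate limits, e.g.:
# message = with_backoff(
#     lambda: client.messages.create(...),
#     retry_on=(anthropic.RateLimitError,),
# )
```

The jitter spreads out retries from concurrent workers so they do not all hammer the API at the same instant.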

Practical Patterns

1. Batch Processing with Bounded Concurrency

import asyncio
import anthropic

async_client = anthropic.AsyncAnthropic()

async def process_batch(prompts: list[str]) -> list[str]:
    """Process a batch of prompts"""
    semaphore = asyncio.Semaphore(5)  # Limit concurrency

    async def process_one(prompt: str) -> str:
        async with semaphore:
            response = await async_client.messages.create(
                model="claude-sonnet-4-20250514",
                max_tokens=1024,
                messages=[{"role": "user", "content": prompt}]
            )
            return response.content[0].text

    tasks = [process_one(p) for p in prompts]
    return await asyncio.gather(*tasks)

# Usage
prompts = [
    "Explain Python's GIL",
    "Explain JavaScript's event loop",
    "Explain Go's goroutines",
]
results = asyncio.run(process_batch(prompts))

2. Token Counting and Cost Estimation

def estimate_cost(
    input_tokens: int,
    output_tokens: int,
    model: str = "claude-sonnet-4-20250514"
) -> float:
    """Estimate the cost of an API call in USD"""
    pricing = {
        "claude-sonnet-4-20250514": {
            "input": 3.0 / 1_000_000,
            "output": 15.0 / 1_000_000
        },
        "claude-opus-4-20250514": {
            "input": 15.0 / 1_000_000,
            "output": 75.0 / 1_000_000
        }
    }

    rates = pricing.get(model, pricing["claude-sonnet-4-20250514"])
    return input_tokens * rates["input"] + output_tokens * rates["output"]

# Usage
message = client.messages.create(...)
cost = estimate_cost(
    message.usage.input_tokens,
    message.usage.output_tokens
)
print(f"Cost of this call: ${cost:.4f}")

3. Structured Output

import json

def get_structured_output(prompt: str, schema_description: str) -> dict:
    """Get structured JSON output"""
    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        system=f"""Reply in JSON. Output format:
{schema_description}
Output only the JSON, nothing else.""",
        messages=[{"role": "user", "content": prompt}]
    )

    return json.loads(message.content[0].text)

# Usage
result = get_structured_output(
    "Analyze the strengths and weaknesses of Python and JavaScript",
    """
    {
        "languages": [
            {
                "name": "language name",
                "pros": ["pro 1", "pro 2"],
                "cons": ["con 1", "con 2"],
                "best_for": ["use case 1", "use case 2"]
            }
        ]
    }
    """
)
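
Models sometimes wrap the JSON in a markdown code fence despite the "JSON only" instruction, which makes the bare `json.loads` above fail. A defensive parsing sketch; `parse_json_reply` is a hypothetical helper:

```python
import json
import re

def parse_json_reply(text: str) -> dict:
    """Parse a JSON reply, tolerating a surrounding ```json ... ``` fence."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    if match:
        text = match.group(1)  # strip the fence, keep the payload
    return json.loads(text)
```

Swapping this in for `json.loads(message.content[0].text)` makes the structured-output pattern noticeably more robust.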

TypeScript Types

The TypeScript SDK ships with complete type definitions:

import Anthropic from '@anthropic-ai/sdk';

// Message types
type Message = Anthropic.Message;
type MessageParam = Anthropic.MessageParam;
type ContentBlock = Anthropic.ContentBlock;
type TextBlock = Anthropic.TextBlock;
type ToolUseBlock = Anthropic.ToolUseBlock;

// Tool types
type Tool = Anthropic.Tool;
type ToolResultBlockParam = Anthropic.ToolResultBlockParam;

// Using the types
function processMessage(message: Message) {
  for (const block of message.content) {
    if (block.type === 'text') {
      console.log(block.text);
    } else if (block.type === 'tool_use') {
      console.log(`Tool call: ${block.name}`);
      console.log(`Input: ${JSON.stringify(block.input)}`);
    }
  }
}

Best Practices

1. Manage API Keys via Environment Variables

# Don't do this
client = anthropic.Anthropic(api_key="sk-ant-xxx")

# Do this instead
client = anthropic.Anthropic()  # Reads ANTHROPIC_API_KEY automatically

2. Set max_tokens Appropriately

# Adjust max_tokens to the task
# Short answers
message = client.messages.create(max_tokens=256, ...)

# Code generation
message = client.messages.create(max_tokens=2048, ...)

# Long documents
message = client.messages.create(max_tokens=4096, ...)
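
If the limit is set too low, the reply is silently cut off and `stop_reason` comes back as "max_tokens". A small check sketch; `is_truncated` is a hypothetical helper:

```python
def is_truncated(message) -> bool:
    """True when the response stopped because max_tokens was reached."""
    return message.stop_reason == "max_tokens"

# message = client.messages.create(...)
# if is_truncated(message):
#     ...  # retry with a larger max_tokens, or ask the model to continue
```

Checking this after every call is cheap insurance against shipping half a reply to the user.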

3. Use the Async Client for High Concurrency

import anthropic

# Sync client (simple scenarios)
client = anthropic.Anthropic()

# Async client (high-concurrency scenarios)
async_client = anthropic.AsyncAnthropic()

4. Monitor Token Usage

total_input_tokens = 0
total_output_tokens = 0

def tracked_create(**kwargs):
    global total_input_tokens, total_output_tokens
    response = client.messages.create(**kwargs)
    total_input_tokens += response.usage.input_tokens
    total_output_tokens += response.usage.output_tokens
    return response

5. Graceful Degradation

def safe_generate(prompt: str, fallback: str = "Sorry, we can't process your request right now.") -> str:
    try:
        response = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}]
        )
        return response.content[0].text
    except anthropic.RateLimitError:
        # Fall back to a smaller model
        try:
            response = client.messages.create(
                model="claude-haiku-4-20250514",
                max_tokens=1024,
                messages=[{"role": "user", "content": prompt}]
            )
            return response.content[0].text
        except Exception:
            return fallback
    except Exception:
        return fallback

Using the SDK directly means more control and fewer abstractions. When you know exactly what you are doing, simplicity is the best architecture.
