
docs: refactor the API reference docs, splitting them into separate files and unifying the CLI-style format

This commit is contained in:
2026-04-19 14:21:03 +08:00
parent b92974716f
commit 4dc518a5f4
17 changed files with 25227 additions and 8678 deletions

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,240 @@
## List
`$ ant models list`
**get** `/v1/models`
List available models.
The Models API response can be used to determine which models are available for use in the API. More recently released models are listed first.
### Parameters
- `--after-id: optional string`
Query param: ID of the object to use as a cursor for pagination. When provided, returns the page of results immediately after this object.
- `--before-id: optional string`
Query param: ID of the object to use as a cursor for pagination. When provided, returns the page of results immediately before this object.
- `--limit: optional number`
Query param: Number of items to return per page.
Defaults to `20`. Ranges from `1` to `1000`.
- `--beta: optional array of AnthropicBeta`
Header param: Optional header to specify the beta version(s) you want to use.
### Returns
- `ListResponse_ModelInfo_: object { data, first_id, has_more, last_id }`
- `data: array of ModelInfo`
- `id: string`
Unique model identifier.
- `capabilities: object { batch, citations, code_execution, 6 more }`
Model capability information.
- `batch: object { supported }`
Whether the model supports the Batch API.
- `supported: boolean`
Whether this capability is supported by the model.
- `citations: object { supported }`
Whether the model supports citation generation.
- `supported: boolean`
Whether this capability is supported by the model.
- `code_execution: object { supported }`
Whether the model supports code execution tools.
- `supported: boolean`
Whether this capability is supported by the model.
- `context_management: object { clear_thinking_20251015, clear_tool_uses_20250919, compact_20260112, supported }`
Context management support and available strategies.
- `clear_thinking_20251015: object { supported }`
Indicates whether a capability is supported.
- `supported: boolean`
Whether this capability is supported by the model.
- `clear_tool_uses_20250919: object { supported }`
Indicates whether a capability is supported.
- `supported: boolean`
Whether this capability is supported by the model.
- `compact_20260112: object { supported }`
Indicates whether a capability is supported.
- `supported: boolean`
Whether this capability is supported by the model.
- `supported: boolean`
Whether this capability is supported by the model.
- `effort: object { high, low, max, 3 more }`
Effort (reasoning_effort) support and available levels.
- `high: object { supported }`
Whether the model supports high effort level.
- `supported: boolean`
Whether this capability is supported by the model.
- `low: object { supported }`
Whether the model supports low effort level.
- `supported: boolean`
Whether this capability is supported by the model.
- `max: object { supported }`
Whether the model supports max effort level.
- `supported: boolean`
Whether this capability is supported by the model.
- `medium: object { supported }`
Whether the model supports medium effort level.
- `supported: boolean`
Whether this capability is supported by the model.
- `supported: boolean`
Whether this capability is supported by the model.
- `xhigh: object { supported }`
Indicates whether a capability is supported.
- `supported: boolean`
Whether this capability is supported by the model.
- `image_input: object { supported }`
Whether the model accepts image content blocks.
- `supported: boolean`
Whether this capability is supported by the model.
- `pdf_input: object { supported }`
Whether the model accepts PDF content blocks.
- `supported: boolean`
Whether this capability is supported by the model.
- `structured_outputs: object { supported }`
Whether the model supports structured output / JSON mode / strict tool schemas.
- `supported: boolean`
Whether this capability is supported by the model.
- `thinking: object { supported, types }`
Thinking capability and supported type configurations.
- `supported: boolean`
Whether this capability is supported by the model.
- `types: object { adaptive, enabled }`
Supported thinking type configurations.
- `adaptive: object { supported }`
Whether the model supports thinking with type 'adaptive' (auto).
- `supported: boolean`
Whether this capability is supported by the model.
- `enabled: object { supported }`
Whether the model supports thinking with type 'enabled'.
- `supported: boolean`
Whether this capability is supported by the model.
- `created_at: string`
RFC 3339 datetime string representing the time at which the model was released. May be set to an epoch value if the release date is unknown.
- `display_name: string`
A human-readable name for the model.
- `max_input_tokens: number`
Maximum input context window size in tokens for this model.
- `max_tokens: number`
Maximum value for the `max_tokens` parameter when using this model.
- `type: "model"`
Object type.
For Models, this is always `"model"`.
- `first_id: string`
First ID in the `data` list. Can be used as the `before_id` for the previous page.
- `has_more: boolean`
Indicates if there are more results in the requested page direction.
- `last_id: string`
Last ID in the `data` list. Can be used as the `after_id` for the next page.
### Example
```cli
ant models list \
--api-key my-anthropic-api-key
```
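The same listing can also be driven over raw HTTP. Below is a minimal, hedged sketch of cursor pagination in Python with the `requests` library, using the documented `limit`/`after_id` parameters and the `has_more`/`last_id` response fields; the header names follow the curl form of this endpoint shown elsewhere in this reference.
```python
# Minimal sketch (not an official client): page through GET /v1/models with requests.
import os
import requests

BASE_URL = "https://api.anthropic.com"
HEADERS = {
    "x-api-key": os.environ["ANTHROPIC_API_KEY"],
    "anthropic-version": "2023-06-01",
}

def list_all_models():
    after_id = None
    while True:
        params = {"limit": 100}
        if after_id:
            params["after_id"] = after_id
        page = requests.get(f"{BASE_URL}/v1/models", headers=HEADERS, params=params)
        page.raise_for_status()
        body = page.json()
        yield from body["data"]
        if not body["has_more"]:
            break
        after_id = body["last_id"]  # last_id of this page becomes after_id of the next

for model in list_all_models():
    print(model["id"], model["display_name"])
```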

File diff suppressed because it is too large

View File

@@ -1,291 +0,0 @@
## List
**get** `/v1/models`
List available models.
The Models API response can be used to determine which models are available for use in the API. More recently released models are listed first.
### Query Parameters
- `after_id: optional string`
ID of the object to use as a cursor for pagination. When provided, returns the page of results immediately after this object.
- `before_id: optional string`
ID of the object to use as a cursor for pagination. When provided, returns the page of results immediately before this object.
- `limit: optional number`
Number of items to return per page.
Defaults to `20`. Ranges from `1` to `1000`.
### Header Parameters
- `"anthropic-beta": optional array of AnthropicBeta`
Optional header to specify the beta version(s) you want to use.
- `UnionMember0 = string`
- `UnionMember1 = "message-batches-2024-09-24" or "prompt-caching-2024-07-31" or "computer-use-2024-10-22" or 20 more`
- `"message-batches-2024-09-24"`
- `"prompt-caching-2024-07-31"`
- `"computer-use-2024-10-22"`
- `"computer-use-2025-01-24"`
- `"pdfs-2024-09-25"`
- `"token-counting-2024-11-01"`
- `"token-efficient-tools-2025-02-19"`
- `"output-128k-2025-02-19"`
- `"files-api-2025-04-14"`
- `"mcp-client-2025-04-04"`
- `"mcp-client-2025-11-20"`
- `"dev-full-thinking-2025-05-14"`
- `"interleaved-thinking-2025-05-14"`
- `"code-execution-2025-05-22"`
- `"extended-cache-ttl-2025-04-11"`
- `"context-1m-2025-08-07"`
- `"context-management-2025-06-27"`
- `"model-context-window-exceeded-2025-08-26"`
- `"skills-2025-10-02"`
- `"fast-mode-2026-02-01"`
- `"output-300k-2026-03-24"`
- `"advisor-tool-2026-03-01"`
- `"user-profiles-2026-03-24"`
### Returns
- `data: array of ModelInfo`
- `id: string`
Unique model identifier.
- `capabilities: ModelCapabilities`
Model capability information.
- `batch: CapabilitySupport`
Whether the model supports the Batch API.
- `supported: boolean`
Whether this capability is supported by the model.
- `citations: CapabilitySupport`
Whether the model supports citation generation.
- `supported: boolean`
Whether this capability is supported by the model.
- `code_execution: CapabilitySupport`
Whether the model supports code execution tools.
- `supported: boolean`
Whether this capability is supported by the model.
- `context_management: ContextManagementCapability`
Context management support and available strategies.
- `clear_thinking_20251015: CapabilitySupport`
Indicates whether a capability is supported.
- `supported: boolean`
Whether this capability is supported by the model.
- `clear_tool_uses_20250919: CapabilitySupport`
Indicates whether a capability is supported.
- `supported: boolean`
Whether this capability is supported by the model.
- `compact_20260112: CapabilitySupport`
Indicates whether a capability is supported.
- `supported: boolean`
Whether this capability is supported by the model.
- `supported: boolean`
Whether this capability is supported by the model.
- `effort: EffortCapability`
Effort (reasoning_effort) support and available levels.
- `high: CapabilitySupport`
Whether the model supports high effort level.
- `supported: boolean`
Whether this capability is supported by the model.
- `low: CapabilitySupport`
Whether the model supports low effort level.
- `supported: boolean`
Whether this capability is supported by the model.
- `max: CapabilitySupport`
Whether the model supports max effort level.
- `supported: boolean`
Whether this capability is supported by the model.
- `medium: CapabilitySupport`
Whether the model supports medium effort level.
- `supported: boolean`
Whether this capability is supported by the model.
- `supported: boolean`
Whether this capability is supported by the model.
- `xhigh: CapabilitySupport`
Indicates whether a capability is supported.
- `supported: boolean`
Whether this capability is supported by the model.
- `image_input: CapabilitySupport`
Whether the model accepts image content blocks.
- `supported: boolean`
Whether this capability is supported by the model.
- `pdf_input: CapabilitySupport`
Whether the model accepts PDF content blocks.
- `supported: boolean`
Whether this capability is supported by the model.
- `structured_outputs: CapabilitySupport`
Whether the model supports structured output / JSON mode / strict tool schemas.
- `supported: boolean`
Whether this capability is supported by the model.
- `thinking: ThinkingCapability`
Thinking capability and supported type configurations.
- `supported: boolean`
Whether this capability is supported by the model.
- `types: ThinkingTypes`
Supported thinking type configurations.
- `adaptive: CapabilitySupport`
Whether the model supports thinking with type 'adaptive' (auto).
- `supported: boolean`
Whether this capability is supported by the model.
- `enabled: CapabilitySupport`
Whether the model supports thinking with type 'enabled'.
- `supported: boolean`
Whether this capability is supported by the model.
- `created_at: string`
RFC 3339 datetime string representing the time at which the model was released. May be set to an epoch value if the release date is unknown.
- `display_name: string`
A human-readable name for the model.
- `max_input_tokens: number`
Maximum input context window size in tokens for this model.
- `max_tokens: number`
Maximum value for the `max_tokens` parameter when using this model.
- `type: "model"`
Object type.
For Models, this is always `"model"`.
- `"model"`
- `first_id: string`
First ID in the `data` list. Can be used as the `before_id` for the previous page.
- `has_more: boolean`
Indicates if there are more results in the requested page direction.
- `last_id: string`
Last ID in the `data` list. Can be used as the `after_id` for the next page.
### Example
```http
curl https://api.anthropic.com/v1/models \
-H 'anthropic-version: 2023-06-01' \
-H "X-Api-Key: $ANTHROPIC_API_KEY"
```

View File

@@ -1,86 +1,36 @@
## Retrieve
`$ ant models retrieve`
**get** `/v1/models/{model_id}`
Get a specific model.
The Models API response can be used to determine information about a specific model or resolve a model alias to a model ID.
### Path Parameters
### Parameters
- `model_id: string`
- `--model-id: string`
Model identifier or alias.
### Header Parameters
- `"anthropic-beta": optional array of AnthropicBeta`
- `--beta: optional array of AnthropicBeta`
Optional header to specify the beta version(s) you want to use.
- `UnionMember0 = string`
- `UnionMember1 = "message-batches-2024-09-24" or "prompt-caching-2024-07-31" or "computer-use-2024-10-22" or 20 more`
- `"message-batches-2024-09-24"`
- `"prompt-caching-2024-07-31"`
- `"computer-use-2024-10-22"`
- `"computer-use-2025-01-24"`
- `"pdfs-2024-09-25"`
- `"token-counting-2024-11-01"`
- `"token-efficient-tools-2025-02-19"`
- `"output-128k-2025-02-19"`
- `"files-api-2025-04-14"`
- `"mcp-client-2025-04-04"`
- `"mcp-client-2025-11-20"`
- `"dev-full-thinking-2025-05-14"`
- `"interleaved-thinking-2025-05-14"`
- `"code-execution-2025-05-22"`
- `"extended-cache-ttl-2025-04-11"`
- `"context-1m-2025-08-07"`
- `"context-management-2025-06-27"`
- `"model-context-window-exceeded-2025-08-26"`
- `"skills-2025-10-02"`
- `"fast-mode-2026-02-01"`
- `"output-300k-2026-03-24"`
- `"advisor-tool-2026-03-01"`
- `"user-profiles-2026-03-24"`
### Returns
- `ModelInfo = object { id, capabilities, created_at, 4 more }`
- `model_info: object { id, capabilities, created_at, 4 more }`
- `id: string`
Unique model identifier.
- `capabilities: ModelCapabilities`
- `capabilities: object { batch, citations, code_execution, 6 more }`
Model capability information.
- `batch: CapabilitySupport`
- `batch: object { supported }`
Whether the model supports the Batch API.
@@ -88,7 +38,7 @@ The Models API response can be used to determine information about a specific mo
Whether this capability is supported by the model.
- `citations: CapabilitySupport`
- `citations: object { supported }`
Whether the model supports citation generation.
@@ -96,7 +46,7 @@ The Models API response can be used to determine information about a specific mo
Whether this capability is supported by the model.
- `code_execution: CapabilitySupport`
- `code_execution: object { supported }`
Whether the model supports code execution tools.
@@ -104,11 +54,11 @@ The Models API response can be used to determine information about a specific mo
Whether this capability is supported by the model.
- `context_management: ContextManagementCapability`
- `context_management: object { clear_thinking_20251015, clear_tool_uses_20250919, compact_20260112, supported }`
Context management support and available strategies.
- `clear_thinking_20251015: CapabilitySupport`
- `clear_thinking_20251015: object { supported }`
Indicates whether a capability is supported.
@@ -116,7 +66,7 @@ The Models API response can be used to determine information about a specific mo
Whether this capability is supported by the model.
- `clear_tool_uses_20250919: CapabilitySupport`
- `clear_tool_uses_20250919: object { supported }`
Indicates whether a capability is supported.
@@ -124,7 +74,7 @@ The Models API response can be used to determine information about a specific mo
Whether this capability is supported by the model.
- `compact_20260112: CapabilitySupport`
- `compact_20260112: object { supported }`
Indicates whether a capability is supported.
@@ -136,11 +86,11 @@ The Models API response can be used to determine information about a specific mo
Whether this capability is supported by the model.
- `effort: EffortCapability`
- `effort: object { high, low, max, 3 more }`
Effort (reasoning_effort) support and available levels.
- `high: CapabilitySupport`
- `high: object { supported }`
Whether the model supports high effort level.
@@ -148,7 +98,7 @@ The Models API response can be used to determine information about a specific mo
Whether this capability is supported by the model.
- `low: CapabilitySupport`
- `low: object { supported }`
Whether the model supports low effort level.
@@ -156,7 +106,7 @@ The Models API response can be used to determine information about a specific mo
Whether this capability is supported by the model.
- `max: CapabilitySupport`
- `max: object { supported }`
Whether the model supports max effort level.
@@ -164,7 +114,7 @@ The Models API response can be used to determine information about a specific mo
Whether this capability is supported by the model.
- `medium: CapabilitySupport`
- `medium: object { supported }`
Whether the model supports medium effort level.
@@ -176,7 +126,7 @@ The Models API response can be used to determine information about a specific mo
Whether this capability is supported by the model.
- `xhigh: CapabilitySupport`
- `xhigh: object { supported }`
Indicates whether a capability is supported.
@@ -184,7 +134,7 @@ The Models API response can be used to determine information about a specific mo
Whether this capability is supported by the model.
- `image_input: CapabilitySupport`
- `image_input: object { supported }`
Whether the model accepts image content blocks.
@@ -192,7 +142,7 @@ The Models API response can be used to determine information about a specific mo
Whether this capability is supported by the model.
- `pdf_input: CapabilitySupport`
- `pdf_input: object { supported }`
Whether the model accepts PDF content blocks.
@@ -200,7 +150,7 @@ The Models API response can be used to determine information about a specific mo
Whether this capability is supported by the model.
- `structured_outputs: CapabilitySupport`
- `structured_outputs: object { supported }`
Whether the model supports structured output / JSON mode / strict tool schemas.
@@ -208,7 +158,7 @@ The Models API response can be used to determine information about a specific mo
Whether this capability is supported by the model.
- `thinking: ThinkingCapability`
- `thinking: object { supported, types }`
Thinking capability and supported type configurations.
@@ -216,11 +166,11 @@ The Models API response can be used to determine information about a specific mo
Whether this capability is supported by the model.
- `types: ThinkingTypes`
- `types: object { adaptive, enabled }`
Supported thinking type configurations.
- `adaptive: CapabilitySupport`
- `adaptive: object { supported }`
Whether the model supports thinking with type 'adaptive' (auto).
@@ -228,7 +178,7 @@ The Models API response can be used to determine information about a specific mo
Whether this capability is supported by the model.
- `enabled: CapabilitySupport`
- `enabled: object { supported }`
Whether the model supports thinking with type 'enabled'.
@@ -258,12 +208,10 @@ The Models API response can be used to determine information about a specific mo
For Models, this is always `"model"`.
- `"model"`
### Example
```http
curl https://api.anthropic.com/v1/models/$MODEL_ID \
-H 'anthropic-version: 2023-06-01' \
-H "X-Api-Key: $ANTHROPIC_API_KEY"
```
```cli
ant models retrieve \
--api-key my-anthropic-api-key \
--model-id model_id
```

View File

@@ -0,0 +1,175 @@
## Create embeddings
**post** `/embeddings`
Creates an embedding vector representing the input text.
### Body Parameters
- `input: string or array of string or array of number or array of array of number`
Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for all embedding models), cannot be an empty string, and any array must be 2048 dimensions or less. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens. In addition to the per-input token limit, all embedding models enforce a maximum of 300,000 tokens summed across all inputs in a single request.
- `String = string`
The string that will be turned into an embedding.
- `Array = array of string`
The array of strings that will be turned into an embedding.
- `Array = array of number`
The array of integers that will be turned into an embedding.
- `Array = array of array of number`
The array of arrays containing integers that will be turned into an embedding.
- `model: string or EmbeddingModel`
ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models) for descriptions of them.
- `string`
- `EmbeddingModel = "text-embedding-ada-002" or "text-embedding-3-small" or "text-embedding-3-large"`
- `"text-embedding-ada-002"`
- `"text-embedding-3-small"`
- `"text-embedding-3-large"`
- `dimensions: optional number`
The number of dimensions the resulting output embeddings should have. Only supported in `text-embedding-3` and later models.
- `encoding_format: optional "float" or "base64"`
The format to return the embeddings in. Can be either `float` or [`base64`](https://pypi.org/project/pybase64/).
- `"float"`
- `"base64"`
- `user: optional string`
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices#end-user-ids).
### Returns
- `CreateEmbeddingResponse object { data, model, object, usage }`
- `data: array of Embedding`
The list of embeddings generated by the model.
- `embedding: array of number`
The embedding vector, which is a list of floats. The length of vector depends on the model as listed in the [embedding guide](/docs/guides/embeddings).
- `index: number`
The index of the embedding in the list of embeddings.
- `object: "embedding"`
The object type, which is always "embedding".
- `"embedding"`
- `model: string`
The name of the model used to generate the embedding.
- `object: "list"`
The object type, which is always "list".
- `"list"`
- `usage: object { prompt_tokens, total_tokens }`
The usage information for the request.
- `prompt_tokens: number`
The number of tokens used by the prompt.
- `total_tokens: number`
The total number of tokens used by the request.
### Example
```http
curl https://api.openai.com/v1/embeddings \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"input": "The quick brown fox jumped over the lazy dog",
"model": "text-embedding-3-small",
"encoding_format": "float",
"user": "user-1234"
}'
```
#### Response
```json
{
"data": [
{
"embedding": [
0
],
"index": 0,
"object": "embedding"
}
],
"model": "model",
"object": "list",
"usage": {
"prompt_tokens": 0,
"total_tokens": 0
}
}
```
### Example
```http
curl https://api.openai.com/v1/embeddings \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"input": "The food was delicious and the waiter...",
"model": "text-embedding-ada-002",
"encoding_format": "float"
}'
```
#### Response
```json
{
"object": "list",
"data": [
{
"object": "embedding",
"embedding": [
0.0023064255,
-0.009327292,
.... (1536 floats total for ada-002)
-0.0028842222,
],
"index": 0
}
],
"model": "text-embedding-ada-002",
"usage": {
"prompt_tokens": 8,
"total_tokens": 8
}
}
```
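For completeness, here is a hedged Python sketch of the same request made with the `requests` library rather than curl; the endpoint, body fields, and response shape are taken from the reference above.
```python
# Minimal sketch: create an embedding over HTTP and read back the vector.
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/embeddings",
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "input": "The food was delicious and the waiter...",
        "model": "text-embedding-3-small",
        "encoding_format": "float",
    },
)
resp.raise_for_status()
body = resp.json()
vector = body["data"][0]["embedding"]          # list of floats
print(len(vector), "dims,", body["usage"]["total_tokens"], "tokens")
```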

View File

@@ -0,0 +1,367 @@
## Create image
**post** `/images/generations`
Creates an image given a prompt. [Learn more](/docs/guides/images).
### Body Parameters
- `prompt: string`
A text description of the desired image(s). The maximum length is 32000 characters for the GPT image models, 1000 characters for `dall-e-2` and 4000 characters for `dall-e-3`.
- `background: optional "transparent" or "opaque" or "auto"`
Allows you to set transparency for the background of the generated image(s).
This parameter is only supported for the GPT image models. Must be one of
`transparent`, `opaque` or `auto` (default value). When `auto` is used, the
model will automatically determine the best background for the image.
If `transparent`, the output format needs to support transparency, so it
should be set to either `png` (default value) or `webp`.
- `"transparent"`
- `"opaque"`
- `"auto"`
- `model: optional string or ImageModel`
The model to use for image generation. One of `dall-e-2`, `dall-e-3`, or a GPT image model (`gpt-image-1`, `gpt-image-1-mini`, `gpt-image-1.5`). Defaults to `dall-e-2` unless a parameter specific to the GPT image models is used.
- `string`
- `ImageModel = "gpt-image-1.5" or "dall-e-2" or "dall-e-3" or 2 more`
- `"gpt-image-1.5"`
- `"dall-e-2"`
- `"dall-e-3"`
- `"gpt-image-1"`
- `"gpt-image-1-mini"`
- `moderation: optional "low" or "auto"`
Control the content-moderation level for images generated by the GPT image models. Must be either `low` for less restrictive filtering or `auto` (default value).
- `"low"`
- `"auto"`
- `n: optional number`
The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only `n=1` is supported.
- `output_compression: optional number`
The compression level (0-100%) for the generated images. This parameter is only supported for the GPT image models with the `webp` or `jpeg` output formats, and defaults to 100.
- `output_format: optional "png" or "jpeg" or "webp"`
The format in which the generated images are returned. This parameter is only supported for the GPT image models. Must be one of `png`, `jpeg`, or `webp`.
- `"png"`
- `"jpeg"`
- `"webp"`
- `partial_images: optional number`
The number of partial images to generate. This parameter is used for
streaming responses that return partial images. Value must be between 0 and 3.
When set to 0, the response will be a single image sent in one streaming event.
Note that the final image may be sent before the full number of partial images
are generated if the full image is generated more quickly.
- `quality: optional "standard" or "hd" or "low" or 3 more`
The quality of the image that will be generated.
- `auto` (default value) will automatically select the best quality for the given model.
- `high`, `medium` and `low` are supported for the GPT image models.
- `hd` and `standard` are supported for `dall-e-3`.
- `standard` is the only option for `dall-e-2`.
- `"standard"`
- `"hd"`
- `"low"`
- `"medium"`
- `"high"`
- `"auto"`
- `response_format: optional "url" or "b64_json"`
The format in which generated images with `dall-e-2` and `dall-e-3` are returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes after the image has been generated. This parameter isn't supported for the GPT image models, which always return base64-encoded images.
- `"url"`
- `"b64_json"`
- `size: optional "auto" or "1024x1024" or "1536x1024" or 5 more`
The size of the generated images. Must be one of `1024x1024`, `1536x1024` (landscape), `1024x1536` (portrait), or `auto` (default value) for the GPT image models, one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`, and one of `1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3`.
- `"auto"`
- `"1024x1024"`
- `"1536x1024"`
- `"1024x1536"`
- `"256x256"`
- `"512x512"`
- `"1792x1024"`
- `"1024x1792"`
- `stream: optional boolean`
Generate the image in streaming mode. Defaults to `false`. See the
[Image generation guide](/docs/guides/image-generation) for more information.
This parameter is only supported for the GPT image models.
- `style: optional "vivid" or "natural"`
The style of the generated images. This parameter is only supported for `dall-e-3`. Must be one of `vivid` or `natural`. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images.
- `"vivid"`
- `"natural"`
- `user: optional string`
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices#end-user-ids).
### Returns
- `ImagesResponse object { created, background, data, 4 more }`
The response from the image generation endpoint.
- `created: number`
The Unix timestamp (in seconds) of when the image was created.
- `background: optional "transparent" or "opaque"`
The background parameter used for the image generation. Either `transparent` or `opaque`.
- `"transparent"`
- `"opaque"`
- `data: optional array of Image`
The list of generated images.
- `b64_json: optional string`
The base64-encoded JSON of the generated image. Returned by default for the GPT image models, and only present if `response_format` is set to `b64_json` for `dall-e-2` and `dall-e-3`.
- `revised_prompt: optional string`
For `dall-e-3` only, the revised prompt that was used to generate the image.
- `url: optional string`
When using `dall-e-2` or `dall-e-3`, the URL of the generated image if `response_format` is set to `url` (default value). Unsupported for the GPT image models.
- `output_format: optional "png" or "webp" or "jpeg"`
The output format of the image generation. Either `png`, `webp`, or `jpeg`.
- `"png"`
- `"webp"`
- `"jpeg"`
- `quality: optional "low" or "medium" or "high"`
The quality of the image generated. Either `low`, `medium`, or `high`.
- `"low"`
- `"medium"`
- `"high"`
- `size: optional "1024x1024" or "1024x1536" or "1536x1024"`
The size of the image generated. Either `1024x1024`, `1024x1536`, or `1536x1024`.
- `"1024x1024"`
- `"1024x1536"`
- `"1536x1024"`
- `usage: optional object { input_tokens, input_tokens_details, output_tokens, 2 more }`
For `gpt-image-1` only, the token usage information for the image generation.
- `input_tokens: number`
The number of tokens (images and text) in the input prompt.
- `input_tokens_details: object { image_tokens, text_tokens }`
The input tokens detailed information for the image generation.
- `image_tokens: number`
The number of image tokens in the input prompt.
- `text_tokens: number`
The number of text tokens in the input prompt.
- `output_tokens: number`
The number of output tokens generated by the model.
- `total_tokens: number`
The total number of tokens (images and text) used for the image generation.
- `output_tokens_details: optional object { image_tokens, text_tokens }`
The output token details for the image generation.
- `image_tokens: number`
The number of image output tokens generated by the model.
- `text_tokens: number`
The number of text output tokens generated by the model.
### Example
```http
curl https://api.openai.com/v1/images/generations \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"prompt": "A cute baby sea otter",
"background": "transparent",
"moderation": "low",
"n": 1,
"output_compression": 100,
"output_format": "png",
"partial_images": 1,
"quality": "medium",
"response_format": "url",
"size": "1024x1024",
"style": "vivid",
"user": "user-1234"
}'
```
#### Response
```json
{
"created": 0,
"background": "transparent",
"data": [
{
"b64_json": "b64_json",
"revised_prompt": "revised_prompt",
"url": "url"
}
],
"output_format": "png",
"quality": "low",
"size": "1024x1024",
"usage": {
"input_tokens": 0,
"input_tokens_details": {
"image_tokens": 0,
"text_tokens": 0
},
"output_tokens": 0,
"total_tokens": 0,
"output_tokens_details": {
"image_tokens": 0,
"text_tokens": 0
}
}
}
```
### Generate image
```http
curl https://api.openai.com/v1/images/generations \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-image-1.5",
"prompt": "A cute baby sea otter",
"n": 1,
"size": "1024x1024"
}'
```
#### Response
```json
{
"created": 1713833628,
"data": [
{
"b64_json": "..."
}
],
"usage": {
"total_tokens": 100,
"input_tokens": 50,
"output_tokens": 50,
"input_tokens_details": {
"text_tokens": 10,
"image_tokens": 40
}
}
}
```
### Streaming
```http
curl https://api.openai.com/v1/images/generations \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-image-1.5",
"prompt": "A cute baby sea otter",
"n": 1,
"size": "1024x1024",
"stream": true
}' \
--no-buffer
```
#### Response
```text
event: image_generation.partial_image
data: {"type":"image_generation.partial_image","b64_json":"...","partial_image_index":0}
event: image_generation.completed
data: {"type":"image_generation.completed","b64_json":"...","usage":{"total_tokens":100,"input_tokens":50,"output_tokens":50,"input_tokens_details":{"text_tokens":10,"image_tokens":40}}}
```
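Since the GPT image models always return base64-encoded images, a caller typically has to decode `b64_json` before saving the result. A minimal Python sketch (the request body reuses the "Generate image" example above; the file name is illustrative):
```python
# Minimal sketch: request an image and write the base64-encoded result to disk.
import base64
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/images/generations",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"model": "gpt-image-1.5", "prompt": "A cute baby sea otter", "size": "1024x1024"},
)
resp.raise_for_status()
image = resp.json()["data"][0]
with open("otter.png", "wb") as f:   # GPT image models default to png output
    f.write(base64.b64decode(image["b64_json"]))
```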

View File

@@ -0,0 +1,69 @@
## Retrieve model
**get** `/models/{model}`
Retrieves a model instance, providing basic information about the model such as the owner and permissioning.
### Path Parameters
- `model: string`
### Returns
- `Model object { id, created, object, owned_by }`
Describes an OpenAI model offering that can be used with the API.
- `id: string`
The model identifier, which can be referenced in the API endpoints.
- `created: number`
The Unix timestamp (in seconds) when the model was created.
- `object: "model"`
The object type, which is always "model".
- `"model"`
- `owned_by: string`
The organization that owns the model.
### Example
```http
curl https://api.openai.com/v1/models/$MODEL \
-H "Authorization: Bearer $OPENAI_API_KEY"
```
#### Response
```json
{
"id": "id",
"created": 0,
"object": "model",
"owned_by": "owned_by"
}
```
### Example
```http
curl https://api.openai.com/v1/models/VAR_chat_model_id \
-H "Authorization: Bearer $OPENAI_API_KEY"
```
#### Response
```json
{
"id": "VAR_chat_model_id",
"object": "model",
"created": 1686935002,
"owned_by": "openai"
}
```

View File

@@ -0,0 +1,248 @@
## Streaming events
Stream Chat Completions in real time. Receive chunks of completions returned from the model using server-sent events. [Learn more](https://developers.openai.com/docs/guides/streaming-responses?api-mode=chat).
Represents a streamed chunk of a chat completion response returned by the model, based on the provided input. [Learn more](https://developers.openai.com/docs/guides/streaming-responses).
- `id: string`
A unique identifier for the chat completion. Each chunk has the same ID.
- `choices: array of object { delta, finish_reason, index, logprobs }`
A list of chat completion choices. Can contain more than one element if `n` is greater than 1. Can also be empty for the last chunk if you set `stream_options: {"include_usage": true}`.
- `delta: object { content, function_call, refusal, 2 more }`
A chat completion delta generated by streamed model responses.
- `content: optional string`
The contents of the chunk message.
- `function_call: optional object { arguments, name }`
Deprecated and replaced by `tool_calls`. The name and arguments of a function that should be called, as generated by the model.
- `arguments: optional string`
The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
- `name: optional string`
The name of the function to call.
- `refusal: optional string`
The refusal message generated by the model.
- `role: optional "developer" or "system" or "user" or 2 more`
The role of the author of this message.
- `"developer"`
- `"system"`
- `"user"`
- `"assistant"`
- `"tool"`
- `tool_calls: optional array of object { index, id, function, type }`
- `index: number`
- `id: optional string`
The ID of the tool call.
- `function: optional object { arguments, name }`
- `arguments: optional string`
The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
- `name: optional string`
The name of the function to call.
- `type: optional "function"`
The type of the tool. Currently, only `function` is supported.
- `finish_reason: "stop" or "length" or "tool_calls" or 2 more`
The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool, or `function_call` (deprecated) if the model called a function.
- `"stop"`
- `"length"`
- `"tool_calls"`
- `"content_filter"`
- `"function_call"`
- `index: number`
The index of the choice in the list of choices.
- `logprobs: optional object { content, refusal }`
Log probability information for the choice.
- `content: optional array of object { token, bytes, logprob, top_logprobs }`
A list of message content tokens with log probability information.
- `token: string`
The token.
- `bytes: array of number`
A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be `null` if there is no bytes representation for the token.
- `logprob: number`
The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value `-9999.0` is used to signify that the token is very unlikely.
- `top_logprobs: array of object { token, bytes, logprob }`
List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested `top_logprobs` returned.
- `token: string`
The token.
- `bytes: array of number`
A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be `null` if there is no bytes representation for the token.
- `logprob: number`
The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value `-9999.0` is used to signify that the token is very unlikely.
- `refusal: optional array of object { token, bytes, logprob, top_logprobs }`
A list of message refusal tokens with log probability information.
- `token: string`
The token.
- `bytes: array of number`
A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be `null` if there is no bytes representation for the token.
- `logprob: number`
The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value `-9999.0` is used to signify that the token is very unlikely.
- `top_logprobs: array of object { token, bytes, logprob }`
List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested `top_logprobs` returned.
- `token: string`
The token.
- `bytes: array of number`
A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be `null` if there is no bytes representation for the token.
- `logprob: number`
The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value `-9999.0` is used to signify that the token is very unlikely.
- `created: number`
The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.
- `model: string`
The model used to generate the completion.
- `object: "chat.completion.chunk"`
The object type, which is always `chat.completion.chunk`.
- `service_tier: optional "auto" or "default" or "flex" or 2 more`
Specifies the processing type used for serving the request.
- If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
- If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
- If set to '[flex](https://developers.openai.com/docs/guides/flex-processing)' or '[priority](https://openai.com/api-priority-processing/)', then the request will be processed with the corresponding service tier.
- When not set, the default behavior is 'auto'.
When the `service_tier` parameter is set, the response body will include the `service_tier` value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
- `"auto"`
- `"default"`
- `"flex"`
- `"scale"`
- `"priority"`
- `system_fingerprint: optional string`
Deprecated. This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.
- `usage: optional [CompletionUsage](https://developers.openai.com/api/reference/resources/completions#\(resource\)%20completions%20%3E%20\(model\)%20completion_usage%20%3E%20\(schema\)) { completion_tokens, prompt_tokens, total_tokens, 2 more }`
An optional field that will only be present when you set `stream_options: {"include_usage": true}` in your request. When present, it contains a null value **except for the last chunk** which contains the token usage statistics for the entire request.
**NOTE:** If the stream is interrupted or cancelled, you may not receive the final usage chunk which contains the total token usage for the request.
- `completion_tokens: number`
Number of tokens in the generated completion.
- `prompt_tokens: number`
Number of tokens in the prompt.
- `total_tokens: number`
Total number of tokens used in the request (prompt + completion).
- `completion_tokens_details: optional object { accepted_prediction_tokens, audio_tokens, reasoning_tokens, rejected_prediction_tokens }`
Breakdown of tokens used in a completion.
- `accepted_prediction_tokens: optional number`
When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion.
- `audio_tokens: optional number`
Audio input tokens generated by the model.
- `reasoning_tokens: optional number`
Tokens generated by the model for reasoning.
- `rejected_prediction_tokens: optional number`
When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits.
- `prompt_tokens_details: optional object { audio_tokens, cached_tokens }`
Breakdown of tokens used in the prompt.
- `audio_tokens: optional number`
Audio input tokens present in the prompt.
- `cached_tokens: optional number`
Cached tokens present in the prompt.
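As a consumer-side illustration, here is a hedged sketch of accumulating these chunks with the official `openai` Python SDK (v1+); the model name is illustrative, and the `stream_options`/`usage` behavior follows the field descriptions above.
```python
# Minimal sketch: accumulate streamed chat.completion.chunk deltas.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

stream = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Say hello"}],
    stream=True,
    stream_options={"include_usage": True},
)

parts = []
for chunk in stream:
    # choices can be empty on the final usage-only chunk
    if chunk.choices and chunk.choices[0].delta.content:
        parts.append(chunk.choices[0].delta.content)
    if chunk.usage:  # present only on the last chunk when include_usage is set
        print("total tokens:", chunk.usage.total_tokens)
print("".join(parts))
```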

View File

@@ -0,0 +1,786 @@
# Anthropic Protocol Adapter Checklist
> Compiled from the Appendix D template in [conversion_design.md](./conversion_design.md); covers every integration detail of the Anthropic API.
---
## Table of Contents
1. [Protocol Basics](#1-protocol-basics)
2. [Interface Identification](#2-interface-identification)
3. [Request Header Construction](#3-request-header-construction)
4. [Core Layer — Chat Request Codec](#4-core-layer--chat-request-codec)
5. [Core Layer — Chat Response Codec](#5-core-layer--chat-response-codec)
6. [Core Layer — Streaming Codec](#6-core-layer--streaming-codec)
7. [Extension-Layer Interfaces](#7-extension-layer-interfaces)
8. [Error Encoding](#8-error-encoding)
9. [Self-Check List](#9-self-check-list)
---
## 1. Protocol Basics
| Item | Description |
|------|------|
| Protocol name | `"anthropic"` |
| Protocol version | `2023-06-01` (passed via the `anthropic-version` header) |
| Base URL | `https://api.anthropic.com` |
| Authentication | `x-api-key: <api_key>` |
---
## 2. Interface Identification
### 2.1 URL path patterns
| URL path | InterfaceType |
|----------|---------------|
| `/v1/messages` | CHAT |
| `/v1/models` | MODELS |
| `/v1/models/{model}` | MODEL_INFO |
| `/v1/batches` | pass-through |
| `/v1/messages/count_tokens` | pass-through |
| `/v1/*` | pass-through |
### 2.2 Interface capability matrix
```
Anthropic.supportsInterface(type):
CHAT: return true
MODELS: return true
MODEL_INFO: return true
EMBEDDINGS: return false // Anthropic has no such endpoint
RERANK: return false // Anthropic has no such endpoint
default: return false
```
### 2.3 URL mapping table
```
Anthropic.buildUrl(nativePath, interfaceType):
switch interfaceType:
case CHAT: return "/v1/messages"
case MODELS: return "/v1/models"
case MODEL_INFO: return "/v1/models/{modelId}"
default: return nativePath
```
EMBEDDINGS and RERANK are not supported: `supportsInterface` returns false for them, and the engine automatically falls back to pass-through.
---
## 3. Request Header Construction
### 3.1 buildHeaders
```
Anthropic.buildHeaders(provider):
result = {}
result["x-api-key"] = provider.api_key
result["anthropic-version"] = provider.adapter_config["anthropic_version"] ?? "2023-06-01"
if provider.adapter_config["anthropic_beta"]:
result["anthropic-beta"] = provider.adapter_config["anthropic_beta"].join(",")
result["Content-Type"] = "application/json"
return result
```
### 3.2 adapter_config contract
| Key | Type | Required | Default | Description |
|-----|------|------|--------|------|
| `anthropic_version` | String | No | `"2023-06-01"` | API version, mapped to the `anthropic-version` header |
| `anthropic_beta` | Array\<String\> | No | `[]` | List of beta feature flags, joined with commas into the `anthropic-beta` header |
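A minimal Python sketch of the buildHeaders rule above (the `provider` dict mirrors the pseudocode and is illustrative, not a real type in the codebase):
```python
# Sketch: build Anthropic request headers from a provider-style dict.
def build_headers(provider: dict) -> dict:
    cfg = provider.get("adapter_config") or {}
    headers = {
        "x-api-key": provider["api_key"],
        "anthropic-version": cfg.get("anthropic_version", "2023-06-01"),
        "Content-Type": "application/json",
    }
    if cfg.get("anthropic_beta"):
        # beta flags are joined with commas into a single header value
        headers["anthropic-beta"] = ",".join(cfg["anthropic_beta"])
    return headers

# Example: two beta flags become one comma-separated anthropic-beta header.
build_headers({
    "api_key": "sk-ant-...",
    "adapter_config": {"anthropic_beta": ["files-api-2025-04-14", "mcp-client-2025-04-04"]},
})
```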
---
## 4. Core Layer — Chat Request Codec
### 4.1 Decoder (Anthropic → Canonical)
#### System messages
```
decodeSystem(system):
if system is None: return None
if system is String: return system
return system.map(s => SystemBlock {text: s.text})
```
Anthropic uses a top-level `system` field (a String or an array of SystemBlock), which is extracted directly into `canonical.system`.
#### Message role mapping
| Anthropic role | Canonical role | Notes |
|----------------|---------------|------|
| `user` | `user` | Direct mapping; may contain tool_result |
| `assistant` | `assistant` | Direct mapping |
**Key difference**: Anthropic has no `system` or `tool` roles. The system prompt is passed via the top-level field, and tool_result blocks are embedded in the content array of user messages.
#### Content block decoding
```
decodeContentBlocks(content):
if content is String: return [{type: "text", text: content}]
return content.map(block => {
switch block.type:
"text" → TextBlock{text: block.text}
"tool_use" → ToolUseBlock{id: block.id, name: block.name, input: block.input}
"tool_result" → ToolResultBlock{tool_use_id: block.tool_use_id, ...}
"thinking" → ThinkingBlock{thinking: block.thinking}
"redacted_thinking" → drop // Anthropic-only; not kept in the intermediate layer
})
```
**tool_result role conversion**:
```
decodeMessage(msg):
switch msg.role:
case "user":
blocks = decodeContentBlocks(msg.content)
toolResults = blocks.filter(b => b.type == "tool_result")
others = blocks.filter(b => b.type != "tool_result")
if toolResults.length > 0:
return [
...(others.length > 0 ? [{role: "user", content: others}] : []),
{role: "tool", content: toolResults}]
return [{role: "user", content: blocks}]
case "assistant":
return [{role: "assistant", content: decodeContentBlocks(msg.content)}]
```
`tool_result` blocks inside an Anthropic user message are split out into separate Canonical `tool`-role messages, as sketched below.
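A minimal Python sketch of that split, assuming content blocks are plain dicts with a `type` key (names are illustrative):
```python
# Sketch: split tool_result blocks out of an Anthropic user message
# into a separate Canonical "tool" message.
def decode_user_message(blocks: list[dict]) -> list[dict]:
    tool_results = [b for b in blocks if b["type"] == "tool_result"]
    others = [b for b in blocks if b["type"] != "tool_result"]
    if not tool_results:
        return [{"role": "user", "content": blocks}]
    out = []
    if others:
        out.append({"role": "user", "content": others})
    out.append({"role": "tool", "content": tool_results})
    return out
```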
#### Tool definitions
| Anthropic | Canonical | Notes |
|-----------|-----------|------|
| `tools[].name` | `tools[].name` | Direct mapping |
| `tools[].description` | `tools[].description` | Direct mapping |
| `tools[].input_schema` | `tools[].input_schema` | Same field name |
| `tools[].type` | — | Anthropic has no function wrapper layer |
#### Tool choice
| Anthropic tool_choice | Canonical ToolChoice |
|-----------------------|---------------------|
| `{type: "auto"}` | `{type: "auto"}` |
| `{type: "none"}` | `{type: "none"}` |
| `{type: "any"}` | `{type: "any"}` |
| `{type: "tool", name}` | `{type: "tool", name}` |
#### Parameter mapping
| Anthropic | Canonical | Notes |
|-----------|-----------|------|
| `max_tokens` | `parameters.max_tokens` | Direct mapping (required in Anthropic) |
| `temperature` | `parameters.temperature` | Direct mapping |
| `top_p` | `parameters.top_p` | Direct mapping |
| `top_k` | `parameters.top_k` | Direct mapping |
| `stop_sequences` (Array) | `parameters.stop_sequences` (Array) | Direct mapping |
| `stream` | `stream` | Direct mapping |
#### Additional common fields
```
decodeExtras(raw):
user_id = raw.metadata?.user_id
output_format = decodeOutputFormat(raw.output_config)
parallel_tool_use = raw.disable_parallel_tool_use == true ? false : null
thinking = raw.thinking ? ThinkingConfig {
type: raw.thinking.type, // "enabled" | "disabled" | "adaptive"
budget_tokens: raw.thinking.budget_tokens,
effort: raw.output_config?.effort } : null
```
**Decoding the three ThinkingConfig types**:
| Anthropic thinking.type | Canonical thinking.type | Notes |
|-------------------------|----------------------|------|
| `"enabled"` | `"enabled"` | Has budget_tokens; direct mapping |
| `"disabled"` | `"disabled"` | Direct mapping |
| `"adaptive"` | `"adaptive"` | Anthropic decides automatically whether to think; mapped to `"adaptive"` (a newly added Canonical value) |
> **Note**: `thinking.display` (`"summarized"` / `"omitted"`) is an Anthropic-specific field that controls how thinking content is displayed in the response; it is not promoted to a common field.
**output_config decoding**:
```
decodeOutputFormat(output_config):
if output_config?.format?.type == "json_schema":
return { type: "json_schema", json_schema: { name: "output", schema: output_config.format.schema, strict: true } }
return null
```
| Anthropic | Canonical | Extraction rule |
|-----------|-----------|---------|
| `metadata.user_id` | `user_id` | Extracted from the nested object |
| `output_config.format` | `output_format` | Only the `json_schema` type is supported; mapped to the Canonical OutputFormat |
| `output_config.effort` | `thinking.effort` | `"low"` / `"medium"` / `"high"` / `"xhigh"` / `"max"`, direct mapping |
| `disable_parallel_tool_use` | `parallel_tool_use` | **Semantics inverted**: true → false |
| `thinking.type` | `thinking.type` | Direct mapping |
| `thinking.budget_tokens` | `thinking.budget_tokens` | Direct mapping |
#### Protocol-specific fields
| Field | Handling |
|------|---------|
| `cache_control` | Ignored (Anthropic-only; not promoted to a common field) |
| `redacted_thinking` | Dropped during decoding; not kept in the intermediate layer |
| `metadata` (other than user_id) | Ignored |
| `thinking.display` | Ignored (controls how the response is displayed; does not affect request semantics) |
| `container` | Ignored (container identifier; protocol-specific) |
| `inference_geo` | Ignored (inference geography control; protocol-specific) |
| `service_tier` | Ignored (service tier selection; protocol-specific) |
#### Protocol constraints
- `max_tokens` is a **required** field
- messages must start with the `user` role
- `user` and `assistant` roles must strictly alternate (except for consecutive tool_result turns)
- tool_result must immediately follow the assistant message that contains the corresponding tool_use
### 4.2 Encoder (Canonical → Anthropic)
#### Model name
`canonical.model` is overridden with `provider.model_name`.
#### System message injection
```
encodeSystem(system):
if system is String: return system
return system.map(s => ({text: s.text}))
```
`canonical.system` is encoded as Anthropic's top-level `system` field.
#### Message encoding
**Key difference**: Canonical `tool`-role messages must be merged into Anthropic `user` messages:
```
encodeMessages(canonical):
result = []
for msg in canonical.messages:
switch msg.role:
case "user":
result.append({role: "user", content: encodeContentBlocks(msg.content)})
case "assistant":
result.append({role: "assistant", content: encodeContentBlocks(msg.content)})
case "tool":
// the tool role is converted into tool_result blocks inside an Anthropic user message
toolResults = msg.content.filter(b => b.type == "tool_result")
if result.length > 0 && result.last.role == "user":
result.last.content = result.last.content + toolResults
else:
result.append({role: "user", content: toolResults})
```
#### Role constraint handling
Anthropic requires user/assistant messages to alternate strictly. When encoding (see the sketch after this list):
1. Merge Canonical `tool`-role messages into the adjacent `user` message
2. Ensure the first message has the `user` role (if not, inject an empty user message automatically)
3. Merge consecutive messages that share the same role
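A minimal Python sketch of those three constraints, treating messages as plain dicts (illustrative only; the real encoder also converts block types):
```python
# Sketch: normalize Canonical messages to satisfy Anthropic's role rules.
def normalize_for_anthropic(messages: list[dict]) -> list[dict]:
    out: list[dict] = []
    for msg in messages:
        role = "user" if msg["role"] == "tool" else msg["role"]   # 1. tool -> user
        content = list(msg["content"])
        if out and out[-1]["role"] == role:
            out[-1]["content"] += content                          # 3. merge same role
        else:
            out.append({"role": role, "content": content})
    if not out or out[0]["role"] != "user":
        out.insert(0, {"role": "user", "content": []})             # 2. leading user message
    return out
```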
#### Tool encoding
```
encodeTools(canonical):
if canonical.tools:
result.tools = canonical.tools.map(t => ({
name: t.name, description: t.description, input_schema: t.input_schema}))
encodeToolChoice(choice):
switch choice.type:
"auto" → {type: "auto"}
"none" → {type: "none"}
"any" → {type: "any"}
"tool" → {type: "tool", name: choice.name}
```
#### Common field encoding
```
encodeRequest(canonical, provider):
result = {
model: provider.model_name,
messages: encodeMessages(canonical),
max_tokens: canonical.parameters.max_tokens,
temperature: canonical.parameters.temperature,
top_p: canonical.parameters.top_p,
top_k: canonical.parameters.top_k,
stream: canonical.stream
}
if canonical.system:
result.system = encodeSystem(canonical.system)
if canonical.parameters.stop_sequences:
result.stop_sequences = canonical.parameters.stop_sequences
if canonical.user_id:
result.metadata = {user_id: canonical.user_id}
if canonical.output_format or canonical.thinking?.effort:
result.output_config = {}
if canonical.output_format:
result.output_config.format = encodeOutputFormat(canonical.output_format)
if canonical.thinking?.effort:
result.output_config.effort = canonical.thinking.effort
if canonical.parallel_tool_use == false:
result.disable_parallel_tool_use = true
if canonical.tools:
result.tools = canonical.tools.map(t => ({
name: t.name, description: t.description, input_schema: t.input_schema}))
if canonical.tool_choice:
result.tool_choice = encodeToolChoice(canonical.tool_choice)
if canonical.thinking:
result.thinking = encodeThinkingConfig(canonical.thinking)
return result
encodeThinkingConfig(canonical):
switch canonical.type:
"enabled":
cfg = {type: "enabled", budget_tokens: canonical.budget_tokens}
return cfg
"disabled":
return {type: "disabled"}
"adaptive":
return {type: "adaptive"}
return {type: "disabled"}
encodeOutputFormat(output_format):
switch output_format.type:
"json_schema":
return {type: "json_schema", schema: output_format.json_schema.schema}
"json_object":
return {type: "json_schema", schema: {type: "object"}}
```
#### Degradation handling
Check each unsupported field against the three-tier degradation strategy in §8.4 of the architecture document:
| Canonical field | When Anthropic does not support it | Degradation strategy |
|---------------|-------------------|---------|
| `thinking.effort` | Anthropic passes it via `output_config.effort` | Automatically mapped to `output_config.effort` |
| `stop_reason: "content_filter"` | Anthropic has no such value | Automatically mapped to `"end_turn"` |
| `output_format: "text"` | Anthropic has no text output format | Dropped; output_config is not set |
| `output_format: "json_object"` | Anthropic uses json_schema instead | Substitution: emit a json_schema with an empty schema |
---
## 5. Core Layer — Chat Response Codec
Verify the mapping field by field against the §4.7 CanonicalResponse definition.
### 5.1 Response structure
```
Top-level structure of an Anthropic response:
{
id: String,
type: "message",
role: "assistant",
model: String,
content: [ContentBlock...],
stop_reason: String,
stop_sequence: String | null,
stop_details: Object | null,
container: Object | null,
usage: { input_tokens, output_tokens, cache_read_input_tokens?, cache_creation_input_tokens?,
cache_creation?, inference_geo?, server_tool_use?, service_tier? }
}
```
**Additional fields** (compared with the §4.7 CanonicalResponse):
| Anthropic field | Description |
|----------------|------|
| `stop_details` | Structured refusal information: `{type: "refusal", category, explanation}`; present only when `stop_reason == "refusal"` |
| `container` | Container information: `{id, expires_at}`; present only when the code execution tool is used |
### 5.2 Decoder (Anthropic → Canonical)
```
decodeResponse(anthropicResp):
blocks = []
for block in anthropicResp.content:
switch block.type:
"text" → blocks.append({type: "text", text: block.text})
"tool_use" → blocks.append({type: "tool_use", id: block.id, name: block.name, input: block.input})
"thinking" → blocks.append({type: "thinking", thinking: block.thinking})
"redacted_thinking" → 丢弃 // 仅 Anthropic 使用,不在中间层保留
return CanonicalResponse {id, model, content: blocks, stop_reason: mapStopReason(anthropicResp.stop_reason),
usage: CanonicalUsage {input_tokens, output_tokens,
cache_read_tokens: anthropicResp.usage.cache_read_input_tokens,
cache_creation_tokens: anthropicResp.usage.cache_creation_input_tokens}}
```
**Content block decoding**:
- `text` → TextBlock (direct mapping; the `citations` field is ignored)
- `tool_use` → ToolUseBlock (direct mapping; the `caller` field is ignored)
- `thinking` → ThinkingBlock (direct mapping; the `signature` field is ignored)
- `redacted_thinking` → dropped (protocol-specific; not promoted to a common field)
- `server_tool_use` / `web_search_tool_result` / `code_execution_tool_result`, etc. → dropped (server-side tool blocks; protocol-specific)
**Stop reason mapping**:
| Anthropic stop_reason | Canonical stop_reason | Notes |
|-----------------------|-----------------------|------|
| `"end_turn"` | `"end_turn"` | Direct mapping |
| `"max_tokens"` | `"max_tokens"` | Direct mapping |
| `"tool_use"` | `"tool_use"` | Direct mapping |
| `"stop_sequence"` | `"stop_sequence"` | Direct mapping |
| `"pause_turn"` | `"pause_turn"` | Long-turn pause; mapped to a newly added Canonical value |
| `"refusal"` | `"refusal"` | Safety refusal; direct mapping |
**Token usage mapping**:
| Anthropic usage | Canonical Usage | Notes |
|-----------------|-----------------|------|
| `input_tokens` | `input_tokens` | Direct mapping |
| `output_tokens` | `output_tokens` | Direct mapping |
| `cache_read_input_tokens` | `cache_read_tokens` | Field renamed |
| `cache_creation_input_tokens` | `cache_creation_tokens` | Field renamed |
| `cache_creation` | — | Protocol-specific (broken down by TTL); not promoted |
| `inference_geo` | — | Protocol-specific; not promoted |
| `server_tool_use` | — | Protocol-specific; not promoted |
| `service_tier` | — | Protocol-specific; not promoted |
| — | `reasoning_tokens` | Anthropic does not return this field; always null |
**Protocol-specific content**:
| Field | Handling |
|------|---------|
| `redacted_thinking` | Dropped during decoding |
| `stop_sequence` | Ignored during decoding (Canonical covers it with stop_reason) |
| `stop_details` | Ignored during decoding (protocol-specific; not promoted) |
| `container` | Ignored during decoding (protocol-specific; not promoted) |
| `text.citations` | Ignored during decoding (protocol-specific; not promoted) |
| `tool_use.caller` | Ignored during decoding (protocol-specific; not promoted) |
| `thinking.signature` | Ignored during decoding (protocol-specific; not promoted; naturally preserved in same-protocol pass-through) |
### 5.3 Encoder (Canonical → Anthropic)
```
encodeResponse(canonical):
blocks = canonical.content.map(block => {
switch block.type:
"text" → {type: "text", text: block.text}
"tool_use" → {type: "tool_use", id: block.id, name: block.name, input: block.input}
"thinking" → {type: "thinking", thinking: block.thinking}})
return {id: canonical.id, type: "message", role: "assistant", model: canonical.model,
content: blocks,
stop_reason: mapCanonicalStopReason(canonical.stop_reason),
stop_sequence: None,
usage: {input_tokens: canonical.usage.input_tokens, output_tokens: canonical.usage.output_tokens,
cache_read_input_tokens: canonical.usage.cache_read_tokens,
cache_creation_input_tokens: canonical.usage.cache_creation_tokens}}
```
**Content block encoding**:
- TextBlock → `{type: "text", text}` (direct mapping)
- ToolUseBlock → `{type: "tool_use", id, name, input}` (direct mapping)
- ThinkingBlock → `{type: "thinking", thinking}` (direct mapping)
**Stop reason mapping**:
| Canonical stop_reason | Anthropic stop_reason |
|-----------------------|-----------------------|
| `"end_turn"` | `"end_turn"` |
| `"max_tokens"` | `"max_tokens"` |
| `"tool_use"` | `"tool_use"` |
| `"stop_sequence"` | `"stop_sequence"` |
| `"pause_turn"` | `"pause_turn"` |
| `"refusal"` | `"refusal"` |
| `"content_filter"` | `"end_turn"`(降级) |
**Degradation handling**:
| Canonical field | When Anthropic does not support it | Degradation strategy |
|---------------|-------------------|---------|
| `stop_reason: "content_filter"` | Anthropic has no such value | Automatically mapped to `"end_turn"` |
| `reasoning_tokens` | Anthropic has no such field | Dropped |
**Protocol-specific content**:
| Field | Handling |
|------|---------|
| `redacted_thinking` | Never emitted during encoding |
| `stop_sequence` | Always null during encoding |
| `stop_details` | Never emitted during encoding |
| `container` | Never emitted during encoding |
| `text.citations` | Never emitted during encoding |
| `thinking.signature` | Never emitted during encoding (naturally preserved in same-protocol pass-through) |
---
## 6. Core Layer — Streaming Codec
### 6.1 SSE format
Anthropic uses named SSE events that correspond almost 1:1 to CanonicalStreamEvent:
```
event: message_start
data: {"type":"message_start","message":{"id":"msg_xxx","model":"claude-4",...}}
event: content_block_start
data: {"type":"content_block_start","index":0,"content_block":{"type":"text","text":""}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"Hello"}}
event: content_block_stop
data: {"type":"content_block_stop","index":0}
event: message_delta
data: {"type":"message_delta","delta":{"stop_reason":"end_turn"},"usage":{"output_tokens":10}}
event: message_stop
data: {"type":"message_stop"}
event: ping
data: {"type":"ping"}
```
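A simplified sketch of splitting this wire format into (event, data) pairs. It assumes the text of complete frames is already available; real decoding must also hold back a trailing incomplete frame and handle chunk boundaries (see §6.3):
```typescript
function* parseSseFrames(text: string): Generator<{ event: string; data: string }> {
  for (const frame of text.split("\n\n")) {
    let event = "message";
    const dataLines: string[] = [];
    for (const line of frame.split("\n")) {
      if (line.startsWith("event:")) event = line.slice("event:".length).trim();
      else if (line.startsWith("data:")) dataLines.push(line.slice("data:".length).trim());
    }
    if (dataLines.length > 0) yield { event, data: dataLines.join("\n") };
  }
}

// Usage sketch (dispatch is a placeholder):
// for (const { event, data } of parseSseFrames(text)) dispatch(event, JSON.parse(data));
```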
### 6.2 StreamDecoder (Anthropic SSE → Canonical events)
Anthropic SSE events map almost 1:1 to CanonicalStreamEvent, making this the simplest decoder state machine:
| Anthropic SSE event | Canonical event | Notes |
|---|---|---|
| `message_start` | MessageStartEvent | Direct mapping |
| `content_block_start` | ContentBlockStartEvent | Maps content_block directly |
| `content_block_delta` | ContentBlockDeltaEvent | See the delta type mapping table below |
| `content_block_stop` | ContentBlockStopEvent | Direct mapping |
| `message_delta` | MessageDeltaEvent | Maps delta and usage directly |
| `message_stop` | MessageStopEvent | Direct mapping |
| `ping` | PingEvent | Direct mapping |
| `error` | ErrorEvent | Direct mapping |
**Delta type mapping** (within `content_block_delta` events):
| Anthropic delta type | Canonical delta type | Notes |
|---------------------|---------------------|------|
| `text_delta` | `{type: "text_delta", text}` | Direct mapping |
| `input_json_delta` | `{type: "input_json_delta", partial_json}` | Direct mapping |
| `thinking_delta` | `{type: "thinking_delta", thinking}` | Direct mapping |
| `citations_delta` | Dropped | Protocol-specific, not promoted to a common field |
| `signature_delta` | Dropped | Protocol-specific (multi-turn thinking signature continuity), not promoted |
**content_block_start type mapping**
| Anthropic content_block type | Canonical content_block | Notes |
|------------------------------|----------------------|------|
| `{type: "text", text: ""}` | `{type: "text", text: ""}` | Direct mapping |
| `{type: "tool_use", id, name, input: {}}` | `{type: "tool_use", id, name, input: {}}` | Direct mapping |
| `{type: "thinking", thinking: ""}` | `{type: "thinking", thinking: ""}` | Direct mapping |
| `{type: "redacted_thinking", data: ""}` | Entire block dropped | Subsequent deltas are skipped until content_block_stop |
| `server_tool_use` / `web_search_tool_result` and similar | Dropped | Server-side tool blocks, protocol-specific |
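A sketch of this `content_block_start` handling, including the `redactedBlocks` bookkeeping used to drop entire blocks. The event and state shapes are assumptions for illustration:
```typescript
interface DecoderState {
  openBlocks: Set<number>;
  redactedBlocks: Set<number>;
}

function onContentBlockStart(
  state: DecoderState,
  ev: { index: number; content_block: { type: string; [key: string]: any } }
): { type: "content_block_start"; index: number; content_block: any } | null {
  const block = ev.content_block;
  if (block.type === "text" || block.type === "tool_use" || block.type === "thinking") {
    state.openBlocks.add(ev.index);
    return { type: "content_block_start", index: ev.index, content_block: block };
  }
  // redacted_thinking, server_tool_use, web_search_tool_result, ...:
  // remember the index so later deltas and the stop event are dropped too.
  state.redactedBlocks.add(ev.index);
  return null;
}
```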
### 6.3 StreamDecoder State Machine
```
StreamDecoderState {
messageStarted: Boolean
openBlocks: Set<Integer>
currentBlockType: Map<Integer, String>
currentBlockId: Map<Integer, String>
redactedBlocks: Set<Integer> // tracks redacted_thinking (and other dropped) blocks
utf8Remainder: Option<ByteArray> // UTF-8 safety across chunk boundaries
accumulatedUsage: Option<CanonicalUsage>
}
```
The Anthropic decoder does not need OpenAI's `toolCallIdMap` / `toolCallNameMap` / `toolCallArguments`, because Anthropic's events already carry an explicit structure.
**Key handling**:
- **`redacted_thinking`**: detected by type in the `content_block_start` event; the index is added to `redactedBlocks`, and all subsequent delta and stop events for that index are dropped
- **`citations_delta` / `signature_delta`**: dropped directly during delta mapping, without affecting the block lifecycle
- **Server-side tool blocks such as `server_tool_use`**: handled the same way as `redacted_thinking`: added to `redactedBlocks` and dropped
- **UTF-8 safety**: UTF-8 byte sequences split across chunk boundaries are buffered via `utf8Remainder` (see the sketch after this list)
- **Usage accumulation**: the usage in `message_delta` is merged with the usage from `message_start`
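For the UTF-8 point, a streaming `TextDecoder` is one way to get the `utf8Remainder` behaviour without tracking partial bytes by hand; a minimal sketch:
```typescript
const utf8 = new TextDecoder("utf-8");

function decodeChunk(chunk: Uint8Array): string {
  // stream: true buffers an incomplete multi-byte sequence at the end of the chunk
  // and completes it on the next call instead of emitting U+FFFD.
  return utf8.decode(chunk, { stream: true });
}

function flushDecoder(): string {
  // At end of stream, flush whatever is still buffered.
  return utf8.decode();
}
```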
### 6.4 StreamEncoder (Canonical → Anthropic SSE)
| Canonical event | Anthropic SSE event | Notes |
|---|---|---|
| MessageStartEvent | `event: message_start` | Direct mapping |
| ContentBlockStartEvent | `event: content_block_start` | Maps content_block directly |
| ContentBlockDeltaEvent | `event: content_block_delta` | See the delta encoding table below |
| ContentBlockStopEvent | `event: content_block_stop` | Direct mapping |
| MessageDeltaEvent | `event: message_delta` | Direct mapping |
| MessageStopEvent | `event: message_stop` | Direct mapping |
| PingEvent | `event: ping` | Direct mapping |
| ErrorEvent | `event: error` | Direct mapping |
**Delta encoding table**
| Canonical delta type | Anthropic delta type | Notes |
|---------------------|---------------------|------|
| `{type: "text_delta", text}` | `text_delta` | Direct mapping |
| `{type: "input_json_delta", partial_json}` | `input_json_delta` | Direct mapping |
| `{type: "thinking_delta", thinking}` | `thinking_delta` | Direct mapping |
**Buffering strategy**: no buffering is needed; each Canonical event is encoded directly into the corresponding Anthropic SSE event.
**SSE encoding format**:
```
event: <event_type>\n
data: <json_payload>\n
\n
```
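A one-function sketch of this framing; since no buffering is involved, the encoder is just string concatenation:
```typescript
function encodeSseFrame(eventType: string, payload: unknown): string {
  return `event: ${eventType}\ndata: ${JSON.stringify(payload)}\n\n`;
}

// Example (payload shape is illustrative):
// encodeSseFrame("content_block_delta",
//   { type: "content_block_delta", index: 0, delta: { type: "text_delta", text: "Hello" } });
```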
---
## 7. Extension-Layer Endpoints
### 7.1 /models & /models/{model}
**List endpoint** `GET /v1/models`
| Item | Notes |
|------|------|
| Endpoint exists | Yes |
| Request format | GET request; supports the `limit`, `after_id`, and `before_id` query parameters |
Response decoder (Anthropic → Canonical):
```
decodeModelsResponse(anthropicResp):
return CanonicalModelList {
models: anthropicResp.data.map(m => CanonicalModel {
id: m.id, name: m.display_name ?? m.id, created: parseTimestamp(m.created_at),
owned_by: "anthropic"})}
parseTimestamp(timestamp):
// Anthropic returns an RFC 3339 string (e.g. "2025-05-14T00:00:00Z"); convert it to a Unix timestamp
return rfc3339ToUnix(timestamp) ?? 0
```
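The `rfc3339ToUnix` / `unixToRfc3339` helpers above are names from this document, not library functions; one possible TypeScript sketch built on `Date`:
```typescript
function rfc3339ToUnix(timestamp: string): number | null {
  const ms = Date.parse(timestamp); // accepts RFC 3339 / ISO 8601 strings
  return Number.isNaN(ms) ? null : Math.floor(ms / 1000);
}

function unixToRfc3339(seconds: number): string {
  // Note: toISOString includes milliseconds, e.g. "2025-05-14T00:00:00.000Z"
  return new Date(seconds * 1000).toISOString();
}

function epochRfc3339(): string {
  return new Date(0).toISOString(); // "1970-01-01T00:00:00.000Z"
}
```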
Response encoder (Canonical → Anthropic):
```
encodeModelsResponse(canonical):
return {data: canonical.models.map(m => ({
id: m.id,
display_name: m.name ?? m.id,
created_at: m.created ? unixToRfc3339(m.created) : epochRfc3339(),
type: "model"})),
has_more: false,
first_id: canonical.models[0]?.id, last_id: canonical.models.last?.id}
```
**Detail endpoint** `GET /v1/models/{model}`
| Item | Notes |
|------|------|
| Endpoint exists | Yes |
| Request format | GET request with the `model_id` path parameter |
Response decoder (Anthropic → Canonical):
```
decodeModelInfoResponse(anthropicResp):
return CanonicalModelInfo {
id: anthropicResp.id, name: anthropicResp.display_name ?? anthropicResp.id,
created: parseTimestamp(anthropicResp.created_at), owned_by: "anthropic" }
```
Response encoder (Canonical → Anthropic):
```
encodeModelInfoResponse(canonical):
return {id: canonical.id,
display_name: canonical.name ?? canonical.id,
created_at: canonical.created ? unixToRfc3339(canonical.created) : epochRfc3339(),
type: "model"}
```
**Field mapping** (shared by the list and detail endpoints):
| Anthropic | Canonical | Notes |
|-----------|-----------|------|
| `data[].id` | `models[].id` | Direct mapping |
| `data[].display_name` | `models[].name` | Anthropic-specific display name |
| `data[].created_at` | `models[].created` | **Type conversion**: Anthropic uses an RFC 3339 string, Canonical a Unix timestamp |
| `data[].type: "model"` | — | Fixed value |
| `has_more` | — | Always false when encoding |
| `first_id` / `last_id` | — | Derived from the list |
| `data[].capabilities` | — | Protocol-specific, not promoted |
| `data[].max_input_tokens` | — | Protocol-specific, not promoted |
| `data[].max_tokens` | — | Protocol-specific, not promoted |
**Cross-protocol example** (inbound `/anthropic/v1/models`, target OpenAI):
```
Inbound: GET /anthropic/v1/models, x-api-key: sk-ant-xxx
→ client=anthropic, provider=openai
→ URL: /v1/models, Headers: Authorization: Bearer sk-xxx
OpenAI upstream response: {object: "list", data: [{id: "gpt-4o", object: "model", created: 1700000000, owned_by: "openai"}]}
→ OpenAI.decodeModelsResponse → CanonicalModelList
→ Anthropic.encodeModelsResponse
Returned to the client: {data: [{id: "gpt-4o", display_name: "gpt-4o", created_at: "2023-11-14T22:13:20Z", type: "model"}],
has_more: false, first_id: "gpt-4o", last_id: "gpt-4o"}
```
---
## 8. Error Encoding
### 8.1 Error Response Format
```json
{
"type": "error",
"error": {
"type": "invalid_request_error",
"message": "Error message"
}
}
```
### 8.2 encodeError
```
Anthropic.encodeError(error):
return {type: "error", error: {type: error.code, message: error.message}}
```
### 8.3 Common HTTP Status Codes
| HTTP Status | Meaning |
|-------------|------|
| 400 | Malformed request |
| 401 | Authentication failed (invalid API key) |
| 403 | Permission denied |
| 404 | Endpoint not found |
| 429 | Rate limited |
| 500 | Internal server error |
| 529 | Service overloaded |
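A sketch combining `encodeError` with a status lookup based on this table. It assumes the canonical error codes reuse Anthropic's error type names; adjust the mapping if the canonical model defines its own codes:
```typescript
interface CanonicalError { code: string; message: string; status?: number }

function encodeError(error: CanonicalError): { status: number; body: object } {
  return {
    status: error.status ?? statusForCode(error.code),
    body: { type: "error", error: { type: error.code, message: error.message } },
  };
}

function statusForCode(code: string): number {
  switch (code) {
    case "invalid_request_error": return 400;
    case "authentication_error":  return 401;
    case "permission_error":      return 403;
    case "not_found_error":       return 404;
    case "rate_limit_error":      return 429;
    case "overloaded_error":      return 529;
    default:                      return 500; // api_error and anything unknown
  }
}
```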
---
## 9. Self-Check Checklist
| Section | Check item |
|------|--------|
| §2 | [x] The `supportsInterface` return value is decided for every InterfaceType |
| §2 | [x] The `buildUrl` mapping is decided for every InterfaceType |
| §3 | [x] `buildHeaders(provider)` is implemented; the adapter_config contract is documented |
| §4 | [x] The chat request Decoder and Encoder are implemented (field by field against §4.1/§4.2) |
| §4 | [x] Role mapping and message-ordering constraints are handled (tool→user merge, first-message-must-be-user guarantee, alternation constraint) |
| §4 | [x] Tool-call (tool_use / tool_result) encoding and decoding are handled |
| §4 | [x] Protocol-specific fields are identified and their handling decided (cache_control ignored, redacted_thinking dropped) |
| §5 | [x] The chat response Decoder and Encoder are implemented (field by field against §4.7) |
| §5 | [x] The stop_reason mapping table is confirmed |
| §5 | [x] The usage field mapping is confirmed (input_tokens / cache_read_input_tokens, etc.) |
| §6 | [x] The streaming StreamDecoder and StreamEncoder are implemented (against §4.8) |
| §7 | [x] Extension-layer endpoint encoding and decoding are implemented (/models, /models/{model}) |
| §8 | [x] `encodeError` is implemented |
