docs: refactor the API reference, splitting it into standalone files and unifying the CLI-style format

2026-04-19 14:21:03 +08:00
parent b92974716f
commit 4dc518a5f4
17 changed files with 25227 additions and 8678 deletions

@@ -0,0 +1,248 @@
## Streaming events
Stream Chat Completions in real time. Receive chunks of completions returned from the model using server-sent events. [Learn more](https://developers.openai.com/docs/guides/streaming-responses?api-mode=chat).
Represents a streamed chunk of a chat completion response returned by the model, based on the provided input. [Learn more](https://developers.openai.com/docs/guides/streaming-responses).
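Before the field-by-field reference, a minimal consumption sketch using the official `openai` Python package (the model name and prompt are placeholders; the pattern, not the values, is the point):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# stream=True makes the SDK yield chat.completion.chunk objects
# as they arrive over server-sent events.
stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Say hello."}],
    stream=True,
)

parts = []
for chunk in stream:
    # choices can be empty (e.g. a final usage-only chunk), so guard first.
    if chunk.choices and chunk.choices[0].delta.content:
        parts.append(chunk.choices[0].delta.content)

print("".join(parts))
```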
id: string
A unique identifier for the chat completion. Each chunk has the same ID.
choices: array of object { delta, finish\_reason, index, logprobs }
A list of chat completion choices. Can contain more than one element if `n` is greater than 1. Can also be empty for the last chunk if you set `stream_options: {"include_usage": true}`.
delta: object { content, function\_call, refusal, 2 more }
A chat completion delta generated by streamed model responses.
content: optional string
The contents of the chunk message.
function\_call: optional object { arguments, name } (Deprecated)
Deprecated and replaced by `tool_calls`. The name and arguments of a function that should be called, as generated by the model.
arguments: optional string
The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
name: optional string
The name of the function to call.
refusal: optional string
The refusal message generated by the model.
role: optional "developer" or "system" or "user" or 2 more
The role of the author of this message.
One of the following:
"developer"
"system"
"user"
"assistant"
"tool"
tool\_calls: optional array of object { index, id, function, type }
index: number
The index of the tool call in the streamed list. Fragments of the same call share an index across chunks (see the merging sketch below).
id: optional string
The ID of the tool call.
function: optional object { arguments, name }
arguments: optional string
The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
name: optional string
The name of the function to call.
type: optional "function"
The type of the tool. Currently, only `function` is supported.
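A streamed tool call arrives as fragments spread across chunks, so clients typically merge them by `index` before parsing. A sketch of that merging, assuming a `stream` opened as in the sketch above with tools enabled:

```python
import json

# Merge streamed tool-call fragments into complete calls, keyed by index.
calls = {}  # index -> {"id": ..., "name": ..., "arguments": ...}

for chunk in stream:
    if not chunk.choices:
        continue
    for tc in chunk.choices[0].delta.tool_calls or []:
        call = calls.setdefault(tc.index, {"id": None, "name": "", "arguments": ""})
        if tc.id:
            call["id"] = tc.id
        if tc.function:
            if tc.function.name:
                call["name"] += tc.function.name
            if tc.function.arguments:
                # Arguments stream as JSON text fragments; concatenate now, parse later.
                call["arguments"] += tc.function.arguments

# Once the chunk with finish_reason == "tool_calls" has arrived, each call's
# concatenated arguments should form one JSON document. Validate before use,
# since the model does not always emit valid JSON.
for call in calls.values():
    call["arguments"] = json.loads(call["arguments"])
```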
finish\_reason: "stop" or "length" or "tool\_calls" or 2 more
The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool, or `function_call` (deprecated) if the model called a function.
One of the following:
"stop"
"length"
"tool\_calls"
"content\_filter"
"function\_call"
index: number
The index of the choice in the list of choices.
logprobs: optional object { content, refusal }
Log probability information for the choice (a decoding sketch follows the `choices` field below).
content: array of object { token, bytes, logprob, top\_logprobs }
A list of message content tokens with log probability information.
token: string
The token.
bytes: array of number
A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be `null` if there is no bytes representation for the token.
logprob: number
The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value `-9999.0` is used to signify that the token is very unlikely.
top\_logprobs: array of object { token, bytes, logprob }
List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested `top_logprobs` returned.
token: string
The token.
bytes: array of number
A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be `null` if there is no bytes representation for the token.
logprob: number
The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value `-9999.0` is used to signify that the token is very unlikely.
refusal: array of object { token, bytes, logprob, top\_logprobs }
A list of message refusal tokens with log probability information.
token: string
The token.
bytes: array of number
A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be `null` if there is no bytes representation for the token.
logprob: number
The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value `-9999.0` is used to signify that the token is very unlikely.
top\_logprobs: array of object { token, bytes, logprob }
List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested `top_logprobs` returned.
token: string
The token.
bytes: array of number
A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be `null` if there is no bytes representation for the token.
logprob: number
The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value `-9999.0` is used to signify that the token is very unlikely.
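Each `logprob` is a natural-log probability, and `bytes` carries the token's UTF-8 bytes so that characters split across tokens can be reassembled. A small decoding sketch over one choice's content logprobs (field access follows the shapes documented above):

```python
import math

def summarize_logprobs(choice):
    """Print each content token's probability and rebuild the text from bytes."""
    if choice.logprobs is None or choice.logprobs.content is None:
        return
    raw = bytearray()
    for item in choice.logprobs.content:
        # math.exp turns the natural-log probability back into a probability;
        # the -9999.0 sentinel for very unlikely tokens simply underflows to 0.0.
        print(f"{item.token!r}: p={math.exp(item.logprob):.4f}")
        if item.bytes:
            raw.extend(item.bytes)  # UTF-8 bytes; one character may span tokens
    print(raw.decode("utf-8", errors="replace"))
```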
created: number
The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.
model: string
The model used to generate the completion.
object: "chat.completion.chunk"
The object type, which is always `chat.completion.chunk`.
service\_tier: optional "auto" or "default" or "flex" or 2 more
Specifies the processing type used for serving the request.
- If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
- If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
- If set to '[flex](https://developers.openai.com/docs/guides/flex-processing)' or '[priority](https://openai.com/api-priority-processing/)', then the request will be processed with the corresponding service tier.
- When not set, the default behavior is 'auto'.
When the `service_tier` parameter is set, the response body will include the `service_tier` value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
One of the following:
"auto"
"default"
"flex"
"scale"
"priority"
system\_fingerprint: optional string (Deprecated)
This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.
usage: optional [CompletionUsage](https://developers.openai.com/api/reference/resources/completions#\(resource\)%20completions%20%3E%20\(model\)%20completion_usage%20%3E%20\(schema\)) { completion\_tokens, prompt\_tokens, total\_tokens, 2 more }
An optional field that will only be present when you set `stream_options: {"include_usage": true}` in your request. When present, it contains a null value **except for the last chunk** which contains the token usage statistics for the entire request.
**NOTE:** If the stream is interrupted or cancelled, you may not receive the final usage chunk which contains the total token usage for the request.
completion\_tokens: number
Number of tokens in the generated completion.
prompt\_tokens: number
Number of tokens in the prompt.
total\_tokens: number
Total number of tokens used in the request (prompt + completion).
completion\_tokens\_details: optional object { accepted\_prediction\_tokens, audio\_tokens, reasoning\_tokens, rejected\_prediction\_tokens }
Breakdown of tokens used in a completion.
accepted\_prediction\_tokens: optional number
When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion.
audio\_tokens: optional number
Audio tokens generated by the model.
reasoning\_tokens: optional number
Tokens generated by the model for reasoning.
rejected\_prediction\_tokens: optional number
When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits.
prompt\_tokens\_details: optional object { audio\_tokens, cached\_tokens }
Breakdown of tokens used in the prompt.
audio\_tokens: optional number
Audio input tokens present in the prompt.
cached\_tokens: optional number
Cached tokens present in the prompt.
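To close the loop on usage reporting: a sketch of opting in via `stream_options` and reading the final usage-only chunk (same SDK assumptions as the sketches above):

```python
stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Count to three."}],
    stream=True,
    stream_options={"include_usage": True},
)

usage = None
for chunk in stream:
    if chunk.usage is not None:
        # Only the final chunk carries usage, and its choices list is empty.
        usage = chunk.usage

if usage:  # may be missing if the stream was interrupted or cancelled
    print(usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)
```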