
feat: improve streaming test coverage and trim the cases

- Hoist shared definitions (tool_weather, image_url, json_schema_math) ahead of the feature blocks
- Trim streaming cases to a representative subset: 6-8 core + 1-2 per extension + representative advanced parameters
- OpenAI: 15 streaming cases (8 core + vision/tools/logprobs/json_schema + advanced parameters)
- Anthropic: 11 streaming cases (6 core + vision/tools/thinking + advanced parameters)
- Update README: add streaming test coverage principles and document the parse_sse_events helper
2026-04-21 17:18:35 +08:00
parent 6e11ada42c
commit 44d6af026a
3 changed files with 340 additions and 124 deletions


@@ -23,6 +23,7 @@ scripts/
| `TestResult` | Test-result dataclass (status code, latency, error type, response content) |
| `http_request()` | Plain HTTP request (with retries and automatic JSON serialization) |
| `http_stream_request()` | Streaming HTTP request (SSE, with retries) |
| `parse_sse_events()` | Extract the list of `data:` events from an SSE response body |
| `create_ssl_context()` | Create an SSL context that skips certificate verification (for test environments) |
| `run_test()` | Run a single test case and print structured output |
| `run_test_suite()` | Run the full test suite and print a summary |
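For orientation, the `parse_sse_events()` helper can be sketched roughly as below. This is an illustrative reimplementation, not the repository's actual code; the real helper may differ in details such as multi-line `data:` handling.

```python
# Illustrative sketch of parse_sse_events; the real helper in core may
# differ (e.g. multi-line data fields). Each SSE event payload follows
# a "data:" prefix on its own line.
from typing import List

def parse_sse_events(response_text: str) -> List[str]:
    """Collect the payload of every `data:` line in an SSE response body."""
    events: List[str] = []
    for line in response_text.splitlines():
        line = line.strip()
        if line.startswith("data:"):
            events.append(line[len("data:"):].strip())
    return events

sample = 'data: {"delta": "He"}\n\ndata: {"delta": "llo"}\n\ndata: [DONE]\n'
print(parse_sse_events(sample))
# ['{"delta": "He"}', '{"delta": "llo"}', '[DONE]']
```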
@@ -182,6 +183,7 @@ Headers:
2. **Negative cases cover common error scenarios** — missing parameters, type errors, out-of-range values, authentication failures
3. **Extended features are opt-in via flags** — avoids pulling unnecessary dependencies into the base tests
4. **Validators are written against the protocol specifications** — strictly check required fields, types, and enum values
5. **Streaming and non-streaming coverage stay in sync** — streaming only changes the transport, so feature coverage should correspond exactly (see below)
## Workflow for Adding a New Test Script
@@ -221,6 +223,7 @@ def validate_gemini_generate_content_response(response_text: str) -> Tuple[bool,
| `check_enum_value()` | Check an enum value |
| `check_array_items_type()` | Check the element type of an array |
| `validate_response_structure()` | Generic validator composed from the functions above |
| `parse_sse_events()` | Extract `data:` events from an SSE response body |
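A streaming validator typically composes `parse_sse_events()` with per-event JSON checks. A hedged sketch follows; the `id` field and the `[DONE]` terminal marker checked here are illustrative assumptions, not the scripts' actual validation rules.

```python
# Hedged sketch of a streaming validator built on parse_sse_events.
# The required field ("id") and terminal marker ("[DONE]") are assumptions
# for illustration only.
import json
from typing import List, Tuple

def parse_sse_events(response_text: str) -> List[str]:
    return [ln.strip()[5:].strip() for ln in response_text.splitlines()
            if ln.strip().startswith("data:")]

def validate_streaming_response(response_text: str) -> Tuple[bool, List[str]]:
    errors: List[str] = []
    events = parse_sse_events(response_text)
    if not events:
        errors.append("no data: events found")
    for ev in events:
        if ev == "[DONE]":  # OpenAI-style terminal marker
            continue
        try:
            chunk = json.loads(ev)
        except json.JSONDecodeError:
            errors.append(f"event is not valid JSON: {ev[:40]}")
            continue
        if not isinstance(chunk, dict) or "id" not in chunk:
            errors.append("chunk missing required field 'id'")
    return (len(errors) == 0, errors)

ok, errs = validate_streaming_response('data: {"id": "c1"}\n\ndata: [DONE]\n')
print(ok, errs)  # True []
```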
### 3. Create the test-script template
@@ -246,25 +249,65 @@ def validate_xxx_response(response_text: str) -> Tuple[bool, List[str]]:
"""验证响应结构(协议专用)"""
...
def validate_xxx_streaming_response(response_text: str) -> Tuple[bool, List[str]]:
"""验证流式响应结构(协议专用)"""
from core import parse_sse_events
...
def main():
parser = argparse.ArgumentParser(...)
parser.add_argument("--base_url", required=True, ...)
parser.add_argument("--api_key", default="", ...)
parser.add_argument("--model", default="...", ...)
parser.add_argument("--stream", action="store_true", ...)
parser.add_argument("--all", action="store_true", ...)
args = parser.parse_args()
cases: List[TestCase] = []
# Positive cases (all get a validator)
# ---- Shared definitions (used by both streaming and non-streaming cases) ----
# Put tool, image_url, and similar definitions before all feature blocks
# to avoid duplicating them in the streaming and non-streaming blocks
tool_xxx = { ... }
image_url = "..."
# ==== Non-streaming positive cases (all get a validator) ====
cases.append(TestCase(
desc="...", method="...", url=..., headers=..., body=...,
validator=validate_xxx_response
))
# Negative cases (no validator)
# ==== Non-streaming negative cases (no validator) ====
cases.append(TestCase(desc="...", method="...", url=..., headers=..., body=...))
# ==== --stream ====
if args.stream:
# Core streaming chat cases: every non-streaming positive case should have a streaming counterpart
# Only the transport differs (stream=True in the body and on the TestCase)
# Feature coverage (parameters, roles, multi-turn, etc.) must match the non-streaming cases
cases.append(TestCase(
desc="流式...", method="POST", url=..., headers=headers,
body={ ..., "stream": True },
stream=True,
validator=validate_xxx_streaming_response
))
# Streaming + other flag combinations (nested inside the --stream block)
if args.vision:
cases.append(TestCase(
desc="流式图片输入 (--stream + --vision)",
...,
stream=True,
validator=validate_xxx_streaming_response
))
if args.tools:
cases.append(TestCase(
desc="流式工具调用 (--stream + --tools)",
...,
stream=True,
validator=validate_xxx_streaming_response
))
run_test_suite(cases=cases, ssl_ctx=ssl_ctx, title="...", base_url=..., model=..., flags=...)
if __name__ == "__main__":
@@ -280,6 +323,28 @@ if __name__ == "__main__":
- **Control extended features via flags** — keeps the base tests lightweight
- **Follow the existing naming and code style** — Chinese comments, type annotations, dataclass usage
### Streaming Test Coverage Principles
Streaming (SSE) and non-streaming differ only in how data is transported; the server should process request parameters identically. Therefore:
1. **Every non-streaming positive case should have a streaming counterpart** — including the various message-role combinations, parameter combinations, tool calls, etc.
2. **Declare shared definitions up front** — put `tool`, `image_url`, `json_schema`, and similar definitions before all feature blocks so streaming and non-streaming cases share one instance instead of duplicating it
3. **Nest flag combinations inside the `--stream` block** — combined cases such as streaming+tools and streaming+vision go into `if args.tools:` / `if args.vision:` sub-blocks inside `if args.stream:`; no dedicated combination flag is needed
4. **Negative cases need no streaming versions** — parameter validation happens before the request is processed, independent of the transport
5. **Non-chat endpoints such as the Models API need no streaming tests** — they do not support streaming at all
| Case category | Non-streaming | Streaming |
|----------|--------|------|
| Basic chat / multi-turn chat | ✓ | ✓ |
| Message-role combinations (system, developer, etc.) | ✓ | ✓ |
| Parameter combinations (temperature, top_p, max_tokens, etc.) | ✓ | ✓ |
| Tool calls (all tool_choice modes) | ✓ | ✓ (check `--tools` inside the `--stream` block) |
| Vision (image input) | ✓ | ✓ (check `--vision` inside the `--stream` block) |
| Extended thinking / logprobs and similar features | ✓ | ✓ (check the matching flag inside the `--stream` block) |
| Advanced parameters (service_tier, reasoning_effort, etc.) | ✓ | ✓ |
| Negative cases (missing params, out-of-range, auth failure) | ✓ | ✗ (parameter validation is transport-independent) |
| Models API (GET endpoint) | ✓ | ✗ (no streaming support) |
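The nesting rule from principle 3 can be sketched as a minimal standalone example. `TestCase` and the flag names here are simplified stand-ins for the repository's actual types, not its real code.

```python
# Minimal sketch of the flag-nesting pattern: streaming flag combinations
# live inside the --stream block, so no dedicated combo flag is needed.
# TestCase and the argparse flags are simplified stand-ins.
import argparse
from dataclasses import dataclass

@dataclass
class TestCase:
    desc: str
    body: dict
    stream: bool = False

def collect_cases(args: argparse.Namespace) -> list:
    cases = []
    cases.append(TestCase(desc="basic chat", body={"stream": False}))
    if args.stream:
        # Core streaming case: same request, transport switched to SSE.
        cases.append(TestCase(desc="streaming basic chat",
                              body={"stream": True}, stream=True))
        # Combinations nest inside the --stream block.
        if args.tools:
            cases.append(TestCase(desc="streaming tool call",
                                  body={"stream": True, "tools": ["..."]},
                                  stream=True))
    return cases

args = argparse.Namespace(stream=True, tools=True)
print([c.desc for c in collect_cases(args)])
# ['basic chat', 'streaming basic chat', 'streaming tool call']
```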
## License
MIT


@@ -318,6 +318,24 @@ def main():
models_url = f"{base_url}/v1/models"
count_tokens_url = f"{base_url}/v1/messages/count_tokens"
# ---- Shared definitions (used by both streaming and non-streaming cases) ----
image_url = (
"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/"
"Gfp-wisconsin-madison-the-nature-boardwalk.jpg/"
"2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
)
tool_weather = {
"name": "get_weather",
"description": "获取指定城市的天气",
"input_schema": {
"type": "object",
"properties": {
"location": {"type": "string", "description": "城市名称"}
},
"required": ["location"]
}
}
# --- Collect test cases ---
cases: List[TestCase] = []
@@ -744,9 +762,7 @@ def main():
{"type": "text", "text": "用一个词描述这张图"},
{"type": "image", "source": {
"type": "url",
"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/"
"Gfp-wisconsin-madison-the-nature-boardwalk.jpg/"
"2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
"url": image_url
}}
]}]
},
@@ -755,64 +771,120 @@ def main():
# ==== --stream ====
if args.stream:
# Core cases
cases.append(TestCase(
desc="Basic streaming (--stream)",
desc="Streaming basic chat",
method="POST",
url=messages_url,
headers=headers,
body={
"model": model,
"max_tokens": 5,
"stream": True,
"messages": [{"role": "user", "content": "Hi"}]
},
body={"model": model, "max_tokens": 5, "stream": True, "messages": [{"role": "user", "content": "Hi"}]},
stream=True,
validator=validate_anthropic_streaming_response
))
cases.append(TestCase(
desc="流式 + system prompt (--stream)",
desc="流式 + system prompt",
method="POST",
url=messages_url,
headers=headers,
body={
"model": model,
"max_tokens": 5,
"stream": True,
"system": "Reply in one word.",
"messages": [{"role": "user", "content": "1+1="}]
},
body={"model": model, "max_tokens": 5, "stream": True, "system": "有帮助的助手", "messages": [{"role": "user", "content": "Hi"}]},
stream=True,
validator=validate_anthropic_streaming_response
))
cases.append(TestCase(
desc="流式 + stop_sequences (--stream)",
desc="流式多轮对话",
method="POST",
url=messages_url,
headers=headers,
body={
"model": model,
"max_tokens": 20,
"stream": True,
"stop_sequences": ["5"],
"messages": [{"role": "user", "content": "数数: 1,2,3,"}]
},
body={"model": model, "max_tokens": 5, "stream": True, "messages": [{"role": "user", "content": "Hi"}, {"role": "assistant", "content": "Hello"}, {"role": "user", "content": "1+1"}]},
stream=True,
validator=validate_anthropic_streaming_response
))
cases.append(TestCase(
desc="流式 temperature + top_p",
method="POST",
url=messages_url,
headers=headers,
body={"model": model, "max_tokens": 5, "stream": True, "temperature": 0.5, "top_p": 0.9, "messages": [{"role": "user", "content": "Hi"}]},
stream=True,
validator=validate_anthropic_streaming_response
))
cases.append(TestCase(
desc="流式 max_tokens",
method="POST",
url=messages_url,
headers=headers,
body={"model": model, "max_tokens": 3, "stream": True, "messages": [{"role": "user", "content": "Hi"}]},
stream=True,
validator=validate_anthropic_streaming_response
))
cases.append(TestCase(
desc="流式 stop_sequences",
method="POST",
url=messages_url,
headers=headers,
body={"model": model, "max_tokens": 10, "stream": True, "stop_sequences": ["5"], "messages": [{"role": "user", "content": "数数: 1,2,3,"}]},
stream=True,
validator=validate_anthropic_streaming_response
))
# Streaming + vision
if args.vision:
cases.append(TestCase(
desc="Streaming image input",
method="POST",
url=messages_url,
headers=headers,
body={"model": model, "max_tokens": 10, "stream": True, "messages": [{"role": "user", "content": [{"type": "text", "text": "Describe the image"}, {"type": "image", "source": {"type": "url", "url": image_url}}]}]},
stream=True,
validator=validate_anthropic_streaming_response
))
# Streaming + tools
if args.tools:
cases.append(TestCase(
desc="Streaming tool call auto",
method="POST",
url=messages_url,
headers=headers,
body={"model": model, "max_tokens": 50, "stream": True, "tools": [tool_weather], "tool_choice": {"type": "auto"}, "messages": [{"role": "user", "content": "Weather in Beijing?"}]},
stream=True,
validator=validate_anthropic_streaming_response
))
cases.append(TestCase(
desc="Streaming multi-turn tool call",
method="POST",
url=messages_url,
headers=headers,
body={"model": model, "max_tokens": 20, "stream": True, "tools": [tool_weather], "messages": [{"role": "user", "content": "Weather in Beijing?"}, {"role": "assistant", "content": [{"type": "tool_use", "id": "toolu_001", "name": "get_weather", "input": {"location": "Beijing"}}]}, {"role": "user", "content": [{"type": "tool_result", "tool_use_id": "toolu_001", "content": '{"temp": 22}'}]}]},
stream=True,
validator=validate_anthropic_streaming_response
))
# Streaming + thinking
if args.thinking:
cases.append(TestCase(
desc="Streaming extended thinking",
method="POST",
url=messages_url,
headers=headers,
body={"model": model, "max_tokens": 100, "stream": True, "thinking": {"type": "enabled", "budget_tokens": 50}, "messages": [{"role": "user", "content": "1+1=?"}]},
stream=True,
validator=validate_anthropic_streaming_response
))
# Streaming advanced parameters
cases.append(TestCase(
desc="Streaming service_tier: auto",
method="POST",
url=messages_url,
headers=headers,
body={"model": model, "max_tokens": 5, "stream": True, "service_tier": "auto", "messages": [{"role": "user", "content": "Hi"}]},
stream=True,
validator=validate_anthropic_streaming_response
))
# ==== --tools ====
if args.tools:
tool_weather = {
"name": "get_weather",
"description": "获取指定城市的天气",
"input_schema": {
"type": "object",
"properties": {
"location": {"type": "string", "description": "城市名称"}
},
"required": ["location"]
}
}
cases.append(TestCase(
desc="工具调用 tool_choice: auto (--tools)",
method="POST",
@@ -982,36 +1054,6 @@ def main():
validator=validate_anthropic_messages_response
))
# ==== --stream + --tools combination ====
if args.stream and args.tools:
tool_weather_stream = {
"name": "get_weather",
"description": "获取指定城市的天气",
"input_schema": {
"type": "object",
"properties": {
"location": {"type": "string", "description": "城市名称"}
},
"required": ["location"]
}
}
cases.append(TestCase(
desc="流式工具调用 (--stream --tools)",
method="POST",
url=messages_url,
headers=headers,
body={
"model": model,
"max_tokens": 50,
"stream": True,
"tools": [tool_weather_stream],
"tool_choice": {"type": "auto"},
"messages": [{"role": "user", "content": "北京天气怎么样?"}]
},
stream=True,
validator=validate_anthropic_streaming_response
))
# ==== Advanced parameter tests ====
# cache_control: prompt caching
cases.append(TestCase(


@@ -661,13 +661,42 @@ def main():
}
))
# ---- Shared definitions (used by both streaming and non-streaming cases) ----
image_url = (
"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/"
"Gfp-wisconsin-madison-the-nature-boardwalk.jpg/"
"2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
)
tool_weather = {
"type": "function",
"function": {
"name": "get_weather",
"description": "获取指定城市的天气",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string", "description": "城市名称"}
},
"required": ["location"]
}
}
}
json_schema_math = {
"name": "math_answer",
"strict": True,
"schema": {
"type": "object",
"properties": {
"answer": {"type": "number"},
"explanation": {"type": "string"}
},
"required": ["answer", "explanation"],
"additionalProperties": False
}
}
# ---- --vision ----
if args.vision:
image_url = (
"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/"
"Gfp-wisconsin-madison-the-nature-boardwalk.jpg/"
"2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
)
cases.append(TestCase(
desc="图片 URL 输入 + detail 参数 (--vision)",
method="POST",
@@ -691,67 +720,159 @@ def main():
# ---- --stream ----
if args.stream:
# Core cases
cases.append(TestCase(
desc="Basic streaming (--stream)",
desc="Streaming basic chat",
method="POST",
url=chat_url,
headers=headers,
body={
"model": model,
"messages": [{"role": "user", "content": "Hi"}],
"max_tokens": 5,
"stream": True
},
body={"model": model, "messages": [{"role": "user", "content": "Hi"}], "max_tokens": 5, "stream": True},
stream=True,
validator=validate_openai_streaming_response
))
cases.append(TestCase(
desc="流式 + include_usage (--stream)",
desc="流式 + include_usage",
method="POST",
url=chat_url,
headers=headers,
body={
"model": model,
"messages": [{"role": "user", "content": "Hi"}],
"max_tokens": 5,
"stream": True,
"stream_options": {"include_usage": True}
},
body={"model": model, "messages": [{"role": "user", "content": "Hi"}], "max_tokens": 5, "stream": True, "stream_options": {"include_usage": True}},
stream=True,
validator=validate_openai_streaming_response
))
cases.append(TestCase(
desc="流式 + stop sequences (--stream)",
desc="流式 + system prompt",
method="POST",
url=chat_url,
headers=headers,
body={
"model": model,
"messages": [{"role": "user", "content": "数数: 1,2,3,"}],
"max_tokens": 20,
"stream": True,
"stop": ["5"]
},
body={"model": model, "messages": [{"role": "system", "content": "有帮助的助手"}, {"role": "user", "content": "Hi"}], "max_tokens": 5, "stream": True},
stream=True,
validator=validate_openai_streaming_response
))
cases.append(TestCase(
desc="流式多轮对话",
method="POST",
url=chat_url,
headers=headers,
body={"model": model, "messages": [{"role": "user", "content": "1+1?"}, {"role": "assistant", "content": "2"}, {"role": "user", "content": "2+2?"}], "max_tokens": 5, "stream": True},
stream=True,
validator=validate_openai_streaming_response
))
cases.append(TestCase(
desc="流式 temperature + top_p",
method="POST",
url=chat_url,
headers=headers,
body={"model": model, "messages": [{"role": "user", "content": "Hi"}], "temperature": 0.5, "top_p": 0.9, "max_tokens": 5, "stream": True},
stream=True,
validator=validate_openai_streaming_response
))
cases.append(TestCase(
desc="流式 max_tokens",
method="POST",
url=chat_url,
headers=headers,
body={"model": model, "messages": [{"role": "user", "content": "Hi"}], "max_tokens": 3, "stream": True},
stream=True,
validator=validate_openai_streaming_response
))
cases.append(TestCase(
desc="流式 stop_sequences",
method="POST",
url=chat_url,
headers=headers,
body={"model": model, "messages": [{"role": "user", "content": "数数: 1,2,3,"}], "max_tokens": 10, "stop": ["5"], "stream": True},
stream=True,
validator=validate_openai_streaming_response
))
cases.append(TestCase(
desc="流式 JSON mode",
method="POST",
url=chat_url,
headers=headers,
body={"model": model, "messages": [{"role": "system", "content": "以JSON回复"}, {"role": "user", "content": "颜色"}], "max_tokens": 20, "response_format": {"type": "json_object"}, "stream": True},
stream=True,
validator=validate_openai_streaming_response
))
# Streaming + vision
if args.vision:
cases.append(TestCase(
desc="Streaming image input",
method="POST",
url=chat_url,
headers=headers,
body={"model": model, "messages": [{"role": "user", "content": [{"type": "text", "text": "Describe the image"}, {"type": "image_url", "image_url": {"url": image_url}}]}], "max_tokens": 10, "stream": True},
stream=True,
validator=validate_openai_streaming_response
))
# Streaming + tools
if args.tools:
cases.append(TestCase(
desc="Streaming tool call auto",
method="POST",
url=chat_url,
headers=headers,
body={"model": model, "messages": [{"role": "user", "content": "Weather in Beijing?"}], "max_tokens": 50, "stream": True, "tools": [tool_weather], "tool_choice": "auto"},
stream=True,
validator=validate_openai_streaming_response
))
cases.append(TestCase(
desc="Streaming multi-turn tool call",
method="POST",
url=chat_url,
headers=headers,
body={"model": model, "messages": [{"role": "user", "content": "Weather in Beijing?"}, {"role": "assistant", "content": None, "tool_calls": [{"id": "call_001", "type": "function", "function": {"name": "get_weather", "arguments": '{"location": "Beijing"}'}}]}, {"role": "tool", "tool_call_id": "call_001", "content": '{"temp": 22}'}], "max_tokens": 10, "stream": True, "tools": [tool_weather]},
stream=True,
validator=validate_openai_streaming_response
))
# Streaming + logprobs
if args.logprobs:
cases.append(TestCase(
desc="Streaming logprobs",
method="POST",
url=chat_url,
headers=headers,
body={"model": model, "messages": [{"role": "user", "content": "Hi"}], "max_tokens": 5, "logprobs": True, "top_logprobs": 2, "stream": True},
stream=True,
validator=validate_openai_streaming_response
))
# Streaming + json_schema
if args.json_schema:
cases.append(TestCase(
desc="Streaming json_schema",
method="POST",
url=chat_url,
headers=headers,
body={"model": model, "messages": [{"role": "user", "content": "1+1=?"}], "max_tokens": 20, "response_format": {"type": "json_schema", "json_schema": json_schema_math}, "stream": True},
stream=True,
validator=validate_openai_streaming_response
))
# Streaming advanced parameters (representative subset)
cases.append(TestCase(
desc="Streaming reasoning_effort: medium",
method="POST",
url=chat_url,
headers=headers,
body={"model": model, "messages": [{"role": "user", "content": "Hi"}], "max_tokens": 5, "reasoning_effort": "medium", "stream": True},
stream=True,
validator=validate_openai_streaming_response
))
cases.append(TestCase(
desc="流式 service_tier: auto",
method="POST",
url=chat_url,
headers=headers,
body={"model": model, "messages": [{"role": "user", "content": "Hi"}], "max_tokens": 5, "service_tier": "auto", "stream": True},
stream=True,
validator=validate_openai_streaming_response
))
# ---- --tools ----
if args.tools:
tool_weather = {
"type": "function",
"function": {
"name": "get_weather",
"description": "获取指定城市的天气",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string", "description": "城市名称"}
},
"required": ["location"]
}
}
}
cases.append(TestCase(
desc="工具调用 tool_choice: auto (--tools)",
method="POST",
@@ -867,19 +988,7 @@ def main():
"max_tokens": 20,
"response_format": {
"type": "json_schema",
"json_schema": {
"name": "math_answer",
"strict": True,
"schema": {
"type": "object",
"properties": {
"answer": {"type": "number"},
"explanation": {"type": "string"}
},
"required": ["answer", "explanation"],
"additionalProperties": False
}
}
"json_schema": json_schema_math
}
},
validator=validate_openai_chat_completion_response