
feat: improve the compatibility detection scripts

- Rename the scripts to the detect_xxx.py naming scheme
- Remove all decorative rule lines and streamline the output format
- Label the request/response output with URL / Headers / request-body / response markers
- Add a response validator to every positive test case
- Add the negative tests missing from the OpenAI script (negative/zero max_tokens, out-of-range temperature)
- Remove the unused format_validation_errors import
- Add the scripts/README.md document
2026-04-21 12:50:49 +08:00
parent 7f0f831226
commit 980875ecf3
4 changed files with 441 additions and 67 deletions

scripts/README.md (new file, 285 lines)

@@ -0,0 +1,285 @@
# Compatibility Detection Scripts
## Overview
This directory contains a set of test scripts that check an LLM API gateway's compatibility with the **OpenAI** and **Anthropic** protocols. They send a series of structured requests to the target service and verify that the response format, field types, error handling, and so on conform to the protocol specifications.
## Script Layout
```
scripts/
├── core.py               # shared infrastructure
├── detect_openai.py      # OpenAI-compatible protocol tests
└── detect_anthropic.py   # Anthropic-compatible protocol tests
```
### core.py — shared module
Provides the functionality shared by all detection scripts:

| Function/class | Description |
|---------|------|
| `TestCase` | Test-case dataclass (URL, method, headers, body, validator) |
| `TestResult` | Test-result dataclass (status code, elapsed time, error type, response body) |
| `http_request()` | Plain HTTP request (with retries and automatic JSON serialization) |
| `http_stream_request()` | Streaming HTTP request (SSE, with retries) |
| `create_ssl_context()` | Creates an SSL context that skips certificate verification (for test environments) |
| `run_test()` | Runs a single test case and prints structured output |
| `run_test_suite()` | Runs a full test suite and prints a statistics summary |
| `check_required_fields()` | Checks required fields (generic validation helper) |
| `check_field_type()` | Checks a field's type (generic validation helper) |
| `check_enum_value()` | Checks an enum value (generic validation helper) |
| `check_array_items_type()` | Checks array element types (generic validation helper) |
| `validate_response_structure()` | Generic validator that composes the helpers above |

**Note:** `core.py` contains only protocol-agnostic functionality. Each protocol's own response validators belong in its detection script (for example, `validate_openai_chat_completion_response` lives in `detect_openai.py`).
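The generic helpers are small, composable checks. Their exact signatures are not documented here, so the following is only a minimal sketch of how two of them might look; the shapes are assumptions, not `core.py`'s actual API:

```python
# Hypothetical sketch -- core.py's real signatures may differ.
from typing import Any, Dict, List


def check_required_fields(data: Dict[str, Any], fields: List[str]) -> List[str]:
    """Return one error message per missing required field."""
    return [f"missing required field: {f}" for f in fields if f not in data]


def check_field_type(data: Dict[str, Any], field: str, expected: type) -> List[str]:
    """Return an error when the field is present but has the wrong type."""
    if field in data and not isinstance(data[field], expected):
        return [f"field '{field}' should be {expected.__name__}"]
    return []


resp = {"id": "gpt-4o", "created": "not-a-number"}
errors = check_required_fields(resp, ["id", "object"]) \
       + check_field_type(resp, "created", int)
print(errors)  # two errors: 'object' is missing, 'created' has the wrong type
```

Because each helper returns a plain list of error strings, a protocol-specific validator can concatenate their results and report everything at once.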
### detect_openai.py — OpenAI compatibility tests
Checks how completely the target service implements the OpenAI Chat Completions API.

**Covered API endpoints:**
- `GET /models` — model list
- `GET /models/{model}` — model details
- `POST /chat/completions` — chat completion

**Test categories:**
- **Positive cases:** basic chat, system/developer roles, multi-turn conversations, parameter combinations (temperature, top_p, seed, penalties, stop, n, max_tokens, max_completion_tokens, logit_bias, reasoning_effort, service_tier, verbosity, response_format)
- **Extended features:** `--vision` (image input), `--stream` (streaming responses), `--tools` (tool calls), `--logprobs` (log probabilities), `--json_schema` (structured output)
- **Negative cases:** missing parameters, empty messages, invalid auth, nonexistent model, malformed JSON, negative/zero max_tokens, out-of-range temperature

**Response validation:**
- Models List: checks `object: "list"` and each model's `id`, `object`, `created`, `owned_by` in the `data` array
- Model Retrieve: checks `id`, `object: "model"`, `created`, `owned_by`
- Chat Completion: checks `id`, `object: "chat.completion"`, `created`, `model`, the structure of the `choices` array, and the `usage` object
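The actual `validate_openai_chat_completion_response` is not reproduced in this README; a simplified, standalone sketch of the Chat Completion checks listed above might look like this:

```python
# Simplified sketch of the checks listed above; the real validator in
# detect_openai.py also inspects the choices/usage internals.
import json
from typing import List, Tuple


def validate_chat_completion(response_text: str) -> Tuple[bool, List[str]]:
    """Check the top-level shape of a /chat/completions response."""
    errors: List[str] = []
    try:
        data = json.loads(response_text)
    except json.JSONDecodeError as e:
        return False, [f"response is not valid JSON: {e}"]
    for field in ("id", "object", "created", "model", "choices", "usage"):
        if field not in data:
            errors.append(f"missing required field: {field}")
    if "object" in data and data["object"] != "chat.completion":
        errors.append('object should be "chat.completion"')
    if "choices" in data and not isinstance(data["choices"], list):
        errors.append("choices should be an array")
    return len(errors) == 0, errors


ok, errs = validate_chat_completion('{"id": "x", "object": "chat.completion"}')
# ok is False: created, model, choices, and usage are missing
```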
### detect_anthropic.py — Anthropic compatibility tests
Checks how completely the target service implements the Anthropic Messages API.

**Covered API endpoints:**
- `GET /v1/models` — model list
- `GET /v1/models/{model}` — model details
- `POST /v1/messages` — message conversation
- `POST /v1/messages/count_tokens` — token counting

**Test categories:**
- **Positive cases:** basic chat, system prompt (string and array forms), multi-turn conversations, assistant prefill, content array form, parameter combinations (temperature, top_p, top_k, max_tokens, stop_sequences, metadata)
- **Extended features:** `--vision` (image input), `--stream` (streaming responses), `--tools` (tool calls), `--thinking` (extended thinking)
- **Negative cases:** missing headers, invalid auth, missing parameters, empty messages, malformed JSON, illegal role, negative/zero max_tokens, out-of-range temperature

**Response validation:**
- Models List: checks `data`, `has_more`, and each model's `id`, `type: "model"`, `display_name`, `created_at`
- Model Retrieve: checks `id`, `type: "model"`, `display_name`, `created_at`
- Messages: checks `id`, `type: "message"`, `role: "assistant"`, the `content` array, `model`, `usage`
- Count Tokens: checks that `input_tokens` is a number
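The Count Tokens check is the smallest of these. A standalone sketch (the real function in `detect_anthropic.py` may differ in detail):

```python
import json
from typing import List, Tuple


def validate_count_tokens(response_text: str) -> Tuple[bool, List[str]]:
    """Check that a count_tokens response carries a numeric input_tokens."""
    try:
        data = json.loads(response_text)
    except json.JSONDecodeError as e:
        return False, [f"response is not valid JSON: {e}"]
    tokens = data.get("input_tokens")
    # bool is a subclass of int in Python, so reject it explicitly
    if isinstance(tokens, bool) or not isinstance(tokens, (int, float)):
        return False, ["input_tokens should be a number"]
    return True, []


print(validate_count_tokens('{"input_tokens": 42}'))    # (True, [])
print(validate_count_tokens('{"input_tokens": true}'))  # rejected
```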
## Usage
### Basic usage
```bash
# OpenAI compatibility tests
python3 scripts/detect_openai.py --base_url http://localhost:9826/v1

# Anthropic compatibility tests
python3 scripts/detect_anthropic.py --base_url http://localhost:9826
```
### With authentication
```bash
python3 scripts/detect_openai.py --base_url http://localhost:9826/v1 --api_key sk-xxx --model gpt-4o
python3 scripts/detect_anthropic.py --base_url http://localhost:9826 --api_key sk-xxx --model claude-sonnet-4-5
```
### Extended tests
```bash
# Enable all extended tests
python3 scripts/detect_openai.py --base_url http://localhost:9826/v1 --all
python3 scripts/detect_anthropic.py --base_url http://localhost:9826 --all

# Enable individual features
python3 scripts/detect_openai.py --base_url http://localhost:9826/v1 --stream --tools
python3 scripts/detect_anthropic.py --base_url http://localhost:9826 --stream --tools --thinking
```
### Command-line arguments

| Argument | Description | Default |
|------|------|--------|
| `--base_url` | Base URL of the API (required) | — |
| `--api_key` | API key | empty |
| `--model` | Model name used in the tests | `gpt-4o` / `claude-sonnet-4-5` |
| `--vision` | Run the vision tests | off |
| `--stream` | Run the streaming-response tests | off |
| `--tools` | Run the tool-call tests | off |
| `--logprobs` | Run the logprobs tests (OpenAI only) | off |
| `--json_schema` | Run the structured-output tests (OpenAI only) | off |
| `--thinking` | Run the extended-thinking tests (Anthropic only) | off |
| `--all` | Enable all extended tests | off |
## Sample Output
```
Anthropic 兼容性测试
目标: http://localhost:9826
模型: claude-sonnet-4-5
时间: 2026-04-21 10:30:00
用例: 35 个 | 扩展: stream, tools

[1/35] 获取模型列表 (GET /v1/models)

URL: GET http://localhost:9826/v1/models

Headers:
  x-api-key: sk-xxx
  anthropic-version: 2023-06-01

响应 (200, 0.12s):
{
  "data": [...],
  "has_more": false
}
✓ 响应验证通过

[5/35] 基本对话(仅 user)

URL: POST http://localhost:9826/v1/messages

Headers:
  x-api-key: sk-xxx
  Content-Type: application/json

入参:
{
  "model": "claude-sonnet-4-5",
  "max_tokens": 5,
  "messages": [{"role": "user", "content": "Hi"}]
}

响应 (200, 0.23s):
{
  "id": "msg_xxx",
  "type": "message",
  "role": "assistant",
  "content": [...],
  "model": "claude-sonnet-4-5",
  "usage": {"input_tokens": 10, "output_tokens": 5}
}
✓ 响应验证通过

测试完成 | 总计: 35 | 成功: 33 | 客户端错误: 2 | 服务端错误: 0 | 网络错误: 0
```
## Test Design Principles
1. **Every positive case runs a response validator** — any deviation in response structure surfaces immediately instead of being masked
2. **Negative cases cover the common failure scenarios** — missing parameters, wrong types, out-of-range values, failed authentication
3. **Extended features are opt-in via flags** — keeps unnecessary dependencies out of the basic test run
4. **Validators are written against the protocol specifications** — required fields, types, and enum values are checked strictly
## Adding a New Detection Script
To write a detection script for another protocol (Google Gemini, Cohere, and so on), follow this workflow:
### 1. Define the protocol-specific validators in the new script
Every protocol's response structure is unique, so validators belong in their own scripts rather than in `core.py`. For example:
```python
# In detect_gemini.py
import json
from typing import List, Tuple

def validate_gemini_generate_content_response(response_text: str) -> Tuple[bool, List[str]]:
    """Validate a Gemini GenerateContent response"""
    errors = []
    try:
        data = json.loads(response_text)
    except json.JSONDecodeError as e:
        return False, [f"response is not valid JSON: {e}"]
    # Check the Gemini-specific fields
    required_fields = ["candidates", "usageMetadata"]
    for field in required_fields:
        if field not in data:
            errors.append(f"missing required field: {field}")
    ...
    return len(errors) == 0, errors
```
### 2. Add only generic validation helpers to `core.py`
Promote a function into `core.py` only when more than one protocol needs the same validation logic. The generic helpers available today:

| Function | Description |
|------|------|
| `check_required_fields()` | Checks that required fields are present |
| `check_field_type()` | Checks a field's type |
| `check_enum_value()` | Checks an enum value |
| `check_array_items_type()` | Checks array element types |
| `validate_response_structure()` | Generic validator that composes the helpers above |
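How `validate_response_structure()` combines these helpers is not shown in this README, so its real signature may differ; one plausible, self-contained shape is a spec-driven wrapper:

```python
# Hypothetical sketch of how validate_response_structure() might compose
# the generic helpers; the parameter names here are assumptions.
import json
from typing import Dict, List, Tuple


def validate_response_structure(response_text: str,
                                required: List[str],
                                field_types: Dict[str, type]) -> Tuple[bool, List[str]]:
    """Parse the response, then run the generic field and type checks."""
    try:
        data = json.loads(response_text)
    except json.JSONDecodeError as e:
        return False, [f"response is not valid JSON: {e}"]
    errors = [f"missing required field: {f}" for f in required if f not in data]
    errors += [f"field '{f}' should be {t.__name__}"
               for f, t in field_types.items()
               if f in data and not isinstance(data[f], t)]
    return len(errors) == 0, errors


ok, errs = validate_response_structure(
    '{"data": [], "has_more": false}',
    required=["data", "has_more"],
    field_types={"data": list, "has_more": bool},
)
# ok is True
```

Passing the field spec as data keeps each protocol's expectations declarative while the checking logic stays in one place.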
### 3. Build the script from the template

```python
#!/usr/bin/env python3
"""Compatibility test script for the new protocol"""
import json
import argparse
from typing import Dict, List, Tuple, Any

from core import (
    create_ssl_context,
    TestCase,
    run_test_suite,
    validate_response_structure,
)


def build_headers(api_key: str) -> Dict[str, str]:
    """Build the request headers"""
    ...


def validate_xxx_response(response_text: str) -> Tuple[bool, List[str]]:
    """Validate the response structure (protocol-specific)"""
    ...


def main():
    parser = argparse.ArgumentParser(...)
    parser.add_argument("--base_url", required=True, ...)
    parser.add_argument("--api_key", default="", ...)
    parser.add_argument("--model", default="...", ...)
    parser.add_argument("--all", action="store_true", ...)
    args = parser.parse_args()

    ssl_ctx = create_ssl_context()
    cases: List[TestCase] = []

    # Positive cases (always attach a validator)
    cases.append(TestCase(
        desc="...", method="...", url=..., headers=..., body=...,
        validator=validate_xxx_response
    ))

    # Negative cases (no validator)
    cases.append(TestCase(desc="...", method="...", url=..., headers=..., body=...))

    run_test_suite(cases=cases, ssl_ctx=ssl_ctx, title="...", base_url=..., model=..., flags=...)


if __name__ == "__main__":
    main()
```
### Key points
- **Protocol-specific validators live in their own scripts** — do not pollute `core.py`
- **Only validation logic shared by multiple protocols is promoted into `core.py`** — follow DRY without over-abstracting
- **Every positive case must set a validator** — so the response structure is actually checked
- **Negative cases take no validator** — an error response is the expected result
- **Extended features are gated behind flags** — keeps the basic run lightweight
- **Follow the existing naming and code style** — Chinese comments, type annotations, dataclass usage

## License
MIT

scripts/core.py

@@ -237,8 +237,15 @@ def run_test(
TestResult 对象
"""
print(f"\n[{index}/{total}] {test_case.desc}")
print(f">>> {test_case.method} {test_case.url}")
print(f"\nURL: {test_case.method} {test_case.url}")
if test_case.headers:
print("\nHeaders:")
for k, v in test_case.headers.items():
print(f" {k}: {v}")
if test_case.body is not None:
print("\n入参:")
if isinstance(test_case.body, str):
print(test_case.body)
else:
@@ -261,18 +268,16 @@ def run_test(
)
if result.status is not None:
print(f"状态码: {result.status} | 耗时: {result.elapsed:.2f}s")
print(f"\n响应 ({result.status}, {result.elapsed:.2f}s):")
else:
print(f"请求失败 | 耗时: {result.elapsed:.2f}s")
print(f"\n请求失败 ({result.elapsed:.2f}s):")
if test_case.stream and result.status and result.status < 300:
# 流式响应按 SSE 行逐行输出
for line in result.response.split("\n"):
print(line)
else:
print(format_json(result.response))
# 执行响应验证
if test_case.validator and result.status and 200 <= result.status < 300:
is_valid, errors = test_case.validator(result.response)
if is_valid:
@@ -312,8 +317,7 @@ def run_test_suite(
count_server_error = 0
count_network_error = 0
print("=" * 60)
print(title)
print(f"\n{title}")
print(f"目标: {base_url}")
print(f"模型: {model}")
print(f"时间: {time.strftime('%Y-%m-%d %H:%M:%S')}")
@@ -321,7 +325,7 @@ def run_test_suite(
print(f"用例: {total} 个 | 扩展: {', '.join(flags)}")
else:
print(f"用例: {total}")
print("=" * 60)
print()
for i, test_case in enumerate(cases, 1):
result = run_test(i, total, test_case, ssl_ctx)
@@ -336,11 +340,9 @@ def run_test_suite(
count_network_error += 1
print()
print("=" * 60)
print(f"测试完成 | 总计: {total} | 成功: {count_success} | "
f"客户端错误: {count_client_error} | 服务端错误: {count_server_error} | "
f"网络错误: {count_network_error}")
print("=" * 60)
return total, count_success, count_client_error, count_server_error

scripts/detect_anthropic.py

@@ -18,7 +18,6 @@ from core import (
TestCase,
run_test_suite,
validate_response_structure,
format_validation_errors
)
ANTHROPIC_VERSION = "2023-06-01"
@@ -267,7 +266,8 @@ def main():
"max_tokens": 5,
"system": "You are a helpful assistant.",
"messages": [{"role": "user", "content": "1+1="}]
}
},
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="system prompt 数组格式(带缓存控制)",
@@ -281,7 +281,8 @@ def main():
{"type": "text", "text": "You are a helpful assistant.", "cache_control": {"type": "ephemeral"}}
],
"messages": [{"role": "user", "content": "Hi"}]
}
},
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="多轮对话(含 assistant 历史)",
@@ -296,7 +297,8 @@ def main():
{"role": "assistant", "content": "Hello!"},
{"role": "user", "content": "1+1="}
]
}
},
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="assistant prefill部分回复填充",
@@ -310,7 +312,8 @@ def main():
{"role": "user", "content": "What is latin for Ant? (A) Apoidea (B) Rhopalocera (C) Formicidae"},
{"role": "assistant", "content": "The answer is ("}
]
}
},
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="content 数组格式(多个 text block",
@@ -324,7 +327,8 @@ def main():
{"type": "text", "text": "Hello"},
{"type": "text", "text": "1+1=?"}
]}]
}
},
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="temperature + top_p",
@@ -337,7 +341,8 @@ def main():
"temperature": 0.5,
"top_p": 0.9,
"messages": [{"role": "user", "content": "Hi"}]
}
},
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="temperature = 0类确定性输出",
@@ -349,7 +354,8 @@ def main():
"max_tokens": 5,
"temperature": 0,
"messages": [{"role": "user", "content": "1+1="}]
}
},
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="top_k 参数",
@@ -361,7 +367,8 @@ def main():
"max_tokens": 5,
"top_k": 40,
"messages": [{"role": "user", "content": "Hi"}]
}
},
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="max_tokens 限制",
@@ -372,7 +379,8 @@ def main():
"model": model,
"max_tokens": 10,
"messages": [{"role": "user", "content": "讲一个故事"}]
}
},
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="stop_sequences",
@@ -384,7 +392,8 @@ def main():
"max_tokens": 20,
"stop_sequences": ["5"],
"messages": [{"role": "user", "content": "数数: 1,2,3,"}]
}
},
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="metadata 参数user_id",
@@ -396,7 +405,8 @@ def main():
"max_tokens": 5,
"metadata": {"user_id": "test-user-001"},
"messages": [{"role": "user", "content": "Hi"}]
}
},
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="assistant content 数组格式text + tool_use 块)",
@@ -418,7 +428,8 @@ def main():
]}
]}
]
}
},
validator=validate_anthropic_messages_response
))
# ==== Count Tokens API ====
@@ -631,7 +642,8 @@ def main():
"2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
}}
]}]
}
},
validator=validate_anthropic_messages_response
))
# ==== --stream ====
@@ -647,7 +659,8 @@ def main():
"stream": True,
"messages": [{"role": "user", "content": "Hi"}]
},
stream=True
stream=True,
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="流式 + system prompt (--stream)",
@@ -661,7 +674,8 @@ def main():
"system": "Reply in one word.",
"messages": [{"role": "user", "content": "1+1="}]
},
stream=True
stream=True,
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="流式 + stop_sequences (--stream)",
@@ -675,7 +689,8 @@ def main():
"stop_sequences": ["5"],
"messages": [{"role": "user", "content": "数数: 1,2,3,"}]
},
stream=True
stream=True,
validator=validate_anthropic_messages_response
))
# ==== --tools ====
@@ -702,7 +717,8 @@ def main():
"tools": [tool_weather],
"tool_choice": {"type": "auto"},
"messages": [{"role": "user", "content": "北京天气怎么样?"}]
}
},
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="工具调用 tool_choice: any (--tools)",
@@ -715,7 +731,8 @@ def main():
"tools": [tool_weather],
"tool_choice": {"type": "any"},
"messages": [{"role": "user", "content": "北京天气怎么样?"}]
}
},
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="指定工具调用 tool_choice: {name} (--tools)",
@@ -728,7 +745,8 @@ def main():
"tools": [tool_weather],
"tool_choice": {"type": "tool", "name": "get_weather"},
"messages": [{"role": "user", "content": "北京天气怎么样?"}]
}
},
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="tool_choice: none (--tools)",
@@ -741,7 +759,8 @@ def main():
"tools": [tool_weather],
"tool_choice": {"type": "none"},
"messages": [{"role": "user", "content": "北京天气怎么样?"}]
}
},
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="多轮工具调用tool_result 返回)(--tools)",
@@ -762,7 +781,8 @@ def main():
{"type": "tool_result", "tool_use_id": "toolu_001", "content": "{\"temperature\": 22, \"condition\": \"\"}"}
]}
]
}
},
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="多轮工具调用tool_result 带 is_error(--tools)",
@@ -782,7 +802,8 @@ def main():
{"type": "tool_result", "tool_use_id": "toolu_002", "is_error": True, "content": "天气服务不可用"}
]}
]
}
},
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="tool_choice 指向不存在的工具(负面)(--tools)",
@@ -821,7 +842,8 @@ def main():
],
"tool_choice": {"type": "auto"},
"messages": [{"role": "user", "content": "北京现在几点了?天气怎么样?"}]
}
},
validator=validate_anthropic_messages_response
))
# ==== --thinking ====
@@ -836,7 +858,8 @@ def main():
"max_tokens": 200,
"thinking": {"type": "enabled", "budget_tokens": 100},
"messages": [{"role": "user", "content": "1+1=?"}]
}
},
validator=validate_anthropic_messages_response
))
cases.append(TestCase(
desc="扩展思维 adaptive (--thinking)",
@@ -848,7 +871,8 @@ def main():
"max_tokens": 200,
"thinking": {"type": "adaptive", "budget_tokens": 100},
"messages": [{"role": "user", "content": "1+1=?"}]
}
},
validator=validate_anthropic_messages_response
))
# ==== --stream + --tools 组合 ====
@@ -877,7 +901,8 @@ def main():
"tool_choice": {"type": "auto"},
"messages": [{"role": "user", "content": "北京天气怎么样?"}]
},
stream=True
stream=True,
validator=validate_anthropic_messages_response
))
# ==== 高级参数测试 ====
@@ -892,7 +917,8 @@ def main():
"max_tokens": 10,
"cache_control": {"type": "ephemeral"},
"messages": [{"role": "user", "content": "Hello"}]
}
},
validator=validate_anthropic_messages_response
))
# output_config: 输出配置
@@ -906,7 +932,8 @@ def main():
"max_tokens": 10,
"output_config": {"format": "text"},
"messages": [{"role": "user", "content": "Hi"}]
}
},
validator=validate_anthropic_messages_response
))
# service_tier: 服务层级
@@ -920,7 +947,8 @@ def main():
"max_tokens": 5,
"service_tier": "auto",
"messages": [{"role": "user", "content": "Hello"}]
}
},
validator=validate_anthropic_messages_response
))
# ==== Models API 分页测试 ====

scripts/detect_openai.py

@@ -18,7 +18,6 @@ from core import (
TestCase,
run_test_suite,
validate_response_structure,
format_validation_errors
)
@@ -261,7 +260,8 @@ def main():
{"role": "user", "content": "1+1="}
],
"max_tokens": 5
}
},
validator=validate_openai_chat_completion_response
))
cases.append(TestCase(
desc="多轮对话(含 assistant 历史)",
@@ -276,7 +276,8 @@ def main():
{"role": "user", "content": "1+1="}
],
"max_tokens": 5
}
},
validator=validate_openai_chat_completion_response
))
cases.append(TestCase(
desc="temperature + top_p",
@@ -289,7 +290,8 @@ def main():
"max_tokens": 5,
"temperature": 0.5,
"top_p": 0.9
}
},
validator=validate_openai_chat_completion_response
))
cases.append(TestCase(
desc="max_tokens 限制",
@@ -300,7 +302,8 @@ def main():
"model": model,
"messages": [{"role": "user", "content": "讲一个故事"}],
"max_tokens": 10
}
},
validator=validate_openai_chat_completion_response
))
cases.append(TestCase(
desc="stop sequences",
@@ -312,7 +315,8 @@ def main():
"messages": [{"role": "user", "content": "数数: 1,2,3,"}],
"max_tokens": 20,
"stop": ["5"]
}
},
validator=validate_openai_chat_completion_response
))
cases.append(TestCase(
desc="n=2 多候选",
@@ -324,7 +328,8 @@ def main():
"messages": [{"role": "user", "content": "Hi"}],
"max_tokens": 5,
"n": 2
}
},
validator=validate_openai_chat_completion_response
))
cases.append(TestCase(
desc="seed 参数",
@@ -336,7 +341,8 @@ def main():
"messages": [{"role": "user", "content": "Hi"}],
"max_tokens": 5,
"seed": 42
}
},
validator=validate_openai_chat_completion_response
))
cases.append(TestCase(
desc="frequency_penalty + presence_penalty",
@@ -349,7 +355,8 @@ def main():
"max_tokens": 5,
"frequency_penalty": 0.5,
"presence_penalty": 0.5
}
},
validator=validate_openai_chat_completion_response
))
cases.append(TestCase(
desc="max_completion_tokens 参数",
@@ -360,7 +367,8 @@ def main():
"model": model,
"messages": [{"role": "user", "content": "讲一个故事"}],
"max_completion_tokens": 10
}
},
validator=validate_openai_chat_completion_response
))
cases.append(TestCase(
desc="JSON mode (response_format: json_object)",
@@ -375,7 +383,8 @@ def main():
],
"max_tokens": 10,
"response_format": {"type": "json_object"}
}
},
validator=validate_openai_chat_completion_response
))
# ---- Chat Completions: 负面用例 ----
@@ -440,6 +449,41 @@ def main():
body="invalid json{"
))
cases.append(TestCase(
desc="max_tokens 为负数",
method="POST",
url=chat_url,
headers=headers,
body={
"model": model,
"messages": [{"role": "user", "content": "Hi"}],
"max_tokens": -1
}
))
cases.append(TestCase(
desc="max_tokens = 0",
method="POST",
url=chat_url,
headers=headers,
body={
"model": model,
"messages": [{"role": "user", "content": "Hi"}],
"max_tokens": 0
}
))
cases.append(TestCase(
desc="temperature 超出范围 (2.5)",
method="POST",
url=chat_url,
headers=headers,
body={
"model": model,
"messages": [{"role": "user", "content": "Hi"}],
"max_tokens": 5,
"temperature": 2.5
}
))
# ---- --vision ----
if args.vision:
image_url = (
@@ -464,7 +508,8 @@ def main():
]}
],
"max_tokens": 10
}
},
validator=validate_openai_chat_completion_response
))
# ---- --stream ----
@@ -480,7 +525,8 @@ def main():
"max_tokens": 5,
"stream": True
},
stream=True
stream=True,
validator=validate_openai_chat_completion_response
))
cases.append(TestCase(
desc="流式 + include_usage (--stream)",
@@ -494,7 +540,8 @@ def main():
"stream": True,
"stream_options": {"include_usage": True}
},
stream=True
stream=True,
validator=validate_openai_chat_completion_response
))
cases.append(TestCase(
desc="流式 + stop sequences (--stream)",
@@ -508,7 +555,8 @@ def main():
"stream": True,
"stop": ["5"]
},
stream=True
stream=True,
validator=validate_openai_chat_completion_response
))
# ---- --tools ----
@@ -538,7 +586,8 @@ def main():
"max_tokens": 50,
"tools": [tool_weather],
"tool_choice": "auto"
}
},
validator=validate_openai_chat_completion_response
))
cases.append(TestCase(
desc="工具调用 tool_choice: required (--tools)",
@@ -551,7 +600,8 @@ def main():
"max_tokens": 50,
"tools": [tool_weather],
"tool_choice": "required"
}
},
validator=validate_openai_chat_completion_response
))
cases.append(TestCase(
desc="指定函数调用 tool_choice: {name} (--tools)",
@@ -567,7 +617,8 @@ def main():
"type": "function",
"function": {"name": "get_weather"}
}
}
},
validator=validate_openai_chat_completion_response
))
cases.append(TestCase(
desc="多轮工具调用(构造 tool 结果)(--tools)",
@@ -590,7 +641,8 @@ def main():
],
"max_tokens": 20,
"tools": [tool_weather]
}
},
validator=validate_openai_chat_completion_response
))
cases.append(TestCase(
desc="parallel_tool_calls: false (--tools)",
@@ -604,7 +656,8 @@ def main():
"tools": [tool_weather],
"tool_choice": "auto",
"parallel_tool_calls": False
}
},
validator=validate_openai_chat_completion_response
))
# ---- --logprobs ----
@@ -620,7 +673,8 @@ def main():
"max_tokens": 5,
"logprobs": True,
"top_logprobs": 2
}
},
validator=validate_openai_chat_completion_response
))
# ---- --json-schema ----
@@ -650,7 +704,8 @@ def main():
}
}
}
}
},
validator=validate_openai_chat_completion_response
))
# ---- 高级参数测试 ----
@@ -664,8 +719,9 @@ def main():
"model": model,
"messages": [{"role": "user", "content": "Hello"}],
"max_tokens": 5,
"logit_bias": {"1234": -100, "5678": 50} # token_id: bias
}
"logit_bias": {"1234": -100, "5678": 50}
},
validator=validate_openai_chat_completion_response
))
# reasoning_effort: 推理努力级别(需要模型支持)
@@ -679,7 +735,8 @@ def main():
"messages": [{"role": "user", "content": "1+1=?"}],
"max_tokens": 10,
"reasoning_effort": "medium"
}
},
validator=validate_openai_chat_completion_response
))
# service_tier: 服务层级
@@ -693,7 +750,8 @@ def main():
"messages": [{"role": "user", "content": "Hi"}],
"max_tokens": 5,
"service_tier": "auto"
}
},
validator=validate_openai_chat_completion_response
))
# verbosity: 冗长程度
@@ -707,7 +765,8 @@ def main():
"messages": [{"role": "user", "content": "介绍一下Python"}],
"max_tokens": 50,
"verbosity": "low"
}
},
validator=validate_openai_chat_completion_response
))
# ---- 执行测试 ----