
fix: correct conversion proxy paths and error boundaries

This commit is contained in:
2026-04-25 23:12:54 +08:00
parent f5c82b6980
commit 2c043c6cf7
25 changed files with 2020 additions and 214 deletions

View File

@@ -47,7 +47,7 @@ nex/
- **Dual protocol support**: supports both the OpenAI and Anthropic protocols
- **Cross-protocol conversion**: Hub-and-Spoke architecture for bidirectional OpenAI ↔ Anthropic conversion
- **Unified model ID**: the `provider_id/model_name` format identifies a model globally (e.g. `openai/gpt-4`)
- **Smart Passthrough**: same-protocol requests incur zero serialization overhead; only the model field is rewritten
- **Smart Passthrough**: same-protocol requests skip full Canonical conversion; only the model field is rewritten at the JSON level
- **Streaming responses**: full SSE streaming support, including cross-protocol stream conversion
- **Function Calling**: tool calling (Tools) is supported
- **Thinking / Reasoning**: supports the OpenAI `reasoning_effort` and Anthropic `thinking` settings
@@ -162,7 +162,7 @@ bun dev
### Proxy Endpoints (for external applications)
Proxy endpoints all use the `/{protocol}/*path` route format, and model IDs use the `provider_id/model_name` format (e.g. `openai/gpt-4`). Same-protocol requests take the Smart Passthrough path (minimal JSON rewriting, preserving parameter fidelity); cross-protocol requests go through full decode/encode conversion.
Proxy endpoints all use the `/{protocol}/*path` route format, and model IDs use the `provider_id/model_name` format (e.g. `openai/gpt-4`). Same-protocol requests take the Smart Passthrough path (minimal JSON rewriting; the JSON content and types of unmodified fields stay unchanged); cross-protocol requests go through full decode/encode conversion.
**OpenAI protocol** (`protocol=openai`)
- `POST /openai/chat/completions` - chat completions
@@ -176,6 +176,14 @@ bun dev
- `GET /anthropic/v1/models` - model list (aggregated from the local database)
- `GET /anthropic/v1/models/{provider_id}/{model_name}` - model details (local database lookup)
Path boundary: the gateway strips only the first protocol prefix segment; the remaining path is handed to the adapter in its protocol-native form (a minimal sketch follows after these notes). The OpenAI adapter receives `/chat/completions`, `/models`, `/embeddings`, `/rerank`; the Anthropic adapter receives `/v1/messages`, `/v1/models`. Accordingly, an OpenAI provider's `base_url` is configured down to the version path (e.g. `https://api.openai.com/v1`), while an Anthropic provider's `base_url` is configured at the domain level (e.g. `https://api.anthropic.com`).
Proxy error boundary: gateway-level errors are returned uniformly as `{"error":"...","code":"..."}`, for example `INVALID_JSON`, `MODEL_NOT_FOUND`, `CONVERSION_FAILED`, `UPSTREAM_UNAVAILABLE`. Whenever the upstream has already returned an HTTP response, its non-2xx status, headers (after hop-by-hop filtering), and body are passed through directly rather than wrapped as an application or protocol error.
Model routing boundary: only endpoints explicitly adapted by an adapter have the request body's `model` parsed and routed via the unified model ID; unknown endpoints are passed through as model-less requests even if they carry a top-level `model`.
Streaming boundary: same-protocol requests with no response model rewrite pass SSE frames and `[DONE]` through verbatim; same-protocol requests that need a response model rewrite only parse each SSE frame's `data` JSON and rewrite `model`; cross-protocol streaming still goes provider decoder → Canonical stream event → client encoder.
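A minimal sketch of the path boundary described above. `splitProtocol` is an illustrative helper (the gateway itself gets the protocol from the router's `/{protocol}/*path` parameters); `joinBaseURL` mirrors the helper added in this commit:

```go
package main

import (
	"fmt"
	"strings"
)

// splitProtocol strips only the first path segment as the protocol prefix;
// the remainder stays protocol-native.
func splitProtocol(requestPath string) (protocol, nativePath string) {
	trimmed := strings.TrimPrefix(requestPath, "/")
	protocol, rest, _ := strings.Cut(trimmed, "/")
	return protocol, "/" + rest
}

// joinBaseURL joins the provider base_url and the native path with exactly one slash.
func joinBaseURL(baseURL, path string) string {
	return strings.TrimRight(baseURL, "/") + "/" + strings.TrimLeft(path, "/")
}

func main() {
	_, native := splitProtocol("/openai/chat/completions")
	fmt.Println(joinBaseURL("https://api.openai.com/v1", native))
	// https://api.openai.com/v1/chat/completions

	_, native = splitProtocol("/anthropic/v1/messages")
	fmt.Println(joinBaseURL("https://api.anthropic.com", native))
	// https://api.anthropic.com/v1/messages
}
```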
### Admin Endpoints (for the frontend)
#### Provider Management

View File

@@ -4,10 +4,10 @@ AI 网关后端服务,提供统一的大模型 API 代理接口。
## Features
- Supports the OpenAI protocol (`/openai/v1/...`)
- Supports the OpenAI protocol (`/openai/...`, e.g. `/openai/chat/completions`)
- Supports the Anthropic protocol (`/anthropic/v1/...`)
- Supports Hub-and-Spoke bidirectional cross-protocol conversion (OpenAI ↔ Anthropic)
- Same-protocol passthrough (zero semantic loss, zero serialization overhead)
- Same-protocol passthrough (skips full Canonical conversion while preserving protocol semantics)
- Supports streaming responses (SSE)
- Supports Function Calling / Tools
- Supports Thinking / Reasoning
@@ -220,7 +220,7 @@ OpenAI Response ← Canonical Response ← Anthropic Response
### Smart Passthrough Mechanism
Same-protocol requests take the Smart Passthrough path with **zero serialization overhead**:
Same-protocol requests take the Smart Passthrough path and do not enter full Canonical conversion:
```
1. Detect clientProtocol == providerProtocol
@@ -229,12 +229,14 @@ OpenAI Response ← Canonical Response ← Anthropic Response
4. In the response, only the model field is rewritten (upstream_model_name → unified_id)
```
Smart Passthrough keeps the content and types of unmodified JSON fields unchanged; it does not promise to preserve the original byte order, whitespace, or object field order. A minimal sketch of this JSON-level rewrite follows.
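A standalone illustration only (the adapters' actual entry point is `RewriteResponseModelName`): decoding into `map[string]json.RawMessage` keeps untouched fields' content and types intact, while field order may change, matching the non-guarantee above.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rewriteModel replaces only the top-level "model" field and leaves every other
// field as raw, untouched JSON.
func rewriteModel(body []byte, override string) ([]byte, error) {
	var obj map[string]json.RawMessage
	if err := json.Unmarshal(body, &obj); err != nil {
		return nil, err
	}
	model, err := json.Marshal(override)
	if err != nil {
		return nil, err
	}
	obj["model"] = model
	return json.Marshal(obj)
}

func main() {
	out, _ := rewriteModel([]byte(`{"model":"openai/gpt-4","temperature":0.2}`), "gpt-4")
	fmt.Println(string(out)) // {"model":"gpt-4","temperature":0.2}
}
```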
### Stream Converter Hierarchy
```
StreamConverter (interface)
├── PassthroughStreamConverter        # raw passthrough, no processing
├── SmartPassthroughStreamConverter   # same protocol + rewrite model per chunk
├── SmartPassthroughStreamConverter   # same protocol + rewrite model in the data JSON per SSE frame
└── CanonicalStreamConverter          # cross-protocol full conversion (decode → encode)
```
@@ -301,6 +303,7 @@ StreamConverter (接口)
| `PROTOCOL_CONSTRAINT_VIOLATION` | Protocol constraint violated |
| `ENCODING_FAILURE` | Encoding failed |
| `INTERFACE_NOT_SUPPORTED` | Interface not supported (e.g. Anthropic Embeddings) |
| `UNSUPPORTED_MULTIMODAL` | Multimodal content not yet supported across protocols |
### AppError Predefined Errors
@@ -460,7 +463,7 @@ goose -dir migrations sqlite3 ~/.nex/config.db up
### Proxy Endpoints
Routing uses the `/{protocol}/v1/{path}` URL prefix
Routing uses the `/{protocol}/*path` URL prefix. The gateway strips only the first protocol prefix segment and does not uniformly add or remove `/v1`; the remaining path is the protocol-native nativePath, which the corresponding adapter recognizes and uses to compose the upstream URL.
#### OpenAI Protocol
@@ -478,10 +481,20 @@ POST /anthropic/v1/messages
GET /anthropic/v1/models
```
**Protocol conversion**: the gateway supports bidirectional conversion between any pair of protocols. A client may send an OpenAI-protocol request while the upstream provider speaks Anthropic (and vice versa). Same-protocol requests are passed through automatically with zero serialization overhead.
**Protocol conversion**: the gateway supports bidirectional conversion between any pair of protocols. A client may send an OpenAI-protocol request while the upstream provider speaks Anthropic (and vice versa). Same-protocol requests are automatically passed through or take Smart Passthrough (skipping full Canonical conversion).
**Unified model ID**: the `model` field of proxy requests uses the `provider_id/model_name` format (e.g. `openai/gpt-4`), which the gateway uses to route to the corresponding provider. For same-protocol requests it is rewritten to the upstream `model_name` automatically; for cross-protocol requests it is handled by the full conversion.
**base_url conventions**:
- OpenAI providers are configured down to the version path, e.g. `https://api.openai.com/v1`
- Anthropic providers are configured at the domain level, e.g. `https://api.anthropic.com`
**Model extraction boundary**: only endpoints the adapter explicitly supports (Chat, Embeddings, Rerank, etc.) extract `model` and attempt unified-model-ID routing. Unknown endpoints do not guess a top-level `model` and are passed through as model-less requests.
**Stream passthrough boundary**: same-protocol requests with no response model rewrite are raw passthrough (SSE frame boundaries and `[DONE]` are preserved); same-protocol requests that need a rewrite parse the `data` JSON per SSE frame and rewrite only `model`; cross-protocol streaming continues to use StreamDecoder → CanonicalStreamConverter → StreamEncoder.
**Error boundary**: gateway-level proxy errors return `{"error":"...","code":"..."}`. Once an upstream HTTP response has been received, its non-2xx status, headers (after hop-by-hop filtering), and body are passed through directly; connection/DNS/TLS/timeout failures with no upstream response return `UPSTREAM_UNAVAILABLE` (see the sketch after this list).
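A minimal sketch of that decision; `proxyOutcome` is an illustrative stand-in, not the handler's actual function. It shows when the gateway answers with its own envelope versus relaying the upstream response:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// proxyOutcome summarizes the error-boundary decision for a single upstream call.
func proxyOutcome(upstreamStatus int, transportErr error) string {
	if transportErr != nil {
		// No upstream HTTP response at all (connection/DNS/TLS/timeout):
		// the gateway replies with its own error envelope.
		return `502 {"error":"upstream unavailable","code":"UPSTREAM_UNAVAILABLE"}`
	}
	if upstreamStatus < http.StatusOK || upstreamStatus >= http.StatusMultipleChoices {
		// Upstream already answered: relay status, hop-by-hop-filtered headers, and body as-is.
		return fmt.Sprintf("%d passed through from upstream", upstreamStatus)
	}
	return "2xx: continue with normal response handling"
}

func main() {
	fmt.Println(proxyOutcome(0, errors.New("dial tcp: connection refused")))
	fmt.Println(proxyOutcome(429, nil))
	fmt.Println(proxyOutcome(200, nil))
}
```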
### Admin Endpoints
#### Provider Management

View File

@@ -49,6 +49,28 @@ func TestAdapter_DetectInterfaceType(t *testing.T) {
}
}
func TestAdapter_APIReferenceNativePaths(t *testing.T) {
a := NewAdapter()
// docs/api_reference/anthropic defines messages and models under /v1.
tests := []struct {
path string
expected conversion.InterfaceType
}{
{"/v1/messages", conversion.InterfaceTypeChat},
{"/v1/models", conversion.InterfaceTypeModels},
{"/v1/models/claude-sonnet-4-5", conversion.InterfaceTypeModelInfo},
{"/messages", conversion.InterfaceTypePassthrough},
{"/models", conversion.InterfaceTypePassthrough},
}
for _, tt := range tests {
t.Run(tt.path, func(t *testing.T) {
assert.Equal(t, tt.expected, a.DetectInterfaceType(tt.path))
})
}
}
func TestAdapter_BuildUrl(t *testing.T) {
a := NewAdapter()

View File

@@ -3,11 +3,13 @@ package conversion
import (
"encoding/json"
"fmt"
"strings"
"time"
"github.com/google/uuid"
"go.uber.org/zap"
"nex/backend/internal/conversion/canonical"
pkglogger "nex/backend/pkg/logger"
)
@@ -71,7 +73,7 @@ func (e *ConversionEngine) IsPassthrough(clientProtocol, providerProtocol string
// ConvertHttpRequest converts an HTTP request
func (e *ConversionEngine) ConvertHttpRequest(spec HTTPRequestSpec, clientProtocol, providerProtocol string, provider *TargetProvider) (*HTTPRequestSpec, error) {
nativePath := spec.URL
nativePath, rawQuery := splitRequestPath(spec.URL)
if e.IsPassthrough(clientProtocol, providerProtocol) {
providerAdapter, err := e.registry.Get(providerProtocol)
@@ -96,8 +98,11 @@ func (e *ConversionEngine) ConvertHttpRequest(spec HTTPRequestSpec, clientProtoc
}
}
providerURL := providerAdapter.BuildUrl(nativePath, interfaceType)
providerURL = appendRawQuery(providerURL, rawQuery)
return &HTTPRequestSpec{
URL: provider.BaseURL + nativePath,
URL: joinBaseURL(provider.BaseURL, providerURL),
Method: spec.Method,
Headers: providerAdapter.BuildHeaders(provider),
Body: rewrittenBody,
@@ -115,6 +120,7 @@ func (e *ConversionEngine) ConvertHttpRequest(spec HTTPRequestSpec, clientProtoc
interfaceType := clientAdapter.DetectInterfaceType(nativePath)
providerURL := providerAdapter.BuildUrl(nativePath, interfaceType)
providerURL = appendRawQuery(providerURL, rawQuery)
providerHeaders := providerAdapter.BuildHeaders(provider)
providerBody, err := e.convertBody(interfaceType, clientAdapter, providerAdapter, provider, spec.Body)
if err != nil {
@@ -122,7 +128,7 @@ func (e *ConversionEngine) ConvertHttpRequest(spec HTTPRequestSpec, clientProtoc
}
return &HTTPRequestSpec{
URL: provider.BaseURL + providerURL,
URL: joinBaseURL(provider.BaseURL, providerURL),
Method: spec.Method,
Headers: providerHeaders,
Body: providerBody,
@@ -198,7 +204,7 @@ func (e *ConversionEngine) CreateStreamConverter(clientProtocol, providerProtoco
ctx := ConversionContext{
ConversionID: uuid.New().String(),
InterfaceType: InterfaceTypeChat,
InterfaceType: interfaceType,
Timestamp: time.Now(),
}
@@ -268,7 +274,7 @@ func (e *ConversionEngine) convertResponseBody(interfaceType InterfaceType, clie
func (e *ConversionEngine) convertChatBody(clientAdapter, providerAdapter ProtocolAdapter, provider *TargetProvider, body []byte) ([]byte, error) {
canonicalReq, err := clientAdapter.DecodeRequest(body)
if err != nil {
return nil, NewConversionError(ErrorCodeJSONParseError, "解码请求失败").WithCause(err)
return nil, NewRequestJSONParseError("解码请求失败", err)
}
ctx := NewConversionContext(InterfaceTypeChat)
@@ -276,6 +282,9 @@ func (e *ConversionEngine) convertChatBody(clientAdapter, providerAdapter Protoc
if err != nil {
return nil, err
}
if containsUnsupportedMultimodal(canonicalReq) {
return nil, NewConversionError(ErrorCodeUnsupportedMultimodal, "跨协议暂不支持多模态内容")
}
encoded, err := providerAdapter.EncodeRequest(canonicalReq, provider)
if err != nil {
@@ -287,7 +296,7 @@ func (e *ConversionEngine) convertChatBody(clientAdapter, providerAdapter Protoc
func (e *ConversionEngine) convertChatResponseBody(clientAdapter, providerAdapter ProtocolAdapter, body []byte, modelOverride string) ([]byte, error) {
canonicalResp, err := providerAdapter.DecodeResponse(body)
if err != nil {
return nil, NewConversionError(ErrorCodeJSONParseError, "解码响应失败").WithCause(err)
return nil, NewResponseJSONParseError("解码响应失败", err)
}
if modelOverride != "" {
canonicalResp.Model = modelOverride
@@ -375,6 +384,7 @@ func (e *ConversionEngine) DetectInterfaceType(nativePath, clientProtocol string
if err != nil {
return InterfaceTypePassthrough, err
}
nativePath, _ = splitRequestPath(nativePath)
return adapter.DetectInterfaceType(nativePath), nil
}
@@ -398,3 +408,46 @@ func (e *ConversionEngine) EncodeError(err *ConversionError, clientProtocol stri
body, statusCode := adapter.EncodeError(err)
return body, statusCode, nil
}
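// splitRequestPath splits a raw request path into its path and raw query parts.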
func splitRequestPath(rawPath string) (string, string) {
path, query, found := strings.Cut(rawPath, "?")
if !found {
return rawPath, ""
}
return path, query
}
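// appendRawQuery re-attaches the raw query to the path, using '&' when the path already carries a query.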
func appendRawQuery(path, rawQuery string) string {
if rawQuery == "" {
return path
}
if strings.Contains(path, "?") {
return path + "&" + rawQuery
}
return path + "?" + rawQuery
}
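// joinBaseURL joins the provider base URL and a path with exactly one slash between them.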
func joinBaseURL(baseURL, path string) string {
if baseURL == "" {
return path
}
if path == "" {
return baseURL
}
return strings.TrimRight(baseURL, "/") + "/" + strings.TrimLeft(path, "/")
}
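// containsUnsupportedMultimodal reports whether the request carries content blocks (image/audio/video/file) that cross-protocol conversion does not support yet.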
func containsUnsupportedMultimodal(req *canonical.CanonicalRequest) bool {
if req == nil {
return false
}
for _, msg := range req.Messages {
for _, block := range msg.Content {
switch block.Type {
case "image", "audio", "video", "file":
return true
}
}
}
return false
}

View File

@@ -0,0 +1,63 @@
package conversion_test
import (
"testing"
"nex/backend/internal/conversion"
"nex/backend/internal/conversion/anthropic"
"nex/backend/internal/conversion/openai"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.uber.org/zap"
)
func TestConvertHttpRequest_SameProtocolUsesAdapterBuildURL(t *testing.T) {
tests := []struct {
name string
adapter conversion.ProtocolAdapter
clientProtocol string
providerProtocol string
baseURL string
nativePath string
expectedURL string
body []byte
}{
{
name: "openai base url includes version path",
adapter: openai.NewAdapter(),
clientProtocol: "openai",
providerProtocol: "openai",
baseURL: "http://example.com/v1",
nativePath: "/chat/completions",
expectedURL: "http://example.com/v1/chat/completions",
body: []byte(`{"model":"gpt-4","messages":[]}`),
},
{
name: "anthropic native path keeps v1",
adapter: anthropic.NewAdapter(),
clientProtocol: "anthropic",
providerProtocol: "anthropic",
baseURL: "http://example.com",
nativePath: "/v1/messages",
expectedURL: "http://example.com/v1/messages",
body: []byte(`{"model":"claude","messages":[]}`),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
registry := conversion.NewMemoryRegistry()
engine := conversion.NewConversionEngine(registry, zap.NewNop())
require.NoError(t, registry.Register(tt.adapter))
out, err := engine.ConvertHttpRequest(conversion.HTTPRequestSpec{
URL: tt.nativePath,
Method: "POST",
Body: tt.body,
}, tt.clientProtocol, tt.providerProtocol, conversion.NewTargetProvider(tt.baseURL, "key", "upstream-model"))
require.NoError(t, err)
assert.Equal(t, tt.expectedURL, out.URL)
})
}
}

View File

@@ -2,6 +2,7 @@ package conversion
import (
"encoding/json"
"strings"
"testing"
"nex/backend/internal/conversion/canonical"
@@ -498,12 +499,13 @@ func TestCreateStreamConverter_ModelOverride_SmartPassthrough(t *testing.T) {
_, ok := converter.(*SmartPassthroughStreamConverter)
assert.True(t, ok)
// verify chunk rewriting
chunks := converter.ProcessChunk([]byte(`{"model":"gpt-4","choices":[]}`))
// verify that the data JSON inside the SSE frame is rewritten
chunks := converter.ProcessChunk([]byte(`data: {"model":"gpt-4","choices":[]}` + "\n\n"))
require.Len(t, chunks, 1)
var resp map[string]interface{}
require.NoError(t, json.Unmarshal(chunks[0], &resp))
payload := strings.TrimPrefix(strings.TrimSpace(string(chunks[0])), "data: ")
require.NoError(t, json.Unmarshal([]byte(payload), &resp))
assert.Equal(t, "openai/gpt-4", resp["model"])
}

View File

@@ -17,6 +17,13 @@ const (
ErrorCodeProtocolConstraint ErrorCode = "PROTOCOL_CONSTRAINT_VIOLATION"
ErrorCodeEncodingFailure ErrorCode = "ENCODING_FAILURE"
ErrorCodeInterfaceNotSupported ErrorCode = "INTERFACE_NOT_SUPPORTED"
ErrorCodeUnsupportedMultimodal ErrorCode = "UNSUPPORTED_MULTIMODAL"
)
const (
ErrorDetailPhase = "phase"
ErrorPhaseRequest = "request"
ErrorPhaseResponse = "response"
)
// ConversionError is a protocol conversion error
@@ -39,6 +46,20 @@ func NewConversionError(code ErrorCode, message string) *ConversionError {
}
}
// NewRequestJSONParseError creates a request-phase JSON parse error.
func NewRequestJSONParseError(message string, cause error) *ConversionError {
return NewConversionError(ErrorCodeJSONParseError, message).
WithDetail(ErrorDetailPhase, ErrorPhaseRequest).
WithCause(cause)
}
// NewResponseJSONParseError creates a response-phase JSON parse error.
func NewResponseJSONParseError(message string, cause error) *ConversionError {
return NewConversionError(ErrorCodeJSONParseError, message).
WithDetail(ErrorDetailPhase, ErrorPhaseResponse).
WithCause(cause)
}
// WithClientProtocol sets the client protocol
func (e *ConversionError) WithClientProtocol(protocol string) *ConversionError {
e.ClientProtocol = protocol

View File

@@ -44,6 +44,29 @@ func TestAdapter_DetectInterfaceType(t *testing.T) {
}
}
func TestAdapter_APIReferenceNativePaths(t *testing.T) {
a := NewAdapter()
// docs/api_reference/openai, excluding responses, defines paths without /v1.
tests := []struct {
path string
expected conversion.InterfaceType
}{
{"/chat/completions", conversion.InterfaceTypeChat},
{"/models", conversion.InterfaceTypeModels},
{"/models/gpt-4.1", conversion.InterfaceTypeModelInfo},
{"/embeddings", conversion.InterfaceTypeEmbeddings},
{"/rerank", conversion.InterfaceTypeRerank},
{"/v1/chat/completions", conversion.InterfaceTypePassthrough},
}
for _, tt := range tests {
t.Run(tt.path, func(t *testing.T) {
assert.Equal(t, tt.expected, a.DetectInterfaceType(tt.path))
})
}
}
func TestAdapter_BuildUrl(t *testing.T) {
a := NewAdapter()

View File

@@ -1,6 +1,11 @@
package conversion
import "nex/backend/internal/conversion/canonical"
import (
"bytes"
"strings"
"nex/backend/internal/conversion/canonical"
)
// StreamDecoder is the streaming decoder interface
type StreamDecoder interface {
@@ -39,11 +44,12 @@ func (c *PassthroughStreamConverter) Flush() [][]byte {
}
// SmartPassthroughStreamConverter is the same-protocol Smart Passthrough stream converter.
// Rewrites the model field per chunk
// Rewrites the model field in the data JSON per SSE frame
type SmartPassthroughStreamConverter struct {
adapter ProtocolAdapter
modelOverride string
interfaceType InterfaceType
buffer []byte
}
// NewSmartPassthroughStreamConverter creates a Smart Passthrough stream converter
@@ -55,24 +61,45 @@ func NewSmartPassthroughStreamConverter(adapter ProtocolAdapter, modelOverride s
}
}
// ProcessChunk rewrites the model field in each chunk
// ProcessChunk rewrites the model field in the data JSON per SSE frame
func (c *SmartPassthroughStreamConverter) ProcessChunk(rawChunk []byte) [][]byte {
if len(rawChunk) == 0 {
return nil
}
rewrittenChunk, err := c.adapter.RewriteResponseModelName(rawChunk, c.modelOverride, c.interfaceType)
if err != nil {
// rewrite failed, return the original chunk
return [][]byte{rawChunk}
}
c.buffer = append(c.buffer, rawChunk...)
frames, rest := splitSSEFrames(c.buffer)
c.buffer = rest
return [][]byte{rewrittenChunk}
result := make([][]byte, 0, len(frames))
for _, frame := range frames {
result = append(result, c.rewriteFrame(frame))
}
return result
}
// Flush has no buffered data to emit
func (c *SmartPassthroughStreamConverter) rewriteFrame(frame []byte) []byte {
payload, ok := sseFrameDataPayload(frame)
if !ok || strings.TrimSpace(payload) == "[DONE]" {
return frame
}
rewrittenPayload, err := c.adapter.RewriteResponseModelName([]byte(payload), c.modelOverride, c.interfaceType)
if err != nil {
return frame
}
return rebuildSSEFrameWithData(frame, string(rewrittenPayload))
}
// Flush emits any remaining data that has not formed a complete frame
func (c *SmartPassthroughStreamConverter) Flush() [][]byte {
return nil
if len(c.buffer) == 0 {
return nil
}
frame := append([]byte(nil), c.buffer...)
c.buffer = nil
return [][]byte{c.rewriteFrame(frame)}
}
// CanonicalStreamConverter is the cross-protocol canonical stream converter
@@ -153,3 +180,86 @@ func (c *CanonicalStreamConverter) applyModelOverride(event *canonical.Canonical
event.Message.Model = c.modelOverride
}
}
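// splitSSEFrames cuts the buffered bytes into complete SSE frames (separator included) and returns the unfinished remainder.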
func splitSSEFrames(data []byte) ([][]byte, []byte) {
var frames [][]byte
for len(data) > 0 {
idx, sepLen := findSSEFrameSeparator(data)
if idx < 0 {
break
}
end := idx + sepLen
frames = append(frames, append([]byte(nil), data[:end]...))
data = data[end:]
}
return frames, data
}
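// findSSEFrameSeparator returns the index and length of the earliest frame separator (LF-LF or CRLF-CRLF), or -1 when none is present.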
func findSSEFrameSeparator(data []byte) (int, int) {
lf := bytes.Index(data, []byte("\n\n"))
crlf := bytes.Index(data, []byte("\r\n\r\n"))
switch {
case lf < 0 && crlf < 0:
return -1, 0
case lf < 0:
return crlf, 4
case crlf < 0:
return lf, 2
case crlf <= lf:
return crlf, 4
default:
return lf, 2
}
}
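// sseFrameDataPayload extracts the joined data: payload of an SSE frame; ok is false when the frame has no data lines.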
func sseFrameDataPayload(frame []byte) (string, bool) {
text := strings.TrimRight(string(frame), "\r\n")
lines := strings.Split(text, "\n")
var dataLines []string
for _, line := range lines {
line = strings.TrimRight(line, "\r")
if strings.HasPrefix(line, "data:") {
value := strings.TrimPrefix(line, "data:")
if strings.HasPrefix(value, " ") {
value = value[1:]
}
dataLines = append(dataLines, value)
}
}
if len(dataLines) == 0 {
return "", false
}
return strings.Join(dataLines, "\n"), true
}
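// rebuildSSEFrameWithData re-emits the frame with its data lines replaced by the given payload, preserving non-data lines and the original line ending.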
func rebuildSSEFrameWithData(frame []byte, data string) []byte {
lineEnding, separator := sseLineEnding(frame)
text := strings.TrimRight(string(frame), "\r\n")
lines := strings.Split(text, "\n")
out := make([]string, 0, len(lines)+1)
dataWritten := false
for _, line := range lines {
line = strings.TrimRight(line, "\r")
if strings.HasPrefix(line, "data:") {
if !dataWritten {
for _, dataLine := range strings.Split(data, "\n") {
out = append(out, "data: "+dataLine)
}
dataWritten = true
}
continue
}
out = append(out, line)
}
if !dataWritten {
out = append(out, "data: "+data)
}
return []byte(strings.Join(out, lineEnding) + separator)
}
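// sseLineEnding returns the line ending and frame separator used by the frame (CRLF or LF).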
func sseLineEnding(frame []byte) (string, string) {
if bytes.Contains(frame, []byte("\r\n")) {
return "\r\n", "\r\n\r\n"
}
return "\n", "\n\n"
}

View File

@@ -13,6 +13,7 @@ import (
"nex/backend/internal/domain"
"nex/backend/internal/provider"
"nex/backend/internal/service"
appErrors "nex/backend/pkg/errors"
"nex/backend/pkg/modelid"
"github.com/gin-gonic/gin"
@@ -48,7 +49,7 @@ func (h *ProxyHandler) HandleProxy(c *gin.Context) {
// extract clientProtocol from the URL: /{protocol}/v1/...
clientProtocol := c.Param("protocol")
if clientProtocol == "" {
c.JSON(http.StatusBadRequest, gin.H{"error": "缺少协议前缀"})
h.writeProxyError(c, http.StatusBadRequest, "INVALID_REQUEST", "缺少协议前缀")
return
}
@@ -58,12 +59,13 @@ func (h *ProxyHandler) HandleProxy(c *gin.Context) {
path = "/" + path
}
nativePath := path
requestPath := appendRawQuery(nativePath, c.Request.URL.RawQuery)
// get the client adapter
registry := h.engine.GetRegistry()
clientAdapter, err := registry.Get(clientProtocol)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "不支持的协议: " + clientProtocol})
h.writeProxyError(c, http.StatusNotFound, "UNSUPPORTED_INTERFACE", "不支持的协议: "+clientProtocol)
return
}
@@ -80,7 +82,7 @@ func (h *ProxyHandler) HandleProxy(c *gin.Context) {
if ifaceType == conversion.InterfaceTypeModelInfo {
unifiedID, err := clientAdapter.ExtractUnifiedModelID(nativePath)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "无效的模型 ID 格式"})
h.writeProxyError(c, http.StatusBadRequest, "INVALID_MODEL_ID", "无效的模型 ID 格式")
return
}
h.handleModelInfo(c, unifiedID, clientAdapter)
@@ -90,40 +92,50 @@ func (h *ProxyHandler) HandleProxy(c *gin.Context) {
// read the request body
body, err := io.ReadAll(c.Request.Body)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "读取请求体失败"})
h.writeProxyError(c, http.StatusBadRequest, "INVALID_REQUEST", "读取请求体失败")
return
}
// parse the unified model ID (using adapter.ExtractModelName)
var providerID, modelName string
if len(body) > 0 {
unifiedID, err := clientAdapter.ExtractModelName(body, ifaceType)
if err == nil && unifiedID != "" {
pid, mn, err := modelid.ParseUnifiedModelID(unifiedID)
if err == nil {
providerID = pid
modelName = mn
}
}
}
// build the input HTTPRequestSpec
inSpec := conversion.HTTPRequestSpec{
URL: nativePath,
URL: requestPath,
Method: c.Request.Method,
Headers: extractHeaders(c),
Body: body,
}
isStream := h.isStreamRequest(body, clientProtocol, nativePath)
// Only endpoints the adapter explicitly supports extract model; unknown endpoints do no generic model guessing.
if len(body) == 0 || !supportsModelExtraction(ifaceType) {
h.forwardPassthrough(c, inSpec, clientProtocol, ifaceType, isStream)
return
}
unifiedID, err := clientAdapter.ExtractModelName(body, ifaceType)
if err != nil {
if isInvalidJSONError(err) {
h.writeProxyError(c, http.StatusBadRequest, "INVALID_JSON", "请求体 JSON 格式错误")
return
}
h.forwardPassthrough(c, inSpec, clientProtocol, ifaceType, isStream)
return
}
providerID, modelName, err := modelid.ParseUnifiedModelID(unifiedID)
if err != nil {
// Compatibility passthrough for raw model names: non-unified model IDs do not participate in routing.
h.forwardPassthrough(c, inSpec, clientProtocol, ifaceType, isStream)
return
}
if providerID == "" || modelName == "" {
h.forwardPassthrough(c, inSpec, clientProtocol, ifaceType, isStream)
return
}
// route the request
routeResult, err := h.routingService.RouteByModelName(providerID, modelName)
if err != nil {
// for GET requests, or when no model can be extracted, forward directly to upstream
if len(body) == 0 || modelName == "" {
h.forwardPassthrough(c, inSpec, clientProtocol)
return
}
h.writeError(c, err, clientProtocol)
h.writeRouteError(c, err)
return
}
@@ -143,9 +155,6 @@ func (h *ProxyHandler) HandleProxy(c *gin.Context) {
routeResult.Model.ModelName, // 上游模型名,用于请求改写
)
// determine whether the request is streaming
isStream := h.isStreamRequest(body, clientProtocol, nativePath)
// compute the unified model ID (used for response override)
unifiedModelID := routeResult.Model.UnifiedModelID()
@@ -156,6 +165,28 @@ func (h *ProxyHandler) HandleProxy(c *gin.Context) {
}
}
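// supportsModelExtraction reports whether the interface type participates in model extraction and unified-model-ID routing.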
func supportsModelExtraction(ifaceType conversion.InterfaceType) bool {
switch ifaceType {
case conversion.InterfaceTypeChat, conversion.InterfaceTypeEmbeddings, conversion.InterfaceTypeRerank:
return true
default:
return false
}
}
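// isInvalidJSONError reports whether err is a JSON syntax or type error from decoding the request body.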
func isInvalidJSONError(err error) bool {
var syntaxErr *json.SyntaxError
var typeErr *json.UnmarshalTypeError
return errors.As(err, &syntaxErr) || errors.As(err, &typeErr)
}
func appendRawQuery(path, rawQuery string) string {
if rawQuery == "" {
return path
}
return path + "?" + rawQuery
}
// handleNonStream handles a non-streaming request
func (h *ProxyHandler) handleNonStream(c *gin.Context, inSpec conversion.HTTPRequestSpec, clientProtocol, providerProtocol string, targetProvider *conversion.TargetProvider, routeResult *domain.RouteResult, unifiedModelID string, ifaceType conversion.InterfaceType) {
// convert the request
@@ -170,7 +201,11 @@ func (h *ProxyHandler) handleNonStream(c *gin.Context, inSpec conversion.HTTPReq
resp, err := h.client.Send(c.Request.Context(), *outSpec)
if err != nil {
h.logger.Error("发送请求失败", zap.Error(err))
h.writeConversionError(c, err, clientProtocol)
h.writeUpstreamUnavailable(c, err)
return
}
if resp.StatusCode < http.StatusOK || resp.StatusCode >= http.StatusMultipleChoices {
h.writeUpstreamResponse(c, *resp)
return
}
@@ -182,15 +217,7 @@ func (h *ProxyHandler) handleNonStream(c *gin.Context, inSpec conversion.HTTPReq
return
}
// set response headers
for k, v := range convertedResp.Headers {
c.Header(k, v)
}
if c.GetHeader("Content-Type") == "" {
c.Header("Content-Type", "application/json")
}
c.Data(convertedResp.StatusCode, "application/json", convertedResp.Body)
h.writeConvertedResponse(c, *convertedResp)
go func() {
_ = h.statsService.Record(routeResult.Provider.ID, routeResult.Model.ModelName) //nolint:errcheck // fire-and-forget stats recording does not block the request
@@ -206,15 +233,23 @@ func (h *ProxyHandler) handleStream(c *gin.Context, inSpec conversion.HTTPReques
return
}
// create the stream converter, passing modelOverride (overrides the model field in cross-protocol scenarios)
streamConverter, err := h.engine.CreateStreamConverter(clientProtocol, providerProtocol, unifiedModelID, ifaceType)
// send the streaming request
streamResp, err := h.client.SendStream(c.Request.Context(), *outSpec)
if err != nil {
h.writeConversionError(c, err, clientProtocol)
h.writeUpstreamUnavailable(c, err)
return
}
if streamResp.StatusCode < http.StatusOK || streamResp.StatusCode >= http.StatusMultipleChoices {
h.writeUpstreamResponse(c, conversion.HTTPResponseSpec{
StatusCode: streamResp.StatusCode,
Headers: streamResp.Headers,
Body: streamResp.Body,
})
return
}
// send the streaming request
eventChan, err := h.client.SendStream(c.Request.Context(), *outSpec)
// create the stream converter, passing modelOverride (overrides the model field in cross-protocol scenarios)
streamConverter, err := h.engine.CreateStreamConverter(clientProtocol, providerProtocol, unifiedModelID, ifaceType)
if err != nil {
h.writeConversionError(c, err, clientProtocol)
return
@@ -225,8 +260,9 @@ func (h *ProxyHandler) handleStream(c *gin.Context, inSpec conversion.HTTPReques
c.Header("Connection", "keep-alive")
writer := bufio.NewWriter(c.Writer)
flushed := false
for event := range eventChan {
for event := range streamResp.Events {
if event.Error != nil {
h.logger.Error("流读取错误", zap.Error(event.Error))
break
@@ -237,6 +273,7 @@ func (h *ProxyHandler) handleStream(c *gin.Context, inSpec conversion.HTTPReques
if err := h.writeStreamChunks(writer, chunks); err != nil {
h.logger.Warn("流式响应写回失败", zap.Error(err))
}
flushed = true
break
}
@@ -246,6 +283,12 @@ func (h *ProxyHandler) handleStream(c *gin.Context, inSpec conversion.HTTPReques
break
}
}
if !flushed {
chunks := streamConverter.Flush()
if err := h.writeStreamChunks(writer, chunks); err != nil {
h.logger.Warn("流式响应写回失败", zap.Error(err))
}
}
go func() {
_ = h.statsService.Record(routeResult.Provider.ID, routeResult.Model.ModelName) //nolint:errcheck // fire-and-forget stats recording does not block the request
@@ -291,7 +334,7 @@ func (h *ProxyHandler) handleModelsList(c *gin.Context, adapter conversion.Proto
models, err := h.providerService.ListEnabledModels()
if err != nil {
h.logger.Error("查询启用模型失败", zap.Error(err))
c.JSON(http.StatusInternalServerError, gin.H{"error": "查询模型失败"})
h.writeProxyError(c, http.StatusInternalServerError, "CONVERSION_FAILED", "查询模型失败")
return
}
@@ -313,7 +356,7 @@ func (h *ProxyHandler) handleModelsList(c *gin.Context, adapter conversion.Proto
body, err := adapter.EncodeModelsResponse(modelList)
if err != nil {
h.logger.Error("编码 Models 响应失败", zap.Error(err))
c.JSON(http.StatusInternalServerError, gin.H{"error": "编码响应失败"})
h.writeProxyError(c, http.StatusInternalServerError, "CONVERSION_FAILED", "编码响应失败")
return
}
@@ -325,17 +368,14 @@ func (h *ProxyHandler) handleModelInfo(c *gin.Context, unifiedID string, adapter
// parse the unified model ID
providerID, modelName, err := modelid.ParseUnifiedModelID(unifiedID)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{
"error": "无效的统一模型 ID 格式",
"code": "INVALID_MODEL_ID",
})
h.writeProxyError(c, http.StatusBadRequest, "INVALID_MODEL_ID", "无效的统一模型 ID 格式")
return
}
// look up the model in the database
model, err := h.providerService.GetModelByProviderAndName(providerID, modelName)
if err != nil {
c.JSON(http.StatusNotFound, gin.H{"error": "模型未找到"})
h.writeProxyError(c, http.StatusNotFound, "MODEL_NOT_FOUND", "模型未找到")
return
}
@@ -351,46 +391,103 @@ func (h *ProxyHandler) handleModelInfo(c *gin.Context, unifiedID string, adapter
body, err := adapter.EncodeModelInfoResponse(modelInfo)
if err != nil {
h.logger.Error("编码 ModelInfo 响应失败", zap.Error(err))
c.JSON(http.StatusInternalServerError, gin.H{"error": "编码响应失败"})
h.writeProxyError(c, http.StatusInternalServerError, "CONVERSION_FAILED", "编码响应失败")
return
}
c.Data(http.StatusOK, "application/json", body)
}
// writeConversionError writes a conversion error
// writeConversionError writes a gateway-level conversion error
func (h *ProxyHandler) writeConversionError(c *gin.Context, err error, clientProtocol string) {
var convErr *conversion.ConversionError
if errors.As(err, &convErr) {
body, statusCode, encodeErr := h.engine.EncodeError(convErr, clientProtocol)
if encodeErr != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": encodeErr.Error()})
return
}
c.Data(statusCode, "application/json", body)
statusCode, code, message := mapConversionError(convErr)
h.writeProxyError(c, statusCode, code, message)
return
}
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
h.writeProxyError(c, http.StatusInternalServerError, "CONVERSION_FAILED", err.Error())
}
// writeError writes a routing error
func (h *ProxyHandler) writeError(c *gin.Context, err error, clientProtocol string) {
c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
func mapConversionError(err *conversion.ConversionError) (int, string, string) {
switch err.Code {
case conversion.ErrorCodeJSONParseError:
if phase, ok := err.Details[conversion.ErrorDetailPhase].(string); ok && phase == conversion.ErrorPhaseRequest {
return http.StatusBadRequest, "INVALID_JSON", "请求体 JSON 格式错误"
}
return http.StatusInternalServerError, "CONVERSION_FAILED", err.Message
case conversion.ErrorCodeInvalidInput,
conversion.ErrorCodeMissingRequiredField,
conversion.ErrorCodeProtocolConstraint:
return http.StatusBadRequest, "INVALID_REQUEST", err.Message
case conversion.ErrorCodeInterfaceNotSupported:
return http.StatusBadRequest, "UNSUPPORTED_INTERFACE", err.Message
case conversion.ErrorCodeUnsupportedMultimodal:
return http.StatusBadRequest, "UNSUPPORTED_MULTIMODAL", err.Message
default:
return http.StatusInternalServerError, "CONVERSION_FAILED", err.Message
}
}
func (h *ProxyHandler) writeRouteError(c *gin.Context, err error) {
if appErr, ok := appErrors.AsAppError(err); ok {
switch appErr.Code {
case appErrors.ErrModelNotFound.Code, appErrors.ErrModelDisabled.Code:
h.writeProxyError(c, appErr.HTTPStatus, "MODEL_NOT_FOUND", appErr.Message)
case appErrors.ErrProviderNotFound.Code, appErrors.ErrProviderDisabled.Code:
h.writeProxyError(c, appErr.HTTPStatus, "PROVIDER_NOT_FOUND", appErr.Message)
default:
h.writeProxyError(c, appErr.HTTPStatus, "INVALID_REQUEST", appErr.Message)
}
return
}
h.writeProxyError(c, http.StatusNotFound, "MODEL_NOT_FOUND", err.Error())
}
func (h *ProxyHandler) writeUpstreamUnavailable(c *gin.Context, err error) {
h.logger.Error("上游不可达", zap.Error(err))
h.writeProxyError(c, http.StatusBadGateway, "UPSTREAM_UNAVAILABLE", "上游服务不可达")
}
func (h *ProxyHandler) writeProxyError(c *gin.Context, status int, code, message string) {
c.JSON(status, gin.H{
"error": message,
"code": code,
})
}
func (h *ProxyHandler) writeConvertedResponse(c *gin.Context, resp conversion.HTTPResponseSpec) {
for k, v := range resp.Headers {
c.Header(k, v)
}
contentType := headerValue(resp.Headers, "Content-Type")
if contentType == "" {
contentType = "application/json"
}
c.Data(resp.StatusCode, contentType, resp.Body)
}
func (h *ProxyHandler) writeUpstreamResponse(c *gin.Context, resp conversion.HTTPResponseSpec) {
for k, v := range filterHopByHopHeaders(resp.Headers) {
c.Header(k, v)
}
contentType := headerValue(resp.Headers, "Content-Type")
c.Data(resp.StatusCode, contentType, resp.Body)
}
// forwardPassthrough forwards the request directly to upstream (for GET and other model-less requests)
func (h *ProxyHandler) forwardPassthrough(c *gin.Context, inSpec conversion.HTTPRequestSpec, clientProtocol string) {
func (h *ProxyHandler) forwardPassthrough(c *gin.Context, inSpec conversion.HTTPRequestSpec, clientProtocol string, ifaceType conversion.InterfaceType, isStream bool) {
registry := h.engine.GetRegistry()
adapter, err := registry.Get(clientProtocol)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "不支持的协议: " + clientProtocol})
h.writeProxyError(c, http.StatusNotFound, "UNSUPPORTED_INTERFACE", "不支持的协议: "+clientProtocol)
return
}
providers, err := h.providerService.List()
if err != nil || len(providers) == 0 {
h.logger.Warn("无可用供应商转发 GET 请求", zap.String("path", inSpec.URL))
c.JSON(http.StatusNotFound, gin.H{"error": "没有可用的供应商。请先创建供应商和模型。"})
h.logger.Warn("无可用供应商转发请求", zap.String("path", inSpec.URL))
h.writeProxyError(c, http.StatusNotFound, "PROVIDER_NOT_FOUND", "没有可用的供应商。请先创建供应商和模型。")
return
}
@@ -400,19 +497,18 @@ func (h *ProxyHandler) forwardPassthrough(c *gin.Context, inSpec conversion.HTTP
providerProtocol = "openai"
}
ifaceType := adapter.DetectInterfaceType(inSpec.URL)
targetProvider := conversion.NewTargetProvider(p.BaseURL, p.APIKey, "")
var outSpec *conversion.HTTPRequestSpec
if clientProtocol == providerProtocol {
upstreamURL := p.BaseURL + inSpec.URL
upstreamPath := adapter.BuildUrl(stripRawQuery(inSpec.URL), ifaceType)
upstreamPath = appendRawQuery(upstreamPath, rawQueryFromPath(inSpec.URL))
headers := adapter.BuildHeaders(targetProvider)
if _, ok := headers["Content-Type"]; !ok {
headers["Content-Type"] = "application/json"
}
outSpec = &conversion.HTTPRequestSpec{
URL: upstreamURL,
URL: joinBaseURL(p.BaseURL, upstreamPath),
Method: inSpec.Method,
Headers: headers,
Body: inSpec.Body,
@@ -425,9 +521,18 @@ func (h *ProxyHandler) forwardPassthrough(c *gin.Context, inSpec conversion.HTTP
}
}
if isStream {
h.forwardStream(c, *outSpec, clientProtocol, providerProtocol, ifaceType)
return
}
resp, err := h.client.Send(c.Request.Context(), *outSpec)
if err != nil {
h.writeConversionError(c, err, clientProtocol)
h.writeUpstreamUnavailable(c, err)
return
}
if resp.StatusCode < http.StatusOK || resp.StatusCode >= http.StatusMultipleChoices {
h.writeUpstreamResponse(c, *resp)
return
}
@@ -437,13 +542,111 @@ func (h *ProxyHandler) forwardPassthrough(c *gin.Context, inSpec conversion.HTTP
return
}
for k, v := range convertedResp.Headers {
c.Header(k, v)
h.writeConvertedResponse(c, *convertedResp)
}
func (h *ProxyHandler) forwardStream(c *gin.Context, outSpec conversion.HTTPRequestSpec, clientProtocol, providerProtocol string, ifaceType conversion.InterfaceType) {
streamResp, err := h.client.SendStream(c.Request.Context(), outSpec)
if err != nil {
h.writeUpstreamUnavailable(c, err)
return
}
if c.GetHeader("Content-Type") == "" {
c.Header("Content-Type", "application/json")
if streamResp.StatusCode < http.StatusOK || streamResp.StatusCode >= http.StatusMultipleChoices {
h.writeUpstreamResponse(c, conversion.HTTPResponseSpec{
StatusCode: streamResp.StatusCode,
Headers: streamResp.Headers,
Body: streamResp.Body,
})
return
}
c.Data(convertedResp.StatusCode, "application/json", convertedResp.Body)
streamConverter, err := h.engine.CreateStreamConverter(clientProtocol, providerProtocol, "", ifaceType)
if err != nil {
h.writeConversionError(c, err, clientProtocol)
return
}
c.Header("Content-Type", "text/event-stream")
c.Header("Cache-Control", "no-cache")
c.Header("Connection", "keep-alive")
writer := bufio.NewWriter(c.Writer)
flushed := false
for event := range streamResp.Events {
if event.Error != nil {
h.logger.Error("透传流读取错误", zap.Error(event.Error))
break
}
if event.Done {
chunks := streamConverter.Flush()
if err := h.writeStreamChunks(writer, chunks); err != nil {
h.logger.Warn("透传流式响应写回失败", zap.Error(err))
}
flushed = true
break
}
chunks := streamConverter.ProcessChunk(event.Data)
if err := h.writeStreamChunks(writer, chunks); err != nil {
h.logger.Warn("透传流式响应写回失败", zap.Error(err))
break
}
}
if !flushed {
chunks := streamConverter.Flush()
if err := h.writeStreamChunks(writer, chunks); err != nil {
h.logger.Warn("透传流式响应写回失败", zap.Error(err))
}
}
}
func stripRawQuery(path string) string {
pathOnly, _, _ := strings.Cut(path, "?")
return pathOnly
}
func rawQueryFromPath(path string) string {
_, rawQuery, found := strings.Cut(path, "?")
if !found {
return ""
}
return rawQuery
}
func joinBaseURL(baseURL, path string) string {
return strings.TrimRight(baseURL, "/") + "/" + strings.TrimLeft(path, "/")
}
func headerValue(headers map[string]string, key string) string {
for k, v := range headers {
if strings.EqualFold(k, key) {
return v
}
}
return ""
}
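// filterHopByHopHeaders drops hop-by-hop headers (Connection, Transfer-Encoding, etc.) before relaying upstream headers to the client.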
func filterHopByHopHeaders(headers map[string]string) map[string]string {
if len(headers) == 0 {
return nil
}
hopByHop := map[string]struct{}{
"connection": {},
"transfer-encoding": {},
"keep-alive": {},
"proxy-authenticate": {},
"proxy-authorization": {},
"te": {},
"trailer": {},
"upgrade": {},
}
filtered := make(map[string]string, len(headers))
for k, v := range headers {
if _, skip := hopByHop[strings.ToLower(k)]; skip {
continue
}
filtered[k] = v
}
return filtered
}
// extractHeaders extracts request headers from the Gin context

View File

@@ -5,6 +5,7 @@ import (
"context"
"encoding/json"
"fmt"
"net/http"
"net/http/httptest"
"testing"
@@ -73,7 +74,7 @@ func TestProxyHandler_HandleProxy_NonStreamSuccess(t *testing.T) {
engine := setupProxyEngine(t)
routingSvc := mocks.NewMockRoutingService(ctrl)
routingSvc.EXPECT().RouteByModelName("", "").Return(&domain.RouteResult{
routingSvc.EXPECT().RouteByModelName("p1", "gpt-4").Return(&domain.RouteResult{
Provider: &domain.Provider{ID: "p1", Name: "Test", APIKey: "sk-test", BaseURL: "https://api.test.com", Protocol: "openai", Enabled: true},
Model: &domain.Model{ID: "m1", ProviderID: "p1", ModelName: "gpt-4", Enabled: true},
}, nil)
@@ -93,7 +94,7 @@ func TestProxyHandler_HandleProxy_NonStreamSuccess(t *testing.T) {
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/chat/completions"}}
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"gpt-4","messages":[{"role":"user","content":"hi"}]}`)))
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"p1/gpt-4","messages":[{"role":"user","content":"hi"}]}`)))
h.HandleProxy(c)
assert.Equal(t, 200, w.Code)
@@ -109,9 +110,8 @@ func TestProxyHandler_HandleProxy_RoutingError_WithBody(t *testing.T) {
engine := setupProxyEngine(t)
routingSvc := mocks.NewMockRoutingService(ctrl)
routingSvc.EXPECT().RouteByModelName("", "").Return(nil, appErrors.ErrModelNotFound)
routingSvc.EXPECT().RouteByModelName("unknown", "model").Return(nil, appErrors.ErrModelNotFound)
providerSvc := mocks.NewMockProviderService(ctrl)
providerSvc.EXPECT().List().Return(nil, nil)
client := mocks.NewMockProviderClient(ctrl)
statsSvc := mocks.NewMockStatsService(ctrl)
h := newTestProxyHandler(engine, client, routingSvc, providerSvc, statsSvc)
@@ -119,10 +119,11 @@ func TestProxyHandler_HandleProxy_RoutingError_WithBody(t *testing.T) {
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/chat/completions"}}
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"unknown","messages":[{"role":"user","content":"hi"}]}`)))
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"unknown/model","messages":[{"role":"user","content":"hi"}]}`)))
h.HandleProxy(c)
assert.Equal(t, 404, w.Code)
assert.Contains(t, w.Body.String(), "MODEL_NOT_FOUND")
}
func TestProxyHandler_HandleProxy_ConversionError(t *testing.T) {
@@ -131,7 +132,7 @@ func TestProxyHandler_HandleProxy_ConversionError(t *testing.T) {
engine := setupProxyEngine(t)
routingSvc := mocks.NewMockRoutingService(ctrl)
routingSvc.EXPECT().RouteByModelName("", "").Return(&domain.RouteResult{
routingSvc.EXPECT().RouteByModelName("p1", "gpt-4").Return(&domain.RouteResult{
Provider: &domain.Provider{ID: "p1", Name: "Test", APIKey: "sk-test", BaseURL: "https://api.test.com", Protocol: "openai", Enabled: true},
Model: &domain.Model{ID: "m1", ProviderID: "p1", ModelName: "gpt-4", Enabled: true},
}, nil)
@@ -146,10 +147,11 @@ func TestProxyHandler_HandleProxy_ConversionError(t *testing.T) {
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/chat/completions"}}
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"gpt-4","messages":[{"role":"user","content":"hi"}]}`)))
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"p1/gpt-4","messages":[{"role":"user","content":"hi"}]}`)))
h.HandleProxy(c)
assert.Equal(t, 500, w.Code)
assert.Equal(t, 502, w.Code)
assert.JSONEq(t, `{"error":"上游服务不可达","code":"UPSTREAM_UNAVAILABLE"}`, w.Body.String())
}
func TestProxyHandler_HandleProxy_ClientSendError(t *testing.T) {
@@ -158,7 +160,7 @@ func TestProxyHandler_HandleProxy_ClientSendError(t *testing.T) {
engine := setupProxyEngine(t)
routingSvc := mocks.NewMockRoutingService(ctrl)
routingSvc.EXPECT().RouteByModelName("", "").Return(&domain.RouteResult{
routingSvc.EXPECT().RouteByModelName("p1", "gpt-4").Return(&domain.RouteResult{
Provider: &domain.Provider{ID: "p1", Name: "Test", APIKey: "sk-test", BaseURL: "https://api.test.com", Protocol: "openai", Enabled: true},
Model: &domain.Model{ID: "m1", ProviderID: "p1", ModelName: "gpt-4", Enabled: true},
}, nil)
@@ -173,10 +175,11 @@ func TestProxyHandler_HandleProxy_ClientSendError(t *testing.T) {
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/chat/completions"}}
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"gpt-4","messages":[{"role":"user","content":"hi"}]}`)))
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"p1/gpt-4","messages":[{"role":"user","content":"hi"}]}`)))
h.HandleProxy(c)
assert.Equal(t, 500, w.Code)
assert.Equal(t, 502, w.Code)
assert.JSONEq(t, `{"error":"上游服务不可达","code":"UPSTREAM_UNAVAILABLE"}`, w.Body.String())
}
func TestProxyHandler_HandleProxy_StreamSuccess(t *testing.T) {
@@ -185,12 +188,12 @@ func TestProxyHandler_HandleProxy_StreamSuccess(t *testing.T) {
engine := setupProxyEngine(t)
routingSvc := mocks.NewMockRoutingService(ctrl)
routingSvc.EXPECT().RouteByModelName("", "").Return(&domain.RouteResult{
routingSvc.EXPECT().RouteByModelName("p1", "gpt-4").Return(&domain.RouteResult{
Provider: &domain.Provider{ID: "p1", Name: "Test", APIKey: "sk-test", BaseURL: "https://api.test.com", Protocol: "openai", Enabled: true},
Model: &domain.Model{ID: "m1", ProviderID: "p1", ModelName: "gpt-4", Enabled: true},
}, nil)
client := mocks.NewMockProviderClient(ctrl)
client.EXPECT().SendStream(gomock.Any(), gomock.Any()).DoAndReturn(func(ctx context.Context, spec conversion.HTTPRequestSpec) (<-chan provider.StreamEvent, error) {
client.EXPECT().SendStream(gomock.Any(), gomock.Any()).DoAndReturn(func(ctx context.Context, spec conversion.HTTPRequestSpec) (*provider.StreamResponse, error) {
ch := make(chan provider.StreamEvent, 10)
go func() {
defer close(ch)
@@ -199,7 +202,7 @@ func TestProxyHandler_HandleProxy_StreamSuccess(t *testing.T) {
ch <- provider.StreamEvent{Data: []byte("data: [DONE]\n\n")}
ch <- provider.StreamEvent{Done: true}
}()
return ch, nil
return &provider.StreamResponse{StatusCode: 200, Headers: map[string]string{"Content-Type": "text/event-stream"}, Events: ch}, nil
})
providerSvc := mocks.NewMockProviderService(ctrl)
statsSvc := mocks.NewMockStatsService(ctrl)
@@ -209,12 +212,13 @@ func TestProxyHandler_HandleProxy_StreamSuccess(t *testing.T) {
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/chat/completions"}}
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"gpt-4","messages":[{"role":"user","content":"hi"}],"stream":true}`)))
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"p1/gpt-4","messages":[{"role":"user","content":"hi"}],"stream":true}`)))
h.HandleProxy(c)
assert.Equal(t, 200, w.Code)
assert.Equal(t, "text/event-stream", w.Header().Get("Content-Type"))
assert.Contains(t, w.Body.String(), "Hello")
assert.Contains(t, w.Body.String(), "p1/gpt-4")
}
func TestProxyHandler_HandleProxy_StreamError(t *testing.T) {
@@ -223,12 +227,12 @@ func TestProxyHandler_HandleProxy_StreamError(t *testing.T) {
engine := setupProxyEngine(t)
routingSvc := mocks.NewMockRoutingService(ctrl)
routingSvc.EXPECT().RouteByModelName("", "").Return(&domain.RouteResult{
routingSvc.EXPECT().RouteByModelName("p1", "gpt-4").Return(&domain.RouteResult{
Provider: &domain.Provider{ID: "p1", Name: "Test", APIKey: "sk-test", BaseURL: "https://api.test.com", Protocol: "openai", Enabled: true},
Model: &domain.Model{ID: "m1", ProviderID: "p1", ModelName: "gpt-4", Enabled: true},
}, nil)
client := mocks.NewMockProviderClient(ctrl)
client.EXPECT().SendStream(gomock.Any(), gomock.Any()).DoAndReturn(func(ctx context.Context, spec conversion.HTTPRequestSpec) (<-chan provider.StreamEvent, error) {
client.EXPECT().SendStream(gomock.Any(), gomock.Any()).DoAndReturn(func(ctx context.Context, spec conversion.HTTPRequestSpec) (*provider.StreamResponse, error) {
return nil, context.DeadlineExceeded
})
providerSvc := mocks.NewMockProviderService(ctrl)
@@ -238,10 +242,11 @@ func TestProxyHandler_HandleProxy_StreamError(t *testing.T) {
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/chat/completions"}}
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"gpt-4","messages":[{"role":"user","content":"hi"}],"stream":true}`)))
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"p1/gpt-4","messages":[{"role":"user","content":"hi"}],"stream":true}`)))
h.HandleProxy(c)
assert.Equal(t, 500, w.Code)
assert.Equal(t, 502, w.Code)
assert.JSONEq(t, `{"error":"上游服务不可达","code":"UPSTREAM_UNAVAILABLE"}`, w.Body.String())
}
func TestProxyHandler_ForwardPassthrough_GET(t *testing.T) {
@@ -286,7 +291,7 @@ func TestProxyHandler_ForwardPassthrough_UnsupportedProtocol(t *testing.T) {
c.Request = httptest.NewRequest("GET", "/unknown/models", nil)
h.HandleProxy(c)
assert.Equal(t, 400, w.Code)
assert.Equal(t, 404, w.Code)
}
func TestProxyHandler_ForwardPassthrough_NoProviders(t *testing.T) {
@@ -329,7 +334,7 @@ func TestProxyHandler_HandleProxy_ProviderProtocolDefault(t *testing.T) {
engine := setupProxyEngine(t)
routingSvc := mocks.NewMockRoutingService(ctrl)
routingSvc.EXPECT().RouteByModelName("", "").Return(&domain.RouteResult{
routingSvc.EXPECT().RouteByModelName("p1", "gpt-4").Return(&domain.RouteResult{
Provider: &domain.Provider{ID: "p1", Name: "Test", APIKey: "sk-test", BaseURL: "https://api.test.com", Protocol: "", Enabled: true},
Model: &domain.Model{ID: "m1", ProviderID: "p1", ModelName: "gpt-4", Enabled: true},
}, nil)
@@ -348,7 +353,7 @@ func TestProxyHandler_HandleProxy_ProviderProtocolDefault(t *testing.T) {
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/chat/completions"}}
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"gpt-4","messages":[{"role":"user","content":"hi"}]}`)))
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"p1/gpt-4","messages":[{"role":"user","content":"hi"}]}`)))
h.HandleProxy(c)
assert.Equal(t, 200, w.Code)
@@ -371,6 +376,7 @@ func TestProxyHandler_WriteConversionError_NonConversionError(t *testing.T) {
h.writeConversionError(c, context.DeadlineExceeded, "openai")
assert.Equal(t, 500, w.Code)
assert.JSONEq(t, `{"error":"context deadline exceeded","code":"CONVERSION_FAILED"}`, w.Body.String())
}
func TestProxyHandler_WriteConversionError_ConversionError(t *testing.T) {
@@ -390,7 +396,40 @@ func TestProxyHandler_WriteConversionError_ConversionError(t *testing.T) {
convErr := conversion.NewConversionError(conversion.ErrorCodeInvalidInput, "bad request")
h.writeConversionError(c, convErr, "openai")
assert.Equal(t, 500, w.Code)
assert.Equal(t, 400, w.Code)
assert.JSONEq(t, `{"error":"bad request","code":"INVALID_REQUEST"}`, w.Body.String())
}
func TestProxyHandler_WriteConversionError_JSONPhase(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
engine := setupProxyEngine(t)
client := mocks.NewMockProviderClient(ctrl)
routingSvc := mocks.NewMockRoutingService(ctrl)
providerSvc := mocks.NewMockProviderService(ctrl)
statsSvc := mocks.NewMockStatsService(ctrl)
h := newTestProxyHandler(engine, client, routingSvc, providerSvc, statsSvc)
t.Run("request json parse error", func(t *testing.T) {
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Request = httptest.NewRequest("POST", "/", nil)
h.writeConversionError(c, conversion.NewRequestJSONParseError("解码请求失败", context.Canceled), "openai")
assert.Equal(t, http.StatusBadRequest, w.Code)
assert.JSONEq(t, `{"error":"请求体 JSON 格式错误","code":"INVALID_JSON"}`, w.Body.String())
})
t.Run("response json parse error", func(t *testing.T) {
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Request = httptest.NewRequest("POST", "/", nil)
h.writeConversionError(c, conversion.NewResponseJSONParseError("解码响应失败", context.Canceled), "openai")
assert.Equal(t, http.StatusInternalServerError, w.Code)
assert.JSONEq(t, `{"error":"解码响应失败","code":"CONVERSION_FAILED"}`, w.Body.String())
})
}
func TestProxyHandler_HandleProxy_EmptyBody(t *testing.T) {
@@ -423,19 +462,19 @@ func TestProxyHandler_HandleStream_MidStreamError(t *testing.T) {
engine := setupProxyEngine(t)
routingSvc := mocks.NewMockRoutingService(ctrl)
routingSvc.EXPECT().RouteByModelName("", "").Return(&domain.RouteResult{
routingSvc.EXPECT().RouteByModelName("p1", "gpt-4").Return(&domain.RouteResult{
Provider: &domain.Provider{ID: "p1", Name: "Test", APIKey: "sk-test", BaseURL: "https://api.test.com", Protocol: "openai", Enabled: true},
Model: &domain.Model{ID: "m1", ProviderID: "p1", ModelName: "gpt-4", Enabled: true},
}, nil)
client := mocks.NewMockProviderClient(ctrl)
client.EXPECT().SendStream(gomock.Any(), gomock.Any()).DoAndReturn(func(ctx context.Context, spec conversion.HTTPRequestSpec) (<-chan provider.StreamEvent, error) {
client.EXPECT().SendStream(gomock.Any(), gomock.Any()).DoAndReturn(func(ctx context.Context, spec conversion.HTTPRequestSpec) (*provider.StreamResponse, error) {
ch := make(chan provider.StreamEvent, 10)
go func() {
defer close(ch)
ch <- provider.StreamEvent{Data: []byte("data: {\"id\":\"1\",\"object\":\"chat.completion.chunk\",\"choices\":[{\"index\":0,\"delta\":{\"content\":\"Hello\"}}]}\n\n")}
ch <- provider.StreamEvent{Error: fmt.Errorf("connection reset by peer")}
}()
return ch, nil
return &provider.StreamResponse{StatusCode: 200, Headers: map[string]string{"Content-Type": "text/event-stream"}, Events: ch}, nil
})
providerSvc := mocks.NewMockProviderService(ctrl)
statsSvc := mocks.NewMockStatsService(ctrl)
@@ -445,7 +484,7 @@ func TestProxyHandler_HandleStream_MidStreamError(t *testing.T) {
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/chat/completions"}}
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"gpt-4","messages":[{"role":"user","content":"hi"}],"stream":true}`)))
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"p1/gpt-4","messages":[{"role":"user","content":"hi"}],"stream":true}`)))
h.HandleProxy(c)
assert.Equal(t, 200, w.Code)
@@ -460,12 +499,12 @@ func TestProxyHandler_HandleStream_FlushOutput(t *testing.T) {
engine := setupProxyEngine(t)
routingSvc := mocks.NewMockRoutingService(ctrl)
routingSvc.EXPECT().RouteByModelName("", "").Return(&domain.RouteResult{
routingSvc.EXPECT().RouteByModelName("p1", "gpt-4").Return(&domain.RouteResult{
Provider: &domain.Provider{ID: "p1", Name: "Test", APIKey: "sk-test", BaseURL: "https://api.test.com", Protocol: "openai", Enabled: true},
Model: &domain.Model{ID: "m1", ProviderID: "p1", ModelName: "gpt-4", Enabled: true},
}, nil)
client := mocks.NewMockProviderClient(ctrl)
client.EXPECT().SendStream(gomock.Any(), gomock.Any()).DoAndReturn(func(ctx context.Context, spec conversion.HTTPRequestSpec) (<-chan provider.StreamEvent, error) {
client.EXPECT().SendStream(gomock.Any(), gomock.Any()).DoAndReturn(func(ctx context.Context, spec conversion.HTTPRequestSpec) (*provider.StreamResponse, error) {
ch := make(chan provider.StreamEvent, 10)
go func() {
defer close(ch)
@@ -473,7 +512,7 @@ func TestProxyHandler_HandleStream_FlushOutput(t *testing.T) {
ch <- provider.StreamEvent{Data: []byte("data: [DONE]\n\n")}
ch <- provider.StreamEvent{Done: true}
}()
return ch, nil
return &provider.StreamResponse{StatusCode: 200, Headers: map[string]string{"Content-Type": "text/event-stream"}, Events: ch}, nil
})
providerSvc := mocks.NewMockProviderService(ctrl)
statsSvc := mocks.NewMockStatsService(ctrl)
@@ -483,7 +522,7 @@ func TestProxyHandler_HandleStream_FlushOutput(t *testing.T) {
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/chat/completions"}}
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"gpt-4","messages":[{"role":"user","content":"hi"}],"stream":true}`)))
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"p1/gpt-4","messages":[{"role":"user","content":"hi"}],"stream":true}`)))
h.HandleProxy(c)
assert.Equal(t, 200, w.Code)
@@ -505,7 +544,7 @@ func TestProxyHandler_HandleStream_CreateStreamConverterError(t *testing.T) {
require.NoError(t, err)
routingSvc := mocks.NewMockRoutingService(ctrl)
routingSvc.EXPECT().RouteByModelName("", "").Return(&domain.RouteResult{
routingSvc.EXPECT().RouteByModelName("p1", "gpt-4").Return(&domain.RouteResult{
Provider: &domain.Provider{ID: "p1", Name: "Test", APIKey: "sk-test", BaseURL: "https://api.test.com", Protocol: "nonexistent", Enabled: true},
Model: &domain.Model{ID: "m1", ProviderID: "p1", ModelName: "gpt-4", Enabled: true},
}, nil)
@@ -517,7 +556,7 @@ func TestProxyHandler_HandleStream_CreateStreamConverterError(t *testing.T) {
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/chat/completions"}}
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"gpt-4","messages":[{"role":"user","content":"hi"}],"stream":true}`)))
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"p1/gpt-4","messages":[{"role":"user","content":"hi"}],"stream":true}`)))
h.HandleProxy(c)
assert.Equal(t, 500, w.Code)
@@ -532,7 +571,7 @@ func TestProxyHandler_HandleStream_ConvertRequestError(t *testing.T) {
require.NoError(t, registry.Register(openai.NewAdapter()))
routingSvc := mocks.NewMockRoutingService(ctrl)
routingSvc.EXPECT().RouteByModelName("", "").Return(&domain.RouteResult{
routingSvc.EXPECT().RouteByModelName("p1", "gpt-4").Return(&domain.RouteResult{
Provider: &domain.Provider{ID: "p1", Name: "Test", APIKey: "sk-test", BaseURL: "https://api.test.com", Protocol: "nonexistent", Enabled: true},
Model: &domain.Model{ID: "m1", ProviderID: "p1", ModelName: "gpt-4", Enabled: true},
}, nil)
@@ -544,7 +583,7 @@ func TestProxyHandler_HandleStream_ConvertRequestError(t *testing.T) {
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/chat/completions"}}
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"gpt-4","messages":[{"role":"user","content":"hi"}],"stream":true}`)))
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"p1/gpt-4","messages":[{"role":"user","content":"hi"}],"stream":true}`)))
h.HandleProxy(c)
assert.Equal(t, 500, w.Code)
@@ -560,7 +599,7 @@ func TestProxyHandler_HandleNonStream_ConvertResponseError(t *testing.T) {
require.NoError(t, registry.Register(anthropic.NewAdapter()))
routingSvc := mocks.NewMockRoutingService(ctrl)
routingSvc.EXPECT().RouteByModelName("", "").Return(&domain.RouteResult{
routingSvc.EXPECT().RouteByModelName("p1", "gpt-4").Return(&domain.RouteResult{
Provider: &domain.Provider{ID: "p1", Name: "Test", APIKey: "sk-test", BaseURL: "https://api.test.com", Protocol: "anthropic", Enabled: true},
Model: &domain.Model{ID: "m1", ProviderID: "p1", ModelName: "claude-3", Enabled: true},
}, nil)
@@ -579,7 +618,7 @@ func TestProxyHandler_HandleNonStream_ConvertResponseError(t *testing.T) {
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/chat/completions"}}
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"claude-3","messages":[{"role":"user","content":"hi"}]}`)))
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"p1/gpt-4","messages":[{"role":"user","content":"hi"}]}`)))
h.HandleProxy(c)
assert.Equal(t, 500, w.Code)
@@ -591,7 +630,7 @@ func TestProxyHandler_HandleNonStream_ResponseHeaders(t *testing.T) {
engine := setupProxyEngine(t)
routingSvc := mocks.NewMockRoutingService(ctrl)
routingSvc.EXPECT().RouteByModelName("", "").Return(&domain.RouteResult{
routingSvc.EXPECT().RouteByModelName("p1", "gpt-4").Return(&domain.RouteResult{
Provider: &domain.Provider{ID: "p1", Name: "Test", APIKey: "sk-test", BaseURL: "https://api.test.com", Protocol: "openai", Enabled: true},
Model: &domain.Model{ID: "m1", ProviderID: "p1", ModelName: "gpt-4", Enabled: true},
}, nil)
@@ -611,7 +650,7 @@ func TestProxyHandler_HandleNonStream_ResponseHeaders(t *testing.T) {
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/chat/completions"}}
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"gpt-4","messages":[{"role":"user","content":"hi"}]}`)))
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"p1/gpt-4","messages":[{"role":"user","content":"hi"}]}`)))
h.HandleProxy(c)
assert.Equal(t, 200, w.Code)
@@ -720,8 +759,9 @@ func TestProxyHandler_WriteError_RouteError(t *testing.T) {
c, _ := gin.CreateTestContext(w)
c.Request = httptest.NewRequest("POST", "/", nil)
h.writeError(c, fmt.Errorf("model not found"), "openai")
h.writeRouteError(c, fmt.Errorf("model not found"))
assert.Equal(t, 404, w.Code)
assert.Contains(t, w.Body.String(), "MODEL_NOT_FOUND")
}
func TestProxyHandler_HandleProxy_RouteEmptyBody_NoModel(t *testing.T) {
@@ -994,7 +1034,7 @@ func TestProxyHandler_HandleProxy_CrossProtocol_Stream_UnifiedID(t *testing.T) {
Model: &domain.Model{ID: "m1", ProviderID: "anthropic_p", ModelName: "claude-3", Enabled: true},
}, nil)
client := mocks.NewMockProviderClient(ctrl)
client.EXPECT().SendStream(gomock.Any(), gomock.Any()).DoAndReturn(func(ctx context.Context, spec conversion.HTTPRequestSpec) (<-chan provider.StreamEvent, error) {
client.EXPECT().SendStream(gomock.Any(), gomock.Any()).DoAndReturn(func(ctx context.Context, spec conversion.HTTPRequestSpec) (*provider.StreamResponse, error) {
ch := make(chan provider.StreamEvent, 10)
go func() {
defer close(ch)
@@ -1012,7 +1052,7 @@ data: {"type":"message_stop"}
`)}
ch <- provider.StreamEvent{Done: true}
}()
return ch, nil
return &provider.StreamResponse{StatusCode: 200, Headers: map[string]string{"Content-Type": "text/event-stream"}, Events: ch}, nil
})
providerSvc := mocks.NewMockProviderService(ctrl)
statsSvc := mocks.NewMockStatsService(ctrl)
@@ -1100,3 +1140,314 @@ func TestProxyHandler_HandleProxy_UnifiedID_ModelNotFound(t *testing.T) {
require.NoError(t, json.Unmarshal(w.Body.Bytes(), &resp))
assert.Contains(t, resp, "error")
}
func TestProxyHandler_HandleProxy_OpenAIAndAnthropicNativePaths(t *testing.T) {
tests := []struct {
name string
protocol string
path string
requestPath string
baseURL string
expectedURL string
body string
responseBody string
responseModel string
}{
{
name: "openai path has no v1 after gateway prefix",
protocol: "openai",
path: "/chat/completions",
requestPath: "/openai/chat/completions",
baseURL: "https://api.test.com/v1",
expectedURL: "https://api.test.com/v1/chat/completions",
body: `{"model":"p1/gpt-4","messages":[{"role":"user","content":"hi"}]}`,
responseBody: `{"id":"r1","object":"chat.completion","model":"gpt-4","choices":[{"index":0,"message":{"role":"assistant","content":"ok"},"finish_reason":"stop"}],"usage":{"prompt_tokens":1,"completion_tokens":1,"total_tokens":2}}`,
responseModel: "p1/gpt-4",
},
{
name: "anthropic path keeps v1 after gateway prefix",
protocol: "anthropic",
path: "/v1/messages",
requestPath: "/anthropic/v1/messages",
baseURL: "https://api.anthropic.test",
expectedURL: "https://api.anthropic.test/v1/messages",
body: `{"model":"p1/gpt-4","max_tokens":1024,"messages":[{"role":"user","content":"hi"}]}`,
responseBody: `{"id":"msg-1","type":"message","role":"assistant","model":"gpt-4","content":[{"type":"text","text":"ok"}],"stop_reason":"end_turn","usage":{"input_tokens":1,"output_tokens":1}}`,
responseModel: "p1/gpt-4",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
engine := setupProxyEngine(t)
routingSvc := mocks.NewMockRoutingService(ctrl)
routingSvc.EXPECT().RouteByModelName("p1", "gpt-4").Return(&domain.RouteResult{
Provider: &domain.Provider{ID: "p1", Name: "Test", APIKey: "sk-test", BaseURL: tt.baseURL, Protocol: tt.protocol, Enabled: true},
Model: &domain.Model{ID: "m1", ProviderID: "p1", ModelName: "gpt-4", Enabled: true},
}, nil)
client := mocks.NewMockProviderClient(ctrl)
client.EXPECT().Send(gomock.Any(), gomock.Any()).DoAndReturn(func(ctx context.Context, spec conversion.HTTPRequestSpec) (*conversion.HTTPResponseSpec, error) {
assert.Equal(t, tt.expectedURL, spec.URL)
return &conversion.HTTPResponseSpec{
StatusCode: http.StatusOK,
Headers: map[string]string{"Content-Type": "application/json"},
Body: []byte(tt.responseBody),
}, nil
})
providerSvc := mocks.NewMockProviderService(ctrl)
statsSvc := mocks.NewMockStatsService(ctrl)
statsSvc.EXPECT().Record(gomock.Any(), gomock.Any()).Return(nil).AnyTimes()
h := newTestProxyHandler(engine, client, routingSvc, providerSvc, statsSvc)
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: tt.protocol}, {Key: "path", Value: tt.path}}
c.Request = httptest.NewRequest("POST", tt.requestPath, bytes.NewReader([]byte(tt.body)))
h.HandleProxy(c)
require.Equal(t, http.StatusOK, w.Code)
assert.Contains(t, w.Body.String(), tt.responseModel)
})
}
}
func TestProxyHandler_UpstreamNon2xx_Passthrough(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
engine := setupProxyEngine(t)
routingSvc := mocks.NewMockRoutingService(ctrl)
routingSvc.EXPECT().RouteByModelName("p1", "gpt-4").Return(&domain.RouteResult{
Provider: &domain.Provider{ID: "p1", Name: "Test", APIKey: "sk-test", BaseURL: "https://api.test.com/v1", Protocol: "openai", Enabled: true},
Model: &domain.Model{ID: "m1", ProviderID: "p1", ModelName: "gpt-4", Enabled: true},
}, nil)
client := mocks.NewMockProviderClient(ctrl)
client.EXPECT().Send(gomock.Any(), gomock.Any()).Return(&conversion.HTTPResponseSpec{
StatusCode: http.StatusTooManyRequests,
Headers: map[string]string{
"Content-Type": "application/json",
"X-Upstream-Error": "rate-limit",
"Transfer-Encoding": "chunked",
},
Body: []byte(`{"error":{"message":"rate limited"}}`),
}, nil)
providerSvc := mocks.NewMockProviderService(ctrl)
statsSvc := mocks.NewMockStatsService(ctrl)
h := newTestProxyHandler(engine, client, routingSvc, providerSvc, statsSvc)
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/chat/completions"}}
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"p1/gpt-4","messages":[{"role":"user","content":"hi"}]}`)))
h.HandleProxy(c)
require.Equal(t, http.StatusTooManyRequests, w.Code)
assert.JSONEq(t, `{"error":{"message":"rate limited"}}`, w.Body.String())
assert.Equal(t, "rate-limit", w.Header().Get("X-Upstream-Error"))
assert.Empty(t, w.Header().Get("Transfer-Encoding"))
}
func TestProxyHandler_StreamUpstreamNon2xx_Passthrough(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
engine := setupProxyEngine(t)
routingSvc := mocks.NewMockRoutingService(ctrl)
routingSvc.EXPECT().RouteByModelName("p1", "gpt-4").Return(&domain.RouteResult{
Provider: &domain.Provider{ID: "p1", Name: "Test", APIKey: "sk-test", BaseURL: "https://api.test.com/v1", Protocol: "openai", Enabled: true},
Model: &domain.Model{ID: "m1", ProviderID: "p1", ModelName: "gpt-4", Enabled: true},
}, nil)
client := mocks.NewMockProviderClient(ctrl)
client.EXPECT().SendStream(gomock.Any(), gomock.Any()).Return(&provider.StreamResponse{
StatusCode: http.StatusServiceUnavailable,
Headers: map[string]string{"Content-Type": "application/json", "Connection": "close"},
Body: []byte(`{"error":"upstream down"}`),
}, nil)
providerSvc := mocks.NewMockProviderService(ctrl)
statsSvc := mocks.NewMockStatsService(ctrl)
h := newTestProxyHandler(engine, client, routingSvc, providerSvc, statsSvc)
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/chat/completions"}}
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"p1/gpt-4","messages":[{"role":"user","content":"hi"}],"stream":true}`)))
h.HandleProxy(c)
require.Equal(t, http.StatusServiceUnavailable, w.Code)
assert.JSONEq(t, `{"error":"upstream down"}`, w.Body.String())
assert.Empty(t, w.Header().Get("Connection"))
}
func TestFilterHopByHopHeaders(t *testing.T) {
filtered := filterHopByHopHeaders(map[string]string{
"Connection": "close",
"Transfer-Encoding": "chunked",
"Keep-Alive": "timeout=5",
"Proxy-Authenticate": "Basic",
"Proxy-Authorization": "Basic token",
"TE": "trailers",
"Trailer": "Expires",
"Upgrade": "websocket",
"Content-Type": "application/json",
"X-Request-ID": "req-1",
})
assert.Equal(t, map[string]string{
"Content-Type": "application/json",
"X-Request-ID": "req-1",
}, filtered)
}
func TestProxyHandler_UnknownInterface_DoesNotGuessModel(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
engine := setupProxyEngine(t)
routingSvc := mocks.NewMockRoutingService(ctrl)
providerSvc := mocks.NewMockProviderService(ctrl)
providerSvc.EXPECT().List().Return([]domain.Provider{
{ID: "p1", Name: "Test", APIKey: "sk-test", BaseURL: "https://api.test.com/v1", Protocol: "openai", Enabled: true},
}, nil)
client := mocks.NewMockProviderClient(ctrl)
client.EXPECT().Send(gomock.Any(), gomock.Any()).DoAndReturn(func(ctx context.Context, spec conversion.HTTPRequestSpec) (*conversion.HTTPResponseSpec, error) {
assert.Equal(t, "https://api.test.com/v1/unknown?trace=1", spec.URL)
assert.JSONEq(t, `{"model":"p1/gpt-4","payload":true}`, string(spec.Body))
return &conversion.HTTPResponseSpec{
StatusCode: http.StatusOK,
Headers: map[string]string{"Content-Type": "application/json"},
Body: []byte(`{"ok":true}`),
}, nil
})
statsSvc := mocks.NewMockStatsService(ctrl)
h := newTestProxyHandler(engine, client, routingSvc, providerSvc, statsSvc)
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/unknown"}}
c.Request = httptest.NewRequest("POST", "/openai/unknown?trace=1", bytes.NewReader([]byte(`{"model":"p1/gpt-4","payload":true}`)))
h.HandleProxy(c)
require.Equal(t, http.StatusOK, w.Code)
assert.JSONEq(t, `{"ok":true}`, w.Body.String())
}
func TestProxyHandler_InvalidJSON_UsesGatewayError(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
engine := setupProxyEngine(t)
client := mocks.NewMockProviderClient(ctrl)
routingSvc := mocks.NewMockRoutingService(ctrl)
providerSvc := mocks.NewMockProviderService(ctrl)
statsSvc := mocks.NewMockStatsService(ctrl)
h := newTestProxyHandler(engine, client, routingSvc, providerSvc, statsSvc)
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/chat/completions"}}
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":`)))
h.HandleProxy(c)
require.Equal(t, http.StatusBadRequest, w.Code)
assert.JSONEq(t, `{"error":"请求体 JSON 格式错误","code":"INVALID_JSON"}`, w.Body.String())
}
func TestProxyHandler_CrossProtocolMultimodal_Unsupported(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
engine := setupProxyEngine(t)
routingSvc := mocks.NewMockRoutingService(ctrl)
routingSvc.EXPECT().RouteByModelName("anthropic_p", "claude").Return(&domain.RouteResult{
Provider: &domain.Provider{ID: "anthropic_p", Name: "Anthropic", APIKey: "sk-test", BaseURL: "https://api.anthropic.test", Protocol: "anthropic", Enabled: true},
Model: &domain.Model{ID: "m1", ProviderID: "anthropic_p", ModelName: "claude", Enabled: true},
}, nil)
client := mocks.NewMockProviderClient(ctrl)
providerSvc := mocks.NewMockProviderService(ctrl)
statsSvc := mocks.NewMockStatsService(ctrl)
h := newTestProxyHandler(engine, client, routingSvc, providerSvc, statsSvc)
body := []byte(`{"model":"anthropic_p/claude","messages":[{"role":"user","content":[{"type":"text","text":"describe"},{"type":"image_url","image_url":{"url":"data:image/png;base64,abc"}}]}]}`)
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/chat/completions"}}
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader(body))
h.HandleProxy(c)
require.Equal(t, http.StatusBadRequest, w.Code)
assert.Contains(t, w.Body.String(), "UNSUPPORTED_MULTIMODAL")
}
func TestProxyHandler_SameProtocolMultimodal_SmartPassthrough(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
engine := setupProxyEngine(t)
routingSvc := mocks.NewMockRoutingService(ctrl)
routingSvc.EXPECT().RouteByModelName("p1", "gpt-4").Return(&domain.RouteResult{
Provider: &domain.Provider{ID: "p1", Name: "Test", APIKey: "sk-test", BaseURL: "https://api.test.com/v1", Protocol: "openai", Enabled: true},
Model: &domain.Model{ID: "m1", ProviderID: "p1", ModelName: "gpt-4", Enabled: true},
}, nil)
client := mocks.NewMockProviderClient(ctrl)
client.EXPECT().Send(gomock.Any(), gomock.Any()).DoAndReturn(func(ctx context.Context, spec conversion.HTTPRequestSpec) (*conversion.HTTPResponseSpec, error) {
assert.Contains(t, string(spec.Body), "image_url")
assert.Contains(t, string(spec.Body), `"model":"gpt-4"`)
return &conversion.HTTPResponseSpec{
StatusCode: http.StatusOK,
Headers: map[string]string{"Content-Type": "application/json"},
Body: []byte(`{"id":"r1","object":"chat.completion","model":"gpt-4","choices":[{"index":0,"message":{"role":"assistant","content":"ok"},"finish_reason":"stop"}],"usage":{"prompt_tokens":1,"completion_tokens":1,"total_tokens":2}}`),
}, nil
})
providerSvc := mocks.NewMockProviderService(ctrl)
statsSvc := mocks.NewMockStatsService(ctrl)
statsSvc.EXPECT().Record(gomock.Any(), gomock.Any()).Return(nil).AnyTimes()
h := newTestProxyHandler(engine, client, routingSvc, providerSvc, statsSvc)
body := []byte(`{"model":"p1/gpt-4","messages":[{"role":"user","content":[{"type":"text","text":"describe"},{"type":"image_url","image_url":{"url":"data:image/png;base64,abc"}}]}]}`)
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/chat/completions"}}
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader(body))
h.HandleProxy(c)
require.Equal(t, http.StatusOK, w.Code)
assert.Contains(t, w.Body.String(), "p1/gpt-4")
}
func TestProxyHandler_RawStreamPassthrough_PreservesSSEFrames(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
engine := setupProxyEngine(t)
routingSvc := mocks.NewMockRoutingService(ctrl)
providerSvc := mocks.NewMockProviderService(ctrl)
providerSvc.EXPECT().List().Return([]domain.Provider{
{ID: "p1", Name: "Test", APIKey: "sk-test", BaseURL: "https://api.test.com/v1", Protocol: "openai", Enabled: true},
}, nil)
client := mocks.NewMockProviderClient(ctrl)
client.EXPECT().SendStream(gomock.Any(), gomock.Any()).DoAndReturn(func(ctx context.Context, spec conversion.HTTPRequestSpec) (*provider.StreamResponse, error) {
assert.Contains(t, string(spec.Body), `"model":"gpt-4"`)
ch := make(chan provider.StreamEvent, 3)
go func() {
defer close(ch)
ch <- provider.StreamEvent{Data: []byte("data: {\"model\":\"gpt-4\",\"choices\":[]}\n\n")}
ch <- provider.StreamEvent{Data: []byte("data: [DONE]\n\n")}
ch <- provider.StreamEvent{Done: true}
}()
return &provider.StreamResponse{StatusCode: http.StatusOK, Headers: map[string]string{"Content-Type": "text/event-stream"}, Events: ch}, nil
})
statsSvc := mocks.NewMockStatsService(ctrl)
h := newTestProxyHandler(engine, client, routingSvc, providerSvc, statsSvc)
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "protocol", Value: "openai"}, {Key: "path", Value: "/chat/completions"}}
c.Request = httptest.NewRequest("POST", "/openai/chat/completions", bytes.NewReader([]byte(`{"model":"gpt-4","messages":[{"role":"user","content":"hi"}],"stream":true}`)))
h.HandleProxy(c)
require.Equal(t, http.StatusOK, w.Code)
assert.Equal(t, "data: {\"model\":\"gpt-4\",\"choices\":[]}\n\ndata: [DONE]\n\n", w.Body.String())
}

View File

@@ -8,6 +8,7 @@ import (
"io"
"net"
"net/http"
"strings"
"syscall"
"time"
@@ -43,6 +44,14 @@ type StreamEvent struct {
Done bool
}
// StreamResponse 表示上游流式 HTTP 响应。
type StreamResponse struct {
StatusCode int
Headers map[string]string
Body []byte
Events <-chan StreamEvent
}
// Client 协议无关的供应商客户端
type Client struct {
httpClient *http.Client
@@ -55,7 +64,7 @@ type Client struct {
//go:generate go run go.uber.org/mock/mockgen -source=client.go -destination=../../tests/mocks/mock_provider_client.go -package=mocks
type ProviderClient interface {
Send(ctx context.Context, spec conversion.HTTPRequestSpec) (*conversion.HTTPResponseSpec, error)
SendStream(ctx context.Context, spec conversion.HTTPRequestSpec) (<-chan StreamEvent, error)
SendStream(ctx context.Context, spec conversion.HTTPRequestSpec) (*StreamResponse, error)
}
// NewClient 创建供应商客户端
@@ -116,7 +125,7 @@ func (c *Client) Send(ctx context.Context, spec conversion.HTTPRequestSpec) (*co
}
// SendStream 发送流式请求
func (c *Client) SendStream(ctx context.Context, spec conversion.HTTPRequestSpec) (<-chan StreamEvent, error) {
func (c *Client) SendStream(ctx context.Context, spec conversion.HTTPRequestSpec) (*StreamResponse, error) {
var bodyReader io.Reader
if len(spec.Body) > 0 {
bodyReader = bytes.NewReader(spec.Body)
@@ -139,23 +148,29 @@ func (c *Client) SendStream(ctx context.Context, spec conversion.HTTPRequestSpec
return nil, pkgErrors.ErrRequestSend.WithCause(err)
}
if resp.StatusCode != http.StatusOK {
respHeaders := extractResponseHeaders(resp.Header)
if resp.StatusCode < http.StatusOK || resp.StatusCode >= http.StatusMultipleChoices {
defer resp.Body.Close()
cancel()
errBody, readErr := io.ReadAll(resp.Body)
if readErr != nil {
return nil, fmt.Errorf("供应商返回错误: HTTP %d读取错误响应失败: %w", resp.StatusCode, readErr)
return nil, pkgErrors.ErrResponseRead.WithCause(readErr)
}
if len(errBody) > 0 {
return nil, fmt.Errorf("供应商返回错误: HTTP %d: %s", resp.StatusCode, string(errBody))
}
return nil, fmt.Errorf("供应商返回错误: HTTP %d", resp.StatusCode)
return &StreamResponse{
StatusCode: resp.StatusCode,
Headers: respHeaders,
Body: errBody,
}, nil
}
eventChan := make(chan StreamEvent, c.streamCfg.ChannelBufferSize)
go c.readStream(streamCtx, cancel, resp.Body, eventChan)
return eventChan, nil
return &StreamResponse{
StatusCode: resp.StatusCode,
Headers: respHeaders,
Events: eventChan,
}, nil
}
// readStream 读取 SSE 流
@@ -208,15 +223,17 @@ func (c *Client) readStream(ctx context.Context, cancel context.CancelFunc, body
}
for {
idx := bytes.Index(dataBuf, []byte("\n\n"))
idx, sepLen := findSSEFrameSeparator(dataBuf)
if idx == -1 {
break
}
rawEvent := dataBuf[:idx]
dataBuf = dataBuf[idx+2:]
frameEnd := idx + sepLen
rawEvent := append([]byte(nil), dataBuf[:frameEnd]...)
dataBuf = dataBuf[frameEnd:]
if bytes.Contains(rawEvent, []byte("data: [DONE]")) {
if isSSEDoneFrame(rawEvent) {
eventChan <- StreamEvent{Data: rawEvent}
eventChan <- StreamEvent{Done: true}
return
}
@@ -225,11 +242,66 @@ func (c *Client) readStream(ctx context.Context, cancel context.CancelFunc, body
}
if err == io.EOF {
if len(dataBuf) > 0 {
eventChan <- StreamEvent{Data: dataBuf}
}
return
}
}
}
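// isSSEDoneFrame 判断 SSE frame 的 data payload 去除首尾空白后是否为流结束标记 [DONE]。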
func isSSEDoneFrame(frame []byte) bool {
payload, ok := sseFrameDataPayload(frame)
return ok && strings.TrimSpace(payload) == "[DONE]"
}
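// sseFrameDataPayload 提取 SSE frame 中所有 data 行的 payload(多行 data 按换行拼接);没有 data 行时第二个返回值为 false。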
func sseFrameDataPayload(frame []byte) (string, bool) {
text := strings.TrimRight(string(frame), "\r\n")
lines := strings.Split(text, "\n")
var dataLines []string
for _, line := range lines {
line = strings.TrimRight(line, "\r")
if strings.HasPrefix(line, "data:") {
value := strings.TrimPrefix(line, "data:")
if strings.HasPrefix(value, " ") {
value = value[1:]
}
dataLines = append(dataLines, value)
}
}
if len(dataLines) == 0 {
return "", false
}
return strings.Join(dataLines, "\n"), true
}
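// extractResponseHeaders 将上游 http.Header 压平为 map[string]string,同名 header 只保留第一个值。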
func extractResponseHeaders(header http.Header) map[string]string {
respHeaders := make(map[string]string)
for k, vs := range header {
if len(vs) > 0 {
respHeaders[k] = vs[0]
}
}
return respHeaders
}
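// findSSEFrameSeparator 返回数据中最早出现的 SSE frame 分隔符起始位置和长度,兼容 \n\n 与 \r\n\r\n;未找到时返回 (-1, 0)。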
func findSSEFrameSeparator(data []byte) (int, int) {
lf := bytes.Index(data, []byte("\n\n"))
crlf := bytes.Index(data, []byte("\r\n\r\n"))
switch {
case lf < 0 && crlf < 0:
return -1, 0
case lf < 0:
return crlf, 4
case crlf < 0:
return lf, 2
case crlf <= lf:
return crlf, 4
default:
return lf, 2
}
}
// isNetworkError 判断是否为网络相关错误
func isNetworkError(err error) bool {
if err == nil {

View File

@@ -110,11 +110,13 @@ func TestClient_SendStream_CreatesChannel(t *testing.T) {
Body: []byte(`{}`),
}
eventChan, err := client.SendStream(context.Background(), spec)
streamResp, err := client.SendStream(context.Background(), spec)
require.NoError(t, err)
require.NotNil(t, eventChan)
require.NotNil(t, streamResp)
require.Equal(t, http.StatusOK, streamResp.StatusCode)
require.NotNil(t, streamResp.Events)
for range eventChan {
for range streamResp.Events {
}
}
@@ -132,8 +134,10 @@ func TestClient_SendStream_ErrorResponse(t *testing.T) {
Body: []byte(`{}`),
}
_, err := client.SendStream(context.Background(), spec)
assert.Error(t, err)
streamResp, err := client.SendStream(context.Background(), spec)
require.NoError(t, err)
require.NotNil(t, streamResp)
assert.Equal(t, http.StatusInternalServerError, streamResp.StatusCode)
}
func TestClient_SendStream_SSEEvents(t *testing.T) {
@@ -164,12 +168,13 @@ func TestClient_SendStream_SSEEvents(t *testing.T) {
Body: []byte(`{"model":"gpt-4","messages":[],"stream":true}`),
}
eventChan, err := client.SendStream(context.Background(), spec)
streamResp, err := client.SendStream(context.Background(), spec)
require.NoError(t, err)
require.NotNil(t, streamResp)
var dataEvents [][]byte
var doneEvents int
for event := range eventChan {
for event := range streamResp.Events {
switch {
case event.Done:
doneEvents++
@@ -180,9 +185,56 @@ func TestClient_SendStream_SSEEvents(t *testing.T) {
}
}
assert.Equal(t, 2, len(dataEvents), "expected exactly 2 data events from SSE stream")
assert.Equal(t, 3, len(dataEvents), "expected 2 data frames plus DONE frame from SSE stream")
assert.Contains(t, string(dataEvents[0]), "Hello")
assert.Contains(t, string(dataEvents[1]), "World")
assert.Contains(t, string(dataEvents[2]), "[DONE]")
assert.Equal(t, 1, doneEvents)
assert.Contains(t, string(dataEvents[0]), "\n\n")
}
func TestClient_SendStream_DONEOnlyWhenDataPayloadEqualsDone(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/event-stream")
w.WriteHeader(http.StatusOK)
flusher, ok := w.(http.Flusher)
require.True(t, ok)
_, err := w.Write([]byte("data: {\"text\":\"data: [DONE] is plain text\"}\n\n"))
require.NoError(t, err)
flusher.Flush()
_, err = w.Write([]byte("data: [DONE]\n\n"))
require.NoError(t, err)
flusher.Flush()
}))
defer server.Close()
client := NewClient(zap.NewNop())
spec := conversion.HTTPRequestSpec{
URL: server.URL + "/v1/chat/completions",
Method: "POST",
Body: []byte(`{}`),
}
streamResp, err := client.SendStream(context.Background(), spec)
require.NoError(t, err)
require.NotNil(t, streamResp)
var dataEvents [][]byte
var doneEvents int
for event := range streamResp.Events {
switch {
case event.Done:
doneEvents++
case event.Error != nil:
t.Fatalf("unexpected error: %v", event.Error)
default:
dataEvents = append(dataEvents, event.Data)
}
}
require.Len(t, dataEvents, 2)
assert.Contains(t, string(dataEvents[0]), "plain text")
assert.Contains(t, string(dataEvents[1]), "[DONE]")
assert.Equal(t, 1, doneEvents)
}
@@ -203,13 +255,13 @@ func TestClient_SendStream_ContextCancellation(t *testing.T) {
Body: []byte(`{}`),
}
eventChan, err := client.SendStream(ctx, spec)
streamResp, err := client.SendStream(ctx, spec)
require.NoError(t, err)
cancel()
var gotError bool
for event := range eventChan {
for event := range streamResp.Events {
if event.Error != nil {
gotError = true
}
@@ -264,12 +316,12 @@ func TestClient_SendStream_SlowSSE(t *testing.T) {
Body: []byte(`{}`),
}
eventChan, err := client.SendStream(context.Background(), spec)
streamResp, err := client.SendStream(context.Background(), spec)
require.NoError(t, err)
var dataCount int
var doneCount int
for event := range eventChan {
for event := range streamResp.Events {
switch {
case event.Done:
doneCount++
@@ -279,7 +331,7 @@ func TestClient_SendStream_SlowSSE(t *testing.T) {
dataCount++
}
}
assert.Equal(t, 1, dataCount, "expected exactly 1 data event from slow SSE")
assert.Equal(t, 2, dataCount, "expected 1 data frame plus DONE frame from slow SSE")
assert.Equal(t, 1, doneCount, "expected exactly 1 done event from slow SSE")
}
@@ -308,19 +360,19 @@ func TestClient_SendStream_SplitSSEEvents(t *testing.T) {
Body: []byte(`{}`),
}
eventChan, err := client.SendStream(context.Background(), spec)
streamResp, err := client.SendStream(context.Background(), spec)
require.NoError(t, err)
var dataEvents int
var doneEvents int
for event := range eventChan {
for event := range streamResp.Events {
if event.Done {
doneEvents++
} else {
dataEvents++
}
}
assert.Equal(t, 2, dataEvents, "expected exactly 2 data events from split SSE")
assert.Equal(t, 3, dataEvents, "expected 2 data frames plus DONE frame from split SSE")
assert.Equal(t, 1, doneEvents)
}
@@ -397,11 +449,11 @@ func TestClient_SendStream_MidStreamNetworkError(t *testing.T) {
Body: []byte(`{}`),
}
eventChan, err := client.SendStream(context.Background(), spec)
streamResp, err := client.SendStream(context.Background(), spec)
require.NoError(t, err)
var gotData bool
for event := range eventChan {
for event := range streamResp.Events {
if event.Error != nil {
} else if !event.Done {
gotData = true

View File

@@ -521,15 +521,14 @@ func TestConversion_OldRoutes_Return404(t *testing.T) {
req := httptest.NewRequest("POST", "/v1/chat/completions", strings.NewReader(`{"model":"test"}`))
req.Header.Set("Content-Type", "application/json")
r.ServeHTTP(w, req)
// Gin 路由匹配但协议不支持返回 400
assert.Equal(t, 400, w.Code)
assert.Equal(t, 404, w.Code)
// 旧 Anthropic 路由
w = httptest.NewRecorder()
req = httptest.NewRequest("POST", "/v1/messages", strings.NewReader(`{"model":"test"}`))
req.Header.Set("Content-Type", "application/json")
r.ServeHTTP(w, req)
assert.Equal(t, 400, w.Code)
assert.Equal(t, 404, w.Code)
}
// ============ Provider Protocol 字段测试 ============

View File

@@ -59,10 +59,10 @@ func (mr *MockProviderClientMockRecorder) Send(ctx, spec any) *gomock.Call {
}
// SendStream mocks base method.
func (m *MockProviderClient) SendStream(ctx context.Context, spec conversion.HTTPRequestSpec) (<-chan provider.StreamEvent, error) {
func (m *MockProviderClient) SendStream(ctx context.Context, spec conversion.HTTPRequestSpec) (*provider.StreamResponse, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "SendStream", ctx, spec)
ret0, _ := ret[0].(<-chan provider.StreamEvent)
ret0, _ := ret[0].(*provider.StreamResponse)
ret1, _ := ret[1].(error)
return ret0, ret1
}

View File

@@ -31,7 +31,7 @@
|------|------|
| **完整 HTTP 接口体系转换** | 覆盖 /models、/embeddings、/rerank 等全部接口的 URL 路由映射、请求头转换、请求体/响应体格式转换 |
| **输入输出解耦** | 客户端协议和服务端协议独立指定,任意组合 |
| **同协议透传** | client == provider 时跳过转换,零语义损失、零序列化开销 |
| **同协议透传** | client == provider 时跳过 Canonical 全量转换,保持协议语义 |
| **尽力转换** | 能对接的参数尽可能对接,不能对接的各自忽略,保障最大覆盖面 |
| **协议可扩展** | 添加新协议只需实现 Adapter不修改核心引擎 |
| **流式优先** | SSE 流式转换作为核心能力,与非流式同等地位 |
@@ -75,8 +75,8 @@
│ │ │ │
│ │ 入站: /{protocol}/{native_path} │ │
│ │ │ │
│ │ /<protocol_a>/v1/chat/completions → client=protocol_a, /v1/... │ │
│ │ /<protocol_b>/v1/messages → client=protocol_b, /v1/... │ │
│ │ /openai/chat/completions → client=openai, /chat/completions │ │
│ │ /anthropic/v1/messages → client=anthropic, /v1/messages │ │
│ │ │ │
│ │ Step 1: 识别 client protocolURL 前缀 / 配置映射 / 任意方式) │ │
│ │ Step 2: 剥离前缀 → 得到 nativePath │ │
@@ -112,17 +112,18 @@
### 2.2 URL 路由规则
调用方负责识别协议并剥离前缀,将 `nativePath``clientProtocol``providerProtocol` 传入引擎。调用方可自行决定识别方式URL 前缀、配置映射等)
调用方负责识别协议并剥离前缀,将 `nativePath`、`clientProtocol`、`providerProtocol` 传入引擎。网关只剥离第一段协议前缀,不对剩余路径做版本号归一化;`/v1` 是否存在属于协议原生路径,由对应 ProtocolAdapter 按本地 API reference 识别和映射。
```
入站 URL 调用方剥离前缀后 引擎出站
──────────────────────────────────────────────────────────────────────────────
/<protocol_a>/v1/chat/completions → /v1/chat/completions → 目标协议路径
/<protocol_b>/v1/messages → /v1/messages目标协议路径
/<protocol_a>/v1/models → /v1/models → /v1/models通常不变
/openai/chat/completions → /chat/completions → /chat/completions
/openai/models → /models /models
/anthropic/v1/messages → /v1/messages → /v1/messages
/anthropic/v1/models → /v1/models → /v1/models
```
出站到上游 API 时使用服务端协议原生路径(无前缀)。
出站到上游 API 时使用服务端协议原生路径(无网关协议前缀),并由 `provider.BaseURL + providerAdapter.BuildUrl(nativePath, interfaceType)` 组合得到真实 URL。OpenAI `base_url` 配置到版本路径一级(如 `https://api.openai.com/v1`),Anthropic `base_url` 配置到域名级(如 `https://api.anthropic.com`)。
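下面用一个示意性 Go 片段演示这条组合规则(`buildUpstreamURL` 为示例函数名,非项目实际实现):

```go
package main

import (
	"fmt"
	"strings"
)

// buildUpstreamURL 示意:上游 URL = 去掉尾部斜杠的 base_url + adapter BuildUrl 返回的路径。
func buildUpstreamURL(baseURL, builtPath string) string {
	return strings.TrimRight(baseURL, "/") + builtPath
}

func main() {
	// OpenAI:base_url 配置到版本路径一级,BuildUrl 对 /chat/completions 返回 /chat/completions
	fmt.Println(buildUpstreamURL("https://api.openai.com/v1", "/chat/completions"))
	// => https://api.openai.com/v1/chat/completions

	// Anthropic:base_url 配置到域名级,BuildUrl 对 /v1/messages 返回 /v1/messages
	fmt.Println(buildUpstreamURL("https://api.anthropic.com", "/v1/messages"))
	// => https://api.anthropic.com/v1/messages
}
```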
### 2.3 请求处理流程
@@ -146,14 +147,14 @@
│ 响应处理 │ 原样返回 │ modelOverride非空时 │ Decode→modelOverride │
│ │ │ RewriteResponseModelName(body) │ →Encode │
│ │ │ │ │
│ 流式处理 │ chunk→[chunk] │ chunk→RewriteResponseModelName │ Decode→Middleware │
│ │ │ →[rewritten] │ →modelOverride→Encode │
│ 流式处理 │ raw SSE frame原样透传 │ SSE frame→仅改写data JSON中的model │ Decode→Middleware │
│ │ 保留[DONE]和frame边界 │ 解析失败则输出原frame继续处理 │ →modelOverride→Encode │
│ │ │ │ │
│ 性能开销 │ 最低 │ 低仅JSON字段改写 │ 高(完整序列化) │
└────────────────┴─────────────────────────┴────────────────────────────────────────┘
```
**智能透传的设计动机**:同协议场景下,若仅需改写 `model` 字段(如客户端请求模型 "X",上游需要模型 "Y"),无需完整解码/编码。直接在 JSON 层面手术式改写该字段,既保留原始请求的所有细节,又避免序列化开销
**智能透传的设计动机**:同协议场景下,若仅需改写 `model` 字段(如客户端请求模型 "X",上游需要模型 "Y"),无需进入 Canonical 全量解码/编码。直接在 JSON 层面改写该字段,保持未改写字段的 JSON 内容和类型不变;实现不承诺保留原始字节顺序、空白或对象字段顺序
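作为参考,下面是一个满足该契约的请求 model 改写 Go 示意(仅为示例实现,非项目实际代码):借助 `map[string]json.RawMessage` 只替换顶层 `model`,其余字段原样保留(字段顺序由 `encoding/json` 决定)。

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rewriteModelField 示意:仅替换顶层 model 字段,其余字段以 RawMessage 原样保留;
// 多次调用结果相同,满足幂等要求。
func rewriteModelField(body []byte, newModel string) ([]byte, error) {
	var obj map[string]json.RawMessage
	if err := json.Unmarshal(body, &obj); err != nil {
		return nil, err
	}
	modelJSON, err := json.Marshal(newModel)
	if err != nil {
		return nil, err
	}
	obj["model"] = modelJSON
	return json.Marshal(obj)
}

func main() {
	in := []byte(`{"model":"p1/gpt-4","temperature":0.7,"messages":[{"role":"user","content":"hi"}]}`)
	out, err := rewriteModelField(in, "gpt-4")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // model 被改写为 "gpt-4",messages/temperature 内容不变
}
```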
#### 2.3.2 完整请求处理流程
@@ -162,7 +163,7 @@
┌──────────────┐ ┌──────────────┐
│ URL: │ 调用方完成: 1. 接口识别 │ URL: │
│ /<protocol>/ │ · clientProtocol 2. IsPassthrough? │ 目标协议 │
v1/... │ · nativePath ├─ yes ─┬─ 无 modelOverride → 透传车道 │ 原生路径 │
│ ... │ · nativePath ├─ yes ─┬─ 无 modelOverride → 透传车道 │ 原生路径 │
│ Headers: │ · providerProtocol │ └─ 有 modelOverride → 智能透传车道│ Headers: │
│ 协议原生格式 │ │ │ 目标协议格式 │
│ Body: │ └─ no → 完整转换车道 │ Body: │
@@ -174,7 +175,7 @@
**同协议透传**:client == provider 时,仅重建 Header 后原样转发到上游。
**智能透传**:同协议且需改写 model 字段时,最小化 JSON 改写后转发。
**未知接口透传**无法识别的路径URL+Header 适配后 Body 原样转发。
**未知接口透传**:无法识别的路径,URL+Header 适配后 Body 原样转发;即使请求体存在顶层 `model`,也不做通用猜测。
---
@@ -458,11 +459,13 @@ interface ProtocolAdapter {
**`buildHeaders` 的设计**:Adapter 只需从 `provider` 中提取自己协议需要的认证和配置信息,构建自己的 Header 格式。不再需要理解其他协议的 Header。
**URL 事实来源**:Adapter 的 `detectInterfaceType` 和 `buildUrl` 必须以本地 API reference 为事实来源。OpenAI 参考 `docs/api_reference/openai`(忽略 `responses` 目录),其 nativePath 不包含 `/v1`,例如 `/chat/completions`、`/models`、`/embeddings`。Anthropic 参考 `docs/api_reference/anthropic`,其 nativePath 保留 `/v1`,例如 `/v1/messages`、`/v1/models`。不得根据其他协议是否包含 `/v1` 推断当前协议路径。
**智能透传方法的契约**
- `rewriteRequestModelName` / `rewriteResponseModelName` 必须**幂等**(多次调用结果相同)
- Rewrite 方法必须**最小化**(仅修改 model 字段,不触碰其他字段)
- Rewrite 失败时,引擎使用宽容策略:记录警告日志,使用原始 body 继续处理
- `extractModelName` 支持的接口类型CHAT、EMBEDDINGS、RERANK这些接口的请求体包含 model 字段)
- `extractModelName` 支持 adapter 明确适配的接口类型(CHAT、EMBEDDINGS、RERANK,这些接口的请求体包含 model 字段)。PASSTHROUGH 或未适配接口返回错误或空结果,调用方按无 model 请求透传,不做顶层 `model` 猜测。
### 5.3 InterfaceType
@@ -713,7 +716,7 @@ interface StreamConverter {
| 转换器 | 触发条件 | processChunk | flush |
|--------|---------|--------------|-------|
| `PassthroughStreamConverter` | 同协议 + 无 modelOverride | `[rawChunk]` | `[]` |
| `SmartPassthroughStreamConverter` | 同协议 + 有 modelOverride | `[rewriteResponseModelName(rawChunk)]` | `[]` |
| `SmartPassthroughStreamConverter` | 同协议 + 有 modelOverride | 按 SSE frame 改写 `data` JSON 中的 model失败输出原 frame | 输出缓存中的未完整 frame |
| `CanonicalStreamConverter` | 不同协议 | Decode→Middleware→modelOverride→Encode | decoder.flush()→encoder.flush() |
#### 6.3.3 PassthroughStreamConverter
@@ -732,16 +735,30 @@ class SmartPassthroughStreamConverter implements StreamConverter {
adapter: ProtocolAdapter
modelOverride: String
interfaceType: InterfaceType
buffer: ByteArray
processChunk(rawChunk): Array<RawSSEChunk> {
if rawChunk为空: return []
rewrittenChunk = adapter.rewriteResponseModelName(rawChunk, modelOverride, interfaceType)
if rewrite失败:
log.warn("智能透传改写失败,使用原始 chunk")
return [rawChunk]
return [rewrittenChunk]
buffer.append(rawChunk)
frames = splitCompleteSSEFrames(buffer)
result = []
for frame in frames:
payload = extractDataPayload(frame)
if payload == "[DONE]":
result.append(frame)
continue
rewrittenPayload = adapter.rewriteResponseModelName(payload, modelOverride, interfaceType)
if rewrite失败:
log.warn("智能透传改写失败,使用原始 SSE frame")
result.append(frame)
else:
result.append(rebuildSSEFrameWithData(frame, rewrittenPayload))
return result
}
flush(): Array<RawSSEChunk> {
if buffer为空: return []
return [buffer.drain()] // 未完整 frame 原样输出
}
flush(): Array<RawSSEChunk> { return [] }
}
```
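上述伪代码中的 `splitCompleteSSEFrames` 可按如下 Go 片段理解(函数名与包名为示例假设,仅说明 LF/CRLF 两种分隔符下的 frame 切分思路):

```go
package sse

import "bytes"

// splitCompleteSSEFrames 示意:从缓冲区切出所有完整 SSE frame(含结尾分隔符),
// 未凑齐分隔符的数据留在 rest 中等待下一个 chunk,兼容 \n\n 与 \r\n\r\n。
func splitCompleteSSEFrames(buf []byte) (frames [][]byte, rest []byte) {
	for {
		lf := bytes.Index(buf, []byte("\n\n"))
		crlf := bytes.Index(buf, []byte("\r\n\r\n"))
		idx, sepLen := lf, 2
		if lf < 0 || (crlf >= 0 && crlf < lf) {
			idx, sepLen = crlf, 4
		}
		if idx < 0 {
			return frames, buf
		}
		frames = append(frames, buf[:idx+sepLen])
		buf = buf[idx+sepLen:]
	}
}
```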
@@ -794,7 +811,9 @@ function createStreamConverter(clientProtocol, providerProtocol, modelOverride,
if isPassthrough(clientProtocol, providerProtocol):
if modelOverride非空:
adapter = registry.get(clientProtocol)
// 解析 SSE frame仅改写 data JSON 中的 model解析失败输出原 frame
return new SmartPassthroughStreamConverter(adapter, modelOverride, interfaceType)
// raw passthrough 保留 SSE frame 边界和 [DONE]
return new PassthroughStreamConverter()
providerAdapter = registry.get(providerProtocol)
@@ -849,7 +868,7 @@ engine.registerAdapter(new OpenAIAdapter())
engine.registerAdapter(new AnthropicAdapter())
// 场景1: 跨协议 Chat 转换
// 入站: /openai/v1/chat/completions
// 入站: /openai/chat/completions
provider = TargetProvider {
base_url: "https://api.anthropic.com",
api_key: "xxx",
@@ -881,7 +900,7 @@ converter.flush()
// 场景6: 同协议流式智能透传
converter = engine.createStreamConverter("openai", "openai", "gpt-4-turbo", CHAT)
// 使用 SmartPassthroughStreamConverter逐 chunk 改写 model 字段
// 使用 SmartPassthroughStreamConverter按 SSE frame 改写 data JSON 中的 model 字段
```
---
@@ -894,10 +913,10 @@ converter = engine.createStreamConverter("openai", "openai", "gpt-4-turbo", CHAT
上游 SSE 流
├─ 同协议 + 无 modelOverride: PassthroughStreamConverter
chunk → [chunk]
raw SSE frame → [raw SSE frame]
├─ 同协议 + 有 modelOverride: SmartPassthroughStreamConverter
chunk → [rewriteResponseModelName(chunk)]
SSE frame → data JSON model rewrite → [SSE frame]
└─ 不同协议: CanonicalStreamConverter
StreamDecoder StreamEncoder
@@ -1025,7 +1044,8 @@ ErrorCode = Enum<
UTF8_DECODE_ERROR, // UTF-8 解码错误
PROTOCOL_CONSTRAINT_VIOLATION, // 违反协议约束
ENCODING_FAILURE, // 编码失败
INTERFACE_NOT_SUPPORTED // 目标协议不支持此接口
INTERFACE_NOT_SUPPORTED, // 目标协议不支持此接口
UNSUPPORTED_MULTIMODAL // 多模态内容块暂不支持跨协议转换
>
```
@@ -1050,7 +1070,7 @@ ErrorCode = Enum<
│ │ - interceptStreamEvent 返回 error → continue │
├─────────────────┼───────────────────────────────────────────────────┤
│ 智能透传 │ 宽容模式:重写失败则使用原始 body/chunk │
│ │ - Rewrite 失败 → log.warn + 返回原始 body/chunk
│ │ - Rewrite 失败 → log.warn + 返回原始 body/SSE frame
├─────────────────┼───────────────────────────────────────────────────┤
│ 请求中间件 │ 严格模式:返回错误则中断整个转换 │
│ │ - intercept 返回 error → 返回 error │
@@ -1072,11 +1092,24 @@ ErrorCode = Enum<
具体策略由 `supportsInterface` 返回值决定:返回 false 时引擎直接透传 body。
**多模态处理**(`UNSUPPORTED_MULTIMODAL`):Canonical Model 保留 image、audio、video、file 内容块占位,但当前跨协议多模态编解码暂未实现。跨协议完整转换遇到这些内容块时返回 `UNSUPPORTED_MULTIMODAL`;同协议 raw passthrough 和 smart passthrough 不拒绝多模态字段,仍按原协议请求透传或仅改写 model。
### 9.3 错误响应格式
转换失败时,错误响应用**客户端协议client protocol**的格式编码。由 `clientAdapter.encodeError(error)` 完成。各协议的错误响应 JSON 结构和 HTTP 状态码映射详见各自的协议适配文档(附录 E
ConversionEngine 的 `encodeError` 是协议 Adapter 的错误编码能力,用于 SDK 内部或非代理场景把 `ConversionError` 编码为客户端协议格式。各协议的错误响应 JSON 结构和 HTTP 状态码映射详见各自的协议适配文档(附录 E)。
Middleware 中断转换时同理,引擎调用 clientAdapter.encodeError 将 ConversionError 编码为客户端可理解的格式。
ProxyHandler 对外遵循更明确的代理错误边界:
| 场景 | 响应策略 |
|------|---------|
| 网关层错误 | 返回应用统一 JSON:`{"error": "...", "code": "..."}` |
| 上游已返回 HTTP 非 2xx | 透传上游 status、过滤 hop-by-hop 后的 headers、body |
| 未收到上游 HTTP 响应 | 返回 `UPSTREAM_UNAVAILABLE`,HTTP 502 |
| 网关层 JSON 解析失败 | 返回 `INVALID_JSON`,HTTP 400 |
| 路由统一模型失败 | 返回 `MODEL_NOT_FOUND`,HTTP 404 |
| 跨协议转换失败 | 返回 `CONVERSION_FAILED` 或具体转换错误码 |
因此,代理接口中的网关层错误不会再编码成 OpenAI/Anthropic 协议错误格式;只有上游已经返回的错误响应才按上游原样透传。
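下面用一个 net/http 层面的 Go 示意说明这条边界(`writeGatewayError`、`passthroughUpstream` 及包名均为示例假设,非 ProxyHandler 实际代码):网关自身错误(如 `UPSTREAM_UNAVAILABLE`)走前者,上游已有响应的非 2xx 走后者。

```go
package proxy

import (
	"encoding/json"
	"net/http"
)

// hopByHop 为 RFC 7230 定义的连接级响应头,透传上游响应时需要过滤。
var hopByHop = map[string]bool{
	"Connection": true, "Transfer-Encoding": true, "Keep-Alive": true,
	"Proxy-Authenticate": true, "Proxy-Authorization": true,
	"TE": true, "Trailer": true, "Upgrade": true,
}

// writeGatewayError 示意:网关层错误统一输出 {"error","code"} JSON。
func writeGatewayError(w http.ResponseWriter, status int, msg, code string) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(status)
	_ = json.NewEncoder(w).Encode(map[string]string{"error": msg, "code": code})
}

// passthroughUpstream 示意:上游已返回响应(含非 2xx)时,
// 过滤 hop-by-hop header 后按原 status 和 body 透传,不做协议包装。
func passthroughUpstream(w http.ResponseWriter, status int, headers map[string]string, body []byte) {
	for k, v := range headers {
		if !hopByHop[k] {
			w.Header().Set(k, v)
		}
	}
	w.WriteHeader(status)
	_, _ = w.Write(body)
}
```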
#### 9.3.1 EncodeError Fallback 行为
@@ -1179,7 +1212,7 @@ ProtocolAdapter
// ─── 流式处理 ───
StreamConverter: .processChunk(raw) / .flush()
├─ PassthroughStreamConverter [raw] → [raw]
├─ SmartPassthroughStreamConverter [raw] → [rewrite(raw)]
├─ SmartPassthroughStreamConverter [SSE frame] → [rewrite(data JSON)]
└─ CanonicalStreamConverter decode → middleware → modelOverride → encode
// ─── 中间件 ───
@@ -1248,7 +1281,7 @@ Canonical Model 是**活的公共契约**,不是固定不变的。其字段集
- `decodeXxxRequest` / `encodeXxxRequest`:扩展层接口仅在 `supportsInterface` 返回 true 时被调用(§6.2 convertBody 分支);返回 false 时引擎直接透传 body
- `createStreamDecoder` / `createStreamEncoder`:引擎在 `createStreamConverter` 中调用(§6.1),Decoder 来自 provider 协议(解码上游 SSE),Encoder 来自 client 协议(编码给客户端)
- `buildHeaders`:每次请求出站时调用,同协议透传也会调用
- `encodeError`:转换失败或 Middleware 中断时调用,使用 client 协议格式编码错误响应
- `encodeError`:转换失败或 Middleware 中断时的协议错误编码能力;ProxyHandler 的网关层错误使用应用统一格式,不调用协议错误编码
### D.1 协议基本信息
@@ -1268,7 +1301,7 @@ Canonical Model 是**活的公共契约**,不是固定不变的。其字段集
| URL 映射表 | 每种 InterfaceType 的目标 URL 路径(`buildUrl`) |
**重要**`detectInterfaceType` 由各协议 Adapter 实现,因为不同协议有不同的 URL 路径约定。例如:
- OpenAI: `/v1/chat/completions` → CHAT
- OpenAI: `/chat/completions` → CHAT
- Anthropic: `/v1/messages` → CHAT
### D.3 请求头构建

View File

@@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-04-25

View File

@@ -0,0 +1,100 @@
## Context
当前后端代理入口采用 `/{protocol}/*path`,`ProxyHandler` 将剥离协议前缀后的 path 作为 `nativePath` 传入 adapter。这个模型本身是合理的:OpenAI adapter 应接收 `/chat/completions`,Anthropic adapter 应接收 `/v1/messages`。需要修正的是文档中曾把版本路径抽象为网关统一规则,但实际上版本路径是否存在属于协议原生路径的一部分,应由协议 adapter 识别和映射。
流式链路当前由 `ProviderClient.SendStream` 按 `\n\n` 拆分 SSE 事件,并把 `data: [DONE]` 转成 Done 信号。这个设计适合跨协议 decoder,但不适合同协议 raw passthrough:透传路径会丢失 SSE frame 结尾空行,Smart Passthrough 也无法对 `data: {...}` 中的 JSON 顶层 model 做可靠改写。
错误处理目前混合了协议错误、应用错误和上游错误。新的产品决策是:网关层错误使用应用统一 `{error, code}` 格式;只要上游已经返回 HTTP 响应,即使是非 2xx,也保持透明代理语义,直接透传上游 status、headers、body。
## Goals / Non-Goals
**Goals:**
- 明确网关只剥离协议前缀,不对剩余 nativePath 做版本号归一化。
- 明确不同协议的 nativePath 形态由协议自身决定:OpenAI 为 `/chat/completions`,Anthropic 为 `/v1/messages`。
- 保持现有 `base_url` 约定:OpenAI 配置到版本路径一级,Anthropic 配置到域名级。
- 由 adapter 负责识别协议原生 nativePath,并通过 BuildUrl 产出真实上游路径。
- 同协议无 model 改写的流式请求保持 raw SSE 字节透传。
- 同协议需要 model 改写的流式请求按 SSE frame 解析,只改写 `data` JSON 中的 model 字段,再重建 SSE frame。
- 网关层错误统一使用应用错误格式;上游非 2xx 响应透明透传。
- 只有 adapter 明确适配的接口才提取 model未知接口不做通用 model 猜测。
- 跨协议遇到多模态占位内容时明确返回不支持错误,避免静默丢内容。
**Non-Goals:**
- 不统一不同协议是否使用 `/v1` 版本路径。
- 不调整 OpenAI/Anthropic `base_url` 配置约定。
- 不调整统计口径和统计数据模型。
- 不实现多模态正式转换。
- 不引入新依赖。
## Decisions
### Decision 1: nativePath 保持协议原生路径
外部请求 `/{protocol}/{path}` 只剥离协议前缀,剩余 path 原样作为 client protocol 的 nativePath。OpenAI 请求 `/openai/chat/completions` 进入 OpenAI adapter 时为 `/chat/completions`,Anthropic 请求 `/anthropic/v1/messages` 进入 Anthropic adapter 时为 `/v1/messages`。
替代方案是由网关统一添加或移除 `/v1`。该方案会把版本路径从协议知识错误提升为网关通用规则,导致 Anthropic 和 OpenAI 的 base_url 约定被混淆。
### Decision 2: 对外路径由协议 adapter 的 nativePath 决定
网关不提供跨协议统一路径规范。OpenAI 对外路径不带 `/v1`,因为 OpenAI provider 的 `base_url` 已配置到版本路径一级;Anthropic 对外路径保留 `/v1`,因为 Anthropic 协议原生路径包含版本段且 `base_url` 配置到域名级。
替代方案是把所有协议对外路径都改成不带 `/v1` 或都带 `/v1`。该方案不符合各协议原生路径,也会让 adapter 的 `DetectInterfaceType` 失去协议边界。
### Decision 3: 上游路径由 adapter.BuildUrl 产生
无论同协议透传、Smart Passthrough 还是跨协议转换,出站 URL 都使用 `provider.BaseURL + providerAdapter.BuildUrl(nativePath, interfaceType)`。OpenAI adapter 对 `/chat/completions` 返回 `/chat/completions`,配合 `base_url=http://xxx.com/v1` 得到 `http://xxx.com/v1/chat/completions`。Anthropic adapter 对 `/v1/messages` 返回 `/v1/messages`,配合 `base_url=http://xxx.com` 得到 `http://xxx.com/v1/messages`
替代方案是在 engine 中直接拼接 `provider.BaseURL + nativePath`。该方案当前对部分协议可工作,但绕过了 adapter 的 URL 映射职责,不利于后续协议扩展和特殊路径映射。
### Decision 4: 同协议流式分 raw passthrough 和 smart passthrough 两条路径
当 clientProtocol 等于 providerProtocol 且不需要响应 model 改写时handler/provider client 应直接将上游响应 body 的 SSE 字节写回客户端,不进入 `StreamConverter` 事件拆分链路。
当需要响应 model 改写时,保留 SSE frame 边界,解析每个 SSE frame 的 `data` 行:`[DONE]` 原样输出,JSON payload 调用 adapter 的响应 model 改写逻辑后重新写为 SSE frame,解析失败则按宽容策略输出原 frame。
替代方案是继续让 `ProviderClient.SendStream` 只暴露去掉 `\n\n` 的 rawEvent。该方案无法满足 raw passthrough 的字节语义,也无法可靠输出 `[DONE]`
### Decision 5: 网关层错误与上游错误分离
网关层错误包括路由失败、请求 JSON 错误、转换失败、上游连接失败、跨协议暂不支持能力等,统一返回 `{error, code}`。上游错误指已经收到上游 HTTP 响应的情况,非 2xx 响应不进入 conversion,直接透传 status、过滤后的 headers 和 body。
替代方案是把所有代理错误都编码为客户端协议错误格式。该方案会隐藏上游原始错误,也与管理接口统一错误格式不一致。
### Decision 6: adapter 明确声明可提取 model 的接口边界
Chat、Embeddings、Rerank 等已适配接口由 adapter 的 `ExtractModelName` 明确解析 model。未知接口即使 body 中存在顶层 `model`,也不做假设性提取,按无 model 透传处理。
替代方案是在 handler 中统一尝试解析顶层 model。该方案会误判未来协议特有接口的模型字段语义,破坏 adapter 对协议知识的封装。
### Decision 7: 多模态占位保留但跨协议拒绝
Canonical 中现有 image/audio/video/file 占位保留,后续多模态实现可继续扩展。同协议 Smart Passthrough 保留请求 JSON 语义,不检查多模态。跨协议完整转换检测到 image/audio/video/file 时返回 `UNSUPPORTED_MULTIMODAL` 网关错误。
替代方案是继续编码空占位或静默丢弃。该方案会制造数据丢失,调用方难以诊断。
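一个可能的检查实现示意(Go;`ContentBlock`、`CanonicalRequest` 的类型和字段名为示例假设,并非实际 Canonical 定义):

```go
package example

import "fmt"

// ContentBlock / CanonicalMessage / CanonicalRequest 为示例结构,仅用于说明检查逻辑。
type ContentBlock struct {
	Type string // "text"、"image"、"audio"、"video"、"file" 等
	Text string
}

type CanonicalMessage struct {
	Role    string
	Content []ContentBlock
}

type CanonicalRequest struct {
	Model    string
	Messages []CanonicalMessage
}

// unsupportedBlockTypes 列出当前跨协议转换拒绝的多模态内容块类型。
var unsupportedBlockTypes = map[string]bool{
	"image": true, "audio": true, "video": true, "file": true,
}

// rejectUnsupportedMultimodal 示意:在跨协议 encode 之前检查内容块,
// 命中多模态占位即返回 UNSUPPORTED_MULTIMODAL 错误,而不是静默丢弃。
func rejectUnsupportedMultimodal(req *CanonicalRequest) error {
	for _, msg := range req.Messages {
		for _, block := range msg.Content {
			if unsupportedBlockTypes[block.Type] {
				return fmt.Errorf("UNSUPPORTED_MULTIMODAL: 暂不支持跨协议转换 %s 内容块", block.Type)
			}
		}
	}
	return nil
}
```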
## Risks / Trade-offs
- [Risk] 对外路径由协议决定,调用方需要分别记住 OpenAI 和 Anthropic 的路径形态。→ Mitigation: README、backend README 和设计文档明确列出每个协议的路径和 base_url 约定。
- [Risk] raw stream passthrough 可能需要调整 ProviderClient 接口,影响现有测试。→ Mitigation: 将非流式、跨协议流式、同协议 raw 流式分别建模,并补充 E2E 测试。
- [Risk] 上游错误透传可能让 OpenAI 客户端看到 Anthropic 错误体。→ Mitigation: 文档明确透明代理边界;只有网关自身错误保证统一格式。
- [Risk] SSE frame 级 model 改写需要处理 CRLF、多 data 行和 `[DONE]`。→ Mitigation: 实现轻量 SSE frame parser,覆盖 LF/CRLF、多行 data、解析失败回退原 frame 的测试。
- [Risk] 跨协议多模态拒绝会短期限制能力。→ Mitigation: 保留 Canonical 占位和同协议透传,后续多模态 change 可在此基础上扩展。
## Migration Plan
1. 更新 OpenSpec 和设计文档,明确协议原生路径、base_url、错误和流式边界。
2. 校正 adapter 路径识别和 `BuildUrl` 映射测试,确保 OpenAI 接收 `/chat/completions`、Anthropic 接收 `/v1/messages`
3. 修改 `ConversionEngine` 同协议 URL 构建,统一使用 `BuildUrl`,避免核心逻辑绕过 adapter。
4. 调整 ProviderClient/ProxyHandler 流式链路,支持 raw passthrough、SSE frame 级 Smart Passthrough 和非 2xx 透传。
5. 调整网关层错误输出,非 2xx 上游响应绕过 conversion。
6. 补齐单测、集成测试和 E2E 测试。
7. 更新 README、backend README 和 `docs/conversion_design.md`
由于应用尚未上线,不提供额外路径别名兼容和数据迁移。回滚策略是恢复旧流式 provider client 和错误处理行为。
## Open Questions
无。

View File

@@ -0,0 +1,42 @@
## Why
当前 conversion/proxy 层的错误处理和流式透传行为与最新产品决策不一致;路径文档也需要明确职责边界:网关只剥离 `/{protocol}` 前缀,不对剩余路径做版本号归一化,是否包含 `/v1` 由具体协议 adapter 处理。
这次调整在应用上线前完成,可以避免错误的路径抽象固化,并为后续新增接口和多模态转换打好边界。
## What Changes
- 明确网关路径职责:只剥离协议前缀 `/{protocol}`,剩余 path 原样作为客户端协议 nativePath 交给 adapter。
- 明确不同协议的对外路径由协议自身决定,不做跨协议统一:OpenAI 使用 `/openai/chat/completions`、`/openai/models`、`/openai/embeddings`、`/openai/rerank`;Anthropic 使用 `/anthropic/v1/messages`、`/anthropic/v1/models`、`/anthropic/v1/models/{provider_id}/{model_name}`。
- 明确 `base_url` 约定不变:OpenAI 配置到版本路径一级(如 `http://xxx.com/v1`),Anthropic 配置到域名级(如 `http://xxx.com`)。
- adapter 负责识别剥离协议前缀后的协议原生 nativePath,并通过 BuildUrl 产出真实上游路径。
- 同协议流式无 model 改写时直接透传上游 SSE 字节;需要 model 改写时按 SSE frame 解析和重建,只改写 data JSON 内的 model 字段。
- 网关层错误使用应用统一错误格式 `{error, code}`;上游已经返回 HTTP 响应时,非 2xx 状态码、headers、body 直接透传,不进入协议转换。
- 只有 adapter 明确适配的接口才提取 model 并参与统一模型 ID 路由;未知接口不做通用 model 猜测,按无 model 透传处理。
- 多模态占位保留;本次不实现多模态转换,跨协议检测到 image/audio/video/file 时返回网关层不支持错误。
- 本次不调整统计口径。
## Capabilities
### New Capabilities
-
### Modified Capabilities
- `unified-proxy-handler`: 修改 nativePath 语义、同协议流式透传、上游错误透传和未知接口路由规则。
- `openai-protocol-proxy`: 明确 OpenAI 对外路径和 adapter 接收的 nativePath 形态,并约束流式完成标记和上游错误透传行为。
- `anthropic-protocol-proxy`: 明确 Anthropic 对外路径保持 `/v1` 层级、adapter 接收 `/v1/...` nativePath、上游 URL 由 `base_url` 与 adapter BuildUrl 返回路径组合得到。
- `error-responses`: 增加网关层统一错误码设计,并明确代理接口中上游错误透传、不做协议错误包装。
- `conversion-engine`: 约束同协议 Smart Passthrough 的 URL 构建、SSE frame 级 model 改写、adapter 模型提取边界和跨协议多模态暂不支持行为。
## Impact
- 后端代理入口:`backend/internal/handler/proxy_handler.go`
- 供应商客户端流式接口:`backend/internal/provider/client.go`
- 转换引擎和流式转换器:`backend/internal/conversion/engine.go`、`backend/internal/conversion/stream.go`
- OpenAI/Anthropic adapter:`backend/internal/conversion/openai/*`、`backend/internal/conversion/anthropic/*`
- 错误处理公共逻辑:`backend/pkg/errors`、handler 错误响应逻辑
- 测试:conversion 单测、adapter 单测、代理集成测试、E2E conversion 测试
- 文档:`README.md`、`backend/README.md`、`docs/conversion_design.md`
- 不引入新依赖,不调整数据库 schema,不调整统计模块

View File

@@ -0,0 +1,80 @@
## MODIFIED Requirements
### Requirement: 支持 Anthropic Messages API 端点
网关 SHALL 提供 Anthropic Messages API 端点供外部应用调用。
#### Scenario: 成功的非流式请求
- **WHEN** 应用发送 POST 请求到 `/anthropic/v1/messages`,携带有效的 Anthropic 请求格式(非流式)
- **THEN** 网关 SHALL 剥离 `/anthropic` 前缀并将 `/v1/messages` 作为 Anthropic nativePath
- **THEN** 网关 SHALL 通过 ConversionEngine 将 Anthropic 请求解码为 Canonical 格式
- **THEN** 网关 SHALL 将 Canonical 请求编码为目标供应商协议格式
- **THEN** 若上游返回 2xx,网关 SHALL 将供应商的响应通过 ConversionEngine 转换为 Anthropic 格式返回给应用
#### Scenario: 成功的流式请求
- **WHEN** 应用发送 POST 请求到 `/anthropic/v1/messages`,携带 `stream: true`
- **THEN** 网关 SHALL 通过 ConversionEngine 创建 StreamConverter 或使用同协议流式透传路径
- **THEN** 网关 SHALL 将上游协议的 SSE 流转换为 Anthropic 命名事件格式,或在同协议 raw passthrough 下透传 Anthropic SSE
- **THEN** 网关 SHALL 使用 `event: <type>\ndata: <json>\n\n` 格式流式返回给应用
#### Scenario: 同协议透传(Anthropic → Anthropic Provider)
- **WHEN** 客户端使用 Anthropic 协议且目标供应商也是 Anthropic 协议
- **THEN** 网关 SHALL 跳过 Canonical 转换
- **THEN** 网关 SHALL 使用 Anthropic adapter 重建上游 URL 和认证 Header
- **THEN** 若请求使用统一模型 ID,网关 SHALL 仅通过 Smart Passthrough 改写 model 字段
### Requirement: 双向协议转换
网关 SHALL 支持 Anthropic 协议与任意已注册协议间的双向转换。
#### Scenario: Anthropic 客户端 → OpenAI 供应商
- **WHEN** 客户端使用 Anthropic 协议且供应商使用 OpenAI 协议
- **THEN** SHALL 将 Anthropic MessagesRequest 解码为 CanonicalRequest
- **THEN** SHALL 将 CanonicalRequest 编码为 OpenAI ChatCompletionRequest
- **THEN** 若上游返回 2xx,SHALL 将 OpenAI ChatCompletionResponse 解码为 CanonicalResponse
- **THEN** SHALL 将 CanonicalResponse 编码为 Anthropic MessagesResponse
#### Scenario: OpenAI 客户端 → Anthropic 供应商
- **WHEN** 客户端使用 OpenAI 协议且供应商使用 Anthropic 协议
- **THEN** SHALL 将 OpenAI ChatCompletionRequest 解码为 CanonicalRequest
- **THEN** SHALL 将 CanonicalRequest 编码为 Anthropic MessagesRequest
- **THEN** 若上游返回 2xx,SHALL 将 Anthropic MessagesResponse 解码为 CanonicalResponse
- **THEN** SHALL 将 CanonicalResponse 编码为 OpenAI ChatCompletionResponse
#### Scenario: 上游错误透传
- **WHEN** Anthropic 代理请求收到上游非 2xx HTTP 响应
- **THEN** 网关 SHALL 直接透传上游 status code、过滤后的 headers 和 body
- **THEN** 网关 SHALL NOT 将上游错误转换为 Anthropic 错误格式
### Requirement: Anthropic 端点保持 v1 层级
Anthropic 协议的对外端点 SHALL 保持 `/v1` 层级以匹配 Anthropic 原生路径约定。
#### Scenario: Claude Code 调用 Anthropic 端点
- **WHEN** Claude Code 发送请求到 `/anthropic/v1/messages`
- **THEN** 网关 SHALL 正确处理请求
- **THEN** nativePath SHALL 为 `/v1/messages`
- **THEN** 最终上游 URL SHALL 为 `base_url + /v1/messages`
## ADDED Requirements
### Requirement: Anthropic 上游路径映射
Anthropic adapter SHALL 将剥离协议前缀后的 Anthropic nativePath 映射为 Anthropic 上游路径。
#### Scenario: Messages 上游路径
- **WHEN** nativePath 为 `/v1/messages`
- **THEN** Anthropic adapter BuildUrl SHALL 返回 `/v1/messages`
#### Scenario: Models 上游路径
- **WHEN** nativePath 为 `/v1/models`
- **THEN** Anthropic adapter BuildUrl SHALL 返回 `/v1/models`

View File

@@ -0,0 +1,108 @@
## ADDED Requirements
### Requirement: 同协议请求 URL 使用 Adapter 映射
ConversionEngine SHALL 在同协议透传和 Smart Passthrough 场景下使用 providerAdapter.BuildUrl 构建上游 URL 路径。
#### Scenario: 同协议 Chat URL 映射
- **WHEN** clientProtocol == providerProtocol 且 interfaceType 为 CHAT
- **THEN** ConversionEngine SHALL 调用 providerAdapter.BuildUrl(nativePath, interfaceType)
- **THEN** 上游 URL SHALL 为 provider.BaseURL 与 BuildUrl 返回路径的组合
- **THEN** ConversionEngine SHALL NOT 直接将 provider.BaseURL 与 nativePath 拼接为上游 URL
#### Scenario: 未知接口 URL 映射
- **WHEN** interfaceType 为 PASSTHROUGH
- **THEN** providerAdapter.BuildUrl SHALL 返回适合目标协议的路径或原 nativePath
- **THEN** ConversionEngine SHALL 使用该路径构建上游 URL
### Requirement: SSE Frame 级 Smart Passthrough
系统 SHALL 支持同协议流式 Smart Passthrough 对 SSE frame 中的 model 字段进行最小化改写。
#### Scenario: 改写 SSE data JSON
- **WHEN** 同协议流式响应需要将上游模型名改写为统一模型 ID
- **THEN** 系统 SHALL 按 SSE frame 解析上游字节流
- **THEN** 对包含 JSON payload 的 `data` 行 SHALL 调用 adapter.RewriteResponseModelName 改写 model 字段
- **THEN** SHALL 重建合法 SSE frame 输出
#### Scenario: 保留 DONE 事件
- **WHEN** SSE frame 的 data payload 为 `[DONE]`
- **THEN** 系统 SHALL 原样输出 `[DONE]`
- **THEN** SHALL NOT 尝试按 JSON 解析
#### Scenario: 改写失败宽容降级
- **WHEN** SSE frame 解析或 model 改写失败
- **THEN** 系统 SHALL 记录 warn 日志
- **THEN** SHALL 输出原始 SSE frame
- **THEN** SHALL 继续处理后续 frame
### Requirement: Adapter 模型提取边界
ProtocolAdapter SHALL 只对明确适配的接口提供 model 提取和 model 改写能力。
#### Scenario: 已适配接口提取 model
- **WHEN** ifaceType 为 adapter 明确支持提取 model 的接口
- **THEN** ExtractModelName SHALL 按该协议和接口的请求格式提取 model 字段
#### Scenario: 未适配接口不提取 model
- **WHEN** ifaceType 为 PASSTHROUGH 或 adapter 未明确支持提取 model 的接口
- **THEN** ExtractModelName SHALL 返回错误或空结果
- **THEN** 调用方 SHALL 按无 model 请求处理
### Requirement: 跨协议多模态暂不支持
ConversionEngine SHALL 在跨协议完整转换中对当前暂不支持的多模态内容返回明确错误。
#### Scenario: 跨协议请求包含多模态内容块
- **WHEN** clientProtocol != providerProtocol 且 CanonicalRequest 中包含 image、audio、video 或 file 内容块
- **THEN** ConversionEngine SHALL 中断转换
- **THEN** SHALL 返回网关层 `UNSUPPORTED_MULTIMODAL` 错误
- **THEN** SHALL NOT 静默丢弃多模态内容
#### Scenario: 同协议多模态请求
- **WHEN** clientProtocol == providerProtocol 且请求通过 Smart Passthrough 或 raw passthrough 处理
- **THEN** 系统 SHALL 保留原始请求体中未改写字段
- **THEN** SHALL NOT 因多模态字段存在而执行跨协议多模态校验
### Requirement: 上游非 2xx 响应不进入转换
ConversionEngine SHALL 只转换调用方传入的成功响应ProxyHandler SHALL 在调用转换前过滤上游非 2xx 响应。
#### Scenario: 非 2xx 响应绕过响应转换
- **WHEN** 上游响应状态码不是 2xx
- **THEN** ProxyHandler SHALL NOT 调用 ConversionEngine.ConvertHttpResponse
- **THEN** ProxyHandler SHALL 直接透传该响应
#### Scenario: 流式非 2xx 响应绕过流式转换
- **WHEN** 流式请求收到上游非 2xx 响应
- **THEN** ProxyHandler SHALL NOT 调用 ConversionEngine.CreateStreamConverter
- **THEN** ProxyHandler SHALL 直接透传该响应
### Requirement: 协议路径来源
ProtocolAdapter SHALL 以对应协议的本地 API reference 文档作为 URL 识别和 URL 映射的事实来源。
#### Scenario: OpenAI 路径来源
- **WHEN** 实现或测试 OpenAI adapter 的 DetectInterfaceType 或 BuildUrl
- **THEN** SHALL 参考 `docs/api_reference/openai` 中的接口路径
- **THEN** SHALL 忽略 `docs/api_reference/openai/responses` 目录
- **THEN** SHALL NOT 因其他协议包含 `/v1` 而给 OpenAI nativePath 添加 `/v1`
#### Scenario: Anthropic 路径来源
- **WHEN** 实现或测试 Anthropic adapter 的 DetectInterfaceType 或 BuildUrl
- **THEN** SHALL 参考 `docs/api_reference/anthropic` 中的接口路径
- **THEN** SHALL 保留文档中接口路径自带的 `/v1` 前缀
- **THEN** SHALL NOT 因 OpenAI nativePath 不含 `/v1` 而移除 Anthropic nativePath 中的 `/v1`

View File

@@ -0,0 +1,95 @@
## ADDED Requirements
### Requirement: 网关层代理错误使用应用统一格式
系统 SHALL 对代理接口中由网关自身产生的错误使用应用统一错误响应格式。
#### Scenario: 标准网关错误格式
- **WHEN** 代理接口返回网关层错误
- **THEN** SHALL 使用以下 JSON 格式:
```json
{
"error": "错误描述",
"code": "ERROR_CODE"
}
```
- **THEN** `error` 字段 SHALL 包含人类可读的错误描述
- **THEN** `code` 字段 SHALL 包含机器可读的错误码
#### Scenario: 网关错误码集合
- **WHEN** 代理接口返回网关层错误
- **THEN** code SHALL 使用以下枚举之一:`INVALID_JSON`、`INVALID_REQUEST`、`INVALID_MODEL_ID`、`MODEL_NOT_FOUND`、`PROVIDER_NOT_FOUND`、`UNSUPPORTED_INTERFACE`、`UNSUPPORTED_MULTIMODAL`、`CONVERSION_FAILED`、`UPSTREAM_UNAVAILABLE`
### Requirement: 代理接口上游错误透传
系统 SHALL 对代理接口中已经收到的上游 HTTP 错误响应执行透明透传。
#### Scenario: 非流式上游非 2xx 响应
- **WHEN** 非流式代理请求收到上游 HTTP 响应且状态码不是 2xx
- **THEN** SHALL 透传上游 status code
- **THEN** SHALL 透传过滤 hop-by-hop header 后的上游 headers
- **THEN** SHALL 透传上游 body
- **THEN** SHALL NOT 将上游错误包装为应用统一错误
- **THEN** SHALL NOT 将上游错误转换为客户端协议错误格式
#### Scenario: 流式上游非 2xx 响应
- **WHEN** 流式代理请求收到上游 HTTP 响应且状态码不是 2xx
- **THEN** SHALL 透传上游 status code
- **THEN** SHALL 透传过滤 hop-by-hop header 后的上游 headers
- **THEN** SHALL 透传上游 body
- **THEN** SHALL NOT 创建 StreamConverter
### Requirement: 上游不可达错误
系统 SHALL 在没有收到上游 HTTP 响应时返回网关层错误。
#### Scenario: 上游连接失败
- **WHEN** ProviderClient 因 DNS、连接失败、TLS、超时或上下文取消等原因无法获得上游 HTTP 响应
- **THEN** SHALL 返回 HTTP 502 或合适的 5xx 状态码
- **THEN** SHALL 返回应用统一错误格式
- **THEN** code SHALL 为 `UPSTREAM_UNAVAILABLE`
### Requirement: Hop-by-hop header 过滤
系统 SHALL 在透传上游错误响应时过滤 hop-by-hop headers。
#### Scenario: 过滤连接级 header
- **WHEN** 透传上游错误响应 headers
- **THEN** SHALL 过滤 `Connection`、`Transfer-Encoding`、`Keep-Alive`、`Proxy-Authenticate`、`Proxy-Authorization`、`TE`、`Trailer`、`Upgrade`
- **THEN** SHALL 保留 `Content-Type` 等普通响应 header
## MODIFIED Requirements
### Requirement: JSON 格式错误
系统 SHALL 对请求体 JSON 格式错误返回明确的错误信息。
#### Scenario: 请求体 JSON 格式错误
- **WHEN** 代理请求的请求体不是有效的 JSON 格式,且该接口需要网关解析请求体
- **THEN** SHALL 返回 HTTP 400 Bad Request
- **THEN** SHALL 返回以下 JSON 格式:
```json
{
"error": "请求体 JSON 格式错误",
"code": "INVALID_JSON"
}
```
#### Scenario: Smart Passthrough 时请求体 JSON 格式错误
- **WHEN** 同协议 Smart Passthrough 场景下,请求体 JSON 格式不正确
- **THEN** SHALL 返回 HTTP 400 Bad Request
- **THEN** SHALL 返回以下 JSON 格式:
```json
{
"error": "请求体 JSON 格式错误",
"code": "INVALID_JSON"
}
```

View File

@@ -0,0 +1,74 @@
## MODIFIED Requirements
### Requirement: 支持 OpenAI Chat Completions API 端点
网关 SHALL 提供 OpenAI Chat Completions API 端点供外部应用调用。
#### Scenario: 成功的非流式请求
- **WHEN** 应用发送 POST 请求到 `/openai/chat/completions`,携带有效的 OpenAI 请求格式(非流式)
- **THEN** 网关 SHALL 剥离 `/openai` 前缀并将 `/chat/completions` 作为 OpenAI nativePath
- **THEN** 网关 SHALL 通过 ConversionEngine 转换请求
- **THEN** 网关 SHALL 将转换后的请求转发到配置的供应商
- **THEN** 若上游返回 2xx,网关 SHALL 将供应商的响应通过 ConversionEngine 转换为 OpenAI 格式返回给应用
#### Scenario: 成功的流式请求
- **WHEN** 应用发送 POST 请求到 `/openai/chat/completions`,携带 `stream: true`
- **THEN** 网关 SHALL 通过 ConversionEngine 创建 StreamConverter 或使用同协议流式透传路径
- **THEN** 网关 SHALL 使用 SSE 格式将转换后的响应流式返回给应用
- **THEN** 网关 SHALL 在 OpenAI 协议流完成时发送或透传 `data: [DONE]`
#### Scenario: 同协议透传(OpenAI → OpenAI Provider)
- **WHEN** 客户端使用 OpenAI 协议且目标供应商也是 OpenAI 协议
- **THEN** 网关 SHALL 跳过 Canonical 转换
- **THEN** 网关 SHALL 使用 OpenAI adapter 重建上游 URL 和认证 Header
- **THEN** 若请求使用统一模型 ID,网关 SHALL 仅通过 Smart Passthrough 改写 model 字段
### Requirement: 根据模型名称路由请求
网关 SHALL 根据请求中的 `model` 字段将请求路由到相应的供应商。
#### Scenario: 有效模型路由
- **WHEN** 请求包含有效统一模型 ID 格式的 `model` 字段
- **AND** 该模型存在且已启用
- **THEN** 网关 SHALL 将请求路由到该模型关联的供应商
- **THEN** 网关 SHALL 从供应商的 `protocol` 字段获取 providerProtocol
#### Scenario: 模型未找到
- **WHEN** 请求包含有效统一模型 ID 格式的 `model` 字段但模型不存在
- **THEN** 网关 SHALL 使用应用统一错误格式返回 404 错误
#### Scenario: 模型已禁用
- **WHEN** 请求包含已禁用模型的有效统一模型 ID
- **THEN** 网关 SHALL 使用应用统一错误格式返回 404 错误
#### Scenario: 原始模型名兼容透传
- **WHEN** 请求中的 `model` 字段不是有效统一模型 ID 格式
- **THEN** 网关 SHALL 按无可路由 model 请求走 forwardPassthrough
### Requirement: 跨协议请求转换
网关 SHALL 对非 OpenAI 兼容供应商的请求和响应通过 ConversionEngine 进行转换处理。
#### Scenario: 跨协议请求转发
- **WHEN** 网关收到 OpenAI 协议请求且目标供应商使用不同协议
- **THEN** 网关 SHALL 通过 ConversionEngine 将请求转换为目标协议格式
- **THEN** 网关 SHALL 使用目标协议的 Adapter 构建 URL 和 Header
#### Scenario: 扩展层接口代理
- **WHEN** 网关收到 `/openai/models` 等扩展层接口请求
- **THEN** 网关 SHALL 通过本地聚合或 ConversionEngine 转换扩展层接口响应格式
#### Scenario: 上游错误透传
- **WHEN** OpenAI 代理请求收到上游非 2xx HTTP 响应
- **THEN** 网关 SHALL 直接透传上游 status code、过滤后的 headers 和 body
- **THEN** 网关 SHALL NOT 将上游错误转换为 OpenAI 错误格式

View File

@@ -0,0 +1,229 @@
## MODIFIED Requirements
### Requirement: 实现统一代理 Handler
系统 SHALL 实现统一的 ProxyHandler替代现有的 OpenAIHandler 和 AnthropicHandler。
ProxyHandler SHALL 依赖 ConversionEngine、ProviderClient、RoutingService、StatsService。
#### Scenario: 从 URL 提取客户端协议
- **WHEN** 收到 `/{protocol}/{path}` 格式的请求
- **THEN** SHALL 从 URL 第一段提取 protocol 作为 clientProtocol
- **THEN** SHALL 剥离协议前缀得到 nativePath
- **THEN** nativePath SHALL 保留剥离协议前缀后的协议原生路径,不添加、不移除版本路径段
- **THEN** OpenAI 请求 `/openai/chat/completions` 的 nativePath SHALL 为 `/chat/completions`
- **THEN** Anthropic 请求 `/anthropic/v1/messages` 的 nativePath SHALL 为 `/v1/messages`
#### Scenario: 协议前缀必须是已注册协议
- **WHEN** 收到的 URL 前缀不是已注册的协议名称
- **THEN** SHALL 返回 404 错误
- **THEN** 错误响应 SHALL 使用应用统一错误格式
#### Scenario: 接口类型识别
- **WHEN** 提取 nativePath 后
- **THEN** SHALL 通过 ConversionEngine.convertHttpRequest 内部调用 clientAdapter.detectInterfaceType(nativePath) 识别接口类型
- **THEN** 未知路径 SHALL 使用透传模式
### Requirement: 非流式请求处理流程
ProxyHandler SHALL 按以下流程处理非流式请求。
#### Scenario: 完整转换流程
- **WHEN** 收到非流式请求
- **THEN** SHALL 通过客户端 adapter 明确支持的接口规则提取 model
- **THEN** SHALL 调用 RoutingService.RouteByModelName(providerID, modelName) 获取路由结果
- **THEN** SHALL 从路由结果的 Provider.Protocol 获取 providerProtocol
- **THEN** SHALL 构建 TargetProvider(base_url, api_key, model_name, adapter_config)
- **THEN** SHALL 调用 engine.convertHttpRequest 获取 HTTPRequestSpec
- **THEN** SHALL 调用 providerClient.Send(ctx, requestSpec)
- **THEN** 若上游响应状态码为 2xx,SHALL 调用 engine.convertHttpResponse 转换响应
- **THEN** SHALL 将转换后的响应返回给客户端
#### Scenario: 路由失败处理
- **WHEN** RoutingService.RouteByModelName 返回错误
- **THEN** SHALL 使用应用统一错误格式返回网关层错误
- **THEN** SHALL 返回适当的 HTTP 状态码
#### Scenario: 上游请求失败处理
- **WHEN** ProviderClient.Send 未收到上游 HTTP 响应并返回错误
- **THEN** SHALL 使用应用统一错误格式返回网关层错误
- **THEN** SHALL 包装原始错误信息用于日志
#### Scenario: 上游非 2xx 响应透传
- **WHEN** ProviderClient.Send 收到上游 HTTP 响应且状态码不是 2xx
- **THEN** ProxyHandler SHALL NOT 调用 engine.convertHttpResponse
- **THEN** SHALL 透传上游 status code
- **THEN** SHALL 透传过滤 hop-by-hop header 后的上游 headers
- **THEN** SHALL 透传上游 body
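以下为上游非 2xx 透传时 hop-by-hop header 过滤的最小示意:基于标准库 `net/http`,头部清单与处理细节为假设,实际实现以项目单元测试覆盖的过滤逻辑为准。

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// hopByHopHeaders 是透传上游响应时需要剥离的逐跳头部(示意清单,按 RFC 7230 约定)。
var hopByHopHeaders = []string{
	"Connection", "Keep-Alive", "Proxy-Authenticate", "Proxy-Authorization",
	"Te", "Trailer", "Transfer-Encoding", "Upgrade",
}

// filterHopByHop 返回剔除逐跳头部后的 Header 副本,用于上游非 2xx 响应的透传。
func filterHopByHop(src http.Header) http.Header {
	dst := src.Clone()
	for _, h := range hopByHopHeaders {
		dst.Del(h)
	}
	// Connection 头中列出的字段名同样属于逐跳头部
	for _, v := range src.Values("Connection") {
		for _, f := range strings.Split(v, ",") {
			dst.Del(strings.TrimSpace(f))
		}
	}
	return dst
}

func main() {
	h := http.Header{}
	h.Set("Content-Type", "application/json")
	h.Set("Transfer-Encoding", "chunked")
	h.Set("Connection", "X-Internal-Trace")
	h.Set("X-Internal-Trace", "abc")
	fmt.Println(filterHopByHop(h)) // 仅保留 Content-Type
}
```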
### Requirement: 流式请求处理流程
ProxyHandler SHALL 按以下流程处理流式请求。
#### Scenario: 跨协议流式转换流程
- **WHEN** 请求中 stream=true 且 clientProtocol != providerProtocol
- **THEN** SHALL 执行与非流式相同的前置处理(路由、构建 TargetProvider
- **THEN** SHALL 调用 engine.convertHttpRequest 获取 HTTPRequestSpec
- **THEN** SHALL 调用 providerClient.SendStream(ctx, requestSpec) 获取流
- **THEN** 若上游返回 2xx 流式响应SHALL 调用 engine.createStreamConverter(clientProtocol, providerProtocol, provider)
- **THEN** SHALL 设置响应 Header `Content-Type: text/event-stream`
- **THEN** SHALL 对每个流 chunk 调用 converter.processChunk 并写入响应
- **THEN** SHALL 在流结束时调用 converter.flush() 刷新缓冲
#### Scenario: 同协议 raw 流式透传
- **WHEN** clientProtocol == providerProtocol 且响应不需要 model 改写
- **THEN** SHALL 直接将上游 SSE 字节流写入响应
- **THEN** SHALL 保留 SSE frame 边界和结束标记
- **THEN** SHALL NOT 做任何解析或转换
#### Scenario: 同协议流式 Smart Passthrough
- **WHEN** clientProtocol == providerProtocol 且响应需要 model 改写
- **THEN** SHALL 按 SSE frame 读取上游流
- **THEN** SHALL 仅改写 `data` JSON payload 内的 model 字段
- **THEN** SHALL 原样保留 `[DONE]` 结束标记
- **THEN** SHALL 重建合法 SSE frame 写回客户端
- **THEN** SHALL NOT 做 Canonical decode/encode 转换
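以下 Go 片段示意同协议流式 Smart Passthrough 中单个 SSE frame 的 data 改写:`[DONE]` 原样保留,解析或改写失败时回退为原始 payload 并继续;`rewriteSSEData` 为说明用的假设函数名。

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// rewriteSSEData 对单个 SSE frame 的 data payload 改写 model 字段:
// - "[DONE]" 原样返回;
// - JSON 解析失败、无 model 字段或序列化失败时返回原始 payload不中断流。
func rewriteSSEData(payload, unifiedModelID string) string {
	if strings.TrimSpace(payload) == "[DONE]" {
		return payload
	}
	var obj map[string]json.RawMessage
	if err := json.Unmarshal([]byte(payload), &obj); err != nil {
		return payload
	}
	if _, has := obj["model"]; !has {
		return payload
	}
	m, err := json.Marshal(unifiedModelID)
	if err != nil {
		return payload
	}
	obj["model"] = m
	out, err := json.Marshal(obj)
	if err != nil {
		return payload
	}
	return string(out)
}

func main() {
	fmt.Println(rewriteSSEData(`{"id":"c1","model":"gpt-4","choices":[]}`, "openai/gpt-4"))
	fmt.Println(rewriteSSEData("[DONE]", "openai/gpt-4")) // 原样保留
}
```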
#### Scenario: 流式上游非 2xx 响应透传
- **WHEN** 流式请求收到上游非 2xx HTTP 响应
- **THEN** SHALL NOT 创建 StreamConverter
- **THEN** SHALL 透传上游 status code、过滤后的 headers 和 body
#### Scenario: 流式错误处理
- **WHEN** 流过程中发生网关层读取或写入错误
- **THEN** SHALL 记录错误日志
- **THEN** SHALL 关闭响应流
### Requirement: GET 请求透传
ProxyHandler SHALL 支持无请求体 GET 请求和未知接口透传。
#### Scenario: Models 接口本地处理
- **WHEN** 收到客户端协议的模型列表路径请求,例如 `GET /openai/models``GET /anthropic/v1/models`
- **THEN** SHALL 走模型列表本地聚合流程
- **THEN** SHALL NOT 请求上游供应商
#### Scenario: 未知 GET 接口透传
- **WHEN** 收到未被 adapter 识别为本地处理接口的 GET 请求
- **THEN** SHALL 走 forwardPassthrough 透传到上游供应商
- **THEN** SHALL 保留原始 query string
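以下为未知 GET 接口透传时保留原始 query string 的最小示意;示例路径 `/files` 仅为假设的未适配接口,且假设 baseURL 末尾不带 `/`

```go
package main

import "fmt"

// buildPassthroughURL 组合上游 URL 并保留原始 query string仅为 forwardPassthrough 的示意。
func buildPassthroughURL(baseURL, nativePath, rawQuery string) string {
	u := baseURL + nativePath
	if rawQuery != "" {
		u += "?" + rawQuery
	}
	return u
}

func main() {
	fmt.Println(buildPassthroughURL("https://api.openai.com/v1", "/files", "limit=20"))
	// https://api.openai.com/v1/files?limit=20
}
```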
### Requirement: 代理请求路由
对于 adapter 明确支持提取 model 的代理请求ProxyHandler SHALL 使用统一模型 ID 进行路由。
#### Scenario: 提取统一模型 ID
- **WHEN** 收到 adapter 明确支持提取 model 的接口请求(如 Chat、Embeddings 或 Rerank且请求体非空
- **THEN** SHALL 调用客户端协议 adapter 的 `ExtractModelName(body, ifaceType)` 提取 model 值
- **THEN** SHALL 调用 `ParseUnifiedModelID` 解析得到 providerID 和 modelName
- **THEN** SHALL 调用 `RoutingService.RouteByModelName(providerID, modelName)` 路由
#### Scenario: 未知接口不猜测 model
- **WHEN** 收到 adapter 未明确支持提取 model 的接口请求
- **THEN** ProxyHandler SHALL NOT 尝试通用解析顶层 model 字段
- **THEN** SHALL 按无 model 请求走 forwardPassthrough
#### Scenario: GET 请求或无请求体
- **WHEN** 收到 GET 请求或请求体为空或请求体中无法提取 model 字段
- **THEN** SHALL 走 forwardPassthrough 透传到上游供应商
#### Scenario: 无效的统一模型 ID
- **WHEN** adapter 已提取 model 字段但该字段不是有效的统一模型 ID 格式(不含 `/`
- **THEN** SHALL 走 forwardPassthrough 透传到上游供应商
#### Scenario: 模型不存在
- **WHEN** 解析统一模型 ID 后,数据库中找不到对应的 provider_id + model_name 组合
- **THEN** SHALL 返回应用统一错误响应,状态码为 404
#### Scenario: 模型已禁用
- **WHEN** 解析统一模型 ID 后,对应的模型 enabled 为 false
- **THEN** SHALL 返回应用统一错误响应,状态码为 404
#### Scenario: 供应商已禁用
- **WHEN** 解析统一模型 ID 后,对应的供应商 enabled 为 false
- **THEN** SHALL 返回应用统一错误响应,状态码为 404
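以下 Go 片段示意上述 404 判定:模型不存在、模型禁用、供应商禁用统一映射为 `MODEL_NOT_FOUND`;其中的结构体与函数均为说明用的假设定义,并非项目中的实际类型。

```go
package main

import (
	"errors"
	"fmt"
)

// 以下类型为示意用的假设定义。
type Provider struct {
	ID      string
	Enabled bool
}

type Model struct {
	Name     string
	Enabled  bool
	Provider Provider
}

var errModelNotFound = errors.New("MODEL_NOT_FOUND")

// routeByModelName 按 provider_id/model_name 查找模型;
// 找不到、模型禁用或供应商禁用时一律返回 MODEL_NOT_FOUND对应 HTTP 404
func routeByModelName(db map[string]Model, providerID, modelName string) (Model, error) {
	m, ok := db[providerID+"/"+modelName]
	if !ok || !m.Enabled || !m.Provider.Enabled {
		return Model{}, errModelNotFound
	}
	return m, nil
}

func main() {
	db := map[string]Model{
		"openai/gpt-4": {Name: "gpt-4", Enabled: true, Provider: Provider{ID: "openai", Enabled: true}},
	}
	if _, err := routeByModelName(db, "openai", "gpt-5"); err != nil {
		fmt.Println("404:", err)
	}
}
```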
### Requirement: 同协议 Smart Passthrough
当客户端协议与供应商协议相同时ProxyHandler SHALL 使用 Smart Passthrough 处理 adapter 明确支持的 Chat、Embedding、Rerank 请求。
#### Scenario: 同协议非流式请求
- **WHEN** 客户端协议 == 供应商协议,且为非流式请求
- **THEN** SHALL 调用 adapter 的 `RewriteRequestModelName(body, modelName, ifaceType)` 将请求体中 model 从统一 ID 改写为上游模型名
- **THEN** SHALL 使用 providerAdapter.BuildUrl(nativePath, ifaceType) 构建上游 URL 路径
- **THEN** SHALL 使用 providerAdapter.BuildHeaders(provider) 构建 Headers
- **THEN** SHALL 发送改写后的请求体到上游
- **THEN** 若上游返回 2xxSHALL 调用 adapter 的 `RewriteResponseModelName(resp.Body, unifiedModelID, ifaceType)` 将响应中 model 从上游名改写为统一 ID
- **THEN** SHALL NOT 对 body 做 Canonical 全量 decode → encode保持未改写字段的 JSON 内容和类型不变,但不承诺保留原始字节顺序或空白
#### Scenario: 同协议流式请求
- **WHEN** 客户端协议 == 供应商协议,且为流式请求
- **THEN** SHALL 对请求体做 `RewriteRequestModelName` 改写 model 字段
- **THEN** SHALL 使用 providerAdapter.BuildUrl(nativePath, ifaceType) 构建上游 URL 路径
- **THEN** SHALL 按 raw passthrough 或 SSE frame 级 Smart Passthrough 处理响应流
- **THEN** SHALL NOT 对 chunk 做 Canonical 全量 decode → encode
#### Scenario: Smart Passthrough 保真性
- **WHEN** 客户端发送含未知参数的请求(如 `{"model":"openai/gpt-4","some_new_param":"value"}`
- **THEN** 上游 SHALL 收到 `{"model":"gpt-4","some_new_param":"value"}`
- **THEN** `some_new_param` SHALL 保持原始值不变,不丢失、不改变类型
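以下为 Smart Passthrough 请求体 model 改写的最小示意:借助 `map[string]json.RawMessage` 保持其余字段的 JSON 内容和类型不变(不保证字段顺序),与上述保真性要求一致;函数名为假设命名,非项目中 `RewriteRequestModelName` 的实际实现。

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rewriteRequestModel 将请求体中的 model 从统一 ID 改写为上游模型名,
// 其余字段以 json.RawMessage 原样保留,不做 Canonical 全量 decode → encode。
func rewriteRequestModel(body []byte, upstreamModel string) ([]byte, error) {
	var obj map[string]json.RawMessage
	if err := json.Unmarshal(body, &obj); err != nil {
		return nil, err
	}
	m, err := json.Marshal(upstreamModel)
	if err != nil {
		return nil, err
	}
	obj["model"] = m
	return json.Marshal(obj)
}

func main() {
	in := []byte(`{"model":"openai/gpt-4","some_new_param":"value","temperature":0.3}`)
	out, _ := rewriteRequestModel(in, "gpt-4")
	fmt.Println(string(out)) // some_new_param 与 temperature 原样保留
}
```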
### Requirement: 模型列表本地聚合
ProxyHandler SHALL 从数据库聚合返回模型列表,不再透传上游。
#### Scenario: 客户端协议模型列表路径
- **WHEN** 收到客户端协议的模型列表路径请求,例如 `GET /openai/models``GET /anthropic/v1/models`
- **THEN** SHALL 从数据库查询所有 enabled 的模型(关联 enabled 的供应商)
- **THEN** SHALL 组装 `CanonicalModelList`,每个模型的 ID 字段为统一模型 ID`provider_id/model_name`Name 字段为 model_nameOwnedBy 字段为 provider_id
- **THEN** SHALL 使用客户端协议的 adapter 编码响应
- **THEN** SHALL NOT 请求上游供应商
#### Scenario: 无可用模型
- **WHEN** 数据库中没有 enabled 的模型
- **THEN** SHALL 返回空列表
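以下 Go 片段示意模型列表本地聚合的组装方式;`CanonicalModelInfo` / `CanonicalModelList` 的字段按规格中的 ID / Name / OwnedBy 约定,但具体结构定义与 JSON 标签为假设。

```go
package main

import (
	"encoding/json"
	"fmt"
)

// 以下结构为说明用的假设定义。
type CanonicalModelInfo struct {
	ID      string `json:"id"`
	Name    string `json:"name"`
	OwnedBy string `json:"owned_by"`
}

type CanonicalModelList struct {
	Data []CanonicalModelInfo `json:"data"`
}

// aggregateModels 将数据库查询结果组装为统一模型 ID 列表;无可用模型时返回空列表。
func aggregateModels(rows []struct{ ProviderID, ModelName string }) CanonicalModelList {
	list := CanonicalModelList{Data: []CanonicalModelInfo{}}
	for _, r := range rows {
		list.Data = append(list.Data, CanonicalModelInfo{
			ID:      r.ProviderID + "/" + r.ModelName,
			Name:    r.ModelName,
			OwnedBy: r.ProviderID,
		})
	}
	return list
}

func main() {
	rows := []struct{ ProviderID, ModelName string }{
		{"openai", "gpt-4"},
		{"anthropic", "claude-3-5-sonnet"},
	}
	b, _ := json.Marshal(aggregateModels(rows))
	fmt.Println(string(b))
}
```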
### Requirement: 模型详情本地查询
ProxyHandler SHALL 从数据库查询返回模型详情,不再透传上游。
#### Scenario: 客户端协议模型详情路径
- **WHEN** 收到客户端协议的模型详情路径请求,例如 `GET /openai/models/{provider_id}/{model_name}``GET /anthropic/v1/models/{provider_id}/{model_name}`
- **THEN** SHALL 调用 adapter 的 `ExtractUnifiedModelID` 提取统一模型 ID
- **THEN** SHALL 解析统一模型 ID 得到 providerID 和 modelName
- **THEN** SHALL 从数据库查询对应的模型和供应商
- **THEN** SHALL 组装 `CanonicalModelInfo`ID 字段为统一模型 ID`provider_id/model_name`Name 字段为 model_nameOwnedBy 字段为 provider_id
- **THEN** SHALL 使用客户端协议的 adapter 编码响应
- **THEN** SHALL NOT 请求上游供应商
#### Scenario: 模型详情不存在
- **WHEN** 统一模型 ID 对应的模型不存在或已禁用
- **THEN** SHALL 返回应用统一错误响应,状态码为 404

View File

@@ -0,0 +1,51 @@
## 1. 路径与 Adapter 映射
- [x] 1.0 对照 docs/api_reference/openai忽略 responses 目录)和 docs/api_reference/anthropic 确认本次涉及接口的协议原生路径,并将确认结果写入对应 adapter 测试用例
- [x] 1.1 确认 OpenAI adapter 识别剥离协议前缀后的 `/chat/completions`、`/models`、`/embeddings`、`/rerank` 路径并补充测试
- [x] 1.2 确认 Anthropic adapter 识别剥离协议前缀后的 `/v1/messages`、`/v1/models`、`/v1/models/{id}` 路径并补充测试
- [x] 1.3 补充 BuildUrl 测试,确认 OpenAI `base_url=http://xxx.com/v1` 与 nativePath `/chat/completions` 组合出的上游路径为 `/v1/chat/completions`Anthropic `base_url=http://xxx.com` 与 nativePath `/v1/messages` 组合出的上游路径为 `/v1/messages`
- [x] 1.4 修改 ConversionEngine 同协议请求 URL 构建逻辑,统一使用 providerAdapter.BuildUrl(nativePath, interfaceType)
- [x] 1.5 更新 ProxyHandler 和路由测试,确认网关只剥离协议前缀,不对 nativePath 添加或移除 `/v1`
## 2. 错误响应与上游透传
- [x] 2.1 增加网关层代理错误响应工具,统一输出 `{error, code}` 和对应 HTTP 状态码
- [x] 2.2 将路由失败、JSON 解析失败、转换失败、上游不可达等网关错误改为应用统一错误格式
- [x] 2.3 在非流式 handler 中对上游非 2xx 响应绕过 ConvertHttpResponse 并透传 status、过滤后的 headers、body
- [x] 2.4 实现 hop-by-hop response header 过滤并补充单元测试
- [x] 2.5 调整 ProviderClient 或调用方错误返回,使未收到上游 HTTP 响应时可返回 `UPSTREAM_UNAVAILABLE`
## 3. 流式链路
- [x] 3.1 调整 ProviderClient 流式能力,使 handler 可区分上游 2xx 流、上游非 2xx 响应和上游不可达错误
- [x] 3.2 实现同协议无 model 改写时的 raw SSE passthrough保留 frame 边界和 `[DONE]`
- [x] 3.3 实现 SSE frame 解析与重建工具,覆盖 LF、CRLF、多 data 行和空 frame
- [x] 3.4 实现同协议流式 Smart Passthrough在 SSE data JSON 内改写 model失败时输出原 frame 并继续
- [x] 3.5 确保跨协议流式仍走 provider StreamDecoder 到 client StreamEncoder 的 CanonicalStreamConverter
## 4. 路由与能力边界
- [x] 4.1 明确 adapter 的 ExtractModelName 只支持已适配接口PASSTHROUGH 或未适配接口返回错误或空结果
- [x] 4.2 修改 ProxyHandler未知接口不做通用顶层 model 猜测,按无 model 请求走 forwardPassthrough
- [x] 4.3 保留原始模型名兼容透传逻辑,有效统一模型 ID 但路由失败时返回 `MODEL_NOT_FOUND`
- [x] 4.4 在跨协议转换中检测 image、audio、video、file 内容块并返回 `UNSUPPORTED_MULTIMODAL`
- [x] 4.5 确保同协议 Smart Passthrough 不因多模态字段存在而拒绝请求
## 5. 测试
- [x] 5.1 补充 OpenAI `/openai/chat/completions` 和 Anthropic `/anthropic/v1/messages` 的 handler、E2E 测试
- [x] 5.2 补充 nativePath 保持协议原生路径的测试,断言 OpenAI adapter 接收 `/chat/completions`、Anthropic adapter 接收 `/v1/messages`
- [x] 5.3 补充同协议流式 raw passthrough 测试,断言 SSE frame 边界和 `[DONE]` 保留
- [x] 5.4 补充同协议流式 Smart Passthrough 测试,断言响应 model 改写为统一模型 ID
- [x] 5.5 补充非流式和流式上游非 2xx 错误透传测试
- [x] 5.6 补充网关层错误统一格式测试,覆盖 `INVALID_JSON`、`MODEL_NOT_FOUND`、`CONVERSION_FAILED`、`UPSTREAM_UNAVAILABLE`
- [x] 5.7 补充未知接口不提取 model 的透传测试
- [x] 5.8 补充跨协议多模态返回 `UNSUPPORTED_MULTIMODAL` 测试
## 6. 文档与验证
- [x] 6.1 更新 README.md 的代理接口路径、错误边界和 base_url 约定说明
- [x] 6.2 更新 backend/README.md 的代理接口路径、错误边界和流式透传说明
- [x] 6.3 更新 docs/conversion_design.md使路径、三车道流式行为、错误处理和 adapter 模型提取边界与实现一致
- [x] 6.4 运行 `go test ./internal/conversion/...` 并确保通过
- [x] 6.5 运行 `go test ./tests/integration` 并确保通过