Conversation
Co-authored-by: batuhan içöz <batuhan@icoz.co>
📝 Walkthrough
Summary by CodeRabbit
Walkthrough
Adds a new pkg/ai Go runtime and bridge, provider runtime implementations (OpenAI, Anthropic, Google, Azure, Bedrock, Codex), OAuth helpers, provider/model registries, streaming/event infrastructure, connector routing to pkg/ai, many utilities, a model generator CLI, extensive unit and e2e tests, and go.mod dependency updates.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Connector
    participant Selector as RuntimeSelector
    participant Bridge as pkg/ai Bridge
    participant Registry as API Registry
    participant Provider as Provider Runtime
    participant SDK as Provider SDK
    participant Stream as EventStream
    Client->>Connector: Send Generate/Stream request
    Connector->>Selector: chooseStreamingRuntimePath(...)
    alt pkg/ai path
        Connector->>Bridge: buildPkgAIBridgeGenerateParams + Stream
        Bridge->>Registry: ResolveAPIProvider(api)
        Registry-->>Bridge: APIProvider (Stream fn)
        Bridge->>Provider: invoke provider Stream
        Provider->>SDK: call provider SDK
        SDK-->>Provider: stream deltas/results
        loop events
            Provider->>Stream: Push delta events
            Stream-->>Bridge: Next()
            Bridge-->>Connector: mapped events
            Connector-->>Client: forward events
        end
        Provider->>Stream: Push Done/Error
        Stream-->>Bridge: Result
        Bridge-->>Connector: Final response
    else fallback
        Connector->>Selector: fallback to chat-completions path
        Selector-->>Connector: local streaming path
    end
```

```mermaid
sequenceDiagram
    participant Selector
    participant Connector
    participant Bridge as pkg/ai Bridge
    participant Fallback as Local ChatCompletions
    Connector->>Selector: chooseStreamingRuntimePath(hasAudio, api, enabled)
    alt pkg/ai enabled & eligible
        Selector->>Bridge: streamWithPkgAIBridge()
        Bridge-->>Connector: handled by pkg/ai
    else fallback
        Selector->>Fallback: streamChatCompletions()
        Fallback-->>Connector: handled by local implementation
    end
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Actionable comments posted: 5
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (22)
pkg/ai/utils/type_helpers.go-24-25 (1)
24-25: ⚠️ Potential issue | 🟠 Major
Guard `default` so it is part of `enum`.
Line 24 currently includes `default` unconditionally when non-empty, which can produce an inconsistent schema (`default` value not listed in `enum`).
Proposed fix
```diff
 func StringEnum(values []string, description string, defaultValue string) map[string]any {
 	schema := StringEnumSchema{
 		Type:        "string",
 		Enum:        append([]string(nil), values...),
 		Description: description,
 		Default:     defaultValue,
 	}
 	out := map[string]any{
 		"type": "string",
 		"enum": schema.Enum,
 	}
 	if schema.Description != "" {
 		out["description"] = schema.Description
 	}
-	if schema.Default != "" {
-		out["default"] = schema.Default
+	if schema.Default != "" {
+		for _, v := range schema.Enum {
+			if v == schema.Default {
+				out["default"] = schema.Default
+				break
+			}
+		}
 	}
 	return out
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/utils/type_helpers.go` around lines 24 - 25, The current code unconditionally writes out["default"] = schema.Default when schema.Default is non-empty, which can create an inconsistent schema if schema.Enum exists but does not include that default; change the check so the default is only added when schema.Default is non-empty AND either schema.Enum is empty/nil or schema.Enum contains schema.Default. Update the block around schema.Default/out to perform a membership check (e.g., loop over schema.Enum or use a small helper containsEnum) and only set out["default"] when that condition is satisfied.pkg/ai/oauth/google_gemini_cli.go-3-7 (1)
3-7: ⚠️ Potential issue | 🟠 Major
Move the Gemini client secret out of source code.
Line 11 hardcodes a secret-like OAuth client value. This is a security/compliance risk and triggers secret scanning.
🔐 Suggested patch
```diff
 import (
 	"encoding/json"
 	"net/url"
+	"os"
 	"strings"
 )

 const (
 	geminiCliClientID     = "681255809395-oo8ft2oprdrnp9e3aqf6av3hmdib135j.apps.googleusercontent.com"
-	geminiCliClientSecret = "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"
+	geminiCliClientSecretEnv = "GEMINI_CLI_CLIENT_SECRET"
 	geminiCliRedirectURI  = "http://localhost:8085/oauth2callback"
 	geminiCliAuthURL      = "https://accounts.google.com/o/oauth2/v2/auth"
 	geminiCliTokenURL     = "https://oauth2.googleapis.com/token"
 )
@@
 func GeminiCliClientSecret() string {
-	return geminiCliClientSecret
+	return strings.TrimSpace(os.Getenv(geminiCliClientSecretEnv))
 }
```

Also applies to: 10-15, 27-29
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/oauth/google_gemini_cli.go` around lines 3 - 7, The file contains a hardcoded OAuth client secret; remove the literal and instead load it from a secure source (e.g., environment variable or secret manager) and fail fast if not present. Replace the hardcoded value used by the Google/Gemini OAuth client (the constant/variable holding the client secret referenced in pkg/ai/oauth/google_gemini_cli.go and any uses around the earlier ranges) with a lookup like os.Getenv("GEMINI_OAUTH_CLIENT_SECRET") (or call to your secret provider) and update any functions that construct the OAuth config to accept the secret as a parameter or return an error when the env/secret is empty so callers fail explicitly. Ensure all occurrences (the hardcoded declaration and the places that use it) are updated to use the new secret retrieval and add a clear error message if the secret is missing.pkg/ai/providers/anthropic_runtime_test.go-97-123 (1)
97-123: ⚠️ Potential issue | 🟠 Major
Replace token-shaped test literals to avoid secret-scan CI failures.
Line 97 and Line 120 use token-looking inline strings that are already flagged by scanning. Even test-only values can fail security gates.
🔧 Suggested patch
```diff
 func TestBuildAnthropicClientConfig_CopilotAuthAndHeaders(t *testing.T) {
+	const testCopilotSessionID = "copilot-session-test"
 	model := ai.Model{
 		ID:       "claude-sonnet-4",
 		Provider: "github-copilot",
@@
-	config := buildAnthropicClientConfig(model, ctx, "tid_copilot_session_test_token", "", nil)
+	config := buildAnthropicClientConfig(model, ctx, testCopilotSessionID, "", nil)
@@
 func TestBuildAnthropicClientConfig_CopilotInterleavedThinkingHeader(t *testing.T) {
+	const testCopilotSessionID = "copilot-session-test"
 	model := ai.Model{Provider: "github-copilot"}
 	config := buildAnthropicClientConfig(
 		model,
 		ai.Context{Messages: []ai.Message{{Role: ai.RoleUser, Text: "hello"}}},
-		"tid_copilot_session_test_token",
+		testCopilotSessionID,
 		"interleaved-thinking-2025-05-14",
 		nil,
 	)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/anthropic_runtime_test.go` around lines 97 - 123, The test uses token-shaped string literals (e.g., "tid_copilot_session_test_token") in TestBuildAnthropicClientConfig_* which triggers secret-scan failures; update the test to use non-secret placeholders or generated safe values instead (for example a constant like testSessionToken or a call to a helper that returns "TEST_TOKEN"), replacing occurrences passed into buildAnthropicClientConfig and any other token-like inline strings so they no longer resemble real credentials while preserving test behavior and assertions.pkg/ai/e2e/cross_provider_handoff_e2e_test.go-77-83 (1)
77-83: ⚠️ Potential issue | 🟠 Major
Bound both cross-provider completions with context deadlines.
Lines 77 and 163 create external completion calls without `StreamOptions.Ctx`; a provider stall can hang the suite.
⏱️ Proposed fix

```diff
 import (
+	"context"
 	"os"
 	"strings"
 	"testing"
 	"time"
@@
 	model := openAIReasoningSourceModel()
 	providers.ResetAPIProviders()
+	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
+	defer cancel()
@@
 	response, err := ai.CompleteSimple(model, context, &ai.SimpleStreamOptions{
 		StreamOptions: ai.StreamOptions{
 			APIKey:    apiKey,
 			MaxTokens: 256,
+			Ctx:       ctx,
 		},
 		Reasoning: ai.ThinkingHigh,
 	})
@@
 	model := anthropicE2EModel()
 	providers.ResetAPIProviders()
+	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
+	defer cancel()
@@
 	response, err := ai.CompleteSimple(model, context, &ai.SimpleStreamOptions{
 		StreamOptions: ai.StreamOptions{
 			APIKey:    apiKey,
 			MaxTokens: 256,
+			Ctx:       ctx,
 		},
 		Reasoning: ai.ThinkingHigh,
 	})
```

Also applies to: 163-169
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/e2e/cross_provider_handoff_e2e_test.go` around lines 77 - 83, The call to ai.CompleteSimple is missing a context on the streaming options so a stalled provider can hang the test; update both ai.CompleteSimple invocations to set StreamOptions.Ctx to the test context (i.e. add StreamOptions{Ctx: context, APIKey: apiKey, MaxTokens: 256}) inside ai.SimpleStreamOptions so the external completion calls are bound to the test deadline/cancellation; key symbols to change: ai.CompleteSimple, ai.SimpleStreamOptions, and the embedded StreamOptions.Ctx.pkg/ai/e2e/stream_test.go-24-33 (1)
24-33: ⚠️ Potential issue | 🟠 Major
Add a timeout to the non-streaming E2E call.
Line 24 invokes `ai.Complete` without `StreamOptions.Ctx`; if the provider stalls, this test can block indefinitely.
⏱️ Proposed fix

```diff
 func TestGenerateE2E_OpenAIComplete(t *testing.T) {
 	requirePIAIE2E(t)
@@
 	model := openAIE2EModel()
 	providers.ResetAPIProviders()
+	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
+	defer cancel()

 	response, err := ai.Complete(model, ai.Context{
@@
-	}, &ai.StreamOptions{APIKey: apiKey})
+	}, &ai.StreamOptions{
+		APIKey: apiKey,
+		Ctx:    ctx,
+	})
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/e2e/stream_test.go` around lines 24 - 33, The ai.Complete call in the test lacks a context with timeout on the StreamOptions, which can hang if the provider stalls; create a context with a short timeout (e.g., context.WithTimeout) and pass it as StreamOptions.Ctx when calling ai.Complete, ensure you cancel the context (defer cancel()) so the test doesn't leak goroutines, and update the invocation that constructs &ai.StreamOptions{APIKey: apiKey} to include the new Ctx field.pkg/ai/providers/amazon_bedrock.go-28-33 (1)
28-33: ⚠️ Potential issue | 🟠 Major
Omit unset inference config fields from Bedrock payloads.
Line 32 unconditionally includes `temperature` and `maxTokens` in the `inferenceConfig`. With unset options, `temperature` can serialize as `null` (invalid for AWS Bedrock) and `maxTokens` can be `0` (below the required minimum of 1). Per the AWS Bedrock Converse API specification, `inferenceConfig` is entirely optional and fields within it must be numeric types when present. Build `inferenceConfig` conditionally and include only valid, non-zero values; omit the entire `inferenceConfig` if no valid parameters are set.
🛠️ Proposed fix

```diff
 func BuildBedrockConverseInput(model ai.Model, context ai.Context, options BedrockOptions) map[string]any {
 	cacheRetention := ResolveBedrockCacheRetention(options.StreamOptions.CacheRetention)
-	input := map[string]any{
-		"modelId":         model.ID,
-		"messages":        ConvertBedrockMessages(context, model, cacheRetention),
-		"system":          BuildBedrockSystemPrompt(context.SystemPrompt, model, cacheRetention),
-		"inferenceConfig": map[string]any{"maxTokens": options.StreamOptions.MaxTokens, "temperature": options.StreamOptions.Temperature},
-	}
+	input := map[string]any{
+		"modelId":  model.ID,
+		"messages": ConvertBedrockMessages(context, model, cacheRetention),
+	}
+	if system := BuildBedrockSystemPrompt(context.SystemPrompt, model, cacheRetention); len(system) > 0 {
+		input["system"] = system
+	}
+	inference := map[string]any{}
+	if options.StreamOptions.MaxTokens > 0 {
+		inference["maxTokens"] = options.StreamOptions.MaxTokens
+	}
+	if options.StreamOptions.Temperature != nil {
+		inference["temperature"] = *options.StreamOptions.Temperature
+	}
+	if len(inference) > 0 {
+		input["inferenceConfig"] = inference
+	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/amazon_bedrock.go` around lines 28 - 33, The current construction of the Bedrock payload unconditionally sets "inferenceConfig" with possibly invalid values; update the code that builds the input map (the block that sets "input" using model.ID, ConvertBedrockMessages(...), and BuildBedrockSystemPrompt(...)) to first construct an inferenceConfig map only when valid numeric params exist: add "temperature" only if options.StreamOptions.Temperature is non-nil and a valid number, add "maxTokens" only if options.StreamOptions.MaxTokens >= 1, and if that map is empty omit the "inferenceConfig" key entirely from the input; ensure the final input map uses this conditional inferenceConfig so no null or zero values are sent to Bedrock.pkg/connector/ai_adapter.go-33-35 (1)
33-35: ⚠️ Potential issue | 🟠 Major
Fix data-URL parsing for image payloads.
Lines 33-35 extract `Data` incorrectly for `data:` URLs by trimming only `"data:"`; this leaves the metadata prefix in the payload. Extract the segment after the first comma (and optionally infer the mime type when missing).
🐛 Proposed fix

```diff
 	case ContentTypeImage:
 		data := part.ImageB64
 		if data == "" && strings.HasPrefix(part.ImageURL, "data:") {
-			data = strings.TrimPrefix(part.ImageURL, "data:")
+			if comma := strings.Index(part.ImageURL, ","); comma >= 0 && comma+1 < len(part.ImageURL) {
+				data = part.ImageURL[comma+1:]
+			}
 		}
 		blocks = append(blocks, aipkg.ContentBlock{
 			Type:     aipkg.ContentTypeImage,
 			Data:     data,
 			MimeType: part.MimeType,
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/connector/ai_adapter.go` around lines 33 - 35, The code that extracts image payloads from part.ImageURL incorrectly trims only the "data:" prefix (leaving media metadata like "image/png;base64,"); update the logic around the data variable so that when part.ImageURL starts with "data:" you split on the first comma and set data to the substring after that comma (and separately capture/infer the MIME type from the metadata segment if needed); modify the block that references part.ImageURL and data (the code around variables part.ImageURL and data in ai_adapter.go) to use strings.IndexByte or strings.SplitN to get the payload after the comma and populate any missing mime-type information when the metadata lacks it.pkg/ai/e2e/context_overflow_test.go-29-41 (1)
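As a standalone illustration of the comma-split approach the patch above takes, here is a minimal sketch; the `parseDataURL` helper name is hypothetical and not part of the PR's code:

```go
package main

import (
	"fmt"
	"strings"
)

// parseDataURL splits a data: URL into its mime type and payload.
// For "data:image/png;base64,AAAA" it returns "image/png" and "AAAA".
func parseDataURL(raw string) (mime string, data string, ok bool) {
	if !strings.HasPrefix(raw, "data:") {
		return "", "", false
	}
	comma := strings.Index(raw, ",")
	if comma < 0 {
		return "", "", false
	}
	meta := strings.TrimPrefix(raw[:comma], "data:")
	// Strip encoding markers such as ";base64" from the metadata segment.
	if semi := strings.Index(meta, ";"); semi >= 0 {
		meta = meta[:semi]
	}
	return meta, raw[comma+1:], true
}

func main() {
	mime, data, ok := parseDataURL("data:image/png;base64,AAAA")
	fmt.Println(mime, data, ok) // image/png AAAA true
}
```

The key point mirrored from the review: everything before the first comma is metadata, not payload, so trimming only the `data:` scheme leaves `image/png;base64,` inside the extracted data.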
29-41: ⚠️ Potential issue | 🟠 Major
Bound this e2e request with an explicit timeout path.
Line 29 uses `ai.Complete(...)`, which can block longer than intended on network/provider stalls. For e2e, prefer an explicit `ai.Stream(...)` + `Next(ctx)` loop with a timeout context so the test always terminates predictably.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/e2e/context_overflow_test.go` around lines 29 - 41, The test currently calls ai.Complete(model, ai.Context{...}, &ai.StreamOptions{...}) which can block; replace this with ai.Stream(model, ai.Context{...}, &ai.StreamOptions{...}) and create a context with a timeout (e.g., ctx, cancel := context.WithTimeout(context.Background(), timeout)); use a loop calling stream.Next(ctx) to read tokens/events until io.EOF or error, and ensure you call cancel() and stream.Close() in a defer so the test always terminates; handle and assert errors from Next(ctx) (including context.DeadlineExceeded) and collect the streamed response for assertions instead of using ai.Complete.pkg/ai/providers/register_builtins_test.go-32-33 (1)
32-33: ⚠️ Potential issue | 🟠 Major
Do not reuse a single short-deadline context across all provider checks.
Line 32 creates one `time.Second` context that is reused for every subsequent `Next` call; later checks can fail from `context deadline exceeded` rather than provider behavior. Create a fresh timeout context per stream assertion.
Also applies to: 59-60, 78-79, 97-98, 116-117, 135-136, 154-155, 173-174
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/register_builtins_test.go` around lines 32 - 33, The test currently creates a single short-lived context via ctx, cancel := context.WithTimeout(context.Background(), time.Second) and reuses it for multiple stream.Next assertions, causing spurious context deadline errors; update the test so each stream assertion creates and defers cancellation of its own context (call context.WithTimeout for each Next invocation) instead of reusing the shared ctx/cancel—locate the initial context creation and every subsequent stream.Next usage in register_builtins_test.go (the ctx, cancel variables and Next calls around lines where timeouts are set) and replace them with per-assertion contexts.pkg/ai/providers/register_builtins_test.go-15-15 (1)
15-15: ⚠️ Potential issue | 🟠 Major
Cleanup should restore built-ins, not leave the registry empty.
Line 15 uses `ai.ClearAPIProviders` as cleanup, which can break later tests that expect registered providers. Restore to a known good state instead.
Suggested change

```diff
-	t.Cleanup(ai.ClearAPIProviders)
+	t.Cleanup(ResetAPIProviders)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/register_builtins_test.go` at line 15, The test currently calls ai.ClearAPIProviders in cleanup which empties the global registry; instead capture the current provider registry before the test and restore it in cleanup. Replace t.Cleanup(ai.ClearAPIProviders) with code that snapshots the registry (e.g., via a hypothetical ai.ListAPIProviders or ai.GetAPIProviders) and registers a cleanup that calls ai.SetAPIProviders(snapshot) or re-registers each provider (using ai.RegisterAPIProvider) to restore the original built-ins after the test.pkg/ai/utils/validation.go-95-109 (1)
95-109: ⚠️ Potential issue | 🟠 Major
Integer coercion currently accepts invalid values and can overflow.
`integer` coercion silently truncates fractional floats and may overflow on large `uint64` values. Validation should reject both cases instead of coercing them.
Diff
```diff
 	case "integer":
 		switch v := value.(type) {
-		case int, int64, int32, uint64, uint32:
+		case int, int64, int32, uint32:
 			return toInt64(v), nil
+		case uint64:
+			if v > uint64(1<<63-1) {
+				return nil, fmt.Errorf("expected integer, got %T", value)
+			}
+			return int64(v), nil
 		case float64:
+			if v != float64(int64(v)) {
+				return nil, fmt.Errorf("expected integer, got %T", value)
+			}
 			return int64(v), nil
+		case float32:
+			if v != float32(int64(v)) {
+				return nil, fmt.Errorf("expected integer, got %T", value)
+			}
+			return int64(v), nil
 		case string:
 			i, err := strconv.ParseInt(strings.TrimSpace(v), 10, 64)
 			if err != nil {
 				return nil, fmt.Errorf("expected integer, got %T", value)
 			}
 			return i, nil
 		default:
 			return nil, fmt.Errorf("expected integer, got %T", value)
 		}
@@
 func toInt64(v any) int64 {
 	switch n := v.(type) {
 	case int:
 		return int64(n)
 	case int64:
 		return n
 	case int32:
 		return int64(n)
-	case uint64:
-		return int64(n)
 	case uint32:
 		return int64(n)
 	default:
 		return 0
 	}
 }
```

Also applies to: 159-173
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/utils/validation.go` around lines 95 - 109, The "integer" coercion currently truncates floats and can overflow unsigned integers; update the case "integer" block to (1) reject float64 values that are not whole numbers (use a whole-number check, e.g., value == math.Trunc(value)) and return an error instead of truncating, (2) when seeing unsigned types (uint64, uint32) or other numeric types, explicitly check bounds against math.MaxInt64 and math.MinInt64 before converting and return an overflow error if out of range, (3) keep string parsing via strconv.ParseInt but ensure the parsed value fits in int64 and return a clear error on failure, and (4) make the same changes to the other identical integer coercion block referenced (the second occurrence) so both use the same validation logic; locate these changes around the switch case "integer" and the helper toInt64 usage to implement the checks.pkg/connector/pkg_ai_provider_bridge_e2e_test.go-19-25 (1)
19-25: ⚠️ Potential issue | 🟠 Major
Add explicit timeouts to completion E2E calls.
Line 19, Line 45, and Line 71 pass `context.Background()` to live provider calls. These should be bounded to avoid hanging E2E runs.
⏱️ Suggested fix pattern (apply to all three completion tests)

```diff
-	resp, handled, err := tryGenerateWithPkgAI(context.Background(), "", apiKey, GenerateParams{
+	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
+	defer cancel()
+	resp, handled, err := tryGenerateWithPkgAI(ctx, "", apiKey, GenerateParams{
 		Model: model,
 		Messages: []UnifiedMessage{
 			{Role: RoleUser, Content: []ContentPart{{Type: ContentTypeText, Text: "Reply with the single word OK."}}},
 		},
 		MaxCompletionTokens: 128,
 	})
```

Also applies to: 45-51, 71-77
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/connector/pkg_ai_provider_bridge_e2e_test.go` around lines 19 - 25, Replace unbounded contexts passed to tryGenerateWithPkgAI in the E2E tests with bounded contexts: create a context with timeout (e.g., ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)) before calling tryGenerateWithPkgAI, pass ctx instead of context.Background(), and defer cancel() so the call cannot hang; apply the same change for the three occurrences that call tryGenerateWithPkgAI in pkg_ai_provider_bridge_e2e_test.go (the test blocks around the calls starting at the lines where tryGenerateWithPkgAI is invoked).pkg/ai/providers/transform_messages.go-78-148 (1)
78-148: ⚠️ Potential issue | 🟠 Major
Flush pending tool calls before final return.
Line 147 returns without a final pending flush. If the last message is an assistant tool-call turn, orphaned tool results are skipped.
🧩 Suggested fix (final pending flush)
```diff
 	for _, msg := range transformed {
 		switch msg.Role {
@@
 		}
 	}
+
+	if len(pendingToolCalls) > 0 {
+		for _, tc := range pendingToolCalls {
+			if _, ok := existingToolResultIDs[tc.ID]; ok {
+				continue
+			}
+			result = append(result, ai.Message{
+				Role:       ai.RoleToolResult,
+				ToolCallID: tc.ID,
+				ToolName:   tc.Name,
+				Content: []ai.ContentBlock{{
+					Type: ai.ContentTypeText,
+					Text: "No result provided",
+				}},
+				IsError:   true,
+				Timestamp: time.Now().UnixMilli(),
+			})
+		}
+	}
 	return result
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/transform_messages.go` around lines 78 - 148, The loop over transformed messages builds result but never flushes pendingToolCalls before the final return, so orphaned tool-call blocks at the end are omitted; after the for-loop and before returning result, iterate pendingToolCalls and for each tc not in existingToolResultIDs append an ai.Message with Role ai.RoleToolResult, ToolCallID tc.ID, ToolName tc.Name, a Content block with Type ai.ContentTypeText and Text "No result provided", IsError true and Timestamp time.Now().UnixMilli(), mirroring the flush logic used inside the ai.RoleAssistant and ai.RoleUser branches (use the same checks against existingToolResultIDs and append to result).pkg/ai/providers/google.go-52-54 (1)
52-54: ⚠️ Potential issue | 🟠 Major
Fix thinking level normalization to match runtime clamping behavior.
Line 53 uppercases the thinking level directly (e.g., `xhigh` → `XHIGH`), but the runtime mapping function `mapThinkingLevelToGenAI` clamps `xhigh` to `high`. This divergence means `BuildGoogleGenerateContentParams` forwards an invalid level that doesn't match the runtime's normalized values.
Apply the normalization and clamping logic that the runtime uses:
Suggested fix

```diff
 	if strings.TrimSpace(options.Thinking.Level) != "" {
-		thinkingConfig["thinkingLevel"] = strings.ToUpper(strings.TrimSpace(options.Thinking.Level))
+		level := strings.ToLower(strings.TrimSpace(options.Thinking.Level))
+		if level == "xhigh" {
+			level = "high"
+		}
+		thinkingConfig["thinkingLevel"] = strings.ToUpper(level)
 	} else if options.Thinking.BudgetTokens != nil {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/google.go` around lines 52 - 54, The thinking level is being uppercased in BuildGoogleGenerateContentParams (setting thinkingConfig["thinkingLevel"] = strings.ToUpper(strings.TrimSpace(options.Thinking.Level))) which diverges from the runtime clamp used by mapThinkingLevelToGenAI; change the logic to trim and normalize the input using the same clamping/mapping used at runtime (call or replicate mapThinkingLevelToGenAI behavior) before assigning to thinkingConfig["thinkingLevel"] so values like "xhigh" become "high" (and use the normalized casing expected by the runtime) instead of simply uppercasing the raw input.pkg/ai/stream.go-25-25 (1)
25-25: ⚠️ Potential issue | 🟠 Major
Use caller context in `Complete`/`CompleteSimple` loops.
Both loops call `s.Next(context.Background())`, which drops `options.Ctx` cancellation/deadline and can keep completion running after the caller times out.
✅ Proposed fix

```diff
 func Complete(model Model, c Context, options *StreamOptions) (Message, error) {
 	s, err := Stream(model, c, options)
 	if err != nil {
 		return Message{}, err
 	}
+	runCtx := context.Background()
+	if options != nil && options.Ctx != nil {
+		runCtx = options.Ctx
+	}
 	for {
-		_, nextErr := s.Next(context.Background())
+		_, nextErr := s.Next(runCtx)
 		if nextErr != nil {
 			break
 		}
 	}
 	return s.Result()
 }
@@
 func CompleteSimple(model Model, c Context, options *SimpleStreamOptions) (Message, error) {
 	s, err := StreamSimple(model, c, options)
 	if err != nil {
 		return Message{}, err
 	}
+	runCtx := context.Background()
+	if options != nil && options.Ctx != nil {
+		runCtx = options.Ctx
+	}
 	for {
-		_, nextErr := s.Next(context.Background())
+		_, nextErr := s.Next(runCtx)
 		if nextErr != nil {
 			break
 		}
 	}
 	return s.Result()
 }
```

Also applies to: 50-50
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/stream.go` at line 25, The loops in Complete and CompleteSimple call s.Next(context.Background()), which ignores the caller's cancellation; change those calls to use the provided caller context (e.g., options.Ctx or the ctx parameter passed into Complete/CompleteSimple) so s.Next(...) receives the original context and respects timeouts/cancellation; locate the s.Next(...) invocations in Complete and CompleteSimple and replace context.Background() with the caller context variable.pkg/connector/pkg_ai_provider_bridge.go-169-171 (1)
169-171: ⚠️ Potential issue | 🟠 Major
Don’t silently fallback on every stream setup error.
This currently falls back for all `aipkg.Stream*` creation errors, including real runtime failures. That masks hard errors and makes stream behavior inconsistent with `tryGenerateWithPkgAI`.
✅ Proposed fix

```diff
 	if err != nil {
-		return nil, false
+		if shouldFallbackFromPkgAIError(err) {
+			return nil, false
+		}
+		out := make(chan StreamEvent, 1)
+		out <- StreamEvent{Type: StreamEventError, Error: err}
+		close(out)
+		return out, true
 	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/connector/pkg_ai_provider_bridge.go` around lines 169 - 171, The code currently swallows every error from aipkg.Stream* creation by returning nil, false; instead, only fallback for the known non-fatal "stream not supported" case and propagate real runtime errors to the caller like tryGenerateWithPkgAI does. Update the block that handles errors from aipkg.Stream* (the creation calls) to check the error value: if it matches the explicit non-fatal sentinel (e.g., aipkg.ErrStreamNotSupported or equivalent) then return nil, false to allow fallback; otherwise return/propagate the actual error (wrap with context if needed) so upstream can handle hard failures. Ensure you reference the same error handling pattern used in tryGenerateWithPkgAI for consistency.pkg/ai/oauth/openai_codex.go-56-59 (1)
56-59: ⚠️ Potential issue | 🟠 Major
Fix fragment parsing for OAuth callback inputs.
For inputs like `http://localhost/callback#code=abc&state=xyz`, the current logic returns the whole URL as `code` and the full fragment as `state`, which breaks auth completion.
✅ Proposed fix

```diff
 func ParseOpenAICodexAuthorizationInput(input string) (code string, state string) {
 	value := strings.TrimSpace(input)
 	if value == "" {
 		return "", ""
 	}
 	if parsed, err := url.Parse(value); err == nil {
 		query := parsed.Query()
 		if query.Get("code") != "" || query.Get("state") != "" {
 			return strings.TrimSpace(query.Get("code")), strings.TrimSpace(query.Get("state"))
 		}
+		if strings.TrimSpace(parsed.Fragment) != "" {
+			if fragmentQuery, fragErr := url.ParseQuery(strings.TrimPrefix(parsed.Fragment, "?")); fragErr == nil {
+				if fragmentQuery.Get("code") != "" || fragmentQuery.Get("state") != "" {
+					return strings.TrimSpace(fragmentQuery.Get("code")), strings.TrimSpace(fragmentQuery.Get("state"))
+				}
+			}
+		}
 	}
 	if strings.Contains(value, "#") {
 		parts := strings.SplitN(value, "#", 2)
 		return strings.TrimSpace(parts[0]), strings.TrimSpace(parts[1])
 	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/oauth/openai_codex.go` around lines 56 - 59, The current fragment-handling branch (strings.Contains + strings.SplitN producing parts[0]/parts[1]) treats the whole URL and fragment as code/state, breaking OAuth extraction; change it to split on '#' and then parse the fragment string (the part after '#') using url.ParseQuery (or url.Parse + Query) to extract the "code" and "state" parameters explicitly, returning the parsed code and state values (falling back to trimmed fragment or original behavior if those keys are absent). Update the branch that uses strings.Contains, strings.SplitN and parts to use the parsed query values instead.pkg/ai/providers/amazon_bedrock_runtime.go-41-47 (1)
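The fragment-parsing technique in the patch can be demonstrated standalone with just the standard library; the callback URL below is a made-up example:

```go
package main

import (
	"fmt"
	"net/url"
)

// extractFragmentParams parses OAuth code/state out of a URL fragment,
// e.g. "http://localhost/callback#code=abc&state=xyz".
func extractFragmentParams(raw string) (code, state string, err error) {
	parsed, err := url.Parse(raw)
	if err != nil {
		return "", "", err
	}
	// Fragments are not query strings, so url.Parse leaves them unparsed;
	// run the fragment through ParseQuery explicitly.
	values, err := url.ParseQuery(parsed.Fragment)
	if err != nil {
		return "", "", err
	}
	return values.Get("code"), values.Get("state"), nil
}

func main() {
	code, state, err := extractFragmentParams("http://localhost/callback#code=abc&state=xyz")
	fmt.Println(code, state, err) // abc xyz <nil>
}
```

This is the same `url.ParseQuery`-on-`Fragment` move the proposed fix makes, shown in isolation.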
41-47: ⚠️ Potential issue | 🟠 Major

OnPayload currently cannot influence the request that is sent. The hook receives one payload object, but the request uses a separately-built object, so hook edits are dropped.
✅ Proposed fix
-	payload := BuildBedrockConverseInput(model, c, BedrockOptions{StreamOptions: streamOptions})
-	if streamOptions.OnPayload != nil {
-		streamOptions.OnPayload(payload)
-	}
 	input := buildBedrockConverseInput(model, c, streamOptions)
+	if streamOptions.OnPayload != nil {
+		streamOptions.OnPayload(input)
+	}
 	resp, err := client.Converse(runCtx, input)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/amazon_bedrock_runtime.go` around lines 41 - 47, The OnPayload hook is called with a payload built by BuildBedrockConverseInput but the request is sent using a separately-built object from buildBedrockConverseInput, so any mutations by streamOptions.OnPayload are lost; fix by building the payload once, call streamOptions.OnPayload(payload) if present, then use that same payload to create or convert to the input passed to client.Converse (e.g., replace the second call to buildBedrockConverseInput with a conversion from the already-built payload or change buildBedrockConverseInput to accept the payload), ensuring the symbols BuildBedrockConverseInput, streamOptions.OnPayload and client.Converse are updated to use the mutated payload.

pkg/ai/providers/openai_responses.go-85-94 (1)
85-94: ⚠️ Potential issue | 🟠 Major

Remove hardcoded developer prompt injection (# Juice: 0 !important). This hidden instruction is not a stable API control and can unpredictably bias model outputs.
✅ Proposed fix
-	} else if strings.HasPrefix(strings.ToLower(model.Name), "gpt-5") {
-		messages = append(messages, map[string]any{
-			"role": "developer",
-			"content": []map[string]any{{
-				"type": "input_text",
-				"text": "# Juice: 0 !important",
-			}},
-		})
-		params["input"] = messages
 	}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/openai_responses.go` around lines 85 - 94, Remove the hardcoded developer prompt injection introduced in the gpt-5 branch: delete the block inside the strings.HasPrefix(strings.ToLower(model.Name), "gpt-5") condition that appends a developer-role message containing "# Juice: 0 !important" to the messages slice and the subsequent params["input"] = messages line; instead leave messages and params unchanged (or only set params using validated, explicit inputs) so the function that builds messages using model.Name, messages, and params does not inject hidden instructions.

pkg/ai/providers/azure_openai_responses.go-141-148 (1)
141-148: ⚠️ Potential issue | 🟠 Major

Remove hardcoded developer prompt injection (# Juice: 0 !important). This introduces hidden behavioral bias instead of explicit, typed request controls.
✅ Proposed fix
-	} else if strings.HasPrefix(strings.ToLower(model.Name), "gpt-5") {
-		params["input"] = append(messages, map[string]any{
-			"role": "developer",
-			"content": []map[string]any{{
-				"type": "input_text",
-				"text": "# Juice: 0 !important",
-			}},
-		})
 	}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/azure_openai_responses.go` around lines 141 - 148, The code in azure_openai_responses.go injects a hardcoded developer prompt when model.Name starts with "gpt-5" by appending to params["input"] a map with role "developer" and content type "input_text" containing "# Juice: 0 !important"; remove this hidden prompt injection and either stop appending that map entirely or replace it with an explicit, configurable control (e.g., a typed parameter or a clearly named variable) that callers can set; update the logic around the model.Name check and params["input"] handling so no hardcoded behavioral bias remains and any request controls are explicit and documented.

pkg/ai/providers/azure_openai_responses.go-111-116 (1)
111-116: ⚠️ Potential issue | 🟠 Major

Avoid sending empty prompt_cache_key. prompt_cache_key should only be included when a non-empty session ID is available.

✅ Proposed fix
 	params := map[string]any{
 		"model":  deploymentName,
 		"input":  messages,
 		"stream": true,
-		"prompt_cache_key": options.StreamOptions.SessionID,
 	}
+	if strings.TrimSpace(options.StreamOptions.SessionID) != "" {
+		params["prompt_cache_key"] = options.StreamOptions.SessionID
+	}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/azure_openai_responses.go` around lines 111 - 116, When building the request params map in azure_openai_responses.go (the params variable that currently includes "prompt_cache_key": options.StreamOptions.SessionID), only add the "prompt_cache_key" entry if options.StreamOptions.SessionID is non-empty; modify the code that constructs params to conditionally set that key (e.g., check options.StreamOptions.SessionID != "" before adding to params) so empty session IDs are not sent to the API.

pkg/ai/providers/amazon_bedrock_runtime.go-30-33 (1)
30-33: ⚠️ Potential issue | 🟠 Major

Remove the env-key gate; it blocks EC2 instance metadata and other valid AWS credential sources.
The GetEnvAPIKey("amazon-bedrock") check at line 30 only validates a subset of credential sources (env vars, AWS_PROFILE, ECS). It does not check for EC2 instance metadata IAM role credentials or credentials resolved directly from shared files, both of which config.LoadDefaultConfig() can resolve. This blocks valid authentication paths in EC2 and other managed environments.

Proposed fix
-	if ai.GetEnvAPIKey("amazon-bedrock") == "" {
-		pushProviderError(stream, model, "missing AWS credentials for Amazon Bedrock runtime")
-		return
-	}
 	cfg, err := loadBedrockAWSConfig(runCtx)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/amazon_bedrock_runtime.go` around lines 30 - 33, Remove the restrictive env-key gate that calls GetEnvAPIKey("amazon-bedrock"); instead, rely on AWS SDK's credential resolution (config.LoadDefaultConfig) to discover credentials from EC2 instance metadata, shared files, env, ECS, etc.; so delete the if block that calls pushProviderError for missing AWS credentials and instead attempt to call config.LoadDefaultConfig in the Amazon Bedrock runtime initialization path (where GetEnvAPIKey was checked) and only call pushProviderError(stream, model, ...) if config.LoadDefaultConfig returns an error.
🟡 Minor comments (9)
pkg/ai/e2e/image_tool_result_e2e_test.go-26-31 (1)
26-31: ⚠️ Potential issue | 🟡 Minor

Assertion is too permissive for mixed text+image validation. The && !contains("red") fallback allows "text-and-image-tool-result" to pass even when text content (e.g., "pixel") is ignored, reducing parity confidence.

Suggested patch
 	testCases := []struct {
 		name       string
 		toolResult []ai.ContentBlock
 		prompt     string
-		expectKeyword string
+		requiredKeywords []string
 	}{
 		{
 			name: "image-only-tool-result",
 			toolResult: []ai.ContentBlock{
 				{Type: ai.ContentTypeImage, Data: redPixelBase64, MimeType: "image/png"},
 			},
-			prompt:        "Describe what you see in the tool result image. Mention the color.",
-			expectKeyword: "red",
+			prompt:           "Describe what you see in the tool result image. Mention the color.",
+			requiredKeywords: []string{"red"},
 		},
 		{
 			name: "text-and-image-tool-result",
 			toolResult: []ai.ContentBlock{
 				{Type: ai.ContentTypeText, Text: "The shape has a diameter of 100 pixels."},
 				{Type: ai.ContentTypeImage, Data: redPixelBase64, MimeType: "image/png"},
 			},
-			prompt:        "Summarize the tool result details and mention any visible color.",
-			expectKeyword: "pixel",
+			prompt:           "Summarize the tool result details and mention any visible color.",
+			requiredKeywords: []string{"pixel", "red"},
 		},
 	}
@@
-	if !strings.Contains(text, tc.expectKeyword) && !strings.Contains(text, "red") {
-		t.Fatalf("expected response to reference tool result content, got %q", text)
-	}
+	for _, keyword := range tc.requiredKeywords {
+		if !strings.Contains(text, keyword) {
+			t.Fatalf("expected response to include %q, got %q", keyword, text)
+		}
+	}

Also applies to: 40-48, 110-111
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/e2e/image_tool_result_e2e_test.go` around lines 26 - 31, The test's assertion is too permissive because the fallback `&& !contains("red")` lets cases like "text-and-image-tool-result" pass without validating the text content; update the test in the testCases loop to require both an image block and that at least one text ContentBlock contains the expectKeyword (i.e., check toolResult for an image-type block AND assert that any text block string contains expectKeyword) instead of relying on the `&& !contains("red")` condition; apply the same stricter check where similar logic appears (around the blocks handling testCases/toolResult and the helper contains usage) so text-and-image combos only pass when the expected keyword is actually present in text.

pkg/ai/e2e/helpers_test.go-13-15 (1)
13-15: ⚠️ Potential issue | 🟡 Minor

Tighten the E2E gate to explicit truthy values. Line 13 enables E2E for any non-empty value, so PI_AI_E2E=0 or false will still run E2E tests unexpectedly.

Suggested fix
 import (
 	"os"
+	"strings"
 	"testing"
 )
@@
-	if os.Getenv("PI_AI_E2E") == "" {
+	switch strings.ToLower(strings.TrimSpace(os.Getenv("PI_AI_E2E"))) {
+	case "1", "true", "yes", "on":
+		// enabled
+	default:
 		t.Skip("set PI_AI_E2E=1 to enable ai package e2e tests")
 	}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/e2e/helpers_test.go` around lines 13 - 15, The E2E gate currently treats any non-empty PI_AI_E2E as true; update the check around os.Getenv("PI_AI_E2E") in the test setup so it only enables E2E when the value is an explicit truthy token (e.g., "1" or "true"/"yes"), using a case-insensitive comparison (e.g., strings.EqualFold) or matching against a small allowed set, and call t.Skip unless the value matches one of those explicit truthy values.

pkg/ai/e2e/tool_call_id_normalization_e2e_test.go-42-45 (1)
42-45: ⚠️ Potential issue | 🟡 Minor

Use the selected source model ID in seeded assistant context. Line 44 hardcodes "gpt-5-mini" while the test model is chosen at Line 19. If the env override changes the model, this can make the test brittle.

🛠️ Suggested patch
 		{
 			Role: ai.RoleAssistant,
@@
 			Provider:   "openai",
 			API:        ai.APIOpenAIResponses,
-			Model:      "gpt-5-mini",
+			Model:      model.ID,
 			StopReason: ai.StopReasonToolUse,
 			Timestamp:  time.Now().UnixMilli(),
 		},

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/e2e/tool_call_id_normalization_e2e_test.go` around lines 42 - 45, The test currently hardcodes Model: "gpt-5-mini" which conflicts with the test model chosen earlier on line 19; replace the hardcoded string with the variable used to select the model at line 19 (e.g., model, selectedModel, or modelID) so the struct's Model field uses that variable instead, ensuring the seeded assistant context follows the environment-overridden model selection.

pkg/ai/e2e/context_overflow_test.go-26-26 (1)
26-26: ⚠️ Potential issue | 🟡 Minor

Restore provider registry state after this global reset.
Line 26 resets global API providers but the test does not restore state afterward. Add cleanup so this test cannot affect ordering-sensitive tests.
Suggested change
 	providers.ResetAPIProviders()
+	t.Cleanup(providers.ResetAPIProviders)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/e2e/context_overflow_test.go` at line 26, The test calls providers.ResetAPIProviders() which mutates global provider registry but doesn't restore it; wrap the reset with a snapshot-and-restore so other tests aren't affected: before calling ResetAPIProviders() capture the current registry (e.g. saved := providers.GetAPIProviders() or make a copy of the registry), call providers.ResetAPIProviders(), and then defer restoring the original state (defer providers.SetAPIProviders(saved)) so the global provider registry is returned to its previous value after the test; if GetAPIProviders/SetAPIProviders helpers don't exist, add small accessor functions to read and restore the registry and use those around the ResetAPIProviders() call in context_overflow_test.go.

pkg/ai/providers/simple_options.go-8-15 (1)
8-15: ⚠️ Potential issue | 🟡 Minor

Handle unknown model token limits without collapsing to maxTokens=0. When model.MaxTokens / modelMaxTokens is 0 (unset), the current clamps can return zero max tokens.

✅ Proposed fix
 func BuildBaseOptions(model ai.Model, options *ai.SimpleStreamOptions, apiKey string) ai.StreamOptions {
+	defaultMaxTokens := model.MaxTokens
+	if defaultMaxTokens <= 0 {
+		defaultMaxTokens = 32000
+	}
 	if options == nil {
 		return ai.StreamOptions{
-			MaxTokens: minInt(model.MaxTokens, 32000),
+			MaxTokens: minInt(defaultMaxTokens, 32000),
 			APIKey:    apiKey,
 		}
 	}
 	maxTokens := options.MaxTokens
 	if maxTokens <= 0 {
-		maxTokens = minInt(model.MaxTokens, 32000)
+		maxTokens = minInt(defaultMaxTokens, 32000)
 	}
@@
-	maxTokens = minInt(baseMaxTokens+thinkingBudget, modelMaxTokens)
+	maxTokens = baseMaxTokens + thinkingBudget
+	if modelMaxTokens > 0 {
+		maxTokens = minInt(maxTokens, modelMaxTokens)
+	}

Also applies to: 63-63
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/simple_options.go` around lines 8 - 15, The clamp logic can return maxTokens=0 when model.MaxTokens is unset (0); update the logic around options.MaxTokens and the initial struct creation so that whenever model.MaxTokens <= 0 you substitute a fallbackCap (e.g., 32000) before calling minInt, i.e. treat model.MaxTokens <= 0 as fallbackCap when computing MaxTokens for both the returned struct and the local maxTokens variable (referencing model.MaxTokens, options.MaxTokens, maxTokens and minInt).

pkg/ai/oauth/index.go-86-90 (1)
86-90: ⚠️ Potential issue | 🟡 Minor

Don’t force refresh when expiry is unset (Expires == 0). Current logic treats zero as expired, which can break valid credential flows with unset/unknown expiry.

✅ Proposed fix
-	if time.Now().UnixMilli() >= creds.Expires {
+	if creds.Expires > 0 && time.Now().UnixMilli() >= creds.Expires {
 		refreshed, err := provider.RefreshToken(creds)
 		if err != nil {
-			return nil, "", fmt.Errorf("failed to refresh OAuth token for %s", providerID)
+			return nil, "", fmt.Errorf("failed to refresh OAuth token for %s: %w", providerID, err)
 		}
 		creds = refreshed
 	}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/oauth/index.go` around lines 86 - 90, The code currently treats creds.Expires == 0 as expired and forces a refresh; update the condition in the token-check logic (where creds.Expires and provider.RefreshToken are used, referenced by providerID) to only attempt RefreshToken when Expires is set and actually past due — i.e., change the check to require creds.Expires != 0 AND time.Now().UnixMilli() >= creds.Expires so that credentials with an unset/unknown expiry are not treated as expired and are not forcibly refreshed.

pkg/ai/providers/amazon_bedrock_runtime.go-48-50 (1)
48-50: ⚠️ Potential issue | 🟡 Minor

Treat canceled Converse calls as aborted, not provider errors.
Cancellation should emit the aborted event for consistency with the other provider runtimes.
✅ Proposed fix
 	resp, err := client.Converse(runCtx, input)
 	if err != nil {
+		if isContextAborted(runCtx, err) {
+			pushProviderAborted(stream, model)
+			return
+		}
 		pushProviderError(stream, model, err.Error())
 		return
 	}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/amazon_bedrock_runtime.go` around lines 48 - 50, The current error handling in the Amazon Bedrock runtime treats all errors from the Converse/stream call as provider errors (calling pushProviderError(stream, model, err.Error())); instead, detect cancellation by checking errors.Is(err, context.Canceled) or errors.Is(err, context.DeadlineExceeded) and call pushProviderAborted(stream, model) for those cases; for other errors continue to call pushProviderError(stream, model, err.Error()). Update the error branch around the Converse/stream error (where pushProviderError is invoked) to switch to pushProviderAborted for cancellation-related errors.

pkg/ai/providers/google_shared.go-79-90 (1)
79-90: ⚠️ Potential issue | 🟡 Minor

Default stop reason maps to Error, which may be too aggressive. When an unknown stop reason is received, mapping it to ai.StopReasonError could incorrectly signal an error condition. Consider mapping unknown reasons to ai.StopReasonStop as a safer default.

🛡️ Suggested safer default
 func MapGoogleStopReason(reason string) ai.StopReason {
 	switch strings.ToUpper(strings.TrimSpace(reason)) {
 	case "STOP":
 		return ai.StopReasonStop
 	case "MAX_TOKENS":
 		return ai.StopReasonLength
 	case "TOOL_USE":
 		return ai.StopReasonToolUse
 	default:
-		return ai.StopReasonError
+		return ai.StopReasonStop
 	}
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/google_shared.go` around lines 79 - 90, The MapGoogleStopReason function currently returns ai.StopReasonError for unknown reasons; change its default return to ai.StopReasonStop so unexpected/unknown stop reasons are treated as normal stops rather than errors. Locate MapGoogleStopReason in pkg/ai/providers/google_shared.go and replace the default case returning ai.StopReasonError with ai.StopReasonStop (optionally add a short comment noting this is a safer fallback for unknown reasons).

pkg/connector/streaming_runtime_selector.go-211-216 (1)
211-216: ⚠️ Potential issue | 🟡 Minor

Synthetic tool call ID may conflict across multiple tool messages. Line 215 generates a tool call ID as "tool_" + strconv.FormatInt(now, 10), where now is captured once at line 132. If multiple tool messages are processed in the same call, they'll all get the same ID. Consider using an index or unique suffix.

🔧 Proposed fix to ensure unique tool call IDs
 func chatPromptToUnifiedMessages(prompt []openai.ChatCompletionMessageParamUnion) []UnifiedMessage {
 	out := make([]UnifiedMessage, 0, len(prompt))
-	now := time.Now().UnixMilli()
+	now := time.Now().UnixMilli()
+	toolIndex := 0
 	for _, msg := range prompt {
 		// ... existing code ...
 		default:
 			content, role := airuntime.ExtractMessageContent(msg)
 			content = strings.TrimSpace(content)
 			if content == "" {
 				continue
 			}
 			switch role {
 			// ... existing cases ...
 			case "tool":
+				toolIndex++
 				out = append(out, UnifiedMessage{
 					Role:       RoleTool,
 					Content:    []ContentPart{{Type: ContentTypeText, Text: content}},
-					ToolCallID: "tool_" + strconv.FormatInt(now, 10),
+					ToolCallID: fmt.Sprintf("tool_%d_%d", now, toolIndex),
 				})

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/connector/streaming_runtime_selector.go` around lines 211 - 216, The tool call IDs in the UnifiedMessage construction (RoleTool and ToolCallID) are non-unique because they use the single captured variable now with strconv.FormatInt(now, 10); change the logic that sets ToolCallID in the case "tool" branch so each tool message gets a unique suffix (for example append the loop/index value, a monotonic counter, or a UUID/nonce) instead of just now; update the code that builds UnifiedMessage.ToolCallID to combine "tool_", strconv.FormatInt(now,10) and the chosen unique suffix (or replace strconv.FormatInt(now,10) with a call that generates a unique value) so multiple tool messages in the same call do not collide.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 930f206b-3a10-4384-be53-ca21da5923eb
⛔ Files ignored due to path filters (1)
go.sum is excluded by !**/*.sum
📒 Files selected for processing (120)
README.md
cmd/generate-ai-models/main.go
docs/pkg-ai-runtime-migration.md
docs/pkg-ai-test-parity.md
go.mod
pkg/ai/api_registry.go
pkg/ai/api_registry_test.go
pkg/ai/e2e/abort_test.go
pkg/ai/e2e/context_overflow_test.go
pkg/ai/e2e/cross_provider_handoff_e2e_test.go
pkg/ai/e2e/empty_e2e_test.go
pkg/ai/e2e/helpers_test.go
pkg/ai/e2e/image_tool_result_e2e_test.go
pkg/ai/e2e/openai_reasoning_replay_e2e_test.go
pkg/ai/e2e/parity_openai_test.go
pkg/ai/e2e/parity_provider_runtime_test.go
pkg/ai/e2e/stream_test.go
pkg/ai/e2e/tool_call_id_normalization_e2e_test.go
pkg/ai/e2e/xhigh_e2e_test.go
pkg/ai/env_api_keys.go
pkg/ai/env_api_keys_test.go
pkg/ai/event_stream.go
pkg/ai/event_stream_test.go
pkg/ai/models.go
pkg/ai/models_generated.go
pkg/ai/models_test.go
pkg/ai/oauth/anthropic.go
pkg/ai/oauth/anthropic_provider_helpers_test.go
pkg/ai/oauth/github_copilot.go
pkg/ai/oauth/github_copilot_provider_helpers_test.go
pkg/ai/oauth/google_antigravity.go
pkg/ai/oauth/google_antigravity_helpers_test.go
pkg/ai/oauth/google_gemini_cli.go
pkg/ai/oauth/google_gemini_cli_helpers_test.go
pkg/ai/oauth/index.go
pkg/ai/oauth/index_test.go
pkg/ai/oauth/openai_codex.go
pkg/ai/oauth/openai_codex_helpers_test.go
pkg/ai/oauth/pkce.go
pkg/ai/oauth/pkce_test.go
pkg/ai/oauth/types.go
pkg/ai/providers/amazon_bedrock.go
pkg/ai/providers/amazon_bedrock_runtime.go
pkg/ai/providers/amazon_bedrock_runtime_test.go
pkg/ai/providers/amazon_bedrock_test.go
pkg/ai/providers/anthropic.go
pkg/ai/providers/anthropic_runtime.go
pkg/ai/providers/anthropic_runtime_test.go
pkg/ai/providers/anthropic_test.go
pkg/ai/providers/azure_openai_responses.go
pkg/ai/providers/azure_openai_responses_runtime.go
pkg/ai/providers/azure_openai_responses_runtime_test.go
pkg/ai/providers/azure_openai_responses_test.go
pkg/ai/providers/cache_retention_test.go
pkg/ai/providers/github_copilot_headers.go
pkg/ai/providers/github_copilot_headers_test.go
pkg/ai/providers/google.go
pkg/ai/providers/google_gemini_cli.go
pkg/ai/providers/google_gemini_cli_runtime.go
pkg/ai/providers/google_gemini_cli_runtime_test.go
pkg/ai/providers/google_gemini_cli_test.go
pkg/ai/providers/google_runtime.go
pkg/ai/providers/google_runtime_test.go
pkg/ai/providers/google_shared.go
pkg/ai/providers/google_shared_test.go
pkg/ai/providers/google_test.go
pkg/ai/providers/google_tool_call_missing_args_test.go
pkg/ai/providers/google_vertex.go
pkg/ai/providers/google_vertex_test.go
pkg/ai/providers/openai_codex_responses.go
pkg/ai/providers/openai_codex_responses_runtime.go
pkg/ai/providers/openai_codex_responses_runtime_test.go
pkg/ai/providers/openai_codex_responses_test.go
pkg/ai/providers/openai_completions.go
pkg/ai/providers/openai_completions_convert_test.go
pkg/ai/providers/openai_completions_runtime.go
pkg/ai/providers/openai_completions_runtime_test.go
pkg/ai/providers/openai_completions_test.go
pkg/ai/providers/openai_responses.go
pkg/ai/providers/openai_responses_runtime.go
pkg/ai/providers/openai_responses_runtime_test.go
pkg/ai/providers/openai_responses_shared.go
pkg/ai/providers/openai_responses_shared_test.go
pkg/ai/providers/openai_responses_test.go
pkg/ai/providers/register_builtins.go
pkg/ai/providers/register_builtins_test.go
pkg/ai/providers/runtime_abort.go
pkg/ai/providers/runtime_abort_test.go
pkg/ai/providers/simple_options.go
pkg/ai/providers/simple_options_test.go
pkg/ai/providers/transform_messages.go
pkg/ai/providers/transform_messages_test.go
pkg/ai/stream.go
pkg/ai/types.go
pkg/ai/utils/json_parse.go
pkg/ai/utils/json_parse_test.go
pkg/ai/utils/overflow.go
pkg/ai/utils/overflow_test.go
pkg/ai/utils/sanitize_unicode.go
pkg/ai/utils/sanitize_unicode_test.go
pkg/ai/utils/type_helpers.go
pkg/ai/utils/type_helpers_test.go
pkg/ai/utils/usage.go
pkg/ai/utils/usage_test.go
pkg/ai/utils/validation.go
pkg/ai/utils/validation_test.go
pkg/connector/ai_adapter.go
pkg/connector/ai_adapter_test.go
pkg/connector/pkg_ai_event_adapter.go
pkg/connector/pkg_ai_event_adapter_test.go
pkg/connector/pkg_ai_model_adapter.go
pkg/connector/pkg_ai_model_adapter_test.go
pkg/connector/pkg_ai_provider_bridge.go
pkg/connector/pkg_ai_provider_bridge_e2e_test.go
pkg/connector/pkg_ai_provider_bridge_test.go
pkg/connector/provider_openai.go
pkg/connector/provider_openai_pkg_ai_test.go
pkg/connector/response_retry.go
pkg/connector/streaming_runtime_selector.go
pkg/connector/streaming_runtime_selector_test.go
func (s *AssistantMessageEventStream) Push(evt AssistantMessageEvent) {
	select {
	case <-s.done:
		return
	default:
	}

	isComplete := false
	if evt.Type == EventDone {
		s.mu.Lock()
		s.result = evt.Message
		s.hasResult = true
		s.mu.Unlock()
		isComplete = true
	}
	if evt.Type == EventError {
		s.mu.Lock()
		s.result = evt.Error
		s.hasResult = true
		s.mu.Unlock()
		isComplete = true
	}

	select {
	case <-s.done:
	case s.ch <- evt:
	}
	if isComplete {
		s.Close()
	}
}

func (s *AssistantMessageEventStream) Close() {
	s.once.Do(func() {
		close(s.done)
		close(s.ch)
	})
🧩 Analysis chain
🏁 Script executed:
git ls-files pkg/ai/

Repository: beeper/ai-bridge
Length of output: 3776
🏁 Script executed:
head -100 pkg/ai/event_stream.go

Repository: beeper/ai-bridge
Length of output: 1669
🏁 Script executed:
cat pkg/ai/event_stream_test.go

Repository: beeper/ai-bridge
Length of output: 1745
🏁 Script executed:
rg -n "Push\|Close" pkg/ai/event_stream_test.go -A 3

Repository: beeper/ai-bridge
Length of output: 42
🏁 Script executed:
rg -n "AssistantMessageEventStream" pkg/ai/ -l

Repository: beeper/ai-bridge
Length of output: 607
Guard channel send and close with a dedicated mutex to prevent panic from send on closed channel.
Push can race with Close in concurrent scenarios. Between the first select <-s.done check in Push and the channel send s.ch <- evt, another goroutine may execute Close, closing s.ch. The sync.Once ensures Close runs once but does not synchronize with concurrent sends, allowing a send on a closed channel panic.
Suggested fix (dedicated mutex for channel operations)
type AssistantMessageEventStream struct {
ch chan AssistantMessageEvent
done chan struct{}
once sync.Once
+ chMu sync.Mutex
mu sync.Mutex
result Message
hasResult bool
}
@@
- select {
- case <-s.done:
- case s.ch <- evt:
- }
+ s.chMu.Lock()
+ closed := false
+ select {
+ case <-s.done:
+ closed = true
+ case s.ch <- evt:
+ }
+ s.chMu.Unlock()
+ if closed {
+ return
+ }
if isComplete {
s.Close()
}
}
func (s *AssistantMessageEventStream) Close() {
s.once.Do(func() {
+ s.chMu.Lock()
+ defer s.chMu.Unlock()
close(s.done)
close(s.ch)
})
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/ai/event_stream.go` around lines 31 - 67, Push races with Close because
sends on s.ch can occur while Close closes it; add a dedicated channel mutex
(e.g., chMu sync.Mutex) on AssistantMessageEventStream and use it to guard all
operations that close or send on s.ch and read/close s.done: in Push acquire
chMu before checking <-s.done and before performing the send (release after
send/select), and in Close acquire chMu around closing s.done and closing s.ch
(still use s.once to ensure single close), so no send can occur while Close is
closing the channel.
var modelRegistry = map[string]map[string]Model{}

func init() {
	registerGeneratedModels()
}

func RegisterModels(provider string, models []Model) {
	key := strings.TrimSpace(provider)
	if key == "" {
		return
	}
	if modelRegistry[key] == nil {
		modelRegistry[key] = map[string]Model{}
	}
	for _, model := range models {
		modelRegistry[key][model.ID] = model
	}
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# First, check the full models.go file to understand all registry access patterns
echo "=== Full models.go file ==="
wc -l pkg/ai/models.go

Repository: beeper/ai-bridge
Length of output: 107
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Read the entire models.go file
cat -n pkg/ai/models.go

Repository: beeper/ai-bridge
Length of output: 2669
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Search for all modelRegistry accesses in the codebase
echo "=== All modelRegistry accesses ==="
rg -n 'modelRegistry' pkg/ai/models.go
echo
echo "=== Sync imports in models.go ==="
rg -n 'import' pkg/ai/models.go -A 10
echo
echo "=== Search for concurrent usage patterns (goroutines calling RegisterModels) ==="
rg -l 'go\s+' --type=go | head -20

Repository: beeper/ai-bridge
Length of output: 1373
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Search for RegisterModels calls ==="
rg -n 'RegisterModels' --type=go
echo
echo "=== Search for GetModel calls ==="
rg -n 'GetModel\(' --type=go | head -20
echo
echo "=== Check if registerGeneratedModels calls RegisterModels ==="
rg -n 'registerGeneratedModels' pkg/ai/models.go

Repository: beeper/ai-bridge
Length of output: 883
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Search for where registerGeneratedModels is defined
echo "=== Find registerGeneratedModels definition ==="
rg -n 'func registerGeneratedModels' --type=go

Repository: beeper/ai-bridge
Length of output: 258
Add synchronization to protect modelRegistry from concurrent access.
modelRegistry is a shared mutable map accessed by exported functions (RegisterModels, GetModel, GetProviders, GetModels) without any locks. Go maps are not thread-safe and will panic under concurrent access, especially during writes (e.g., in RegisterModels at lines 19-23) when simultaneous reads occur from goroutines. Use sync.RWMutex to protect all registry access.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/ai/models.go` around lines 8 - 25, The shared map modelRegistry is
mutated and read concurrently by functions like RegisterModels, GetModel,
GetProviders, and GetModels; add a package-level sync.RWMutex (e.g., var
modelRegistryMu sync.RWMutex) and use modelRegistryMu.Lock()/Unlock() around
write operations (RegisterModels and any registration helpers like
registerGeneratedModels) and modelRegistryMu.RLock()/RUnlock() around all reads
(GetModel, GetProviders, GetModels) to prevent concurrent map access panics and
ensure safe concurrent use.
func BuildAnthropicAuthorizeURL(codeChallenge string, state string) string {
	params := url.Values{}
	params.Set("code", "true")
	params.Set("client_id", anthropicClientID)
	params.Set("response_type", "code")
	params.Set("redirect_uri", anthropicRedirectURI)
	params.Set("scope", anthropicScopes)
	params.Set("code_challenge", codeChallenge)
	params.Set("code_challenge_method", "S256")
	params.Set("state", state)
	return anthropicAuthorizeURL + "?" + params.Encode()
}
🧩 Analysis chain
🌐 Web query:
Anthropic OAuth authorization parameters code response_type
💡 Result:
For Anthropic’s OAuth authorization request (Authorization Code + PKCE), set:
- response_type=code (request an authorization code) [2]
- Typical required query parameters (as used/expected by Anthropic tooling around MCP connectors): client_id, redirect_uri, response_type (must be "code"), code_challenge (PKCE; commonly with code_challenge_method=S256), state [1]
Some flows/servers also require scope (and clients may need to include it in the authorization URL when scopes are advertised). [3]
Sources:
[1] (github.com)
[2] (datatracker.ietf.org)
[3] (github.com)
Citations:
- 1: [BUG] Claude Desktop and claude.ai fail to connect to custom OAuth-protected MCP server - infinite about:blank loop anthropics/claude-code#11814
- 2: https://datatracker.ietf.org/doc/html/rfc6749.txt
- 3: [BUG] OAuth authorization for HTTP MCP servers missing required scope parameter anthropics/claude-code#12077
Remove the invalid `code=true` parameter from line 27.
The `code=true` parameter is not part of Anthropic's OAuth authorization flow. According to Anthropic's documentation, the required parameters are `response_type`, `client_id`, `redirect_uri`, `code_challenge`, `code_challenge_method`, and `state`. Sending undocumented parameters will cause the authorization request to fail. Remove this line:

```go
params.Set("code", "true")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/ai/oauth/anthropic.go` around lines 25 - 36, In
BuildAnthropicAuthorizeURL remove the invalid params.Set("code", "true") entry
so the query only includes the documented OAuth parameters (response_type,
client_id, redirect_uri, code_challenge, code_challenge_method, scope if used,
and state); locate the params variable in the BuildAnthropicAuthorizeURL
function and delete the line that sets the "code" parameter, leaving the rest of
the params.Set calls intact.
```go
const (
	antigravityClientID       = "1071006060591-tmhssin2h21lcre235vtolojh4g403ep.apps.googleusercontent.com"
	antigravityClientSecret   = "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"
	antigravityRedirectURI    = "http://localhost:51121/oauth-callback"
	antigravityDefaultProject = "rising-fact-p41fc"
)

var antigravityScopes = []string{
	"https://www.googleapis.com/auth/cloud-platform",
	"https://www.googleapis.com/auth/userinfo.email",
	"https://www.googleapis.com/auth/userinfo.profile",
	"https://www.googleapis.com/auth/cclog",
	"https://www.googleapis.com/auth/experimentsandconfigs",
}

func AntigravityClientID() string {
	return antigravityClientID
}

func AntigravityClientSecret() string {
	return antigravityClientSecret
}
```
Remove committed OAuth secret and stop exposing it via public accessor.
Line 10 stores a live client secret in source, and AntigravityClientSecret() publicly returns it. This is a credential leak risk and should be treated as a blocker.
🔐 Proposed fix

```diff
 import (
 	"net/url"
+	"os"
 	"strings"
 )

 const (
 	antigravityClientID       = "1071006060591-tmhssin2h21lcre235vtolojh4g403ep.apps.googleusercontent.com"
-	antigravityClientSecret   = "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"
 	antigravityRedirectURI    = "http://localhost:51121/oauth-callback"
 	antigravityDefaultProject = "rising-fact-p41fc"
 )
+
+const antigravityClientSecretEnv = "ANTIGRAVITY_CLIENT_SECRET"
@@
 func AntigravityClientSecret() string {
-	return antigravityClientSecret
+	return strings.TrimSpace(os.Getenv(antigravityClientSecretEnv))
 }
```

🧰 Tools
🪛 Gitleaks (8.30.0)
[high] 10-10: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/ai/oauth/google_antigravity.go` around lines 8 - 29, Remove the
hard-coded antigravityClientSecret constant and stop returning it from
AntigravityClientSecret(); instead read the secret at runtime from a secure
source (e.g., environment variable or secret manager) and have
AntigravityClientSecret() fetch and return that value (for example using
os.Getenv("ANTIGRAVITY_CLIENT_SECRET") or a secret-manager client), update any
callers to handle a missing/empty secret (return error or nil) and delete the
committed secret literal so credentials are not stored in source control; ensure
the string constant antigravityClientSecret is removed and
AntigravityClientSecret is the single place that retrieves the secret securely.
pkg/connector/provider_openai.go
Outdated
```go
if pkgAIProviderRuntimeEnabled() {
	if pkgAIEvents, ok := tryGenerateStreamWithPkgAI(ctx, o.baseURL, o.apiKey, params); ok {
		o.log.Debug().
			Str("model", params.Model).
			Msg("Using pkg/ai provider runtime for OpenAI stream")
		return pkgAIEvents, nil
	}
	o.log.Warn().
		Str("model", params.Model).
		Msg("pkg/ai provider runtime fallback to existing OpenAI stream path")
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
# Find function signatures for both stream and non-stream bridge helpers
echo "=== Stream and non-stream bridge function signatures ==="
rg -n --type=go 'func\s+(tryGenerateStreamWithPkgAI|tryGenerateWithPkgAI)\s*\(' -A 10
# Check the exact context around the stream path (lines 271-281)
echo -e "\n=== Stream path context (lines 271-281) ==="
sed -n '265,285p' pkg/connector/provider_openai.go
# Check the non-stream path context (lines 378-391)
echo -e "\n=== Non-stream path context (lines 378-391) ==="
sed -n '375,395p' pkg/connector/provider_openai.go
```

Repository: beeper/ai-bridge
Length of output: 3422
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
# Get the full implementation of tryGenerateStreamWithPkgAI
echo "=== tryGenerateStreamWithPkgAI implementation ==="
sed -n '139,195p' pkg/connector/pkg_ai_provider_bridge.go
# Also check tryGenerateWithPkgAI for comparison
echo -e "\n=== tryGenerateWithPkgAI implementation (for comparison) ==="
sed -n '197,280p' pkg/connector/pkg_ai_provider_bridge.go
```

Repository: beeper/ai-bridge
Length of output: 4107
Stream helper lacks error return; cannot surface all failures like non-stream path does.
The `tryGenerateStreamWithPkgAI` helper returns only `(events, bool)` and suppresses all errors into `ok=false` (line 171 and elsewhere in the implementation). In contrast, `tryGenerateWithPkgAI` returns `(response, bool, error)` and can propagate errors that don't match fallback criteria (lines 223–228).
This creates a reliability divergence: stream failures are logged as warnings and silently fall back (line 278), while non-stream failures can be surfaced as errors to the caller (line 381). Align the stream helper to return an error as the third value and propagate it at the call site instead of falling back unconditionally.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/connector/provider_openai.go` around lines 271 - 281, Change the stream
helper tryGenerateStreamWithPkgAI to return (events, ok, error) like
tryGenerateWithPkgAI so it can propagate failures; update its implementation to
return an explicit error when a non-fallback failure occurs. At the call site
inside the pkgAIProviderRuntimeEnabled block, check the three-way result: if ok
is true return the events; if err is non-nil return that error to the caller;
otherwise (ok false and err nil) perform the existing warning and fall back to
the OpenAI stream path. Ensure function signature and all call sites are updated
to match the new (events, bool, error) contract.
Actionable comments posted: 1
🧹 Nitpick comments (10)
pkg/connector/provider_openai.go (1)
280-292: Remove dead code methods that are no longer used. With both `Generate` and `GenerateStream` now exclusively delegating to the pkg/ai runtime via `tryGenerateWithPkgAI`, the methods `buildResponsesParams` (lines 216-267) and `generateChatCompletions` (lines 294-337) have become unreachable and should be removed. These methods are not called anywhere in the codebase.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/connector/provider_openai.go` around lines 280 - 292, the dead helper methods buildResponsesParams and generateChatCompletions are no longer used because Generate and GenerateStream fully delegate to tryGenerateWithPkgAI; remove both unused functions from pkg/connector/provider_openai.go, update any imports if they become unused, and run the test/build to ensure no references remain (search for buildResponsesParams and generateChatCompletions to confirm safe deletion).

pkg/connector/streaming_runtime_selector_test.go (2)
10-14: Duplicate test for runtime enablement. This test duplicates `TestPkgAIProviderRuntimeEnabled` in pkg_ai_provider_bridge_test.go. If both functions (`pkgAIRuntimeEnabled` and `pkgAIProviderRuntimeEnabled`) are meant to be consolidated, consider removing one test.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/connector/streaming_runtime_selector_test.go` around lines 10 - 14, this test duplicates existing coverage; remove the redundant TestPkgAIRuntimeEnabledFromEnv or reconcile it with the other test: either delete TestPkgAIRuntimeEnabledFromEnv (leaving TestPkgAIProviderRuntimeEnabled in pkg_ai_provider_bridge_test.go) or change the test to assert a distinct behavior (e.g., environment-specific vs provider-specific) so both are meaningful; locate the helper functions pkgAIRuntimeEnabled and pkgAIProviderRuntimeEnabled and ensure only one test asserts each unique behavior.
91-129: Good parameter mapping test coverage. Tests model, system prompt, temperature, max tokens, reasoning effort, and message mapping. Consider adding a test case with tool-calling enabled to verify `buildPkgAIBridgeTools` integration.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/connector/streaming_runtime_selector_test.go` around lines 91 - 129, add a new test that exercises buildPkgAIBridgeGenerateParams when tool-calling is enabled: create a PortalMetadata that includes tool-calling metadata (e.g., tool definitions or flags your code checks) and call client.buildPkgAIBridgeGenerateParams with sample messages, then assert that params.Tools is non-empty and matches the expected tool entries produced by buildPkgAIBridgeTools, and that other fields (Model, SystemPrompt, Temperature, MaxCompletionTokens, ReasoningEffort, Messages) remain correctly mapped; reference buildPkgAIBridgeGenerateParams and buildPkgAIBridgeTools to locate where to enable tool metadata and validate the expected tool mapping.

pkg/connector/pkg_ai_provider_bridge.go (2)
117-124: String-based error matching is fragile. Relying on error message substrings for fallback decisions is brittle and may break if error messages change. Consider using sentinel errors or error types for more reliable detection.
Example approach using sentinel errors

```go
// In pkg/ai package, define sentinel errors:
var ErrNotImplemented = errors.New("not implemented yet")
var ErrNoAPIProviderRegistered = errors.New("no api provider registered")

// Then in shouldFallbackFromPkgAIError:
func shouldFallbackFromPkgAIError(err error) bool {
	if err == nil {
		return false
	}
	return errors.Is(err, aipkg.ErrNotImplemented) ||
		errors.Is(err, aipkg.ErrNoAPIProviderRegistered)
	// ... etc
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/connector/pkg_ai_provider_bridge.go` around lines 117 - 124, the current shouldFallbackFromPkgAIEvent uses fragile substring matching on event.Error.Error(); replace this with sentinel/error-type checks: introduce exported sentinel errors (e.g., ErrNotImplemented, ErrNoAPIProviderRegistered) in the pkg/ai package and then change shouldFallbackFromPkgAIEvent to return false if event.Error==nil or event.Type!=StreamEventError, otherwise use errors.Is(event.Error, aipkg.ErrNotImplemented) || errors.Is(event.Error, aipkg.ErrNoAPIProviderRegistered) (or additional error types) to decide fallback; keep the StreamEvent/StreamEventError usage but stop relying on string matching.
14-16: Consider making runtime gating configurable. Hardcoding `return true` means this function provides no actual gating. If runtime gating is intended for future use, consider adding an environment variable or configuration parameter; otherwise, consider removing this function and using a constant.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/connector/pkg_ai_provider_bridge.go` around lines 14 - 16, the function pkgAIProviderRuntimeEnabled currently always returns true and provides no gating; change it to read a configuration or environment variable (e.g., PKG_AI_PROVIDER_RUNTIME_ENABLED) and return a boolean based on that value, or remove the function and replace callers with a constant if gating isn't needed. Update pkgAIProviderRuntimeEnabled to parse the env var (treat "1", "true", "yes" as enabled, defaulting to false or whatever project default is), ensure callers use this function, and add a brief comment documenting the env var behavior so the runtime gating is actually configurable.

pkg/connector/streaming_runtime_selector.go (4)
31-36: Unused parameters in `chooseStreamingRuntimePath`. The `_ ModelAPI` and `_ bool` parameters are ignored. If these are planned for future use, consider documenting the intent; otherwise, remove them to avoid confusion.
If parameters are not needed

```diff
-func chooseStreamingRuntimePath(hasAudio bool, _ ModelAPI, _ bool) streamingRuntimePath {
+func chooseStreamingRuntimePath(hasAudio bool) streamingRuntimePath {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/connector/streaming_runtime_selector.go` around lines 31 - 36, the function chooseStreamingRuntimePath currently takes unused parameters (_ ModelAPI and _ bool); remove these unused parameters from the function signature and update all call sites to pass only hasAudio, or if those parameters are intentionally reserved for future use, add a brief comment above chooseStreamingRuntimePath documenting their planned purpose and keep them named (e.g., api ModelAPI, preferPkg bool) instead of using blank identifiers so the intent is clear; ensure symbol references mentioned (chooseStreamingRuntimePath, streamingRuntimeChatCompletions, streamingRuntimePkgAI) remain unchanged.
27-29: Duplicate runtime gating function. `pkgAIRuntimeEnabled()` here duplicates `pkgAIProviderRuntimeEnabled()` in pkg_ai_provider_bridge.go. Consider consolidating into a single function to avoid divergence.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/connector/streaming_runtime_selector.go` around lines 27 - 29, there are two duplicate gating functions: pkgAIRuntimeEnabled() (in streaming_runtime_selector.go) and pkgAIProviderRuntimeEnabled() (in pkg_ai_provider_bridge.go); consolidate them by removing one and using the other across the codebase: pick the canonical function name (e.g., pkgAIProviderRuntimeEnabled), delete pkgAIRuntimeEnabled, update any callers that reference pkgAIRuntimeEnabled to call pkgAIProviderRuntimeEnabled, and ensure the single retained function contains the intended runtime-check logic so behavior remains unchanged.
298-309: Infinite loop structure with `goto` for tool rounds. The `for round := 0; ; round++` loop with `goto nextRound` is unconventional. While functional, consider restructuring with `continue` and a labeled loop for clarity.
Cleaner loop structure

```diff
+rounds:
 for round := 0; ; round++ {
 	// ... existing code ...
 	// Instead of goto nextRound:
-	goto nextRound
+	continue rounds
-nextRound:
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/connector/streaming_runtime_selector.go` around lines 298 - 309, the current infinite loop uses "for round := 0; ; round++" with a goto nextRound; refactor it to a labeled loop and use continue to advance rounds for clarity: keep the loop control variable "round" and the body logic (clearing state.pendingFunctionOutputs, building toolCallsThisRound, activeTools map, roundContent, calling tryGenerateStreamWithPkgAI and handling the !handled branch that calls oc.uiEmitter(...).EmitUIError, oc.emitUIFinish(...), and streamFailureError) but remove the goto by placing the loop body under a label (e.g., rounds:) and replacing any "goto nextRound" jumps with "continue rounds" so control flow is explicit and easier to follow.
413-416: Tool call handling skips nil events appropriately. The `continue` on nil `event.ToolCall` prevents panics. Consider logging a warning for unexpected nil tool calls.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/connector/streaming_runtime_selector.go` around lines 413 - 416, the switch branch for StreamEventToolCall currently just continues when event.ToolCall is nil; update that branch to log a warning before continuing so unexpected nil tool calls are visible in logs, using the package's existing logger and including useful context such as event type/id/timestamp if available, then continue as before to preserve the nil-guard behavior.

pkg/connector/pkg_ai_provider_bridge_test.go (1)
252-267: Consider using a table-driven test with `t.Run` for thinking levels. For consistency with other tests in this file (e.g., `TestInferProviderNameFromBaseURL`), consider using subtests.
Suggested refactor using subtests

```diff
 func TestParseThinkingLevel(t *testing.T) {
-	cases := map[string]string{
-		"minimal": "minimal",
-		"low":     "low",
-		"medium":  "medium",
-		"high":    "high",
-		"xhigh":   "xhigh",
-		"none":    "",
-		"":        "",
-	}
-	for in, want := range cases {
-		if got := string(parseThinkingLevel(in)); got != want {
-			t.Fatalf("parseThinkingLevel(%q) = %q, want %q", in, got, want)
-		}
-	}
+	cases := []struct {
+		name string
+		in   string
+		want string
+	}{
+		{name: "minimal", in: "minimal", want: "minimal"},
+		{name: "low", in: "low", want: "low"},
+		{name: "medium", in: "medium", want: "medium"},
+		{name: "high", in: "high", want: "high"},
+		{name: "xhigh", in: "xhigh", want: "xhigh"},
+		{name: "unknown_none", in: "none", want: ""},
+		{name: "empty", in: "", want: ""},
+	}
+	for _, tc := range cases {
+		t.Run(tc.name, func(t *testing.T) {
+			if got := string(parseThinkingLevel(tc.in)); got != tc.want {
+				t.Fatalf("parseThinkingLevel(%q) = %q, want %q", tc.in, got, tc.want)
+			}
+		})
+	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/connector/pkg_ai_provider_bridge_test.go` around lines 252 - 267, TestParseThinkingLevel should be converted to a table-driven subtest using t.Run so each input case is reported separately; update the test to iterate over the cases and for each entry call t.Run(name, func(t *testing.T) { ... }), keeping the same case names and expected values and referencing the existing TestParseThinkingLevel and parseThinkingLevel to locate the code to change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/connector/streaming_runtime_selector.go`:
- Around line 160-166: The tool message creation in the switch branch that
builds UnifiedMessage (case "tool") uses strconv.FormatInt(now, 10) for
ToolCallID which can collide within the same millisecond; change ToolCallID
generation in that branch to use a guaranteed-unique value (e.g.,
uuid.NewString() or an atomically incremented counter) instead of the timestamp.
Update the code that sets ToolCallID on the UnifiedMessage to produce
"tool_"+<uniqueID> (preserving the "tool_" prefix) and import/use the UUID or
atomic counter helper so each tool message created by the switch branch yields a
unique ToolCallID.
---
Nitpick comments:
In `@pkg/connector/pkg_ai_provider_bridge_test.go`:
- Around line 252-267: TestParseThinkingLevel should be converted to a
table-driven subtest using t.Run so each input case is reported separately;
update the test to iterate over the cases map and for each entry call
t.Run(name, func(t *testing.T) { if got := string(parseThinkingLevel(in)); got
!= want { t.Fatalf(...) } }), keeping the same case names and expected values
and referencing the existing TestParseThinkingLevel and parseThinkingLevel to
locate the code to change.
In `@pkg/connector/pkg_ai_provider_bridge.go`:
- Around line 117-124: The current shouldFallbackFromPkgAIEvent uses fragile
substring matching on event.Error.Error(); replace this with sentinel/error-type
checks: introduce exported sentinel errors (e.g., ErrNotImplemented,
ErrNoAPIProviderRegistered) in the pkg/ai package and then change
shouldFallbackFromPkgAIEvent to return false if event.Error==nil or
event.Type!=StreamEventError, otherwise use errors.Is(event.Error,
aipkg.ErrNotImplemented) || errors.Is(event.Error,
aipkg.ErrNoAPIProviderRegistered) (or additional error types) to decide
fallback; keep the StreamEvent/StreamEventError usage but stop relying on string
matching.
- Around line 14-16: The function pkgAIProviderRuntimeEnabled currently always
returns true and provides no gating; change it to read a configuration or
environment variable (e.g., PKG_AI_PROVIDER_RUNTIME_ENABLED) and return a
boolean based on that value, or remove the function and replace callers with a
constant if gating isn’t needed. Update pkgAIProviderRuntimeEnabled to parse the
env var (treat "1", "true", "yes" as enabled, defaulting to false or whatever
project default is), ensure callers use this function, and add a brief comment
documenting the env var behavior so the runtime gating is actually configurable.
In `@pkg/connector/provider_openai.go`:
- Around line 280-292: The dead helper methods buildResponsesParams and
generateChatCompletions are no longer used because Generate and GenerateStream
fully delegate to tryGenerateWithPkgAI; remove both unused functions
(buildResponsesParams and generateChatCompletions) from
pkg/connector/provider_openai.go, update any imports if they become unused, and
run the test/build to ensure no references remain (search for
buildResponsesParams and generateChatCompletions to confirm safe deletion).
In `@pkg/connector/streaming_runtime_selector_test.go`:
- Around line 10-14: This test duplicates existing coverage; remove the
redundant TestPkgAIRuntimeEnabledFromEnv or reconcile it with the other test:
either delete TestPkgAIRuntimeEnabledFromEnv (leaving
TestPkgAIProviderRuntimeEnabled in pkg_ai_provider_bridge_test.go) or change the
test to assert a distinct behavior (e.g., environment-specific vs
provider-specific) so both are meaningful; locate the helper functions
pkgAIRuntimeEnabled and pkgAIProviderRuntimeEnabled and ensure only one test
asserts each unique behavior.
- Around line 91-129: Add a new test that exercises
buildPkgAIBridgeGenerateParams when tool-calling is enabled: create a
PortalMetadata that includes tool-calling metadata (e.g., tool definitions or
flags your code checks) and call client.buildPkgAIBridgeGenerateParams with
sample messages, then assert that params.Tools is non-empty and matches the
expected tool entries produced by buildPkgAIBridgeTools, and that other fields
(Model, SystemPrompt, Temperature, MaxCompletionTokens, ReasoningEffort,
Messages) remain correctly mapped; reference buildPkgAIBridgeGenerateParams and
buildPkgAIBridgeTools to locate where to enable tool metadata and validate the
expected tool mapping.
In `@pkg/connector/streaming_runtime_selector.go`:
- Around line 31-36: The function chooseStreamingRuntimePath currently takes
unused parameters (_ ModelAPI and _ bool); remove these unused parameters from
the function signature and update all call sites to pass only hasAudio (i.e.,
change chooseStreamingRuntimePath(hasAudio, _ ModelAPI, _ bool) to
chooseStreamingRuntimePath(hasAudio)), or if those parameters are intentionally
reserved for future use, add a brief comment above chooseStreamingRuntimePath
documenting their planned purpose and keep them named (e.g., api ModelAPI,
preferPkg bool) instead of using blank identifiers so the intent is clear;
ensure symbol references mentioned (chooseStreamingRuntimePath,
streamingRuntimeChatCompletions, streamingRuntimePkgAI) remain unchanged.
- Around line 27-29: There are two duplicate gating
functions—pkgAIRuntimeEnabled() (in streaming_runtime_selector.go) and
pkgAIProviderRuntimeEnabled() (in pkg_ai_provider_bridge.go); consolidate them
by removing one and using the other across the codebase: pick the canonical
function name (e.g., pkgAIProviderRuntimeEnabled), delete pkgAIRuntimeEnabled,
update any callers that reference pkgAIRuntimeEnabled to call
pkgAIProviderRuntimeEnabled, and ensure the single retained function contains
the intended runtime-check logic so behavior remains unchanged.
- Around line 298-309: The current infinite loop in the function uses "for round
:= 0; ; round++" with a goto nextRound; refactor it to a labeled loop and use
continue to advance rounds for clarity: keep the loop control variable "round"
and the body logic (clearing state.pendingFunctionOutputs, building
toolCallsThisRound, activeTools map, roundContent, calling
tryGenerateStreamWithPkgAI and handling the !handled branch that calls
oc.uiEmitter(...).EmitUIError, oc.emitUIFinish(...), and streamFailureError) but
remove the goto by placing the loop body under a label (e.g., rounds:) and
replacing any "goto nextRound" jumps with "continue rounds" so control flow is
explicit and easier to follow around symbols like tryGenerateStreamWithPkgAI,
ToolCallResult, activeToolCall, oc.uiEmitter, oc.emitUIFinish, and
streamFailureError.
- Around line 413-416: The switch branch for StreamEventToolCall currently just
continues when event.ToolCall is nil; update that branch to log a warning before
continuing so unexpected nil tool calls are visible in logs. In the
StreamEventToolCall case (referencing StreamEventToolCall and event.ToolCall)
emit a warning using the package's existing logger (e.g., r.logger.Warn/Warnf or
log.Printf) including useful context such as event type/id/timestamp if
available, then continue as before to preserve the nil-guard behavior.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 52c18cd3-fd1c-4725-bc60-776deafd8e43
📒 Files selected for processing (7)
- docs/pkg-ai-runtime-migration.md
- pkg/connector/pkg_ai_provider_bridge.go
- pkg/connector/pkg_ai_provider_bridge_test.go
- pkg/connector/provider_openai.go
- pkg/connector/provider_openai_pkg_ai_test.go
- pkg/connector/streaming_runtime_selector.go
- pkg/connector/streaming_runtime_selector_test.go
🚧 Files skipped from review as they are similar to previous changes (2)
- docs/pkg-ai-runtime-migration.md
- pkg/connector/provider_openai_pkg_ai_test.go
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: build-docker
- GitHub Check: Lint
- GitHub Check: Lint
- GitHub Check: build-docker
🧰 Additional context used
🧬 Code graph analysis (2)
pkg/connector/provider_openai.go (2)
- pkg/ai/types.go (2): `Model` (194-207), `Context` (166-170)
- pkg/connector/provider.go (2): `GenerateParams` (23-35), `GenerateResponse` (38-44)

pkg/connector/streaming_runtime_selector_test.go (5)
- pkg/connector/model_api.go (2): `ModelAPIResponses` (8-8), `ModelAPIChatCompletions` (9-9)
- pkg/connector/messages.go (1): `RoleTool` (17-17)
- pkg/connector/client.go (1): `AIClient` (266-348)
- pkg/ai/types.go (1): `Model` (194-207)
- pkg/connector/provider_openai.go (1): `OpenAIProvider` (27-32)
🔇 Additional comments (22)
pkg/connector/provider_openai.go (4)
27-32: LGTM! The `apiKey` field addition is appropriate for enabling the pkg/ai runtime integration while maintaining encapsulation with a private field.
83-89: LGTM! The `apiKey` field is correctly initialized in the constructor.
197-203: LGTM! Consistent `apiKey` initialization matching the other constructor.
269-278: Stream helper error return asymmetry persists. The `tryGenerateStreamWithPkgAI` helper returns `(events, bool)` while `tryGenerateWithPkgAI` returns `(response, bool, error)`. This asymmetry was flagged in a previous review. The current implementation cannot distinguish between "runtime unavailable" and "runtime failed with a specific error," potentially returning a misleading error message.

pkg/connector/pkg_ai_provider_bridge.go (6)
18-44: Potential URL matching order issue for Google domains. The `googleapis.com` check on line 31 is too broad and could match before more specific Google endpoints. However, the current ordering places the more specific checks (`cloudcode-pa`, `aiplatform`, `generativelanguage`) before the generic `googleapis.com` check, which is correct. The logic looks sound.
46-60: LGTM! The model construction logic correctly infers provider, API, and applies a minimum token guard using `max()`.
62-90: Missing default fallback case for some providers. The switch returns explicit APIs for known providers, but the function could silently return `APIOpenAIResponses` for future providers that aren't covered. The model-prefix heuristics provide a reasonable fallback.
175-194: Race between receiving first event and timeout could cause duplicate event emission. The `select` on lines 175-195 waits for either the first event or a 50ms timeout. However, if an event arrives just as the timeout fires, the `mapped` channel may still have that event, and the caller receives `mapped` directly without the first event being re-emitted. This could cause the first event to be delivered correctly, so the logic appears sound. The first-event relay pattern on lines 183-190 correctly forwards the first event before continuing.
238-286: LGTM! The response transformation correctly handles text, thinking, and tool call content blocks. The finish reason normalization and usage mapping are appropriate.
137-196: Potential goroutine leak mitigated by context cancellation in producer. In the `time.After` case (lines 192-194), `mapped` is returned directly without spawning a relay goroutine. If the caller abandons consumption mid-stream without cancelling the context, the producer goroutine in `streamEventsFromAIStream` may block on send. However, the producer goroutine respects context cancellation and will exit cleanly when the context is cancelled, preventing indefinite blocking. This design is sound provided callers monitor context cancellation, which is done in production code.

pkg/connector/pkg_ai_provider_bridge_test.go (4)
11-15: Test validates current behavior but may become stale. This test asserts that `pkgAIProviderRuntimeEnabled()` always returns `true`. If gating logic is added later, this test will break. Consider documenting the intent or making the test more adaptive.
46-135: Good coverage of model building scenarios. The test covers multiple providers, API mappings, reasoning flag detection, and heuristic-based inference. This provides solid regression protection.
161-180: Test relies on runtime behavior without credentials. This test expects an error event when calling Azure without credentials. While this validates the error path, consider adding a comment explaining that this is an integration-style test that depends on actual provider registration.
228-250: LGTM! The test validates content extraction, finish reason mapping, tool call transformation, and usage mapping comprehensively.
pkg/connector/streaming_runtime_selector.go (4)
247-271: Silent handling of nil tool returns empty definition. `toToolDefinitionFromAgentTool` returns an empty `ToolDefinition{}` for nil input (line 249), which may pass deduplication if the name is empty. However, `dedupeToolDefinitionsByName` filters out empty names (lines 235-236), so this is handled correctly.
311-320: Context cancellation handling is appropriate. The code correctly handles context cancellation by flushing partial messages and emitting abort UI events. Good defensive coding.
340-350: Tool output messages correctly appended for next round. The logic properly appends assistant content with tool calls and their outputs to `currentParams.Messages` for the next inference round.
79-170: No issue found. The `ExtractUserContent` function does process text from `OfArrayOfContentParts`, but the subsequent loop (lines 91-98) filters only for `OfImageURL` and does not duplicate text extraction. Text content is extracted once via `ExtractUserContent`, and image content is extracted separately via the loop; no duplication occurs.

pkg/connector/streaming_runtime_selector_test.go (4)
16-26: Good coverage of streaming path selection. Tests cover audio-forced path, pkg_ai preferred path, and model API variations. The assertions are clear.
28-71: Comprehensive message conversion test. Tests system message filtering, role mapping, image content parts, and tool message handling. Good edge case coverage.
73-89: LGTM! The test validates that the explicit system prompt takes precedence and that system messages are filtered from mapped messages.
131-145: LGTM! The credential extraction test validates the happy path with an `OpenAIProvider`. Consider adding a negative test case for non-OpenAI providers.
case "tool":
	out = append(out, UnifiedMessage{
		Role:       RoleTool,
		Content:    []ContentPart{{Type: ContentTypeText, Text: content}},
		ToolCallID: "tool_" + strconv.FormatInt(now, 10),
	})
}
Synthetic tool call ID uses timestamp with limited uniqueness.
Using `strconv.FormatInt(now, 10)` for the tool call ID prefix creates the same ID for all tool messages processed in a single millisecond. Consider using a counter or UUID for uniqueness.
Suggested fix
func chatPromptToUnifiedMessages(prompt []openai.ChatCompletionMessageParamUnion) []UnifiedMessage {
out := make([]UnifiedMessage, 0, len(prompt))
- now := time.Now().UnixMilli()
+ toolCounter := 0
for _, msg := range prompt {
// ... existing code ...
case "tool":
out = append(out, UnifiedMessage{
Role: RoleTool,
Content: []ContentPart{{Type: ContentTypeText, Text: content}},
- ToolCallID: "tool_" + strconv.FormatInt(now, 10),
+ ToolCallID: "tool_" + strconv.Itoa(toolCounter),
})
+ toolCounter++
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/connector/streaming_runtime_selector.go` around lines 160 - 166, The tool
message creation in the switch branch that builds UnifiedMessage (case "tool")
uses strconv.FormatInt(now, 10) for ToolCallID which can collide within the same
millisecond; change ToolCallID generation in that branch to use a
guaranteed-unique value (e.g., uuid.NewString() or an atomically incremented
counter) instead of the timestamp. Update the code that sets ToolCallID on the
UnifiedMessage to produce "tool_"+<uniqueID> (preserving the "tool_" prefix) and
import/use the UUID or atomic counter helper so each tool message created by the
switch branch yields a unique ToolCallID.
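The counter-based alternative suggested above can be sketched as a self-contained snippet; `nextToolCallID` is a hypothetical helper, not the package's real code, and the atomic counter makes IDs unique even when messages are converted concurrently:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// toolCallSeq is a process-wide counter; atomic increments keep the
// generated IDs unique even under concurrent message conversion.
var toolCallSeq atomic.Int64

// nextToolCallID sketches the counter-based approach from the suggestion
// above; the "tool_" prefix mirrors the original code.
func nextToolCallID() string {
	return fmt.Sprintf("tool_%d", toolCallSeq.Add(1))
}

func main() {
	fmt.Println(nextToolCallID()) // tool_1
	fmt.Println(nextToolCallID()) // tool_2
}
```

Unlike the millisecond timestamp, two calls in the same instant can never collide, and the IDs stay short and deterministic for tests.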
203b740 to cb48fce
Actionable comments posted: 19
♻️ Duplicate comments (2)
pkg/ai/oauth/google_antigravity.go (1)
8-13: ⚠️ Potential issue | 🔴 Critical: Hardcoded OAuth client secret must be removed.
The client secret on line 10 is committed to source control, which is a credential leak. This was flagged in a previous review and remains unaddressed. The secret should be read from an environment variable or secret manager at runtime.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/oauth/google_antigravity.go` around lines 8 - 13, The hardcoded credential antigravityClientSecret must be removed; replace the constant with a runtime lookup (e.g., read from an environment variable or secret manager) and ensure code using antigravityClientSecret retrieves the value via that lookup (or an init function) and fails fast with a clear error if the secret is missing; update the definition of antigravityClientSecret to be populated from os.Getenv("ANTIGRAVITY_CLIENT_SECRET") or from your secret manager client and remove the literal string.
pkg/ai/oauth/anthropic.go (1)
25-36: ⚠️ Potential issue | 🟠 Major: Remove the invalid `code=true` parameter.
The `params.Set("code", "true")` on line 27 is not a valid Anthropic OAuth authorization parameter. Standard OAuth 2.0 + PKCE parameters are response_type, client_id, redirect_uri, scope, code_challenge, code_challenge_method, and state. This undocumented parameter may cause the authorization request to fail.
 func BuildAnthropicAuthorizeURL(codeChallenge string, state string) string {
 	params := url.Values{}
-	params.Set("code", "true")
 	params.Set("client_id", anthropicClientID)
 	params.Set("response_type", "code")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/oauth/anthropic.go` around lines 25 - 36, The BuildAnthropicAuthorizeURL function is adding an invalid parameter ("code" set to "true") to the OAuth authorization query; remove the params.Set("code", "true") line so the query only includes standard OAuth/PKCE parameters (response_type, client_id, redirect_uri, scope, code_challenge, code_challenge_method, state) to avoid breaking the Anthropic authorization request.
🧹 Nitpick comments (25)
pkg/ai/oauth/pkce.go (1)
14-26: LGTM! Correct PKCE implementation per RFC 7636. The implementation correctly:
- Uses `crypto/rand` for cryptographically secure random generation
- Produces a 43-character verifier (minimum per RFC 7636, 32 bytes base64-encoded)
- Uses `RawURLEncoding` (no padding) as required by PKCE
- Computes the challenge as `BASE64URL(SHA256(code_verifier))` per the S256 method
Consider adding doc comments for the exported type and function to improve API documentation:
📝 Optional: Add documentation comments
+// PKCE holds the verifier and challenge for OAuth 2.0 PKCE flow (RFC 7636).
 type PKCE struct {
 	Verifier  string
 	Challenge string
 }
+// GeneratePKCE generates a cryptographically secure PKCE verifier and
+// S256 challenge suitable for OAuth 2.0 authorization code flows.
 func GeneratePKCE() (PKCE, error) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/oauth/pkce.go` around lines 14 - 26, Add Go doc comments for the exported PKCE type and the GeneratePKCE function to improve API documentation: above the PKCE type declaration add a concise comment describing the fields (Verifier and Challenge) and their purpose in PKCE, and above GeneratePKCE add a comment that explains what it returns, the algorithm used (S256), that it uses crypto/rand for secure generation, and any guarantees (e.g., verifier length/encoding); ensure comments follow Go doc style starting with the symbol name (e.g., "PKCE ..." and "GeneratePKCE ...").
pkg/ai/oauth/google_antigravity_helpers_test.go (1)
8-37: Test covers main paths; consider a few additional edge cases. The test validates the primary functionality well. For more comprehensive coverage, consider adding:
- Empty map payload `map[string]any{}` (no `cloudaicompanionProject` key)
- Whitespace-only string `map[string]any{"cloudaicompanionProject": " "}`
- Unexpected type `map[string]any{"cloudaicompanionProject": 123}`
These would exercise the fallback to `AntigravityDefaultProjectID()` through different branches.
♻️ Suggested additional test cases
// Empty map - should fallback to default
if got := ResolveAntigravityProjectID(map[string]any{}); got != AntigravityDefaultProjectID() {
	t.Fatalf("expected default project for empty map, got %q", got)
}
// Whitespace-only string - should fallback to default
if got := ResolveAntigravityProjectID(map[string]any{"cloudaicompanionProject": " "}); got != AntigravityDefaultProjectID() {
	t.Fatalf("expected default project for whitespace string, got %q", got)
}
// Unexpected type - should fallback to default
if got := ResolveAntigravityProjectID(map[string]any{"cloudaicompanionProject": 123}); got != AntigravityDefaultProjectID() {
	t.Fatalf("expected default project for non-string/map type, got %q", got)
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/oauth/google_antigravity_helpers_test.go` around lines 8 - 37, Add three edge-case assertions to TestGoogleAntigravityOAuthHelpers to exercise ResolveAntigravityProjectID: call ResolveAntigravityProjectID with an empty map (map[string]any{}), with a whitespace-only string value (map[string]any{"cloudaicompanionProject":" "}), and with an unexpected non-string/non-map type (map[string]any{"cloudaicompanionProject":123}); for each, assert the result equals AntigravityDefaultProjectID() and fail the test with a clear message if not. This ensures ResolveAntigravityProjectID correctly falls back to the default for missing keys, blank strings, and wrong types.
pkg/ai/providers/openai_responses_test.go (1)
34-41: Tighten the GPT-5 fallback assertion. This only checks that the last entry is `developer`, so it would still pass if the fallback clobbered the original user message or accidentally left `reasoning`/`include` populated. I'd assert both that the user input is preserved and that reasoning-only fields are absent in the fallback case.
🧪 Suggested test hardening
 noReasoning := BuildOpenAIResponsesParams(model, ai.Context{
 	Messages: []ai.Message{{Role: ai.RoleUser, Text: "hello"}},
 }, OpenAIResponsesOptions{})
+if _, ok := noReasoning["reasoning"]; ok {
+	t.Fatalf("did not expect reasoning payload when reasoning options are omitted")
+}
+if _, ok := noReasoning["include"]; ok {
+	t.Fatalf("did not expect include entries when reasoning options are omitted")
+}
 input := noReasoning["input"].([]map[string]any)
+if len(input) != 2 || input[0]["role"] != "user" {
+	t.Fatalf("expected original user input to be preserved, got %#v", input)
+}
 last := input[len(input)-1]
 if last["role"] != "developer" {
 	t.Fatalf("expected gpt-5 fallback developer hint when reasoning omitted, got %#v", last)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/openai_responses_test.go` around lines 34 - 41, The test should be tightened by asserting that BuildOpenAIResponsesParams (called with OpenAIResponsesOptions{}) preserves the original user message and only appends a developer fallback without clobbering fields; update the test around variables input and last to (1) verify the penultimate entry still has Role ai.RoleUser and Text "hello" (i.e., the original user message is preserved) and (2) verify the developer fallback (last) exists but does not include reasoning-only fields like "reasoning" or "include" (ensure those keys are absent or nil) so the fallback only adds the developer hint.
pkg/ai/utils/sanitize_unicode.go (1)
14-24: Consider pre-allocating builder capacity. The rune slice correctly pre-allocates capacity, but the `strings.Builder` could also benefit from `out.Grow(len(text))` to reduce allocations during writes.
♻️ Suggested improvement
 var out strings.Builder
+out.Grow(len(text))
 runes := make([]rune, 0, len(text))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/utils/sanitize_unicode.go` around lines 14 - 24, The strings.Builder named out should pre-allocate capacity to reduce allocations: after creating out (the variable out strings.Builder) call out.Grow(len(text)) (using the original text byte length is fine) before you start decoding runes and writing to the builder; this change keeps the existing rune slice pre-allocation (runes := make([]rune, 0, len(text))) and simply grows the builder up front to avoid repeated reallocations during writes.
pkg/connector/ai_adapter.go (2)
93-104: Consider explicit handling for unknown roles. The default case silently falls back to `RoleUser`, which could mask bugs if an unexpected role is passed. Consider logging a warning or returning an error for unknown roles.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/connector/ai_adapter.go` around lines 93 - 104, The mapAIRole function currently maps unknown MessageRole values to aipkg.RoleUser, which can hide bugs; update mapAIRole to explicitly handle unexpected roles by either returning an error (e.g., change signature to (MessageRole) (aipkg.MessageRole, error)) or by keeping the current signature but logging a warning before returning a safe default. Modify callers of mapAIRole accordingly if you change the signature, and reference the function mapAIRole, the MessageRole type and aipkg.MessageRole values (RoleUser, RoleAssistant, RoleTool/RoleToolResult) when implementing the explicit handling.
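The error-returning variant suggested here can be sketched with simplified stand-in types (`Role` and `mapRole` below are illustrative, not the connector's real identifiers):

```go
package main

import (
	"errors"
	"fmt"
)

// Role is a simplified stand-in for the connector's MessageRole type.
type Role string

const (
	RoleUser      Role = "user"
	RoleAssistant Role = "assistant"
	RoleTool      Role = "tool"
)

// mapRole illustrates the error-returning variant: unknown roles surface
// an error instead of silently becoming RoleUser, so caller bugs are not
// masked by a quiet default.
func mapRole(r Role) (Role, error) {
	switch r {
	case RoleUser, RoleAssistant, RoleTool:
		return r, nil
	default:
		return "", errors.New("unknown message role: " + string(r))
	}
}

func main() {
	if _, err := mapRole(Role("mystery")); err != nil {
		fmt.Println("rejected:", err)
	}
	r, _ := mapRole(RoleAssistant)
	fmt.Println(r) // assistant
}
```

Changing the signature does ripple into callers, so the logging-only variant is the lower-friction option if the silent default must stay.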
17-17: Hardcoded timestamp loses source message timing. Setting `Timestamp: 0` discards any timestamp information from the source `UnifiedMessage`. If the source message has timing information, consider propagating it to preserve message ordering context.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/connector/ai_adapter.go` at line 17, Replace the hardcoded Timestamp: 0 with the source message's timestamp so you preserve original timing: use the UnifiedMessage's timestamp field (e.g., src.Timestamp or unifiedMsg.Timestamp) when constructing the AI message in the code that sets Timestamp, and optionally fall back to a sensible default (current time or 0) only if the source timestamp is missing/zero; update the constructor/mapper that sets Timestamp so it propagates UnifiedMessage.Timestamp instead of always writing 0.
pkg/ai/utils/overflow.go (1)
34-36: Regex compiled on every call; consider moving to package level. `regexp.MustCompile` is called inside the function, which compiles the regex on every invocation. This pattern should be moved to the package-level `overflowPatterns` slice for consistency and performance.
♻️ Proposed fix
Add to the package-level patterns:
 var overflowPatterns = []*regexp.Regexp{
 	// ... existing patterns ...
 	regexp.MustCompile(`(?i)token limit exceeded`),
+	regexp.MustCompile(`(?i)^4(00|13)\s*(status code)?\s*\(no body\)`),
 }
Then simplify the function:
 if message.StopReason == ai.StopReasonError && message.ErrorMessage != "" {
 	for _, p := range overflowPatterns {
 		if p.MatchString(message.ErrorMessage) {
 			return true
 		}
 	}
-	if regexp.MustCompile(`(?i)^4(00|13)\s*(status code)?\s*\(no body\)`).MatchString(message.ErrorMessage) {
-		return true
-	}
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/utils/overflow.go` around lines 34 - 36, The regex is being compiled on every call via regexp.MustCompile(`(?i)^4(00|13)\s*(status code)?\s*\(no body\)`) — move this pattern into the package-level overflowPatterns slice as a precompiled regexp.MustCompile entry and replace the in-function regexp.MustCompile call to iterate/check against overflowPatterns instead; update the function that performs the overflow check to use the precompiled pattern from overflowPatterns (and remove the per-call compilation).
pkg/ai/oauth/google_gemini_cli.go (1)
9-15: Embedded OAuth client credentials for CLI application. The static analysis tool flagged the `geminiCliClientSecret` as a potential secret exposure. However, for native/CLI OAuth applications, embedding client credentials is standard practice per RFC 8252, since native apps cannot securely store secrets. Google's own CLI tools (gcloud, etc.) use the same pattern. The PKCE flow (code_challenge) provides the necessary security for public clients.
If this is intentional (public CLI client), consider adding a comment explaining why this is acceptable:
 const (
 	geminiCliClientID = "681255809395-oo8ft2oprdrnp9e3aqf6av3hmdib135j.apps.googleusercontent.com"
+	// geminiCliClientSecret is embedded per RFC 8252 for public/native clients; PKCE provides auth security
 	geminiCliClientSecret = "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"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/oauth/google_gemini_cli.go` around lines 9 - 15, The embedded OAuth client secret geminiCliClientSecret (and associated constants geminiCliClientID, geminiCliRedirectURI, geminiCliAuthURL, geminiCliTokenURL) is flagged as a secret; if this is an intentional public/native CLI client using PKCE, add a concise explanatory comment above these constants documenting that this is a public/native OAuth client per RFC 8252, that secrets for native apps are not considered confidential, that PKCE is used and how (e.g., code_challenge), and note why embedding the values is acceptable and audited; update the comment to reference RFC 8252 and your security rationale so static analysis and future readers understand the intent.
pkg/ai/providers/google_runtime_test.go (1)
42-66: Consider adding EOF check for consistency. Unlike `TestStreamGoogleGenerativeAI_MissingAPIKeyEmitsError`, which verifies EOF after the terminal error event (lines 37-39), this test doesn't verify the stream returns EOF after the error. Consider adding the check for consistency:
 if !strings.Contains(strings.ToLower(evt.Error.ErrorMessage), "project") {
 	t.Fatalf("expected missing project error, got %q", evt.Error.ErrorMessage)
 }
+if _, err := stream.Next(ctx); err != io.EOF {
+	t.Fatalf("expected EOF after terminal event, got %v", err)
+}
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/google_runtime_test.go` around lines 42 - 66, In TestStreamGoogleVertex_MissingEnvEmitsError add the same EOF verification as in TestStreamGoogleGenerativeAI_MissingAPIKeyEmitsError: after asserting the terminal error event from stream.Next(ctx) (the ai.EventError check on the returned evt), call stream.Next(ctx) again and assert it returns io.EOF (or an EOF error) to ensure the stream is closed; update the test to import/use io.EOF if needed and keep references to streamGoogleVertex, TestStreamGoogleVertex_MissingEnvEmitsError, and ai.EventError to locate the changes.
pkg/ai/e2e/cross_provider_handoff_e2e_test.go (2)
108-161: Same variable shadowing issue. The `context` variable at line 108 also shadows the standard library package.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/e2e/cross_provider_handoff_e2e_test.go` around lines 108 - 161, The local variable named `context` shadows the standard library `context` package; rename the variable (e.g., `aiContext` or `testContext`) wherever it is declared as type `ai.Context` in this test (the block that creates ai.Context with SystemPrompt, Tools, Messages and uses doubleNumberTool(), RoleUser/RoleAssistant/RoleToolResult, etc.) and update all references to that variable in the test so the stdlib `context` package is no longer shadowed.
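Why the rename matters can be shown with a minimal, self-contained sketch; the `aiCtx` map below is a hypothetical stand-in for the real `ai.Context` struct, which is not reproduced here:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// withFreshContext demonstrates the fix: because no local variable is
// named "context", the stdlib package stays reachable and
// context.WithTimeout compiles. With a local variable named context in
// this scope, that call would fail to compile.
func withFreshContext() bool {
	aiCtx := map[string]string{"SystemPrompt": "You are a helpful assistant."} // renamed from "context"
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	return len(aiCtx) == 1 && ctx.Err() == nil
}

func main() {
	fmt.Println(withFreshContext()) // true
}
```

Shadowing is a compile-time trap rather than a runtime bug here, which is why the review flags it as a maintenance issue even though the current tests pass.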
22-75: Variable name `context` shadows the standard library package. While the `context` package isn't used directly in this function, the variable name `context` at line 22 shadows the standard library import, which can cause confusion and maintenance issues. Consider renaming to `aiContext`, `testContext`, or `handoffContext`.
-	context := ai.Context{
+	aiCtx := ai.Context{
 		SystemPrompt: "You are a helpful assistant.",
 		// ...
 	}
-	response, err := ai.CompleteSimple(model, context, &ai.SimpleStreamOptions{
+	response, err := ai.CompleteSimple(model, aiCtx, &ai.SimpleStreamOptions{
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/e2e/cross_provider_handoff_e2e_test.go` around lines 22 - 75, The variable named `context` (of type ai.Context) shadows the standard library package import; rename this variable (e.g., `aiContext`, `testContext`, or `handoffContext`) and update all references in this snippet and surrounding test code where `context` is used to the new name so it no longer conflicts with the `context` package; ensure the ai.Context literal and any subsequent reads/writes (including uses of Tools, Messages, SystemPrompt) reference the new identifier.
pkg/ai/env_api_keys.go (1)
77-92: ADC path is Unix-specific; Windows uses a different location. The hardcoded path `~/.config/gcloud/application_default_credentials.json` is Unix-specific. On Windows, the ADC file is typically at `%APPDATA%\gcloud\application_default_credentials.json`. If cross-platform support is needed, consider handling both paths:
 func hasVertexADCCredentials() bool {
 	if path := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS"); path != "" {
 		if _, err := os.Stat(path); err == nil {
 			return true
 		}
 	}
 	home, err := os.UserHomeDir()
 	if err != nil || home == "" {
 		return false
 	}
+	// Unix-like systems
 	adcPath := filepath.Join(home, ".config", "gcloud", "application_default_credentials.json")
 	if _, err := os.Stat(adcPath); err == nil {
 		return true
 	}
+	// Windows: check %APPDATA%\gcloud\...
+	if appData := os.Getenv("APPDATA"); appData != "" {
+		windowsPath := filepath.Join(appData, "gcloud", "application_default_credentials.json")
+		if _, err := os.Stat(windowsPath); err == nil {
+			return true
+		}
+	}
 	return false
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/env_api_keys.go` around lines 77 - 92, The hasVertexADCCredentials function currently only checks the Unix ADC location (~/.config/gcloud/application_default_credentials.json); update it to also check the Windows ADC location by deriving the correct config directory (e.g., use os.UserConfigDir() or, if unavailable, use the APPDATA environment variable) and test for filepath.Join(configDir, "gcloud", "application_default_credentials.json") in addition to the existing home-based path so both Unix and Windows ADC files are detected.
pkg/ai/oauth/anthropic.go (1)
38-49: Clarify the `#` delimiter usage in authorization code parsing. Using `#` as a delimiter is non-standard for OAuth authorization codes. Typically, authorization codes come via query parameters (`?code=...&state=...`) in the redirect URL. If this is intentional for a custom Anthropic flow or manual copy-paste input, consider adding a comment explaining the expected input format.
+// ParseAnthropicAuthorizationCode parses authorization code input in the format "code#state".
+// This format is used when the code and state are provided as a combined string
+// (e.g., from manual user input or a custom callback flow).
 func ParseAnthropicAuthorizationCode(input string) (code string, state string) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/oauth/anthropic.go` around lines 38 - 49, The ParseAnthropicAuthorizationCode function currently splits input on a non-standard '#' delimiter; update it to robustly handle full redirect URLs by first attempting to parse URL query parameters for "code" and "state" (e.g., from ?code=...&state=...), then falling back to parsing the fragment or legacy "#" split for manual copy-paste input, and add a concise comment above ParseAnthropicAuthorizationCode explaining the accepted input formats (full redirect URL, fragment, or manual "code#state") and the parsing precedence.
pkg/ai/providers/simple_options.go (1)
88-100: Consider using Go 1.21+ builtin `min`/`max` functions. The project targets Go 1.25.0, which supports the builtin `min` and `max` functions introduced in Go 1.21. These helpers can be replaced with the builtins for cleaner, more idiomatic code. Usage includes `minInt` (4 occurrences across simple_options.go and google_gemini_cli_runtime.go) and `maxInt` (1 occurrence).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/simple_options.go` around lines 88 - 100, Replace the custom helpers minInt and maxInt with Go 1.21+ builtins by removing the minInt and maxInt functions and updating all call sites to use the builtin min and max (e.g., replace minInt(x,y) with min(x,y) and maxInt(x,y) with max(x,y)); ensure you update all occurrences (notably in simple_options.go and google_gemini_cli_runtime.go) and remove any now-unused helper definitions to keep the code idiomatic.
pkg/ai/models.go (1)
79-84: `ModelsAreEqual` returns `false` when both pointers are `nil`. This may be intentional, but semantically two `nil` pointers often represent "equal absence." If this behavior is deliberate, consider adding a brief doc comment clarifying the design choice.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/models.go` around lines 79 - 84, ModelsAreEqual currently returns false when both a and b are nil; change its semantics to treat two nil pointers as equal by updating the nil checks in ModelsAreEqual so that it returns true if both a and b are nil, false if exactly one is nil, and otherwise compare a.ID and a.Provider to b.ID and b.Provider; update or add a brief doc comment for ModelsAreEqual to document that nil==nil is considered equal.
pkg/ai/providers/openai_completions_runtime.go (1)
70-78: Defaulting to `context.Background()` may mask bugs. If `options.StreamOptions.Ctx` is `nil`, the stream becomes uncancellable. Consider logging a warning or requiring a non-nil context to ensure proper lifecycle management.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/openai_completions_runtime.go` around lines 70 - 78, The code currently defaults runCtx to context.Background(), making the OpenAI stream uncancellable; instead detect when options.StreamOptions.Ctx is nil and fail fast: if options.StreamOptions.Ctx == nil, call pushProviderError(stream, model, "nil StreamOptions.Ctx; a cancellable context is required") and return rather than using context.Background(); locate the logic around options.StreamOptions.Ctx, runCtx, and the NewStreaming call and replace the defaulting behavior with this validation so NewStreaming always receives a non-nil cancellable context.
pkg/ai/utils/validation.go (1)
62-71: `cloneMap` performs a shallow copy. Nested `map[string]any` or slice values will share references with the original. If callers mutate nested structures in the returned `args`, they'll affect the original `toolCall.Arguments`. This may be acceptable if downstream code is read-only, but worth documenting.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/utils/validation.go` around lines 62 - 71, cloneMap currently does a shallow copy which leaves nested map[string]any and slice values shared with the original (e.g. toolCall.Arguments), causing mutations to propagate; update cloneMap to perform a deep copy: for each value check its type and recursively clone map[string]any, clone slices (and slices of interfaces by copying elements and recursively cloning nested maps/slices), and otherwise assign primitive/immutable values directly, ensuring the returned map is fully independent of the original toolCall.Arguments; keep the function name cloneMap and preserve the same signature so callers are unchanged.
pkg/ai/e2e/context_overflow_test.go (1)
58-68: Hardcoded context window defaults may become stale. The function returns `400000` for gpt-5 variants and `128000` as default. As models evolve, these values may drift. Consider documenting these assumptions or logging when defaults are used.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/e2e/context_overflow_test.go` around lines 58 - 68, The openAIE2EContextWindow function currently returns hardcoded defaults (400000 for gpt-5 variants and 128000 otherwise); update it to surface when a default is being used by emitting a clear runtime log message and add a short comment documenting the assumption, and prefer reading a new environment fallback like PI_AI_E2E_OPENAI_CONTEXT_WINDOW_DEFAULT first; specifically modify openAIE2EContextWindow to: 1) attempt the explicit env parse as now, 2) determine the default based on model string, 3) log (via the local test/log package used in tests or standard log) which default was chosen and why (model value or missing env), and 4) add an inline comment noting these numeric defaults are conservative and should be reviewed as models evolve.
pkg/ai/providers/amazon_bedrock_runtime.go (2)
18-123: Non-streaming Converse call despite streaming entry point name. The function is named `streamBedrockConverse` but uses `client.Converse` (line 47), which is the non-streaming synchronous API. For true streaming, Bedrock provides `ConverseStream`. This appears intentional based on the implementation (processing the full response and pushing events), but the naming could be misleading.
If this is intentional (e.g., simulating streaming from a non-streaming response), consider adding a comment. If streaming is desired, use `ConverseStreamInput` and iterate over the stream.
+// streamBedrockConverse wraps the synchronous Converse API, pushing events
+// to simulate streaming behavior. For true server-side streaming, consider
+// using ConverseStream in the future.
 func streamBedrockConverse(model ai.Model, c ai.Context, options *ai.StreamOptions) *ai.AssistantMessageEventStream {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/amazon_bedrock_runtime.go` around lines 18 - 123, The function streamBedrockConverse is named and wired to push streaming events but calls the synchronous client.Converse; either document that this is a simulated stream or switch to the real streaming API: replace client.Converse with bedrockruntime's ConverseStream (use ConverseStreamInput built from BuildBedrockConverseInput/buildBedrockConverseInput), iterate the returned stream events to push ai.AssistantMessageEvent deltas as they arrive, and update message/usage/StopReason handling accordingly; if you intend to keep the current synchronous behavior, add a clear comment in streamBedrockConverse explaining it simulates streaming from the full response and why client.Converse is used.
41-46: Redundant payload construction. The code builds `payload` via `BuildBedrockConverseInput` (line 41) for the `OnPayload` callback, then builds `input` via `buildBedrockConverseInput` (line 46) for the actual API call. These appear to be different functions producing different formats. Ensure this is intentional; if both are needed, the naming could be clearer.
💡 Consider adding comments clarifying the two different inputs
+	// Build serializable payload for debugging/observability callback
 	payload := BuildBedrockConverseInput(model, c, BedrockOptions{StreamOptions: streamOptions})
 	if streamOptions.OnPayload != nil {
 		streamOptions.OnPayload(payload)
 	}
+	// Build typed SDK input for the actual API call
 	input := buildBedrockConverseInput(model, c, streamOptions)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/amazon_bedrock_runtime.go` around lines 41 - 46, The code constructs two different inputs—payload via BuildBedrockConverseInput and input via buildBedrockConverseInput—and calls streamOptions.OnPayload(payload) with the first but uses the second for the API; clarify intent and remove redundancy by either (A) reusing the same construction for both callbacks and the API (call buildBedrockConverseInput once and pass that object to OnPayload), or (B) if both formats are intentionally different, add a clear comment above the block explaining why BuildBedrockConverseInput (for OnPayload) differs from buildBedrockConverseInput (for the API) and consider renaming one of the functions to make their distinct purposes explicit; update the code around streamOptions.OnPayload, BuildBedrockConverseInput, and buildBedrockConverseInput accordingly.pkg/ai/oauth/openai_codex.go (1)
45-67: Consider input validation edge cases in ParseOpenAICodexAuthorizationInput. The function handles multiple input formats gracefully, but there's a potential ambiguity:
- Lines 56-58: If the input contains `#`, it splits on `#` and treats the parts as code and state. This assumes a specific fragment format that may not always hold.
- The fallback at line 66 returns the entire value as the code, which may be unexpected for malformed inputs.
This appears intentional for flexibility, but consider documenting the expected input formats.
📝 Consider adding documentation
+// ParseOpenAICodexAuthorizationInput extracts code and state from various input formats:
+// - Full URL with query parameters: https://...?code=X&state=Y
+// - Fragment-style: code#state
+// - Query string: code=X&state=Y
+// - Plain code value
 func ParseOpenAICodexAuthorizationInput(input string) (code string, state string) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/oauth/openai_codex.go` around lines 45 - 67, ParseOpenAICodexAuthorizationInput currently assumes a raw split on "#" yields code and state and otherwise returns the entire input as code; update the function to validate inputs more robustly by (1) when "#" is present, treat the fragment as query parameters (use url.ParseQuery on the part after "#") and extract "code" and "state" only if present, otherwise do not treat the left/right split as valid code/state, (2) when parsing a full URL or raw query, ensure the extracted "code" value is non-empty before returning it and return empty strings for malformed inputs, and (3) add a concise doc comment on ParseOpenAICodexAuthorizationInput describing accepted input formats (full URL with ?code=, fragment with # containing query params, or raw query string) so callers know the expected behavior.

pkg/ai/providers/openai_codex_responses.go (1)
126-248: Consider handling unknown event types explicitly.

The `ProcessCodexSSEPayload` function silently ignores unknown event types in the switch statement. While this may be intentional for forward compatibility, logging or tracking unknown events could help with debugging during development.

Additionally, the function accumulates text in `output.Content[currentTextIndex].Text` during deltas (line 174) but then overwrites it with `finalText` on `response.output_item.done` (line 198). This is correct behavior, but the intermediate accumulation is only useful for partial event emission.

💡 Consider adding a default case for observability
```diff
 case "error":
 	msg, _ := event["message"].(string)
 	output.StopReason = ai.StopReasonError
 	output.ErrorMessage = msg
 	emit(ai.AssistantMessageEvent{
 		Type:   ai.EventError,
 		Reason: ai.StopReasonError,
 		Error:  *output,
 	})
+default:
+	// Unknown event types are silently ignored for forward compatibility
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/openai_codex_responses.go` around lines 126 - 248, ProcessCodexSSEPayload currently ignores unknown event types in its switch; add a default branch in the switch over eventType to record unknown events for observability by either logging the eventType + raw data (e.g., using the package logger or fmt/log) and/or emitting an ai.AssistantMessageEvent (e.g., Type: ai.EventUnknown, Partial: *output, and include the raw event map or eventType in the event payload) so unknown SSE messages are visible in logs/telemetry without changing existing handling for known cases (update the switch in ProcessCodexSSEPayload to include this default).

pkg/ai/providers/openai_responses_runtime.go (1)
208-223: Consider simplifying redundant default handling.

Lines 219-222 have a condition that always returns `ai.StopReasonStop` in both branches: the empty string check at line 219 returns the same value as the unconditional return at line 222.

♻️ Suggested simplification
```diff
 func mapOpenAIResponseStatus(status responses.ResponseStatus) ai.StopReason {
 	switch status {
 	case responses.ResponseStatusCompleted:
 		return ai.StopReasonStop
 	case responses.ResponseStatusInProgress, responses.ResponseStatusIncomplete:
 		return ai.StopReasonLength
 	case responses.ResponseStatusCancelled:
 		return ai.StopReasonAborted
 	case responses.ResponseStatusFailed:
 		return ai.StopReasonError
 	default:
-		if strings.TrimSpace(string(status)) == "" {
-			return ai.StopReasonStop
-		}
 		return ai.StopReasonStop
 	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/openai_responses_runtime.go` around lines 208 - 223, The default branch in mapOpenAIResponseStatus contains a redundant empty-string check that returns ai.StopReasonStop in both branches; simplify by replacing the entire if/else in the default case with a single return ai.StopReasonStop. Update the mapOpenAIResponseStatus function so the default path just returns ai.StopReasonStop (keeping existing cases for ResponseStatusCompleted, InProgress, Incomplete, Cancelled, and Failed unchanged).

pkg/ai/providers/google_gemini_cli_runtime.go (1)
269-272: Consider checking the validity bool from `GeminiEmptyStreamBackoff`.

The second return value indicating whether the backoff is valid is discarded. While the `ShouldRetryGeminiEmptyStream` check on line 269 should prevent invalid attempts, explicitly checking the bool adds defensive clarity.

♻️ Suggested improvement
```diff
 if !ShouldRetryGeminiEmptyStream(false, emptyAttempt) {
 	break
 }
-delay, _ := GeminiEmptyStreamBackoff(emptyAttempt + 1)
+delay, ok := GeminiEmptyStreamBackoff(emptyAttempt + 1)
+if !ok {
+	break
+}
 if sleepErr := sleepWithContext(runCtx, delay); sleepErr != nil {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/google_gemini_cli_runtime.go` around lines 269 - 272, The call to GeminiEmptyStreamBackoff currently discards its validity bool; update the code in the retry loop (around ShouldRetryGeminiEmptyStream and the GeminiEmptyStreamBackoff call) to capture the second return (e.g., delay, ok := GeminiEmptyStreamBackoff(emptyAttempt + 1)) and explicitly handle the case where ok is false (for example, break or return an error) before using delay, ensuring defensive clarity and preventing use of an invalid backoff value.

pkg/ai/providers/openai_responses_shared_test.go (1)
67-75: Consider strengthening test assertions for better coverage.

The combined assertion `callID == "call_abc" && strings.Contains(itemID, "+")` only fails if both conditions are true simultaneously. If `callID` unexpectedly changed (e.g., due to a regression), the test would still pass as long as `itemID` was sanitized.

♻️ Suggested improvement for clearer assertions
```diff
 functionCall := output[2]
 callID, _ := functionCall["call_id"].(string)
 itemID, _ := functionCall["id"].(string)
-if callID == "call_abc" && strings.Contains(itemID, "+") {
-	t.Fatalf("expected normalized function call ids, got call=%q item=%q", callID, itemID)
+if callID != "call_abc" {
+	t.Fatalf("expected call_id to remain call_abc, got %q", callID)
+}
+if strings.ContainsAny(itemID, "+/=") {
+	t.Fatalf("expected item id sanitized, got %q", itemID)
 }
 if !strings.HasPrefix(itemID, "fc") {
 	t.Fatalf("expected function call item id to start with fc, got %q", itemID)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/providers/openai_responses_shared_test.go` around lines 67 - 75, The test currently combines checks so a regression in one value can be masked; in the block that inspects functionCall (variables callID and itemID) separate the combined condition into two independent assertions: assert callID is not equal to "call_abc" and assert itemID does not contain "+" (e.g., use two t.Fatalf/t.Errorf calls), while keeping the existing check that itemID starts with "fc", so modify the checks around functionCall / callID / itemID to validate each invariant independently.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 6576e601-b028-492e-b435-ad7a93b40429
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (108)
README.md, cmd/generate-ai-models/main.go, docs/pkg-ai-runtime-migration.md, docs/pkg-ai-test-parity.md, go.mod, pkg/ai/api_registry.go, pkg/ai/api_registry_test.go, pkg/ai/e2e/abort_test.go, pkg/ai/e2e/context_overflow_test.go, pkg/ai/e2e/cross_provider_handoff_e2e_test.go, pkg/ai/e2e/empty_e2e_test.go, pkg/ai/e2e/helpers_test.go, pkg/ai/e2e/image_tool_result_e2e_test.go, pkg/ai/e2e/openai_reasoning_replay_e2e_test.go, pkg/ai/e2e/parity_openai_test.go, pkg/ai/e2e/parity_provider_runtime_test.go, pkg/ai/e2e/stream_test.go, pkg/ai/e2e/tool_call_id_normalization_e2e_test.go, pkg/ai/e2e/xhigh_e2e_test.go, pkg/ai/env_api_keys.go, pkg/ai/env_api_keys_test.go, pkg/ai/event_stream.go, pkg/ai/event_stream_test.go, pkg/ai/models.go, pkg/ai/models_generated.go, pkg/ai/models_test.go, pkg/ai/oauth/anthropic.go, pkg/ai/oauth/anthropic_provider_helpers_test.go, pkg/ai/oauth/github_copilot.go, pkg/ai/oauth/github_copilot_provider_helpers_test.go, pkg/ai/oauth/google_antigravity.go, pkg/ai/oauth/google_antigravity_helpers_test.go, pkg/ai/oauth/google_gemini_cli.go, pkg/ai/oauth/google_gemini_cli_helpers_test.go, pkg/ai/oauth/index.go, pkg/ai/oauth/index_test.go, pkg/ai/oauth/openai_codex.go, pkg/ai/oauth/openai_codex_helpers_test.go, pkg/ai/oauth/pkce.go, pkg/ai/oauth/pkce_test.go, pkg/ai/oauth/types.go, pkg/ai/providers/amazon_bedrock.go, pkg/ai/providers/amazon_bedrock_runtime.go, pkg/ai/providers/amazon_bedrock_runtime_test.go, pkg/ai/providers/amazon_bedrock_test.go, pkg/ai/providers/anthropic.go, pkg/ai/providers/anthropic_runtime.go, pkg/ai/providers/anthropic_runtime_test.go, pkg/ai/providers/anthropic_test.go, pkg/ai/providers/azure_openai_responses.go, pkg/ai/providers/azure_openai_responses_runtime.go, pkg/ai/providers/azure_openai_responses_runtime_test.go, pkg/ai/providers/azure_openai_responses_test.go, pkg/ai/providers/cache_retention_test.go, pkg/ai/providers/github_copilot_headers.go, pkg/ai/providers/github_copilot_headers_test.go, pkg/ai/providers/google.go, pkg/ai/providers/google_gemini_cli.go, pkg/ai/providers/google_gemini_cli_runtime.go, pkg/ai/providers/google_gemini_cli_runtime_test.go, pkg/ai/providers/google_gemini_cli_test.go, pkg/ai/providers/google_runtime.go, pkg/ai/providers/google_runtime_test.go, pkg/ai/providers/google_shared.go, pkg/ai/providers/google_shared_test.go, pkg/ai/providers/google_test.go, pkg/ai/providers/google_tool_call_missing_args_test.go, pkg/ai/providers/google_vertex.go, pkg/ai/providers/google_vertex_test.go, pkg/ai/providers/openai_codex_responses.go, pkg/ai/providers/openai_codex_responses_runtime.go, pkg/ai/providers/openai_codex_responses_runtime_test.go, pkg/ai/providers/openai_codex_responses_test.go, pkg/ai/providers/openai_completions.go, pkg/ai/providers/openai_completions_convert_test.go, pkg/ai/providers/openai_completions_runtime.go, pkg/ai/providers/openai_completions_runtime_test.go, pkg/ai/providers/openai_completions_test.go, pkg/ai/providers/openai_responses.go, pkg/ai/providers/openai_responses_runtime.go, pkg/ai/providers/openai_responses_runtime_test.go, pkg/ai/providers/openai_responses_shared.go, pkg/ai/providers/openai_responses_shared_test.go, pkg/ai/providers/openai_responses_test.go, pkg/ai/providers/register_builtins.go, pkg/ai/providers/register_builtins_test.go, pkg/ai/providers/runtime_abort.go, pkg/ai/providers/runtime_abort_test.go, pkg/ai/providers/simple_options.go, pkg/ai/providers/simple_options_test.go, pkg/ai/providers/transform_messages.go, pkg/ai/providers/transform_messages_test.go, pkg/ai/stream.go, pkg/ai/types.go, pkg/ai/utils/json_parse.go, pkg/ai/utils/json_parse_test.go, pkg/ai/utils/overflow.go, pkg/ai/utils/overflow_test.go, pkg/ai/utils/sanitize_unicode.go, pkg/ai/utils/sanitize_unicode_test.go, pkg/ai/utils/type_helpers.go, pkg/ai/utils/type_helpers_test.go, pkg/ai/utils/usage.go, pkg/ai/utils/usage_test.go, pkg/ai/utils/validation.go, pkg/ai/utils/validation_test.go, pkg/connector/ai_adapter.go, pkg/connector/ai_adapter_test.go
🚧 Files skipped from review as they are similar to previous changes (45)
- pkg/ai/oauth/github_copilot_provider_helpers_test.go
- pkg/ai/providers/register_builtins.go
- pkg/ai/oauth/pkce_test.go
- pkg/ai/providers/github_copilot_headers.go
- docs/pkg-ai-test-parity.md
- pkg/ai/providers/google_test.go
- pkg/ai/utils/type_helpers.go
- pkg/ai/providers/google.go
- pkg/ai/providers/openai_completions_test.go
- pkg/connector/ai_adapter_test.go
- pkg/ai/oauth/openai_codex_helpers_test.go
- docs/pkg-ai-runtime-migration.md
- pkg/ai/oauth/types.go
- pkg/ai/utils/validation_test.go
- pkg/ai/providers/transform_messages_test.go
- pkg/ai/providers/openai_completions_convert_test.go
- pkg/ai/utils/sanitize_unicode_test.go
- pkg/ai/e2e/tool_call_id_normalization_e2e_test.go
- pkg/ai/providers/azure_openai_responses_runtime_test.go
- pkg/ai/providers/runtime_abort_test.go
- pkg/ai/event_stream.go
- pkg/ai/providers/amazon_bedrock_runtime_test.go
- pkg/ai/providers/google_gemini_cli_test.go
- pkg/ai/stream.go
- pkg/ai/providers/github_copilot_headers_test.go
- pkg/ai/providers/openai_responses.go
- pkg/ai/utils/json_parse.go
- pkg/ai/providers/azure_openai_responses_test.go
- pkg/ai/models_generated.go
- pkg/ai/providers/openai_codex_responses_test.go
- pkg/ai/oauth/github_copilot.go
- pkg/ai/env_api_keys_test.go
- pkg/ai/e2e/empty_e2e_test.go
- cmd/generate-ai-models/main.go
- pkg/ai/providers/simple_options_test.go
- pkg/ai/utils/json_parse_test.go
- pkg/ai/providers/google_vertex_test.go
- pkg/ai/providers/cache_retention_test.go
- pkg/ai/oauth/anthropic_provider_helpers_test.go
- pkg/ai/providers/google_tool_call_missing_args_test.go
- pkg/ai/e2e/openai_reasoning_replay_e2e_test.go
- pkg/ai/e2e/abort_test.go
- pkg/ai/oauth/index_test.go
- pkg/ai/providers/google_gemini_cli.go
- pkg/ai/providers/openai_responses_runtime_test.go
```go
if !strings.Contains(text, tc.expectKeyword) && !strings.Contains(text, "red") {
	t.Fatalf("expected response to reference tool result content, got %q", text)
}
```
Assertion logic may mask test failures.
The condition `!strings.Contains(text, tc.expectKeyword) && !strings.Contains(text, "red")` only fails if BOTH keywords are absent. For the "text-and-image-tool-result" case expecting "pixel", finding "red" instead would pass the test even though "pixel" wasn't mentioned. Consider making the fallback explicit per test case or removing it.
🛠️ Suggested fix
```diff
-	if !strings.Contains(text, tc.expectKeyword) && !strings.Contains(text, "red") {
+	if !strings.Contains(text, tc.expectKeyword) {
 		t.Fatalf("expected response to reference tool result content, got %q", text)
 	}
```

Or if "red" is always acceptable as a fallback for all image tests, document this clearly:

```diff
+	// Note: "red" is always acceptable since all test images are red
 	if !strings.Contains(text, tc.expectKeyword) && !strings.Contains(text, "red") {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-	if !strings.Contains(text, tc.expectKeyword) && !strings.Contains(text, "red") {
+	if !strings.Contains(text, tc.expectKeyword) {
 		t.Fatalf("expected response to reference tool result content, got %q", text)
 	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/ai/e2e/image_tool_result_e2e_test.go` around lines 110 - 112, The
assertion currently allows the test to pass if either tc.expectKeyword OR the
literal "red" appears, which can mask failures (see strings.Contains(text,
tc.expectKeyword) && strings.Contains(text, "red") usage); change the check to
require the expected keyword explicitly by removing the unconditional "red"
fallback (replace the if with if !strings.Contains(text, tc.expectKeyword) {
t.Fatalf(...) }), or if "red" is a valid alternate for specific cases, add a
test-field (e.g., tc.fallbackKeyword or tc.allowFallback bool) and update the
assertion to check for tc.expectKeyword OR tc.fallbackKeyword so only tests that
declare the fallback accept it (update the test case
"text-and-image-tool-result" accordingly).
```go
first, err := ai.Complete(model, context, &ai.StreamOptions{
	APIKey:    apiKey,
	MaxTokens: 512,
})
```
Give each live ai.Complete call an explicit deadline.
These are real network E2E calls, but the StreamOptions omit Ctx. If the provider stalls or retries aggressively, this test can hang until the outer test runner kills it.
Also applies to: 64-67, 114-117, 129-132
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/ai/e2e/parity_openai_test.go` around lines 45 - 48, The ai.Complete calls
lack a context deadline so E2E network stalls can hang tests; wrap each call
(the ones passing model, context, &ai.StreamOptions) with a context.WithTimeout
(e.g., 30s or another sensible timeout), set StreamOptions.Ctx to that timed
context, and ensure you call cancel() via defer right after creation; update the
ai.Complete invocations at the shown sites (the calls around lines with model,
context, &ai.StreamOptions) so each StreamOptions includes Ctx and uses the new
timed context.
```go
func CalculateCost(model Model, usage Usage) UsageCost {
	usage.Cost.Input = (model.Cost.Input / 1_000_000) * float64(usage.Input)
	usage.Cost.Output = (model.Cost.Output / 1_000_000) * float64(usage.Output)
	usage.Cost.CacheRead = (model.Cost.CacheRead / 1_000_000) * float64(usage.CacheRead)
	usage.Cost.CacheWrite = (model.Cost.CacheWrite / 1_000_000) * float64(usage.CacheWrite)
	usage.Cost.Total = usage.Cost.Input + usage.Cost.Output + usage.Cost.CacheRead + usage.Cost.CacheWrite
	return usage.Cost
}
```
CalculateCost mutates the input usage parameter unexpectedly.
The function modifies usage.Cost fields in-place before returning usage.Cost. Callers may not expect their Usage struct to be mutated. Consider computing costs into a local UsageCost variable instead.
🛠️ Proposed fix
```diff
 func CalculateCost(model Model, usage Usage) UsageCost {
-	usage.Cost.Input = (model.Cost.Input / 1_000_000) * float64(usage.Input)
-	usage.Cost.Output = (model.Cost.Output / 1_000_000) * float64(usage.Output)
-	usage.Cost.CacheRead = (model.Cost.CacheRead / 1_000_000) * float64(usage.CacheRead)
-	usage.Cost.CacheWrite = (model.Cost.CacheWrite / 1_000_000) * float64(usage.CacheWrite)
-	usage.Cost.Total = usage.Cost.Input + usage.Cost.Output + usage.Cost.CacheRead + usage.Cost.CacheWrite
-	return usage.Cost
+	cost := UsageCost{
+		Input:      (model.Cost.Input / 1_000_000) * float64(usage.Input),
+		Output:     (model.Cost.Output / 1_000_000) * float64(usage.Output),
+		CacheRead:  (model.Cost.CacheRead / 1_000_000) * float64(usage.CacheRead),
+		CacheWrite: (model.Cost.CacheWrite / 1_000_000) * float64(usage.CacheWrite),
+	}
+	cost.Total = cost.Input + cost.Output + cost.CacheRead + cost.CacheWrite
+	return cost
 }
```

📝 Committable suggestion
```diff
 func CalculateCost(model Model, usage Usage) UsageCost {
-	usage.Cost.Input = (model.Cost.Input / 1_000_000) * float64(usage.Input)
-	usage.Cost.Output = (model.Cost.Output / 1_000_000) * float64(usage.Output)
-	usage.Cost.CacheRead = (model.Cost.CacheRead / 1_000_000) * float64(usage.CacheRead)
-	usage.Cost.CacheWrite = (model.Cost.CacheWrite / 1_000_000) * float64(usage.CacheWrite)
-	usage.Cost.Total = usage.Cost.Input + usage.Cost.Output + usage.Cost.CacheRead + usage.Cost.CacheWrite
-	return usage.Cost
+	cost := UsageCost{
+		Input:      (model.Cost.Input / 1_000_000) * float64(usage.Input),
+		Output:     (model.Cost.Output / 1_000_000) * float64(usage.Output),
+		CacheRead:  (model.Cost.CacheRead / 1_000_000) * float64(usage.CacheRead),
+		CacheWrite: (model.Cost.CacheWrite / 1_000_000) * float64(usage.CacheWrite),
+	}
+	cost.Total = cost.Input + cost.Output + cost.CacheRead + cost.CacheWrite
+	return cost
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/ai/models.go` around lines 60 - 67, CalculateCost currently mutates the
input Usage by writing into usage.Cost; change it to compute into a new local
variable (e.g., cost := UsageCost{}) and populate cost.Input, cost.Output,
cost.CacheRead, cost.CacheWrite using model.Cost values and usage counts, then
set cost.Total and return cost without modifying the passed-in usage. Update
references inside the function to use the local cost variable and ensure the
function signature still returns UsageCost.
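The pricing arithmetic itself is simple enough to check in isolation: rates are quoted per million tokens, so each rate is divided by 1,000,000 before multiplying by the token count. The `$3/1M` rate and the `tokenCost` helper below are made up for illustration; only the formula mirrors `CalculateCost`:

```go
package main

import "fmt"

// tokenCost mirrors the per-million-token pricing used by CalculateCost:
// divide the quoted rate by 1_000_000, then scale by tokens consumed.
func tokenCost(ratePerMillion float64, tokens int64) float64 {
	return (ratePerMillion / 1_000_000) * float64(tokens)
}

func main() {
	// 250,000 tokens at a hypothetical $3 per 1M tokens ≈ $0.75.
	fmt.Printf("%.4f\n", tokenCost(3.0, 250_000)) // 0.7500
}
```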
```go
if time.Now().UnixMilli() >= creds.Expires {
	refreshed, err := provider.RefreshToken(creds)
	if err != nil {
		return nil, "", fmt.Errorf("failed to refresh OAuth token for %s", providerID)
	}
	creds = refreshed
}
```
🛠️ Refactor suggestion | 🟠 Major
Preserve the original error from RefreshToken for debugging.
The error message at line 89 discards the original error from provider.RefreshToken(), which makes debugging harder. Consider wrapping the original error:
```diff
 if time.Now().UnixMilli() >= creds.Expires {
 	refreshed, err := provider.RefreshToken(creds)
 	if err != nil {
-		return nil, "", fmt.Errorf("failed to refresh OAuth token for %s", providerID)
+		return nil, "", fmt.Errorf("failed to refresh OAuth token for %s: %w", providerID, err)
 	}
 	creds = refreshed
 }
```

📝 Committable suggestion
```diff
 if time.Now().UnixMilli() >= creds.Expires {
 	refreshed, err := provider.RefreshToken(creds)
 	if err != nil {
-		return nil, "", fmt.Errorf("failed to refresh OAuth token for %s", providerID)
+		return nil, "", fmt.Errorf("failed to refresh OAuth token for %s: %w", providerID, err)
 	}
 	creds = refreshed
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/ai/oauth/index.go` around lines 86 - 92, The returned error thrown when
provider.RefreshToken(creds) fails loses the original error; update the error
return in the token refresh branch (where provider.RefreshToken, providerID and
creds are used and creds.Expires is compared) to wrap the original err (e.g.,
using fmt.Errorf("failed to refresh OAuth token for %s: %w", providerID, err) or
errors.Wrap) so the original error is preserved for debugging and logging.
```go
if toolUse, ok := accumulated.Content[contentIndex].AsAny().(anthropic.ToolUseBlock); ok {
	toolCall := ai.ContentBlock{
		Type:      ai.ContentTypeToolCall,
		ID:        strings.TrimSpace(toolUse.ID),
		Name:      strings.TrimSpace(toolUse.Name),
		Arguments: parseToolArguments(string(toolUse.Input)),
	}
	stream.Push(ai.AssistantMessageEvent{
		Type:         ai.EventToolCallEnd,
		ContentIndex: contentIndex,
		ToolCall:     &toolCall,
	})
```
Translate Claude Code tool aliases back before emitting tool calls.
The request path rewrites tool names when `UseClaudeCodeToolNames` is enabled, but the response path returns `ToolUseBlock.Name` verbatim here and in `anthropicMessageToAIMessage`. A tool registered locally as `read` will therefore come back as `Read`, so dispatch against `context.Tools` stops matching even though `FromClaudeCodeToolName` already exists for this exact case.
Suggested fix
```diff
 if toolUse, ok := accumulated.Content[contentIndex].AsAny().(anthropic.ToolUseBlock); ok {
+	toolName := strings.TrimSpace(toolUse.Name)
+	if options.UseClaudeCodeToolNames {
+		toolName = FromClaudeCodeToolName(toolName, c.Tools)
+	}
 	toolCall := ai.ContentBlock{
 		Type:      ai.ContentTypeToolCall,
 		ID:        strings.TrimSpace(toolUse.ID),
-		Name:      strings.TrimSpace(toolUse.Name),
+		Name:      toolName,
 		Arguments: parseToolArguments(string(toolUse.Input)),
 	}
```

```diff
-	assistantMessage := anthropicMessageToAIMessage(model, accumulated)
+	assistantMessage := anthropicMessageToAIMessage(model, accumulated, c.Tools, options.UseClaudeCodeToolNames)
```

```diff
-func anthropicMessageToAIMessage(model ai.Model, msg anthropic.Message) ai.Message {
+func anthropicMessageToAIMessage(model ai.Model, msg anthropic.Message, tools []ai.Tool, useClaudeCodeToolNames bool) ai.Message {
 ...
 	case anthropic.ToolUseBlock:
+		toolName := strings.TrimSpace(blockVariant.Name)
+		if useClaudeCodeToolNames {
+			toolName = FromClaudeCodeToolName(toolName, tools)
+		}
 		out.Content = append(out.Content, ai.ContentBlock{
 			Type:      ai.ContentTypeToolCall,
 			ID:        strings.TrimSpace(blockVariant.ID),
-			Name:      strings.TrimSpace(blockVariant.Name),
+			Name:      toolName,
 			Arguments: parseToolArguments(string(blockVariant.Input)),
 		})
```

Also applies to: 166-170, 217-223
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/ai/providers/anthropic_runtime.go` around lines 137 - 148, The code emits
tool call events using anthropic.ToolUseBlock.Name verbatim (see the block
creating toolCall and pushing ai.AssistantMessageEvent) which leaves Claude Code
tool aliases like "Read" instead of the local registered name "read"; update the
emission to translate names back via the existing FromClaudeCodeToolName mapping
when UseClaudeCodeToolNames is enabled (apply same fix for the other occurrences
noted and in anthropicMessageToAIMessage), i.e. look up ToolUseBlock.Name
through FromClaudeCodeToolName (falling back to trimmed ToolUseBlock.Name)
before populating ai.ContentBlock.Name so dispatch against context.Tools
succeeds.
```go
if msg.StopReason == ai.StopReasonError || msg.StopReason == ai.StopReasonAborted {
	continue
}
```
Preserve assistant error/aborted messages.
This branch drops the assistant message entirely, so StopReason, ErrorMessage, and any partial content never make it into the transformed transcript. ai.Message carries that metadata explicitly in pkg/ai/types.go:131-157.
🛠️ Suggested fix
```diff
-	if msg.StopReason == ai.StopReasonError || msg.StopReason == ai.StopReasonAborted {
-		continue
-	}
+	if msg.StopReason == ai.StopReasonError || msg.StopReason == ai.StopReasonAborted {
+		result = append(result, msg)
+		continue
+	}
```

📝 Committable suggestion
```diff
-	if msg.StopReason == ai.StopReasonError || msg.StopReason == ai.StopReasonAborted {
-		continue
-	}
+	if msg.StopReason == ai.StopReasonError || msg.StopReason == ai.StopReasonAborted {
+		result = append(result, msg)
+		continue
+	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/ai/providers/transform_messages.go` around lines 106 - 108, The current
branch drops assistant messages when msg.StopReason == ai.StopReasonError or
ai.StopReasonAborted; instead preserve those messages (including StopReason and
ErrorMessage and any partial Content) in the transformed transcript. Modify the
loop handling ai.Message (variable msg) to remove the continue and ensure the
transformed output still appends/creates an ai.Message (or the existing
transformed struct) for assistant messages even when msg.StopReason is Error or
Aborted, copying StopReason, ErrorMessage and Content into the output; keep any
existing handling for normal assistant messages but include the additional
metadata for error/aborted cases so they are not lost.
```go
if r >= 0xD800 && r <= 0xDBFF {
	if i+1 < len(runes) {
		next := runes[i+1]
		if next >= 0xDC00 && next <= 0xDFFF {
			out.WriteRune(r)
			out.WriteRune(next)
			i++
		}
	}
	continue
}
```
🧩 Analysis chain
🌐 Web query:
Are UTF-16 surrogate code points valid in UTF-8 encoded strings?
💡 Result:
No. UTF‑8 is defined to encode only Unicode scalar values—i.e., all Unicode code points except the surrogate range U+D800..U+DFFF. Therefore, any UTF‑8 byte sequence that would decode to a surrogate code point is ill‑formed / invalid UTF‑8. [1], [2]
Some nonstandard encodings (not UTF‑8), such as CESU‑8 / “modified UTF‑8”, do encode UTF‑16 surrogate code points as if they were Unicode code points, but that’s explicitly outside the UTF‑8 definition. [1], [3]
Sources:
[1] RFC 3629 (UTF‑8 definition prohibits surrogates) (ietf.org)
[2] Unicode Standard, Chapter 3 (scalar values exclude surrogates) (unicode.org)
[3] CESU‑8 description (surrogates encoded as UTF‑8 sequences, but nonstandard) (en.wikipedia.org)
Citations:
- 1: https://www.ietf.org/ietf-ftp/rfc/rfc3629.txt.pdf?utm_source=openai
- 2: https://www.unicode.org/versions/latest/core-spec/chapter-3/?utm_source=openai
- 3: https://en.wikipedia.org/wiki/CESU-8?utm_source=openai
🏁 Script executed:
```shell
cat -n pkg/ai/utils/sanitize_unicode.go
```

Repository: beeper/agentremote
Length of output: 1189
🏁 Script executed:
```shell
rg -B2 -A10 'SanitizeSurrogates' --type go
```

Repository: beeper/agentremote
Length of output: 25439
Surrogate pairs should be decoded, not preserved.
UTF-16 surrogate code points (U+D800–U+DFFF) are not valid Unicode scalar values and must not appear in valid UTF-8 strings. Writing both surrogates to output (lines 32–33) produces invalid UTF-8.
When a valid surrogate pair is found, decode it to the actual Unicode code point it represents instead of preserving the invalid surrogates:
```diff
 for i := 0; i < len(runes); i++ {
 	r := runes[i]
 	if r >= 0xD800 && r <= 0xDBFF {
 		if i+1 < len(runes) {
 			next := runes[i+1]
 			if next >= 0xDC00 && next <= 0xDFFF {
-				out.WriteRune(r)
-				out.WriteRune(next)
+				// Decode UTF-16 surrogate pair to Unicode code point
+				codepoint := 0x10000 + ((r - 0xD800) << 10) + (next - 0xDC00)
+				out.WriteRune(codepoint)
 				i++
 			}
 		}
 		continue
 	}
```

📝 Committable suggestion
```diff
 if r >= 0xD800 && r <= 0xDBFF {
 	if i+1 < len(runes) {
 		next := runes[i+1]
 		if next >= 0xDC00 && next <= 0xDFFF {
-			out.WriteRune(r)
-			out.WriteRune(next)
+			// Decode UTF-16 surrogate pair to Unicode code point
+			codepoint := 0x10000 + ((r - 0xD800) << 10) + (next - 0xDC00)
+			out.WriteRune(codepoint)
 			i++
 		}
 	}
 	continue
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/ai/utils/sanitize_unicode.go` around lines 28 - 38, The loop currently
preserves UTF-16 surrogate halves by writing r and next directly (r, next,
out.WriteRune), which produces invalid UTF-8; instead when you detect a high
surrogate (r in 0xD800..0xDBFF) followed by a low surrogate (next in
0xDC00..0xDFFF) decode them into the actual Unicode code point using the
standard formula ((high-0xD800)<<10) + (low-0xDC00) + 0x10000 and call
out.WriteRune(decoded) (and advance i) rather than writing the raw surrogates;
update the code around the r/next handling so only the decoded rune is emitted.
```go
enumVals, ok := schema["enum"].([]string)
if !ok || len(enumVals) != 2 {
	t.Fatalf("expected enum values in schema, got %#v", schema["enum"])
```
Assert the actual enum values, not just the count.
This currently passes for any two strings, so a bad enum payload could slip through unnoticed. Check the contents as well.
Proposed test tightening
enumVals, ok := schema["enum"].([]string)
- if !ok || len(enumVals) != 2 {
- t.Fatalf("expected enum values in schema, got %#v", schema["enum"])
+ if !ok || len(enumVals) != 2 || enumVals[0] != "add" || enumVals[1] != "subtract" {
+ t.Fatalf("expected enum values [add subtract], got %#v", schema["enum"])
}
📝 Committable suggestion
enumVals, ok := schema["enum"].([]string)
if !ok || len(enumVals) != 2 || enumVals[0] != "add" || enumVals[1] != "subtract" {
	t.Fatalf("expected enum values [add subtract], got %#v", schema["enum"])
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/ai/utils/type_helpers_test.go` around lines 10-12, the test currently
only checks that schema["enum"] casts to []string and has length 2. Update it
to assert the actual enum contents exactly, not just the count: after casting
to enumVals, compare enumVals to the expected slice (e.g., using
reflect.DeepEqual or a testing helper) and fail with a clear message if they
differ, referencing the existing enumVals variable and schema["enum"] in the
failure message so the test verifies the specific values.
func toInt64(v any) int64 {
	switch n := v.(type) {
	case int:
		return int64(n)
	case int64:
		return n
	case int32:
		return int64(n)
	case uint64:
		return int64(n)
	case uint32:
		return int64(n)
	default:
		return 0
	}
toInt64 can silently overflow when converting large uint64 values.
A uint64 value greater than math.MaxInt64 will wrap to a negative int64. If such large values are plausible, consider adding bounds checking or returning an error.
🛠️ Example bounds check
+import "math"
+
func toInt64(v any) int64 {
switch n := v.(type) {
case int:
return int64(n)
case int64:
return n
case int32:
return int64(n)
case uint64:
+ if n > math.MaxInt64 {
+ return math.MaxInt64 // or handle overflow differently
+ }
return int64(n)
case uint32:
return int64(n)
default:
return 0
}
}
📝 Committable suggestion
import "math"

func toInt64(v any) int64 {
	switch n := v.(type) {
	case int:
		return int64(n)
	case int64:
		return n
	case int32:
		return int64(n)
	case uint64:
		if n > math.MaxInt64 {
			return math.MaxInt64 // or handle overflow differently
		}
		return int64(n)
	case uint32:
		return int64(n)
	default:
		return 0
	}
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/ai/utils/validation.go` around lines 159-173, the toInt64 function can
silently overflow when converting uint64 values greater than math.MaxInt64.
Update toInt64 to bounds-check uint64 inputs (compare against math.MaxInt64)
and handle the overflow case instead of blindly casting: either return an error
(change the signature to (int64, error)) or define and document a safe behavior
(e.g., clamp to MaxInt64 or skip the conversion). Apply the same care to other
unsigned conversions, and implement the chosen overflow handling consistently.
case ContentTypeImage:
	data := part.ImageB64
	if data == "" && strings.HasPrefix(part.ImageURL, "data:") {
		data = strings.TrimPrefix(part.ImageURL, "data:")
	}
	blocks = append(blocks, aipkg.ContentBlock{
		Type:     aipkg.ContentTypeImage,
		Data:     data,
		MimeType: part.MimeType,
	})
Data URL parsing may be incorrect.
The current logic strips only "data:" from the URL, but data URLs follow the format data:[<mediatype>][;base64],<data>. Stripping just "data:" would leave the MIME type and encoding prefix in the data field.
🛠️ Proposed fix
case ContentTypeImage:
data := part.ImageB64
if data == "" && strings.HasPrefix(part.ImageURL, "data:") {
- data = strings.TrimPrefix(part.ImageURL, "data:")
+ // Parse data URL: data:[<mediatype>][;base64],<data>
+ if idx := strings.Index(part.ImageURL, ","); idx != -1 {
+ data = part.ImageURL[idx+1:]
+ }
}
blocks = append(blocks, aipkg.ContentBlock{
Type: aipkg.ContentTypeImage,
Data: data,
MimeType: part.MimeType,
})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/connector/ai_adapter.go` around lines 31-40, the ContentTypeImage
branch incorrectly strips only "data:" from part.ImageURL. Update the
ContentTypeImage handling (where part.ImageB64, part.ImageURL, and blocks =
append(... aipkg.ContentBlock ...) are used) to properly parse a data URL: if
part.ImageB64 is empty and part.ImageURL starts with "data:", split the
substring after "data:" at the first comma, use the part after the comma as the
base64 payload (if present), and parse the part before the comma to extract an
optional MIME type (the mediatype before any ";base64" token) to set MimeType.
If there is no comma or no payload, fall back to using the original ImageURL or
ImageB64 as before. Ensure only the raw base64 payload is assigned to Data and
the correct MIME type is assigned to MimeType in the aipkg.ContentBlock.
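The parsing the prompt above describes can be sketched as a small helper; `parseDataURL` is a hypothetical name, and the `ok=false` return models the fall-back path for non-data URLs:

```go
package main

import (
	"fmt"
	"strings"
)

// parseDataURL splits a data:[<mediatype>][;base64],<data> URL into its
// MIME type and raw payload. ok is false when the input is not a data URL
// or has no comma, so callers can fall back to the original URL.
func parseDataURL(u string) (mime, payload string, ok bool) {
	rest, found := strings.CutPrefix(u, "data:")
	if !found {
		return "", "", false
	}
	meta, data, found := strings.Cut(rest, ",")
	if !found {
		return "", "", false
	}
	// Drop the ";base64" token so only the mediatype remains.
	mime = strings.TrimSuffix(meta, ";base64")
	return mime, data, true
}

func main() {
	mime, payload, ok := parseDataURL("data:image/png;base64,iVBORw0KGgo=")
	fmt.Println(ok, mime, payload) // prints: true image/png iVBORw0KGgo=
}
```

`strings.CutPrefix` requires Go 1.20; on older toolchains the same split can be done with `strings.HasPrefix`/`strings.TrimPrefix` and `strings.Index`.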
This pull request contains changes generated by a Cursor Cloud Agent