fix(litellm): preserve thought_signature in tool call round-trip#4662

Open
pandego wants to merge 2 commits into google:main from pandego:fix/4650-thought-signature-roundtrip

Conversation


@pandego pandego commented Feb 28, 2026

Link to Issue or Description of Change

1. Link to an existing issue (if applicable):

Testing Plan

Unit Tests:

  • I have added or updated unit tests for my change.
  • All unit tests pass locally.

Passed locally:

uv run pytest tests/unittests/models/test_litellm.py -k "thought_signature or message_to_generate_content_response_tool_call"

Checklist

  • I have read the CONTRIBUTING.md document.
  • I have performed a self-review of my own code.
  • I have added tests that prove my fix is effective.
  • New and existing unit tests pass locally with my changes.

Additional context

This patch keeps LiteLLM's tool-call id encoding contract intact for Gemini thinking models by:

  • decoding embedded thought signatures from incoming LiteLLM tool-call ids into ADK Part.thought_signature, and
  • re-embedding Part.thought_signature when building outgoing LiteLLM tool calls.

It also adds focused unit tests for both decode and encode paths.

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses an issue where thought_signature was not being preserved during the conversion of tool calls within the LiteLLM integration, particularly affecting Gemini models. By introducing explicit encoding and decoding mechanisms for the thought_signature within the tool call ID, the change ensures that this crucial piece of information is maintained across the API boundary, improving the reliability and functionality of tool interactions.

Highlights

  • Thought Signature Preservation: Implemented logic to preserve thought_signature during the round-trip conversion of LiteLLM tool calls, specifically for Gemini thinking models.
  • Encoding/Decoding Utilities: Added new utility functions, _decode_litellm_tool_call_id and _encode_litellm_tool_call_id, to handle the embedding and extraction of thought_signature bytes within LiteLLM tool call IDs using base64 encoding.
  • Integration with Conversion Functions: Integrated the new encoding and decoding logic into _content_to_message_param and _message_to_generate_content_response to ensure thought_signature is correctly handled during message conversions.
  • Unit Test Coverage: Added dedicated unit tests to validate the correct encoding and decoding of thought_signature in LiteLLM tool calls.


Changelog
  • src/google/adk/models/lite_llm.py
    • Added _LITELLM_THOUGHT_SIGNATURE_SEPARATOR constant.
    • Introduced _decode_litellm_tool_call_id function to extract thought_signature from tool call IDs.
    • Introduced _encode_litellm_tool_call_id function to embed thought_signature into tool call IDs.
    • Modified _content_to_message_param to use _encode_litellm_tool_call_id when creating tool calls.
    • Modified _message_to_generate_content_response to use _decode_litellm_tool_call_id and assign the extracted thought_signature.
  • tests/unittests/models/test_litellm.py
    • Imported the base64 module.
    • Added test_message_to_generate_content_response_tool_call_with_thought_signature to verify decoding.
    • Added test_content_to_message_param_embeds_thought_signature_in_tool_call to verify encoding.
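The round-trip property those tests cover can be sketched as a single self-contained check. The separator value and the inline encode/decode steps below are assumptions standing in for the private helpers named in the Changelog:

```python
import base64

# Assumed separator value; the real constant is
# _LITELLM_THOUGHT_SIGNATURE_SEPARATOR in lite_llm.py.
SEP = "__thought_signature__"


def test_thought_signature_survives_id_round_trip():
  signature = b"\x00\x01binary\xff"
  original_id = "call_42"

  # Encode path (stand-in for _encode_litellm_tool_call_id).
  encoded = original_id + SEP + base64.urlsafe_b64encode(signature).decode("ascii")

  # Decode path (stand-in for _decode_litellm_tool_call_id).
  recovered_id, _, b64 = encoded.rpartition(SEP)
  recovered_sig = base64.urlsafe_b64decode(b64)

  assert recovered_id == original_id
  assert recovered_sig == signature
```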
Activity
  • The author has confirmed that all unit tests pass locally.
  • New unit tests have been added to cover the changes related to thought_signature handling.
  • The author has performed a self-review of the code.

@adk-bot adk-bot added the models [Component] Issues related to model support label Feb 28, 2026

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly implements the preservation of thought_signature during the round-trip conversion of tool calls for LiteLLM. The new encoding and decoding functions are well-defined, and their integration into _content_to_message_param and _message_to_generate_content_response is appropriate. The added unit tests provide good coverage for the new functionality. I have one suggestion to improve debuggability in an error case.


Labels

models [Component] Issues related to model support


Development

Successfully merging this pull request may close these issues.

thought_signature missing error when using SkillToolset + LiteLlm + Gemini thinking models (regression from 1.23.0 → 1.26.0)

2 participants