Labels
models: [Component] Issues related to model support
Description
Describe the Bug:
- There's a bug in `tests/unittests/models/test_litellm.py`, specifically in the function `test_model_response_to_chunk`, which causes the `./scripts/unittests.sh` script to fail; it fails in the first of the multiple Python environments it iterates over (i.e. it fails first in a Python 3.10 environment). The error logs indicate the test failed on three specific parameterized test cases (cases 0, 3, and 4). These cases set the `expected_usage_chunk` parameter to `UsageMetadataChunk(prompt_tokens=0, completion_tokens=0, total_tokens=0)`. However, the variable `usage_chunk` in the test ends up as `None`; it never gets updated with another value in these test cases, even though the parameterized test expects those cases to return a `UsageMetadataChunk` populated with zeros: `UsageMetadataChunk(prompt_tokens=0, completion_tokens=0, total_tokens=0)`. As a result, the `assert usage_chunk is not None` statement triggers the error seen in the logs below. The underlying function `_model_response_to_chunk(response)` skips the creation of a `UsageMetadataChunk` when there is no usage data in the original response (i.e. `usage_chunk` is never set). From the `_model_response_to_chunk` method itself:
```python
usage = response.get("usage")
if usage:
  try:
    yield UsageMetadataChunk(
        prompt_tokens=usage.get("prompt_tokens", 0) or 0,
        completion_tokens=usage.get("completion_tokens", 0) or 0,
        total_tokens=usage.get("total_tokens", 0) or 0,
        cached_prompt_tokens=_extract_cached_prompt_tokens(usage),
    ), None
  except AttributeError as e:
    raise TypeError(
        "Unexpected LiteLLM usage type: %r" % (type(usage),)
    ) from e
```
- However, the test suite expects the function to artificially generate a `(0, 0, 0)` usage chunk as a default fallback. Therefore, we need to set this `usage` parameter explicitly in the test.
- Fixing this test is important so that developers can execute `./scripts/unittests.sh` successfully.
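The behavior described above can be sketched in isolation. This is a minimal, hypothetical stand-in for the relevant logic (the class and function names here are illustrative, not the actual ADK API); it shows that the generator yields nothing when `usage` is absent, but yields a zero-filled chunk when `usage` is set explicitly, since a non-empty dict is truthy even when all its counts are zero:

```python
from dataclasses import dataclass


@dataclass
class UsageMetadataChunk:
  """Simplified stand-in for ADK's UsageMetadataChunk (illustration only)."""
  prompt_tokens: int
  completion_tokens: int
  total_tokens: int


def usage_chunk_from_response(response: dict):
  """Mirrors the usage-handling logic of _model_response_to_chunk:
  a chunk is only yielded when the response carries truthy usage data."""
  usage = response.get("usage")
  if usage:
    yield UsageMetadataChunk(
        prompt_tokens=usage.get("prompt_tokens", 0) or 0,
        completion_tokens=usage.get("completion_tokens", 0) or 0,
        total_tokens=usage.get("total_tokens", 0) or 0,
    ), None


# No usage data: the generator is empty, so the test's usage_chunk
# stays None and `assert usage_chunk is not None` fails.
assert list(usage_chunk_from_response({})) == []

# Usage set explicitly with zero counts: the dict is non-empty and
# therefore truthy, so the zero-filled chunk the test expects is yielded.
chunks = list(usage_chunk_from_response(
    {"usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0}}
))
assert chunks == [(UsageMetadataChunk(0, 0, 0), None)]
```

This suggests the fix belongs in the test's parameterized responses (populating `usage`), rather than in `_model_response_to_chunk` itself.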
Steps to Reproduce:
- Execute `./scripts/unittests.sh` from the `adk-python` root directory, per the instructions in CONTRIBUTING.md.
Expected Behavior:
- The expectation is for all tests to pass in all Python environments.
Observed Behavior:
- The `tests/unittests/models/test_litellm.py` test fails early in the `./scripts/unittests.sh` script's execution, specifically in the first of multiple Python environments (i.e. it fails first in a Python 3.10 environment).
Environment Details:
- ADK Library Version (`pip show google-adk`): latest (1.26.0)
- Desktop OS: Linux
- Python Version (`python -V`): Multiple (3.10, 3.11, 3.12, 3.13, 3.14)
Model Information:
- Are you using LiteLLM: Yes
- Which model is being used: N/A
🟡 Optional Information
Regression:
Did this work in a previous version of ADK? If so, which one? N/A
Logs:
```text
tests/unittests/models/test_litellm.py ...............................................................................................................................F..FF..... [ 52%]
...............................................................[ 53%]
===================================================================================================================================== short test summary info ======================================================================================================================================
FAILED tests/unittests/models/test_litellm.py::test_model_response_to_chunk[response0-expected_chunks0-expected_usage_chunk0-stop] - assert None is not None
FAILED tests/unittests/models/test_litellm.py::test_model_response_to_chunk[response3-expected_chunks3-expected_usage_chunk3-tool_calls] - assert None is not None
FAILED tests/unittests/models/test_litellm.py::test_model_response_to_chunk[response4-expected_chunks4-expected_usage_chunk4-stop] - assert None is not None
=============================================================================================================== 3 failed, 4664 passed, 1 skipped, 1814 warnings in 235.45s (0:03:55) ===============================================================================================================
--------------------------------------------------
Unit tests failed for Python 3.10 with exit code 1
--------------------------------------------------
Cleaning up .unittest_venv...
```
Screenshots / Video:
- N/A
Additional Context:
- N/A
Minimal Reproduction Code:
- Simply attempt to execute `./scripts/unittests.sh`.
How often has this issue occurred?:
- Always (100%). Every time a developer attempts to execute `./scripts/unittests.sh`.