
[Bug] unittests.sh Script Failure to Execute Beyond Python 3.10 Environment - Blocked by Test Case Failures in test_litellm.py #4680

@DrewAfromsky

Description


Describe the Bug:

  • There is a bug in tests/unittests/models/test_litellm.py, in the test function test_model_response_to_chunk, that causes ./scripts/unittests.sh to fail in the first of its Python environment iterations (i.e. it fails first in the Python 3.10 environment). The error logs show three failing parameterized combinations (test cases 0, 3, and 4). Each of these cases sets the expected_usage_chunk parameter to UsageMetadataChunk(prompt_tokens=0, completion_tokens=0, total_tokens=0). In these cases the test's usage_chunk variable is never reassigned and remains None, so the assert usage_chunk is not None statement fails with the error shown in the logs below. The underlying function _model_response_to_chunk(response) skips creating a UsageMetadataChunk when the original response carries no usage data (i.e. nothing is yielded). From the _model_response_to_chunk method itself:
```python
usage = response.get("usage")
if usage:
  try:
    yield UsageMetadataChunk(
        prompt_tokens=usage.get("prompt_tokens", 0) or 0,
        completion_tokens=usage.get("completion_tokens", 0) or 0,
        total_tokens=usage.get("total_tokens", 0) or 0,
        cached_prompt_tokens=_extract_cached_prompt_tokens(usage),
    ), None
  except AttributeError as e:
    raise TypeError(
        "Unexpected LiteLLM usage type: %r" % (type(usage),)
    ) from e
```
  • However, the test suite expects the function to yield a zero-populated usage chunk in these cases. Therefore, the usage field needs to be set on the response fixtures in the affected test cases.
  • Fixing this test is important so that developers can run ./scripts/unittests.sh successfully.
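The failure and the proposed fix can be sketched in a self-contained snippet. Plain dicts stand in for the LiteLLM response and for UsageMetadataChunk here; these stand-ins, and the fixture shapes, are assumptions for illustration, not the actual test code:

```python
# Minimal sketch (assumed shapes): mirrors the usage branch of
# _model_response_to_chunk and the loop in test_model_response_to_chunk.


def usage_chunks(response: dict) -> list:
  """Collects the usage chunk the helper would yield, if any."""
  chunks = []
  usage = response.get("usage")
  if usage:  # No "usage" entry in the response -> nothing is yielded.
    chunks.append({
        "prompt_tokens": usage.get("prompt_tokens", 0) or 0,
        "completion_tokens": usage.get("completion_tokens", 0) or 0,
        "total_tokens": usage.get("total_tokens", 0) or 0,
    })
  return chunks


def last_usage_chunk(response: dict):
  """Mirrors the test loop: usage_chunk stays None unless a chunk arrives."""
  usage_chunk = None
  for chunk in usage_chunks(response):
    usage_chunk = chunk
  return usage_chunk


# A fixture without usage data leaves usage_chunk as None, which is exactly
# what trips `assert usage_chunk is not None` in test cases 0, 3, and 4.
fixture_without_usage = {"choices": [{"finish_reason": "stop"}]}
assert last_usage_chunk(fixture_without_usage) is None

# The fix: give the affected fixtures an explicit usage entry. Even an
# all-zero dict is truthy, so the zero-populated chunk the test expects
# is produced.
fixture_with_usage = {
    "choices": [{"finish_reason": "stop"}],
    "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
}
assert last_usage_chunk(fixture_with_usage) == {
    "prompt_tokens": 0,
    "completion_tokens": 0,
    "total_tokens": 0,
}
```

Note that `if usage:` is truthy for any non-empty dict even when every count is zero, so adding an all-zero usage entry to the fixtures is enough to make the expected chunk appear.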

Steps to Reproduce:

  1. Execute ./scripts/unittests.sh from the adk-python root directory, per the instructions in CONTRIBUTING.md.

Expected Behavior:

  • The expectation is for all tests to pass in all Python environments.

Observed Behavior:

  • Three parameterized cases of test_model_response_to_chunk fail with assert None is not None, and ./scripts/unittests.sh exits with code 1 in the Python 3.10 environment (see the logs below).

Environment Details:

  • ADK Library Version (pip show google-adk): latest (1.26.0)
  • Desktop OS: Linux
  • Python Version (python -V): Multiple (3.10, 3.11, 3.12, 3.13, 3.14)

Model Information:

  • Are you using LiteLLM: Yes
  • Which model is being used: N/A

🟡 Optional Information

Regression:

  • N/A

Logs:

tests/unittests/models/test_litellm.py ...............................................................................................................................F..FF..... [ 52%]
...............................................................[ 53%]

===================================================================================================================================== short test summary info ======================================================================================================================================
FAILED tests/unittests/models/test_litellm.py::test_model_response_to_chunk[response0-expected_chunks0-expected_usage_chunk0-stop] - assert None is not None
FAILED tests/unittests/models/test_litellm.py::test_model_response_to_chunk[response3-expected_chunks3-expected_usage_chunk3-tool_calls] - assert None is not None
FAILED tests/unittests/models/test_litellm.py::test_model_response_to_chunk[response4-expected_chunks4-expected_usage_chunk4-stop] - assert None is not None
=============================================================================================================== 3 failed, 4664 passed, 1 skipped, 1814 warnings in 235.45s (0:03:55) ===============================================================================================================

--------------------------------------------------
Unit tests failed for Python 3.10 with exit code 1
--------------------------------------------------
Cleaning up .unittest_venv...

Screenshots / Video:

  • N/A

Additional Context:

  • N/A

Minimal Reproduction Code:

  • N/A (see Steps to Reproduce above)

How often has this issue occurred?:

  • Every time ./scripts/unittests.sh is run

Labels

  • models ([Component] Issues related to model support)
