
Sparse Dice Loss #8767

Closed

Asmodasis wants to merge 2 commits into Project-MONAI:dev from Asmodasis:dev


Conversation

@Asmodasis

Began implementing functions for sparse Dice loss, which is Dice loss computed on sparse datasets. The initial commit is to track issue #8731.

Fixes # .

Description

Function skeleton for sparse Dice loss, a Dice loss for sparse datasets; the implementation will follow from the standard Dice loss, whose code has been copied and modified. Following the contribution documentation, this draft commit is opened early.
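To make the intent concrete, here is a minimal plain-Python sketch of what "Dice loss on sparse targets" could look like: the target is a list of integer class indices rather than a one-hot tensor, and the per-class intersection is accumulated by index instead of by a full one-hot expansion. All names here are illustrative, not MONAI's API, and this is not the PR's implementation.

```python
def sparse_dice_loss(probs, target_idx, smooth_nr=1e-5, smooth_dr=1e-5):
    """Hypothetical sketch. probs: C lists of per-pixel probabilities;
    target_idx: one integer class index per pixel."""
    num_classes = len(probs)
    losses = []
    for c in range(num_classes):
        p = probs[c]
        # intersection only touches pixels whose sparse label equals c
        intersection = sum(p[i] for i, t in enumerate(target_idx) if t == c)
        pred_sum = sum(p)
        target_sum = sum(1 for t in target_idx if t == c)
        dice = (2.0 * intersection + smooth_nr) / (pred_sum + target_sum + smooth_dr)
        losses.append(1.0 - dice)
    return sum(losses) / num_classes
```

A perfect prediction drives the loss to roughly zero, while a fully wrong one approaches one, matching the usual Dice behavior.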

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Breaking change (fix or new feature that would cause existing functionality to change).
  • New tests added to cover the changes.
  • Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

@Asmodasis Asmodasis marked this pull request as draft March 5, 2026 23:06
@coderabbitai
Contributor

coderabbitai bot commented Mar 5, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: e88896f9-fedb-4a7e-ae00-8f34049b6ef2

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough


Adds SparseDiceLoss class to monai/losses/dice.py with support for sparse/one-hot target formats, including activation handling, per-class weighting, batch-wise reduction, and smoothing parameters. Also adds module-level alias sparse_dice_loss. Includes comprehensive test module covering various input configurations, activation modes, shape validation, error handling, and TorchScript compatibility.
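The "sparse/one-hot target formats" mentioned above boil down to converting integer class indices into one-hot vectors before a dense Dice computation. A hedged, dependency-free sketch of that conversion (illustrative names, not MONAI internals):

```python
def one_hot_encode(target_idx, num_classes):
    # each integer label becomes a row with a single 1.0 in its class column
    return [[1.0 if t == c else 0.0 for c in range(num_classes)] for t in target_idx]
```

In MONAI itself this role is played by the `to_onehot_y` option and the `one_hot` utility; the sketch only shows the shape transformation involved.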

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 37.50%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
  • Title check: ✅ Passed. Title 'Sparse Dice Loss' directly describes the main addition, a new SparseDiceLoss class implementation.
  • Description check: ✅ Passed. Description covers the main change (sparse dice loss implementation), references issue #8731, and completes the required template structure.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
tests/losses/test_sparse_dice_loss.py (1)

24-28: Duplicate test case can be removed.

These two entries are identical and don’t add coverage.

As per coding guidelines, "Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness."

Also applies to: 107-111

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/losses/test_sparse_dice_loss.py` around lines 24 - 28, Remove the
duplicated test case entries in tests/losses/test_sparse_dice_loss.py: locate
the repeated dictionary tuple (the case with {"include_background": True,
"sigmoid": True, "smooth_nr": 1e-6, "smooth_dr": 1e-6} paired with the input
tensor torch.tensor([[[[1.0, -1.0], [-1.0, 1.0]]]]) and target
torch.tensor([[[[1.0, 0.0], [1.0, 1.0]]]])) and delete one occurrence to avoid
redundant coverage; apply the same removal for the identical duplicate around
the block referenced at lines 107-111 so each unique case (used by the test
function(s) validating sparse Dice loss) appears only once.
monai/losses/dice.py (1)

1115-1313: SparseDiceLoss currently duplicates DiceLoss without sparse-specific behavior.

This class is effectively a copy of DiceLoss, so the new API name suggests functionality that is not implemented yet, and it creates long-term drift risk.

♻️ Suggested interim structure
-class SparseDiceLoss(_Loss):
+class SparseDiceLoss(DiceLoss):
     """
-    Compute average Dice loss between two tensors. It can support both multi-classes and multi-labels tasks.
-    ...
+    Sparse Dice loss placeholder.
+    Currently identical to DiceLoss; add sparse-specific overrides here as they land.
     """
-
-    def __init__(...):
-        ...
-
-    def forward(...):
-        ...
+    pass

As per coding guidelines, "Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/losses/dice.py` around lines 1115 - 1313, SparseDiceLoss is a
near-duplicate of DiceLoss without any sparse-specific behavior; refactor so
SparseDiceLoss either (A) becomes a thin wrapper that delegates to DiceLoss
(call DiceLoss.__init__/forward) and handles sparse-target conversion (e.g.,
convert integer label tensor to one-hot before delegating) or (B) implements
true sparse logic (accept integer class indices in target and avoid full one-hot
expansion by computing tp/fp/fn via index-based accumulation using
compute_tp_fp_fn or a new sparse helper). Update SparseDiceLoss to remove
duplicated code paths (activation handling, reduction, class_weight checks,
smoothing) by reusing DiceLoss internals or a shared base; ensure unique symbols
to change are SparseDiceLoss.__init__, SparseDiceLoss.forward, DiceLoss (or a
new BaseDice class), and the one-hot conversion logic around
to_onehot_y/include_background/class_weight so behavior is correct for sparse
targets and no long-term code drift remains.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@monai/losses/dice.py`:
- Around line 1227-1235: The doc example incorrectly instantiates DiceLoss
instead of the sparse variant; update the example to import and instantiate
SparseDiceLoss (replace references to DiceLoss with SparseDiceLoss) so the
snippet uses one_hot, target_idx, target and calls self =
SparseDiceLoss(reduction='none') and loss = self(input, target); ensure the
import line(s) at the top of the snippet reference SparseDiceLoss from
monai.losses.dice and adjust any surrounding text to consistently name
SparseDiceLoss.

In `@tests/losses/test_sparse_dice_loss.py`:
- Around line 20-223: The tests in TestSparseDiceLoss are invoking DiceLoss
instead of the new SparseDiceLoss, so they don't validate the new class; update
the test cases and all test usages (e.g., the parameterized TEST_CASES consumer
in test_shape, test_ill_shape, test_ill_opts, test_input_warnings, and
test_script) to instantiate and call SparseDiceLoss instead of DiceLoss (or add
parallel tests that call SparseDiceLoss) and ensure any behavior differences
(args like to_onehot_y, include_background, soft_label, other_act, reduction)
are covered; keep the same test inputs/expected values unless SparseDiceLoss has
different semantics, in which case adjust expected_val accordingly.

---

Nitpick comments:
In `@monai/losses/dice.py`:
- Around line 1115-1313: SparseDiceLoss is a near-duplicate of DiceLoss without
any sparse-specific behavior; refactor so SparseDiceLoss either (A) becomes a
thin wrapper that delegates to DiceLoss (call DiceLoss.__init__/forward) and
handles sparse-target conversion (e.g., convert integer label tensor to one-hot
before delegating) or (B) implements true sparse logic (accept integer class
indices in target and avoid full one-hot expansion by computing tp/fp/fn via
index-based accumulation using compute_tp_fp_fn or a new sparse helper). Update
SparseDiceLoss to remove duplicated code paths (activation handling, reduction,
class_weight checks, smoothing) by reusing DiceLoss internals or a shared base;
ensure unique symbols to change are SparseDiceLoss.__init__,
SparseDiceLoss.forward, DiceLoss (or a new BaseDice class), and the one-hot
conversion logic around to_onehot_y/include_background/class_weight so behavior
is correct for sparse targets and no long-term code drift remains.

In `@tests/losses/test_sparse_dice_loss.py`:
- Around line 24-28: Remove the duplicated test case entries in
tests/losses/test_sparse_dice_loss.py: locate the repeated dictionary tuple (the
case with {"include_background": True, "sigmoid": True, "smooth_nr": 1e-6,
"smooth_dr": 1e-6} paired with the input tensor torch.tensor([[[[1.0, -1.0],
[-1.0, 1.0]]]]) and target torch.tensor([[[[1.0, 0.0], [1.0, 1.0]]]])) and
delete one occurrence to avoid redundant coverage; apply the same removal for
the identical duplicate around the block referenced at lines 107-111 so each
unique case (used by the test function(s) validating sparse Dice loss) appears
only once.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: ea33db5b-762d-42d5-b1ae-0407ca8b1929

📥 Commits

Reviewing files that changed from the base of the PR and between 9ddd5e6 and 28f99d3.

📒 Files selected for processing (2)
  • monai/losses/dice.py
  • tests/losses/test_sparse_dice_loss.py

Comment on lines +1227 to +1235
>>> from monai.losses.dice import * # NOQA
>>> import torch
>>> from monai.losses.dice import DiceLoss
>>> B, C, H, W = 7, 5, 3, 2
>>> input = torch.rand(B, C, H, W)
>>> target_idx = torch.randint(low=0, high=C - 1, size=(B, H, W)).long()
>>> target = one_hot(target_idx[:, None, ...], num_classes=C)
>>> self = DiceLoss(reduction='none')
>>> loss = self(input, target)

⚠️ Potential issue | 🟡 Minor

Doc example references the wrong class.

The example instantiates DiceLoss instead of SparseDiceLoss (Line 1229 and Line 1234), which makes the new API docs misleading.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/losses/dice.py` around lines 1227 - 1235, The doc example incorrectly
instantiates DiceLoss instead of the sparse variant; update the example to
import and instantiate SparseDiceLoss (replace references to DiceLoss with
SparseDiceLoss) so the snippet uses one_hot, target_idx, target and calls self =
SparseDiceLoss(reduction='none') and loss = self(input, target); ensure the
import line(s) at the top of the snippet reference SparseDiceLoss from
monai.losses.dice and adjust any surrounding text to consistently name
SparseDiceLoss.

Comment on lines +20 to +223
from monai.losses import DiceLoss
from tests.test_utils import test_script_save

TEST_CASES = [
[ # shape: (1, 1, 2, 2), (1, 1, 2, 2)
{"include_background": True, "sigmoid": True, "smooth_nr": 1e-6, "smooth_dr": 1e-6},
{"input": torch.tensor([[[[1.0, -1.0], [-1.0, 1.0]]]]), "target": torch.tensor([[[[1.0, 0.0], [1.0, 1.0]]]])},
0.307576,
],
[ # shape: (2, 1, 2, 2), (2, 1, 2, 2)
{"include_background": True, "sigmoid": True, "smooth_nr": 1e-4, "smooth_dr": 1e-4},
{
"input": torch.tensor([[[[1.0, -1.0], [-1.0, 1.0]]], [[[1.0, -1.0], [-1.0, 1.0]]]]),
"target": torch.tensor([[[[1.0, 1.0], [1.0, 1.0]]], [[[1.0, 0.0], [1.0, 0.0]]]]),
},
0.416657,
],
[ # shape: (2, 1, 2, 2), (2, 1, 2, 2)
{"include_background": True, "smooth_nr": 1e-4, "smooth_dr": 1e-4, "soft_label": True},
{
"input": torch.tensor([[[[0.3, 0.4], [0.7, 0.9]]], [[[1.0, 0.1], [0.5, 0.3]]]]),
"target": torch.tensor([[[[0.3, 0.4], [0.7, 0.9]]], [[[1.0, 0.1], [0.5, 0.3]]]]),
},
0.0,
],
[ # shape: (2, 1, 2, 2), (2, 1, 2, 2)
{"include_background": True, "smooth_nr": 1e-4, "smooth_dr": 1e-4, "soft_label": False},
{
"input": torch.tensor([[[[0.3, 0.4], [0.7, 0.9]]], [[[1.0, 0.1], [0.5, 0.3]]]]),
"target": torch.tensor([[[[0.3, 0.4], [0.7, 0.9]]], [[[1.0, 0.1], [0.5, 0.3]]]]),
},
0.307773,
],
[ # shape: (2, 2, 3), (2, 1, 3)
{"include_background": False, "to_onehot_y": True, "smooth_nr": 0, "smooth_dr": 0},
{
"input": torch.tensor([[[1.0, 1.0, 0.0], [0.0, 0.0, 1.0]], [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]]),
"target": torch.tensor([[[0.0, 0.0, 1.0]], [[0.0, 1.0, 0.0]]]),
},
0.0,
],
[ # shape: (2, 2, 3), (2, 1, 3)
{"include_background": True, "to_onehot_y": True, "sigmoid": True, "smooth_nr": 1e-4, "smooth_dr": 1e-4},
{
"input": torch.tensor([[[-1.0, 0.0, 1.0], [1.0, 0.0, -1.0]], [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]]),
"target": torch.tensor([[[1.0, 0.0, 0.0]], [[1.0, 1.0, 0.0]]]),
},
0.435050,
],
[ # shape: (2, 2, 3), (2, 1, 3)
{
"include_background": True,
"to_onehot_y": True,
"sigmoid": True,
"reduction": "none",
"smooth_nr": 1e-4,
"smooth_dr": 1e-4,
},
{
"input": torch.tensor([[[-1.0, 0.0, 1.0], [1.0, 0.0, -1.0]], [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]]),
"target": torch.tensor([[[1.0, 0.0, 0.0]], [[1.0, 1.0, 0.0]]]),
},
[[[0.296529], [0.415136]], [[0.599976], [0.428559]]],
],
[ # shape: (2, 2, 3), (2, 1, 3)
{"include_background": True, "to_onehot_y": True, "softmax": True, "smooth_nr": 1e-4, "smooth_dr": 1e-4},
{
"input": torch.tensor([[[-1.0, 0.0, 1.0], [1.0, 0.0, -1.0]], [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]]),
"target": torch.tensor([[[1.0, 0.0, 0.0]], [[1.0, 1.0, 0.0]]]),
},
0.383713,
],
[ # shape: (2, 2, 3), (2, 1, 3)
{
"include_background": True,
"to_onehot_y": True,
"softmax": True,
"reduction": "sum",
"smooth_nr": 1e-4,
"smooth_dr": 1e-4,
},
{
"input": torch.tensor([[[-1.0, 0.0, 1.0], [1.0, 0.0, -1.0]], [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]]),
"target": torch.tensor([[[1.0, 0.0, 0.0]], [[1.0, 1.0, 0.0]]]),
},
1.534853,
],
[ # shape: (1, 1, 2, 2), (1, 1, 2, 2)
{"include_background": True, "sigmoid": True, "smooth_nr": 1e-6, "smooth_dr": 1e-6},
{"input": torch.tensor([[[[1.0, -1.0], [-1.0, 1.0]]]]), "target": torch.tensor([[[[1.0, 0.0], [1.0, 1.0]]]])},
0.307576,
],
[ # shape: (1, 1, 2, 2), (1, 1, 2, 2)
{"include_background": True, "sigmoid": True, "squared_pred": True},
{"input": torch.tensor([[[[1.0, -1.0], [-1.0, 1.0]]]]), "target": torch.tensor([[[[1.0, 0.0], [1.0, 1.0]]]])},
0.178337,
],
[ # shape: (1, 1, 2, 2), (1, 1, 2, 2)
{"include_background": True, "sigmoid": True, "jaccard": True},
{"input": torch.tensor([[[[1.0, -1.0], [-1.0, 1.0]]]]), "target": torch.tensor([[[[1.0, 0.0], [1.0, 1.0]]]])},
0.470451,
],
[ # shape: (2, 1, 2, 2), (2, 1, 2, 2)
{"include_background": True, "other_act": torch.tanh, "smooth_nr": 1e-4, "smooth_dr": 1e-4},
{
"input": torch.tensor([[[[1.0, -1.0], [-1.0, 1.0]]], [[[1.0, -1.0], [-1.0, 1.0]]]]),
"target": torch.tensor([[[[1.0, 1.0], [1.0, 1.0]]], [[[1.0, 0.0], [1.0, 0.0]]]]),
},
0.999963,
],
[ # shape: (2, 2, 3), (2, 1, 3)
{
"include_background": True,
"to_onehot_y": True,
"other_act": lambda x: torch.log_softmax(x, dim=1),
"smooth_nr": 1e-4,
"smooth_dr": 1e-4,
},
{
"input": torch.tensor([[[-1.0, 0.0, 1.0], [1.0, 0.0, -1.0]], [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]]),
"target": torch.tensor([[[1.0, 0.0, 0.0]], [[1.0, 1.0, 0.0]]]),
},
-8.522593,
],
[ # shape: (2, 1, 2, 2), (2, 1, 2, 2)
{"include_background": True, "other_act": torch.tanh, "smooth_nr": 1e-4, "smooth_dr": 1e-4, "batch": True},
{
"input": torch.tensor([[[[1.0, -0.0], [-1.0, 1.0]]], [[[1.0, -1.0], [-1.0, 1.0]]]]),
"target": torch.tensor([[[[1.0, 1.0], [1.0, 1.0]]], [[[1.0, 0.0], [1.0, 0.0]]]]),
},
0.774718,
],
[ # shape: (2, 1, 2, 2), (2, 1, 2, 2)
{"include_background": True, "other_act": torch.tanh, "smooth_nr": 0, "smooth_dr": 1e-4, "batch": True},
{
"input": torch.tensor([[[[1.0, -0.0], [-1.0, 1.0]]], [[[1.0, -1.0], [-1.0, 1.0]]]]),
"target": torch.tensor([[[[1.0, 1.0], [1.0, 1.0]]], [[[1.0, 0.0], [1.0, 0.0]]]]),
},
0.774733,
],
[ # shape: (2, 1, 2, 2), (2, 1, 2, 2)
{"include_background": True, "other_act": torch.tanh, "smooth_nr": 0, "smooth_dr": 1e-4, "batch": False},
{
"input": torch.tensor([[[[1.0, -0.0], [-1.0, 1.0]]], [[[1.0, -1.0], [-1.0, 1.0]]]]),
"target": torch.tensor([[[[1.0, 1.0], [1.0, 1.0]]], [[[1.0, 0.0], [1.0, 0.0]]]]),
},
0.840058,
],
[ # shape: (2, 2, 3), (2, 1, 3) weight
{
"include_background": True,
"to_onehot_y": True,
"other_act": lambda x: torch.log_softmax(x, dim=1),
"smooth_nr": 1e-4,
"smooth_dr": 1e-4,
"weight": (0, 1),
},
{
"input": torch.tensor([[[-1.0, 0.0, 1.0], [1.0, 0.0, -1.0]], [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]]),
"target": torch.tensor([[[1.0, 0.0, 0.0]], [[1.0, 1.0, 0.0]]]),
},
-8.268515,
],
]


class TestSparseDiceLoss(unittest.TestCase):
    @parameterized.expand(TEST_CASES)
    def test_shape(self, input_param, input_data, expected_val):
        result = DiceLoss(**input_param).forward(**input_data)
        np.testing.assert_allclose(result.detach().cpu().numpy(), expected_val, rtol=1e-5)

    def test_ill_shape(self):
        loss = DiceLoss()
        with self.assertRaisesRegex(AssertionError, ""):
            loss.forward(torch.ones((1, 2, 3)), torch.ones((4, 5, 6)))

    def test_ill_opts(self):
        with self.assertRaisesRegex(ValueError, ""):
            DiceLoss(sigmoid=True, softmax=True)
        chn_input = torch.ones((1, 1, 3))
        chn_target = torch.ones((1, 1, 3))
        with self.assertRaisesRegex(ValueError, ""):
            DiceLoss(reduction="unknown")(chn_input, chn_target)
        with self.assertRaisesRegex(ValueError, ""):
            DiceLoss(reduction=None)(chn_input, chn_target)

    def test_input_warnings(self):
        chn_input = torch.ones((1, 1, 3))
        chn_target = torch.ones((1, 1, 3))
        with self.assertWarns(Warning):
            loss = DiceLoss(include_background=False)
            loss.forward(chn_input, chn_target)
        with self.assertWarns(Warning):
            loss = DiceLoss(softmax=True)
            loss.forward(chn_input, chn_target)
        with self.assertWarns(Warning):
            loss = DiceLoss(to_onehot_y=True)
            loss.forward(chn_input, chn_target)

    def test_script(self):
        loss = DiceLoss()
        test_input = torch.ones(2, 1, 8, 8)
        test_script_save(loss, test_input, test_input)

⚠️ Potential issue | 🟠 Major

Tests are targeting DiceLoss, not SparseDiceLoss.

This file does not validate the new class at all. Every assertion currently re-tests existing DiceLoss behavior.

✅ Minimal fix
-from monai.losses import DiceLoss
+from monai.losses import SparseDiceLoss
...
-        result = DiceLoss(**input_param).forward(**input_data)
+        result = SparseDiceLoss(**input_param).forward(**input_data)
...
-        loss = DiceLoss()
+        loss = SparseDiceLoss()
...
-            DiceLoss(sigmoid=True, softmax=True)
+            SparseDiceLoss(sigmoid=True, softmax=True)
...
-            DiceLoss(reduction="unknown")(chn_input, chn_target)
+            SparseDiceLoss(reduction="unknown")(chn_input, chn_target)
...
-            DiceLoss(reduction=None)(chn_input, chn_target)
+            SparseDiceLoss(reduction=None)(chn_input, chn_target)
...
-            loss = DiceLoss(include_background=False)
+            loss = SparseDiceLoss(include_background=False)
...
-            loss = DiceLoss(softmax=True)
+            loss = SparseDiceLoss(softmax=True)
...
-            loss = DiceLoss(to_onehot_y=True)
+            loss = SparseDiceLoss(to_onehot_y=True)
...
-        loss = DiceLoss()
+        loss = SparseDiceLoss()

As per coding guidelines, "Ensure new or modified definitions will be covered by existing or new unit tests."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/losses/test_sparse_dice_loss.py` around lines 20 - 223, The tests in
TestSparseDiceLoss are invoking DiceLoss instead of the new SparseDiceLoss, so
they don't validate the new class; update the test cases and all test usages
(e.g., the parameterized TEST_CASES consumer in test_shape, test_ill_shape,
test_ill_opts, test_input_warnings, and test_script) to instantiate and call
SparseDiceLoss instead of DiceLoss (or add parallel tests that call
SparseDiceLoss) and ensure any behavior differences (args like to_onehot_y,
include_background, soft_label, other_act, reduction) are covered; keep the same
test inputs/expected values unless SparseDiceLoss has different semantics, in
which case adjust expected_val accordingly.

@Asmodasis
Author

Closing the pull request. Reason: sparse Dice loss is already computed within the existing function, since Dice loss is computed over tensors and zero denominators are accounted for.
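The closing rationale rests on the smoothing terms: when a class is absent from both prediction and target, the raw Dice ratio would be 0/0, but `smooth_nr`/`smooth_dr` keep it finite. A plain-Python illustration of that mechanism (a sketch of the idea, not MONAI's code):

```python
def dice_score(pred, target, smooth_nr=1e-5, smooth_dr=1e-5):
    # classic soft-Dice ratio with numerator/denominator smoothing
    intersection = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return (2.0 * intersection + smooth_nr) / (denom + smooth_dr)

empty = [0.0, 0.0, 0.0, 0.0]
# both prediction and target empty: raw ratio would be 0/0,
# but the smooth terms yield smooth_nr / smooth_dr, i.e. ~1.0
score = dice_score(empty, empty)
```

With equal smoothing constants, an all-empty class scores about 1.0 (zero loss), which is why sparse, mostly-background targets already work with the existing DiceLoss.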

@Asmodasis Asmodasis closed this Mar 6, 2026