
fix: fall back to individual deletes when S3 backend returns NotImplemented for DeleteObjects#899

Open
ameyanaik11 wants to merge 12 commits into supabase:master from ameyanaik11:fix/gcs-delete-objects-not-implemented

Conversation

@ameyanaik11

Summary

Fixes #898

Some S3-compatible backends (e.g. Google Cloud Storage in S3 interoperability mode) do not support the DeleteObjects batch API and return a NotImplemented error. This causes all delete operations from the Supabase Studio UI and the batch delete API endpoint to fail with HTTP 400.

Changes

src/storage/backend/s3/adapter.ts: S3Backend.deleteObjects()

When DeleteObjectsCommand returns NotImplemented, fall back to deleting objects individually via DeleteObjectCommand in parallel. No configuration required — auto-detects the limitation at runtime.

batch DeleteObjects → NotImplemented
  ↓ fallback
individual DeleteObject × N (in parallel)
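The fallback flow above can be sketched in TypeScript. This is an illustrative sketch only: the real adapter uses DeleteObjectsCommand/DeleteObjectCommand from @aws-sdk/client-s3, which are modeled here by a minimal FakeS3Client interface (batchDelete and deleteOne are invented names) so the control flow stands alone.

```typescript
// Minimal stand-in for the S3 client: `batchDelete` models DeleteObjectsCommand,
// `deleteOne` models DeleteObjectCommand. Both names are illustrative, not the SDK's.
interface FakeS3Client {
  batchDelete(keys: string[]): Promise<void>
  deleteOne(key: string): Promise<void>
}

// Error shape mirroring the SDK convention of a `name`/code like 'NotImplemented'.
class S3Error extends Error {
  constructor(public readonly code: string) {
    super(code)
    this.name = code
  }
}

async function deleteObjectsWithFallback(client: FakeS3Client, keys: string[]): Promise<void> {
  try {
    await client.batchDelete(keys)
  } catch (err) {
    // Only the NotImplemented case triggers the fallback; anything else is rethrown.
    if (!(err instanceof S3Error) || err.code !== 'NotImplemented') throw err
    // Fan out to individual deletes; allSettled lets every delete run to
    // completion even if one of them fails mid-batch.
    const results = await Promise.allSettled(keys.map((k) => client.deleteOne(k)))
    for (const r of results) {
      // An already-missing object (NoSuchKey) is benign; real failures surface.
      if (r.status === 'rejected' && (r.reason as S3Error).code !== 'NoSuchKey') {
        throw r.reason
      }
    }
  }
}
```

Against a GCS-style client whose batch call always throws NotImplemented, every key still gets deleted individually, while a different batch error (say, AccessDenied) propagates to the caller unchanged.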

How it was tested

Tested against a live GCS bucket in S3 interoperability mode:

  • Confirmed DeleteObjectsCommand returns NotImplemented on GCS
  • Confirmed individual DeleteObjectCommand works correctly on GCS
  • Confirmed the fallback path restores expected delete behaviour end-to-end

Notes

  • Standard S3 (AWS) and MinIO are unaffected — they support batch delete and never hit the fallback path
  • The parallel individual deletes perform equivalently for small batches and degrade gracefully for large ones
  • No new env vars, config flags, or breaking changes

@ameyanaik11 ameyanaik11 requested a review from a team as a code owner March 7, 2026 04:24
@coderabbitai

coderabbitai bot commented Mar 7, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Central YAML (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 71d9aff3-7418-4fa5-ae57-13202b3a6400

📥 Commits

Reviewing files that changed from the base of the PR and between f7db348 and cf308f1.

📒 Files selected for processing (2)
  • src/storage/backend/s3/adapter.ts
  • src/test/s3-adapter.test.ts

📝 Walkthrough

Summary by CodeRabbit

  • Bug Fixes

    • Improved S3 deletion resilience: when batch deletion isn't supported, the system automatically falls back to deleting objects individually, ignores missing-object errors, and surfaces only genuine failures.
  • Tests

    • Added comprehensive tests covering batch deletion, individual-delete fallback behavior, handling of missing objects, and error propagation for real failures.

Walkthrough

Adds a fallback in S3Backend.deleteObjects: when the DeleteObjectsCommand call throws a NotImplemented error, the adapter performs parallel individual DeleteObjectCommand calls (ignoring NoSuchKey results) and returns after they complete. Non-NotImplemented errors from the batch call are rethrown. No public or exported signatures changed.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant S3Adapter
    participant S3Service
    Client->>S3Adapter: deleteObjects(keys[])
    S3Adapter->>S3Service: DeleteObjectsCommand(keys[])
    alt Success
        S3Service-->>S3Adapter: Success
        S3Adapter-->>Client: Success
    else NotImplemented
        S3Service-->>S3Adapter: NotImplemented error
        par Parallel deletes
            S3Adapter->>S3Service: DeleteObjectCommand(key1)
            S3Adapter->>S3Service: DeleteObjectCommand(key2)
            S3Adapter->>S3Service: DeleteObjectCommand(keyN)
        end
        S3Service-->>S3Adapter: individual results (fulfilled/rejected)
        S3Adapter-->>Client: Success (after allSettled, ignoring NoSuchKey)
    else Other error
        S3Service-->>S3Adapter: Error
        S3Adapter-->>Client: Rethrow error
    end

Assessment against linked issues

  • Catch NotImplemented error from DeleteObjects and fall back to individual DeleteObjectCommand calls (#898): addressed
  • Maintain existing behavior for other error types (#898): addressed

Warning

Review ran into problems

🔥 Problems

Git: Failed to clone repository. Please run the @coderabbitai full review command to re-trigger a full review. If the issue persists, set path_filters to include or exclude specific files.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/storage/backend/s3/adapter.ts`:
- Around line 352-356: The current delete loop in the S3 adapter (where
prefixes.map(...) sends DeleteObjectCommand) uses Promise.all which can reject
on one failure after some deletes succeeded, causing DB transactions to roll
back while blobs are already removed; change the delete implementation in the
adapter's deleteObjects method to use Promise.allSettled for the
DeleteObjectCommand sends, aggregate results, and do not throw on individual
failures: collect failed Keys (or errors) and return them (or a success/failure
summary) so the caller (e.g., backend.deleteObjects) can handle partial failures
or retry; alternatively perform the S3 deletes only after the DB transaction
commits — but do not keep the current behavior of throwing after partial side
effects.
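The failure mode this comment describes, Promise.all rejecting while sibling deletes have already succeeded, is easy to reproduce in isolation. A toy demonstration (flakyDelete is a hypothetical stand-in, not the adapter's code):

```typescript
// A delete that fails for one specific key and succeeds for the rest.
const flakyDelete = (key: string): Promise<string> =>
  key === 'bad' ? Promise.reject(new Error(`delete failed: ${key}`)) : Promise.resolve(key)

async function compare(keys: string[]) {
  // Promise.all: rejects as soon as any delete fails, even though the other
  // deletes keep running and may already have removed their objects. The
  // caller sees a failure despite partial side effects.
  let allFailed = false
  try {
    await Promise.all(keys.map(flakyDelete))
  } catch {
    allFailed = true
  }

  // Promise.allSettled: every delete runs to completion, and the caller can
  // separate genuine failures from successes afterwards.
  const settled = await Promise.allSettled(keys.map(flakyDelete))
  const failedKeys = keys.filter((_, i) => settled[i].status === 'rejected')
  return { allFailed, failedKeys }
}
```

With keys ['a', 'bad', 'c'], Promise.all rejects outright while allSettled reports exactly one failed key, which is the aggregation behavior the review asks for.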

ℹ️ Review info
⚙️ Run configuration

Configuration used: Central YAML (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 4ed5bca2-4c1c-4e7b-99ae-c46cba138f3f

📥 Commits

Reviewing files that changed from the base of the PR and between bd1edfd and 041a75a.

📒 Files selected for processing (1)
  • src/storage/backend/s3/adapter.ts

@ameyanaik11 ameyanaik11 force-pushed the fix/gcs-delete-objects-not-implemented branch 3 times, most recently from 48ddfd7 to 921ff17 on March 7, 2026 04:38
…ted for DeleteObjects

Some S3-compatible backends (e.g. GCS in S3 interoperability mode) do not
support the DeleteObjects batch API and return a NotImplemented error.

Catch this and fall back to individual DeleteObjectCommand calls via
Promise.allSettled, so a single failure does not abort remaining deletes
mid-flight.
@ameyanaik11 ameyanaik11 force-pushed the fix/gcs-delete-objects-not-implemented branch from 921ff17 to f7db348 on March 7, 2026 04:41

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/storage/backend/s3/adapter.ts`:
- Around line 346-355: The NotImplemented fallback currently fires
Promise.allSettled over prefixes and returns without checking results, which can
orphan objects if individual deletes fail; update the fallback in the S3 adapter
to inspect Promise.allSettled results for the DeleteObjectCommand calls (the
Promise returned by this.client.send), treat fulfilled results as OK, and for
rejected results throw an error unless the rejection corresponds to a benign
"not found" case (mirror the logic used in the file backend for
ENOENT/NoSuchKey); ensure the code references the same variables (prefixes,
DeleteObjectCommand, this.client.send) and re-throws or rejects on any real
failure so the caller does not commit the DB transaction on partial delete
failures.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Central YAML (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 7762c89a-6dc6-4afa-be9a-bac9dab05805

📥 Commits

Reviewing files that changed from the base of the PR and between 454c6c9 and 921ff17.

📒 Files selected for processing (2)
  • src/storage/backend/s3/adapter.ts
  • src/test/s3-adapter.test.ts

@ameyanaik11
Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Mar 7, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Follow the same pattern as the file.ts backend: iterate allSettled
results and throw on any individual delete failure except NoSuchKey
(object already gone), preventing orphaned S3 objects when a delete
fails mid-batch.

Also updates tests to cover NoSuchKey-ignore and real-error-throw
behaviour.
@ameyanaik11
Author

@ferhatelmas @fenos — thanks for maintaining this project! Would appreciate a review when you get a chance. This fixes GCS S3 interop delete failures (#898) with a small targeted change, following the existing patterns in the codebase and iterated based on @coderabbitai's feedback.

@coderabbitai

coderabbitai bot commented Mar 7, 2026

Tip

For best results, initiate chat on the files or code changes.

@ameyanaik11 — Thanks for the heads-up! For the maintainers' reference, here's a quick summary of where things stand based on the automated review:

  • Change scope: Targeted fallback in S3Backend.deleteObjects() — catches NotImplemented from DeleteObjectsCommand and fans out to parallel individual DeleteObjectCommand calls, ignoring NoSuchKey and re-throwing other errors.
  • Test coverage: New tests cover the happy path (batch), the NotImplemented fallback, NoSuchKey tolerance in fallback, real-error propagation in fallback, and re-throw of non-NotImplemented errors.
  • Blast radius: Zero config changes; AWS S3 and MinIO code paths are unaffected.

The implementation looks clean and well-tested. Looking forward to the maintainers' feedback! 🚀

@ferhatelmas
Member

@ameyanaik11 thank you for this well-tested PR. It is mostly there, except for S3Locker compatibility for TUS, which still uses DeleteObjects. We need to support that too.

If we could add a config variable STORAGE_S3_BATCH_DELETE_ENABLED=false (default is true), the implementation would be cleaner because the main flow wouldn't be cluttered with an extra code path. It would choose one or the other according to the config.

Are you willing to update for this approach?

…dividual-delete helpers

- Add storageS3BatchDeleteEnabled config flag (default: true)
  Set STORAGE_S3_BATCH_DELETE_ENABLED=false for S3-compatible backends
  (e.g. GCS) that don't support the bulk DeleteObjects API.

- S3Backend.deleteObjects: when flag is false, skip DeleteObjectsCommand
  and go straight to individual deletes. Extract private
  deleteObjectsIndividually() helper shared by the config-off path and
  the existing NotImplemented runtime fallback.

- S3Locker: add batchDeleteEnabled option (default: true) wired from
  config in the TUS route. Extract deleteLocksIndividually() helper used
  by cleanupZombieLocks for both the config-off path and the NotImplemented
  fallback added in the previous commit.

- Tests: 2 new unit tests in s3-adapter.test.ts and 1 new integration
  test in s3-locker.test.ts covering the batchDeleteEnabled=false paths.
Copilot AI review requested due to automatic review settings March 19, 2026 19:48

Copilot AI left a comment


Pull request overview

Adds resilience for S3-compatible backends (notably GCS S3 interop) that return NotImplemented for DeleteObjects by falling back to per-object deletes, restoring delete behavior for Studio and bulk delete flows.

Changes:

  • Add NotImplemented detection in S3Backend.deleteObjects() and fall back to parallel DeleteObject calls.
  • Add similar fallback logic to TUS S3Locker.cleanupZombieLocks() and plumb a new batchDeleteEnabled toggle.
  • Add/expand Jest coverage for both the adapter and locker fallback behaviors.

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 5 comments.

Summary per file:

  • src/storage/backend/s3/adapter.ts: Fallback to individual deletes when batch delete is unsupported; adds opt-out via config.
  • src/storage/protocols/tus/s3-locker.ts: Zombie-lock cleanup now falls back to individual deletes on NotImplemented (and supports disabling batch deletes).
  • src/config.ts: Introduces storageS3BatchDeleteEnabled backed by STORAGE_S3_BATCH_DELETE_ENABLED.
  • src/http/routes/tus/index.ts: Passes storageS3BatchDeleteEnabled into S3Locker construction.
  • src/test/s3-adapter.test.ts: Adds unit tests for batch vs fallback delete behavior and config-based disabling.
  • src/test/s3-locker.test.ts: Adds tests covering NotImplemented fallback paths during zombie-lock cleanup.


Comment on lines +363 to +370
; (req.raw as MultiPartRequest).log = req.log
; (req.raw as MultiPartRequest).upload = {
storage: req.storage,
owner: req.owner,
tenantId: req.tenantId,
db: req.db,
isUpsert: req.headers['x-upsert'] === 'true',
}

Copilot AI Mar 19, 2026


Same formatting issue here as above: extra indentation and ; ( instead of the repo’s usual ;(…) style. This is likely to fail prettier --check in CI; please reformat with Prettier.

Suggested change
; (req.raw as MultiPartRequest).log = req.log
; (req.raw as MultiPartRequest).upload = {
storage: req.storage,
owner: req.owner,
tenantId: req.tenantId,
db: req.db,
isUpsert: req.headers['x-upsert'] === 'true',
}
;(req.raw as MultiPartRequest).log = req.log
;(req.raw as MultiPartRequest).upload = {
storage: req.storage,
owner: req.owner,
tenantId: req.tenantId,
db: req.db,
isUpsert: req.headers['x-upsert'] === 'true',
}

private readonly locker: S3Locker,
private readonly notifier: LockNotifier
) {}
) { }

Copilot AI Mar 19, 2026


Constructor formatting here () { }) doesn’t match Prettier output and is likely to fail prettier --check. Please reformat (e.g., ) {}) or run Prettier on the file.

Suggested change
) { }
) {}

Comment on lines +24 to +26
; (S3Client as jest.Mock).mockImplementation(() => ({
send: mockSend,
}))

Copilot AI Mar 19, 2026


This beforeEach block has odd indentation/leading semicolon placement (mockSend = jest.fn() followed by an indented ; (S3Client …)). This does not look Prettier-formatted and may fail prettier --check; please reformat (either remove the leading semicolon or use the standard ;(…) pattern).

Suggested change
; (S3Client as jest.Mock).mockImplementation(() => ({
send: mockSend,
}))
;(S3Client as jest.Mock).mockImplementation(() => ({
send: mockSend,
}))

Comment on lines +368 to +369
storageS3BatchDeleteEnabled:
getOptionalConfigFromEnv('STORAGE_S3_BATCH_DELETE_ENABLED') !== 'false',

Copilot AI Mar 19, 2026


This introduces a new env var/config flag (STORAGE_S3_BATCH_DELETE_ENABLED). The PR description says "No new env vars, config flags" and "No configuration required"; please either update the PR/docs to reflect the new flag (and its default), or remove the flag and rely solely on runtime NotImplemented detection.

Comment on lines +260 to +268
fastify.addHook('preHandler', async (req) => {
;(req.raw as MultiPartRequest).log = req.log
;(req.raw as MultiPartRequest).upload = {
storage: req.storage,
owner: req.owner,
tenantId: req.tenantId,
db: req.db,
isUpsert: req.headers['x-upsert'] === 'true',
}
; (req.raw as MultiPartRequest).log = req.log
; (req.raw as MultiPartRequest).upload = {
storage: req.storage,
owner: req.owner,
tenantId: req.tenantId,
db: req.db,
isUpsert: req.headers['x-upsert'] === 'true',
}

Copilot AI Mar 19, 2026


These two assignments are formatted in a way that will likely fail the repo’s prettier --check (extra indentation before the second statement and a space after the leading semicolon). Please run Prettier / adjust to the standard ;(expr) style used elsewhere so CI lint passes.

…pers

Keep the existing NotImplemented fallback inline (status quo) when
STORAGE_S3_BATCH_DELETE_ENABLED=true. The false path is a minimal
early-return that skips the batch command entirely.
Now that STORAGE_S3_BATCH_DELETE_ENABLED handles the GCS case explicitly,
the runtime NotImplemented fallback is no longer needed.
…g check inside it

When batchDeleteEnabled is false, a synthetic NotImplemented is thrown
inside the try block so it routes through the existing fallback path,
keeping individual-delete logic in a single place.
…eletes

When STORAGE_S3_BATCH_DELETE_ENABLED=false (default: true), prefer
individual DeleteObjectCommand calls instead of DeleteObjectsCommand.

Both S3Backend.deleteObjects and S3Locker.cleanupZombieLocks route
through the existing NotImplemented catch block, keeping all individual-
delete logic in one place with no structural changes to existing code.
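The routing these commit messages describe, throwing a synthetic NotImplemented when the flag is off so the config-off path and the runtime fallback share one catch block, can be sketched as follows. Names are illustrative; the real methods live in S3Backend and S3Locker.

```typescript
// Synthetic error with the same `name` the backend would return, so one catch
// block serves both the config-off path and a genuine NotImplemented response.
class NotImplementedError extends Error {
  name = 'NotImplemented'
}

async function deleteObjectsRouted(
  keys: string[],
  batchDeleteEnabled: boolean,
  ops: { batchDelete(keys: string[]): Promise<void>; deleteOne(key: string): Promise<void> }
): Promise<'batch' | 'individual'> {
  try {
    if (!batchDeleteEnabled) {
      // Route the config-off case through the same fallback path as a real
      // NotImplemented from the backend.
      throw new NotImplementedError()
    }
    await ops.batchDelete(keys)
    return 'batch'
  } catch (err) {
    if (!(err instanceof Error) || err.name !== 'NotImplemented') throw err
    // Shared fallback: parallel individual deletes (result inspection,
    // e.g. tolerating NoSuchKey, is elided here for brevity).
    await Promise.allSettled(keys.map((k) => ops.deleteOne(k)))
    return 'individual'
  }
}
```

The design choice being made: rather than duplicating the individual-delete loop in two places, the flag check is folded into the existing error path, keeping the structural diff minimal.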


Development

Successfully merging this pull request may close these issues.

fix: GCS S3 interop returns NotImplemented for DeleteObjects batch delete

3 participants