fix: fall back to individual deletes when S3 backend returns NotImplemented for DeleteObjects #899
📝 Walkthrough (Summary by CodeRabbit)

Adds a fallback in S3Backend.deleteObjects: when the DeleteObjectsCommand call throws a NotImplemented error, the adapter performs parallel individual DeleteObjectCommand calls (ignoring NoSuchKey results) and returns after they complete. Non-NotImplemented errors from the batch call are rethrown. No public or exported signatures changed.

Sequence diagram:

```mermaid
sequenceDiagram
    participant Client
    participant S3Adapter
    participant S3Service
    Client->>S3Adapter: deleteObjects(keys[])
    S3Adapter->>S3Service: DeleteObjectsCommand(keys[])
    alt Success
        S3Service-->>S3Adapter: Success
        S3Adapter-->>Client: Success
    else NotImplemented
        S3Service-->>S3Adapter: NotImplemented error
        par Parallel deletes
            S3Adapter->>S3Service: DeleteObjectCommand(key1)
            S3Adapter->>S3Service: DeleteObjectCommand(key2)
            S3Adapter->>S3Service: DeleteObjectCommand(keyN)
        end
        S3Service-->>S3Adapter: individual results (fulfilled/rejected)
        S3Adapter-->>Client: Success (after allSettled, ignoring NoSuchKey)
    else Other error
        S3Service-->>S3Adapter: Error
        S3Adapter-->>Client: Rethrow error
    end
```
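The flow in the diagram can be sketched in TypeScript. This is a minimal illustration, not the adapter's actual code: `Client` and the command objects below are stand-ins for the AWS SDK v3 client and its `DeleteObjectsCommand`/`DeleteObjectCommand` types.

```typescript
// Minimal sketch of the walkthrough above. `Client` and the command objects
// are stand-ins for the AWS SDK v3 client and its DeleteObjectsCommand /
// DeleteObjectCommand types; this is not the adapter's actual code.
type Command =
  | { kind: 'DeleteObjects'; keys: string[] }
  | { kind: 'DeleteObject'; key: string }

interface Client {
  send(cmd: Command): Promise<void>
}

async function deleteObjects(client: Client, keys: string[]): Promise<void> {
  try {
    // Preferred path: one batch call for all keys.
    await client.send({ kind: 'DeleteObjects', keys })
  } catch (err) {
    // Only fall back when the backend reports the batch API as unsupported.
    if ((err as { name?: string }).name !== 'NotImplemented') throw err

    // Fallback: parallel individual deletes. allSettled lets every delete
    // run to completion instead of aborting siblings on the first failure.
    const results = await Promise.allSettled(
      keys.map((key) => client.send({ kind: 'DeleteObject', key }))
    )
    for (const result of results) {
      if (result.status !== 'rejected') continue
      // An object that is already gone is the desired end state, not an error.
      if ((result.reason as { name?: string }).name === 'NoSuchKey') continue
      throw result.reason
    }
  }
}
```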
Actionable comments posted: 1
Inline comments:
In `@src/storage/backend/s3/adapter.ts`:
- Around lines 352-356: the current delete loop in the S3 adapter (where `prefixes.map(...)` sends `DeleteObjectCommand`) uses `Promise.all`, which can reject on one failure after some deletes have already succeeded, causing DB transactions to roll back while blobs are already removed. Change the adapter's `deleteObjects` method to use `Promise.allSettled` for the `DeleteObjectCommand` sends, aggregate the results, and do not throw on individual failures: collect the failed keys (or errors) and return them, or a success/failure summary, so the caller (e.g. `backend.deleteObjects`) can handle partial failures or retry. Alternatively, perform the S3 deletes only after the DB transaction commits. Either way, do not keep the current behavior of throwing after partial side effects.
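One possible shape for the reviewer's "return a failure summary" suggestion, sketched with a generic `sendDelete` callback standing in for `this.client.send(new DeleteObjectCommand(...))`; the names here are illustrative, not the adapter's real API.

```typescript
// Sketch of the reviewer's "return a failure summary" suggestion: never throw
// mid-batch; report failed keys to the caller instead. `sendDelete` stands in
// for this.client.send(new DeleteObjectCommand({ Bucket, Key })).
async function deleteIndividually(
  sendDelete: (key: string) => Promise<void>,
  keys: string[]
): Promise<{ failed: { key: string; error: unknown }[] }> {
  // allSettled: every delete runs to completion regardless of siblings.
  const results = await Promise.allSettled(keys.map((key) => sendDelete(key)))
  const failed = results.flatMap((result, i) =>
    result.status === 'rejected' ? [{ key: keys[i], error: result.reason }] : []
  )
  // No exception on partial failure: the caller (e.g. backend.deleteObjects)
  // decides whether to retry the failed keys or roll back the DB transaction.
  return { failed }
}
```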
ℹ️ Review info
⚙️ Run configuration
Configuration used: Central YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 4ed5bca2-4c1c-4e7b-99ae-c46cba138f3f
📒 Files selected for processing (1)
src/storage/backend/s3/adapter.ts
Force-pushed from 48ddfd7 to 921ff17.
…ted for DeleteObjects

Some S3-compatible backends (e.g. GCS in S3 interoperability mode) do not support the DeleteObjects batch API and return a NotImplemented error. Catch this and fall back to individual DeleteObjectCommand calls via Promise.allSettled, so a single failure does not abort remaining deletes mid-flight.
Force-pushed from 921ff17 to f7db348.
Actionable comments posted: 1
Inline comments:
In `@src/storage/backend/s3/adapter.ts`:
- Around lines 346-355: the NotImplemented fallback currently fires `Promise.allSettled` over `prefixes` and returns without checking the results, which can orphan objects if individual deletes fail. Update the fallback in the S3 adapter to inspect the `Promise.allSettled` results for the `DeleteObjectCommand` calls (the promise returned by `this.client.send`): treat fulfilled results as OK, and for rejected results throw an error unless the rejection corresponds to a benign "not found" case (mirroring the logic used in the file backend for ENOENT/NoSuchKey). Ensure the code references the same variables (`prefixes`, `DeleteObjectCommand`, `this.client.send`) and rethrows on any real failure, so the caller does not commit the DB transaction on partial delete failures.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Central YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 7762c89a-6dc6-4afa-be9a-bac9dab05805
📒 Files selected for processing (2)
src/storage/backend/s3/adapter.ts
src/test/s3-adapter.test.ts
@coderabbitai review

✅ Actions performed: Review triggered.
Follow the same pattern as the file.ts backend: iterate allSettled results and throw on any individual delete failure except NoSuchKey (object already gone), preventing orphaned S3 objects when a delete fails mid-batch. Also updates tests to cover NoSuchKey-ignore and real-error-throw behaviour.
@ferhatelmas @fenos — thanks for maintaining this project! Would appreciate a review when you get a chance. This fixes GCS S3 interop delete failures (#898) with a small, targeted change, following the existing patterns in the codebase and iterated based on @coderabbitai's feedback.
The implementation looks clean and well-tested. Looking forward to the maintainers' feedback! 🚀
@ameyanaik11 thank you for this well-tested PR. It is mostly there, except for S3Locker compatibility for TUS, which is still using DeleteObjects. We need to support it too. If we could add a config variable STORAGE_S3_BATCH_DELETE_ENABLED=false (default is true), the implementation would be cleaner, because the main flow wouldn't be cluttered with a possible code path; it would choose one or the other according to the config. Are you willing to update for this approach?
…dividual-delete helpers

- Add storageS3BatchDeleteEnabled config flag (default: true). Set STORAGE_S3_BATCH_DELETE_ENABLED=false for S3-compatible backends (e.g. GCS) that don't support the bulk DeleteObjects API.
- S3Backend.deleteObjects: when the flag is false, skip DeleteObjectsCommand and go straight to individual deletes. Extract a private deleteObjectsIndividually() helper shared by the config-off path and the existing NotImplemented runtime fallback.
- S3Locker: add a batchDeleteEnabled option (default: true), wired from config in the TUS route. Extract a deleteLocksIndividually() helper used by cleanupZombieLocks for both the config-off path and the NotImplemented fallback added in the previous commit.
- Tests: 2 new unit tests in s3-adapter.test.ts and 1 new integration test in s3-locker.test.ts covering the batchDeleteEnabled=false paths.
Pull request overview
Adds resilience for S3-compatible backends (notably GCS S3 interop) that return NotImplemented for DeleteObjects by falling back to per-object deletes, restoring delete behavior for Studio and bulk delete flows.
Changes:
- Add `NotImplemented` detection in `S3Backend.deleteObjects()` and fall back to parallel `DeleteObject` calls.
- Add similar fallback logic to TUS `S3Locker.cleanupZombieLocks()` and plumb a new `batchDeleteEnabled` toggle.
- Add/expand Jest coverage for both the adapter and locker fallback behaviors.
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 5 comments.
Show a summary per file
| File | Description |
|---|---|
| src/storage/backend/s3/adapter.ts | Fallback to individual deletes when batch delete is unsupported; adds opt-out via config. |
| src/storage/protocols/tus/s3-locker.ts | Zombie-lock cleanup now falls back to individual deletes on NotImplemented (and supports disabling batch deletes). |
| src/config.ts | Introduces storageS3BatchDeleteEnabled backed by STORAGE_S3_BATCH_DELETE_ENABLED. |
| src/http/routes/tus/index.ts | Passes storageS3BatchDeleteEnabled into S3Locker construction. |
| src/test/s3-adapter.test.ts | Adds unit tests for batch vs fallback delete behavior and config-based disabling. |
| src/test/s3-locker.test.ts | Adds tests covering NotImplemented fallback paths during zombie-lock cleanup. |
src/http/routes/tus/index.ts
Outdated
```typescript
; (req.raw as MultiPartRequest).log = req.log
; (req.raw as MultiPartRequest).upload = {
    storage: req.storage,
    owner: req.owner,
    tenantId: req.tenantId,
    db: req.db,
    isUpsert: req.headers['x-upsert'] === 'true',
}
```
Same formatting issue here as above: extra indentation and `; (` instead of the repo's usual `;(…)` style. This is likely to fail `prettier --check` in CI; please reformat with Prettier.
Suggested change:

```diff
-; (req.raw as MultiPartRequest).log = req.log
-; (req.raw as MultiPartRequest).upload = {
-    storage: req.storage,
-    owner: req.owner,
-    tenantId: req.tenantId,
-    db: req.db,
-    isUpsert: req.headers['x-upsert'] === 'true',
-}
+;(req.raw as MultiPartRequest).log = req.log
+;(req.raw as MultiPartRequest).upload = {
+  storage: req.storage,
+  owner: req.owner,
+  tenantId: req.tenantId,
+  db: req.db,
+  isUpsert: req.headers['x-upsert'] === 'true',
+}
```
```diff
  private readonly locker: S3Locker,
  private readonly notifier: LockNotifier
-) {}
+) { }
```
Constructor formatting here (`) { }`) doesn't match Prettier output and is likely to fail `prettier --check`. Please reformat (e.g., `) {}`) or run Prettier on the file.
Suggested change:

```diff
-) { }
+) {}
```
```typescript
; (S3Client as jest.Mock).mockImplementation(() => ({
  send: mockSend,
}))
```
This beforeEach block has odd indentation/leading semicolon placement (`mockSend = jest.fn()` followed by an indented `; (S3Client …)`). This does not look Prettier-formatted and may fail `prettier --check`; please reformat (either remove the leading semicolon or use the standard `;(…)` pattern).
Suggested change:

```diff
-; (S3Client as jest.Mock).mockImplementation(() => ({
-  send: mockSend,
-}))
+;(S3Client as jest.Mock).mockImplementation(() => ({
+  send: mockSend,
+}))
```
```typescript
storageS3BatchDeleteEnabled:
  getOptionalConfigFromEnv('STORAGE_S3_BATCH_DELETE_ENABLED') !== 'false',
```
This introduces a new env var/config flag (`STORAGE_S3_BATCH_DELETE_ENABLED`). The PR description says "No new env vars, config flags" and "No configuration required"; please either update the PR/docs to reflect the new flag (and its default), or remove the flag and rely solely on runtime `NotImplemented` detection.
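For reference, the `!== 'false'` comparison in the diff gives the flag a default of true. A small sketch of that parsing semantics, reimplemented over a plain env record rather than the repo's actual `getOptionalConfigFromEnv` helper:

```typescript
// Default-true flag parsing, matching the `!== 'false'` comparison in the
// diff: only the literal string "false" disables batch deletes; an unset
// variable or any other value (including "0" or "") leaves it enabled.
// Reimplemented over a plain record instead of the repo's config helper.
function isBatchDeleteEnabled(env: Record<string, string | undefined>): boolean {
  return env['STORAGE_S3_BATCH_DELETE_ENABLED'] !== 'false'
}
```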
src/http/routes/tus/index.ts
Outdated
```diff
 fastify.addHook('preHandler', async (req) => {
-  ;(req.raw as MultiPartRequest).log = req.log
-  ;(req.raw as MultiPartRequest).upload = {
-    storage: req.storage,
-    owner: req.owner,
-    tenantId: req.tenantId,
-    db: req.db,
-    isUpsert: req.headers['x-upsert'] === 'true',
-  }
+  ; (req.raw as MultiPartRequest).log = req.log
+  ; (req.raw as MultiPartRequest).upload = {
+      storage: req.storage,
+      owner: req.owner,
+      tenantId: req.tenantId,
+      db: req.db,
+      isUpsert: req.headers['x-upsert'] === 'true',
+  }
```
These two assignments are formatted in a way that will likely fail the repo's `prettier --check` (extra indentation before the second statement and a space after the leading semicolon). Please run Prettier / adjust to the standard `;(expr)` style used elsewhere so CI lint passes.
…pers

Keep the existing NotImplemented fallback inline (status quo) when STORAGE_S3_BATCH_DELETE_ENABLED=true. The false path is a minimal early-return that skips the batch command entirely.
Now that STORAGE_S3_BATCH_DELETE_ENABLED handles the GCS case explicitly, the runtime NotImplemented fallback is no longer needed.
…g check inside it

When batchDeleteEnabled is false, a synthetic NotImplemented is thrown inside the try block so it routes through the existing fallback path, keeping individual-delete logic in a single place.
…eletes

When STORAGE_S3_BATCH_DELETE_ENABLED=false (default: true), prefer individual DeleteObjectCommand calls instead of DeleteObjectsCommand. Both S3Backend.deleteObjects and S3Locker.cleanupZombieLocks route through the existing NotImplemented catch block, keeping all individual-delete logic in one place with no structural changes to existing code.
Summary
Fixes #898
Some S3-compatible backends (e.g. Google Cloud Storage in S3 interoperability mode) do not support the `DeleteObjects` batch API and return a `NotImplemented` error. This causes all delete operations from the Supabase Studio UI and the batch delete API endpoint to fail with HTTP 400.

Changes

`src/storage/backend/s3/adapter.ts`: `S3Backend.deleteObjects()`

When `DeleteObjectsCommand` returns `NotImplemented`, fall back to deleting objects individually via `DeleteObjectCommand` in parallel. No configuration required: the limitation is auto-detected at runtime.

How it was tested
Tested against a live GCS bucket in S3 interoperability mode:
- `DeleteObjectsCommand` returns `NotImplemented` on GCS
- `DeleteObjectCommand` works correctly on GCS

Notes