
chore: Run backend queries parallely #144

Open
gonzalezzfelipe wants to merge 1 commit into main from chore/run-backend-queries-in-parallel

Conversation

@gonzalezzfelipe (Contributor) commented Mar 11, 2026

Summary by CodeRabbit

  • Performance Improvements
    • Objects within a specified radius are now fetched faster through parallelized processing of multiple asset types (ships, pellets, tokens, and asteroids).

@coderabbitai bot commented Mar 11, 2026

📝 Walkthrough

Walkthrough

Refactors objects_in_radius query logic to parallelize radius-based asset searches using tokio::join!. Introduces four specialized radius query functions (ships_in_radius, pellets_in_radius, asteria_in_radius, tokens_in_radius) replacing sequential UTXO fetch-and-filter loops. Updates distance_from_center parameter handling.

Changes

Cohort / File(s): Parallelized Radius Queries — backend/src/main.rs
Summary: Added four new radius-based query functions and internal distance/policy filtering helpers. Replaced the sequential UTXO loops in objects_in_radius with a concurrent tokio::join! composition. Changed distance_from_center to take its position argument by reference (&PositionInput) instead of by value. Imported join_all for token range processing.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

A rabbit hops through concurrent threads,
Four queries race where one once sped,
With tokio's join, the wait grew brief,
Radius searches find relief! 🐇✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning — Docstring coverage is 12.50%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed — Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check ✅ Passed — The title 'chore: Run backend queries parallely' accurately summarizes the main change: introducing parallelized queries in the backend using tokio::join! to replace sequential UTXO processing.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/src/main.rs`:
- Around line 433-449: tokens_in_radius currently uses a non-strict boundary
check (distance_from_center(...) <= radius) which is inconsistent with other
object queries that use a strict check (< radius); update the filter in
tokens_in_radius to use a strict comparison (< radius) so Token objects are
excluded when exactly on the radius boundary and behavior matches the other
functions (match the check used elsewhere like in the ship/pellet/asteria
filters), leaving the rest of the pipeline (Token::try_from, Token.amount > 0,
mapping to PositionalInterface::Token) unchanged.
- Around line 672-680: The current code maps each user-supplied tokens entry
into an unbounded set of futures and calls join_all (see the tokens variable,
tokens_in_radius function, and join_all usage), which can trigger
Pagination::all() for every token concurrently; instead either truncate the
input (e.g., limit tokens.iter().take(MAX_TOKENS)) or replace join_all with a
bounded concurrency pattern such as converting the futures vector into a stream
and using buffer_unordered(CONCURRENCY_LIMIT) (or FuturesUnordered with a
semaphore) to limit how many tokens_in_radius calls run at once; add a
configurable MAX_TOKENS or CONCURRENCY_LIMIT constant and apply it where the
tokens are mapped before awaiting results.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e152e97c-c192-4d94-bf1c-8cf99713fab5

📥 Commits

Reviewing files that changed from the base of the PR and between 1f9836c and b564d31.

📒 Files selected for processing (1)
  • backend/src/main.rs

Comment on lines +433 to +449
```rust
async fn tokens_in_radius(
    api: &BlockfrostAPI,
    pellet_address: &str,
    token: &TokenInput,
    radius: i32,
    center: &PositionInput,
) -> Result<Vec<PositionalInterface>, Error> {
    Ok(fetch_utxos_by_policy(api, pellet_address, &token.policy_id)
        .await?
        .into_iter()
        .map(|utxo| Token::try_from((token.clone(), utxo)))
        .collect::<Result<Vec<Token>, Error>>()?
        .into_iter()
        .filter(|token| distance_from_center(token.position.x, token.position.y, center) <= radius)
        .filter(|token| token.amount > 0)
        .map(PositionalInterface::Token)
        .collect())
```

⚠️ Potential issue | 🟡 Minor

Align the radius boundary check across object types.

Line 446 uses <= radius, while Lines 378, 401, and 424 use a strict < radius. Tokens on the boundary are therefore returned while ships, pellets, and Asteria at the same distance are not.


Comment on lines +672 to +680
```rust
async {
    match tokens.as_ref().map(|tokens| {
        tokens
            .iter()
            .map(|token| tokens_in_radius(api, &pellet_address, token, radius, &center))
            .collect::<Vec<_>>()
    }) {
        Some(futs) => join_all(futs).await,
        None => Vec::new(),
```

⚠️ Potential issue | 🟠 Major

Bound the token fan-out before calling Blockfrost.

This branch turns a user-supplied tokens array into one Pagination::all() upstream scan per entry, all in flight at once. A large request can burn rate limits and amplify latency for the whole resolver. Please cap the list size or switch to bounded concurrency instead of join_all.

