
benchmark: live view CDP latency comparison (PR #174 vs #176)#179

Draft
hiroTamada wants to merge 1 commit into main from benchmark/liveview-comparison

Conversation

@hiroTamada (Contributor)

Summary

Benchmark tool and results comparing CDP operation latency across headless live view approaches to inform which implementation to ship.

Variants tested:

| Variant | Branch | Image | Resources |
| --- | --- | --- | --- |
| Baseline | main | headless | 1024 MB / 4 vCPU |
| Approach 1 (Xvfb/noVNC) | feat/headless-live-view (PR #174) | headless | 1024 MB / 4 vCPU |
| Approach 2 (CDP screencast) | headless-cdp-live-view (PR #176) | headless | 1024 MB / 4 vCPU |
| Headful | main | headful | 8192 MB / 8 vCPU |

Environments tested:

  • Docker (local, in-container CDP)
  • Docker with headful constrained to headless resources (1024 MB / 4 vCPU)
  • KraftCloud / Unikraft (remote TLS CDP, session mode)

Key Findings

Docker (side-by-side median latency, selected operations)

| Operation | Baseline | Approach 1 | Approach 2 | Headful |
| --- | --- | --- | --- | --- |
| Screenshot.JPEG.q80 | 62.5ms | 83.7ms | 57.4ms | 94.4ms |
| Input.MouseMove | 4.5ms | 8.0ms | 4.9ms | 15.6ms |
| Concurrent.Screenshot | 106.3ms | 112.5ms | 98.5ms | 147.8ms |
| RapidScreenshots (10x) | 693ms | 866ms | 692ms | 1.13s |
  • Approach 2 adds near-zero overhead vs baseline across all 40+ CDP operations
  • Approach 1 adds ~30% overhead on input and screenshot operations (X display pipeline)
  • Headful under headless constraints is not viable — 39% idle memory, +397% Input.MouseMove, 19% fewer concurrent ops

KraftCloud / Unikraft

  • ~1.5ms network RTT baseline overhead (TLS WebSocket)
  • CPU-sensitive ops (navigation, rendering) 2-4x slower due to 1 vCPU default
  • Approach 2 remains the better choice: minimal image size impact (+10 MB), and its overhead is concentrated in screenshot ops that are only active during live view

What's Included

  • benchmarks/liveview/main.go — Go benchmark client (40+ CDP operations, concurrent load test, session mode for proxied CDP)
  • benchmarks/liveview/run.sh — Docker orchestration (build, deploy, benchmark, collect stats)
  • benchmarks/liveview/run-unikraft.sh — KraftCloud orchestration
  • benchmarks/liveview/README.md — Usage guide
  • results/20260310-172933/ — Docker 4-variant results with SUMMARY.md
  • results/20260310-174639/ — Headful-constrained results
  • results/headful-constrained-results.md — Headful-constrained analysis
  • results/ukc-final/ — KraftCloud results with SUMMARY.md

Conclusion

Approach 2 (CDP screencast, PR #176) is the recommended live view implementation:

  • Near-zero CDP latency overhead vs baseline in both Docker and Unikraft
  • +10 MB image size (vs +120 MB for Approach 1)
  • Negligible idle memory impact
  • No X display server dependency

Benchmark tool and results comparing CDP operation latency across four
image variants: headless baseline, Approach 1 (Xvfb/noVNC, PR #174),
Approach 2 (CDP screencast, PR #176), and headful.

Covers 40+ CDP operations across 9 categories (screenshot, JS eval, DOM,
input, network, page, emulation, target, composite) plus concurrent load
testing. Includes results from Docker (4 vCPU / 1 GB headless, 8 vCPU /
8 GB headful), Docker with constrained headful (4 vCPU / 1 GB), and
KraftCloud (Unikraft) environments.

Key findings:
- Approach 2 (CDP screencast) adds near-zero overhead vs baseline
- Approach 1 (Xvfb/noVNC) adds ~30% overhead on input/screenshot ops
- Headful under headless constraints is not viable (39% idle memory)
- On Unikraft, Approach 2 remains the better choice for live view

Made-with: Cursor
