Weave Code
Helps Laravel developers discover, compare, and choose open-source packages. See popularity, security, maintainers, and scores at a glance to make better decisions.
Benchmark Laravel Package

dragon-code/benchmark

Benchmark is a small PHP dev tool for quickly comparing the execution speed of different code paths. Use the bench() helper or the Benchmark class, pass one or more callbacks (optionally with named keys), and print the results to the console for easy side-by-side timing.

View on GitHub
Deep Wiki
Context7

Getting Started

  1. Install the package with composer require dragon-code/benchmark --dev — it's a dev-only dependency intended for local performance validation.
  2. Start with the quick benchmark using the bench() helper function:
    use function DragonCode\Benchmark\bench;
    bench()->compare(
        fn() => strlen('foo'),
        fn() => mb_strlen('foo')
    )->toConsole();
    
  3. Run in CLI only — output is console-based; ensure your benchmark runs from the command line (not web SAPI).
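
Assembled into a standalone script, the steps above might look like this. This is a sketch: the bench() helper and toConsole() call are as described on this page, so double-check them against the package README before relying on them.

```php
<?php

// benchmark.php — run from the CLI with `php benchmark.php`,
// after `composer require dragon-code/benchmark --dev`.
require __DIR__ . '/vendor/autoload.php';

use function DragonCode\Benchmark\bench;

// Compare byte length vs. multibyte length of the same string
// and print a side-by-side timing table to the console.
bench()->compare(
    fn () => strlen('Hello, Weaver!'),
    fn () => mb_strlen('Hello, Weaver!')
)->toConsole();
```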

Implementation Patterns

  • Option comparison: Compare two or more implementations side-by-side (e.g., array_map vs foreach, Eloquent vs raw DB queries).
  • Named keys for readability: Use associative arrays in compare() to give meaningful labels ('fast' => fn()..., 'slow' => fn()...), improving output clarity.
  • Iterative tuning: Use ->iterations(500) to gather more stable stats when measuring small differences; avoid over-measuring by keeping iterations realistic.
  • Before/After hooks for setup/teardown:
    Use ->beforeEach(fn() => $this->seedTestCache()) to ensure fair conditions across iterations.
  • Deviation analysis: Run ->deviations(5) to assess stability — e.g., measure database call times across multiple runs to detect flakiness.
  • Programmatic use: Store results as a DTO (->benchmark()->compare(...)->toArray()) for CI validation or comparison logs.
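
The patterns above compose. Here is one sketch combining named keys, a fixed iteration count, a beforeEach hook, and array output; every method name is taken from this page rather than verified against the package, so treat it as illustrative:

```php
<?php

require __DIR__ . '/vendor/autoload.php';

use function DragonCode\Benchmark\bench;

// Named keys label the output columns; iterations(500) smooths noise;
// beforeEach() reseeds the RNG so every iteration sees identical conditions.
$results = bench()
    ->iterations(500)
    ->beforeEach(fn () => mt_srand(42))
    ->compare([
        'array_map' => fn () => array_map(fn ($n) => $n * 2, range(1, 100)),
        'foreach'   => function () {
            $out = [];
            foreach (range(1, 100) as $n) {
                $out[] = $n * 2;
            }
            return $out;
        },
    ])
    ->toArray(); // DTO-style array, e.g. for asserting time budgets in CI

print_r($results);
```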

Gotchas and Tips

  • Memory ≠ usage per call: Output shows total memory per loop, not per iteration — don’t assume total / iterations equals per-call overhead unless iterations = 1.
  • Filtered averages: For ≥10 iterations, the avg excludes top/bottom 10% of results — this improves stability but may hide outliers. Verify deviation patterns if suspicious.
  • Callback signatures matter:
    • Return values from beforeEach/afterEach callbacks are passed into the main callback, which is useful for injecting per-iteration state (e.g., random seeds).
    • The iteration param is passed only when explicitly typed: fn(int $i) => ....
  • Console formatting quirks: round() affects only display — use round(0) for integer-style timing output, but keep raw values for comparisons.
  • Negative iterations: ->iterations(-10) is treated as iterations(10). Safe but confusing; prefer positive values for clarity.
  • Not thread-safe or parallel: Benchmarks run sequentially; avoid benchmarking concurrency primitives (e.g., the parallel extension) unless you are isolating single-threaded behavior.
  • Cold/warm start: First few iterations may include JIT compilation effects. Consider a dummy warm-up run (->compare(fn() => null, fn() => null) first) if profiling tiny snippets (<1ms).
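
Two of the tips above in code form: a warm-up comparison before timing tiny snippets, and the typed iteration parameter. Again, this is a sketch against the API as described on this page:

```php
<?php

require __DIR__ . '/vendor/autoload.php';

use function DragonCode\Benchmark\bench;

// Warm-up pass: absorbs one-time costs (autoloading, opcode/JIT warm-up)
// before timing sub-millisecond snippets.
bench()->compare(fn () => null, fn () => null)->toConsole();

// The iteration number is injected only when the parameter is explicitly
// typed as int; the untyped closure receives no argument.
bench()->compare([
    'typed'   => fn (int $i) => str_repeat('x', $i % 10),
    'untyped' => fn () => str_repeat('x', 5),
])->toConsole();
```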