Summary
Add a benchmark test that runs multiple spamoor scenarios simultaneously against the same chain to simulate realistic L2 traffic patterns. Currently each benchmark test isolates a single workload type, which measures that workload's ceiling but does not represent production traffic mixes.
Motivation
The DeFi simulation benchmark (TestDeFiSimulation) is bottlenecked by the spamoor uniswap-swaps scenario: each swap requires a synchronous eth_call to GetAmountsIn/GetAmountsOut before sending, which limits injection to ~150-300 TPS regardless of configuration. A mixed workload avoids this single-scenario bottleneck while producing more realistic throughput numbers.
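The ceiling follows from simple arithmetic: a sender that must complete a synchronous round trip before each send can inject at most one tx per round trip. A back-of-envelope sketch (the wallet count and ~50 ms RTT below are illustrative assumptions, not measurements):

```go
package main

import "fmt"

// maxTPS is the injection ceiling for a scenario whose sends are gated by a
// synchronous pre-flight eth_call: each sender wallet completes at most one
// transaction per round trip.
func maxTPS(wallets int, rttSeconds float64) float64 {
	return float64(wallets) / rttSeconds
}

func main() {
	// Assumed: 10 sender wallets, ~50 ms eth_call round trip.
	fmt.Printf("%.0f TPS\n", maxTPS(10, 0.050)) // 200 TPS, inside the observed 150-300 range
}
```

With those assumptions the ceiling lands squarely in the observed range, and no amount of per-scenario config tuning moves it without adding wallets or cutting RPC latency.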
Proposed workload mix
| Spammers | Scenario | Gas/tx | Role |
| --- | --- | --- | --- |
| 2 | eoatx | ~21K | simple transfers, high-TPS filler |
| 2 | erc20tx | ~65K | storage-heavy contract calls |
| 1 | uniswap-swaps | ~200K | DeFi call chains |
| 1 | gasburnertx | configurable | compute pressure |
This mix exercises simple transfers, storage reads/writes, deep call chains, and compute simultaneously - closer to what a production L2 mempool looks like.
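The launch side of the test is mostly plumbing: start every spammer concurrently and wait for all of them. A minimal sketch of that shape, with `startScenario` as a hypothetical placeholder for however the test actually invokes spamoor:

```go
package main

import (
	"fmt"
	"sync"
)

// The proposed mix: scenario name -> number of concurrent spammer instances.
var mix = map[string]int{
	"eoatx":         2,
	"erc20tx":       2,
	"uniswap-swaps": 1,
	"gasburnertx":   1,
}

// launchMix starts every spammer concurrently against the same chain and
// returns the total number launched once all have started. startScenario is
// a stand-in; the real test would drive spamoor here.
func launchMix(startScenario func(name string)) int {
	var (
		wg    sync.WaitGroup
		mu    sync.Mutex
		total int
	)
	for name, n := range mix {
		for i := 0; i < n; i++ {
			wg.Add(1)
			go func(name string) {
				defer wg.Done()
				startScenario(name)
				mu.Lock()
				total++
				mu.Unlock()
			}(name)
		}
	}
	wg.Wait()
	return total
}

func main() {
	n := launchMix(func(name string) { fmt.Println("spammer:", name) })
	fmt.Println("total spammers:", n) // 6
}
```

Running scenarios as independent goroutines (rather than serially) is what makes the mempool contents a true mix rather than alternating single-scenario bursts.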
Acceptance criteria
- New TestMixedTraffic in test/e2e/benchmark/
- Uses benchConfig env vars for system config (block time, gas limit, etc.)
- All spammer types run concurrently against the same chain
- Measurement window excludes warmup from all scenarios
- Results include per-scenario and aggregate metrics