mojo_benchmark

mojo_benchmark allows you to run performance tests for any Mojo application that participates in the tracing ecosystem, with no app changes required.
The script reads a list of benchmarks to run from a file, runs each benchmark in controlled caching conditions with tracing enabled, and performs the specified measurements on the collected trace data.
mojo_benchmark runs performance tests defined in a benchmark file. The benchmark file is a Python file that defines a benchmarks list of dictionaries in the following format:
```python
benchmarks = [
  {
    'name': '<name of the benchmark>',
    'app': '<url of the app to benchmark>',
    'shell-args': [],
    'duration': <duration in seconds>,

    # List of measurements to make.
    'measurements': [
      '<measurement type>/<event category>/<event name>',
    ],
  },
]
```
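Because the benchmark file is plain Python, a runner can load it simply by executing it and reading the resulting benchmarks variable. The following is a hypothetical sketch of that loading step; the load_benchmarks helper and its sanity checks are assumptions, not part of mojo_benchmark itself.

```python
# Hypothetical sketch: load the 'benchmarks' list from a benchmark file.
# Not the actual mojo_benchmark code.
def load_benchmarks(path):
    """Execute the benchmark file and return its 'benchmarks' list."""
    scope = {}
    with open(path) as f:
        exec(f.read(), scope)  # The file is plain Python defining 'benchmarks'.
    benchmarks = scope.get('benchmarks', [])
    # Basic sanity check for the keys every benchmark is expected to carry.
    for benchmark in benchmarks:
        assert 'name' in benchmark and 'app' in benchmark and 'measurements' in benchmark
    return benchmarks
```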
The following types of measurements are available:
- time_until - measures the time until the first occurrence of the specified event
- avg_duration - measures the average duration of all instances of the specified event

The script runs each benchmark twice. The first run (cold start) clears the caches of the relevant apps on startup.
The second run (warm start) runs immediately afterwards, without clearing any caches.
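The two-run scheme can be sketched in a few lines. The helpers below are hypothetical placeholders rather than real mojo_benchmark or shell APIs; they only illustrate the cold-start/warm-start ordering.

```python
# Rough sketch of the cold-start / warm-start scheme described above.
# clear_app_caches() and run_traced_benchmark() are hypothetical placeholders.
def clear_app_caches():
    """Placeholder: clear the caches of the relevant apps (assumption)."""

def run_traced_benchmark(benchmark):
    """Placeholder: run one benchmark with tracing enabled and return results."""
    return {}

def run_cold_and_warm(benchmark):
    clear_app_caches()                      # Cold start: caches cleared first.
    cold = run_traced_benchmark(benchmark)
    warm = run_traced_benchmark(benchmark)  # Warm start: immediately afterwards,
                                            # without clearing any caches.
    return {'cold': cold, 'warm': warm}
```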
The underlying benchmark runner records the time origin just before issuing the connection call to the application being benchmarked. Results of time_until measurements are relative to this time.
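To make the measurement types concrete, here is a rough sketch of how time_until and avg_duration could be computed from collected trace events. It assumes Chrome-trace-style event dicts with 'cat', 'name', 'ts' (timestamp), and 'dur' (duration) fields; this is an illustration, not the actual mojo_benchmark implementation.

```python
# Illustrative sketch only: computing the two measurement types from trace events.
def time_until(events, time_origin, category, name):
    """Time from the time origin to the first matching event, or None."""
    timestamps = [e['ts'] for e in events
                  if e['cat'] == category and e['name'] == name]
    return min(timestamps) - time_origin if timestamps else None

def avg_duration(events, category, name):
    """Average duration over all matching events, or None if there are none."""
    durations = [e['dur'] for e in events
                 if e['cat'] == category and e['name'] == name]
    return sum(durations) / len(durations) if durations else None

# Example using the measurement from the benchmark file below.
events = [{'cat': 'my_app', 'name': 'initialized', 'ts': 1500, 'dur': 0}]
print(time_until(events, time_origin=1000, category='my_app', name='initialized'))  # 500
```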
For an app that records a trace event named "initialized" in category "my_app" once its initialization is complete, we can benchmark the initialization time of the app (from the moment a client tries to connect to it until the app completes its initialization) using the following benchmark file:
```python
benchmarks = [
  {
    'name': 'My app initialization',
    'app': 'https://my_domain/my_app.mojo',
    'duration': 10,
    'measurements': [
      'time_until/my_app/initialized',
    ],
  },
]
```