# mojo_benchmark

`mojo_benchmark` allows you to run performance tests for any Mojo application
that participates in the [tracing
ecosystem](https://github.com/domokit/mojo/blob/master/mojo/services/tracing/interfaces/tracing.mojom)
with no app changes required.

The script reads a list of benchmarks to run from a file, runs each benchmark in
controlled caching conditions with tracing enabled, and performs the specified
measurements on the collected trace data.

## Defining benchmarks

`mojo_benchmark` runs performance tests defined in a benchmark file. The
benchmark file is a Python program that sets a `benchmarks` list of dictionaries
in the following format:

```python
benchmarks = [
  {
    'name': '<name of the benchmark>',
    'app': '<url of the app to benchmark>',
    'shell-args': [],
    'duration': <duration in seconds>,

    # List of measurements to make.
    'measurements': [
      {
        'name': <my_measurement>,
        'spec': <spec>,
      },
      (...)
    ],
  },
]
```

The benchmark file may reference the `target_os` global, which will be one of
`'android'` or `'linux'`, indicating the system on which the benchmarks are run.
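
For example, a benchmark file could branch on `target_os` to pass
platform-specific shell arguments. The following is only a sketch: the flag
names are placeholders rather than options defined by `mojo_benchmark`, and the
app URL reuses the illustrative one from the example below.

```python
# `target_os` is provided by the benchmark runner ('android' or 'linux').
if target_os == 'android':
  shell_args = ['--placeholder-android-flag']
else:
  shell_args = ['--placeholder-linux-flag']

benchmarks = [
  {
    'name': 'My app startup (%s)' % target_os,
    'app': 'https://my_domain/my_app.mojo',
    'shell-args': shell_args,
    'duration': 10,
    'measurements': [
      {
        'name': 'time_to_initialization',
        'spec': 'time_until/my_app/initialized',
      },
    ],
  },
]
```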

### Measurement specs

The following types of measurements are available:

 - `time_until`
 - `time_between`
 - `avg_duration`
 - `percentile_duration`

`time_until` records the time until the first occurrence of the targeted event.
The underlying benchmark runner records the time origin just before issuing the
connection call to the application being benchmarked. Results of `time_until`
measurements are relative to this time. Spec format:

```
'time_until/<category>/<event>'
```

`time_between` records the time between the first occurrence of the first
targeted event and the first occurrence of the second targeted event. Spec
format:

```
'time_between/<category1>/<event1>/<category2>/<event2>'
```

`avg_duration` records the average duration of all occurrences of the targeted
event. Spec format:

```
'avg_duration/<category>/<event>'
```

`percentile_duration` records the value at the given percentile of durations of
all occurrences of the targeted event. Spec format:

```
'percentile_duration/<category>/<event>/<percentile>'
```

where `<percentile>` is a number between 0.0 and 1.0.
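
As an illustration, the benchmark below exercises each of the four spec types in
the dictionary format described above. The categories, event names, and
measurement names are made-up placeholders for an imaginary app:

```python
benchmarks = [
  {
    'name': 'My app frame timing',
    'app': 'https://my_domain/my_app.mojo',
    'shell-args': [],
    'duration': 10,
    'measurements': [
      {'name': 'time_to_initialization',
       'spec': 'time_until/my_app/initialized'},
      {'name': 'initialization_to_first_frame',
       'spec': 'time_between/my_app/initialized/my_app/first_frame'},
      {'name': 'average_frame_time',
       'spec': 'avg_duration/my_app/draw_frame'},
      {'name': 'frame_time_90th_percentile',
       'spec': 'percentile_duration/my_app/draw_frame/0.9'},
    ],
  },
]
```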

## Caching

The script runs each benchmark twice. The first run (**cold start**) clears
caches of the following apps on startup:

 - `network_service.mojo`
 - `url_response_disk_cache.mojo`

The second run (**warm start**) runs immediately afterwards, without clearing
any caches.

## Example

For an app that records a trace event named "initialized" in category "my_app"
once its initialization is complete, we can benchmark the initialization time of
the app (from the moment someone tries to connect to it to the app completing
its initialization) using the following benchmark file:

```python
benchmarks = [
  {
    'name': 'My app initialization',
    'app': 'https://my_domain/my_app.mojo',
    'duration': 10,
    'measurements': [
      {
        # The measurement name is an arbitrary label; the spec selects the event.
        'name': 'time_to_initialization',
        'spec': 'time_until/my_app/initialized',
      },
    ],
  },
]
```

## Dashboard

`mojo_benchmark` supports uploading the results to an instance of a Catapult
performance dashboard. In order to upload the results of a run to the
performance dashboard, pass the `--upload` flag along with the required metadata
describing the data being uploaded:

```sh
mojo_benchmark \
  --upload \
  --master-name my-master \
  --bot-name my-bot \
  --test-name my-test-suite \
  --builder-name my-builder \
  --build-number my-build \
  --server-url http://my-server.example.com
```

If no `--server-url` is specified, the script assumes that a local instance of
the dashboard is running at `http://localhost:8080`. The script assumes that the
working directory from which it is called is a git repository and queries it to
determine the sequential number identifying the revision (as the number of
commits in the current branch in the repository).
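
If needed, the same number can be computed manually; a minimal sketch, assuming
the script's revision numbering is equivalent to counting the commits reachable
from the current branch head:

```python
import subprocess

# Sequential revision number as described above: the number of commits in the
# current branch of the git repository in the working directory.
revision = int(subprocess.check_output(['git', 'rev-list', '--count', 'HEAD']))
print(revision)
```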

For more information refer to:

 - [Catapult project](https://github.com/catapult-project/catapult)