Benchmark use cases with Jetpack Macrobenchmark

Macrobenchmark enables you to write startup and runtime performance tests directly against your app on devices running Android 10 (API 29) or higher.

It is recommended that you use Macrobenchmark with the latest version of Android Studio (2020.3.1 or higher), as that version of the IDE has new features that integrate with Macrobenchmark. Users of earlier versions of Android Studio can use the extra instructions later in this topic to work with trace files.

Benchmark testing is provided through the MacrobenchmarkRule JUnit4 rule API in the Macrobenchmark library:

```kotlin
@get:Rule
val benchmarkRule = MacrobenchmarkRule()

@Test
fun startup() = benchmarkRule.measureRepeated(
    packageName = "mypackage.myapp",
    metrics = listOf(StartupTimingMetric()),
    iterations = 5,
    startupMode = StartupMode.COLD
) { // this = MacrobenchmarkScope
    pressHome()
    // Launch the target app's default activity and wait for it to render
    startActivityAndWait()
}
```

Metrics are displayed directly in Android Studio, and are also written to a JSON file for use in CI.

Sample Studio Results

Module setup

Macro benchmarks require a module separate from your app code. This module is responsible for running the tests that target your app, and is customized to work with Gradle and Android Studio.

Add a new module

Add a new module to your project. This module holds your Macrobenchmark tests.

  1. Right-click your project or module in the Project panel in Android Studio and click New > Module.
  2. Select "Android Library" in the Templates pane.
  3. Type "macrobenchmark" for the module name.
  4. Set Minimum SDK to "API 29: Android 10.0 (Q)".
  5. Click Finish.

Configuring new library module

Modify the Gradle file

Customize the Macrobenchmark module's build.gradle as follows:

  1. Change the plugin from `com.android.library` to `com.android.test`.
  2. Change all dependencies named testImplementation or androidTestImplementation to implementation.
  3. Add a Macrobenchmark dependency:

```groovy
implementation 'androidx.benchmark:benchmark-macro-junit4:1.1.0-SNAPSHOT'
```

  4. In the android {} block, add:

```groovy
targetProjectPath = ":app" // Note that your module name may be different
properties["android.experimental.self-instrumenting"] = true

buildTypes {
    // Declare a build type (release) to match the target app's build type
    release {
        debuggable = true
    }
}
```

  5. After the android {} block, but before the dependencies {} block, add:

```groovy
androidComponents {
    beforeVariants(selector().all()) {
        // Enable only the release buildType, since we only want to measure
        // release build performance
        enable = buildType == 'release'
    }
}
```
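Taken together, the module's build.gradle ends up looking roughly like the following sketch. The SDK levels, Kotlin plugin id, and test-library versions here are assumptions; keep whatever the module wizard generated where it differs:

```groovy
plugins {
    id 'com.android.test'
    id 'org.jetbrains.kotlin.android'
}

android {
    compileSdk 31

    defaultConfig {
        minSdk 29
        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
    }

    buildTypes {
        // Match the target app's build type
        release {
            debuggable = true
        }
    }

    targetProjectPath = ":app"
    properties["android.experimental.self-instrumenting"] = true
}

androidComponents {
    beforeVariants(selector().all()) {
        // Only the release variant is measured
        enable = buildType == 'release'
    }
}

dependencies {
    implementation 'androidx.benchmark:benchmark-macro-junit4:1.1.0-SNAPSHOT'
    implementation 'androidx.test.ext:junit:1.1.3'
    implementation 'androidx.test.uiautomator:uiautomator:2.2.0'
}
```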

Simplify directory structure

A Macrobenchmark module has only one source directory, src/main, which is used for all tests. Delete the other source directories, including src/test and src/androidTest, since they aren't used.

See the sample Macrobenchmark module for reference.

Write macro benchmarks

Define a new test class in that module, filling in your app's package name:

```kotlin
class SampleStartupBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun startup() = benchmarkRule.measureRepeated(
        packageName = "mypackage.myapp",
        metrics = listOf(StartupTimingMetric()),
        iterations = 5,
        startupMode = StartupMode.COLD
    ) { // this = MacrobenchmarkScope
        pressHome()
        // Launch the target app's default activity and wait for it to render
        startActivityAndWait()
    }
}
```

Set up the app

To benchmark an app (called the target of the macro benchmark), that app must be profileable, which enables reading detailed trace information. You enable this in the <application> tag of the app's AndroidManifest.xml:

```xml
<application ... >
    <!-- Profileable to enable Macrobenchmark profiling -->
    <!-- Suppress AndroidElementNotAllowed -->
    <profileable android:shell="true"/>
    ...
</application>
```

Configure the benchmarked app as closely as possible to what end users will experience. Set it up as non-debuggable, and preferably with minification on, which improves performance. You typically do this by installing the release variant of the target APK.

If you don't have signing keys for local release builds of your application, you can sign locally with debug keys:

```groovy
buildTypes {
    release {
        // You'll be unable to release with this config, but it can
        // be useful for local performance testing
        signingConfig signingConfigs.debug
    }
}
```

Perform a Gradle sync, open the Build Variants panel on the left, and select the release variant of both the app and the Macrobenchmark module. This ensures running the benchmark will build and test the release variant of your app:

Select release variant

Benchmarking inner activities requires more effort. To benchmark an inner activity, pass a setupBlock to MacrobenchmarkRule.measureRepeated() that navigates the app to the UI you want to measure, and use the measureBlock to invoke the actual activity launch or scrolling action to be measured.
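For example, a setupBlock can launch the app's main activity, and the measureBlock can then launch the inner activity. This is a sketch: the intent action `mypackage.myapp.DETAIL_ACTIVITY` is a hypothetical example, so substitute an action your app's manifest actually declares:

```kotlin
import android.content.Intent
import androidx.benchmark.macro.StartupTimingMetric
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import org.junit.Rule
import org.junit.Test

class InnerActivityBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun innerActivityStartup() = benchmarkRule.measureRepeated(
        packageName = "mypackage.myapp",
        metrics = listOf(StartupTimingMetric()),
        iterations = 5,
        setupBlock = {
            // Navigate to the state just before the measured UI:
            // launch the default activity and wait for it to settle.
            startActivityAndWait()
        }
    ) {
        // Measure only the inner activity launch.
        // The action string is a hypothetical example.
        val intent = Intent("mypackage.myapp.DETAIL_ACTIVITY")
        startActivityAndWait(intent)
    }
}
```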

Customize your macro benchmark


CompilationMode

Macro benchmarks can specify a CompilationMode, which defines how the app should be compiled.

By default, benchmarks are run with SpeedProfile, which runs a few iterations of your benchmark before measurement, using that profiling data for profile-driven compilation. This can simulate performance of UI code that has launched and run before, or which has been pre-compiled by the store installing it.

To simulate worst-case, just-after-install performance without pre-compilation, pass CompilationMode.None().

This functionality is built on ART compilation commands. Each benchmark will clear profile data before it starts, to ensure non-interference between benchmarks.
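A common pattern is to run the same benchmark under several compilation modes with a parameterized JUnit4 runner. This is a sketch, assuming the measureRepeated() overload with a compilationMode parameter described above:

```kotlin
import androidx.benchmark.macro.CompilationMode
import androidx.benchmark.macro.StartupMode
import androidx.benchmark.macro.StartupTimingMetric
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith
import org.junit.runners.Parameterized

@RunWith(Parameterized::class)
class CompilationModeStartupBenchmark(
    private val compilationMode: CompilationMode
) {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun startup() = benchmarkRule.measureRepeated(
        packageName = "mypackage.myapp",
        metrics = listOf(StartupTimingMetric()),
        compilationMode = compilationMode,
        iterations = 5,
        startupMode = StartupMode.COLD
    ) {
        pressHome()
        startActivityAndWait()
    }

    companion object {
        // Measure both worst-case (None) and profile-compiled (SpeedProfile)
        @Parameterized.Parameters(name = "mode={0}")
        @JvmStatic
        fun parameters() = listOf(
            arrayOf(CompilationMode.None()),
            arrayOf(CompilationMode.SpeedProfile())
        )
    }
}
```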


Startup modes

To perform an activity start, you can pass a pre-defined startup mode (one of COLD, WARM, or HOT) to the measureRepeated() function. This parameter changes how the activity launches and the process state at the start of the test.

To learn more about the types of startup, see the Android Vitals startup documentation.

Scrolling and animation

Unlike most Android UI tests, the Macrobenchmark tests run in a separate process from the app itself. This is necessary to enable things like killing the app process and compiling it using shell commands.

You can drive your app using the UI Automator library or any other mechanism that can control the target application from the test process. Approaches such as Espresso or ActivityScenario won't work, because they expect to run in a shared process with the app.

The following example finds a RecyclerView using its resource id, and scrolls down several times:

```kotlin
@Test
fun measureScroll() {
    benchmarkRule.measureRepeated(
        packageName = "mypackage.myapp",
        metrics = listOf(FrameTimingMetric()),
        compilationMode = compilationMode,
        iterations = 5,
        setupBlock = {
            // Before starting to measure, navigate to the UI to be measured
            val intent = Intent()
            intent.action = ACTION
            startActivityAndWait(intent)
        }
    ) {
        val recycler = device.findObject(By.res("mypackage.myapp", "recycler_id"))
        // Set gesture margin to avoid triggering gesture nav
        // with input events from automation.
        recycler.setGestureMargin(device.displayWidth / 5)

        // Scroll down several times
        for (i in 1..10) {
            recycler.scroll(Direction.DOWN, 2f)
        }
    }
}
```

As the test specifies a FrameTimingMetric, the timing of frames is recorded and reported as a high-level summary of frame timing distribution: 50th, 90th, 95th, and 99th percentile.

Your benchmark doesn't have to scroll the UI. It could instead, for example, run an animation. It also doesn't need to use UI Automator specifically; as long as frames are being produced by the view system, which includes frames produced by Jetpack Compose, performance metrics are collected. Note that in-process mechanisms such as Espresso won't work, because the app needs to be driven from the test app's process instead.

Run the macro benchmark

Run the test from within Android Studio to measure the performance of your app on your device. Note that you must run the test on a physical device, and not an emulator, as emulators do not produce performance numbers representative of the end-user experience.

See the Benchmarking in CI section for information on how to run and monitor benchmarks in continuous integration.

You can also run all benchmarks from the command line by executing the connectedCheck task:

$ ./gradlew :macrobenchmark:connectedCheck
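To run a single benchmark class rather than the whole suite, you can filter with a standard instrumentation-runner argument; the class name below is the hypothetical one from earlier in this topic:

```shell
./gradlew :macrobenchmark:connectedCheck \
    -Pandroid.testInstrumentationRunnerArguments.class=mypackage.SampleStartupBenchmark
```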

Configuration errors

If the app is misconfigured (debuggable or non-profileable), Macrobenchmark throws an error rather than reporting an incorrect or incomplete measurement. You can suppress these errors with the androidx.benchmark.suppressErrors instrumentation argument.

Errors are also thrown when attempting to measure on an emulator or on a low-battery device, as this may compromise core availability and clock speed.
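If you do need to run on an emulator or a debuggable build anyway (for example, in a CI smoke test), the suppression argument can be wired into the module's Gradle config. This is a sketch: the error codes shown are assumptions, so use the codes printed in the actual error message, and remember that suppressed runs are not representative measurements:

```groovy
android {
    defaultConfig {
        // Suppress configuration errors for local/CI smoke runs only
        testInstrumentationRunnerArgument 'androidx.benchmark.suppressErrors', 'EMULATOR,DEBUGGABLE'
    }
}
```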

Inspect a trace

Each measured iteration captures a separate system trace. You can open these result traces by clicking on one of the links in the Test Results pane, as shown in the image in the Jetpack Macrobenchmark section of this topic. When the trace is loaded, Android Studio prompts you to select the process to analyze. The selection is pre-populated with the target app process:

Studio trace process selection

Once the trace file is loaded, Studio shows the results in the CPU profiler tool:

Studio Trace

Access trace files manually

If you are using an older version of Android Studio (prior to 2020.3.1), or if you want to use the Perfetto tool to analyze a trace file, there are extra steps involved.

First, pull the trace file from the device:

```shell
# The following command pulls all files ending in .trace from the directory
# hierarchy starting at the root /storage/emulated/0/Android.
adb shell find /storage/emulated/0/Android/ -name "*.trace" \
    | tr -d '\r' | xargs -n1 adb pull
```

Note that your output file path may be different if you customize it with the additionalTestOutputDir argument. You can look for trace path logs in logcat to see where they are written. For example:

I PerfettoCapture: Writing to /storage/emulated/0/Android/data/androidx.benchmark.integration.macrobenchmark.test/cache/TrivialStartupBenchmark_startup[mode=COLD]_iter002.trace.

If you instead invoke the tests from the Gradle command line (such as ./gradlew macrobenchmark:connectedCheck), you can have the test result files copied to a test output directory on your host system. To do this, add this line to your project's gradle.properties file:

```
android.enableAdditionalTestOutput=true
```
The result files from test runs then show up in your project's build directory.
Once you have the trace file on your host system, you can open it in Android Studio with File > Open in the menu. This shows the profiler tool view shown in the previous section.

You can instead choose to use the Perfetto tool. Perfetto lets you inspect all processes running across the device during the trace, while Android Studio's CPU profiler limits inspection to a single process.

Improve trace data with custom events

It can be useful to instrument your application with custom trace events, which are seen with the rest of the trace report and can help point out problems specific to your app. To learn more about creating custom trace events, see the Define custom events guide.

Benchmarking in CI

It's common to run tests in CI without Gradle, or locally if you're using a different build system. This section explains how to configure Macrobenchmark for CI usage at runtime.

Result files: JSON and traces

Macrobenchmark outputs a JSON file and multiple trace files: one per measured iteration of each MacrobenchmarkRule.measureRepeated loop.

You can define where these files are written by passing in the following instrumentation argument at runtime:

-e additionalTestOutputDir "device_path_you_can_write_to"

Note that for simplicity you can specify a path on /sdcard/, but you must then opt out of scoped storage by setting requestLegacyExternalStorage to true in your Macrobenchmark module:

```xml
<manifest ... >
  <application android:requestLegacyExternalStorage="true" ... >
    ...
  </application>
</manifest>
```

Or pass an instrumentation arg to bypass scoped storage for the test:

-e no-isolated-storage 1
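For a Gradle-less CI run, you can invoke the instrumentation directly with am instrument after installing both APKs. This is a sketch: the test APK's package name and the output path below are assumptions, so check your module's build output and manifest for the actual names:

```shell
# Run all benchmarks in the test APK, writing results to a custom directory.
# The instrumentation component name is a hypothetical example.
adb shell am instrument -w \
    -e additionalTestOutputDir /sdcard/Download/benchmark_output \
    -e no-isolated-storage 1 \
    mypackage.myapp.macrobenchmark/androidx.test.runner.AndroidJUnitRunner
```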

JSON sample

The following shows sample JSON output for a single startup benchmark:

```json
{
    "context": {
        "build": {
            "device": "walleye",
            "fingerprint": "google/walleye/walleye:10/QQ3A.200805.001/6578210:userdebug/dev-keys",
            "model": "Pixel 2",
            "version": {
                "sdk": 29
            }
        },
        "cpuCoreCount": 8,
        "cpuLocked": false,
        "cpuMaxFreqHz": 2457600000,
        "memTotalBytes": 3834605568,
        "sustainedPerformanceModeEnabled": false
    },
    "benchmarks": [
        {
            "name": "startup",
            "params": {},
            "className": "androidx.benchmark.integration.macrobenchmark.SampleStartupBenchmark",
            "totalRunTimeNs": 77969052767,
            "metrics": {
                "startupMs": {
                    "minimum": 228,
                    "maximum": 283,
                    "median": 242,
                    "runs": [ ... ]
                }
            },
            "warmupIterations": 3,
            "repeatIterations": 5,
            "thermalThrottleSleepSeconds": 0
        }
    ]
}
```
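In CI, a JSON file with this shape can be checked against a regression budget. The following is a minimal Kotlin sketch using a regex for brevity; a real pipeline would use a proper JSON library, and the 300 ms budget is an arbitrary example:

```kotlin
// Extract the median startup time (in ms) from Macrobenchmark JSON output.
// Regex-based for brevity; assumes the "startupMs" metric block shown above.
fun medianStartupMs(json: String): Int? =
    Regex(""""median":\s*(\d+)""").find(json)?.groupValues?.get(1)?.toInt()

fun main() {
    val json = """
        "metrics": { "startupMs": { "minimum": 228, "maximum": 283, "median": 242 } }
    """
    val median = medianStartupMs(json)
    println(median) // prints 242
    // Fail the CI step if startup exceeds the (example) 300 ms budget
    check(median != null && median < 300) { "Startup regression: $median ms" }
}
```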

Additional resources

A sample project is available as part of the Android/performance-samples repository on GitHub.

For guidance on how to detect performance regressions, see Fighting Regressions with Benchmarks in CI.


To report issues or submit feature requests for Jetpack Macrobenchmark, see the public issue tracker.