iOS Performance Analysis

iOS performance testing runs your app on real, physical devices using Emerge's testing suite, allowing you to detect performance regressions for any PR before it's merged.

How it works

Each uploaded build is tested against its base build to determine the change in time for a specific "span". A span is the duration between a start and stop point that you specify in your app's code. Default spans for app launch are provided automatically. Performance testing tells you when there is a change in how long it takes the app to mark a span as stopped after it is started.

Each time a new version of the app is uploaded, it gets installed on an iPhone SE 2020, along with the version it's being compared against (the app referenced from the specified baseSha). Both apps are installed on the device using the process described in App Installation. Then, for each version, an optional setup XCUITest is run (e.g., logging the user in), after which the UI test that exercises the span is run repeatedly. Each run generates a single sample of that span. Once Emerge has a statistically significant result from all the samples, the result is made available through your source control integration, such as a GitHub comment or status check.

Integration

Marking span start and end times

For each span to be tracked, Emerge needs to know at what point it starts and stops. To do this, it uses NotificationCenter. To mark when a span has started, use the following:

NotificationCenter.default.post(name: Notification.Name("EmergeMetricStarted"), object: nil, userInfo: [
    "metric": "someSpanName", // Where "someSpanName" is the unique string to identify some span
])

To mark when a span has finished, post the same notification with "EmergeMetricEnded" as the name instead of "EmergeMetricStarted". Measuring startup time is a special case, because there's no way to post a Notification at the very start of a launch. To measure startup time, skip the "EmergeMetricStarted" notification and instead add "use_process_start": true as a key-value pair alongside the "metric" entry. This causes Emerge to use the process start time as the start of the span.

Each span needs a unique value for "metric" (the name of the span), with at most one start/stop for each unique "metric".
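The two notifications and the startup special case can be wrapped in small helpers. This is a sketch; the helper function names and the "appLaunch" span name are hypothetical, while the notification names and userInfo keys come from the text above:

```swift
import Foundation

// Hypothetical helper: mark the start of a span with the given name.
func emergeSpanStart(_ name: String) {
    NotificationCenter.default.post(
        name: Notification.Name("EmergeMetricStarted"),
        object: nil,
        userInfo: ["metric": name]
    )
}

// Hypothetical helper: mark the end of a span with the given name.
func emergeSpanEnd(_ name: String) {
    NotificationCenter.default.post(
        name: Notification.Name("EmergeMetricEnded"),
        object: nil,
        userInfo: ["metric": name]
    )
}

// Startup spans: no start notification is posted; "use_process_start"
// tells Emerge to use the process start time as the span's start.
func emergeStartupSpanEnd(_ name: String) {
    NotificationCenter.default.post(
        name: Notification.Name("EmergeMetricEnded"),
        object: nil,
        userInfo: ["metric": name, "use_process_start": true]
    )
}
```

Note that "use_process_start": true is only meaningful for spans that begin at process launch; ordinary spans should post both the start and end notifications.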

Recording the span

Test lifecycle

Lifecycle of a test and which user-provided methods are called

Note: Different steps of the test lifecycle use either the base or head app. If you are setting things up and changing code in the app itself (not the test package), make sure you update both the base and head apps.

Setup

Some of your performance tests may need a setup XCUITest that gets the app into a state where it can be efficiently tested repeatedly. For example, to test the app's startup time when a user is logged in, you might have a setup XCUITest that logs the user in, so that the main test can just launch the app without having to log the user in and out.

Performance test

🚧

Startup Tests

The steps below are for adding a custom test, which is a more complicated setup and should only be attempted after your startup tests are working.

The actual performance test code is run from a wrapping XCUITest, which decides how many times and how exactly to run the test code to get more samples of the span. At a high level, your performance tests will look like this:

// In a UI test bundle
import XCTest
import EMGPerfTesting

class ExamplePerfTest: NSObject, EMGPerfTest {
    func runInitialSetup(withApp app: XCUIElement) {
        // One-time setup, before all the perf test iterations, such as logging in. The app is already launched
        app.buttons["login"].tap()
    }

    func runIteration(withApp app: XCUIElement) {
        // Test to trigger the spans that are being measured. The app is already launched
        app.buttons["searchPage"].tap()
    }
}

Each performance test needs to import the EMGPerfTesting framework and implement the methods in the EMGPerfTest protocol. Instructions to integrate this framework are here. Note that ExamplePerfTest doesn't subclass XCTestCase, even though it's in a test bundle; instead it has just two methods that each take a pre-constructed XCUIElement. This is because the app is launched by the test-running infrastructure and provided to you as an XCUIElement. And since the test is run in a very specific way (setup runs once, then the iteration runs a variable number of times), it can't be a normal XCTestCase run with Xcode's test runner; it has to use this alternate EMGPerfTest protocol. Your code will be loaded from the test bundle and run as something roughly equivalent to:

let test = ExamplePerfTest()
let setupApp = XCUIApplication()
setupApp.launch()
test.runInitialSetup(withApp: setupApp)
while shouldContinue() {
    let iterationApp = XCUIApplication()
    iterationApp.launch()
    test.runIteration(withApp: iterationApp)
}

Your tests should support iOS 15.5 and later.

Although perf tests don't inherit from XCTestCase, it's still easy to verify that they work as expected with normal Xcode UI testing. Just add a test like the following:

import XCTest
import EMGPerfTesting

class PerfTestRunningTests: XCTestCase {
    func testAllPerfTests() throws {
        PerfTestRunner.runAllPerfTestsForBundle(ofClass: ExampleUITests.self)
    }
}

PerfTestRunner is a class in EMGPerfTesting that runs a simplified version of Emerge's actual test runner to ensure that everything works.

❗️

This simplified test you can run locally doesn't include steps such as re-signing the app. Verify that your app works with a default startup test before adding a custom UI test.

App exit

The app is automatically quit after your test runs. When the provided runIteration function returns, the app is kept running for up to 20 seconds, or until all expected spans (how to specify expected spans is explained below) have been received. If all expected spans have not been received after 20 seconds, the test fails with an error.

Uploading test bundles

Once everything looks good, add a top-level folder called EmergePerfTests to the .xcarchive that gets uploaded (a sibling of the Products directory) and put all the .xctest bundles there. Also add a file at EmergePerfTests/info.yaml that lists the EMGPerfTest classes to run and the set of spans to measure for each one, specified like this:

testClasses:
  - class: ExamplePerfTest1
    spans:
      - didFinishLaunching
  - class: ExamplePerfTest2
    spans:
      - didFinishLaunching
      - customSpan

When running a test, the yaml file and test bundle from the head build are used to decide which tests to run. This ensures that when a new test is added in a PR, the test runs before the PR is merged.

There's also a special class name, startup, which doesn't require an actual .xctest bundle and gives you the default test that just launches the app. Also note that didFinishLaunching and didBecomeActive are spans that are automatically emitted for you to use if you want, where they track the end of applicationDidFinishLaunching(_:) and applicationDidBecomeActive(_:) respectively. If you don't include any EmergePerfTests folder with your upload, this startup test with a didFinishLaunching span is used by default.
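For example, an info.yaml that relies only on the built-in startup test could reference the special startup class directly, with no .xctest bundle required. This is a sketch combining it with the two automatically emitted spans described above:

```yaml
testClasses:
  - class: startup
    spans:
      - didFinishLaunching
      - didBecomeActive
```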

Code Signing

For details on code signing during performance tests, see App Installation.