iOS Performance Analysis

iOS performance testing runs your app on real, physical devices using Emerge's testing suite, allowing you to detect performance regressions for any PR before it's merged.

How it works

Each time a new version of the app is uploaded to Emerge, it is installed on a real device in the cloud alongside the version it's being compared against (the build referenced by the specified baseSha). Both apps are installed on the device using the process described in App Installation. Then, for each version, an optional setup XCUITest is run (e.g., logging the user in), after which the UI test that exercises the span being measured is run repeatedly. Each run generates a single sample of that span. Once enough samples have been collected to produce a statistically significant result, the result is shared via the /analysis endpoint alongside the app size result.

Integration

Marking span start and end times

For each span to be tracked, Emerge needs to know at what point it starts and stops. To do this, it uses NotificationCenter. To mark when a span has started, use the following:

NotificationCenter.default.post(name: Notification.Name("EmergeMetricStarted"), object: nil, userInfo: [
    "metric": "someSpanName", // Where "someSpanName" is the unique string to identify some span
])

To mark when a span has finished, post the same notification with "EmergeMetricEnded" as the name instead of "EmergeMetricStarted". Startup time is a special case, because there's no way to post a Notification at the very instant a launch begins. To measure startup time, skip the "EmergeMetricStarted" notification and instead add "use_process_start": true as a key-value pair alongside "metric": "myMetric" in the "EmergeMetricEnded" notification. This causes Emerge to use the process start time as the start of the span.
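Putting the above together, ending a span and the startup-time variant might look like this (the span names "searchLoad" and "appStartup" are placeholder examples, not required names):

```swift
import Foundation

// End the span named "searchLoad" that was started earlier with
// an "EmergeMetricStarted" notification:
NotificationCenter.default.post(
    name: Notification.Name("EmergeMetricEnded"),
    object: nil,
    userInfo: [
        "metric": "searchLoad",
    ]
)

// For startup time, no start notification is posted. Instead, post only
// the end notification with "use_process_start": true, which tells Emerge
// to use the process start time as the beginning of the span:
NotificationCenter.default.post(
    name: Notification.Name("EmergeMetricEnded"),
    object: nil,
    userInfo: [
        "metric": "appStartup",
        "use_process_start": true,
    ]
)
```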

Recording the span

Setup

Some of your performance tests may need a setup XCUITest that gets the app into a state where it can be tested repeatedly and efficiently. For example, to test the app's startup time for a logged-in user, you might have a setup XCUITest that logs the user in; the main test can then just launch the app without having to log the user in and out on every iteration.

Performance test

(Note: if you just want to test startup time for now, you can let us know; you don't need to upload anything following the steps below.)

The actual performance test code is run from within Emerge's own XCUITest, which decides exactly how, and how many times, to run the test code in order to collect samples of the span. At a high level, a developer's perf test looks like this:

// In a UI test bundle
import XCTest
import EMGPerfTesting

class ExamplePerfTest: NSObject, EMGPerfTest {
    func runInitialSetup(withApp app: XCUIApplication) {
        // One-time setup, before all the perf test iterations, such as logging in. The app is already launched
        app.buttons["login"].tap()
    }

    func runIteration(withApp app: XCUIApplication) {
        // Test to trigger the spans that are being measured. The app is already launched
        app.buttons["searchPage"].tap()
    }
}

Each perf test needs to import the EMGPerfTesting framework and implement the methods of the EMGPerfTest protocol. Instructions to integrate this framework are here. Note that ExamplePerfTest doesn't subclass XCTestCase, even though it lives in a test bundle; instead it has just two methods that each take a pre-constructed XCUIApplication. This lets Emerge control the parameters of the XCUIApplication and how it's launched. And because the test is run in a very specific way (setup runs once, then the iteration runs a variable number of times), it can't be a normal XCTestCase driven by Xcode's test runner; it has to use this alternate EMGPerfTest protocol. Emerge, in its own UI test, will load that test bundle and run something roughly equivalent to:

let test = ExamplePerfTest()
let setupApp = XCUIApplication()
setupApp.launch()
test.runInitialSetup(withApp: setupApp)
while shouldContinue() {
    let iterationApp = XCUIApplication()
    iterationApp.launch()
    test.runIteration(withApp: iterationApp)
}

Although perf tests don't inherit from XCTestCase, it's still easy to verify that they work as expected with normal Xcode UI testing. Just add a test like the following:

import XCTest
import EMGPerfTesting

class PerfTestRunningTests: XCTestCase {
    func testAllPerfTests() throws {
        PerfTestRunner.runAllPerfTestsForBundle(ofClass: ExampleUITests.self)
    }
}

PerfTestRunner is a class in EMGPerfTesting that runs a simplified version of Emerge's actual test runner to ensure that everything works.

Once everything looks good, add a top-level folder called EmergePerfTests to the .xcarchive that gets uploaded (a sibling of the Products directory) and place all the .xctest bundles there. Also add a file at EmergePerfTests/info.yaml containing the list of EMGPerfTest classes to run and the set of spans to measure for each one, specified like this:
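The archive layout described above can be assembled with a small script before uploading. This is only a sketch: the archive path, the bundle location, and the class and span names are placeholders you'd replace with your own.

```shell
#!/bin/sh
set -e

# Path to the archive produced by xcodebuild (placeholder).
ARCHIVE="MyApp.xcarchive"

# Create EmergePerfTests as a sibling of the Products directory.
mkdir -p "$ARCHIVE/EmergePerfTests"

# Copy each built .xctest bundle into the new folder
# (the build/ directory is a placeholder for wherever your bundles are).
for bundle in build/*.xctest; do
    [ -e "$bundle" ] || continue
    cp -R "$bundle" "$ARCHIVE/EmergePerfTests/"
done

# Write the info.yaml listing the test classes and spans to measure.
cat > "$ARCHIVE/EmergePerfTests/info.yaml" <<'EOF'
testClasses:
  - class: ExamplePerfTest
    spans:
      - didFinishLaunching
EOF
```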

testClasses:
  - class: ExamplePerfTest1
    spans:
      - didFinishLaunching
  - class: ExamplePerfTest2
    spans:
      - didFinishLaunching
      - customSpan

There's also a special class name, startup, which doesn't require an actual class and tells Emerge to run a simple test that just launches the app. Also note that didFinishLaunching and didBecomeActive are spans that are automatically emitted for you to use if you want; they track the end of applicationDidFinishLaunching(_:) and applicationDidBecomeActive(_:) respectively.
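For example, an info.yaml that uses the special startup class together with the automatically emitted spans might look like this (a sketch based on the description above, not a verified configuration):

```yaml
testClasses:
  - class: startup            # special name: no class needed, just launches the app
    spans:
      - didFinishLaunching    # automatically emitted span
      - didBecomeActive       # automatically emitted span
```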