PS Regression Analysis Service
We recently launched our new product: PS Regression Analysis (RA).
The RA service detects performance regressions in mobile apps. It cuts through the noise and gives you a clear view of potential performance regressions. Other RA tools often fall short, leaving you with confusing results caused by external factors such as inconsistent network conditions or unpredictable system behavior.
We approached the problem differently. We take the guesswork out of performance analysis by:
- Measuring under real conditions, as users would experience them
- Performing multiple measurements to achieve statistical significance
- Running statistical tests to detect regressions
- Providing actionable insights to locate the cause
We run measurements on real hardware devices in our own device lab to get as close to the real user experience as possible. This commitment to real-world conditions ensures that any detected regression directly reflects the experience users will encounter.
We eliminate external noise and deviations by running a series of measurements and applying a group of advanced statistical tests to detect differences between builds.
Once a regression is detected, we perform additional actions for a deeper analysis, providing actionable insights that pinpoint the cause of the delay.
All the collected data is then combined into a comprehensive report for engineering and product teams.
Telegram case
Telegram is a messaging app with a focus on speed and security. It's well known for its performance, simplicity, and rich feature set. It has over 700 million monthly active users and is one of the 10 most downloaded apps in the world.
Additionally, Telegram has open-source clients, which makes it a great example to demonstrate RA service capabilities.
Let's examine the process using the Telegram Android app as an example. The full process can be described by the following diagram:
User flow ⇒ Markup ⇒ Build ⇒ Measure ⇒ Analyze ⇒ Report
User Flow Markup
Initially, we must pinpoint the critical user flows for analysis, as this is essential for effective regression detection. In our example, we'll focus on one of the most crucial user flows: the app startup. However, we can select any user flow depending on the app. For instance, we could opt for opening the profile screen or any other popular, high-value flow.


Here you can find detailed documentation for our UserFlow library. The last step is to create a release build of the app with the UserFlow library integrated.
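Conceptually, the markup does little more than record timestamps around a named user flow so its duration can be measured. The sketch below is a simplified, hypothetical illustration of that idea; the class and method names are ours, not the actual UserFlow library API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of what user-flow markup does conceptually:
// record a start timestamp when the flow begins and compute the
// duration when it ends. The real UserFlow library API may differ.
class FlowTracker {
    private final Map<String, Long> starts = new HashMap<>();

    // Mark the beginning of a named flow, e.g. "app_startup".
    void startFlow(String name) {
        starts.put(name, System.nanoTime());
    }

    // Mark the end of the flow and return its duration in milliseconds.
    long endFlow(String name) {
        Long start = starts.remove(name);
        if (start == null) {
            throw new IllegalStateException("flow not started: " + name);
        }
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

In the real app, the start marker would sit at the entry point of the flow (for app startup, as early as possible in application initialization) and the end marker at the moment the flow is visibly complete for the user.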
Automation
Once we've annotated the user flow, the next step is to automate it by creating a script that simulates user behavior, clicking through the steps of the selected flow. In our case, since we're focusing on the app startup, there isn't much to do, as the flow is relatively simple.
We will use the UIAutomator API to create an instrumentation test (androidTest APK) that drives the flow.
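For the startup flow, the test only needs to launch the app from the home screen and wait for its window to appear. A minimal sketch using the UIAutomator API is below; it runs on a device or emulator as part of an androidTest APK, and the package name is Telegram's public application ID:

```java
import android.content.Context;
import android.content.Intent;

import androidx.test.platform.app.InstrumentationRegistry;
import androidx.test.uiautomator.By;
import androidx.test.uiautomator.UiDevice;
import androidx.test.uiautomator.Until;

import org.junit.Test;

public class StartupFlowTest {
    private static final String PACKAGE = "org.telegram.messenger";
    private static final long LAUNCH_TIMEOUT_MS = 10_000;

    @Test
    public void launchFromHomeScreen() {
        UiDevice device =
                UiDevice.getInstance(InstrumentationRegistry.getInstrumentation());

        // Start from the home screen so every run begins in the same state.
        device.pressHome();

        // Launch Telegram with a fresh task, as a user tapping the icon would.
        Context context =
                InstrumentationRegistry.getInstrumentation().getTargetContext();
        Intent intent = context.getPackageManager()
                .getLaunchIntentForPackage(PACKAGE);
        intent.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TASK);
        context.startActivity(intent);

        // Wait until the app's window is in the foreground.
        device.wait(Until.hasObject(By.pkg(PACKAGE).depth(0)), LAUNCH_TIMEOUT_MS);
    }
}
```

More complex flows (opening a profile screen, for example) would add further `device` interactions after launch, selecting UI elements with `By` matchers and waiting on them with `Until` conditions.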

CI/CD integration
The most satisfying aspect here is that we can now streamline this otherwise painful regression-detection process by automating build uploads. The most common method is to add a hook to the CI/CD pipeline: whenever a commit is made to the main VCS branch, the hook is activated and a new build is uploaded to the service.
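As an illustration, such a hook might look like the following hypothetical GitHub Actions job. The upload endpoint, secret name, and artifact path are placeholders, not a real PS Tool API:

```yaml
# Hypothetical CI hook: fires on every commit to the main branch and
# uploads the fresh release build to the RA service.
name: upload-to-ra
on:
  push:
    branches: [main]

jobs:
  upload:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build release APK
        run: ./gradlew assembleRelease
      - name: Upload build to RA service
        run: |
          curl -F "build=@app/build/outputs/apk/release/app-release.apk" \
               -H "Authorization: Bearer ${{ secrets.RA_TOKEN }}" \
               "$RA_UPLOAD_URL"   # placeholder endpoint
```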
Analysis
Using the automation scripts, the RA service carries out a series of app runs to measure the duration of the user flows.

To process the data and compare the results with a previous build, the service runs a series of statistical tests to identify any potential regression.
To dive deeper into the possible causes of a regression, we encourage you to integrate the full suite of PS Tool services. This results in a recorded trace with insights pointing to specific differences between builds.
Example with delay
Let's simulate a delay in our user flow and identify potential regression issues.
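To keep the demonstration reproducible, the regression can be simulated by injecting an artificial delay into the startup path. The sketch below is hypothetical: in the real app the delay would be added to the actual startup code (for example, application initialization), and the names here are illustrative.

```java
// Hypothetical sketch: simulate a regression by injecting an artificial
// delay into the startup path. In the real app this would be added to
// the startup code itself; the names below are illustrative.
class StartupWithDelay {
    static final long INJECTED_DELAY_MS = 300; // the artificial regression

    // Runs the (simulated) startup flow and returns its duration in ms.
    static long runStartup() throws InterruptedException {
        long start = System.nanoTime();
        Thread.sleep(INJECTED_DELAY_MS); // stands in for the injected work
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

Every measured run of the flow now takes at least 300 ms longer than the baseline, which is exactly the kind of shift the statistical comparison between builds is designed to catch.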

As a result, the RA service will detect the regression, and you will receive a notification via email with details of the identified regression. Alternatively, you can review the details through the PS Tool interface.

Conclusion
Software development is a tricky process. You may put a lot of effort into polishing your app and improving its performance, only to see all that hard work undone by a couple of new commits to the codebase.
Now, performance regressions can be detected automatically with every code change. This helps engineering teams identify issues early and ensures that product teams can maintain high user engagement.
It's worth mentioning that poor app performance can lower your revenue, while improving performance can increase it. We reviewed this idea more thoroughly in the article "Why mobile performance matters".
That's why it's a good idea to establish a performance baseline and hold to it, continuously adding new features to your product without letting performance degrade. This gives you confidence in your product even through constant improvements and changes.