Optimising your Mobile App — 4 techniques you should be using

Mike Smales
DAYONE — A new perspective.
4 min read · May 24, 2016


Mobile app optimisation is the process of using controlled experimentation to improve an app’s ability to drive business goals. Here I will outline various techniques I’ve used to build successful apps.

These techniques allow developers to define, measure and test features in an iterative, low-cost way. They utilise a data-driven approach, rather than opinion, and facilitate a validated learning process.

The end goal results in a better performing app, whether that is measured by an increased conversion rate, increased sales or another business goal.

In-App Analytics

Believe it or not, your users are probably not using your app exactly how you think they are. Therefore, it is important to include analytics within your app.

App analytics are typically used to track screen views and in-app events (such as button taps). These provide insights into which app features are most popular and whether users are reaching particular goals, such as completing a sign-up form.
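As an illustrative sketch, the tracking calls can be very lightweight. The example below assumes Firebase Analytics on Android; the screen and event names are invented for the example rather than a fixed schema.

import android.content.Context
import android.os.Bundle
import com.google.firebase.analytics.FirebaseAnalytics

// Record that the user reached the onboarding screen.
fun trackOnboardingViewed(context: Context) {
    val analytics = FirebaseAnalytics.getInstance(context)
    analytics.logEvent(FirebaseAnalytics.Event.SCREEN_VIEW, Bundle().apply {
        putString(FirebaseAnalytics.Param.SCREEN_NAME, "onboarding")
    })
}

// Record a tap on the sign-up button, so the screen’s
// view-to-tap conversion rate can be measured later.
fun trackSignUpTapped(context: Context) {
    val analytics = FirebaseAnalytics.getInstance(context)
    analytics.logEvent("sign_up_button_tapped", Bundle().apply {
        putString("source_screen", "onboarding")
    })
}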

Most importantly, they can highlight issues such as poorly performing screens or app dead spots. A key example would be an issue with a user onboarding screen that was preventing or discouraging user sign-ups. Or it could highlight an issue with the app’s navigation that was keeping a given screen or feature hidden from the user.

If a significant number of users are failing to find a given screen, or to complete a sign-up process, then this is clearly a pain point that needs addressing. Conversely, if a non-core feature turns out to be getting far more user attention than expected, it could be worth investigating why.

A/B Split Testing

Once an area of the app has been chosen for improvement, we can use A/B testing to verify the effectiveness of that improvement.

A/B testing is a method for comparing two or more versions of a screen against each other to discover which is the most successful. Usually a single item is changed at a time, such as an image, a button, or a headline.

For example, on a user onboarding screen we may wish to experiment with a Call To Action (CTA) button. We create different versions of the screen that each have a different CTA, but the rest of the screen content remains exactly the same. Then we randomly split user traffic among the different versions of the screen and record the percentage of users that tap the CTA.
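As a hand-rolled sketch of the traffic split (in practice an A/B testing or remote-config tool would handle bucketing for you; the variant names and button copy below are invented):

enum class CtaVariant { CONTROL, TRIAL_COPY }

// Deterministic bucketing: hashing a stable user ID means each
// user keeps seeing the same variant across sessions.
fun assignCtaVariant(userId: String): CtaVariant {
    val bucket = Math.floorMod(userId.hashCode(), 100)
    return if (bucket < 50) CtaVariant.CONTROL else CtaVariant.TRIAL_COPY
}

fun ctaLabel(variant: CtaVariant): String = when (variant) {
    CtaVariant.CONTROL    -> "Sign up"          // existing copy
    CtaVariant.TRIAL_COPY -> "Start free trial" // challenger copy
}

The assigned variant should also be attached as a parameter to the CTA tap event, so that the conversion rate can be compared per variant.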

The experiment should run over a couple of days, so that enough data is collected to make a statistically significant decision about which variant is better. Keeping the number of variables under test constrained helps ensure valid results.
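For the curious, the underlying check is typically a two-proportion z-test. Here is a rough sketch; most analytics and experimentation tools compute this for you.

import kotlin.math.abs
import kotlin.math.sqrt

// z-score for the difference between two conversion rates,
// using the pooled standard error.
fun zScore(convA: Int, usersA: Int, convB: Int, usersB: Int): Double {
    val pA = convA.toDouble() / usersA
    val pB = convB.toDouble() / usersB
    val pooled = (convA + convB).toDouble() / (usersA + usersB)
    val se = sqrt(pooled * (1 - pooled) * (1.0 / usersA + 1.0 / usersB))
    return (pB - pA) / se
}

// |z| > 1.96 corresponds to p < 0.05 (95% confidence, two-sided).
fun isSignificant(z: Double): Boolean = abs(z) > 1.96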

With A/B testing, each test generates new data about whether a given change has been more effective or not. If it has, then it can be included in the app and consequently forms part of an improved design.

Multivariate Testing (MVT)

This is the more complex brother of A/B testing. MVT uses the same core mechanism as A/B testing, but compares a higher number of screen items, and therefore reveals more information about how the screen items interact with one another.

This allows us to measure the effect that each combination of design elements has on the given goal. After the test has run, the variables on each screen variation are compared to each other, and to their performance in the context of the other screen versions.
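To make “each combination” concrete, here is a sketch of a full-factorial test over three screen items (the values are invented for the example):

data class ScreenVariant(val headline: String, val cta: String, val formFields: Int)

val headlines = listOf("Get started", "Join for free")
val ctaLabels = listOf("Sign up", "Start free trial")
val formLengths = listOf(3, 5) // number of sign-up form fields

// 2 x 2 x 2 = 8 variants to split traffic across, which is why
// MVT needs far more users than a simple A/B test.
val variants: List<ScreenVariant> =
    headlines.flatMap { h ->
        ctaLabels.flatMap { c ->
            formLengths.map { f -> ScreenVariant(h, c, f) }
        }
    }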

What emerges is a clear picture of which screen version performs best, and which screen items are most responsible for that performance. For example, varying a screen footer may be shown to have very little effect on the performance of the screen. However, varying the length of the sign-up form could have a huge impact.

The big drawback of MVT is that it requires much more traffic to reach statistical significance than A/B testing.

Usability Testing

Usability Testing is the process of watching users use your app. Users are asked to complete a given task whilst being observed. Typical tasks could be trying out the app’s key feature or completing a user sign-up screen.

During the process, it’s crucial that the observer does not prompt the user and that the user is encouraged to speak their mind. This will allow the observer to see if the user encounters any problems or experiences any confusion along the way. If multiple users encounter similar problems, then a usability issue has been found that needs to be fixed.

Usability Testing is sometimes considered an expensive process. However, as Jakob Nielsen has explained, testing the app on a small number of users (say three to five) can often be enough to identify any issues.

Conclusion

These are powerful techniques that, when used correctly, can enable your team to deliver incremental improvements and increase your app’s success.

The Return On Investment for app optimisation can be massive. Even small changes to, say, a landing page or sign-up screen can result in significant increases in conversion rate, sales, or another business goal.

If you liked this and want more:

Heart it, comment on it, and/or follow me. You can find out more about me on my website, mikesmales.com.
