A Field Guide To Mobile App Testing

Testers are often thought of as the people who find bugs, but have you ever considered how they actually approach testing? What do testers do, and how do they add value to a typical technology project?

I’d like to take you through the thought process of testers and the types of things they consider when testing a mobile app. The intention here is to show the coverage and depth that testers often go to.

Testers Ask Questions

At the heart of testing is the capability to ask challenging and relevant questions. You are on your way to becoming a good tester if you combine investigative and questioning skills with knowledge of technology and products.

For example, testers might ask:

  • What platforms should this product work on?
  • What is the app supposed to do?
  • What happens if I do this?

And so forth.

Testers find questions in all sorts of places: conversations, designs, documentation, user feedback, the product itself. The options are huge… So, let’s dive in!

Where To Start Testing

In an ideal world, testers would all have up-to-date details on what is being built. In the real world, this is rare. So, like everyone else, testers make do with what they have. Don’t let this be an excuse not to test! Information used for testing can be gathered from many different sources, internally and externally.

At this stage, testers might ask questions such as these:

  • What information exists? Specifications? Project conversations? User documentation? Knowledgeable team members? Could the support forum or an online company forum be of help? Is there a log of existing bugs?
  • What OS, platform and device should this app work on and be tested on?
  • What kind of data is processed by the application (e.g. personal details, credit-card numbers)?
  • Does the application integrate with external applications (APIs, data sources)?
  • Does the app work with certain mobile browsers?
  • What do existing customers say about the product?
  • How much time is available for testing?
  • What priorities and risks are there?
  • Who is experiencing pain, and why?
  • How are releases or updates made?

Based on the information gathered, testers can put together a plan for how to approach the testing. Budgets often determine how testing is approached: you would certainly approach testing differently if you had one day instead of a week or a month. Predicting outcomes gets much easier as you come to understand the team, its processes and the answers to many of these types of questions.

Example: Social Commentary on the Facebook App

I love using the Facebook app as an example when I’m gathering information as a tester. Complaints about it are everywhere. Just check out the comments in the iTunes App Store for some of the frustrations users are facing. Plenty more are dotted across the Web.

Facebook’s iPhone app has a lot of negative reviews. [1]

If I were challenged to test the Facebook app, I would definitely take this feedback into consideration. I would be daft not to!

The Creativity Of Testers

You probably know what the app is meant to do, but what can it do? And how will people actually use it? Testers are great at thinking outside the box, trying out different things, constantly asking “What if?” and “Why?”

For example, mobile testers will often adopt the mindset of different types of people — not literally, of course, but the ability to think, analyze and visualize themselves as different users can be quite enlightening.

Testers might put themselves in these shoes:

  • Novice user,
  • Experienced user,
  • Fan,
  • Hacker,
  • Competitor.

Many more personas could be adopted; much of this depends on what you are building. It’s not just about personalities, though, but also about behavior and workflows. People use products in strange ways. For example, they:

  • Go back when they are not supposed to,
  • Are impatient and hit keys multiple times,
  • Enter incorrect data,
  • Can’t figure out how to do something,
  • Might not have the required setup,
  • Might assume they know what they are doing (neglecting to read instructions, for example).

Testers look for these situations, often discovering unexpected results along the way. Sometimes the bugs initially found can appear small and insignificant, whereupon deeper investigation uncovers bigger problems.
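
As a simple illustration, here is a minimal Python sketch of how the “impatient user who taps twice” behavior might be tested. The SubmitHandler class and its debounce window are assumptions made up for this example, not anyone’s real API:

    import time

    class SubmitHandler:
        """Hypothetical form handler that should debounce rapid duplicate taps."""

        def __init__(self, debounce_seconds=0.5):
            self.debounce_seconds = debounce_seconds
            self.submissions = []
            self._last_tap = None

        def tap_submit(self, payload, now=None):
            now = time.monotonic() if now is None else now
            # A second tap arriving inside the debounce window should be ignored.
            if self._last_tap is not None and now - self._last_tap < self.debounce_seconds:
                return
            self._last_tap = now
            self.submissions.append(payload)

    def test_impatient_user_taps_twice():
        handler = SubmitHandler()
        handler.tap_submit({"comment": "hello"}, now=0.0)
        handler.tap_submit({"comment": "hello"}, now=0.1)  # impatient second tap
        assert len(handler.submissions) == 1, "a duplicate tap created a second submission"

    test_impatient_user_taps_twice()

The same shape works for back-button abuse or invalid data: model the behavior, drive it the way a real user would, and assert on the state left behind.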

Many of these issues can be identified up front with testing. Not all of the following questions will be relevant to your mobile app, but try asking some of them:

  • Does it do what it says on the tin?
  • Does the app perform the tasks it was designed to do?
  • Does the app perform tasks that it wasn’t designed to do?
  • How does the app perform when being used consistently or under a load? Is it sluggish? Does it crash? Does it update? Does it give feedback?
  • Do crash reports give clues about the app?
  • How can one navigate creatively, logically or negatively around the app?
  • Does the user trust your brand?
  • How secure is the user’s data?
  • Is it possible to break or hack the app?
  • What happens when you push the app to its limits?
  • Does the app ask to turn on related services (e.g. GPS, Wi-Fi)? What if the user does? Or doesn’t?
  • Where does the app redirect me? To the website? From website to app? Does it cause problems?
  • Are communication and marketing consistent with the app’s function, design and content?
  • What is the sign-up process like? Can it be done on the app? On a website?
  • Does sign-up integrate with other services such as Facebook and Twitter?

Example: RunKeeper’s Buggy Update

RunKeeper, an app to track your fitness activities, recently released an update with new “Goal Setting” features. I was interested in giving it a try, partly from a testing perspective, but also as a genuinely interested user. I discovered a few problems.

  1. It defaulted to pounds. I wanted weights in kilograms.
  2. Switching between pounds and kilograms just didn’t work properly.
  3. This caused confusion and led to incorrect data and graphs being shown when setting my goals.
  4. Because of that, I wanted to delete the goals, but found there was no way to do it in the mobile app.
  5. To work around this, I had to change my weight so that the app would register the goal as being completed.
  6. I could then try adding the goal again.
  7. Because of all of this confusion, I played around with it a bit more to see what other issues I could find.

Below are screenshots of some of the issues found.

RunKeeper date bug: A recent update of RunKeeper included a new “Goals” section. Playing around with its dates, I discovered that start and end dates could be set from the year 1 A.D. Also, why two years labelled “1”?

RunKeeper typo bug: Another RunKeeper bug, this time a typo in the “Current Weight” section, which appeared when removing the data from the field. Typos are simple bugs to fix but look very unprofessional if ignored.

RunKeeper goals bug: Here is the confusion that happened as a result of trying to switch between pounds and kilograms. If I want to lose 46 pounds, the bar actually shows 21 pounds.
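
The pounds/kilograms confusion above is exactly the kind of bug a tiny round-trip check can catch early. Here is a minimal sketch, assuming hypothetical conversion helpers like the ones such an app might use internally (the function names are mine, not RunKeeper’s):

    KG_PER_POUND = 0.45359237  # the exact definition of the international pound

    def pounds_to_kg(pounds):
        return pounds * KG_PER_POUND

    def kg_to_pounds(kg):
        return kg / KG_PER_POUND

    def test_unit_round_trip():
        # Switching units back and forth should never drift or corrupt the value.
        for pounds in (0, 1, 46, 250.5):
            assert abs(kg_to_pounds(pounds_to_kg(pounds)) - pounds) < 1e-9

    def test_goal_displayed_in_requested_unit():
        # A 46-pound goal should still read as 46 pounds after an internal
        # conversion to kilograms, not as 21 (which is roughly the same
        # weight expressed in kilograms, hinting at a mixed-unit display).
        goal_kg = pounds_to_kg(46)
        assert round(kg_to_pounds(goal_kg)) == 46

    test_unit_round_trip()
    test_goal_displayed_in_requested_unit()

Notice that 46 pounds is about 21 kilograms, which suggests the progress bar was rendering the stored kilogram value under a pounds label.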

There is no quick way to identify issues like these. Every app and team faces different challenges. However, one defining characteristic of testers is that they want to go beyond the limits, do the unusual, change things around, test over a long period of time — days, weeks or months instead of minutes — and do what they have been told is not possible. These are the types of scenarios that often bring up bugs.

Where’s All The Data?

Testers like to have fun with data, sometimes to the frustration of developers. The reality is that, in the flow of information, it is easy to confuse either the user or the software. This is ever more important with data- and cloud-based services; there is so much room for errors to occur.

Perhaps you could try checking out what happens in the following scenarios (a sketch of the interrupted-sync case follows the list):

  • The mobile device is full of data.
  • The tester removes all of the data.
  • The tester deletes the app. What happens to the data?
  • The tester deletes then reinstalls the app.
  • Too much or too little content causes the design or layout to change.
  • Working with different times and time zones.
  • Data does not sync.
  • Syncing is interrupted.
  • Data updates affect other services (such as websites and cloud services).
  • Data is processed rapidly or in large amounts.
  • Invalid data is used.
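
To make the “syncing is interrupted” scenario concrete, here is a minimal Python sketch with a simulated flaky connection. The SyncClient and transport are hypothetical stand-ins; the point is the assertion that a failed sync commits nothing:

    class NetworkDropped(Exception):
        pass

    class FlakyTransport:
        """Simulated connection that dies after a fixed number of successful sends."""

        def __init__(self, fail_after):
            self.fail_after = fail_after
            self.sent = []

        def send(self, record):
            if len(self.sent) >= self.fail_after:
                raise NetworkDropped("connection lost mid-sync")
            self.sent.append(record)

    class SyncClient:
        """Hypothetical client: server state is committed only after a full send."""

        def __init__(self, transport):
            self.transport = transport
            self.server_state = []

        def sync(self, records):
            staged = []
            for record in records:
                self.transport.send(record)   # may raise partway through
                staged.append(record)
            self.server_state.extend(staged)  # commit all-or-nothing

    def test_interrupted_sync_leaves_no_partial_data():
        client = SyncClient(FlakyTransport(fail_after=2))
        try:
            client.sync(["run-1", "run-2", "run-3"])
        except NetworkDropped:
            pass
        assert client.server_state == [], "an interrupted sync committed partial data"

    test_interrupted_sync_leaves_no_partial_data()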

Example: Soup.me Is Wrong

I was trying out Soup.me, a Web service that sorts your Instagram photos by map and color, but I didn’t get very far. [2] When I tried to sign up, it said that I didn’t have enough Instagram photos. This is not true, because I have published over 500 photos on my Instagram account. It’s not clear what the problem was here. It could have been a data issue. It could have been a performance issue. Or perhaps it was a mistake in the app’s error messages.

The Soup.me error. [3]

Another Example: Quicklytics

Quicklytics is a Web analytics iPad app. In my scenario, a website profile of mine still exists in the app despite my having deleted it from my Google Analytics account. My questions here are:

  • I have deleted this Web profile, so why is this still being displayed?
  • The left panel doesn’t appear to have been designed to account for no data. Could this be improved to avoid confusing the user?

The deleted profile still showing in Quicklytics.

Testers like to test the limits of data, too. They will often get to know the app as a typical user would, but pushing the limits doesn’t take them long. Data is messy, and testers try to consider the types of users of the software and how to test in many different scenarios.

For example, they might try to do the following:

  • Test the limits of user input (see the sketch after this list),
  • Play around with duplicate data,
  • Test on a brand-new, clean phone,
  • Test on an old phone,
  • Pre-populate the app with different types of data,
  • Consider crowd-sourcing the testing,
  • Automate some tests,
  • Stress the app with some unexpected data to see how it copes,
  • Analyze how information and data affect the user experience,
  • Always question whether what they see is correct.
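
As promised above, here is a small sketch of testing the limits of user input, using a hypothetical display-name validator. The 1-to-30-character rule is an assumed requirement, invented purely for illustration:

    def validate_display_name(name):
        """Hypothetical validator: 1 to 30 visible characters, no control characters."""
        if not isinstance(name, str):
            return False
        stripped = name.strip()
        if not 1 <= len(stripped) <= 30:
            return False
        return all(ch.isprintable() for ch in stripped)

    def test_input_limits():
        assert validate_display_name("a")                # shortest legal value
        assert validate_display_name("x" * 30)           # exactly at the limit
        assert not validate_display_name("x" * 31)       # one past the limit
        assert not validate_display_name("")             # empty
        assert not validate_display_name("   ")          # whitespace only
        assert not validate_display_name("bad\x00name")  # control character
        assert not validate_display_name(None)           # wrong type entirely

    test_input_limits()

Boundary values (the limit, one below, one past) are where off-by-one mistakes live, so they are always worth a dedicated check.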

Creating Errors And Messages

I’m not here to talk about (good) error-message design. Rather, I’m approaching this from a user’s and tester’s point of view. Errors and messages are common places for testers to find problems.

Questions to Ask About Error Messages

Consider the following questions (a sketch of one automated check follows the list):

  • Is the UI for errors acceptable?
  • Are error messages accessible?
  • Are error messages consistent?
  • Are they helpful?
  • Is the content appropriate?
  • Do errors adhere to good practices and standards?
  • Are the error messages security-conscious?
  • Are logs and crashes accessible to the user and the developer?
  • Have all errors been produced in testing?
  • What state is the user left in after an error message?
  • Have errors failed to appear when they should have?
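
One of these questions, whether the error messages are security-conscious, lends itself to a simple automated check. Here is a minimal sketch; the list of leak patterns is a hypothetical starting point that a real project would extend:

    import re

    # Patterns that suggest an error message is leaking internal details.
    LEAK_PATTERNS = [
        re.compile(r"Traceback \(most recent call last\)"),   # Python stack trace
        re.compile(r"\bat [\w.$]+\(\w+\.java:\d+\)"),         # Java stack frame
        re.compile(r"\bSELECT\b.+\bFROM\b", re.IGNORECASE),   # raw SQL
        re.compile(r"/(?:var|usr|home)/\S+"),                 # server file paths
    ]

    def message_is_user_safe(message):
        return not any(p.search(message) for p in LEAK_PATTERNS)

    def test_error_messages_do_not_leak_internals():
        good = "We couldn't save your post. Please check your connection and try again."
        bad = "Error: SELECT * FROM users WHERE id=42 failed at /var/www/app/db.py"
        assert message_is_user_safe(good)
        assert not message_is_user_safe(bad)

    test_error_messages_do_not_leak_internals()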

Error messages quite often creep into the user experience. Bad and unhelpful errors are everywhere. Trying to stop users from encountering error messages would be ideal, but this is probably impossible. Errors can be designed for, implemented and verified against expectations, but testers are great at finding unexpected bugs and at carefully considering whether what they see could be improved.

Some Examples of Error Messages

I like the example below of an error message in the Facebook app on the iPhone. Not only is the text somewhat long-winded, sheepishly trying to cover many different scenarios, but there is also the possibility that the message gets lost in the ether.

The Facebook error message. [4] [5]

Perhaps the messages below are candidates for the Hall of Fame of how not to write messages?

Two badly written messages. [6] [7]

What about this one from The Guardian’s app for the iPad? What if I don’t want to “Retry”?

The Guardian's 'Download canceled' message.

Platform-Specific Considerations

Becoming knowledgeable about the business, technology and design constraints of relevant platforms is crucial for any project team member.

So, what types of bugs do testers look for in mobile apps?

  • Does it follow the design guidelines for that particular platform?
  • How does the design compare with designs by competitors and in the industry?
  • Does the product work with peripherals?
  • Does the touchscreen support gestures (tap, double-tap, touch and hold, drag, shake, pinch, flick, swipe)?
  • Is the app accessible?
  • What happens when you change the orientation of the device?
  • Does it make use of mapping and GPS?
  • Is there a user guide?
  • Is the email workflow user-friendly?
  • Does the app work smoothly when sharing through social networks? Does it integrate with other social apps or websites?
  • Does the app behave properly when the user is multitasking and switching between apps?
  • Does the app update with a time stamp when the user pulls to refresh?
  • What are the app’s default settings? Have they been adjusted?
  • Does audio make a difference?

Example: ChimpStats

ChimpStats is an iPad app for viewing details of email campaigns. I first started using the app in horizontal mode. I got stuck as soon as I wanted to enter the API key: I couldn’t actually enter any content into the API field unless I rotated the device to vertical.

ChimpStats in horizontal and vertical orientations.
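
A toy regression test for that orientation bug might look like the sketch below. The screen model is hypothetical (a real test would drive the actual UI through a framework such as Appium or XCUITest), but the shape is the same: perform the task in every orientation, not just the one the developer happened to use:

    class ApiKeyScreen:
        """Hypothetical screen model. The real bug: the API-key field only
        accepted input in vertical (portrait) orientation."""

        def __init__(self):
            self.orientation = "portrait"
            self.api_key = ""

        def rotate(self, orientation):
            self.orientation = orientation

        def enter_api_key(self, text):
            self.api_key = text  # must work regardless of orientation

    def test_api_key_entry_in_both_orientations():
        for orientation in ("portrait", "landscape"):
            screen = ApiKeyScreen()
            screen.rotate(orientation)
            screen.enter_api_key("abc123")
            assert screen.api_key == "abc123", f"entry failed in {orientation}"

    test_api_key_entry_in_both_orientations()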

Connectivity Issues And Interruption

Funny things can happen when connections go up and down or you get interrupted unexpectedly.

Have you tried using the app in the following situations?

  • Moving about?
  • With Wi-Fi connectivity?
  • Without Wi-Fi?
  • On 3G?
  • With intermittent connectivity?
  • Set to airplane mode?
  • When a phone call comes in?
  • While receiving a text message?
  • When receiving an app notification?
  • With low or no battery life?
  • When the app forces an update?
  • When receiving a voicemail?

These types of tests are a breeding ground for errors and bugs. I highly recommend testing your app in these conditions — not just starting it up and checking to see that it works, but going through some user workflows and forcing connectivity drops and interruptions at particular intervals. Ask yourself (a sketch of one such test follows the list):

  • Does the app provide adequate feedback?
  • Is data transmitted with the user’s knowledge?
  • Does it grind to a halt and then crash?
  • What happens when the app is open?
  • What happens midway through a task?
  • Is it possible to lose your work?
  • Can you ignore a notification? What happens?
  • Can you respond to a notification? What happens?
  • Is any (error) messaging appropriate when something goes wrong?
  • What happens if your log-in expires or times out?
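
As promised above, here is a sketch of one interruption test: a hypothetical message composer that must preserve the user’s work across an incoming call. The class is a stand-in for whatever state-saving hook your platform actually provides:

    class Composer:
        """Hypothetical message composer that must not lose work on interruption."""

        def __init__(self):
            self.text = ""
            self.draft = None

        def on_interrupt(self):
            # Called for an incoming call, a switch to another app, a forced update…
            self.draft = self.text

        def on_resume(self):
            if self.draft is not None:
                self.text = self.draft

    def test_incoming_call_does_not_lose_work():
        composer = Composer()
        composer.text = "Meet you at "
        composer.on_interrupt()  # a phone call arrives mid-sentence
        composer.text = ""       # simulate the view being torn down meanwhile
        composer.on_resume()
        assert composer.text == "Meet you at ", "work was lost across the interruption"

    test_incoming_call_does_not_lose_work()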

Maintaining The App

Speeding up the process of testing an app is so easy. Test it once and it will be OK forever, right?

Think again.

One problem I’m facing at the moment with some apps on my iPad is that they won’t download after being updated. As a user, this is very frustrating.

Perhaps this is out of the control of the app’s developer. Who knows? All I know is that it doesn’t work for me as a user. I’ve tried removing the app and then reinstalling, but the problem still occurs. I’ve done a bit of searching; no luck with any of my questions, aside from suggestions to update my OS. Perhaps I’ll try that next… when I have time.

The point is, if the app was tested once and only once (or over a short period of time), many problems could have gone undetected. Your app might not have changed, but things all around it could make it break.

When things are changing constantly and quickly, how does it affect your app? Ask yourself:

  • Can I download the app?
  • Can I download and install an update?
  • Does the app still work after updating?
  • Can I update the app when multiple updates are waiting?
  • What happens if the OS is updated?
  • What happens if the OS is not updated?
  • Does the app automatically download to other devices via iTunes syncing?
  • Is it worth automating some tasks or tests?
  • Does the app communicate with Web services? How would this make a difference?

Testing your mobile app after each release would be wise. Define a set of priority tests to cover at each new release, and make sure the tests are performed in a variety of conditions — perhaps on the most popular platforms. Over time, it might be worth automating some tests — but remember that automated tests are not a magic bullet; some problems are spotted only by a human eye.
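
What might such a set of priority tests look like in practice? Below is a minimal, hypothetical smoke-suite harness; the FakeApp class exists only so that the sketch runs, where a real suite would drive the device at each step:

    # The priority checks to repeat at every release, kept small enough to
    # run on each supported device and OS version.
    SMOKE_CHECKS = [
        ("fresh install launches", lambda app: app.launch()),
        ("existing user can log in", lambda app: app.log_in("test-user")),
        ("main screen loads", lambda app: app.open_main_screen()),
        ("update over old version keeps data", lambda app: app.upgrade_and_verify()),
    ]

    def run_smoke_suite(app):
        failures = []
        for name, check in SMOKE_CHECKS:
            try:
                check(app)
            except Exception as exc:  # record and continue; a smoke run should finish
                failures.append((name, exc))
        return failures

    class FakeApp:
        """Stand-in so the sketch runs; a real suite would drive the device here."""
        def launch(self): pass
        def log_in(self, user): pass
        def open_main_screen(self): pass
        def upgrade_and_verify(self): raise RuntimeError("migration dropped settings")

    print(run_smoke_suite(FakeApp()))

Automation can carry these repetitive checks at every release, while the human eye hunts for the subtler bugs described earlier.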

Example: Analytics App on the iPhone

I’ve had this app for two years now. It’s worked absolutely fine until recently; now, it has been showing no data for some of my websites (yes, more than one person has visited my website over the course of a month!). A quick look at the comments in the app store showed that I wasn’t the only one with this problem.

The analytics app showing no visitor data [8], and App Store reviews showing that others had the same problem. [9]

Here is another example from the Twitter app for the iPhone. After updating and starting up the app, I saw this message momentarily (Note: I have been an active tweeter for five years). I got a bit worried for a second! Thankfully, the message about having an empty timeline disappeared quickly and of its own accord.

The Twitter app’s empty-timeline message. [10]

Testing Is Not Clear-Cut

We’ve covered some of the ground that mobile testing can cover, the basis of it being that, with questions, we can find problems.

All too often, testing is thought of as being entirely logical, planned and predictable, full of processes, test scripts and test plans, passes and fails, green and red lights. This couldn’t be further from the truth.

Sure, we can have these processes if and when necessary, but they shouldn’t be the end result of what we do. We’re not here just to create test cases and find bugs. We’re here to find the problems that matter, to provide valuable information that enables other project members to confidently decide when to release. And the best way to get there is by asking questions!

(al)

Footnotes

  1. http://www.smashingmagazine.com/wp-content/uploads/2012/08/fbsocialcommentary-med.png
  2. http://www.soup.me/instagram
  3. http://www.soup.me/instagram
  4. http://www.smashingmagazine.com/wp-content/uploads/2012/10/facebookerror-med.jpeg
  5. http://www.smashingmagazine.com/wp-content/uploads/2012/10/facebookerror2-med.jpeg
  6. http://www.smashingmagazine.com/wp-content/uploads/2012/10/textmsg-med.jpeg
  7. http://www.smashingmagazine.com/wp-content/uploads/2012/10/textmsg2-med.jpeg
  8. http://www.smashingmagazine.com/wp-content/uploads/2012/08/analyticsdata-med.jpeg
  9. http://www.smashingmagazine.com/wp-content/uploads/2012/08/analyticreviews-med.jpeg
  10. http://www.smashingmagazine.com/wp-content/uploads/2012/08/twitter-med.jpeg


Rosie has a mixed fascination with software testing, the social Web and startups. You can find her on her blog.

Comments

  1.

    Just when you thought browser and desktop testing leads you to the edge of the cliff… Here comes mobile to push you over.

    Thank you for the article. It is great to see testing brought to more mainstream attention.

    Over-the-air data interruption test… dubbing it the “stick the device under a metal pot or in a desk drawer” test :)

  2.

    Great article, Rosie.

    I think every designer out there has come across these sorts of problems, and the best way to fix them – ignoring the highly unlikely possibility of documenting everything before development – is to be as involved in the testing process as possible. It’s really easy for designers to get caught up in delivering the high level, or ideal scenarios, and gloss over the edge cases. More often than not, how an application handles the edge cases will be a huge determining factor in the overall user perception of experience.

  3.

    Interesting to hear from the tester’s perspective. I have worked in situations where the QA team expected a detailed test plan and didn’t seem to know how to proceed without one. I understand the need for such a plan in some situations to help focus the testing on relevant features, but my ideal would be to have more thorough testing as described, or just be able to say: “Go break it!” It’s always the bug you don’t find that ends up being the heartbreaker/projectbreaker!

    • Ryan Salvanera aka PinoyTester (October 22, 2012):

      I am sure others share the same frustration you have with QA teams that seem to require everything to be laid out before they start testing. This is unfortunate because not only does it delay the testing (since other team members are busy with their own functions), it also (in my opinion) limits the scope of their testing.

      • Jean Ann Harrison (June 21, 2014):

        Ryan:
        It’s very unfortunate that some testers expect things to be laid out for them, but not all of us do. However, Rosie, myself and many others are trying to provide free content and free training to help; but, as with anyone in a profession or craft, you need people committed to learning, too.

        Mobile testers are being trained to pay attention more to the unknown than the known. Rosie has done a fantastic job in laying out many questions, but it’s only a start. In order for mobile testers to learn more about the architecture and provide more solid test coverage, they must work closely with developers. There’s a responsibility for developers and testers, not just mobile testers, to work together. As a mobile tester, I worked with a whole team of developers: an operating-system developer, a UI developer, a network-communications developer, a few hardware developers and so on. I was lucky that this team was patient and willing to work with testers. I was eager and willing to learn. They were eager to teach. However, I’ve also worked with developers who outright refused to work with testers and displayed complete disdain for them. This doesn’t work. That project went horribly wrong due to teams refusing to cooperate.

        Mobile Testers must learn to ask questions and Rosie you brought up a wide spectrum for any Mobile Tester to get started. My hope is that these spur them on to come up with more questions.

        Thank you for continuing to inspire.

  4. Ryan Salvanera aka PinoyTester (October 22, 2012):

    Nice article, Rosie.

    I especially like this: “Sometimes the bugs initially found can appear small and insignificant, whereupon deeper investigation uncovers bigger problems.” I have been in situations where non-testers dismissed these “insignificant” probings but were surprised when I later pointed out big issues after my investigation was through.

  5.

    Lots of information. Thanks so much for posting this.

    Jessica

  6.

    Very nice article. I’ve actually sent this to my QA Manager.

  7.

    Nice article. Thanks for sharing.

  8.

    Hey Rosie!
    I’m the developer of Quicklytics, one of the apps you mentioned above. First of all, what a pleasure to be listed in Smashing Magazine, a site I’ve been reading for many years. I’m glad to know you use the app and really hope you like it!

    This is a great article on testing apps. I work daily with or around testers and I mostly agree with all the points in your article. There are some things that you should probably know about that “bug” in Quicklytics you mentioned though.

    Not deleting websites from the app is really not a bug; some would consider it a bug, some would consider it a feature. I made the explicit decision in the app to simply show the error that Google provides when you try to access a site that is not yours, instead of simply deleting it, for several reasons:

    1- First, very few people actually go through this process (removing their own site manually), and there’s a very easy way of handling this case from within the app.
    2- When this happens, at least from anecdotal evidence, it’s usually because the person’s access to the account was revoked, rather than them having removed it themselves (so it happens TO them). It’s very important for these users to know that they don’t have access anymore, instead of the site simply being deleted. If I had just removed the site, it would seem like the app had a bug for these users.
    3- When you open the app, my main concern is showing you the data for your sites, and from my point of view, anything that gets in the way is just waste. Because of this, there’s no time during the app startup for me to actually sync sites properly without slowing the app down and the user noticing it.
    4- Sync is a bigger problem in itself, because while some users would appreciate new sites being added to the app, some users have lists of hundreds of sites that they painstakingly curate in the app, and the less I do around this, the easier it is for those users.

    Now, I’m not saying the app has no bugs; I know it has. Dozens of them were just fixed and are waiting for Apple’s approval, and I’m sure the new update will have more bugs.

    The point here, though, is that in software every single thing an app does is an explicit decision made by somebody during the development process. Because of it, you can’t properly validate a piece of software (or probably almost any product out there) without having knowledge of the business, knowing the context in which the decisions were made, the expectations of the users of the app, which might be drastically different for each user.

    As a final note, I would have appreciated if you had contacted me about this issue, like many other users do when they find something. I’ve found over a decade writing software that having an open channel with users is by far the easiest way to sort out issues; many times one or two emails is enough to match people’s expectations with features in your software. I keep an open list of requests for the app here: http://quicklytics.uservoice.com/forums/148836-quicklytics-forum.

    Best regards,

    Eduardo Scoz
    http://escoz.com

    • Rosie (reply):

      Hey Eduardo,

      Thank you for your long and comprehensive comment. I should say that I use Quicklytics daily and do love it :)

      I think most tech teams know that there will always be bugs in a system. The decision that needs to be made is whether the bugs that are found matter enough to improve or fix them.

      In this situation, I am sure there are plenty of reasons why it is the way it is. And you are most welcome to state whether or not it is a bug. The main point I’m highlighting is that I came across a situation that didn’t quite look right to me (as a tester without deep insight), so I questioned it, which is what testers do. In a real-life situation, I may have raised a bug or spoken to someone who knows more about it.

      I guess the main reason why I highlighted it is because it left me with more questions. Is this what I should be seeing? Could the error text be improved? What am I supposed to do next (as a user)? Why is that box still appearing if there is no data? How do I fix or get rid of it? Is what I’m seeing consistent with the rest of the app? (It felt a bit more rough than other areas of the app).

      And as a side note to myself: yes, perhaps it would be good to report these issues, and I promise to make a more conscious effort in the future (especially for your app) :) …However, on the other hand, do you know how many bugs I come across? And what that would mean for my personal time if I reported all of them? :)

  9.

    Actually, for client testers, covering both mobile platforms and PC, it is reasonable to design abnormal scenarios, because any factor, any reason, may cause a malfunction of the app. That is to say, thinking more is always valuable, regardless of whether the scenario seems reasonable.

  10.

    So far I have read many blogs, articles and whitepapers on mobile application testing; I found this to be the best one, with live scenario examples and screenshots. An outstanding article!

  11.

    Hey Rosie,

    Found this via a post on STC. I haven’t really worked in this area (mobile app testing), but the article was a really good insight for me and an excellent “go to” in terms of the questions you identified, if I ever become stuck on a problem and would like an idea of a new testing angle.

    Many Thanks

    Danny

  12.

    Great article… We’re going through the same thing here. Imagine building an app on Titanium Appcelerator that distributes builds for Android and iOS, with English and Arabic versions, and with features that work differently offline and online, with GPS on or off. Now imagine your QA team is outsourced (apart from the dev team) and you have a small non-QA “QA team” close by as well.

    Welcome to my project.

  13.

    pfff… great article. Next week, each one of my teams will read it :)

  14.

    Great article with very specific points about issues often found in mobile application projects. All of your points should be considered when developing a testing strategy and plan. I especially like the points around negative testing, environmental testing, and the non functional testing. I’d add that a strategy often will consider some tech aspects like performance and load testing along with disaster recovery, and data migration (if applicable). Consider also different phases like unit testing, integrated testing, user acceptance along with the benefits of prototyping to test UX and usability. I’ve put a few thoughts down in my blog: adamsivell.blogspot.com.au/2013/02/testing-enterprise-mobility.html

  15.

    Very nice article, it definitely improved my thought process on mobile application testing.

    Thank you very much..

  16. Mahmoud Passikhani (September 3, 2013):

    Very useful article. Thanks!

  17.

    This is a detailed list indeed! So, here’s the deal – put a faulty application out there and you’ll get hate mail, bad reviews and requests for refunds. Mobile app users simply don’t tolerate risky, low-quality apps anymore. It’s crucial to test the app at the embryonic stage of mobile app development. Here are my thoughts on mobile app testing: http://mlabs.boston-technology.com/blog/the-ultimate-cheat-sheet-on-mobile-app-testing

  18.

    Great article!
    You should check out this mobile app testing model – http://moolya.com/blogs/2014/05/34/COP-FLUNG-GUN-MODEL
    It’s a lot of what you said, but in a mindmap!!

  19.

    Great article, thanks for sharing your knowledge.

