What is Quality?

As someone who has worked in the area of quality/test engineering for some time, this question has popped up time and again. I have been involved in several discussions among peers about it, but I have never quite gotten a satisfactory answer, at least not for myself.

Some time back, while reading the book “How Google Tests Software”, I came across the line

Quality != Testing

This really made me stop and ask myself, “so what then is Quality, in particular Software Quality?”.

This may come as a surprise to the uninitiated, but those of us in the so-called Quality and Test specialization in software engineering spend most of our time and energy honing the art of testing. Functional testing, performance testing, unit testing, etc. – how do we test this feature, how do we test that component, how do we “break” the software? Those are the questions and challenges that plague our profession.

So what then is Quality?

I pondered this for quite a while and then one day it hit me: Quality is and has always been synonymous with the luxury industry.

Take for example how a Hermes Birkin bag compares to a Coach (now if you’re male, single and straight, you had better learn to distinguish the two fast). One costs tens to hundreds of thousands of dollars and the other a few hundred.

Why is that? A bag is a bag is a bag right? Nope. Each Birkin bag is hand-sewn, buffed, painted, and polished by expert artisans. Note: expert artisans. Artisans who are passionate about their craft.

At the end of the day quality comes down to craftsmanship. We pay for the craftsmanship, not the object.

Software quality, it follows, comes down to software craftsmanship –

1. how systems are designed
2. how systems are built
3. how systems are tested
4. how systems are deployed

Craftsmanship is about pride in the product one is building and it is about knowing and practicing the various aspects of this process.

In “Clean Code: A Handbook of Agile Software Craftsmanship”, Robert Martin describes software craftsmanship as the knowledge of principles, patterns, practices and heuristics that a craftsman possesses, augmented with the hard work of applying that knowledge in the daily grind of churning out production-ready code. It is hard work and it takes discipline.

Quality, hence, is everyone’s responsibility: designing the UI so that users find it aesthetically pleasing to use; architecting for performance, modularity and testability; writing code that is efficient yet maintainable; deploying in a seamless fashion that minimizes the impact on users; and testing – putting the system through its paces and figuring out what could possibly cause the software to break, or what functionality was missed.

So if quality is everyone’s responsibility, what then is the role of the quality or test engineer?

I would say that it is to encourage software craftsmanship.

This goes beyond testing, although that is the bread and butter of our profession.

Why bother? Wouldn’t it be much easier to just test and report bugs? Yes, but then the true value of finding those bugs is lost. The true value is in asking why those bugs came about in the first place and putting measures in place so that they don’t surface again. In short – encouraging software craftsmanship.

However, in order to know what good code looks like, you need to have built code, seen bad code, or seen really good code – and ideally all of the above.

Coming back to the Birkin bag analogy: an expert is able to tell a real Birkin bag from a fake. It comes down to the fittings, how the leather is prepared, and how well the stitching is done. I know this intimately because I have an uncle who is certified to repair Louis Vuitton products.

In the same way, you can tell if software is designed and built well. But like art, you would have had to have seen or worked with well-designed code and systems to know what they look like.

Only when you have seen quality can you build quality.


Filed under Uncategorized

Shameless plug – Twitter account

Some time back, I created a Twitter account to tweet all mobile and mobile-testing news. You can follow it at @mobilepotpurri


Filed under Uncategorized

Automating Android Native Apps Testing using MonkeyRunner

MonkeyRunner is a tool that comes with the Android SDK. It lets you write Python scripts (run under the SDK’s bundled Jython interpreter) to automate testing of Android apps.

The Android SDK site has a simple example of a monkeyrunner script and how to run it. See “A Simple monkeyrunner program” and “Running monkeyrunner” – both can be found here. For a start, don’t run with the plug-in option.

One thing to note about monkeyrunner is that, unlike UI Automation or any of the other automation tools, you don’t search for a UI element and click (or tap) on it. Instead, you use the keypad keys to navigate. I found the up, down, left, right and center keys most useful. There also doesn’t seem to be a way to look up UI elements and extract their labels or even accessibility keys.

There are 3 main components to monkeyrunner: the MonkeyRunner, MonkeyDevice and MonkeyImage classes.

The MonkeyRunner class contains static utility methods for a run – the most important being waitForConnection(), which returns a MonkeyDevice instance for actual testing.

The MonkeyDevice class is the most important, as its methods control the device – e.g. installing a package, pressing buttons, dragging, and starting Activities.

The MonkeyImage class is used for capturing screenshots.
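
Putting the three classes together, here is a sketch of how they typically interact. The package and activity names are placeholders, and the device logic is wrapped in a function because the com.android.monkeyrunner module only exists when the script is run with the SDK’s monkeyrunner tool (a Jython interpreter):

```python
# Sketch of a monkeyrunner script -- run with the SDK's `monkeyrunner` tool,
# not plain Python. The package/activity names below are placeholders.

def component_name(package, activity):
    """Build the 'package/.Activity' string that startActivity() expects."""
    return package + '/' + activity

def run_smoke_test():
    # Imported here because this module only exists under the Jython
    # interpreter bundled with the Android SDK.
    from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice

    # MonkeyRunner: static utilities; waitForConnection() returns a MonkeyDevice
    device = MonkeyRunner.waitForConnection()

    # MonkeyDevice: drive the device -- start an Activity, press keys
    device.startActivity(component=component_name(
        'com.example.android.myapplication', '.MainActivity'))
    MonkeyRunner.sleep(2)  # give the Activity time to come up
    device.press('KEYCODE_DPAD_DOWN', MonkeyDevice.DOWN_AND_UP)

    # MonkeyImage: capture the screen and save it to a file
    snapshot = device.takeSnapshot()
    snapshot.writeToFile('screenshot.png', 'png')
```

The two-argument press() above is the form shown in the SDK docs; as noted below, in practice it may demand an extra parameter.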

Some Issues to take note of

There is a typo in the example script found in the SDK documentation:

# sets a variable with the package's internal name
package = 'com.example.android.myapplication'

# sets a variable with the name of an Activity in the package
activity = 'com.example.android.myapplication.MainActivity'

# sets the name of the component to start
runComponent = package + '/' + activity

This means runComponent becomes 'com.example.android.myapplication/com.example.android.myapplication.MainActivity' instead of the correct 'com.example.android.myapplication/.MainActivity'. The fix is to set activity = '.MainActivity'.
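
Since runComponent is built by plain string concatenation, the difference is easy to see in ordinary Python (same variable values as the SDK example):

```python
package = 'com.example.android.myapplication'

# As printed in the SDK docs: the fully-qualified Activity name
activity = 'com.example.android.myapplication.MainActivity'
run_component_docs = package + '/' + activity

# Corrected: the Activity name relative to the package, with a leading dot
run_component_fixed = package + '/' + '.MainActivity'

print(run_component_docs)   # com.example.android.myapplication/com.example.android.myapplication.MainActivity
print(run_component_fixed)  # com.example.android.myapplication/.MainActivity
```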

Also, for some reason, I could not get the script to run with the press() call as documented, as it seemed that the ‘press’ method required 3 params instead of 2. I just added a dummy string param at the end, e.g. ‘xx’.

Note that the script is written in Python – here is a decent reference for the Python language that I found useful.


Filed under Android, Native App, QA, Testing

Getting screencaptures for iOS Devices

For iOS devices, there’s a neat way to capture the screen: just hold down the “home” button and press the top (sleep/wake) button. See this blog article.

This works for the iPhone, iPod Touch and iPad.

Once the screenshot is saved on the device, you can then email it to report the issue.

Here’s a video on the steps as well:


Filed under iOS, QA, Testing, Useful tips

Getting screen captures for Android Devices

One of the most useful tools for reporting issues is the screen capture.

For Android, this usually means hooking up the device to the Android SDK. If you’ve not installed the SDK, you can check out this article.

ITBlogs has a good step-by-step article for this here.

Here’s a video explaining how to do this as well:


Filed under Android, QA, Testing

Performance testing HTML5 Web Apps

One of the main issues with HTML5 sites and other sites that make copious use of JavaScript is that, with sloppy programming, site performance can suffer. This is especially so with the HTML5 canvas and offline storage APIs.

One way to perform this kind of performance testing is to use a simple iOS WebView app to load the pages, and the Instruments toolset (which comes with Xcode) to measure memory and response/load time.

Installing Instruments (comes with Xcode)

See https://mobpot.wordpress.com/2011/07/21/simulating-iphone-safari-browser/ for steps on installing Xcode. Once done, use Spotlight (ie. the “magnifying glass” icon on the top right hand corner of your Mac) to search for “Instruments”. Once activated, your screen should look like this:

Instruments screen

Download and customize the WebView app

  1. Download the zip file from http://dblog.com.au/iphone-development/iphone-sdk-tutorial-build-your-very-own-web-browser/
  2. Unzip it
  3. Open the xcodeproj from Xcode
  4. The app was initially written for iOS 2.x so you’ll need to set the correct SDK version.
  5. To do this – Project->Edit Active Target “WebBrowserTutorial”. Base SDK – set to the latest eg. iOS 4.3
  6. On the “Simulator – 4.3 | Debug” tab on the top left hand corner of Xcode, set it to Simulator and iPhone Simulator
  7. Under “Classes” look for WebBrowserTutorialAppDelegate.m and the applicationDidFinishLaunching method
  8. Change the url to your desired URL
  9. Build & Run
iOS Simulator running the WebView app

Using Instruments to Perform Performance Testing

  1. Activate Instruments
  2. Under “iOS”, choose “Allocations” to see how memory is being allocated to the app, “Time Profiler” to see time used by various method calls, “Activity Monitor” to see system activity.
  3. Next select the target app by clicking on the “Target” drop down list on the top left hand of the Instruments screen. Then choose the target app. The target app is a .app file that resides in the “build” directory of the folder you unzipped it in. Check the build settings in Xcode to locate the app file. You may also need to change the settings to generate the app file.
  4. Click the record button ie. the one with the big red dot.
  5. You can then see the results displayed on the Instruments dashboard.
The Apple documentation for Instruments can be found here.


Filed under Mobile Web, Performance Testing, QA, Testing, Useful tips

In-Country Testing Native Apps and Mobile Web

Native Apps

One of the wrong assumptions client app developers make is about the stability of the network – that it is the same everywhere as where they tested. Unfortunately, that is not true. Not only are networks slow and unstable in certain countries; a user may also be using the app while travelling, with the phone switching cells. This means the app needs to be able to handle such cases and degrade gracefully in order to give the user a good experience.

Another wrong assumption is that operators are just a pipe – that whatever TCP/UDP/HTTP requests the app makes to your servers go through unhindered. Unfortunately, operator gateways do things like mangle or block cookies, URL-encode/decode incorrectly, and add extra headers to your requests.

The recommended test sequence would be to:

  1. Test using WiFi – to ensure the app works fine given the best possible network conditions
  2. Test at a location where you know the cell reception is good
  3. Choose several locations within the city, especially where people gather, to see how the app behaves where cellular traffic is high
  4. Test while travelling in a car or bus so that the handset switches cells.

Mobile Web

As with client app testing, the issues faced for mobile web are pretty similar, i.e. network stability issues and operator gateway issues. However, for mobile web there is the possibility of the operator either (a) adding extra markup, e.g. their own headers or ads, or (b) performing HTMLTidy-like operations on the markup.
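
One simple way to detect both (a) and (b) in the field is to publish a small test page whose exact bytes you control, fetch it over the operator’s network, and compare checksums against the origin copy. A minimal sketch in Python – the function names and the overall approach are my own suggestion, not a standard tool:

```python
import hashlib

def fingerprint(body):
    """SHA-1 of the raw response bytes as served by the origin server."""
    return hashlib.sha1(body).hexdigest()

def gateway_tampered(expected_hex, received_body):
    """True if the bytes seen on the device differ from the origin's copy --
    e.g. the gateway injected ads/headers or re-tidied the markup."""
    return fingerprint(received_body) != expected_hex
```

Record fingerprint(body) once over a trusted connection (e.g. WiFi straight to your own server); then, at each test location, fetch the same URL over the carrier network and pass the received bytes to gateway_tampered().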


It may seem like overkill to test in various locations around the city and in a travelling vehicle, and for some apps it is. It really depends on the app’s most common use case and whether it requires constant network connectivity – e.g. if the app is a game that makes minimal network calls, then there’s really no need to test in various locations. On the other end of the spectrum would be an app that frequently tracks the user’s location to display location-aware coupons – in this case you can expect the user to use it while travelling in a vehicle and in locations where people congregate, like a mall.


Filed under In country testing, QA, Testing, Useful tips