Using a proxy framework to automate API robustness testing of apps

One of the realities of the CI/CD world we now live in is the “bad push” – a change that was not adequately tested before DevOps pushed it to production servers via Chef or Docker. It happens because it is just too easy to make a change in the dev environment and promote it to production. In an ideal world there would be automated tests baked into the CI pipeline to catch these issues, but when your app relies on 3rd-party backends, you are at the mercy of the professionalism of those teams. One way to address this is to build your own middleware to ensure the API responses are all well formed. Another is to bake defensive programming into your app’s model layer (as in MVC) so that even if bad responses are received, your app does not fall over.
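To make the defensive-programming idea concrete, here is a minimal sketch of a model-layer parser that tolerates a backend returning wrong types or missing keys (the field names and fallback values are made up purely for illustration):

# hypothetical model-layer parsing that tolerates malformed backend responses
def parse_user(payload):
   user = {}
   # "id" should be an int, but tolerate it arriving as a string like "42"
   try:
      user["id"] = int(payload.get("id", -1))
   except (TypeError, ValueError):
      user["id"] = -1
   # "name" should be a string; fall back to a safe default otherwise
   name = payload.get("name")
   user["name"] = name if isinstance(name, basestring) else "unknown"
   return user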

To test this, there are several ways: (a) build a full-scale mock server that can record and replay backend responses, or (b) use a proxy to intercept the responses and modify them in several ways:
  1. Add/remove headers
  2. Modify the content body, e.g. change values (for example, if a value is an int, change it to a string)
  3. Truncate the content body
There are a couple of advantages to the proxy-server approach: (a) you don’t have to build a mock server from scratch and maintain it, and (b) you are working with traffic from real production backends.

I chanced upon the tool “mitmproxy” while researching tools to do just this. There are some things I liked about it:
  1. Easy setup – binaries are available for Mac OS X and it is a pip install on Linux (Ubuntu)
  2. Inline scripts to intercept endpoint requests and manipulate responses are written in Python, so there is no need for complicated setups using Maven or Ant
  3. Two modes of operation – interactive and CLI

It ships with the standard MITM certs to decrypt SSL traffic, just like Charles Proxy (which is a great tool, by the way). See http://mitmproxy.org for details.

Once installed, and with both mitmproxy and mitmdump in your $PATH, you can start digging into the tool. The best way is to use the interactive tool “mitmproxy” first to get a feel for it. There are of course flags to change the port etc. (the default is 8080). This site (see section 2.6) gives a good intro to navigating the tool – http://blog.philippheckel.com/2013/07/01/how-to-use-mitmproxy-to-read-and-modify-https-traffic-of-your-phone/

However, the real power of this tool is that you can run what they call “inline scripts” – essentially Python handlers for the request, response and other events.

Here’s some example code to demonstrate this (you can find it here: https://github.com/foohm71/mitmproxy-stuff – it’s the dumpInfo.py script):
# Inline script for mitmdump/mitmproxy: the request/response handlers below
# stash details on the flow object, and dumpInfo prints them out.
def dumpInfo(flow):
   dict = flow.__dict__
   print "[Flow Info]"
   print "Host:" + dict["Host"]
   print "method:" + dict["method"]
   print "protocol:" + dict["protocol"]
   print "[Request Info]"
   print "request start time:" + dict["requestStartTime"]
   print "request end time:" + dict["requestEndTime"]
   print "request body:" + dict["requestBody"]
   headers = dict["requestHeaders"]
   for k in headers.keys():
      print "request header: " + k + " = " + headers[k][0]
   print "[Response Info]"
   print "response code:", dict["responseCode"]
   print "response start time:" + dict["responseStartTime"]
   print "response end time:" + dict["responseEndTime"]
   print "response body:" + dict["responseBody"]
   headers = dict["responseHeaders"]
   for k in headers.keys():
      print "response header: " + k + " = " + headers[k][0]

def request(context, flow):
   dict = flow.__dict__
   request = flow.request
   dict["Host"] = str(request.host)
   dict["method"] = str(request.method)
   dict["protocolVersion"] = str(request.httpversion)
   dict["protocol"] = str(request.scheme)
   dict["requestStartTime"] = str(request.timestamp_start)
   dict["requestEndTime"] = str(request.timestamp_end)
   dict["requestHeaders"] = request.headers
   dict["requestBody"] = request.get_decoded_content()

def response(context, flow):
   dict = flow.__dict__
   response = flow.response
   dict["responseCode"] = response.code
   dict["responseStartTime"] = str(response.timestamp_start)
   dict["responseEndTime"] = str(response.timestamp_end)
   dict["responseHeaders"] = response.headers
   dict["responseBody"] = response.get_decoded_content()

   dumpInfo(flow)

All this does is extract information about the request/response and put it into a dict object that is passed around with the flow. Once done, it just prints out the information. To run it, use the CLI version of the tool like so: mitmdump -s <script>

Since there is a framework here, we could extend this code so that, based on different endpoints (or hosts), protocols or bodies, we perform different types of response manipulation.
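One simple manipulation, for instance, is to add or strip headers inside the response() handler. Here is a minimal sketch, assuming the list-valued (ODict-style) header API of the mitmproxy version used above; the header names are purely illustrative:

def response(context, flow):
   # add a marker header so you can verify the proxy touched the response
   flow.response.headers["X-Injected-By-Proxy"] = ["1"]
   # remove a header the app may depend on, to see how it copes
   if "ETag" in flow.response.headers.keys():
      del flow.response.headers["ETag"]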

Another example of response manipulation is to simply truncate a JSON response like this:
def truncateJSONString(jsonstr, length):
   return jsonstr[:int(length)]
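For example, truncating a small payload to 10 characters leaves the app with an unparsable fragment:

>>> truncateJSONString('{"name": "foo", "id": 42}', 10)
'{"name": "'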

Yet another is to recursively parse the JSON response for a key and replace its value:
def findReplaceValue(jsonobj, key, value):
   # walk nested dicts (and lists) and replace every occurrence of key
   if type(jsonobj) == type({}):
      for k in jsonobj:
         if k == key:
            jsonobj[k] = value
         else:
            findReplaceValue(jsonobj[k], key, value)
   elif type(jsonobj) == type([]):
      for item in jsonobj:
         findReplaceValue(item, key, value)
Sometimes the request is a form POST; in that case, you may need to extract a form field (or a combination of fields) in the request handler and perform the response manipulation based on that:
form = request.get_form_urlencoded()
username = form["username"]
dict["Username"] = username
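Putting these pieces together, the response() handler can dispatch on the host (or on a field stashed by the request handler, such as the “Username” above) and write the manipulated body back onto the flow. This is only a sketch under a few assumptions: the host name and field values are made up, and it assumes the mitmproxy version used above lets you decode() the body and assign flow.response.content directly:

import json

def response(context, flow):
   dict = flow.__dict__
   # only tamper with responses from one (made-up) backend host
   if flow.request.host == "api.example.com":
      # undo gzip etc. so the body can be edited (depending on the mitmproxy
      # version you may also need to fix up Content-Length afterwards)
      flow.response.decode()
      body = flow.response.content
      if dict.get("Username") == "testuser":
         # hand this test user a truncated, unparsable payload
         flow.response.content = truncateJSONString(body, 100)
      else:
         # corrupt a field's type for everyone else
         obj = json.loads(body)
         findReplaceValue(obj, "accountId", "not-a-number")
         flow.response.content = json.dumps(obj)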


What is Quality?

As someone who has worked in the area of quality/test engineering for some time, this question has popped up time and again. I have also been involved in several discussions among peers about this question but I have never quite gotten a satisfactory answer, at least not for myself.

Some time back, while reading the book “How Google Tests Software”, I came across the line

Quality != Testing

This really made me balk and ask myself “so what then is Quality, in particular Software Quality?”.

This may come as a surprise to the uninitiated, but those of us in the so-called Quality and Test specialization in software engineering spend most of our time and energy honing the art of testing. Functional testing, performance testing, unit testing and so on; how do we test this feature, how do we test that component, how do we “break” the software? Those are the questions and challenges that plague our profession.

So what then is Quality?

I pondered this for quite a while and then one day it hit me: Quality is and has always been synonymous with the luxury industry.

Take for example how a Hermes Birkin bag compares to a Coach one (now if you’re male, single and straight, you had better learn to distinguish the two fast). One costs tens to hundreds of thousands of dollars, the other a few hundred.

Why is that? A bag is a bag is a bag right? Nope. Each Birkin bag is hand-sewn, buffed, painted, and polished by expert artisans. Note: expert artisans. Artisans who are passionate about their craft.

At the end of the day quality comes down to craftsmanship. We pay for the craftsmanship, not the object.

Software quality, it follows, comes down to software craftsmanship –

1. how systems are designed
2. how systems are built
3. how systems are tested
4. how systems are deployed

Craftsmanship is about pride in the product one is building and it is about knowing and practicing the various aspects of this process.

In “Clean Code: A Handbook of Agile Software Craftsmanship”, Robert Martin describes software craftsmanship as both the knowledge of the principles, patterns, practices and heuristics that a craftsman possesses, and the hard work of applying that knowledge in the daily grind of churning out production-ready code. It is hard work and it takes discipline.

Quality, hence, is everyone’s responsibility: from designing the UI so that users find it aesthetically pleasing, to architecting for performance, modularity and testability, to deploying in a seamless fashion that minimizes the impact on users, to writing code that is efficient yet maintainable, and to testing – putting the system through its paces and figuring out what could possibly cause the software to break, or what functionality was missed.

So if quality is everyone’s responsibility, what then is the role of the quality or test engineer?

I would say that it is to encourage software craftsmanship.

This goes beyond testing, although that is the bread and butter of our profession.

Why bother? Wouldn’t it be much easier to just test and report bugs? Yes, but then the true value of finding those bugs is lost. The true value is in asking why those bugs came about in the first place and putting measures in place so that they don’t surface again. In short – encouraging software craftsmanship.

However, in order to know what good code looks like, you need to have built code, seen bad code and seen really good code – usually all of the above.

Coming back to the Birkin bag analogy: an expert is able to tell a real Birkin bag from a fake. It comes down to the fittings, how the leather is prepared and how well the stitching is done. I know this intimately because I have an uncle who is certified to repair Louis Vuitton products.

In the same way, you can tell if software is designed and built well. But like art, you have to have seen and worked with well-designed code and systems to know what they look like.

Only when you have seen quality can you build quality.


Shameless plug – Twitter account

Some time back, I created a Twitter account to tweet mobile and mobile-testing news. You can access it via @mobilepotpurri


Automating Android Native Apps Testing using MonkeyRunner

MonkeyRunner is a tool that comes with the Android SDK. It is basically a tool that lets you drive automation for Android apps using Python scripts.

The Android SDK site has a simple example of a monkeyrunner script and how to run it. See “A Simple monkeyrunner program” and “Running monkeyrunner” – both can be found here. For a start, don’t run with the plug-in option.

One thing to note about monkeyrunner is that, unlike UI Automation or the other automation tools, you don’t search for a UI element and click (or tap) on it. Instead, you use the keypad keys to navigate. I found the up, down, left, right and center keys most useful. Also, there doesn’t seem to be a way to look for UI elements and extract their labels or even accessibility keys.

There are 3 main components in monkeyrunner: the MonkeyRunner, MonkeyDevice and MonkeyImage classes.

The MonkeyRunner class contains static utility methods for a run – the most important being waitForConnection(), which returns a MonkeyDevice instance for the actual testing.

The MonkeyDevice class is the most important, as its methods control the device, e.g. installing a package, pressing buttons, dragging and starting Activities.

The MonkeyImage is used for capturing screenshots.
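To make that concrete, here is a minimal sketch of a monkeyrunner script along those lines (the package/activity names and the output path are placeholders; also see the note further down about press() sometimes needing a third argument):

# minimal monkeyrunner sketch - run with: monkeyrunner myscript.py
from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice

# wait for a device or emulator to be attached
device = MonkeyRunner.waitForConnection()

# start the Activity under test (placeholder component name)
device.startActivity(component='com.example.android.myapplication/.MainActivity')
MonkeyRunner.sleep(2)

# navigate using keypad keys and "click" with the center key
device.press('KEYCODE_DPAD_DOWN', MonkeyDevice.DOWN_AND_UP)
device.press('KEYCODE_DPAD_CENTER', MonkeyDevice.DOWN_AND_UP)

# capture a screenshot for the test report
result = device.takeSnapshot()
result.writeToFile('/tmp/screenshot.png', 'png')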

Some Issues to take note of

There is a typo in the script found in the SDK documentation:

# sets a variable with the package's internal name
package = 'com.example.android.myapplication'


# sets a variable with the name of an Activity in the package
activity = 'com.example.android.myapplication.MainActivity'


# sets the name of the component to start
runComponent = package + '/' + activity

This means that runComponent = 'com.example.android.myapplication/com.example.android.myapplication.MainActivity'
instead of 'com.example.android.myapplication/.MainActivity', which is the correct form.
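In other words, the fix is to use the short (leading-dot) activity form when building the component string:

# use the short activity form so the component string comes out as
# 'com.example.android.myapplication/.MainActivity'
package = 'com.example.android.myapplication'
activity = '.MainActivity'
runComponent = package + '/' + activity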

Also for some reason, I could not get the script to run with
device.press('KEYCODE_MENU','DOWN_AND_UP')

as it seemed the ‘press’ method required 3 params instead of 2. I just added a dummy string param at the end, e.g. ‘xx’.
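So if you hit the same issue, the workaround looks something like this (the third argument is just a throwaway string):

# workaround: pass a dummy third argument to press()
device.press('KEYCODE_MENU', 'DOWN_AND_UP', 'xx')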

Note that the script is written in Python – here is a decent reference for the Python language that I found useful.


Getting screencaptures for iOS Devices

For iOS devices, there’s a neat way to capture the screen: just hold down the “home” button and press the top button. See this blog article.

This works for iPhone, iPod Touch, iPad.

Once the screenshot is saved on the device, you can then email it to report the issue.

Here’s a video on the steps as well:


Getting screen captures for Android Devices

One of the most useful tools for reporting issues is the screen capture.

For Android, this usually means hooking up the device to the Android SDK. If you’ve not installed the SDK, you can check out this article.

ITBlogs has a good step-by-step article for this here.

Here’s a video explaining how to do this as well:


Performance testing HTML5 Web Apps

One of the main issues with HTML5 and other sites that make copious use of JavaScript is that, with sloppy programming, site performance can suffer. This is especially so with the HTML5 canvas and offline storage APIs.

One way to perform this kind of performance testing is to use a simple iOS WebView app to load the pages and the Instruments toolset (which comes with Xcode) to measure memory usage and response/load times.

Installing Instruments (comes with Xcode)

See https://mobpot.wordpress.com/2011/07/21/simulating-iphone-safari-browser/ for steps on installing Xcode. Once done, use Spotlight (ie. the “magnifying glass” icon on the top right hand corner of your Mac) to search for “Instruments”. Once activated, your screen should look like this:

Instruments screen

Download and customize the WebView app

  1. Download the zip file from http://dblog.com.au/iphone-development/iphone-sdk-tutorial-build-your-very-own-web-browser/
  2. Unzip it
  3. Open the xcodeproj from Xcode
  4. The app was initially written for iOS 2.x so you’ll need to set the correct SDK version.
  5. To do this – Project->Edit Active Target “WebBrowserTutorial”. Base SDK – set to the latest eg. iOS 4.3
  6. On the “Simulator – 4.3 | Debug” tab on the top left hand corner of Xcode, set it to Simulator and iPhone Simulator
  7. Under “Classes” look for WebBrowserTutorialAppDelegate.m and the applicationDidFinishLaunching method
  8. Change the url to your desired URL
  9. Build & Run
iOS Simulator running the WebView app

Using Instruments to Perform Performance Testing

  1. Activate Instruments
  2. Under “iOS”, choose “Allocations” to see how memory is being allocated to the app, “Time Profiler” to see time used by various method calls, “Activity Monitor” to see system activity.
  3. Next, select the target app by clicking on the “Target” drop-down list at the top left of the Instruments screen. The target app is the .app file that resides in the “build” directory of the folder you unzipped; check the build settings in Xcode to locate it. You may also need to change the settings to generate the .app file.
  4. Click the record button ie. the one with the big red dot.
  5. You can then see the results displayed on the Instruments dashboard.
The Apple documentation for Instruments can be found here.
