Some time back, I created a Twitter account to tweet mobile and mobile-testing news. You can access this Twitter channel via @mobilepotpurri
monkeyrunner is a tool that comes with the Android SDK. It basically lets you run automation for Android apps using Python scripts (executed via Jython).
The Android SDK site has a simple example of a monkeyrunner script and how to run it. See “A Simple monkeyrunner program” and “Running monkeyrunner” – both can be found here. For a start, don’t run with the plug-in option.
One thing to note about monkeyrunner: unlike UI Automation or any of the other automation tools, you don't search for a UI element and click (or tap) on it. Instead, you use the keypad keys to navigate – I found the up, down, left, right and center keys most useful. Also, there doesn't seem to be a way to look up UI elements and extract their labels or accessibility identifiers.
There are 3 main components for monkeyrunner: MonkeyRunner, MonkeyDevice and MonkeyImage classes.
The MonkeyRunner class contains static utility methods for a run – the most important being the waitForConnection() method, which returns a MonkeyDevice instance for the actual testing.
The MonkeyDevice class is the most important, as its methods control the device, eg. installing a package, performing button presses, dragging, and starting Activities.
The MonkeyImage is used for capturing screenshots.
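Putting the three classes together, a minimal script would look something like the sketch below. The apk, package and activity names are placeholders, and it must be run with the SDK's monkeyrunner tool rather than a normal Python interpreter:

```python
# Minimal monkeyrunner sketch -- run with: monkeyrunner demo.py
# (the com.android.monkeyrunner module only exists inside the SDK's
# monkeyrunner tool, not in a plain Python install)
from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice

# waitForConnection() blocks until a device/emulator is attached,
# then returns a MonkeyDevice instance
device = MonkeyRunner.waitForConnection()

# install and start the app (placeholder names)
device.installPackage('myapp.apk')
device.startActivity(component='com.example.android.myapplication/.MainActivity')

# navigate with keypad keys instead of tapping UI elements
device.press('KEYCODE_DPAD_DOWN', MonkeyDevice.DOWN_AND_UP)
device.press('KEYCODE_DPAD_CENTER', MonkeyDevice.DOWN_AND_UP)

# capture a screenshot via MonkeyImage
shot = device.takeSnapshot()
shot.writeToFile('screenshot.png', 'png')
```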
Some Issues to take note of
There is also a typo in the script found in the SDK documentation:
# sets a variable with the package's internal name
package = 'com.example.android.myapplication'
# sets a variable with the name of an Activity in the package
activity = 'com.example.android.myapplication.MainActivity'
# sets the name of the component to start
runComponent = package + '/' + activity
will produce com.example.android.myapplication/com.example.android.myapplication.MainActivity. The activity variable should instead be set to '.MainActivity', so that runComponent becomes com.example.android.myapplication/.MainActivity, which is correct.
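To see the difference concretely, here is the string arithmetic in plain Python (runnable anywhere; the names are the SDK documentation's placeholders):

```python
package = 'com.example.android.myapplication'

# as in the SDK docs -- the fully qualified Activity name
activity_full = 'com.example.android.myapplication.MainActivity'
print(package + '/' + activity_full)
# -> com.example.android.myapplication/com.example.android.myapplication.MainActivity

# the shorthand form that yields the correct component string
activity_short = '.MainActivity'
print(package + '/' + activity_short)
# -> com.example.android.myapplication/.MainActivity
```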
Also, for some reason, I could not get the script to run with the press() call as documented, as it seemed that the 'press' method required 3 params instead of 2. I just added a dummy string param at the end, eg. 'xx'.
Note that the script is written in Python – here is a decent reference for the Python language I found useful.
For iOS devices, there’s a neat way to capture the screen by just holding down the “home” button and pressing the top button. See this blog article.
This works for iPhone, iPod Touch, iPad.
Once the screenshot is saved on the device, you can then email it to report the issue.
Here’s a video on the steps as well:
One of the most useful tools for reporting issues is the screen capture.
For Android, this usually means hooking up the device to the Android SDK. If you’ve not installed the SDK, you can check out this article.
ITBlogs has a good step-by-step article for this here.
Here’s a video explaining how to do this as well:
One way to perform this kind of performance testing is to use a simple iOS WebView app to load the pages, and the Instruments toolset (that comes with Xcode) to measure memory and response/load time.
Installing Instruments (comes with Xcode)
See https://mobpot.wordpress.com/2011/07/21/simulating-iphone-safari-browser/ for steps on installing Xcode. Once done, use Spotlight (ie. the “magnifying glass” icon on the top right hand corner of your Mac) to search for “Instruments”. Once activated, your screen should look like this:
Download and customize the WebView app
- Download the zip file from http://dblog.com.au/iphone-development/iphone-sdk-tutorial-build-your-very-own-web-browser/
- Unzip it
- Open the xcodeproj from Xcode
- The app was initially written for iOS 2.x so you’ll need to set the correct SDK version.
- To do this – Project->Edit Active Target “WebBrowserTutorial”. Base SDK – set to the latest eg. iOS 4.3
- On the “Simulator – 4.3 | Debug” tab on the top left hand corner of Xcode, set it to Simulator and iPhone Simulator
- Under “Classes” look for WebBrowserTutorialAppDelegate.m and the applicationDidFinishLaunching method
- Change the url to your desired URL
- Build & Run
Using Instruments to Perform Performance Testing
- Activate Instruments
- Under “iOS”, choose “Allocations” to see how memory is being allocated to the app, “Time Profiler” to see time used by various method calls, “Activity Monitor” to see system activity.
- Next select the target app by clicking on the "Target" drop down list on the top left hand of the Instruments screen, then choose the target app. The target app is a .app file that resides in the "build" directory of the folder you unzipped into. Check the build settings in Xcode to locate the app file; you may also need to change the settings to generate the app file.
- Click the record button ie. the one with the big red dot.
- You can then see the results displayed on the Instruments dashboard.
One of the wrong assumptions client app developers make is about the stability of the network – that it is the same everywhere as it was where they tested. Unfortunately, that is not true. Not only are networks slow and unstable in certain countries, a user may be using the app while travelling, with the phone switching cells. This means the app needs to handle such cases and degrade gracefully in order to give the user a good experience.
Another wrong assumption is that operators are just a pipe – that whatever TCP/UDP/HTTP requests the app makes to our servers go through unhindered. Unfortunately, operator gateways do things like mangle or block cookies, URL encode/decode incorrectly, and add extra headers to your requests.
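One way to detect such gateway interference is to compare the headers your app sent against what your server actually received. The sketch below uses toy data; in practice the "received" set would come from a test endpoint on your server that echoes request headers back:

```python
# Sketch: diff the headers the client sent against what the server received,
# to spot operator-gateway interference (added/dropped/altered headers).
# The header values below are illustrative toy data.

def diff_headers(sent, received):
    """Return (added, dropped, changed) between two header dicts."""
    added = {k: v for k, v in received.items() if k not in sent}
    dropped = {k: v for k, v in sent.items() if k not in received}
    changed = {k: (sent[k], received[k])
               for k in sent if k in received and sent[k] != received[k]}
    return added, dropped, changed

sent = {'User-Agent': 'MyApp/1.0', 'Cookie': 'session=abc123'}
received = {'User-Agent': 'MyApp/1.0',
            'Cookie': 'session=abc%31%32%33',   # gateway re-encoded it
            'X-Forwarded-For': '10.0.0.1'}      # gateway added this

added, dropped, changed = diff_headers(sent, received)
print(added)    # headers the gateway injected
print(changed)  # headers the gateway altered
```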
The recommended test sequence would be to:
- Test using WiFi – to ensure app is working fine given the best possible network condition
- Test at a location where you know the cell reception is good
- Choose several locations within the city especially where people gather to see how app behaves at locations where cellular traffic is high
- Test while travelling in a car or bus so that the handset switches cells.
As with client app testing, the issues faced for mobile web are pretty similar, ie. network stability issues and operator gateway issues. However, for mobile web there is the possibility of the operator either (a) adding extra markup, eg. their own headers or ads, or (b) doing HTMLTidy-like operations on the markup.
It may seem like overkill to test in various locations around the city and in a travelling vehicle, and for some apps it is. It really depends on the app's most common use case and whether it requires constant network connectivity – eg. if the app is a game that makes minimal network calls, then there's really no need to test in various locations. At the other end of the spectrum would be an app that frequently tracks the user's location to display location-aware coupons – in this case you can expect the user to use it while travelling in a vehicle and at locations where people congregate, like a mall.
The actual test is the same as an end-to-end test for SMS apps found in the “chapter” on Testing Messaging Apps (see the TOC). Essentially, it requires the tester to use a test handset and sim to send and receive SMSs according to the functional test cases for the app.
When performing in-market testing for SMS, one thing to note is the traffic pattern of that particular country or operator: at different times of the day, the network latency differs. For most operators, the peak traffic periods are (a) in the morning before the start of work, (b) around lunch and (c) in the late evenings.
For purposes of getting a realistic profile of how the app behaves with real traffic, a good idea would be to conduct a test every hour (or at key times of the day) over key days of the week. This is especially so if the app has session timeout assumptions.
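As a sketch, the peak periods above can be turned into a simple test schedule. The specific hours and days chosen here are illustrative assumptions and should be adjusted per market:

```python
# Sketch: build a week-long test schedule around the peak traffic periods
# mentioned above (hours and key days below are illustrative assumptions).
from datetime import datetime, timedelta

PEAK_HOURS = [8, 12, 21]   # before work, around lunch, late evening
KEY_DAYS = [0, 4, 5]       # Monday, Friday, Saturday (weekday() numbers)

def build_schedule(start, days=7):
    """Return the datetimes on key days/peak hours within `days` of `start`."""
    slots = []
    for d in range(days):
        day = start + timedelta(days=d)
        if day.weekday() in KEY_DAYS:
            for h in PEAK_HOURS:
                slots.append(day.replace(hour=h, minute=0,
                                         second=0, microsecond=0))
    return slots

schedule = build_schedule(datetime(2011, 7, 25))  # a Monday
print(len(schedule))  # 3 key days x 3 peak hours = 9 slots
```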
There are two main ways to get in-market testing done:
- Crowd-sourcing
- Test Vendor(s)
The first – crowd-sourcing – basically means getting users who are in-market to help with testing the app. Some options for this include using Social Networks such as Twitter or Facebook to request for volunteers to test. Another common option is the Beta test. Both of these require a certain level of rapport with users. Usually the messaging (ie. how the request is put across to users) is key and it would make sense to test out the request message with friends before using it.
Another means of getting volunteers is to give an incentive – like a prize or some form of store credit. Unfortunately checking the validity of the test result may be an issue. Best to get a few results to triangulate.
The second method – getting vendors – requires sufficient budget and a good vendor or group of vendors. This usually means a process of vendor selection. In terms of ease of project management – a single vendor is best. uTest (http://www.utest.com) is a test vendor that has testers all over the world.
The issue with a single vendor is that they may not be the best in the game for each country. One good group of candidates for such projects are local VAS (or Mobile Content) providers. They are ideal for the following reasons: (a) they usually know the local operator’s nuances very well (b) they are already set up to test their own mobile apps eg. test handsets, sims, test process.
One very key success factor in managing such outsourced testing is the test plan. It should be (a) easy for the tester to understand – test procedure, expected result; (b) illustrated with screenshots where possible; (c) easy for the tester to record results in and accurately describe issues – multiple choice questions with an open-ended option are best.
It is usually not very efficient to test in every country. It may make more sense to do exception-based testing, ie. only test those countries (or operators) that are exhibiting issues. One way to identify which countries or operators to zoom into is to use application logs with entries classified by carrier/country. Usually the countries/operators to zoom into are the ones that ought to have high traffic but do not. To find out where an IP belongs to, you could use one of the many "whois" tools online, eg. http://tools.whois.net.
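As a sketch, classifying log entries by country to flag under-performing markets might look like the following. The log format, expected-traffic figures and country codes are all toy assumptions; real logs would be classified by resolving each request IP to a country/carrier (eg. via a whois lookup):

```python
# Sketch: tally app-log requests per country and flag countries whose actual
# traffic falls well below expectation. EXPECTED and the log are toy data.
from collections import Counter

EXPECTED = {'ID': 5000, 'IN': 8000, 'SG': 500}   # assumed daily requests

def flag_underperformers(country_log, expected, threshold=0.2):
    """Return countries whose observed traffic is < threshold of expected."""
    observed = Counter(country_log)
    return sorted(c for c, exp in expected.items()
                  if observed[c] < exp * threshold)

# 'ID' is way below expectation here, so it gets flagged for in-market testing
log = ['SG'] * 400 + ['IN'] * 7000 + ['ID'] * 300
print(flag_underperformers(log, EXPECTED))  # -> ['ID']
```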
Another way is to include an email address in the app for the user to report issues. Even better would be to also include an easy-to-use troubleshooting tool. If it is easy enough to use, you will be surprised how many users are willing to help.
In the following posts we will cover in-country testing in more detail for SMS, Mobile Web and Native Apps.
In a nutshell, In-country testing means testing your app or service in the intended launch countries.
Why In-Country Testing?
- SMS – you need to hook up with the operator (either direct or via aggregator) for your app to work
- Mobile Web – your markup may get mangled or cookies may not work
- Native Apps – app keeps crashing due to network instability
This is why in-country (or in-market) testing is necessary.
For some of these you may be able to test by using sims (GSM) with data roaming. However there are limitations:
- SMS – shortcodes do not roam in general
- The network of the city or country may not be stable causing timeouts on your app
Adi Saxena has an article on codeproject.com with a very good step by step guide to using this tool (see http://www.codeproject.com/KB/iPhone/UI_Automation_Testing.aspx).
This site has a very good tutorial on the tool – http://blog.manbolo.com/2012/04/08/ios-automated-tests-with-uiautomation.
Altf also has an article on this here http://altf.wordpress.com/2010/11/14/automating-ios-user-interface-testing-with-uiautomation/ .
I will be using Adi’s article as reference to explain the steps.
Step 1: Open Instruments and select Automation
Use Spotlight to search for it. What's Spotlight? On the top righthand corner of your Mac you'll see a magnifying glass icon. Click on it and search for "Instruments". Click on that. You will see the Instruments app show up.
Next click on “Automation” icon.
Step 2: Select Target
Next step is to select the target app. On the top left corner you will see a drop down labelled “Choose Target”. Select “Choose Target”.
Select the target app by navigating to it. You should be able to locate the app under the “build/Debug-iphonesimulator” folder.
If you wish to try out the tool, the codeproject site has a sample app with test JS.
Step 3: Run Script
On the lower left hand corner there is a “Choose script .. ” option. Use that to load the test JS script. We will deconstruct the JS code later.
Next click on the “Record” button – the one with the red dot. You will see the iPhone simulator pop up loading the app and the steps of the JS script will be run.
Deconstructing the JS test code
I’ll be using Adi’s JS code to explain how the tests work – so we’ll be referencing the code from the codeproject article.
// Get the handle of applications main window
var window = UIATarget.localTarget().frontMostApp().mainWindow();
As described in the comment – it’s to obtain the handle for the main window of the app.
var textfields = window.textFields();
This gets the array of text fields from the main window of the app.
// Check number of Text field(s) -- 2 here, assuming the sample logon
// screen's username and password fields
if (textfields.length != 2) {
    UIALogger.logFail("FAIL: Invalid number of Text field(s)");
} else {
    UIALogger.logPass("PASS: Correct number of Text field(s)");
}
This test checks if the number of text fields is correct. The UIALogger object is used by UI Automation (hence the name) to log pass or fail cases.
//TESTCASE_001 : Test Log on Screen
//Check existence of desired TextField On UIScreen
if (textfields["username"] == null ||
    textfields["username"].toString() == "[object UIAElementNil]") {
    UIALogger.logFail("FAIL: Desired textfield not found.");
} else {
    UIALogger.logPass("PASS: Desired UITextField is available");
}
This checks to see if the textfield is the correct one showing up.
So what does "username" in textfields["username"] refer to? It is the accessibility label of the textfield UI object. If you open Interface Builder (or IB) by clicking on the .xib file of the app's Xcode project, you'll be able to find it.
These lines of code basically fill in the input fields with accessibility labels "username" and "password", then select the button with accessibility label "logon" and tap on it – roughly (the values here are placeholders):
textfields["username"].setValue("your username");
textfields["password"].setValue("your password");
window.buttons()["logon"].tap();