Implementing a simple mobile telemetry system on a LAMP stack

So every year around this time, i.e. the holidays, I’ll embark on a coding (more like “hacking”) spree, and one of the ongoing projects has been to build a simple telemetry (or what I call “App Beaconing”) system. I started out beaconing to a Splunk server and got a little frustrated with the whole process of finding a cheap Splunk instance to host it on (lol).

So last year (yeah, talk about *late* documentation! The code *is* on GitHub tho), I decided to keep it simple: just implement a simple REST API on a LAMP stack and host it on DigitalOcean. Oh gosh, I so love DigitalOcean! The best thing about DO is that it is just great for hobbyists because, unlike AWS (where you practically sign your life away once you give them your credit card; it makes Amazon *really* rich tho), you can control your spend. Just prepay a fixed amount, e.g. $100, and once you hit that limit you know you have overspent (and it’s probably time to do some spring cleaning). You don’t clock up bills of thousands of dollars (which happens a fair bit on AWS, so I’ve heard).

OK, enough with the bullcrap. Here is the GitHub repo:

The README is pretty descriptive. This is fairly old code that isn’t being updated, so you’ll have to do your own updating (sorry!). There are 3 main folders:

  • LAMP – this is where the server code resides. There are PHP scripts and MySQL scripts in there for setup.
  • iOS – the code for the iOS app that beacons the device location etc.
  • Android – ditto, for the Android app.

So what’s with the “hash” thingy? Well, the apps and the server use a shared secret to generate an HTTP Authorization header so that you know the POST is legit. (I’m too cheap to go with HTTPS.)
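For the curious, this is just a standard HMAC construction over the request body. The server code in the repo is PHP, but here is a minimal Python sketch of the idea (the secret value and function names below are made up for illustration, not taken from the repo):

```python
import hashlib
import hmac

# Hypothetical shared secret known to both the app and the server.
SECRET = b"my-shared-secret"

def make_auth_header(body: bytes) -> str:
    """Client side: compute an HMAC-SHA256 digest over the POST body."""
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def verify(body: bytes, received_digest: str) -> bool:
    """Server side: recompute the digest and compare in constant time."""
    return hmac.compare_digest(make_auth_header(body), received_digest)
```

The client sends the digest in the Authorization header; the server recomputes it over the body it received and rejects the POST on a mismatch. Without HTTPS this doesn’t hide the data, but it does stop casual forgery.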

Have fun!





Leave a comment

Filed under Uncategorized

Home Guard: A simple home security system using an RPi, 433MHz sensors/receiver, and a USB webcam

For the 4th of July long weekend, I finally decided to dig up my stash of IoT “toys” and actually build something. I was also motivated because I was going away for the summer and wanted a low-power, simple, sensor-based system to “keep an eye” on the apartment.

What I dug up

This is what I dug up:
  1. A Raspberry Pi 2 that came with a USB WiFi dongle and SD card (you can get one of these kits from Amazon)
  2. An unused, now defunct Ninjablocks kit that came with a door sensor and a PIR 433MHz sensor (you can get these on Amazon)
  3. A little starter kit with a small breadboard and some wires for connecting to the GPIO ports on the RPi
  4. A simple USB webcam
433MHz sensors from the (now defunct) Ninjablocks kit

I figured I could easily whip up a simple sensor setup that would email me a photo of the apartment if it detected a presence or a door opening.

Initial Set Up

First was to set up the RPi using NOOBS. For those who don’t know, there are plenty of videos online on how to use NOOBS to install Raspbian (the OS) on your RPi.
Also, you’ll need to set up WiFi for the Raspberry Pi.

433MHz RF receiver setup and test

Next is to set up the RPi to receive signals from the PIR and door sensors. You need a 433MHz transmitter/receiver pair; they come as a set, but we’re only using the receiver.
Next, follow the instructions to set up the receiver and test that you’re getting signals:
  1. Follow the instructions to wire up the receiver to the GPIO pins
  2. Build and compile RFSniffer (part of 433Utils)
  3. Test that the signals are getting through by activating the sensors (you may need to make sure their batteries are still working)
433MHz receiver


Setting it all up

The code can be found here:
I created separate folders for each component so that I could test each on its own:
  1. Sensors – the control script that listens to the RF sensors and dumps what it hears into a file called “sniff”
  2. Camera – the script that takes a photo and saves it as image.jpg
  3. Email – the script that uses Gmail to send an email whenever something is detected
I personally recommend that you test each of these individually. But first you need to set up and configure things for your needs:
  1. You’ll need to install fswebcam
  2. 433Utils should already be installed, but you may need to configure the dir path
  3. For email, you’ll need to set up a Gmail account for this. I don’t recommend using your personal email. You’ll need to configure that in the Email script. The first time you send an email, Gmail will flag it as suspicious, so you’ll need to allow it.
  4. For email, you’ll also need to configure who you want to send it to.
  5. There are several Python packages used, e.g. for email and subprocess handling. You may need to pip install any that are missing.
  6. The “sensorAlertCodes” hash in the control script is for my own sensors. You’ll need to use RFSniffer (from the 433Utils setup) to figure out the codes for your sensors and configure accordingly.
  7. The control script also has the locations of the “Camera”, “Email”, and “Sensors” dirs; you’ll need to configure those as well.
  8. I modified the RFSniffer C++ code to use <iostream> cout instead of <stdio.h> printf, as it wasn’t printing to stdout fast enough. YMMV.
  9. I added a crontab to the git repo as a reference. My setup checks “sniff” every 5 mins and, if there are entries in there, takes a photo and sends it with the Subject indicating whether the door was opened or motion was detected.
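Putting the pieces together, the logic the cron job drives looks something like this sketch (the code-to-sensor mapping and file names are hypothetical; use RFSniffer to find your own sensor codes, and emailing image.jpg is left to the Email script):

```python
import os
import subprocess

# Hypothetical mapping of RF codes to sensors; run RFSniffer to find yours.
sensorAlertCodes = {"1381683": "Door opened", "5269844": "Motion detected"}

def check_and_alert(sniff_path):
    """Read the sniff file; if any known codes appear, snap a photo and report alerts."""
    if not os.path.exists(sniff_path) or os.path.getsize(sniff_path) == 0:
        return None
    with open(sniff_path) as f:
        codes = [line.strip() for line in f if line.strip()]
    alerts = [sensorAlertCodes[c] for c in codes if c in sensorAlertCodes]
    if alerts:
        # Take a photo with fswebcam; the Email script then sends it
        # with a Subject built from the alerts.
        subprocess.call(["fswebcam", "-r", "640x480", "image.jpg"])
    # Truncate the file so the same events aren't reported twice.
    open(sniff_path, "w").close()
    return alerts or None
```

Run this from cron every 5 minutes and you get roughly the behavior described above.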
Have fun! 


Quality is in the eye of the beholder

I just got back from the Better Software West conference, and one of the takeaways from Mike Sowers’s tutorial on “Measurement and Metrics for Test Managers” was that quality is in the eye of the beholder.

This reminded me of a funny story.

This happened when I was QA Manager of a small test team in Singapore. One of my test engineers was from India and was really excited as he was heading home in a couple of weeks and was shopping for his family. He had asked his kid sister what she wanted and she said she wanted a digital camera (this was before the iPhone).

He researched and researched the features and price points of the various digital cameras on the market: number of megapixels, optical zoom, etc. It reached a point where he had narrowed it down to 2-3 models and didn’t know which to choose, so I recommended he ask his sister what she wanted.

The next day he was shaking his head. I asked him what she had said.

“Does it come in pink?”

Quality indeed is in the eye of the beholder.


Building a simple app beaconing solution: Part 2

In part 1 of this series I talked about building the Android app that beacons its location to a Splunk server. In part 2, I’ll cover the iOS app that does the same thing.

As in part 1, I will not be covering basic iOS app development. There are a ton of resources and online courses (Udemy, Lynda among others) that cover this. I assume you already know how to build a simple single “page” (or ViewController) app in Objective-C (not Swift) with a Button and TextField, what an IBOutlet and an IBAction are, how to dismiss the keyboard on a tap outside the TextField, etc.

The code can be found here (under the iOS folder):

The first part is how we persist the deviceID on the device (just like we did for the Android app). The code is in the viewDidLoad() method in ViewController.m. The key is using the NSFileManager object. This is pretty standard code that you can find online. Essentially, the code to get the data stored in “deviceID.dat” is here:

    // this part gets the saved deviceID and displays it
    NSFileManager *fileMgr;
    NSString *documentDir;
    NSArray *directoryPaths;
    fileMgr = [NSFileManager defaultManager];
    directoryPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    documentDir = [directoryPaths objectAtIndex:0];
    filepath = [[NSString alloc] initWithString:[documentDir stringByAppendingPathComponent:@"deviceID.dat"]];
    if ([fileMgr fileExistsAtPath:filepath]) {
        deviceID = [NSKeyedUnarchiver unarchiveObjectWithFile:filepath];
        deviceIdInput.text = deviceID;
    }

The corresponding code to use the value in the TextField to store in the “deviceID.dat” can be found at the submitButton() IBAction method:

- (IBAction)submitButton {
    NSString *deviceID;
    deviceID = deviceIdInput.text;
    [NSKeyedArchiver archiveRootObject:deviceID toFile:filepath];
}

Note: deviceIdInput is the TextField. We have already initialized “filepath” in viewDidLoad(), so this works.

The next part we’ll talk about is splunkPing, also in ViewController.m. This is the code that beacons the location (once obtained) to Splunk. It uses an NSMutableURLRequest object to make the HTTP POST to the Splunk endpoint (in this case localhost, for testing). The rest is pretty standard code for making an HTTP request.

The last 2 parts are really the meat of this project: (a) creating a “background service” and (b) obtaining the lat/long info to beacon. These 2 go hand in hand because (surprise, surprise) iOS only allows a handful of background modes. The good news is that receiving location updates is one of them, i.e. we are able to write callbacks that trigger when the location changes.

The setup code for this can be found in the viewDidLoad() method:

    locationMgr = [[CLLocationManager alloc] init];
    locationMgr.delegate = self;
    // locationMgr.distanceFilter = kCLDistanceFilterNone;
    locationMgr.desiredAccuracy = kCLLocationAccuracyBest;
    [locationMgr startUpdatingLocation];
    NSLog(@"LocationManager started");

Next we need to write the handlers for 2 cases: (a) when there’s an error and (b) when the location is updated. Note that unlike the Android app, where we needed a try-catch block to handle no location being received, here Apple already provides the hook in the form of the error delegate method.

-(void)locationManager:(CLLocationManager *)manager didFailWithError:(NSError *)error {
    [self splunkPing:deviceID withLatitude:0.0 withLongitude:0.0];
    NSLog(@"Error: %@", error.description);
}

-(void)locationManager:(CLLocationManager *)manager didUpdateLocations:(NSArray *)locations {
    // Only use the event if it's relatively recent.
    CLLocation* location = [locations lastObject];
    NSDate* eventDate = location.timestamp;
    NSTimeInterval howRecent = [eventDate timeIntervalSinceNow];
    NSString *coord;
    NSLog(@"Inside didUpdateLocations");
    if (fabs(howRecent) < sleepInSeconds) {
        // If the event is recent, log it and beacon it.
        NSLog(@"latitude %+.6f, longitude %+.6f\n",
              location.coordinate.latitude, location.coordinate.longitude);
        coord = [[NSString alloc] initWithFormat:@"%f,%f", location.coordinate.latitude, location.coordinate.longitude];
        [self splunkPing:deviceID withLatitude:location.coordinate.latitude withLongitude:location.coordinate.longitude];
    }
}

This is pretty standard, and you can find example code online as well. Please note: for this to work you need to enable location permissions in the project’s Info.plist (see Apple’s documentation on receiving location updates).


Building a simple app beaconing solution: Part 1

One of the side projects I’ve been working on is a simple beaconing app (iOS and Android) that beacons the device’s lat/long to a Splunk server. This could be used as a way to keep track of test devices, or even for some kind of geofencing for a device lab.

There are 3 parts to this post:

  1. Part 1: covers the Android app
  2. Part 2: covers the iOS app
  3. Part 3: covers the Splunk set up and custom scripts

For those who do not know, Splunk is a really cool devops tool that makes it really easy to ingest all kinds of logs, e.g. syslog, Apache access logs, log4j, etc., and makes it really easy to search, create dashboards, and set up alerts on queried log entries. Other than ingesting logs, Splunk is also able to listen on a TCP port for events or, as of version 6.3, lets you create a simple HTTP event listener that processes events encoded in JSON format.
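To make that concrete: the HTTP event listener (Splunk’s HTTP Event Collector) expects a JSON payload with the data under an “event” key, and a token sent as `Authorization: Splunk <token>`. Here’s a small Python sketch that builds such a request; the token and the field names inside the event are placeholders, and the actual send is omitted:

```python
import json

def build_hec_request(token, device_id, lat, lon):
    """Build the headers and JSON body for a Splunk HTTP Event Collector POST."""
    headers = {
        "Authorization": "Splunk " + token,  # HEC's token auth scheme
        "Content-Type": "application/json",
    }
    body = json.dumps({"event": {"deviceID": device_id, "lat": lat, "long": lon}})
    return headers, body
```

You would then POST the body with those headers to the collector endpoint on your Splunk host (port 8088 by default).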

Some folks will ask: why not use ELK (Elasticsearch, Logstash, Kibana)? I may investigate that at some later point, as I strongly believe this is also possible using an ELK stack.

OK, let’s start with the Android app. I will not cover the basics of creating a simple Android app, as there are tons of resources on how to build one. There are also video courses on Udemy, Lynda (now incorporated into LinkedIn), and other online course sites. For this entry I assume the reader already knows how to create a simple single-Activity app, how to create UI elements such as input boxes and buttons, and how to bind them to handlers that execute code.

The code can be found here:

For this post, we’re only interested in the “Android” part. To view and build the code, I would recommend using Android Studio.

If you navigate down the folders in “app/src/main”, you’ll eventually come to 2 files: the main Activity class and SplunkService.java.

We’ll start with the first. There are a few more advanced Android concepts here: (a) SharedPreferences and (b) AlarmManager. Most of the code is pretty standard Android code to identify the various UI elements in the Activity, i.e. inputBox, submitButton, etc.

You can read more about SharedPreferences in the Android docs. Essentially, it allows you to persist a set of key-value pairs for the app on the device, for retrieval each time the app is run. We use this mechanism to persist a device ID that identifies the device in the beaconing.

The code fragment:

SharedPreferences settings = getSharedPreferences(PREFS_NAME, 0);
mDeviceID = settings.getString("deviceID", "");

simply restores the saved SharedPreferences KV pair with the key “deviceID” and displays it in the inputBox.

If the user changes the value, then we update the stored value:

SharedPreferences.Editor editor = settings.edit();
editor.putString("deviceID", mDeviceID);
editor.commit();

AlarmManager basically allows you to set up repeating alarms that trigger some code. You can learn more in the Android docs.

The code below just sets up an alarm to trigger the pingService (we’ll cover this later) every 600000 millis, i.e. every 10 minutes.

Intent intent = new Intent(this, SplunkService.class);
PendingIntent pingService = PendingIntent.getService(this, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
AlarmManager alarmManager = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
alarmManager.setRepeating(AlarmManager.RTC_WAKEUP, System.currentTimeMillis(), 600000, pingService);


Next we look at SplunkService.java.

There are just a few key things needed to understand this: (a) the networking code needs to run in an AsyncTask, and (b) how to use the LocationManager to obtain the lat/long.

In the code, I’ve implemented 2 ways of beaconing to Splunk: via HTTPClient and via a Socket. These are pretty standard Java; there are a ton of examples of both on StackOverflow. What is important to note (for newbie Android devs) is that you need to encapsulate them in an AsyncTask so that they are non-blocking. All networking code is blocking and hence can’t be run on the app’s main UI thread. When that happens, Android barfs. Hence the inner classes SendToSplunkHTTP and SendToSplunkSocket.

Now to obtain the location of the device. The code for this is:

LocationManager manager = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
Location loc = manager.getLastKnownLocation(LocationManager.NETWORK_PROVIDER);

However, you may note that this is not what I did here. I only realized this through testing (hence you should always test!). Basically, the problem is that I kept getting a NullPointerException on the loc object. The reason is that getting the location object is error-prone, so you should always have a fallback. In my case, I implemented a try-catch block that just reports the lat/long as 0,0 if the loc object was null.

More info on LocationManager can be found in the Android docs.

One thing to take note of: to use LocationManager, you need to set permissions in AndroidManifest.xml:

    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/>
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>   





Managing user churn on external tools

One of the best things about starting a company now is the sheer wealth of tools available to effectively outsource your entire infrastructure and corporate systems, e.g. AWS, GitHub, Crittercism, Flurry, Gmail, Google Drive, etc. The problem, of course, is user churn: people come and people go, and someone has to maintain the hygiene of these external accounts.

Most companies of a certain size have an HR database somewhere, so it may be possible (directly, or via an exposed API) to check whether an employee still exists. It’s also usually at companies of a certain size that this issue becomes more problematic.

Unfortunately, these tools usually don’t expose an API for you to query the users that belong to a particular company or organization. They usually do have a web UI, but that’s pretty manual.

One way I’ve tackled this problem is to have a simple MySQL database with phpMyAdmin in front of it, storing a copy of each tool’s user list. Each table corresponds to one tool’s user list. Then a simple cron job runs each day to check those users against the HR database. I had my scripts written in Perl (embarrassingly so, but it was much easier given the bindings to internal stuff), and it would send out a daily email per tool listing who has since left the company.

Once you have this set up, it’s a breeze: all you need to do is review the latest email and start removing those users from the tools.
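The daily check itself is just a set difference. My version was Perl against internal HR bindings, but the idea boils down to this Python sketch (the function name and inputs are made up for illustration):

```python
def find_leavers(tool_users, hr_active_users):
    """Return tool accounts whose owners no longer appear in the HR database."""
    active = set(hr_active_users)
    return sorted(u for u in tool_users if u not in active)
```

Run this per tool table and email the result; an empty list means that tool is clean.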

The problem is performing the first data import. phpMyAdmin allows you to import via CSV, but you need that first CSV list. If the tool provider has an API then you’re good, but a lot don’t, so you’re stuck with the web UI. One way around that is to use Selenium WebDriver to scrape it. I have put the script (sans my actual password) for getting Apple Developer accounts (if you’re an admin) in the repo. Note: Apple may change their web UI at any time, so you’ll have to modify it accordingly, but you get the idea.


A canary in the coal mine for your app CI pipeline

One of the issues of maintaining a CI pipeline is knowing whether the various components, i.e. (a) the source repository, (b) the build farm, (c) the artifacts repository, and (d) the test farm, are working at any point in time, since developers who want a build do not want to be blocked by problems with the pipeline.

One way to check this is to have regularly scheduled build/test runs on very simple apps with no major external dependencies.

I created just this a while back, called the Canary App, for exactly this purpose. It contains some simple tests that do not use the network, so the tests should always pass. Currently the UI tests are only in UI Automation (iOS) and Robotium (Android).

The nice thing about having regularly scheduled runs is that if the build/test run fails, you can check to see which part of your CI pipeline is having issues and immediately address them before a critical build kicks off.
