SLASSCOM’s QA Strategy Event: Recorded


SLASSCOM recently held an event with one of the longest titles we’ve ever seen. Reading it out loud is like reading an entire article, so let’s get down to brass tacks. Held at Dialog Future Arcade on the 23rd of April, the event featured a number of reputed speakers in the field of QA and was targeted at QA engineers as well as product and project managers.

Now before we start, it should be noted that though the event was not lengthy in duration, it was indeed quite techy. The main area of interest was mobile apps – no longer a niche market, but a mainstream economic phenomenon.

Starting off the event was Anjana Somathilake, Director of Engineering and Architecture at Leapset Engineering, speaking on adding intelligent analytics and crash reporting to apps in order to enhance your QA strategy. Again, big words, but the gist of the subject is simple. What steps do you take when your app crashes?


His session was rather interactive, and the audience (a majority of whom were QA engineers) responded well and asked questions in return. Anjana also noted that this was the first time analytics and crash reporting had been discussed in a QA context.

Does traditional testing end when the app goes live?

That was his opening slide. Usually, developers forget about an app after it has been pushed to the relevant app stores; no further development or bug fixes are carried out. Case in point: when the audience was asked, a majority admitted to not carrying out testing once their app is live.

Anjana added that if developers do not test apps further, they are running blind in production. Essentially, you have no data on how many people are using your app or whether it has bugs; you’re flying blind.

He took the example of an app developed by Leapset. The app was tested, there were no “known” defects and no customer complaints, and it was considered all good. But later they realized there was little customer satisfaction, increased levels of app abandonment, and so on and so forth. And they didn’t know why.

As a solution, they could have carried out a customer satisfaction survey. But carrying out a survey is a tad on the difficult side when you’re based in one country and your customers are on the other side of the world. The next step they could have taken was to ask for QA feedback. This again is difficult when you don’t know what the feedback means; it offers results but no insight. Hire a domain expert? Too costly.

So in the end, they decided to use analytics and crash reports. Compared to traditional methods, analytics gives you immediate results, whereas feedback from customers or surveys usually arrives when it’s too late to do anything. He then briefly covered why some apps are popular and others aren’t, touched on experimentation and optimization, and followed up with an introduction to a few analytics tools such as Google Analytics, Mixpanel, Countly and Flurry.

The topic then went down a slightly more technical track, with explanations revolving around the actions and events of an application, and the steps needed to record, collect and analyze that data.
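
To make the record-and-collect idea concrete, here’s a minimal sketch of our own (not code from the talk – every name in it is hypothetical). Real tools such as Mixpanel or Flurry ship SDKs that do this for you; the shape of the idea is roughly this:

```swift
import Foundation

// A user action recorded as a named event with contextual properties.
struct AnalyticsEvent {
    let name: String                  // e.g. "booking_started"
    let properties: [String: String]  // extra context attached to the action
    let timestamp: Date
}

final class EventLogger {
    private var buffer: [AnalyticsEvent] = []

    // Record a user action as a named event.
    func track(_ name: String, properties: [String: String] = [:]) {
        buffer.append(AnalyticsEvent(name: name, properties: properties, timestamp: Date()))
    }

    // Flush the buffer to a collection endpoint (stubbed out here;
    // a real SDK would batch-upload these to an analytics backend).
    func flush() {
        guard !buffer.isEmpty else { return }
        print("Uploading \(buffer.count) events for analysis")
        buffer.removeAll()
    }
}

let logger = EventLogger()
logger.track("home_page_viewed")
logger.track("booking_started", properties: ["room_type": "double"])
logger.flush()   // prints: Uploading 2 events for analysis
```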

If you torture your data long enough, it will confess.

Funneling was the next topic he covered. A funnel measures how customers move through a series of events: in a booking system, for example, how many people check out the home page, and of those, how many proceed to make a booking. After that came retention – if a customer finds an app valuable enough, they will keep using it.
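
Sticking with that booking example, here’s a tiny sketch of our own showing how a funnel conversion falls out of the event data (the users and event names are made up):

```swift
// Hypothetical funnel data: the events each user generated.
let eventsByUser: [String: [String]] = [
    "u1": ["home_page_viewed", "booking_completed"],
    "u2": ["home_page_viewed"],
    "u3": ["home_page_viewed", "booking_completed"],
]

let viewedHome = eventsByUser.values.filter { $0.contains("home_page_viewed") }.count
let booked = eventsByUser.values.filter { $0.contains("booking_completed") }.count

// Conversion between the two funnel steps: 2 of 3 users made a booking.
let conversion = viewedHome > 0 ? Double(booked) / Double(viewedHome) : 0
print("Home page → booking conversion: \(Int((conversion * 100).rounded()))%")  // 67%
```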

We then leave the area of analytics to cover something a bit… different: crash reporting.

Consider the scenario: you spend weeks testing an app, publish it, positive reviews flow in, and THEN you get a one-star review stating your worst fear – the app crashed.

Nothing is worse than hearing about crashes from the user community.

As a solution, integrating a crash reporting system into the app helps the developer improve it. It won’t fix the crash that already happened, but it can help with future releases.
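
The core mechanism, at its simplest, looks something like the sketch below (ours, not the speaker’s – the talk demoed tooling rather than raw code). It registers a handler for uncaught NSExceptions and saves the details so the report can be uploaded on the next launch. Real crash-reporting SDKs also hook signals like SIGSEGV, which this does not cover:

```swift
import Foundation

// Register a handler for uncaught Objective-C exceptions.
NSSetUncaughtExceptionHandler { exception in
    let report = """
    Crash: \(exception.name.rawValue)
    Reason: \(exception.reason ?? "unknown")
    \(exception.callStackSymbols.joined(separator: "\n"))
    """
    // Persist the report now; upload it on the next launch, when the app is stable.
    let url = FileManager.default.temporaryDirectory
        .appendingPathComponent("last_crash.txt")
    try? report.write(to: url, atomically: true, encoding: .utf8)
}
```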

App crashes are like accidents – they just happen, but it’s up to you how to handle them.

The main reason apps crash in mobile environments is limited frameworks and hardware.

Live demos showed crashes and how to debug them, in the context of iOS development. Apparently the crash report only analyses values for RAM usage and whether or not the device was jailbroken. In conclusion, Anjana said that including analytics and crash reporting will complete your QA strategy once the app is live.

With that he signed off, and his place was taken by Nuwan Dehigaspitiya, sharing his expertise on non-functional testing with a talk on “Mobility + Quality”.

He started off with the challenges faced by QA professionals in mobile testing – such as the claim that 65% of companies have no idea which tools to use to test their applications.

He then got down to brass tacks, talking about points such as the diversity of platform OSes and device fragmentation, where, in order for an application to be successful, it must also be device compliant.


He then spoke about the different mobile app types (native or hybrid) and the availability of mobile testing tools and related knowledge. He also stated that any and all apps MUST meet industry standards in order to be published on any app store.

His second topic was about mobile app testing strategies:

  • Selecting a target device
  • Testing via simulators/cloud test labs/physical devices
  • Connectivity options – constraints when using 2G, 3G, 4G or Wi-Fi
  • Manual vs. automated testing (a minimal example follows this list)
  • Selecting a test automation tool
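
On that manual-vs-automated point, here’s a taste of what an automated check looks like on iOS using XCTest, Apple’s built-in test framework. This is our own minimal sketch, not from the session, and the pricing logic is purely hypothetical:

```swift
import XCTest

// A minimal XCTest case: the kind of automated check that replaces a
// repeated manual pass over the same behaviour.
final class BookingPriceTests: XCTestCase {
    // Hypothetical app logic under test.
    func totalPrice(nights: Int, ratePerNight: Double) -> Double {
        Double(nights) * ratePerNight
    }

    func testTotalPriceForThreeNights() {
        XCTAssertEqual(totalPrice(nights: 3, ratePerNight: 40.0), 120.0)
    }
}
```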

The session then got really techy, with a live coding session using “New Relic” to test the performance of a pre-built application. Nuwan also tested areas such as execution time and memory utilization via Instruments.
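
For readers without a profiler to hand, here’s a crude way to time a code path manually – similar in spirit to what Instruments automates, though this sketch is ours and uses neither New Relic’s API nor Instruments:

```swift
import Foundation

// Time a block of code by hand and print the elapsed wall-clock time.
func measure(_ label: String, _ block: () -> Void) {
    let start = CFAbsoluteTimeGetCurrent()
    block()
    let elapsed = CFAbsoluteTimeGetCurrent() - start
    print("\(label): \(String(format: "%.3f", elapsed)) s")
}

measure("sorting 100k random integers") {
    var numbers = (0..<100_000).map { _ in Int.random(in: 0...1_000_000) }
    numbers.sort()
}
```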


Nuwan also briefed the audience on performance testing, more specifically – energy consumption. This was done via Connection manager, a tool which controls the radio connection and power utilization of a mobile device.

If you spend your day playing a lot of games on your mobile, then you will know exactly how the battery drains and how quickly the device heats up, too.

Next up was security testing, which dealt with checking for insecure data. Test automation, accessibility testing and installation testing were next on his list, and these were covered briefly and swiftly.
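
As one concrete (and entirely hypothetical) example of checking for insecure data, a test could assert that a sensitive value never ends up in unencrypted storage – the key name and flow here are our own invention:

```swift
import XCTest

// Sensitive values belong in the Keychain, not in unencrypted UserDefaults,
// which is trivially readable on a jailbroken device.
final class InsecureStorageTests: XCTestCase {
    func testAuthTokenIsNotInUserDefaults() {
        // After exercising the login flow (omitted), no token should appear here.
        XCTAssertNil(UserDefaults.standard.string(forKey: "auth_token"))
    }
}
```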

With both sessions coming to a close, the event then turned into a small Q&A session. Soon, refreshments were served, and it was time for us to pack up and head back to 127.0.0.1.
