Friday, April 24, 2015

Testing system behavior that is not there

Yesterday, we gave a demo of our mobile test automation POC to a client for their Android app in the financial domain.  It was a simple scenario: log in to the Android app, then verify the market watch feed shown to the logged-in user.

The screen is as below:

The POC was based on the existing manual test scenario, with the data suggested by the client. Using the correct login ID and password takes the user to the welcome screen.  If the user ID or the password is incorrect, the application throws an error message and asks you to try again with valid credentials.

After the demo of the successful login and further steps in the POC, the client representative asked us to update the test to use a very long invalid password.  The suggested data was “anand long name in the password 8hj”, a 35-character string.  When we reran the test, the login failed because the password was invalid; however, the password field did not accept the full string and truncated it to 12 characters: “anand long n”.  So the conclusion was that the test passed, since it did not allow login with an invalid password, with the observation that the test did not do enough to catch the fact that the password test data was actually truncated and the entire string was never used in the test.

Now, should the above test trap this condition and not allow any characters beyond 12 to be typed into the password field?  Should it throw an error?

What do you think?

Here is my $0.02…
I would look at the behavior of the system as stated in the requirements for the same scenario. What happens when we run the test manually?  Does it conform to the requirement?  The manual test, and thus the test automation script, needs to validate this behavior of the system.
It would be wrong for a test case to validate something that is not part of the system's behavior, and even worse to trap it and override the system to give warnings and errors.

In the above test case, when I manually enter the password, the field takes no more characters after the 12th, but please note that the system does not stop me from typing further characters.  If I were not looking at the screen, I would simply type away all the characters and proceed with the next steps.  A test automation tool is like a person who does not look at the screen and only interacts with the system under test.

This seems innocuous at first, but consider this: if the valid password is “123456789012” (a 12-character string) and I used “123456789012345” (a 15-character string) as test data, the system would let me log in with invalid test data, because it would truncate the long string into the valid password.
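To make the risk concrete, here is a minimal, self-contained Java sketch. This is not the client's app; the 12-character limit and the silently truncating field are assumptions modeled on the behavior described above:

```java
// Minimal model of a password field that silently truncates input
// to a maximum length (hypothetical, modeled on the behavior above).
public class TruncatingLoginDemo {

    static final int MAX_LEN = 12;                  // assumed field limit
    static final String VALID_PASSWORD = "123456789012";

    // Simulates typing into the field: characters beyond MAX_LEN are dropped.
    static String typeIntoPasswordField(String typed) {
        return typed.length() > MAX_LEN ? typed.substring(0, MAX_LEN) : typed;
    }

    static boolean login(String typedPassword) {
        return VALID_PASSWORD.equals(typeIntoPasswordField(typedPassword));
    }

    public static void main(String[] args) {
        // Invalid 15-character test data...
        String invalid = "123456789012345";
        // ...still logs in, because the field truncated it to the valid password.
        System.out.println("login with 15-char data: " + login(invalid));
        // A more robust automated check: read the field back and
        // compare it with the string that was actually sent.
        boolean truncated = !typeIntoPasswordField(invalid).equals(invalid);
        System.out.println("input was truncated: " + truncated);
    }
}
```

The second check is the one the automated test above was missing: asserting that the value in the field equals the data the script sent, not just that the login outcome matched expectations.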

Now it is up to the users to decide what the system should do if I keep happily typing even though the password field allows a maximum of 12 characters:
1. The system lets the user type more than 12 characters and does not truncate (cutting off input may pose a security risk by revealing the maximum length of the password)
2. The system pops up an error message that the user has reached the maximum number of characters allowed in the password
3. Or the existing behavior, where the user can type as many characters as they like but the system truncates to the maximum allowed length without the user's knowledge.

Once we close on this system behavior from the varied perspectives of functional as well as UX testing, we can then validate it accordingly with manual and automated test cases.

I would also like to bring to your notice that the above scenario is an example of how testing can drive functional as well as user experience requirements when it is done either alongside or prior to system requirement gathering. This helps us study requirements from a behavior perspective, which can then evolve into better test cases and, eventually, higher quality.

Happy testing!!

Wednesday, April 22, 2015

ATDD Mobile Test Automation - Integration of FitNesse with Appium - Part 1

FitNesse is a collaboration tool; some Agile enthusiasts use it to facilitate ATDD (Acceptance Test Driven Development) or BDD (Behaviour Driven Development).  Compared to Cucumber, where the focus is on the behavioral aspect, with FitNesse we also lean towards the test data, since tests are expressed as tables of inputs and expected outputs.

Appium is one of the leading open-source tools for mobile test automation.  Here is my video on integrating FitNesse with Appium.

FitNesse can be downloaded from this link -->

The prerequisites for an Appium project are as below:

  • Install Android SDK
  • Install Node.js
  • Install Appium
  • Install Appium Java Client
  • Configure Appium in Eclipse (preferably maven project)
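For the Appium Java client step, one common option (assuming a Maven project, as suggested above) is to pull it in as a dependency. The groupId and artifactId below are the client's actual Maven coordinates; the version is illustrative, so check Maven Central for the one matching your Appium server:

```xml
<!-- Appium Java client; pick a version compatible with your Appium server -->
<dependency>
  <groupId>io.appium</groupId>
  <artifactId>java-client</artifactId>
  <version>2.1.0</version>
  <scope>test</scope>
</dependency>
```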

The FitNesse server can be started using the following command:
java -jar fitnesse-standalone.jar -p 9090

where -p specifies the port on which to start the server.

Configure the project in Eclipse with the FitNesse and Appium dependencies and run the mobile test.  More details on the Eclipse setup in Part 2.
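As a preview of the setup, a FitNesse SLiM decision-table fixture is just a plain Java class with setters for the input columns and a method for the output column. The class and method names below are hypothetical, and the logic is stubbed; a real fixture would drive the app through Appium instead of returning canned results:

```java
// Hypothetical SLiM decision-table fixture for a login check.
// In a real setup the output method would interact with the app
// through an Appium driver; here it is stubbed to show the fixture shape.
public class LoginFixture {
    private String userId;
    private String password;

    // SLiM calls these setters for each input column of the table row.
    public void setUserId(String userId) { this.userId = userId; }
    public void setPassword(String password) { this.password = password; }

    // SLiM calls this for the output column ("login successful?").
    public boolean loginSuccessful() {
        // Stub: replace with Appium interactions against the app under test.
        return "demo".equals(userId) && "secret123".equals(password);
    }
}
```

On the wiki page, a decision table with columns "user id", "password" and "login successful?" would exercise this fixture row by row, which is where FitNesse's test-data orientation shows.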

Tuesday, April 21, 2015

BDD Mobile Test Automation - Integration of Cucumber with Appium

Behavior Driven Development (BDD) is very popular with many Agilists out there, especially those focused on testing. However, my view is that BDD is not about test automation; it is about collaboration, so that the expected behavior of the application can be determined. Cucumber happens to be the tool of choice for implementing BDD.  For web applications, we can drive the features through WebDriver.

Lately, application development has been inclined towards mobile apps, and we need to extend our existing BDD frameworks to handle apps, be it Android, iOS or Windows Mobile.

Appium is a neat tool for anyone who has exposure to WebDriver; even without that background, it is an excellent tool for mobile testing.

Here is my video of running Appium tests for an Android app using Cucumber.

You can create the Maven project using the following pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.qaagility</groupId>
  <artifactId>atacalc</artifactId>
  <version>1.0-SNAPSHOT</version>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.11</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.seleniumhq.selenium</groupId>
      <artifactId>selenium-java</artifactId>
      <version>LATEST</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>info.cukes</groupId>
      <artifactId>cucumber-java</artifactId>
      <version>1.2.5</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>info.cukes</groupId>
      <artifactId>cucumber-junit</artifactId>
      <version>1.2.5</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

The code for the JUnit test runner is as below:

package com.qaagility.atacalc;

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(format = "pretty", tags = {"@only"},
        features = "test/resources/ATA_Calc.feature", monochrome = true)
public class TestRunner {
}
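The runner points at test/resources/ATA_Calc.feature and filters on the @only tag, so the feature file must carry that tag. A matching feature might look like the sketch below; the calculator scenario is a made-up illustration, not the actual feature from the video:

```gherkin
@only
Feature: ATA Calc
  As a user, I want basic calculations verified on the app

  Scenario: Add two numbers
    Given the calculator app is launched
    When I add 2 and 3
    Then the result should be 5
```

Each Given/When/Then line is then bound to a Java step-definition method via Cucumber's annotations, and those methods drive the app through the Appium driver.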

Wednesday, January 16, 2013

Back after a long time, and wishing you all a very happy new year!!

So what does 2013 hold for us in the testing area?  I find this year full of prospects and exciting developments for the testing world.

As Agile changes from a "buzz-word" to a "must-have" for most clients, I expect to see a lot of traction in this area.  Historically, Agile has been more of the developers' baby; they seem to have been driving it. Lately, however, the distinction between developers and testers has reduced significantly, and we are looking at Agile teams whose members are "all-rounders" with cross-skills.  Earlier, such teams were something we merely aspired to, and for the lucky ones the teams eventually got there through pairing and extensive training and coaching.

Testers have been evolving and getting more development-centric, and developers are getting the hang of testing beyond the unit level.  Due to this leveling of skills, testers now seem to be placed in a better position to drive the development activity. Some may term it TDD, but what I foresee is a "less intense" TDD. What do I mean by that?

TDD in the pure sense means that we strictly write the tests first, and the tests define the scope of development: developers write code that passes the tests and stop at that.  A less intense TDD would be that you still write the tests first, but the tests do not take on the onus of describing the scope for the developers.  Developers would still refer to the SRS (requirements document) in order to code; the difference is that they would have high-priority tests ready to check conformance to requirements.  Subsequently, more testing would be done at the integration and acceptance levels to get the appropriate coverage.  So if you are not comfortable calling this a variation of TDD, we can name it around the concept of early testing (mind you, these are not static tests from the V model).

This variation of testing gives ample power to testers in an Agile (or traditional) methodology and also brings the concept of quality early into the SDLC.

Sunday, January 23, 2011