Wednesday, February 19, 2020

TEST PLAN
A test plan is a document which drives all future testing activities.
A test plan is prepared by the Test Lead (60% of the time), the Test Manager (20%) or the Test Engineer (20%).

There are 15 sections in a test plan. We will look at each of them below.

1) OBJECTIVE :- It states the aim of preparing the test plan, i.e., why we are preparing this test plan.

2) SCOPE :-
2.1 Features to be tested
For ex,    Compose mail
               Inbox
               Sent Items
               Drafts
2.2 Features not to be tested
For ex,    Help

i.e., in the planning stage, we decide which features to test and which not to test, given the limited time available for the project.

How do we decide which features are not to be tested?
a) "HELP" is a feature developed and written by a technical writer and reviewed by another technical writer, so we will not test this feature.

b) Third-party modules of an application will not be tested.
Let us consider that an application with features A, B, C and D is to be developed as per the requirements. But D has already been developed and is in use by another company, so the development team will purchase D from that company and integrate it with the other features A, B and C.
Now, we will not do functional testing on D, because D is already in use in the market. But we will do integration testing and system testing between A, B, C and D, because the new features may not work properly with D.

c) The application might have links to other applications. Here, our scope of testing is limited to,
·         Whether the link exists
·         Whether it goes to the homepage of the corresponding application when we click on the link
Let us consider the example of Gmail. When we log into Gmail, we see many links to other applications like Orkut, Picasa, YouTube etc. When we are logged into Gmail and we click on the Orkut link, it must take us to Orkut's homepage.
Such a feature is called Single Sign-On – it is a feature wherein one login allows access to multiple applications.
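A minimal sketch of the two checks that are in scope for such links, assuming Selenium WebDriver in Python; the URLs and link text below are hypothetical placeholders, not the real applications.

# Sketch: verify the link to another application exists and points to its homepage.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://mail.example.com")          # hypothetical application under test
links = driver.find_elements(By.LINK_TEXT, "Orkut")

# Check 1: the link exists on the page
assert links, "Link to Orkut does not exist"
# Check 2: the link points to the other application's homepage
assert links[0].get_attribute("href").startswith("https://www.orkut.example.com"), \
    "Link does not go to Orkut's homepage"
driver.quit()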

d) In the 1st release of the product, the features that were developed are a, b, c, d, e, f, g, h, … m, n, o.
Now, the customer gives requirements for new features to be built as enhancements to the product in the 2nd release. The features to be developed are p, q, r, s, t.
In the test plan, we write the scope as follows:

Scope
   Features to be tested:
      P, Q, R, S, T (new features)
      A, B, C, D, E, F
   Features not to be tested:
      G, H, I, J, … N, O

Thus, we first test the new features, and then test the old features which might be affected by building the new features, i.e., the impact areas. We do regression testing for A, B, C, … F.

3) TESTING METHODOLOGIES (Types of Testing)
Depending upon the application, we decide what types of testing we will do for its various features. We should also define and describe each type of testing we mention in the testing methodologies, so that everybody (development team, management, testing team) can understand them, because testing terminology is not universal.

For example, to test www.shaadi.com, we do the following types of testing:
Smoke testing
Functional testing
Integration testing
System testing
Regression testing
Adhoc testing
Usability testing
Globalization testing
Accessibility testing
Compatibility testing
Performance testing

For standalone applications, like AutoCAD, we do the following types of testing:
Smoke testing      
Functional testing      
Integration testing                                
System testing                                                
Regression testing    
Adhoc testing     
Compatibility testing    
Globalization testing      
Accessibility testing               
Usability testing     
Reliability testing        
Recovery testing                          
Installation / Uninstallation testing

4) APPROACH
The way we will go about testing the product in the future:
a)      By writing high level scenarios
b)      By writing flow graphs





a) By writing high-level scenarios
For ex, we are testing www.yahoo.com

i) Login to Yahoo – send a mail and check whether it is in the Sent Items page
ii) Login to …….
iii) …..
…..
…..
…..

This is written only to explain the approach to be taken to test the product. We write a few very high-level scenarios, and only for the critical features. We don't cover all scenarios here; that is the job of the respective Test Engineers to whom the features have been allocated.
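As a hedged illustration, one such high-level scenario could later be detailed into a test stub by the owning Test Engineer. The sketch below is self-contained and pytest-style: FakeMailSession and login_to_yahoo are made-up stand-ins for the real login and mail steps; only the shape of the scenario matters here.

# Sketch: the "send a mail and check Sent Items" scenario as a test stub.
class FakeMailSession:
    def __init__(self):
        self.sent_items = []

    def send_mail(self, to, subject):
        # stand-in for composing and sending a mail through the real UI/API
        self.sent_items.append({"to": to, "subject": subject})


def login_to_yahoo(user, password):
    # hypothetical stand-in for a real login step
    return FakeMailSession()


def test_send_mail_appears_in_sent_items():
    session = login_to_yahoo("user@example.com", "password")
    session.send_mail(to="friend@example.com", subject="hi")
    assert any(m["subject"] == "hi" for m in session.sent_items)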

b) By writing flow graphs

5) ASSUMPTIONS
When writing the test plan, certain assumptions are made, for example about technology, resources, etc.

6) RISKS
If the assumptions fail, risks are involved.


7) CONTINGENCY PLAN OR MITIGATION PLAN OR BACK-UP PLAN
To overcome the risks, a contingency plan has to be made, at least to reduce the impact of the risk from, say, 100% to 20%.

Let us consider an example covering sections 5, 6 and 7.



In the project, the assumption we have made is that all 3 test engineers will stay until the completion of the project, and they are assigned modules A, B and C respectively. The risk is that one of the engineers may leave the project mid-way.
Thus, the mitigation plan would be to allocate a primary and a secondary owner to each feature. Then, if one engineer quits, the secondary owner takes over that particular feature and helps the new engineer understand the respective module.
Assumptions, risks and the mitigation plan are always specific to the project.
The different types of risks involved are,
Ø  Resource point of view
Ø  Technical point of view
Ø  Customer point of view


8) ROLES AND RESPONSIBILITIES

(Hierarchy: a Test Manager, under whom are the Test Leads; under each Test Lead are Senior Test Engineers, Junior Test Engineers and Freshers.)

When a big project comes, it is the Test Manager who writes the test plan.
If there are 3 small projects, then the Test Manager allocates one project to each Test Lead, and each Test Lead writes the test plan for the project he is allocated.




8.1 Test Manager
Ø  Writes or reviews test plan
Ø  Interacts with customer, development team and management
Ø  Signs off the release note
Ø  Handles issues and escalations
Ø  ….
Ø  ….
Ø  ….

8.2 Test Lead
Ø  Writes or reviews test plan
Ø  Interacts with development team and customers
Ø  Allocates work to the test engineers and ensures that they complete the work within the schedule
Ø  Consolidates the reports sent by the Test Engineers and communicates them to the development team, the customer (if it is a time & material project) and management

8.3 Test Engineer 1
Ø  Review test plan
Ø  Write test cases for trend analysis
Ø  Asset survey
Ø  Write traceability matrix
Ø  Review test cases written for sales and purchase modules
Ø  Execute test cases written for trend analysis, asset survey and registration (registration is an old module developed in the previous release; adding trend analysis and asset survey has affected it, so we do regression testing on it)
Ø  Perform compatibility testing using Internet Explorer, Mozilla Firefox and Google Chrome on Windows XP and Windows Vista (the combinations are sketched right after this list)
Ø  Prepare the test execution report and communicate it to the Test Lead
Ø  ….
Ø  ….
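The compatibility responsibility above boils down to a small matrix of combinations. A tiny sketch that enumerates them, using only the browser and OS names from the list above:

# Sketch: enumerate the browser x OS combinations to be covered.
from itertools import product

browsers = ["Internet Explorer", "Mozilla Firefox", "Google Chrome"]
operating_systems = ["Windows XP", "Windows Vista"]

for os_name, browser in product(operating_systems, browsers):
    print(f"Run the compatibility suite on {os_name} with {browser}")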

8.4 Test Engineer 2
Ø  Set up and install the product
Ø  Identify test cases to be automated
Ø  Automate identified test cases using QTP
Ø  Execute and maintain automation scripts







9) SCHEDULES :-
This section specifies when exactly each activity should start and end. An exact date will be mentioned for every activity.


     

Thus, for every specified activity there will be a start date and an end date. For every build there will be a specified date, and for every type of testing on each build there will be a specified date.
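As a rough illustration only, the schedule could be kept as structured data with a start and end date per activity; the activities and dates below are invented examples, not a real project schedule.

# Sketch: every activity gets an explicit start and end date.
from datetime import date

schedule = {
    "Write test plan":              (date(2020, 3, 2),  date(2020, 3, 13)),
    "Write test cases":             (date(2020, 3, 16), date(2020, 4, 10)),
    "Build 1 - smoke testing":      (date(2020, 4, 13), date(2020, 4, 14)),
    "Build 1 - functional testing": (date(2020, 4, 15), date(2020, 4, 30)),
}

for activity, (start, end) in schedule.items():
    print(f"{activity}: {start} to {end}")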

10) DEFECT TRACKING
In this section, we mention how the defects found during testing will be communicated to the development team, and also how the development team should respond. We also mention the priority of the defect – high, medium or low.
10.1 Procedure to track the defect
….
….
….
….
10.2 Defect tracking tool
We mention the name of the tool we will be using to track the defects
10.3 Severity
               10.3.1 Blocker(or Showstopper)
                        ….
                        …. (define it with an example in the test plan)
                        For ex, there is a bug in a module and we cannot go on to test the other modules, because this blocker has blocked them.
               10.3.2 Critical
                       
                        … (define it with an example)
                        Bugs which affect the business are considered critical
               10.3.3 Major
                       
                        … (define it with an example)
                        Bugs which affect the look and feel of the application are considered major
               10.3.4 Minor
                       
                        … (define it with an example)

10.4 Priority
               10.4.1 High – P1
              
               10.4.2 Medium – P2
              
               10.4.3 Low – P3
              
              
                                 P4

So, depending on the priority of the defect (high, medium or low), we classify it as P1, P2, P3 or P4.
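As a hedged sketch, a defect record in whatever tracking tool is chosen might carry both a severity field (impact) and a priority field (order of fixing), matching the classifications above. The field names are illustrative, and the meaning of P4 as the lowest bucket is an assumption, since the text does not define it.

# Sketch: a defect record carrying both severity and priority.
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    BLOCKER = "Blocker (showstopper)"
    CRITICAL = "Critical"
    MAJOR = "Major"
    MINOR = "Minor"

class Priority(Enum):
    P1 = "High"
    P2 = "Medium"
    P3 = "Low"
    P4 = "Very low"   # assumption: P4 is the lowest priority bucket

@dataclass
class Defect:
    summary: str
    severity: Severity
    priority: Priority

bug = Defect("Login button misaligned on Chrome", Severity.MINOR, Priority.P3)
print(bug)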

11) Test Environment
11.1 Hardware
               11.1.1 Server :- Sun Starcat 1500
                        (this is the name of the server from which the testing team takes the application for testing)

               11.1.2 Client :-
               3 machines with following configurations,
               Processor : Intel 2GHz
               RAM :  2GB
              
              
              
               (this gives the configurations of the computers of the Test Engineers i.e, the testing team)
11.2 Software
               11.2.1 Server
                        OS : Linux
                         Web Server : Tomcat
                         Application Server : WebSphere
                         Database Server : Oracle (or) MS SQL Server
(the above servers are the servers which the testing team will be using to test the product)
               11.2.2 Client
                        OS : Windows XP, Vista, 7
                         Browsers : Internet Explorer, Internet Explorer 7, Internet Explorer 8, Mozilla Firefox, Google Chrome
(the above gives the various platforms and browsers in which the testing team will test the product)




11.3 Procedure to install the software
(The development team provides the procedure to install the software. If they have not yet given the procedure, then in the test plan we just write TBD – to be decided.)
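The environment details above can also be kept as structured data so the whole team refers to one configuration. This is just a sketch using the values from this section; a real project might keep it in YAML or a configuration-management tool instead.

# Sketch: the test environment section captured as structured data.
test_environment = {
    "hardware": {
        "server": "Sun Starcat 1500",
        "clients": {"count": 3, "processor": "Intel 2GHz", "ram": "2GB"},
    },
    "software": {
        "server": {
            "os": "Linux",
            "web_server": "Tomcat",
            "application_server": "WebSphere",
            "database_server": "Oracle or MS SQL Server",
        },
        "client": {
            "os": ["Windows XP", "Windows Vista", "Windows 7"],
            "browsers": ["Internet Explorer 7", "Internet Explorer 8",
                         "Mozilla Firefox", "Google Chrome"],
        },
    },
    "install_procedure": "TBD",
}

print(test_environment["software"]["client"]["browsers"])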

12) Entry and Exit Criteria

Exit criteria:
1) Based on %age of test execution
2) Based on %age of tests passed
3) Based on severity of defects

Entry criteria:
a) WBT should be over
b) Test cases should be ready
c) Product should be installed with the proper test environment
d) Test data should be ready
e) Resources should be available

Before we start with Functional Testing, all the above entry criteria should be met.
After we are done with FT and before we start Integration Testing, the exit criteria of FT should be met. The exit percentage is decided in a meeting between the development manager and the test manager; they compromise and conclude on a percentage. If the exit criteria of FT are not met, we cannot move on to IT.
Based on severity of defects means:
The testing team would have decided that, in order to move on to the next stage, the following criteria should be met,
Ø  There should not be more than 20 critical bugs
Ø  There should not be more than 60 major bugs
Ø  There should not be more than 100 minor bugs
If all the above are met, they move on to the next testing stage.
But the problem with this method is:
21 critical, 50 major, 99 minor – cannot exit, because there are more than 20 critical bugs.
10 critical, 90 major, 200 minor – can exit, but the 10 critical bugs can still badly affect the product.
Thus, the concept of "weight of defects" came up, i.e., 3 major = 1 critical, 5 minor = 1 critical, and the total critical count should not be more than 60.



So, for the 1st example:
21 critical – 21 critical
50 major – 16 critical
99 minor – 19 critical
In total there are 56 critical bugs, so we can move on to the next stage.
But for the 2nd example, we cannot move on.
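The "weight of defects" arithmetic above can be written down as a small calculation. This sketch uses the example numbers from this section (3 major = 1 critical, 5 minor = 1 critical, limit 60); they are not standard values.

# Sketch: weighted critical count and the exit decision.
def weighted_critical(critical, major, minor):
    return critical + major // 3 + minor // 5

def can_exit(critical, major, minor, limit=60):
    return weighted_critical(critical, major, minor) <= limit

print(weighted_critical(21, 50, 99))   # 21 + 16 + 19 = 56 -> can exit
print(can_exit(21, 50, 99))            # True
print(weighted_critical(10, 90, 200))  # 10 + 30 + 40 = 80 -> cannot exit
print(can_exit(10, 90, 200))           # False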

Entry criteria for IT :
- should have met exit criteria of FT
(remaining all are same as entry criteria of FT)
Exit criteria for IT :
All points are same as  exit criteria for FT.
But if the pass %age for FT is 85%, then the pass %age for IT should be 90%, because as we reach the later stages of testing, we expect the number of defects to be fewer.

Entry criteria for ST :
- exit criteria of IT should be met
- minimum set of features must be developed
- test environment should be similar to production environment
(remaining all are same as of IT)
Exit criteria for ST :
- everything remains the same as above, but the pass %age is now 99% and there should be 0 critical bugs. There could be, say, 30 major and 50 minor bugs. If all this is met, the product can be released.

Note : All the numbers given above are just for example's sake; they are not international standard numbers!
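To make the stage-wise idea concrete, here is an illustrative check using the example numbers from this section (FT 85%, IT 90%, ST 99% pass, and the ST limits of 0 critical, 30 major, 50 minor bugs); the thresholds are examples only, not standards.

# Sketch: stage-wise exit check with example thresholds.
def can_exit_stage(stage, pass_pct, critical, major, minor):
    min_pass = {"FT": 85, "IT": 90, "ST": 99}[stage]
    if pass_pct < min_pass:
        return False
    if stage == "ST":
        # final stage: no critical bugs, at most 30 major and 50 minor
        return critical == 0 and major <= 30 and minor <= 50
    return True

print(can_exit_stage("ST", pass_pct=99.2, critical=0, major=12, minor=40))  # True
print(can_exit_stage("IT", pass_pct=88.0, critical=0, major=5,  minor=10))  # False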


INTERVIEW QUESTIONS
Q) If the customer gets a 100% defect-free product, it means:
                a) The testing team is not good                b) The developers are super
                c) The product is old                          d) All of the above

Ans) a) is correct. The testing team is not good, because the fundamentals of software testing say that there is no product with zero defects.
 
 








                                                                                  






13) TEST AUTOMATION

13.1 Features to be automated
13.2 Features not to be automated
13.3 Which automation tool are we planning to use
13.4 Which automation framework are we planning to use

We automate the test cases only after the 1st release (we have studied this earlier).

On what basis do we decide which features are to be automated?

 













               (Figure: Very Important Features)

If a feature is very important and needs to be tested repeatedly, then we automate it, because manually testing the feature takes longer and also becomes a tedious job.

How do we decide which features are not to be automated?
®     For ex, "HELP" is a feature that is not repeatedly tested, so we don't have to automate it.
®     If a feature is unstable and has a lot of defects, we will not automate it, because it has to be repeatedly tested manually.
®     If a feature has to be tested repeatedly but we are predicting a requirement change for it, we don't automate it, as changing a manual test case is easier than changing an automation script.
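The rules above can be summarised as a simple decision. The sketch below encodes them with illustrative feature attributes (importance, repeated testing, stability, expected requirement change); the feature names and flags are examples only.

# Sketch: deciding which features to automate, following the rules above.
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    important: bool
    repeatedly_tested: bool
    stable: bool
    requirement_change_expected: bool

def should_automate(f: Feature) -> bool:
    return (f.important and f.repeatedly_tested
            and f.stable and not f.requirement_change_expected)

features = [
    Feature("Compose mail", True,  True,  True,  False),
    Feature("Help",         False, False, True,  False),
    Feature("Trend analysis (new, unstable)", True, True, False, False),
]
for f in features:
    print(f.name, "->", "automate" if should_automate(f) else "do not automate")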

14) DELIVERABLES
It is the output from the testing team. It lists what we will deliver to the customer at the end of the project. It has the following sections,
v  14.1 Test Plan
v  14.2 Test Cases
v  14.3 Test Scripts
v  14.4 Traceability Matrix
v  14.5 Defect Report
v  14.6 Test Execution Report
v  14.7 Graphs and Metrics
v  14.8 Release Note

14.7 Graphs and Metrics
Here, we just mention the types of graphs we will deliver and also give a sample of each graph we will be delivering.


 









                                                                     





                  (Defect Distribution Graph)

                  (Build-wise Graph)

                  (Defect Trend Analysis Graph)
Graph 1 :- in this graph, we depict how many bugs have been found and how many have been fixed in each module.

Graph 2 :- in this graph, we depict how many critical, major and minor bugs have been found in each module, and how many of each have been fixed.

Graph 3 :- this is the build-wise graph, i.e., in each build, how many bugs have been found and fixed in each module. Adding C has introduced a lot of bugs in A and B; adding D has introduced a lot of bugs in A, B and C.

Graph 4 :- the Defect Trend Analysis graph is prepared every month and must be sent to management. It is a kind of forecast. By the end of the project, the "rate of fixing defects" curve must show an upward trend. The Test Lead prepares this graph.

Graph 5 :- the Test Manager prepares this graph. It is prepared to understand the gap between the estimated defects and the actual defects that occurred. This graph helps in better estimation of defects in the future.







Metrics

Defect Distribution Metrics:

Module Name    | Critical      | Major         | Minor
               | Found | Fixed | Found | Fixed | Found | Fixed
Sales          | 40    | 36    | 80    | 30    | 90    | 15
Purchase       | ..
Asset Survey   |

We generate the defect distribution graph (Graph 1) by looking at the above data.
Similarly, we can generate many such metrics.
For ex,

Test Engineer Name | Critical      | Major         | Minor
                   | Found | Fixed | Found | Fixed | Found | Fixed
Bipin              | 40    | 36    | 80    | 30    | 90    | 15
Rajath             | ..
Amith              | ..

In the above table, we maintain a record of all the test engineers in the project: how many bugs each has caught, how many have been fixed, etc. We can use this data for future analysis. When a new requirement comes, we can decide whom to give a complex feature for testing based on the number of bugs each engineer has found; we will be in a better position to know who can handle complex features well and find the maximum number of bugs.
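As an illustration of how Graph 1 could be produced from the defect distribution metrics, here is a sketch using matplotlib. The Sales totals come from the module table above; the Purchase and Asset Survey numbers are made up for illustration.

# Sketch: a module-wise "found vs fixed" bar chart (Graph 1 style).
import matplotlib.pyplot as plt

modules = ["Sales", "Purchase", "Asset Survey"]
found = [40 + 80 + 90, 60, 45]   # total bugs found per module (Sales from the table)
fixed = [36 + 30 + 15, 50, 30]   # total bugs fixed per module

x = range(len(modules))
width = 0.35
plt.bar([i - width / 2 for i in x], found, width, label="Found")
plt.bar([i + width / 2 for i in x], fixed, width, label="Fixed")
plt.xticks(list(x), modules)
plt.ylabel("Number of defects")
plt.title("Defect Distribution")
plt.legend()
plt.show()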


Interview Questions and Tips

1) What are metrics?
Ans) We can mention any of the above.

2) On the last day of the project, i.e., on the release date, we find a critical bug. What will you do? Will you release the product or fix the critical bug?
Ans) First say: the testing team prepares a report and sometimes also provides a suggestion on what can be done, but it is the management team that decides whether to release the product or not.
If the interviewer then asks, "I am asking what you will do, not what the management will do?", answer like this: "I will not release the product with a critical bug, because I want to deliver a high-quality product".
 
 






















14.8 Release Note

 







The product is developed, tested and released to the customer. The name of the release is "Cheetah Release".
The release note contains,
1) List of pending/open bugs
2) List of features added, modified or deleted
3) Platforms (OS, browsers, hardware) on which the product is tested
..
..
4) Platforms on which the product is not tested
5) List of bugs fixed in the current release which were found in the previous release (in production)
..
 





Let us consider that the Cheetah release is the 2nd release of the product, after the 1st release, the Tiger release. Some of the bugs found in the 1st release have been fixed in the 2nd release. A list of the features which have been added, modified or deleted between the 1st release and the 2nd release will also be mentioned here.

6) Procedure to install the software
..
7) Version of the software
..

The Release Note is a document prepared during the release of the product and signed off by the Test Manager.




15)TEMPLATES
This section contains the templates for all the documents which will be used in the project. Only these templates will be used by all the test engineers in the project, so as to provide uniformity across the entire project. The documents covered in the Templates section are,
·         Test Case
·         Traceability Matrix
·         Test Execution Report
·         Defect Report
·         Test Case Review Template

This is how a Test Plan document looks:

Pg 1

CBO_Testplan
Revision History

Version | Author | Reviewed By | Approved By     | Comments                                       | Approval Date
1       | ..     |             | Name of manager | Version 1.0 is developed                       | dd/mm/yyyy
1.1     | ..     | ..          | " "             | Version 1.1 is developed. XYZ feature is added | dd/mm/yyyy

 
 























Pg 2

TABLE OF CONTENTS
1          Objective                    pg 1
2          Scope                          pg 2
3          Approach                   pg 3
..
 
 













Pg 3 – Pg 19
 








Pg 20

REFERENCES
1) CRS
2) SRS
3) FS
4) Design document
 
 











On the 1st page, we initially fill in only the version (written as Draft 1.0), the author, the comments and the reviewed-by column. Later, when the manager approves it, we fill in the approved-by column and the approval date, and also remove "(Draft)" from the version column.
Generally, Test Engineers only review the test plan, and it is approved by the Test Manager.
When new features come, we modify the test plan: we change the version and whichever sections need to be changed. It is again reviewed, updated and approved by the manager. The test plan must be updated whenever changes happen.
The References section (Pg 20) contains all the documents that were used to write the test plan document.

Who writes Test Plan ?
®     Test Lead – 60%
®     Test Manager – 20%
®     Test Engineer – 20%
Thus, we can see from the above that in about 60% of projects the test plan is written by the Test Lead, and so on as shown above.

Who reviews Test Plan ?
®     Test Engineer
®     Test Lead
®     Test Manager
®     Customer
®     Development team

Test Engineer looks at the Test plan from his module point of view.
Test Manager looks at the Test plan from the customer point of view.

Who approves Test Plan ?
®     Test Manager
®     Customer

Who writes Test Cases ?
®     Test Engineer
®     Test Lead

Who reviews Test Cases ?
®     Test Lead
®     Test Engineer
®     Development Team
®     Customer

Who approves Test Cases ?
®     Test Lead
®     Test Manager
®     Customer
