Wednesday, April 29, 2020

SDLC

SDLC stands for Software Development Life Cycle. It is the standard procedure followed to develop software.

It has different models, like
  1. Waterfall Model
  2. Spiral Model
  3. V Model
  4. Prototype Model
  5. Derived Model
  6. Hybrid Model
  7. Agile Model

It has different stages, like
  1. Requirement collection
  2. Feasibility study/Analysis
  3. Design
  4. Coding
  5. Testing
  6. Installation 
  7. Maintenance


Popular Server name

Popular web servers:
Apache HTTP Server.
IIS
Lighttpd
LiteSpeed Web server
---------------------------
Application Servers:
Apache Tomcat
WebLogic
GlassFish
JBoss
WebSphere
---------------------------
Database servers:
Oracle
MySQL
MS SQL Server
DB2
Sybase
Teradata

Sunday, April 19, 2020

Adhoc Testing


AD-HOC Testing (also called Monkey Testing / Gorilla Testing)
Testing the application randomly is called ad-hoc testing.

Why do we do ad-hoc testing?
1) End users use the application randomly and may see a defect, while a professional test engineer uses the application systematically and may not find that same defect. To avoid this scenario, the TE should also test the application randomly (i.e., behave like an end user and test).
For example, after we have tested the application for FT, IT and ST, if clicking on some feature takes us to a blank page instead of the home page (or sometimes the data page), that is a bug. Ad-hoc testing is done to catch these kinds of scenarios.

2) The development team builds the product by looking at the requirements, and the testing team also tests by looking at the same requirements. With this method alone, the testing team may not catch many bugs and may think everything works fine. To avoid this, we also test randomly, behaving like end users.

3) Ad-hoc testing is testing where we do not follow the requirements (we just check the application randomly). Since we do not follow the requirements, we do not write test cases.

Examples of ad-hoc testing for Gmail:
1) Log in to Gmail using a valid username and password. Log out from Gmail. Click the Back button. It should not go back to the Inbox page; if it does, it is a bug. It should go to the Login page and say the session has expired.

2) Log in to Gmail using a valid username and password. Once on the Inbox page, copy the inbox URL from the address bar and paste it into a Notepad file. Log out from Gmail. Now open a new browser window and paste the inbox URL into the address bar. It should not go to the inbox; instead it must go to the Gmail welcome page.

3) Log in to Gmail. Go to Settings and choose Change Password. Set the old password as the new password and see what happens.
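The first two Gmail checks above both probe session handling. A minimal sketch of the idea, using a made-up in-memory SessionStore rather than the real Gmail (the class, URLs and method names are illustrative assumptions, not anything from these notes):

```python
# Hypothetical session store used to illustrate the ad-hoc session checks.

class SessionStore:
    def __init__(self):
        self._active = set()                 # users currently logged in

    def login(self, user):
        self._active.add(user)
        return f"/inbox/{user}"              # URL the browser would show

    def logout(self, user):
        self._active.discard(user)

    def open_url(self, user, url):
        # After logout, any inbox URL must redirect to the login page.
        if user in self._active:
            return url
        return "/login?error=session_expired"

store = SessionStore()
inbox_url = store.login("alice")
store.logout("alice")

# Ad-hoc check (example 2): pasting the saved inbox URL after logout
# must NOT reach the inbox.
assert store.open_url("alice", inbox_url) == "/login?error=session_expired"
print("session-expiry ad-hoc check passed")
```

If the assertion failed, that would be exactly the kind of defect the random back-button and copied-URL checks are meant to expose.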

Let us consider the example of a Fraud Management System in an online banking application.

When we click on the Block Account link, we are taken to the Block Account page, which has several fields. Enter the data and click on Block Account, and that account should be blocked.
Now let us see how we can do ad-hoc testing on this application.
1) Log in as the bank manager, enter the account number, click Block and check whether the account is blocked or not.

2) Before blocking the account, delete the customer whose account is to be blocked, then log in again and try to block it. When we click Block, it should show an error message saying the customer is not available. Here we check the application randomly, and nothing about this is mentioned in the requirements; that is what makes it ad-hoc testing.

3) Suppose some user B transfers money to A, whose account is blocked. In this case too, we should get a message saying A's account is blocked (i.e., by the time B transfers the money, the manager has blocked A's account).
The requirement does not say to check money transfers from other accounts, but the TE tests it anyway, outside the requirements. Sometimes the money even gets transferred without any message that the account is blocked. The TE checks for this as well, and that too is ad-hoc testing.

NOTE :-
·         Ad-hoc testing is basically negative testing, because we are testing outside the requirements
·         Here, the objective is to somehow break the product

When to do ad-hoc testing?
·         Whenever we are free. For example, developers develop the application and give it to the testing team; the testing team is given 15 days for FT, spends 12 days doing FT and uses the remaining 3 days for ad-hoc testing. We always do ad-hoc testing last, because we first concentrate on customer satisfaction
·         After testing as per the requirements, we start with ad-hoc testing
·         When a good scenario comes to mind, we can pause FT, IT or ST and try that scenario as ad-hoc testing. But we should not spend too much time on it, and should immediately resume formal testing
·         If there are more such scenarios, we record them and run them at the end when we have time
Forms of Ad-hoc Testing
1.   Buddy Testing: two buddies, one from the development team and one from the test team, work together to identify defects in the same module. Buddy testing helps the testers develop better test cases, while the development team can make design changes early. This kind of testing usually happens after unit testing is complete.
2.   Pair Testing: two testers are assigned the same modules; they share ideas and work on the same systems to find defects. One tester executes the tests while the other records notes on their findings.
3.   Monkey Testing: testing is performed randomly, without any test cases, in order to break the system.
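Monkey testing can itself be sketched in a few lines: throw random input at a function and verify it never crashes. The parse_amount function below is a made-up example target, not anything from these notes:

```python
# Tiny monkey test: hammer a function with random junk and verify it
# never raises, whatever we throw at it.
import random
import string

def parse_amount(text):
    """Parse a money string like '12.50'; return None for bad input."""
    try:
        value = float(text)
    except ValueError:
        return None
    return value if value >= 0 else None

random.seed(0)                                   # reproducible run
for _ in range(1000):
    length = random.randint(0, 8)
    junk = "".join(random.choice(string.printable) for _ in range(length))
    parse_amount(junk)                           # must not raise

print("monkey test survived 1000 random inputs")
```

Real monkey-testing tools drive a whole UI the same way; the principle is identical: random stimulus, crash-free expectation.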

Thursday, April 16, 2020

Smoke Testing


*** SMOKE TESTING or SANITY TESTING or DRY RUN or SKIM TESTING or BUILD VERIFICATION TESTING or CONFIDENCE TESTING *** (Very important interview question)

Testing the basic or critical features of an application before doing thorough or rigorous testing is called smoke testing.

It is also called Build Verification Testing – because we check whether the build is broken or not.

Whenever a new build comes in, we always start with smoke testing, because every new build may contain changes that have broken a major feature (fixing a bug or adding a new feature could have affected a major portion of the original software).
In smoke testing, we do only positive testing – i.e., we enter only valid data, not invalid data.

Advantages:

  1. Test engineer can find blocker and critical defects in the early stage itself.
  2. Development team will get sufficient time to fix the defects.
  3. Test cycle will not be postponed and hence release will not be delayed.

How to do smoke testing?
List the features to be tested as part of smoke testing and the features not to be tested as part of it.
When doing smoke testing, we do only positive testing (only valid data is entered).
We test only basic or critical features.
We take the basic features and test the important scenarios.

How to write smoke testing scenarios?

1.      To check that when the user enters the test URL, the welcome page is displayed
2.      To check that when the user clicks on the sign-up link, the sign-up page is displayed
3.      To check that when the user enters valid data in all the fields of the sign-up page and clicks on the Submit button, an account is created
4.      To check that when the user enters valid login credentials and clicks on the Login button, the home page is displayed
5.      To verify that the user can compose and send a mail
6.      To verify that the user can receive mails
7.      To verify that sent mails are displayed in Sent Items
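If these scenarios were automated, they could look roughly like the sketch below. MailApp is a hypothetical stub standing in for the application under test (a real suite would drive the UI, e.g. with Selenium); every name here is an assumption for illustration:

```python
# Stand-in for the application under test; only positive (valid) data is
# used, as smoke testing requires.

class MailApp:
    def __init__(self):
        self.users, self.sent = {}, []

    def sign_up(self, user, password):
        self.users[user] = password
        return "account created"

    def login(self, user, password):
        return "home page" if self.users.get(user) == password else "error"

    def send_mail(self, sender, to, body):
        self.sent.append((sender, to, body))
        return "sent"

app = MailApp()
# Scenario 3: valid sign-up creates an account.
assert app.sign_up("alice", "s3cret") == "account created"
# Scenario 4: valid credentials reach the home page.
assert app.login("alice", "s3cret") == "home page"
# Scenarios 5 and 7: a sent mail shows up in Sent Items.
app.send_mail("alice", "bob", "hi")
assert ("alice", "bob", "hi") in app.sent
print("smoke scenarios passed")
```

Because each check covers one critical feature with valid data only, the whole run stays fast, which is exactly what lets smoke testing gate every new build.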

When do we do smoke testing?
1.      As soon as we get a new build from the development team (because every new build may contain changes that have broken a major feature).
2.      Before the customer/client does acceptance testing, they also do smoke testing, to verify that all the files were received and the build is installed properly.
3.      The build engineer/release engineer does smoke testing to verify whether the build is installed properly on the test server or the production server.
4.      The developer does smoke testing after WBT, before giving a build to the testing team.

Why do we do smoke testing?
1.      To check whether the software is testable or not.
2.      If we find bugs during smoke testing on the first day itself, we send them to the developers, so they get sufficient time to fix the defects.
3.      To verify whether the build is installed properly or not.
4.      When developers give a build, they have made some changes; those changes might have affected the old basic or critical features, and smoke testing is done to find that out.
5.      It is like a health check of the product, so smoke testing should be done.
6.      It is like build verification testing: we check whether the build is broken or not (if the build has a large number of defects, it is called a broken build).

Important Points to Remember
·         When doing smoke testing, we do only positive testing (only valid data is entered)
·         Here, we test only basic or critical features
·         Here, we take the basic features and test the important scenarios
·         Before doing thorough FT/IT/ST, we do smoke testing
·         Before acceptance testing, customers do smoke testing
·         Once the software is deployed to the production server, smoke testing should be done before the software is used for business

Types of Smoke testing:

Smoke testing is classified into two types,
·         Formal Smoke Testing – the development team sends the software to the test lead. The test lead instructs the testing team to do smoke testing and to send a report afterwards. Once the testing team is done with smoke testing, they send the smoke testing report to the test lead.
·         Informal Smoke Testing – here, the test lead says the product is ready and to start testing, without specifying smoke testing. The testing team still starts by doing smoke testing on the product.

What are the ways to do smoke testing?
There are 2 ways to do smoke testing.
1.      Manual
2.      Automation
Do you write smoke test scenarios?
Yes

Can we automate smoke test scenarios?
Yes

What is the difference between smoke and sanity testing?
As far as I know there is no difference between smoke and sanity testing, but going through some websites and documents I found that the following differences are sometimes listed:



Smoke Testing                                          | Sanity Testing
-------------------------------------------------------|-------------------------------------------------------
It is a wide and shallow testing approach              | It is a deep and narrow testing approach
Smoke testing is positive testing                      | It is both positive and negative testing
Here we document scenarios and test cases (scripted)   | Here we don't document scenarios and test cases (unscripted)
We can go for automation                               | We don't go for automation
It is done by both developer and tester                | It is done by only the tester


Build verification testing:
As soon as we get the build, we first verify whether there are any blocker bugs or not; this is called build verification testing.

Confidence Testing:
While doing smoke testing we get to know that there are no blocker bugs in the basic features; this gives confidence in the build, hence the name confidence testing.

Dry Run – a dry run is a testing process where the effects of a possible failure are intentionally mitigated. For example, an aerospace company may conduct a "dry run" of a takeoff using a new aircraft on a runway before the first test flight.

If you get to know on the first day itself that the product is not testable, how will you spend your time as a test engineer?

I will spend my time analyzing and understanding the requirements, identifying scenarios and writing test cases.






Wednesday, April 8, 2020

Defect status


·  Rejected: if the developer does not consider the defect to be genuine, it is marked as 'Rejected' by the developer.
·  Duplicate: if the developer finds that the defect is the same as another defect, or that the concept of the defect matches another defect, its status is changed to 'Duplicate' by the developer.
·  Deferred: if the developer feels that the defect is not of high priority and can be fixed in an upcoming release, he can change the status of the defect to 'Deferred'.
What type of defect is a not-reproducible defect?
Answer: a defect that does not occur in every execution, is reproducible only in some instances, and whose steps have to be captured as proof with the help of screenshots, is called a 'not reproducible' defect.

Monday, April 6, 2020

Defect Life Cycle



DEFECT LIFE CYCLE

[Figure: defect life cycle diagram – Open, Assigned, Fixed, Closed / Re-open, with additional outcomes Reject, Duplicate, Cannot be fixed, Postponed, Not reproducible and RFE (Request for Enhancement)]
The customer gives the requirements, the developers develop the software, and the testing team writes test cases by looking at the requirements.
The developer develops the product, the test engineer starts testing it and finds a defect; now the TE must send the defect to the development team.
He prepares a defect report and sends a mail to the development lead saying "bug open".
The development lead looks at the mail and at the bug, works out which development engineer developed the feature that has the bug, and sends the defect report to that developer saying "bug assigned".
The development engineer fixes the bug and sends a mail to the test engineer saying "bug fixed", with the development lead in CC.
Now the TE takes the new build in which the bug is fixed and re-tests it; if the bug is really fixed, he sends a mail to the developer saying "bug closed", again with the development lead in CC.
Every bug has a unique number.
If the defect is still there, it is sent back as "bug reopen".
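The open/assigned/fixed/closed/re-open flow above can be sketched as a small state machine. The transition table below is a simplification of the cycle described in these notes, and the exact set of allowed moves is an assumption for illustration:

```python
# Simplified defect life cycle; statuses follow the flow in these notes.
TRANSITIONS = {
    "OPEN":     {"ASSIGNED", "REJECTED", "DUPLICATE", "POSTPONED"},
    "ASSIGNED": {"FIXED", "CANNOT_BE_FIXED", "NOT_REPRODUCIBLE"},
    "FIXED":    {"CLOSED", "REOPEN"},
    "REOPEN":   {"ASSIGNED"},
}

class Defect:
    def __init__(self, defect_id):
        self.defect_id, self.status = defect_id, "OPEN"

    def move_to(self, new_status):
        # Refuse moves the life cycle does not allow (e.g. OPEN -> CLOSED).
        if new_status not in TRANSITIONS.get(self.status, set()):
            raise ValueError(f"{self.status} -> {new_status} not allowed")
        self.status = new_status

bug = Defect(25)
bug.move_to("ASSIGNED")   # development lead assigns it
bug.move_to("FIXED")      # developer fixes it
bug.move_to("REOPEN")     # re-test shows the defect is still there
bug.move_to("ASSIGNED")
bug.move_to("FIXED")
bug.move_to("CLOSED")     # re-test passes this time
print(bug.status)         # -> CLOSED
```

Real defect tracking tools (Bugzilla, for instance) enforce exactly this kind of table so a bug cannot jump straight from open to closed without a fix and a re-test.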

“REJECT” BUG

Now, when the TE sends a defect report, the development lead may look at it and reject the bug.

A bug is rejected because of:

1) Misunderstanding of requirements

2) While installing or configuring the product, the testing team wrongly configured or installed it and then found a bug in the product. They send it to the development team, and the developer says "reject" because, looking at the defect report, he realizes the testing team did not install the product correctly.

3) Referring to old requirements
The testing team may be referring to old requirements while testing the product.
For example, in the first build the developers give the product for testing with a Sales link button; the test engineer tests it and reports any defects. In the second build, the customer gives the development team a requirement change asking them to remove the Sales link button, but the testing team is not aware of this change. When the development team gives the new build, the Sales button has of course been removed; the test engineer, still referring to the old requirements that include the Sales link button, reports its absence as a bug. The development team then replies, "refer to the new requirements, in which the Sales link button has been removed".

4) Because of extra features
Consider the example and figure shown on page 130.
Here, a "Help" button has been developed by the developer as an extra feature that is not part of the requirements. When the TE looks at the application and the requirements, the Help feature is not in the requirements, so he reports it as a bug. The developer rejects it, saying "it is an extra feature". The test engineer re-opens the bug, replying "update the requirements". Updating the requirements means a lot of running around for the development team, talking to the customer and so on, so the developer either removes the feature or talks to the customer.

“DUPLICATE” BUG
The TE finds a bug, prepares a defect report and assigns it a number – say he sends a bug report numbered Bug 25. The developer replies, "Bug 25 is a duplicate of Bug 10", i.e., another test engineer has already reported that bug earlier and it is being fixed.

Why do we get duplicate bugs?
·         Because of common features – say test engineers A and B are testing an application. Both must log in to the application with a valid username and password before they can go deeper and test their own features. A enters valid credentials, clicks on the Login button and gets a blank page; this is a bug, so A prepares a defect report and sends it to the developer. After some time, B also tries to log in, finds the same bug, prepares a defect report and sends it to the developer, but the developer sends B's report back saying "it is a duplicate".
·         B finds a bug in A's module – say A and B are each testing their own features, doing functional and integration testing. In one scenario, B needs to go to A's feature (say m12) and send data to his own feature (say m23). When B clicks on m12, he finds a bug; although it is A's feature, he can still prepare a defect report, and he sends it to the development team. Later, A finishes testing his other features, comes to m12, finds the same defect and sends a defect report, but the developers send it back saying "it is a duplicate".

How to avoid duplicate bugs?
[Figure: a QA server containing a Defect Repository folder (CBO_DEFECTS) of numbered defect reports, searched with MS-Search using keywords such as "Sales"]

Let us say test engineer A is testing the Sales module and finds a defect: he enters all the information, and when he clicks on the Submit button it goes to a blank page. He prepares a defect report in which he explains the defect.

Now, before he sends it to the developer, A logs into a server named QA. There he goes into a folder called the "Defect Repository", which holds all the defect reports prepared by TEs and sent to developers. He opens MS-Search, gives "Sales" as the search keyword and hits the search button; it then looks inside the folder for any defect report containing "Sales". If the search returns no files, A first copies his defect report into the defect repository and then sends it to the development team, who assign it to a development engineer.
A few days later, another TE, B, finds the same defect and also prepares a report. Before he sends it to the development team, he logs onto QA, searches the Defect Repository for the "Sales" keyword and finds the report earlier sent by A, so he does not send the defect to the development lead.

But whenever the search yields results, before concluding that his defect has already been reported, the TE must first read the matching defect report and confirm that it really is the same bug he has found; it might contain the keyword "Sales" yet describe some other defect in the Sales feature.
For example, TE A is testing the password text field and finds that it accepts 12 characters when the requirement says it must accept only 10 – that is a bug. So he prepares a defect report containing the keyword "password", stores it in the defect repository and sends it to the dev team. Now another TE, B, is also testing the password field and finds a bug where the field also accepts a blank space character. Before he prepares a report, he searches the defect repository for the "password" keyword and finds the earlier report sent by A; but reading it, he realizes it is a different defect from the one he has found. So he prepares a defect report about the blank space acceptance, stores it in the defect repository and also sends it to the development team.
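The repository search described above can be sketched in a few lines. The file names and report texts below are made up for illustration; a real repository would be a shared folder of documents rather than a dict:

```python
# Toy defect repository: report name -> report text.
repository = {
    "DEF-010.txt": "Sales link button goes to a blank page after submit",
    "DEF-014.txt": "Password field accepts 12 characters, limit is 10",
}

def search_repository(keyword):
    """Return report names whose text mentions the keyword (MS-Search style)."""
    keyword = keyword.lower()
    return [name for name, text in repository.items() if keyword in text.lower()]

hits = search_repository("password")
print(hits)  # the earlier password report turns up

# Reading the hit shows it is a *different* password bug, so a new
# report about blank-space acceptance should still be filed.
new_defect = "Password field accepts a blank space character"
is_duplicate = any(new_defect.lower() in repository[name].lower() for name in hits)
print("duplicate" if is_duplicate else "file new report")
```

The second step mirrors the note's warning: a keyword hit alone does not prove a duplicate; the TE still has to read the matching report.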

CANNOT BE FIXED
Sometimes the test engineer finds a bug and sends it to the development lead, who looks at it and sends it back saying "cannot be fixed".
Why does this happen? Because,
·         The technology itself does not support a fix, i.e., the programming language we are using does not have the capability to solve the problem
·         The bug is in the root of the product. If it is a minor bug, the development lead says "cannot be fixed"; but if it is a critical bug, the development lead cannot reject it
·         The cost of fixing the bug is more than the cost of the bug itself (the cost of the bug means the loss incurred because of it)

POSTPONED, or fixed in the next release
a) We find a bug at the end of the release (it could be major or minor, but not critical). The developers won't have time to fix it, so the bug is postponed and fixed in the next release, and its status remains "open".



 

b) [Figure: an application in which the test engineers find a bug]
The developers have built the above application, the test engineers are testing it and they find a bug; the bug is sent to the development team.
The development team looks at the report and replies to the testing team that they will not fix the bug for now, because the customer is thinking of making some changes to that module, or may ask for a requirement change in which the module itself is removed. So they won't fix the bug for now; but if no requirement change happens, they will fix it.

c) The bug is minor and is in a feature exposed only to internal users.
For example, consider a Citibank employee using software to keep track of accounts and customer details. If the "Sort by name" feature is not working, it is a minor bug, because the development team can always fix it in the next release.

NOT REPRODUCIBLE

DEFECT REPORT
1) Open browser and enter www.citibank.com and click on Go button
2) Login using valid username and password
3) In the homepage, click on the SALES link button
4) It goes to a blank page

[Figure: the application opened in Mozilla Firefox, where clicking the SALES link leads to a blank page]

Consider the above application: the TE is testing it in Mozilla Firefox, and when he clicks on the SALES link button it goes to a blank page. This is a bug, so the TE prepares a defect report and sends it to the development team.
[Figure: the same application opened in Internet Explorer]

The development lead looks at the defect report and opens the application in Internet Explorer; when he clicks on the Sales link button, the Sales page opens and the application works perfectly.
So what was the problem? The TE did not specify in the defect report which browser the application should be opened in. The application did not work for the TE in Mozilla Firefox, but when the developer opened it in Internet Explorer it worked fine.
Since the developer cannot see any defect, he replies, "I am not able to find the defect" or "I am not able to reproduce the defect".

Why we get “not reproducible” defects?
1.      Because of platform mismatch
2.      Because of improper defect report
3.      Because of data mismatch
4.      Because of build mismatch
5.      Because of inconsistent defects

1) Because of platform mismatch
Ø  Because of OS mismatch
Ø  Because of browser mismatch
Ø  Because of ‘version of browser’ mismatch
Ø  Because of ‘settings of version’ mismatch

2) Because of improper defect report
An example of this is the figure and explanation on pages 136 and 137: there, the TE should have specified that the application must be opened in Mozilla Firefox.
Let us consider another example to explain an "improper defect report".


DEFECT REPORT
1) Open browser and enter www.citibank.com and click on Go button
2) Enter user details and click on Submit button
3) It goes to blank page
The TE is testing the application above: when he enters all the user details, selects the checkbox and clicks on the Submit button, it goes to a blank page; but when he does not select the checkbox and clicks on Submit, a new user account is created, i.e., it works fine. The TE prepares a defect report and sends it to the development lead.
The dev team looks at the report (as shown in the box) and tests the application; it works absolutely fine.
Then what happened? Observe the defect report sent by the testing team: nowhere has he mentioned "select the checkbox and click on the Submit button". Because of the improper defect report, the development team is not able to reproduce the defect.

3) Because of data mismatch
Consider the example shown below.

[Figure: email application – login as ABC and click on Inbox in the homepage of ABC]

The above email application has been developed and the TE is testing it. After repeated testing with the username ABC and a valid password, there are about 101 mails in ABC's inbox. Now, when the TE opens the application again and clicks on Inbox, no mails are displayed; instead he gets a blank page. The defect he has found is that when the number of mails in the Inbox exceeds 100, a blank page appears. So he prepares a defect report as shown in the box below:

1) Open the browser and enter the following URL: www.email.com
2) Login using valid username and password
3) Click on Inbox
4) Blank page is displayed
The development team looks at the defect report; the developer logs in to the application using the username XYZ and clicks on Inbox – no blank page is displayed, and the Inbox works fine.
What happened here? The TE should have mentioned in the defect report "log in as ABC" or "log in to any mailbox that has more than 100 mails". The developer therefore does not find any defect and marks it as a "not reproducible" defect.
4) Because of build mismatch
The developer develops build 1 and gives it for testing; build 1 has two defects, bug 1 and bug 2. While testing, the TE finds bug 1 and reports it to the development team. When the development team fixes bug 1, bug 2 automatically gets fixed as well. The development team then produces build 2 and gives it to the testing team. Just as the testing team finishes testing build 1, they find bug 2 and report it; the development team, however, cannot find any defect because they are now testing build 2, so they report back saying "the bug is not reproducible".

5) Because of inconsistent defects
To explain this, consider an example. The TE is testing an email application: he composes a mail as user A and sends it to user B; he then logs out of A, logs into B and checks B's inbox, but no mail is there, so he has found a defect. Just to confirm the defect before sending a defect report, the TE again logs into A, sends a mail to B, logs out of A, logs into B and checks B's inbox – and this time the mail is there!
Thus we do not know when the defect appears and when the feature works fine. This is called an inconsistent defect.

REQUEST FOR ENHANCEMENT (RFE)
The test engineer finds a bug and sends it to the development team. When the development team looks at the report sent by the TE, they know it is a bug, but they say it is not a bug because it is not part of the requirements.
Let us consider the example shown below.

[Figure: a form with several input fields but no CLEAR button]
In the above example, the TE is testing these fields. After entering all the data into the required fields, he realizes that he needs to clear all the data, but there is no CLEAR button, so he has to clear every field manually. He reports this as a bug, but the development team looks at the defect report and sees that a Clear button is not mentioned in the requirements. So they neither reject nor close the bug; instead they reply to the TE that it is an RFE.
The developers agree that it is a bug, but say that the customer did not specify these details, and hence call it an RFE.
If a defect is classified as an RFE, we can bill the customer for it; otherwise, the development team needs to fix the bug.
The development team always has a tendency to call a defect an RFE, so the TE needs to check the justification they give: if it is valid, he accepts it as an RFE, but if it is not, he responds to the development team with a proper justification.


INTERVIEW QUESTIONS & TIPS

1) Explain Defect Life Cycle ( **** VERY VERY IMPORTANT QUESTION ****)

While testing the product, the TE finds a defect, prepares a defect report, sets its status to "OPEN" and sends it to the development lead. The development lead looks through the defect report; if the defect is valid, he finds out who developed the module in which the defect was found, assigns the defect to that development engineer and changes the status to "ASSIGNED". The development engineer fixes the defect and changes the status to "FIXED". The build is then sent to the TE for re-testing; the TE tests the fix, and if the defect has really been fixed he changes the status to "CLOSED". If the defect is still there, he changes the status to "RE-OPEN".

Sometimes the TE finds a defect and sends it to the development lead, and the development lead looks at the report and returns it with one of the following statuses, explaining why:
REJECT (explain)
DUPLICATE (explain)
CANNOT BE FIXED (explain)
POSTPONED (explain)
NOT REPRODUCIBLE (explain)
RFE (explain)
DEFECT REPORT

Defect ID – a unique number given to the defect

Test Case Name – whenever we find a defect, we send the defect report, not the test case, to the developer. For defect tracking, we track only the defect report and not the test case; the test case is only a reference for the TE. We always send only the defect report whenever we catch a bug.

When we are doing ad-hoc testing, no test case is written, because ad-hoc scenarios are "out of the box" testing scenarios. If we find a critical bug, we convert that ad-hoc scenario into a test case and send it to the development team.



Given below is how a defect report looks; it is an MS Word file.


INTERVIEW TIPS
Q) What are the attributes of a defect report? (OR) What is the mode of communicating a defect?
Ans) The answer to both questions is the same: just explain the defect report.

[Figure: sample defect report in MS Word]
Defect Evaluation Team / Bug Counseling Team

[Figure: test engineers mailing defect reports to the DET group mail id]
Consider the figure above to explain the bug review meeting / bug counseling.
The Defect Evaluation Team (DET) consists of the development lead, the development manager, senior development engineers and the test lead.
They have a group mail id – say xyz_det@abc.com. Whenever a TE finds a bug, he sends it to the DET. Although the mail goes to everybody in the DET, only the development lead (DL) looks into the bug report and assigns the bug. This process continues from Monday morning to Friday afternoon; by this time about 100 bugs have been reported by the TEs, of which 60 have been assigned and the remaining 40 are pending.
On Friday evening, the DET holds a meeting where they discuss the pending bugs and decide whether to assign or reject them; for every decision they have to give proper justification.
This meeting is called the bug review meeting / bug counseling. By the end of the meeting (i.e., the first cycle), the status of all the pending bugs will have changed from pending to other statuses.
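The weekly triage above (100 reported, 60 assigned, 40 pending) can be sketched as a simple count over made-up sample data; the bug ids and statuses are illustrative, not from any real tracker:

```python
# Toy week of reports: bug id -> status set by the development lead.
# The first 60 bugs were assigned during the week; the rest are pending.
week_reports = {i: ("assigned" if i <= 60 else "pending") for i in range(1, 101)}

# The Friday bug review meeting works through exactly this pending list.
pending = [bug_id for bug_id, status in week_reports.items() if status == "pending"]
print(len(week_reports), "reported,", len(pending), "pending for the bug review meeting")
```

A real defect tracking tool produces this list with a saved query or filter on status; the point is that by Friday the pending set is explicit and the meeting empties it.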

DEFECT TRACKING

Defects can be tracked in two ways: manually, or using automation (a defect tracking tool).

How to track a defect manually?
1)      Find a defect
2)      Ensure that it is not a duplicate (i.e., verify it in the defect repository)
3)      Prepare a defect report
4)      Store it in the defect repository
5)      Send it to the development team
6)      Manage the defect life cycle (i.e., keep updating the status)

Tracking defects using automation (i.e., a defect tracking tool)
The various tools available are:
·         Bugzilla
·         Mantis
·         Rational ClearQuest
·         Telelogic
·         Bug_track
·         QC (Quality Center) – a test management tool, part of which is used to track defects

There are 2 categories of tool usage:
1) Service based – service-based companies have many projects for different clients, and each project may have a different defect tracking tool.
2) Product based – product-based companies use only one defect tracking tool.

Now, let us see how the defect tracking tool works.