Thursday, January 30, 2020

ISTQB Exam Questions On Equivalence Partitioning And Boundary Value Analysis




Question #1)
One of the fields on a form contains a text box that accepts numeric values in the range of 18 to 25. Identify the invalid Equivalence class.
a)    17
b)    19
c)    24
d)    21
Question #2)
In an Examination, a candidate has to score a minimum of 24 marks in order to clear the exam. The maximum that he can score is 40 marks.  Identify the Valid Equivalence values if the student clears the exam.
a)    22,23,26
b)    21,39,40
c)    29,30,31
d)    0,15,22
Question #3)
One of the fields on a form contains a text box that accepts alphanumeric values. Identify the Valid Equivalence class
a)    BOOK
b)    Book
c)    Boo01k
d)    Book

Question #4)
The switch is switched off once the temperature falls below 18, and it is turned on when the temperature is more than 21. Identify the Equivalence values which belong to the same class.
a)    12,16,22
b)    24,27,17
c)    22,23,24
d)    14,15,19
Question #5)
A program validates a numeric field as follows: values less than 10 are rejected, values between 10 and 21 are accepted, values greater than or equal to 22 are rejected. Which of the following input values cover all of the equivalence partitions?
a)    10,11,21
b)    3,20,21
c)    3,10,22
d)    10,21,22
Question #6)
A program validates a numeric field as follows: values less than 10 are rejected, values between 10 and 21 are accepted, values greater than or equal to 22 are rejected. Which of the following covers the MOST boundary values?
a)    9,10,11,22
b)    9,10,21,22
c)    10,11,21,22
d)    10,11,20,21
Question #7)
In a system designed to work out the tax to be paid:
An employee has £4000 of salary tax-free.
The next £1500 is taxed at 10%.
The next £28000 after that is taxed at 22%.
Any further amount is taxed at 40%.
To the nearest whole pound, which of these groups of numbers fall into three DIFFERENT equivalence classes?
a)    £4000; £5000; £5500
b)    £32001; £34000; £36500
c)    £28000; £28001; £32001
d)    £4000; £4200; £5600
Question #8)
In a system designed to work out the tax to be paid:
An employee has £4000 of salary tax-free.
The next £1500 is taxed at 10%.
The next £28000 after that is taxed at 22%.
Any further amount is taxed at 40%.
To the nearest whole pound, which of these is a valid Boundary Value Analysis test case?
a)    £28000
b)    £33501
c)    £32001
d)    £1500
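The tax rules in Questions #7 and #8 can also be expressed as a small function. The sketch below (Python, hypothetical function name) shows where the partition boundaries fall: £4000, £5500 (4000 + 1500) and £33500 (4000 + 1500 + 28000), which is why a value such as £33501 is a boundary value analysis candidate.

    def tax_due(salary: int) -> float:
        """Tax rules from Questions #7 and #8 (salary in whole pounds)."""
        tax = 0.0
        if salary > 33500:                 # above 4000 + 1500 + 28000
            tax += (salary - 33500) * 0.40
            salary = 33500
        if salary > 5500:                  # above 4000 + 1500
            tax += (salary - 5500) * 0.22
            salary = 5500
        if salary > 4000:
            tax += (salary - 4000) * 0.10
        return tax

    # The partition edges are 4000, 5500 and 33500, so values such as
    # 4000/4001, 5500/5501 and 33500/33501 are boundary value test cases.
    assert tax_due(4000) == 0.0
    assert round(tax_due(5500), 2) == 150.0      # the 1500 band taxed at 10%
    assert round(tax_due(33501), 2) == 6310.40   # first pound above 33500 at 40%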
Question #9)
Given the following specification, which of the following values for age are in the SAME equivalence partition?
If you are less than 18, you are too young to be insured.
Between 18 and 30 inclusive, you will receive a 20% discount.
Anyone over 30 is not eligible for a discount.
a)    17, 18, 19
b)    29, 30, 31
c)    18, 29, 30
d)    17, 29, 31


Test Case Design Techniques

What is a Software Testing Technique?

Software Testing Techniques help you design better test cases. Since exhaustive testing is not possible, these techniques help reduce the number of test cases to be executed while increasing test coverage.
The different types of test case design techniques are:
  • Boundary Value Analysis (BVA)
  • Equivalence Class Partitioning
  • Decision Table Based Testing
  • State Transition
  • Error Guessing
Decision Table Based Testing
A decision table is also known as a Cause-Effect table. This software testing technique is used for functions which respond to a combination of inputs or events.
First, we should identify the functionalities where the output depends on a combination of inputs.
For every such function, you need to create a table and list all the combinations of inputs and their respective outputs. This helps to identify conditions that would otherwise be overlooked by the tester.
Following are the steps to create a decision table:
  • List the inputs in rows
  • Enter all the rules in the columns
  • Fill the table with the different combinations of inputs
  • In the last row, note down the output against each input combination.
Example: A submit button in a contact form is enabled only when all the inputs are entered by the end user.
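As a rough illustration (not taken from any specific project, and with made-up field names), the submit-button example above can be written as a small decision table in Python and checked rule by rule:

    def submit_enabled(name_entered, email_entered, message_entered):
        # The submit button is enabled only when all inputs are entered.
        return name_entered and email_entered and message_entered

    # Decision table: each rule is one combination of inputs (conditions)
    # and the expected output (action).
    decision_table = [
        # (name, email, message) -> submit enabled?
        ((True,  True,  True),  True),
        ((True,  True,  False), False),
        ((True,  False, True),  False),
        ((False, True,  True),  False),
        ((False, False, False), False),
    ]

    for inputs, expected in decision_table:
        assert submit_enabled(*inputs) == expected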

State Transition
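State Transition testing is used when the system behaves differently for the same input depending on its current state or on previous events; test cases are derived from a state diagram or state table covering the states, events and transitions. As a rough, hypothetical illustration (the lock-after-three-failed-logins case is a common textbook example, not from any specific application), such a table can be written and checked in Python:

    # Hypothetical login screen that locks the account after three
    # consecutive invalid password attempts.
    VALID_TRANSITIONS = {
        # (current_state, event) -> next_state
        ("start",     "valid_password"):   "logged_in",
        ("start",     "invalid_password"): "attempt_1",
        ("attempt_1", "valid_password"):   "logged_in",
        ("attempt_1", "invalid_password"): "attempt_2",
        ("attempt_2", "valid_password"):   "logged_in",
        ("attempt_2", "invalid_password"): "locked",
    }

    def next_state(state, event):
        # Events not in the table are invalid transitions and keep the state.
        return VALID_TRANSITIONS.get((state, event), state)

    # One test per transition gives coverage of every single transition.
    assert next_state("start", "invalid_password") == "attempt_1"
    assert next_state("attempt_1", "invalid_password") == "attempt_2"
    assert next_state("attempt_2", "invalid_password") == "locked"
    assert next_state("attempt_1", "valid_password") == "logged_in"
    assert next_state("locked", "valid_password") == "locked"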


Wednesday, January 29, 2020

Test Case

Test Scenario: It is high-level documentation of a customer's business workflow.

Test Case: It is a document which covers all possible scenarios for a specific requirement.


What are the drawbacks of not writing test cases?

What will happen if you look at the requirements and test the application directly?
What are the drawbacks if you test the application by directly referring to the requirements?


  • There is no consistency in the test execution.
  • Test engineers will miss a lot of defects.
  • Quality of testing varies from person to person.
  • Testing depends on the memory power of the test engineer.
  • Chances are we might test the same scenarios again and again.
  • Testing varies from person to person: some testers come up with creative scenarios and test the software for all possible scenarios, while others may not come up with creative scenarios.
  • Test coverage will not be good.


When do we write test cases?



  • When the customer gives a new requirement, we should write test cases.
  • When the customer wants to add new or extra features, we should write test cases.
  • When the customer wants to modify an existing feature, we should write test cases.
  • While testing the software, if the test engineer comes up with creative scenarios, we should write test cases for them.
  • While testing the software, if the test engineer finds a defect and no test case exists for it, we should add a test case for that defect.
Why should we write test cases?


  • To have consistency in test execution
  • To have better test coverage
  • To depend on the process rather than on a person
  • To avoid having to train every engineer on the product or on the requirements.
  • It is the only document which acts as proof to the customer, project managers and management that we have tested everything.
  • It acts as a base document for writing automation test scripts.
  • If you document all the test cases in a well-organised manner, test execution takes much less time.

Test Case Template:

PROJECT NAME: Name of the project the test cases belong to
MODULE NAME: Name of the module the test cases belong to
REFERENCE DOCUMENT: Mention the path of the reference documents (if any such as Requirement Document, Test Plan, Test Scenarios etc.,)
CREATED BY: Name of the Tester who created the test cases
DATE OF CREATION: When the test cases were created
REVIEWED BY: Name of the Tester who reviewed the test cases
DATE OF REVIEW: When the test cases were reviewed
EXECUTED BY: Name of the Tester who executed the test case
DATE OF EXECUTION: When the test case was executed
TEST CASE ID: Each test case should be represented by a unique ID. It’s good practice to follow a naming convention for better understanding and easier identification.
TEST SCENARIO: Test Scenario ID or title of the test scenario.
TEST CASE: Title of the test case
PRE-CONDITION: Conditions which need to be met before executing the test case.
TEST STEPS: Mention all the test steps in detail, in the order in which they should be executed.
TEST DATA: The data which could be used as an input for the test cases.
EXPECTED RESULT: The result which we expect once the test case is executed. It might be anything such as the home page, a relevant screen, an error message, etc.
POST-CONDITION: Conditions which need to be achieved when the test case has been successfully executed.
ACTUAL RESULT: The result which the system shows once the test case is executed.
STATUS: If the actual and expected results are the same, mark it as Passed; else mark it as Failed. If a test fails, it has to go through the bug life cycle to be fixed.

Test case ID: Unique ID is required for each test case. Follow some convention to indicate the types of the test. For Example, ‘TC_UI_1' indicating ‘user interface test case #1'.
Test priority (Low/Medium/High): This is very useful while test execution. Test priority for business rules and functional test cases can be medium or higher whereas minor user interface cases can be of a low priority. Test priority should always be set by the reviewer.
Module Name: Mention the name of the main module or the sub-module.
Test Designed By: Name of the Tester.
Test Designed Date: Date when it was written.
Test Executed By: Name of the Tester who executed this test. To be filled only after test execution.
Test Execution Date: Date when the test was executed.
Test Title/Name: Test case title. For Example, verify the login page with a valid username and password.
Test Summary/Description: Describe the test objective in brief.
Pre-conditions: Any prerequisite that must be fulfilled before the execution of this test case. List all the pre-conditions in order to execute this test case successfully.
Dependencies: Mention any dependencies on the other test cases or test requirements.
Test Steps: List all the test execution steps in detail. Write test steps in the order in which they should be executed. Make sure to provide as many details as you can.
Pro Tip: In order to manage a test case efficiently with fewer fields, use this field to describe the test conditions, test data and user roles for running the test.
Test Data: The test data to be used as an input for this test case. You can provide different data sets with exact values to be used as an input.
Expected Result:  What should be the system output after test execution? Describe the expected result in detail including message/error that should be displayed on the screen.
Post-condition: What should be the state of the system after executing this test case?
Actual result: The actual test result should be filled after test execution. Describe the system behavior after test execution.
Status (Pass/Fail): If an actual result is not as per the expected result, then mark this test as failed. Otherwise, update it as passed.
Notes/Comments/Questions: If there are some special conditions to support the above fields, which can’t be described above or if there are any questions related to expected or actual results then mention them here.
Add the following fields if necessary:
Defect ID/Link: If the test status is failed, then include the link to the defect log or mention the defect number.
Test Type/Keywords: This field can be used to classify the tests based on test types. For Example, functional, usability, business rules, etc.
Requirements: The requirements for which this test case is being written; preferably the exact section number of the requirement document.
Attachments/References: This field is useful for complex test scenarios in order to explain the test steps or expected results using a Visio diagram as a reference. Provide the link or location to the actual path of the diagram or document.
Automation? (Yes/No): Whether this test case is automated or not. It is useful to track the automation status when test cases are automated.
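If you keep test cases in code or in a tool export, the same template can be represented as a structured record. Below is a minimal Python sketch; the field names mirror the template above and the values are purely illustrative.

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        # Fields mirror the test case template described above.
        test_case_id: str
        module_name: str
        title: str
        priority: str
        preconditions: list = field(default_factory=list)
        test_steps: list = field(default_factory=list)
        test_data: dict = field(default_factory=dict)
        expected_result: str = ""
        actual_result: str = ""
        status: str = "Not Executed"

    tc = TestCase(
        test_case_id="TC_UI_1",
        module_name="Login",
        title="Verify the login page with a valid username and password",
        priority="High",
        preconditions=["User account exists", "Login page is reachable"],
        test_steps=["Open the login page",
                    "Enter a valid username and password",
                    "Click Login"],
        test_data={"username": "validUser", "password": "validPass123"},
        expected_result="User lands on the home page",
    )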




Software Testing

The process of finding the defects in the software is called Software Testing.

Why do we do software testing?

  1. Software testing is required to find the defects in the software.
  2. To ensure the software is working as per the customer's requirement specification.
  3. In order to deliver good quality software to the customer.
  4. Testing is required for effective performance of the software application.
  5. It’s important to ensure that the application does not result in any failures, because failures can be very expensive in the future or in the later stages of development.
  6. It makes the software more reliable and user-friendly to operate.
  7. It’s required to stay in business.

Different Types Of Software Testing

Given below is the list of some common types of Software Testing:
Functional Testing types include:
  • Unit Testing
  • Integration Testing
  • System Testing
  • Sanity Testing
  • Smoke Testing
  • Interface Testing
  • Regression Testing
  • Beta/Acceptance Testing
Non-functional Testing types include:
  • Performance Testing
  • Load Testing
  • Stress Testing
  • Volume Testing
  • Security Testing
  • Compatibility Testing
  • Install Testing
  • Recovery Testing
  • Reliability Testing
  • Usability Testing
  • Compliance Testing
  • Localization Testing



Test engineers should not be involved in fixing bugs because:
1) if they spend time fixing a bug, they lose time that could be spent catching more defects in the software;
2) fixing a defect might break a lot of other features. Thus, testers should always identify defects and developers should always be the ones fixing them.

Sunday, January 26, 2020

Exploratory Testing

Exploring the application to understand each and every feature, identifying all possible scenarios based on that understanding, documenting them, and then testing the application by referring to the documented scenarios is called Exploratory Testing.

Drawbacks:
The test engineer might mistake a feature for a defect, or a defect for a feature.
If any feature is not implemented, the test engineer may not be able to notice that it is missing.

Below are some tips/tricks that you can use in ET:
  • Divide the application into modules and bifurcate modules into different pages. Start your ET from the pages. This will give the right coverage.
  • Make a checklist of all the features and put a check mark when that is covered.
  • Start with a basic scenario and then gradually enhance it to add more features to test it.
  • Test all the input fields.
  • Test for the error message
  • Test all the negative scenarios.
  • Check the GUI against the standards.
  • Check the integration of the application with other external applications.
  • Check for complex business logic.
  • Try to do the ethical hacking of the application.

Types of Exploratory Testing

Given below are a few types of ET:

1) Freestyle ET:

Exploration of the application in an ad-hoc style.
In this type of ET, there are no rules, no account for coverage, etc. However, this type of testing is good when you need to familiarize with the application quickly, when you want to verify the work of the other testers, and when you want to investigate a defect or want to do a quick smoke test.

2) Scenario-based ET:

As the name itself suggests, the testing done here is scenario-based. It starts with real user scenarios, end-to-end scenarios or test scenarios. After initial testing, testers can inject variations as per their learning and observations.
Scenarios are like a generic guide for what to do during ET. Testers are encouraged to explore multiple possible paths while executing a scenario to ensure every possible path to the feature works. Testers should also gather as many scenarios as possible from different categories.

3) Strategy-based ET:

Known testing techniques such as boundary value analysis, equivalence partitioning and risk-based testing are combined with exploratory testing. An experienced tester, or a tester who is familiar with the application, is appointed for this type of testing.

Test Scenarios for Checkbox

Check if the checkbox is selectable or not.
Check if the checkbox selection enables the specific element as selected by mouse pointer or keyboard selection.
Check if the checkbox is selected and pressing submit redirects to the option as per the choice made.
Check if the checkbox selection is properly recorded in database or for browser redirection.
Uncheck one checkbox, select another, click on submit and verify that the different choice is considered in redirection.
Check if the checkbox alignment on the form page is proper or not.
Check if the label for the checkbox is properly aligned.
Check if multiple checkboxes can be selected or not.
Check if selecting checkboxes one at a time or multiple at a time works as per the requirement of the application.
Check if the corresponding data is selected from the database based on the selection.
Check if the validation controls are enabled or triggered when no user action is taken on the choices.
Verify if the selection control is inactive when the page is loaded.
Verify if the initial focus is on the first checkbox.
Verify if the checkboxes are placed in order.
Verify the physical location of the checkboxes (X, Y coordinates, height, length, width).
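A few of the checkbox scenarios above can be automated. The sketch below assumes Selenium WebDriver in Python and a hypothetical form page with a checkbox whose id is "terms" and a submit button whose id is "submit"; the URL and the expected redirect are placeholders, not real values.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/form")   # placeholder URL

    checkbox = driver.find_element(By.ID, "terms")
    submit = driver.find_element(By.ID, "submit")

    # Check if the checkbox is selectable or not.
    assert checkbox.is_enabled()
    assert not checkbox.is_selected()

    # Check that selecting the checkbox records the selection.
    checkbox.click()
    assert checkbox.is_selected()

    # Check that pressing submit redirects as per the choice made.
    submit.click()
    assert "confirmation" in driver.current_url   # illustrative expectation

    driver.quit()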

Test Scenarios for Text Field

What is the maximum length of the text field?
What is the minimum length of the text field?
How much input is expected in the text field?
Does the text field allow input longer than the specified maximum?
Does the text field allow input shorter than the specified minimum?
Does the textbox accept numbers only?
Does the textbox accept decimal numbers?
Does the textbox accept formatted numbers?
Does the text field accept alphabets?
Does the text field accept uppercase alphabets?
Does the text field accept lowercase alphabets?
Does the text field accept a mix of uppercase and lowercase alphabets?
Is there any specific character that the field allows?
Is there any specific character that the field doesn’t allow?
Does the field accept HTML characters?
Does the text field accept JavaScript?
Is the text field immune to SQL injection?
Does the text field allow copy-paste?
Does the text field allow drag and drop of text content?
Does the cursor appear while typing the content?
Does the text field allow spaces?
Does the text field process content with only spaces?
Does the text field allow blank input?
How does the text field manage trailing spaces?
How does the text field manage leading spaces?
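Many of the text field scenarios above boil down to parameterised checks against a validation routine. The sketch below assumes pytest and a hypothetical validate_text_field() that accepts a 1–10 character, digits-only value; it is only meant to show the pattern, not any real application rule.

    import pytest

    def validate_text_field(value: str, max_length: int = 10) -> bool:
        # Accept only non-empty, digits-only input up to max_length characters.
        return 0 < len(value) <= max_length and value.isdigit()

    @pytest.mark.parametrize("value, expected", [
        ("12345", True),         # numbers only are accepted
        ("1234567890", True),    # maximum length
        ("12345678901", False),  # input longer than the allowed maximum
        ("", False),             # blank input
        ("abc", False),          # alphabets rejected in a numeric field
        ("12.5", False),         # decimal numbers rejected
        ("  12  ", False),       # leading/trailing spaces rejected
    ])
    def test_text_field(value, expected):
        assert validate_text_field(value) == expected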

Prototype Model

Prototype Model:
It is a step-by-step, standard procedure to develop software in which a prototype (an early working model) is built and shown to the customer before the actual software is developed.

Advantages:

  • Customer will get to know how the software looks in the early stage itself.
  • There will be improved communication between customers and developers.
  • Requirement changes are allowed.
  • Missing functionalities can be easily figured out.
  • Errors can be detected much earlier thereby saving a lot of effort and cost, besides enhancing the quality of the software.
  • The developed prototype can be reused by the developer for more complicated projects in the future.

Drawback: 

  • Total time taken is high.
  • Total investment is more.
  • There will be a delay in releasing the software to the customer.
  • Poor documentation due to continuously changing customer requirements.


Applications:

  • If the customer is new to the software and developers are new to the domain.
  • When the customer is not clear about requirements.


Friday, January 24, 2020

V model

V Model:
It is a step-by-step, standard procedure to develop new software. Here all the stages are tested.
In order to overcome the drawbacks of the waterfall and spiral models, we use the V model.

Verification:

Here we ensure that we are building the product (or system, or software) right.

Validation:

Here we ensure that we are building the right product (or system, or software).

Drawbacks:


  • Initial investment is high
  • Documentation is more
Advantages:

  • Total time taken is less (testing is done from an early stage).
  • All the stages are tested.
  • Since testing is done at an early stage, the downward flow of defects will be less.
  • Total investment is less (the downward flow of defects is less, so the cost of fixing defects is less).
  • Software quality will be good.
  • Requirement changes can be done at any stage.
  • Deliverables are parallel.

Application
  • The requirement is well defined and not ambiguous
  • Acceptance criteria are well defined.
  • Project is short to medium in size.
  • When the customer wants quality software in a short span of time.

Spiral model

The spiral model is a step-by-step procedure to develop new software. In order to overcome the drawbacks of the waterfall model, we use the spiral model. The spiral model is mainly used when there is a dependency among the modules, features or functionalities.

This model is also called the Iterative model or the Incremental model.

In every cycle the same process is repeated, so this model is called the iterative model.
In every cycle we keep adding a new module, so the model is called the incremental model.

Advantages:



  • Requirement changes can be done after every cycle.
  • Customers get an opportunity to see the software in every cycle.
  • Testing is done in every cycle before going to the next cycle.
  • The spiral model is a controlled model, i.e., only when one module is completely stable do we develop the next module.


Drawbacks:

  • The requirement collection and design phases are not tested.
  • It is not beneficial for smaller projects.
  • The spiral may go on indefinitely.
  • Documentation is more as it has intermediate phases.
  • It is costly for smaller projects.
Applications:
When to use Spiral Methodology?
  • Whenever there is a dependency in the modules
  • When the customer gives the requirements in stages.
  • When releases are required to be frequent
  • For medium to high-risk projects
  • When changes may be required at any time
  • When the project is large




Thursday, January 23, 2020

AD – HOC Testing


AD – HOC Testing ( also called Monkey Testing / Gorilla Testing )
Testing the application randomly is called Ad-hoc testing.

Why do we do Ad-hoc testing?
1) End-users use the application randomly and may see a defect, but a professional test engineer uses the application systematically, so he may not find the same defect. In order to avoid this scenario, the test engineer should also test the application randomly (i.e., behave like an end-user and test).

2) The development team looks at the requirements and builds the product. The testing team also looks at the requirements and does the testing. By this method, the testing team may not catch many bugs; they think everything works fine. In order to avoid this, we do random testing, behaving like end-users.

3) Ad-hoc testing is testing where we don’t follow the requirements (we just randomly check the application). Since we don’t follow the requirements, we don’t write test cases.




NOTE :-
·         Ad-hoc testing is basically negative testing, because we are testing outside the requirements (beyond what the requirements specify).
·         Here, the objective is to somehow break the product.

When to do Ad-hoc testing?
·         Whenever we are free, we do Ad-hoc testing. For example, developers develop the application and give it to the testing team. The testing team is given 15 days for doing FT; of that, they spend 12 days doing FT and the other 3 days doing Ad-hoc testing. We must always do Ad-hoc testing last, because we always concentrate on customer satisfaction first.
·         After testing as per the requirements, we then start with Ad-hoc testing.
·         When a good scenario comes to mind, we can pause FT, IT or ST and try that scenario as Ad-hoc testing. But we should not spend too much time doing Ad-hoc testing, and should immediately resume formal testing.
·         If there are more such scenarios, then we record them and try them at the end, when we have time.

Wednesday, January 22, 2020

Smoke Testing

SMOKE  TESTING or SANITY TESTING or DRY RUN or SKIM TESTING or BUILD VERIFICATION TESTING or CONFIDENCE TESTING.


Testing the basic or critical features of an application before doing thorough or rigorous testing is called smoke testing.

It is also called Build Verification Testing – because we check whether the build is broken or not.
Whenever a new build comes in, we always start with smoke testing, because every new build might contain changes that have broken a major feature (fixing a bug or adding a new feature could have affected a major portion of the original software).
In smoke testing, we do only positive testing, i.e., we enter only valid data and not invalid data.

Important Points to Remember
·         When we are doing smoke testing, we do only positive testing (only valid data is entered)
·         Here, we test only basic or critical features
·         Here, we take basic features and test for important scenarios
·         Whenever the build goes to the customer, the customer/client also does smoke testing before doing Acceptance Testing
·         When the product is installed in production, we do quick smoke testing to ensure product is installed properly.

Why do we do smoke testing?
·         Just to ensure that the product is testable
·         Do smoke testing in the beginning – catch bugs in basic features – send them to the development team so that the development team has sufficient time to fix them.
·         Just to ensure that the product is installed properly.

In the early stages of product development, smoke testing catches a larger number of bugs. But in the later stages of product development, the number of bugs you catch with smoke testing will be very low. Thus, the effort spent on smoke testing gradually decreases.




Note :- only for information purpose (NOT FOR STUDYING)
Smoke testing is classified into two types,
·         Formal Smoke Testing – the development team sends the s/w to the test lead. The test lead then instructs the testing team to do smoke testing and send report after smoke testing. Once, the testing team is done with smoke testing, they send the smoke testing report to the Test lead.
·         Informal Smoke Testing – here, the test lead says the product is ready and to start testing. He does not specify to do smoke testing. But, still the testing team start testing the product by doing smoke testing. 

This testing can be done both manually and also with the help of automation tools. But the best and preferred way is to use automation tools to save time.
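One common way to automate a smoke suite is to tag the basic/critical checks and run only that subset on every new build. The sketch below uses pytest markers; the marker name and the helper functions are illustrative stand-ins for the application under test, not real APIs.

    import pytest

    # Hypothetical helpers standing in for the application under test.
    def app_is_up():
        return True

    def login(user, password):
        return True

    @pytest.mark.smoke
    def test_build_is_up():
        # Critical feature: the new build installs and starts at all.
        assert app_is_up()

    @pytest.mark.smoke
    def test_login_with_valid_credentials():
        # Smoke testing is positive testing: only valid data is entered.
        assert login("validUser", "validPass123")

    # Run only the smoke subset on every new build:
    #   pytest -m smoke
    # (register the "smoke" marker in pytest.ini to avoid warnings)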


https://www.softwaretestinghelp.com/smoke-testing-and-sanity-testing-difference/

Advantages And Disadvantages of automating smoke test cases

Let us first take a look at the advantages as it has a lot to offer when compared to its few disadvantages.
Advantages:
  • Easy to perform.
  • Reduces the risk.
  • Defects are identified at a very early stage.
  • Saves efforts, time and money.
  • Runs quickly if automated.
  • Least integration risks and issues.
  • Improves the overall quality of the system.
Disadvantages:
  • This testing is not equal to or a substitute for complete functional testing.
  • Even after the smoke test passes, you may find showstopper bugs.
  • This type of testing is best suited when you can automate it; otherwise a lot of time is spent on manually executing the test cases, especially in large-scale projects having around 700-800 test cases.
S. No. | Smoke Testing | Sanity Testing
1 | Smoke testing means to verify (at a basic level) that the implementations done in a build are working fine. | Sanity testing means to verify that the newly added functionalities, bug fixes, etc. are working fine.
2 | This is the first testing on the initial build. | Done when the build is relatively stable.
3 | Done on every build. | Done on stable builds, post regression.