Guru99

Software Testing Techniques with Test Case Design Examples

Thomas Hamilton

What is a Software Testing Technique?

Software testing techniques help you design better test cases. Since exhaustive testing is not possible, these techniques reduce the number of test cases to be executed while increasing test coverage. They also help identify test conditions that are otherwise difficult to recognize.

Boundary Value Analysis (BVA)

Boundary value analysis is based on testing at the boundaries between partitions. It includes maximum, minimum, inside or outside boundaries, typical values and error values.

It is generally seen that a large number of errors occur at the boundaries of the defined input values rather than at the center. Boundary value analysis (BVA) yields a selection of test cases that exercise these bounding values.

This black box testing technique complements equivalence partitioning. It is based on the principle that if a system works well for these particular boundary values, it will work well for all values that lie between them.

Guidelines for Boundary Value analysis

  • If an input condition is restricted to values between x and y, design test cases with values x and y as well as values just above and just below x and y.
  • If an input condition accepts a large number of values, develop test cases that exercise the minimum and maximum values, along with values just above and just below them.
  • Apply guidelines 1 and 2 to output conditions: design inputs that produce the minimum and maximum expected outputs, and also test values just below and just above those outputs.
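These guidelines can be put into practice with a small helper that generates the standard boundary test inputs for a numeric range. This is a minimal sketch; the function name and the 1–100 range are illustrative assumptions, not from the article.

```python
def boundary_values(minimum, maximum):
    """Return the BVA test inputs for an inclusive numeric range:
    the bounds themselves plus the values just below and just above them."""
    return [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]

# For a hypothetical input field that accepts values from 1 to 100:
print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```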


Equivalence Class Partitioning

Equivalence class partitioning allows you to divide a set of test conditions into partitions whose members can be treated the same. This software testing method divides the input domain of a program into classes of data from which test cases can be derived.

The concept behind this test case design technique is that a test of one representative value from each class is equivalent to a test of any other value in the same class. It allows you to identify valid as well as invalid equivalence classes.

For example, if input conditions are valid in two separate ranges, five equivalence classes arise: the invalid values below the lower range, the two valid ranges themselves, the invalid values between them, and the invalid values above the upper range. You then select one representative value from each class.
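The idea can be sketched in a few lines; the two valid ranges (1–10 and 20–30) and the representative values below are hypothetical choices for illustration only.

```python
# Hypothetical validity rule: the input is valid between 1-10 and 20-30,
# giving five equivalence classes.
def is_valid(value):
    return 1 <= value <= 10 or 20 <= value <= 30

# One representative value per class, with the class's expected result.
representatives = {-2: False, 5: True, 15: False, 25: True, 45: False}
for value, expected in representatives.items():
    assert is_valid(value) == expected
print("One representative per class covers all five partitions.")
```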

Also read more about – Boundary Value Analysis and Equivalence Partitioning Testing

Decision Table Based Testing

A decision table is also known as a cause-effect table. This software testing technique is used for functions that respond to a combination of inputs or events. For example, a submit button should be enabled only if the user has entered all required fields.

The first task is to identify functionalities where the output depends on a combination of inputs. If the set of input combinations is large, divide it into smaller subsets to keep the decision table manageable.

For every function, create a table and list all combinations of inputs with their respective outputs. This helps identify conditions that the tester might otherwise overlook.

  • List the inputs in rows
  • Enter all the rules in columns
  • Fill the table with the different combinations of inputs
  • In the last row, note the output for each input combination.

Example : A submit button in a contact form is enabled only when all the inputs are entered by the end user.
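The submit-button example can be tabulated programmatically. The sketch below (the function and field names are illustrative assumptions) enumerates every rule column of the decision table.

```python
from itertools import product

def submit_enabled(name_filled, email_filled, message_filled):
    # The button is enabled only when every required field is filled.
    return name_filled and email_filled and message_filled

# One rule per combination of inputs; the output is the last row of each rule.
for rule in product([True, False], repeat=3):
    output = "Enabled" if submit_enabled(*rule) else "Disabled"
    print(rule, "->", output)
```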


State Transition

In the State Transition technique, changes in input conditions change the state of the Application Under Test (AUT). This technique allows the tester to test the behavior of an AUT by entering various input conditions in a sequence. The testing team provides both positive and negative input values to evaluate the system behavior.

Guideline for State Transition:

  • State transition should be used when the testing team is testing the application for a limited set of input values.
  • This test case design technique should be used when the testing team wants to test the sequence of events that happen in the application under test.

In the following example, if the user enters a valid password in any of the first three attempts, the user will be able to log in successfully. If the user enters an invalid password on the first or second try, the user will be prompted to re-enter the password. When the user enters the password incorrectly a 3rd time, action is taken and the account is blocked.

State Transition Diagram


In this diagram when the user gives the correct PIN number, he or she is moved to Access granted state. Following Table is created based on the diagram above-

State Transition Table

Attempt        Correct PIN        Incorrect PIN
1st attempt    Access granted     2nd attempt
2nd attempt    Access granted     3rd attempt
3rd attempt    Access granted     Account blocked

In the above-given table, when the user enters the correct PIN, the state transitions to Access granted. If the user enters an incorrect PIN, he or she moves to the next attempt state, and on the 3rd incorrect entry reaches the Account blocked state.
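The diagram and table above can be sketched as a tiny state machine. This is a minimal illustration; the class and state names are assumptions, not code from the article.

```python
class LoginStateMachine:
    """Login flow: up to three attempts, then the account is blocked."""

    STATES = ["1st attempt", "2nd attempt", "3rd attempt",
              "Access granted", "Account blocked"]

    def __init__(self):
        self.state = "1st attempt"

    def enter_password(self, correct):
        # Terminal states ignore further input.
        if self.state in ("Access granted", "Account blocked"):
            return self.state
        if correct:
            self.state = "Access granted"
        elif self.state == "3rd attempt":
            self.state = "Account blocked"
        else:
            # Move to the next attempt state.
            self.state = self.STATES[self.STATES.index(self.state) + 1]
        return self.state

machine = LoginStateMachine()
print(machine.enter_password(False))  # 2nd attempt
print(machine.enter_password(False))  # 3rd attempt
print(machine.enter_password(False))  # Account blocked
```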

Error Guessing

Error guessing is a software testing technique based on guessing the errors which can prevail in the code. It relies heavily on experience: test analysts use their experience to guess the problematic parts of the application under test. Hence, test analysts must be skilled and experienced for better error guessing.

The technique builds a list of possible errors or error-prone situations. The tester then writes test cases to expose those errors. To design test cases based on this technique, the analyst can use past experience to identify the conditions.

Guidelines for Error Guessing:

  • Use previous experience of testing similar applications
  • Understand the system under test
  • Apply knowledge of typical implementation errors
  • Remember previously troubled areas
  • Evaluate historical data and test results
Summary

  • Test case design techniques allow you to design better test cases. There are five primarily used techniques.
  • Boundary value analysis is testing at the boundaries between partitions.
  • Equivalence class partitioning allows you to divide a set of test conditions into partitions which can be considered the same.
  • The decision table technique is used for functions which respond to a combination of inputs or events.
  • In the state transition technique, changes in input conditions change the state of the Application Under Test (AUT).
  • Error guessing is a software testing technique based on guessing the errors which can prevail in the code.

Test Case Design Techniques for Smart Software Testing


Tanay Kumar Deo

Posted On: September 19, 2023


Test automation involves executing test scripts automatically, handling test data, and using the results to improve software quality. It acts as a quality checker for the whole team, helping ensure the software is solid and bug-free. However, test automation is only valuable with suitable test case design techniques; otherwise, we will execute test cases numerous times without catching the bugs in our software.

Therefore, testers and developers must first identify practical and reliable test case design techniques for improving the quality of the software testing process. Selecting a good test case design technique helps us design better test cases faster.

In this article on test case design techniques, we will address the fundamental notions behind all test case design techniques to make software testing smart. We will also explore various examples of these techniques to deepen understanding of the topic.

TABLE OF CONTENTS

  • Basics of Test Case Design Techniques
  • Types of Test Case Design Techniques
  • Specification-Based or Black-Box Techniques
  • Structure-Based or White-Box Techniques
  • Experience-Based Testing Techniques
  • Bonus Point for Effective Test Case Design
  • Frequently Asked Questions

Test cases are pre-defined sets of instructions detailing the steps to be carried out to determine whether or not the end product produces the desired result. These instructions may include predefined sets of conditions and inputs and their expected results. Test cases are the essential building blocks that, when put together, construct the testing phase. Designing these test cases can be time-consuming and may leave some bugs uncaught if not done appropriately.

Various test case design techniques help us design robust and reliable test cases that cover all of our software's features and functions. These design techniques involve multiple measures that aim to guarantee the effectiveness of test cases in finding bugs or other defects in the software.

Test case design requires a clever approach to identify missing requirements and faults without wasting time or resources. In other words, solid test case design techniques create reusable and concise tests over the application's lifetime. They streamline how test cases are written to provide the highest code coverage.

The test case design techniques are classified into various types based on their use cases. This classification helps automation testers or developers determine the most effective techniques for their products. Generally, test case design techniques are separated into three main types.

Types of Test Case Design Techniques

Specification-based testing, also known as black-box testing, is a technique that concentrates on testing the software system based on its functioning without any knowledge regarding the underlying code or structure. This technique is further classified as:

  • Boundary Value Analysis (BVA)
  • Equivalence Partitioning (EP)
  • Decision Table Testing
  • State Transition Diagrams
  • Use Case Testing

Structure-Based or White-Box Techniques

Structure-based testing, or white-box testing, is a test case design technique that focuses on the internal architecture, components, or actual code of the software system. It is further classified into five categories:

  • Statement Coverage
  • Decision Coverage
  • Condition Coverage
  • Multiple Condition Coverage Testing
  • All Path Coverage Testing

Experience-Based Techniques

As the name suggests, Experience-Based testing is a technique for testing that requires actual human experience to design the test cases. The outcomes of this technique are highly dependent on the knowledge, skills, and expertise of the automation tester or developer involved. It is broadly classified into the following types:

  • Error Guessing
  • Exploratory Testing


Specification-based testing is a technique to test software such as Web apps, Mobile applications, and Desktop applications. It focuses on testing the features and functions of the software without needing to know the source code or structure. These test case design techniques are also known as Black-Box techniques.

The Specification-Based testing technique is an input/output-based testing technique because it considers the software as a black box with inputs and outputs. It provides some inputs to the software and then compares the outputs produced with the desired outputs.

This testing technique is widely used by automation testers and developers at all testing levels wherever a product specification exists, because successful testing must ensure that the product performs as it is supposed to.

This technique is further classified into the following types:

It is usually observed that most software errors occur at the boundary values. The Boundary Value Analysis technique for designing test cases includes identifying test cases at the input domain’s boundary value.

In this technique, we design our test cases for the testing at or near the border values. Test cases often contain values from the upper boundary and the lower boundary. To make our task more manageable, BVA offers us four main test scenarios, as mentioned below.

  • The minimum boundary value
  • Just below the minimum boundary value
  • The maximum boundary value
  • Just above the maximum boundary value

Let’s understand this test case design technique by taking a simple example for a user age validation input form.


Test Name: Validate user age in the input form.

Test Condition: A valid adult user must be between [18 and 59 yrs], both inclusive.

Expected Behaviour: If the input age is not valid (i.e., age < 18 or age > 59), then the form must prompt a “To register, you must be between 18 yrs to 59 yrs.” alert.


Boundary Value Analysis: To ensure this functionality works, we can test the boundaries of the possible inputs. The possible valid inputs for this test are the natural numbers from 18 to 59.

Hence, based on the four test scenarios for BVA, we can design test cases as 18 (minimum), 17 (just below the minimum), 59 (maximum), and 60 (just above the maximum). Using these test cases, we can verify that our app prompts a “To register, you must be between 18 yrs to 59 yrs.” alert for 17 and 60 as inputs, and accepts the age for the other two inputs. If the expected behavior is obtained, we can say that the software is bug-free at the boundary values.


The above example must have clarified the Boundary Value Analysis technique for test case design. Hence the designed test cases will be as follows:

Test Case 1: (Just Below Minimum) Input: age = 17 Expected Output: “To register you must be between 18 yrs to 59 yrs.” alert popups.

Test Case 2: (Minimum) Input: age = 18 Expected Output: Proceed to Registration.

Test Case 3: (Maximum) Input: age = 59 Expected Output: Proceed to Registration.

Test Case 4: (Just Above Maximum) Input: age = 60 Expected Output: “To register, you must be between 18 yrs to 59 yrs.” alert popups.
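The four test cases above can be automated directly. This is a minimal sketch; the is_valid_age helper is an assumption standing in for the form's real validation logic.

```python
def is_valid_age(age):
    # A valid adult user must be between 18 and 59, both inclusive.
    return 18 <= age <= 59

# The four BVA test cases designed above: value -> expected validity.
bva_cases = {17: False, 18: True, 59: True, 60: False}
for age, expected in bva_cases.items():
    assert is_valid_age(age) == expected, f"BVA test failed for age {age}"
print("All four boundary test cases passed.")
```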

In the Equivalence Partitioning technique for testing, the entire range of input data is split into separate partitions. All imaginable test cases are assessed and divided into logical sets of data named classes. One random test value is selected from each class during test execution.

The notion behind this design technique is that a test case using a representative value from a class is equivalent to a test of any other value from the same class. It allows us to identify invalid as well as valid equivalence classes.

Let’s understand this technique for designing test cases with an example. Here, we will cover the same example of validating the user age in the input form before registering. The test conditions and expected behavior of the testing will remain the same as in the last example. But now we will design our test cases based on the Equivalence Partitioning.

Test case design using Equivalence Partitioning: To test the functionality of the user age input (i.e., it must accept ages between 18 and 59, both inclusive, and otherwise produce an error alert), we first find all the possible similar types of inputs to test and then place them into separate classes. In this case, we can divide our test cases into three groups or classes:

  • Age < 18 – Invalid – (e.g., 1, 2, 3, 4, …, 17)
  • Age 18 to 59 – Valid – (e.g., 18, 19, 20, …, 59)
  • Age > 59 – Invalid – (e.g., 60, 61, 62, 63, …)


These candidate test values are too many to run exhaustively, aren't they? But here lies the beauty of equivalence testing: there are effectively infinite values to pick from, yet we only need to test one value from each class. This reduces the number of tests we need to perform while maintaining test coverage. So we run only a definite number of tests, picking a value at random from each class and checking the expected behavior for each input.
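Picking one random representative per class can be sketched as follows. The upper age cap of 120 is an arbitrary assumption made so the invalid upper class is finite.

```python
import random

# The three equivalence classes for the age field.
classes = {
    "Age < 18 (invalid)":   range(1, 18),
    "Age 18 to 59 (valid)": range(18, 60),
    "Age > 59 (invalid)":   range(60, 121),  # capped at 120 for illustration
}

# One randomly chosen representative stands in for its whole class.
for name, members in classes.items():
    representative = random.choice(list(members))
    print(f"{name}: testing with age = {representative}")
```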

A decision table is a technique that demonstrates the relationship between inputs, rules, outputs, and test conditions. Among test case design techniques, the decision table is a very efficient way to handle complex test cases, as it allows automation testers and developers to inspect all possible combinations of conditions. True (T) and False (F) values signify the criteria.

Decision table testing is a test case design technique that examines how the software responds to different input combinations. In this technique, various input combinations or test cases and the concurrent system behavior (or output) are tabulated to form a decision table. That’s why it is also known as a Cause/Effect table, as it captures both the cause and effect for enhanced test coverage.

Automation testers or developers mainly use this technique to make test cases for complex tasks that involve lots of conditions to be checked. To understand the Decision table testing technique better, let’s consider a real-world example to test an upload image feature in the software system.

Test Name: Test upload image form.

Test Condition: The upload option must accept images (JPEG) only, and only of size less than 1 Mb.

Expected Behaviour: If the input is not a JPEG image or is not less than 1 Mb in size, it must pop an “Invalid input” alert; otherwise, it must accept the upload.

Test Cases using Decision Table Testing:

Based upon our testing conditions, we should test our software system for the following conditions:

  • The uploaded Image must be in JPEG format.
  • Image size must be less than 1 Mb.

If any of the conditions are not satisfied, the software system will display an “invalid input” alert, and if all requirements are met, the image will be correctly uploaded. Now, let’s try to make the decision table to design the most suitable test cases.

                Test Case 1     Test Case 2     Test Case 3      Test Case 4
Image format    .jpg/.jpeg      .jpg/.jpeg      Not .jpg/.jpeg   Not .jpg/.jpeg
Image size      < 1 Mb          >= 1 Mb         < 1 Mb           >= 1 Mb
Output          Upload image    Invalid input   Invalid input    Invalid input

Based on the above-formed decision table, we can develop 4 separate test cases to guarantee comprehensive coverage for all the necessary conditions.
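The four rules of this decision table translate directly into assertions. This is a minimal sketch; the upload_result helper is an assumed stand-in for the real upload handler.

```python
ONE_MB = 1024 * 1024

def upload_result(is_jpeg, size_bytes):
    # Both conditions must hold for the upload to be accepted.
    if is_jpeg and size_bytes < ONE_MB:
        return "Upload image"
    return "Invalid input"

# The four rules from the decision table above.
assert upload_result(True,  ONE_MB - 1) == "Upload image"
assert upload_result(True,  ONE_MB)     == "Invalid input"
assert upload_result(False, ONE_MB - 1) == "Invalid input"
assert upload_result(False, ONE_MB)     == "Invalid input"
print("All four decision-table rules verified.")
```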

State Transition Testing

State transition testing is a software testing technique used to check the change in the state of the application or software under varying input. The input passed is varied, and the corresponding change in the state of the application is observed.

In test case design techniques, State Transition Testing is mainly carried out to monitor the behavior of the application or software for various input conditions passed in a sequential manner. In this type of testing, negative and positive input values are passed, and the behavior of the application or software is observed.

To perform state transition testing efficiently on complex systems, we take the help of a state transition diagram or state transition table, which represents the relation between changes in input and the behavior of the application.

State Transition Testing can only be performed where different system transitions are required to be tested. A great real-world example where State Transition Testing is performed can be an ATM machine.

Let’s understand this testing technique by considering an example of a mobile passcode verification system.

Test Name: Test the passcode verification system of a mobile to unlock it.

Expected Behavior: The phone must unlock when the user enters a correct passcode; otherwise, it displays an incorrect passcode message. Also, if an incorrect passcode is entered 3 consecutive times, the device enters a 60-second cooling period to prevent a brute-force attack.

State Transition Diagram:


In the state transition diagram, if the user enters the correct passcode in any of the first three attempts, he is transferred to the Device unlocked state; if he enters a wrong passcode, he moves to the next try, and if he enters a wrong passcode 3 consecutive times, the device goes into a 60-second cooling period.

Now we can use this diagram to analyze the relation between the input and its behavioral change. Henceforth, we can make the test cases to test our system (Mobile passcode verification system) properly.

State Transition Table: Similar to the above state transition diagram, we can also design our test case using the state transition table. The state transition table for this particular example is as follows:

Step   Correct Passcode   Incorrect Passcode
1      Go to step 2.      Go to step 2.
2      Go to step 5.      Go to step 3.
3      Go to step 5.      Go to step 4.
4      Go to step 5.      Go to step 6.

(Step 5 is the device-unlocked state; step 6 is the 60-second cooling period.)
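The transitions above can be sketched as a small state machine. This is a minimal illustration; the class name, method names, and the injectable clock parameter are assumptions made for testability.

```python
import time

class PasscodeLock:
    """Passcode flow: 3 consecutive wrong tries start a 60 s cooling period."""

    def __init__(self, passcode, cooling_seconds=60):
        self._passcode = passcode
        self._cooling_seconds = cooling_seconds
        self._wrong_tries = 0
        self._cooling_until = 0.0

    def try_unlock(self, attempt, now=None):
        now = time.monotonic() if now is None else now
        if now < self._cooling_until:
            # No passcode is accepted while the cooling period is active.
            return "Cooling period active"
        if attempt == self._passcode:
            self._wrong_tries = 0
            return "Device unlocked"
        self._wrong_tries += 1
        if self._wrong_tries == 3:
            self._wrong_tries = 0
            self._cooling_until = now + self._cooling_seconds
            return "Cooling period started"
        return "Incorrect passcode"

lock = PasscodeLock("1234")
print(lock.try_unlock("0000", now=0))   # Incorrect passcode
print(lock.try_unlock("0000", now=1))   # Incorrect passcode
print(lock.try_unlock("0000", now=2))   # Cooling period started
print(lock.try_unlock("1234", now=30))  # Cooling period active
print(lock.try_unlock("1234", now=70))  # Device unlocked
```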

As the name suggests, in use case testing we design our test cases based on the use cases of the software, depending on the business logic and end-user functionalities. It is a black box testing technique that helps us identify test cases that exercise the whole system on a transaction basis from beginning to end.

A use case testing technique serves the following objective:

  • Manages scope conditions related to the project.
  • Depicts the different ways in which a user may interact with the system or software.
  • Visualizes the system architecture.
  • Permits assessment of potential risks and system dependencies.
  • Communicates complex technical requirements to relevant stakeholders easily.

Let’s understand the Use case testing technique using our last example of a mobile passcode verification system. Before moving on to the test case, let’s first assess the use cases for this particular system for the user.

  • The user may unlock the device on his/her first try.
  • Someone may attempt to unlock the user’s device with the wrong passcode three consecutive times. Provide a cooling period in such a situation to avoid brute-force attacks.
  • The device must not accept a passcode when the cooling period is active.
  • The user should be able to unlock the device after the expiry of the cooling period by entering a correct passcode.

Now, analyzing these use cases can help us primarily design test cases for our system. These test cases can be either tested manually or automatically.

The white box technique, also known as structure-based testing, focuses on testing the internal components or structures of software or applications. In this technique, the tests interact with the code directly. Test cases are designed to confirm that the code works efficiently and correctly.

Among all test case design techniques, the white box technique is especially important for checking aspects of the application such as security, reliability, and scalability, which can be difficult to test using other techniques. One of its primary advantages is that it makes it possible to guarantee that every part of the code is exercised. To achieve complete code coverage, white box testing uses the following techniques:

Statement Coverage testing is a technique for test case design that focuses on executing all the executable statements available in the source code at least once. It covers all the lines, statements, and paths of the source code for the software or application. Automation testers or developers generally use statement coverage testing to cover the following aspects of software testing:

  • To test the quality of the code written.
  • To decide the flow of various paths of the software or application.
  • To check if the source code’s expected performance is up to the mark or not.
  • Test the software’s internal code and infrastructure.

For statement coverage testing of the code, we calculate the statement coverage in percentage. Statement coverage value represents the percentage of total statements executed in the code.

Statement Coverage = (Number of statements executed/ Total number of statements in the code) * 100


To get a better understanding of statement coverage, let's consider an example program: an OTP verification system. Given below is the Python code to validate the OTP input by the user by calling the is_valid_otp method.

```python
def no_of_digit(num):
    digit = 0
    while num > 0:
        digit += 1
        num = num // 10
    return digit

def accept_otp(otp):
    print("OTP accepted.")

def is_valid_otp(otp):
    if no_of_digit(otp) == 4:
        # Length of OTP is okay to accept the input.
        accept_otp(otp)
    else:
        # Invalid length of OTP
        print("Invalid OTP. Try again.")

is_valid_otp(1234)
```


The above code tests whether the OTP has exactly 4 digits. If it has more or fewer than 4 digits, it prints an invalid OTP message; otherwise, it calls the accept_otp() method. Now, let's design our test cases to check the statement coverage in different input situations.

Test Case 1: Value of OTP = 1234 (Valid 4-digit OTP).

In this test case, the code executes all the statements except those in the else branch. Counting 16 statements in total, of which 13 execute, we can calculate our statement coverage value as:

Statement coverage = (no. of statement executed/ total no. of statement) *100

Statement Coverage = (13/16) * 100

Statement Coverage = 81.25%

Test Case 2: Value of OTP = 123 (Invalid non-4-digit OTP).

In this test case, the code executes all the statements except the accept_otp(otp) call inside the if branch. So we can calculate our statement coverage value as:

Statement Coverage = (15/16) * 100

Statement Coverage = 93.75%

Considering both test cases one and two, we can conclude that 100% statement coverage is reached when we use both a valid and an invalid input. The statement coverage testing technique is mainly used to find dead code, unused code, and unused branches, and we can design our test cases accordingly.

Decision Testing Coverage is one of the test design techniques that checks all the branches in the code (like if/else, ?: conditionals, switch case, etc.) by executing every possible branch from every decision point at least once. This helps testers or developers to ensure that no branch of the code leads to any unexpected application behavior.

In simple words, if a program or code requires an input and uses a conditional like an “if/else”, or a ternary operator(“?:”), or a switch/case statement to perform single or multiple checks, decision testing coverage would help us in designing test cases such that all the possible outcomes can be tested to observe the behavior of the application in all scenarios.

To understand this test case design technique, let’s consider an example that involves multiple conditional statements. How about testing a program that checks if a number is a multiple of 2, 3, or 6?

Test Name: To test a program that checks if a number is a multiple of 2, 3, or 6.

Test Conditions: Inputs (test cases) will be given by the user, and the program produces an outcome.

Expected Behaviour: If the number is a multiple of 2, then the program must print that, otherwise if it is a multiple of 3, then the program prints that; but if it is a multiple of both 2 and 3, then the program must print that it is a multiple of 6.

Before testing, let’s see the code for this program in Python.

```python
def find_multiple(num):
    # Check if the number is a multiple of 2.
    if num % 2 == 0:
        # Check if the number is a multiple of 3.
        if num % 3 == 0:
            # The number is a multiple of both 2 and 3.
            print(f"{num} is a multiple of 6.")
        else:
            # The number is a multiple of 2 only.
            print(f"{num} is a multiple of 2 only.")
    elif num % 3 == 0:
        # The number is a multiple of 3 only.
        print(f"{num} is a multiple of 3 only.")
    else:
        # The number is not a multiple of 2 or 3.
        print(f"{num} is not divisible by any given number.")

find_multiple(12)
```


Decision Testing Coverage:

We can use the Decision Coverage value to measure if our code is executed completely or not. To find the Decision Coverage value, we can use the below method:

Decision Coverage = (No. of decision outcomes exercised / Total no. of decision outcome in the code) * 100


Now, let’s design our test cases to check the Decision Coverage in different input situations.

Test Case 1: Value of num = 8

In this test case, two decision outcomes are observed: the first decision statement, if num % 2 == 0, produces a true outcome, and the second (nested) decision statement, if num % 3 == 0, produces a false outcome. No other decision outcome is covered by this test case. The total number of possible decision outcomes is 6 (3 decision conditions × 2 possible outcomes each). Henceforth,

Decision Coverage = (2/6) * 100 %

Decision Coverage = 33.33%

Test Case 2: Value of num = 9

In this test case, two decision outcomes are covered: the first decision statement, if num % 2 == 0, produces a false outcome, and the elif num % 3 == 0 statement produces a true outcome. Here the Decision Coverage is again (2/6) × 100 = 33.33%.

Test Case 3: Value of num = 12

In the third test case, the first decision statement, if num % 2 == 0, produces a true outcome, and the nested if num % 3 == 0 statement also produces a true outcome, so the program reports a multiple of 6.

With these 3 test cases, five of the six decision outcomes have been produced; the final else branch (a number divisible by neither 2 nor 3) remains untested. Adding a fourth test case, e.g., num = 7, exercises that last outcome and brings the total Decision Coverage to 100%.

Condition Coverage Testing, also known as expression coverage testing, is a technique used to test and assess all the conditional statements available in the code. The primary goal of condition coverage testing is to test individual outcomes for each logical expression or condition. It offers a satisfactory sensitivity to the code’s control flow. All the expressions with logical conditions are considered collectively in this coverage testing technique.

The formula to find the conditional coverage percentage is as follows.

Conditional Coverage = (Number of Executed Condition outcome/ Total number of Condition outcome) * 100


Note: In decision coverage, every decision outcome (the true and false branches of each if/else, switch/case, etc.) must be executed at least once, while in condition coverage, every condition within each expression must evaluate to both true and false at least once.

Now, let’s cover a quick simple example to understand conditional coverage better and how to design the test cases. Here, we will consider an example to check if three lines of length (a,b,c) can form a valid triangle or not. The Python program for this example is written below.

```python
def is_valid_triangle(a, b, c):
    # Check if all sides are greater than 0.
    if a > 0 and b > 0 and c > 0:
        # Triangle Inequality Theorem: the sum of the lengths of any two
        # sides must be greater than the length of the third side.
        if a + b > c and a + c > b and b + c > a:
            return True
    return False

side_a = float(input("Enter the length of side 'a': "))
side_b = float(input("Enter the length of side 'b': "))
side_c = float(input("Enter the length of side 'c': "))

if is_valid_triangle(side_a, side_b, side_c):
    print("The given sides can form a valid triangle.")
else:
    print("The given sides cannot form a valid triangle.")
```


Now, let’s design our test cases to check if we can achieve 100% condition coverage using different input scenarios.

For the above program, we have two separate conditions: a > 0 and b > 0 and c > 0, and a + b > c and a + c > b and b + c > a. Each of these conditions can evaluate to True or False, so we need to design our test cases such that all of these outcomes are tested.

Test Case 1 (Condition 1: False)

Input: a = 0, b = 2, c = 3 Expected Output: The given sides cannot form a valid triangle.

Test Case 2 (Condition 1: True, Condition 2: False) Input: a = 1, b = 2, c = 5 Expected Output: The given sides cannot form a valid triangle.

Test Case 3 (Condition 1: True, Condition 2: True) Input: a = 2, b = 2, c = 3 Expected Output: The given sides can form a valid triangle.

Note that the second condition is only evaluated when the first one is True, so a case like Test Case 1 cannot exercise the second condition at all. These three test cases together produce both a True and a False outcome for each condition, and therefore achieve 100% condition coverage.

By implementing these white box test case design techniques, automation testers or developers can ensure the software code is tested thoroughly and is free of unexpected behavior. Selecting the appropriate test case design techniques based on the project’s or software’s specific requirements is crucial.

Multiple Condition Coverage

Multiple Condition Coverage is a test case design technique mainly used to design test cases such that all possible combinations of outcomes within a condition are tested at least once. It offers better sensitivity to the code’s control flow than decision coverage or condition coverage testing.

Multiple condition coverage is very similar to condition coverage, but in multiple condition coverage we test all possible combinations of sub-conditions, while in condition coverage we test only the individual outcomes of each logical condition.

In multiple condition coverage, we generate multiple test cases for each condition separately, such that all possible combinations of outcomes are tested. The total number of test cases that can be generated for each condition using multiple condition coverage is 2^n, where n is the number of logical sub-conditions or sub-expressions in the condition.
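The 2^n rule can be sketched in a few lines of Python. The triangle check mirrors the example used throughout this section, and the rest simply enumerates every combination of sub-expression outcomes for the first condition:

```python
from itertools import product

def is_valid_triangle(a, b, c):
    # Same logic as the triangle example in this section.
    if a > 0 and b > 0 and c > 0:
        if a + b > c and a + c > b and b + c > a:
            return True
    return False

# The first condition has n = 3 sub-expressions: (a > 0), (b > 0), (c > 0).
# Enumerate all 2^3 = 8 outcome combinations; a side of 1 makes its
# sub-expression True, a side of 0 makes it False.
combinations = list(product([True, False], repeat=3))
print(len(combinations))  # 8

for outcome in combinations:
    a, b, c = (1 if flag else 0 for flag in outcome)
    print(outcome, "->", is_valid_triangle(a, b, c))
```

Only the all-True combination yields a valid triangle here, matching the eight test cases listed below.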

To understand this test case design technique, let’s consider the same example of checking whether three lines of length (a, b, c) can form a valid triangle. We will first generate test cases for our first condition (i.e., if a > 0 and b > 0 and c > 0), which has three sub-expressions that can each produce an individual True/False outcome. Hence we will have 2^3 = 8 combinations of outcomes (T/T/T, T/T/F, T/F/T, F/T/T, F/F/T, F/T/F, T/F/F, and F/F/F).

Test Case 1: (True, True, True) Input: a = 1, b = 1, c = 1 Expected Output: The given sides can form a valid triangle.

Test Case 2: (True, True, False) Input: a = 1, b = 1, c = 0 Expected Output: The given sides cannot form a valid triangle.

Test Case 3: (True, False, True) Input: a = 1, b = 0, c = 1 Expected Output: The given sides cannot form a valid triangle.

Test Case 4: (False, True, True) Input: a = 0, b = 1, c = 1 Expected Output: The given sides cannot form a valid triangle.

Test Case 5: (False, False, True) Input: a = 0, b = 0, c = 1 Expected Output: The given sides cannot form a valid triangle.

Test Case 6: (False, True, False) Input: a = 0, b = 1, c = 0 Expected Output: The given sides cannot form a valid triangle.

Test Case 7: (True, False, False) Input: a = 1, b = 0, c = 0 Expected Output: The given sides cannot form a valid triangle.

Test Case 8: (False, False, False) Input: a = 0, b = 0, c = 0 Expected Output: The given sides cannot form a valid triangle.

Similarly, we can design 8 more test cases for the second condition. Hence, this leads to a very large number of test cases for rigorous testing of the code’s control flow.

Path Coverage

Path coverage is a test case design technique that analyzes all of the code’s paths. This powerful strategy ensures that every route or path through the program is tested at least once. Its coverage is more effective and wider than multiple condition coverage. Automation testers or developers often use this technique to design test cases for testing sophisticated and complex applications.

Now, Let’s take a quick example to understand this technique of designing test cases. For ease of understanding, we will cover a really simple example to test if a number is positive, negative, or zero. The code for this example in Python is as follows.

def check_number(n):
    if n > 0:
        return "Positive"
    elif n < 0:
        return "Negative"
    else:
        return "Zero"

num = int(input("Enter the number to test: "))
print(check_number(num))

Flow Chart for this code can be designed as:

Flow Chart for Path Coverage code

Analyzing the above flow chart, we can clearly see that three independent paths can be formed, i.e., 1 -> 2 -> 3 -> 4, 1 -> 2 -> 5 -> 6 -> 9, and 1 -> 2 -> 5 -> 7 -> 8. Hence we can design our test cases such that all these paths are traversed at least once.

Test Case 1 Input: num = 1 Expected Output: Positive Path Taken: 1 -> 2 -> 3 -> 4

Test Case 2 Input: num = -1 Expected Output: Negative Path Taken: 1 -> 2 -> 5 -> 6 -> 9

Test Case 3 Input: num = 0 Expected Output: Zero Path Taken: 1 -> 2 -> 5 -> 7 -> 8
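The three path-coverage test cases above can be turned into a small self-checking script. This is a sketch; the function repeats the check_number logic from the example so the block is self-contained:

```python
def check_number(n):
    if n > 0:
        return "Positive"
    elif n < 0:
        return "Negative"
    else:
        return "Zero"

# One input per independent path through the flow chart.
path_cases = [
    (1, "Positive"),   # path 1 -> 2 -> 3 -> 4
    (-1, "Negative"),  # path 1 -> 2 -> 5 -> 6 -> 9
    (0, "Zero"),       # path 1 -> 2 -> 5 -> 7 -> 8
]

for num, expected in path_cases:
    actual = check_number(num)
    assert actual == expected, f"{num}: expected {expected}, got {actual}"
print("All three paths covered")
```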

Experience-Based Testing Techniques

Experience, skill, and intuition are the fundamentals of experience-based testing. In this testing technique, testers or developers are free to design test cases in advance or create them on the spot during test execution; most experienced testers will do a bit of both.

These techniques can be invaluable for finding test cases that are not easily identified by other, structured techniques. Depending on the approach, they may achieve varying degrees of effectiveness and coverage, which can be difficult to examine and may not be measurable in the way structured techniques are.

Some very commonly used experience-based testing techniques to design the test cases are as follows:

Error Guessing

Error Guessing is an experience-based test case design technique where automation testers or developers use their experience to guess the troublesome areas of the software or application. This approach necessarily requires a skilled and experienced tester.

The error guessing technique proves to be very effective when used in addition to other structured testing techniques, as it uncovers errors that would otherwise be unlikely to surface through structured testing alone. Hence, the tester’s experience saves a lot of effort and time.

Now, let’s find out some of the guidelines to be followed for error guessing technique:

  • Always Remember earlier troubled areas: During any of the automation testing tasks, whenever you encounter an interesting bug, note it down for future reference.
  • Improve technical understanding: Knowing how the code is written & how error-prone concepts like null pointers, loops, arrays, exceptions, boundaries, indexes, etc., are implemented in the code helps in testing.
  • Do not just look for mistakes in the code but also find errors in design, requirements, build, usage, and testing.
  • Apprehend the system under test.
  • Consider historical data and test outcomes.

For example, error-guessing test case design techniques can pinpoint input validation situations, such as entering uncommon characters, long inputs, or unforeseen combinations of input values. Testers or developers can catch if the system controls such inputs gracefully or exhibits any unexpected behavior or vulnerabilities.
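A sketch of how such error-guessed inputs might be scripted. The validate_username function and its 3–20 alphanumeric-character rule are purely hypothetical, invented here for illustration:

```python
def validate_username(name):
    # Hypothetical validator: accepts 3-20 alphanumeric characters.
    return name.isalnum() and 3 <= len(name) <= 20

# Inputs an experienced tester might "guess" will expose weaknesses:
guessed_inputs = [
    "",           # empty input
    "ab",         # just below the minimum length
    "a" * 21,     # just above the maximum length
    "user name",  # embedded whitespace
    "<script>",   # markup / injection characters
]

for value in guessed_inputs:
    verdict = "accepted" if validate_username(value) else "rejected"
    print(repr(value), "->", verdict)
```

Each of these inputs is cleanly rejected here, but running the same guessed values against a real system often exposes crashes or inconsistent error messages rather than clean rejections.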

Exploratory Testing

Exploratory Testing is a test case design technique for software testing where automation testers or developers do not create test cases in advance but rather check the system in real time. They usually note down ideas about what to test before test execution. The primary focus of exploratory testing is on testing as a “thinking” activity.

Exploratory Testing is widely used by testers in QA testing and is all about learning, discovery, and investigation. It highlights the personal responsibility and freedom of the individual tester.

For example, Exploratory testing for a user login feature involves testing beyond the basics. Testers would start by verifying standard login scenarios and then delve into boundary cases, such as incorrect credentials or excessive login attempts. They would assess features like social media integration, password recovery, and session management.

Exploratory testing would cover aspects like localization, accessibility, and cross-browser/device compatibility. By taking this approach, the testing team ensures a secure, user-friendly, and comprehensive login experience for the application’s users.

Great test designs help us create applications with excellent customer experience. Following are some of the important points that can be considered while designing test cases.

  • Choose design technique wisely: Although we are equipped with multiple test case design techniques, we need not select all of those techniques but instead select a good combination of them based on our software requirements.
  • Define the scope and purpose of the testing: The first step in designing effective test cases is to pick the testable conditions. We must understand the testing goals as well as the features.
  • Domain expertise: Before designing test cases, we need domain knowledge, which is the basis of any software. This helps in designing test cases based on the software structure.
  • Assumptions should be avoided: When designing a test case, don’t make presumptions about the features and functioning of the software application. It may cause a disconnect between the client’s need and the product, affecting the business.
  • Look for any non-functional requirements: Non-functional requirements are just as influential as functional ones. Identify other non-functional testing requirements, such as operating system, hardware requirements, and security features to be handled.
  • Negative and positive test cases: When designing test cases, boundary value analysis, equivalence class partitioning, and decision table testing are some of the techniques that should be used. We should always think about negative testing, error handling, and failure situations, as these might help us find the most likely bugs in the code.

Writing effective test cases with all of the required details is a demanding job, and various test case design techniques can help us design them properly. In this article, we covered various test case design techniques, including structured and experience-based techniques, along with some bonus tips for smart software testing. To conclude, the successful use of test case design techniques will generate test cases that help guarantee the success of software testing.

Frequently Asked Questions (FAQs)

Why do we use test case design techniques?

Test case design techniques are utilized to optimize our testing process by correctly identifying crucial testing needs and designing proper test cases, thus allowing us to test our software against these robust test cases.

What are the advantages of test case design techniques?

Test case design techniques are a valuable asset in software testing. They provide a more structured approach to designing test cases, which can lead to comprehensive test coverage and higher-quality software.

These techniques help testers to identify complex scenarios, generate effective test data, and design test cases that evaluate the full functionality of the system. They also help to reduce the risk of missed defects and ensure that all software requirements are met.

What is Black Box Testing?

Black box testing is a testing technique that concentrates on testing the software system based on its functionality, without any knowledge of the underlying code or structure.

What is White Box Testing?

White Box testing is a test case design technique for testing that focuses on testing the internal architecture, components, or the actual code of the software system.

What is the difference between Boundary Value Analysis and Equivalence Partitioning?

In Boundary value analysis, we examine the behavior of our application using corner cases as inputs. Let’s say we have our input in the range L to R, then we will check for the boundaries, which are {L} and {R}.

Regarding equivalence partitioning, we divide our input range into equivalence data classes. For each class, we run a check with a representative value; if it fails, the whole class interval is considered incorrect, and if it passes, the whole class is considered correct.
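As an illustration (a hedged sketch; the 1–100 range is an assumption chosen for the example), the two techniques select different test values against the same range check:

```python
def in_range(value, low=1, high=100):
    # Hypothetical system under test: accepts values in [low, high].
    return low <= value <= high

# Boundary value analysis: probe just below, at, and just above
# each boundary of the valid range.
for v in (0, 1, 2, 99, 100, 101):
    print(v, "->", in_range(v))

# Equivalence partitioning: one representative value per class.
for label, v in {"below range": -5, "in range": 50, "above range": 500}.items():
    print(label, "->", in_range(v))
```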


Tanay Kumar Deo is a skilled software developer, upcoming SDET at GlobalLogic (a Hitachi Group company). With expertise in Android and web development, he is always eager to expand his skill set and take on new challenges. Whether developing software or sharing his knowledge with others, he is driven by a desire to make a positive impact on the world around him. In addition to his technical abilities, Tanay also possesses excellent blogging and writing skills, which allow him to effectively communicate his ideas and insights to a wider audience.





Software Testing Client Project Case Study

Apr 21 • Case Studies

We are often asked what software testing is. The video below shares a solid definition of the term.

But we thought a software testing project case study might be helpful to better understand what software testers do on a typical day. This includes testing software, writing requirement documents for our clients, and creating user guides to ensure compliance for our clients to use for quality assurance and auditing purposes.

Iterators LLC was hired to complete accessibility testing for a few projects for the Library of Congress (LOC). Accessibility testing is required on all government websites, following Section 508 and WCAG 2.2 requirements. To become a Trusted Tester, an employee must complete the Department of Homeland Security (DHS) Trusted Tester online training course and pass the DHS Section 508 Trusted Tester Certification Exam, so we are in a unique position to help on this project. We cross-train all our employees so that we can work on several projects, or several different aspects of a project, at one time to complete the work and reduce the cost to our clients.

Our first project assigned by LOC was testing their new braille feature on BARD Mobile for Android. We were tasked with testing the braille feature with several refreshable braille displays.

During our testing, we used the Orbit Reader 20 and two Freedom Scientific braille displays, the Focus 14 and Focus 40. There are plans to use other refreshable displays, such as those from HumanWare, but this testing has not occurred yet. We needed to test refreshable braille displays and their use in tandem with Google BrailleBack and Google TalkBack.

This work was to ensure that all hardware worked as expected with the apps we were testing. For this testing, we had to complete functional testing, smoke testing, exploratory testing and had a user panel to ensure we caught all issues that a visually impaired individual might experience while using the app.

Initially, our client was unsure if we would find any bugs and hesitant to have us enter bugs into Bugzilla, as they stated the software was “complicated”. Bugzilla is a web-based, general-purpose bug tracking system, not unlike other bug tracking systems used every day such as Jira, TestRail, PractiTest, and ClickUp.

Testing was completed over several agile sprints with many significant software testing bugs found. Our testing had us test against the National Library Service requirements document. Next, we had to create an up-to-date user manual. While the manual had been updated several times, the testing had not been.

For example, when downloading a book or magazine from the Now Reading section of the mobile app, the download would end up at the bottom of the page. For years, the user guide had listed the download being at the top of the page once the document was downloaded.

Our testing team, on several occasions, said this was an error in the documentation and that the download ends up at the bottom of the page. This was corrected in the user document and sent to the development team to fix per the requirement document.

Over the next several months, we reported 30 high-priority bugs with about half fixed at this point. We have encouraged our client to test in an agile fashion because once the development team is finished, it’s harder to get these bugs fixed.

Our bugs were reported and based on the requirement document used to create the software. Lastly, the user guide had to be rewritten to reflect the app's behavior and general updates.

Once the app was tested, the user guide was sent to Communication Services (COS) to ensure the style matched other requirement documentation. This document had to be approved before being disseminated to the public; for example, how does the library determine what the Most Popular Books are, and over what period?

Once the document was returned from COS, the PDF had to be remediated. Remediation involves tagging the PDF, creating headings for the document, adding alt text to meaningful images, and either ignoring decorative images or removing them from the digital document altogether.

Once the remediation process is complete and validated, the document becomes ADA-compliant. We then provide an accessible PDF that can be read with the use of a screen reader and create the HTML output so that the document can be added to the Library of Congress website.

You can find the current user guide we completed here: https://www.loc.gov/nls/braille-audio-reading-materials/bard-access/bard-mobile-android/#creatingfolders3.3

Case studies can be a great learning tool in software testing and project management. By looking at project case study examples, you can see how the project was planned and executed, as well as how certain tasks were managed. This can give a better understanding of what software testing involves on a daily basis. With the right software testing case studies example, software testers can hone their skills, improve project performance, and ultimately deliver better software testing results.

Related Resources:

  • Crafting an Effective Test Plan: A Step-by-Step Guide
  • Top Test Management Tools
  • Mobile Application Functional and Performance Testing

About the Author

Jill Willcox has worked on accessibility issues for most of her professional career. Iterators is an inclusive women-owned small business (WOSB) certified by the Small Business Administration and WBENC. We provide software testing services for websites, mobile apps, enterprise software, and PDF remediation services, rendering PDFs ADA compliant.

Jill Willcox



Software Testing Techniques: Explained with Examples

By Shreya Bose, Community Contributor - May 25, 2023

Before getting into the nitty-gritty of designing and running successful software tests, one must be acquainted with the basics. In this piece, we’ll break down the major software testing techniques – ones that must be included in the vast majority of test suites. 

What are Software Testing Techniques?


As the name suggests, testing techniques comprise the various ways and angles from which any software can be verified to ensure that it works and appears (UI elements, design) as expected, as defined during the planning and requirements gathering stage.

The book Lessons Learned in Software Testing: A Context-Driven Approach by Kaner, Bach & Pettichord states that all testing techniques must include the following variables:

  • The testers who actually perform the software tests
  • The software components covered in the technique
  • The potential problems a technique is meant to identify
  • The actual activities involved in running a test
  • The “success” benchmarks against which the test results will be evaluated

Read More: A Detailed Guide on the Software Testing Life Cycle

Note: For many of these different testing techniques to yield accurate, actionable results, they must be executed in environments that replicate the production stage as closely as possible. In other words, they need to be tested on real browsers, devices, and operating systems – the ones real people use to access your site or app. 

If you don’t have access to an on-premise device lab that is regularly updated with the latest technology, consider giving BrowserStack’s real device cloud a try. 


Different Software Testing Techniques

Black Box Testing

In black box testing, a tester uses the software to check that it functions accurately in every situation. However, the tester remains completely unaware of the software’s internal design, back-end architecture, components, and business/technical requirements. 

The intent of black-box testing is to find performance errors/deficiencies, entire functions missing (if any), initialization bugs, and glitches that may show up when accessing any external database. 

Example : You input values to a system that is slotted into different classes/groups, based on the similarity of outcome (how each class/group responds) to the same values. So, you can use one value to test the outcomes of a number of groups/classes. This is called Equivalence Class Partitioning (ECP) and is a black-box testing technique. 

White Box Testing

Here, testers verify systems they are deeply acquainted with, sometimes even ones they have created themselves. No wonder white box testing has alternate names like open box testing, clear box testing, and transparent box testing. 

White box testing is used to analyze systems, especially when running unit, integration, and system tests. 

Example: Test cases are written to ensure that every statement in a software’s codebase is tested at least once. This is called Statement Coverage and is a white box testing technique. 
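A minimal sketch of the idea (the discount function is a hypothetical example, not from the article): a single test can execute every statement, which is exactly why statement coverage alone can miss untested branches.

```python
def apply_discount(price, is_member):
    # Toy checkout logic used to illustrate statement coverage.
    total = price
    if is_member:
        total = price * 0.9  # members get 10% off
    return round(total, 2)

# Calling with is_member=True executes every statement above,
# achieving 100% statement coverage...
print(apply_discount(100, True))   # 90.0
# ...yet the non-member path is never exercised; adding this call
# is what branch/decision coverage would demand.
print(apply_discount(100, False))
```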

Also Read: Software Release Flow : Stages of Development Sprint Cycle

Functional Testing

Functional tests are designed and run to verify every function of a website or app. They check that each function works in line with the expectations set out in the corresponding requirements documentation. 

Example: Tests are created to check the following scenario – when a user clicks “Buy Now”, does the UI take them directly to the next required page?

There are multiple sub-sets of function testing, the most prominent of which are:

  • Unit Testing : Each individual component is tested before the developer pushes it for merging. Unit tests are created and run by the devs themselves. 
  • Integration Testing : Units are integrated and tested to check that they work together seamlessly. 
  • System Testing : All system elements, hardware and software, are tested together to check that the whole works according to the system’s specified requirements. Regression Testing is a type of System Testing that is performed before every release.
  • Acceptance Testing : Often called User Acceptance Testing , this sub-set of testing puts the software in the hands of a control group of potential users and notes their feedback. It’s the app’s first test in a truly real-world scenario.


Non Functional Testing

It’s in the name. Non functional tests check the non functional attributes of any software: performance, usability, reliability, security, quality, responsiveness, etc. These tests establish software quality and performance in real user conditions. 

Example : Tests are created to simulate high user traffic so as to check if a site or app can handle peak traffic hours/days/occasions. 

The main sub-sets of non-functional testing are:

  • Performance Testing : Software is tested for how efficiently and resiliently it handles increased loads in traffic or user function. Load testing, stress testing, spike testing, and endurance testing are various ways in which software resilience is verified.
  • Compatibility Testing : Is your software compatible with different browsers, browser versions, devices (mobile and desktop), operating systems and OS versions? Can a Samsung user play with your app as easily as an iPhone user? Cross Browser Compatibility tests help you check that.


  • Security Testing : Conducted from the POV of a hacker/attacker, security tests look for gaps in security mechanisms that could be exploited for data theft or for making unauthorized changes.
  • Usability Testing : Usability tests are run to verify if a software can be used without hassle by actual target users. So, you put it in the hands of a few prospective users, in order to get feedback before its actual deployment. 
  • Visual Testing : Visual tests check if all UI elements are rendering as expected, with the right shape, size, color, font, and placement. The question these tests answer is: Does the software look as it was meant to in the requirements?

Percy and App Percy by BrowserStack allow you to run Automated Visual Regression Tests using Visual Diff .


  • Accessibility Testing : Accessibility testing evaluates if the software can be used by individuals who are disabled. Its goal is to optimize the apps so that differently-abled users can perform all key actions without external assistance.
  • Responsive Testing : Responsive tests verify if the app/site renders well on screen sizes and resolutions offered by different devices, mobile, tablets, desktops, etc. A site’s responsive design is massively important since most people access the internet from their mobile devices, and expect software to work without hassle on their personal endpoints. 


Best Practices for Software Testing (No Matter the Technique)

Whatever software testing techniques your project might require, using them in tandem with a few best practices helps your team push positive results into overdrive. Do the following, and your tests have a better shot at yielding accurate results that provide a solid foundation for making technical and business decisions. 

  • Use REAL browsers, devices, and OSes to test your software’s real-world efficacy . We’ve written plenty on why emulators and simulators don’t come close to replicating real user conditions. 

Follow-up Read: Will the iOS emulator on PC (Windows & Mac) solve your testing requirements? Replace Android Emulator for PC (Mac & Windows) with Real Devices

These articles reveal why you can never get 100% accurate results with emulators and simulators . You need to run the software on the actual user devices used to access your app. Real device testing is non-negotiable, and the absolute primary best practice you should optimize your tests for.

Read More : Testing on Emulators vs Simulators vs Real Devices

  • Prioritize test planning . You’ll always encounter surprises in the SDLC, but as far as possible, create a formal plan with inputs from all stakeholders that serves as a clear roadmap for testing. Clear documentation is essential to prevent miscommunication. Ensure that the plan is specific, measurable, achievable, relevant, and time-bound.
  • Start testing as early as possible . Use Shift Left Testing – pushing tests to earlier stages in the pipeline. This lets you identify and resolve bugs as early as possible in the development process, which improves software quality and reduces time spent resolving issues.
  • Use automation as widely as possible . Automation reduces the time required to execute tests, as well as the effects of human error. Automation engines don’t get tired, and rarely make mistakes. If you can afford to make the upfront investment (hiring the right testers, purchasing the right tools), you’ll find automation testing paying you back 10X in terms of software quality and business ROI . 

Read More : How to Create Test Cases for Automated tests

  • Establish the right QA metrics to accurately evaluate your test suites . The best QA engine in the world will mean nothing if you don’t establish benchmarks to declare success or failure. 

Read More : Essential Metrics for the QA Process

  • Create test cases that test one feature each . If tests are isolated and independent, they are easier to reuse and maintain.
  • Keep a close eye on test coverage . Each test suite should be able to cover significant swathes of the software under test so that you don’t have to keep building and running new tests with every iteration. The idea is to achieve maximum test coverage .

If you’re googling “testing techniques types”, “testing techniques in manual testing” or “test design techniques with examples”, start here. You’ll get your one or two-line introductions, and then move on to relevant, linked articles that dive deeper into each individual technique. 

Once you’re acquainted with the basics, why not test your testing chops on real browsers and devices for free? Try your hand at BrowserStack Test University , a free resource that offers access to real devices for a hands-on learning experience. Sign up for free, and master the fundamentals of software testing with fine-grained, practical, and self-paced online courses. 


We're sorry to hear that. Please share your feedback so we can do better

Related articles.

AI and software testing

How AI can change Software Testing?

Why should you keep an eye out for AI in software testing? Learn more about how Artificial Intellige...

case study for testing techniques

A Detailed Guide on the Software Testing Life Cycle

Software Testing is important for creating Web and Mobile Applications. Here’s s Detailed Guide on...

Development sprint

Software Release Flow : Stages of Development Sprint Cycle

Read this guidepost to understand how teams these days operate in development sprints to ensure they...

Guide To 5 Test Case Design Techniques With Examples

Test case design techniques in software engineering refer to how we set up a test case which is a set of activities needed to test a specific function after a software development process . Using the right test case design methods will set a strong foundation for the project, increasing productivity and accuracy. Otherwise, you may fail to identify bugs and defects in the software testing process.

As it is critical that your test cases are designed well, let’s learn about the most popular test case design methods with examples in software testing .

Categories of Software Testing Techniques 

Test techniques are categorized into black-box, white-box, and experience-based.

In this tutorial, we will guide you through the Black-box testing with 5 major test case design techniques:

  • Boundary Value Analysis (BVA)
  • Equivalence Class Partitioning
  • Decision Table based testing
  • State Transition
  • Error Guessing

5 Important Test Case Design Techniques

1. Boundary Value Analysis (BVA)

Boundary value analysis is a black-box testing technique to test the boundaries between partitions instead of testing multiple values in the equivalence region. This is because we often find a large number of errors at the boundaries rather than the center of the defined input values, and we suppose that if it is true for boundary values, it is true for the whole equivalence region. Also, BVA is considered an additional test case design type for equivalence classification.

Guide to Boundary Value Analysis design: Select input variable values at their minimum/maximum, just below the minimum/maximum, just above the minimum/maximum, a nominal value (optional).

Example: Valid age values are between 20 – 50.

  • Minimum boundary value is 20
  • Maximum boundary value is 50
  • Take: 19, 20, 21, 49, 50, 51
  • Valid inputs: 20, 21, 49, 50
  • Invalid inputs: 19, 51

So, test cases will look like:

  • Case 1: Enter the value 19: Invalid
  • Case 2: Enter number 20: Valid
  • Case 3: Enter number 50: Valid
  • Case 4: Enter number 51: Invalid
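These cases map directly to code. A minimal sketch in Python (the `is_valid_age` helper is our own, assumed to implement the 20 – 50 rule from the example):

```python
def is_valid_age(age: int) -> bool:
    """Accept ages in the valid range 20-50, inclusive."""
    return 20 <= age <= 50

# Boundary values: just below, on, and just above each boundary.
cases = {19: False, 20: True, 21: True, 49: True, 50: True, 51: False}

for value, expected in cases.items():
    assert is_valid_age(value) == expected, f"unexpected result for {value}"
```

Each assertion exercises one boundary value, so six tests cover both edges of the valid range.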

You might also be interested in: How to choose the right test automation framework?

2. Equivalence Class Partitioning

Equivalence Class Partitioning (or Equivalence Partitioning) is a test case design method that divides the input domain data into equivalence classes, assuming that the data in each class behaves the same. From there, we design test cases for a representative value of each class, which stands in for the result of the whole class.

The concept behind the Equivalence Partitioning technique is that a test case for a typical value is equivalent to testing any other value in the same group. Hence, it helps reduce the number of test cases designed and executed.

Steps to design Equivalent Partitioning test case:

  • Define the equivalence classes
  • Define the test cases for each class

Example: Valid usernames are within 5 – 20 text-only characters.

  • Case 1: Enter within 5 – 20 text characters: Pass
  • Case 2: Input <5 characters: Display error message “Username must be from 5 – 20 characters”
  • Case 3: Enter >20 characters: Display error message “Username must be from 5 – 20 characters”
  • Case 4: Leave blank or no-text characters: Display error message “Invalid username”
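The four cases can be sketched as a small validator plus one representative test per equivalence class (the `validate_username` function and its exact messages are assumptions for illustration):

```python
def validate_username(name: str) -> str:
    """Return 'Pass' or an error message, mirroring the cases above."""
    if name == "" or not name.isalpha():
        return "Invalid username"
    if not 5 <= len(name) <= 20:
        return "Username must be from 5 - 20 characters"
    return "Pass"

# One representative value per equivalence class:
assert validate_username("alice") == "Pass"                # valid: 5-20 letters
assert validate_username("ab") == "Username must be from 5 - 20 characters"
assert validate_username("a" * 21) == "Username must be from 5 - 20 characters"
assert validate_username("") == "Invalid username"         # blank
assert validate_username("user123") == "Invalid username"  # non-text characters
```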

3. Decision Table

Decision Table, also called a cause-effect table, is a software testing technique based on cause-effect relationships. It is used to test system behavior in which the output depends on combinations of inputs; for instance, navigating a user to the homepage only if all required fields in the log-in section are filled in.

First and foremost, you need to identify the functionalities where the output responds to different input combinations. Then, for each function, divide the input set into possible smaller subsets that correspond to various outputs.

For every function, we create a decision table. A table consists of 3 main parts:

  • A list of all possible input combinations
  • A list of the corresponding system behavior (output)
  • T (True) and F (False) markers for the correctness of each input condition

For the log-in example:

  • Function: a user is navigated to the homepage after a successful log-in.
  • Conditions for a successful log-in: correct username, password, and captcha.
  • In the Input section: T and F stand for the correctness of the input information.
  • In the Output section: T stands for the result when the homepage is displayed, F stands for the result when an error message is shown.

The resulting test cases:

  • Enter correct username, password, captcha: Pass
  • Enter wrong username, password, captcha: Display error message.
  • Enter correct username, wrong password and captcha: Display error message.
  • Enter correct username, password and wrong captcha: Display error message.
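The whole decision table can be enumerated in a few lines (the `login_outcome` function is a hypothetical stand-in for the system under test; the homepage is shown only when all three conditions are T):

```python
from itertools import product

def login_outcome(username_ok: bool, password_ok: bool, captcha_ok: bool) -> str:
    """Navigate to the homepage only when every log-in condition holds."""
    if username_ok and password_ok and captcha_ok:
        return "Homepage"
    return "Error message"

# Each of the 2^3 combinations is one column of the decision table.
for combo in product([True, False], repeat=3):
    expected = "Homepage" if all(combo) else "Error message"
    assert login_outcome(*combo) == expected
```

Enumerating all combinations like this is exactly what keeps a decision table from silently missing a case.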

4. State Transition 

State Transition is another way to design test cases in black-box testing, in which changes in the input make changes to the state of the system and trigger different outputs. In this technique, testers execute valid and invalid cases belonging to a sequence of events to evaluate the system behavior.

For example: when a user logs into an e-banking app on his mobile phone and enters the wrong password 3 times in a row, his account is blocked. That means he can log in by entering the correct password on the 1st, 2nd, or 3rd try, after which the state transitions to Access Accepted.

The State Transition technique is often used to test functions of the Application Under Test (AUT) when a change to the input changes the state of the system and produces distinct outputs.
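The blocked-after-three-failures rule above can be modeled as a small state machine; the sketch below uses our own class and state names, not any real banking API:

```python
class LoginSession:
    """Toy state machine: three wrong passwords in a row block the account."""

    def __init__(self, correct_password: str):
        self.correct = correct_password
        self.failures = 0
        self.state = "awaiting_login"

    def attempt(self, password: str) -> str:
        if self.state == "blocked":
            return self.state          # no transitions out of 'blocked'
        if password == self.correct:
            self.failures = 0
            self.state = "access_granted"
        else:
            self.failures += 1
            if self.failures >= 3:
                self.state = "blocked"
        return self.state

session = LoginSession("s3cret")
assert session.attempt("wrong") == "awaiting_login"   # 1st failure
assert session.attempt("wrong") == "awaiting_login"   # 2nd failure
assert session.attempt("s3cret") == "access_granted"  # correct on the 3rd try
```

State transition tests then walk valid and invalid event sequences, such as three straight failures ending in the blocked state.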

5. Error Guessing

In the Error Guessing technique, test cases are designed mostly from the experience of the test analysts, who try to guess the possible errors or error-prone situations that may exist in the code; hence the test designers must be skilled and experienced testers.

In Error Guessing, the test cases could be based on:

  • Previous experience of testing related/similar software products.
  • Understanding of the system to be tested.
  • Knowledge of common errors in such applications.
  • Prioritized functions in the requirement specification documents (to not miss them).
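As an illustration of how guessed errors become test inputs, here is a sketch against a hypothetical quantity parser; the guessed cases come from common failure modes of similar input-parsing code:

```python
def parse_quantity(text: str) -> int:
    """Hypothetical function under test: parse a positive item quantity."""
    value = int(text.strip())   # raises ValueError for non-integer text
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# Error-guessing inputs: empty, whitespace, zero, negative, decimal, words, huge.
guessed_cases = ["", "   ", "0", "-1", "1.5", "ten", "999999999999"]

for case in guessed_cases:
    try:
        assert parse_quantity(case) > 0   # huge-but-valid numbers must still pass
    except ValueError:
        pass                              # an explicit error is acceptable behavior
```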

Which Test Case Design Technique Should You Go With?

We’ve gone through 5 important test case design techniques in black-box testing. Let’s summarize them once again:

  • Boundary Value Analysis (BVA) : Test the boundaries between partitions with the assumption that they stand for the behavior of corresponding equivalence regions.
  • Equivalence Class Partitioning : Divide the input domains into smaller equivalence data classes and design test cases for each.
  • Decision Table : used when the output responds to varied combinations of input.
  • State Transition : used when there’s a sequence of input events that can change the state of the system and produce different output.
  • Error Guessing : based on experience, knowledge, intuition to predict the error and defects which can prevail in the code, often used after another formal testing technique.

That’s LQA’s guide on how to write test cases for better test execution. Don’t hesitate to contact us if you need further assistance with software testing!

  • Website:  https://www.lotus-qa.com/
  • Tel: (+84) 24-6660-7474
  • Fanpage:  https://www.linkedin.com/company/lqa/

Frequently Asked Questions about Test Case Design Techniques

What are test case design techniques and why are they important?

Test case design techniques are systematic methods used to create test cases that effectively validate software functionality. These techniques help ensure comprehensive testing coverage and the detection of potential defects. They are important because they guide testers in designing tests that target specific aspects of the software, thereby increasing the likelihood of identifying hidden issues before the software is released.

What are some common test case design techniques?

Common test case design techniques include Boundary Value Analysis, Equivalence Class Partitioning, Decision Table, State Transition, and Error Guessing.

How do you choose the right test case design technique?

The choice of test case design technique depends on factors such as the complexity of the software, the project’s requirements, available resources, and the specific types of defects that are likely to occur. It’s often beneficial to use a combination of techniques to ensure comprehensive coverage. The technique chosen should align with the goals of testing, the critical functionalities of the software, and potential risks involved.

6 A/B testing examples to inspire your team’s experiments

A/B testing seems simple: put two different product versions head-to-head to see which one works better for your users.

But in reality, A/B testing can get complicated quickly. Your website has so many different elements—buttons, inputs, copy, and navigational tools—and any one of them could be the culprit of poor conversion rates. You want to ensure you have the right tools and processes to solve the case.

That's why you need to analyze A/B testing examples—to see what kind of strategies and tools other companies used to successfully carry out their experiments.

This article looks at six A/B testing examples and case studies so you can see what works well for other businesses—and learn how to replicate those techniques on your own . You’ll walk away with new ways to test website improvements that boost the user experience (UX) and your conversion rates.

6 brilliant A/B testing case studies to learn from

Product and website design is not just an art; it’s also a science. To get the best results, you need to conduct A/B testing: a controlled process of testing two versions of your product or website to see which one produces better results.

A/B testing, also known as split testing , follows a predictable structure:

  1. Find a problem
  2. Create a hypothesis of how you could solve it
  3. Create a new design or different copy based on your hypothesis
  4. Test the new version against the old one
  5. Analyze the results
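The final step, analyzing the results, usually comes down to asking whether the gap between two conversion rates is statistically meaningful. A minimal sketch using a two-proportion z-test in pure Python (the visitor and conversion counts are invented for illustration):

```python
from math import sqrt, erf

def z_test_two_proportions(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Z statistic and two-sided p-value for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

# Made-up numbers: variant A converts 200/4000 visitors, variant B 260/4000.
z, p = z_test_two_proportions(200, 4000, 260, 4000)
significant = p < 0.05
```

In practice most A/B testing tools compute this for you; the point is that "version B looks better" only counts once the p-value (or an equivalent Bayesian measure) backs it up.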

But within this structure, you have many choices about the A/B testing tools you use, the types of data you collect, and how you collect that data. One of the best ways to learn and improve is to look at successful A/B testing examples: 

1. Bannersnack: landing page

Bannersnack , a company offering online ad design tools, knew they wanted to improve the user experience and increase conversions —in this case, sign-ups—on their landing page.

Unsure where to start, Bannersnack turned to Hotjar Heatmaps to investigate how users interacted with the page. With heatmaps, the company could visualize the areas with the most clicks and see spots website visitors ignored .

With this data, Bannersnack could hypothesize how to improve the experience and then create an alternate design, or variant, to test side-by-side with the original. 

Bannersnack completed multiple rounds of testing, checking heatmaps each time and getting incrementally closer to their desired results. Ultimately, they realized they needed a larger call-to-action (CTA) button with a higher contrast ratio—and sign-ups increased by 25%.

💡Pro tip: optimize your landing page by breaking down drivers, barriers, and hooks. 

  • Drivers are the reasons a lead came to the page
  • Barriers are the reasons they’re leaving
  • Hooks are the reasons they convert
Once you fully understand customer behavior on your landing page, you can develop—and test—ideas for improving it. 

2. Turum-burum: checkout flow

Digital UX design agency Turum-burum aimed to optimize conversions for their customer Intertop, an ecommerce shoe store based in Ukraine. 

In the UX analysis phase, Turum-burum used Hotjar Surveys —specifically, an exit-intent pop-up —to gather user insights on Intertop’s checkout page. When a user clicked to leave the page, the survey asked, “Why would you like to stop placing the order?” Out of the 444 respondents, 48.6% said they couldn’t complete the checkout form.

Hotjar Surveys reveal why users leave the checkout flow.

The next step was to develop hypotheses and A/B test them. Turum-burum tested changes like reducing the number of form fields, splitting the webpage into blocks, and adding a time-saving autofill feature.

A/B testing plays a key role in Turum-burum’s conversion rate optimization (CRO) model, which they call Evolutionary Site Redesign (ESR).

Each time they tweaked a page, the company used Hotjar Recordings and Heatmaps to see how users experienced the change. Heatmaps revealed trends in users’ click and scroll behavior, while Recordings helped the team spot points of friction, like rage clicks, users encountered during the checkout flow.

The final result? Intertop’s conversion rate increased by 54.68% in the test variant. When they officially rolled out the changes, the average revenue per user (ARPU) grew by 11.46%, and the checkout bounce rate decreased by 13.35%.

Hotjar has flexible settings for heatmaps and session recordings, which is especially useful when you’re A/B testing and want to see how users experience each version of your design.

3. Spotahome: new features

A/B testing doesn’t have to be stuffy or stressful. Online booking platform Spotahome keeps it casual and fun with Hotjar Watch Parties .

Right now, people in product and engineering at Spotahome use Hotjar on a daily basis. We’re always running A/B tests and using Hotjar to see how the new feature performs.

Developers gather virtually, over a video call, and watch recordings of users interacting with new features.

Spotahome’s developers gather for pizza parties to watch Hotjar Recordings and see how new features perform.

For example, when watching recordings of users experiencing their new sign-up flow, developers noticed a broken button. 

While they might have grimaced and groaned when they spotted it, the moment allowed them to catch a problem that would’ve cost them conversions.

💡Pro tip: don’t be afraid of negative results when running A/B tests. 

Johann Van Tonder , CEO at ecommerce CRO agency AWA digital , says, “A test with a strong negative result means you’ve identified a conversion lever. You’ve pulled it in the wrong direction, now just figure out how to pull it in the opposite direction.”

Johann says he often gets even more excited about negative results because they showcase how valuable A/B testing actually is. 

“We tested a redesigned checkout flow for a famous car rental company,” he says. “It would’ve cost them £7m in annual revenue if they’d just made it live as is.”

Even though negative results are sometimes inevitable, there are some common A/B testing mistakes you need to be aware of, so you can get maximum results from your experiments. Check out the top A/B testing mistakes chapter of this guide (coming soon!) to learn more.

4. The Good: mobile homepage

Ecommerce CRO experts The Good took on the task of achieving higher conversion rates on mobile for client Swiss Gear, a retailer of outdoor, travel, and camping supplies.

To uncover any existing issues or bottlenecks, The Good turned first to Google Analytics to determine where, when, and why visitors left the website . 

With this quantitative data as a starting point, the company cued up Hotjar Heatmaps, which are free forever , to highlight users’ click and scroll patterns. Then, they used Hotjar Recordings to determine the why behind user behavior — the qualitative data —and form their hypotheses about how to make improvements. 

The Good tested their hypotheses, using heatmaps and recordings again after each test to see how changes impacted user behavior.

The Good used Hotjar Heatmaps to understand how users interacted with content filters, and used this data to redesign client Swiss Gear’s mobile menu to be more user-friendly.

The Good discovered that users were getting confused by the iconography and language on Swiss Gear's mobile site. The process led the team to design a simple, visually appealing menu-driven user interface (UI) for the mobile homepage.

This interface streamlined the browsing experience by promoting top filters—a move that led to big results: Swiss Gear’s mobile bounce rate dropped by 8% and time on site increased by 84% .

💡Pro tip: use Hotjar Engage for even more insights when optimizing your mobile site. 

Engage lets you source and interview real users about how they experience your site on their phones. Then, you can filter these interviews by type of phone, like Android or iPhone, to look for usability trends.

Recruit from Hotjar’s pool of 175,000+ verified participants and automatically screen to make sure you’re speaking to the right people

5. Re:member: application form

Re:member , a Scandinavian credit card company, knew something was wrong with their funnel. Google Analytics showed that many qualified leads arrived from affiliate sites—and quickly left before they signed up for a credit card.

Using Hotjar filters , re:member’s Senior Marketing Specialist, Steffen Quistgaard, pulled up recordings and click maps of sessions from affiliate sites only. 

While studying these sessions, Quistgaard noticed users scrolling up and down, clicking to the homepage, and hovering over—and attempting to click on—the benefits section.

Putting together these behavioral trends, Quistgaard hypothesized that leads were hesitant and needed more persuasive information on the form.

Re:member redesigned their credit card application form with more visual organization on the right side for users: three distinct content sections, checkmarks instead of bullet points, and icons in the rewards program section.

Re:member’s team redesigned the form, using visual and web design hierarchy cues to call attention to the credit card’s features and benefits. Then, they conducted split testing.

The result? Form conversions went up 43% among users from affiliate sites and 17% overall.

💡Pro tip: use filters to spend less time searching and more time analyzing. 

If your site experiences high traffic volume, you could rack up many recordings in a short time. (No worries! You get 1,050 session recordings for free every month on the Hotjar Basic ‘free forever’ plan. 💪) 

To make the most of your time, you need to sort through your recordings in the most efficient way.

Hotjar offers several filters that you can use, depending on your goals: 

Finding broken elements or bugs: sort recordings by rage clicks, errors, or u-turns (when a user returns to the previous page in under seven seconds).

Testing a new feature: verify your assumptions about how a new button or link is performing with the clicked element filter. This lets you refine your sessions to only those where users actually clicked on the featured element.

Comparing two versions of your page: filter by events to better understand your A/B test results. By setting up each page variant as a separate event, you can easily separate the recordings before watching them.

6. Every.org: donation flow

Dave Sharp, Senior Product Designer at charity donation site Every.org , was watching session recordings when he noticed something interesting: a surge of rage clicks, or a series of repeated clicks in a short time, on their donation form.

After watching many sessions, Dave hypothesized that the form’s two CTAs were confusing and frustrating visitors.

Every.org’s original donation form featured two CTAs, which confused visitors and increased the bounce rate.

Every.org created a new version of the donation flow, splitting it into two pages, each with only one CTA button. Then they tested it against the original version.

By the end of the A/B testing process, conversions had increased by a whopping 26.5%.

💡Pro tip: when running tests, save time with Hotjar features and integrations. 

While fine-tuning Every.org’s donation flow, Dave used three time-saving tricks to narrow down the number of recordings he was watching: 

Filter by URL: this filter meant Dave could focus on user activity on certain pages—he could skip sessions of users on the blog, for example, and focus on activity closer to the point of conversion

Sort by relevance: instead of watching users’ sessions chronologically, Dave chose to sort them by relevance . That meant Hotjar’s algorithm did the heavy lifting, finding the most insightful recordings for him.

Set up alerts: to save even more time, Dave used Hotjar’s Slack integration to get an alert each time new recordings surfaced of users trying the updated donation flow.

Every.org gets thousands of visitors each month, but with these strategies, Dave made quick work of a task that could otherwise seem daunting.

Get closer and closer to what users need

You can’t just rely on gut instinct when making changes to your website. To create a site visitors enjoy (and one that gets results), you need to collect real evidence and user insights. 

By looking at A/B testing examples, you’ll have a clear roadmap of how to identify a problem, create a hypothesis, and test variants of your site. In time, you’ll have a site that delivers exactly what your target audience wants—and keeps them coming back for more.

FAQs about A/B testing examples

What is A/B testing?

A/B testing is a controlled experiment in which you run two different product or website versions simultaneously and see which one performs better. For example, you might run your current sales page against a new version with a section that addresses objections. Then, you’d gather and analyze data to see which one resulted in more conversions.

Why is A/B testing important?

With A/B testing, you become data-driven instead of relying on your instincts when making improvements to your website design. It helps you identify and address issues like bugs, confusing layouts, and unclear CTAs to create a more satisfying user experience that decreases bounce rates, increases conversion rates, and gets you return customers. 

What can I learn from A/B testing examples?

A website is packed with content, images, organizational features, and navigational tools, so it’s sometimes hard to know where to start to make improvements. Looking at other companies that have had success with A/B testing can spark ideas as you develop your own approach. Here are a few A/B testing case studies we recommend checking out:

Bannersnack boosted its landing page conversions

Turum-burum improved shoe retailer Intertop’s checkout flow

The Good redesigned Swiss Gear’s mobile menu

Spotahome looked for bugs in new features

Re:member increased application form conversions

Every.org made its donation flow better for would-be donors

Software Testing Techniques with Test Case Design Examples

What are the different types of software testing techniques?

Software Testing Techniques assist you in creating more effective test cases. Because exhaustive testing is not possible, manual testing techniques help reduce the number of test cases to be executed while increasing test coverage. They also aid in identifying test conditions that would otherwise be difficult to detect.

Boundary Value Analysis (BVA)

Boundary value analysis is testing at the boundaries between partitions. It covers maximum, minimum, inside and outside boundaries, typical values, and error values.

A substantial percentage of errors occur at the limits of the defined input values rather than at the center. The technique, also known as BVA, provides a set of test cases that exercise the boundary values.

This black-box testing approach complements equivalence partitioning. It is based on the idea that if a system performs well for these specific values, it will perform equally well for all values between the two boundary values.

Boundary Value Analysis Guidelines

If an input condition is constrained between values x and y, the test cases should include x and y as well as values just above and just below x and y.

If an input condition has a large number of values, create a test case that exercises the minimum and maximum values. Values just above and below the minimum and maximum are checked as well.

Apply the first and second guidelines to the output conditions: produce outputs that represent the expected minimum and maximum values, and also check values just below and above them.

Equivalence Class Partitioning with an Example

Equivalent Class Partitioning divides a collection of test conditions into a partition that should be treated the same. This software testing approach splits a program's input domain into data classes from which test cases should be created.

The idea behind this method is that a test case for a representative value of each class is equivalent to a test for any other value of the same class. It enables you to identify valid as well as invalid equivalence classes.

Example: input conditions are valid between 1 to 10 and 20 to 30.

As a result, there are five equivalence classes:

  • 0 and below (invalid)
  • 1 to 10 (valid)
  • 11 to 19 (invalid)
  • 20 to 30 (valid)
  • 31 and above (invalid)

You then choose one value from each class, for example: -2, 3, 15, 25, 45.

Decision Table Testing

A decision table is also known as a Cause-Effect table. This software testing approach is used for functions that respond to a combination of inputs or events. For example, a submit button should be enabled only if the user has entered all required information.

The first step is to identify capabilities whose output is dependent on a number of different inputs. If the input set of possibilities is huge, break it down into smaller subsets to make managing a decision table easier.

Create a table for each function and list all possible combinations of inputs and outputs. This aids in the detection of a condition that the tester may have missed.

The steps for making a decision table are as follows −

List the inputs in rows.

Enter all the rules in the columns.

Fill the table with the different combinations of inputs.

In the last row, note the output against each input combination.

A submit button on a contact form, for example, is only activated after the end-user has completed all of the required fields.
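That submit-button rule is itself a one-line decision that the table's columns exercise. A sketch, with an assumed set of required fields:

```python
REQUIRED_FIELDS = ("name", "email", "message")   # assumed required fields

def submit_enabled(form: dict) -> bool:
    """The submit button is active only when every required field is non-blank."""
    return all(form.get(field, "").strip() for field in REQUIRED_FIELDS)

assert submit_enabled({"name": "Ada", "email": "ada@example.com", "message": "Hi"})
assert not submit_enabled({"name": "Ada", "email": "", "message": "Hi"})
assert not submit_enabled({"name": "Ada"})   # missing fields count as blank
```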

State Transition

In the State Transition approach, changes in input conditions alter the state of the Application Under Test (AUT). The tester can use this approach to test the behavior of an AUT by entering various input conditions in sequence. The testing team evaluates the system's behavior by providing both positive and negative input test values.

State Transition Guideline

When a testing team is evaluating an application for a limited set of input values, state transition should be employed.

When the testing team wishes to test a sequence of events that occur in the application under test, this approach should be employed.

In the following example, the user will be able to log in successfully if he or she provides the correct password on any of the first three attempts. If the user enters an incorrect password on the first or second attempt, he or she is asked to re-enter it. When the user enters an incorrect password for the third time, the account is blocked.

White-box or structure-based approaches

The structure-based or white-box approach creates test cases based on the software's underlying structure. This method thoroughly checks the code that has been produced. Developers with a thorough understanding of the software code, its internal structure, and design assist in the creation of test cases. There are five different types of this approach.

Statement Testing and Coverage

This method entails running all of the source code's executable statements at least once. The percentage of executable statements exercised by the tests is measured. This is the least preferred metric for checking test coverage.

Decision Testing Coverage

This approach, also known as branch coverage, is a testing method that ensures each of the possible branches from every decision point is executed at least once, so that all reachable code is run. This helps validate all branches in the code and ensures that no branch leads to unexpected program behavior.
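For illustration, a function with a single decision point needs only two test cases for full decision (branch) coverage; the `classify` function below is our own example:

```python
def classify(n: int) -> str:
    """One decision point, so there are exactly two branches to cover."""
    if n % 2 == 0:
        return "even"
    return "odd"

# Two test cases achieve 100% decision coverage:
assert classify(4) == "even"   # takes the True branch
assert classify(7) == "odd"    # takes the False branch
```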

Condition Testing

In condition testing, also known as predicate coverage testing, each Boolean expression is evaluated as both TRUE and FALSE at least once. Test cases are designed so that each condition outcome is straightforward to exercise, which gives complete coverage of the conditions in the code.

Multiple Condition Testing

The goal of multiple condition testing is to test every possible combination of condition outcomes in order to achieve 100% coverage. Two or more test cases are necessary to ensure complete coverage, which requires more effort.

All Path Testing

The source code of a program is used to discover every executable path in this method. This helps identify all flaws in a given piece of code.

Experience-Based Techniques

These methodologies rely heavily on the tester's experience to grasp the most critical aspects of the program. The outcomes of these procedures depend on the skills, knowledge, and competence of the people involved. The following are examples of experience-based techniques:

Exploratory Testing

This approach is used to test the program without formal documentation. Minimal time is spent on test planning, leaving maximum time for test execution. In exploratory testing, test design and execution are done simultaneously.

Error Guessing

The testers use this approach to predict faults based on their previous experience, data availability, and knowledge of product failure. The testers' abilities, intuition, and experience all play a role in error guessing.

Error guessing is a software testing approach that involves predicting the type of error that could occur in the code. The approach relies mainly on experience, with test analysts identifying the troublesome sections of the application under test based on their previous experience. As a result, test analysts must be skilled and experienced for effective error guessing.

In this approach, a list of likely mistakes and error-prone situations is compiled. The tester then creates test cases to expose the flaws. The analyst can use past experience to establish the conditions for designing test cases based on this software testing approach.

Guidelines for Error Guessing

  • The test should be based on past testing experience with similar applications.
  • Understand the system that is being tested.
  • Understand common implementation mistakes.
  • Remember areas that were defect-prone in the past.
  • Analyze historical data and test results.
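The guidelines above can be turned into concrete probes. Below is a minimal sketch, where `parse_age` is a hypothetical input parser and the "risky" inputs are the kinds of values experience says commonly break code (empty or whitespace strings, negatives, non-numeric text, out-of-range numbers):

```python
# Hypothetical parser under test (illustrative only).
def parse_age(text):
    value = int(text.strip())
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

# Error-guessing test list: inputs a tester would predict to be troublesome.
guessed_trouble_spots = ["", "   ", "-1", "abc", "151", "12.5"]
rejected = []
for raw in guessed_trouble_spots:
    try:
        parse_age(raw)
    except ValueError:
        rejected.append(raw)
print(f"{len(rejected)} of {len(guessed_trouble_spots)} risky inputs rejected")
```

Here every guessed input happens to be rejected cleanly; in practice, any input that is accepted or crashes unexpectedly points at the weakness the tester guessed.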

Software testing techniques enable you to design more effective test cases. Five approaches are commonly employed:

Boundary value analysis tests at the boundaries between partitions.

Equivalent Class Partitioning divides a collection of test conditions into a partition that should be treated the same.

For functions that respond to a mix of inputs or events, the Decision Table software testing approach is utilized

In the State Transition approach, changes in input conditions alter the state of the Application Under Test (AUT).

Error guessing is a software testing approach that involves predicting the type of error that could occur in the code.

Enhancing test case efficiency is not a simple concept to describe; it is achieved through a well-developed process and consistent practice.

The testing team should stay engaged in improving these activities, as they are the most effective means of achieving better quality. Many testing companies throughout the world have demonstrated this on mission-critical projects and complex applications.

Vineet Nanda


Case Studies

BFSI

  • Test consulting & tools assessment partnership providing test assessment approach along with its in-house tools evaluation frameworks across various areas ❶
  • Test advisory and managed testing of their new banking systems and integration testing for the complex touchpoints for multiple Fiserv products ❷
  • Functional testing for MIFID II compliance across various systems including Charles River, BlackRock Aladdin and UnaVista. Set up a team of domain experts quickly and helped the client complete the testing as per the regulatory deadline ❸
  • Functional, usability and performance testing of new digital wallet services for quick financial transactions, and performed end-to-end testing of mobile app and APIs ❹

Retail

  • Test partnership across all digital and backend applications for functional testing and test automation. Set up the QA processes, automation solution and framework from scratch and defined the automation roadmap ❶
  • Functional and regression testing for existing ERP and web application. Development of automation framework and script to run regression suite. Refined overall testing process and implemented agile processes for both SDLC & STLC ❷
  • Designed test strategy and performed functional and system integration tests across the application & UAT support was provided for supply chain, stores sales, and warehouses system ❸

Health Care


  • Test partnership for automation, functional & performance testing services in a hybrid model across their information management system tested on both functionality and database end ❶
  • End-to-end functional, security and performance testing of solution for defects in the application before going live. Setting up QA/ testing processes and creating re-usable test cases, etc. ❷
  • Created performance test scenarios & automated critical test cases for strategy style maintenance & batch execution of automation scripts for desktop-based application ❸
  • Performed VAPT on the web application, web services, AWS S3 buckets and medical device as per OWASP guidelines & SANS Top 25. Created HIPAA compliance policy manual for client & assisted with its readiness assessment ❹

Travel

  • End-to-end functional testing partnership and provided agile coaching and streamlined their test processes. Supported system, data migration, and performance testing for ongoing sprints & support on regression, exploratory, integration, and automation testing ❶
  • Test advisory, end-to-end functional testing, test automation, implement integrated automated testing framework, test optimization techniques & helped in setting up CI/CD pipeline. Streamlined IT processes and artifacts, optimized resource skills, tool metrics & provided well-defined quality management and monitoring mechanism ❷
  • End-to-end validations on their IoT system. Performed component level and system-level tests using JMeter, Grafana, and Prometheus integration was implemented for component level and system-level performance, scalability, stress and endurance tests ❸

Media & Publishing


  • End-to-end testing function operating in an onsite/offshore model. Services included test automation and performance testing. Reduced regression cycle time using an automation-first approach and helped identify the application bottleneck ❶
  • Test automation partnership for their money control application based on web, desktop and mobile (both Android & iOS). Leveraged its in-house test automation framework Tx-Automate built on the BDD approach to help create an end to end CI/ CD testing framework integrated with client’s in-house DevOps tools ❷
  • Performed integration, functional, usability & regression testing the application & customized QA process as per client needs to implement enforced standards and to build consistency across the teams ❸
  • Conducted a POC to identify a suitable performance testing tool for the application and recommended HP LoadRunner; using this tool, testing of a complex application was performed ❹

ISV & Telecom


  • Staffing & managing of QA testing team to support a host of different applications. Implemented matured agile transformation QA practice and extensive experience of test and lifecycle automation helping to build the right QA capabilities ❶
  • Device testing for their mobile application based on iOS and android platforms over different physical devices. Determined the transparent data transmission in the devices connected via HFP ❷
  • Test automation partnership for client’s software used for client lifecycle management processes. Leveraged Tx-Automate to enhance the solution and build a team to rapidly scale test automation for their key financial services clients ❸

Other case studies

  • Test partnership for cloud transformation initiative including moving applications portfolio to Cloud. Created detailed cloud migration testing strategy, test planning & execution ❶
  • Crafted a compelling solution with its comprehensive managed testing centre of excellence (TCoE) for QA partnership, provided in a global delivery model (GDM) ❷
  • Managed testing services including test advisory, functional testing, automation & performance testing for a critical initiative to move 35+ applications to cloud. Set up automation framework using in-house automation framework ‘Tx-Automate’ ❸
  • Examined the current QA processes, people and tools, and benchmark against the industry best practices like TMMi/ CMMi. Provided futuristic QA and automation roadmap & helped them in standardizing the QA /testing process at organization level ❹


Test Case Design – Methods & Techniques

After the completion of the course, the participants would be able to understand the basic concepts of test-case design and testware, and learn how to apply the various test-case design methods and techniques in black box and white box testing.

The course is a mix of case-driven, instructor-led, and self-paced learning, designed to enable participants to learn, experiment with, and implement the concepts involved in test-case design methods and techniques. The participants will be presented with ample examples, exercises and case studies to understand and apply the concepts taught.

Module 1 – Introduction

Participants' familiarization with course material | familiarization with the protocols and timings | expectation setting and clarifications

Module 2 – Introduction to Test Design

Introduction to testware | test case basics | need for test design | evolution of test design techniques | benefits of using test design techniques

Module 3 – Black Box Test Case Design

Basic concepts of black box testing | equivalence class partitioning | boundary value analysis | decision tables | state transition based testing | orthogonal arrays | all pairs technique

Module 4 – White Box Test Case Design

Basic concepts of white box testing | types of white box testing | techniques involved in static white box testing (desk checking, code walkthrough, formal inspections) | techniques involved in structural white box testing (control flow / coverage testing, basis path testing, loop testing, data flow testing)

Module 5 – Case Studies & Exercises

Test case design | test coverage | performing tests | recording test results | defect management

  • Duration : 2 Days
  • $1295 per participant
  • Certification Costs additional
  • Upcoming schedule

Software Testing Techniques


Software testing techniques are methods used to design and execute tests to evaluate software applications. The following are common testing techniques:

  • Manual testing – Involves manual inspection and testing of the software by a human tester.
  • Automated testing – Involves using software tools to automate the testing process.
  • Functional testing – Tests the functional requirements of the software to ensure they are met.
  • Non-functional testing – Tests non-functional requirements such as performance, security, and usability.
  • Unit testing – Tests individual units or components of the software to ensure they are functioning as intended.
  • Integration testing – Tests the integration of different components of the software to ensure they work together as a system.
  • System testing – Tests the complete software system to ensure it meets the specified requirements.
  • Acceptance testing – Tests the software to ensure it meets the customer’s or end-user’s expectations.
  • Regression testing – Tests the software after changes or modifications have been made to ensure the changes have not introduced new defects.
  • Performance testing – Tests the software to determine its performance characteristics such as speed, scalability, and stability.
  • Security testing – Tests the software to identify vulnerabilities and ensure it meets security requirements.
  • Exploratory testing – A type of testing where the tester actively explores the software to find defects, without following a specific test plan.
  • Boundary value testing – Tests the software at the boundaries of input values to identify any defects.
  • Usability testing – Tests the software to evaluate its user-friendliness and ease of use.
  • User acceptance testing (UAT) – Tests the software to determine if it meets the end-user’s needs and expectations.

Software testing techniques are the ways employed to test the application under test against the functional or non-functional requirements gathered from the business. Each testing technique helps to find a specific type of defect. For example, techniques that find structural defects might not be able to find defects in the end-to-end business flow. Hence, multiple testing techniques are applied in a testing project to conclude it with acceptable quality.

Principles Of Testing

Below are the principles of software testing:

  • All tests should meet the customer’s requirements.
  • To keep testing unbiased, it should be performed by an independent third party.
  • Exhaustive testing is not possible; we need an optimal amount of testing based on the risk assessment of the application.
  • All tests should be planned before they are implemented.
  • It follows the Pareto rule (80/20 rule): 80% of errors come from 20% of program components.
  • Start testing with small parts and extend to larger parts.

Types Of Software Testing Techniques

There are two main categories of software testing techniques:

  • Static Testing Techniques are testing techniques that are used to find defects in an application under test without executing the code. Static Testing is done to avoid errors at an early stage of the development cycle thus reducing the cost of fixing them.
  • Dynamic Testing Techniques are testing techniques that are used to test the dynamic behaviour of the application under test, that is, by execution of the code base. The main purpose of dynamic testing is to test the application with dynamic inputs: some allowed per the requirements (positive testing) and some not allowed (negative testing).

Each testing technique has further types as showcased in the below diagram. Each one of them will be explained in detail with examples below.


Testing Techniques

Static Testing Techniques

As explained earlier, Static Testing techniques are testing techniques that do not require the execution of a code base. They are divided into two major categories: reviews and static analysis.

Reviews:

  • Peer Reviews : Informal reviews generally conducted without any formal setup, between peers. For example, two developers/testers review each other’s artifacts such as code or test cases.
  • Walkthroughs : A walkthrough is a review in which the author of the work under review (code, test case, or document) walks the stakeholders through what he/she has done and the logic behind it, to achieve a common understanding or to gather feedback.
  • Technical review : A review meeting that focuses solely on the technical aspects of the document under review to achieve consensus. It has little or no focus on the identification of defects against reference documentation. Technical experts such as architects or chief designers are required to do the review. It can vary from informal to fully formal.
  • Inspection : The most formal category of reviews. The document under review is thoroughly prepared before going into the inspection. Defects identified in the inspection meeting are logged in the defect management tool and followed up until closure. Discussion of defects is deferred to a separate discussion phase, which makes inspections a very effective form of review.

Static Analysis:

  • Data Flow : How the data trail is followed in a given program: how data gets accessed and modified per the instructions in the program. Data flow analysis can identify defects such as a variable definition that never gets used.
  • Control Flow : The structure of how program instructions get executed, i.e. conditions, iterations, or loops. Control flow analysis helps to identify defects such as dead code, i.e. code that never gets executed under any condition.
  • Data Structure : The organization of data irrespective of code. The complexity of data structures adds to the complexity of code. Thus, it provides information on how to test the control flow and data flow in a given code.

Dynamic Testing Techniques

Dynamic techniques are subdivided into three categories:

1. Structure-based Testing:

These are also called White box techniques. Structure-based testing techniques focus on how the code is structured and test it accordingly. To understand structure-based techniques, we first need to understand the concept of code coverage.

Code coverage is normally measured in Component and Integration Testing . It establishes how much of the total code written is covered by the structural testing techniques. One drawback of code coverage is that it says nothing about code that was never written at all (a missed requirement). There are tools in the market that can help measure code coverage.

There are multiple ways to test code coverage:

1. Statement coverage: Number of Statements of code exercised/Total number of statements. For Example, if a code segment has 10 lines and the test designed by you covers only 5 of them then we can say that statement coverage given by the test is 50%.
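The 50% figure can be reproduced with Python's built-in line tracing; the function below and its line count are purely illustrative:

```python
import sys

# Illustrative function with 3 executable body lines (the if and two returns).
def classify(n):
    if n > 0:
        return "positive"
    return "non-positive"

TOTAL_BODY_LINES = 3
executed = set()

def tracer(frame, event, arg):
    # Record each line of classify() that actually runs.
    if event == "line" and frame.f_code is classify.__code__:
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
classify(5)            # exercises only the n > 0 path
sys.settrace(None)

coverage = 100 * len(executed) / TOTAL_BODY_LINES
print(f"statement coverage: {coverage:.0f}%")  # the second return never ran
```

Adding a second call such as `classify(-1)` would execute the remaining return statement and bring statement coverage to 100%.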

2. Decision coverage: Number of decision outcomes exercised/Total number of decisions. For example, if a code segment has 4 decisions (if conditions) and your test executes just 1, then decision coverage is 25%.

3. Conditional/Multiple condition coverage: Aims to ensure that each outcome of every logical condition in the program has been exercised.

2. Experience-Based Techniques:

These are techniques for executing testing activities with the help of experience gained over the years. Domain skill and background are major contributors to this type of testing. These techniques are used majorly for UAT/business user testing . These work on top of structured techniques like Specification-based and Structure-based, and they complement them. Here are the types of experience-based techniques:

1. Error guessing : It is used by a tester who has either very good experience in testing or with the application under test and hence they may know where a system might have a weakness. It cannot be an effective technique when used stand-alone but is helpful when used along with structured techniques.

2. Exploratory testing : It is hands-on testing where the aim is to have maximum execution coverage with minimal planning. The test design and execution are carried out in parallel without documenting the test design steps. The key aspect of this type of testing is the tester’s learning about the strengths and weaknesses of an application under test. Similar to error guessing, it is used along with other formal techniques to be useful.

3. Specification-based Techniques:

This includes both functional and non-functional techniques (i.e. quality characteristics). It means creating and executing tests based on functional or non-functional specifications from the business. Its focus is on identifying defects corresponding to given specifications. Here are the types of specification-based techniques:

1. Equivalence partitioning : It is generally used together with boundary value analysis and can be applied to any level of testing. The idea is to partition the input range of data into valid and invalid sections such that all values within a partition are considered equivalent. Once the partitions are identified, it suffices to test with any one value from a given partition, assuming that all values in the partition will behave the same. For example, if an input field takes values between 1-999, then any value between 1 and 999 will yield similar results, and we need NOT test with each value to call the testing complete.
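For the 1-999 field described above, the idea can be sketched in a few lines (the validation function is an illustrative stand-in for the field under test):

```python
# Illustrative validator for a field accepting values 1-999.
def accepts(value):
    return 1 <= value <= 999

# Three partitions; one representative value stands in for each whole class.
representatives = {"below range": 0, "valid": 500, "above range": 1000}
results = {name: accepts(v) for name, v in representatives.items()}
print(results)  # only the valid-partition representative is accepted
```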

2. Boundary Value Analysis (BVA) : This analysis tests the boundaries of the range, both valid and invalid. In the example above, 0, 1, 999, and 1000 are the boundaries to test. The reasoning behind this kind of testing is that, more often than not, boundaries are not handled gracefully in the code.
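Continuing the same 1-999 example, the four boundary values become four explicit checks (the validator is again an illustrative stand-in):

```python
# Illustrative validator for a field accepting values 1-999.
def accepts(value):
    return 1 <= value <= 999

# BVA: test just inside and just outside each boundary of the valid range.
boundary_cases = {0: False, 1: True, 999: True, 1000: False}
for value, expected in boundary_cases.items():
    assert accepts(value) == expected
print("all boundary checks passed")
```

An off-by-one mistake in the validator (say, `1 < value` instead of `1 <= value`) would be caught immediately by the `1: True` case.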

3. Decision Tables : These are a good way to test the combination of inputs. It is also called a Cause-Effect table. In layman’s language, one can structure the conditions applicable for the application segment under test as a table and identify the outcomes against each one of them to reach an effective test. 

  • Care should be taken that there are not too many combinations, otherwise the table becomes too big to be effective.
  • Take the example of a credit card that is issued only if both the credit score and salary criteria are met. This can be illustrated in the decision table below:

Decision Table

Credit score met | Salary criteria met | Card issued
Y                | Y                   | Y
Y                | N                   | N
N                | Y                   | N
N                | N                   | N
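The credit-card example above maps naturally onto code: each combination of conditions is one table entry and one test case (the rule and names below are illustrative):

```python
# Credit-card rule as a decision table: a card is issued only when both
# the credit score and the salary criteria are met.
decision_table = {
    (True, True):   True,
    (True, False):  False,
    (False, True):  False,
    (False, False): False,
}

def issue_card(credit_score_ok, salary_ok):
    return decision_table[(credit_score_ok, salary_ok)]

# Each table row becomes one test case.
for (score_ok, salary_ok), expected in decision_table.items():
    assert issue_card(score_ok, salary_ok) == expected
print("all 4 decision-table rows tested")
```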

4. Use case-based Testing : This technique helps us to identify test cases that execute the system as a whole, like an actual user (Actor), transaction by transaction. Use cases are a sequence of steps that describe the interaction between the Actor and the system. They are always defined in the language of the Actor, not the system. This testing is most effective in identifying integration defects. A use case also defines any preconditions and postconditions of the process flow. The ATM example can be tested via a use case:


Use case-based Testing

5. State Transition Testing : It is used where an application under test, or a part of it, can be treated as an FSM or finite state machine. Continuing the simplified ATM example above, we can say that the ATM flow has finite states and hence can be tested with the state transition technique. There are 4 basic things to consider:

  • States a system can achieve
  • Events that cause the change of state
  • The transition from one state to another
  • Outcomes of change of state

A state event pair table can be created to derive test conditions – both positive and negative. 
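A state-event table like that can be written directly as a transition map; the ATM states and events below are simplified and illustrative:

```python
# Simplified ATM finite state machine: (state, event) -> next state.
transitions = {
    ("idle", "insert_card"): "card_inserted",
    ("card_inserted", "enter_pin_ok"): "authenticated",
    ("card_inserted", "enter_pin_bad"): "idle",
    ("authenticated", "withdraw"): "dispensing",
    ("dispensing", "take_cash"): "idle",
}

def step(state, event):
    # Negative test condition: an event that is invalid in the current state.
    if (state, event) not in transitions:
        raise ValueError(f"{event!r} not allowed in state {state!r}")
    return transitions[(state, event)]

# Positive test: a valid event sequence ends back at idle.
state = "idle"
for event in ["insert_card", "enter_pin_ok", "withdraw", "take_cash"]:
    state = step(state, event)
assert state == "idle"

# Negative test: withdrawing before PIN entry must be rejected.
try:
    step("card_inserted", "withdraw")
except ValueError as e:
    print("rejected:", e)
```

Every (state, event) pair not in the map is a negative test condition, and every pair in the map is a positive one, matching the state-event pair table described above.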


State Transition

Advantages of software testing techniques:

  • Improves software quality and reliability – By using different testing techniques, software developers can identify and fix defects early in the development process, reducing the risk of failure or unexpected behaviour in the final product.
  • Enhances user experience – Techniques like usability testing can help to identify usability issues and improve the overall user experience.
  • Increases confidence – By testing the software, developers, and stakeholders can have confidence that the software meets the requirements and works as intended.
  • Facilitates maintenance – By identifying and fixing defects early, testing makes it easier to maintain and update the software.
  • Reduces costs – Finding and fixing defects early in the development process is less expensive than fixing them later in the life cycle.

Disadvantages of software testing techniques:

  • Time-consuming – Testing can take a significant amount of time, particularly if thorough testing is performed.
  • Resource-intensive – Testing requires specialized skills and resources, which can be expensive.
  • Limited coverage – Testing can only reveal defects that are present in the test cases, and defects can be missed.
  • Unpredictable results – The outcome of testing is not always predictable, and defects can be hard to replicate and fix.
  • Delivery delays – Testing can delay the delivery of the software if testing takes longer than expected or if significant defects are identified.
  • Automated testing limitations – Automated testing tools may have limitations, such as difficulty in testing certain aspects of the software, and may require significant maintenance and updates.


TESTCo

Software Testing Case Studies and Technical Reports

Look behind the scenes to learn how TESTCo partners with our clients to solve tough testing problems.

An international company looks to TESTCo to improve customer satisfaction for their movie ticket app.

A Pharmacy Services Company Turns to TESTCo for a Last Minute Testing That Saves Their New Web Application

An ISV Boosts Revenue and Reduces Costs by Implementing TESTCo Software Test Automation Solutions

A Real Estate Company Learns the Hard Way the Value of Outsourced Software Testing for a Critical Project

What Our Other Clients Are Saying About TESTCo

Original Reports and Tips on Software Testing, Mobile App Testing, and Website Testing

Mobile Application Testing Guide: What to Check Before You Release Your App

Is Agile Software Testing Wasting Money on Automation Tests?

Software Testers Vs. Software Test Engineers: What You Should Know About the People Testing Your Software

5 Situations That Are A Good Fit for On-Demand Testing (and 2 That Are Not)

Website Testing Report: A Convenient Guide to Successful Website Testing Projects



Combinatorial Methods for Trust and Assurance

Industrial Case Studies - Combinatorial and Pairwise Testing

Combinatorial testing is being applied successfully in nearly every industry , and is especially valuable for assurance of high-risk software with safety or security concerns.  Combinatorial testing is referred to as  effectively exhaustive , or pseudo-exhaustive , because it can be as effective as fully exhaustive testing, while reducing test set size by 20X to more than 100X.
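The reduction can be illustrated at toy scale. In the sketch below, the three parameters and their values are invented for illustration: eight exhaustive combinations collapse to a four-test set that still covers every pair of values:

```python
from itertools import combinations, product

# Three invented parameters with two values each: 2*2*2 = 8 exhaustive tests.
params = {
    "browser": ["chrome", "firefox"],
    "os":      ["linux", "windows"],
    "locale":  ["en", "de"],
}
exhaustive = list(product(*params.values()))

# A hand-picked 4-test set intended to cover every pair of parameter values.
tests = [
    ("chrome",  "linux",   "en"),
    ("chrome",  "windows", "de"),
    ("firefox", "linux",   "de"),
    ("firefox", "windows", "en"),
]

def covers_all_pairs(tests, params):
    # For every pair of parameters, check that every value pair appears
    # in at least one test.
    names = list(params)
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        needed = set(product(params[a], params[b]))
        seen = {(t[i], t[j]) for t in tests}
        if needed - seen:
            return False
    return True

print(len(exhaustive), "exhaustive vs", len(tests), "pairwise:",
      covers_all_pairs(tests, params))
```

With realistic parameter counts the gap widens dramatically, which is the source of the 20X-100X reductions quoted above; production use would rely on a covering-array generator rather than hand-picked tests.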

Case studies below are from many types of applications, including aerospace, automotive, autonomous systems, cybersecurity, financial systems, video games, industrial controls, telecommunications, web applications, and others. 
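The size reductions quoted above come from covering all t-way value combinations rather than all full input combinations. As a minimal illustration (not any of the tools used in the case studies below), the Python sketch here greedily builds a pairwise (2-way) test set for a hypothetical three-parameter configuration; the parameter names are invented, and this naive generator enumerates the full Cartesian product internally, so it only suits small models. Production tools such as NIST's ACTS use far more scalable algorithms.

```python
from itertools import combinations, product

def pairwise_tests(params):
    """Greedily build a small test set in which every pair of values of
    every pair of parameters appears in at least one test case."""
    names = list(params)

    def pairs_of(case):
        # All 2-way (parameter, value) combinations one test case covers.
        return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}

    # Every 2-way combination that must be covered at least once.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    tests = []
    while uncovered:
        # Pick the candidate case covering the most still-uncovered pairs.
        best = max(
            (dict(zip(names, vals)) for vals in product(*params.values())),
            key=lambda case: len(pairs_of(case) & uncovered),
        )
        tests.append(best)
        uncovered -= pairs_of(best)
    return tests

# Hypothetical 3-parameter model: exhaustive testing needs 3^3 = 27 cases.
params = {
    "browser": ["chrome", "firefox", "safari"],
    "os": ["windows", "macos", "linux"],
    "network": ["wifi", "lte", "ethernet"],
}
tests = pairwise_tests(params)
```

A pairwise set for this model needs at least 9 cases (each case covers 3 of the 27 required pairs) versus 27 for exhaustive testing, and the gap widens rapidly as parameters and values are added.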

 

Industrial controls, consumer appliances  

M. Park, H. Jang, T. Byun, Y. Choi, " ", (ICSE-SEIP '20), Seoul

"In the field application, we discovered two fault cases in real products under development using our framework. According to our analysis, these cases could not have been found using manual testing, but made possible by our testing framework. These cases could have cost the company tens of million dollars each, if they were not discovered until after sale."
Industrial controls, operating system - various companies  

Li, X., Gao, R., Wong, W.E., Yang, C. and Li, D.,   . In   (pp. 53-60).

From January 2016 to February 2016, the authors tested three real-life software systems using CT and compared the results to errors that had been discovered using conventional methods, finding roughly 3X as many bugs in one-fourth of the time, for a 12X increase in test efficiency.

Lockheed Martin -

Aerospace

J. Hagar, D.R. Kuhn, R.N. Kacker, 48(4), pp. 64-72. CT applied to 8 Lockheed Martin pilot projects in aerospace software. Results: "Our initial estimate is that this method supported by the technology can save up to 20% of test planning/design costs if done early on a program while increasing test coverage by 20% to 50%."

Video coding

 

Hong, D., & Chae, S. I. (2014, June). Efficient test bitstream generation method for verification of HEVC decoders (pp. 1-2).

"In the proposed method the SE coverage normalized by the number of CTUs is 84 times higher compared to that in the HEVC conformance test suite. This means that we can verify the HEVC decoders 84 times faster with the test bitstream set obtained by the proposed method, compared to the HEVC conformance test suite."

Various software - a study on use of best practices from ISO/IEC/IEEE 29119 (2014) standard by companies in Serbia  

Vukovic, V., Djurkovic, J., Sakal, M., & Rakovic, L. (2020). . Tehnički vjesnik, 27(3), 687-696.

"Combinatorial Test Techniques, Peer Reviews, Statement Coverage, Branch Coverage and Loop Coverage are techniques used, each individually by about 50% respondents in organisations." "Combinatorial Test Techniques provide a good cost/benefit ratio, because it requires a relatively small number of test cases (in relation to the total number) to provide a good coverage of the program code and identification of software defects" 
Web application security - Univ of Novi Sad, Serbia

Preradov, Katarina, Mina Medić, Goran Sladić, and Branko Milosavljević. "Application of combinatorial methods in web application security testing." (2020): Information Society of Serbia, 79-83.

"We used ACTS for automatic test set generation. The results show that these test sets are effective and can reveal security issues. Also, the number of necessary test cases is reduced, which directly affects the time and money needed to adequately test the software."
Virginia Commonwealth U, Razorcat GmbH, NIST, and Idaho National Lab - industrial controls and nuclear safety   

AV Jayakumar, S Gautham, R Kuhn, B Simons, A Collins, T Dirsch, R Kacker, C Elks.  ,  (ISSRE 2020)

In this paper, we present results on the application of a systematic testing methodology called Pseudo-Exhaustive testing. The systematic testing methods were applied at the unit and module integration levels of the software. The findings suggest that Pseudo-Exhaustive testing supported by automated testing technology is an effective approach to testing real-time embedded digital devices in critical nuclear applications.

Loyola Univ, NIST -

Cryptography - multiple companies

 

Mouha, N., Raunak, M.S., Kuhn, D.R. and Kacker, R., 2018. . , 67(3), pp.870-884.

Detected flaws in cryptographic software code, reducing the test set size by 700X as compared with exhaustive testing, while retaining the same fault-detection capability.

Siemens -

Industrial controls

 

Ozcan, M., 2017, March. . In   (pp. 208-215). IEEE.

Applied combinatorial testing to industrial control systems, using mixed-strength covering arrays, “resulting in requiring fewer tests for higher strength coverage”. 

Osaka Univ, AIST - web applications

Jin, H., Kitamura, T., Choi, E. H., & Tsuchiya, T. A comparative study on combinatorial and random testing for highly configurable systems. Naples, Italy, Dec 9-11, 2020, Proceedings 32 (pp. 302-309). Springer International Publishing.

"We thus conclude that the diversity of configurations sampled by CT is 2 to 3 times higher than those sampled by RT."

Adobe - 

Data analytics

 

Smith, Riley, et al., 2019 (pp. 208-215).

"In this paper, we therefore report the practical application of combinatorial coverage measurements to evaluate the effectiveness of the validation framework for the Adobe Analytics reporting engine. The results of this evaluation show that combinatorial coverage measurements are an effective way to supplement existing validation for several purposes. In addition, we report details of the approach used to parse moderately nested data for use with the combinatorial coverage measurement tools."
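Combinatorial coverage measurement, as used in this study, asks how much of the t-way combination space an existing suite already exercises, rather than generating new tests. A minimal sketch, with a hypothetical input model and suite (NIST's actual measurement tools handle constraints and nested data that this toy version does not):

```python
from itertools import combinations

def two_way_coverage(tests, params):
    """Fraction of all pairwise (2-way) value combinations covered by
    an existing test set -- coverage measurement, not test generation."""
    names = list(params)
    total = covered = 0
    for a, b in combinations(names, 2):
        for va in params[a]:
            for vb in params[b]:
                total += 1
                if any(t[a] == va and t[b] == vb for t in tests):
                    covered += 1
    return covered / total

# Hypothetical model and suite: 2 of the 4 required pairs are covered.
params = {"cache": [True, False], "compress": [True, False]}
suite = [
    {"cache": True, "compress": True},
    {"cache": False, "compress": False},
]
coverage = two_way_coverage(suite, params)  # 0.5
```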

Adobe - 

Data analytics

 

Smith, Riley, et al., 2019 (pp. 208-215).

"In this paper, we report the practical application of combinatorial testing to the data collection, compression and processing components of the Adobe analytics product. Consequently, the effectiveness of combinatorial testing for this application is measured in terms of new defects found rather than detecting known defects from previous versions. The results of the application show that combinatorial testing is an effective way to improve validation for these components of Adobe Analytics."

Australian Defense Dept - aerospace and naval applications

Joiner, K.

"More recently the movement has incorporated high throughput testing (HTT) techniques using combinatorial methods to do highly efficient screening or validations, especially of simulations and to do more rigorous software verifications and cybersecurity protections with up to six-way combinations. The U.S. Defense methods have also spread to the U.S. Defense Industry initially for compliance purposes, meaning that these industries are now realising industry benefits."

US Marine Corps - computer security penetration testing  

T. McLean,   

"CYBERSTAT is applying Scientific Test and Analysis Techniques (STAT) to offensive cyber penetration testing tools.  By applying STAT to the tool, the tool’s scope is expanded beyond “one at a time” uses as combinations of options are explored with a combinatorial test. "
US Marine Corps - protocol testing

T. McLean,

The project successfully implemented an unbiased and statistically based test methodology for Link 16 standards conformance that can ensure repeatability with a quantifiable increase in test space coverage. 

Hexawise - lessons learned, various projects  
   

Using Combinatorial Test Design methods to select software test scenarios has repeatedly delivered large efficiency and thoroughness gains, which raises the questions:

• Why are these proven methods not used everywhere?
• Why do some efforts to promote adoption of new approaches stagnate?
• What steps can leaders take to successfully introduce and spread new test design methods?

Masuda, S., Nakamura, H. and Kajitani, K., 2018. Rule-based searching for collision test cases of autonomous vehicles simulation.  IET Intelligent Transport Systems, 12(9), pp.1088-1095.

Smith, B., Feather, M. and Huntsberger, T., A Hybrid Method of Assurance Cases and Testing for Improved Confidence in Autonomous Space Systems. In 2018 AIAA Information Systems-AIAA Infotech@Aerospace (p. 1981).

Abstract. We are investigating a new test development method that aims to maximize the confidence to be achieved by combining Assurance Cases with High Throughput Testing (HTT). Assurance Cases, developed for safety-critical systems, are a rigorous argument that the system satisfies a property (e.g., the Mars rover will not tip over during a traverse). They integrate testing, analysis, and environmental and operational assumptions, from which the set of conditions that testing must cover is determined. In our method, information from the Assurance Case is used to determine the test coverage needed, and then input to HTT to generate the minimal test suites needed to provide that coverage.

Garn, B., Kapsalis, I., Simos, D.E. and Winkler, S., On the applicability of combinatorial testing to web application security testing: a case study . Proc 2014 Wkshp Joining AcadeMiA and Industry Contributions to Test Automation and Model-Based Testing. ACM.

This paper reports on a case study done for evaluating and revisiting a recently introduced combinatorial testing methodology used for web application security purposes. It further reports on undertaken practical experiments thus strengthening the applicability of combinatorial testing to web application security testing.

Sagi, Bhargava Rohit. " Experimental Design in Game Testing ." Rochester Institute of Technology (2016).

Combinatorial testing is a method of experimental design that is used to generate test cases and is primarily used for commercial software testing. In addition to the discussion of the implementation of combinatorial testing techniques in video game testing, we present a method for finding combinations resulting in video game bugs.

Mälardalen University

Redavid, C., & Farid, A. (2011). An overview of game testing techniques . Västerås: sn.

"It's always an important issue for testers and software managers to decide how much testing should be enough [11]. Game quality has to be good enough for customer, but testing has to stop before the release date. It is neither practical nor economical, to test every possible combination of game event, configuration, function and options. Skipping some testing or taking shortcuts is always risky. Combinatorial testing is a quick way to find defects earlier in the game while keeping the test sets small, covering as much area as possible."

Jaguar Land Rover - 

Dhadyalla, Gunwant, Neelu Kumari, and Timothy Snell. "Combinatorial testing for an automotive hybrid electric vehicle control system: a case study." 2014 IEEE Seventh International Conference on Software Testing, Verification and Validation Workshops . IEEE, 2014.

Bozic, J., Garn, B., Kapsalis, I., Simos, D., Winkler, S. and Wotawa, F., 2015, August. Attack pattern-based combinatorial testing with constraints for web security testing. 2015 IEEE Intl Conf Software Quality, Reliability and Security 

The evaluated results indicate that both techniques succeed in detecting security leaks in web applications with different results, depending on the background logic of the testing approach. Last but not least, we claim that attack pattern-based combinatorial testing with constraints can be an alternative method for web application security testing, especially when we compare our method to other test generation techniques like fuzz testing.

Rockwell Collins -

R. Bartholomew, An Industry Proof-of-Concept Demonstration of Automated Combinatorial Test, 25th Annual IEEE Software Technology Conf., April 8-10, 2013, Salt Lake City, Utah.

“Industry proof-of-concept demonstration that used this approach to automate parts of the unit and integration testing of a 196 KSLOC avionics system. The goal was to see if it might cost-effectively reduce rework by reducing the number of software defects escaping into system test – if it was adequately accurate, rigorous, thorough, scalable, mature, easy to learn, easy to use, etc. Overcoming scalability issues required moderate effort, but in general it was effective – e.g., generating 47,040 test cases (input vectors, expected outputs) in 75 seconds, executing and analyzing them in 2.6 hours. It subsequently detected all seeded defects, and achieved nearly 100% structural coverage.”

Dominka S, Mandl M, Dübner M, Ertl D. Using combinatorial testing for distributed automotive features: Applying combinatorial testing for automated feature-interaction-testing. 2018 IEEE 8th Ann. Computing and Communication Wkshp and Conf  (CCWC) Jan 8 (pp. 490-495)

Abstract—Modern passenger cars have a comprehensive embedded distributed system with a huge number of bus devices interlinked in several communication networks. The number of (distributed) features and hence the risk of undesired feature interaction within this distributed system rises significantly. Such distributed automotive features pose a huge challenge in terms of efficient testing. Bringing together Combinatorial Testing with Automated Feature-Interaction Testing reduces the testing effort for such features significantly.

Avaya Corp. - 

Telecommunications

Sherif, Anwar. "Combinatorial testing: Implementations in solutions testing." 2016 IEEE Ninth International Conference on Software Testing, Verification and Validation Workshops (ICSTW). IEEE, 2016.

Ericsson S, Enoiu E. Combinatorial Modeling and Test Case Generation for Industrial Control Software using ACTS. 2018 IEEE Intl Conf Software Quality, Reliability and Security (QRS) 2018 Jul 16 (pp. 414-425)

"Our results show that not all combinations of algorithms and interaction strengths could generate a test suite within a realistic cut-off time. The results of the modeling process and the efficiency evaluation of ACTS are useful for practitioners considering to use combinatorial testing for industrial control software as well as for researchers trying to improve the use of such combinatorial testing techniques."

Mercedes-Benz -

Züfle, Siegmar, and Venkataraman Krishnamoorthy. "A process for nonfunctional combinatorial testing: Selection of parameter values from a nondiscrete domain space." 2015 IEEE Eighth International Conference on Software Testing, Verification and Validation Workshops  

Ahmed BS, Pahim A, Junior CR, Kuhn DR, Bures M. Towards an Automated Unified Framework to Run Applications for Combinatorial Interaction Testing. arXiv preprint arXiv:1903.05387. 2019 Mar 13.

Sulake - video games/social network

Puoskari, E., Vos, T.E., Condori-Fernandez, N. and Kruse, P.M., 2013. Evaluating applicability of combinatorial testing in an industrial environment: A case study. 2013 Intl Wkshp Joining Academia and Industry Contributions to Testing Automation, ACM.

Chunduri, Annapurna, Robert Feldt, and Mikael Adenmark. " An effective verification strategy for testing distributed automotive embedded software functions: A case study ." Intl Conf Product-Focused Software Process Improvement , pp. 233-248. Springer, 2016.

The proposed verification strategy to test distributed automotive embedded software functions has given promising results by providing means to identify test gaps and test redundancies. It helps establish an effective and feasible approach to capture function test coverage information that helps enhance the effectiveness of integration testing of the distributed software functions.

Vilkomir, S.A., Swain, W.T., Poore, J.H. and Clarno, K.T., 2008, June. Modeling input space for testing scientific computational software: a case study. Intl Conf Computational Science  Springer, Berlin

Abstract. An application of a method of test case generation for scientific computational software is presented. NEWTRNX, neutron transport software being developed at Oak Ridge National Laboratory, is treated as a case study. A model of dependencies between input parameters of NEWTRNX is created. Results of NEWTRNX model analysis and test case generation are evaluated.

Lockheed Martin -

Failure analysis

Cunningham, A. M., Hagar, J., & Holman, R. J.  A System Analysis Study Comparing Reverse Engineered Combinatorial Testing to Expert Judgment. In Software Testing, Verification and Validation (ICST), 2012 IEEE Fifth Intl Conference on (pp. 630-635) IEEE.

Lockheed Martin F-16 ventral fin redesign “The historic analysis was able to determine a set of combinations, which isolated the problem and tested a solution. However, the original effort was expensive, time consuming, and required highly specialized knowledge from the expert to be effective. In the study, a series of iterations created combinatorial test cases which could have 'replicated' the original highly optimized and successful test program, without the expert.”

Tsumura, K., Washizaki, H., Fukazawa, Y., Oshima, K. and Mibe, R., 2016, April. Pairwise coverage-based testing with selected elements in a query for database applications. In  2016 IEEE Ninth International Conference on Software Testing, Verification and Validation Workshops (ICSTW)  (pp. 92-101). IEEE.

Develops a method of applying combinatorial testing for use with SQL database query programs. Results showed that the pairwise tests detected “many bugs which are not detected by existing test methods based on predicates in the query”. 

Wu, H., Petke, J., Jia, Y., & Harman, M. (2018). An empirical comparison of combinatorial testing, random testing and adaptive random testing. IEEE Transactions on Software Engineering.

"Our study was conducted on nine real- world programs under a total of 1683 test scenarios (combinations of available parameter and constraint information and failure rate). The results show significant differences in the techniques’ fault detection ability when faults are hard to detect (failure rates are relatively low). CT performs best overall; no worse than any other in 98 percent of scenarios studied."

Kuhn DR, Higdon JM, Lawrence JF, Kacker RN, Lei Y. Combinatorial methods for event sequence testing. In 2012 IEEE Fifth International Conference on Software Testing, Verification and Validation, 2012 Apr 17 (pp. 601-609). IEEE.

"The methods described in this paper were motivated by testing needs for systems that may accept multiple communication or sensor connections and generate output to several communication links and other interfaces, where it is important to test the order in which connections occur. Although pairwise event order testing (both A followed by B and B followed by A) has been described, our algorithm ensures that any t events will be tested in every possible t-way order."
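The 2-way (pairwise) instance of this event-order criterion is easy to check mechanically: every ordered pair of distinct events must occur, in each order, in some test sequence. A small sketch with hypothetical event names; for t = 2, a permutation plus its reverse is known to suffice, which the example exploits:

```python
from itertools import permutations

def covers_all_pair_orders(sequences, events):
    """True iff every ordered pair of distinct events (A, B) occurs with
    A before B in at least one test sequence (2-way sequence coverage)."""
    needed = set(permutations(events, 2))
    seen = set()
    for seq in sequences:
        pos = {e: i for i, e in enumerate(seq)}  # position of each event
        seen |= {(a, b) for (a, b) in needed if pos[a] < pos[b]}
    return seen == needed

# Hypothetical events; a sequence plus its reverse covers all pair orders.
events = ["connect", "authenticate", "send", "close"]
suite = [list(events), list(reversed(events))]
```

A single sequence can only exercise half of the ordered pairs, so at least two sequences are always required; higher strengths (any t events in every t-way order) need more, which is what the sequence covering arrays in the paper construct.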

Fögen, K. and Lichter, H., 2018. A Case Study on Robustness Fault Characteristics for Combinatorial Testing - Results and Challenges. QuASoQ 2018, p. 18.

Abstract.  Combinatorial strategies are extended to generate invalid test inputs but the effectiveness of negative test scenarios is yet unclear. Therefore, we conduct a case study and analyze 434 failures reported as bugs of a financial enterprise application. As a result, 51 robustness failures are identified including failures triggered by invalid value combinations and failures triggered by interactions of valid and invalid values.

Bergström, Henning, and Eduard Paul Enoiu. "Using timed base-choice coverage criterion for testing industrial control software."  2017 IEEE Intl Conf Software Testing, Verification and Validation Workshops (ICSTW) . IEEE, 2017.

Applies combinatorial test methods to industrial control software.  “We found that tests generated for timed base-choice criterion show better code coverage (7% improvement) and fault detection (27% improvement) in terms of mutation score than tests satisfying base-choice coverage criterion. The results demonstrate the feasibility of applying timed base-choice criterion for testing industrial control software.”

Sánchez, A.B., Segura, S., Parejo, J.A. and Ruiz-Cortés, A., 2017. Variability testing in the wild: the Drupal case study. Software & Systems Modeling, 16(1).

"Among other results, we identified 3392 faults in single features and 160 faults triggered by the interaction of up to four features in Drupal v7.23. We also found positive correlations relating the number of bugs in Drupal features to their size, cyclomatic complexity, number of changes and fault history. To show the feasibility of our work, we evaluated the effectiveness of non-functional data for test case prioritization in Drupal. Results show that non-functional attributes are effective at accelerating the detection of faults, outperforming related prioritization criteria as test case similarity."

Srikanth, H. and Cohen, M.B., 2011, September. Regression testing in software as a service: An industrial case study. In  2011 27th IEEE Intl Conf Software Maintenance (ICSM)  IEEE.

Web administration

N. Condori-Fernandez, T. Vos, P.M. Kruse, E. Brosse, A. Bagnato. Analyzing the Applicability of a Combinatorial Testing Tool in an Industrial Environment, Tech. Rpt. UU-CS-2014-008, May 2014, Univ. of Utrecht.

"The main outcomes of the presented study are: (1) with the test suite designed with the CTE, the testers were able to find faults that the traditional test suites did not find, one of them a severe fault; (2) the company realized that the current coverage metrics used for evaluating the quality of test suites needs to be changed to a more sophisticated one; (3) SOFTEAM’s motivation to do more case studies with the CTE is high"

Raunak MS, Kuhn DR, Kacker R. Combinatorial testing of full text search in Web applications. 2017 IEEE Intl Conf on Software Quality, Reliability and Security Companion (QRS-C)

Testing full-text search in a database web application. We developed test-case selection techniques in which test strings are synthesized using characters or string fragments that may lead to system failure, and demonstrated discovery of a number of "corner cases" that had not been identified previously. We also present simple heuristics for isolating the fault-causing factors that can lead to such system failures. The test method and input model described in this paper have immediate application to other systems that provide complex full-text search.

Manchester S, Bryce R, Sampath S, Samant N, Kuhn DR, Kacker R. Applying higher strength combinatorial criteria to test case prioritization: a case study.

Using the CPUT tool, we "conduct an empirical study where we compare 2-way and 3-way combinatorial coverage of inter-window parameter interactions in terms of the rate of fault detection for a web application called Schoolmate and a user-session-based test suite. Our results show that the rate of fault detection for 2-way and 3-way prioritization are within 1% of each other, but 2-way provides a slightly better result."

Web applications

Maughan, C. Test Case Generation Using Combinatorial Based Coverage for Rich Web Applications . Logan, UT: Utah State Univ (2012).

Compared exhaustive testing (with discretized values) to CT; 2-way tests found all but one of the faults found by exhaustive testing, using less than 13% of the tests required for exhaustive coverage.

Zhang, Z., Liu, X., & Zhang, J. (2012, April). Combinatorial Testing on ID3v2 Tags of MP3 Files. In Software Testing, Verification and Validation (ICST), 2012 IEEE Fifth International Conf

Most faults were detected by 1-way and 2-way tests, with one caused by a 4-way interaction.

iGate Corp. -

Product engineering; banking & financial services; insurance

M. Mehta, R. Philip, Applications of Combinatorial Testing methods for Breakthrough Results in Software Testing, 2nd Intl. Wkshp on Combinatorial Testing , Luxembourg, IEEE, Mar. 2013.

"Combinatorial Testing (CT) approach has greatly helped our projects from different domains to optimize testing effort without compromising on testing quality. We were able to achieve breakthrough business results. CT based freeware tools such as All Pairs & ACTS are of great help for testing professionals to optimize effort and reduce learning curve."

L. Shikh Gholamhossein Ghandehari, M. N. Bourazjany, Yu Lei, R.N. Kacker and D.R. Kuhn, " Applying Combinatorial Testing to the Siemens Suite ", 2nd Intl. Wkshp Combinatorial Testing, Luxembourg, IEEE, Mar. 2013.

"Modeled the seven programs in the Siemens suite and applied combinatorial testing to these programs. The results show that combinatorial testing can detect most faulty versions of the Siemens programs, and is more effective than random testing."

Borazjany MN, Ghandehari LS, Lei Y, Kacker R, Kuhn R. An input space modeling methodology for combinatorial testing. 2013 IEEE Sixth Intl Conf on Software Testing, Verification and Validation Wkshp

This paper describes a method for modeling input structures and parameters, and compares combinatorial with random testing using the same input models.  It is shown that combinatorial testing provides better structural coverage and detects more errors than random. 

Covering array tool

Borazjany, M. N., Yu, L., Lei, Y., Kacker, R., & Kuhn, R. (2012, April). Combinatorial Testing of ACTS: A Case Study. Software Testing, Verification and Validation (ICST), 2012 IEEE Fifth Intl Conf (pp. 591-600)

Applied 2-way and 3-way tests; approx. 80% module and branch coverage, 88% statement coverage; 15 faults found; 2-way tests found as many faults as 3-way.

Web browser DOM modules

C. Montanez, D.R. Kuhn, M. Brady, R. Rivello, J. Reyes, M.K. Powers, Evaluation of Fault Detection Effectiveness for Combinatorial and Exhaustive Selection of Discretized Test Inputs, Software Quality Professional, June 2012.

Compared exhaustive testing (discretized values) with CT at strengths 2-way through 6-way. 4-way tests found all faults using less than 5% of the tests required for exhaustive testing, a 20X reduction in test set size.

Rick Kuhn [email protected] Address: https://www.nist.gov/people/d-richard-kuhn

Raghu Kacker [email protected] 301-975-2109 Address: http://math.nist.gov/~RKacker/

M S Raunak [email protected]


COMMENTS

  1. Test Case Design Techniques in Software Testing

    Test case design techniques provide a more structured approach to testing, resulting in comprehensive test coverage and higher-quality software. Using these techniques, testers can identify complex scenarios, generate effective test data, and design test cases that evaluate full system functionality. It helps reduce the risk of missed defects ...

  2. Software Testing Techniques with Test Case Design Examples

    The concept behind this Test Case Design Technique is that test case of a representative value of each class is equal to a test of any other value of the same class. It allows you to Identify valid as well as invalid equivalence classes. Example: Input conditions are valid between. 1 to 10 and 20 to 30.

  3. Test Case Design Techniques for Smart Software Testing

    Structure-Based testing, or White-Box testing, is a test case design technique for testing that focuses on testing the internal architecture, components, or the actual code of the software system. It is further classified into five categories: Statement Coverage. Decision Coverage. Condition Coverage.

  4. How to Write Test Cases: A Step-by-Step QA Guide

    Below, we've outlined 10 steps you can take whether you're writing new test cases or revisiting and evaluating existing test cases. Define the area you want to cover from the test scenario. Ensure the test case is easy for testers to understand and execute. Understand and apply relevant test designs.

  5. How to write Test Cases

    Parameters of a Test Case: Module Name: Subject or title that defines the functionality of the test. Test Case Id: A unique identifier assigned to every single condition in a test case. Tester Name: The name of the person who would be carrying out the test. Test scenario: The test scenario provides a brief description to the tester, as in providing a small overview to know about what needs to ...

  6. Test Case Design Techniques Explained With Examples

    Using Tools For Test Case Design. In software development, automated tools play a role in test automation and case design. These tools streamline the process by offering features like traceability, collaboration, and ease of maintenance. Some popular tools in this domain include JIRA, TestRail, and QTest.

  7. Case Study: Example of Software Quality Assurance Testing

    Software Testing Client Project Case Study. We are often asked what software testing is. The video below shares a solid definition of the term. But we thought a software testing project case study might be helpful to better understand what software testers do on a typical day. This includes testing software, writing requirement documents for ...

  8. Software Testing Techniques: Explained with Examples

    White box testing is used to analyze systems, especially when running unit, integration, and system tests. Example: Test cases are written to ensure that every statement in a software's codebase is tested at least once. This is called Statement Coverage and is a white box testing technique.

  9. Test Case Design: Techniques for Effective Software Testing

    Tips for Creating Effective Test Cases: Clear and Concise Documentation: Begin by documenting the purpose and objective of each test case clearly. This helps testers understand the expected ...

  10. Guide To 5 Test Case Design Techniques With Examples

    Boundary Value Analysis test case design example. 2. Equivalence Class Partitioning. Equivalent Class Partitioning (or Equivalent Partitioning) is a test case design method that divides the input domain data into various equivalence data classes, assuming that data in each group behaves the same.
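    A minimal sketch of equivalence partitioning, assuming a hypothetical system that accepts ages 18 to 60: one representative value is chosen from each class and is taken to stand for every value in that class.

```python
def is_eligible(age):
    # Hypothetical system under test: ages 18 to 60 inclusive are valid input.
    return 18 <= age <= 60

# One representative per equivalence class, instead of testing every age:
representatives = {
    "invalid: below 18": 10,
    "valid: 18 to 60": 35,
    "invalid: above 60": 70,
}
outcomes = {name: is_eligible(value) for name, value in representatives.items()}
for name, accepted in outcomes.items():
    print(f"{name} -> accepted={accepted}")
```

    Three test cases cover the whole input domain under the partitioning assumption; boundary value analysis would then add tests at 17, 18, 60, and 61.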

  11. PDF A case study on Software Testing Methods and Tools

    ... and 8 projects. This case study focuses on software testing methods and practices, activities performed with software testing tools, and software testing standards. Based on the outcomes of the case study, the contemporary practices of software testing in the automotive domain are presented, along with some recommendations regarding best practices.

  12. 6 Real Examples and Case Studies of A/B Testing

    One of the best ways to learn and improve is to look at successful A/B testing examples: 1. Bannersnack: landing page. Bannersnack, a company offering online ad design tools, knew they wanted to improve the user experience and increase conversions (in this case, sign-ups) on their landing page.

  13. A case study on Software Testing Methods and Tools

    This study applies an experimentation methodology to compare three state-of-the-practice software testing techniques: a) code reading by stepwise abstraction, b) functional testing using ...

  14. PDF Evaluating Software Testing Techniques and Tools

    Keywords: Case study, Software Testing Techniques, Methodological Framework, Evaluation. There exists a real need in industry to have guidelines on what testing techniques to use for different testing objectives, and how usable these techniques are. In order to obtain such guidelines, more case studies need to be performed. However,

  15. Software Testing Techniques with Test Case Design Examples

    Create a table for each function and list all possible combinations of inputs and outputs. This aids in the detection of a condition that the tester may have missed. The steps for making a decision table are as follows: make a list of the inputs in columns, then fill in the columns with all of the rules.
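    The steps above can be sketched in code. The login conditions and actions here are hypothetical examples; `itertools.product` enumerates every rule, i.e. every combination of condition values:

```python
from itertools import product

# Conditions form the rows of the decision table; each combination
# of truth values is one rule (one column) in the table.
conditions = ["valid_username", "valid_password"]

rules = []
for rule_no, values in enumerate(product([True, False], repeat=len(conditions)), start=1):
    # Hypothetical expected action: grant access only when every condition holds.
    action = "grant access" if all(values) else "show error"
    rules.append((f"R{rule_no}", dict(zip(conditions, values)), action))

for name, inputs, action in rules:
    print(name, inputs, "->", action)
```

    With two Boolean conditions the table has 2^2 = 4 rules, and enumerating them mechanically is what prevents a combination from being missed.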

  16. Software Testing and QA Case Studies

    Read software testing case studies and find our team's success in various fields like automation testing, mobile testing, security testing, etc. ... implemented an integrated automated testing framework and test optimization techniques, and helped in setting up a CI/CD pipeline. Streamlined IT processes and artifacts, optimized resource skills ...

  17. Test Case Design

    The course is a mix of case-driven, instructor-led, and self-paced learning, designed to enable participants to learn, experiment with, and implement the concepts involved in test case design methods and techniques. The participants will be presented with ample examples, exercises, and case studies to understand and apply the concepts taught.

  18. Software Testing Techniques

    4. Use case-based Testing: This technique helps us identify test cases that exercise the system as a whole, like an actual user (Actor), transaction by transaction. Use cases are a sequence of steps that describe the interaction between the Actor and the system. They are always defined in the language of the Actor, not the system.
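    A use case-based test can be sketched as an ordered list of actor steps driven against the system. `FakeShop` below is a hypothetical in-memory stand-in for a real application, used only to make the idea concrete:

```python
class FakeShop:
    # Hypothetical system under test, kept in memory for illustration.
    def __init__(self):
        self.cart = []
        self.order_placed = False

    def add_to_cart(self, item):
        self.cart.append(item)

    def checkout(self):
        if self.cart:
            self.order_placed = True

shop = FakeShop()

# The use case is written in the Actor's language: each step pairs
# a description with the action the Actor performs.
use_case = [
    ("Actor adds a book to the cart", lambda: shop.add_to_cart("book")),
    ("Actor proceeds to checkout", lambda: shop.checkout()),
]
for description, action in use_case:
    action()
    print("executed:", description)

print("order placed:", shop.order_placed)
```

    Executing the steps in order, transaction by transaction, is what distinguishes this from testing each operation in isolation.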

  19. Software Testing Case Studies

    A Pharmacy Services Company Turns to TESTCo for Last-Minute Testing That Saves Their New Web Application. An ISV Boosts Revenue and Reduces Costs by Implementing TESTCo Software Test Automation Solutions. A Real Estate Company Learns the Hard Way the Value of Outsourced Software Testing for a Critical Project.

  20. Case Study Method: A Step-by-Step Guide for Business Researchers

    Although case studies have been discussed extensively in the literature, little has been written about the specific steps one may use to conduct case study research effectively (Gagnon, 2010; Hancock & Algozzine, 2016). Baskarada (2014) also emphasized the need for a succinct guideline that can be practically followed, as it is actually tough to execute a case study well in practice.

  21. Combinatorial Methods for Trust and Assurance

    Combinatorial testing is being applied successfully in nearly every industry, and is especially valuable for assurance of high-risk software with safety or security concerns. Combinatorial testing is referred to as effectively exhaustive, or pseudo-exhaustive, because it can be as effective as fully exhaustive testing, while reducing test set size by 20X to more than 100X. Case studies below ...
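    The test-reduction idea behind combinatorial testing can be sketched with a small greedy pairwise generator. The three configuration parameters below are hypothetical, and real tools (e.g. NIST's ACTS) use far more sophisticated algorithms; this is only a sketch of the principle:

```python
from itertools import combinations, product

# Hypothetical configuration model: exhaustive testing needs 3 * 2 * 2 = 12 tests.
params = {
    "os": ["windows", "macos", "linux"],
    "browser": ["chrome", "firefox"],
    "locale": ["en", "de"],
}
names = list(params)

# Every 2-way interaction a pairwise suite must cover at least once.
uncovered = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in params[a]
    for vb in params[b]
}

def pairs_of(test):
    # All 2-way interactions exercised by one concrete test.
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

candidates = [dict(zip(names, vals)) for vals in product(*params.values())]
suite = []
while uncovered:
    # Greedy: pick the candidate covering the most still-uncovered pairs.
    best = max(candidates, key=lambda t: len(pairs_of(t) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"{len(suite)} pairwise tests instead of {len(candidates)} exhaustive")
```

    Every pair of parameter values still appears in at least one test, which is why the technique is described as effectively exhaustive for interaction faults involving two parameters.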

  22. A Case Study Using Testing Technique for Software as a Service (SaaS

    In the second semester of 2013, during the development of a Customer Relationship Management (CRM) system at a Brazilian global company, there was a need to execute software testing. However, it was done using techniques such as Script-based Testing, Integration Testing, and User Acceptance Testing. These techniques were inadequate for the cloud environment. Thus, the pairwise technique ...

  23. Test Case

    System study: in this step, we understand the application by looking at the requirements or the SRS, which is given by the customer. ... and write test cases by applying test case design techniques, using the standard test case template, i.e., the one decided for the project. Review the test cases. Review the test case by ...