2.3 Test Planning
Test plans and test cases form the basis of modern quality assurance. Together they detail what is to be tested and what the expected outcomes are. These elements work together to ensure the reliability, functionality, and performance of software applications.
What is a test plan?
A test plan is a comprehensive document that outlines the testing strategy, objectives, scope, resources, and schedule for a software project. It serves as a roadmap that guides testing activities from start to finish. A well-constructed test plan not only provides clarity to the testing team but also aligns stakeholders' expectations with the testing process.
Key Components of a Test Plan
- Objectives and Scope: The test plan establishes the goals of testing, whether it's validating specific functionalities, ensuring compatibility, or assessing performance under stress. It defines the scope of testing, specifying what will be tested and what won't be tested.
- Testing Strategy: The document outlines the testing methodologies, types of testing (functional, non-functional, security, etc.), and the approach to be taken for each. It defines the testing environment, including hardware, software, and data.
- Resource Allocation: The test plan details the resources required for testing, including human resources, tools, equipment, and testing environments.
- Test Schedule: It provides a timeline for testing activities, including milestones, deadlines, and dependencies.
- Risks and Contingencies: The test plan identifies potential risks that could impact the testing process and outlines contingency plans to mitigate them.
- Test Cases: The test plan contains a set of test cases that will be run in order to fully test the system under test (SUT).
What is a test case?
Test cases are detailed scenarios that describe how the software should behave under specific conditions. They outline inputs, expected outcomes, and the sequence of steps required to execute the test. Test cases are designed to validate that the software functions as intended and meets its requirements.
Key Elements of Test Cases
- Test Case ID: A unique identifier for each test case.
- Test Objective: A brief description of the purpose of the test case.
- Test Preconditions: A description of the expected state of the environment prior to executing the test steps.
- Test Steps: The sequence of actions to be performed during the test, including inputs and interactions.
- Expected Results: The anticipated outcomes or responses from the software based on the inputs and actions.
- Pass/Fail Criteria: The criteria that determine whether the test case is successful or not.
Test Case Example
Test Case ID: TC001
Objective: Verify that the user registration form functions correctly by allowing a new user to successfully register.
Preconditions:
- The application is accessible and functional.
- The user is on the registration page.
- The user has a valid email address and password for registration.
Steps:
- Navigate to the user registration page.
- Enter a valid email address in the designated field.
- Enter a secure password in the password field.
- Confirm the password in the designated confirmation field.
- Provide additional required information (for example, name, date of birth).
- Click the Register button to submit the registration form.
Expected Results
- The system processes the registration request without errors.
- The user receives a confirmation message indicating successful registration.
- The user's account is created, and they are redirected to the login page.
Pass/Fail Criteria
- Pass: The registration process completes without any errors, and the user is redirected to the login page.
- Fail: Any error occurs during the registration process, or the user is not redirected to the login page.
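A manual test case like TC001 can also be automated. The sketch below shows one way to express it in Python; the `register_user` function and its return values are illustrative stand-ins for a real application's registration API, defined inline so the example is self-contained.

```python
# Hypothetical automated version of TC001. register_user and its return
# shape are illustrative stand-ins for a real application API.

def register_user(email, password, confirm_password, name):
    """Toy registration logic, defined here so the test can run standalone."""
    if "@" not in email:
        return {"ok": False, "error": "invalid email"}
    if password != confirm_password:
        return {"ok": False, "error": "passwords do not match"}
    if len(password) < 8:
        return {"ok": False, "error": "password too short"}
    return {"ok": True, "redirect": "/login"}

def test_tc001_successful_registration():
    # Steps: submit valid registration details (precondition: form reachable).
    result = register_user(
        email="new.user@example.com",
        password="S3curePass!",
        confirm_password="S3curePass!",
        name="New User",
    )
    # Expected results / pass criteria: no error, redirected to the login page.
    assert result["ok"] is True
    assert result["redirect"] == "/login"

test_tc001_successful_registration()
```

Note how each element of the written test case (preconditions, steps, expected results, pass criteria) maps onto a part of the automated test.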
Test Case Types
When creating test cases, there are many different types to consider. A good test plan incorporates many of these different kinds of tests.
Happy Path Cases
- Test scenarios that represent the “expected” path through the system are called the Happy Path.
- Determining that the ideal path through the system works is only the first step in your testing process.
Positive Test Cases
- Verify that the system behaves as expected when valid inputs are provided.
- Confirm that the software meets the specified requirements under normal conditions.
Negative Test Cases
- Test scenarios where invalid inputs are provided or unexpected actions are taken.
- Check how the system handles errors, exceptions, and boundary conditions.
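A negative test deliberately feeds the system invalid input and checks that it fails cleanly. The sketch below uses a toy `parse_age` function (an assumption, defined inline) to show the pattern: the test passes only if the expected error is raised.

```python
# Illustrative negative test: invalid input should raise a clear error.
# parse_age is a toy function defined here so the example is self-contained.

def parse_age(text):
    """Parse an age field, rejecting non-numeric and out-of-range values."""
    if not text.strip().isdigit():
        raise ValueError("age must be a whole number")
    age = int(text)
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

def test_rejects_non_numeric_age():
    try:
        parse_age("twelve")
    except ValueError as exc:
        # Pass criterion: the error message tells the user what went wrong.
        assert "whole number" in str(exc)
    else:
        raise AssertionError("expected ValueError for non-numeric input")

test_rejects_non_numeric_age()
```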
Boundary Value Test Cases
- Focus on testing values at the edge of acceptable ranges.
- Ensure that the system behaves correctly at the boundaries of input domains.
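For example, if a quantity field accepts values from 1 to 100 inclusive, boundary value testing checks both edges and the values just outside them. The `is_valid_quantity` function below is an illustrative stand-in for real validation logic.

```python
# Boundary value sketch for an input accepted in the range 1-100 inclusive.
# is_valid_quantity stands in for the system's real validation logic.

def is_valid_quantity(n):
    return 1 <= n <= 100

# Values at and just outside the boundaries of the valid range.
cases = [(0, False), (1, True), (2, True), (99, True), (100, True), (101, False)]

for value, expected in cases:
    assert is_valid_quantity(value) == expected, value
```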
Equivalence Partitioning Test Cases
- Divide input ranges into equivalent classes.
- Test representative values from each class to reduce redundancy in testing.
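As a sketch, suppose an order total falls into three equivalence classes: invalid negative totals, totals under 100 with no discount, and totals of 100 or more that earn a discount. One representative value per class is then enough. The `discount_rate` function and its tiers are illustrative assumptions.

```python
# Equivalence partitioning sketch: order totals fall into three classes,
# so one representative value per class is tested instead of every value.
# discount_rate and its tiers are illustrative.

def discount_rate(total):
    if total < 0:
        raise ValueError("total cannot be negative")
    return 0.10 if total >= 100 else 0.0

assert discount_rate(50) == 0.0      # class: 0 <= total < 100, no discount
assert discount_rate(250) == 0.10    # class: total >= 100, discounted
try:
    discount_rate(-5)                # class: invalid input
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for negative total")
```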
Integration Test Cases
- Verify the interactions between different components or modules.
- Ensure that integrated parts of the system work together as intended.
Performance Test Cases
- Assess the system's performance under various conditions (for example, load, stress, and scalability testing).
- Measure response times, throughput, and resource utilization.
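A minimal sketch of measuring response time is shown below: time an operation and assert it stays within a budget. The `lookup` function, table size, and 1-second budget are all illustrative; real performance testing would use a dedicated tool and many controlled runs.

```python
import time

# Minimal performance check: time an operation and assert it stays under a
# budget. lookup, the table size, and the budget are illustrative choices.

def lookup(table, key):
    return table.get(key)

table = {i: i * i for i in range(10_000)}

start = time.perf_counter()
for _ in range(1_000):
    lookup(table, 4242)
elapsed = time.perf_counter() - start

# Pass criterion: 1,000 lookups complete within a generous 1-second budget.
assert elapsed < 1.0
```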
Security Test Cases
- Evaluate the system's resistance to security threats.
- Test for vulnerabilities and potential exploits.
Usability Test Cases
- Evaluate the user interface and overall user experience.
- Ensure that the software is user-friendly and meets usability requirements.
Compatibility Test Cases
- Check the software's compatibility with different operating systems, browsers, and devices.
- Ensure consistent functionality across diverse environments.
Regression Test Cases
- Confirm that new changes or features haven't negatively impacted existing functionality.
- Re-run previously executed test cases to ensure ongoing system stability.
Exploratory Test Cases
- Unscripted testing where testers explore the application to discover defects.
- Mimics real-world usage scenarios and encourages creativity in testing.
User Acceptance Test (UAT) Cases
- Involve end users in testing to ensure that the software meets their expectations.
- Verify that the system aligns with business needs and goals.
Load and Stress Test Cases
- Test the system's ability to handle varying levels of load and stress.
- Identify performance bottlenecks and areas for improvement.
Data Validity Test Cases
- Ensure that the system handles valid and invalid data appropriately.
- Validate data input, processing, and storage.
Your test plan will include as many of these different types of test cases as required to fully test your application.
Characteristics of a Good Test Case
Writing a good test case can be challenging. Keep the following considerations in mind as you create your test cases.
Clarity and Simplicity
- The test case should be easy to understand without ambiguity.
- Use clear and concise language to describe the test steps and expected results.
Relevance to Requirements
- Test cases should directly relate to specified requirements or user stories and be written from the user’s perspective.
- Ensure alignment with the intended functionality of the software.
Independence
- Each test case should be independent of others to avoid dependencies.
- Independence allows for better tracking of defects and simplifies test case execution.
Measurable and Verifiable
- Include specific, measurable criteria for determining the success or failure of the test.
- Make the expected results clear and verifiable.
Coverage
- Test cases should cover all relevant scenarios, including positive, negative, and boundary cases.
- Ensure comprehensive coverage of the features being tested.
Maintainability
- Test cases should be easy to update and maintain.
- Regularly review and update test cases as the software evolves.
Feasibility
- Ensure that the test case is practical and feasible to execute within the constraints of time and resources.
- Avoid overly complex or time-consuming test cases.
Reusability
- Design test cases in a way that allows for reuse across different test cycles.
- Reusable test cases save time and effort in future testing phases.
Automation Readiness
- If applicable, design test cases with automation in mind.
- Follow best practices for creating test cases that can be easily automated.
Reliable
- Test cases should return the same results regardless of when or how they are run.
- Test cases should not depend on the time of day, the presence of existing data, or other external factors.
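One common way to make a time-dependent behavior reliable to test is to inject the clock rather than reading it inside the function. The `greeting` function and the fixed timestamps below are illustrative.

```python
import datetime

# Reliability sketch: a greeting that depends on the current hour is made
# testable by letting the caller supply the time, so the test passes at any
# time of day. greeting and the fixed timestamps are illustrative.

def greeting(now=None):
    now = now or datetime.datetime.now()
    return "Good morning" if now.hour < 12 else "Good afternoon"

# Deterministic: the test supplies the time instead of reading the real clock.
assert greeting(datetime.datetime(2024, 1, 1, 9, 0)) == "Good morning"
assert greeting(datetime.datetime(2024, 1, 1, 15, 0)) == "Good afternoon"
```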
Efficient
- The test case should require little effort to execute, or better yet, run automatically.
- Reduce maintenance costs by allowing test cases to run frequently and quickly.
Generating Test Cases from Acceptance Criteria
Acceptance criteria play an important role in defining the conditions that must be met for a user story or feature to be considered complete. When creating test cases, the acceptance criteria serve as the foundation for designing tests that validate whether the functionality meets the specified requirements.
Here's a step-by-step guide on how to use acceptance criteria to generate effective test cases:
Understand the Acceptance Criteria
- Thoroughly review the acceptance criteria associated with the user story or feature. Gain a clear understanding of the expected behavior, conditions, and constraints outlined in the acceptance criteria.
Identify Testable Scenarios
- Break down the acceptance criteria into individual testable scenarios. Each scenario should represent a specific condition or functionality outlined in the acceptance criteria.
Positive Test Cases
- Create positive test cases to validate that the system behaves as expected under normal conditions. These test cases should demonstrate that the feature meets the specified requirements when all conditions are met.
Negative Test Cases
- Develop negative test cases to ensure that the system appropriately handles unexpected or invalid inputs. Consider scenarios where users might deviate from the expected behavior outlined in the acceptance criteria.
Boundary Test Cases
- If the acceptance criteria include specific boundaries or limits, design test cases that evaluate the system's behavior at those boundaries. This helps identify any issues related to boundary conditions.
Data Validations
- Verify data-related acceptance criteria by creating test cases that cover various data scenarios. This includes testing with different data types, lengths, and formats as specified in the acceptance criteria.
Workflow Test Cases
- If the acceptance criteria involve specific workflows or sequences of actions, design test cases that replicate these workflows. Ensure that the system progresses correctly through each step.
Error Handling Test Cases
- Check the acceptance criteria for any specifications related to error handling. Develop test cases to validate that the system provides appropriate error messages and gracefully handles error conditions.
Integration Test Cases
- If the user story or feature interacts with other components or systems, design integration test cases to verify seamless integration and data flow as per the acceptance criteria.
By following this systematic approach, testers can leverage acceptance criteria to create a comprehensive suite of test cases that cover all aspects of the specified requirements. This process contributes to effective testing, ensuring that the software aligns with user expectations and functional specifications.
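The process above can be sketched for a single criterion. Suppose an acceptance criterion reads "a username must be 3 to 20 characters long"; it yields positive, negative, and boundary tests. The `is_valid_username` implementation is an illustrative assumption.

```python
# Sketch of turning one acceptance criterion into test cases. The criterion
# "a username must be 3-20 characters" yields positive, negative, and
# boundary tests; is_valid_username is an illustrative implementation.

def is_valid_username(name):
    return 3 <= len(name) <= 20

# Positive: a typical valid value.
assert is_valid_username("alice") is True
# Negative: a clearly invalid value.
assert is_valid_username("") is False
# Boundary: the edges of the allowed length and just outside them.
assert is_valid_username("ab") is False        # length 2
assert is_valid_username("abc") is True        # length 3
assert is_valid_username("a" * 20) is True     # length 20
assert is_valid_username("a" * 21) is False    # length 21
```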
Useful Links: ←Unit 2.2 | Unit 3.1→ | Table of Contents | Canvas