A test strategy is an outline that describes the testing approach for Agile software development. Its purpose is to provide a rational deduction from high-level organizational objectives to the actual test activities that meet those objectives from a quality assurance perspective. The strategy should be created and documented systematically so that all objectives are fully covered and understood by all stakeholders, and it should be reviewed, challenged, and updated regularly as the organization and the product evolve. A test strategy should also align the stakeholders of quality assurance on terminology, test and integration levels, roles and responsibilities, traceability, resource planning, and so on.
Test strategies describe how the stakeholders' product risks are mitigated at each test level, which types of testing are to be performed, and which entry and exit criteria apply.
Test Strategy Outline
- Test levels
- Roles and responsibilities
- Environment requirements
- Testing tools
- Risks and mitigation
- Test schedule
- Regression test approach
- Test groups
- Test priorities
- Test status collection and reporting
- Test records maintenance
- Feature traceability matrix
- Test summary
Test levels
The test strategy describes the test levels to be performed. There are primarily three levels of testing: Unit and Functional Story-Level Testing, Integrated Feature Testing, and System Regression Testing.
Roles and responsibilities
This section defines the roles and responsibilities of the Dev Team in terms of Agile testing.
Environment requirements
Environment requirements are an important part of the test strategy. This section describes which operating systems are used for testing.
Testing tools
There are two methods of executing test cases: manual and automated. Depending on the nature of the testing, a combination of manual and automated testing is usually the best approach. Automated testing is the goal for Regression and Feature-level testing.
Risks and mitigation
Any risk that may affect the testing process must be listed along with its mitigation. By documenting a risk, its occurrence can be anticipated well ahead of time, and proactive action can be taken to prevent it from occurring or to limit its damage.
Test schedule
A test plan should estimate how long it will take to complete testing within the Development and Hardening Sprints. There are many Stories to complete based on the Definition of Done. First, the Dev Team has to execute all test cases at least once. If a defect is found, the Dev Team fixes the problem and re-tests the failed test case until it passes. Finally, the Dev Team conducts regression testing during the Release cycle to make sure that fixing one part of the software did not accidentally break another part, including test cases that were previously passing.
The test schedule should also document the number of Development and Hardening Sprints available for testing.
Regression test approach
When a particular problem is identified, the program is debugged and a fix is applied. To confirm that the fix works, the program is tested again against that acceptance criterion. Regression tests make sure that one fix does not create other problems in the program or in any other interface, so a set of related test cases may have to be repeated to verify that nothing else is affected by a particular fix. How this is to be carried out must be elaborated in this section.
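The mechanism can be sketched in plain Java. This is a minimal, hedged illustration: the `applyDiscount` function, its discount rule, and the defect described in the comments are hypothetical, not part of the project.

```java
// Hypothetical regression sketch: after fixing the discount rule for
// orders of exactly 100, re-run ALL related checks, not only the one
// that originally failed.
public class RegressionSketch {

    // Fixed code under test: 10% discount for orders of 100 or more.
    // (The assumed defect was using "> 100" instead of ">= 100".)
    static double applyDiscount(double orderTotal) {
        return orderTotal >= 100 ? orderTotal * 0.90 : orderTotal;
    }

    static void check(double actual, double expected) {
        if (Math.abs(actual - expected) > 1e-9) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }

    public static void main(String[] args) {
        // Re-test the previously failing case...
        check(applyDiscount(100.0), 90.0);
        // ...and repeat the related cases to verify the fix did not
        // break behaviour that was previously passing.
        check(applyDiscount(99.99), 99.99);
        check(applyDiscount(150.0), 135.0);
        check(applyDiscount(0.0), 0.0);
        System.out.println("regression suite passed");
    }
}
```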
Test groups
From the list of Features, we can identify related areas whose functionality or business processes are similar. These areas form the test groups.
Test priorities
Among test cases, we need to establish priorities based on the backlog prioritization and risk.
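One way to operationalize this is to give each test case a score derived from the backlog rank of its Story and an assessed risk level. The scoring formula, field names, and sample values below are illustrative assumptions, not a prescribed method.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: rank test cases by a score combining backlog
// rank (1 = top of the backlog) with an assessed risk level (1-5).
public class TestPrioritySketch {

    static class TestCase {
        final String name;
        final int backlogRank;  // 1 = highest backlog priority
        final int risk;         // 1 = low risk, 5 = high risk

        TestCase(String name, int backlogRank, int risk) {
            this.name = name;
            this.backlogRank = backlogRank;
            this.risk = risk;
        }

        // Higher risk and a higher backlog position both raise the score.
        double score() {
            return risk / (double) backlogRank;
        }
    }

    static List<TestCase> prioritize(List<TestCase> cases) {
        List<TestCase> sorted = new ArrayList<>(cases);
        sorted.sort(Comparator.comparingDouble(TestCase::score).reversed());
        return sorted;
    }

    public static void main(String[] args) {
        List<TestCase> ordered = prioritize(Arrays.asList(
                new TestCase("checkout-payment", 1, 5),
                new TestCase("profile-avatar", 4, 2),
                new TestCase("login", 1, 4)));
        for (TestCase t : ordered) {
            System.out.println(t.name + " score=" + t.score());
        }
    }
}
```

Any weighting could be substituted; the point is that the ordering is derived from backlog prioritization and risk rather than decided ad hoc.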
Test status collection and reporting
When test cases are executed, their results must be collected and reported: which test cases were executed, how long execution took, how many passed, how many failed, and how many were not executable.
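These counts can be collected mechanically. As a minimal sketch (the status values and sample data are assumptions for illustration):

```java
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of status collection: tally executed test cases
// by outcome and report the counts the strategy calls for.
public class TestStatusReport {

    enum Status { PASSED, FAILED, NOT_EXECUTABLE }

    static Map<Status, Long> summarize(List<Status> results) {
        Map<Status, Long> counts = new EnumMap<>(Status.class);
        for (Status s : Status.values()) counts.put(s, 0L);          // start all at zero
        for (Status s : results) counts.merge(s, 1L, Long::sum);     // count each outcome
        return counts;
    }

    public static void main(String[] args) {
        List<Status> run = List.of(Status.PASSED, Status.PASSED,
                Status.FAILED, Status.NOT_EXECUTABLE, Status.PASSED);
        Map<Status, Long> counts = summarize(run);
        System.out.println("executed=" + run.size()
                + " passed=" + counts.get(Status.PASSED)
                + " failed=" + counts.get(Status.FAILED)
                + " notExecutable=" + counts.get(Status.NOT_EXECUTABLE));
    }
}
```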
Test records maintenance
When the test cases are executed, it is important to keep track of the execution details, such as when each test was executed, who executed it, how long it took, and what the result was. This data must be available to all team members and supporting stakeholders in a central location.
Feature traceability matrix
Ideally, the software must completely satisfy the set of Features. In a Feature traceability matrix, the rows will have the Features. The columns represent Stories and Tests associated with the Feature.
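A traceability matrix can also be maintained programmatically. The sketch below, with hypothetical Feature, Story, and Test identifiers, maps each Feature (row) to its covering Stories and Tests (columns) and flags Features with no coverage:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a Feature traceability matrix: each Feature
// maps to the Stories and Tests that cover it, so coverage gaps can be
// detected automatically.
public class TraceabilityMatrix {

    static Map<String, List<String>> matrix = new LinkedHashMap<>();

    static void trace(String feature, String... storiesAndTests) {
        matrix.computeIfAbsent(feature, f -> new ArrayList<>())
              .addAll(Arrays.asList(storiesAndTests));
    }

    // Features with no associated Story or Test are coverage gaps.
    static List<String> uncovered() {
        List<String> gaps = new ArrayList<>();
        matrix.forEach((f, cov) -> { if (cov.isEmpty()) gaps.add(f); });
        return gaps;
    }

    public static void main(String[] args) {
        trace("User login", "STORY-101", "TEST-501", "TEST-502");
        trace("Password reset", "STORY-102", "TEST-503");
        trace("Audit logging"); // no coverage yet
        System.out.println("uncovered features: " + uncovered());
    }
}
```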
Test summary
Leadership may want a test summary on a weekly or monthly basis; if the Agile project is very critical, they may need it even on a daily basis. This section must state what kind of test summary reports will be produced for Leadership, and how frequently.
Test Strategy Plan Example
1.1 Levels & Responsibilities
| Test Level | Scrum Team | Systems Team |
| --- | --- | --- |
| In-Sprint Testing of user stories | P | |
| Regression Testing | P | S |
| Automated Regression Testing | S | P |
| Integration Testing | S | P |
| End to End Feature Testing | S | P |
| System Testing | S | P |
*primary (P) and secondary (S) responsibility
1.2 Testing Strategy
Our strategy is to perform the types of testing listed in the table below:
| Test Type | Objectives | Procedure |
| --- | --- | --- |
| In-Sprint Manual Testing | To verify: | |
| Regression Testing | To test the system for regression issues | |
| Automated Regression Testing | To automate the repeated regression tests | |
| Integration Testing | To verify that all features from the different Scrum teams are integrated properly without breaking existing functionality | |
| End to End Feature Testing | To verify: Functional testing will be performed in an iterative and controlled manner, ensuring the solution matches the defined requirements | |
The defects found during each of the levels discussed above are logged in Jira; the Jira log serves as the test report to the developers.
1.3 Build & Deployment Strategy
Application Code
- Unit tests are written in JUnit, and the code repository is configured for CI/CD so that nightly builds are automatically deployed to the QA environment once the automated unit tests pass.
- Scrum teams notify the CI/CD team on completion of their assigned tasks; the CI/CD team integrates code from all the Scrum teams.
- Once the code is deployed to the QA environment, the corresponding team members are notified to start testing.
- Builds are scheduled for the agreed QA release day of each Sprint.
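A pipeline along these lines might be expressed in a declarative Jenkinsfile roughly as follows. The stage names, Maven commands, nightly schedule, and deploy script are illustrative assumptions, not the project's actual configuration:

```groovy
// Illustrative Jenkinsfile sketch: nightly build that runs the JUnit
// unit tests and deploys to the QA environment only if they pass.
pipeline {
    agent any
    triggers { cron('H 2 * * *') }                    // nightly build
    stages {
        stage('Build') {
            steps { sh 'mvn -B clean package -DskipTests' }
        }
        stage('Unit Tests') {
            steps { sh 'mvn -B test' }                // failure stops the pipeline
        }
        stage('Deploy to QA') {
            steps { sh './deploy.sh qa' }             // hypothetical deploy script
        }
    }
}
```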
Automation Scripts
- Automation Scripts are also configured for CI/CD with Jenkins.
- Scripts are deployed after the Stories for the particular Sprint have been developed.
1.4 Testing Requirements
Each person involved in testing will need the following access:
- Access to Application under test and the related database
- Access to Jira
Data Requirements
- Testers of the Scrum team create test data before the start of in-Sprint testing.
- Test data should be made available to the Automation Engineers.
Resources & Skills
- A resource with Manual Testing skills
- A resource with Automation Testing Skills
- A resource with Test Management Skills
1.5 Tools
The following tools will be used for testing:
| Process | Tool |
| --- | --- |
| Test Case Development | Microsoft Excel |
| Test Case Management | Jira |
| Test Report/Defect Log | Jira |
| Test Execution (Manual & Automation) | Selenium (Automation) |
| Automation Scripts Development | Eclipse IDE with Selenium WebDriver |
| Automation Test Report | TestNG |
| Defect Management | Jira |
| Application Lifecycle Management | Jira |
| CI/CD Tool | Jenkins |
| Source Control | GitHub/Bitbucket |
1.6 Testing Metrics
Below are the testing metrics to be captured. The Jira dashboard is to be configured with gadgets for all the defect metrics.
- Number of Defects found during each Sprint
- Defect Density
- Requirement Traceability
- Percentage of Automation Test Coverage vs. Manual Testing Coverage
- Percentage of Tests Passed vs. Failed
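Two of these metrics can be computed directly. The formulas below (defect density per user story, pass percentage over executed tests) are common conventions, assumed here for illustration; the sample numbers are hypothetical:

```java
// Hypothetical sketch of two of the listed metrics. Assumed formulas:
// defect density  = defects found / size (here, per user story)
// pass percentage = passed tests / executed tests * 100
public class TestingMetrics {

    static double defectDensity(int defectsFound, int userStories) {
        return defectsFound / (double) userStories;
    }

    static double passPercentage(int passed, int executed) {
        return 100.0 * passed / executed;
    }

    public static void main(String[] args) {
        System.out.printf("defect density = %.2f defects/story%n",
                defectDensity(12, 8));     // 12 defects over 8 stories
        System.out.printf("tests passed   = %.1f%%%n",
                passPercentage(45, 50));   // 45 of 50 executed tests passed
    }
}
```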