Test Management: CTFL Tutorial

Welcome to the fifth chapter of the CTFL tutorial (part of the Certified Tester Foundation Level (CTFL®) course).

This fifth lesson discusses test management, including the importance of independent testing, its advantages and disadvantages within an organization, and the different roles and responsibilities within a test team.

We will also examine test plans, estimates, monitoring and control, configuration management, and risks.

Let us look at the course map in the next section.

Objectives

After completing this lesson, you will be able to:

  • Explain the concept of Test Organization

  • Identify the factors to be considered for a Test Plan

  • Describe the process of Test Progress Monitoring

  • Explain Configuration Management

  • List the different aspects of Risk and Testing

  • Explain Incident Management

In the next section, we will begin with the first topic, ‘Test Organization.’

Test Organization

In the next few sections, we will discuss the advantages and disadvantages of independent testing, understand the concept of an independent test organization, and look at the different roles and their responsibilities in testing.

Let us find out the advantages and disadvantages of independent testing in the next section.

Advantages and Disadvantages of Independent Testing

Some of the main advantages and disadvantages of independent testing include:

Advantages

  • Test reports are more reliable and credible.

  • The testing turnaround time is shorter.

  • Independent test organizations have specialized facilities with the latest calibration, which increases the productivity of test cycles.

  • An independent Tester tests the application with a clear understanding of the requirements, without any assumptions.

  • Testers often acquire experience, credentials, certifications, and accreditations through this process.

  • It ensures the software works as intended even outside the development environment.

Disadvantages

  • Contract testing is costly.

  • Independent testing organizations face difficulties in understanding the product as they are not involved in the development process.

  • Independent organizations may have limited communication with the development team, and this communication gap can impact software quality.

  • The developers may lose their sense of responsibility towards product quality.

  • Independent Testers may be seen as a bottleneck or blamed for delays in release.

Let us now look at an independent test organization structure in the next section.

Independent Test Organization

An independent test organization can be a person or another organization that handles the testing activities for a product, material, or software on terms agreed with the producer or the owner.

An organization is independent if it is not affiliated with the producer or the user of the tested item.

An independent test team looks for problems that are difficult for the development team to find. The test report is generated with no favoritism and based on the quality of software.

The levels of testing have already been covered in lesson 2.

Roles in Testing

Let us now look at the different roles in testing. There are many roles in a testing organization, such as:

  • Primary Tester

  • Secondary Tester

  • Subject Matter Expert

  • Functional Test Analyst

  • Test lead

  • Test Manager

However, the standard roles in the test team are Test Lead and Tester.

The responsibilities of the Test Lead and Tester vary from organization to organization and depend on the nature of the test project.

Role of Test Leader

The role of the Test Leader is to lead a testing team effectively. A separate Test Lead is assigned if the project is complex and large. Otherwise, this role can also be performed by the Project, Development, Quality Assurance, or Test Group Manager.

Role of Tester

A Tester analyzes, designs, and executes manual or automated tests based on the risk of the project and product. The roles and responsibilities are decided and agreed upon before testing starts for the project.

Let us look at the responsibilities of a Test Lead in the following section.

Responsibilities of Test Lead

The main responsibility of a Test Lead is to lead the team efficiently for achieving the agreed quality of the project. In many organizations, Test Leads are also called Test Managers or Test Coordinators.

The Test Lead has tasks spread across different test phases. They are:

  • Test planning

  • Team management

  • Test infrastructure

  • Test execution

  • Risk management

  • Client management

These responsibilities are discussed individually.

In test planning, the Test Lead should:

  • Understand the testing effort by project requirements analysis

  • Estimate and obtain management support for the testing time, resources, and budget

  • Organize the testing kick-off meeting

  • Define the test strategy, and develop the test plan for the tasks

  • Monitor dependencies and identify areas to mitigate the risks to system quality

  • Obtain stakeholder support for the plan.

In team management, a Test Lead should:

  • Build a testing team of professionals with appropriate skills, attitudes, and motivation.

  • Identify both technical and soft skills training requirements and forward them to the Project Manager.

  • Assign tasks to all testing team members and ensure they have sufficient work in the project.

  • Act as the single point of contact between the Development team and the Testers.

In test infrastructure, a Test Lead should arrange the hardware and software requirements for the test setup.

In test execution, a Test Lead should:

  • Ensure content and structure of all testing documents or artifacts are documented and maintained.

  • Document, implement, monitor, and enforce all testing processes as per the standards defined by the organization.

  • Review various reports prepared by Test engineers.

  • Ensure the timely delivery of different testing milestones.

  • Check or review the test cases documents.

  • Keep track of the new or changed project requirements.

In Risk Management, a Test Lead should:

  • Escalate project requirement issues, such as software, hardware, and resources, to the Project Manager or Senior Test Manager as required.

  • Prepare and update the metrics dashboard at the completion of the project and share it with stakeholders.

  • Track testing activities, such as results, test case coverage, required resources, defects discovered, and performance baselines, and prepare reports on them.

In Client Management, a Test Lead should:

  • Organize the status meetings and send daily status reports to the client.

  • Attend client meetings regularly and discuss the weekly status with client.

  • Maintain regular communication with the clients, which is a necessary task for the Test Lead.

Let us discuss the responsibilities of a Tester in the next section.


Responsibilities of Tester

While the Test Lead designs the test strategy and test plans, a Tester is responsible for implementing those test plans and designing low-level test plans, scenarios, and cases.

The main responsibilities of a tester are:

  1. Test Planning

  2. Test Execution

  3. Test Reporting phases

In the test planning phase, the Tester has to:

  • Analyze client requirements.

  • Understand the software application under test.

  • Give inputs to the test plan and test strategy documents.

  • Prepare test cases for module, integration, and system testing, and prepare test data for each test case developed.

  • Prepare the test environment and analyze tests and test cases prepared by other Testers.

  • Write the necessary test scripts.

After the test planning activity, the Tester has the following responsibilities:

  • Execute all the test cases once the code is migrated to the test environment.

  • If any mismatches between actual and expected results are found while executing test cases, log the defects and track them until they are resolved.

  • Once a defect is fixed, perform the necessary retesting of the functionality and close the defect if the issue is resolved.

In the Test Reporting phases, a Tester:

  • Needs to provide data for test reporting, such as defect information, report summaries, and lessons learned documents.

  • Has to conduct review meetings within the team.

In the next section, we will discuss the second topic, ‘Test Planning and Estimation.’

Test Planning and Estimation

In this topic, we will find out how to prepare a test plan and examine planning issues at the project level, at a test level or phase, and for a specific test type. We will also identify the factors that influence the test planning effort. Let us look at test planning in the next section.

Test Planning

The test plan document is one of the main testing documents. It describes all the activities to be carried out during testing and their milestone dates. The test plan serves as the project plan for testing.

The salient features of a test plan are as follows:

  • Test planning includes planning for scope, testing approach, and resources and schedule.

  • It also includes planning for the risks including contingency planning and for the roles and responsibilities in the intended testing activities.

  • Test plan creation starts at the initial or requirements gathering stage of the project and continues throughout the SDLC process.

  • The test plan is a live document, which has to be updated and maintained as the project evolves.

  • It provides a roadmap for the testing project.

  • As testing progresses through various phases, feedback from different stakeholders and various project risks need to be considered to make final changes to the test plan.

In the next section, we will identify the factors to be considered while preparing a good test plan.

Test Plan – Factors

A good test plan is the keystone of a successful testing implementation.

The factors to be considered while building a test plan are as follows:

Test policy of an organization

The approach for testing a project differs from one organization to another. Each organization follows standard processes for common tasks. The organization's test policy is the first factor to keep in mind while preparing a test plan.

Scope of testing

This section defines the boundaries of the project and helps teams focus on the direction of testing. The testing team describes the specific requirements to be tested. This sets the basis of project estimations and results in building the schedule and resource plan for the project.

Objectives of testing

Objectives describe the requirements and goals of testing and vary from project to project. The objectives may be to validate a specific set of requirements, the performance of the system, or other goals.

Let us continue this discussion in the next section.

Test Plan – Factors (contd.)

Other factors to be considered while building a test plan are as follows:

Project risks

The project risks play a vital role in determining the approach for testing. Strategic decisions are risk dependent. These risks need to be constantly evaluated, and plans revised, to ensure that no risk becomes an issue.

Testability of requirements

If any requirement is not testable or partially testable, testing needs to be carefully planned to ensure the risk of such requirements is minimized. In this situation, test results provide information on the state of the software and risk levels. These requirements should be identified and planned well before the testing starts.

Availability of resources

Availability of the right set of resources plays a significant role in determining the process of project execution.

Required resources need to be identified for testing. Gaps should be bridged by acquiring new resources, training existing resources, or changing the plan, depending on requirements and availability.

Project resources are not only people; they can also be hardware, infrastructure, and even software resources.

Test Planning Activities

The different activities performed for completing test planning are as follows:

  • The scope is one of the essential factors in any plan.

  • The scope, objectives, and risks associated with testing need to be defined to develop a robust test plan.

  • The purpose of risk analysis during software testing is to identify high-risk application components and to identify error-prone components within specific applications.

  • The result of the analysis can be used to determine the testing objectives.

  • The test plan should contain a detailed approach for all tasks, and it should also specify the levels, types, and methods of testing, that is, whether the testing is manual or automated, white box or black box.

  • Depending on the project size, the roles and responsibilities should be clearly defined.

  • Each role will have different responsibilities based on their assigned tasks.

  • The schedules for phases such as test analysis and design activities should be planned early.

  • Test implementation, execution, and evaluation of test results should be defined clearly. All templates needed for test documentation should be identified and made available for the resources.

  • Also, the metrics for tracking the project status should be explained clearly. This will help in monitoring test preparation, execution, defect resolution, and controlling risk and issues.
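The risk analysis step described above can be sketched in code. This is a minimal sketch in Python; the component names and ratings are hypothetical, with likelihood and impact rated on a 1-5 scale and risk exposure computed as likelihood multiplied by impact:

```python
# Minimal sketch: ranking application components for risk-based test planning.
# Component names and ratings are hypothetical; likelihood and impact use a
# 1-5 scale, and risk exposure = likelihood x impact.

def risk_exposure(likelihood: int, impact: int) -> int:
    """Return the risk exposure score for one component."""
    return likelihood * impact

components = {
    "payment processing": (4, 5),   # (likelihood of failure, business impact)
    "report generation": (3, 2),
    "user preferences": (2, 1),
}

# Test the highest-exposure components first.
ranked = sorted(components, key=lambda c: risk_exposure(*components[c]), reverse=True)
print(ranked)  # ['payment processing', 'report generation', 'user preferences']
```

The resulting ranking can then feed directly into the testing objectives and the execution schedule.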

Let us continue the discussion on the contents of test plan in the following section.

Contents of Test Plan

Let us discuss the contents of a test plan as defined by IEEE Std 829-1998.

  • These standards are widely adopted in the testing industry and cover critical sections to be included in Test Plans.

  • Test plan identifier is a unique number generated by the testing organization to identify the test plan. A test plan is dynamic in nature and variable in format, so versions need to be always maintained while creating a test plan.

  • All reference sources used should be cited in the test plan. These are included in the Reference section.

  • The introduction gives a brief summary of the software under test and its high-level functionality.

  • Test items are areas to be tested within the scope of the test plan. This includes details about the test. It also contains a list of features to be tested and not to be tested.

  • Software risk issues list all risk areas in the software such as complex functional area, government rules and regulations, lack of right resources, and changing requirements.

  • Features to be tested includes a list of features to be tested as part of this test phase. It is important for the test effort to focus only on specific test areas in each test cycle.

  • Features not to be tested, similarly, lists the features that should not be tested in this test cycle. These features are considered out of scope for the test phase and might be planned for testing in other test phases.

  • Approach or Strategy section describes the overall test strategy for the plan.

  • Item pass or fail criteria section describes the criteria for passing a test condition. This is a critical aspect of any test plan and should be appropriate to the level of the plan.

  • Suspension criteria and resumption requirements clearly document when to stop testing and when it can be resumed.

  • Test deliverable section of test plan describes the deliverables, such as test plan, test schedule, test cases, error logs, and execution logs as a part of this testing activity.

  • Remaining Test Tasks include a list of leftover tasks after the testing has been completed. These include tasks which are usually passed on to the next phase of testing.

  • Environmental needs list all environmental needs such as software, hardware, and any other tools for testing along with their versions if required.

  • Staffing and training needs section helps to identify resources and their skill levels. If a resource needs training on the application or on any other tool, that training should be documented in this section.

  • Responsibilities section covers the roles and their accountabilities. For example, if the resource is a test manager, then the responsibilities of that role are listed in this section.

  • The schedule section plans all testing tasks based on realistic and validated estimates. If the estimates for the development of the application are inaccurate, the entire project plan, including testing, will suffer.

  • Planning Risks and Contingencies is a critical part of any planning effort. It helps the test manager to identify project risks and plan for the contingencies for the same.

  • The approvals section lists all the approvers of the test plan along with their roles in the project.

  • The glossary contains terms and acronyms used in the document and testing in general.

This section provides information to avoid confusion and promote consistency in all communications.  
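As an illustration, the section list above can be turned into a simple completeness check for a draft test plan. This is a minimal sketch in Python; the section names follow the IEEE Std 829-1998 outline described above, and the draft plan dictionary is hypothetical:

```python
# Minimal sketch: checking a draft test plan against the IEEE Std 829-1998
# section outline discussed above. The draft dictionary is hypothetical.

IEEE_829_SECTIONS = [
    "Test plan identifier", "Introduction", "Test items",
    "Features to be tested", "Features not to be tested",
    "Approach", "Item pass/fail criteria",
    "Suspension criteria and resumption requirements",
    "Test deliverables", "Testing tasks", "Environmental needs",
    "Responsibilities", "Staffing and training needs",
    "Schedule", "Risks and contingencies", "Approvals",
]

def missing_sections(plan: dict) -> list:
    """Return the IEEE 829 sections absent from a draft plan."""
    return [s for s in IEEE_829_SECTIONS if s not in plan]

draft = {"Test plan identifier": "TP-001", "Introduction": "Loan module release 2"}
print(missing_sections(draft))  # the 14 sections still to be written
```

Such a check can be run during plan reviews to make sure no mandated section is forgotten.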

In the next section, we will discuss the test execution schedule.

Test Execution Schedule

The following factors should be considered while building a Test Execution Schedule:

  • Technical dependencies such as availability of hardware, software, environment for executing the tests.

  • Logical dependencies, such as specific test cases that should pass before other related tests are executed.

  • Priority of test cases is an important factor that will determine the priority of test execution.

  • Project risks also play an important role in determining the project execution schedule.

For example, if there is a resource risk for execution of specific types of tests after a particular date, the Test Manager may schedule these tests earlier in the test cycle. A simplified sample execution schedule is given below.

Simplified Sample Execution Schedule

SL.No  Procedure Name                      Calendar Day  Test Cycle Date  Owner
1      Approve Loan                        0             12-June-2012     Tester A
2      Generate Interest                   30            13-July-2012     Tester A
3      Interest notice issued to customer  30            13-July-2012     Tester A
4      Interest paid by customer           35            18-July-2012     Tester A
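The factors above, particularly logical dependencies and priority, can be combined into a simple ordering sketch. This minimal Python example uses the standard library's graphlib (Python 3.9+); the test names, dependencies, and priorities are hypothetical, loosely following the loan example:

```python
# Minimal sketch: ordering test procedures so that logical dependencies are
# respected, breaking ties by priority (lower number = higher priority).
# Names, dependencies, and priorities are hypothetical.
from graphlib import TopologicalSorter

priority = {"Approve Loan": 1, "Generate Interest": 2,
            "Issue Interest Notice": 3, "Record Interest Payment": 3}

# Each test maps to the tests that must pass before it can run.
depends_on = {
    "Generate Interest": {"Approve Loan"},
    "Issue Interest Notice": {"Generate Interest"},
    "Record Interest Payment": {"Issue Interest Notice"},
}

ts = TopologicalSorter(depends_on)
ts.prepare()
schedule = []
while ts.is_active():
    # Among the tests that are ready to run, schedule higher priority first.
    for test in sorted(ts.get_ready(), key=lambda t: priority[t]):
        schedule.append(test)
        ts.done(test)

print(schedule)
```

In practice, the resulting order would then be mapped onto calendar days and owners, as in the sample schedule above.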

In the next section, we will look at entry criteria.

Entry Criteria

Entry criteria define the prerequisites to be achieved before the testing activity can start.

The main focus is to check whether a tester can perform the testing tasks on the software without major obstacles.

The main areas to look at while defining entry criteria are as follows:

  • Testing environment setup and availability

  • Availability of all testing tools

  • Accessibility of the testable code

  • Availability of the test data; it should be prepared, or acquired if there are dependencies on other teams for test data.

From the testing perspective, all the test cases have to be completed, reviewed, and signed off.
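Such a check can be expressed as a simple gate before execution begins. This is a minimal Python sketch; the checklist items mirror the areas listed above, and the readiness flags are hypothetical:

```python
# Minimal sketch: an entry-criteria gate before test execution starts.
# The checklist keys and the readiness flags are hypothetical.

def entry_criteria_met(checks: dict) -> bool:
    """All prerequisites must hold before execution begins."""
    return all(checks.values())

checks = {
    "environment ready": True,
    "test tools available": True,
    "testable code deployed": True,
    "test data prepared": False,   # still waiting on another team
    "test cases signed off": True,
}

# List the blockers so they can be escalated before the start date.
blockers = [name for name, ok in checks.items() if not ok]
print(entry_criteria_met(checks), blockers)  # False ['test data prepared']
```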

Let us now discuss the exit criteria in the next section.

Exit Criteria

Exit Criteria define the conditions to be met before testing can be considered as complete.

Exit criteria indicate that the software is up to the required quality and can be deployed into production. Focus points for exit criteria are as follows:

  • Thoroughness measures, such as coverage of code, functionality, or risk

  • Estimation of defect density or reliability measures

  • Cost or budget

  • Residual risks, such as defects not fixed or lack of test coverage in certain areas

  • Schedules, such as time to market
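These focus points can be combined into a simple go/no-go decision. This is a minimal Python sketch; the thresholds and measured values are hypothetical examples, not mandated figures:

```python
# Minimal sketch: evaluating exit criteria at the end of a test cycle.
# The thresholds and measured values below are hypothetical examples.

def exit_criteria_met(coverage: float, open_critical_defects: int,
                      defect_density: float) -> bool:
    """Sample decision rule: enough coverage, no open critical defects,
    and defect density under an agreed ceiling."""
    return (coverage >= 0.95
            and open_critical_defects == 0
            and defect_density <= 0.5)

print(exit_criteria_met(coverage=0.97, open_critical_defects=0, defect_density=0.3))  # True
print(exit_criteria_met(coverage=0.97, open_critical_defects=2, defect_density=0.3))  # False
```

Residual risks that fail such a check would either block release or be explicitly accepted by the stakeholders.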

We will discuss Test Estimation in the next section.

Test Estimation

Test effort is the effort required to perform a testing task in either person-days or person-hours.

For the success of any project, test estimation and proper execution are equally important. There are two ways to calculate the estimates.

  1. An estimating technique based on metrics collected from previous similar projects is called the metrics-based approach.

  2. A technique based on the expertise of the owner of the task or of specialists in the area is called the expert-based approach.
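The metrics-based approach can be illustrated with a short calculation. In this minimal Python sketch, the historical project figures are hypothetical; the estimate is derived from average past productivity in test cases per person-day:

```python
# Minimal sketch of the metrics-based approach: deriving a test effort
# estimate from productivity figures of earlier, similar projects.
# All numbers are hypothetical.

past_projects = [
    # (test cases executed, person-days of test effort)
    (400, 50),
    (620, 80),
    (300, 36),
]

# Average historical productivity in test cases per person-day.
productivity = sum(tc for tc, _ in past_projects) / sum(pd for _, pd in past_projects)

new_project_test_cases = 500
estimate_person_days = new_project_test_cases / productivity
print(round(estimate_person_days, 1))
```

An expert-based estimate for the same work could then be compared against this figure as a sanity check.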

Starting at the highest level, a testing project can be broken down into phases using the fundamental test process identified in the ISTQB Syllabus:

  • Planning and control

  • Analysis and design

  • Implementation and execution

  • Evaluating exit criteria and reporting

  • Test closure

Within each phase, activities can be identified and within each activity tasks and occasionally subtasks can be identified.

To identify the activities and tasks, you can work both forward and backward.

Working forward means starting with the planning activities and moving forward in time, step by step.

Working backward means starting from the identified risks and deriving the testing activities needed to address them.

Let us now identify the factors impacting test efforts.

Factors Impacting Test Efforts

Testing is a complex process, and a variety of factors can influence it. While creating test plans and estimating the testing effort and schedule, these factors must be kept in mind.

Factors can be broadly classified into product characteristics and characteristics of the development process.

Product Characteristics

  • Product characteristics like the complexity of the software impact the testing efforts.

  • Highly complex software requires more test effort.

  • The importance of non-functional quality characteristics, such as usability, reliability, security, and performance, also influences the testing effort.

  • If the number of platforms to be supported is high, this will increase the test effort as the application needs to be tested across all these platforms.

Development Process Characteristics

  • Among the development process characteristics, clearly documented requirements help in defining tests more efficiently, thus reducing rework effort.

  • Unskilled resources add more effort to the test cycle and hence impact the test estimates.

  • The higher the number of defects, the higher the test effort is likely to be.

  • Stability of processes, tools, and techniques used in the test process is another factor that impacts the test efforts. When these factors are not met, it leads to high test efforts.

Let us now look at Test Strategy and Test approach in the next section.

Test Strategy and Test Approach

Test strategy is a high-level description document of the test levels to be performed for an organization or program. Test approach is the implementation of the test strategy for a specific project.

Test strategy

  • It is developed by the Project Manager and defines the “Testing Approach” to achieve the testing objectives.

  • It is derived from the Business Requirement Specification document.

  • It is created to inform Project Managers, Testers, and Developers about the key testing objectives.

  • It includes the methods of testing new functions, total time and resources required for the project, and the testing environment.

Test approach

Test approach includes the decisions made based on the test project goal, risk assessment outcomes, test process starting points, test design techniques, exit criteria, and test types.

Though the test strategy and test approach are seen as sequential activities, the test approach is identified while defining the test strategy. Sometimes the test approach might be included in the test strategy document.

We will discuss the components of a test strategy document in the following section.

Components of Test Strategy Document

A test strategy document typically has the following components:

  • Scope and objective of testing, which clearly defines all the testable and non-testable items.

  • Business issues to be addressed during testing.

  • Responsibilities of different roles in testing.

  • The communication protocol, the frequency of status reporting (including benchmark figures), and the test deliverable lists with all artifacts to be delivered to the client.

  • Industry standards to be followed such as metrics.

  • Test automation and tools.

  • Testing measurements and metrics used to measure the testing progress.

  • Foreseen risks and mitigation plans.

  • Defect reporting and tracking, which define the defect management process and defect management tools.

  • Change and configuration management, which is used to list all the configurable items.

  • Training plan, which plays a vital role when third-party testing is involved.

In the next section, we will discuss an example of High-level Test Strategy.

High-level Test Strategy – Example

For example, in an upcoming Maintenance Test Release, due to the nature of fixes, it has been decided to focus on regression testing.

Considering the expanse of regression test scenarios, testing should use automated test scenarios to the maximum.

Due to the migration from internal server to the cloud, performance testing scenarios for ‘Component X’ should be thoroughly executed.

Due to the dependency on vendor PQR for issues related to Module B, the module needs to be tested first in the order of priority so that any issues related to this can be passed on to the vendor for closure.

Typical Test Approaches

Let us discuss different test approaches that can be used for test planning.

Analytical approaches

All analytical test strategies use some formal or informal analytical technique during the requirements and design stages of the project. Risk-based testing, where testing is directed to the areas of greatest risk, is an example of the analytical approach.

Model-based approaches

Testing takes place based on mathematical models, for example, models of loading and response for e-commerce servers. Model-based approaches, such as stochastic testing, use statistical information about failure rates (such as reliability growth models) or usage (such as operational profiles).

Methodical approaches

Methodical test strategies have in common the adherence to a pre-planned, systematized approach that has been developed in-house, assembled from various concepts, or adapted significantly from outside ideas. Methodical approaches include failure-based strategies (including error guessing and fault attacks), as well as experience-based, checklist-based, and quality-characteristic-based strategies.

Process- or standard-compliant approaches

These strategies have in common a reliance on an externally developed approach to testing. Process- or standard-compliant approaches are specified by industry-specific standards or by the various Agile methodologies.

Dynamic approaches

Dynamic strategies are a lightweight set of testing guidelines that focus on rapid adaptation or on known weaknesses in software. Dynamic strategies, such as exploratory testing, concentrate on finding as many defects as possible during test execution and on adapting to the realities of the delivered test system; they emphasize the later stages of testing.

Consultative or directed approaches

Here, test coverage is driven primarily by the advice and guidance of external technology or business domain experts.

Consultative or directed strategies commonly rely on a group of non-testers to guide or perform the testing, and they emphasize the later stages of testing due to the lack of recognition of the value of early testing.

Regression-averse approaches

Regression-averse strategies include the reuse of existing test material, extensive automation of functional regression tests, and standard test suites. They commonly have a set of usually automated procedures that allow them to detect regression defects.

These strategies may involve automating functional tests before the release of the function, which requires early testing. However, sometimes testing is entirely focused on the functions already released, as a form of post-release test involvement.

Let us discuss the selection of the right test approach in the following section.

Selecting a Test Approach

The choice of test approaches or strategies is an important factor in the success of the test effort and the accuracy of the test plans and estimates.

Now let us look at the factors to consider before selecting the right test approach.

Risk

Testing is about risk management. Hence risk and its level have to be considered. For a well-established application, regression is an important risk. For a new application, a risk-based analytical strategy may reveal different risks.

Skill

Strategies must be chosen and executed considering the skills and experience of the Testers. A standard-compliant strategy is best when there are time constraints and a lack of skill to create a customized approach.

Objective

Testing must fulfill the needs of stakeholders. For example, in an independent test lab, if the objective is to find maximum defects with a minimal amount of time and effort invested, then the right approach is a dynamic strategy.

Regulation

Sometimes along with stakeholders, regulator’s needs also have to be fulfilled. This includes the internal and external regulations for the development process. In this case, a methodical test strategy needs to be devised.

Product

The nature of the product or project plays an important role in deciding the approach. Some products, such as weapons systems and contract-development software, tend to have well-specified requirements. These lead to synergy with a requirements-based analytical strategy.

Business

Business considerations and continuity are important. If a legacy system is used as a model for a new system, a model-based strategy can be used.

Let us move on to the third topic, ‘Test Progress Monitoring and Control,’ in the following section.

Test Progress Monitoring and Control

In the next few sections, we will understand the concept of test progress monitoring, define its related terms, identify the common test metrics, and understand test reporting and control. Let us understand the concept of test progress monitoring in the next section.

Test Progress Monitoring

Planning for tasks is important. However, it is not the only factor for a successful project. The testing work has to be tracked.

The salient features of test progress monitoring are as follows:

  • Test progress monitoring is a test management task that periodically monitors the status of a test project.

  • Metrics, which measure actual progress against the planned milestones, are used to monitor test progress.

  • Test progress monitoring also gives visibility and feedback on test activities.

Some of the factors that indicate the status of test activities are:

  • The level of test plan and test case completion

  • The pass, fail, and blocked status of the tested objects

  • The amount of testing yet to be completed

  • Number of open defects

  • Amount of retesting and regression testing required

Test Monitoring—Definitions

Let us now look at a few definitions that are important for understanding test monitoring:

Failure rate

Failure rate can be defined as the ratio of the number of failures of a given category to a given unit of measure.

For example, failures per unit of time, per number of transactions, and per number of computer runs.

In the case of the test case failure rate, it is calculated as the number of failed test cases divided by the number of test cases executed.

Defect density

Defect density can be defined as the number of defects identified in a component or system, divided by the size of the component or system.

Defect density can be expressed in standard measurement terms such as lines of code, number of classes, or function points.

The formula for calculating defect density is the number of defects divided by the number of function points tested.
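Both formulas can be expressed directly in code. This is a minimal Python sketch with hypothetical counts:

```python
# Minimal sketch of the two formulas above. The counts are hypothetical.

def failure_rate(failed: int, executed: int) -> float:
    """Failed test cases divided by test cases executed."""
    return failed / executed

def defect_density(defects: int, size: float) -> float:
    """Defects found divided by the size of the component,
    e.g. in KLOC or function points."""
    return defects / size

print(failure_rate(12, 200))     # 0.06 -> 6% of executed tests failed
print(defect_density(30, 15.0))  # 2.0 defects per unit of size
```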

Test monitoring

Test monitoring is a test management task that periodically checks the status of a test project.

Reports are prepared that compare the actual progress to the planned progress.

In the following section, let us look at the different test metrics which are commonly used to monitor test activities.

Common Test Metrics

The commonly used test metrics are as follows:

Test coverage

Test coverage is a popular metric used for test monitoring. It covers the extent of coverage achieved against requirements, risks, or code. The formula for this metric is the number of requirements tested divided by the total number of requirements. The higher the coverage, the better the quality of testing.

Percentage of test case preparation

Percentage of test case preparation helps the Manager identify the extent of preparedness for testing. This is calculated as the number of test cases prepared, divided by the total number of test cases planned for preparation.

Percentage of test environment preparation

Percentage of test environment preparation is calculated as the amount of Test Environment preparation complete divided by the total amount of preparation required. It is also a helpful indicator to gauge the preparedness of testing effort.

Percentage of test case execution

Percentage of test case execution is calculated as the number of test cases executed, divided by the total number of planned test cases to be executed. This is an indicator of the amount of test execution progress achieved.
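The progress metrics described above all share the same "done divided by total" shape. A minimal sketch, using hypothetical counts that would normally come from a test management tool:

```python
def percentage(done, total):
    """Return done/total as a percentage, guarding against total == 0."""
    return 100.0 * done / total if total else 0.0

# Hypothetical example counts.
requirements_tested = percentage(180, 200)   # test coverage
test_cases_prepared = percentage(340, 400)   # test case preparation
test_cases_executed = percentage(260, 340)   # test case execution

print(f"Test coverage:         {requirements_tested:.1f}%")
print(f"Test case preparation: {test_cases_prepared:.1f}%")
print(f"Test case execution:   {test_cases_executed:.1f}%")
```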

Defect information

Defect information such as defect density, defects found, open and fixed defects are useful metrics for evaluating the software stability and production readiness.

Defect metrics are also used as an indicator to gauge the overall health of the development process.

Defect density is calculated as the total number of defects divided by the total number of modules.

The confidence level of Testers

The confidence level of Testers in the application or product can be captured through surveys or voting.

Test milestones dates

Test milestones dates are set as a part of the test plan. These need to be monitored to measure any schedule slippages.

Testing costs

Testing costs are weighed against the benefit of finding the next defect or running the next test. They are usually calculated as the total amount spent on testing, including the cost of manpower, environments, and other associated costs.

These should constantly be monitored to ensure that testing does not exceed the budget or to evaluate when to stop testing.

Let us discuss an example of test metrics, in the next section.

Test Metrics – Example

For example, the dashboard image below represents the status of test execution on a specific date.

[Image: test metrics example dashboard]

It summarizes the percentage of test completion and of test cases that passed and failed.

The percentage completion of tests in this current cycle is 84.6%. This could be a cause of concern for the management if the product release date is imminent.

The biggest concern lies in the area of test conditions that are on hold or deferred. Until these conditions are released for testing, the test coverage will not be complete.
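A hypothetical breakdown behind a dashboard like the one described, showing how an 84.6% completion figure can coexist with on-hold and deferred conditions that block full coverage:

```python
# Hypothetical counts of test conditions by status.
status_counts = {"passed": 90, "failed": 20, "on_hold": 12, "deferred": 8}

total = sum(status_counts.values())
executed = status_counts["passed"] + status_counts["failed"]

completion = 100.0 * executed / total
untested = status_counts["on_hold"] + status_counts["deferred"]

print(f"Completion: {completion:.1f}%")                     # 84.6%
print(f"Conditions still blocking full coverage: {untested}")
```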

In the next section, we will understand the concept of test reporting.

Test Reporting

Test summary reports are submitted for the testing period, generally at the logical conclusion of testing such as at the end of each phase or test project.

Test Leaders need to generate multiple reports during the test planning, design, and execution phase to report the progress of test activities. These reports keep all stakeholders informed and help the Test Leader get attention or resources to resolve project risks.

Based on the requirements and complexity of the project, the test team maintains different reports, such as daily, weekly, monthly, and even quarterly status reports.

The test summary report should include recommendations and decisions for future actions, based on the metrics collected. These reports include lessons learned, which help prevent repeating mistakes in future phases and/or projects. We have seen the different types of metrics and reports a test team needs to manage.

In the following section, let us now understand the purpose of managing them.

The Need for Test Metrics

Metrics should be collected at the end of a test level to assess:

  • Adequacy of the test objectives

  • Adequacy of the test approaches taken

  • The effectiveness of the testing concerning the objectives

If one or more of the above parameters are judged inadequate, the tests need to be re-planned. This cycle works iteratively until all the parameters are adequately met.
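The iterative cycle above can be sketched as a simple loop. The adequacy judgments below are hypothetical booleans that would, in practice, come from reviewing the collected metrics:

```python
def parameters_adequate(assessment):
    """All three parameters must be judged adequate for testing to be done."""
    return all(assessment[k] for k in
               ("objectives_adequate", "approach_adequate", "testing_effective"))

# Hypothetical sequence of end-of-level assessments.
assessments = iter([
    {"objectives_adequate": True, "approach_adequate": False, "testing_effective": True},
    {"objectives_adequate": True, "approach_adequate": True,  "testing_effective": True},
])

cycles = 0
while not parameters_adequate(next(assessments)):
    cycles += 1  # one or more parameters inadequate: re-plan the tests
print(f"Re-planned {cycles} time(s) before all parameters were met")
```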

In the next section, we will discuss test control.

Test Control

Test control is a test management task that deals with developing and applying a set of corrective actions to get a test project back on track when monitoring shows a deviation from the plan. Actions may cover any test activity and may affect other software life cycle activities or tasks.

For example, an organization usually conducts performance testing on weekday evenings, during off-hours, in the production environment.

Due to unanticipated high demand for its products, the company has temporarily adopted an evening shift that keeps the production environment in use 18 hours a day, five days a week. This increase in production time reduces the time available for conducting performance testing, which is a risk for the performance testing team.

To mitigate this risk, the team takes corrective action as part of test control. This may involve rescheduling the performance tests to the weekend to ensure zero impact on the testing schedule.

Regular monitoring of risks and test metrics, therefore, helps the project remain on track to meet the test objectives.

Let us move on to the next topic, ‘Configuration Management,’ in the following section.

Configuration Management

In the next few sections, we will look at the concept of configuration management, its objectives, and its role in testing.

Let us take an overview of configuration management in the next section.

Overview of Configuration Management

Configuration management is a disciplined approach to the management of software and the associated design, development, testing, operations, and maintenance activities.

It involves the following steps:

  1. Planning and Identification

  2. Control

  3. Status accounting

  4. Verification and audit activities

Planning and Identification

Planning and identification involves planning the entire configuration management activity and identifying the configurable items.

Control

Control is about controlling releases and changes to configurable items.

Status accounting

Status accounting involves recording and reporting the status of configurable items.

Auditing

Auditing verifies the completeness and correctness of configurable items.

Depending on the roles and associated access rights defined during the planning activities, users may have read, edit, and delete options, which may also involve an approval process.

This access-control process is integral to all the steps of configuration management. The purpose of configuration management is to establish and maintain the integrity of the products, including components, data, and documentation of the software or system, throughout the project and product life cycle.
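The four steps can be illustrated with a minimal in-memory record of configurable items; real projects use dedicated configuration management tools, and the item names and statuses below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ConfigItem:
    """A configurable item identified during planning (step 1)."""
    name: str
    version: str   # controlled releases and changes (step 2)
    status: str    # e.g. "draft", "under review", "released"

items = [
    ConfigItem("login_module.py", "1.4", "released"),
    ConfigItem("test_plan.doc",   "2.0", "under review"),
]

# Status accounting (step 3): record and report the status of each item.
# An audit (step 4) would verify these records for completeness and correctness.
for item in items:
    print(f"{item.name} v{item.version}: {item.status}")
```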

In the next section, we will discuss the objectives of configuration management.

Objectives of Configuration Management

Objectives of Configuration Management are to:

  • Provide accurate information on time, to the right person, at the right place

  • Support processes like incident and change management

  • Eliminate duplication of data and effort

  • Achieve project management in a cost-effective way and with improved quality.

Let us now discuss how configuration management supports testing.

Configuration Management in Testing

Configuration management has some important implications for testing. It allows testers to manage their testware and test results using the same configuration management mechanisms.

Configuration management supports the build process, which is essential for delivery of a test release into the test environment.

Sending Zip archives by e-mail is not sufficient, as there is a chance of polluting the archives with undesirable contents, such as previous versions of items.

It is vital to have a solid, reliable way of delivering test items that are the proper version and work well, especially in later phases of testing such as system testing or user acceptance testing.

As seen in the image below, configuration management also allows us to map what is being tested to the underlying files and components that make it up, which is absolutely critical.

[Image: configuration management in testing]

For example, when reporting defects, they need to be reported against a test case or a requirement that is version controlled. If all the required details are NOT mentioned clearly, developers will have a tough time fixing the defects.

The reports discussed earlier must be traceable to what was tested. Ideally, when testers receive an organized, version-controlled test release from a change-managed source code repository, it is accompanied by release notes.

Release notes may not always be so formal and do not always contain all the information.

During the test planning stage, ensure that configuration management procedures and tools are selected. As the project proceeds, the configuration process and mechanisms are implemented, and the key interfaces to the rest of the development process are documented.

During test execution, this allows the project team to avoid unwanted surprises like testing the wrong software, receiving un-installable builds, and reporting irreproducible defects against versions of code that don't exist anywhere but in the test environment.

Let us move on to the next topic, ‘Risk and Testing,’ in the following section.

Risk and Testing

In the next few sections, we'll discuss how to determine the level of risk using likelihood and impact along with the various ways to conduct risk analysis and management.

Let us discuss the concept of risk and testing in the next section.

Risk and Testing

Commonly used terms in risk management are as follows:

Risk

A risk is an event or situation that could result in undesirable consequences or a potential problem.

Exposure

Exposure is the amount of loss incurred if an undesirable event occurs. For example, a car accident can cause both losses of life and property.

Threat

The threat is a specific event that may cause an undesirable event to occur.

For example, driving under the influence of alcohol poses a threat that accidents might occur.

Control

Control is an action that reduces risk impact.

For example, to mitigate the threat of an accident, the control measure taken is not driving in an intoxicated state.

The likelihood of a risk occurring lies between 0 and 100 percent; a risk is never a certainty. At a high level, risk management involves assessing the possible risks, prioritizing them, deciding which risks to address, and implementing controls to address them.
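Prioritization is often done by quantifying each risk as likelihood times impact (a common formulation, though the syllabus does not prescribe a formula). The risks and their ratings below are purely hypothetical:

```python
# Each risk: (description, likelihood as a probability, impact on a 1-10 scale).
risks = [
    ("Test environment unavailable",  0.4, 8),
    ("Key tester leaves mid-project", 0.2, 9),
    ("Requirements change late",      0.6, 7),
]

# Prioritize: address the highest-exposure risks first.
by_exposure = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, likelihood, impact in by_exposure:
    print(f"{name}: exposure = {likelihood * impact:.1f}")
```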

In the next section, we will look at the project risks.

Project Risks

Risks related to the management and control of a project that impact the project's capability to deliver its objectives are known as project risks.

Project risks can be divided into three main categories:

  1. Technical issues

  2. Organizational factors

  3. Supplier issues

Risks under technical issues include requirements that are not clearly defined, requirements that are not technically feasible, low quality of design and code, improper or inefficient technical planning, and unavailability of the test environment.

Risks under organizational factors include resource issues or skill shortages even when resources are available, training issues, communication problems, and improper attitudes or wrong expectations.

Risks under supplier issues include contractual issues or third-party failure risks. All these risks are possible on a project, and hence they need to be identified and mitigated effectively.

Let us now look at the product based risks in the next section.

Product-Based Risks

Product-based risk can be defined as the possibility of a system or software failing to satisfy reasonable customer, user, or stakeholder expectations, which would compromise the quality of the product.

Common product-based risks include:

  • Failure-prone software delivered

  • Software or hardware potentially harmful to an individual or company

  • Poor software characteristics such as functionality, reliability, usability, and performance

  • Poor data integrity and quality such as data migration issues

  • Data conversion problems, data transport problems

  • Violation of data standards; and software not performing its intended functions

Let us continue this discussion in the next section.

Product-Based Risks (contd.)

There are four options for addressing product or project risks.

Mitigate the risk

Mitigate the risk by taking advance steps to reduce the possibility and impact of the risk.

Plan for contingency

Plan for contingency, which means having a plan in place to reduce the impact of the risk should it occur.

Transfer the risk

Transfer the risk by convincing another team member or project stakeholder to reduce the probability of occurrence, or accept the risk impact.

Ignore the risk

Ignore the risk by not taking any action; this is a plausible option if effective action cannot be taken, or if the probability and impact of the risk are low.

There is another risk-management option, buying insurance, which is not usually pursued for project or product risks on software projects, though it is not unheard of.

In the next section, we will find out how testing can act as a risk controller.

Testing as Risk Controller

In general, apart from functionality, software might have problems related to other quality characteristics, such as security, reliability, usability, maintainability, or performance.

  • Risks are used to decide where testing starts and where it focuses; testing helps reduce the risk of an adverse effect occurring, or its impact.

  • Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans.

We will understand risk-based testing in the next section.

Risk-Based Testing

Risk-based testing organizes the testing effort in a way that reduces the residual level of product or project risk when the system is shipped to production.

  • Risk-based testing starts early in the project and uses risk to prioritize and emphasize the appropriate tests during execution. It identifies system quality risks and uses that knowledge to guide test planning, specification, preparation, and execution.

  • Risk-based testing involves both mitigation and contingency.

  • It also involves measuring, finding, and removing defects in critical areas, and using risk analysis to identify proactive opportunities to remove or prevent defects through non-testing activities and to help select test activities.

Let us continue this discussion on risk-based testing, in the next section.

Risk-Based Testing (contd.1)

Risk-based testing starts with product risk analysis.

  • One technique is reading the requirements specification, design specifications, user documentation, and other items thoroughly.

  • Another technique is brainstorming with many project stakeholders.

  • A sequence of one-on-one or small-group sessions with the business and technology experts in the company can be used as another technique.

A team-based approach that involves the key stakeholders and experts is preferable to a purely document-based approach, as it relies on the knowledge, wisdom, and insight of the entire team to determine what to test.

Let us continue this discussion on risk-based testing, in the next section.

Risk-Based Testing (contd.2)

A risk-based approach explores and provides information about product or project risks from the initial stage of a project. It involves the identification of product risks and their use in guiding test planning and control, and the specification, preparation, and execution of tests.

The risks identified guide test planning, specification, design, and execution.

During test planning, risk helps to:

  • Select the test techniques to be used

  • Determine the extent of testing needed

  • Prioritize testing in an attempt to find the critical defects as early as possible

  • Determine whether any non-testing activities could be employed to reduce risk

An example of a non-testing activity is providing training to inexperienced designers.
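Using risk ratings to order test execution can be sketched as follows, so that tests for the riskiest areas run first. The areas and ratings below are hypothetical:

```python
# Hypothetical test areas with their assessed risk level.
test_areas = {
    "payment processing":  "high",
    "report formatting":   "low",
    "user authentication": "high",
    "help pages":          "low",
    "order workflow":      "medium",
}

# Map risk levels to sort ranks; sorted() is stable, so ties keep their order.
rank = {"high": 0, "medium": 1, "low": 2}
execution_order = sorted(test_areas, key=lambda area: rank[test_areas[area]])
print(execution_order)
```

High-risk areas are tested first, so the most critical defects have the best chance of being found early.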

Risk-Based Testing – Example

Let us look at an example to illustrate the concept of risk-based testing in this section. Risk-based testing starts with the identification of the risks in the system and planning testing activities to mitigate the risks. The biggest threat to online banking software is the security of the application.

Any unauthorized access can compromise the integrity of personal data. It can lead to unlawful fund transactions causing huge financial losses to customers.

While planning tests for such applications, there is a huge emphasis on security testing. Security tests need to be conducted on each layer of the application, and even on the physical servers.

Also, key elements of security in the software design need to be tested. If there are examples of security breaches in similar applications at other banks, then the test team should specifically plan to test those scenarios.

Let us move on to the next topic, ‘Incident Management,’ in the following section.

Incident Management

In the next few sections, we will look at the process of documenting and managing test execution incidents. We will also understand the procedure for reporting incidents and defects. Let us discuss the overview of incident management in the next section.

Overview of Incident Management

Incident management is the process of ensuring that incidents are tracked through all the steps of the incident lifecycle. Effective incident management requires a well-defined process and classification rules.

Any deviation of actual results from expected results is termed a defect. The name of this deviation varies from one organization to another; for example, incidents, bugs, defects, problems, or issues.

Any incident, once accepted as a valid defect, is termed a bug. The steps of its lifecycle include incident logging, classification, correction, and confirmation of the solution.

Recording the details of an incident is known as incident logging. Incidents can be reported during the development, review, or use of a software product. They can be raised against issues in code or against any type of deviation in project documentation.

In the next section, we will look at the objective of an incident report.

Incident Report Objective

An incident report, or defect report, is a formal record of each incident.

Objectives of the incident report are as follows:

  • Provide developers with feedback on the problem to enable identification.

  • Assist in isolation and correction of incidents as necessary.

  • Track the quality of the system and progress of testing.

  • Provide ideas for test process improvement.

  • Provide Programmers, Managers, and others with detailed information about the behavior observed and the defect.

  • Support the analysis of trends in aggregate defect data, either for a better understanding of a specific set of problems or tests, or for understanding and reporting the overall system quality level.

Let us now discuss the contents of an incident report in the next section.

Incident Report Contents

Any incident raised is classified based on business or system impact, also known as severity. Another classification is based on the urgency for a solution, also known as priority.

Apart from them, an incident report includes details like:

  • Unique Identifier

  • Date and author, which are usually auto-generated by the test or defect management tool

  • Summary of incident detailing where the expected and actual results differ.

  • Steps to reproduce the defect

  • Actual results

  • Expected results

  • The environment in which defect is detected

  • Impact on the progress

  • Anomalies if any, and any additional notes or comments from the Tester

  • A description and classification of the observed misbehavior
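The contents listed above can be gathered into a minimal record. The field names and example values are illustrative; real defect-tracking tools define their own schemas:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IncidentReport:
    """Minimal incident/defect report covering the fields listed in the text."""
    identifier: str            # unique identifier
    author: str
    summary: str               # where expected and actual results differ
    steps_to_reproduce: list
    actual_result: str
    expected_result: str
    environment: str           # environment in which the defect was detected
    severity: str              # business/system impact
    priority: str              # urgency for a solution
    reported_on: date = field(default_factory=date.today)

report = IncidentReport(
    identifier="INC-1042",
    author="tester1",
    summary="Login fails with valid credentials",
    steps_to_reproduce=["Open login page", "Enter valid credentials", "Click Login"],
    actual_result="Error 500 displayed",
    expected_result="User is logged in",
    environment="QA server, build 3.2.1",
    severity="high",
    priority="high",
)
print(report.identifier, report.severity)
```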

Let us continue this discussion in the next section.

Incident Report Contents (contd.)

Finally, defect reports, when analyzed within and across projects, give:

  • Information that can lead to the development and test process improvements.

  • The information programmers need to find and fix the defects; before this step, managers should review and prioritize the defects.

  • Since some defects may be deferred, workarounds and other helpful information should be included for help desk or technical support teams.

  • Testers often need to know their colleague’s test results so that they can watch for similar behavior elsewhere and avoid trying to run tests that will be blocked.

A good incident report is a technical document, and any good report results from a careful approach to researching and writing it. There are some rules of thumb that can help in writing a better incident report.

They are as follows:

  • Descriptions of incidents should be as factual as possible and should include enough detail to help the developer replicate the defect.

  • Incidents should not be raised against individuals but against the system under test.

  • The goal of the report should be to ensure the developer is able to identify and fix the defect without causing other defects in the process.

In the following section, we will look at the lifecycle of an incident.

Incident Lifecycle

As seen in the image below, any incident starts with a reported status, which is when a deviation is first noticed in any of the testable items and is documented in the incident management tool.

[Image: incident lifecycle]

Reported incidents are then verified to check whether they are valid defects. Invalid defects are moved to Rejected status.

Once the incident is identified as a valid one, it is moved to Opened status.

An incident might also go to Deferred status if it cannot, or need not, be fixed right away.

The open defect may need immediate resolution or can be deferred to future releases. Defects to be resolved are assigned to the appropriate resource and are fixed.

Once the Incident is assigned to a specific person or team, its status changes to Assigned.

The expected resolution time depends on the severity and priority of the defect.

Fixed incidents are retested by the incident owner and are Closed after a successful fix.

If the incident has not been successfully fixed, it is Reopened.

Similar to an open incident, reopened defects can also be assigned for resolution or can be deferred. With this, we have reached the end of the lesson.
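Before we wrap up, the lifecycle above can be sketched as a small state machine of allowed status transitions. The status names follow the text; real defect-tracking tools vary:

```python
# Allowed transitions from each status, per the lifecycle described above.
TRANSITIONS = {
    "Reported": {"Opened", "Rejected", "Deferred"},
    "Opened":   {"Assigned", "Deferred"},
    "Deferred": {"Assigned"},
    "Assigned": {"Fixed"},
    "Fixed":    {"Closed", "Reopened"},   # retest passes -> Closed, else Reopened
    "Reopened": {"Assigned", "Deferred"},
    "Rejected": set(),                    # terminal states
    "Closed":   set(),
}

def move(current, new):
    """Apply a status change, rejecting transitions the lifecycle does not allow."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current} -> {new}")
    return new

# Walk one incident through a reopen-and-fix cycle.
status = "Reported"
for step in ["Opened", "Assigned", "Fixed", "Reopened", "Assigned", "Fixed", "Closed"]:
    status = move(status, step)
print(status)
```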

Let us now check your understanding of the topics covered in this lesson.


Summary

Here is a quick recap of what we have learned in this lesson:

  • The quality and effectiveness of testing increase with an increase in the degree of independence.

  • The factors to be considered while building a test plan are test policy of the organization, the scope of testing, and objectives of testing.

  • Test progress monitoring is a test management task that periodically monitors the status of a test project.

  • Configuration Management is a disciplined approach to the management of software and the associated design, development, testing, operations, and maintenance activities.

  • Risks related to the management and control of a project that impact the project's capability to deliver its objectives are known as project risks.

  • Incident management is the process to ensure that incidents are tracked through all the steps of the incident lifecycle.

Conclusion

This concludes the fifth lesson of the course, ‘Test Management.’ The next lesson is, ‘Tools Support for Testing.’
