Roles and Activities > Tester > Plan Test

Purpose
  • To collect and organize test-planning information.
  • To create the test plan.
Steps
Input Artifacts:
  • Supplementary Specifications
  • Design Model
  • Implementation Model
  • Use-Case Model
Resulting Artifacts:
  • Test Plan

Role: Tester

Workflow Details:
  • Test
    • Plan and Design Test

Identify Requirements for Test

Purpose
  • To identify what is being tested and indicate the scope of testing.

Identifying the requirements for test is the start of the test planning activity. The requirements for test identify what is being tested, and the scope and purpose of the test effort. Requirements for test are also used to determine the overall test effort (for scheduling, test design, and so on) and are used as the basis for test coverage.

Items that are to be identified as requirements for test must be verifiable. That is, they must have an observable, measurable outcome. A requirement that is not verifiable is not a requirement for test.

The following is performed to identify requirements for test:

Review all materials

The requirements for test may be identified from many sources, so as a first step, review all the materials available for the application or system to be developed. The most common sources of requirements for test include existing requirement lists, use cases, use-case models, use-case realizations, supplemental specifications, design requirements, business cases, interviews with end-users, and reviews of existing systems.

Indicate the requirements for test

Independent of the source of the requirement for test, there must be some form of identification that a requirement is going to be the target of a test. This results in the generation of a hierarchy of requirements for test. This hierarchy may be based upon an existing hierarchy, or newly generated. The hierarchy is a logical grouping of the requirements for test. Common methods include grouping the items by use-case, business case, type of test (functional, performance, etc.) or a combination of these.

The output of this step is a report (the hierarchy) identifying those requirements that will be the target of test.
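The grouping described above can be sketched in code. The following is a minimal illustration (not part of the process description itself) of building a two-level hierarchy of requirements for test, grouped first by use case and then by type of test; the requirement records and field names are hypothetical.

```python
# Illustrative sketch: organizing requirements for test into a hierarchy.
# The requirement records and their fields are assumptions for this example.
from collections import defaultdict

requirements = [
    {"id": "RT-1", "use_case": "Manage Customer Account", "test_type": "functional"},
    {"id": "RT-2", "use_case": "Manage Customer Account", "test_type": "performance"},
    {"id": "RT-3", "use_case": "Process Payment", "test_type": "functional"},
]

# Group by use case, then by type of test (functional, performance, etc.).
hierarchy = defaultdict(lambda: defaultdict(list))
for req in requirements:
    hierarchy[req["use_case"]][req["test_type"]].append(req["id"])

# Print the hierarchy report: each use case with its requirements per test type.
for use_case, by_type in hierarchy.items():
    print(use_case)
    for test_type, ids in by_type.items():
        print(f"  {test_type}: {', '.join(ids)}")
```

The same structure works for any other grouping (by business case, by a combination of groupings, and so on); only the keys change.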

See Guidelines: Test Plan for additional information on identifying requirements for test.

Assess Risk

Purpose
  • To maximize test effectiveness and prioritize test efforts.
  • To establish an acceptable test sequence.

To assess risk, perform the following:

Identify and justify a risk factor for test

The test effort requires balancing resource constraints with risks. The most important requirements for test are those that reflect the highest risks.

Risk can be viewed from several perspectives:

  • Effect - the impact or consequences of a use case (requirement, etc.) failing
  • Cause - identifying an undesirable outcome and determining which use cases or requirements, should they fail, would result in that undesirable outcome
  • Likelihood - the probability of a use case or requirement failing.

Each requirement for test should be reviewed and a risk factor identified (such as high, medium, or low). Assessing the risk from two or more perspectives may sometimes yield different risk factors; in these situations, use the highest risk factor value. A statement explaining why a specific risk factor was selected for a given requirement for test should also be given.

Identify and justify an operational profile factor for test

Most applications have functions that are used often and others that are used infrequently. Therefore, to test an application acceptably, one must ensure that not only the highest-risk requirements for test are tested, but also those that are frequently used (as these often have the highest end-user visibility).

Identify an operational profile factor for each requirement for test, along with a statement justifying why that factor value was chosen. This is accomplished by reviewing the business case(s) or by conducting interviews with end-users and their managers. Another method is to observe end-users as they interact with the system, or to use software monitors / recorders to capture end-user interaction with the system for analysis.

Identify and justify a test priority factor

Upon identifying and justifying the test risk and operational profile for each requirement for test, a test priority factor should be identified and justified. The test priority factor identifies the relative importance of the test requirement, and the order or sequence in which it will be tested.

The test priority factor is identified by using the risk factors, operational profiles, contractual obligations, other constraints, or a combination of all of these. It is important to consider all these factors when identifying the test priority to ensure that the testing is appropriate and focused.

See Guidelines: Test Plan for additional information on assessing risk and establishing test priorities.

Develop Test Strategy

Purpose
  • Identifies and communicates the test techniques and tools
  • Identifies and communicates the evaluation methods for determining product quality and test completion

The purpose of the test strategy is to communicate to everyone how you will approach the testing and what measures you will use to determine the completion and success of testing. The strategy does not have to be detailed, but it should give the reader an indication of how you will test.

Developing a test strategy includes:

Identify and describe the approach to test

The approach to test is a statement (or statements) describing how the testing will be implemented. It should state or refer to what will be tested, the major actions taken while testing, and how the results will be verified. The statements should provide enough information for the reader to understand what will be tested, even though the depth of testing is not yet known, as in the statements below:

  • For each use case, test cases will be identified and executed, including valid and invalid input data.
  • Test procedures will be designed and developed for each use case.
  • Test procedures will be implemented to simulate managing customer accounts over a period of three months. Test procedures will include adding, modifying, and deleting accounts and customers.
  • Test procedures will be implemented and test scripts executed by 1500 virtual users, each executing functions A, B, and C and each using different input data.

Identify the criteria for test

The criteria for test are objective statements indicating the value(s) used to determine / identify when testing is complete, and the quality of the application-under-test. The test criteria may be a series of statements or a reference to another document (such as a process guide or test standards). Test criteria should identify:

  • what is being tested (the specific target-of-test)
  • how the measurement is being made
  • what criteria are being used to evaluate the measurement

Sample test criteria:

For each high priority use case:

  • All planned test cases and test procedures have been executed.
  • All identified defects have been addressed.
  • All planned test cases and test procedures have been re-executed and no new defects identified.

In the above example, what is being tested is described by the statement "for each high priority use case." How the measurement is being made is described by the statement "all planned test cases and test procedures have been executed." The criteria used for evaluation is included in the last two statements "all identified defects have been addressed" and "all planned test cases and test procedures have been re-executed and no new defects identified."
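The sample criteria above are objective enough to evaluate mechanically. The sketch below is a hypothetical illustration of that: the record structure for a use case's test status is invented for the example, but the three checks mirror the three criteria statements.

```python
# Illustrative sketch: evaluating the sample test criteria for one
# high-priority use case. The record structure is an assumption.
def criteria_met(use_case: dict) -> bool:
    # All planned test cases and test procedures have been executed.
    all_executed = all(tc["executed"] for tc in use_case["test_cases"])
    # All identified defects have been addressed.
    all_defects_addressed = all(d["addressed"] for d in use_case["defects"])
    # Re-execution complete, and no new defects identified.
    clean_rerun = use_case["reexecuted"] and use_case["new_defects"] == 0
    return all_executed and all_defects_addressed and clean_rerun

uc = {
    "test_cases": [{"executed": True}, {"executed": True}],
    "defects": [{"addressed": True}],
    "reexecuted": True,
    "new_defects": 0,
}
print(criteria_met(uc))  # True
```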

Identify any special considerations for test

Any special considerations for testing or dependencies should be listed, such as those shown below:

  • Test databases are to be restored by Operations resources
  • Testing (performance) must occur after hours (to not be influenced by normal, daily, operations) and must be completed by 5:00 a.m.
  • Legacy system synchronization must be made available (or simulated)

See Guidelines: Test Plan for additional information on developing test strategies.

Identify Resources

Purpose
  • Identify the resources necessary to test, including human resources (skills, knowledge, availability), hardware, software, tools, etc.

Once what is being tested, and how, has been identified, the next step is to identify who will do the testing and what is needed to support the test activities. Identifying resource requirements includes determining what resources are needed, including the following:

  • Human resources (number of persons and skills)
  • Test environment (includes hardware and software)
  • Tools
  • Data

Identify human resource needs

For most test efforts, you'll need resources who can do the following:

  • Manage and plan the testing
  • Design the tests and data
  • Implement the tests and data
  • Execute testing and evaluate the results
  • Manage and maintain the test systems

Identify non-human resource needs

Test environment (includes hardware and software)

Two separate physical environments are recommended (although not strictly necessary):

  • the implementation environment, where the test management, design, and implementation activities occur, and
  • the execution environment, a separate execution system (usually a clone of the production system) - where all testing is performed.

Software will also be necessary for testing. The minimum software needed is the application-under-test, the client O/S, the network, and the server O/S. Additional software may be necessary to accurately simulate or duplicate the production environment; this software might include:

  • interfaces to other systems, such as legacy systems
  • other desktop applications, such as Microsoft Office, Lotus Notes, etc.
  • other applications that reside on, or are executed from, the file servers and network. These are applications that, while not required by the application-under-test, reside in the environment in which the application-under-test executes.

Tools

It should be stated what software tools (for testing) will be used, by whom, and what information or benefit will be gained by the use of each tool.

Data

Software testing relies heavily upon the use of data as input (creating or supporting a test condition) and as output (to be compared to an expected result). Strategies should be identified for the following test data related issues:

  • collection or generation of the data used for testing (input and output)
  • database architecture (isolation from outside influences and methods to return data to its initial state upon completion of testing).
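The "return data to its initial state" strategy above can be illustrated with a small sketch. This is a toy in-memory stand-in (the class and its data are invented for the example); a real environment would typically restore a database backup or re-run data-load scripts instead.

```python
# Illustrative sketch of returning test data to its initial state after a
# test run: snapshot a baseline up front, restore it when testing completes.
# The TestDataSet class and its rows are assumptions for this example.
import copy

class TestDataSet:
    def __init__(self, rows):
        # Keep an isolated copy of the initial state (the baseline).
        self._baseline = copy.deepcopy(rows)
        self.rows = rows

    def restore(self):
        """Return the data to its initial state after a test run."""
        self.rows = copy.deepcopy(self._baseline)

data = TestDataSet([{"account": "A-1", "balance": 100}])
data.rows[0]["balance"] = 0          # a test run mutates the data
data.restore()                       # return data to its initial state
print(data.rows[0]["balance"])       # 100
```

The deep copy matters: a shallow copy would let the test run mutate the baseline itself, defeating the isolation the strategy calls for.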

Create Schedule

Purpose
  • Identify and communicate test effort, schedule, and milestones

Creating a schedule includes:

Estimate test effort

The following assumptions should be considered when estimating the test effort:

  • productivity and skill / knowledge level of the human resources working on the project (such as their ability to use test tools or program)
  • parameters about the application to be built (such as number of windows, components, data entities and relationships, and the percent of re-use)
  • test coverage (the acceptable depth to which testing will be implemented and executed). Stating that each use case or requirement was tested means little if only one test case was implemented and executed per use case or requirement; often many test cases are required to test a use case or requirement acceptably.

Test estimation should also consider partitioning the effort differently within each phase of the testing lifecycle, as the weight of effort for some types of testing varies during the lifecycle. For example, in performance testing the test execution activity carries a major share of the work estimate, due to the effort required to set up the test system and execute tests in a complex environment.

This partitioning is important for scheduling purposes. The test design and test implementation efforts require a single schedule period, with some small increment for refinements. The test execution effort, in contrast, is repeated for each application build, and must be scheduled accordingly.
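This partitioning can be expressed as back-of-the-envelope arithmetic: design and implementation are roughly one-time costs (plus a small refinement increment), while execution repeats for every application build. The sketch below uses hypothetical numbers; the 10% refinement fraction is an assumption.

```python
# Illustrative sketch of partitioning test effort for scheduling:
# one-time design/implementation effort plus per-build execution effort.
# All numbers and the refinement fraction are assumptions for the example.
def total_test_effort(design_days, implement_days, execute_days_per_build,
                      builds, refinement_fraction=0.1):
    # Design and implementation happen once, with a small increment for refinements.
    one_time = (design_days + implement_days) * (1 + refinement_fraction)
    # Execution is repeated for each application build.
    repeated = execute_days_per_build * builds
    return one_time + repeated

print(round(total_test_effort(7, 12, 6, builds=4), 1))  # 44.9
```

The point the arithmetic makes: adding one more build adds a full execution cycle to the schedule, while the design and implementation effort barely moves.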

Testing effort needs to include time for regression test. The following table shows how regression test cases can accumulate over several iterations for the different testing stages.

Iterations vs. test stages:

  • First iteration:
    • System: test of this iteration's test cases that target the system
    • Integration: test of this iteration's test cases that target builds
    • Unit: test of this iteration's test cases that target units
  • Following iterations (System, Integration, and Unit alike): test of this iteration's test cases, as well as test cases from previous iterations that have been designed for regression testing.
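The accumulation pattern above can be made concrete with a small sketch: at each iteration a stage runs its new test cases plus all regression-designated cases from previous iterations. The case counts and the fraction of cases kept for regression are assumptions for the example.

```python
# Illustrative sketch: how regression test cases accumulate across
# iterations for one testing stage. Counts and the regression fraction
# are hypothetical.
def cases_to_run(new_per_iteration, regression_fraction=0.5):
    """Yield, per iteration, how many test cases a stage executes."""
    accumulated_regression = 0
    for new in new_per_iteration:
        # This iteration's new cases plus all accumulated regression cases.
        yield new + accumulated_regression
        # A fraction of this iteration's cases is kept for regression testing.
        accumulated_regression += int(new * regression_fraction)

print(list(cases_to_run([40, 30, 20, 10])))  # [40, 50, 55, 55]
```

Note how the per-iteration workload stays high even as the number of new test cases shrinks; this is why the schedule must reserve execution time for regression testing in every iteration.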

Generate test schedule

A test project schedule can be built from the work estimates and resource assignments. In the iterative development environment, a separate test project schedule is needed for each iteration. All test activities are repeated in every iteration.

In a particular iteration, the test planning and test design activities address the new functions in the software. The test implementation activity involves creating new test cases for new functions and modifying test cases for functions that have changed. The test execution and evaluation steps validate the new functions and perform regression tests on existing functions.

Early iterations introduce a larger number of new functions and new tests. As the integration process continues, the number of new tests diminishes, and a growing number of regression tests need to be executed to validate the accumulated functions. Consequently, the early iterations require more work on test planning and design, while the later iterations are weighted towards test execution and evaluation.


It is not possible to provide detailed schedules for each iteration. It is not known, in general, how many iterations there will be, or in which iteration certain test criteria will be met.

Using the estimated effort and the assigned resources, create a schedule for your testing effort.

The example table below summarizes all testing activities. The work estimates are shown as guidelines for the relative amount of work for each task.

When you develop the schedule, make sure it is realistic. Few things are as demoralizing as a schedule so ambitious that no one has the time or energy to follow it; in the worst case, no tests are successfully performed at all.

Task                                               Relative Work
Total Effort                                       38d
  Test Planning                                    7d
    Identify test project                          1d
    Define testing strategy                        1d
    Estimate work                                  1d
    Identify resources                             1d
    Schedule testing activities                    1d
    Document test plan                             2d
  Specify Test Cases                               5d
    Determine test cases                           5d
  Design Test                                      7d
    Analyze test requirements                      2d
    Specify test procedures                        3d
    Specify test cases                             1d
    Review test requirement coverage               1d
  Implement Test                                   12d
    Establish test implementation environment      1d
    Record and play back prototype scripts         1d
    Develop test procedures                        5d
    Test and debug test procedures                 1d
    Modify test procedures                         2d
    Establish external data sets                   1d
    Re-test and debug test procedures              1d
  Execute System Test                              6d
    Set up a test system                           1d
    Execute tests                                  2d
    Verify expected results                        1d
    Investigate unexpected results                 1d
    Log defects                                    1d
  Evaluate Test                                    1d
    Review test logs                               0.25d
    Evaluate coverage of test cases                0.25d
    Evaluate defects                               0.25d
    Determine if test completion criteria are met  0.25d
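A quick consistency check on such a schedule is to roll the task estimates up and confirm they match the stated phase totals and the overall effort. The sketch below does this for the example table (the estimates are taken directly from it):

```python
# Sketch: rolling up the example schedule's task estimates (in days) and
# checking them against the stated phase totals and the 38d total effort.
phases = {
    "Test Planning": [1, 1, 1, 1, 1, 2],        # stated total: 7d
    "Specify Test Cases": [5],                  # stated total: 5d
    "Design Test": [2, 3, 1, 1],                # stated total: 7d
    "Implement Test": [1, 1, 5, 1, 2, 1, 1],    # stated total: 12d
    "Execute System Test": [1, 2, 1, 1, 1],     # stated total: 6d
    "Evaluate Test": [0.25, 0.25, 0.25, 0.25],  # stated total: 1d
}

phase_totals = {name: sum(tasks) for name, tasks in phases.items()}
total_effort = sum(phase_totals.values())

print(phase_totals)
print(total_effort)  # 38.0
```

The same rollup, done in any spreadsheet or scheduling tool, catches arithmetic drift when task estimates are revised.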

Generate Test Plan

Purpose
  • To organize and communicate to others the test-planning information.

To generate a test plan, perform the following:

Review / refine existing materials

Prior to generating the test plan, a review of all the existing project information should be done to ensure the test plan contains the most current and accurate information. If necessary, test related information (requirements for test, test strategies, resources, etc.) should be revised to reflect any changes.

Identify test deliverables

The purpose of the test deliverables section is to identify and define how the test artifacts will be created, maintained, and made available to others. These artifacts include:

  • Test Cases
  • Change Requests

Generate the test plan

The last step in the Plan Test activity is to generate the test plan. This is accomplished by assembling all the test information gathered and generated into a single report.

The test plan should be distributed to at least the following:

  • all test roles
  • developer representative
  • stakeholder representative
  • client representative
  • end-user representative
© 2014 Polytechnique Montreal