Identifying accepters, using acceptance criteria and other information providers
Usually the client is not the only stakeholder who has to accept the system; there are generally others, and it is important to clarify who these accepting parties are. This is done in consultation with the client. In practice, this gives the test manager an opportunity to talk with stakeholders at a high level in the organisation (steering group members) and to gauge their opinions and expectations. There is often no other opportunity for this, unless the test manager is in the (unfortunately) rare position of participating regularly in the steering group discussions.

It is important to establish which accepters are to be provided with information, directly or indirectly, during the project by means of test reports. It should also be clear what requirements or acceptance criteria each accepter proposes. These are the minimum quality requirements that the product must meet to be satisfactory to that accepter. For the sake of clarity: gathering acceptance criteria is not the responsibility of the testers, but it is input to the setup of the test process. Acceptance criteria can be very diverse. Some examples are:
- Quality criteria concerning the product and the process that produces it, e.g. the number of defects that may remain open
- Criteria as regards the environment, e.g. the infrastructure should be installed or the users should have followed a training course
- Criteria in the form of (the detailing of) requirements of the product, e.g. 'an order should be processed within X seconds'.
Not all the acceptance criteria are relevant to testing. The first example has a considerable overlap with the exit criteria for the test process. The second example is usually less important to testing, and the third example is a form of test basis.
In more detail
Acceptance criteria pitfall
This latter use of acceptance criteria carries a danger. In practice, the following sometimes happens: after the requirements have been established and frozen, users discover that they have additional requirements and formulate these as acceptance criteria. In this way, acceptance criteria become a 'back door' for introducing yet more requirements. This is not a sound way of working. The only correct route is to submit a change proposal to a Change Control Board. |
Besides accepters, various other parties/individuals can supply the test process with relevant information. Bear in mind, for example:
Exit criteria
Exit criteria can relate, for example, to the number of issues in a particular risk category that may still be open, the way in which a certain risk must be covered (e.g. all system parts in the highest risk category have been tested using a formal test design technique), or the depth to which the requirements should have been tested. Exit criteria are normally imposed on a test level from within the master test plan. If that is not the case, or if there is no master test plan, the test manager should agree the criteria with the client.
The box below shows a number of concrete examples of exit criteria:
System X may only be transferred to the AT when the following conditions have been met:
System X may be transferred to the AT when it can be shown in writing that all the risks that were allocated to the ST in accordance with document Y have been tested in the agreed depth and by the agreed test method. |
An important point of attention with the above criteria is that all stakeholders should agree on clear definitions of what constitutes a particular severity category and what is meant by 'agreed depth of testing and test method'. In practice, a lack of clarity here can lead to heated discussions.
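Once the severity categories and their limits have been agreed, checking exit criteria of the "number of open defects per category" kind is mechanical. The sketch below assumes hypothetical severity labels and thresholds; it returns the named violations so a test report can state exactly why a transfer is blocked rather than just "no go".

```python
# Hypothetical agreed limits: open defects allowed per severity.
# The labels and numbers are illustrative, not from the text.
EXIT_LIMITS = {"critical": 0, "major": 2, "minor": 10}

def exit_criteria_met(open_defects):
    """open_defects maps a severity label to the count still open.
    Returns (met, violations) so the report can name each breach."""
    violations = [
        f"{sev}: {open_defects.get(sev, 0)} open, limit {limit}"
        for sev, limit in EXIT_LIMITS.items()
        if open_defects.get(sev, 0) > limit
    ]
    return (not violations, violations)

met, why = exit_criteria_met({"critical": 1, "major": 2, "minor": 4})
# One critical defect is still open against a limit of 0, so the
# exit criteria are not met and the transfer is blocked.
```

The hard part, as the text notes, is not this check but getting all stakeholders to agree beforehand on what "critical" means.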
Similarities and differences between acceptance and exit criteria
'Acceptance criteria' is sometimes used as another term for exit criteria. Besides the fact that acceptance criteria may be a broader notion than exit criteria, another difference is that acceptance criteria apply at the end, i.e. at acceptance, whereas exit criteria apply at the transfer from one test level to another, or to production. The figure below illustrates this.
In more detail
Example of exit/acceptance criteria
In this example from practice, exit and acceptance criteria overlap to a large extent. The test approach and acceptance criteria are tuned with the stakeholders (see section x.y). Two levels of acceptance can be distinguished:
For level 1 acceptance, the test must have been executed according to the agreed test strategy, and the following guidelines must be respected for any defects found. The products of XYZ can be taken into production (are accepted) if:
For level 2 acceptance, the test team (responsible for the relevant test level) is discharged when the aims defined globally in section y.z, and further specified in any detailed test plans, are achieved. This is also a go/no-go decision on starting execution of the following test level. |
Suspend and resume criteria
In some tests, particularly formally organised ones, so-called suspend and resume criteria may be defined in the plan. These criteria indicate under which circumstances testing is temporarily suspended and subsequently resumed. Examples of suspend criteria are that testing has to stop when a particular infrastructure component is not available, or when a test-blocking defect is found. A resume criterion may be that, once the suspend condition is lifted, the testing of the relevant system part/function/component must be repeated entirely from scratch.
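The suspend/resume mechanism described above can be sketched as follows. The runner and its outcomes are hypothetical assumptions for illustration: execution stops as soon as a blocking defect is found (suspend criterion), and resuming means retesting the whole component from scratch, discarding earlier results (resume criterion).

```python
def run_component(test_cases, execute):
    """execute(case) returns "pass", "fail", or "blocking".
    Returns ("suspended", partial results) or ("done", results)."""
    results = {}
    for case in test_cases:
        outcome = execute(case)
        results[case] = outcome
        if outcome == "blocking":
            # Suspend criterion: a test-blocking defect was found.
            return "suspended", results
    return "done", results

def resume_component(test_cases, execute):
    # Resume criterion from the text: once the blocking condition is
    # lifted, the component is tested entirely anew, so earlier
    # (partial) results are deliberately not reused.
    return run_component(test_cases, execute)
```

Discarding the partial results on resume is deliberate: a blocking defect casts doubt on everything executed before it, so the earlier passes cannot simply be trusted.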
Acceptance criteria in a High Performance delivery model: