Use Case Test (UCT)


The use case test is a technique that is applied in particular to the testing of the quality characteristics of Suitability, Effectivity and User-friendliness. The test basis contains at least the use cases and preferably also the associated use-case diagram. There are various definitions of the concept of use case in circulation. In this section, the following definition is used:


A use case contains a typical interaction between a user and a system. The use case describes a complete piece of functionality that a system offers to a user and that delivers an observable result for the user.

Besides various use case definitions, there are also various types of use case descriptions. The type can vary from organisation to organisation and even from project to project. The variations relate to the abstraction level, the scope and the degree of detail with which a use case is described. Since a use case can be described in various ways, it makes sense, before applying the use case test, to check whether the use case description employed contains sufficient information to be used for the use case test. The simplest way to perform this check is with a checklist (see the tip "Use cases checklist").


Use cases checklist

The exact content of a checklist for determining whether a use case is usable for the use case test depends on the way in which the use case is described. Below are some checks that can be used as a basis for creating your own checklist:

  • Is the (standard for the project/organisation) use case template filled in completely?
  • Is the use case diagram available?
  • Is the use case a separate task in itself?
  • Is the aim of the use case clear?
  • Is it clear for which actors the use case is intended?
  • Does the use case relate to the functionality (and not to the screens sequence)?
  • Have all the foreseen alternative possibilities been described?
  • Have all the known exceptions been described?
  • Does the use case contain a complete step-by-step plan?
  • Has each step in the scenario(s) been clearly, unambiguously and completely described?
  • Are all the actors and steps cited in the use case relevant to the execution of the task?
  • Are the described steps executable?
  • Is the result of the steps verifiable?
  • Do the pre- and post conditions correspond with the use case?
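The checks above can be recorded so that any gaps are visible before test design starts. Below is a minimal sketch of this idea; the check texts and function name are illustrative, not part of any standard, and a real checklist would carry the full set of questions:

```python
# Illustrative sketch: record checklist outcomes for a use case and
# decide whether it contains enough information for the use case test.

CHECKS = [
    "Use case template filled in completely",
    "Use case diagram available",
    "All alternative scenarios described",
    "All known exceptions described",
    "Pre- and postconditions correspond with the use case",
]

def usable_for_uct(results):
    """results maps each check to True/False; the use case is usable
    for the use case test only when every check passes."""
    failed = [check for check in CHECKS if not results.get(check, False)]
    return (len(failed) == 0, failed)

# A use case that passes every check is usable; any failed check is reported.
ok, failed = usable_for_uct({check: True for check in CHECKS})
assert ok and failed == []
```

The list of failed checks tells the tester exactly which information must be requested from other parts of the test basis before test design can continue.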

The use case test focuses on the coverage of the interactions between the user and the system. The basic technique used here is:

  • Checklist

Variations on the use case test can be created by applying other basic techniques, such as:

  • Paths
  • Decision points: modified condition/decision coverage
  • Pairwise testing

The basic technique "checklist" is almost always usable. The effectiveness of the alternative basic techniques is strongly dependent on the content of the use case descriptions. In this section, the "checklist" basic technique is employed in an example. For an explanation of the other techniques, see "Coverage types and basic techniques".

In more detail

Use case diagram

A use case describes a (part of the) functionality. A use case diagram indicates the system boundaries, reflects possible mutual relationships between use cases, and especially shows which relationships there are between the actors (users) and the use cases.

A use case diagram is relatively simple. The three most important symbols are:

  • A stick figure, to indicate an actor
  • An oval, to indicate a use case
  • A line between actor and use case, or between use cases (see explanation below).

Use cases can have two types of connections: "extend" or "include":

  • Extend
    • An extend relationship is used when one use case largely corresponds with another use case, but does something extra. This 'something extra' is removed from the original use case and placed into a separate (extending) use case
  • Include
    • If a particular behaviour occurs in several use cases, it is usually modelled as a separate use case rather than repeated in each use case. In this way, a use case is created that is used (include relationship) by other use cases. A simple example is a use case that looks up the marital status of an individual. This can be used by both the use case "Determine tax rate" and the use case "Amend marital status", and is then modelled as the use case "Determine marital status".

The correspondence between extend and include is that, in both cases, similar behaviour is removed to avoid repetition. The difference between them is that an include relationship, in contrast to the extend relationship, often does not involve an actor. Furthermore, the include use case is always executed, whereas the extend use case is executed optionally.
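The execution difference between include and extend can be made concrete with a small sketch, using the use case names from the example below; the functions are illustrative stand-ins for the modelled use cases:

```python
# Illustrative sketch of include vs. extend semantics:
# an included use case is always executed; an extending use case
# is executed only when its condition holds.

def login():
    """Included use case: always executed."""
    return ["LoginTDTAssessment"]

def give_interim_score():
    """Extending use case: optional behaviour."""
    return ["GiveInterimScore"]

def start_assessment(wants_interim_score):
    steps = ["StartAssessment"]
    steps += login()                 # include relationship: unconditional
    if wants_interim_score:          # extend relationship: conditional
        steps += give_interim_score()
    return steps

# Login always appears in the flow; the interim score only on request.
assert "LoginTDTAssessment" in start_assessment(False)
assert "GiveInterimScore" not in start_assessment(False)
assert "GiveInterimScore" in start_assessment(True)
```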

For more information on use cases/models, refer to the official Unified Modelling Language (UML) documentation of the Object Management Group (www.omg.org).

Points of focus in the steps

In this section, the use case test is explained step by step, taking the generic steps as a starting point. An example is also provided, showing at each step how the technique works. A use case diagram is set up in the example according to the above description. Since no uniform agreement exists on how to describe a use case, only the relevant use case components have been used in the example.


The figure below shows a use case diagram in which the student ("actor") can start an application ("TestDesignTechniqueAssessment"). After going through the logging-in procedure, the student selects a particular test design technique on which he wishes to be assessed.

During the assessment, there is the possibility of giving the student an explanation when a wrong answer is given. There is also a possibility of providing an interim score relating to the number of correct answers given. After a certain number of questions have been posed, the application stops. The tutor ("actor") can follow the student's progress and results.

Use case diagram "TestDesignTechniqueAssessment"

As an example, some use case descriptions are provided:

Name: StartAssessment
Actor: Student
Preconditions: Student has followed test design technique training
Primary scenario:
  1. The student starts the "TestDesignTechniqueAssessment" program
  2. Include <LoginTDTAssessment>
  3. The student chooses from the available test design techniques
  4. The student may choose the option "Explain"
  5. The student may choose the option "Provide interim score"
  6. The program is ready for the first question
Postconditions: The student can start the selected test
Name: StartUsecasetest
Actor: Student
Preconditions: Student has followed test design technique training
Primary scenario:
  1. The use case starts when the student presses the question button for the first time
  2. While fewer than 10 questions have been set
    • 2.1 The computer will generate a question concerning the use case test
    • 2.2 The student will read the question, think of the answer and type it in
    • 2.3 After checking, the student presses "Enter"
    • 2.4 The computer reads the answer
    • 2.5 If the answer is processable, then
      • 2.5.1 The assessment "Right" is given if the answer corresponds with that of the computer
      • 2.5.2 The assessment "Wrong" is given if the answer does not correspond with that of the computer
    • 2.6 Else, the student receives the message: "Answer is not processable and therefore counts as wrong"
    • <GiveInterimScore>
    • 2.7 The student thinks about a wrong answer
    • <Explain>
    • 2.8 The student presses the question button
  3. The student receives an assessment in the form of a score after 10 questions
  4. The program stops
Exceptions:
  1. Student presses the question button, while the previous question has not been answered
  2. Student types something in the input field, while the question has already been answered
  3. Student stops before 10 questions have been answered
Postconditions: The student is given a score for the test
Name: GiveInterimScore
Actor: Student
Preconditions: At the start of the assessment, the student has opted to receive the interim score
Primary scenario:
  • A right answer increases the number of right answers by one. An interim score is generated based on this number
  • The interim score is provided in a status line
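The primary scenario of "StartUsecasetest" is essentially a loop over at most 10 questions with a processability check per answer. The sketch below makes that control flow runnable; the question/answer interaction is stubbed out, and the data representation (triples of given answer, expected answer, processability) is an assumption made for illustration:

```python
# Runnable sketch of the "StartUsecasetest" primary scenario.
# Each answer is a (given, expected, processable) triple.

def run_assessment(answers, interim_score=False):
    """Returns (final score, messages shown to the student)."""
    right = 0
    messages = []
    for given, expected, processable in answers[:10]:  # at most 10 questions
        if not processable:                            # step 2.6
            messages.append("Answer is not processable "
                            "and therefore counts as wrong")
        elif given == expected:                        # step 2.5.1
            right += 1
            messages.append("Right")
        else:                                          # step 2.5.2
            messages.append("Wrong")
        if interim_score:                              # <GiveInterimScore>
            messages.append(f"Interim score: {right}")
    return right, messages

score, log = run_assessment([("a", "a", True),
                             ("b", "c", True),
                             ("?", "a", False)])
assert score == 1
assert log == ["Right", "Wrong",
               "Answer is not processable and therefore counts as wrong"]
```

Stubbing the scenario out like this also previews the variables (given answer, processable answer, interim score) that the test situations in step 1 below are derived from.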


In more detail

Besides the above-mentioned components (name, actor, preconditions, primary scenario, exceptions and postconditions), other components that are often used in use-case templates are:

  • Scope (does it for example concern a system or a subsystem)
  • Level (does it for example concern a primary task or a subfunction)
  • Stakeholder (concerns the party/parties involved: e.g. student, tutor, employer and/or customer)
  • Trigger (the 'event' that starts the use case; often also shown as the first step in the use-case scenario)
  • Priority (degree of importance of the use case)
  • Response time (the time available for executing the use case)
  • Frequency (number of times that the use case is executed)
  • Secondary actors (other actors involved in the use case)

The use of the above-mentioned use case components is not imperative. Depending on the nature of the use case, components are added or left out. Naturally, basic components such as "Name", "Actor", "Preconditions", "Primary scenario" and "Postconditions" are almost always present in a use case description.

1 - Identifying test situations

Below, it is set out step by step how the use case test is applied in this example:

How easily test situations can be derived from a use case depends largely on the level of detail with which the use case is described. When there is little detail, it may be that only one test situation can be described for a use case (e.g. the purpose of the use case). In that case, no logical test cases can be created from these test situations, since not enough information is present. If the use case contains more detail, detailed test situations can of course be distinguished, and these can immediately be seen as the logical test cases. In all cases, the recognised test situations are included in a checklist, so that at the next stage it can be checked off whether at least one logical test case has been created for each test situation.

Since no uniform description exists for use cases, it is not possible to provide a formal way of deriving test situations. Depending on their knowledge and expertise, one tester will find this easier than another (see also the tip).


Depending on the way in which a use case is described, carrying out the following steps can help to get thoughts in order for identifying and describing test situations:

  1. Look for variables that result in a reaction of the system or the environment.
    Examples of variables are input data, output data, environment variables that force the actor into certain behaviour, status of the system, etc.
  2. Determine the domain of the variables.
  3. Determine which variables have a relationship with each other.
  4. Combine related variables into a test situation and describe this.
    A test situation contains at least the relationship between the variables, certain value(s) from the domain, the starting situation and a specific described result.

The result of the first three steps for the use case "StartUsecasetest" may look as follows:

Variable              Domain           Relationship with
Give interim score    {Y, N}           Given answer
Explain               {Y, N}           Given answer
Given answer          {Right, Wrong}   Processable answer, Give interim score, Explain
Processable answer    {Y, N}           Given answer
No. of answers given  {0 - 10}         Program

With the aid of the above table and the use case description, test situations can be identified and described (step 4).
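Enumerating the combinations of the variables from the table gives the raw material for step 4. The sketch below shows this for four of the variables; it uses an exhaustive Cartesian product for clarity, whereas in practice a basic technique such as pairwise testing would be applied to prune the set:

```python
# Sketch: enumerate candidate combinations of the variables and domains
# from the table above, as raw material for describing test situations.

from itertools import product

variables = {
    "Give interim score": ["Y", "N"],
    "Explain":            ["Y", "N"],
    "Given answer":       ["Right", "Wrong"],
    "Processable answer": ["Y", "N"],
}

names = list(variables)
combinations = [dict(zip(names, values))
                for values in product(*variables.values())]

# Four binary variables give 2 * 2 * 2 * 2 = 16 candidate combinations.
assert len(combinations) == 16
```

Not every combination is a sensible test situation (e.g. "Given answer = Right" is meaningless when "Processable answer = N"), which is exactly why step 3, determining the relationships between variables, precedes step 4.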



Suppose that the use cases from the example "TestDesignTechniqueAssessment" were to contain almost no details; a checklist with test situations (these are not logical test cases) might look like this:

Use case "StartAssessment"
1a. The student should be able to log on with his 'own' settings on the assessment application.
1b. The student should be able to select a test for a test design technique.
Use case "StartUsecasetest"  
2a. The student should be able to take the "use case test" test.
2b. The student should have the option of obtaining an explanation with a wrong answer.
2c. The student should have the option of getting an interim score.
Use case "StartCheck"  
3. The tutor should be able to log in with his 'own' settings to the test (check) application.
Use case "ViewResult"  
4. The tutor should be able to follow a student's progress and results.

However, as the example does contain more detail, the test situations description does not have to be restricted to a checklist. In the table below, a few example test situations for the use case "StartUsecasetest" are described. These test situations are at once the logical test cases. For the description of a test situation, a layout that is similar to the use case description has been chosen.

Use case name: StartUsecasetest

Test case ID: 1
Test case purpose: Check whether the computer generates a question the first time the question button is pressed
Priority: Medium
Actor: Student
Precondition: Select "Usecasetest" at the start of the test
Trigger: Press the question button
Postconditions: The first question about the "Usecasetest" is shown.

Test case ID: 2
Test case purpose: Check whether an interim score is given with a right answer.
Priority: Low
Actor: Student
Precondition: At the start of the test, select the option "Give interim score". The given answer is processable.
Trigger: Type the right answer and press "Enter".
Postconditions: The message "Right" is shown and an interim score is shown.

Test case ID: 3
Test case purpose: Check that no explanation is given with a wrong answer.
Priority: High
Actor: Student
Precondition: At the start of the test, do not select the option "Explain". The given answer is processable.
Trigger: Type a wrong answer and press "Enter".
Postconditions: The message "Wrong" is shown and no explanation is given.

Test case ID: 4
Test case purpose: Check that the program stops if the question button is pressed after 10 answers have been given.
Priority: Medium
Actor: Student
Precondition: 10 answers have been given.
Trigger: Press the question button.
Postconditions: A final score is shown and the program stops.

The component "Priority" can be used to indicate whether it is mandatory or optional to execute the test case. It can also be used to determine the sequence of execution.

2 - Creating logical test cases

If step 1 "Identifying test situations" has resulted only in a checklist, no logical and physical test cases can (at the moment) be created on the basis of the use cases. In that case, other parts of the test basis should be searched for additional information to enable the creation of the test cases. If step 1 has delivered detailed test situations, these are at once the logical test cases.

In a UCT test case traceability matrix, a track is kept (by checking off) of whether at least one logical test case has finally been made for all the recognised test situations.



With the checklist example, it will still be possible at a given point, on the basis of information from other parts of the test basis, to start creating logical test cases. The UCT test case traceability matrix (here completed with fictional values) might look as follows:

                 Logical test case
Test situation   TC-1  TC-2  TC-3  TC-4  TC-5  TC-6  TC-7  TC-8  TC-n
1a                X           X           X           X
1b                     X
2a
2b                                 X
2c                                              X
3
4                                                           X    X

It can be seen from the matrix that no logical test cases (TCs) have yet been created for test situations 2a and 3. It can also be seen, for example, that test situation 1a occurs in various logical test cases.
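The same check can be performed mechanically by mapping each test situation to the logical test cases that cover it and reporting the uncovered situations. The coverage values below are illustrative, chosen to be consistent with the fictional example:

```python
# Sketch of the UCT traceability check: report test situations for
# which no logical test case has been created yet.

coverage = {
    "1a": ["TC-1", "TC-3", "TC-5", "TC-8"],
    "1b": ["TC-2"],
    "2a": [],
    "2b": ["TC-4"],
    "2c": ["TC-6"],
    "3":  [],
    "4":  ["TC-7", "TC-n"],
}

uncovered = [situation for situation, tcs in coverage.items() if not tcs]
assert uncovered == ["2a", "3"]
```

The result matches the reading of the matrix: test situations 2a and 3 still need logical test cases before coverage of the use cases is complete.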

The simple UCT test case traceability matrix for the example with the detailed test situations (which are at once the logical test cases) looks as follows:

                 Logical test case
Test situation   TC-1  TC-2  TC-3  TC-4
1                 X
2                       X
3                             X
4                                   X


3 - Creating physical test cases

No remarks.

4 - Establishing the starting point

No remarks.


An overview of all featured Test Design Techniques can be found here.