Indicators in the VOICE model

The indicators in the VOICE model are the starting point for determining the needed testing activities and other quality measuring activities.

An indicator is a quantitative assessment for comparing or tracking the current state or level of the product (or a part of it) together with the business process it supports, of the IT delivery process, and of the people involved. Indicators are used to determine whether the business value and the IT objectives are achieved.

Of course, the objectives that the team must achieve, and which will be measured based on these indicators, can vary a lot. Therefore, the following list of example indicators is by no means intended to be complete; it merely offers ideas for creating your own indicators.

Note: both the terms indicator and metric are used in TMAP, with different meanings; read more about this in the last section of the topic “metrics”.

Examples of indicators

  • Business value related indicators
    • Number of user stories compliant with the definition of done (i.e. accepted by relevant stakeholders)
    • Stakeholder confidence level related to pursued business value (e.g. using confidence monitor)
    • Customer satisfaction
    • Conversion rate (the proportion of website visitors who actually buy)
    • Number of returning customers
    • Number of service calls to helpdesk
    • Market share increase
    • Revenue increase
    • Improved forecast accuracy
    • Service time per customer (for example, number of days in hospital)
    • Personal opinion of involved stakeholders about the quality level of the new/changed system compared to the agreed level (measured by interviews or questionnaires)
  • IT delivery related indicators
    • Business features done compared to business features defined
    • Functional components done compared to functional components defined
    • Quality risks covered compared to quality risks identified
    • Test pass/fail ratio
    • Coverage achieved by tests (for example, requirement coverage, code coverage, risk coverage)
    • Percentage of availability (up-time) of production environment compared to agreed availability
    • Percentage of availability (up-time) of test environment compared to agreed availability
    • Reliability level of data connections to test environment compared to agreed level
    • Availability of team members for team activities and tasks compared to agreed availability
    • Availability of non-team members (such as end users) for team-supporting activities and tasks, compared to agreed availability
    • Automation rate (percentage of tasks automated versus total tasks)
  • Team related indicators
    • Availability of necessary skills in the team
    • Number of people with adequate experience and skills to operate the new/changed IT system compared to agreed level
    • Satisfaction and happiness of team members
    • Quality of the refinement process (for example, measured by the number of questions arising during development activities)
    • Velocity (amount of work done per amount of time)
    • Trust
  • Problem related indicators
    • Number of anomalies registered compared to number of anomalies expected
    • Mean time to investigate and fix anomalies compared to agreed level
    • Fault density (average number of faults per volume of code)
    • Operational failures occurred compared to expected level of operational failures
    • Mean time to fix operational failures
    • Lifecycle cost per problem (which indicates how much could be saved by preventing such type of problem)
    • Escaped fault ratio (number of faults detected in a later stage than the one in which they were introduced)

And just as a reminder: the list above contains examples and must be tuned to the needs of your team, by adding other indicators whenever relevant. You will normally not use all of these indicators; a small computation sketch follows below.
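To make the arithmetic behind such indicators concrete, the following minimal Python sketch computes two of the IT delivery related indicators from the list above: the test pass/fail ratio and the automation rate. The TestRunSummary structure and the numbers are invented for illustration; substitute data from your own test management or pipeline tooling.

# Minimal sketch: computing two IT delivery related indicators from raw
# counts. All names and numbers are hypothetical examples.

from dataclasses import dataclass

@dataclass
class TestRunSummary:
    passed: int        # executed test cases that passed
    failed: int        # executed test cases that failed
    automated: int     # test cases that run without manual steps
    total_cases: int   # all test cases, automated and manual

def pass_fail_ratio(run: TestRunSummary) -> float:
    """Share of executed tests that passed."""
    executed = run.passed + run.failed
    return run.passed / executed if executed else 0.0

def automation_rate(run: TestRunSummary) -> float:
    """Share of all test cases that are automated."""
    return run.automated / run.total_cases if run.total_cases else 0.0

run = TestRunSummary(passed=182, failed=18, automated=150, total_cases=200)
print(f"pass/fail ratio: {pass_fail_ratio(run):.0%}")   # 91%
print(f"automation rate: {automation_rate(run):.0%}")   # 75%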

How to select your indicators?

As a general rule: a few well-measured and properly followed-up indicators are much better than a long list of impractical indicators. So as a team, together with the relevant stakeholders, discuss which indicators show whether you are moving towards the pursued business value.

It is preferable to automate the measurement of indicators, so consider whether the indicators you select can be measured with tools. On the other hand, some important indicators, such as people's opinions, may be difficult to measure with tools; do not ignore these just because collecting the data takes more work.

Over time you can extend the set of indicators you measure, especially when the needs of the stakeholders change or when there is a clear need for more detailed information.

Also keep in mind that the list above contains examples; you may find other indicators that better fit your purpose.

How do indicators and metrics relate?

In TMAP we distinguish two reasons for measuring: indicators and metrics. 

Indicators are used to determine whether the business value and the IT objectives are achieved by the IT system and the business process it supports; to accomplish this, indicators measure products, processes and/or people.

Metrics relate to continuous improvement (improving the IT delivery process and the people involved and indirectly also the products). 

If you perform a specific measurement, the result may be used as an indicator or as a metric, and in some cases the same measurement may even be used as both an indicator and a metric.

[Figure: how measurements relate to indicators and/or metrics]

Examples

An example of a measurement that is typically an indicator is “conversion rate” (the proportion of website visitors who actually buy); it relates to the business value only.

An example of a measurement that is typically a metric is “deployment frequency”; it relates to improving the IT delivery process only.

An example of a measurement that can be both an indicator and a metric is “escaped fault ratio” (anomaly detection effectiveness), which concerns both how well the business value is achieved and how the IT delivery process can be improved.
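The two measurements named above reduce to simple ratios. The sketch below works them out in Python with invented numbers, purely to illustrate the calculations; the figures are not drawn from any real project.

# Hypothetical worked examples of the two measurements discussed above.

# Conversion rate: the proportion of website visitors who actually buy.
visitors, buyers = 12_500, 375
conversion_rate = buyers / visitors                           # 0.03

# Escaped fault ratio: faults detected in a later stage than the one in
# which they were introduced, relative to all faults detected.
faults_detected_total = 40
faults_escaped = 6                                            # e.g. found in production
escaped_fault_ratio = faults_escaped / faults_detected_total  # 0.15

print(f"conversion rate:     {conversion_rate:.1%}")          # 3.0%
print(f"escaped fault ratio: {escaped_fault_ratio:.1%}")      # 15.0%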

Use the Goal-Question-Metric approach to find indicators

A well-known approach to derive indicators is the Goal-Question-Metric (GQM) approach. In this approach, the Goal and the Question refer to the Value and the Objectives in the VOICE model; the resulting metrics are the indicators.
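As an illustration of how such a derivation might look in practice, the sketch below encodes one hypothetical GQM breakdown as a small Python data structure; the goal, questions and indicators are invented examples, not prescribed by TMAP.

# Illustrative GQM breakdown: one goal, the questions that probe it, and
# the indicators (metrics) that answer each question. All entries are
# hypothetical examples.

gqm = {
    "goal": "Increase online sales (the pursued business value)",
    "questions": {
        "Do visitors complete a purchase?": [
            "conversion rate",
            "number of returning customers",
        ],
        "Does poor quality drive customers away?": [
            "number of service calls to helpdesk",
            "escaped fault ratio",
        ],
    },
}

for question, indicators in gqm["questions"].items():
    print(question)
    for indicator in indicators:
        print(f"  indicator: {indicator}")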

More information on the GQM approach can be found in Building Block "Continuous improvement", and in the TMap NEXT book [Koomen 2006].


Related content:

VOICE model