Manual OR automated testing? Both!!


Recently I saw a discussion on the web about manual versus automated testing. It was certainly not the first time, and it most likely won't be the last. I would like to contribute to this exchange of views and offer some ideas and reflections to help you form your own opinion.
Before diving into manual versus automated testing, let's first look at the meaning of "testing". According to TMAP, renewed in 2020, the definition of testing is:

'Testing consists of verification, validation and exploration activities that provide information about the quality and the related risks, to establish the level of confidence that a test object will be able to deliver the pursued business value.'


So, the goal is to gather information to enable stakeholders to establish their confidence that an IT product will have value for the organization and the users.

This information is gathered through activities in the fields of verification, validation, and exploration. For each of these three activities we can establish the relationship between manual and automated testing. Here I focus on preparing and executing test cases (although testing includes much more, of course).

Verification is about comparing the test object with the specified requirements. Since the requirements are clear, this activity lends itself well to automation (at least if you plan to run the tests more than once, as in a regression test; otherwise the investment in creating the automation scripts is usually not worth it).
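To make this concrete, here is a minimal sketch of such a verification check. The requirement, the `discount_price` function, and the test names are all hypothetical illustrations (not from TMAP): because the requirement is explicit, a script can re-verify it mechanically on every regression run.

```python
# Minimal sketch of automated verification (hypothetical example).
# Specified requirement: "an order of 10 items or more gets a 5% discount".
# Because this requirement is explicit, it can be checked mechanically.

def discount_price(unit_price: float, quantity: int) -> float:
    """Hypothetical implementation of the specified requirement."""
    total = unit_price * quantity
    return total * 0.95 if quantity >= 10 else total

# pytest-style test cases: each one verifies the test object
# against the specified requirement, once per regression run.
def test_no_discount_below_threshold():
    assert discount_price(2.0, 9) == 18.0

def test_discount_at_threshold():
    assert discount_price(2.0, 10) == 19.0
```

Once such checks exist, every regression cycle re-verifies the requirement at almost no extra cost, which is exactly where the investment in automation scripts pays off.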

Validation aims to determine whether the test object does what the organization and the users need. The tricky part is that users have not always specified what they really need: sometimes because it is so obvious to them that they do not state it explicitly, and sometimes because they do not actually know what they need themselves. This makes validation much harder to automate, because you first have to investigate what is actually needed for the solution to be "fit for purpose". Additionally, validation more often involves progression tests (testing new parts and changes) which, if all goes well, do not need to be repeated often. So the question is whether automation is worthwhile.

In exploration you look at what is possible with the test object, especially the things that are not described or expected in advance, for example by observing what happens when you perform unspecified actions. The most striking example of this is security testing, where you deliberately try, among other things, to do things that are not specified (for example misuse or abuse of the system). This kind of quality evaluation is creative intellectual work and cannot be automated. Exploratory testing is the best-known example of this type of exploration and is also often used as an example of manual testing.

Sometimes I hear people (often project leaders and other people who want to keep the investment in testing activities as low as possible) proclaim: "all tests must be automated".
I don't think that's possible. With regression tests you can strive for a high degree of automation, but even then it is wise to do some exploration regularly, because a test tool is only a tool and does not see everything. For progression tests, in my opinion, it is really not possible to automate everything, simply because there are too many ambiguities and uncertainties. And this is precisely where the knowledge and experience of the quality-conscious IT professional come in handy.

One topic that will remain manual work for a long time to come is the investigation and assessment of test results, especially where the actual result differs from the expected result. This activity (which, with good reason, is now defined as a separate topic in the new TMAP book 'Quality for DevOps teams') is about finding out why there is a difference. And figuring that out is typically human work.

Having said all this, a plea for (more) use of test tools is very legitimate! But don't strive to automate all testing. (My saying is: "it is never wise to automate everything, and it is seldom wise to automate nothing".)
I think there is a big difference between "automated testing" and "applying automation when testing". You can choose from a huge range of tools that make various parts of the testing activities easier, faster, cheaper, or more fun. Automation is definitely wise (in fact, I am still amazed at how few tools the average tester turns out to use). The range varies from simply using an anomaly management tool (automating your anomaly administration) to an artificial-intelligence-based tool that pushes all the buttons in the GUI to check whether the application responds normally. But using a test data management tool to get and keep your test files in order is also a form of test automation.

My conclusion is simple: good testing requires human ingenuity; efficient testing requires the use of tools. So, a sensible tester uses various tools and will always continue to do manual testing tasks too.

Good luck and have fun with verification, validation, and exploration, whether automated or not!
 

Published: 10 February 2021
Author: Rik Marselis
