Some topics cannot be taken for granted or discussed off the cuff. They demand that you leave the shallows and dive into the depths. Such discussions seem very simple in form, but in reality they call for in-depth review and analysis.

One such discussion is about “Testing and Checking” – we often hear this distinction and think: what can be so complicated about it?

“The difference between a test and a check closely mirrors the difference between an experiment and a demonstration.” – Michael Bolton

To make things clear, let’s start with the definitions and the difference between them:

“Testing is the process of evaluating a product by learning about it through experimentation, which includes, to some degree, questioning, study, modeling, observation, and inference.”

Checking, on the other hand, is defined as:

“Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.”

Viewed from a defocused state, both checking and testing are activities, and the importance of each is contextual in nature. Both activities have their own importance as the need arises, but we also need to understand that they are different from each other.

Testing is evaluation; it involves learning how a product works and behaves by experimenting with it. Experimentation means exercising the product, writing down conclusions, drawing up matrices, and then telling a compelling story about what was discovered.

Checking, by contrast, is the algorithmic follow-through of an already established confirmatory path. It is like ticking off completed items on a to-do list, without evaluating what was actually experienced in reality.
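To make this concrete, here is a minimal sketch of a check expressed as a pytest-style test. The function apply_discount and its expected value are hypothetical, invented purely for illustration:

```python
# A check: an algorithmic decision rule applied to a specific
# observation of the product. It can only confirm or refute the
# one expectation encoded in it.

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical product function under check."""
    return round(price * (1 - percent / 100), 2)

def test_ten_percent_discount():
    observation = apply_discount(100.00, 10)  # observe the product
    assert observation == 90.00               # apply the decision rule
```

The assertion passes or fails mechanically; it says nothing about usability, surprises, or the questions a human tester might raise while exploring the same function.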

As testers, we must be very open-minded when selecting a tool for conducting our tests. It is tempting to say, “Let’s make tools do that!”, but on the other side of the picture, this actually poses a huge challenge for testers. Why?

The reason is that tools are not a one-size-fits-all proposition. There are dozens of tools on the market: some are pre-packaged commercial products and some are open source. They also differ in coverage. For example, some of these tools only address GUI-related issues, while others are performance testing tools. A few combine these abilities, but each gains in some areas and lacks in others.

This variety is good for tech junkies, but a company investing huge amounts of money needs to make sure that it selects the right tools.

This causes immense pressure on testers and, at the same time, challenges their ideas. This is why the sensible approach is to have separate testing teams and tool-smith teams. For the best outcomes, the team structure usually comprises “regression” black-box testers and test automation experts.

We need to understand that there is a huge difference between a machine running a script created in an automation tool and a human doing the work. The machine merely executes the script; it is not affected by the environment around it: noise, temperature, or working conditions. Humans, the real testers, are affected by stimulus and context. So there are actually three combinations that can be formed from testing, checking, and human involvement in this whole process:

Human Checking

Is an attempted checking process wherein humans collect the observations and apply the rules without the mediation of tools.

Machine Checking

Is a checking process wherein tools collect the observations and apply the rules without the mediation of humans.

Human/Machine Checking

Is an attempted checking process wherein both humans and tools interact to collect the observations and apply the rules.
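To make the machine-checking case concrete, here is a minimal sketch in which a tool (a unittest script driving a stand-in product function) both collects the observation and applies the decision rule, with no human in the loop. The function product_login and its credentials are hypothetical, chosen only for illustration:

```python
import unittest

def product_login(username: str, password: str) -> bool:
    """Hypothetical stand-in for the product under check."""
    return username == "admin" and password == "secret"

class MachineCheck(unittest.TestCase):
    """Machine checking: the tool collects the observation and
    applies the decision rule without human mediation."""

    def test_valid_login_is_accepted(self):
        observation = product_login("admin", "secret")  # tool observes
        self.assertTrue(observation)                    # tool decides

if __name__ == "__main__":
    unittest.main()
```

In human checking, a person would perform the same login and compare the outcome against the rule themselves; in human/machine checking, a person might drive the tool and interpret the borderline observations.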

We also need to understand that a dependency on “tools” is a necessity for skilled testers to enhance their skills. Tools give their craftsmanship a creative edge that brings out extraordinary results. On the other hand, they also create a certain element of risk: if automated tests are created from the wrong design, with the wrong requirements identified, the results can be disastrous.
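As a hypothetical illustration of that risk, suppose a requirement for “20% off” was misread as “a flat $20 off,” and the check was written from the same misreading. The check below passes on every run while quietly confirming the wrong behaviour (all names and values are invented for this sketch):

```python
def discounted_total(price: float) -> float:
    # Product code implements the misread requirement:
    # a flat $20 off instead of the intended 20% off.
    return round(price - 20.00, 2)

def test_discount_matches_misread_requirement():
    # The expected value was derived from the same misreading, so
    # this check passes forever while confirming wrong behaviour.
    # At $100.00 the two interpretations happen to coincide, which
    # is exactly why the check looks trustworthy; at $50.00 the
    # product would wrongly charge $30.00 instead of $40.00.
    assert discounted_total(100.00) == 80.00
```

A human tester questioning the requirement, rather than just running the check, is far more likely to catch such an error.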