How many times have you released a well-tested application to production only to find out that there is a huge spike in errors or customers are reporting an annoying bug?

And how many times have we encountered a similar situation in the demo sessions?

During the sprint everything was fine and every build was green, until we demoed to the stakeholders. Someone from the business asked a question, we tried to demo their scenario, and embarrassingly we hit a bug.

A lot of the time, failures in production are related to infrastructure and operations. Over those we have little or no control. All we can do is set up monitoring and alerting to get notified if a service goes down.

On the other hand, application bugs can go unnoticed for a long time. We would only know about them if and when customers start reporting issues or revenue starts to drop.

Then we start investigating with the aim of finding the underlying issue.

Missed Scenarios

Speaking from experience, almost every application bug I have come across existed because someone, somehow, forgot to test a particular scenario, and a latent bug slipped through.

And Sod's law tells us that if we forgot to test something, there's probably a bug in there.

There are many reasons why we end up with buggy software.

There are bad development practices that inject bugs into the software, such as poor coding, poor design, and misunderstood requirements; and there are bad testing practices that let bugs slip through, such as missed scenarios and shallow testing.

A lot of the bad development practices can be rectified by means of unit testing, mutation testing, design reviews, peer code reviews, etc. Easy!
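
As a quick illustration of what mutation testing adds on top of plain unit testing, here is a minimal sketch. The function, the "mutant", and both test suites are invented for illustration: a mutant that survives our tests reveals a gap in the test suite.

```python
# Mutation testing in miniature: deliberately break the code,
# then check whether the tests notice.

def is_adult(age):
    return age >= 18

def is_adult_mutant(age):
    return age > 18  # mutation: >= changed to >

def weak_suite(fn):
    """A shallow suite that misses the boundary value."""
    return fn(30) and not fn(10)

def strong_suite(fn):
    """Adds the boundary case (18) that the mutant breaks."""
    return fn(30) and not fn(10) and fn(18)

# The weak suite passes for both: the mutant survives,
# telling us our tests are too shallow.
assert weak_suite(is_adult) and weak_suite(is_adult_mutant)

# The strong suite kills the mutant.
assert strong_suite(is_adult)
assert not strong_suite(is_adult_mutant)
```

Real tools such as mutation testing frameworks generate the mutants automatically; the principle is the same.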

The real question is: how can we tell ourselves and other stakeholders that the application we are about to release to production has been adequately tested? Moreover, how can we say this with a high degree of confidence?

In other words, how do we know there is no latent bug that is going to cause havoc after we've released?

The answer lies in how deeply we tested the application and how well we thought about scenarios.

Ultimately, it is the scenarios that illustrate the existence of defects.

A bug is the result of a human error; a failure is the manifestation of that bug; and a scenario is what exposes that failure.

Scenarios vs Test Coverage

A note about coverage: test coverage is a loosely defined term, and it is only as good as the scenarios someone thought to define.

Here, I’m not talking about code coverage, because that is objective and can be measured.

It is more about scenario coverage: how do we know we have thought about all the possible scenarios? We cannot measure this kind of coverage by counting scenarios, because there may be scenarios we never managed to think of.

Thinking of Scenarios

How do we come up with scenarios? What are some practical ways to generate scenarios and enhance our coverage?

All this is to help identify issues before releasing to production.

Experience and Heuristics

Experience plays an important role here. When we talk about a particular feature or functionality, if we have previous experience testing the same feature on another project, this gives us a head start.

We might not even need a prescribed specification in order to begin testing. We can apply intuition and heuristics from past knowledge.

Architectural and Data Flow Diagrams

Studying the architectural and data flow diagrams helps in identifying the integration points and how the data flows through the system.

  • What are the entry points?
  • How does the state of the application change depending on what the user does?
  • What data can we feed in the different flows through the system?
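
One lightweight way to turn these questions into concrete test cases is to enumerate combinations of entry points, input classes, and application states. A minimal sketch, with the entry points and input classes assumed purely for illustration:

```python
# Build a scenario matrix from the questions above:
# entry points x input classes x application states.
from itertools import product

entry_points = ["web_form", "mobile_app", "public_api"]   # where data enters
input_classes = ["valid", "empty", "oversized", "malformed"]  # what we feed in
states = ["anonymous", "logged_in"]                        # state the user is in

scenarios = [
    {"entry": e, "input": i, "state": s}
    for e, i, s in product(entry_points, input_classes, states)
]

# 3 entry points x 4 input classes x 2 states = 24 candidate scenarios
assert len(scenarios) == 24
```

Not every combination will be meaningful, but the matrix gives you a checklist to prune rather than a blank page.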

Scenario Workshops and Bug Bash Sessions

No matter how good you are at thinking of different scenarios, different people have different mindsets and different ways of testing an application. The more diversity in the workshop, the better.

You will have people from different departments using the application: devs, QAs, BAs, marketing, designers, and each brings a different perspective.

You would be surprised by how many issues will be identified in these sessions!

Being Mindful and Thinking Outside of the Box

Story testing (vertical) vs end-to-end and integration testing (horizontal): there are times when we focus so much on a given user story that we lose sight of the bigger picture.

For example, say we are testing a user registration story. The acceptance criteria say the payload should be:

{
    "username": "string",
    "email": "string",
    "password": "string"
}

We come up with various scenarios and all test cases pass and the story is done.

Then we move on to another story about Multi-Factor Authentication with SMS. Again, we test all the requirements of the story, such as receiving notifications on a given number.

The two user stories have been tested individually according to their context. But when we want to run an integration test, we soon realise that our registration payload has no mobile field! So, how can users use MFA with SMS if they haven’t got a mobile number on their profile?
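
To make the gap concrete, here is a small sketch (the function names and rules are invented, not from the stories): each story's own tests pass, but a test that chains the two features together fails immediately.

```python
def register(username, email, password):
    """Registration as specified by the first story: no mobile field."""
    return {"username": username, "email": email, "password": password}

def enrol_sms_mfa(profile):
    """MFA enrolment from the second story: needs a mobile number."""
    if not profile.get("mobile"):
        raise ValueError("SMS MFA requires a mobile number on the profile")
    return {"mfa": "sms", "number": profile["mobile"]}

# The story-level test passes: the payload matches the acceptance criteria.
profile = register("alice", "alice@example.com", "s3cret")
assert set(profile) == {"username", "email", "password"}

# The integration test fails: registration never captured a mobile number.
try:
    enrol_sms_mfa(profile)
    assert False, "expected SMS MFA enrolment to fail"
except ValueError as err:
    assert "mobile" in str(err)
```

Neither story's tests were wrong; the defect only exists in the seam between them, which is exactly where horizontal testing looks.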

We always need to keep an open mind about how each feature will be used, not just by itself, but also about what impact it has on other features.

Developers vs Testers

The developer mindset is to take a user story and develop the feature according to the acceptance criteria. If it works, it's done.

When a tester starts executing various scenarios, all sorts of issues crop up. This is because there is usually only one way to use a feature (confirmation testing), but many ways to misuse it (evil testing).
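
A small sketch of the contrast, using an invented `validate_username` rule (3 to 20 word characters, an assumption for illustration): confirmation testing exercises the one intended input, while evil testing throws the many unintended ones at it.

```python
import re

def validate_username(value):
    """Accept 3-20 word characters; reject everything else."""
    if not isinstance(value, str):
        return False
    return re.fullmatch(r"\w{3,20}", value) is not None

# Confirmation testing: the one intended way to use the feature.
assert validate_username("alice") is True

# Evil testing: the many ways to misuse it.
evil_inputs = [
    "",                        # empty
    "ab",                      # too short
    "a" * 21,                  # too long
    "alice!",                  # forbidden character
    "  alice",                 # leading whitespace
    None,                      # missing value
    42,                        # wrong type
    "'; DROP TABLE users;--",  # injection attempt
]
for bad in evil_inputs:
    assert validate_username(bad) is False, bad
```

One happy-path case versus eight hostile ones is roughly the ratio the tester mindset brings to the table.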

I think this is where testers can really add value.

Good testers are able to come up with various scenarios and test from different perspectives. It is this mindset that can really differentiate testers from developers. Not that developers can’t be good testers, but testing is a different discipline.

Thinking of scenarios takes time, effort, and research, which testers might love doing, but developers just want to get on and code their hearts out.

It is a good mental exercise to try to come up with various scenarios for a given object, such as “how do you test a pen?”

Conclusion

In testing, scenarios are king. Coming up with scenarios is an art, and an activity that takes a lot of effort, thought, discussion, and research. Years of experience and a good set of heuristics help greatly.

Testing is difficult, and what makes it hard is identifying all the possible scenarios that can be executed against an application to reveal potential defects.

I leave you with one final thought:

A quote from Albert Einstein:

No amount of experimentation can ever prove me right; a single experiment can prove me wrong.

Applied to testing:

No amount of testing can prove software right; a single test case can prove it wrong.

So, get your scenarios right.

#qa #featured