The Testing Challenge: Time

Accurate testing of systems and data processing software is critical to business success. Whether it’s a major twice-yearly waterfall release or this month’s agile update, bug-free software is essential. News travels fast when something goes wrong, and when it does you must act quickly to limit reputational damage and keep costs down. Everyone from the CEO downwards recognizes how important this is, so why is it so difficult to get right?

Scarcity of resources

Time and cost pressure are a constant challenge to effective implementation. Sometimes these pressures push in the same direction, because more time usually means added cost. At other times they pull in opposite directions, at least in the short term. Adding to the challenge is the increasing complexity of integrated processing systems. But if we invested in tools that increase productivity, we could deliver more quickly and be better prepared to deal with that complexity. In a series of short articles over the coming weeks, we will examine each of these challenges and offer some ideas about how to address them.

The testing challenge

Today, let’s talk about time. Any developer or tester knows how this goes: there is a neat schedule laid out for development, whether that is the next six months for a big waterfall release or the next two weeks for a focused agile release. The terminology may vary, but the work falls into four categories:

  1. Defining what will change (requirements/story)
  2. Making the changes (development)
  3. Testing the changes and all other code (testing)
  4. Rolling out the successful results (implementation/release)

These are nicely laid out in a project plan that allows adequate but limited time to complete all the work. It also defines a neat critical path: all requirements are completed before development begins, development is finished before testing starts, and all testing shows 100% accuracy before implementation. One thing is a given: the implementation date (D-day) has been agreed with management and/or customers, and it cannot be delayed without significant consequences.

The reality of testing

But we know things rarely work out this way. Even the initial definition of requirements may be delayed while full agreement between stakeholders is obtained. During development, unexpected dependencies may be discovered that cause delays and force requirements to be updated. Shifting business priorities raise new requirements in the middle of development. Testing will expose development problems and/or inadequate requirements, not to mention inconsistencies between the requirements and what was developed. Rework is constant, but D-day does not change.

Sophisticated project plans allow phases to overlap and provide some slack time. Sometimes that is enough, but usually the impact of delays is a squeeze on the time allowed for testing. The tester despairs:

“Everyone else gets time that is taken away from mine.”

This is the reality of system testing. The question then is:

What kinds of tests do we need, and how can we run them faster?

Increased complexity and regression testing

The most obviously necessary testing of new development verifies accurate implementation and conformity to the new requirements. Less obvious but equally critical is regression testing, which makes sure the new development does not break or change anything already in the system. Let’s discuss regression testing first.

A good rule of development, as in medicine, is to “do no harm”. Regression testing ensures that the updates for a release do not harm the functions customers already depend upon. But over time, the range of capabilities a system supports grows until the number of regression tests required for full coverage becomes astronomical. As mentioned above, the increasing complexity of systems means that:

(a) there are many systems to test, and

(b) any test may involve multiple systems

Many organizations opt to test only a subset of capabilities in regression because of human resource constraints. That raises the risk of missing something in a release. The only answer to this problem is to create a testing system, preferably a single system, that serves as an archive of all tests required for a full regression and that can run them all with a single command, or at most a few.

But that’s just half the story. Many people forget that the goal is not simply to execute a test; it’s to confirm that the tests run successfully. When they don’t, it’s critical to identify the failed tests and provide sufficient information for failure-cause analysis. So you need a tool that fully and speedily automates the regression tests and stores the results for use when testing future changes.
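
A minimal sketch of what such an archive-and-run tool might look like, in Python. The decorator, registry, example tests, and results format below are all hypothetical illustrations, not any particular product’s API: every test registers itself in one archive, a single command runs them all, and each outcome is stored with enough detail for failure-cause analysis.

```python
import json
import traceback
from datetime import datetime, timezone

# Hypothetical archive: every regression test registers itself here,
# so a full regression run needs only one command.
TEST_ARCHIVE = {}

def regression_test(func):
    """Register a test in the archive."""
    TEST_ARCHIVE[func.__name__] = func
    return func

@regression_test
def test_interest_calculation():
    # Illustrative check of existing behaviour customers depend on.
    assert round(1000 * 0.05, 2) == 50.00

@regression_test
def test_statement_rounding():
    # Guards a known floating-point rounding quirk against regressions.
    assert round(2.675, 2) == 2.67

def run_full_regression(results_path="regression_results.json"):
    """Run every archived test and store the outcomes for later analysis."""
    results = {"run_at": datetime.now(timezone.utc).isoformat(), "tests": {}}
    for name, test in TEST_ARCHIVE.items():
        try:
            test()
            results["tests"][name] = {"status": "pass"}
        except Exception:
            # Keep the full traceback for failure-cause analysis.
            results["tests"][name] = {"status": "fail",
                                      "detail": traceback.format_exc()}
    with open(results_path, "w") as f:
        json.dump(results, f, indent=2)
    failed = [n for n, r in results["tests"].items() if r["status"] == "fail"]
    print(f"{len(results['tests']) - len(failed)} passed, "
          f"{len(failed)} failed: {failed or 'none'}")

if __name__ == "__main__":
    run_full_regression()
```

Because each run is written to a file, the stored results from one release can be compared against the next release’s run to spot newly failing tests.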


Increased complexity and release testing

As big as regression testing can be, in some ways it is simpler than the testing of new capabilities. New requirements mean something new to test, and when those requirements shift, the new tests have to shift with them. This creates a fluid environment in which some tests are updated while others that need updating are not, and the gap may not be noticed until the tests are executed.

A tool that allows large numbers of similar tests to be updated by modifying an underlying structure, or template, eases this process, as the sketch below illustrates. Another helpful step is the creation of audit trails showing who altered a test and how it was altered.
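
As a rough illustration of the template idea, here is a sketch using pytest’s parametrization; the fee rule, tier names, and case table are hypothetical. The table is the underlying structure, and editing it updates every derived test at once.

```python
import pytest

# Hypothetical fee rule under test.
def calculate_fee(amount, customer_tier):
    rate = {"standard": 0.02, "premium": 0.01}[customer_tier]
    return round(amount * rate, 2)

# The template: one table drives all the similar tests. When a requirement
# shifts (say, the premium rate changes), updating this table updates
# every derived test in one place.
FEE_CASES = [
    (100.00, "standard", 2.00),
    (100.00, "premium", 1.00),
    (500.00, "standard", 10.00),
    (500.00, "premium", 5.00),
]

@pytest.mark.parametrize("amount,tier,expected", FEE_CASES)
def test_fee_calculation(amount, tier, expected):
    # approx() avoids spurious failures from floating-point representation.
    assert calculate_fee(amount, tier) == pytest.approx(expected)
```

Kept under version control, the case table also gives a natural audit trail: the commit history shows who altered a test case and how.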

One of the strongest aids to testing new capabilities is the ability to compare the results of tests with different outcomes. An example would be the introduction of a new operational capability to a software subsystem, with many different potential outcomes depending on the combination of parameters input to the system. To cover this, a suite of 50 tests is developed to produce the expected range of outcomes from specified inputs.

Let’s say that when the tests are run, three of them do not show the expected outcome. This could have many causes, including bad programming, faulty inputs, or incorrect definitions of the expected results. Typically, the test results are examined manually, in isolation, to find the answers. But if you could take a similar, successful test and compare its data to that of the failed one, it might show whether the failure stemmed from the code or from the test definition. Either way, it provides a good starting point for resolution.
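
A sketch of that comparison in Python (the captured fields and values are invented for illustration): diffing the stored data of a failed test against a similar passing one narrows the search to the fields that diverge.

```python
def compare_results(passing, failing):
    """Report field-level differences between two captured test outputs."""
    diffs = []
    for key in sorted(set(passing) | set(failing)):
        p = passing.get(key, "<missing>")
        f = failing.get(key, "<missing>")
        if p != f:
            diffs.append(f"{key}: passing={p!r} failing={f!r}")
    return diffs

# Hypothetical captured outputs from two of the 50 tests.
passing_run = {"status": "settled", "fee": 2.00, "currency": "USD"}
failing_run = {"status": "settled", "fee": 2.20, "currency": "USD"}

for line in compare_results(passing_run, failing_run):
    print(line)  # -> fee: passing=2.0 failing=2.2
```

Here only the fee diverges, which points the analysis at the fee logic, or at the expected fee in the test definition, rather than at the whole processing chain.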


