Our work focuses on functional and non-functional testing, with most of the business in the market infrastructure space: exchanges and clearing.
Three principles can help you test even the most sophisticated technology platforms.
True courage is required when facing disruptive technology. What distinguishes "incumbent" from "disruptive"? "Incumbent" means working with something you already know and can expect to perform in a certain way.
"Disruptive" technology occurs when we face new things - the things that will redefine what we already know. Incumbent testing of post-trade systems is a very complex area to begin with. Post-trade systems are massive, they have many interfaces, many distributed components and legacy systems. There are many challenges in providing Quality Assurance for such systems, starting with participant and asset classes complexity, followed by lifecycle complexity resulting in multi-day cycles, on top of risk calculations, existence of upstream and downstream systems, APIs, reports, etc. Traditional software testing is "hooked on" preconceptions about how the system under test is supposed to work. Only disruptive testing is able to reveal truly new information about the system and let us learn from it. And software testing is relentless learning. We think about ourselves as a defect mining company. The goal of disruptive testing is to find defects that are unlikely to be found in testing with other methods. It is quite challenging to be able to extract hidden defects that only reveal themselves under load, under risk conditions or other atypical conditions and cases. In order to do it, we rely on a set of testing tools. One of them is the ClearTH tool that is used within several global exchange groups. The tool simultaneously connects to all possible endpoints, where necessary, to simulate various data flows. It can work as a settlement simulator, as a risk management simulator, it can control reference data and the incoming market data. The tool concurrently operates on all these endpoints and produces diverse random loads, thus placing the post-trade system under stress conditions. Contrary to typical load or functional testing, we have carefully studied everything that happens at every endpoint and, also, internally within the system. This allows us to identify the problems that will not occur in the process of ordinary functional validation, where tests are run one by one. This approach is used in supporting large initiatives, both in the Waterfall mode and also on Agile projects. Most of the large financial sector organizations are going through an Agile transformation, and running testing scenarios in short sprints for large systems represents many challenges. The first idea that comes to mind during a transition to the Agile mode is to try and squeeze software testing into sprints and confine all the testing to sprints. Quite frequently, it leads to "confirmation-bias" testing: the system is expected to work, and instead of trying to obtain new knowledge, the testers convince themselves that it is working as expected. Only the truly brave can look at the world and understand that all of it - gods, men, everything else - will end badly. This is the mentality required from a software tester.
In Agile projects, it is important that every iteration delivers new value, and when people implement this, they frequently forget that the test library should reflect the same idea. If no effort is spent on creating a test harness, the end result will be a test library that does not deliver value every step of the way. The law of requisite variety states that a control system must have at least as many possible states as the system it aims to control. This means that test harness development is a separate software development process in its own right, and the challenges and effort required to create a test harness are frequently underestimated.

Performing ordinary functional testing is much like obeying the main safety rule on board a submarine: "Do NOT open the portholes when underwater". Ordinary functional testing comes down to iterating through a finite number of scenarios to prove that the "portholes" will, indeed, not open. The number of scenarios may vary from one to over a hundred, and their sole purpose is proving that the "portholes" stay shut in this particular subset of scenarios. Non-functional testing, on the other hand, consists of iterating through an even smaller number of scenarios - that is the general tendency across our projects - to prove that the "portholes" will not open under brute force.

Last but not least comes disruptive functional testing: test without fear. Its first part consists of iterating through a huge number of random, diverse scenarios under load to prove that the portholes stay shut; the sketch below contrasts this with a fixed functional pass. The second part of disruptive testing is opening the "porthole". Disruptive functional testing is the only way to ensure that your system is not only ready for what you expect, but also potentially ready for the unexpected. When new disruptive technologies are introduced, the software testing approach should match their complexity and nature.
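The following sketch contrasts the two modes described above: a functional pass over a small, hand-picked scenario list versus a disruptive pass over a large number of randomized scenarios, with results checked at every endpoint. The names (FUNCTIONAL_SCENARIOS, run_scenario, check_all_endpoints) are illustrative assumptions, not taken from any specific tool.

# Sketch under stated assumptions: fixed functional testing vs. randomized
# disruptive testing. The caller supplies run_scenario (executes one scenario
# against the system under test) and check_all_endpoints (verifies the outcome
# at every endpoint, not just the primary output).
import random

FUNCTIONAL_SCENARIOS = [
    {"asset": "bond", "lifecycle": "settle", "days": 1},
    {"asset": "equity", "lifecycle": "settle", "days": 2},
    # ... a finite, hand-picked list: proving the "portholes" stay shut here only
]

def random_scenario():
    """One of a practically unbounded set of randomized scenarios."""
    return {
        "asset": random.choice(["bond", "equity", "repo", "derivative"]),
        "lifecycle": random.choice(["settle", "fail", "partial", "cancel"]),
        "days": random.randint(1, 5),
    }

def functional_pass(run_scenario, check_all_endpoints):
    # Scenarios are run one by one against known expectations.
    for scenario in FUNCTIONAL_SCENARIOS:
        run_scenario(scenario)
        assert check_all_endpoints(scenario)

def disruptive_pass(run_scenario, check_all_endpoints, n=10_000):
    # A huge number of diverse scenarios, generated at random and run under load.
    for _ in range(n):
        scenario = random_scenario()
        run_scenario(scenario)
        assert check_all_endpoints(scenario)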