So, there are two matching engines claiming 100 µs latency: Gatelab and MillenniumIT. This post is neither about arguing whether the industry, and especially the buy side, really needs such feats (as an IT person I support further advances in trading technology), nor about any particular platform. It is simply a reflection on the figure itself, its meaning, and the testing behind it.
First of all, it is necessary to clarify what the claim that latency is below a particular threshold actually means. Are we talking about average latency aggregated across normal matches and random spikes? Or can we assume that a certain percentage of orders will match faster (e.g. 80% or 99% of them)? Depending on the interpretation, the end results can differ significantly.
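To make the difference concrete, here is a small Python sketch (the latency samples are made up for illustration, not real measurements) showing how the same data set produces very different headline numbers depending on whether the mean, the 80th, the 99th or the 99.9th percentile is quoted:

```python
# Illustrative only: synthetic latency samples in microseconds, not real data.
import random
import statistics

random.seed(42)
# Most matches are fast, but a small fraction hit spikes
# (context switches, GC pauses, batch effects, etc.).
samples = [random.gauss(80, 10) for _ in range(9900)] + \
          [random.gauss(900, 200) for _ in range(100)]
samples.sort()

def percentile(data, p):
    """Nearest-rank percentile of an already sorted list."""
    k = max(0, min(len(data) - 1, int(round(p / 100.0 * len(data))) - 1))
    return data[k]

print(f"mean   : {statistics.mean(samples):7.1f} us")
print(f"p80    : {percentile(samples, 80):7.1f} us")
print(f"p99    : {percentile(samples, 99):7.1f} us")
print(f"p99.9  : {percentile(samples, 99.9):7.1f} us")
```

With even a 1% tail of spikes, the mean can stay comfortably under 100 µs while the 99.9th percentile is an order of magnitude higher, so a headline figure without its percentile is close to meaningless.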
The second question is the rate of orders used in the test. Optimal throughput and latency are achieved when the order matching rate equals the rate at which orders appear in the inbound queue. Latency figures obtained under this assumption look good, but do they represent anything close to reality? If the order volume is low, the matching engine has to constantly poll an empty queue, and the result is either 100% CPU load on a particular core or additional delays from context switching and other small effects that outweigh the target latency figure. On the other hand, when orders arrive in batches, extra delays come into play: while the system is busy processing one batch, the others have to wait. A real-life scenario is a mixture of these two extremes: most of the time the queue is empty, but when orders do arrive, they rarely arrive alone.
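The effect is easy to reproduce with a toy single-server queue model. The sketch below (all parameter values are assumptions chosen for illustration) feeds the same average order rate into the queue twice, once as a smooth flow and once in batches of ten, and compares the resulting average latency:

```python
# Hedged sketch: a toy FIFO single-server queue comparing smooth vs bursty
# arrivals at the same average order rate. Numbers are illustrative only.
import random

random.seed(1)
SERVICE_US = 80.0      # assumed fixed matching time per order, microseconds
N_ORDERS = 100_000
MEAN_GAP_US = 100.0    # average inter-arrival gap (~10,000 orders/sec)

def simulate(arrival_gaps):
    """Return mean queueing + service latency for a FIFO single-server queue."""
    now = 0.0
    server_free_at = 0.0
    total_latency = 0.0
    for gap in arrival_gaps:
        now += gap
        start = max(now, server_free_at)
        server_free_at = start + SERVICE_US
        total_latency += server_free_at - now
    return total_latency / len(arrival_gaps)

# Smooth flow: exponentially distributed gaps around the mean.
smooth = [random.expovariate(1.0 / MEAN_GAP_US) for _ in range(N_ORDERS)]

# Bursty flow: same average rate, but orders arrive in batches of 10.
bursty = []
for _ in range(N_ORDERS // 10):
    bursty.append(random.expovariate(1.0 / (MEAN_GAP_US * 10)))
    bursty.extend([0.0] * 9)

print(f"smooth arrivals : {simulate(smooth):6.1f} us average latency")
print(f"bursty arrivals : {simulate(bursty):6.1f} us average latency")
```

At identical utilisation, the bursty flow shows a substantially higher average latency simply because later orders in each batch wait behind the earlier ones.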
The average size of the order book and the types of orders are equally important. Execution is much faster when there is only a single opposite order on the book and no need to post the unexecuted remainder according to its price and time priority. A modern matching engine should not be limited to simple order types: processing times for an order pegged to the pan-European BBO or an order with a minimum execution size are much higher, and icebergs and uncrossing quotes can generate an unexpected number of messages.
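As a rough illustration, the toy price-time priority book below (a sketch, not any vendor's implementation) shows why an aggressive order that has to walk several resting levels, and then post its remainder, costs more than the single-opposite-order case often used in benchmarks:

```python
# Minimal sketch of price-time priority matching. The work per incoming order
# grows with the number of resting orders it crosses, plus the bookkeeping
# needed to re-insert any unexecuted remainder with its own priority.
from collections import deque
from dataclasses import dataclass

@dataclass
class Order:
    order_id: int
    qty: int
    price: float

def match_buy(book_asks: deque, incoming: Order) -> list:
    """Match an incoming buy against resting asks sorted by price, then time."""
    fills = []
    while incoming.qty > 0 and book_asks and book_asks[0].price <= incoming.price:
        resting = book_asks[0]
        traded = min(incoming.qty, resting.qty)
        fills.append((resting.order_id, traded, resting.price))
        incoming.qty -= traded
        resting.qty -= traded
        if resting.qty == 0:
            book_asks.popleft()
    # Any remainder would then be posted back to the book with its price/time
    # priority -- extra work the single-opposite-order benchmark never pays.
    return fills

asks = deque(Order(i, 10, 100.0 + i * 0.01) for i in range(50))
print(match_buy(asks, Order(999, 120, 100.05)))
```

Pegged, minimum-quantity and iceberg orders add further logic on top of this loop, which is exactly why their processing times diverge from the headline figure.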
There are many other essential factors: passing through trading gateways and firewalls, persistence and recovery, the rate of price-source market data messages, and so on. Each of these dimensions, and how it is interpreted, can have a dramatic effect on the latency figures.
To produce a credible result, one needs to design a reasonable business scenario with a valid distribution of order types and their parameters, a randomized inbound flow, and the expected connectivity and measurement options (e.g. the end-user gateway no farther than 15 km away). A market model has to be defined before such verification can be carried out. However, once the live system is available, the actual numbers can be obtained and published.
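A test-scenario generator for such a verification might look roughly like the sketch below; the order-type mix and the rate are assumptions for illustration only, not a reference profile for any real market model:

```python
# Hedged sketch of a randomized test-scenario generator: a weighted order-type
# mix plus exponentially distributed inter-arrival gaps. Values are assumed.
import random

random.seed(7)
ORDER_TYPE_MIX = [("limit", 0.70), ("iceberg", 0.10), ("pegged", 0.10), ("min_qty", 0.10)]
MEAN_GAP_US = 100.0   # ~10,000 orders/sec on average

def next_order(seq: int) -> dict:
    """Draw one order from the assumed mix, with a randomized arrival gap."""
    r = random.random()
    cumulative = 0.0
    for order_type, weight in ORDER_TYPE_MIX:
        cumulative += weight
        if r <= cumulative:
            break
    return {
        "seq": seq,
        "type": order_type,
        "side": random.choice(["buy", "sell"]),
        "qty": random.choice([100, 200, 500, 1000]),
        "gap_us": random.expovariate(1.0 / MEAN_GAP_US),
    }

scenario = [next_order(i) for i in range(5)]
for order in scenario:
    print(order)
```

The point is not the particular numbers but that the whole profile is declared up front, so anyone reading the published latency figure knows exactly what scenario produced it.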