Using advanced mathematical modelling to calculate, predict and evaluate risk is nothing new. Financial institutions of all kinds have long used numerical libraries, whether home-grown or from a third party, containing mathematical, statistical and, increasingly, machine learning algorithms. However, growing complexity, evolving markets, economic pressures and the sheer scale of the data involved all mean that risk modelling is more imperative, yet more challenging, than ever before. Now may be the ideal time to evaluate, and even re-think, the processes and tools being used for risk analysis.
Given the potential impact on a business, everyone within a financial services firm benefits from having some understanding of best practice in risk analysis: get it wrong and the organisation is vulnerable; get it right and it can be more confident in its decision-making and forward planning.
Use the right model
A good starting point is making sure that the most appropriate model (in other words, algorithm) is being used to quantify the relationship between a target variable (such as credit risk) and the predictor variables (such as guarantees or repayment capacity). There may be more than one suitable model, and I’m not going to go into detail about them here, but a good model is one that captures the true relationship between the target and the predictor variables as closely as possible, and predicts the target variable accurately.
Credit scoring risk modelling of companies
Here’s an example of a model in action. Using artificial neural networks and decision trees — two types of model — the results of a study showed that the key factors in predicting companies’ default risk were: profitability ratios, repayment capacity, solvency, length of the credit history, guarantees, company size, number of loans, ownership structure and the duration of the corporate banking relationship. In that study, the decision trees displayed higher predictive accuracy than the artificial neural networks.
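To make that comparison concrete, here is a minimal sketch using scikit-learn. The data, features and model settings are entirely illustrative assumptions, not the study’s actual setup; the point is simply how two candidate models are fitted and compared on predictive accuracy.

```python
# Illustrative sketch only: comparing a decision tree and a small neural
# network on synthetic company-default data (not the study's real data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000

# Hypothetical predictor variables (stand-ins for profitability ratios,
# repayment capacity, solvency, guarantees, etc.)
X = rng.normal(size=(n, 6))
# Synthetic target: default flag loosely driven by the first two predictors
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

for name, model in [
    ("decision tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
    ("neural network", MLPClassifier(hidden_layer_sizes=(16,),
                                     max_iter=1000, random_state=0)),
]:
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test accuracy = {acc:.3f}")
```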
Another technique to consider is clustering, in other words grouping predictors on the basis of similarity (or dissimilarity) to find structure and reduce complexity. This helps sift through all the ‘noise’ and make more sense of the results. Credit risk clustering could be used to assess risk levels for existing credit accounts. Without going into the maths, the advantage is reduced uncertainty in the parameters estimated from statistical models: useful clusters of real credit card behaviours are obtained, and prediction and forecasting of account default can then be built on top of the clustering outcomes.
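As a rough illustration of the idea, the sketch below clusters synthetic credit card behaviour profiles with k-means and then looks at the default rate within each cluster. The features, cluster count and data are assumptions made purely for demonstration.

```python
# Illustrative sketch: k-means clustering of synthetic credit card behaviour,
# then a per-cluster default rate as a simple risk summary.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n = 1500

# Hypothetical behavioural features for each account
behaviour = np.column_stack([
    rng.beta(2, 5, n),    # credit utilisation
    rng.beta(5, 2, n),    # share of balance repaid each month
    rng.beta(1.5, 8, n),  # share of spend taken as cash advances
])
# Synthetic default flag, loosely tied to high utilisation and low repayment
default = (behaviour[:, 0] - behaviour[:, 1]
           + rng.normal(scale=0.2, size=n) > 0.1).astype(int)

X = StandardScaler().fit_transform(behaviour)
labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X)

# Clusters become the units of risk assessment, so fewer parameters need to
# be estimated from noisy individual accounts.
for k in range(4):
    mask = labels == k
    print(f"cluster {k}: {mask.sum():4d} accounts, "
          f"default rate = {default[mask].mean():.2%}")
```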
Survival analysis
Something else to think about is survival analysis, a branch of statistics for analysing the expected duration of time until one or more events happen. Events such as death, mechanical failure or loan default have been modelled and analysed using survival analysis methods. Standard credit risk models do not adequately account for time-varying factors or censoring, such as when the outcome for an individual remains unknown for the duration of the observation period. If survival-based credit risk modelling is applied instead, the probability of personal default at different points in time can be predicted.
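Here is a minimal sketch of that approach, using the lifelines package (one possible choice, not a mandated tool) to fit a Cox proportional hazards model on synthetic, partly censored time-to-default data. The covariates and numbers are illustrative assumptions; the key point is that accounts still open at the end of the observation window are kept as censored observations rather than discarded.

```python
# Illustrative sketch: survival-based credit risk modelling with lifelines.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 1000

# Hypothetical covariates and synthetic time-to-default (in months)
income = rng.normal(size=n)
utilisation = rng.normal(size=n)
true_time = rng.exponential(scale=np.exp(1.5 - 0.8 * utilisation + 0.5 * income))
observe_until = rng.uniform(6, 36, size=n)  # end of each account's observation window

df = pd.DataFrame({
    "income": income,
    "utilisation": utilisation,
    "duration": np.minimum(true_time, observe_until),
    "defaulted": (true_time <= observe_until).astype(int),  # 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="defaulted")
cph.print_summary()

# Predicted survival (i.e. non-default) probabilities at 12 and 24 months
print(cph.predict_survival_function(df[["income", "utilisation"]].head(3),
                                    times=[12, 24]))
```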
Optimisation
Optimisation is another option to support complex risk assessment. It can be used in a variety of ways; one example is determining the mix of credit products most likely to provide a combination of risk and expected return that is optimal for loan performance.
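By way of illustration, here is a small sketch of a mean-variance style product-mix optimisation using scipy. The products, expected returns, covariances and constraints are all made-up assumptions; a real exercise would use the institution’s own risk and return estimates.

```python
# Illustrative sketch: trading off expected return against risk when choosing
# a mix of credit products, using scipy's SLSQP optimiser.
import numpy as np
from scipy.optimize import minimize

# Hypothetical products (e.g. mortgages, auto loans, cards, SME loans)
expected_return = np.array([0.03, 0.05, 0.09, 0.07])
cov = np.array([                    # assumed covariance of returns/losses
    [0.0004, 0.0001, 0.0002, 0.0001],
    [0.0001, 0.0009, 0.0003, 0.0002],
    [0.0002, 0.0003, 0.0025, 0.0004],
    [0.0001, 0.0002, 0.0004, 0.0016],
])
risk_aversion = 5.0

def objective(w):
    # Negative of (expected return - risk penalty), since we minimise
    return -(w @ expected_return - risk_aversion * w @ cov @ w)

constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]  # fully allocated
bounds = [(0.0, 0.5)] * 4  # concentration cap per product

result = minimize(objective, x0=np.full(4, 0.25), bounds=bounds,
                  constraints=constraints, method="SLSQP")
print("optimal product mix:", np.round(result.x, 3))
```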
It’s about people too
The ideal environment is to have access to a variety of risk models and techniques, tools that automate the process as much as possible, and, of course, the individuals who can put all of this into action. On that point, it is vital to weigh up the pros and cons of building in-house versus seeking external help. Some banks have invested in their own teams of experts, developed home-grown numerical libraries for risk modelling, and do it very well.
However, it can quickly become cumbersome and expensive to manage and maintain the quality of numerical libraries. Open source tools have their advantages, but they also introduce unpredictable management overheads and costs, and it can be hard to keep track of what is being used, which in turn brings its own risk. Given what is at stake, it may make sense to look at external tool providers and expertise, particularly if that strategy provides access to multiple models (algorithms) that have already been tried and tested.
Final thoughts
Regardless of whether in-house or external approaches are used, the bottom line is that toolboxes of numerical libraries, together with an understanding of the best models and approaches to use, will continue to be the backbone of robust risk analysis in the financial services sector. This is why it pays to take a long, hard look at how risk analysis is being carried out within each organisation today, and to make sure the foundations are in place to deal with challenges both now and in the future.