Agile Series: Enabling agile with your software architecture

If you have the wrong software architecture, your ability to implement a truly agile engineering culture will be dramatically hampered. In previous posts, you will have noted that I outlined three core principles I like to instil within any engineering department. Those principles are:

  • Value delivery over predictability
  • Value principles more than practices
  • Autonomy is greater than control

Now, the theory behind agile and these principles is that individual teams are empowered to think for themselves, solve their own problems, work in the way that suits them best and focus on delivering products into production. Note that I have put the emphasis on individual teams. For this to happen, you need a software architecture in place that allows teams to build products that can be delivered independently of other teams and services, as much as possible.

In the very first post we talked about decoupling and distributed domains. We walked through the importance of strong cohesion and applied that to our teams. Now we need to apply exactly this concept back to our software architecture.

Strong cohesion refers to the degree to which the elements inside a module belong together, and the stronger your cohesion, the more decoupled your module can be. In an ideal world, this would mean everything that lives within a module is unique to it and not required by any other module. In the real world, that's rarely the case. Often we will find that the same functions needed in product one are also needed in product two. Traditionally, this meant product one interacting with product two, effectively binding the two together, one dependent on the other. The more this happens, the less likely you are to have individual teams able to work independently of each other: you are effectively creating a dependency on other code and on another team.

Now, this post is not going to push a given architectural model onto you, the reader, nor is it going to delve into the real detail of software architecture; that's for you to engage with your engineers and architects on. Rather, this post is to help ensure the executive, especially a CTO/CIO/COO, has the right concepts in mind when engaging with engineering about design and delivery.

Fundamentals

In any software architecture, it's important to set out the fundamentals of that architecture, and therefore the fundamentals that must reside within each engineering team. For me, there are three:

  • Security
  • Availability
  • Performance

Whatever you do, as an engineer or as an engineering team, you must ensure you develop secure code and products that are highly available, highly robust and able to meet the performance demands of your users.

Fundamentals should always be looked at as something to improve, constantly. In doing so, you move your product along with new innovations made on the underlying platforms, you migrate to new technologies, you take advantage of new concepts and, ultimately, you prevent legacy components from forming within your systems.

Microservices

This is now a very common term in software architecture, and rightfully so. However, there isn't a single, specific definition of microservices. Search Wikipedia and you will get a consensus view of what they mean. For me, though, the only thing any executive or manager needs to take away is this:

Services are small in size, messaging-enabled, autonomously developed, independently deployable.

In this single definition we cover off pretty much everything we need to know.

Microservices are small; they focus on providing just a few functions or capabilities back to the overall product. Being small keeps them focussed, and makes it easier for them to be independently developed within a team and independently deployed by that team. If they have few dependencies on other microservices, then they can be deployed almost at will, at any time. This means microservices have a few game-changing characteristics that any architecture should follow:

  1. Isolated, easier to maintain, evolve and even replace
  2. Scalable by nature

Point one focusses on our core principles: delivery over predictability, and autonomy is greater than control. Here microservices allow teams to focus on delivery, pushing services into production independently of other services, independently of other products and independently of other teams. The second point focusses on our need to scale out solutions.
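
To make this concrete, here is a minimal sketch of what such a service could look like, written in Python with Flask purely for illustration; the service name, endpoint and validation rules are all invented for the example. The point is not the framework but the shape: a handful of functions, owned by one team, with no reach into anyone else's code base.

```python
# payment_validation_service.py -- illustrative sketch only
# A deliberately tiny service: it does one job (validate a payment request)
# and can be built, tested and deployed by a single team on its own.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/validate", methods=["POST"])
def validate_payment():
    payment = request.get_json(force=True)
    errors = []
    if payment.get("amount", 0) <= 0:
        errors.append("amount must be positive")
    if not payment.get("currency"):
        errors.append("currency is required")
    # The service owns its own rules; nothing here reaches into another
    # team's code base or database.
    return jsonify({"valid": not errors, "errors": errors})

if __name__ == "__main__":
    app.run(port=8080)  # stateless, so more instances can be added at will
```

Because the service is small and stateless, the team that owns it can change it, test it and deploy it without coordinating a release with anyone else.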

Typically, traditional architecture placed a great deal of focus on performance testing, bandwidth and bottlenecks within your solution. With microservices, these considerations are focussed down to specific areas of your architecture, namely individual services. That level of focus makes it easier to ensure your solutions perform well. The point here is that a single service may be able to perform X tasks in a second; but, since it is independent, I should be able to run multiple instances of the same service to improve my performance, capacity and availability.

To illustrate this, let's think of a service which can process one payment per second. This is its maximum performance capability. You may be thinking, "well, that will never work for a bank, we process thousands of payments per second". However, since my architecture is set up for horizontal scale, I can simply deploy a second instance of the same service. I am now able to process two payments per second. So if I want to process 5,000 payments in a second, from this service's point of view, I simply deploy 5,000 instances of the same service. This is horizontal scale in action.

In a cloud environment, this is quite easy to do: scale up, and then scale back down, saving money on compute and storage. So while I may need 5,000 payments per second to be processed for an hour each morning, I may be able to scale my systems down to just 500 per second during the course of the day, and perhaps down to just a handful overnight. In addition, I have an added resilience benefit: if a handful of my service instances stop working, the system is still processing payments. I can also replace "broken" service instances with new instances, ensuring my systems are highly resilient.
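
As a back-of-the-envelope sketch of that arithmetic (the throughput figures are the illustrative ones above, not real benchmarks), assuming each instance scales linearly and independently:

```python
import math

def instances_needed(target_tps: float, per_instance_tps: float) -> int:
    """How many identical instances are needed to hit a target throughput,
    assuming each instance scales linearly and independently."""
    return math.ceil(target_tps / per_instance_tps)

# Illustrative figures only: one instance handles 1 payment per second.
print(instances_needed(5_000, 1))  # morning peak  -> 5000 instances
print(instances_needed(500, 1))    # daytime load  -> 500 instances
print(instances_needed(5, 1))      # overnight     -> 5 instances
```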

Fabrics and containers

If we understand the concept behind microservices, then we can look at the concept of fabrics and containers. Both of these technologies allow your microservices to be deployed across multiple servers, removing any dependency on a single server. So, just as we did with a single microservice, containers and fabrics allow us to run many more instances of our services, scaling them up at a server level and again providing greater performance, capacity and availability.

We are even able to deploy our containers and fabrics across different geographic compounds within the Cloud. For example, using Microsoft Azure Availability Zones, we can deploy our microservices across an array of servers running in three independent availability zones, which are in fact three different compounds geographically separated by a number of miles, each working in an active:active:active fashion. Therefore, even in the unlikely event of an entire compound becoming unavailable, our systems still operate seamlessly.

Modern architecture almost always leverages a form of microservices running within a container or on top of a fabric.

Shared services and packages

So far, we have talked about the need for strong cohesion. In the real world, we know there will be dependencies on shared functions, simply because coding out the same function time and time again is not good practice. Why have multiple teams write the same code and then maintain that very same code? To solve this while still maintaining strong cohesion, we have two options:

  1. Create shared services
  2. Allow teams to take “packages” of shared components and embed them within their code base.

Firstly, there is no right or wrong answer here; it all comes down to what you are trying to achieve. At a macro level, when a shared service is updated, the update is immediately available to every other area of your solution that uses that service, as in they all now use the updated service. This does weaken the cohesion within your platform; however, if shared services don't change that often and are able to scale out horizontally, then this implementation may be a wise option.

However, many more teams and architectures now utilise shared packages. These packages are effectively the service itself, but "added into" your own code base. A package is pulled in, which means multiple products can use the same shared functions, but each has its own copy of that function, enabling strong cohesion and strong decoupling to persist. There is an additional upside, and a downside, to this approach. The upside is that if the shared function moves on and new versions are released, you don't necessarily have to keep up with that release cycle, which decouples your teams to an extent. The downside is that you don't want your code running an outdated version, especially if a bug is found. So every product across your platform that uses the package needs to upgrade its reference to the new version and re-deploy itself. This means a single package update could lead to multiple updates and deployments elsewhere within your system.
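
To illustrate the difference, here is a hedged sketch of the two options using a made-up "account masking" function; the internal URL, package name and implementation are purely hypothetical:

```python
import requests  # illustrative; any HTTP client would do


# Option 1: a shared service. Every product calls the same running instance,
# so an update to the service is immediately live for all callers.
def mask_account_number_via_service(account_number: str) -> str:
    resp = requests.post(
        "https://shared-services.internal/mask",  # hypothetical internal URL
        json={"value": account_number},
    )
    resp.raise_for_status()
    return resp.json()["masked"]


# Option 2: a shared package. Each product pins its own copy (e.g.
# masking-lib==1.4.2 in its dependency file) and upgrades on its own schedule.
def mask_account_number_locally(account_number: str) -> str:
    return "****" + account_number[-4:]  # stand-in for the packaged function
```

With option one, every caller picks up the new behaviour the moment the shared service is redeployed; with option two, each product chooses when to upgrade and re-deploy.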

Domain Driven Design (DDD)

In an earlier post we looked at Domain Driven Design. For me, this is about stepping back further and further from the underlying solution. So, if microservices are our lowest level, then stepping back up we see the same principles applied within fabrics and containers. Stepping back further, we see how products are forming, and the need to keep things autonomous amongst our teams. For me, this is where DDD blends architecture with the teams that build the solutions.

DDD has a strategic design section which really maps the design of your products back to the teams that build them, which in turn influences what code is built and "where" it resides in the overall architecture.

A bounded context deals with larger models and is highly explicit about their relationships with each other. I personally like to use "domains" to help drive out other management concepts, such as departments, focussing on the context of the products that are to be built. I also use the term domain for "sub-domains", that is, the smaller products/modules/services within a given contextual domain.

I think a key thing the executive must remember with domains is that while, ideally, they should be independent, they will share some entity context: for example, a sales domain will have a customer, just as a support domain will have a customer. However, that does not mean that one domain takes a dependency on the other's implementation of a customer; rather, each will have its own implementation of a customer (which may be the same if shared through a "package"). The same applies to the "data" behind a customer. Both domains may hold different data on a customer, data that is needed for their specific areas, and both may hold the same data, take a name for example. However, that name data will be "duplicated", so it resides in two very separate databases, one dedicated to the sales domain, the other to the support domain. Think of this data as distributed. This replication of data isn't anything to be afraid of; rather, it allows the domains to remain independent, autonomous and decoupled.
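
A small sketch of what that can look like in code, with class and field names invented for illustration: two domains each hold their own model of a customer, some fields duplicated, some unique to the domain.

```python
from dataclasses import dataclass

# Each domain owns its own model of "customer" in its own code base and
# database. The name field is deliberately duplicated; neither domain
# depends on the other's implementation.

@dataclass
class SalesCustomer:       # lives in the sales domain
    customer_id: str
    name: str
    credit_limit: float    # sales-specific data

@dataclass
class SupportCustomer:     # lives in the support domain
    customer_id: str
    name: str              # same data, held separately (distributed/duplicated)
    open_tickets: int      # support-specific data
```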

APIs

Application Programming Interface (API) is a term that everyone within the financial services industry is getting to grips with. At an executive level, at 50,000 feet, an API is simply an interface that allows one piece of software to call another and tell that component to do its job. Simple as that.
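
A hedged sketch of that 50,000-foot view, with a hypothetical endpoint and payload: one component asks another, via its API, to initiate a payment, without knowing or caring how the work is done.

```python
import requests  # any HTTP client would do

# One component asks another, over its API, to initiate a payment.
response = requests.post(
    "https://payments.example.internal/api/v1/payments",  # hypothetical endpoint
    json={"amount": 100.00, "currency": "GBP", "to": "12345678"},
)
response.raise_for_status()
print(response.json())  # the caller neither knows nor cares how the work gets done
```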

What we must remember is that while APIs are very powerful, we should not always be looking to have components dependent on calling APIs. Why? For a simple reason: you become dependent on that API, and APIs can change, which means your software can break. Limiting the API dependencies within your own solutions is therefore increasingly important, especially if you really want to maintain autonomous teams and services that can be deployed independently. Changing an API should not break other areas of the system, so APIs need to be kept to a minimum and have strict change control associated with them.

So, while the financial services industry is starting to embrace everything API, the world is moving on at a rapid pace, replacing the direct calling of APIs with event orchestration...

Event Pattern Model or Event Driven Architecture

Events allow us to capture specific moments within our software; these can also be seen as specific business moments, such as a transaction being initiated. Think of a piece of software that produces an event, say when a customer initiates a transaction. The software raises that event, making it the producer of the event, which is posted onto an event broker. The broker has a consistent interface, so it doesn't change, though the data you post into it will. Other software components subscribe to that event, so when it's posted, all subscribers receive it and can process it accordingly.

The beauty of this model is that you can have multiple software components listening for the same events, enabling parallel processing to take place. It also means that your software components aren't directly linked to each other: in place of calling a direct API, your software posts to a consistent event broker, ensuring further decoupling from other services.
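
To show the shape of the pattern, here is a deliberately simplified, in-memory sketch; a real system would use a managed broker (Kafka, Azure Event Grid and the like), and the event names and handlers here are invented.

```python
from collections import defaultdict
from typing import Callable

# A toy event broker: producers post events by name, subscribers register
# handlers against that name. Producer and subscribers never reference
# each other directly -- only the broker.
class EventBroker:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_name: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[event_name].append(handler)

    def publish(self, event_name: str, payload: dict) -> None:
        for handler in self._subscribers[event_name]:
            handler(payload)  # every subscriber receives the same business moment

broker = EventBroker()
broker.subscribe("payment.initiated", lambda e: print("fraud check:", e))
broker.subscribe("payment.initiated", lambda e: print("notify customer:", e))

# The producer posts the event and has no idea who is listening.
broker.publish("payment.initiated", {"amount": 100.00, "currency": "GBP"})
```

Note that the producer only knows about the broker; adding a third subscriber, say an analytics service, requires no change to the producer at all.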

This event pattern model is rapidly becoming the “norm” in software architecture, especially within Cloud environments. It is a trend that is growing when we look at how third-party applications interact with our own, be that receiving or sending data. The benefits of using an event broker between your own software components/products/sub-domains/domains are exactly the same when we look to interact with third parties, allowing third parties to subscribe to events our system raises, or to initiate events by posting onto a broker.  

Summary

An event pattern model really does enable that "domain" approach to be taken by your engineering teams. From an agile perspective, following the domain approach and implementing an event pattern model enables team autonomy, removing dependencies between teams. This is further achieved by using packages for shared components, building microservices and deploying them within containers. While these architectural approaches ensure your teams can be agile and follow your agile engineering culture, they also give you the right architecture to scale to meet customer demand and capacity requirements while remaining highly available and robust.

In the next article within this series, we will look at DevOps and why this is the brother of agile…
