In all of the articles that make up this “Agile Series”, the same themes keep flowing through each piece, and this one is no different. If you want your teams to be autonomous, if you want your teams to build self-contained products, if you want them to deliver value, then you need your teams to be able to deploy their products seamlessly, consistently and rapidly: deploying from one environment to another, and ultimately into production, in a highly repeatable, robust fashion.
Enter DevOps…
So, what is DevOps?
Well, DevOps is simply the blurring of software development with what was traditionally IT operations and infrastructure. It’s not that long ago that your typical software development lifecycle had a massive dependency on your IT Operations department.
IT Ops built out the underlying infrastructure, by which I mean physical machines, operating systems, networking and underlying components.
They deployed the software that your solutions took a dependency on; basically, they built your environment, ready for your software to be deployed onto it. All of this changed with DevOps. Essentially, think of all of these things as achievable in code, in software itself: you write software that builds out infrastructure, builds out dependencies, compiles and deploys software, and even ensures certain tests are run against that software. Since it’s code, it’s repeatable, and it’s easy to deploy to different environments over and over again.
DevOps therefore enables many aspects of agile. It is a set of practices that, when they all come together, really do enable an agile engineering culture.
Infrastructure as code
The fundamental element of DevOps is that your infrastructure is built using code: you write software which builds the environments on which your software solutions will run. By being able to do this, we ensure that software is deployed into environments where the human element of risk has been removed, that the environment is 100% repeatable, and that variables cannot be missed from one environment to another.
Infrastructure as code should therefore form part of your actual software, part of any delivery within an agile environment. Infrastructure, after all, will either enable your software to work or stop it from working. Ideally, your software should include the infrastructure on which it is to run.
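To make the idea concrete, here is a minimal, illustrative sketch in Python. The EnvironmentSpec type and the apply function are invented for this example; in practice teams use a dedicated provisioning tool, but the principle is the same: the environment is described entirely in code, so it can be rebuilt identically, on demand, as many times as you like.

```python
from dataclasses import dataclass

# Hypothetical, illustrative types -- not a real provisioning library.
@dataclass(frozen=True)
class EnvironmentSpec:
    name: str          # e.g. "qa", "pre-production", "production"
    vm_count: int      # how many application servers to create
    vm_size: str       # machine size, e.g. "standard-2cpu-8gb"
    open_ports: tuple  # firewall rules, e.g. (443,)

def apply(spec: EnvironmentSpec) -> None:
    """Pretend to converge real infrastructure towards the spec.
    A real tool would call a cloud provider's API here; this sketch
    just prints what it would do, to show that the environment is
    described entirely in code and is therefore repeatable."""
    print(f"Building '{spec.name}': {spec.vm_count} x {spec.vm_size}, "
          f"ports {spec.open_ports} open")

# The same definition, applied to every environment, removes the
# human element: nothing is configured by hand.
for env in ("dev", "qa", "production"):
    apply(EnvironmentSpec(name=env, vm_count=2,
                          vm_size="standard-2cpu-8gb", open_ports=(443,)))
```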
Continuous delivery
As an executive within a financial institution, if your engineering team is being “agile”, then no doubt you have been involved in discussions that talk of continuous delivery, continuous deployment, or CI/CD. Oh, and “pipelines”. Firstly, continuous delivery is not the same as continuous deployment. Continuous delivery is about the team’s ability to produce their software products in short, iterative cycles. As they develop, they ensure the software can be reliably released at any time.
To make sure software is continuously ready to be deployed, any changes, updates, bug fixes and enhancements have to be compiled and the software made ready. Continuous delivery is, in a nutshell, that process, but automated. Once an engineer is happy with their code, they commit it back to source control and, hey presto, this triggers the compilation of the software and its integration back into the wider product. This part is continuous integration (the CI part of CI/CD). DevOps makes this all happen.
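A minimal sketch of what that commit trigger does. The step names (compile_solution, run_unit_tests, integrate_into_product) are placeholders for this example, not the API of any specific CI product; a real pipeline wires the same steps into a CI server that fires on every commit.

```python
# Illustrative only: placeholder steps standing in for a real CI server.

def compile_solution() -> bool:
    print("Compiling...")
    return True          # pretend the build succeeded

def run_unit_tests() -> bool:
    print("Running unit tests...")
    return True          # pretend every test passed

def integrate_into_product() -> None:
    print("Merging the change into the wider product build.")

def on_commit() -> None:
    """What a CI server does automatically when an engineer commits."""
    if compile_solution() and run_unit_tests():
        integrate_into_product()
    else:
        print("Build broken - the engineer is notified immediately.")

on_commit()
```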
The “CD” part of CI/CD, however, can mean continuous delivery or continuous deployment. The latter is really where you want to get to.
Continuous deployment
The difference with continuous deployment is that the software is continuously, and automatically, deployed. This doesn’t necessarily have to mean into your production environment; rather, into the next environment on the path to production.
There are many benefits to working this way. For an autonomous, independent team, being able to continuously deploy your software pays off in several ways, including:
- Ensuring deployments are consistent.
- Ensuring deployments are working as expected.
- The ability to automatically trigger automated tests against the software.
- The ability to provide “gates” so that only software that passes its tests makes it to the next phase.
- Ensuring environments are always up to date.
- Delivery of the product into production as quickly as possible.
- The de-risking of deployment errors.
The steps software takes on its journey into production are often described as a “pipeline”: the software moves through the various steps within the pipeline on its way to production.
Some agile engineering environments even have pipelines that take software at the point of it being “committed” by the engineer all the way automatically through to being deployed into production.
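A hedged sketch of such a pipeline, with the “gates” from the list above. The environment names and checks here are illustrative only; in a real pipeline each gate would be a suite of automated tests or health checks.

```python
# Each stage is a "gate": the software only moves on when the gate passes.

PIPELINE = [
    ("dev",            lambda: True),   # unit tests
    ("qa",             lambda: True),   # automated functional tests
    ("pre-production", lambda: True),   # performance / smoke tests
    ("production",     lambda: True),   # final health checks
]

def run_pipeline() -> None:
    for environment, gate_passes in PIPELINE:
        print(f"Deploying to {environment}...")
        if not gate_passes():
            print(f"Gate failed in {environment}; stopping the pipeline.")
            return
    print("Change is live in production.")

run_pipeline()
```

The design point is simply that a failed gate halts the journey automatically, so nothing that fails its tests can drift further towards production.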
I would always make this point regarding continuous deployment. Often, especially in regulated institutions, there is a fear of deployments: deployments are hard and they are risky. Because of this, they are put off until they really must be made. And because they haven’t been done frequently, the deployment is indeed hard, re-affirming the mindset that deployments are hard and risky. But, like anything, if we do it often enough and practise it, it becomes easy. If you deploy software very frequently, deployments become easier; because they are easier, they are less risky; and because they are easier and less risky, you are happy to make even more frequent deployments.
Agile engineering cultures are often quoted as making hundreds of deployments into production on a monthly, if not weekly, basis; some put the numbers in the thousands. The DevOps impact alone here puts paid to any form of “release” milestone in your PRINCE2 project management chart. DevOps makes this possible, but only if your architecture supports such independently deployable components; see the previous article.
Feature toggling
Software can make its way into production even when it is only part ready, simply by “switching” it off. By this I mean that a feature (or product) which may not be working, or is only partially built, but is deployable, can be deployed into production. However, the feature has been toggled off, making it unavailable for use within production.
You may ask what the point is of deploying into production software features that cannot be used. Well, the point is that the software is being tested to a fuller extent. It ensures the infrastructure, and everything that makes up that feature so far, is deployed as expected through the various environments and into production. It also allows you to get a feature (product) into production in its entirety, and even test it fully in production, before making it generally available to customers. This form of toggling therefore de-risks deployments massively.
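A minimal sketch of a feature toggle, assuming a simple in-memory flag store; real systems read flags from configuration or a dedicated toggle service, but the shape of the check is the same.

```python
# Flag names and values are illustrative only.
FEATURE_FLAGS = {
    "new_payments_screen": False,   # deployed to production, switched off
}

def is_enabled(feature: str) -> bool:
    return FEATURE_FLAGS.get(feature, False)

def render_dashboard() -> str:
    if is_enabled("new_payments_screen"):
        return "new payments screen"     # the partially built feature
    return "existing payments screen"    # what customers still see

print(render_dashboard())   # -> "existing payments screen"
```

The check is cheap and lives in one place, which is what makes it safe to ship partially built features and switch them on later without a deployment.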
Environments
In your Software Development Lifecycle (SDLC), software always passes through a number of environments before it is deployed into production. These environments serve different purposes, but it’s crucial to remember that software must be working as expected before it moves on to the next environment. Typical environments may include:
- Dev
- Dev test
- QA
- Staged
- Pre-production (or Golden)
- Production
Since your software should be working as expected before being deployed into the next environment, automated tests are key to a streamlined process. Automated tests can be kicked off as part of the delivery pipeline, and only when these tests pass may the deployment be seen as successful and complete. Some software may require manual testing, which can be carried out in QA, Staged and Pre-production. Only once ALL tests have passed should software move on to the next environment; this step may be triggered manually, but the process itself is automated.
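As a sketch of that promotion rule, assuming hypothetical suite names: a person may press the button, but the check and the deployment are automated, and nothing moves on unless every recorded result is green.

```python
# Illustrative only: suite names and environments are assumptions.

def promote(current_env: str, next_env: str, results: dict) -> bool:
    """Promote only if ALL recorded test suites have passed."""
    if all(results.values()):
        print(f"All suites green - deploying from {current_env} to {next_env}.")
        return True
    failed = [name for name, ok in results.items() if not ok]
    print(f"Promotion blocked; failing suites: {failed}")
    return False

promote("QA", "Staged",
        {"automated_regression": True, "manual_exploratory_signoff": True})
```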
If we have feature toggling, then it’s advantageous to take continuous deployment right into production.
Consistent deployments within environments
If our infrastructure is code and is part of our deployment pipeline, then every time we deploy our software the infrastructure is also re-deployed. The infrastructure itself, and how it is built, configured and deployed, is also being tested as part of our solution.
When supporting so many environments, infrastructure as code saves your IT Operations team many painstaking hours while removing the margin for error.
Build and destroy
Entire environments can be defined using infrastructure as code. Because of this, it can take just minutes to re-create your system’s entire infrastructure, and even deploy your entire set of solutions, all because everything is deployable as code. This has a number of obvious benefits, none more so than the ability to build an environment, use it for a short period of time and prove what you need to, before destroying it.
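A small sketch of the build-and-destroy pattern. The build_environment and destroy_environment functions are invented placeholders standing in for real provisioning code; the point is that the environment exists only for as long as it is useful.

```python
from contextlib import contextmanager

def build_environment(name: str) -> None:
    print(f"Provisioning throwaway environment '{name}' from code...")

def destroy_environment(name: str) -> None:
    print(f"Tearing down '{name}' - nothing left to pay for or maintain.")

@contextmanager
def ephemeral_environment(name: str):
    build_environment(name)
    try:
        yield name
    finally:
        destroy_environment(name)   # always destroyed, even if the test fails

# Build it, prove what you need to, destroy it.
with ephemeral_environment("load-test") as env:
    print(f"Running the load test against '{env}'...")
```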
Multiple production environments
Many organisations now run multiple production environments, typically one called blue and the other green. “Blue-green” is a deployment technique that reduces the risk of downtime simply by providing two identical production environments. It allows you to run production through one environment, say “blue”, deploy your upgrades and complete final testing on “green”, ensuring you are happy before switching customers over to the “green” environment. You then ensure “blue” is once again identical to “green” by simply redeploying your infrastructure and software.
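A sketch of the blue-green switch, with invented function names (deploy_to, smoke_tests_pass, point_traffic_at) standing in for your real deployment and routing steps.

```python
live, idle = "blue", "green"

def deploy_to(env: str) -> None:
    print(f"Deploying new release (infrastructure + software) to {env}.")

def smoke_tests_pass(env: str) -> bool:
    print(f"Running final tests against {env}...")
    return True   # pretend the checks passed

def point_traffic_at(env: str) -> None:
    print(f"Switching customers to {env}.")

deploy_to(idle)                      # upgrade the environment customers aren't using
if smoke_tests_pass(idle):
    point_traffic_at(idle)           # the switch is a routing change, not a redeploy
    live, idle = idle, live          # green is now live, blue is idle
    deploy_to(idle)                  # bring blue back in line with green
```

The point to notice is that the switch itself is a routing change rather than a deployment, which is what makes it quick and easy to reverse.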
This type of deployment is only possible because of infrastructure as code, and DevOps.
Summary
In this article we have looked at some DevOps concepts: infrastructure as code, continuous deployment, and build and destroy. But the key thing to remember as an executive is that DevOps is the blurring of software development and IT Operations: code replacing IT Operations functions.
Many principles behind agile engineering aren’t practical if you don’t have DevOps. The ability of autonomous teams to deliver independently is partly down to your software architecture, and partly down to an investment in DevOps. If your software architecture is your strategic approach, then DevOps is your ability to execute; both are required to operate a good agile engineering culture.
DevOps is massively interwoven into agile culture, but it is in part this desire for autonomous teams and the continuous deployment of independent software components, made possible by DevOps, that is often at odds with many a financial institution’s Risk function.
In the next article in this series, I will look at the role of risk, compliance and internal audit in terms of agile and DevOps.