One of the main problems teams face when practicing continuous delivery is managing zero-downtime deployments to production environments. The goal is to deploy as often as the organization's release cadence demands, without active users losing their data or sessions during a deployment. In this post I'll share some of the ideas and approaches that are commonly used to achieve zero-downtime deployments.
An important technique for reducing risk and achieving zero-downtime deployments is blue-green deployment. In a blue-green deployment, you bring up a parallel green environment alongside the live blue one, and once everything is tested and ready to go, you simply switch all traffic to the green environment and leave the blue environment idle. This also makes rollback easy: if anything goes wrong with the new installation, you switch traffic back to the blue environment. A minimal sketch of such a traffic switch is shown below.
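To make the switch concrete, here is a minimal sketch assuming a hypothetical nginx setup in which the active upstream pool lives in a dedicated include file. The pool names, hosts, and file path are all illustrative, not a prescribed layout.

```python
#!/usr/bin/env python3
"""Blue-green traffic switch sketch: point the router at the chosen pool."""
import subprocess

# Hypothetical server pools; replace with your real blue/green hosts.
POOLS = {
    "blue":  ["10.0.0.10:8080", "10.0.0.11:8080"],
    "green": ["10.0.1.10:8080", "10.0.1.11:8080"],
}
# Assumed to be included by the main nginx config.
ACTIVE_CONF = "/etc/nginx/conf.d/active_upstream.conf"

def switch_to(color: str) -> None:
    servers = "\n".join(f"    server {host};" for host in POOLS[color])
    config = f"upstream app_backend {{\n{servers}\n}}\n"
    with open(ACTIVE_CONF, "w") as f:
        f.write(config)
    # Validate the new config before reloading so a typo can't take the site down.
    subprocess.run(["nginx", "-t"], check=True)
    subprocess.run(["nginx", "-s", "reload"], check=True)

if __name__ == "__main__":
    switch_to("green")  # rollback is simply switch_to("blue")
```

Because the blue pool is left completely untouched, rolling back is just another call to the same switch with the old color.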
In a horizontally scaled environment, where multiple servers handle the load and traffic is routed to one of them by the load balancer's scheduling algorithm, you can update the servers one by one and bring each back online after it is updated. The blue-green approach applies here as well, the only difference being that during the rollout there will be N blue and N-1 green servers, where N is the number of servers in each group of the web farm. A rolling-update loop along these lines is sketched below.
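A rolling update can be expressed as a simple loop: take a server out of rotation, update it, verify it, and put it back. The sketch below assumes a /health endpoint and leaves the load-balancer and deployment hooks (drain, enable, deploy) as hypothetical stubs, since those depend entirely on your infrastructure.

```python
import time
import urllib.request

SERVERS = ["app1.internal", "app2.internal", "app3.internal"]  # illustrative hosts

def drain(host: str) -> None:
    """Hypothetical hook: tell the load balancer to stop sending new traffic."""

def enable(host: str) -> None:
    """Hypothetical hook: put the host back into the load balancer pool."""

def deploy(host: str) -> None:
    """Hypothetical hook: push the new application version to the host."""

def healthy(host: str, retries: int = 30) -> bool:
    """Poll an assumed /health endpoint until it answers 200 or we give up."""
    for _ in range(retries):
        try:
            with urllib.request.urlopen(f"http://{host}/health", timeout=2) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass
        time.sleep(2)
    return False

for host in SERVERS:
    drain(host)   # in-flight requests finish; no new requests arrive
    deploy(host)
    if not healthy(host):
        raise SystemExit(f"{host} failed health check; aborting rollout")
    enable(host)  # the remaining N-1 servers stay online throughout
```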
As long as only application code is being deployed, meeting a zero-downtime requirement is straightforward. But consider a deployment that involves changes to the database schema as well. You can't simply update the DB schema first and keep running the old application code against it, as that creates inconsistencies: the code was written to work with the old schema. Updates that involve DB schema changes therefore require extra precautions. When database changes are involved, the two approaches that help the most are:
Strive for backward compatibility by performing schema changes that won't affect the existing code, and by ensuring that the deployed code can still work with the old schema. Some points to consider (a concrete example follows the list):
- Perform schema changes in a way that won't break existing code.
- Make newly added columns NULLABLE.
- Give new columns a default value so that rows written by code unaware of them remain valid.
- Don't delete columns until no deployed code uses them, or until all of it can handle their absence.
- Use triggers or similar mechanisms to populate values that matter to one deployed version of the application.
- Enforce referential integrity only where it makes sense.
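As an illustration of the first three points, here is a small sketch using Python's built-in sqlite3 module; the table and column names are made up. Both changes are backward compatible: old code that never mentions the new columns keeps working unchanged.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
conn.execute("INSERT INTO users (email) VALUES ('old@example.com')")  # written by old code

# New column is NULLABLE, so existing rows and old INSERT statements stay valid.
conn.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

# New column carries a DEFAULT, so rows inserted by old code still get a value.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT NOT NULL DEFAULT 'active'")

# Old-style insert (unaware of the new columns) and new-style insert both work.
conn.execute("INSERT INTO users (email) VALUES ('still-old@example.com')")
conn.execute("INSERT INTO users (email, display_name) VALUES ('new@example.com', 'New User')")

for row in conn.execute("SELECT email, display_name, status FROM users"):
    print(row)
```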
Have expansion and contraction database scripts: This lets you handle the database changes that are safe to apply without breaking backward compatibility with the application code separately from those that aren't. Changes like creating new tables, adding columns, or tweaking indexes can be handled in the expansion script, together with triggers or statements that fill in default values. Once the application code has been updated, you execute the contraction script to clean up any database structures or data that are no longer needed. A sketch of such a pair of scripts follows.
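Here is a minimal sketch of an expansion/contraction pair, again using sqlite3 for illustration. The scenario is hypothetical: the application is migrating from a fullname column to a display_name column, and a trigger keeps the new column populated while the old code is still writing. Note that the column drop in the contraction script requires SQLite 3.35 or later.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, fullname TEXT);"
    "INSERT INTO users (fullname) VALUES ('Ada Lovelace');"
)

# Expansion: purely additive, safe to run while the old code is still live.
EXPAND = """
ALTER TABLE users ADD COLUMN display_name TEXT;

-- Old code still writes fullname; mirror it into the new column.
CREATE TRIGGER copy_fullname AFTER INSERT ON users
BEGIN
    UPDATE users SET display_name = NEW.fullname WHERE id = NEW.id;
END;

-- Backfill rows that existed before the expansion.
UPDATE users SET display_name = fullname WHERE display_name IS NULL;
"""

# Contraction: destructive, run only after no deployed code reads fullname.
CONTRACT = """
DROP TRIGGER copy_fullname;
ALTER TABLE users DROP COLUMN fullname;  -- needs SQLite 3.35+
"""

conn.executescript(EXPAND)
# ... deploy the new application version and verify it is stable ...
conn.executescript(CONTRACT)

print(conn.execute("SELECT id, display_name FROM users").fetchall())
```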
You should plan to execute the expansion scripts before updating the application code, and the contraction scripts only once the new application code has been deployed and is in a stable state. This has the nice benefit of decoupling database migrations from application deployments. Putting it all together, a release could be ordered as follows.
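Reusing the hypothetical helpers from the earlier sketches (EXPAND, CONTRACT, the connection, and the rolling-update hooks), the overall ordering might look like this; the verification step between rollout and contraction is deliberately left as a placeholder.

```python
# Hypothetical end-to-end ordering: expansion first, then the rolling code
# update, and contraction only once the new version has proven stable.
def release() -> None:
    conn.executescript(EXPAND)   # 1. additive schema changes; old code unaffected

    for host in SERVERS:         # 2. rolling application update (see earlier sketch)
        drain(host)
        deploy(host)
        if not healthy(host):
            # The old schema still works, so recovery is just redeploying the old code.
            raise SystemExit(f"{host} failed health check; rolling back")
        enable(host)

    # 3. In practice, wait for a soak/verification period here.
    conn.executescript(CONTRACT)  # 4. drop what nothing uses anymore
```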