What Are “Blue-Green” Or “Red-Black” Deployments?

Deployment

Chances are you’ve participated in changing or upgrading a software application. While the idea is to push new features to customers or end users, how dev teams build and deliver applications has changed significantly over the years. This shift has been necessitated by the growing need for agility in businesses.

Today, enterprises are pushing their teams to deliver new product features more often, more rapidly, and with minimal interruption to the end-user experience. Ultimately, this has led to shorter deployment cycles that translate to:

  • Reduced time-to-market
  • More updates
  • Quicker customer feedback to the production team, leading to faster bug fixes and faster iteration on features
  • More value delivered to customers in less time
  • More coordination between the development, test, and release teams.

But what has changed over the years, really? In this article, we’ll talk about the shift from traditional deployment approaches to newer, more agile deployment methods, and the pros and cons of each strategy.

Let’s dive in, shall we?

Traditional Deployment Strategies: The “One-and-Done” Approach

Classic deployment strategies required dev teams to update large parts of an application, and sometimes the entire application, in one fell swoop. The implementation happened in a single instance, and all users moved to the newer system immediately on rollout.

This deployment model required businesses to conduct extensive and sometimes difficult development and testing of the monolithic systems before releasing the final application to the market.

Characterized by on-site installations, these deployments had end users relying on plug-and-install to get the latest version of an application. Since new application updates were delivered as a whole new package, the user’s hardware and infrastructure had to be compatible with the software or system for it to run smoothly. End users also often needed hours of training on critical updates and on how to make use of the deliverable.

Pros of Traditional Deployment Strategies

Low operational costs: traditional deployment models had lower operating expenses since all departments were switched over on a single day. Also, since most applications were vendor-packaged solutions such as desktop apps or non-production systems, minimal maintenance was needed after installation.

No planning requirements: teams could just start coding without tons of requirements and specification documents.

They worked well for small projects with small teams.

Faster return on investment: since the changes occurred site-wide for all users, the returns were realized across all departments at once.

Cons of Traditional Deployments

Traditional deployment strategies presented myriad challenges, including:

  • It was extremely risky to roll back to the older version in case of severe bugs or errors.
  • Potentially expensive: since this model had no formal planning, no formal leadership, and often no standard coding practices, it was prone to costly mistakes down the line that could cost the enterprise money, reputation, and even customers.
  • It needed a lot of time and manpower to test.
  • It was too basic for modular, complex projects.
  • It needed separate teams of developers, testers, and operations staff; such large teams were slow and lethargic.
  • High user disruption and major downtime: due to the roll-over, organizations would experience a low-productivity “catch-up” period as users adapted to the new system.

As seen above, traditional deployment methodologies were rigorous and often involved repetitive tasks that consumed staggering amounts of coding hours and resources that could otherwise have gone into core application features. This kind of deployment approach just would not cut it in today’s fast-paced economy, where enterprises are looking for lean, highly effective teams that can quickly deliver high-quality apps to market.

The solution was to come up with deployment strategies that allow enterprises to release and update different components frequently and seamlessly. These deployments can happen very quickly to meet increasing end-user needs; a mobile app, for instance, may see several deployments within a day for optimum UX. This is made possible by the adoption of more agile approaches such as blue-green or red-black deployments.

What Are “Blue-Green” and “Red-Black” Deployments?

Blue-green and red-black deployments are fail-safe, immutable deployment processes for cloud-native applications and virtualized or containerized services. Both are designed to reduce application downtime and minimize risk by running two identical production environments.

Unlike the traditional approach, where engineers fix failed features by redeploying an older stable version of the application, the blue-green and red-black approaches are super-agile, more scalable, and highly automated, so bug fixes and updates roll out seamlessly.

“Blue-Green” vs. “Red-Black”: What’s the Difference?

Both blue-green and red-black deployments represent similar concepts, in that they apply to automatable cloud or containerized services such as a web service or SaaS system. Once the dev team has made an update or upgrade, the release team creates two mirrored production environments with identical sets of hardware, routing traffic to one environment while testing the other, idle environment.

So, what is the difference between the two?

The only difference lies in how traffic is routed to the live and idle environments.

In a red-black deployment, the new release is deployed to the black environment while traffic is maintained to the red environment. All smoke tests for functionality and performance can be run on the black environment without affecting how end users are using the system. When the new updates have been confirmed to be working properly and the black version is fully operational, traffic is moved to the new environment by simply changing the router configuration from red to black. This ensures near-zero downtime with the latest release.

This is similar to blue-green deployments, except that with blue-green deployments both environments can temporarily receive requests at the same time through load balancing, whereas in a red-black deployment only one environment receives traffic at any given time. This means that with blue-green deployments, enterprises can release the new version of the application to a select group of users to test and give feedback before the system goes live for all users.
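The routing difference described above can be sketched in a few lines. This is an illustrative model only: the `Router` class, the `live`/`idle` environment names, and the traffic weights are assumptions for the sketch, not a real load-balancer API.

```python
# Illustrative sketch of traffic routing in the two strategies.
# Router, "live"/"idle", and the weights are invented for this example.

class Router:
    def __init__(self):
        # Fraction of traffic sent to each environment; sums to 1.
        self.weights = {"live": 1.0, "idle": 0.0}

    def red_black_cutover(self):
        # Red-black: an all-or-nothing switch -- exactly one
        # environment receives traffic at any given time.
        self.weights = {"live": 0.0, "idle": 1.0}

    def blue_green_canary(self, idle_share):
        # Blue-green: both environments may temporarily receive
        # requests, e.g. exposing the new version to a subset of users.
        self.weights = {"live": 1.0 - idle_share, "idle": idle_share}

router = Router()
router.blue_green_canary(0.1)  # 10% of users see the new version
print(router.weights)
router.red_black_cutover()     # full switch, near-zero downtime
print(router.weights)
```

In practice the “router configuration change” would be a DNS, load-balancer, or service-mesh update, but the all-or-nothing versus weighted-split distinction is the same.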

Pros Of “Blue-Green” Or “Red-Black” Deployments

  • Seamless rollback: you can route traffic back to the still-operating environment with near-zero user disruption
  • Reduced downtime
  • Allows teams to test their disaster recovery procedures in a live production environment
  • Less risky, since test teams can run full regression testing before releasing the new version
  • Since both versions are already loaded on the mirrored environments and traffic to the live site is unaffected, test and release teams are under no time pressure to push the new release

Cons of Blue-Green and Red-Black Deployments

  • It requires double the infrastructure, since two identical production environments must be maintained
  • It can lead to significant downtime if you’re running hybrid apps that mix microservices with traditional applications
  • Database dependency: database migrations can be tricky, since the schema may need to be migrated alongside the app deployment
  • It is difficult to run at a large scale

There you have it!

Technical Debt

During the software engineering process, different issues must be dealt with, or they will subject the project to unnecessary costs later. Technical debt should be considered at each step of software development. For instance, when analyzing the cost of cloud approaches, you need to take technical debt into consideration. You should also factor in the engineering aspect when making technical decisions, such as choosing between cloud services and homegrown solutions.

What is technical debt?

Technical debt refers to the implied cost of the additional rework that will be needed on a system after the engineering process is done. For example, engineers may choose the easy option to save time during product design. The steps they skip will later need to be implemented, which can mean recalling the product or fixing it after it has reached the market, both of which cost more in resources and manpower.

What are the most common types/causes of technical debt?

Deliberate tech debt

In this case, engineers are aware of a step that is necessary during project implementation, but they ignore it in favor of a shortcut that saves cost and gets the product to market sooner. For instance, when analyzing the advantages of using the public cloud, some engineers may assume certain capabilities are unnecessary, only to realize later that they are essential and be forced to go back and procure them, leading to waste for the company. Likewise, engineers who dislike repeating a process may skip it, exposing the final product to flaws that require re-engineering.

Accidental/outdated design tech debt

After a product or piece of software is designed, technology advances over time and can render the design less effective at solving certain needs. For instance, the tools you incorporated into the software may turn out to be flawed as technology advances, making the product less effective and possibly necessitating re-engineering. Engineers may do their level best to come up with great designs, but technological advances can still make those designs less effective.

Bit rot tech debt

This is a situation where complexity develops over time. For example, a system or component can accumulate unnecessary complexity through the many changes incorporated over time. As engineers try to solve emerging needs, they can end up exposing the product to more complications, which can be costly in the long run.

Strategies for minimizing technical debt

How to minimize deliberate tech debt

To avoid this debt, track the backlog from the moment engineers start the work. If you can track the backlog and identify areas where engineers are trying to save time, you can avoid the debt.

Minimizing Accidental/outdated design tech debt

You need to refactor the subsystem every now and then so that you can identify technical debt and fix it. For example, if the software is causing unnecessary slowdowns, fix the errors and bring it up to industry standards.

Addressing Bit rot tech debt

Engineers should take time to understand the system they are running and clear out any bad code.

What is Source Control?

Acronyms, Abbreviations, Terms, And Definitions

Source Control is an information technology environment management system for storing, tracking, and managing changes to software. This is commonly done by creating branches (copies for safely developing new features) off the stable master version of the software, then merging stable feature branches back into the master version. It is also known as version control or revision control.

Netezza / PureData – How To Get A List Of When A Stored Procedure Was Last Changed Or Created

Netezza / PureData – SQL (Structured Query Language)

In the continuing journey to track down impacted objects and to determine when the code in a database was last changed or added, here is another quick SQL query, which can be used in Aginity Workbench for Netezza to retrieve a list of when stored procedures were last updated or created.

SQL List of When A Stored Procedure was Last Changed or Created

select t.DATABASE        -- Database
, t.OWNER                -- Object Owner
, t.PROCEDURE            -- Procedure Name
, o.objmodified          -- Last Modified Datetime
, o.objcreated           -- Created Datetime
from _V_OBJECT o
, _v_procedure t
where o.objid = t.objid
and t.DATABASE = '<<Database Name>>'
order by o.objmodified Desc, o.objcreated Desc;
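To show what this join does without access to a PureData appliance, here is a small simulation in Python: SQLite tables stand in for Netezza’s `_V_OBJECT` and `_v_procedure` system views, and every row value is invented for the demonstration.

```python
# SQLite stand-ins for the Netezza system views, so the join and sort
# logic of the query above can be exercised locally. All data invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE v_object    (objid INTEGER, objmodified TEXT, objcreated TEXT);
CREATE TABLE v_procedure (objid INTEGER, "database" TEXT, owner TEXT,
                          procedure TEXT);
INSERT INTO v_object    VALUES (1, '2020-02-01', '2019-01-01'),
                               (2, '2020-03-15', '2019-06-01');
INSERT INTO v_procedure VALUES (1, 'BLOG', 'ADMIN', 'LOAD_FACTS'),
                               (2, 'BLOG', 'ADMIN', 'PURGE_STAGE');
""")

# Same join condition and sort order as the Netezza query above.
rows = con.execute("""
SELECT t."database", t.owner, t.procedure, o.objmodified, o.objcreated
FROM v_object o JOIN v_procedure t ON o.objid = t.objid
WHERE t."database" = 'BLOG'
ORDER BY o.objmodified DESC, o.objcreated DESC;
""").fetchall()

for row in rows:
    print(row)  # most recently modified procedure comes first
```

On the real appliance the system views already exist, so only the `SELECT` portion applies; the point here is that joining on `objid` and sorting by `objmodified` descending surfaces the most recently changed procedure first.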

 


What is a Common Data Model (CDM)?

Data Model


A Common Data Model (CDM) is a shared data structure designed to provide well-formed and standardized data structures within an industry (e.g., medical, insurance) or business channel (e.g., human resource management, asset management), which can be applied to give organizations a consistent, unified view of business information. These common models can be leveraged as accelerators by organizations to form the foundation of their information, including SOA interchanges, mashups, data virtualization, an Enterprise Data Model (EDM), business intelligence (BI), and/or standardized data models that improve metadata management and data integration practices.


Netezza / PureData – How to add a Foreign Key

DDL (Data Definition Language)

Adding a foreign key to tables in Netezza / PureData is a best practice, especially when working with dimensionally modeled data warehouse structures and with modern governance, integration (including virtualization), and presentation semantics (including reporting, business intelligence, and analytics).

Foreign Key (FK) Guidelines

  • A primary key must be defined on the table and field (or fields) to which you intend to link the foreign key
  • Avoid using distribution keys as foreign keys
  • Foreign key fields should not be nullable
  • Your foreign key link field(s) must be of the same type(s) (e.g., integer to integer)
  • Apply standard naming conventions to the constraint name:
    • FK_<<Constraint_Name>>_<<Number>>
    • <<Constraint_Name>>_FK<<Number>>
  • Please note that foreign key constraints are not enforced in Netezza

Steps to add a Foreign Key

The process of adding a foreign key involves just a few steps:

  • Verify the guidelines above
  • Run an ALTER TABLE ... ADD CONSTRAINT SQL command
  • Generate statistics, which is optional but strongly recommended

Basic Foreign Key SQL Command Structure

Here is the basic syntax for adding a foreign key:

ALTER TABLE <<Owner>>.<<NAME_OF_TABLE_BEING_ALTERED>>
ADD CONSTRAINT <<Constraint_Name>>_FK<<Number>>
FOREIGN KEY (<<Field_Name or Field_Name List>>) REFERENCES <<Owner>>.<<Target_FK_Table_Name>> (<<Field_Name or Field_Name List>>) <<ON UPDATE | ON DELETE>> <<action>>;

Example Foreign Key SQL Command

This is a simple, one-field example of the foreign key (FK):

ALTER TABLE Blog.job_stage_fact
ADD CONSTRAINT job_stage_fact_host_dim_fk1
FOREIGN KEY (hostid) REFERENCES Blog.host_dim(hostid) ON DELETE CASCADE ON UPDATE NO ACTION;
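Since Netezza does not enforce foreign keys, it can help to see what the `ON DELETE CASCADE` in the example means in an engine that does enforce them. The sketch below uses SQLite, which cannot `ALTER TABLE ... ADD CONSTRAINT`, so the key is declared inline at table creation; the table and column names echo the example above, and all data is invented.

```python
# Demonstrating the cascade semantics of the FK above in SQLite,
# which enforces foreign keys once they are enabled. Data is invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # enforcement is off by default
con.executescript("""
CREATE TABLE host_dim (hostid INTEGER PRIMARY KEY, hostname TEXT);
CREATE TABLE job_stage_fact (
    jobid  INTEGER PRIMARY KEY,
    hostid INTEGER REFERENCES host_dim(hostid) ON DELETE CASCADE
);
INSERT INTO host_dim VALUES (1, 'etl01');
INSERT INTO job_stage_fact VALUES (100, 1);
""")

# Deleting the parent host row cascade-deletes its dependent fact rows.
con.execute("DELETE FROM host_dim WHERE hostid = 1")
remaining = con.execute("SELECT COUNT(*) FROM job_stage_fact").fetchone()[0]
print(remaining)  # 0
```

In Netezza the constraint is informational only, but downstream tools (modeling, BI, virtualization) still read it to infer the dimensional relationships, which is why declaring it remains a best practice.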


Database – What is a foreign key?

Acronyms, Abbreviations, Terms, And Definitions

Definition of a Foreign Key

  • A Foreign Key (FK) is a constraint that references the unique primary key (PK) of another table.

Facts About Foreign Keys

  • Foreign keys act as a cross-reference between tables, linking the foreign key (child record) to the primary key (parent record) of another table and establishing a relationship between the table keys
  • Foreign keys are not enforced by all RDBMSs
  • The concept of referential integrity is derived from foreign key theory
  • Because foreign keys involve more than one table, their implementation can be more complex than primary keys
  • In some databases, a foreign-key constraint implicitly defines an index on the foreign-key column(s) in the child table; in others, manually defining a matching index may improve join performance
  • SQL normally provides the following referential integrity actions for deletions when enforcing foreign keys:

Cascade

  • The deletion of a parent (primary key) record may cause the deletion of corresponding foreign-key records.

No Action

  • Forbids the deletion of a parent (primary key) record if there are dependent foreign-key records. No Action does not mean the foreign-key constraint is suppressed.

Set null

  • The deletion of a parent (primary key) record causes the corresponding foreign-key to be set to null.

Set default

  • The deletion of a parent (primary key) record causes the corresponding foreign keys to be set to a default value instead of null.
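The referential actions above are easy to verify in an enforcing engine. Here is a minimal SQLite check of the Set null action, with invented table names: deleting the parent keeps the child row but blanks its foreign key.

```python
# Checking ON DELETE SET NULL in SQLite: the child row survives the
# parent's deletion, but its foreign-key column becomes NULL.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # enforcement is off by default
con.executescript("""
CREATE TABLE parent (pid INTEGER PRIMARY KEY);
CREATE TABLE child (
    cid INTEGER PRIMARY KEY,
    pid INTEGER REFERENCES parent(pid) ON DELETE SET NULL
);
INSERT INTO parent VALUES (1);
INSERT INTO child  VALUES (10, 1);
""")

con.execute("DELETE FROM parent WHERE pid = 1")
orphan = con.execute("SELECT pid FROM child WHERE cid = 10").fetchone()
print(orphan)  # (None,)
```

Note that Set null requires the foreign-key column to be nullable, which is one reason the guideline above about nullable FK fields matters when choosing a referential action.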
