Ship Faster with Code Pipelines

“It’s not done until it’s shipped”

This statement holds true for everyone who sells products to customers, and shipping fast is a practice all of us should follow.

At ShoutOUT, we always listen to our customers and deliver what they want, because, at the end of the day, they are the ones who use our products. When a particular requirement comes up, what's crucial is delivering it quickly, before the customers move on.

To keep that promise, we continuously optimise and improve our delivery process. It's an incremental task; nobody can implement a perfect delivery process at the very beginning and keep it unchanged forever. As the application evolves, it gets bigger, more complex and sometimes distributed, so the delivery process has to change accordingly. In this post, I'm going to share the delivery process we follow as of today. Please note that we're still in the process of implementing it, and not all the required components are in place yet; what I'm sharing is the plan we'll keep improving in the near future.

Continuous delivery is not a novel topic; it has been practised by many teams for a long time. However, unlike the old days, when we had to install and configure all the tools ourselves and allocate dedicated infrastructure to run them, we now get them as flavours of *aaS. In most of today's cases, the role of a dedicated release manager or DevOps engineer is obsolete and has been transferred to the development team; it's a matter of clicking a few buttons to get things going.

Being a happy customer of AWS as always, we were able to improve our delivery process to a great extent using their DevOps services stack. They now have almost all the services required for development and deployment, so you don't need to leave AWS.

In our case, we currently use the following AWS DevOps services for our continuous deployment setup.

  • CodeCommit — Fully managed source control service (similar to GitHub or Bitbucket).
  • CodeBuild — Fully managed build service.
  • CodePipeline — Continuous integration and continuous delivery service.

Apart from the above AWS services, we also use a few third-party services: Bitbucket for source control, Ghost Inspector for UI testing and Slack for notifications, all of which are described below.

The good thing about the above AWS services is that you pay only for what you use. In particular, with CodeBuild and CodePipeline you can have the whole setup in place, but you are charged only when it runs.

Apart from the aforementioned services, Bitbucket is also part of the setup, which is again a source control service. You might wonder why there are two services for the same purpose. There are a couple of reasons: when we started, CodeCommit didn't exist yet, and it is still quite basic, whereas Bitbucket has many features that are essential to the way we practise source control. Nevertheless, with the recent addition of Bitbucket Pipelines, it fits into this picture quite nicely. So, in a way, we offload continuous deployment to AWS while using Bitbucket for development.
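To give an idea of how the sync between the two can work, here is a minimal Bitbucket Pipelines sketch that pushes merged changes to a CodeCommit mirror. The branch name and the CODECOMMIT_REPO_URL variable are illustrative assumptions rather than our exact configuration, and credentials are expected to come from secured repository variables or a Git credential helper.

```yaml
# bitbucket-pipelines.yml -- a minimal sketch, not our exact configuration.
clone:
  depth: full   # full clone so the push to the mirror is not shallow
pipelines:
  branches:
    master:
      - step:
          name: Sync to CodeCommit
          script:
            # CODECOMMIT_REPO_URL is assumed to be a secured repository
            # variable holding the HTTPS clone URL of the CodeCommit repo,
            # with Git credentials supplied via a credential helper.
            - git remote add codecommit "$CODECOMMIT_REPO_URL"
            - git push codecommit master
```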

We have configured continuous delivery for our frontends, backends and other worker services, but in this post I'm focusing on the frontend only. Given below is the logical architecture of the deployment environment we use for our frontend. For the record, our frontend is a single-page web application built with React.js and served to the public via a CloudFront CDN, with the static assets stored in an S3 bucket.

Logical Architecture of Deployment Setup

I'll briefly explain what happens here in simple terms.

  1. The development team works on forked copies of the repository and sends pull requests to the main Bitbucket repository.
  2. Once a pull request is merged, Bitbucket Pipelines starts and synchronizes the source code changes to CodeCommit.
  3. The CodeCommit update event triggers a CodePipeline execution.
  4. CodePipeline starts executing its stages (in this setup there are three stages).

 

  1. Source stage — Pulls the source code from CodeCommit and passes it on to the build stage.
  2. Build stage — Hands the source code to CodeBuild, which builds the deployment package and uploads it to S3 (the beta environment); a sample buildspec is sketched after this list.
  3. Test stage — Triggers Ghost Inspector to run the UI tests.
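
For reference, the CodeBuild buildspec for the build stage could look roughly like this. The npm commands, the build output directory and the bucket name are assumptions for illustration, not our actual values.

```yaml
# buildspec.yml -- a minimal sketch of the build stage (assumed values).
version: 0.2
phases:
  install:
    commands:
      - npm install
  build:
    commands:
      # Produce the static React build output (assumed to land in build/).
      - npm run build
  post_build:
    commands:
      # Upload the deployment package to the beta environment bucket.
      - aws s3 sync build/ s3://example-frontend-beta --delete
artifacts:
  files:
    - '**/*'
  base-directory: build
```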


Optionally, Ghost Inspector posts the test results on Slack.

Deploying to production is only a matter of copying files between S3 buckets. We handle this step manually at the moment, since we haven't finished automating all the tests. Once the test suites are in place, we can add another stage to the pipeline that automatically moves the deployment package to the production S3 bucket if all tests pass.
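
When that stage is added, the promotion itself could be as simple as a small CodeBuild step along these lines; the bucket names and the CloudFront distribution ID below are placeholders, and the cache invalidation is an optional extra.

```yaml
# A possible promotion step (not yet in our pipeline); placeholder values.
version: 0.2
phases:
  build:
    commands:
      # Copy the tested deployment package from the beta bucket to production.
      - aws s3 sync s3://example-frontend-beta s3://example-frontend-prod --delete
      # Invalidate the CloudFront cache so users receive the new build.
      - aws cloudfront create-invalidation --distribution-id EXAMPLE123 --paths "/*"
```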

As I mentioned, this is an incremental task that needs to be optimised and improved over time, so I'll keep updating this post as we make changes. If you have any comments or feedback, you are more than welcome to share them.
