
Traits of a Build and Deployment Pipeline

12-16-19 Ryan Cromwell

A great software build and deployment pipeline encourages collaboration and transparency. Here’s what we have found makes a successful pipeline.

Organizations often encourage collaboration and transparency by promoting day-to-day activities like pairing, standups, and code reviews. While these activities are important, we also find that a solid build and deployment pipeline can be the difference between a well-meaning team hitting the target and just missing it. That’s because working software provides a tangible focal point for collaboration and transparency across skill sets.

The best build and deployment pipelines let stakeholders and the project team engage early and often, focusing more on the product and less on the mechanics of the process. Let’s look at what makes a solid build and deployment pipeline, regardless of the technology stack.

No Thought Deployment

The best pipelines don’t require our teams to think about deploying. Instead, the team just decides if code is ready to be shared via a push to GitHub (or your favorite Git hosting service). The pipeline takes care of the rest.

Heroku and Netlify do this really well. When a Heroku or Netlify app is connected to a GitHub repository, you can configure it to deploy with each commit to any branch. In some situations, purpose-built services like Heroku or Netlify aren’t an option; clients may have infrastructure constraints or other demands that call for different tools. That’s okay. The same experience can be created with tools like Jenkins, Bamboo, and Azure DevOps Services.

If you’re not able to automatically deploy to production at the moment, targeting a pre-production environment is a great first step. It lets teams see, early on, the effects of their changes integrated with those of their teammates. Product teams can interact with and provide feedback on features they had otherwise been describing using lower-fidelity techniques like wireframes, designs, or descriptions. Collaboration for the win!

Automated Data and Schema Transformations

You might ask how automated deployment is possible when it comes to data transformations, schema changes, or other stateful application data changes. These can all be worked out, I assure you. Schema migrations are a common and reliable mechanism built into most modern platforms, including .NET, PHP, Ruby, and Node.

For example, Knex.js lets you write migrations that create and alter tables in popular relational databases like MySQL, PostgreSQL, and Microsoft SQL Server. When the migrations live in the Git repository, the commit history records the schema changes and data transformations alongside the application code changes that depend on them. These can follow the same pipeline to production as any other change to your system.
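
Here’s a minimal sketch of what such a migration might look like, using Knex’s standard up/down structure. The orders table and its columns are hypothetical, purely for illustration:

// A hypothetical Knex.js migration: up applies the schema change, down reverses it.
exports.up = function (knex) {
  return knex.schema.createTable('orders', (table) => {
    table.increments('id');
    table.integer('account_id').notNullable().index();
    table.integer('total_cents').notNullable().defaultTo(0);
    table.timestamps(true, true);
  });
};

exports.down = function (knex) {
  return knex.schema.dropTable('orders');
};

With migrations committed alongside the application code, the pipeline can run Knex’s migrate:latest command as part of each deploy so pending migrations are applied automatically.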

Of course, you might be concerned about tight dependencies between application code and data. Or maybe you’re worried about the performance impact of data transformations on large data sets. At times, it can be helpful to execute migrations and transformations offline or in the background. In order to do this, we need to distinguish between deploying code to an environment and releasing the functionality that code enables to users.

Decoupling deployment from release enables high-frequency deployments. It lets you make features available (released) to some user groups before they are available across your entire user base. By building your application this way and moving data transformations offline, you can deploy continuously and enable features for each group of users as soon as their data has been transformed. Think of data partitioning techniques paired with feature flags for progressive feature enablement.
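
In code, that decoupling often comes down to a simple flag check: both the old and new code paths are deployed, and the flag decides which one a given account sees. Here’s a rough sketch, assuming Express and a hypothetical in-house flags module:

const express = require('express');
const flags = require('./flags');         // hypothetical feature flag store
const legacyOrders = require('./orders');  // existing behavior
const newOrders = require('./orders-v2');  // new behavior, deployed but not yet released

const app = express();

app.get('/orders', async (req, res) => {
  const accountId = req.query.account;
  // Released only for accounts whose flag is enabled; deployed for everyone.
  if (await flags.isEnabled('new-orders', accountId)) {
    return res.json(await newOrders.list(accountId));
  }
  res.json(await legacyOrders.list(accountId));
});

app.listen(3000);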

For example, you may deploy new functionality and its corresponding data changes to production while keeping those features disabled behind feature flags. Data and schema changes can then be executed via a scheduled task, background worker, or queue while normal activity continues on the system. Once the changes are in place, the feature flag can be enabled for a set of users, who then see the new behavior. At no point is the system unavailable to them.
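
That background step might look something like the sketch below: a worker that transforms one account’s data at a time and enables that account’s flag only when its data is ready. The table names, knexfile, and flags module are assumptions for illustration:

const knex = require('knex')(require('./knexfile')); // assumed Knex configuration
const flags = require('./flags');                     // hypothetical feature flag store

async function backfillAccount(accountId) {
  // Transform this account's legacy rows into the new structure.
  const rows = await knex('orders').where({ account_id: accountId });
  if (rows.length) {
    await knex('orders_v2').insert(
      rows.map((row) => ({
        id: row.id,
        account_id: row.account_id,
        total_cents: Math.round(row.total * 100),
      }))
    );
  }
  // Only once this account's data is ready does it see the new behavior.
  await flags.enable('new-orders', accountId);
}

async function run() {
  const accountIds = await knex('accounts').pluck('id');
  for (const id of accountIds) {
    await backfillAccount(id); // normal traffic continues throughout
  }
}

run().then(() => knex.destroy());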

Collaboration Enabled

We’re always looking for ways to tighten feedback loops. Once code is merged, it’s too late to undo. While we agree with the authors of the book Accelerate that teams should roll forward with fixes rather than revert changes, we’d love to avoid the need for fixes in the first place.

To do this, we look for pipelines that allow our teams to deploy (again, without thought) potential changes to the product. In development terms, this means deploying a pull request. By integrating automatic deployments into the natural source code collaboration tools, we make high-fidelity collaboration and first-class confirmation of changes a natural part of development.

Out of the box, Heroku and Netlify both support this model of deploying pull requests automatically, which is wonderful! Continuous integration tools have long had the ability to build and, should you choose, deploy branch code. On the web, this becomes a little trickier because you need to provide a unique URL for each instance. It can be done, but we like to rely on purpose-built services like Heroku and Netlify where we can.

Precooked and Ready to Run

We want every build to produce a deployable artifact. The form that the artifact takes isn’t really important. We have some projects for which CircleCI simply creates and stores a tarball or zip file after successfully building. For others, a Docker image is created and made available. For you, it may be an installer, such as Apple’s DMG format, Microsoft’s MSI, or one of the popular Linux formats. The goal is the same: produce a runnable artifact that can be deployed many times without change.

By building once and storing the deployable result, deployments are faster and more reliable, and we have more confidence that the artifact will work in each environment. Speed and reliability create tight feedback loops and drive up confidence, two things we’re always trying to achieve.

Sparkbox works with organizations using varying processes, tools, and techniques. Over time, we have honed a set of tools that work exceptionally well. Above all, our goal is to be flexible while advocating for the practices that we see bringing success to projects. A build and deployment pipeline that encourages collaboration and transparency, without getting in the way, brings an irreplaceable heartbeat to successful projects.

