Level Up Your Development Workflow with Continuous Delivery

September 13, 2019 — We’ve now published CD Pipelines Part 2: How to Add Performance Testing and Utility Scripts to Your Deployment. Once you’ve mastered the basics of continuous integration, head over to learn how to add performance testing with Google’s Lighthouse tool and run a cleanup script to make room for your theme deployment.

Software teams today move quickly, automating the testing and deployment process to get changes into production as they occur. Instead of deploying changes in periodic major releases, small changes are deployed and tested automatically, speeding up the time to delivery and getting software into the hands of users right away. That’s essential for incorporating user feedback into your next development cycle.

A continuous delivery pipeline is the workflow that automates moving code changes to production, and it’s part of a larger set of principles about how software teams manage updates. Based on Agile software development and DevOps, continuous integration, continuous delivery, and continuous deployment all represent practices that help organizations achieve lightweight and flexible development.

Continuous delivery introduces automation into the process of deploying code changes from version control to your hosting platform, usually with the added step of testing. You can build and push to your hosting platform when merging a feature branch or after individual commits, but to maintain the advantages of continuous delivery, it’s best to build often and push changes to staging on a regular basis.

You may have come across a few related terms that sound similar but represent different practices on the delivery spectrum: continuous integration and continuous deployment. We’ll compare each of these terms to characterize the differences.

Continuous integration is the practice of frequently merging changes from local development into your code base. The goal is to prevent untested changes from building up; ideally, when changes are merged, the software should be built to make sure that everything works as expected.

Continuous delivery takes that a step further by automating the process of deployment to either a staging or production environment, usually incorporating automated testing. With continuous delivery, you may still manually run some processes within your pipeline.

Continuous deployment automates deployment to production, fully end to end. Where continuous delivery might automate some processes but take a more manual approach to production deployments, continuous deployment fully automates the entire process.

A typical continuous delivery pipeline manages code changes as they move from a version control system to the production server. The key touchpoints along the way are the commit or merge that triggers the pipeline, the build environment where the code is built and tested, the staging environment where changes can be reviewed, and the final release to production.

Continuous delivery pipelines automate deployments by executing a YAML file that tells the pipeline what environment to run the build in and which steps to take to deploy the application. In this blog post, we’ll focus on CD pipeline automation in Bitbucket, but the principles carry over to pipeline setups in other code management systems.

At runtime, a Docker image is used to create a container: a lightweight, isolated environment in which to run an application. The advantage of containerization is that it gives you a portable and consistent environment for running applications, similar to a VM but with less overhead.

Let’s look at a simple example of a Bitbucket build file (bitbucket-pipelines.yml) which uses the default Bitbucket Docker image configured to run Node.js:
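A minimal version of that file might look something like this (the Node version shown here is only an example; use whichever version your project targets):

image: node:10.15.3

pipelines:
  default:
    - step:
        script:
          # Print the Node version to confirm which image the container is running
          - node --version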

At the top of the file, we specify the version of Node.js to run inside the Docker image. The build script accepts a sequence of bash commands to execute at runtime. In this case, we’re simply running a command to check the Node version.

Note: Bitbucket’s free plan allows you to create private repositories and includes 50 minutes of pipeline build time per month.

3. In your repository settings, navigate to the Pipelines tab. Choose JavaScript as your language template. This step selects a default Docker image preloaded with common JavaScript utilities like npm.

4. Configure your pipeline build file in the editor. Try the following to create a simple build file that installs npm dependencies, installs Stencil CLI as a global package, and then executes the stencil push command. The -a flag on stencil push automatically applies the Light version of the theme and activates it on the storefront without additional command prompts.
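A build file along those lines might look something like the following sketch (the Node version is illustrative, and stencil push assumes your store’s Stencil API credentials are already available to the repository):

image: node:10.15.3

pipelines:
  default:
    - step:
        script:
          # Install the theme's npm dependencies
          - npm install
          # Install Stencil CLI globally so the stencil command is available
          - npm install -g @bigcommerce/stencil-cli
          # Bundle the theme, upload it, and apply/activate the Light variation
          - stencil push -a Light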

5. Commit a change to the master branch to run the pipeline and upload a new theme to your staging store.

Our pipeline works! But now, let’s see if we can make it more efficient. Every time we run the pipeline, a new container is created. If we define multiple steps in the bitbucket-pipelines.yml file (you can define up to 10), each of those steps creates a new container from the Docker image at runtime. You might have noticed that this means every time we run the pipeline, we install Stencil CLI again, which slows down the build.
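For example, a two-step configuration like this purely illustrative sketch would spin up two separate containers, and anything installed globally in the first step would not be available in the second:

image: node:10.15.3

pipelines:
  default:
    - step:
        name: Install dependencies
        script:
          - npm install
    - step:
        name: Push theme
        script:
          # This step starts from a fresh container, so Stencil CLI
          # must be installed again before it can be used
          - npm install -g @bigcommerce/stencil-cli
          - stencil push -a Light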

To streamline, we can create our own custom Docker image and install Stencil CLI at the image level. That way, every container generated by the image will have Stencil CLI globally installed already.
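A Dockerfile for that image can be very short; a sketch along these lines (the base image tag is just an example) would do it:

# Start from an official Node image
FROM node:10.15.3

# Let npm install global packages as the root user inside the image
RUN npm -g config set user root

# Bake Stencil CLI into the image so pipeline steps don't need to install it
RUN npm install -g @bigcommerce/stencil-cli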

We add RUN npm -g config set user root to avoid permissions issues when installing a global package and allow npm to install binaries owned by the root user. (Shoutout to Aleksandr Guidrevitch for his helpful troubleshooting article!)

4. Build the Docker image by running this command from the directory that contains your Dockerfile:
docker build -t yourdockerusername/imagename .

The image name can be anything you’d like.
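If your pipeline will pull the custom image from Docker Hub (the registry Bitbucket Pipelines checks by default when an image is referenced by name), push it to your account after building:

docker push yourdockerusername/imagename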

5. In your Bitbucket repository, update bitbucket-pipelines.yml to require the custom Docker image instead of Bitbucket’s default, and remove the Stencil CLI installation step from the build script:
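With the custom image published, the updated file might look something like this:

# The custom image already has Stencil CLI installed globally
image: yourdockerusername/imagename

pipelines:
  default:
    - step:
        script:
          - npm install
          - stencil push -a Light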

The decision to move a dependency out of the pipeline’s build script and into a custom image depends on a few factors, such as how often the dependency changes and how much time installing it adds to each build.

For the Zaneray Group, the main advantages of continuous delivery were staging environments that were always up to date with the latest features and the ability to automate deployment, leaving team members free to focus on more important tasks.

Zaneray uses Bitbucket to manage the continuous delivery pipeline for Skullcandy, a brand with multiple regional storefronts catering to markets across North America and Europe. Each regional production storefront is twinned to a staging storefront, and Dean spoke to how he’s addressed the challenge of keeping the data between the two environments in sync.

Dean describes the overall architecture of Zaneray’s pipeline setup:

“We have a master branch that we use for deployments to production, and we have a main development branch that we call staging. Anytime anything is checked into that development branch, that’s where continuous integration happens, where we deploy automatically to all our sandbox instances.”

Production deployments, on the other hand, are handled manually and deliberately. “Anytime we do a production deployment, we merge staging in with the master. And then we can go into Bitbucket and run the pipeline. There are various pipelines: you can choose to deploy to all the BigCommerce instances, or an operational subset, like the European stores.”

When the pipeline runs, it automates cleanup and tagging as part of the process. “It tags the branch with a timestamp so that we know exactly when this was associated with a production deployment and the deployment actually names the theme. So when you go into the theme management area of the BigCommerce admin, the active theme would be named accordingly.”

One of the challenges Dean faced when configuring the pipeline was that at the time, there was no -a flag on the stencil push command to automatically activate a theme version. Running stencil push would always present the user with a prompt for input before activating the theme. Dean recounts, “We came back and asked if there was a chance that we could have some kind of command line switch for that and quite honestly based on experience with other vendors we were not very hopeful that anything would happen. But then Nikita Puzanenko, who’s a Senior Product Support Engineer at BigCommerce, banged that out in an afternoon and we had a zip file for a special Stencil CLI version — that kind of made all the difference in the world.” Now, Nikita’s updates have been merged with the core version of Stencil CLI, making it possible to run the stencil push command as part of an automated pipeline.

Any last tips for other developers setting up their own pipelines?

“First, I think you need to figure out your branching strategy. And then, you need to decide what’s going to cause a deployment to your sandbox instances and make sure that that’s going to be relatively stable. You don’t want to have a situation where you’re deploying code overzealously and introducing bugs to your staging environment. Once you have that, you can start thinking about fallback strategy with tagging of release deployments. You need to have a strategy to manage your API keys so that you can do your deployments for each instance. That needs to be a part of your repository so you can use the appropriate keys and make that part of your deployment. After that, you need to make sure you have room to do your deployment. That’s where the script to remove the oldest theme comes in, and then really, it’s just a simple push at that point.”

Automating your development workflow with a continuous delivery pipeline allows you to accelerate software delivery cycles and manage code updates efficiently. In this post, we’ve seen how to set up a simple Bitbucket pipeline to automatically push theme updates to a staging environment using Stencil CLI and heard from the Zaneray Group, who described how their team uses continuous delivery in their development workflow.
