How are people managing deploys with Buildkite?

We use Buildkite to perform several build steps, and our build steps integrate with other tools to trigger deploys, but I’d like to look at using something off the shelf to report on the deploys that Buildkite runs instead.

E.g., I’d want Buildkite and its agents to do the heavy lifting of deploying to test, staging and production, but be provided with an interface to monitor which versions of apps are running in various environments, whether an application is mid-deploy, and whether a deploy has failed.

Companies with more than a couple of apps / services and where BK is doing the deploy: what are you using to get a high-level overview of those deploys?

Kubernetes + Helm + Nagios checks to verify pod status, etc.
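A Nagios-style pod check could look something like the sketch below (a guess at the setup, not the poster's actual plugin; the namespace and thresholds are illustrative). In production the not-ready count would come from something like `kubectl get pods --field-selector=status.phase!=Running`.

```shell
#!/bin/bash
# Hypothetical Nagios-style check for pod readiness.
# Exit codes follow the Nagios convention: 0 = OK, 2 = CRITICAL.
check_pods() {
  local not_ready="$1"   # count of pods not in the Running phase
  if [ "$not_ready" -gt 0 ]; then
    echo "CRITICAL: $not_ready pods not Running"
    return 2
  fi
  echo "OK: all pods Running"
  return 0
}

# Example: feed in the count from kubectl, e.g.
#   check_pods "$(kubectl get pods -n production \
#     --field-selector=status.phase!=Running --no-headers | wc -l)"
check_pods 0
```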

We use a chatbot to kick off a new Buildkite build in the proper pipeline with a special env var that tells our pipeline script to issue the deploy steps instead of the usual Docker build + CI steps. From there, we run a Helm container and `helm upgrade` away!
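A minimal sketch of such a pipeline script (the `DEPLOY` variable name, image names and commands are assumptions, not from the post): the script emits deploy steps when the chatbot-set env var is present, and the usual build + CI steps otherwise. Its output would normally be piped to `buildkite-agent pipeline upload`.

```shell
#!/bin/bash
# Hypothetical .buildkite/pipeline.sh: branch between CI and deploy steps
# based on an env var the chatbot sets when it triggers the build.
set -euo pipefail

emit_pipeline() {
  if [ "${DEPLOY:-false}" = "true" ]; then
    # Deploy path: run helm (chart path and release name are illustrative)
    cat <<'YAML'
steps:
  - label: ":helm: deploy"
    command: "helm upgrade --install myapp ./chart"
YAML
  else
    # Default path: Docker build + CI
    cat <<'YAML'
steps:
  - label: ":docker: build + test"
    command: "docker-compose run --rm app test"
YAML
  fi
}

# In a real pipeline: emit_pipeline | buildkite-agent pipeline upload
emit_pipeline
```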

Can you elaborate more on this?

  • How are you handling multiple environments?
  • What chatbot?
  • How is the chatbot connecting and authing to Buildkite?
  • Are you switching the pipeline YAML file via a script or something else?

I’m looking at Buildkite coming from a Jenkins world (where we mostly used promoted builds and build parameters). The deployment aspects definitely seem to be lacking in Buildkite (or are mostly DIY via scripts) when coming from a Jenkins or Bamboo world.

I set up two different deployments through Buildkite.

Rails app deployed with Capistrano:

  • docker-compose builds the app image
  • runs the tests
  • if it’s the master branch and tests are green, it triggers a deployment on the deployment pipeline for staging
  • production deployments are triggered manually
  • the deployment picks up the same image from the test pipeline
  • runs `cap production deploy` within a container
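The master-only gate above might be scripted along these lines (a sketch; the "deploy-staging" pipeline slug is hypothetical). The emitted fragment would be appended after the test steps, so the `wait` ensures the trigger only runs if the tests passed.

```shell
#!/bin/bash
# Hypothetical sketch: emit a trigger step only for master builds.
set -euo pipefail

emit_trigger_step() {
  local branch="$1"
  if [ "$branch" = "master" ]; then
    # Fragment appended to the steps list after the test steps;
    # `wait` means the trigger only fires if everything before it passed.
    cat <<'YAML'
  - wait
  - trigger: "deploy-staging"
    build:
      commit: "${BUILDKITE_COMMIT}"
YAML
  fi
}

emit_trigger_step "${BUILDKITE_BRANCH:-master}"
```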

The hardest part was getting ssh-agent to work within the container, because Buildkite uses ssh-agent to load keys, so they are never stored on the server.
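One common approach to this (an assumption, not necessarily what the poster did) is to mount the agent's socket into the container and point `SSH_AUTH_SOCK` at it, so Capistrano can use the loaded keys without them ever touching the container's filesystem. The image name and cap task are illustrative; the sketch prints the command rather than executing it so it stays runnable without Docker.

```shell
#!/bin/bash
# Hypothetical sketch: forward the host ssh-agent socket into the
# deploy container so `cap production deploy` can authenticate.
docker_deploy_cmd() {
  # SSH_AUTH_SOCK is set by the ssh-agent the Buildkite agent started
  printf '%s ' docker run --rm \
    -v "${SSH_AUTH_SOCK:-/tmp/agent.sock}:/ssh-agent" \
    -e SSH_AUTH_SOCK=/ssh-agent \
    myorg/deploy \
    bundle exec cap production deploy
}

docker_deploy_cmd
echo
```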

Phoenix app deployed to Kubernetes (AWS):

  • docker-compose to build the production image
  • a custom aws-helm-kubectl Docker image to run the deployment
  • the pipeline build is triggered manually (you could use a chatbot, but don’t have to)
  • it waits for input, for example the number of containers you want to deploy
  • then it builds the production image and pushes it to the registry
  • after that it runs `helm upgrade XYZ` to apply the changes on the cluster
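The steps above map naturally onto a Buildkite block step with an input field, whose value is read back later with `buildkite-agent meta-data get`. Here is a sketch emitting such a pipeline (app, chart, and registry names are made up):

```shell
#!/bin/bash
# Hypothetical sketch of the Phoenix deploy pipeline: pause for input,
# build and push the image, then helm upgrade with the chosen replica count.
set -euo pipefail

emit_deploy_pipeline() {
  cat <<'YAML'
steps:
  - block: ":rocket: Deploy settings"
    fields:
      - text: "Number of containers"
        key: "replicas"        # read back with `buildkite-agent meta-data get replicas`
        default: "2"
  - label: ":docker: build + push"
    command: "docker-compose build app && docker push myrepo/app:${BUILDKITE_COMMIT}"
  - wait
  - label: ":helm: upgrade"
    command: "helm upgrade myapp ./chart --set replicaCount=$(buildkite-agent meta-data get replicas)"
YAML
}

emit_deploy_pipeline
```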

Originally I wanted a separate user account to do the deployment to AWS, but I couldn’t get it to work. Within the aws-helm-kubectl container I was still authenticated as the Buildkite role and not my user, so I just gave the Buildkite role permission to run updates on the cluster.

Honestly, setting it up is not that hard. If your process works outside of Buildkite, you can always get it done through Buildkite as well. Most of your time will be spent recreating the environment for deployment and fighting with permissions.

To answer your questions specifically:

  1. each environment has its own pipeline
  2. you don’t need a chatbot
  3. I believe the idea is to use GitHub Deployments that are triggered, e.g., through a chatbot, and they trigger a build in the Buildkite pipeline the same way push events trigger builds (you can even trigger this build manually)
  4. you can define multiple pipelines, where each has its own YAML file
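On the auth question: a chatbot or GitHub Deployments handler can create a build through Buildkite's REST API with an API token. A sketch of the payload (org and pipeline slugs, branch, and the env var are all illustrative; the token would come from your secret store):

```shell
#!/bin/bash
# Hypothetical sketch: build the JSON payload for Buildkite's
# "create a build" REST endpoint and POST it with an API token.
set -euo pipefail

trigger_build_payload() {
  # $1 is the commit SHA to deploy
  cat <<JSON
{
  "commit": "${1}",
  "branch": "master",
  "message": "Deploy ${1} to staging"
}
JSON
}

# Usage (commented out so the sketch runs without network access):
# trigger_build_payload "$SHA" | curl -s -X POST \
#   -H "Authorization: Bearer $BUILDKITE_API_TOKEN" \
#   -d @- \
#   "https://api.buildkite.com/v2/organizations/my-org/pipelines/deploy-staging/builds"
trigger_build_payload "abc123"
```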

I wish Buildkite would allow pipeline groups, so we could separate deployment pipelines from test pipelines in the UI.