Is it possible to share volumes between steps?

Hi!

I have the following pipeline.yml:

build-plugins: &build-plugins  # YAML anchor so each step can reuse the same plugin config
  - docker-login#v2.1.0:
      username: bgr11n
      server: ghcr.io
      password-env: GHCR_TOKEN
  - docker-compose#v4.12.0:
      run: app
      config:
        - docker-compose.ci.yml
      env:
        - RAILS_ENV=test
        - DOCKER_CLIENT_TIMEOUT=120
        - COMPOSE_HTTP_TIMEOUT=120
        - BUILDKITE_ANALYTICS_TOKEN=123
        - CDN_HOST=cdn.pflocal.dev
      volumes:
        - /tmp/asd/bundle/:/usr/local/bundle/  # host path : container path

steps:
  - label: ":docker: Prepare"
    command: bundle install
    plugins: *build-plugins

  - wait: ~

  - label: ":rubocop: Rubocop"
    command:
      - bundle exec rubocop -A -f emacs -o rubocop.txt
    plugins: *build-plugins

In the first step I pull all the images and install the gems.
In the second step I expect not to need to run bundle install again because of the /tmp/asd/bundle/:/usr/local/bundle/ volume mount, but it turns out that files on the host are not being synced into the container.

Could you please help? Maybe I am doing something wrong?
Or maybe there is a better approach to avoid running bundle install on every step?

Thank you!

Hey @bgr11n (Bohdan, I’m assuming!)

Thanks for reaching out to us, hope you are well :wave:

An interesting use case - and I can see the appeal of the volume mount in this situation. Since there is no node affinity between the two steps, the host directory on the second step could well be on a completely different host than on the first.

Since you’d need these to run on the same host (i.e. the same agent), node affinity should help in this case (for example, run these two jobs on the agent that performed the pipeline upload).
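A minimal sketch of that, assuming your agents are started with --tags-from-host (which gives each agent a hostname tag) and reusing your build-plugins anchor. Environment variables in pipeline.yml are interpolated when the pipeline is uploaded, so every step below gets pinned to the host that performed the upload:

steps:
  - label: ":docker: Prepare"
    command: bundle install
    plugins: *build-plugins
    agents:
      # interpolated at upload time, so this targets the uploading agent's host
      hostname: "$BUILDKITE_AGENT_META_DATA_HOSTNAME"

  - wait: ~

  - label: ":rubocop: Rubocop"
    command: bundle exec rubocop -A -f emacs -o rubocop.txt
    plugins: *build-plugins
    agents:
      hostname: "$BUILDKITE_AGENT_META_DATA_HOSTNAME"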

Cheers :slightly_smiling_face:

Hi @james.s, thank you for your reply!

Could there be some other way to do it? From the docs it looks like this may bring some performance issues, and in my real pipeline I will have about four steps like this.

Also, I have a question regarding docker-compose. I have postgres and redis services defined in docker-compose.yml; does this mean that on every step (from the example above) I need to seed the DB?

Or maybe there is a better way to run different steps efficiently against the same docker-compose config?

Maybe it would be better to run those steps within a group step?

Thank you!

I was thinking about uploading artifacts and then downloading them in other steps, but to be honest it looks like they were not designed for this.

Hey @bgr11n :wave:

As @james.s mentioned, node affinity is the way to go here. One option would be to run multiple agents on the same host, so that a single job isn’t tying up all of the host’s resources, and target the agents on that host, where the volume mounts will persist.
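For example (a sketch, assuming you start several agents on the build host with the agent’s --spawn flag and a shared queue tag - the queue name here is hypothetical):

# on the build host: buildkite-agent start --spawn 3 --tags "queue=ruby-builds"
steps:
  - label: ":docker: Prepare"
    command: bundle install
    plugins: *build-plugins
    agents:
      queue: ruby-builds  # served only by the agents on that host, so the volume mounts persist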

Regarding your docker-compose question: if you are tearing down your dependencies after each step, then you’ll need to re-seed after each one. Configuring your pipeline to seed the DB, run the steps that need it, and only then tear it down would most likely be the way to go.
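For example, a sketch assuming a Rails app (where db:prepare creates, loads the schema, and seeds the database if it doesn’t exist yet), again reusing your build-plugins anchor:

steps:
  - label: ":rspec: RSpec"
    command:
      - bundle exec rails db:prepare  # sets up and seeds the DB for this step's fresh containers
      - bundle exec rspec
    plugins: *build-plugins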

For clarification, group steps are only a visual grouping of steps in the Buildkite UI; they don’t affect the behaviour of the steps within the group.

Re: artifacts - that’s absolutely how they are intended to be used! If there are files you want to share between steps, uploading them from one step and downloading them in a subsequent step is exactly their purpose :grin:
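A minimal sketch of that pattern for your case (shown without the docker-compose plugin, since the buildkite-agent binary needs to be available wherever the download command runs; installing into vendor/bundle inside the checkout is an assumption, so the gems can be uploaded):

steps:
  - label: ":docker: Prepare"
    command: bundle install --path vendor/bundle  # install into the checkout so the gems can be uploaded
    artifact_paths: "vendor/bundle/**/*"

  - wait: ~

  - label: ":rubocop: Rubocop"
    command:
      - buildkite-agent artifact download "vendor/bundle/**/*" .
      - bundle exec rubocop -A -f emacs -o rubocop.txt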

Hope that helps! :slightly_smiling_face:

@jeremy Got it, thank you!