How to Make Docker Compose Use the Same Running Instance?

Hi again,

After successfully testing and getting started with Buildkite pipelines, I created a pipeline based on docker-compose:

steps:
    - label: "Create docker image"
      plugins:
          - docker-compose#v4.14.0:
                build: app

    - wait

    - label: "Setup"
      command: npm run setup
      plugins:
          - docker-compose#v4.14.0:
                run: app
                env:
                    - AWS_REGION
                    - AWS_ACCESS_KEY_ID
                    - AWS_SECRET_ACCESS_KEY

    - wait

    - label: "Compile"
      command: npm run compile
      plugins:
          - docker-compose#v4.14.0:
                run: app

    - wait

    - label: "Test"
      command: npm run test
      plugins:
          - docker-compose#v4.14.0:
                run: app

    - wait

    - label: "Start"
      command: npm run prod
      plugins:
          - docker-compose#v4.14.0:
                run: app

The setup and compile jobs create files and directories that are used by the start job, but the start job cannot find those files because the agent runs a new container for each job. Should we concatenate the commands, such as npm run setup && npm run compile ..., or is there anything we can do to use the same running instance? I preferred the approach above because it seems like better practice for designing pipelines.
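
For reference, the concatenated variant would look something like this (a sketch only, reusing the same service and plugin version as above):

steps:
    - label: "Setup, compile, test and start"
      command: npm run setup && npm run compile && npm run test && npm run prod
      plugins:
          - docker-compose#v4.14.0:
                run: app
                env:
                    - AWS_REGION
                    - AWS_ACCESS_KEY_ID
                    - AWS_SECRET_ACCESS_KEY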

Hey @muratgozel!

We advise against this because of its inefficiency (uneven distribution of work amongst agents) and the potential unreliability it can cause (an agent or instance going offline).

However, it sounds like what you’re looking for is node affinity: the ability to target the host that ran the upload command for all jobs.
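
As a rough sketch only (this assumes the agents are started with --tags-from-host so each one exposes a hostname tag, which jobs can read as BUILDKITE_AGENT_META_DATA_HOSTNAME), a step could be pinned to the uploading host at pipeline-upload time:

steps:
    - label: "Setup"
      command: npm run setup
      # interpolated when the uploading agent runs "buildkite-agent pipeline upload",
      # so this step only runs on an agent with the same hostname tag
      agents:
          hostname: "${BUILDKITE_AGENT_META_DATA_HOSTNAME}"
      plugins:
          - docker-compose#v4.14.0:
                run: app

The same agents block would need to be repeated on every step that has to share the host.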

Let me know if that’s going to meet your requirements here. Like I said, we advise against it and would actually recommend running test and compile in the same step, perhaps then uploading to ECR (or another image storage solution) and then using that image in your prod step.
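
As a sketch of that recommendation (the ECR address below is just a placeholder, the plugin's image-repository option is one way to push and later reuse the built image, and it assumes the Dockerfile itself runs the compile and test so their output is baked into the pushed image):

steps:
    - label: "Build, test and push image"
      plugins:
          - docker-compose#v4.14.0:
                build: app
                image-repository: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app

    - wait

    - label: "Start"
      command: npm run prod
      plugins:
          - docker-compose#v4.14.0:
                run: app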

Cheers!

Thank you, this was the answer I needed. I guess I will redesign my architecture.

Hi again. For anyone who sees this question and is looking for guidance, I took the following approach: I build and upload the image in the first step and then deploy it with docker-compose up. The Dockerfile does the app build and test during the first step, so it's like a pipeline inside a pipeline.

steps:
    - label: "Create container build"
      command: ./devops/cd/image.sh
      env:
        NODE_ENV: production
        CONTAINER_CONN_STR: ghcr.io/path/to/repo

    - wait

    - label: "Deploy"
      command: ./devops/cd/deploy.sh
      concurrency: 1
      concurrency_group: 'path/to/repo/deploy'
      env:
        NODE_ENV: production
        CONTAINER_CONN_STR: ghcr.io/path/to/repo
        APP_PORT: 3030

The Dockerfile runs the build and test stages:

# Build stage: install dependencies and build the app
FROM node:18-alpine3.17 AS build

RUN ...

# Test stage: run the test suite against the build output
FROM build AS test

RUN ...

# Production stage: only built if the build and test stages succeed
FROM test AS prod

RUN ...

CMD npm run fetch-env && npm run prod
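
The actual image.sh and deploy.sh scripts are not included here; a hypothetical sketch of what they could look like (the tag scheme and the app service name are assumptions, not the real scripts):

#!/usr/bin/env bash
# Hypothetical sketch of devops/cd/image.sh: building the prod target also runs
# the build and test stages, so a failing test fails this step before any push.
set -euo pipefail

TAG="$CONTAINER_CONN_STR:$BUILDKITE_BUILD_NUMBER"

docker build --target prod -t "$TAG" .
docker push "$TAG"

#!/usr/bin/env bash
# Hypothetical sketch of devops/cd/deploy.sh: pull the freshly pushed image and
# restart the compose service (assumes the compose file reads IMAGE and APP_PORT).
set -euo pipefail

export IMAGE="$CONTAINER_CONN_STR:$BUILDKITE_BUILD_NUMBER"
export APP_PORT

docker compose pull app
docker compose up -d app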

Thanks @muratgozel for sharing your approach!