Hi, I'm a newbie, and I have been fighting a cascade of issues in our pipeline, which uses the Buildkite Elastic CI Stack for AWS.
Our setup is a bit old: it was on stack version v5.11.0 with a bunch of old plugins. An issue with one of those plugins prompted me to upgrade the stack to v6.22.1, and it deployed successfully via the AWS CLI. So that's great!
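For reference, the upgrade was essentially a CloudFormation stack update pointed at the versioned template; roughly this (the stack name is a placeholder and I've trimmed our real parameter list):

# Update the existing stack to the v6.22.1 Elastic CI Stack template.
# Stack name and parameter values below are placeholders, not our real ones.
aws cloudformation update-stack \
  --stack-name buildkite-elastic-ci \
  --template-url "https://s3.amazonaws.com/buildkite-aws-stack/v6.22.1/aws-stack.yml" \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM \
  --parameters \
    ParameterKey=BuildkiteQueue,ParameterValue=16cores-1agent \
    ParameterKey=BuildkiteAgentToken,UsePreviousValue=true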
Now I am migrating a pipeline that was running jobs on the old stack (unclustered agents) to the new one (clustered agents). Unfortunately, Buildkite's documentation has gaps on how to set this up correctly, so here is what I tried:
- added a new cluster
- defined two queues in it, default and 16cores-1agent (the latter matching the stack parameter BuildkiteQueue), and picked self-hosted infrastructure for both (see the token sketch after this list)
- switched the pipeline from the unclustered area to this cluster
- tried building a branch that is known to build on the old stack
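One assumption I made while setting this up: each cluster has its own agent tokens, so I believe the stack needs to register with a token from the new cluster rather than the old unclustered token. Please correct me if that's wrong; what I mean is something along these lines (placeholder values again):

# Assumption on my part: agents must use the new cluster's token to pick
# up jobs from the clustered 16cores-1agent queue.
aws cloudformation update-stack \
  --stack-name buildkite-elastic-ci \
  --use-previous-template \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM \
  --parameters \
    ParameterKey=BuildkiteAgentToken,ParameterValue=<new-cluster-agent-token> \
    ParameterKey=BuildkiteQueue,ParameterValue=16cores-1agent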
When I build, I get a strange error, and I can't tell where it is coming from or why:

Have I configured something wrong? Does the nienbo/cache-buildkite-plugin require some special parameter or configuration?
Here’s a snippet from the pipeline upload YAML:
cache: &cache
  id: my_app
  backend: s3
  key: "v1-cache-{{ id }}-{{ git.commit }}"
  restore-keys:
    - "v1-cache-{{ id }}-{{ git.commit }}"
  s3:
    bucket: my-test-bucket
  paths:
    - .

steps:
  - label: ":buildkite: meta-data"
    command: .buildkite/scripts/set_version_meta_data.sh my_app

  - wait

  - label: ":cpp: build my_app"
    command: .buildkite/scripts/build_my_app.sh
    plugins:
      - chronotc/metadata-env#v1.0.0:
          keys:
            - my-app-release-version=MY_APP_RELEASE_VERSION
      - docker-login#v2.1.0:
      - docker#v5.3.0:
          image: "docker-registry.something.com/images/ubuntu-22.04:0.2.0"
          environment:
            - MY_APP_RELEASE_VERSION
      - nienbo/cache#v2.4.16: *cache
      - artifacts#v1.8.0:
          upload: "build/my_app/my_app-*"
    agents:
      queue: "16cores-1agent"
I have studied the plugin's documentation and all the logs in CloudWatch, and nothing hints at the cause. Can anyone tell me what the issue could be?
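One more thing I plan to rule out, since the cache plugin uses the s3 backend: whether the new stack's agent IAM role can reach the cache bucket at all. I was going to run something like this from inside a job (bucket name as in the snippet above):

# Quick bucket-access check from a job on the new stack.
aws s3 ls s3://my-test-bucket
echo probe > /tmp/probe.txt
aws s3 cp /tmp/probe.txt s3://my-test-bucket/probe.txt

Is that a sensible check, or is there a better place to look?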