Cool! Tomorrow I'll spend a few hours to propose the change here: https://github.com/buildkite/conditional! I'll send it to you on the issue and here, in case I can help with something!
It is also worth mentioning that we built a workaround with a hook: at decision time we look up the meta-data and turn it into environment variables. It is definitely not the best approach, though, since the meta-data will not always be present when the step runs.
Hi folks! After a deep discussion we've decided not to support meta-data in conditionals just yet, sorry! We want to do it well, and have an answer for what happens when meta-data changes over the lifecycle of a build, and we haven't quite figured that out yet.
For now, you can do this with the buildkite-agent and a bit of scripting, and it's not super different, like:
# buildkite.yml
steps:
  - block: "Request release"
    prompt: "Fill out the details for release"
    fields:
      - text: "Version"
        key: "version"
        hint: "Include the version according to the standard: 2.x.x.x"
        required: true
      - select: "Type"
        key: "release-type"
        default: "stable"
        options:
          - label: "Stable"
            value: "stable"
          - label: "Beta"
            value: "beta"
          - label: "Debug"
            value: "debug"
  - command: |
      if [[ "$$(buildkite-agent meta-data get release-type)" == "stable" ]]; then
        buildkite-agent pipeline upload buildkite-release-stable.yml
      fi
  # or even
  - command: buildkite-agent pipeline upload "buildkite-release-$$(buildkite-agent meta-data get release-type).yml"
I'll hijack this thread to ask you something… We have an artifact dependency between jobs: in this case, before I call stable.yml I generate an artifact. How do I get this artifact inside another step that was uploaded by the agent?
I'd love for the if: clause mechanism to be made more powerful. At my company we use top-level git diffing to drive a lot of downstream decisions w.r.t. which parts of a build hierarchy to run (complex mono-repo!)
And I'd love to be able to build a (more flexible) mechanism to use diff output in if: clauses, which is complicated by the limitations of which data sources can be fed into this mechanism.
If there were some more general mechanism to feed data (files) reliably and without formatting changes into inspectable variables for downstream pipelines and steps, that'd open up a large swathe of improvements and simplifications w.r.t. ease-of-use for our developers.
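For the monorepo case above, here is a minimal sketch of one way to map `git diff` output to per-directory pipeline uploads. The directory names (`app`, `libs`) and pipeline file paths are made-up examples, not anything from the thread:

```shell
#!/bin/bash
# Sketch: decide which sub-pipelines to upload from the list of
# changed paths in a monorepo. Directory-to-pipeline mapping is
# hypothetical; adjust to your repo layout.
set -euo pipefail

pipelines_for_changes() {
  # Reads changed file paths on stdin, prints one pipeline file per
  # affected top-level directory, de-duplicated.
  awk -F/ '{ print $1 }' | sort -u | while read -r dir; do
    case "$dir" in
      app)  echo ".buildkite/app.yml" ;;
      libs) echo ".buildkite/libs.yml" ;;
    esac
  done
}

# In a CI step you might then run:
#   git diff --name-only "origin/main...HEAD" \
#     | pipelines_for_changes \
#     | xargs -r -n1 buildkite-agent pipeline upload
```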
Hey, I'm actually trying to do something similar, just wondering what's the difference between using buildkite-agent to upload a pipeline that triggers the target pipeline, and directly using buildkite-agent to upload the target pipeline?
And if I want to use the proposed method, how do I pass arguments when doing the upload?
For example
# pipeline.yml
- command: |
    if [[ "$$(buildkite-agent meta-data get release-type)" == "stable" ]]; then
      buildkite-agent pipeline upload trigger.yml
    fi
I don't think there is much difference between the target and trigger pipelines; it's just an example. In this case the release pipeline most likely already exists, we don't want to do a pipeline upload for it, and a trigger makes more sense.
If I was going to create trigger.yml like that, I would probably do it as a dynamic pipeline which would make passing those values a little easier.
# pipeline.yml
- command: |
    if [[ "$$(buildkite-agent meta-data get release-type)" == "stable" ]]; then
      .buildkite/trigger.sh | buildkite-agent pipeline upload
    fi
Thanks for the reply! I have another similar question: since meta-data doesn't work in the if clause of a trigger, will it work in build.env or build.meta_data? That is, is it possible for me to pass meta-data from the current pipeline to another pipeline with a trigger?
And how does pipeline upload actually work, or what's the expected behavior of pipeline upload?
If I have a pipeline with 4 steps, and if I do a pipeline upload at step 2, what will happen? Will it overwrite step 3 & 4 and run the new steps uploaded, or will it put the uploaded pipeline in between step 2 & 3, or will it run the uploaded pipeline somewhere else?
I tried running
buildkite-agent pipeline upload mypipeline.yml
and the step ran successfully, but nothing really happened. I don't see anything from mypipeline.yml running.
Hey @jeffhappily! The upload command will insert the steps from the uploaded file into the build immediately after the upload step.
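To illustrate with the four-step example from the question above (step names are hypothetical), the uploaded steps land right after the step that ran the upload, and the remaining original steps still run afterwards:

```yaml
# Original pipeline: step1, step2 (does the upload), step3, step4.
steps:
  - command: "step1.sh"
  - command: "buildkite-agent pipeline upload extra.yml"  # extra.yml contains stepA, stepB
  # stepA and stepB are inserted here, so the build order becomes:
  # step1, step2, stepA, stepB, step3, step4.
  - command: "step3.sh"
  - command: "step4.sh"
```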
If you send us through a message to support@buildkite.com with a link to your build and the issues you are having we can help you with your pipeline in a bit more detail.
Hi @Dave, welcome to the forum! Yeah, I think that might be a leftover fragment of a design decision we made that's largely to do with conditionals being evaluated at the time of pipeline upload vs. how meta-data might change over the course of a build. I'll cross-link a post from @sj26 about this on GitHub.
Hi folks! After a deep discussion we've decided not to support meta-data in conditionals just yet, sorry! We want to do it well, and have an answer for what happens when meta-data changes over the lifecycle of a build, and we haven't quite figured that out yet.
Thereās a description of a workaround using scripting in the previously linked issue:
Thanks for the reply, @anon18884907, makes sense, and I appreciate the workaround.
The weird use case I have is an initial step that checks a database for a ready/not-ready value to determine whether or not to run the rest of the pipeline.
I thought perhaps something like 3 pipeline files:
pipeline.yml contains the initial step and publishes one of the other pipelines based on its result
noop.yml just echoes that there is nothing to do
dostuff.yml has the full rest of the pipeline
Is that sensible? What would pseudocode for that look like, or is there something cleaner and simpler?
tbh I think this is probably nicer than using metadata in conditionals - I can keep the dostuff.yml pipeline clean
Hmm - kinda sounds like you could achieve that with a dynamic pipeline upload on the one pipeline?
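A sketch of what that selector step could look like with the three files from the post above (`noop.yml`, `dostuff.yml`); the database check itself is stubbed out, and the "ready" flag value is an assumption:

```shell
#!/bin/bash
# Sketch of the initial step in pipeline.yml: map the database's
# ready/not-ready answer to the pipeline file to upload.
set -euo pipefail

pick_pipeline() {
  # "ready" -> run the real pipeline; anything else -> the no-op one.
  local status="$1"
  if [ "$status" = "ready" ]; then
    echo "dostuff.yml"
  else
    echo "noop.yml"
  fi
}

# In the real step you would do something like:
#   status="$(./scripts/check-release-status)"   # your DB query
#   buildkite-agent pipeline upload "$(pick_pipeline "$status")"
```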
If you need any further help with this you could also send us a message at support@buildkite.com with your pipeline(s) and we can look at them a little deeper for you?
This conditional feature would be helpful for us, as developers want to manually skip the UAT step when committing to specific branches. They expect a block step with a "deploy in ***" selection (YES or NO), so that the next step (like triggering another pipeline in DEV or UAT) can be skipped if the selection is "NO".