We have some pipelines that depend heavily on build metadata, and have steps that look like:
```yaml
- command: |
    export SOMEVAR1=$(buildkite-agent meta-data get some-var-1)
    export SOMEVAR2=$(buildkite-agent meta-data get some-var-2)
    ./script
```
Given the close conceptual relationship between environment variables and meta-data (they’re both data that come from “the environment,” be that parent processes or other Buildkite jobs), it would be valuable to be able to formally initialize environment variables from meta-data, like so:
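(The original example appears to be missing here. As a sketch only, the feature might look something like the following — the `env-from-meta-data` key is invented for illustration and is not real Buildkite syntax:)

```yaml
steps:
  - command: ./script
    # Hypothetical syntax, not supported by Buildkite today:
    # each env var would be initialized from a meta-data key when the job starts.
    env-from-meta-data:
      SOMEVAR1: some-var-1
      SOMEVAR2: some-var-2
```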
First of all, welcome to the Buildkite community, and thank you for reaching out with your questions.
We do not have any updates yet on this feature request. Regarding a workaround, as mentioned in the initial post, the current option is to use `buildkite-agent meta-data get var` to fetch the meta-data value and assign it to an environment variable.
Please let us know if you are observing any issues with that approach at the moment.
But the workaround mentioned will only export the variable for a single step, from what I understand. I want to export it across the whole pipeline, and I can’t export it in the same step because I am using the docker-compose plugin for one of my steps.
You will need to execute that command in each step where you need the environment variable.
And the docker-compose plugin allows you to mount the buildkite-agent into the container so that you can do this at runtime.
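For example, the docker-compose plugin’s `mount-buildkite-agent` option makes the agent binary available inside the container (option name per the plugin’s README; the version pin and service name here are illustrative):

```yaml
steps:
  - command: |
      export SOMEVAR1=$(buildkite-agent meta-data get some-var-1)
      ./script
    plugins:
      - docker-compose#v4.16.0:  # version pin is illustrative
          run: app  # "app" is a placeholder service name
          # Mounts the agent binary into the container so that
          # `buildkite-agent meta-data get` works at runtime.
          mount-buildkite-agent: true
```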
If you need it before then, you can use dynamic pipelines to upload the steps whilst defining the variable so that it is available for the command.
Hi folks! Thanks for the feedback, it’s an interesting idea. But we don’t have any immediate plans to do this. It’s a bit complicated to think about with the lifecycles involved. Env is defined when a job is uploaded, while meta-data changes over the course of a build. It’s also possible to achieve the result today with the commands already mentioned.
If you want to use env across multiple steps at once I’d advise using the top-level env key:
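For example (note these values are plain strings, fixed at the time the pipeline is uploaded):

```yaml
# Top-level env: available to every step in the pipeline.
env:
  DEPLOY_TARGET: staging  # example variable, not from the original thread
steps:
  - command: ./deploy.sh
```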
Longer term we’re considering doing more powerful interpolation of values including meta-data into dynamic pipeline uploads and commands.
I’m trying to export for the whole pipeline using the top-level env, but I need to run a command to get my environment values:
```yaml
env:
  CURRENT_UID: $(id -u):$(id -g)
```
But I noticed that for environment variables the value is not “executed”; it gets assigned the command string instead. Am I doing it wrong here?
That is working as expected @saurabh. Env variables are plain string values. To achieve what you want, you’ll need to dynamically generate the pipeline YAML using a command step, then upload the generated YAML with `buildkite-agent pipeline upload`.
Yeah, sorry @saurabh, we don’t support command interpolation into pipeline YAML files directly. But you can do the interpolation with bash first, in a dynamic pipeline like Jarryd suggests:
```bash
#!/usr/bin/env bash
# Bash will interpolate the command results before the YAML is passed to the upload command:
buildkite-agent pipeline upload <<YAML
env:
  CURRENT_UID: $(id -u):$(id -g)
steps:
  - ...
YAML
```
I’ve been thinking about this as well. At least in the pipelines I write, I’m usually not trying to share meta-data between completely unrelated steps: there’s typically a wait step or a chain of dependencies from the step getting the meta-data back to the step setting it, and so, as an author, I can be confident that the meta-data will be set by the time it’s read later.
The issue is that meta-data is handled in an imperative way (any step can set or get meta-data at any time) rather than a declarative way, such as described in the original post. If steps declared which meta-data they read, and which meta-data they may set, then static analysis of the pipeline would become possible, and the Buildkite server/orchestration could validate that meta-data reads and writes are non-concurrent (that there’s either a wait/block step or a dependency separating each phase of reads and writes).
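A rough sketch of what that declarative shape could look like — the `meta-data-sets`/`meta-data-gets` keys are invented for illustration and are not real Buildkite syntax:

```yaml
steps:
  - command: ./compute-version.sh
    # Hypothetical: declares the keys this step may write.
    meta-data-sets: [release-version]
  - wait
  - command: ./publish.sh
    # Hypothetical: declares the keys this step reads, so the server
    # could verify a wait/dependency separates every writer from every reader.
    meta-data-gets: [release-version]
```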