Group steps together with individual steps

Hello,

Is there a possibility to have group steps in a YAML file, but also steps outside of a group?

Pretty much, I am trying to have certain steps like setup and docker build as individual steps, and then have a group of around 3 steps related to running tests.

Hello, @UnknownTester404!

Sure, you can have both non-group and group steps in your pipeline YAML configuration. For instance, take a look at the example in our documentation here, where there are two group steps and an individual command step in a YAML config file.
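As a rough sketch of the general shape (the labels and scripts below are just placeholders, not the example from the docs):

steps:
  - label: ":hammer: Build"
    command: "scripts/build.sh"

  - group: ":test_tube: Tests"
    steps:
      - label: "Unit tests"
        command: "scripts/unit-tests.sh"
      - label: "Integration tests"
        command: "scripts/integration-tests.sh"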

Best!
Karen

I’ve tried that, and it throws an error saying that command is not accepted.

Are you able to share more info about the error so we can help diagnose? If you can share a build link, that would be the most helpful :pray:

The error received:

fatal: Failed to upload and process pipeline: Pipeline upload rejected: `command` is not a valid property on a step group. Please check the documentation to get a full list of supported properties.

And I have something like this in the YAML file:

steps:
  - label: "Something outside"
    key: otherStepWhichIsNotInGroup

  - group: "My Group"
    depends_on:
      - otherStepWhichIsNotInGroup
    steps:
      - label: "Something 1"
        command: "scripts/script1.sh ${VARIABLE}"
      - label: "Something 2"
        command: "scripts/script2.sh ${VARIABLE}"
      - label: "Something 3"
        command: "scripts/script3.sh ${VARIABLE}"
      - label: "Something 4"
        command: "scripts/script4.sh ${VARIABLE}"
    plugins:

Hey @UnknownTester404

Thanks for sharing that example; it’s really helpful for understanding the context of what is being uploaded and figuring out what the issue is.

From what you have shared, the reason for the error is the plugins: attribute on the Group Step; this attribute is only available on Command Steps. But since some plugins can be executed without specifying the command: attribute on a step (as they run command hooks), the error message can be ambiguous about what it is referring to. Here is an example of a plugin being run without a command: attribute:

steps:
  - plugins:
      - docker#v5.11.0:
          image: "mesosphere/aws-cli"
          always-pull: true
          command: ["s3", "sync", "s3://my-bucket/dist/", "/app/dist"]
    artifact_paths: "dist/**"

This is mentioned in the docs: Command step | Buildkite Documentation

For the example shared, it will need to be updated with the plugins: attribute specified on each of the steps within the Group Step, e.g.:

steps:
  - label: "Something outside"
    key: otherStepWhichIsNotInGroup
    command: ...

  - group: "My Group"
    depends_on:
      - otherStepWhichIsNotInGroup
    steps:
      - label: "Something 1"
        command: "scripts/script1.sh ${VARIABLE}"
        plugins:
          - some-plugin:
             ...
      - label: "Something 2"
        command: "scripts/script2.sh ${VARIABLE}"
        plugins:
          - some-plugin:
             ...
      - label: "Something 3"
        command: "scripts/script3.sh ${VARIABLE}"
        plugins:
          - some-plugin:
             ...
      - label: "Something 4"
        command: "scripts/script4.sh ${VARIABLE}"
        plugins:
          - some-plugin:
             ...

I’ll also mention that if the plugin used across all of the steps within the Group Step is the same and uses the same configuration, it is possible to use YAML anchors and aliases to reduce duplicate code, e.g.:

common:
  - docker_plugin: &docker
      docker#v3.3.0:
        image: something-quiet

steps:
  - label: "Read in isolation"
    command: echo "I'm reading..."
    plugins:
      - *docker
  - label: "Read something else"
    command: echo "On to a new book"
    plugins:
      - *docker

More details on YAML anchors and aliases in pipelines, and how to override them, are in our docs here.
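As a rough sketch of overriding an aliased value (this assumes the anchor is placed on the plugin’s configuration map rather than on the whole plugin entry as above, and that the parser honours the YAML merge key <<):

common:
  - docker_config: &docker-config
      image: something-quiet

steps:
  - label: "Read loudly"
    command: echo "I'm reading out loud!"
    plugins:
      - docker#v3.3.0:
          # merge in the shared config, then override the image key
          <<: *docker-config
          image: something-loud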

Hope that helps, but let us know if you have any more questions or issues!

Cheers,

Tom

Unfortunately it does not work, since it cannot reference a metaenv variable that is set in a previous step via a command:

  - label: "Install"
    key: install
    command:
      - buildkite-agent meta-data set "DK_IMAGE" ${BUILDKITE_PLUGIN_DOCKER_IMAGE}

The DK_IMAGE is passed as something like:

      - docker#vx.x.x:
          image: "{metaenv.DK_IMAGE}"

Hey again @UnknownTester404 :wave:

You’ll need to use buildkite-agent meta-data get "DK_IMAGE" in your subsequent build step, either as part of your build script, or via a hook in order to ensure the value is available for those subsequent steps.

An example could be in an environment hook you check for the meta-data and then set it as an environment variable:

#!/bin/bash

if buildkite-agent meta-data exists "DK_IMAGE"; then
  export DK_IMAGE=$(buildkite-agent meta-data get "DK_IMAGE")
fi

But the value is needed in the plugin part.

How should I handle it in the plugin configuration for each step in the group?

    plugins:
      - 'plugin 1':
      - docker#vx.x.x:
          image: "{metaenv.DK_IMAGE}"
          propagate-environment: true
          mount-buildkite-agent: true
          workdir: /app
          volumes:
            - /app/node_modules
          environment:
            - ????

@UnknownTester404 :wave:

So in the context of passing the value of DK_IMAGE from meta-data into the plugin, you’ll need to ensure that you are getting the meta-data with the buildkite-agent meta-data get command, at which point you would be able to reference the value of DK_IMAGE directly in your pipeline YAML.

In the step that uploads this pipeline YAML to Buildkite with buildkite-agent pipeline upload, you would have an environment hook (you can use other hooks that run before command, but for this example I’m going to use environment) which checks for the meta-data with key DK_IMAGE, then downloads that meta-data and exports it into the job environment so it can be referenced by the pipeline upload command. I’ll break this down below with some examples:

Let’s assume for this example I’ve set the meta-data for this build already with this command:

buildkite-agent meta-data set "DK_IMAGE" "foo:1.2.3"

and I have the following pipeline YAML in my repo under .buildkite/pipeline.yaml:

plugins:
      - plugin-1:
      - docker#vx.x.x:
          image: $DK_IMAGE
          propagate-environment: true
          mount-buildkite-agent: true
          workdir: /app
          volumes:
            - /app/node_modules
          environment:
            - ????

And I have the following environment hook configured on my agent:

#!/bin/bash
# Environment hook

if buildkite-agent meta-data exists "DK_IMAGE"; then
  export DK_IMAGE=$(buildkite-agent meta-data get "DK_IMAGE")
fi

What ends up happening is that when buildkite-agent pipeline upload runs, the value of $DK_IMAGE is interpolated into the pipeline YAML received by Buildkite, which will look like this:

plugins:
      - plugin-1:
      - docker#vx.x.x:
          image: foo:1.2.3
          propagate-environment: true
          mount-buildkite-agent: true
          workdir: /app
          volumes:
            - /app/node_modules
          environment:
            - ????

It’s important to note that you cannot access meta-data directly in your pipeline YAML (via a declaration like {metaenv.DK_IMAGE}), but you can access it via the meta-data command mentioned above. Some other valuable reading would be our docs around environment variables, as there are some nuances that are good to be aware of!

Hopefully the above helps, but feel free to reach out with any further questions :slight_smile:

To be honest, it is unclear where I should add this:

Environment hook

#!/bin/bash

if buildkite-agent meta-data exists "DK_IMAGE"; then
  export DK_IMAGE=$(buildkite-agent meta-data get "DK_IMAGE")
fi

The variable is set in a previous step, so even if I have that logic in a sh script, should I call it in each follow-up step which uses the plugin with the metaenv.DK_IMAGE?

In the example I shared, the exported value of DK_IMAGE is actually configured in the environment hook - a bash script that will run in every job.

Where you would have that hook depends on the platform you have installed your agent on - for example on macOS with an M1 processor, the hooks are saved in /opt/homebrew/etc/buildkite-agent/hooks.

You can read more about hooks in our docs here :slight_smile:

Every step that refers to that image can then reference the value of DK_IMAGE, as I have in my example above.

The problem is that I don’t have access to that part (agent configuration) since it is configured by the client’s DevOps team and they are not so eager to make changes to the agents.

That certainly makes things more challenging! You do have some options around storing those hooks in the repository.

You can also include the same commands as part of a build script that runs before running the pipeline upload, if you want to avoid using hooks entirely.
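For instance, a minimal sketch of that build-script approach, with the export happening inside the step that performs the upload (the step label and the second pipeline file path below are just illustrative):

steps:
  - label: ":pipeline: Upload test steps"
    command: |
      # assumes an earlier step has already set the meta-data,
      # e.g. buildkite-agent meta-data set "DK_IMAGE" "foo:1.2.3"
      export DK_IMAGE=$(buildkite-agent meta-data get "DK_IMAGE")
      buildkite-agent pipeline upload .buildkite/pipeline.tests.yml

Hope that helps!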

Hello,

I’ve tried the approach with a repository pre-command hook, but it seems that it doesn’t work.

So this is what gets printed:

 .buildkite/hooks/pre-command
Pre-command repo hook execution
Pre-command repo hook executed
DK_IMAGE is: *IMAGE NAME IS PROPERLY PRINTED HERE*

The pre-command hook is something like:

#!/bin/bash

set -euo pipefail

echo "Pre-command repo hook execution"
if buildkite-agent meta-data exists "DK_IMAGE"; then
  echo "Pre-command repo hook executed"
  DK_IMAGE=$(buildkite-agent meta-data get "DK_IMAGE")
  echo "DK_IMAGE is: ${DK_IMAGE}"
  export DK_IMAGE
fi

The step is:

- label: "Label Me"
   key: label
   command: command here
   plugins:
      - 1
      - 2
      - docker#vX.X.X:
          image: $DK_IMAGE
          propagate-environment: true
          mount-buildkite-agent: true
          workdir: /workdir
          environment:
            - VARIABLE 1
            - VARIABLE 2

I also tried with image: $$DK_IMAGE

@UnknownTester404 :wave:

In this scenario, $DK_IMAGE is the correct syntax.

You’ll want to ensure this hook is run on the job that performs the buildkite-agent pipeline upload in your pipeline - because of the way that environment variable interpolation works in Buildkite, DK_IMAGE must be set in the job’s environment before the pipeline is uploaded, so that when the pipeline is parsed during upload the value of DK_IMAGE is correctly interpolated.
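As a sketch of the ordering this implies (the second pipeline file name here is hypothetical, and the Install step is taken from your earlier snippet): the step that sets the meta-data has to finish before the job that uploads the steps referencing DK_IMAGE, and the hook that exports DK_IMAGE has to run on that uploading job:

steps:
  - label: "Install"
    key: install
    command: buildkite-agent meta-data set "DK_IMAGE" "${BUILDKITE_PLUGIN_DOCKER_IMAGE}"

  - label: ":pipeline: Upload test steps"
    depends_on: install
    # your pre-command (or environment) hook runs in this job and exports DK_IMAGE
    # before the upload command below interpolates it
    command: buildkite-agent pipeline upload .buildkite/pipeline.tests.yml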

Sorry, but isn’t the pre-command hook running before each command? Am I missing something?

Hey @UnknownTester404 ,

This is Suma from Buildkite support team jumping in on this.

Since the variable interpolation happens at the time of pipeline upload, the suggestion is to set the environment variable using the environment hook.

Otherwise, when the pipeline gets uploaded, the image tag will be empty, which I think is what you are observing.

If not, please send the build url to support@buildkite.com so we can take a look at this further and let you know what else could be the issue here.

As I saw in the documentation, the environment hook can only be set on the agent, which I mentioned I don’t have access to. At the repository level it seems that it isn’t supported. I tried to set it in the repo hooks folder, but it wasn’t working.

:wave: Unfortunately, as Suma mentioned, because interpolation happens at pipeline upload, you’ll need to use an environment hook.
You can send us your build links so we can look into the issue further.