I have a project where I want to run a lengthy build process and then execute a suite of tests on the build output, all inside the same execution environment.
In short: I build a big beefy suite of C++ programs in a containerized environment with a large number of dependencies. After the build, I want to execute a large suite of integration tests against the same software, and I want each integration test to appear as an individual step in Buildkite.
In other systems such as CircleCI, this is trivial, because you can spin up a single runner that will exist for the entire duration of the pipeline, and the filesystem sticks around between individual jobs in the pipeline.
Buildkite, on the other hand, has a much “cleaner” paradigm in that jobs within a pipeline execute in a completely isolated environment. This helps guide you towards clean CI/CD architecture, but makes it slow/overcomplicated to run a quick step within a larger pipeline.
The options I have now, and why they aren’t great:
- Use build artifacts: Keeps steps isolated, at the cost of spinning up/down a large number of extra containers and re-downloading the artifacts once per test step, which imposes a very high time overhead
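For reference, a minimal sketch of the artifact approach (labels, the `dist/` path, and the test runner script are placeholders; the `artifact_paths` key and `buildkite-agent artifact download` are real Buildkite features):

```yaml
steps:
  - label: ":hammer: Build"
    command: "make build"          # placeholder build command
    artifact_paths: "dist/**/*"    # upload build outputs as artifacts

  - wait

  - label: ":test_tube: Integration test A"
    command: |
      # every test step pays for a fresh container plus an artifact download
      buildkite-agent artifact download "dist/**/*" .
      ./run-test.sh test-a         # placeholder test runner
```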
- Use a single job/step, and a bash script: Now I lose per-test job status in Buildkite, though I could potentially use the Buildkite CLI/API to push custom steps onto the build as it runs?
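The "push steps onto the running build" idea would presumably look something like this, using `buildkite-agent pipeline upload` (which does accept generated steps on stdin; the generator script is hypothetical):

```yaml
steps:
  - label: ":hammer: Build + generate test steps"
    command: |
      make build                   # placeholder build command
      # emit one YAML step per test and inject them into the running build;
      # note the generated steps still run as separate, isolated jobs, so
      # this alone does not give them the build's filesystem
      ./generate-test-steps.sh | buildkite-agent pipeline upload
```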
…
I wish I could somehow define a group of jobs which all run on a single agent in sequence and share the filesystem / execution environment, each job executing as a subshell in that environment. The benefit would be twofold: less time spent waiting for jobs to spin up and copying artifacts around, and the ability to define the pipeline declaratively in the Buildkite config instead of in a big shell script.
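The closest approximation I can think of (and I'd love to be corrected): pin every step to a queue served by exactly one agent, and rely on that agent reusing its per-pipeline build directory between jobs, which only holds if clean checkouts (`BUILDKITE_CLEAN_CHECKOUT`) are disabled. The queue name here is hypothetical:

```yaml
steps:
  - label: ":hammer: Build"
    command: "make build"
    agents:
      queue: "monolith"   # hypothetical queue with exactly one agent

  - wait

  - label: ":test_tube: Integration test A"
    command: "./run-test.sh test-a"
    agents:
      queue: "monolith"   # same single agent -> same build directory,
                          # provided clean checkout is disabled
```

This feels fragile, though: it serializes everything onto one machine and depends on agent-side configuration rather than anything the pipeline itself guarantees.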
What is the canonical way of doing this? Am I missing something conceptually that would make this style of pipeline work better in Buildkite, or should I be thinking about the problem another way?