Question about "Controlling Concurrency"

Until a coworker pointed me to Controlling concurrency | Buildkite Documentation, I was completely unaware of this cool use case of concurrency groups to create concurrency gates.

I have a concern though that maybe someone in Buildkite can alleviate.

As far as I can tell, this pattern works because, by default, jobs with the same concurrency_group end up in a FIFO queue for evaluation, meaning a second run of the same pipeline cannot execute its “Start of concurrency gate” step before the previous run completes its “End of concurrency gate” step (as long as the concurrency_method is NOT eager).
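For reference, the documented pattern looks roughly like this (a minimal sketch based on my reading of the docs; the group name `deploy-gate` and the deploy command are illustrative, not from the docs verbatim):

```yaml
steps:
  # Every build queues this job in the "deploy-gate" concurrency group.
  # With concurrency: 1 and the default (non-eager) concurrency_method,
  # jobs in the group are dispatched one at a time, in FIFO order.
  - command: echo "Start of concurrency gate"
    concurrency_group: deploy-gate
    concurrency: 1
  - wait
  # Critical steps protected by the gate (illustrative command).
  - command: ./deploy.sh
  - wait
  - command: echo "End of concurrency gate"
    concurrency_group: deploy-gate
    concurrency: 1
```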

However, this only works if you assume that all steps in a single pipeline are atomically added to a given concurrency queue, which is not a guarantee I have been able to find articulated anywhere in the docs (probably because, at first blush, that’s an implementation detail).

Before I depend on this pattern for mission-critical pipelines, I wanted to confirm that this underlying assumption is actually guaranteed by the implementation.


Example failure scenario if atomicity is not guaranteed:

  • Pipeline invocations A and B arrive at virtually the same time
  • Invocation A adds the “Start of concurrency gate” job to the queue
  • Invocation B adds the “Start of concurrency gate” job to the queue
  • Invocation A adds the “End of concurrency gate” job to the queue
  • Invocation B adds the “End of concurrency gate” job to the queue
  • Result: both pipelines execute their critical steps in parallel, contrary to what I wanted
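To make the concern concrete, here is a toy model of my own (an assumption-laden simplification, not Buildkite’s actual scheduler): treat the concurrency group as a FIFO queue with a limit of 1, and check whether two builds can both be inside the gate at the same time.

```python
def simulate(queue):
    """Toy model of a concurrency_group with limit 1, processed in FIFO order.

    Each entry is (build, kind) where kind is 'start' or 'end'.
    A build is "inside the gate" between its start job finishing and its
    end job running. Returns the set of builds that were ever inside the
    gate at the same time as another build.
    """
    inside = set()
    overlaps = set()
    for build, kind in queue:
        if kind == 'start':
            if inside:  # another build is already past its start gate
                overlaps.update(inside | {build})
            inside.add(build)
        else:  # 'end'
            inside.discard(build)
    return overlaps

# Atomic enqueue: each build's jobs are contiguous -> mutual exclusion holds.
atomic = [('A', 'start'), ('A', 'end'), ('B', 'start'), ('B', 'end')]
# Interleaved enqueue (the feared scenario) -> both builds pass the start gate.
interleaved = [('A', 'start'), ('B', 'start'), ('A', 'end'), ('B', 'end')]

print(sorted(simulate(atomic)))       # []
print(sorted(simulate(interleaved)))  # ['A', 'B']
```

So the gate is only a mutex if enqueueing is atomic per build; under the interleaved ordering, both builds end up inside the gate together.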

So, basically, the question is: is the server code definitely implemented to guarantee that the above cannot happen?

It is probably a scenario that will occur only very rarely at worst, because the timing would have to be just perfect… but I cannot accept the risk of it happening at all for my specific use case, so I do need to know for sure that this concurrency problem is protected against.

@jerry Can you please confirm whether, in the example scenario you mentioned above, A and B are two different pipelines or two builds triggered for the same pipeline?

Also, you want to define parallelism for certain steps in those pipelines, correct?

I can’t speak for the OP but I’m interested in this scenario when A and B are two builds triggered for the same pipeline. Is the described concurrency issue possible in that case?

Ah, I never came back to this because I decided not to depend on this mechanism, given the unclear danger. My intent was for A and B to be two invocations of the same pipeline that needed to run mutually exclusively.

The way this pattern is described in the documentation, it looks like a simple mutex implementation that locks and unlocks around the block of steps in between. But that mutex may not work as intended if the interleaving of steps I proposed above is possible when two invocations arrive at once, and I could not find a clear articulation that the design/implementation prevents this.

Hi @jerry,

We do have a blog post with an example of builds running within a concurrency gate: Concurrency Gates - Buildkite Blog. However, it only shows builds triggered from within the same pipeline.

Cheers!