BUILDKITE_JOB_LOG_TMPFILE not set in Kubernetes stack

I’m trying to move from the CloudFormation stack to the Kubernetes stack. I have a pre-exit hook that analyzes my build log to put interesting details in annotations rather than forcing folks to look at logs. Unfortunately, it fails on the new stack because BUILDKITE_JOB_LOG_TMPFILE doesn’t appear to be defined. Is there an alternative? Or config that I’m missing?

Hi @ianwremmel! The BUILDKITE_JOB_LOG_TMPFILE environment variable requires the agent to be started with the --enable-job-log-tmpfile flag (or enable-job-log-tmpfile=true in the agent config file), but I don’t believe this is available in the Kubernetes stack. As an alternative, you could capture the logs yourself by redirecting your step’s command output to a file with tee, as described in Managing log output | Buildkite Documentation, and then analyze that file in your pre-exit hook instead.
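For example, a step command along these lines (the log path and the command being captured are placeholders — substitute your own):

```shell
#!/usr/bin/env bash
# Stream output to the job log while also capturing it to a file that a
# pre-exit hook can read later. pipefail preserves the real exit status
# of the command even though it is piped through tee.
set -o pipefail
LOG_FILE="${LOG_FILE:-/tmp/job-output.log}"

# Placeholder for your actual build/test command:
echo "example build output" 2>&1 | tee "$LOG_FILE"
```

Your pre-exit hook can then read the same path (e.g. /tmp/job-output.log) instead of BUILDKITE_JOB_LOG_TMPFILE.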

Gotcha. I actually just stumbled across this line in the bootstrap script I’m using in my CF stack:

echo 'enable-job-log-tmpfile=true' >> /etc/buildkite-agent/buildkite-agent.cfg

Is there an equivalent script for k8s agents? Or would it be enough to add that line to my k8s agent Dockerfile (the one used by k8s, not the one used by the docker-compose plugin)?
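Something like this, perhaps (the base image tag and the config path inside the container are guesses on my part — adjust for however your agent image is built):

```dockerfile
# Hypothetical k8s agent image; config path may differ in your base image.
FROM buildkite/agent:3
RUN echo 'enable-job-log-tmpfile=true' >> /buildkite/buildkite-agent.cfg
```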

Hi @ianwremmel, adding that line to your k8s agent Dockerfile would be enough. Although I do want to call out that the tmpfile would be created in the agent container’s filesystem. Your hooks run in the command container, so this means you need a shared volume between the agent and command containers for your pre-exit hook to be able to access the log file.
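As a rough sketch, you could patch the pod spec to mount an emptyDir volume into both containers — note the exact field names, container names, and mount path here are illustrative, so check the agent-stack-k8s documentation for your version before relying on them:

```yaml
# Hypothetical podSpecPatch: share an emptyDir between the agent and
# command containers so both can see the same files.
steps:
  - label: "build"
    command: make build
    plugins:
      - kubernetes:
          podSpecPatch:
            containers:
              - name: agent        # agent container (name is an assumption)
                volumeMounts:
                  - name: shared-logs
                    mountPath: /tmp/shared
              - name: container-0  # command container (name is an assumption)
                volumeMounts:
                  - name: shared-logs
                    mountPath: /tmp/shared
            volumes:
              - name: shared-logs
                emptyDir: {}
```

You would also need the tmpfile to actually land under the shared mount (for example by pointing the agent’s temp directory there) for the pre-exit hook to find it.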