Self-Hosted Azure DevOps Build Agent Using Docker - Azure DevOps 2019 and Above - Docker in Docker
Problem Statement
Initial Reading: Microsoft Documentation - Running a self-hosted agent in Docker
Our Scenario:
- Self-Hosted build agents
- Using Docker for the pipelines
- Need to utilize resources effectively (instead of setting up many VMs as build agents)
- Need to run some automation testing (which involves Docker containers) in the pipeline
Issue:
- The standard scripts provided by Microsoft cannot handle DinD (Docker in Docker)
- docker-compose with volume mapping does not work at all under DinD
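The underlying reason is that when Docker commands inside the agent container talk to the host's Docker daemon, bind-mount source paths are resolved on the host filesystem, not inside the agent container. A quick illustration (the path /azp_work/src is hypothetical):
# Run inside the agent container. /azp_work/src exists here, but the host
# daemon resolves the source path on the HOST, where it does not exist,
# so Docker silently creates and mounts an empty directory instead.
docker run --rm -v /azp_work/src:/src alpine ls /src
The fix described below is to make the relevant paths exist at the same location on the host and inside the agent container.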
Solution
First, we need to build a Docker image to run the build agents in a Docker environment.
Dockerfile
The two volume declarations below are the key part:
VOLUME /var/lib/docker
VOLUME /azp
From the host we will map directories into these volumes. You can find that mapping later in this article, in the docker-compose file.
# Pinned to 18.04 so the package names below (libcurl4, libicu60) match the release;
# adjust the libicu/libcurl packages if you move to a newer base image.
FROM ubuntu:18.04

# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes

# docker-ce is not in Ubuntu's default repositories, so register Docker's official apt repository first
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        apt-transport-https \
        ca-certificates \
        curl \
        gnupg \
        lsb-release && \
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        ca-certificates \
        curl \
        jq \
        git \
        iputils-ping \
        libcurl4 \
        libicu60 \
        libssl-dev \
        libunwind8 \
        netcat \
        docker-ce
RUN curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose && \
    chmod +x /usr/local/bin/docker-compose

# Set up the init script
WORKDIR /azpinit
COPY ./start.sh .
RUN chmod +x start.sh

# Build agent work folder
WORKDIR /azp

# These are important to get docker-in-docker (DinD) volume mappings working
VOLUME /var/lib/docker
VOLUME /azp

# Start script
CMD ["/azpinit/start.sh"]
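Once start.sh (shown next) sits alongside this Dockerfile, the image can be built and tagged to match the name referenced in the docker-compose file later in this article:
# Build the agent image; the tag must match the image name used in docker-compose.yml
docker build -t vstsagent:latest .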
Build Agent Initialize Script File (start.sh)
We have slightly modified the script from Microsoft's standard documentation, as follows.
#!/bin/bash
set -e
if [ -z "$AZP_ROOT" ]; then
  echo 1>&2 "error: missing AZP_ROOT environment variable"
  exit 1
fi

if [ -z "$AZP_URL" ]; then
  echo 1>&2 "error: missing AZP_URL environment variable"
  exit 1
fi

if [ -z "$AZP_TOKEN_FILE" ]; then
  if [ -z "$AZP_TOKEN" ]; then
    echo 1>&2 "error: missing AZP_TOKEN environment variable"
    exit 1
  fi

  AZP_TOKEN_FILE=/azp/.token
  echo -n "$AZP_TOKEN" > "$AZP_TOKEN_FILE"
fi
unset AZP_TOKEN
if [ -n "$AZP_WORK" ]; then
  mkdir -p "$AZP_WORK"
fi

# This is required if you are planning to run many build agents on one VM/server
rm -rf "/$AZP_ROOT/agent"
mkdir "/$AZP_ROOT/agent"
cd "/$AZP_ROOT/agent"
export AGENT_ALLOW_RUNASROOT="1"
cleanup() {
  if [ -e config.sh ]; then
    print_header "Cleanup. Removing Azure Pipelines agent..."
    ./config.sh remove --unattended \
      --auth PAT \
      --token $(cat "$AZP_TOKEN_FILE")
  fi
}

print_header() {
  lightcyan='\033[1;36m'
  nocolor='\033[0m'
  echo -e "${lightcyan}$1${nocolor}"
}
# Let the agent ignore the token env variables
export VSO_AGENT_IGNORE=AZP_TOKEN_FILE,AZP_TOKEN
print_header "1. Determining matching Azure Pipelines agent..."
AZP_AGENT_RESPONSE=$(curl -LsS \
  -u user:$(cat "$AZP_TOKEN_FILE") \
  -H 'Accept:application/json;api-version=3.0-preview' \
  "$AZP_URL/_apis/distributedtask/packages/agent?platform=linux-x64")

if echo "$AZP_AGENT_RESPONSE" | jq . >/dev/null 2>&1; then
  AZP_AGENTPACKAGE_URL=$(echo "$AZP_AGENT_RESPONSE" \
    | jq -r '.value | map([.version.major,.version.minor,.version.patch,.downloadUrl]) | sort | .[length-1] | .[3]')
fi

if [ -z "$AZP_AGENTPACKAGE_URL" -o "$AZP_AGENTPACKAGE_URL" == "null" ]; then
  echo 1>&2 "error: could not determine a matching Azure Pipelines agent - check that account '$AZP_URL' is correct and the token is valid for that account"
  exit 1
fi
print_header "2. Downloading and installing Azure Pipelines agent..."
curl -LsS $AZP_AGENTPACKAGE_URL | tar -xz & wait $!
source ./env.sh
trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM
print_header "3. Configuring Azure Pipelines agent..."
./config.sh --unattended \
  --agent "${AZP_AGENT_NAME:-$(hostname)}" \
  --url "$AZP_URL" \
  --auth PAT \
  --token $(cat "$AZP_TOKEN_FILE") \
  --pool "${AZP_POOL:-Default}" \
  --work "${AZP_WORK:-_work}" \
  --replace \
  --acceptTeeEula & wait $!
# remove the administrative token before accepting work
rm $AZP_TOKEN_FILE
print_header "4. Running Azure Pipelines agent..."
# `exec` the node runtime so it's aware of TERM and INT signals
# AgentService.js understands how to handle agent self-update and restart
exec ./externals/node/bin/node ./bin/AgentService.js interactive
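Before wiring everything into docker-compose, a single agent container can be smoke-tested with a plain docker run. The URL, PAT, and host directory below are placeholders matching the compose setup that follows:
docker run -d --name vsts-agent-test \
  -e AZP_URL=https://yourazuredevops.url/tfs \
  -e AZP_TOKEN=YOUR_PAT_HERE \
  -e AZP_AGENT_NAME=MyBuildAgent-Test \
  -e AZP_POOL=Default \
  -e AZP_ROOT=azp01 \
  -e AZP_WORK=azp_work \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /azp01:/azp01 \
  vstsagent:latest

# Follow the agent registration output
docker logs -f vsts-agent-test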
Starting Build Agents Using Docker Compose
We have used a docker-compose file to run the build agents in Docker, as follows.
You can see how we mapped directories from the host into the build agent containers to get volume mapping working in a docker-in-docker situation:
/azp01 is mapped to build agent 1
/azp02 is mapped to build agent 2
When builds run on these dockerized build agents, they create directories such as:
/azp01/agent/azp_work/1/s/yourprojectcode
/azp02/agent/azp_work/1/s/yourprojectcode
It is also very important to map the host's docker.sock as a volume into each build agent container to get docker-in-docker working:
/var/run/docker.sock:/var/run/docker.sock
version: '3.2'
services:
  agent01:
    image: vstsagent:latest
    environment:
      - AZP_URL=https://yourazuredevops.url/tfs
      - AZP_TOKEN=YOUR_PAT_HERE
      - AZP_AGENT_NAME=MyBuildAgent-01
      - AZP_POOL=Default
      - AZP_ROOT=azp01
      - AZP_WORK=azp_work
    stdin_open: true
    tty: true
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./NuGet.config:/root/.nuget/NuGet:ro
      - type: bind
        source: /tmp/libdocker1
        target: /var/lib/docker
      - type: bind
        source: /azp01
        target: /azp01
  agent02:
    image: vstsagent:latest
    environment:
      - AZP_URL=https://yourazuredevops.url/tfs
      - AZP_TOKEN=YOUR_PAT_HERE
      - AZP_AGENT_NAME=MyBuildAgent-02
      - AZP_POOL=Default
      - AZP_ROOT=azp02
      - AZP_WORK=azp_work
    stdin_open: true
    tty: true
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - type: bind
        source: /tmp/libdocker2
        target: /var/lib/docker
      - type: bind
        source: /azp02
        target: /azp02
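With the image built as described earlier, the agents can be brought up and sanity-checked roughly like this; the container name in the exec command depends on your compose project name, so treat it as a placeholder:
# The host directories referenced by the compose file should exist up front.
mkdir -p /azp01 /azp02 /tmp/libdocker1 /tmp/libdocker2
docker-compose up -d

# Because the agents reuse the host's docker.sock, running 'docker ps' inside
# an agent lists the host's containers, including the agents themselves.
docker exec -it <project>_agent01_1 docker ps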
Run a Pipeline with Docker Compose
With the above setup in place, we can now run automation tests in our pipeline using docker-compose, as below.
- task: DockerCompose@0
  displayName: 'Docker compose up'
  inputs:
    containerregistrytype: 'Container Registry'
    dockerRegistryEndpoint: 'your.container.registry.url'
    dockerComposeFile: 'Automation.docker-compose.yml'
    dockerComposeCommand: 'up -d'
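It is usually worth pairing this with a teardown step so the test containers do not linger on the shared host daemon; for example:
- task: DockerCompose@0
  displayName: 'Docker compose down'
  condition: always()
  inputs:
    containerregistrytype: 'Container Registry'
    dockerRegistryEndpoint: 'your.container.registry.url'
    dockerComposeFile: 'Automation.docker-compose.yml'
    dockerComposeCommand: 'down'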
As an example, our automation docker-compose file can now use volume mappings (even under DinD), as below.
ruby:
  image: ruby:2.5.5-alpine
  restart: "no"
  container_name: buildamcapitest
  volumes:
    - ./../../AcceptanceTest/:/acceptance_test/:rw
    - ./run-ruby-automation-tests.sh:/tmp/run-ruby-automation-tests.sh:ro
    - ./TestResult.xml:/tmp/TestResult.xml:rw
  depends_on:
    - api
    - auth
  command: ["/bin/sh", "/tmp/run-ruby-automation-tests.sh"]
  environment:
    - ENVIRONMENT=staging
    - AUTOMATION_API_HOST_NAME=myapicontainer
    - AUTOMATION_API_HOST_PORT=8080
    - AUTOMATION_AUTH_HOST_NAME=myauthcontainer
    - AUTOMATION_AUTH_HOST_PORT=8080
  networks:
    - automation
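Because TestResult.xml is written back through the bind mount into the agent's working directory, a follow-up pipeline step can publish it. A sketch, assuming the test runner emits JUnit-style results:
- task: PublishTestResults@2
  displayName: 'Publish automation test results'
  condition: always()
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '**/TestResult.xml'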
Hope this helps anyone who is struggling with DinD volume mapping for Azure DevOps build agents running in Docker.