Azure Pipelines has supported container jobs for a while now. You craft a container with exactly the versions of exactly the tools you need, and we’ll run your pipeline steps inside that container. Recently we expanded our container support to include service containers: additional, helper containers accessible to your pipeline.
Service containers let you define the set of services you need available, containerize them, and then have them automatically available while your pipeline is running. Azure Pipelines manages the starting up and tearing down of the containers, so you don’t have to think about cleaning them up or resetting their state. In many cases, you don’t even have to build and maintain the containers – Docker Hub has many popular options ready to go. You simply point your app at the services, typically by name, and everything else is taken care of.
So how does it work? You tell Azure Pipelines what containers to pull, what to call them, and what ports & volumes to map. The pipelines agent manages everything else. Let’s look at two examples showing how to use the feature.
Basic use of service containers
First, suppose you need memory cache and proxy servers for your integration tests. How do you make sure those servers get reset to a clean state each time you build your app? With service containers, of course:
```yaml
resources:
  containers:
  - container: my_container
    image: ubuntu:16.04
  - container: nginx
    image: nginx
  - container: redis
    image: redis

pool:
  vmImage: 'ubuntu-16.04'

container: my_container

services:
  nginx: nginx
  redis: redis

steps:
- script: |
    apt install -y curl
    curl nginx
    apt install -y redis-tools
    redis-cli -h redis ping
```
When the pipeline runs, Azure Pipelines pulls three containers: Ubuntu 16.04 to run the build tasks in, nginx for a proxy server, and Redis for a cache server. The agent spins up all three containers and networks them together. Since everything is running on the same container network, you can access the services by hostname: that’s what the `curl nginx` and `redis-cli -h redis ping` lines are doing. Of course, in your app, you’d do more than just ping the service – you’d configure your app to use the services. When the job is complete, all three containers will be spun down.
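In a real pipeline, that configuration step might look something like this minimal sketch, where the connection strings and the test script (run-integration-tests.sh) are hypothetical stand-ins for your app’s own wiring:

```yaml
steps:
- script: |
    # hypothetical: the app under test reads its endpoints from the environment
    export CACHE_URL=redis://redis:6379
    export PROXY_URL=http://nginx
    ./run-integration-tests.sh
  displayName: Run integration tests against the service containers
```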
Combining service containers with a matrix of jobs
Suppose you’re building an app that supports multiple database backends. How do you easily test against each database, without maintaining a bunch of infrastructure or installing multiple server runtimes? You can use a matrix with service containers, like this:
```yaml
resources:
  containers:
  - container: my_container
    image: ubuntu:16.04
  - container: pg11
    image: postgres:11
  - container: pg10
    image: postgres:10

pool:
  vmImage: 'ubuntu-16.04'

strategy:
  matrix:
    postgres11:
      postgresService: pg11
    postgres10:
      postgresService: pg10

container: my_container

services:
  postgres: $[ variables['postgresService'] ]

steps:
- script: |
    apt install -y postgresql-client
    psql --host=postgres --username=postgres --command="SELECT 1;"
```
In this case, the listed steps will be duplicated into two jobs, one against Postgres 10 and the other against Postgres 11.
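Scaling this out is mostly declarative: each additional backend is one more container resource and one more matrix entry. As a sketch (the postgres:9.6 entry here is a hypothetical addition, not part of the example above):

```yaml
resources:
  containers:
  # ...containers from the example above...
  - container: pg96
    image: postgres:9.6

strategy:
  matrix:
    # ...matrix entries from the example above...
    postgres96:
      postgresService: pg96
```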
Service containers also work with non-container jobs, where tasks run directly on the host. They support advanced scenarios such as defining your own port and volume mappings; see the documentation for more details. Like container jobs, service containers are available in YAML-based pipelines.
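As a sketch of the port and volume syntax (the host port and path here are illustrative, not prescriptive), mappings are declared on the container resource itself:

```yaml
resources:
  containers:
  - container: redis
    image: redis
    ports:
    - 6379:6379              # publish the container's port on the host
    volumes:
    - /tmp/redis-data:/data  # hypothetical host path mounted into the container
```

Port mappings matter most for non-container jobs: since your steps aren’t on the container network, there’s no automatic name resolution, so you reach the service on the mapped host port (for example, localhost:6379) instead of by hostname.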