Build Your Own Bind9 Docker Image

Feb 1, 2024 · 27 mins read

In the video below, we show you how to build your own Bind9 Docker image


Having a DNS server on your network is very important if you have your own local servers or other IT devices that you need to connect to and manage

But in a small network, dedicating an entire computer to just DNS isn’t efficient

But installing multiple applications into the same environment is also best avoided because it can lead to conflicts

Fortunately though, we have another option, which is to run applications in containers

But while you can download a pre-built Docker image for Bind9, it’s actually better to build your own Docker images

Aside from the security reasons, knowing how to build your own images will help you create images for other software that may not be available

Useful links:
https://hub.docker.com/_/debian
https://docs.docker.com/engine/reference/builder/
https://docs.docker.com/compose/compose-file/compose-versioning/

Assumptions:
Now because this video is specifically about building a Docker image for Bind9, I’m going to assume that you already have Docker installed as well as the build plugin, or you know how to install these

If not, then I do have another video available that shows you how to set this up in a Debian virtual machine running on Proxmox VE for instance

Dockerfile:
To make our own Docker image, we’re going to take an existing image and add to it

Now what image you start with depends on what you’re trying to do, but it’s best to select images that are flagged as official or come from a verified publisher

In our case, we just need a basic OS image

There are quite a few you can use, but we’ll be using the one from Debian

Now although you could run a container from this, connect to it and install the Bind9 software, any changes made to a running container are lost when it’s recreated or replaced, for example

So what we need to do is to create our own image that uses this Debian one as its base, but has Bind9 installed as well

To create a Docker image, we need to create a dockerfile that defines what’s going on

Each image you create needs its own dockerfile, so it’s best to keep projects like this in their own folder

In which case we’ll create a new folder for Bind9

mkdir bind9_image

And then we’ll switch to it

cd bind9_image

Now we can create the dockerfile itself

nano dockerfile

FROM debian:latest

RUN apt update
RUN apt install bind9 -y

EXPOSE 53/tcp
EXPOSE 53/udp

USER bind:bind

CMD ["/usr/sbin/named", "-f"]

Now save and exit

The FROM instruction defines what we want to use as our base image, and in this case we want the latest version of the Debian image

The RUN instruction is to run commands and in this case we’ll use it twice, to do a package update and then install Bind9

The EXPOSE instruction is to document which ports the container is expected to listen on

A basic DNS server uses TCP and UDP port 53 so we need to expose both

TIP: Bind9 can also be configured to support DNS over HTTPS (DoH) for instance, which uses TCP port 443, but that’s beyond the scope of this video

Now the goal of this container is to run a DNS server, however, before starting Bind we change the default user to bind and the group to bind

Without this, Bind9 would be run by the root account, which is bad from a security perspective, because if anybody exploited a software vulnerability, they would be able to install software in the container for instance

NOTE: If Docker is running containers using the root account, there is also the risk of breaking out of the container and having root access to the host computer

Normally, Bind would be run as a service in the background, but a container needs to be doing something all the time otherwise it will start up, complete its task and then stop

So in this case, we use the CMD instruction so that when the container starts it runs Bind as an application in the foreground and continues to keep running

To clarify, we use the RUN instruction to do things as part of the image build, in other words, our image will have Bind9 installed and ready to use

The CMD instruction on the other hand is used when a container is created from the image. In this case, when the container starts, the DNS server will be started

TIP: According to the documentation, only one CMD instruction takes effect. So if you put several of them in the dockerfile, only the last one will be executed when a container starts
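
For example, in a hypothetical dockerfile like this, only the second CMD would run when a container starts:

FROM debian:latest

CMD ["echo", "this never runs"]
CMD ["echo", "only this runs when the container starts"]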

Now we could do other things as well, and I would suggest checking out the reference documentation, but this should be enough for our DNS server

Build Image:
The Dockerfile we created contains instructions to build an image, so the next thing to do is to instruct Docker to do that

We’ll use the build plugin by running a command like this

docker build -t bind9_image:0.1 .

This will create an image called bind9_image, tagged as version 0.1, using the files in the current folder, which is what the . at the end refers to

TIP: If you don’t specify a version in the tag, it will be set to latest

In this case there’s only the dockerfile itself, but if you’re creating your own application, the files for that could be stored in this folder and copied into the image as part of the instructions
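
As a hypothetical example, if your application’s files were stored in an app sub-folder next to the dockerfile, a COPY instruction could pull them into the image as part of the build:

COPY ./app /opt/app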

If you’re not familiar with version control, it really helps to assign versions, not just for development reasons but also for users

So create an image with a version number, test it, and if it works create another version with a tag of latest

If you later make any changes, create an image with a higher version number, test it, and if it works create a newer image with the tag of latest

This makes sure users can get access to the latest version, but they can still use an older version if the newer version causes compatibility problems for instance
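
Rather than rebuilding the image a second time, you can also just re-tag a tested image as latest, for example

docker tag bind9_image:0.1 bind9_image:latest

Later in this video we simply rebuild the image without a version in the tag instead, but the end result is much the same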

You can get details of the Docker images on your computer by running this command

docker images

This will give you information about the name, tag, image ID, creation date and size of each image available
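
If you have a lot of images, you can also limit the output to a single repository, for example

docker images bind9_image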

Docker Compose File:
Now that we have our own image for Bind9 we can run this in a container

To do that I’ll be using Docker Compose, but first I need to switch back to the parent folder

cd ..

Next, you either need to create a new Docker Compose file or update an existing one

nano docker-compose.yml

version: '3.8'

services:
  bind9:
    image: bind9_image:0.1
    container_name: bind9
    ports:
      - '53:53/udp'
      - '53:53/tcp'
    restart: unless-stopped
    volumes:
      - ./bind9/etc/bind/named.conf.local:/etc/bind/named.conf.local
      - ./bind9/etc/bind/named.conf.options:/etc/bind/named.conf.options
      - ./bind9/etc/bind/db.homelab.lan:/etc/bind/db.homelab.lan
      - ./bind9/etc/bind/db.192.168:/etc/bind/db.192.168

For a new file, you need to define the compatibility version and create a services: section

Otherwise you could just append the container details at the end of the section

Currently we only have version 0.1 of our Bind9 image so we’ll use that

And to make it easier to identify the container when using CLI tools, we’ll give it a name of bind9

We then define the ports that this container will use and set this instance to be automatically started, unless it’s manually stopped for maintenance reasons for instance

NOTE: We have to define both UDP and TCP

Because the container’s own storage is ephemeral and data in it can be lost, we’ll place our configuration files for Bind9 in a folder on the host computer and map these into the /etc/bind folder in the container so that it can use them

In this example, I’m using ./bind9/etc/bind as the host folder because if this server is later configured to support DDNS, we’ll need a separate /var/lib/bind folder to store zone files in

The reason I haven’t mapped the entire folder across is because we’d have to create several other files on the host, ones I’m not planning to change, but could be subject to change in future software updates for instance
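
For comparison, mapping the whole folder would only need a single line like this in the compose file, but then every file Bind9 expects to find in /etc/bind would also have to exist on the host:

      - ./bind9/etc/bind:/etc/bind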

Configuration Files:
As part of the container setup, we did some file mappings, so we need to create these configuration files on the host itself

But first we need to create a folder and sub-folders

mkdir -p bind9/etc/bind

Now if these files already exist, it’s probably quicker and easier to copy them to the host using SFTP for instance but we’ll create the configuration files manually

nano bind9/etc/bind/named.conf.local

zone "homelab.lan"  {
	type master;
	file "/etc/bind/db.homelab.lan";
};

zone "168.192.in-addr.arpa" {
	type master;
	file "/etc/bind/db.192.168";
};

Now save and exit

The above file is used to define the zones the server will be responsible for, and in this example we’re defining a forward and reverse lookup zone

This will be the primary DNS server and will hold a master copy of the database files

But Bind9 needs to know where these are and so we point it to files which will be in the container’s /etc/bind folder

nano bind9/etc/bind/named.conf.options

acl trustedclients {
	localhost;
	localnets;
	192.168.102.0/24;
};

options {
	directory "/var/cache/bind";

	recursion yes;
	allow-query { trustedclients; };
	allow-recursion { trustedclients; };
	
	forward only;
	forwarders {
		1.1.1.2;
		1.0.0.2;
	};
};

Now save and exit

A simple strategy for DNS is to set up a local name server which will resolve local FQDNs but will also act as a forwarder to resolve public FQDNs

For Bind9 it’s recommended to set up an ACL to restrict access to the DNS service

NOTE: Unlike on a normal server, localnets here refers to the IP ranges assigned to the container’s own networks, not the host’s LAN

The server will need to allow queries and also recursive queries, but these will be restricted by the ACL

While the server can cache recursive query results, it’s understood that you don’t need to define a restriction for both allow-recursion and allow-query-cache

The suggestion is to apply an ACL to allow-recursion, and allow-query-cache will then default to that same ACL; it makes sense and it reduces administration I suppose

For Public FQDN resolution the server can act as a forwarder. In other words it will query Public DNS servers on behalf of the client and return an IP address as a response, if there is one that is

The benefit of this is that you can restrict Internet DNS queries to your own DNS servers only and make it easier to control which Public DNS servers are used

In other words, less Internet access for clients is better for security and administration and it reduces the risk of clients receiving false DNS responses that may otherwise steer the user to a malicious website

To set this up, the forward only setting is configured so DNS queries will only be forwarded for zones the DNS server does not manage

You then define which forwarding servers will be used for these recursive lookups

As this is a container, with a single interface, there is no need to restrict which interface(s) to listen on

NOTE: As this is best limited to small networks, we aren’t configuring the deny-answer-addresses option. This would require a custom network and a static IP address for the container and thus extra network configuration

Forward Lookup Zone:
A forward lookup zone is basically a database of DNS records, known as resource records (RR), used for resolving Fully Qualified Domain Names (FQDN) to IP addresses

There are different ways you can configure the file and if you ever set up Dynamic DNS you’ll probably see your zone files being rewritten regardless

But using ISC’s own documentation, a forward lookup zone file looks like this

nano bind9/etc/bind/db.homelab.lan

$TTL 24h
$ORIGIN homelab.lan.
@		IN	SOA	testdns1.homelab.lan. admin.homelab.lan. (
					1       ; serial number
					24h	; refresh
					2h	; update retry 
					1000h	; expire
					2d	; minimum
					)

		IN	NS	testdns1.homelab.lan.

pvedemo1	IN	A	192.168.102.10
pvedemo2	IN	A	192.168.102.11
pvedemo3	IN	A	192.168.102.12
testdns1	IN	A	192.168.102.30
dockerdemo	IN	A	192.168.102.30
pbs		IN	A	192.168.102.31

Now save and exit

NOTE: The IP address for the DNS server is the same as the one for the Docker host because other devices on the network will need to connect to the DNS server using the host’s IP address

The file starts with the default Time To Live (TTL) setting, in this case 24 hours

IP addresses can change over time, especially when traffic needs to be load balanced across multiple servers

What the setting means is that when a client receives a DNS response, it should expire it from its cache after 24 hours in this example and then request a new DNS response

This applies more to other DNS servers than personal computers as they will be running 24x7. They will likely cache a response to speed up DNS lookups for their clients but it’s also a more efficient use of resources

The next line is $ORIGIN homelab.lan. where homelab.lan. is the domain name that the DNS server is responsible for

Basically this says that records below this line without a domain should have homelab.lan. appended to them

TIP: Remember the . at the end as this is important and represents the root domain
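
As a hypothetical example, with $ORIGIN homelab.lan. in effect, a name that’s missing the trailing dot is treated as relative and has the origin appended to it again, which is rarely what you want:

	IN	NS	testdns1.homelab.lan	; no trailing dot, becomes testdns1.homelab.lan.homelab.lan.
	IN	NS	testdns1.homelab.lan.	; trailing dot, stays testdns1.homelab.lan.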

The next section is to define the settings for the zone

The @ is a shortcut you can use to save typing out the current origin

What follows that is IN which indicates that the class for this record is of type Internet, which is pretty much the only type of class now being used

We then have a record type of SOA which stands for Start of Authority

What follows is the FQDN of the primary DNS server, the DNS server that holds the master copy of the database for the zone

Next we have the email address of the owner, albeit it uses a . instead of an @ after the username because @ is a reserved character

The settings begin with a serial number. This is used for versioning and it needs to be incremented or changed in some other way, every time an update to the file is made

That way a secondary server can compare its copy of the database to the one held by the primary server and decide if its copy needs to be updated

But to help with troubleshooting, some administrators encode the date of the change in a format like this (YYYYMMDDnn), for example, 2023081201 represents change number 1 on August 12th 2023

The refresh timer, in this case 24 hours, is to tell secondary servers how long to wait before requesting a zone update

The update retry timer, which has been set to 2 hours, is to tell servers how long to wait before they next attempt to ask an unresponsive primary server for an update

The expire timer, in this case 1000 hours, is how long a secondary server should wait before it stops responding to queries for zone records if the primary server remains unresponsive

The minimum setting, which has been set to 2 days, defines how long to cache negative responses. These are responses for when a record doesn’t exist

Now these settings certainly aren’t set in stone and you can use whatever you prefer. The ones in the example are based on recommendations from RIPE NCC for scalability and stability

Another record type required for a zone is the NS or Name Server record. These provide details of the DNS servers for the zone. In this case there’s only one defined and so it matches the FQDN for the DNS server defined as the SOA

If you have secondary servers, you should add NS records for them so that clients can use these for DNS lookups as well to balance the load and provide read redundancy

After this we have A record entries which are host entries used in IPv4

They begin with the name of the host, followed by the class, again IN, then the record type A and finally the IP address of the host

TIP: If an IP address change is required and the default TTL is too long to wait for the cache to expire, you can add a TTL entry in a host record, for example

pvedemo1	1h	IN 	A	192.168.102.10

The existing cache entry on computers needs to expire first, so in this example, a DNS change like this would be applied to the DNS server maybe two days before the maintenance would be carried out

After that, the host entry will begin to expire after 1 hour

Once the work has been carried out successfully, a day or two later, DNS would be updated again to remove the TTL entry in the record so that it reverts to the default TTL

There are several other record types that can be defined in a DNS zone, but the example provided is only intended to cover the basics

For further details, check out the documentation https://bind9.readthedocs.io/en/v9.18.4/chapter3.html

Reverse Lookup Zone:
A reverse lookup zone is another database of DNS records, but this one is used to resolve IP addresses into FQDNs

Again, there are different ways you can configure these but we’ll follow ISC’s own documentation

nano bind9/etc/bind/db.192.168

$TTL 24h
$ORIGIN 168.192.in-addr.arpa.
@	IN	SOA	testdns1.homelab.lan. admin.homelab.lan. (
					1       ; serial number
					24h	; refresh
					2h	; update retry 
					1000h	; expire
					2d	; minimum
					)
				
	IN	NS	testdns1.homelab.lan.
	
$ORIGIN 102.168.192.in-addr.arpa.
10	IN	PTR	pvedemo1.homelab.lan.
11	IN	PTR     pvedemo2.homelab.lan.
12	IN	PTR	pvedemo3.homelab.lan.
30	IN	PTR	dockerdemo.homelab.lan.
31	IN	PTR     pbs.homelab.lan.

Now save and exit

The file is very much the same as the forward lookup zone file, however, instead of A records, this time we have Pointer (PTR) records

These are the reverse of the A records we saw earlier, beginning with the last octet of the IP address, the one that is unique to that computer, and ending with the FQDN

My expectation is several 192.168.x.x subnets will be used in the network, so for that reason, the SOA record is defined to cover all of these

After the NS records are defined, $ORIGIN definitions will be created for each individual subnet to make it easier to identify and manage them

Otherwise we would end up with entries like this

102.10	IN	PTR	pvedemo1.homelab.lan.
104.20	IN	PTR	web1.homelab.lan.

An alternative option would be to create separate zone files for each subnet
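
If you went down that route, named.conf.local would gain a zone for each subnet, along the lines of this sketch (the 104 subnet and the file names are just examples):

zone "102.168.192.in-addr.arpa" {
	type master;
	file "/etc/bind/db.192.168.102";
};

zone "104.168.192.in-addr.arpa" {
	type master;
	file "/etc/bind/db.192.168.104";
};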

One thing to bear in mind is that Docker will be running multiple containers on the same host IP address, but in the reverse zone an IP address should only map to one name, otherwise it causes confusion

So we’ll associate the host IP address to the name of the host itself

Port 53 Already Used:
It is possible that the host computer is running something like systemd-resolved for instance, and this will prevent the Bind9 container from starting because port 53 is already in use

So before we try to start up the container we’ll check what ports are in use

sudo ss -tunlp

NOTE: While you can run the command without privileges, it won’t show you details about the processes involved
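
TIP: If there’s a lot of output, you can filter it down to just port 53, for example

sudo ss -tunlp | grep ':53 '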

While the computer used in the video was fine, I have come across other Debian installations that would have had issues

If you do find systemd-resolved is listening on port 53, it needs to be disabled

sudo nano /etc/systemd/resolved.conf

Append this line at the end

DNSStubListener=no

Then save and exit

TIP: The default setting line is commented out and could be changed, but I prefer to add my own lines so they’re easier to identify

For the changes to take effect you need to restart the service

sudo systemctl restart systemd-resolved

Lastly, double check port 53 is now free

sudo ss -tunlp

Initial Testing:
Now we’ll start up and test our DNS server

Since we’ve used docker compose, we’ll run this command so that it runs in the background

docker compose up -d

Now we’ll test to see if the DNS server is working

host pvedemo1.homelab.lan 127.0.0.1
host 192.168.102.31 127.0.0.1
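
If the zones have loaded correctly, the answers should look something like this, after a few lines confirming which server was used:

pvedemo1.homelab.lan has address 192.168.102.10
31.102.168.192.in-addr.arpa domain name pointer pbs.homelab.lan.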

Assuming all is well, we’ll stop the container as there are other changes to be made

docker container stop bind9

Latest Version:
Now although our image seems to be working as expected, most users want to run the latest version, so we’ll create one

First we’ll switch to the folder our dockerfile is in

cd bind9_image

Then we’ll build another image, but we’ll omit the version in the tag

docker build -t bind9_image .

As before, if you don’t specify a version, it will be labelled with latest

Next we’ll switch back to the parent folder

cd ..

Then update the docker compose file as we want to use the latest version

nano docker-compose.yml

We just need to update the existing image line

    image: bind9_image:latest

Now save and exit

And then we’ll start a new container instance

docker compose up -d

Nothing has really changed, so this one should work as before, but we’ll check all the same

docker ps -a

And while we’re here we’ll test that we can access DNS from another computer

host testdns1.homelab.lan 192.168.102.30

Update Network Settings:
Now that we have a DNS server, we need to update the host computer, plus any other devices on the network so that they use it

How you do this depends on the operating system and how the network settings are managed

In the case of the server used in the video, it’s quite simple as it uses text files

Now the only catch here is it’s going to need a reboot of the computer for the changes to take effect

That’s because we’re connected in using SSH and if I try to use the ifdown/ifup method I’ll be disconnected

It’s really better to do this from a console session, or at least have the ability to get console access if things go wrong

There’s the potential here to make a mistake and if the network settings no longer work, you’ll not be able to get remote access again

I do have that available as a backup option, so we’ll update the interfaces file and change the DNS server entry so that it’s pointing to itself

sudo nano /etc/network/interfaces

dns-nameservers 127.0.0.1
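
In context, the stanza for a statically configured interface would look something like this, with only the dns-nameservers line changing (the interface name and addresses here are just examples):

# example static configuration - adjust the interface name and addresses to suit
auto eth0
iface eth0 inet static
	address 192.168.102.30
	netmask 255.255.255.0
	gateway 192.168.102.1
	dns-nameservers 127.0.0.1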

Now save and exit

As it warns, we should update the resolv.conf file, so we’ll do that as an added extra

sudo nano /etc/resolv.conf

nameserver 127.0.0.1

Now save and exit

And then we’ll reboot this computer

sudo reboot now

Once the computer is back up, we’ll check the files are still configured as expected

cat /etc/network/interfaces
cat /etc/resolv.conf

Then we’ll do some DNS testing

host pvedemo1.homelab.lan
host www.google.com

In this case, we’re checking to make sure both internal and external DNS resolution works

If the computer is using Netplan, then updating DNS on this is slightly easier and less disruptive

However, as before, make sure you can access the computer via a console session in case something goes wrong

We need to edit the YAML file used by Netplan, but first we need to check what the file is

ls -l /etc/netplan

In this example it’s 50-cloud-init.yaml so we’ll edit that

sudo nano /etc/netplan/50-cloud-init.yaml

In that file we need to change the IP address of the nameserver, for example

            nameservers:
                addresses:
                - 127.0.0.1

As an extra measure, we’ll update the resolv.conf file

sudo nano /etc/resolv.conf

nameserver 127.0.0.1

For the changes to take effect, we can run this command

sudo netplan apply

We should still retain our SSH session in this case

Although you’ll likely see warnings about permissions, deprecated commands, etc., they aren’t too concerning and are likely due to changes that have arisen since Debian implemented this in their deployments

In the grand scheme of things, networking is working, or at least it should be, as long as no mistake was made in the update

So what’s left is to do some DNS testing

host pvedemo1.homelab.lan
host www.google.com

Remove An Unwanted Image:
Now if you’re a developer, and you’re making images available to others, it’s best to retain older versions for end users in case they run into compatibility problems with a newer one

However, you may still want to remove an image that’s no longer practical for instance

To check which images you have in your library you can run this command

docker images

For the video, I created an extra image to demonstrate how to remove one

First you need to prune unused data, because Docker won’t remove images that have been used to create a container for instance

docker system prune -f

The -f option is to avoid being prompted for confirmation, but you can leave that out if you want to check what’s going on

Then you can remove a specific image, for example

docker image rm bind9_image:0.2

Then you can check to make sure it has been removed

docker images

Troubleshooting:
Sometimes things don’t go according to plan

If a container is running, you can troubleshoot problems by connecting to the container, for example

docker exec -it bind9 bash

This puts us in a bash shell for our bind9 container, but bear in mind that you’ll be limited in what you can do because very little software has been installed in the base Debian image

You can install additional software while in the container, however, for this video we deliberately ran Bind9 using the bind user account to limit things like this

For further troubleshooting we could create another image so that we have root access in the container i.e. we delete the USER instruction. That would then give us a lot more flexibility

However, the dockerfile should be reverted back to having that USER line for security reasons once any troubleshooting is done

Bear in mind though, any changes made in a container will be lost once the container is recreated or replaced

So if there is other software you want to use regularly, it’s better to add this as part of the image build by using RUN instructions
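
As a hypothetical example, a debug-friendly version of the image might install a few diagnostic tools alongside Bind9:

# hypothetical debug build: extra packages for troubleshooting only
RUN apt update
RUN apt install -y bind9 dnsutils iproute2 procps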

Just be mindful of what software will then be available to someone who gains control of the container

DNS Updates:
Even in a small network, there will be a need to make updates to the DNS server

The config files are stored on the host and they can be updated at any time

For this video, we’ll add an extra host record to the forward lookup zone

nano bind9/etc/bind/db.homelab.lan

testserver	IN	A	192.168.102.100

Now save and exit

But for the changes to take effect, Bind9 needs to be updated

One option is to restart the container

docker container restart bind9

Another is to connect to the container

docker exec -it bind9 bash

And then use the rndc utility to reload the config and zone files

rndc reload
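
TIP: If only one zone has changed, you can reload just that zone instead, for example

rndc reload homelab.lan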

Both options work, although using the rndc utility seems quicker and is “less painful”

Either way, we’ll want to check this works

host testserver

Security Concerns:
If we connect to the container, we can do a security check

docker exec -it bind9 bash

You should see that you’re logged in as bind, which is a good thing

Without that USER instruction in the dockerfile, we’d be running Bind9 using the root account and that wouldn’t be good at all, especially if a software vulnerability was exploited in Bind9

An exploit of Bind9 should now be limited to whatever that bind user can do, at least in most cases

And this is a reason why it’s recommended to build your own images, because otherwise you could be downloading images with software that is running as root

You can still get situations where an exploit results in the attacker being able to run commands with privilege and/or break out of a container

Now if we check the processes on our Docker server, we see another concern

ps aux | grep "port 53"

If you’ve used the default Docker installation method, like I have in this video, containers are being run using the root account

And that’s due to Docker’s architecture

So if there’s a breakout from a container, the attacker could take advantage of root privilege on the host

In which case, running Docker as a non-root user would be better but there’s quite a few hoops to jump through to set that up

And because we’re running a DNS server on port 53, running Docker as a non-root user requires additional work because non-root users can’t use ports below 1024 by default

So these are things you need to bear in mind when running Docker containers

In which case, it would be better from a security perspective to give each network segment its own Docker server. This way you aren’t running containers with different security levels on the same computer

Summary:
In a small network, reducing your compute and power requirements will be very important

And as we’ve shown, you can set up your infrastructure services like DNS to run in containers

Building your own images is relatively straightforward, once you understand the process

But you do need to be aware of the security risks

Sharing is caring!