Overcomplicated homelab DNS configuration
Published on Monday, 01 February, 2021
Intro
In this guide we will look into how to configure an overcomplicated DNS setup using pihole, bind and cloudflared, running inside a podman pod.
For this one you will obviously need podman. If you are (like me in this case) doing this on a CentOS or Red Hat machine, getting podman is as simple as:
# dnf install podman
If you are on some other distro, it shouldn't be that complicated.
Now that we have podman, let's talk about what exactly we are doing. We want to achieve the following:
- custom domain(s) for the home lab
- DNS over HTTPS to Cloudflare
- DNS blackhole with pihole
For those who are not familiar, let's go through each component.
pihole is a DNS blackhole: it keeps lists of malicious and/or unwanted addresses and discards queries for them. You can find it at pihole.net; consider it a network-wide ad blocker. It also has a web interface that you can use for configuration and for tracking DNS queries.
bind is a nameserver. It's probably the most common nameserver in the world; it has many features and is able to run ISP-sized DNS servers. In this case we will just use it to provide a local domain. Speaking of domains, you need to decide which one you will use; in this example I'll just use domain.tld.
Cloudflare is a company that provides internet services related to security and performance. Similar to Google's 8.8.8.8 DNS, Cloudflare provides their own DNS server at 1.1.1.1. Since Cloudflare is not an ad-revenue-driven corporation, I prefer them over Google. cloudflared is a daemon that forwards UDP DNS requests over HTTPS to Cloudflare.
So the path of a request will be as follows (pihole listens on the published port 53, forwards to bind, which forwards unknown queries to cloudflared):
origin -> pihole -> bind -> cloudflared -> cloudflare
Building container images
Now that we have all the basics covered, let's start building the images. First, we build a simple folder structure to keep all the files:
/containers
├── build
│ ├── bind
│ └── cloudflared
└── run
├── bind
└── pihole
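The structure above can be created in one go:

```shell
# create the build and run directories for all the containers
mkdir -p /containers/build/bind /containers/build/cloudflared \
         /containers/run/bind /containers/run/pihole
```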
So there is a containers folder in the root of the filesystem that holds the build and runtime files for our containers. We need to build two container images, bind and cloudflared. We'll start with bind.
BIND
For bind, we create a simple Dockerfile in /containers/build/bind with the following content:
FROM alpine:latest
LABEL maintainer="Marvin Sinister"
RUN addgroup -S -g 2001 bind && adduser -S -u 2001 -G bind bind; \
    apk add --no-cache ca-certificates bind-tools bind; \
    rm -rf /var/cache/apk/*; \
    mkdir /var/cache/bind;
RUN chown -R bind: /etc/bind; \
    chown -R bind: /var/cache/bind;
HEALTHCHECK --interval=5s --timeout=3s --start-period=5s CMD nslookup -port 5053 ns.domain.tld 127.0.0.1 || exit 1
USER bind
CMD ["/bin/sh", "-c", "/usr/sbin/named -g -4 -p 5053"]
What we are doing here is building a simple container from alpine, creating a user, installing the service and fixing some permissions. Two things to notice:
- the healthcheck runs nslookup against localhost on port 5053, asking for the address of our nameserver for the chosen domain domain.tld
- the bind command itself runs the service on port 5053
And build the container image with:
# podman build . -t bind
STEP 1: FROM alpine:latest
STEP 2: LABEL maintainer="Marvin Sinister"
--> aebbb98fb2e
STEP 3: RUN addgroup -S -g 2001 bind && adduser -S -u 2001 -G bind bind; apk add --no-cache ca-certificates bind-tools bind; rm -rf /var/cache/apk/*; mkdir /var/cache/bind;
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
(1/53) Installing ca-certificates (20191127-r5)
(2/53) Installing brotli-libs (1.0.9-r3)
...
(52/53) Installing bind-dnssec-root (9.16.11-r0)
(53/53) Installing bind (9.16.11-r0)
Executing bind-9.16.11-r0.pre-install
Executing bind-9.16.11-r0.post-install
wrote key file "/etc/bind/rndc.key"
Executing busybox-1.32.1-r2.trigger
Executing ca-certificates-20191127-r5.trigger
OK: 41 MiB in 67 packages
--> c72cd8464c8
STEP 4: RUN chown -R bind: /etc/bind; chown -R bind: /var/cache/bind;
--> dcdfc85fd12
STEP 5: HEALTHCHECK --interval=5s --timeout=3s --start-period=5s CMD nslookup -port 5053 ns.domain.tld 127.0.0.1 || exit 1
--> 0d742c04892
STEP 6: USER bind
--> 54b96184563
STEP 7: CMD ["/bin/sh", "-c", "/usr/sbin/named -g -4 -p 5053"]
STEP 8: COMMIT bind:latest
--> 3a18e54af59
3a18e54af590947fd0230193a02675f26010ab2a177e859305f0f3f98d9c22e6
If you are running a recent enough version of podman, you might get warnings about HEALTHCHECK not being supported. In that case, just add --format docker to the end of the build command.
Once finished we can list the images with:
# podman image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/bind latest 3a18e54af590 About a minute ago 89.2 MB
docker.io/library/alpine latest e50c909a8df2 2 days ago 5.88 MB
cloudflared
Same procedure as for bind: create a Dockerfile in /containers/build/cloudflared:
ARG ARCH=amd64
FROM golang:alpine as gobuild
ARG GOARCH
ARG GOARM
RUN apk update; \
    apk add git gcc build-base; \
    go get -v github.com/cloudflare/cloudflared/cmd/cloudflared
WORKDIR /go/src/github.com/cloudflare/cloudflared/cmd/cloudflared
RUN GOARCH=${GOARCH} GOARM=${GOARM} go build ./
FROM multiarch/alpine:${ARCH}-edge
LABEL maintainer="Marvin Sinister"
ENV DNS1 1.1.1.1
ENV DNS2 1.0.0.1
RUN addgroup -S -g 2002 cloudflared && adduser -S -u 2002 -G cloudflared cloudflared; \
    apk add --no-cache ca-certificates bind-tools; \
    rm -rf /var/cache/apk/*;
COPY --from=gobuild /go/src/github.com/cloudflare/cloudflared/cmd/cloudflared/cloudflared /usr/local/bin/cloudflared
HEALTHCHECK --interval=5s --timeout=3s --start-period=5s CMD nslookup -po=5054 cloudflare.com 127.0.0.1 || exit 1
USER cloudflared
CMD ["/bin/sh", "-c", "/usr/local/bin/cloudflared proxy-dns --address 127.0.0.1 --port 5054 --upstream https://${DNS1}/dns-query --upstream https://${DNS2}/dns-query"]
Similar to bind, but this time we use golang:alpine to build and multiarch/alpine:amd64-edge to run the cloudflared daemon. This time the health check queries the cloudflare.com address, and the port where the service is running is 5054.
And run:
# podman build . -t cloudflared:latest
STEP 1: FROM golang:alpine AS gobuild
STEP 2: ARG GOARCH
--> 793a7c5f9f9
STEP 3: ARG GOARM
--> 3bb89556acd
STEP 4: RUN apk update; apk add git gcc build-base; go get -v github.com/cloudflare/cloudflared/cmd/cloudflared
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
v3.13.1-12-g1c7edde73a [https://dl-cdn.alpinelinux.org/alpine/v3.13/main]
v3.13.1-13-g9824e6cf00 [https://dl-cdn.alpinelinux.org/alpine/v3.13/community]
OK: 13878 distinct packages available
(1/26) Installing libgcc (10.2.1_pre1-r3)
...
(26/26) Installing git (2.30.0-r0)
Executing busybox-1.32.1-r2.trigger
OK: 208 MiB in 41 packages
github.com/cloudflare/cloudflared (download)
...
github.com/cloudflare/cloudflared/cmd/cloudflared
--> 07c20f86a9d
STEP 5: WORKDIR /go/src/github.com/cloudflare/cloudflared/cmd/cloudflared
--> 92fd2da44f5
STEP 6: RUN GOARCH=${GOARCH} GOARM=${GOARM} go build ./
# github.com/mattn/go-sqlite3
sqlite3-binding.c: In function 'sqlite3SelectNew':
sqlite3-binding.c:125322:10: warning: function may return address of local variable [-Wreturn-local-addr]
125322 | return pNew;
| ^~~~
sqlite3-binding.c:125282:10: note: declared here
125282 | Select standin;
| ^~~~~~~
--> 36f3ba92883
STEP 7: FROM multiarch/alpine:amd64-edge
STEP 8: LABEL maintainer="Marvin Sinister"
--> a133bb4e290
STEP 9: ENV DNS1 1.1.1.1
--> c68bcb25fe6
STEP 10: ENV DNS2 1.0.0.1
--> ce1efac0a83
STEP 11: RUN addgroup -S -g 2002 cloudflared && adduser -S -u 2002 -G cloudflared cloudflared; apk add --no-cache ca-certificates bind-tools; rm -rf /var/cache/apk/*;
fetch http://dl-cdn.alpinelinux.org/alpine/edge/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/edge/community/x86_64/APKINDEX.tar.gz
(1/18) Installing fstrm (0.6.0-r1)
...
(18/18) Installing ca-certificates (20191127-r5)
Executing busybox-1.33.0-r1.trigger
Executing ca-certificates-20191127-r5.trigger
OK: 22 MiB in 38 packages
--> 3fc0d971570
STEP 12: COPY --from=gobuild /go/src/github.com/cloudflare/cloudflared/cmd/cloudflared/cloudflared /usr/local/bin/cloudflared
--> f99b87627f7
STEP 13: HEALTHCHECK --interval=5s --timeout=3s --start-period=5s CMD nslookup -po=5054 cloudflare.com 127.0.0.1 || exit 1
--> c0b3a24146e
STEP 14: USER cloudflared
--> 66804934da3
STEP 15: CMD ["/bin/sh", "-c", "/usr/local/bin/cloudflared proxy-dns --address 127.0.0.1 --port 5054 --upstream https://${DNS1}/dns-query --upstream https://${DNS2}/dns-query"]
STEP 16: COMMIT cloudflared:latest
--> 7d8b02575fe
7d8b02575fea97bc09ca94fdf85569c47d15112f3930405e8c25d477fa491aaf
After a while we have the new image:
# podman image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/cloudflared latest 7d8b02575fea 3 minutes ago 95.7 MB
localhost/bind latest 3a18e54af590 18 minutes ago 89.2 MB
docker.io/multiarch/alpine amd64-edge 901ee590dcdb 22 hours ago 25.8 MB
docker.io/library/golang alpine 54d042506068 2 days ago 308 MB
docker.io/library/alpine latest e50c909a8df2 2 days ago 5.88 MB
pihole
We will use the upstream docker images for pihole.
Creating configuration files
Now that we have the images, we will create the configuration files needed to run the services.
cloudflared
Let's begin with the simplest one. cloudflared
doesn't need any kind of configuration, we'll run it as pure ephemeral container.
pihole
There is not much we need to configure for pihole. We will just prepare the folders for persistence; the configuration options will be provided as environment variables at runtime. We need to create the following folder structure under /containers/run/:
pihole/
└── etc
├── dnsmasq.d
└── pihole
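For example:

```shell
# persistent volumes for pihole configuration and dnsmasq snippets
mkdir -p /containers/run/pihole/etc/dnsmasq.d /containers/run/pihole/etc/pihole
```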
And that's it for pihole for the moment.
BIND
Most of the configuration will be required for bind; here we will define our own domain. Create the following structure under /containers/run/:
bind/
└── etc
Here we have two options: either copy the config from an existing bind (by running a container without mounting volumes, or from an existing bind installation), or create the files ourselves. Since it's a simple config, I'll just put all the files here.
The main config goes into etc/named.conf
:
// This is the primary configuration file for the BIND DNS server named.
//
// Please read /usr/share/doc/bind9/README.Debian.gz for information on the
// structure of BIND configuration files in Debian, *BEFORE* you customize
// this configuration file.
//
// If you are just adding zones, please do that in /etc/bind/named.conf.local
include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.local";
And then we need to create the rest of the files, starting with etc/named.conf.options:
options {
    directory "/var/cache/bind";
    forwarders {
        127.0.0.1 port 5054;
    };
    recursion yes;
    allow-query { lan; };
    dnssec-validation auto;
    auth-nxdomain no;    # conform to RFC1035
    listen-on port 5053 { any; };
};
The things configured here:
- forward unknown queries to 127.0.0.1 on port 5054, which is the address of cloudflared
- allow recursion
- limit queries to the lan acl (defined later)
- listen on port 5053 on all interfaces
Next, etc/named.conf.local
:
acl lan {
    10.88.0.0/16;
    127.0.0.1;
};
zone "domain.tld" {
    type master;
    file "/etc/bind/db.domain.tld";
};
zone "122.168.192.in-addr.arpa" {
    type master;
    notify no;
    file "/etc/bind/db.122.168.192.in-addr.arpa";
};
The things configured here:
- the lan access control list allowing 10.88.0.0/16, which is the default address space of podman, and 127.0.0.1, which is localhost
- the forward zone for domain.tld (defined later)
- the reverse zone for 192.168.122.0/24 (defined later)
- any other zones (not in scope of this document)
Now we can create our own domain.tld zone in the etc/db.domain.tld file with the following content:
; BIND data file for domain.tld zone
;
$TTL 86400
@ IN SOA ns.domain.tld. root.domain.tld. (
5 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
86400 ) ; Negative Cache TTL
;
@ IN NS ns.domain.tld.
ns IN A 192.168.122.254
; hosts
containers IN A 192.168.122.254
; services
pihole IN A 192.168.122.254
The reverse zone will in this case go into db.122.168.192.in-addr.arpa, where the numbers in the file name correspond to the reversed IP range. In this case we are doing the reverse zone for 192.168.122.0/24, so the file is named after 122.168.192 (dropping the last 0 because it's a /24 network).
;
; BIND reverse data file for lan zone
;
$TTL 604800
@ IN SOA ns.domain.tld. root.domain.tld. (
5 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
@ IN NS ns.domain.tld.
254 IN PTR ns.domain.tld.
; hosts
254 IN PTR containers.domain.tld.
A few notes:
- If your domain spans multiple subnets, you will want multiple reverse zones.
- Notice that we have two forward addresses (containers.domain.tld and pihole.domain.tld) but only one reverse entry (containers.domain.tld). While you can have multiple reverse (PTR) records, you don't have to.
Since bind uses DNSSEC to validate answers from the upstream servers, we need to provide the trust anchor keys. You can download them from the ISC website as bind.keys (at the time of writing this document). Download the file and save it as etc/bind.keys.
Since we specified a user with id 2001 in the Dockerfile, we will make that user the owner of those files:
# chown -R 2001 /containers/run/bind
And the config is done. Almost there.
Podman
Finally, it's time to create and start the pod. To create the pod, run:
# podman pod create --name dns -p '192.168.122.254:53:53/udp' -p '127.0.0.1:8080:80/tcp' -p '127.0.0.1:8443:443/tcp'
A bit of theory time. Pods are similar to containers, but within a pod you can run multiple containers. The network within a pod is shared; therefore you define the port mappings at the pod level, not for each container. In this case we'll run three containers within the pod, and we map the following ports:
- the standard DNS port 53/udp to our host address, so that it's available from the whole network
- 80/tcp and 443/tcp to our localhost; this is the pihole web interface. We are making it available only from localhost, but we can stick a reverse proxy in front of it later.
We also give the pod a name, in this case dns.
And once we have a pod running, we can start the containers:
# podman run -d --name cloudflared --pod dns --user 2002 localhost/cloudflared:latest
# podman run -d --name bind --pod dns -v '/containers/run/bind/etc:/etc/bind:Z' --user 2001 localhost/bind:latest
# podman run -d --name pihole --pod dns -v '/containers/run/pihole/etc/pihole:/etc/pihole:Z' -v '/containers/run/pihole/etc/dnsmasq.d:/etc/dnsmasq.d:Z' -e=ServerIP='192.168.122.254' -e=DNS1='127.0.0.1#5053' -e=DNS2='no' -e=IPv6='false' -e=TZ='Europe/Berlin' -e=WEBPASSWORD='MY_STRONG_PASSWORD' pihole/pihole:latest
As you can see, we are binding the appropriate folders into each container, and providing a few extra options to pihole, of which the password is the one you should probably change.
Once everything starts, you can try running some DNS queries to check that everything is okay. If you don't have the tools, install bind-utils to get the dig command:
# dnf install bind-utils
And then check if you can resolve some addresses:
# dig +short @192.168.122.254 google.com
172.217.16.110
# dig +short @192.168.122.254 ns.domain.tld
192.168.122.254
To access the DNS service from outside the host, we need to open 53/udp on the firewall:
# firewall-cmd --add-service=dns
# firewall-cmd --add-service=dns --permanent
Web access for pihole
To access the web interface of pihole we need a proxy in front of it; in this case we'll use NGINX. The first thing we need to do is install it:
# dnf install nginx
Once done, create the config file for the new virtual host in /etc/nginx/conf.d/pihole.conf:
:
server {
    listen 192.168.122.254:80;
    server_name pihole.domain.tld;
    root /usr/share/nginx/html;
    index index.html index.htm;
    location / {
        proxy_pass http://127.0.0.1:8080/;
    }
    access_log /var/log/nginx/pihole.access.log;
    error_log /var/log/nginx/pihole.error.log;
}
And start and enable the service:
# systemctl start nginx
# systemctl enable nginx
A few more details. Open the ports in the firewall:
# firewall-cmd --add-service=http --permanent
# firewall-cmd --add-service=https --permanent
# firewall-cmd --reload
And allow nginx to connect to pihole on localhost in SELinux:
# setsebool -P httpd_can_network_connect 1
You should be able to access pihole at http://192.168.122.254 in your web browser and log in using your password. You can also point other machines on your network to use it as a DNS server. And of course, you should add pihole to your local domain.