Setting up Pi-hole DNS

At home I attempt to maintain two networks — a reliable home network with WiFi and internet access, and a separate home lab where services may not be as reliable. These networks overlap at times (for example, the home lab still needs internet access), but I don't want to end up in a situation where the lab needs to be working just to watch something on Netflix.

One place where these two requirements sometimes conflict is around name resolution. I would like everything on my home network to be able to resolve my lab resources, but I don't want to point everything at my lab DNS server.

The best solution I’ve found for this has been Pi-hole (https://pi-hole.net/). This is a service that can run on a variety of platforms and provide network-wide ad blocking — and more.

I have two instances of this running in my lab — one runs on a Raspberry Pi (to be separate from my lab infrastructure) and another backup instance runs in an Ubuntu 20.04 VM.

Deploying Pi-hole in a VM

This couldn’t be easier. I started with an Ubuntu Server 20.04 VM (using the template from here & the deploy process from here). I then ran the easy installer method by piping the install script to bash using:

curl -sSL https://install.pi-hole.net | bash

I followed the wizard and accepted the defaults. At the end of the process I changed the autogenerated admin password to something easier to remember. You can do this with the command:

pihole -a -p

Once this is complete, you can update DHCP scopes to use your Pi-hole IP addresses as the DNS server(s) for your network.
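To confirm that a client is actually resolving through Pi-hole, a quick lookup against the Pi-hole address is an easy test. The 192.168.1.53 address below is just a placeholder for your Pi-hole IP, and the blocked-domain check assumes the domain is on one of the default blocklists:

# confirm Pi-hole answers normal queries
nslookup www.example.com 192.168.1.53

# a domain on the blocklist should return 0.0.0.0 (the default blocking mode)
dig +short doubleclick.net @192.168.1.53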

Customizing to resolve Lab domain names

Once clients are using Pi-hole to resolve DNS names, we can enable conditional forwarding to handle lab-specific domains. In the Pi-hole web interface there is an option to enable this (under Settings > DNS > Advanced DNS settings), but it only supports a single domain name and target DNS server. For my lab, I have a couple of additional names that need to be resolved, but this is still possible using configuration files.

From an SSH session to the Pi-hole DNS server, we can create a file using a command/path such as:

nano /etc/dnsmasq.d/05-custom.conf

In this file we will add server entries for any domain name we want to forward. Multiple entries should provide redundancy in the event one of our lab DNS servers is unavailable.


server=/lab.enterpriseadmins.org/192.168.10.10
server=/lab.enterpriseadmins.org/192.168.99.10
server=/example.com/192.168.10.10
server=/example.com/192.168.99.10
server=/168.192.in-addr.arpa/192.168.10.10
server=/168.192.in-addr.arpa/192.168.99.10
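After saving the file, the resolver needs to pick up the new configuration. A minimal check, assuming vc01.lab.enterpriseadmins.org is a placeholder host that only the lab DNS servers know about, would look like:

# reload pihole-FTL/dnsmasq so the new server= entries take effect
pihole restartdns

# from a client pointed at Pi-hole, confirm a lab name now resolves
nslookup vc01.lab.enterpriseadmins.org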

NTP while we are at it?

One other service that I run on this Pi-hole VM (and Raspberry Pi) is NTP. While not the main purpose of this post, I’ll list the commands below to enable an NTP Server on Ubuntu 20.04.

apt install chrony

Once chrony is installed, we can configure it by editing the configuration file:

nano /etc/chrony/chrony.conf

We can review which upstream NTP servers are used, but more importantly we need to define which systems can use this host as an NTP server. We do that by adding the following lines to the end of the configuration file:


# Define the subnets that can use this host as an NTP server
allow 192.168.0.0/16

This will allow any device in the 192.168.0.0/16 network to query us for NTP. You can list additional 'allow' lines as needed.

Once the configuration file is updated, we can restart the service to apply changes using:

systemctl restart chrony.service

We can now query the NTP server using a variety of tools. On Windows I prefer ntptool.exe (https://www.ntp-time-server.com/ntp-server-tool.html). The output should confirm that we were able to successfully query the new time server.
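On a Linux client the same test can be done from the command line; the example below assumes the ntpdate package is installed and that 192.168.1.53 is a placeholder for the new NTP server's address:

# query the server once without changing the local clock
ntpdate -q 192.168.1.53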

From the NTP server we can list recently checked in clients using the command:

sudo chronyc clients

Conclusion

Now we have an appliance that can provide DNS and NTP for our home network, and will forward any lab-specific DNS queries to the lab DNS servers. If those lab DNS servers are unavailable, it doesn't impact our home network.

Creating a Two Factor Authentication Server

This post will cover building a two factor authentication provider using RADIUS and Google Authenticator. This is an update to a post from over three years ago. The basic configuration is roughly the same; there are just a few minor updates to account for the move from Ubuntu 16.04 to 18.04.

  1. Create Ubuntu 18.04 VM
  2. Modify Ubuntu to only allow login by specific Two Factor Users
  3. Install and configure required packages
  4. Enroll 2FA Users
  5. Migrate existing google_authenticator config files to a new server
  6. Testing with radtest

Modify Ubuntu to only allow login by Specific Two Factor Users

The default Ubuntu template we created has a configuration that will grant all members of the Domain Users group the ability to login. While this is generally ‘good enough’ for a lab, in this specific case we want to limit the users who have the ability to login using Two Factor Authentication (2FA), as we will be using the 2FA solution to provide remote access to the network and want to ensure that only trusted users are permitted.

The specific setting we previously defined in joinad.sh was RequireMembershipOf. We can use the following command to see which groups have the ability to login:

/opt/pbis/bin/config --show RequireMembershipOf

Running this command shows four rows — multistring, lab\domain^users, blank, and local policy. This is because our ./joinad.sh script previously set the default group to Domain Users. If I run the ./config RequireMembershipOf command again, it replaces Domain Users with the new group. For my purposes I want two groups — the Linux Sudoers group (which is referenced in the sudoers configuration and contains my Linux Admin users) and the 2FA Users group. We can do that by passing both group names in one command, like this:

/opt/pbis/bin/config RequireMembershipOf "lab\\lab^linux^sudoers" "lab\\lab^2fa^users"

This will replace the Domain Users row with the two groups specified above. When searching to see if there was an easier way to append, I found the following blog post which contains a script that uses regex to find the existing group and then add the group that is needed: https://techblog.jeppson.org/2017/01/append-users-groups-powerbroker-open-requiremembershipof/
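To confirm the change took effect, we can re-run the earlier --show command; the output should now list the two groups above instead of Domain Users:

/opt/pbis/bin/config --show RequireMembershipOf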

Install and configure required packages

The first command we will run is identical to the previous instructions — we’ll install the pluggable authentication module (PAM) for Google Authenticator and FreeRadius. With Ubuntu 18.04 this will install FreeRadius 3.0, which will lead to a few of the changes from the initial article (mainly paths and one extra command).

apt-get install libpam-google-authenticator freeradius -y

By default, FreeRadius 3.0 does not enable the PAM module. We will manually do that by creating a symbolic link in the mods-enabled directory that points to the module in mods-available, using this command:

 ln -s /etc/freeradius/3.0/mods-available/pam /etc/freeradius/3.0/mods-enabled/pam

We will be storing our per-user Google Authenticator files in each user's home directory. This will prevent users from finding/using other users' 2FA secrets. Because of this configuration, we need the FreeRadius service to be able to run as root (so it can read all users' Google Authenticator configuration files). To do this we will edit radiusd.conf using a text editor:

nano /etc/freeradius/3.0/radiusd.conf

We will find the rows that state user = freerad and group = freerad and replace them with:

user = root
group = root

Next we will append a line to the users file to set PAM as the default authentication type.

echo "DEFAULT Auth-Type := PAM" >> /etc/freeradius/3.0/users

Next we need to update the default site configuration file to enable PAM in FreeRadius. Note: to find text using nano, you can use CTRL+W.

nano /etc/freeradius/3.0/sites-enabled/default

When we find the commented-out pam line, remove the pound sign (which removes the example/comment and enables PAM).
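For reference, the uncommented line in the authenticate section of that file should end up looking roughly like this (surrounding lines trimmed; exact spacing and comments may vary by FreeRadius version):

        #  Pluggable Authentication Modules.
        pam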

We will now configure the PAM Radius service to use Google Authenticator.

nano /etc/pam.d/radiusd

Comment out the @include lines in the file, then add the following text:


auth requisite pam_google_authenticator.so forward_pass
account required pam_unix.so use_first_pass

Now we need to define which clients can use our RADIUS server, and what their 'secret' will be. It is common to restrict this so only specific hosts can access RADIUS, and each server can have a unique secret. However, for my lab, I'm going to allow all servers to connect to RADIUS using the same secret, even though password/secret re-use is a bad security practice. We do this by editing this file:

nano /etc/freeradius/3.0/clients.conf

And add a client entry like this one:


client 192.168.0.0/16 {
        secret          = s3cur3.rad
        shortname       = PrimaryLabSubnet
}

We can have multiple secrets, for example one per host, which would be more secure. Note: the secret should only contain alphanumeric characters and _ - + . (underscore, hyphen, plus, period).
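If you do want a unique secret per host, clients.conf can simply contain multiple client blocks. A sketch with placeholder values (the IP, secret, and shortname below are examples only) might look like:

client 192.168.10.25 {
        secret          = un1que.uag.s3cret
        shortname       = uag-01
}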

Now that we have all of the configuration files edited, we can restart the freeradius service:

service freeradius restart
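If the restart fails, a typo in one of the edited files is the most likely cause. A quick sanity check is to confirm the service is running and listening on the standard RADIUS ports (1812/1813):

systemctl status freeradius
sudo ss -lunp | grep -E '1812|1813'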

Enroll 2FA Users

When an authorized user logs into the Radius server (over SSH) they can run google-authenticator to generate a configuration file. This interactive prompt will ask them a few questions and then generate a QR code that they can scan from a mobile phone with the Google Authenticator application. However, we can suppress some of these questions and ensure a consistent experience for our users.

We can do this by adding an alias to the bashrc file, which is executed at every interactive login.

nano /etc/skel/.bashrc

Now we scroll down to where the other aliases are and add one of our own:

alias google-auth='google-authenticator -tdf -l "$USER Home Lab" -r 3 -R 30 -w 17 -Q ANSI'

This will create an alias named google-auth that will pass in all the expected inputs. Users will now run google-auth instead of google-authenticator. If you have already logged in, the changes to the default bashrc file may not be available in your profile. You can fix this by copying the updated bashrc file (cp /etc/skel/.bashrc /home/yourusername/.bashrc).
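Once a user has run google-auth, a hidden configuration file should exist in their home directory with restrictive permissions (400, readable only by the owner):

ls -l ~/.google_authenticator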

Migrate existing google_authenticator config files to a new server

In each user's home directory, there is a hidden .google_authenticator file. This file is only readable by the user and root (permissions = 400) and contains the information on the client's one-time-use key. In my scenario I have an existing FreeRadius server running on Ubuntu 16.04 that I want to replace with this new 18.04 server, and as such I have some existing files that I'd like to move over to the new server. This same process (coupled with cron) could be tweaked slightly to use as a backup solution for these .google_authenticator files. Here is what I did to get these moved over, starting with commands run on the source server:


tar -czvf googleauth-backup.tar.gz /home/*/.google_authenticator
scp googleauth-backup.tar.gz bwuchner@new-radius-01.lab.enterpriseadmins.org:/tmp
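As mentioned above, the same tar command could also be scheduled with cron as a simple recurring backup of these files. A sketch of a root crontab entry (the schedule and destination path are just examples) might look like:

# back up all .google_authenticator files daily at 01:00
0 1 * * * tar -czf /root/googleauth-backup-$(date +\%F).tar.gz /home/*/.google_authenticator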

Once the files are backed up and copied to the new server, we will run this command on the destination server:

sudo tar --same-owner -xzvf googleauth-backup.tar.gz -C /

This will extract the .google_authenticator files, retaining the previous owner for the files. If the user home directory did not exist, a new folder will be created. The problem is that this new folder structure is created & owned by root. This isn’t an issue for the operation of the 2FA solution, but could be a problem if the user needs to log back in to the Radius server at some point — as they won’t be able to save files in their profile. We can fix that by resetting the ownership of each home directory back to the specific user & Domain Users group. To do that we could run a command like this on the destination server:


cd /home
for i in *; do sudo chown $i:domain^users /home/$i; done

To confirm this worked, we can run ls -lha /home on the destination server to ensure that we see the expected ownership on each directory.

Testing with radtest

When we installed FreeRadius we also received some handy command line tools that we can use to test our configuration. The syntax of radtest expects a username, one-time passcode, RADIUS server IP address, port, and secret, like this:

radtest bwuchner '442287' 192.168.0.100 1812 s3cur3.rad

The response should show Access-Accept. If you get something else, like Access-Reject, then check /var/log/auth.log to see what went wrong. I find that it is easiest to have two SSH sessions opened — one running radtest and the other running

tail -f /var/log/auth.log

At this point you should be able to use the Radius server to provide Two Factor Authentication (2FA) to any necessary service. To see one example of doing this with a VMware Unified Access Gateway (UAG) check out this video I recorded as part of the VMware TAM Lab program: https://www.youtube.com/watch?v=8Ybl58x0CLg.

Lab Updates: Ubuntu Server 20.04 LTS Template

Earlier this year I posted a thread on creating an Ubuntu Server 18.04 LTS Template for use in a lab. You can check that out here: https://enterpriseadmins.org/blog/virtualization/lab-updates-ubuntu-18-04-template/

In late April of this year (2020) Canonical released a new Ubuntu 20.04 LTS edition. I recently followed the same steps from my previous 18.04 template to build a new template — but with two additional changes.

Guest OS Customization Fails

The first issue I noticed was that customization specs were not properly applying, resulting in a deployed VM that had the following symptoms:

  • No IP Address customization
  • No Hostname customization
  • Network Adapter disconnected

I found the following bug report that had a workaround related to a similar previous issue: https://bugs.launchpad.net/ubuntu/+source/open-vm-tools/+bug/1793715. I also found a VMware KB article with more details related to 18.04 here: https://kb.vmware.com/s/article/54986. While I didn’t encounter this issue with 18.04, the workaround resolved my issue with 20.04. Specifically, I performed one action — removing cloud-init. I did this by running a single command:

sudo apt purge cloud-init

Install package for ifconfig

While we are modifying packages for this template, there is one command that I was missing — ifconfig. While I know I can find the IP address other ways (such as with the command ip address), I still like having ifconfig. We can include the package containing this command with another command:

sudo apt install net-tools

Conclusion

Removing one package and adding another are the only differences between my Ubuntu Server 20.04 template and the Ubuntu Server 18.04 template (previous post here).

Which Linux Template?

I was recently showing someone my homelab and they noticed that I had templates for multiple Linux distributions including Ubuntu Server, Photon OS, and Tiny Core, as well as different versions of each. They asked why I didn’t standardize on just one Linux distribution or version. The version issue is easy — I can be lazy when it comes to cleaning up old templates. The reason for multiple distributions required a bit of extra explanation, so I figured I would type it up here.

Tiny Core is a very small Linux distribution that I can clone in seconds. If I want to test something like network connectivity, a new DHCP scope, scripting something against a handful of VMs, or something that deals with the disk of a VM (like vSphere Replication or a VM image backup/restore), Tiny Core is perfect… just enough of a VM to get the job done. Instructions on how to create such a template can be found in this previous post: https://enterpriseadmins.org/blog/lab-infrastructure/lightweight-vm-for-testing-tinycore-linux/.

Photon OS is a minimal Linux distribution optimized to run on VMware platforms. It's already available as an OVA here: https://github.com/vmware/photon/wiki/Downloading-Photon-OS, so I just imported it once and saved it as a template. It only takes a couple of minutes to deploy one of these and configure it to run Docker containers. For an example of something you can use it for, check out this previous post: https://enterpriseadmins.org/blog/scripting/vrealize-operations-alerts-using-rest-notification-plugin/.
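As a quick example of how little work that is: on a freshly deployed Photon OS VM the Docker packages are already included (at least in the OVA builds I have used), so getting a container running is just a few commands:

# enable and start the Docker daemon, then run a test container
systemctl enable docker
systemctl start docker
docker run --rm hello-world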

Ubuntu Server is the largest Linux template I have on disk, but it is still relatively small compared to a Windows template (my configured Ubuntu Server 18.04 template is ~5GB). Ubuntu is a consumer-friendly distribution with current support for a lot of different packages. I have one of these VMs running Pi-hole and another used as a 2FA/RADIUS server that I use for Two Factor Authentication with Horizon UAG. Here is a previous post for building an Ubuntu 18.04 template: https://enterpriseadmins.org/blog/virtualization/lab-updates-ubuntu-18-04-template/

What Linux distribution do you like that I am missing?

Scripts to max out CPU and Memory

Most of the time we want our virtual machines to run as optimally as possible. We don’t want to see CPU contention or high memory conditions. However, on occasion we may want to have some stress to see what that looks like in monitoring tools like vRealize Operations. I created two small scripts that will run in TinyCore Linux, one consuming CPU and the other memory. For info on creating a TinyCore template, you may want to check out this post: https://enterpriseadmins.org/blog/lab-infrastructure/lightweight-vm-for-testing-tinycore-linux/. Here are the scripts for reference:

cpubusy.sh


cpus=$(grep -ci processor /proc/cpuinfo)
echo "System has $cpus CPUs, starting thread for each."

for i in $( seq 1 $cpus )
do
  echo " ..starting background process #$i to consume CPU."
  sha1sum /dev/zero &
done

echo "You can check processes with 'top' sorting by CPU with 'P'."
echo "To end all processes run 'killall sha1sum'"

Note: this file is available at https://raw.githubusercontent.com/bwuch/code-snips/master/cpubusy.sh

memfill.sh


echo "This script will fill memory to 90% with zeros."
# disable swap, only use RAM
sudo swapoff -a

# find current system memory
mem=$(sudo grep -i memtotal /proc/meminfo | awk '{print $2}')

# calculate 90% of current memory, using 100% will cause instability
fillmem=$(expr $mem / 100 \* 90)

# tmpfs is mounted as 50% by default, remount with our 90% number
echo " ..remounting /dev/shm to use 90% of MemTotal"
sudo mount -o remount,size=$fillmem"k" /dev/shm

# show the current size of tmpfs
df -h | grep tmpfs

# fill that space with 1k block zeros
echo " ..starting memory fill process."
dd if=/dev/zero of=/dev/shm/fill bs=1k

Note: this file is available at: https://raw.githubusercontent.com/bwuch/code-snips/master/memfill.sh

I placed these files in the tc user home directory (/home/tc) and set them to executable with chmod +x filename.sh.

If you’d like, you can add entries to have these scripts start automatically at boot — if you want an appliance that maxes out resources all the time. To do this, use sudo vi /opt/bootsync.sh and add entries at the end of the file for /home/tc/cpubusy.sh & and/or /home/tc/memfill.sh &. Note: the ending ampersand causes the script to run in the background and not wait for completion.
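For reference, the end of /opt/bootsync.sh would look something like this after adding both entries (add only the script you actually want running at boot):

# start the stress scripts in the background at boot
/home/tc/cpubusy.sh &
/home/tc/memfill.sh &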

Typing backup will allow you to make these files & changes persistent.

Note: it's much easier getting this text copied over if ssh is installed/configured on your TinyCore VM. There is a very good write up on how to do this here: https://iotbytes.wordpress.com/configure-ssh-server-on-microcore-tiny-linux/. This post also covers how to include credentials (etc/shadow) in the file list backed up by TinyCore, which is also very useful.