Wednesday December 26 2018
Logging, Firewall and Backups - A Void Linux VPS Part II
In part one I covered the installation of Void Linux on a Digital Ocean VPS. In this second part I’m going to cover setting up basic logging, a firewall, and backups.
Logging
When we installed Void Linux in part one we didn’t set up any logging. Although rsyslog is in the repositories, we don’t need all of its features; socklog can take care of logging for us instead.
# xbps-install socklog{,-void}
For those unfamiliar, the above brace expansion is supported in both mksh and bash and translates to two arguments being passed: socklog and socklog-void.
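If you’re curious, you can see exactly what the shell passes along with a quick echo:
$ echo socklog{,-void}
socklog socklog-void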
Then go ahead and enable the daemons:
# ln -s /etc/sv/socklog-unix /var/service/
# ln -s /etc/sv/nanoklogd /var/service/
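You can confirm that both services came up with runit’s sv:
# sv status socklog-unix nanoklogd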
This should take care of logging and put the logs in /var/log/socklog. Unlike the other logging utilities out there, these don’t have as much, or as easily accessible, documentation. The author has some documentation, as well as some more on configuration here. You can gather a little bit more from the socklog-void package here.
The default configuration should be adequate for our purposes. You can easily tail the logs with svlogtail followed by the class of logs. For instance, you can run svlogtail secure to see the logs for authentication and sshd.
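The classes are just directories under the log location, so it’s easy to see what’s available and follow one:
# ls /var/log/socklog
# svlogtail secure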
Firewall
Most tutorials out there on the internet are going to point you towards iptables for your Linux firewall needs, typically using some sort of bash script or similar. Such a setup is sub-optimal for many reasons, even if it is simple, so I’m not going to focus on it here. Instead we’re going to set up and use nftables, which has been in the mainline Linux kernel since 3.13 in 2014!
Void Linux packages nftables and installing it is as simple as:
# xbps-install nftables
From there we can create a simple configuration file:
/etc/nftables.conf
#!/usr/sbin/nft -f
# This is somewhat important, otherwise it will just append to your existing
# rules. This can be somewhat confusing unless you run `nft list table inet
# filter` or similar
flush ruleset
table inet filter {
    chain input {
        type filter hook input priority 0;
        # Allow all input on loopback
        iif lo accept
        # Accept stateful traffic
        ct state established,related accept
        # Accept SSH
        tcp dport 22 accept
        # Accept HTTP and HTTPs
        tcp dport { 80, 443 } accept
        # Allow some icmp traffic for ipv6
        ip6 nexthdr icmpv6 icmpv6 type {
            nd-neighbor-solicit, echo-request,
            nd-router-advert, nd-neighbor-advert
        } accept
        counter drop
    }
    chain forward {
        type filter hook forward priority 0;
    }
    chain output {
        type filter hook output priority 0;
    }
}
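Before enabling anything, you can check the syntax, load the ruleset by hand, and then inspect what the kernel actually ended up with:
# nft -c -f /etc/nftables.conf
# nft -f /etc/nftables.conf
# nft list ruleset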
That’s about the bare minimum for a webserver firewall. On my systems I specifically block all output except from root, from select groups, or to the hosts and services I’ve approved.
Outbound filtering (optional)
chain output {
    type filter hook output priority 0;
    # Allow SSH outbound
    tcp sport 22 accept
    # Allow HTTP and HTTPs outbound
    tcp sport { 80, 443 } accept
    oif lo accept
    # Allow some icmp traffic for ipv6
    ip6 nexthdr icmpv6 icmpv6 type {
        nd-neighbor-solicit, echo-request,
        nd-router-advert, nd-neighbor-advert
    } accept
    meta skuid 0 accept
    meta skgid users accept
    meta skgid wheel accept
    counter drop
}
Note that the way skgid works is with the originating group. This is usually the user’s default group as set in /etc/passwd, unless an application is specifically sgid. If you don’t know what you’re doing here it’s safe to skip the outbound filtering.
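If you do want to use it, you can check which group a given user’s traffic will be attributed to by looking at their primary group (the username here is just an example):
# id -gn exampleuser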
Enabling the firewall on boot
It’s as simple as:
# ln -sv /etc/sv/nftables /var/service/
Installing a cron daemon
For my purposes I’m going to use scron because it’s simple and in the repositories. I don’t need all of the advanced features of other cron implementations, and I have my backups checked on the backup server, so I don’t need email on failed cron jobs.
There are other cron implementations in the repositories though:
# xbps-query -Rs cron
[-] cronie-1.5.2_1 Runs specified programs at scheduled times
[-] cronutils-1.9_2 Set of tools to assist the reliable running periodic and batch jobs
[-] dcron-4.5_32 Dillon's lightweight cron daemon
[-] fcron-3.3.0_3 Feature-rich cron implementation
[-] incron-0.5.12_1 A daemon that executes commands due to inotify events
[-] kcron-18.12.0_1 KDE Configure and schedule tasks
[*] scron-0.4_3 Simple cron daemon
[-] tinycron-0.4_12 A very small replacement for cron
Simply:
# xbps-install scron
# ln -sv /etc/sv/crond /var/service/
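As with the logging services, a quick status check confirms the daemon is actually running:
# sv status crond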
Backups
We’re going to use restic for our backups. It’s simple, straightforward, and allows easy backup to S3-compatible data stores. I’m using Minio for my backup server, something I can cover at a later point if there is interest.
Here’s the backup script; I’ve placed mine in /root/bin/backup.sh:
#!/bin/sh
export AWS_ACCESS_KEY_ID=$YOUR_AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$YOUR_AWS_SECRET_ACCESS_KEY
# You can change the following line to your AWS endpoint or other S3 compatible
# server
export RESTIC_REPOSITORY="s3:https://minio.example.com/servers"
# The file contains a single line with the repository password.
export RESTIC_PASSWORD_FILE="/root/.restic-passwd"
/usr/bin/restic backup -q --exclude-file /.exclude -x / /var /home
Don’t forget to make it executable with chmod +x /root/bin/backup.sh
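The script also assumes the restic repository already exists. If it doesn’t, backups will fail until you do a one-time init against it, using the same environment variables as the script:
# export AWS_ACCESS_KEY_ID=$YOUR_AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$YOUR_AWS_SECRET_ACCESS_KEY
# export RESTIC_REPOSITORY="s3:https://minio.example.com/servers" RESTIC_PASSWORD_FILE="/root/.restic-passwd"
# restic init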
Exclude files easily recovered via other means
Excluding modules and firmware means that you will have to force a re-install of the kernel and firmware when you restore from a backup. This isn’t the end of the world though, especially since you’re going to have to re-install the boot loader anyway.
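For reference, forcing that re-install after a restore looks roughly like this; the exact package names depend on which kernel series and firmware packages you had installed:
# xbps-install -f linux linux-firmware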
Excluding the cache directories shouldn’t cause any harm, other than perhaps some re-downloading if you re-install the kernel, etc. These patterns go in the /.exclude file that the backup script passes to --exclude-file:
/root/.cache/*
/usr/lib/modules/*
/usr/lib/firmware/*
/var/cache/*
/var/tmp/*
/home/*/.cache/*
Running on a schedule
Simply adjust /etc/crontab:
30 0 * * 0 /root/bin/backup.sh > /var/log/backup.log 2>&1
Note on this setup
If you’re running multiple file systems (as suggested in Part I) then you will need to specify all of them above, as the -x flag prevents restic from crossing file system boundaries.
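If you’re not sure which mount points that means on your machine, findmnt (from util-linux) will list the real file systems; each one just gets appended to the restic command in the script:
# findmnt --real -o TARGET,FSTYPE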
Note on databases
If you’re running a database such as MySQL/MariaDB or PostgreSQL you’ll need to use an LVM snapshot or similar to get a consistent backup. Many databases also have options that need to be set to ensure consistency; most enable these by default so they’re crash-tolerant. For Postgres this is usually the WAL, and for MySQL/MariaDB using InnoDB is usually sufficient. Still, test your backups.
An example snippet to add to your backups to handle MySQL (the snapshot is created from inside the mysql session so the read lock is actually held while it’s taken):
#!/bin/sh
set -e
volume_group="vg"
fs="mysql"
mountpoint="/mnt/${fs}-snap"
# How much of the base dataset can change before the snapshot goes away?
overwrite_percentage="50"
# The read lock only lasts as long as the mysql session that took it, so the
# snapshot is created from within that same session using the client's
# built-in `system` command.
mysql <<EOF
flush tables with read lock;
system lvm lvcreate "${volume_group}/${fs}" --name "${fs}-snap" -s -l "${overwrite_percentage}%ORIGIN"
unlock tables;
EOF
if ! [ -d "$mountpoint" ]; then mkdir "$mountpoint"; fi
# nouuid is needed when mounting an XFS snapshot; drop it for other file systems
mount -o nouuid,ro "/dev/${volume_group}/${fs}-snap" "$mountpoint"
# Then run the backups as normal, but specify the snapshot filesystem instead
# of the current writable version
/usr/bin/restic backup -x / /var /home /mnt/mysql-snap
umount -lf "$mountpoint" # just in case something is still accessing it
rmdir "$mountpoint"
lvm lvremove "${volume_group}/${fs}-snap" -y
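A quick way to sanity-check any of these backups is to restore the latest snapshot into a scratch directory and poke around; with the same environment variables as the backup script set, that’s roughly (the target path is arbitrary):
# restic restore latest --target /tmp/restore-test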
Testing database backups is left as an exercise to the reader.
End of Part II, and the future
Not a particularly exciting set of tasks, but necessary for any system that you care about. In Part III we’ll set up Nginx and PHP.