[Howto] Using systemd timers instead of /etc/cron entries

Executing certain commands at given intervals or times is a very typical task for system administrators. In the past it was common to use cron or some variation of it for this, in one way or another.

Background

Cron does the job it was written for. But that was years ago, and these days kernels offer neat things like CPU quotas and memory limits. Cron has no means to use those – but other tools do.

Additionally, newer tools provide dependencies, a proper configuration language (instead of hard-to-maintain bash lines), multiple triggers, randomized delays and real logging.

Especially the last bit, real logging, is essential: Cron can forward log messages it thinks need to be forwarded. But without real kernel-backed process management (cgroups) there is no reliable way for Cron to see whether a job is running or has finished, and which log lines belong to it.

Systemd has all this – and thus it makes sense to create new recurring jobs in systemd, and sometimes even to migrate old ones.

Setting up the timer

Two things are needed: a service file describing WHAT should be done, and a timer file describing WHEN to do it.

Let’s start with the WHAT: we create a typical service file named backupjob.service executing a backup bash script:

[Unit]
Description=Backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

Note that we do not enable the service here! We can start it manually for quick and easy debugging – which is also way easier than with Cron.
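
For a quick test run – a sketch, assuming the unit file was placed under /etc/systemd/system/ – this could look like:

# reload unit files, run the job once, then inspect status and logs
sudo systemctl daemon-reload
sudo systemctl start backupjob.service
systemctl status backupjob.service
journalctl -u backupjob.service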

Keep in mind that this is a typical systemd service. You can also add requirements, dependencies, performance options and so on in a standardized fashion. With Cron this is not possible: it would have to be done in the bash script itself, which is messy, hard to maintain, and most likely duplicates work between multiple Cron jobs – and, by the way, not at all in line with the UNIX philosophy!

Next, the WHEN. We need a timer file, something which describes when to execute the service: backupjob.timer.

[Unit]
Description=Run backup jobs regularly

[Timer]
OnCalendar=daily
AccuracySec=1h
Unit=backupjob.service

[Install]
WantedBy=timers.target

As you can see, this job is scheduled to run daily, with an accuracy of 1 hour. The accuracy is an interesting bit: systemd tries to avoid starting all services at the exact same time, to avoid massive system load at :00. Just recently a technician at a large cloud provider mentioned that data centers could be designed far more efficiently if not everyone scheduled their Cron jobs at the full minute.

Speaking of which: besides OnCalendar there are also ways to start jobs relative to the boot time (OnBootSec), to the moment the timer itself was activated (OnActiveSec), or to the last run of the triggered unit (OnUnitActiveSec). More information can be found in the timer documentation.
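
For example, a minimal sketch of a [Timer] section which fires 15 minutes after boot and then every 15 minutes after the last run:

[Timer]
OnBootSec=15min
OnUnitActiveSec=15min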

Starting and stopping the timer

As mentioned above, the service unit files are not enabled – instead, the timers are: sudo systemctl enable --now backupjob.timer

Stopping a timer is equally simple: systemctl stop backupjob.timer. If you want to prevent it from starting again during the next boot, also disable it: systemctl disable backupjob.timer.

Additional tooling

One of the great things about using the systemd ecosystem is that it is very easy to work with timers: systemctl list-timers gives a nice and clear overview of the current state and the time of next execution:

❯ sudo systemctl list-timers
[sudo] password for liquidat: 
NEXT                         LEFT          LAST                         PASSED       UNIT                         ACTIVATES                     
Thu 2021-04-15 18:06:34 CEST 1h 14min left Thu 2021-04-15 16:09:32 CEST 42min ago    dnf-makecache.timer          dnf-makecache.service         
Fri 2021-04-16 00:00:00 CEST 7h left       Thu 2021-04-15 00:01:01 CEST 16h ago      logrotate.timer              logrotate.service             
Fri 2021-04-16 00:00:00 CEST 7h left       Thu 2021-04-15 00:01:01 CEST 16h ago      mlocate-updatedb.timer       mlocate-updatedb.service      
Fri 2021-04-16 00:00:00 CEST 7h left       Thu 2021-04-15 00:01:01 CEST 16h ago      unbound-anchor.timer         unbound-anchor.service        
Fri 2021-04-16 12:32:32 CEST 19h left      Thu 2021-04-15 12:32:32 CEST 4h 19min ago systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Mon 2021-04-19 01:34:34 CEST 3 days left   Mon 2021-04-12 15:46:39 CEST 3 days ago   fstrim.timer                 fstrim.service                

6 timers listed.
Pass --all to see loaded but inactive timers, too.

At the same time, you can get the detailed status with systemctl status logrotate.timer – or of all timers via systemctl status '*.timer'.

Logs are simply available via journalctl -u logrotate.timer – and the logs for the executed service can be read via journalctl -u logrotate.service.

And if you just don’t want to deal with systemd unit files right now – but nevertheless want all the goodies – you can also launch systemd services or arbitrary commands as a one-time execution:

systemd-run --on-active="10h 30m" --unit myonetimescript.service

Final words

Writing systemd timers instead of traditional Cron jobs makes operating, maintaining and even writing recurring jobs much easier.

The only thing you might be missing is an easy way to send results out via mail.
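
A common workaround is systemd’s OnFailure= hook, which activates another unit whenever the job fails – sketched here with a hypothetical status-email@.service template that you would have to provide yourself:

[Unit]
Description=Backup job
# status-email@.service is a hypothetical mail-sending unit; %n expands to this unit's name
OnFailure=status-email@%n.service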

Image by Ryan McGuire from Pixabay

[Howto] My own mail & groupware server, part 4: Nextcloud

Running your own mail and groupware server can be challenging. I recently had to re-create my own setup from the ground up and describe the steps in a blog post series. This blog post is #4 of the series and covers the integration of Nextcloud for storage.

Let’s add Nextcloud to the existing mail server. This part will focus on setting it up and configuring it in basic terms. Groupware and webmail will come in a later post! If you are new to this series, don’t forget to read part 1: what, why, how?, and all about the mail server setup itself in the second post, part 2: initial mail server setup. We also added a Git server in part 3: Git server.

Nextcloud as “cloud” server

Today’s online experience does not only cover mail and other groupware functions, but also the interaction with files in some form of online storage. Arguably, for many this is more important than e-mail these days.

Thus it makes sense to add a service to the mail server which provides a “cloud” experience around file management. The result lacks cloud functionality in terms of high availability, but provides a rich UI, accessibility from all kinds of devices and integration with various services. It also offers the option to extend the functionality further.

Nextcloud is probably the best known self-hosted cloud solution, and is also used at large scale by universities, governments and companies. I picked it because I had past experience with it and it offers some integrations and add-ons I really like and depend on.

Alternatives worth checking out are ownCloud, Seafile and Pydio.

Integration into mailu setup

Nextcloud can be added to an existing mailu setup in three steps:

  1. Let Nginx know about the service
  2. Add a DB and set it up
  3. Add Nextcloud

The proxy bit is easily done by creating the file /data/mailu/overrides/nginx/nc.conf with the following content:

location /nc/ {
  add_header Front-End-Https on;
  proxy_buffering off;
  fastcgi_request_buffering off;
  proxy_pass http://nc/;
  client_max_body_size 0;
}

We also need a DB. Add this to docker-compose.yml:

  # Nextcloud

  ncpostgresql:
    image: postgres:12
    restart: always
    environment:
      POSTGRES_PASSWORD: ...
    volumes:
      - /data/ncpostgresql:/var/lib/postgresql/data

Make sure to add a proper password here! Next, we have to bring the environment down and up again to add the DB container, and then access the DB and create the right user and database with the corresponding privileges (a combined sketch follows the list):

  • Get the DB up & running: docker compose down and docker compose up
  • access DB container: sudo docker exec -it mailu_ncpostgresql_1 /bin/bash
  • become super user: su - postgres
  • add user nextcloud, add proper password here: create user nextcloud with password '...';
  • add nextcloud database: CREATE DATABASE nextcloud TEMPLATE template0 ENCODING 'UNICODE';
  • change database owner to user nextcloud: ALTER DATABASE nextcloud OWNER TO nextcloud;
  • grant all privileges to nextcloud: GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;
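
Put together, the whole DB setup could look like this in a single session (container name as generated by Compose, password elided):

$ sudo docker exec -it mailu_ncpostgresql_1 /bin/bash
$ su - postgres
$ psql
postgres=# CREATE USER nextcloud WITH PASSWORD '...';
postgres=# CREATE DATABASE nextcloud TEMPLATE template0 ENCODING 'UNICODE';
postgres=# ALTER DATABASE nextcloud OWNER TO nextcloud;
postgres=# GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;
postgres=# \q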

Now we can add the Nextcloud container itself. We will add a few environment variables to properly configure the DB access and the initial admin account. Add the following listing to the Docker Compose file:

  nc:
    image: nextcloud:apache
    restart: always
    environment:
      POSTGRES_HOST: ncpostgresql
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: ....
      POSTGRES_DB: nextcloud
      NEXTCLOUD_ADMIN_USER: admin
      NEXTCLOUD_ADMIN_PASSWORD: ...
      NEXTCLOUD_TRUSTED_DOMAINS: front
      REDIS_HOST: redis
    depends_on:
      - resolver
      - ncpostgresql
    dns:
      - 192.168.203.254
    volumes:
      - /data/nc/main:/var/www/html
      - /data/nc/custom_apps:/var/www/html/custom_apps
      - /data/nc/data:/var/www/html/data
      - /data/nc/config:/var/www/html/config
      - /data/nc/zzz_upload_php.ini:/usr/local/etc/php/conf.d/zzz_upload_php.ini

Nextcloud configuration

Before we launch Nextcloud, we need to configure it properly. As shown in the last line of the previous example, a specific file is needed to define the values for PHP file upload sizes. This is only needed in corner cases (browsers split up files during upload automatically these days), but it can help sometimes. Create the file /data/nc/zzz_upload_php.ini:

upload_max_filesize=2G
post_max_size=2G
memory_limit=4G

Next, we need to create the configuration for the actual Nextcloud instance. Stop the Docker Compose setup, and start it up again. That generates the basic config files on the disk, and you can access /data/nc/config/config.php and adjust the following variables (the others are left intact):

  'overwritewebroot' => '/nc',
  'overwritehost' => 'nc.bayz.de',
  'overwriteprotocol' => 'https',
  'trusted_domains' => 
  array (
    0 => 'lisa.bayz.de',
    1 => 'front',
    2 => 'mailu_front_1.mailu_default',
    3 => 'nc.bayz.de',
  ),

After another Docker Compose down and up, the instance should be all good! If the admin password needs to be reset, access the container via sudo docker exec -it mailu_nc_1 /bin/bash and reset the password with: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ user:resetpassword admin"

Next we can connect Nextcloud to the Mailu IMAP server to use it for authentication. First we install the app “External user authentication” from the developers section. Then we add the following code to the above-mentioned config.php:

  'user_backends' => array(
    array(
      'class' => 'OC_User_IMAP',
      'arguments' => array(
        'imap', 143, 'null', 'bayz.de', true, false
      ),
    ),
  ),

Restart the setup, and logging in as a mail user should be possible.

Sync existing files

In my case the instance replaced a previous one, and as part of the migration a lot of “old” data had to be copied. The problem: copying the data via, for example, WebDAV is time consuming, performs badly and can be troublesome when the sync needs to be picked up again after an interruption.

It is easier to sync directly from disk to disk with established tools like rsync. However, Nextcloud does not know that new files arrived that way and does not list them. The steps to make Nextcloud aware of those are (a combined sketch follows the list):

  1. Log in as each user for which data should be synced so that target directories exist underneath the files/ directory
  2. Sync data with rsync or other tool of choice
  3. Correct permissions: chown -R ...:... files/
  4. Access container: sudo docker exec -it mailu_nc_1 /bin/bash
  5. Trigger file scan in Nextcloud: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ files:scan --all"
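
As a combined sketch – user name and source path are hypothetical placeholders, and the ownership must match what the web server inside the Nextcloud container uses (typically www-data, UID/GID 33, in the official image):

# sync one user's files from the old instance (example paths)
rsync -av /backup/someuser/files/ /data/nc/data/someuser/files/
# fix ownership so the web server inside the container can read the files
chown -R 33:33 /data/nc/data/someuser/files/
# make Nextcloud index the new files
sudo docker exec -it mailu_nc_1 su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ files:scan --all"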

Recurrent updater

For add-ons like the newsreader, Nextcloud needs to perform tasks on a regular basis. Surprisingly enough, Nextcloud cannot easily do this on its own. The best way is to add a recurring job for that – and the best way to do that is a systemd timer.

So first we add the service to be triggered regularly. On the host itself (not inside the container) create the file /etc/systemd/system/nextcloudcron.service:

[Unit]
Description=Nextcloud cron.php job

[Service]
ExecStart=/usr/bin/docker exec mailu_nc_1 su -s /bin/sh www-data -c "/usr/local/bin/php -f /var/www/html/cron.php"

Then, create the timer via the file /etc/systemd/system/nextcloudcron.timer:

[Unit]
Description=Run Nextcloud cron.php every 5 minutes

[Timer]
OnBootSec=5min
OnUnitActiveSec=5min
Unit=nextcloudcron.service

[Install]
WantedBy=timers.target

Enable the timer: systemctl enable --now nextcloudcron.timer. And that’s it – way more flexible, usable and maintainable than the old Cron jobs. If you are new to timers, check their execution with sudo systemctl list-timers.

DB performance

A lot of Nextcloud’s performance depends on the performance of the DB. And DBs are all about indices. There are a few commands which can help with that – and which are recommended by the self check inside Nextcloud anyway:

  • access container: sudo docker exec -it mailu_nc_1 /bin/bash
  • add missing indices: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ db:add-missing-indices"
  • convert filecache: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ db:convert-filecache-bigint"
  • add missing columns: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ db:add-missing-columns"

Preview generator – fast image loading

The major clients used to access Nextcloud will probably be the Android client and a web browser. However, scrolling through galleries full of images is a pain: it takes ages until all the previews are loaded. Sometimes even a slide show is not possible because it all just takes too long.

This is because the images are not downloaded in full size (that would take too long); instead, previews of the size required at that moment are generated on the fly (which still takes long, but not as long).

To make this all faster, one idea is to pre-generate the previews! To do so, we install the app “Preview Generator” in our instance. However, this generates a bit too many preview files, many in sizes which are hardly ever used. So we need to alter the sizes to be generated:

$ sudo docker exec -it mailu_nc_1 /bin/bash
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set previewgenerator squareSizes --value='256 1024'"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set previewgenerator widthSizes  --value='384 2048'"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set previewgenerator heightSizes --value='256 2048'"

Also we want to limit the preview sizes to not waste too much storage:

$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:system:set preview_max_x --value 2048"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:system:set preview_max_y --value 2048"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:system:set jpeg_quality --value 80"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set preview jpeg_quality --value='80'"

Last but not least we run the preview generator:

su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ preview:generate-all -vvv"

Note that this can easily take hours, and thus I recommend launching it in a tmux session so it survives a dropped SSH connection.
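
For example (the session name is arbitrary):

# open a detachable session; detach with Ctrl-b d, re-attach with: tmux attach -t previews
tmux new-session -s previews
sudo docker exec -it mailu_nc_1 su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ preview:generate-all -vvv"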

Of course new files will keep arriving on the system, so once in a while new previews should be generated. Use this command:

su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ preview:pre-generate"

This can also be automated with a service and timer pair similar to the one above. Create the file /etc/systemd/system/nextcloudpreview.service:

[Unit]
Description=Nextcloud image preview generator job

[Service]
ExecStart=/usr/bin/docker exec mailu_nc_1 su -s /bin/sh www-data -c "/usr/local/bin/php -f /var/www/html/occ preview:pre-generate"

And add a timer similar to the above one triggering the service every 15 minutes. Create the file /etc/systemd/system/nextcloudpreview.timer:

[Unit]
Description=Run Nextcloud preview generator every 15 minutes

[Timer]
OnBootSec=15min
OnUnitActiveSec=15min
Unit=nextcloudpreview.service

[Install]
WantedBy=timers.target

Launch the timer: sudo systemctl enable --now nextcloudpreview.timer

One final word of caution: previews can take up a lot of space. Like A LOT. Expect maybe an additional 20% of storage needed for your images.

What’s next?

With Nextcloud up and running and all old data synced I was feeling good: all basic infrastructure services were running again. People could access all their stuff with only slight adjustments to their URLs.

The missing piece now was webmail and general groupware functionality. More about that in the next post of this series.

Image by RÜŞTÜ BOZKUŞ from Pixabay

[Howto] My own mail & groupware server, part 3: Git server

Running your own mail and groupware server can be challenging. I recently had to re-create my own setup from the ground up and describe the steps in a blog post series. This blog post is #3 of the series and covers the integration of an additional Git server.

This post is all about setting up an additional Git server to the existing mail server. Read about the background to this setup in the first post, part 1: what, why, how?, and all about the mail server setup itself in the second post, part 2: initial mail server setup.

Gitea as Git server

I heavily use Git all the time. And there is enough information in there that I feel more comfortable hosting it only on infrastructure I control. So for the new setup I also wanted to have a Git server again, as fast as possible.

In the past I used GitLab as my Git server, but it is very resource intensive and just overkill for my use cases. Thus years ago I already replaced GitLab with Gitea – a lightweight, painless self-hosted Git service. It is quickly set up, simple to use, nevertheless offers all relevant Git features, and simply does its job. Gitea itself is a fork of Gogs, which was not really community friendly. These days Gitea is a way more active and prospering project than Gogs.

Background: Nginx as reverse proxy in Mailu

So how do you “attach” Gitea to a running Mailu infrastructure? Mailu itself comes with a set of defined services, and that’s it – there is no plugin or module system to extend it. However, the project does offer special “overrides” directories where additional configuration can be placed, and this applies to Nginx as well! That way, a service can be placed right next to the other Mailu services, behind the same reverse proxy, and benefit from the already existing setup, for example the automatic certificate renewal. Also, there is no conflict with the already used ports 80 and 443.

Overrides can be placed in /data/mailu/overrides/nginx. They are basically just snippets of Nginx configuration. Note though that they are included within the main server block! That means they can only work on locations, not on server names. This is somewhat unfortunate since I used to address all my old services via sub-domains: git.bayz.de, nc.bayz.de, etc. With the new setup and the limitation to locations this is not an option anymore; everything has to work on different paths: bayz.de/git, bayz.de/nc, etc.

That also meant that I had to reconfigure clients, and ask others to reconfigure their clients when using my infrastructure. I would be happy to get back to pure sub-domain based addressing, but I don’t see how this could be possible without changing the actual Nginx image.

Adding Gitea entry to Nginx Override

Having said that, to add Gitea to Nginx, create this file: /data/mailu/overrides/nginx/git.conf

location /gitea/ {
  proxy_pass http://git:3000/;
}

And that’s it already. More configuration is not needed since Mailu already configures Nginx with reasonable defaults.

This also gives a first hint that it is pretty easy to add further services – I will cover more examples in this ongoing blog post series.

Additional entry to docker compose

To start Gitea itself, add it to Mailu’s docker-compose.yml:

  # gitea

  git:
    image: gitea/gitea:latest
    restart: always
    env_file: mailu.env
    depends_on:
      - resolver
    dns:
      - 192.168.203.254
    volumes:
      - /data/gitea:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "22:22"

Note the shared volumes: that way the Gitea configuration file will be written to your storage at /data/gitea/gitea/conf/app.ini.

Also, we want to set the UID and GID for Gitea via environment variables. Set this in your mailu.env:

###################################
# Gitea settings
###################################
USER_UID=1000
USER_GID=1000

Setting up Gitea basic configuration

Now let’s configure Gitea. It is possible to pre-create a full Gitea configuration and start the container with it. However, documentation on that is sparse and in my tests there were always problems.

So in my case, I just started and stopped the container (docker compose down and up) a few times, edited some configuration, registered an admin user once via the GUI, and was done. While this worked, I can only recommend closely tracking the logs during that time, to ensure that no one else is accessing the container and doing mischief!

So, the first step is to start the new Docker Compose service. This will write a first vanilla configuration of Gitea. Afterwards, add the correct domain information to the [server] section of the Gitea configuration file app.ini:

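A minimal sketch of the relevant values, assuming the /gitea location from the Nginx override above (adjust the domain to your own setup):

[server]
DOMAIN = git.bayz.de
ROOT_URL = https://git.bayz.de/gitea/
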
Note that the ROOT_URL value ensures the required rewrite of all requests and links so that the setup works flawlessly with the above mentioned Nginx configuration!

Next, bring the service down and up again (Docker compose down and up), log in to your new service (here: git.bayz.de/gitea) and register a new admin user. Note that here you also have to pick the database option. For small systems with only very few concurrent connections SQLite is fine. If you will serve more users or automated access, pick PostgreSQL. However, to make that work you need to bring up another PostgreSQL container. One of the next posts will introduce one, so you might want to re-think your setup then.

Directly after this admin registration is done, change the value of DISABLE_REGISTRATION in the [service] section of the Gitea configuration file app.ini to true. Stop and start the service again, and no new (external) users can register anymore.

But how do we register new users now?

Central services authentication in Mailu: mail

One of the major hassles with my last setup was authentication. I started with a fully blown OpenLDAP years ago, which was already a pain to manage and maintain. Moving over to FreeIPA meant that I had better interfaces and even a UI, but it was still a complex, tricky service. Also, while almost every service out there can be connected to LDAP, that is not always easy or a pleasant experience. And given that I only have a few users on my system, it is hardly worth the trouble.

Mailu offers an interesting approach here: users are stored in a DB, and external services are asked to authenticate against the email services (IMAP, SMTP). I was surprised to learn that indeed many services out there support this or have plugins for that.

Gitea can authenticate against SMTP sources, and I decided to go that route:

  • In Gitea, access “Site Administration”
  • Click on “Authentication Sources”
  • Pick the blue button “Add Authentication Source”
  • As “Authentication Type“, choose SMTP
  • Give it a name
  • As “SMTP Authentication Type“, enter LOGIN
  • As “SMTP Host“, provide the external host name, lisa.bayz.de in this case (more on that further down below)
  • Pick the right “SMTP Port“, 587
  • And limit the “Allowed Domains” if you want, in my case to bayz.de
  • Of course, tick the check box “Enable TLS Encryption” and also the check box “This Authentication Source is Activated”

After this is done, log out of Gitea and log in with an existing mail user. It should just work! And all that without any trace of LDAP! Awesome, right?

A word about the SMTP host in the above configuration: do not try to enter the SMTP Docker Compose service directly here. This will not work: port 587 is managed by the Nginx proxy, which acts as a mail proxy and redirects mail auth requests to the admin portal. The internal SMTP container does not even listen on port 587.

What’s next?

With my private Git server back to life I felt slightly better again. Now that I had the infrastructure at hand, I needed to tackle the cloud/file sharing part of it all, to also lay the foundations for the groupware pieces: Nextcloud.

More about that in the next post.

Featured image by Myriam Zilles from Pixabay

[Howto] My own mail & groupware server, part 2: initial mail server setup

Running your own mail and groupware server can be challenging. I recently had to re-create my own setup from the ground up and describe the steps in a blog post series. This blog post is #2 of the series and covers the initial mail server setup.

This post is all about setting up an initial mail server. Read about the background to this setup and the decisions I took in the first post, My own mail & groupware server, part 1: what, why, how?

Getting a server

The first step in hosting your own mail server is to answer a simple question: where? Do you have an internet connection at home with fast uploads and maybe even a fixed IPv4 address? Or is the cloud the only option? And if you pick the cloud, will it be a virtual server or a root server?

My home connection doesn’t really allow for a larger server setup, so the cloud is the only option. Cloud always means someone else’s computer – never forget that! But if you pick a hardware machine, it is at least harder to access or copy that machine without your knowledge, compared to a virtual instance.

My mail setups have always run on root servers, and for the last few years I have picked Hetzner as the hosting provider. Their server auction often has appealing offers, so you can get something “decent enough” for around $30 per month.

So, for my new server I got one from the server auction again.

Server provisioning

The server was up quickly. The next step was to get it provisioned properly. Mailu requires Docker Compose, and since Docker is not properly supported on CentOS 8 I decided to go with CentOS 7. Thus I rebooted the server into the rescue system and started Hetzner’s custom installer to install a minimal CentOS 7.7. RAID 1 was already configured; I just altered the partition sizes and moved most of the storage to the custom mount point /data.

DNS

Besides the basic provisioning I added rDNS entries for IPv4 and IPv6 – don’t forget those, they are important for many spam filters!

Speaking of DNS, the domain someone wants to use for mail needs to be set up in DNS as well. At least the following things should be done:

  • create an A entry named @ for your server IPv4
  • create an AAAA entry named @ for your IPv6
  • create an A entry with your server’s host name for the server IPv4
  • create an AAAA entry with your server’s host name for the server IPv6
  • create an MX entry pointing to the A entry of the server host name (not to a CNAME!)
  • add a CAA entry for Let’s Encrypt: 0 issue "letsencrypt.org"

Many of those entries were still there from my previous setup, but I had to adjust the IP addresses to the new server and add the new host – host name “lisa”, named after the Simpsons.

Basic user and SSH

After setting up a new server, the next step usually is to add a new user: adduser liquidat creates it, usermod -aG wheel liquidat adds my user to the sudo group, and additionally I set the group wheel to NOPASSWD via visudo.

Also, copy the authorized keys from the root user to the new user – cp -r /root/.ssh /home/liquidat/ – and correct their ownership: chown -R liquidat:liquidat /home/liquidat/.ssh

Just to be sure the “right” SSH keys are there, copy your usual set over: ssh-copy-id liquidat@lisa.bayz.de. Personally, I afterwards removed all keys besides the currently most trusted one (ssh-ed25519…).

Last but not least deactivate root login to ssh:

  • Set PermitRootLogin no in /etc/ssh/sshd_config
  • Also, in /etc/ssh/sshd_config set Port 2222 (we want to use port 22 for the git server later on)
  • Restart sshd: systemctl restart sshd

Encrypted partition

One of the most important steps for me (YMMV) is an encrypted hard drive. In my personal risk assessment this impedes many possible hardware attacks from certain actors. Of course certain risks remain.

As mentioned, most of the storage is set up in a partition mounted at /data. So it has to be properly encrypted, formatted and made available again. Note that this requires you to log into the machine after every reboot and actively decrypt the partition. If your server ever goes down, all services on it are down until you have decrypted the partition!

  • Umount existing mount: sudo umount /data/
  • Remove /data entry from /etc/fstab
  • Set up the encrypted device: sudo cryptsetup luksFormat /dev/md3
  • Get passphrase via pwgen -y 50
  • Decrypt the device: sudo cryptsetup luksOpen /dev/md3 verysecret
  • Create a file system on it: sudo mkfs.ext4 /dev/mapper/verysecret
  • Mount it: sudo mount /dev/mapper/verysecret /data

I can only recommend verifying the decryption afterwards:

  • Reboot server
  • Decrypt storage: sudo cryptsetup luksOpen /dev/md3 verysecret
  • Mount storage: sudo mount /dev/mapper/verysecret /data

Docker Compose

Next I had to install Docker. It is personally not my first choice for running containers – I would rather use Podman or even get my hands dirty with Kubernetes. But alas, time was short and the pressure was high.

To get Docker onto the machine:

  • Remove CentOS’ Docker packages: sudo yum remove -y docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
  • Install tooling to easier add third party repos: sudo yum install -y yum-utils
  • Add Docker’s third party repo for CentOS: sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
  • Install Docker: sudo yum install -y docker-ce docker-ce-cli containerd.io
  • We don’t want Docker to start automatically, since the encrypted device is not available right after boot: sudo systemctl disable docker
  • Get Docker up: sudo systemctl start docker
  • Create Docker group: sudo groupadd docker
  • Add user to it: sudo usermod -aG docker $USER
  • Load new group immediately: newgrp docker
  • Get compose: sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  • Make compose executable: sudo chmod +x /usr/local/bin/docker-compose

I will always wonder why they never managed to get compose out there as a package. Not that hard, I’d say?!

But anyhow, the stage was set now: the server was up and running, a user was ready to do work, the device was properly secured, I was ready to set up the mail server!

Installing Mailu

Getting Mailu up and running is really a matter of minutes. I must admit I was impressed – especially since I knew how much time my own setup ate over the years. Basically you use their config file generator, which generates a docker-compose.yml, and then you start it. That’s really all!

  • sudo mkdir /data/mailu
  • create config file via https://setup.mailu.io/master/
    • make sure to add all kinds of subdomains you will be using
    • in my case: don’t activate webmail or caldav, I will be using Nextcloud for that
  • download config files via wget as instructed: one docker-compose.yml and a mailu.env containing all the entered variables
  • verify that ANTIVIRUS is indeed defined and not commented out in mailu.env (thanks to dhoppe for that)
  • docker-compose -p mailu up -d

And that’s it, really! My mail server setup was up and running after mere minutes. Next I added an admin user:

  • create a password for the admin user via pwgen 20
  • create admin password: docker-compose -p mailu exec admin flask mailu admin postmaster bayz.de $PASSWORD
  • log in to admin interface: https://lisa.bayz.de/admin/, login is postmaster@$DOMAIN

The Mailu admin interface is nothing spectacular, but does its job.

After this, I did some housework – not strictly necessary, but helpful:

  • Add abuse alias to postmaster
  • Add admin alias to postmaster (needed for RUA, the DMARC aggregate reports)
  • Generate DNS entries for SPF, DKIM, etc and add them to your DNS domain entries
  • Add other users or even domains at will; all domains entered must be present in the mailu.env config file!

And that’s it! I was able to send myself mail via some freemail accounts. And with a classic mail client (Thunderbird, or something on the phone) I could also send mails. It all just worked!

Get word out there

However, I still had to get word out there that there is a new mail server and that it will be sending valid mails.

For example, I registered the new mail server at the DNS whitelist DNSWL: many spam filters check against that.

Next, I let Microsoft know of the new machine and registered it at postmaster.live.

Last but not least I checked in with Google’s postmaster service.

Verifying and testing the setup

Now it was time for serious testing. I already said mail is hard, right? You had better not make mistakes in your configuration, otherwise your mail is quickly marked as spam. So how about some online tests to check how well my new server scores against various spam filters? There are plenty of online checks of all kinds, including services to which you can send mails to get them analyzed.

All these tests were green. And they should always be! As a small private mail server I cannot afford even the tiniest error. If you decide to set up something like this: do not proceed with your mail setup if some test shows something like “9/10” or other inferior results. Fix them all! I cannot stress this enough.

Having said that, you will realize that this setup is indeed not perfect: first and foremost, it does not accept mail via IPv6, so services testing delivery via IPv6 will report problems. Second, DANE does not work out of the box with Mailu. In the long term I hope to be able to update this guide to include both functions properly.

What’s next?

So the mail server was up and running. I was already able to use it with IMAP clients. And given my story leading to this setup you cannot believe how relieved I was once everything worked again and mails were coming in.

I knew that there was still a lot to do – and I will cover the other steps in further blog posts – but the most important task was accomplished.

I’d like to thank the Mailu team for their awesome work on this piece of code – it is really great and I highly appreciate the ease of use and the simple admin capabilities.

Featured image by Felix Lichtenfeld from Pixabay

[Howto] Using the new Podman API

Podman is a daemonless container engine to develop, run and manage OCI containers. In a recent version the API was rewritten and now offers a REST interface as well as a docker compatible endpoint.

In case you have never heard of Podman before, it is certainly worth a look. Besides offering a more secure drop-in replacement for many Docker functions, it can also manage pods and thus provides a container experience more aligned with what Kubernetes uses. It can even understand Kubernetes yaml (see podman-play-kube), easing the transition from single-host container development to fully fledged container management environments. Last but not least it is among the tools supporting the newest features in the container space, like cgroups v2.

Background: Podman API

Of course Podman is not perfect – due to the focus on Kubernetes yaml there is no support for docker-compose files (though alternatives exist), networking and routing based on names is not as simple as with Docker (read more about Podman container networking), and, last but not least, the API was different – making it hard to migrate solutions that depend on the Docker API.

This changed: recently, a new API was merged:

The new API is a simpler implementation based on HTTP/REST. We provide two basic groups of endpoints. The first one is for libpod; the second is for Docker compatibility, to ease adoption. 

New API coming for Podman

So how can I access the new API and fool around with it?

If you are familiar with Podman, or read carefully, the first question is: where is this API running if Podman is daemonless? And in fact, an API service needs to be started explicitly:

$ podman system service --timeout 5000

This starts the API on a UNIX socket. Other options, like a TCP socket or running the service without a timeout, are also possible; the documentation provides examples.
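
Alternatively, on distributions that ship systemd units for Podman – an assumption you should verify on your system – the API socket can also be started on demand via socket activation:

systemctl --user enable --now podman.socket
ls $XDG_RUNTIME_DIR/podman/podman.sock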

How to use the Docker API endpoint

Let’s use the Docker API endpoint. To talk to a UNIX socket based REST API a recent curl (version >= 7.40) is quite helpful:

$ curl --unix-socket /$XDG_RUNTIME_DIR/podman/podman.sock http://localhost/images/json
[{"Containers":1,"Created":1583300892,"Id":"8c2e0da7c436e45be5ebf2adf26b41d13939190bd186214a4d45c30485071f9f","Labels":{"license":"MIT","name":"fedora","vendor":"Fedora Project","version":"31"},"ParentId":...

Note that here we are speaking to the rootless API service, thus the UNIX domain socket is in the user runtime directory. Also, localhost has to be provided in the URL for very recent curl versions – otherwise curl does not output anything!

The answer is a JSON listing, which is not easily readable. Simplify it with the help of Python (and silence curl info with the silent flag):

$ curl -s --unix-socket /$XDG_RUNTIME_DIR/podman/podman.sock http://localhost/containers/json|python -m json.tool
[
    {
        "Id": "4829e030ab1beb83db07dbc5e51481cb66562f57b79dd9eb3069dfcde91019ed",
        "Names": [
            "/87faf76aea6a-infra"
...

So what can you do with the API? Podman tries to recreate most of the Docker API, so you can basically use the Docker API documentation to see what should be possible. Note though that not all API endpoints are supported, since Podman does not provide all the functions Docker offers.
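
For example, pulling an image should work through the images/create endpoint known from the Docker API – a sketch, with the image name as an arbitrary example:

$ curl -s --unix-socket /$XDG_RUNTIME_DIR/podman/podman.sock -X POST "http://localhost/images/create?fromImage=registry.fedoraproject.org/fedora&tag=latest"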

How to use the Podman API endpoint

As mentioned, the API provides two groups of endpoints: the Docker-compatible endpoint, and a Podman-specific endpoint. This second API is necessary for multiple reasons: first, Podman has functions which are alien to Docker and thus not part of the Docker API – the pod functions are the most notable here. Another reason is that an independent API enables the Podman developers to further innovate in their own way and at their own velocity, and to change the API when needed or wanted.

The API for Podman can be reached via curl as mentioned above. However, there are two notable differences: first, the Podman endpoint is marked via an additional “libpod” string in the API URI, and second, the Podman API is always versioned. To list the images as shown above, but via Podman’s own API, the following call is necessary:

$ curl -s --unix-socket /$XDG_RUNTIME_DIR/podman/podman.sock http://localhost/v1.24/libpod/images/json
[{"Id":"8c2e0da7c436e45be5ebf2adf26b41d13939190bd186214a4d45c30485071f9f","RepoTags":["registry.fedoraproject.org/fedora:latest"],"Created":1583300892,"Size":199632198,"Labels":{"license":"MIT","name":"fedora","vendor":"Fedora ...

For pods, the endpoint is for example /pods instead of /images:

$ curl -s --unix-socket /$XDG_RUNTIME_DIR/podman/podman.sock http://localhost/v1.24/libpod/pods/json|python -m json.tool
[
    {
        "Cgroup": "user.slice",
        "Containers": [
            {
                "Id": "1510dca23d2d15ae8be1eeadcdbfb660cbf818a69d5780705cd6535d97a4a578",
                "Names": "wonderful_ardinghelli",
                "Status": "running"
            },
            {
                "Id": "6c05c20a42e6987ac9f78b277a9d9152ab37dd05e3bfd5ec9e675979eb93bf0e",
                "Names": "eff81a37b4b8-infra",
                "Status": "running"
            }
        ],
        "Created": "2020-04-19T21:45:17.838549003+02:00",
        "Id": "eff81a37b4b85e92916613239001cddc2ba42f3595236586f7462492be0ac5fc",
        "InfraId": "6c05c20a42e6987ac9f78b277a9d9152ab37dd05e3bfd5ec9e675979eb93bf0e",
        "Name": "testme",
        "Namespace": "",
        "Status": "Running"
    }
]

Currently there is no documentation of the API available – or at least none at the level of the current Docker API documentation. But hopefully that will change soon.

Takeaways

Podman providing a Docker-compatible API is a great step for people who depend on the Docker API but nevertheless want to switch to Podman. And providing a unique but simple-to-consume REST API for Podman itself is equally great, because it makes it easy to integrate Podman processes into existing tools and frameworks.

Just don’t forget that the API is still in development!

Featured image by Magnascan from Pixabay