[Howto] Using systemd timers instead of /etc/cron entries

Executing certain commands at given intervals or times is a very typical task for system administrators. In the past it was common to use cron or one of its variants for this.

Background

Cron does the job it was written for. But that was years ago, and these days kernels offer neat things like CPU quotas and memory limits. Cron has no means to use those – but other tools do.

Additionally, newer tools provide dependencies, a proper configuration language (instead of hard-to-maintain bash lines), multiple triggers, randomized delays and real logging.

Especially the last bit, real logging, is essential: Cron can forward log messages it thinks need to be forwarded. But without real kernel-backed process management (cgroups) there is no reliable way for Cron to see if a job is still running or has finished, and which log lines belong to it.

Systemd has all this – and thus it makes sense to create new recurring jobs in systemd, and sometimes even to migrate old ones.

Setting up the timer

Two things are needed: a service file describing WHAT should be done, and a timer file describing WHEN to do it.

Let’s start with the WHAT: we create a typical service file named backupjob.service executing a backup bash script:

[Unit]
Description=Backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

Note that we do not enable the service here! We can start it manually for quick and easy debugging – which is also way easier than with Cron.
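For such a debugging run, a minimal sketch – assuming the unit file was saved as /etc/systemd/system/backupjob.service – could look like this:

$ sudo systemctl daemon-reload
$ sudo systemctl start backupjob.service
$ journalctl -u backupjob.service

Since the service is of Type=oneshot, systemctl start blocks until the script has finished, and the logs are immediately available in the journal.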

Keep in mind that this is a typical systemd service. You can also add requirements, dependencies, performance options and so on in a standardized fashion. With Cron this is not possible: it would have to be done in the bash script itself and thus would be messy, hard to maintain, and most likely duplicated across multiple Cron jobs – and by the way, not at all in line with the UNIX philosophy!

Next, the WHEN. We need a timer file, something which describes when to execute the service: backupjob.timer.

[Unit]
Description=Run backup jobs regularly

[Timer]
OnCalendar=daily
AccuracySec=1h
Unit=backupjob.service

[Install]
WantedBy=timers.target

As you can see, this job is scheduled to run daily, with an accuracy of 1 hour. The accuracy is an interesting bit: systemd tries to avoid starting all services at the exact same time, to prevent massive system load at :00. Just recently a technician at a large cloud provider mentioned that data centers could be designed much more efficiently if not everyone placed their Cron jobs on the full minute.

Speaking of scheduling: besides OnCalendar there are also ways to start jobs relative to the boot-up time, or relative to when the timer's unit was last activated. For example, to run something every 15 minutes, set OnUnitActiveSec=15min. More information can be found in the timer documentation.
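A minimal sketch of such a monotonic [Timer] section – the 15 minute interval is just an example, and the randomized delay is optional:

[Timer]
# first run 15 minutes after boot
OnBootSec=15min
# follow-up runs 15 minutes after the last activation of the unit
OnUnitActiveSec=15min
# optionally spread the start over 5 minutes to avoid load spikes
RandomizedDelaySec=5min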

Starting and stopping the timer

As mentioned above, the service unit files are not enabled; instead, the timers are: sudo systemctl enable --now backupjob.timer

Stopping a timer is equally simple: systemctl stop backupjob.timer. If you want to prevent it from being started again during the next boot, also disable it: systemctl disable backupjob.timer.

Additional tooling

One of the great things about using the systemd ecosystem is that it is very easy to work with timers: systemctl list-timers gives a nice and clear overview of the current state and the time of next execution:

❯ sudo systemctl list-timers
[sudo] password for liquidat: 
NEXT                         LEFT          LAST                         PASSED       UNIT                         ACTIVATES                     
Thu 2021-04-15 18:06:34 CEST 1h 14min left Thu 2021-04-15 16:09:32 CEST 42min ago    dnf-makecache.timer          dnf-makecache.service         
Fri 2021-04-16 00:00:00 CEST 7h left       Thu 2021-04-15 00:01:01 CEST 16h ago      logrotate.timer              logrotate.service             
Fri 2021-04-16 00:00:00 CEST 7h left       Thu 2021-04-15 00:01:01 CEST 16h ago      mlocate-updatedb.timer       mlocate-updatedb.service      
Fri 2021-04-16 00:00:00 CEST 7h left       Thu 2021-04-15 00:01:01 CEST 16h ago      unbound-anchor.timer         unbound-anchor.service        
Fri 2021-04-16 12:32:32 CEST 19h left      Thu 2021-04-15 12:32:32 CEST 4h 19min ago systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Mon 2021-04-19 01:34:34 CEST 3 days left   Mon 2021-04-12 15:46:39 CEST 3 days ago   fstrim.timer                 fstrim.service                

6 timers listed.
Pass --all to see loaded but inactive timers, too.

At the same time, you can get the detailed status with systemctl status logrotate.timer – or of all of them via systemctl status '*timer' (quoted, so the shell does not expand the glob itself).

Logs are simply available via journalctl -u logrotate.timer – and the logs for the executed service can be read via journalctl -u logrotate.service.

And if you just don’t want to deal with systemd unit files right now – but nevertheless want all the goodies – you can also launch systemd services or general commands with a one-time execution:

systemd-run --on-active="10h 30m" --unit myonetimescript.service
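The same works for ad-hoc commands without any pre-existing unit file. A minimal sketch, reusing the backup script from above:

$ sudo systemd-run --on-active="30m" /usr/local/bin/backup.sh

systemd-run creates a transient timer and service unit on the fly, so the job also shows up in systemctl list-timers like any other timer.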

Final words

Writing systemd timers instead of traditional Cron jobs makes operating, maintaining and even writing recurring jobs much easier.

The only thing you might miss is Cron's easy way of sending results out via mail.


[Howto] Run programs as non-root user on privileged ports via Systemd [Update]

Running programs as a non-root user is a must in security-sensitive environments. However, these programs sometimes need to publish their service on privileged ports like port 80 – which cannot be used by unprivileged users. Systemd offers a simple way to solve this problem.

Background

The reason for running services as non-root users is quite obvious: if the service is attacked and a malicious user gets control of it, the rest of the system should still be secure in terms of access rights.

At the same time, plenty of programs need to publish their service on ports like 80 or 443 since these are the default ports for HTTP communication and thus for interfaces like REST. But these ports are not available to non-root users.

The problem, shown with Gitea as an example

To show how to solve this with systemd, we take the self-hosted git service Gitea as an example. Currently there are hardly any packages available, so most people end up installing it from source, for example as the user git. A proper systemd unit file for such an installation in a local path, running the service as a local user, is:

$ cat /etc/systemd/system/gitea.service
[Unit]
Description=Gitea (Git with a cup of tea)
After=syslog.target
After=network.target
After=postgresql.service

[Service]
RestartSec=2s
Type=simple
User=git
Group=git
WorkingDirectory=/home/git/go/src/code.gitea.io/gitea
ExecStart=/home/git/go/src/code.gitea.io/gitea/gitea web
Restart=always
Environment=USER=git HOME=/home/git

[Install]
WantedBy=multi-user.target

If this service is started and the application is configured to use port 80, it fails during startup with a bind error:

Jan 04 09:12:47 gitea.qxyz.de gitea[8216]: 2018/01/04 09:12:47 [I] Listen: http://0.0.0.0:80
Jan 04 09:12:47 gitea.qxyz.de gitea[8216]: 2018/01/04 09:12:47 [....io/gitea/cmd/web.go:179 runWeb()] [E] Failed to start server: listen tcp 0.0.0.0:80: bind: permission denied

Solution

One way to tackle this would be a reverse proxy, running on port 80 and forwarding traffic to an unprivileged port like 8080. However, it is much simpler to add an additional systemd socket which listens on port 80:

$ cat /etc/systemd/system/gitea.socket
[Unit]
Description=Gitea socket

[Socket]
ListenStream=80
NoDelay=true

As shown above, the definition of a socket is straightforward and hardly needs any special configuration. We use NoDelay here since Go sets this by default on the sockets it opens, and we want to imitate that behavior.
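By the way, if you just want to experiment with socket activation without writing unit files at all, recent systemd versions ship a small helper for exactly that. A sketch, assuming the binary is called systemd-socket-activate on your distribution (older versions named it systemd-activate) – whether the application picks the socket up depends on its socket-activation support, which Gitea, as the logs below show, does have:

$ systemd-socket-activate -l 8080 /home/git/go/src/code.gitea.io/gitea/gitea web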

Given this socket definition, we add the socket as requirement to the service definition:

[Unit]
Description=Gitea (Git with a cup of tea)
Requires=gitea.socket
After=syslog.target
After=network.target
After=postgresql.service

[Service]
RestartSec=2s
Type=simple
User=git
Group=git
WorkingDirectory=/home/git/go/src/code.gitea.io/gitea
ExecStart=/home/git/go/src/code.gitea.io/gitea/gitea web
Restart=always
Environment=USER=git HOME=/home/git
NonBlocking=true

[Install]
WantedBy=multi-user.target

As seen above, the unit definition hardly changes: only the requirement for the socket is added – plus NonBlocking=true, again to imitate Go's behavior.
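With both files in place, reload systemd and start the service – a minimal sketch, assuming the units live in /etc/systemd/system:

$ sudo systemctl daemon-reload
$ sudo systemctl start gitea.service    # pulls in gitea.socket via Requires=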

That’s it! Now the service starts up properly and everything is fine:

[...]
Jan 04 09:21:02 gitea.qxyz.de gitea[8327]: 2018/01/04 09:21:02 Listening on init activated [::]:80
Jan 04 09:21:02 gitea.qxyz.de gitea[8327]: 2018/01/04 09:21:02 [I] Listen: http://0.0.0.0:80
Jan 04 09:21:08 gitea.qxyz.de gitea[8327]: [Macaron] 2018-01-04 09:21:08: Started GET / for 192.168.122.1
[...]

The Gitea documentation covers more examples of possible systemd service configurations, including the usage of capabilities instead of sockets.
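For completeness, a rough sketch of the capabilities approach mentioned there: instead of a socket unit, the service is granted just the one capability it needs to bind privileged ports (this requires a reasonably recent systemd):

[Service]
# grant only the capability needed to bind ports below 1024
AmbientCapabilities=CAP_NET_BIND_SERVICE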


[Howto] Using D-BUS to query status information from NetworkManager (or others)

Most current Linux installations rely on the inter-process communication framework D-Bus. D-Bus can be used to gather quite some information about the system – however, the usage can be a bit troublesome. This howto sheds some light on the usage of D-Bus with the example of querying the NetworkManager interface.

Background

D-Bus enables tools and programs to talk to each other. For example, tools like NetworkManager, systemd or firewalld all provide methods and information via D-Bus to query their state, change their configuration or trigger some specific behavior. And of course all these operations can also be performed on the command line. This can be handy in case you want to include it in some bash scripts or, for example, in your monitoring setup. It also helps in understanding the basic principles behind D-Bus in case you want to use it in more complex scripts and programs.

First steps: qdbus

For this example I use qdbus, which is shipped with Qt. There are corresponding tools like gdbus and others available in case you don't want to install Qt on your machine for whatever reason.

When you first launch qdbus it shows you a list of strange names which roughly remind you of the apps currently running in your desktop/user session. The point is that you are querying your own session bus – but in the case of NetworkManager or other system tools you need to query the system bus:

$ qdbus --system
...
 org.freedesktop.NetworkManager
...

This output shows a list of all available services – or better said, interfaces. You can connect to these and get a list of the objects they have:

$ qdbus --system org.freedesktop.NetworkManager
...
/org
/org/freedesktop
/org/freedesktop/NetworkManager
/org/freedesktop/NetworkManager/AccessPoint
/org/freedesktop/NetworkManager/AccessPoint/0
...

Each object has a path which identifies, well, the path to the object. That is how you address it and everything connected to it.

Querying objects

Now that we have a list of objects, we can check which members belong to an object. Members can be actions which can be triggered, information about a current state, signals, etc. – and once we have access to the members, things get interesting. In this case we query the object NetworkManager itself, not one of its sub-objects:

$ qdbus --system org.freedesktop.NetworkManager /org/freedesktop/NetworkManager
...
method QDBusVariant org.freedesktop.DBus.Properties.Get(QString interface, QString propname)
method QVariantMap org.freedesktop.DBus.Properties.GetAll(QString interface)
...
property read QList<QDBusObjectPath> org.freedesktop.NetworkManager.ActiveConnections
...

The output shows a list of various members. The code snippet above contains the methods to get information – and a property called org.freedesktop.NetworkManager.ActiveConnections. Guess what: that property holds the current active connections (there can be more than one!) of NetworkManager. And we can ask for this information (using --literal, because otherwise qdbus cannot display the complex return type):

$ qdbus --system --literal org.freedesktop.NetworkManager /org/freedesktop/NetworkManager org.freedesktop.DBus.Properties.Get org.freedesktop.NetworkManager ActiveConnections
[Variant: [Argument: ao {[ObjectPath: /org/freedesktop/NetworkManager/ActiveConnection/0]}]]
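In case you picked gdbus instead of qdbus earlier, the equivalent query would look roughly like this:

$ gdbus call --system --dest org.freedesktop.NetworkManager \
    --object-path /org/freedesktop/NetworkManager \
    --method org.freedesktop.DBus.Properties.Get \
    org.freedesktop.NetworkManager ActiveConnections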

Please note that we did not pass the property name in one piece as a single argument, but split it at the last dot: formally, we asked for the content of the property ActiveConnections on the interface org.freedesktop.NetworkManager. Interface and property appear merged in the output, but the query needs them as two separate arguments – that is simply the signature of the Properties.Get method listed earlier: it takes an interface name and a property name.
But well, now we know that our active connection is actually a NetworkManager object with the path given above. We can again query that object to get a list of all members:

$ qdbus --system --literal org.freedesktop.NetworkManager /org/freedesktop/NetworkManager/ActiveConnection/0
...
method QDBusVariant org.freedesktop.DBus.Properties.Get(QString interface, QString propname)
...
property read QDBusObjectPath org.freedesktop.NetworkManager.Connection.Active.Ip4Config
...

There is again a member to get properties – and the interesting property again is an object path:

$ qdbus --system --literal org.freedesktop.NetworkManager /org/freedesktop/NetworkManager/ActiveConnection/0 org.freedesktop.DBus.Properties.Get org.freedesktop.NetworkManager.Connection.Active Ip4Config
[Variant: [ObjectPath: /org/freedesktop/NetworkManager/IP4Config/1]]

We again query that given object path and see rather promising members:

$ qdbus --system --literal org.freedesktop.NetworkManager /org/freedesktop/NetworkManager/IP4Config/1
property read QDBusRawType::aau org.freedesktop.NetworkManager.IP4Config.Addresses
property read QStringList org.freedesktop.NetworkManager.IP4Config.Domains
property read QString org.freedesktop.NetworkManager.IP4Config.Gateway
...

And indeed: if we now query these members, we get for example the current Gateway:

$ qdbus --system --literal org.freedesktop.NetworkManager /org/freedesktop/NetworkManager/IP4Config/1 org.freedesktop.DBus.Properties.Get org.freedesktop.NetworkManager.IP4Config Gateway
[Variant(QString): "192.168.178.1"]

That’s it. Now you know the gateway I have configured right now. If you do not want to query each member individually, you can simply ask for all properties of an interface at once:

$ qdbus --system --literal org.freedesktop.NetworkManager /org/freedesktop/NetworkManager/IP4Config/1 org.freedesktop.DBus.Properties.GetAll org.freedesktop.NetworkManager.IP4Config|sed 's/, /\n/g'
[Argument: a{sv} {"Gateway" = [Variant(QString): "192.168.178.1"]
"Addresses" = [Variant: [Argument: aau {[Argument: au {565356736
24
28485824}]}]]
"Routes" = [Variant: [Argument: aau {}]]
"Nameservers" = [Variant: [Argument: au {28485824}]]
"Domains" = [Variant(QStringList): {"example.com"}]
"Searches" = [Variant(QStringList): {}]
"WinsServers" = [Variant: [Argument: au {}]]}]

As you can see, the IPv4 addresses are encoded as 32-bit integers with the bytes in reverse order: the Nameservers value 28485824 is 0x01B2A8C0 in hex, and reading the bytes backwards gives C0.A8.B2.01 – that is, 192.168.178.1, matching the gateway above. I am sure there is a reason for that encoding. A good one. Surely. In the end, the queries worked: the current gateway was successfully identified via D-Bus.
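If you want to decode such an integer yourself, a minimal sketch in plain bash arithmetic – 28485824 is the Nameservers value from the output above:

$ ip=28485824; printf '%d.%d.%d.%d\n' $((ip & 255)) $((ip >> 8 & 255)) $((ip >> 16 & 255)) $((ip >> 24 & 255))
192.168.178.1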

Methods: calling panic mode in firewalld

As mentioned above there are also methods which influence the behavior of an application. One simple example I came across is to kill all networking by enabling firewalld's panic mode. For that you need the interface org.fedoraproject.FirewallD1, the object /org/fedoraproject/FirewallD1 and the method org.fedoraproject.FirewallD1.enablePanicMode:

$ qdbus --system --literal org.fedoraproject.FirewallD1 /org/fedoraproject/FirewallD1 org.fedoraproject.FirewallD1.enablePanicMode
[]

And your internet connection is gone. It comes back by disabling the panic mode again:

$ qdbus --system --literal org.fedoraproject.FirewallD1 /org/fedoraproject/FirewallD1 org.fedoraproject.FirewallD1.disablePanicMode
[]

Rights

You should also be aware that there is rights management embedded in D-Bus (typically enforced via polkit) – not every user is allowed to do everything. For example, as a normal user you cannot simply query all configured chains. If you call the following method:

$ qdbus --system --literal org.fedoraproject.FirewallD1 /org/fedoraproject/FirewallD1 org.fedoraproject.FirewallD1.direct.getAllChains
[Argument: a(sss) {}]

you are greeted with a password dialog before the command is executed.

Summary

D-Bus is used for inter-process communication and thus can help when various programs are supposed to work together. It can also be used on the shell to query information or to call specific methods, as long as they are provided via the D-Bus interface. That might come in handy: some applications have rather strange ways to provide data or procedures via their user interfaces, and D-Bus offers a very generic way to interact without the need to touch any user interface.
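As a small illustration of the scripting use case mentioned above, here is a minimal bash sketch that extracts the current gateway with the queries from this post – note that the IP4Config object path is specific to my machine and just an assumption here:

#!/bin/bash
# Minimal sketch: read the current IPv4 gateway via D-Bus.
# The IP4Config object path varies per system (taken from the example above).
gw=$(qdbus --system --literal org.freedesktop.NetworkManager \
    /org/freedesktop/NetworkManager/IP4Config/1 \
    org.freedesktop.DBus.Properties.Get \
    org.freedesktop.NetworkManager.IP4Config Gateway \
    | sed 's/.*"\(.*\)".*/\1/')
echo "Current gateway: ${gw}"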

First look at cockpit, a web based server management interface [Update]

Only recently the Cockpit project was launched, aiming at providing a web-based management interface for various servers. It already leaves an interesting impression for simple management tasks – and the design is actually well done.

I just recently came across the only three-month-old Cockpit project. The mission statement is clear:

Cockpit is a server manager that makes it easy to administer your GNU/Linux servers via a web browser.

The web page also states three aims: a beginner-friendly interface, multi-server management – and no interference between mixed usage of web interface and shell. Especially the last point caught my attention: many other web-based solutions introduce their own magic, thus making it sometimes tricky to co-administrate the system manually via the shell. The listed objectives also make clear that Cockpit does not try to replace tools that go much deeper into the configuration of servers, like Webmin, which for example offers modules to configure Apache servers in quite some detail. Cockpit tries to simply administrate the server, not the applications. I must admit that I would always do such application configuration manually anyway…

The installation of Cockpit is a bit bumpy: besides the requirement of tools like systemd, which limits the usage to only very recent distributions (excluding Ubuntu, I guess), there are no packages yet, so some manual steps are required. A post at unshut.me highlights the necessary steps for Fedora, which I followed: it includes installing dependencies, setting firewall rules, etc. – and in the end it just works. But please note, in case you want to give it a try: it is not ready for production. Not at all. Use virtual machines!

What I did see after the installation was actually rather appealing: a clean, yet modern web interface offering the most important and simple tasks a sysadmin might need in a daily routine: quickly showing the current health state, providing logs, starting and stopping services, creating new users, switching between servers, etc. And: there is even a working rescue console!

And wherever you click you quickly see what the foundation of Cockpit is: systemd. The log viewer shows systemd journal logs, services are displayed as seen and managed by systemd, and so on. That is the reason why one goal – no interference between shell and web interface – can be reached rather easily: the web interface communicates with systemd, just like an administrator on such a machine would do. <Update> Speaking of which: if you want to get an idea of *how* Cockpit communicates with its components, have a look at their transport graphic. </Update> Systemd, by the way, also explains why Cockpit is currently developed on Fedora: it ships with fully activated systemd.

But back to Cockpit itself: Some people might note that running a web server on a machine which is not meant to provide web pages is a security issue. And they are right. Each additional service on a server is a potential threat. But also keep in mind that many simple server installations already have an additional web server for example to show Munin statistics. So as always you have to carefully balance the pros of usable system management with the cons of an additional service and a web reachable system console…

To summarize: the interface is slick and easy to use, and for simple server setups it could come in handy as a management tool, for example for beginners, when accessible from the internal network only. A downside currently is the already mentioned limitation to a few distributions: as far as I can tell, only Fedora 18 and 20 are supported yet. But the project has just begun, and it will most certainly pick up more support in the near future, as long as the foundation (systemd) is properly supported in the distribution of your choice. And in the meantime Cockpit might be an extra bonus for people testing the upcoming Fedora Server. 😉

Last but not least, in case you wonder what server management looks like with systemd, Cockpit can give you a first impression: it uses systemd and almost nothing else for exactly that.