[Short Tip] Plot live-data in Linux terminal

Recently I realized that one of the disks in my server had died. After the replacement, the RAID sync started – and I quickly had to learn that this was going to take days (!). But I also learned that the estimated remaining time jumped up and down massively.

Thus I thought it would be fun to monitor the progress. First, I simply created a command to watch the remaining minutes (converted into days) every few seconds with watch:

watch 'cat /proc/mdstat |grep recovery|cut -d " " -f 13|cut -d "=" -f 2|cut -d "." -f 1|xargs -n 1 -I {} echo "{}/60/24"|bc'
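This pipeline is rather position-dependent, since it relies on the exact field layout of /proc/mdstat. A rough alternative that picks out the finish=…min value by pattern instead – just a sketch, assuming the usual recovery line format – could look like this:

grep -o 'finish=[0-9.]*' /proc/mdstat | cut -d= -f2 | awk '{printf "%.2f\n", $1/60/24}'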

But since the value was jumping around so much, I was wondering if I could live-plot the data in the terminal (remote server, after all). There are many ways to do that – even gnuplot seems to have an option for it – but I wanted something simpler. Enter: pipeplot
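pipeplot is a small Python tool; if it is not available on your system yet, it can usually be installed via pip (assuming the package is simply called pipeplot on PyPI):

pip install pipeplot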

First I tried to use watch together with pipeplot, but it was easier to just write a short while loop around it:

while true;
do
  cat /proc/mdstat |grep recovery|cut -d " " -f 13|cut -d "=" -f 2|cut -d "." -f 1|xargs -n 1 -I {} echo "{}/60/24"|bc;
  sleep 5;
done \
| pipeplot

And the result is rather nice (also shown in the header image):

Figuring out the container runtime you are in

Containers are meant to keep processes contained. But there are ways to gather information about the host – like the actual execution environment you are running in.

Containers are pretty good at keeping everyone and everything inside their boundaries – thanks to SELinux, namespaces and so on. But they are not perfect. A recent Azure security flaw made me aware of a nice trick using the /proc/ file system to figure out which container runtime a container is running in.

The idea is that the started container inherits some /proc/ entries – among them the entry for /proc/self/. If we are able to load a malicious container, we can use this knowledge to execute the runtime binary from the host and thereby get information about the runtime.

As an example, let’s take a Fedora container image with podman (and all its library dependencies) installed. We can run it and check the version of crun inside:

❯ podman run --rm podman:v3.2.0 crun --version
crun version 0.20.1
commit: 0d42f1109fd73548f44b01b3e84d04a279e99d2e
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL

Note that I take an older version here to better highlight the difference between host and container binary!

If we now change the execution to /proc/self/exe, which points to the host binary, the result shows a different version:

❯ podman run --rm podman:v3.2.0 /proc/self/exe --version
crun version 1.0
commit: 139dc6971e2f1d931af520188763e984d6cdfbf8
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL

This is actually the version string of the binary on the host system:

❯ /usr/bin/crun --version
crun version 1.0
commit: 139dc6971e2f1d931af520188763e984d6cdfbf8
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL

With this nice little trick we now have insight into the runtime version used – and as shown in the detailed write-up of the Azure security problem mentioned above, this insight can be crucial for identifying and using the right CVEs to start lateral movement.

Of course this requires us to be able to load containers of our choice, and to have the right libraries inside the container. This example is simplified because I knew about the target system and there was a container available with everything I needed. For a more realistic attack container image, check out whoc.

[Howto] Installing Cilium with Minikube on Fedora

Cilium is a networking plugin for Kubernetes based on eBPF. If you want to give it a try, Minikube is a good option to get started.

Background

I just started with Isovalent – and since I am very much a beginner regarding everything related to Kubernetes I decided to get some hands-on experience with the technology I am going to work with for the foreseeable future.

Isovalent’s offering is an Enterprise version of Cilium, which basically manages and secures connections between containers and adds observability to them. It all runs on eBPF and thus is pretty performant. eBPF can run sandboxed programs in Linux kernel space without the need to recompile the kernel – a tiny bit like a “kernel VM”. I always wanted to get my hands dirty with eBPF anyway, and Cilium is a very good way to approach it. But where to start? The answer: with a small Kubernetes setup based on Minikube, a tiny Kubernetes distribution for testing and fooling around that leaves your main system almost unchanged.

Preparing the environment

Minikube runs in a tightly confined environment so as not to disturb your other systems. This abstraction is done via containers or VMs realized through so-called “drivers”. Drivers are available for Docker, VMware, KVM, Podman and others. I decided to go with the KVM driver, so the virtualization bits need to be installed:

❯ sudo dnf install @virtualization
[...]
❯ sudo systemctl start libvirtd
❯ sudo systemctl enable libvirtd
❯ sudo usermod --append --groups libvirt ( whoami )

Note that the last of the above commands uses Nushell syntax and has to be slightly adjusted for Bash or Zsh.
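For Bash or Zsh, the same command uses classic command substitution:

❯ sudo usermod --append --groups libvirt $(whoami)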

Next we have to install Minikube itself:

❯ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 15.1M  100 15.1M    0     0  8304k      0  0:00:01  0:00:01 --:--:-- 8300k

❯ sudo rpm -Uvh minikube-latest.x86_64.rpm
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:minikube-1.22.0-0                ################################# [100%]

Also, to install and manage Cilium easily it makes sense to use the Cilium CLI. Unfortunately the CLI is currently not available as an RPM package for Fedora, so we have to install the binary and move it to /usr/local/bin:

❯ curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz{,.sha256sum}
[...]
❯ sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
cilium-linux-amd64.tar.gz: OK
❯ sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
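A quick sanity check that the CLI landed in the path – it should be able to report its own version now:

❯ cilium version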

Starting Minikube with CNI

We now need to start up our Kubernetes cluster, and it needs to be started in a way that lets us install and use Cilium in it. So we set the network plugin to CNI:

❯ minikube start --network-plugin=cni
😄  minikube v1.22.0 on Fedora 34
✨  Automatically selected the kvm2 driver. Other choices: podman, ssh
💾  Downloading driver docker-machine-driver-kvm2:
    > docker-machine-driver-kvm2....: 65 B / 65 B [----------] 100.00% ? p/s 0s
    > docker-machine-driver-kvm2: 11.47 MiB / 11.47 MiB  100.00% 12.50 MiB p/s 
❗  With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
💿  Downloading VM boot image ...
    > minikube-v1.22.0.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s
    > minikube-v1.22.0.iso: 242.95 MiB / 242.95 MiB [ 100.00% 20.05 MiB p/s 12s
👍  Starting control plane node minikube in cluster minikube
🔥  Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.21.2 on Docker 20.10.6 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Installing Cilium into Kubernetes

Since the Cilium CLI is already installed, it is fairly easy to install Cilium into the cluster itself. The installation goes into the current kubectl context, so make sure you are pointing at the right cluster.
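A quick way to double-check this, using the kubectl bundled with Minikube (a plain kubectl works just as well if it is installed):

❯ minikube kubectl -- config current-context
❯ minikube kubectl -- get nodes

If the minikube cluster shows up, fire up the installation: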

❯ cilium install
🔮 Auto-detected Kubernetes kind: minikube
✨ Running "minikube" validation checks
✅ Detected minikube version "1.22.0"
ℹ️  using Cilium version "v1.10.2"
🔮 Auto-detected cluster name: minikube
🔮 Auto-detected IPAM mode: cluster-pool
🔮 Auto-detected datapath mode: tunnel
🔮 Custom datapath mode: tunnel
🔑 Generating CA...
2021/07/13 14:09:33 [INFO] generate received request
2021/07/13 14:09:33 [INFO] received CSR
2021/07/13 14:09:33 [INFO] generating key: ecdsa-256
2021/07/13 14:09:33 [INFO] encoded CSR
2021/07/13 14:09:33 [INFO] signed certificate with serial number 122640105911298337607907666763746132599853501126
🔑 Generating certificates for Hubble...
2021/07/13 14:09:33 [INFO] generate received request
2021/07/13 14:09:33 [INFO] received CSR
2021/07/13 14:09:33 [INFO] generating key: ecdsa-256
2021/07/13 14:09:33 [INFO] encoded CSR
2021/07/13 14:09:33 [INFO] signed certificate with serial number 459020519400202498147292503280351877404424824247
🚀 Creating Service accounts...
🚀 Creating Cluster roles...
🚀 Creating ConfigMap...
🚀 Creating Agent DaemonSet...
🚀 Creating Operator Deployment...
⌛ Waiting for Cilium to be installed...
⌛ Waiting for Cilium to become ready before restarting unmanaged pods...
♻️  Restarting unmanaged pods...
♻️  Restarted unmanaged pod kube-system/coredns-558bd4d5db-8s4f6
♻️  Restarted unmanaged pod kubernetes-dashboard/dashboard-metrics-scraper-7976b667d4-ctq4p
♻️  Restarted unmanaged pod kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-5wkbx
✅ Cilium was successfully installed! Run 'cilium status' to view installation health

The installation went through flawlessly. But does it really work? As mentioned in the last line of the above listing, we can check the status of Cilium easily:

❯ cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble:         disabled
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

DaemonSet         cilium             Desired: 1, Ready: 1/1, Available: 1/1
Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Containers:       cilium             Running: 1
                  cilium-operator    Running: 1
Image versions    cilium-operator    quay.io/cilium/operator-generic:v1.10.2: 1
                  cilium             quay.io/cilium/cilium:v1.10.2: 1

We can even get one step further and check the connectivity of the cluster – after all, Cilium is all about proper networking:

❯ cilium connectivity test
ℹ️  Single-node environment detected, enabling single-node connectivity test
ℹ️  Monitor aggregation detected, will skip some flow validation steps
[...]
..
✅ All 11 tests (76 actions) successful, 0 tests skipped, 0 scenarios skipped.

As you can see, Cilium creates a set of pods and a service in a dedicated namespace and then runs tests against them.

Interacting with the Cilium agent

Let’s have a first look at our installed Cilium environment by running a few commands on the local Cilium agent. First we have to figure out the name of the actual Cilium pod:

❯ minikube kubectl -- -n kube-system get pods -l k8s-app=cilium
NAME           READY   STATUS    RESTARTS   AGE
cilium-8hx2v   1/1     Running   0          35m

With the name of the pod we can now reach into the pod and execute the Cilium command right inside, for example querying the list of endpoints:

❯ minikube kubectl -- -n kube-system exec cilium-8hx2v -- cilium endpoint list
Defaulted container "cilium-agent" out of: cilium-agent, ebpf-mount (init), clean-cilium-state (init)
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                           IPv6   IPv4         STATUS   
           ENFORCEMENT        ENFORCEMENT                                                                                                                            
208        Disabled           Disabled          7182       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                   10.0.0.70    ready   
                                                           k8s:io.cilium.k8s.policy.cluster=minikube                                                                         
                                                           k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                   
                                                           k8s:io.kubernetes.pod.namespace=kube-system                                                                       
                                                           k8s:k8s-app=kube-dns                                                                                              
452        Disabled           Disabled          4506       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-test                   10.0.0.254   ready   
                                                           k8s:io.cilium.k8s.policy.cluster=minikube                                                                         
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default                                                                   
                                                           k8s:io.kubernetes.pod.namespace=cilium-test                                                                       
                                                           k8s:kind=client                                                                                                   
                                                           k8s:name=client2                                                                                                  
                                                           k8s:other=client                                             
[...]  

This list is long, detailed and only really makes sense on a wide monitor. But it already tells us a lot about the current enforcement of ingress and egress policies (here nothing is enforced yet).

But there is more: since Cilium is eBPF based, we can go one layer deeper and, for example, look at the policy-related eBPF maps:

❯ minikube kubectl -- -n kube-system exec cilium-8hx2v -- cilium bpf policy get --all
Defaulted container "cilium-agent" out of: cilium-agent, ebpf-mount (init), clean-cilium-state (init)
/sys/fs/bpf/tc/globals/cilium_policy_00208:

POLICY   DIRECTION   LABELS (source:key[=value])   PORT/PROTO   PROXY PORT   BYTES     PACKETS   
Allow    Ingress     reserved:unknown              ANY          NONE         16959     183       
Allow    Ingress     reserved:host                 ANY          NONE         1098509   4452      
Allow    Egress      reserved:unknown              ANY          NONE         393706    4204  
[...]

Note that the number in the policy map path corresponds to the endpoint ID in the Cilium endpoint list above.

We now have a running Cilium setup which can be used to run tests and examples!

Next: write and enforce policies, add observability

Doing a policy enforcement test goes beyond the scope of this blog post – but it is certainly worth a look in the future. Also, with all the data already shown above, it makes sense to do a deep dive into the topic of observability at some point.

If you want to check out policy enforcement on your own right away, the Cilium documentation has a beautiful example prepared which walks through some policy challenges and how they can be solved with Cilium.

The same is true for observability: if you wonder how deep the rabbit hole really goes, there is Hubble, which provides serious observability into the Kubernetes network, services and security, comes with a UI, and can be installed quickly since it is tightly integrated with Cilium.
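If you cannot wait, recent versions of the Cilium CLI can switch Hubble on directly – a sketch, not something this post covers further:

❯ cilium hubble enable --ui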

And if you have stories to share around eBPF, Cilium and similar topics I am finally getting an idea of what you are talking about. 😉

Image by stux from Pixabay

Hello Isovalent!

As mentioned in my last post I left Red Hat – and today is my first day at Isovalent!

In my new position I will be a technical marketing manager and will thus be working on technical content, messaging and enablement. With Cilium Enterprise, Isovalent offers an eBPF-based solution for Kubernetes networking, observability, and security – and since I am rather new to Kubernetes, I expect a steep learning curve.

I am looking forward to the challenges ahead of me, and will drop a blog post about it once in a while =)

[Howto] My own mail & groupware server, part 4: Nextcloud

Running your own mail and groupware server can be challenging. I recently had to re-create my own setup from the ground up and describe the steps in a blog post series. This blog post is #4 of the series and covers the integration of Nextcloud for storage.

Let’s add Nextcloud to the existing mail server. This part will focus on setting it up and configuring it in basic terms. Groupware and webmail will come in a later post! If you are new to this series, don’t forget to read part 1: what, why, how?, and all about the mail server setup itself in the second post, part 2: initial mail server setup. We also added a Git server in part 3: Git server.

Nextcloud as “cloud” server

Today’s online experience does not only cover mail and other groupware functions, but also the interaction with files in some kind of online storage. Arguably, for many this is by now often more important than e-mail.

Thus it makes sense to add a service to the mail server that provides a “cloud” experience around file management. The result lacks cloud functionality in terms of high availability, but provides a rich UI, accessibility from all kinds of devices, integration with various services, and the option to extend the functionality further.

Nextcloud is probably the best-known self-hosted cloud solution, and is also used at large scale by universities, governments and companies. I also picked it because I had past experience with it and it offers some integrations and add-ons I really like and depend on.

Alternatives worth checking out are owncloud, Seafile and Pydio.

Integration into mailu setup

Nextcloud can be added to an existing mailu setup in three steps:

  1. Let Nginx know about the service
  2. Add a DB and set it up
  3. Add Nextcloud

The proxy bit is easily done by creating the file /data/mailu/overrides/nginx/nc.conf with the following content:

location /nc/ {
  add_header Front-End-Https on;
  proxy_buffering off;
  fastcgi_request_buffering off;
  proxy_pass http://nc/;
  client_max_body_size 0;
}

We also need a DB. Add this to docker-compose.yml:

  # Nextcloud

  ncpostgresql:
    image: postgres:12
    restart: always
    environment:
      POSTGRES_PASSWORD: ...
    volumes:
      - /data/ncpostgresql:/var/lib/postgresql/data

Make sure to add a proper password here! Next, we have to bring the environment down and up again to add the DB container, and then access the DB and create the right user and database with the corresponding privileges (a consolidated sketch of the whole session follows the list):

  • Get the DB up & running: docker compose down and docker compose up
  • access DB container: sudo docker exec -it mailu_ncpostgresql_1 /bin/bash
  • become super user: su - postgres
  • add user nextcloud, add proper password here: create user nextcloud with password '...';
  • add nextcloud database: CREATE DATABASE nextcloud TEMPLATE template0 ENCODING 'UNICODE';
  • change database owner to user nextcloud: ALTER DATABASE nextcloud OWNER TO nextcloud;
  • grant all privileges to nextcloud: GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;
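Put together, the database setup is just a handful of commands. A minimal sketch, assuming the default container name of this Compose setup and with the password placeholder still to be filled in:

sudo docker exec -it mailu_ncpostgresql_1 /bin/bash
# inside the container: switch to the postgres super user and open a SQL shell
su - postgres
psql
-- inside psql: create the user and database and hand over the privileges
CREATE USER nextcloud WITH PASSWORD '...';
CREATE DATABASE nextcloud TEMPLATE template0 ENCODING 'UNICODE';
ALTER DATABASE nextcloud OWNER TO nextcloud;
GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;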

Now we can add the Nextcloud container itself. We will add a few environment variables to properly configure the DB access and the initial admin account. Add the following listing to the Docker Compose file:

  nc:
    image: nextcloud:apache
    restart: always
    environment:
      POSTGRES_HOST: ncpostgresql
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: ....
      POSTGRES_DB: nextcloud
      NEXTCLOUD_ADMIN_USER: admin
      NEXTCLOUD_ADMIN_PASSWORD: ...
      NEXTCLOUD_TRUSTED_DOMAINS: front
      REDIS_HOST: redis
    depends_on:
      - resolver
      - ncpostgresql
    dns:
      - 192.168.203.254
    volumes:
      - /data/nc/main:/var/www/html
      - /data/nc/custom_apps:/var/www/html/custom_apps
      - /data/nc/data:/var/www/html/data
      - /data/nc/config:/var/www/html/config
      - /data/nc/zzz_upload_php.ini:/usr/local/etc/php/conf.d/zzz_upload_php.ini

Nextcloud configuration

Before we launch Nextcloud, we need to configure it properly. As shown in the last line of the previous listing, a specific file is needed to define the PHP file upload limits. This is only needed in corner cases (browsers split up files during upload automatically these days), but it can help sometimes. Create the file /data/nc/zzz_upload_php.ini:

upload_max_filesize=2G
post_max_size=2G
memory_limit=4G

Next, we need to create the configuration for the actual Nextcloud instance. Stop the Docker Compose setup, and start it up again. That generates the basic config files on disk, and you can open /data/nc/config/config.php and adjust the following variables (leave the others intact):

  'overwritewebroot' => '/nc',
  'overwritehost' => 'nc.bayz.de',
  'overwriteprotocol' => 'https',
  'trusted_domains' => 
  array (
    0 => 'lisa.bayz.de',
    1 => 'front',
    2 => 'mailu_front_1.mailu_default',
    3 => 'nc.bayz.de',
  ),

After another Docker Compose down and up, the instance should be all good! If the admin password needs to be reset, access the container via sudo docker exec -it mailu_nc_1 /bin/bash and reset the password with:

su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ user:resetpassword admin"

Next we can connect Nextcloud to the mailu IMAP server to use it for authentication. First we install the app “External user authentication” from the developers section. Then we add the following code to the above-mentioned config.php:

  'user_backends' => array(
    array(
        'class' => 'OC_User_IMAP',
        'arguments' => array(
            'imap', 143, 'null', 'bayz.de', true, false
        ),
    ),
),

Restart the setup, and logging in as a mail user should be possible.

Sync existing files

In my case the instance replaced a previous one. As part of the migration, a lot of “old” data had to be copied. The problem: copying the data via WebDAV, for example, is time consuming, performs poorly and can be troublesome when the sync has to be resumed after an interruption.

It is easier to sync directly from disk to disk with established tools like rsync. However, Nextcloud does not know that new files arrived that way and will not list them. The steps to make Nextcloud aware of them are:

  1. Log in as each user for which data should be synced so that target directories exist underneath the files/ directory
  2. Sync the data with rsync or another tool of choice (a minimal sketch follows below this list)
  3. Correct permissions: chown -R ...:... files/
  4. Access container: sudo docker exec -it mailu_nc_1 /bin/bash
  5. Trigger file scan in Nextcloud: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ files:scan --all"
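A minimal sketch of steps 2 and 3 – the source path /backup/old-nextcloud and the user alice are just example values, and USER:GROUP has to be replaced with whatever already owns the other directories under /data/nc/data:

rsync -av /backup/old-nextcloud/alice/files/ /data/nc/data/alice/files/
chown -R USER:GROUP /data/nc/data/alice/files/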

Recurrent updater

For add-ons like the newsreader, Nextcloud needs to perform tasks on a regular basis. Surprisingly enough, Nextcloud cannot easily do this on its own. The usual answer is a cron job – and the best way to set one up these days is a systemd timer.

So first we add the service to be triggered regularly. On the host itself (not inside the container) create the file /etc/systemd/system/nextcloudcron.service:

[Unit]
Description=Nextcloud cron.php job

[Service]
ExecStart=/usr/bin/docker exec mailu_nc_1 su -s /bin/sh www-data -c "/usr/local/bin/php -f /var/www/html/cron.php"

Then, create the timer via the file /etc/systemd/system/nextcloudcron.timer:

[Unit]
Description=Run Nextcloud cron.php every 5 minutes

[Timer]
OnBootSec=5min
OnUnitActiveSec=5min
Unit=nextcloudcron.service

[Install]
WantedBy=timers.target

Enable the timer with systemctl enable --now nextcloudcron.timer and it is done – far more flexible and maintainable than an old-style cron job. If you are new to timers, check their execution with sudo systemctl list-timers.

DB performance

A lot of Nextcloud’s performance depends on the performance of the DB. And DBs are all about indices. There are a few commands which can help with that – and which are recommended by the self-check inside Nextcloud anyway:

  • access the container: sudo docker exec -it mailu_nc_1 /bin/bash
  • add missing indices: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ db:add-missing-indices"
  • convert the filecache to bigint: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ db:convert-filecache-bigint"
  • add missing columns: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ db:add-missing-columns"

Preview generator – fast image loading

The major clients used to access Nextcloud will probably be the Android client and a web browser. However, scrolling through galleries full of images is a pain: it takes ages until all the previews are loaded. Sometimes even a slide show is not possible because it all just takes too long.

This is because the images are not downloaded in full size (that would take too long); instead, previews of the size required at that moment are generated on the fly (which still takes long, just not as long).

To make this all faster, one idea is to pre-generate the previews! To do so, we install the app “Preview Generator” in our instance. By default, however, it generates a few too many preview files, many in sizes which are hardly ever used. So we adjust the sizes to be generated:

$ sudo docker exec -it mailu_nc_1 /bin/bash
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set previewgenerator squareSizes --value='256 1024'"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set previewgenerator widthSizes  --value='384 2048'"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set previewgenerator heightSizes --value='256 2048'"

Also we want to limit the preview sizes to not waste too much storage:

$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:system:set preview_max_x --value 2048"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:system:set preview_max_y --value 2048"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:system:set jpeg_quality --value 80"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set preview jpeg_quality --value='80'"

Last but not least we run the preview generator:

su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ preview:generate-all -vvv"

Note that this can easily take hours, so I recommend launching it inside a tmux session.

Of course new files will keep arriving, so new previews should be generated once in a while. Use this command:

su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ preview:pre-generate"

This can also be automated with a systemd service and timer similar to the ones above. Create the file /etc/systemd/system/nextcloudpreview.service:

[Unit]
Description=Nextcloud image preview generator job

[Service]
ExecStart=/usr/bin/docker exec mailu_nc_1 su -s /bin/sh www-data -c "/usr/local/bin/php -f /var/www/html/occ preview:pre-generate"

And add a timer similar to the above one triggering the service every 15 minutes. Create the file /etc/systemd/system/nextcloudpreview.timer:

[Unit]
Description=Run Nextcloud preview generator every 15 minutes

[Timer]
OnBootSec=15min
OnUnitActiveSec=15min
Unit=nextcloudpreview.service

[Install]
WantedBy=timers.target

Launch the timer: sudo systemctl enable --now nextcloudpreview.timer

One final word of caution: previews can take up a lot of space. Like A LOT. Expect maybe an additional 20% of storage needed for your images.
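To keep an eye on the space used, checking the size of the preview folder once in a while helps – a small sketch, assuming the default layout where previews live in an appdata directory whose name contains an instance-specific ID:

du -sh /data/nc/data/appdata_*/preview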

What’s next?

With Nextcloud up and running and all old data synced I was feeling good: all basic infrastructure services were running again. People could access all their stuff with only slight adjustments to their URLs.

The missing piece now was webmail and general groupware functionality. More about that in the next post of this series.

Image by RÜŞTÜ BOZKUŞ from Pixabay