[Howto] My own mail & groupware server, part 4: Nextcloud

Running your own mail and groupware server can be challenging. I recently had to re-create my own setup from the ground up and describe the steps in a blog post series. This blog post is #4 of the series and covers the integration of Nextcloud for file storage.

Let’s add Nextcloud to the existing mail server. This part will focus on setting it up and configuring it in basic terms. Groupware and webmail will come in a later post! If you are new to this series, don’t forget to read part 1: what, why, how?, and all about the mail server setup itself in the second post, part 2: initial mail server setup. We also added a Git server in part 3: Git server.

Nextcloud as “cloud” server

Today’s online experience does not only cover mail and other groupware functions, but also the interaction with files in some kind of online storage. Arguably, for many people this is nowadays even more important than e-mail.

Thus it makes sense to add a service to the mail server that provides a “cloud” experience around file management. The result lacks cloud functionality in terms of high availability, but provides a rich UI, accessibility from all kinds of devices and integration with various services. It also offers the option to extend the functionality further.

Nextcloud is probably the best known self-hosted cloud solution, and is also used at large scale by universities, governments and companies. I picked it because I had past experience with it and because it offers some integrations and add-ons I really like and depend on.

Alternatives worth checking out are ownCloud, Seafile and Pydio.

Integration into the Mailu setup

Nextcloud can be added to an existing Mailu setup in three steps:

  1. Let Nginx know about the service
  2. Add a DB and set it up
  3. Add Nextcloud

The proxy bit is easily done by creating the file /data/mailu/overrides/nginx/nc.conf with the following content:

location /nc/ {
  add_header Front-End-Https on;
  proxy_buffering off;
  fastcgi_request_buffering off;
  proxy_pass http://nc/;
  client_max_body_size 0;
}
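To make the front container pick up the new override, restart it; the configuration can be syntax-checked first. A small sketch, assuming the container name mailu_front_1 used throughout this series:

sudo docker exec mailu_front_1 nginx -t   # syntax check of the full configuration, including overrides
sudo docker restart mailu_front_1         # restart the proxy so the new location is active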

We also need a DB. Add this to docker-compose.yml:

  # Nextcloud

  ncpostgresql:
    image: postgres:12
    restart: always
    environment:
      POSTGRES_PASSWORD: ...
    volumes:
      - /data/ncpostgresql:/var/lib/postgresql/data

Make sure to add a proper password here! Next, we have to bring the environment down and up again to add the DB container, and then access the DB and create the right user and database with corresponding privileges (a consolidated session is sketched after the list):

  • Get the DB up & running: docker compose down and docker compose up
  • access DB container: sudo docker exec -it mailu_ncpostgresql_1 /bin/bash
  • become super user and open a SQL shell: su - postgres, then psql
  • add user nextcloud (set a proper password here): CREATE USER nextcloud WITH PASSWORD '...';
  • add nextcloud database: CREATE DATABASE nextcloud TEMPLATE template0 ENCODING 'UNICODE';
  • change database owner to user nextcloud: ALTER DATABASE nextcloud OWNER TO nextcloud;
  • grant all privileges to nextcloud: GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;
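Put together, the whole session looks roughly like this (the password stays elided on purpose):

sudo docker exec -it mailu_ncpostgresql_1 /bin/bash   # enter the DB container
su - postgres                                         # become the postgres super user
psql                                                  # open a SQL shell
CREATE USER nextcloud WITH PASSWORD '...';
CREATE DATABASE nextcloud TEMPLATE template0 ENCODING 'UNICODE';
ALTER DATABASE nextcloud OWNER TO nextcloud;
GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;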

Now we can add the Nextcloud container itself. We will add a few environment variables to properly configure the DB access and the initial admin account. Add the following listing to the Docker Compose file:

  nc:
    image: nextcloud:apache
    restart: always
    environment:
      POSTGRES_HOST: ncpostgresql
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: ....
      POSTGRES_DB: nextcloud
      NEXTCLOUD_ADMIN_USER: admin
      NEXTCLOUD_ADMIN_PASSWORD: ...
      NEXTCLOUD_TRUSTED_DOMAINS: front
      REDIS_HOST: redis
    depends_on:
      - resolver
      - ncpostgresql
    dns:
      - 192.168.203.254
    volumes:
      - /data/nc/main:/var/www/html
      - /data/nc/custom_apps:/var/www/html/custom_apps
      - /data/nc/data:/var/www/html/data
      - /data/nc/config:/var/www/html/config
      - /data/nc/zzz_upload_php.ini:/usr/local/etc/php/conf.d/zzz_upload_php.ini

Nextcloud configuration

Before we launch Nextcloud, we need to configure it properly. As shown in the last line of the previous example, a specific file is needed to define the values for PHP file upload sizes. This is only needed in corner cases (browsers split up large files during upload automatically these days), but can help sometimes. Create the file /data/nc/zzz_upload_php.ini:

upload_max_filesize=2G
post_max_size=2G
memory_limit=4G
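Once the container is up, whether the override got picked up can be verified from inside it – a quick sanity check, assuming the container name mailu_nc_1 used throughout this post:

sudo docker exec mailu_nc_1 php -i | grep -E 'upload_max_filesize|post_max_size|memory_limit'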

Next, we need to create the configuration for the actual Nextcloud instance. Stop the Docker Compose setup, and start it up again. That generates the basic config files on disk, and you can access /data/nc/config/config.php and adjust the following variables (leave the others intact):

  'overwritewebroot' => '/nc',
  'overwritehost' => 'nc.bayz.de',
  'overwriteprotocol' => 'https',
  'trusted_domains' => 
  array (
    0 => 'lisa.bayz.de',
    1 => 'front',
    2 => 'mailu_front_1.mailu_default',
    3 => 'nc.bayz.de',
  ),

After another Docker Compose down and up, the instance should be all good! If the admin password needs to be reset, access the container via sudo docker exec -it mailu_nc_1 /bin/bash and reset the password with: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ user:resetpassword admin"

Next we can connect Nextcloud to the Mailu IMAP server and use it for authentication. First we install the app “External user authentication” from the developers section. Then we add the following code to the above-mentioned config.php:

  'user_backends' => array(
    array(
      'class' => 'OC_User_IMAP',
      'arguments' => array(
        'imap', 143, 'null', 'bayz.de', true, false
      ),
    ),
  ),
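For orientation – this is based on the app’s documentation, so double-check against the version you install: the arguments are the IMAP host, the port, the SSL mode, the mail domain whose users are allowed to log in, whether the domain part is stripped from the user name, and whether users are grouped by domain.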

Restart the setup, and logging in as a mail user should be possible.

Sync existing files

In my case the instance replaced a previous one. As part of the migration, a lot of “old” data had to be copied. The problem: copying the data via WebDAV, for example, is time consuming, performs poorly and is troublesome when the sync needs to be picked up again after an interruption.

It is easier to sync directly from disk to disk with established tools like rsync. However, Nextcloud does not know that new files arrived that way and does not list them. The steps to make Nextcloud aware of them are:

  1. Log in as each user for which data should be synced so that target directories exist underneath the files/ directory
  2. Sync the data with rsync or another tool of choice (see the sketch after this list)
  3. Correct permissions: chown -R ...:... files/
  4. Access container: sudo docker exec -it mailu_nc_1 /bin/bash
  5. Trigger file scan in Nextcloud: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ files:scan --all"
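As an illustration of step 2, an rsync invocation for a single user could look like this – both paths are purely hypothetical and depend on where the old data lives:

rsync -aHv --info=progress2 /backup/old-nextcloud/data/alice/files/ /data/nc/data/alice/files/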

Recurring updater

For add-ons like the news reader, Nextcloud needs to perform tasks on a regular basis. Surprisingly enough, Nextcloud cannot easily do this on its own. The best way is to trigger these jobs externally via a cron-like mechanism – and the best way to do that is a systemd timer.

So first we add the service to be triggered regularly. On the host itself (not inside the container) create the file /etc/systemd/system/nextcloudcron.service:

[Unit]
Description=Nextcloud cron.php job

[Service]
ExecStart=/usr/bin/docker exec mailu_nc_1 su -s /bin/sh www-data -c "/usr/local/bin/php -f /var/www/html/cron.php"

Then, create the timer via the file /etc/systemd/system/nextcloudcron.timer:

[Unit]
Description=Run Nextcloud cron.php every 5 minutes

[Timer]
OnBootSec=5min
OnUnitActiveSec=5min
Unit=nextcloudcron.service

[Install]
WantedBy=timers.target

Enable the timer: systemctl enable --now nextcloudcron.timer. And it is done! This is way more flexible and maintainable than old-style cron jobs. If you are new to timers, check their execution with sudo systemctl list-timers.
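All steps in one place – the daemon-reload makes sure systemd has picked up the freshly created unit files:

sudo systemctl daemon-reload
sudo systemctl enable --now nextcloudcron.timer
sudo systemctl list-timers 'nextcloud*'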

DB performance

A lot of Nextcloud’s performance depends on the performance of the DB. And DBs are all about indices. There are a few commands which can help with that – and which are recommended by the self check inside Nextcloud anyway:

  • access the container: sudo docker exec -it mailu_nc_1 /bin/bash
  • add missing indices: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ db:add-missing-indices"
  • convert the file cache to bigint: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ db:convert-filecache-bigint"
  • add missing columns: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ db:add-missing-columns"

Preview generator – fast image loading

The major clients used to access Nextcloud will probably be the Android client and a web browser. However, scrolling through galleries full of images is a pain: it takes ages until all the previews are loaded. Sometimes even a slide show is not possible because it all just takes too long.

This is because the images are not downloaded at full size (that would take too long); instead, previews of the size required at that moment are generated on the fly (which still takes long, just not as long).

To make this all faster, one idea is to pre-generate the previews! To do so, we install the app “Preview Generator” in our instance. However, by default this generates a few too many preview files, many in sizes which are hardly ever used. So we alter the sizes to be generated:

$ sudo docker exec -it mailu_nc_1 /bin/bash
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set previewgenerator squareSizes --value='256 1024'"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set previewgenerator widthSizes  --value='384 2048'"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set previewgenerator heightSizes --value='256 2048'"

Also we want to limit the preview sizes to not waste too much storage:

$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:system:set preview_max_x --value 2048"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:system:set preview_max_y --value 2048"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:system:set jpeg_quality --value 80"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set preview jpeg_quality --value='80'"

Last but not least we run the preview generator:

su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ preview:generate-all -vvv"

Note that this can easily take hours, and thus I recommend launching it from inside a tmux session.
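For example (the session name is arbitrary):

tmux new-session -s nc-previews
sudo docker exec -it mailu_nc_1 su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ preview:generate-all -vvv"
# detach with Ctrl-b d, re-attach later with: tmux attach -t nc-previews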

Of course new files will reach the system, so once in a while new previews should be generated. Use this command:

su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ preview:pre-generate"

This can also be automated with a systemd service and timer similar to the pair above. Create the file /etc/systemd/system/nextcloudpreview.service:

[Unit]
Description=Nextcloud image preview generator job

[Service]
ExecStart=/usr/bin/docker exec mailu_nc_1 su -s /bin/sh www-data -c "/usr/local/bin/php -f /var/www/html/occ preview:pre-generate"

And add a timer similar to the above one triggering the service every 15 minutes. Create the file /etc/systemd/system/nextcloudpreview.timer:

[Unit]
Description=Run Nextcloud preview generator every 15 minutes

[Timer]
OnBootSec=15min
OnUnitActiveSec=15min
Unit=nextcloudpreview.service

[Install]
WantedBy=timers.target

Launch the timer: sudo systemctl enable --now nextcloudpreview.timer

One final word of caution: previews can take up a lot of space. Like A LOT. Expect maybe an additional 20% of storage needed for your images.

What’s next?

With Nextcloud up and running and all old data synced I was feeling good: all basic infrastructure services were running again. People could access all their stuff with only slight adjustments to their URLs.

The missing piece now was webmail and general groupware functionality. More about that in the next post of this series.

Image by RÜŞTÜ BOZKUŞ from Pixabay

[Howto] My own mail & groupware server, part 3: Git server

Running your own mail and groupware server can be challenging. I recently had to re-create my own setup from the ground up and describe the steps in a blog post series. This blog post is #3 of the series and covers the integration of an additional Git server.

This post is all about adding a Git server to the existing mail server. Read about the background of this setup in the first post, part 1: what, why, how?, and all about the mail server setup itself in the second post, part 2: initial mail server setup.

Gitea as Git server

I heavily use Git all the time. And enough of the information in my repositories is sensitive enough that I feel more comfortable hosting it only on infrastructure I control. So for the new setup I also wanted to have a Git server again, as fast as possible.

In the past I used GitLab as Git server, but that is very resource intensive and just overkill for my use cases. Thus years ago I already replaced GitLab with Gitea – a lightweight, painless self-hosted Git service. It is quickly set up, simple to use, nevertheless offers all relevant Git features, and simply does its job. Gitea itself is a fork of Gogs, which was not really community friendly. These days Gitea is a way more active and prospering project than Gogs.

Background: Nginx as reverse proxy in Mailu

So how do you “attach” Gitea to a running Mailu infrastructure? Mailu itself comes with a set of defined services, and that’s it. There is no plugin or module system to extend it. However, the project does offer special “overrides” directories where additional configuration can be placed – and this applies to Nginx as well! That way, a service can be placed right next to the other Mailu services, behind the same reverse proxy, and thus benefit from the already existing setup, for example the certificate regeneration. There is also no conflict with the already used ports 80 and 443.

Overrides can be placed in /data/mailu/overrides/nginx. They are basically just snippets of Nginx configuration. Note though that they are included within the main server block! That means they can only work on locations, not on server names. This is somewhat unfortunate since I used to address all my old services via sub-domains: git.bayz.de, nc.bayz.de, etc. With the new setup and the limitation to locations this is not an option anymore; everything has to work under different paths: bayz.de/git, bayz.de/nc, etc.

That also meant that I had to reconfigure clients, and ask others to reconfigure theirs when using my infrastructure. I would be happy to get back to pure sub-domain based addressing, but I don’t see how this could be possible without changing the actual Nginx image.

Adding Gitea entry to Nginx Override

Having said that, to add Gitea to Nginx, create this file: /data/mailu/overrides/nginx/git.conf

location /gitea/ {
  proxy_pass http://git:3000/;
}

And that’s it already. More configuration is not needed since Mailu already configures Nginx with reasonable defaults.

This also gives a first hint that it is pretty easy to add further services – I will cover more examples in this ongoing blog post series.

Additional entry to docker compose

To start Gitea itself, add it to Mailu’s docker-compose.yml:

  # gitea

  git:
    image: gitea/gitea:latest
    restart: always
    env_file: mailu.env
    depends_on:
      - resolver
    dns:
      - 192.168.203.254
    volumes:
      - /data/gitea:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "22:22"

Note the shared volumes: that way the Gitea configuration file will be written to your storage at /data/gitea/gitea/conf/app.ini.

Also, we want to set the UID and GID for Gitea via environment variables. Set this in your mailu.env:

###################################
# Gitea settings
###################################
USER_UID=1000
USER_GID=1000

Setting up Gitea basic configuration

Now let’s configure Gitea. It is possible to pre-create a full Gitea configuration and start the container with it. However, documentation on that is sparse, and in my tests there were always problems.

So in my case, I just started and stopped the container (docker compose down and up) a few times, edited some configuration, registered an admin user once via the GUI, and was done. While this worked, I can only recommend closely tracking the logs during that time to ensure that no one else accesses the container and does mischief!

So, the first step is to start the new Docker Compose service. This will write a first vanilla configuration of Gitea. Afterwards, add the correct domain information to the [server] section of the Gitea configuration file app.ini.
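The exact values depend on your own domains; given the login URL used below, the section could look roughly like this (host name and path are assumptions, not taken from a generated config):

[server]
DOMAIN   = git.bayz.de
ROOT_URL = https://git.bayz.de/gitea/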

Note that the ROOT_URL value ensures the required rewrite of all requests and links so that the setup works flawlessly with the above-mentioned Nginx configuration!

Next, bring the service down and up again (docker compose down and up), log in to your new service (here: git.bayz.de/gitea) and register a new admin user. Note that here you also have to pick the database option. For small systems with only very few concurrent connections SQLite is fine. If you will serve more users or automated access here, pick PostgreSQL. However, to make that work you need to bring up another PostgreSQL container. One of the next posts will introduce one, so you might want to re-think your setup then.

Directly after this admin registration is done, set the value of DISABLE_REGISTRATION in the Gitea configuration file app.ini to true (it lives in the [service] section). Stop and start the service again, and no new (external) users can register anymore.

But how do we register new users now?

Central services authentication in Mailu: mail

One of the major hassles with my last setup was authentication. I started with a full-blown OpenLDAP years ago, which was already a pain to manage and maintain. Moving over to FreeIPA meant that I had better interfaces and even a UI, but it was still a complex, tricky service. Also, while almost every service out there can be connected to LDAP, that is not always easy or a pleasant experience. And given that I only have a few users on my system, it is hardly worth the trouble.

Mailu offers an interesting approach here: users are stored in a DB, and external services are asked to authenticate against the email services (IMAP, SMTP). I was surprised to learn that indeed many services out there support this or have plugins for that.

Gitea can authenticate against SMTP sources, and I decided to go that route:

  • In Gitea, access “Site Administration”
  • Click on “Authentication Sources”
  • Pick the blue button “Add Authentication Source”
  • As “Authentication Type”, choose SMTP
  • Give it a name
  • As “SMTP Authentication Type”, enter LOGIN
  • As “SMTP Host”, provide the external host name (more on that further down below): lisa.bayz.de
  • Pick the right “SMTP Port”: 587
  • Limit the “Allowed Domains” if you want, in my case to bayz.de
  • Of course, tick the check box “Enable TLS Encryption” and also the check box “This Authentication Source is Activated”

After this is done, log out of Gitea and log in with an existing mail user. It should just work! And all that without any trace of LDAP! Awesome, right?

A word about the SMTP host in the above configuration: do not try to enter the SMTP Docker Compose service directly here. This will not work: port 587 is managed by the Nginx proxy, which acts as a mail proxy here and redirects mail auth requests to the admin portal. The internal SMTP container does not even listen on port 587.

What’s next?

With my private Git server back to life I felt slightly better again. Now that I had the infrastructure at hand, I needed to tackle the cloud/file sharing part of it all to also lay the foundations for the groupware pieces: Nextcloud.

More about that in the next post.

Featured image by Myriam Zilles from Pixabay

[Howto] Launch traefik as a docker container in a secure way

Traefik is a great reverse proxy solution, and a perfect tool to direct traffic in container environments. However, to do that, it needs access to docker – and that is very dangerous and must be tightly secured!

The problem: access to the docker socket

Containers offer countless opportunities to improve the deployment and management of services. However, having multiple containers on one system, often re-deploying them on the fly, requires a dynamic way of routing traffic to them. Additionally, there might be reasons to have a front end reverse proxy to sort the traffic properly anyway.

In comes traefik – “the cloud native edge router”. Among many supported backends it knows how to listen to docker and create dynamic routes on the fly when new containers come up.

To do so, traefik needs access to the docker socket. Many people decide to just provide the socket as a volume to traefik. This usually does not work because SELinux prevents it – for a reason. The apparent workaround for many is to run traefik in a privileged container. But that is a really bad idea:

Docker currently does not have any Authorization controls. If you can talk to the docker socket or if docker is listening on a network port and you can talk to it, you are allowed to execute all docker commands. […]
At which point you, or any user that has these permissions, have total control on your system.

http://www.projectatomic.io/blog/2014/09/granting-rights-to-users-to-use-docker-in-fedora/

The solution: a docker socket proxy

But there are ways to securely provide traefik the access it needs – without handing out too many permissions. One way is to provide limited access to the docker socket via TCP, through another container which cannot be reached from the outside that easily.

Meet Tecnativa’s docker-socket-proxy:

What?
This is a security-enhanced proxy for the Docker Socket.
Why?
Giving access to your Docker socket could mean giving root access to your host, or even to your whole swarm, but some services require hooking into that socket to react to events, etc. Using this proxy lets you block anything you consider those services should not do.

https://github.com/Tecnativa/docker-socket-proxy/blob/master/README.md

It is a container which connects to the docker socket and exposes the API in a secured and configurable way via TCP. At container startup it is configured with booleans defining which API sections access is granted to.

So basically you set up a docker proxy to support your proxy for docker containers. Well…

How to use it

The docker socket proxy is a container itself. Thus it needs to be launched as a privileged container with access to the docker socket. Also, it must not publish any ports to the outside. Instead it should run on a dedicated docker network shared with the traefik container. The Ansible code to launch the container that way is for example:

- name: ensure privileged docker socket container
  docker_container:
    name: dockersocket4traefik
    image: tecnativa/docker-socket-proxy
    log_driver: journald
    env:
      CONTAINERS: "1"
    state: started
    privileged: yes
    exposed_ports:
      - 2375
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:z"
    networks:
      - name: dockersocket4traefik_nw

Note the env entry right in the middle: that is where the exported permissions are configured. CONTAINERS: "1" provides access to container relevant information. There are also SERVICES: "1" and SWARM: "1" to manage access to docker services and swarm.

Traefik needs to have access to the same network. Also, the traefik configuration needs to point to the docker socket proxy container via TCP:

[docker]
endpoint = "tcp://dockersocket4traefik:2375"
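For completeness, the corresponding traefik container can then be launched on the same network without any socket volume at all. A sketch in the same Ansible style – image tag, config path and published ports are assumptions:

- name: ensure traefik container
  docker_container:
    name: traefik
    image: traefik:1.7          # matches the v1 TOML configuration above
    log_driver: journald
    state: started
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/etc/traefik/traefik.toml:/etc/traefik/traefik.toml:z"   # contains the [docker] endpoint
    networks:
      - name: dockersocket4traefik_nw   # shared with the socket proxy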

Conclusion

This setup is surprisingly easy to get working. And it allows traefik to access the docker socket for the things it needs, without exposing permissions critical enough to take over the system. At the same time, full access to the docker socket is restricted to a non-public container, which makes it harder for attackers to exploit.

If you have a simple container setup and use Ansible to start and stop the containers, I’ve written a role to get the above-mentioned setup running.