Hello Isovalent!

As mentioned in my last post I left Red Hat – and today is my first day at Isovalent!

In my new position I will be a technical marketing manager, working on technical content, messaging and enablement. With Cilium Enterprise, Isovalent offers an eBPF-based solution for Kubernetes networking, observability and security – and since I am rather new to Kubernetes, I expect a steep learning curve.

I am looking forward to the challenges ahead of me, and will drop a blog post about it once in a while =)

Goodbye Red Hat

Over 6 years ago I joined Red Hat. For me it was a huge step – going from a small systems integrator to a large software vendor, moving to a dedicated pre-sales role (“Solution Architect”) in a larger team, and so on.

And I learned a lot. Like, A LOT. About how such a large enterprise is run and how a huge software vendor operates, but also, as my customer landscape changed, how customers of certain sizes and in certain industries work. At the same time I learned a lot about how the sales process of a software vendor works, what role a pre-sales engineer can take (and what not), and how good sales teams work. I was not without success in that role.

After my few years as a solution architect I had the chance to join the Ansible product team. This meant another big change, since I suddenly stopped talking to customers on a daily basis and instead talked to the larger sales organization within Red Hat. Also, moving from a German team to a mainly US team meant a lot of changes in how my daily schedule was set up – but that worked well with the kids who arrived at the same time. The new job had a lot of new components for me as well: being a Technical Marketing Manager means shaping the product message into consumable bites for people with a technical taste. Suddenly I had to figure out how I could enable other solution architects to present this to a technically savvy audience – especially if those solution architects are not product experts and have to sell multiple products anyway. I had a steep learning curve, but again the feedback was not bad, and the constantly growing team was just awesome.

But nothing is forever: over the recent months I realized that I have growth aspirations which simply no longer fit my position. Thus I had to make the hard decision to look for something else – and I found that something else outside Red Hat. My future is still within the Open Source ecosystem, deeply connected to Linux – no surprises there. But more about that in another post.

Right now I’d just like to thank Red Hat for an awesome time. And I would especially like to thank the various teams I worked in over the years, with all the people in them:

  • The public sector Germany team, with simply the best sales person I ever met and the best middleware solution architect I ever worked with.
  • The “Ansible workshop” crew, with the greatest mind in writing workshops at the top. We rocked so many conferences and summits.
  • The Ansible product team as it was when I joined – being right there when a just-acquired company settles into the arms of a new owner is a very interesting experience, thanks for all the support!
  • The BTE which formed later on – I had never worked in a team like that before, and I loved almost every day. It was great to grow and thrive in this team, through all transitions and re-organisations.

Without you, I wouldn’t be the person I am today – thanks for that, thanks for all the support! All the best for the future =)

[Howto] My own mail & groupware server, part 4: Nextcloud

Running your own mail and groupware server can be challenging. I recently had to re-create my own setup from the ground up and am describing the steps in a blog post series. This blog post is #4 of the series and covers the integration of Nextcloud for storage.

Let’s add Nextcloud to the existing mail server. This part will focus on setting it up and configuring it in basic terms. Groupware and webmail will come in a later post! If you are new to this series, don’t forget to read part 1: what, why, how?, and all about the mail server setup itself in the second post, part 2: initial mail server setup. We also added a Git server in part 3: Git server.

Nextcloud as “cloud” server

Today’s online experience does not only cover mail and other groupware functions, but also interaction with files in some kind of online storage. Arguably, for many people this is by now more important than e-mail.

Thus it makes sense to add a service to the mail server which provides a “cloud” experience around file management. The result lacks cloud properties like high availability, but provides a rich UI, accessibility from all kinds of devices, and integration with various services. It also offers the option to extend the functionality further.

Nextcloud is probably the best known solution for self-hosted cloud storage, and is also used at large scale by universities, governments and companies. I also picked it because I had past experience with it and it offers some integrations and add-ons I really like and depend on.

Alternatives worth checking out are ownCloud, Seafile and Pydio.

Integration into mailu setup

Nextcloud can be added to an existing mailu setup in three steps:

  1. Let Nginx know about the service
  2. Add a DB and set it up
  3. Add Nextcloud

The proxy bit is easily done by creating the file /data/mailu/overrides/nginx/nc.conf with the following content:

location /nc/ {
  add_header Front-End-Https on;    # signal that HTTPS is terminated at the proxy
  proxy_buffering off;              # do not buffer large up- and downloads
  fastcgi_request_buffering off;
  proxy_pass http://nc/;            # "nc" is the compose service added below
  client_max_body_size 0;           # disable the upload size limit at the proxy
}

We also need a DB. Add this to docker-compose.yml:

  # Nextcloud

  ncpostgresql:
    image: postgres:12
    restart: always
    environment:
      POSTGRES_PASSWORD: ...
    volumes:
      - /data/ncpostgresql:/var/lib/postgresql/data

Make sure to add a proper password here! Next, we have to bring the environment down and up again to add the DB container, and then access the DB and create the right user and database with corresponding privileges:

  • Get the DB up & running: docker compose down and docker compose up
  • Access the DB container: sudo docker exec -it mailu_ncpostgresql_1 /bin/bash
  • Become the postgres super user: su - postgres
  • Start the PostgreSQL shell: psql
  • Add the user nextcloud, with a proper password: CREATE USER nextcloud WITH PASSWORD '...';
  • Add the nextcloud database: CREATE DATABASE nextcloud TEMPLATE template0 ENCODING 'UNICODE';
  • Change the database owner to the user nextcloud: ALTER DATABASE nextcloud OWNER TO nextcloud;
  • Grant all privileges to nextcloud: GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;
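
Put together, the whole session looks roughly like this – a sketch, since the exact container name mailu_ncpostgresql_1 depends on your Docker Compose project name:

sudo docker exec -it mailu_ncpostgresql_1 /bin/bash
su - postgres
psql
-- inside the psql shell:
CREATE USER nextcloud WITH PASSWORD '...';
CREATE DATABASE nextcloud TEMPLATE template0 ENCODING 'UNICODE';
ALTER DATABASE nextcloud OWNER TO nextcloud;
GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;
\q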

Now we can add the Nextcloud container itself. We will add a few environment variables to properly configure the DB access and the initial admin account. Add the following listing to the Docker Compose file:

  nc:
    image: nextcloud:apache
    restart: always
    environment:
      POSTGRES_HOST: ncpostgresql
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: ....
      POSTGRES_DB: nextcloud
      NEXTCLOUD_ADMIN_USER: admin
      NEXTCLOUD_ADMIN_PASSWORD: ...
      NEXTCLOUD_TRUSTED_DOMAINS: front
      REDIS_HOST: redis
    depends_on:
      - resolver
      - ncpostgresql
    dns:
      - 192.168.203.254
    volumes:
      - /data/nc/main:/var/www/html
      - /data/nc/custom_apps:/var/www/html/custom_apps
      - /data/nc/data:/var/www/html/data
      - /data/nc/config:/var/www/html/config
      - /data/nc/zzz_upload_php.ini:/usr/local/etc/php/conf.d/zzz_upload_php.ini
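
Before starting the container, make sure the host paths exist: if the zzz_upload_php.ini file is missing, Docker would create a directory of that name instead of mounting a file. The content of that file follows in the next section.

mkdir -p /data/nc/main /data/nc/custom_apps /data/nc/data /data/nc/config
touch /data/nc/zzz_upload_php.ini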

Nextcloud configuration

Before we launch Nextcloud, we need to configure it properly. As shown in the last line of the previous example, a specific file is needed to define the values for PHP file upload sizes. This is only needed in corner cases (browsers split up files during upload automatically these days), but can help sometimes. Create the file /data/nc/zzz_upload_php.ini:

upload_max_filesize=2G   ; largest single file PHP accepts for upload
post_max_size=2G         ; must be at least as large as upload_max_filesize
memory_limit=4G          ; give PHP enough headroom for large requests

Next, we need to create the configuration for the actual Nextcloud instance. Stop the Docker Compose setup, and start it up again. That generates the basic config files on the disk, and you can access /data/nc/config/config.php and adjust the following variables (others are left intact):

  'overwritewebroot' => '/nc',
  'overwritehost' => 'nc.bayz.de',
  'overwriteprotocol' => 'https',
  'trusted_domains' => 
  array (
    0 => 'lisa.bayz.de',
    1 => 'front',
    2 => 'mailu_front_1.mailu_default',
    3 => 'nc.bayz.de',
  ),

After another Docker Compose down and up, the instance should be all good! If the admin password needs to be reset, access the container and reset the password via occ:

sudo docker exec -it mailu_nc_1 /bin/bash
su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ user:resetpassword admin"

Next we can connect Nextcloud to the mailu IMAP server to use it for authentication. First we install the app “External user authentication” from the developers section. Next we add the following code to the above mentioned config.php:

  'user_backends' => array(
    array(
      'class' => 'OC_User_IMAP',
      'arguments' => array(
        'imap', 143, 'null', 'bayz.de', true, false
      ),
    ),
  ),

Restart the setup, and logging in as a mail user should be possible.
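
To check which users Nextcloud knows about, occ offers a user listing – note that users coming from the IMAP backend only show up after their first successful login:

sudo docker exec -it mailu_nc_1 su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ user:list"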

Sync existing files

In my case the instance replaced a previous one. As part of the migration, a lot of “old” data had to be copied. The problem: copying the data for example via WebDAV is time consuming, performs badly, and can be troublesome when the sync needs to be picked up again after an interruption.

It is easier to sync directly from disk to disk with established tools like rsync. However, Nextcloud does not know that new files arrived that way and does not list them. The steps to make Nextcloud aware of them are the following, with a short sketch after the list:

  1. Log in as each user for which data should be synced so that target directories exist underneath the files/ directory
  2. Sync data with rsync or other tool of choice
  3. Correct permissions: chown -R ...:... files/
  4. Access container: sudo docker exec -it mailu_nc_1 /bin/bash
  5. Trigger file scan in Nextcloud: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ files:scan --all"
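
As a rough sketch, steps 2 to 5 could look like this for a single user – the user name alice and the source path /backup/alice/files are hypothetical, and the ownership 33:33 assumes the www-data UID/GID of the official Nextcloud image:

rsync -av /backup/alice/files/ /data/nc/data/alice/files/
chown -R 33:33 /data/nc/data/alice/files/
sudo docker exec -it mailu_nc_1 su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ files:scan --all"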

Recurrent updater

For add-ons like the news reader, Nextcloud needs to perform tasks on a regular basis. Surprisingly enough, Nextcloud cannot easily do this on its own. The usual way is a recurring job – and the best way to set one up is a systemd timer.

So first we add the service to be triggered regularly. On the host itself (not inside the container) create the file /etc/systemd/system/nextcloudcron.service:

[Unit]
Description=Nextcloud cron.php job

[Service]
ExecStart=/usr/bin/docker exec mailu_nc_1 su -s /bin/sh www-data -c "/usr/local/bin/php -f /var/www/html/cron.php"

Then, create the timer via the file /etc/systemd/system/nextcloudcron.timer:

[Unit]
Description=Run Nextcloud cron.php every 5 minutes

[Timer]
OnBootSec=5min
OnUnitActiveSec=5min
Unit=nextcloudcron.service

[Install]
WantedBy=timers.target

Enable the timer: systemctl enable --now nextcloudcron.timer. And that is it! A timer is way more flexible, usable and maintainable than the old cron jobs. If you are new to timers, check their execution with sudo systemctl list-timers.
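
On the host, the full sequence looks like this – the daemon-reload makes sure systemd picks up the newly created unit files:

sudo systemctl daemon-reload
sudo systemctl enable --now nextcloudcron.timer
sudo systemctl list-timers | grep nextcloud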

DB performance

A lot of Nextcloud’s performance depends on the performance of the DB. And DBs are all about indices. There are a few commands which can help with that – and which are recommended by the self check inside Nextcloud anyway:

  • Access the container: sudo docker exec -it mailu_nc_1 /bin/bash
  • Add missing indices: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ db:add-missing-indices"
  • Convert the file cache to bigint: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ db:convert-filecache-bigint"
  • Add missing columns: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ db:add-missing-columns"

Preview generator – fast image loading

The clients mainly used to access Nextcloud will probably be the Android client and a web browser. However, scrolling through galleries full of images is a pain: it takes ages until all the previews are loaded. Sometimes even a slide show is not possible because it all just takes too long.

This is because the images are not downloaded at full size (that would take too long); instead, previews of the size required at that moment are generated on the fly (which still takes long, just not as long).

To make this all faster, one idea is to pre-generate the previews! To do so, we install the app “Preview Generator” in our instance. However, by default it generates a few too many preview files, many in sizes which are hardly ever used. So we need to alter the sizes to be generated:

$ sudo docker exec -it mailu_nc_1 /bin/bash
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set previewgenerator squareSizes --value='256 1024'"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set previewgenerator widthSizes  --value='384 2048'"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set previewgenerator heightSizes --value='256 2048'"

Also we want to limit the preview sizes to not waste too much storage:

$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:system:set preview_max_x --value 2048"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:system:set preview_max_y --value 2048"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:system:set jpeg_quality --value 80"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set preview jpeg_quality --value='80'"

Last but not least we run the preview generator:

su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ preview:generate-all -vvv"

Note that this can easily take hours, and thus I recommend launching it inside a tmux session.
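
A sketch of how that could look – the session name previews is arbitrary:

tmux new -s previews
sudo docker exec -it mailu_nc_1 su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ preview:generate-all -vvv"
# detach with Ctrl-b d, re-attach later with: tmux attach -t previews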

Of course new files will reach the system, so once in a while new previews should be generated. Use this command:

su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ preview:pre-generate"

This can also be automated with a systemd timer, similar to the one above. Create the file /etc/systemd/system/nextcloudpreview.service:

[Unit]
Description=Nextcloud image preview generator job

[Service]
ExecStart=/usr/bin/docker exec mailu_nc_1 su -s /bin/sh www-data -c "/usr/local/bin/php -f /var/www/html/occ preview:pre-generate"

And add a timer similar to the above one triggering the service every 15 minutes. Create the file /etc/systemd/system/nextcloudpreview.timer:

[Unit]
Description=Run Nextcloud preview generator every 15 minutes

[Timer]
OnBootSec=15min
OnUnitActiveSec=15min
Unit=nextcloudpreview.service

[Install]
WantedBy=timers.target

Launch the timer: sudo systemctl enable --now nextcloudpreview.timer

One final word of caution: previews can take up a lot of space. Like A LOT. Expect maybe an additional 20% of storage needed for your images.

What’s next?

With Nextcloud up and running and all old data synced I was feeling good: all basic infrastructure services were running again. People could access all their stuff with only slight adjustments to their URLs.

The missing piece now was webmail and general groupware functionality. This will be covered in another post of this series.

More about that in the next post.

Image by RÜŞTÜ BOZKUŞ from Pixabay

[Howto] My own mail & groupware server, part 3: Git server

Running your own mail and groupware server can be challenging. I recently had to re-create my own setup from the ground up and am describing the steps in a blog post series. This blog post is #3 of the series and covers the integration of an additional Git server.

This post is all about setting up an additional Git server to the existing mail server. Read about the background to this setup in the first post, part 1: what, why, how?, and all about the mail server setup itself in the second post, part 2: initial mail server setup.

Gitea as Git server

I heavily use Git all the time. And there is enough information in my repositories that I feel more comfortable hosting it only on infrastructure I control. So for the new setup I also wanted to have a Git server again, as fast as possible.

In the past I used GitLab as my Git server, but it is very resource intensive and just overkill for my use cases. Thus, years ago, I already replaced GitLab with Gitea – a lightweight, painless self-hosted Git service. It is quickly set up, simple to use, nevertheless offers all relevant Git features, and simply does its job. Gitea itself is a fork of Gogs, which was not really community friendly. These days Gitea is a far more active and prospering project than Gogs.

Background: Nginx as reverse proxy in Mailu

So how do you “attach” Gitea to a running Mailu infrastructure? Mailu itself comes with a set of defined services, and that’s it. There is no plugin or module system to extend it. However, the project does offer special “overrides” directories where additional configuration can be placed – and this applies to Nginx as well! That way, a service can be placed right next to the other Mailu services, behind the same reverse proxy, and thus benefit from the already existing setup, for example the certificate regeneration. There is also no conflict over the already used ports 80 and 443.

Overrides can be placed in /data/mailu/overrides/nginx. They are basically just snippets of Nginx configuration. Note though that they are included within the main server block! That means they can only work on locations, not on server names. This is somewhat unfortunate since I used to address all my old services via sub-domains: git.bayz.de, nc.bayz.de, etc. With the new setup and the limitation to locations this is not an option anymore; everything has to work on different paths instead: bayz.de/git, bayz.de/nc, etc.

This is unfortunate because it also meant that I had to reconfigure clients – and ask others to reconfigure their clients when using my infrastructure. I would be happy to get back to pure sub-domain based addressing, but I don’t see how this could be possible without changing the actual Nginx image.

Adding Gitea entry to Nginx Override

Having said that, to add Gitea to Nginx, create this file: /data/mailu/overrides/nginx/git.conf

location /gitea/ {
  # the trailing slashes on both the location and the proxy_pass URL make
  # Nginx strip the /gitea/ prefix before handing requests to Gitea
  proxy_pass http://git:3000/;    # "git" is the compose service added below
}

And that’s it already. More configuration is not needed since Mailu already configures Nginx with reasonable defaults.
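
For Nginx to actually pick up the new override, the proxy container needs a restart. A sketch, assuming Mailu’s usual front service name and the domain used later in this post:

docker compose restart front
curl -kI https://git.bayz.de/gitea/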

This also gives a first hint that it is pretty easy to add further services – I will cover more examples in this ongoing blog post series.

Additional entry to docker compose

To start Gitea itself, add it to Mailu’s docker-compose.yml:

  # gitea

  git:
    image: gitea/gitea:latest
    restart: always
    env_file: mailu.env
    depends_on:
      - resolver
    dns:
      - 192.168.203.254
    volumes:
      - /data/gitea:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "22:22"

Note the shared volumes: that way the Gitea configuration file will be written to your storage at /data/gitea/gitea/conf/app.ini. Also note the port mapping: exposing port 22 for Git over SSH only works if the host’s own SSH daemon is not already listening on that port.

Also, we want to set the UID and GID for Gitea via environment variables. Set this in your mailu.env:

###################################
# Gitea settings
###################################
USER_UID=1000
USER_GID=1000

Setting up Gitea basic configuration

Now let’s configure Gitea. It is possible to pre-create a full Gitea configuration and start the container with it. However, documentation on that is sparse and in my tests there were always problems.

So in my case, I just started and stopped the container (docker compose down and up) a few times, edited some configuration, registered an admin user via the GUI once, and was done. While this worked, I can only recommend closely watching the logs during that time to ensure that no one else accesses the container and does mischief!

So, the first step is to start the new Docker Compose service. This writes a first vanilla Gitea configuration. Afterwards, add the correct domain information to the [server] section of the Gitea configuration file app.ini.
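
A sketch of the relevant values – the domain matches the one used later in this post, so adjust it to your setup:

[server]
DOMAIN = git.bayz.de
SSH_DOMAIN = git.bayz.de
ROOT_URL = https://git.bayz.de/gitea/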

Note that the ROOT_URL value ensures the required rewrite of all requests and links so that the setup works flawlessly with the above mentioned Nginx configuration!

Next, bring the service down and up again (docker compose down and up), log in to your new service (here: git.bayz.de/gitea) and register a new admin user. Note that here you also have to pick the database option. For small systems with only very few concurrent connections SQLite is fine. If you will serve more users, or automated access, pick PostgreSQL. However, to make that work you need to bring up another PostgreSQL container. One of the next posts will introduce one, so you might want to re-think your setup then.

Directly after this admin registration is done, change the value of DISABLE_REGISTRATION to true in the [service] section of the Gitea configuration file app.ini. Stop and start the service again, and no new (external) users can register anymore.
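
The corresponding snippet in app.ini:

[service]
DISABLE_REGISTRATION = true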

But how do we register new users now?

Central services authentication in Mailu: mail

One of the major hassles of my last setup was authentication. I started with a fully blown OpenLDAP years ago, which was already a pain to manage and maintain. Moving over to FreeIPA meant that I had better interfaces and even a UI, but it was still a complex, tricky service. Also, while almost every service out there can be connected to LDAP, that is not always easy or a pleasant experience. And given that I only have a few users on my system, it is hardly worth the trouble.

Mailu offers an interesting approach here: users are stored in a DB, and external services are asked to authenticate against the email services (IMAP, SMTP). I was surprised to learn that indeed many services out there support this or have plugins for that.

Gitea can authenticate against SMTP sources, and I decided to go that route:

  • In Gitea, access “Site Administration”
  • Click on “Authentication Sources”
  • Pick the blue button “Add Authentication Source”
  • As “Authentication Type“, choose SMTP
  • Give it a name
  • As “SMTP Authentication Type“, enter LOGIN
  • As “SMTP Host”, provide the external host name, in my case lisa.bayz.de (more on that further down below)
  • Pick the right “SMTP Port“, 587
  • And limit the “Allowed Domains” if you want, in my case to bayz.de
  • Of course, tick the check box “Enable TLS Encryption” and also the check box “This Authentication Source is Activated”

After this is done, log out of Gitea and log in with an existing mail user. It should just work! And that all without any trace of LDAP! Awesome, right?

A word about the SMTP host in the above configuration: do not enter the SMTP Docker Compose service here directly. This will not work: port 587 is managed by the Nginx proxy, which acts as a mail proxy here and redirects authentication requests to the admin portal. The internal SMTP container does not even listen on port 587.

What’s next?

With my private Git server back to life I felt slightly better again. Now that I had the infrastructure at hand, I needed to tackle the cloud/file sharing part of it all, to also lay the foundations for the groupware pieces: Nextcloud.

More about that in the next post.

Featured image by Myriam Zilles from Pixabay

[Howto] My own mail & groupware server, part 1: what, why, how?

Running your own mail and groupware server can be challenging. I recently had to re-create my own setup from the ground up, and this blog post is the first in a series describing all the steps.

Running mail servers on your own

Running your own mail server sounds tempting: having control over a central piece of your own communication does sound good, right?

But that can be quite challenging: the mail standards were not written for today’s way of operating technology. Many of them are vague, too generic to be really helpful, or just not widely deployed in reality. Other things are not standardized at all, so you have to guess and test (max message sizes, anyone?). Also, even the biggest providers have their own interpretations of the standards and sometimes aggressively ignore them – which you are forced to accept.

Then there is the spam problem: there is still a lot of spam out there. And since this is an ongoing fight, mail server admins have to constantly adjust their systems to the newest tricks and requirements. Think of SPF, DKIM, DMARC and DANE here.

Last but not least, the market is more and more dominated by large corporations. If your email is tagged as spam by one of those, you often have no way to figure out what the problem is – or how to fix it. They simply will not talk to you if you are not of equal size (or otherwise important). In fact, taking a pessimistic look into the future of email, it might happen that all small mail service providers die and we all have to use the big services.

Thus the question is whether anyone should run their own mail server at all. Frankly, I would not recommend it if you are not really motivated to do so. So be warned.

However, if you do decide to do it on your own, you will learn a lot about the underlying technology, about how a core technology of “the internet” works, and about how companies work and behave. And you will have huge control over a central piece of today’s communication: mail is still a cornerstone of modern communication, even if we all hate it.

My background

To better understand my motivation it helps to know where I come from: In my past job at credativ I was project manager for a team dealing with large mail clusters. Like, really large. The people in the team were and are awesome folks who *really* understand mail servers. If you ever need help running your own open source mail infrastructure, get them on board, I would vouch for them anytime.

And while I never reached, and never will reach, the level of understanding the people in my team had, I got my fair share of knowledge: about the technological components, the developments in the field, the challenges, and so on. Out of this I decided at some point that it would be fun to run my own mail server (yeah, not the brightest day of my life, in hindsight…).

Thus at some point I set up my own domain and mail server. And right from the start I wanted more than a mail server: I wanted a groupware server. Calendars, address books, and such. I do not recall how it all started or what the first setup looked like, but I know that there was a Zarafa instance once, in 2013. I also used OpenLDAP for a while, munin was in there as well, even a trac service to host a Git repository. Certificates were shipped via StartSSL. Yeah, good times.

In summer 2017 this changed: I moved Zarafa out of the picture, in came SOGo. Also, trac was replaced by GitLab, and that again by Gitea. The mail server was completely based on Postfix, Dovecot and the likes (Amavisd, SpamAssassin, ClamAV). OpenLDAP was replaced by FreeIPA, StartSSL by Let’s Encrypt. All this was set up via Docker containers, for easier separation of services and simpler management. Nginx was the reverse proxy. Besides the groupware components and the Git server there was also an ownCloud (later Nextcloud) instance. Some of the container images were upstream, some I built myself. There was even a secondary mail server for emergencies, though that one was always somewhat out of date in terms of configuration.

This all served me well for years. Well, more or less. It was never perfect and missed a lot of features. But most mail got through.

Why the restart?

If it all served me well, why did I have to re-create the setup? Well, a few days ago I had to run an update of the certificates (still done manually at that time). Since I had to bring down the reverse proxy for it, I decided to run a full update of the underlying OS and of the Docker images, and to reboot the machine.

It went fine and the machine came back up – but something was wrong. Postfix had problems accepting mails. The more I dug down, the deeper the rabbit hole got. Postfix simply didn’t answer after the “DATA” part of the SMTP communication anymore. Somehow I got that fixed – but then Dovecot didn’t accept the mails for unknown reasons, and bounces were created!

I debugged for hours. But every time I thought I had figured it out, another problem came up. At one point I realized that the underlying FreeIPA service was restarting erratically, and I had no idea why.

After three or four days I still had no idea what was going on or why my system was behaving that badly. Even with a verified working configuration from backup, things randomly broke. I was not able to receive or send mails reliably. My three major suspects were:

  • FreeIPA had a habit in the past of introducing new problems with new images – maybe this image was broken as well? But I wasn’t able to find obvious issues or reports.
  • Docker was updated from an outdated version to something newer – and Docker was never a friend of CentOS firewall rules. Maybe the recent update screwed up my delicate network setup?
  • Faulty RAM? Weird, hard-to-reproduce and changing errors in known-to-be-working setups can be a sign of faulty RAM. Maybe the hardware was done for.

I realized I had to make a decision: abandon my own mail hosting approaches (the more sensible option) – or get a new setup running fast.

Well – guess what I did?

Running your own mail server: there is a project for that!

I decided to re-create my setup. And this time I decided not to do it all by myself: over the years I noticed that I was not the only person with the crazy idea of running their own mail server in containers. Others have started entire projects around this, with many contributors and additional tooling. I realized that I would lose little by using code from such existing projects, but would gain a lot: better tested code, more people to ask and discuss with if problems arise, more features added by others, etc.

Two projects had caught my interest over time, and I had already been following them on GitHub for quite a while: Mailu and mailcow. Indeed, my original plan was to migrate to one of them in the long term, like in 2021 or so, maybe even hosted on Kubernetes or at least Podman. However, with the recent outage of my mail server I had to act quickly, and decided to go with a Docker based setup again.

Both projects mentioned above are basically built around Docker Compose, Postfix, Dovecot, rspamd and some custom admin tooling to make things easier. If you look closer they both have their advantages and special features, so if you are thinking of running your own mail server I suggest you look into them yourself.

For me the final decision was to go with mailu: mailu supports Kubernetes, and I wanted to be prepared for a Kubernetes based future.

What’s next?

So with all this background you already know what to expect from the next posts: how to bring up mailu as a mail server, how to add Nextcloud and Gitea to the picture, and a few other gimmicks.

This will all be tailored to my needs – but I will try to keep it as close to the defaults as possible: first to keep it simple, but also to make this content reusable for others. I do hope that this will help others to start their own setups or fine-tune what they already have.

Image by Gerhard Gellinger from Pixabay