[Howto] My own mail & groupware server, part 4: Nextcloud

Running your own mail and groupware server can be challenging. I recently had to re-create my own setup from the ground up and describe the steps in a blog post series. This blog post is #4 of the series and covers the integration of Nextcloud for file storage.

Let’s add Nextcloud to the existing mail server. This part focuses on setting it up and configuring the basics; groupware and webmail will come in a later post! If you are new to this series, don’t forget to read part 1: what, why, how?, the mail server setup itself in part 2: initial mail server setup, and the Git server we added in part 3: Git server.

Nextcloud as “cloud” server

Today’s online experience does not only cover mail and other groupware functions, but also working with files in some kind of online storage. Arguably, for many people this is by now more important than e-mail.

Thus it makes sense to add a service to the mail server that provides a “cloud” experience around file management. The result lacks cloud properties such as high availability, but it provides a rich UI, is accessible from all kinds of devices, integrates with various services and can be extended with additional functions.

Nextcloud is probably the best known self-hosted cloud solution, and it is also used at large scale by universities, governments and companies. I picked it because I had past experience with it and because it offers some integrations and add-ons I really like and depend on.

Alternatives worth checking out are ownCloud, Seafile and Pydio.

Integration into mailu setup

Nextcloud can be added to an existing mailu setup in three steps:

  1. Let Nginx know about the service
  2. Add a DB and set it up
  3. Add Nextcloud

The proxy bit is easily done by creating the file /data/mailu/overrides/nginx/nc.conf with the following content:

location /nc/ {
  add_header Front-End-Https on;
  proxy_buffering off;
  fastcgi_request_buffering off;
  proxy_pass http://nc/;
  client_max_body_size 0;
}

We also need a DB. Add this to docker-compose.yml:

  # Nextcloud

  ncpostgresql:
    image: postgres:12
    restart: always
    environment:
      POSTGRES_PASSWORD: ...
    volumes:
      - /data/ncpostgresql:/var/lib/postgresql/data

Make sure to set a proper password here! Next, we have to bring the environment down and up again to add the DB container, and then access the DB and create the right user and database with corresponding privileges (a consolidated sketch of the whole sequence follows the list):

  • Get the DB up & running: docker compose down and docker compose up
  • Access the DB container: sudo docker exec -it mailu_ncpostgresql_1 /bin/bash
  • Become the postgres super user: su - postgres
  • Add the user nextcloud (set a proper password here): CREATE USER nextcloud WITH PASSWORD '...';
  • Add the nextcloud database: CREATE DATABASE nextcloud TEMPLATE template0 ENCODING 'UNICODE';
  • Change the database owner to the user nextcloud: ALTER DATABASE nextcloud OWNER TO nextcloud;
  • Grant all privileges to nextcloud: GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;
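
Put together, the whole sequence looks roughly like this – a sketch, reusing the container name and SQL statements from the list above; the passwords are placeholders, and the SQL is entered in psql as the postgres user:

# on the host
docker compose down && docker compose up -d
sudo docker exec -it mailu_ncpostgresql_1 /bin/bash

# inside the container: become the postgres super user and open psql
su - postgres
psql

-- inside psql: create the user and database for Nextcloud
CREATE USER nextcloud WITH PASSWORD '...';
CREATE DATABASE nextcloud TEMPLATE template0 ENCODING 'UNICODE';
ALTER DATABASE nextcloud OWNER TO nextcloud;
GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;
\q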

Now we can add the Nextcloud container itself. We will add a few environment variables to properly configure the DB access and the initial admin account. Add the following listing to the Docker Compose file:

  nc:
    image: nextcloud:apache
    restart: always
    environment:
      POSTGRES_HOST: ncpostgresql
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: ....
      POSTGRES_DB: nextcloud
      NEXTCLOUD_ADMIN_USER: admin
      NEXTCLOUD_ADMIN_PASSWORD: ...
      NEXTCLOUD_TRUSTED_DOMAINS: front
      REDIS_HOST: redis
    depends_on:
      - resolver
      - ncpostgresql
    dns:
      - 192.168.203.254
    volumes:
      - /data/nc/main:/var/www/html
      - /data/nc/custom_apps:/var/www/html/custom_apps
      - /data/nc/data:/var/www/html/data
      - /data/nc/config:/var/www/html/config
      - /data/nc/zzz_upload_php.ini:/usr/local/etc/php/conf.d/zzz_upload_php.ini

Nextcloud configuration

Before we launch Nextcloud, we need to configure it properly. As shown in the last line of the previous example, a dedicated file defines the PHP file upload limits. This is only needed in corner cases (clients usually chunk large uploads automatically these days), but it can help sometimes. Create the file /data/nc/zzz_upload_php.ini:

upload_max_filesize=2G
post_max_size=2G
memory_limit=4G

Next, we need to create the configuration for the actual Nextcloud instance. Stop the Docker Compose setup, and start it up again. That generates the basic config files on disk, and you can open /data/nc/config/config.php and adjust the following variables (leave the others intact):

  'overwritewebroot' => '/nc',
  'overwritehost' => 'nc.bayz.de',
  'overwriteprotocol' => 'https',
  'trusted_domains' => 
  array (
    0 => 'lisa.bayz.de',
    1 => 'front',
    2 => 'mailu_front_1.mailu_default',
    3 => 'nc.bayz.de',
  ),

After another Docker Compose down and up, the instance should be all good! If the admin password needs to be reset, access the container via sudo docker exec -it mailu_nc_1 /bin/bash and reset the password with: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ user:resetpassword admin"

Next we connect Nextcloud to the mailu IMAP server to use it for authentication. First we install the app “External user authentication” from the developers section. Then we add the following code to the above-mentioned config.php:

  'user_backends' => array(
    array(
      'class' => 'OC_User_IMAP',
      'arguments' => array(
        'imap', 143, 'null', 'bayz.de', true, false
      ),
    ),
  ),

Restart the setup, and logging in as a mail user should be possible.

Sync existing files

In my case the instance replaced a previous one, so as part of the migration a lot of “old” data had to be copied. The problem: copying the data via WebDAV, for example, is time consuming, performs poorly and gets troublesome when the sync has to be picked up again after an interruption.

It is easier to sync directly from disk to disk with established tools like rsync. However, Nextcloud does not know that new files arrived that way and does not list them. The steps to make Nextcloud aware of them are listed here, with a hedged example of the full sequence after the list:

  1. Log in as each user for which data should be synced so that target directories exist underneath the files/ directory
  2. Sync data with rsync or other tool of choice
  3. Correct permissions: chown -R ...:... files/
  4. Access container: sudo docker exec -it mailu_nc_1 /bin/bash
  5. Trigger file scan in Nextcloud: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ files:scan --all"
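
A hedged example of steps 2–5 for a single, hypothetical user alice, assuming the data volume from the Compose file above (/data/nc/data) and the usual www-data UID/GID of 33 inside the official image – verify both for your setup:

# 2. sync the old data into the user's files/ directory (source path is an example)
sudo rsync -a --info=progress2 /path/to/old/files/ /data/nc/data/alice/files/

# 3. hand the files over to the web server user of the container (UID/GID 33 = www-data in the official image)
sudo chown -R 33:33 /data/nc/data/alice/files/

# 4. + 5. make Nextcloud scan for the new files
sudo docker exec -it mailu_nc_1 su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ files:scan --all"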

Recurring updater

For add-ons like the news reader, Nextcloud needs to perform tasks on a regular basis. Surprisingly enough, Nextcloud cannot easily do this on its own. The usual way is to trigger the work externally via a cron job, and the best way to do that these days is a systemd timer.

So first we add the service to be triggered regularly. On the host itself (not inside the container) create the file /etc/systemd/system/nextcloudcron.service:

[Unit]
Description=Nextcloud cron.php job

[Service]
ExecStart=/usr/bin/docker exec mailu_nc_1 su -s /bin/sh www-data -c "/usr/local/bin/php -f /var/www/html/cron.php"

Then, create the timer via the file /etc/systemd/system/nextcloudcron.timer:

[Unit]
Description=Run Nextcloud cron.php every 5 minutes

[Timer]
OnBootSec=5min
OnUnitActiveSec=5min
Unit=nextcloudcron.service

[Install]
WantedBy=timers.target

Enable the timer: systemctl enable --now nextcloudcron.timer. That is it, and it is way more flexible and maintainable than old-style cron jobs. If you are new to timers, check their execution with sudo systemctl list-timers.

DB performance

A lot of Nextcloud’s performance depends on the performance of the DB. And DBs are all about indices. There are a few commands which can help with that – and which are recommended by the self-check inside Nextcloud anyway:

  • Access the container: sudo docker exec -it mailu_nc_1 /bin/bash
  • Add missing indices: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ db:add-missing-indices"
  • Convert the file cache to bigint: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ db:convert-filecache-bigint"
  • Add missing columns: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ db:add-missing-columns"

Preview generator – fast image loading

The major clients used to access Nextcloud will probably be the Android client and a web browser. However, scrolling through galleries full of images is a pain: it takes ages until all the previews are loaded. Sometimes even a slide show is not possible because it all just takes too long.

This is because the images are not downloaded in full size (that would take too long); instead, previews of the size required at that moment are generated on the fly (which still takes long, just not quite as long).

To make this all faster, one idea is to pre-generate the previews! To do so, we install the app “Preview Generator” in our instance. By default, however, it generates rather a lot of preview files, many in sizes which are hardly ever used. So we need to alter the sizes to be generated:

$ sudo docker exec -it mailu_nc_1 /bin/bash
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set previewgenerator squareSizes --value='256 1024'"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set previewgenerator widthSizes  --value='384 2048'"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set previewgenerator heightSizes --value='256 2048'"

Also we want to limit the preview sizes to not waste too much storage:

$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:system:set preview_max_x --value 2048"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:system:set preview_max_y --value 2048"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:system:set jpeg_quality --value 80"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set preview jpeg_quality --value='80'"

Last but not least we run the preview generator:

su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ preview:generate-all -vvv"

Note that this can easily take hours, and thus I recommend launching it inside a tmux session.
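
A minimal sketch of the tmux approach – the session name is arbitrary:

# start a named tmux session and run the generator in it
tmux new-session -s previews
sudo docker exec -it mailu_nc_1 su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ preview:generate-all -vvv"
# detach with Ctrl-b d, reattach later with:
tmux attach -t previews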

Of course new files keep arriving on the system, so once in a while new previews should be generated. Use this command:

su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ preview:pre-generate"

This can also be automated with a systemd service and timer similar to the ones above. Create the file /etc/systemd/system/nextcloudpreview.service:

[Unit]
Description=Nextcloud image preview generator job

[Service]
ExecStart=/usr/bin/docker exec mailu_nc_1 su -s /bin/sh www-data -c "/usr/local/bin/php -f /var/www/html/occ preview:pre-generate"

And add a timer similar to the above one triggering the service every 15 minutes. Create the file /etc/systemd/system/nextcloudpreview.timer:

[Unit]
Description=Run Nextcloud preview generator every 15 minutes

[Timer]
OnBootSec=15min
OnUnitActiveSec=15min
Unit=nextcloudpreview.service

[Install]
WantedBy=timers.target

Launch the timer: sudo systemctl enable --now nextcloudpreview.timer

One final word of caution: previews can take up a lot of space. Like A LOT. Expect maybe an additional 20% of storage needed for your images.

What’s next?

With Nextcloud up and running and all old data synced I was feeling good: all basic infrastructure services were running again. People could access all their stuff with only slight adjustments to their URLs.

The missing piece now was webmail and general groupware functionality. More about that in the next post of this series.

Image by RÜŞTÜ BOZKUŞ from Pixabay

[Howto] Create your own cloud gaming server to stream games to Fedora

A few months back I wanted to give a game a try which only runs on Windows and requires a dedicated GPU. Since I have neither of those, I decided to set up my own Windows cloud gaming server and stream the game to my Linux machine.

Many years ago there was one game I played day and night. For weeks, months, maybe even years. To this day I remember the distinct soundtrack that makes the hair stand up on the back of my neck: UFO: Enemy Unknown. I loved the game! A few years ago I also played one of the open source games inspired by UFO for quite some time, UFO: AI. That was fun.

Two sequels to the original game were released over the last couple of years. But they never really were an option since they required Windows (or so I thought) and, above all, time. However, a few months ago I realized that one of the sequels, XCOM: Enemy Unknown, was available for Android. Since I have a brand new flagship Android tablet I gave it a shot – and it was great! But since the Android version was seriously limited, I played it again on Linux. That barely worked with my limited Intel GPU, but it was playable, and I had fun.

I was infected with the urge to play the game more – and when a third sequel was announced, I at least wanted to play the second one, XCOM 2. But how? My GPU was too limited, and eGPUs are expensive and often involve a lot of hassle – even if I were willing to buy a Windows license. So I checked whether cloud gaming could do the trick.

Cloud Gaming Services

The idea of cloud gaming is that powerful machines in a data center do the rendering, and the client machine only displays the end result. That shifts the need for a powerful GPU to the data center, and the client only needs basic graphics capabilities to show a stream of images. This does, however, require a responsive broadband connection between the client and the data center.

This principle is not new, but it got fresh attention recently when Google announced their cloud gaming offering Stadia. I checked if any cloud gaming service offered my game of choice – and was available on Linux. Unfortunately, the results were disappointing:

  • Stadia: no XCOM2; Linux support only via the Chrome browser (thanks to zesoup for the hint)
  • GeForce Now: no XCOM2, no Linux client
  • Playstation Now: XCOM2 available, but no Linux client
  • Vortex: no XCOM2, no Linux client

Some of the above can be used on Linux with the help of Lutris, which uses Wine in the background. But for me that would only count as a last resort. I was not that desperate yet.

However, not all was lost yet: some services are not tied to a certain game catalog, but instead offer a generic server and client onto which you can install your own games. At first the research looked promising: shadow.tech offers machines for just that and a working Linux client! However, they are not available in my area.

The solution: Parsec

So with all ready-to-consume options out of the picture, I was almost willing to give up (or give Lutris and PlayStation Now a chance, or even buy an eGPU). But then I stumbled upon something interesting: Parsec, a client for interactive game streaming.

Parsec is a high performance, low latency 60 FPS remote access product connecting you to your computer from anywhere.

Parsec features

That itself didn’t solve my problem. But it opened a window to a new solution: in the past, the company offered cloud hosted game servers of its own. Players could connect to them with the Parsec client and play games on them together – or on their own. The Parsec promise is that their client is fast enough for a reasonably good experience.

The server offering was canceled some time ago – but nothing stopped me from launching my own server and connecting the Parsec client to it. And that is what I did. Read on to learn how to do that yourself.

Step 1: Getting a Windows cloud server with a reasonable GPU

What is needed is a cloud hosted Windows machine with a reasonable GPU. Ideally, the data center hosting the machine should not be on the other side of the planet. AWS, Azure, GCP and others have such offerings. But there is an even better route: during my research I found Paperspace, a company specializing in GPU and AI cloud platforms. That is perfect for this use case!

Paperspace does not really advertise their support for gaming platforms. But after I signed up and looked what was needed to create my first cloud server I found a Parsec template:

That makes the entire process very easy!

  • Sign up with Paperspace, get billing sorted out (yes, this stuff costs money)
  • Get to Core -> Compute -> Machines, create a new machine
  • From Public Templates, get the Parsec cloud gaming template
  • Pick the right size for your games; for me a P4000 was enough.
  • Make sure to add a public IP and enough storage. Many of today’s games easily consume dozens of GB
  • Set the auto-shutdown timer. No need to waste money.
  • Start the machine.

And that’s it already. Once the machine starts, you will notice a Parsec icon on the home screen. Time to get that working.

Step 2: Get Parsec

Parsec has clients for Linux based operating systems such as Ubuntu and Raspberry Pi. There is even an AppImage or a Snap – unfortunately not a Flatpak yet. Update: there is now even a Flatpak package available! Thanks Sheogorath for the hint!

And if you are not willing to use Flatpak, AppImage or Snap for whatever reason, you can download the Ubuntu deb and create an RPM out of it. There is even a handy script for that. Either way, get it installed.
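
For example, on Fedora the Flatpak route is probably the quickest – a sketch, assuming the Flathub remote is already configured and the application ID is com.parsec.parsecd (verify the ID on flathub.org):

# install and start the Parsec client from Flathub
flatpak install flathub com.parsec.parsecd
flatpak run com.parsec.parsecd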

Sign up to Parsec, start the client, log in, and you are almost there:

Step 3: Play

After Parsec is all set, just start the cloud server, start Parsec there (maybe log in to your Parsec account), connect to the session on your client – and you are good to go: You can start playing!

For a first test I just watched some YouTube videos and was surprised by the quality. Next I logged in to my Steam account, got XCOM 2 installed and played away happily!

Performance and user experience

But how good is the performance? Well, that depends mostly on one factor: network. Due to unfortunate circumstances I was “able” to test this setup with three very distinct networks in a short time frame:

  • A rather slowish, unstable WiFi with a lot of jitter
  • A LTE connection, provided to me via WiFi hotspot
  • A top-notch, high performance mesh WiFi

If you have high ping times (anything clearly above 25 ms) and/or a lot of jitter, I cannot recommend going down this path. Otherwise it can be a serious option!

The first network I was on was horribly slow, and the experience was horrible as well. XCOM 2 has more or less permanent background music, and the constant interruptions in the music and audio sequences were in fact the worst part for me.

The LTE based network was slightly better, but still far from a native feeling. I was able to get a decent experience out of it and have fun, but that was about it.

However, the third option, WiFi of almost wired quality, was so good that at times I forgot I was not playing the game natively. There was no visible lag, the graphics were crystal clear, the music was never interrupted, and so on. I was impressed – and had great sessions that way!

I can only recommend always keeping an eye on the connection quality reported in the Parsec overlay:

As Parsec mentions:

At 60 frames per second, 1 frame is around 16ms. By combining decode, encode and network, you’ll have the amount of frames the client lags behind.

Parsec about lag latency

With this in mind, the above screenshot shows a connection with an unfortunate amount of lag, leading to a not-so-good experience. For example, with hypothetical values of 5 ms for decoding, 7 ms for encoding and 30 ms of network latency, the client is roughly 42 ms – about two to three frames – behind.

Recap

If you don’t have the hardware and/or software to play your favorite game, cloud gaming can be a solution for your problem. And if there is no proper offering out there, it is possible to get this working on your own.

Running your own cloud gaming server is surprisingly easy and not too expensive. It does feel somewhat weird in the beginning especially if you usually only use clouds for your professional work. But it is a fun experience, and the results can be staggering – if your network is up for the job!

Featured image by Martin Str from Pixabay

[Short Tip] Quickly copy files to Samsung Galaxy S20+ on Fedora 32

Sometimes, if you cannot properly transfer files to your phone, try another mtp implementation!

Transferring files from a Fedora 32 machine to an Android phone is usually not a problem: plug it in (via USB), unlock the screen, make sure the USB connection is set to file transfer, and open the phone in Nautilus.

However, I recently had to get a new phone and opted for the Samsung Galaxy S20+ – and there I was not able to write files to the device:

rsync: open "/run/user/1000/gvfs/mtp:host=SAMSUNG_SAMSUNG_Android_R58N23ZDCVN/Phone/appdata/2020-04-12_23-38 - foldersync.db" failed: Operation not supported (95)

Note that other phones had caused no problems so far. And things got even weirder when I realized that I was able to create folders – but not files?!

After the usual tricks (enabling USB debugging, switching cables, etc.) I realized that this might be a problem specific to this phone’s MTP implementation. So instead of using libmtp, which is used by default by gvfs for example, I tested other MTP implementations – and found simple-mtpfs, which worked like a charm:

$ sudo simple-mtpfs -l
1: SamsungGalaxy models (MTP)
$ sudo simple-mtpfs --device 1 /mnt
# you have to acknowledge access to the phone on the phone screen
# then you have to mount it again
$ sudo simple-mtpfs --device 1 /mnt
$ sudo rsync --verbose --progress --size-only --omit-dir-times --no-perms --recursive --inplace /home/liquidat/backup/ /mnt/Phone/

The performance is good – way better than trying to copy files via gphoto2, btw 😉

Image by Martin Pyško from Pixabay

Flatpak – a solution to the Linux desktop packaging problem [Update]

Linux packaging was a nightmare for years. But recently serious contenders came up claiming to solve the challenge: first containers changed how code is deployed on servers for good. And now a solution for the desktops is within reach. Meet Flatpak!

Preface

To begin with, I should probably admit that over the years I came to see packaging within the Linux ecosystem as a fundamental problem. It prevented wider adoption of Linux in general, but especially on the desktop. I was kind of obsessed with the topic.

The general arguments were/are:

  • Due to a missing standard it was not easy enough for developers to package software. If they used one of the formats out there they could only target a subset of distributions. This led to lower adoption of the software on Linux, making it a less attractive platform.
  • Since Linux was less attractive for developers, fewer applications were created for or ported to Linux. This led to a smaller ecosystem. Thus it was less attractive to users, since they could not find appealing or helpful applications.
  • Due to missing packages in an easily accessible format, installing software was a challenge as soon as it was not packaged for the distribution in use. So Linux was a lot less attractive for users because little software was available.

History, server side

In hindsight I must say the situation was not as bad as I thought on the server level: Linux in the data center grew and grew. Packaging simply did not matter that much because admins were used to problems deploying applications on servers anyway and they had the proper knowledge (and time) to tackle challenges.

Additionally, the recent rise of container technologies like Docker had a massive impact: it made deploying apps much easier and added other benefits like sandboxing, detailed access permissions, clearer responsibilities especially with dev and ops teams involved, and fewer dependency hell problems. Together with Kubernetes it seems as if an actual standard is evolving for how software is deployed on Linux servers.

To summarize, in the server ecosystem things were never as bad, and are quite good these days. Given that Azure runs more Linux servers than Windows servers, there are reasons to believe that Linux is the dominant server platform these days and that Windows is more and more becoming a niche platform.

History, client side

On the desktop side things were bad right from the start. Distribution specific packaging made compatibility a serious problem, and the incompatible RPM and DEB packaging formats made it worse. One reason why no package format ever won was probably that neither solution offered real benefits over the other. Compared with today’s solutions for packaging software, RPM and DEB are missing major advantages like sandboxing and permission systems. They are hopelessly outdated, and I question whether they are suited for software packaging at all today.

There were attempts to solve the problem. There were attempts at standardization – for example via the LSB – but they did not gain enough traction. There were platform agnostic packaging solutions, most notably Klik, which started already 15 years ago and was later renamed to AppImage. But despite the good intentions and the ease of use it never gained serious attention over the years.

But with the arrival of Docker things changed: people saw the benefits of container formats, and the technology for such approaches was widely available. So people gave the idea another try: Flatpak.

Flatpak

Flatpak is a “technology for building and distributing desktop applications on Linux”. It is an attempt to establish an application container format for Linux based desktops and make applications easily consumable.

According to the history of Flatpak the initial idea goes way back. Real work started in 2014, and the first release was in 2015. It was developed initially in the ecosystem of Fedora and Red Hat, but soon got attention from other distributions as well.

Many features look somewhat similar to the typical features associated with container tools like Docker:

  • Build for every distro
  • Consistent environments
  • Full control over dependencies
  • Easy to use build tools
  • Future-proof builds
  • Distribution of packages made easy

Additionally it features a sandboxing environment and a permissions system.
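
As a rough illustration of the permission system, installed applications can be inspected and restricted per user from the command line – the application ID below is just an example:

# show the permissions an application ships with
flatpak info --show-permissions org.gimp.GIMP

# grant or revoke filesystem access for this one application, per user
flatpak override --user --filesystem=~/Pictures org.gimp.GIMP
flatpak override --user --nofilesystem=home org.gimp.GIMP

# list the overrides currently in place
flatpak override --user --show org.gimp.GIMP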

The most appealing feature for end users is that it makes it simple to install packages and that there are many packages available, because developers only have to build them once to support a huge range of distributions.

By using Flatpak the software version is also not tied to the distribution update cycle. Flatpak can update all installed packages centrally as well.

Flathub

One thing I like about Flatpak is that it was built with repositories (“shops”) baked right in. There is a large repository called flathub.org where developers can submit their applications to be found and consumed by users:

The interface is simple but reasonably well designed. Each application features screenshots and a summary. The apps themselves are grouped by category. The constantly changing list of new & updated apps shows that the catalog keeps growing. A list of the two dozen most popular apps is available as well.

I am a total fan of Open Source, but I do like the fact that there are multiple closed source apps listed in the store. It shows that the format can be used for such use cases. That is a sign of a healthy ecosystem. Also, there are quite a few games, which is always good 😉

Of course there is lots of room for improvement: at the time of writing there is no way to change or filter the sorting order of the lists. There is no popularity rating visible and no way to rate applications or leave comments.

Last but not least, there is currently little support from external vendors. While you find many closed source applications on Flathub, hardly any of them were provided by the software vendor; they were created by the community and are not affiliated with the vendors. For broader acceptance of Flatpak the support of software vendors is crucial, and this needs to be highlighted on the web page as well (“verified vendor” or similar).

Hosting your own hub

As mentioned, Flatpak has repositories baked in, and it is well documented. It is easy to generate your own repository for your own flatpaks. This is especially appealing to projects or vendors who do not want to host their applications on a central hub like Flathub.

While today it is more or less common to use a central market (Android, iOS, etc.), some still prefer to keep their code in their own hands. It sometimes makes it easier to provide testing and development versions. Other use cases are software which is only developed and used in-house, or the mirroring of existing repositories for security or offline reasons: such use cases require local hubs, and it is no problem at all to bring them up with Flatpak.
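
A minimal sketch of how such a repository can be created and consumed, assuming flatpak-builder is used and the manifest and application ID are placeholders:

# build the application and export it into a local repository directory
flatpak-builder --repo=myrepo --force-clean build-dir com.example.App.json

# serve the "myrepo" directory with any web server, then add it on the clients
flatpak remote-add --if-not-exists --no-gpg-verify myhub https://example.com/myrepo
flatpak install myhub com.example.App

For anything beyond testing, the repository should of course be GPG signed instead of relying on --no-gpg-verify.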

Flatpak, distribution support

Flatpak is currently supported on most distributions. Many of them have the support built in right from the start; others, most notably Ubuntu, need some software installed first. But in general it is quite easy to get started – and once you have, there are hundreds of applications you can use.
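
Getting started typically boils down to a handful of commands – a sketch; the application ID is just an example, and on distributions like Ubuntu the flatpak package has to be installed first:

# add the Flathub repository and install an application from it
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.videolan.VLC
flatpak run org.videolan.VLC

# update all installed flatpaks centrally
flatpak update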

What about the other solutions?

Of course Flatpak is not the only solution out there. After all, this is the open source world we are talking about, so there must be other solutions 😉

Snap & Snapcraft

Snapcraft is a way to “deliver and update your app on any Linux distribution – for desktop, cloud, and Internet of Things.” The concept and idea behind it is somewhat similar to Flatpak, with a few notable differences:

  • Snapcraft also targets servers, while Flatpak only targets desktops
  • Among the servers, Snapcraft additionally has IoT devices as a specific target group
  • Snapcraft does not support additional repositories; there is only one central marketplace everyone needs to use, and there is no real way to change that

There are more technical differences in the way packages are built, how the sandboxes work and so on, but we will not focus on those in this post.

The Snapcraft marketplace, snapcraft.io, also provides lists of applications, but it is much more mature than Flathub: it has vendor testimonials, features verified accounts, multiple versions like beta or development can be picked from within the market, there are case stories, additional blog posts are listed for each app, there is integration with social accounts, and you can even see the distribution of installs by country and Linux flavor.

And as you can see, Snapcraft is endorsed and supported by multiple companies today which are listed on the web page and which maintain their applications in the market.

Flathub has a lot to learn before it reaches the same level of maturity. However, while I’d say that snapcraft.io is much more mature than Flathub, it also lacks the ability to rate packages or simply list them by popularity. Am I the only one who wants that?

The main disadvantage I see is the monopoly. snapcraft.io is tightly controlled by a single company (not a foundation or similar). It is of course Canonical’s full right to do so, and the company and many others argue that this is not different from what Apple does with iOS. However, the Linux ecosystem is not the Apple ecosystem, and in the Linux ecosystem there are often strong opinions about monopolies, closed source solutions and related topics which might lead to acceptance problems in the long term.

Also, technically it is not possible to launch your own central server, for example for in-house development, for hosting a local mirror, or to support offline environments. To me this is particularly surprising given that Snapcraft specifically targets IoT devices, and I would run IoT devices in a closed network wherever I can – thus being unable to connect to snapcraft.io. The only solution I was able to identify was running an HTTP proxy, which is far from optimal.

Another somewhat unusual feature of Snapcraft is that updates are installed automatically – thanks to theo.9dor for the hint:

The good news is that snaps are updated automatically in the background every day! 

https://tutorials.ubuntu.com/tutorial/basic-snap-usage#2

While in the end a development model with automatic deployments, even dozens per day, is a worthwhile goal, I am not sure everyone is there yet.

So while Snapcraft has a mature marketplace, targets many more use cases and provides more packages to date, I do wonder how it will turn out in the long run, given that we are talking about the Linux ecosystem here. And while Canonical has quite some experience developing their own solutions apart from the “rest” of the community, those attempts have seldom worked out.

AppImage

I’ve already mentioned AppImage above, and I’ve written about it in the past when it was still called Klik. AppImage is a “way for upstream developers to provide native binaries for Linux”. The result is basically a file that contains your entire application and which you can copy anywhere. It has existed for more than a dozen years now.

The thing that is probably most worth mentioning about it is that it never caught on. After all, it provided many impressive features a long time ago already, and made it possible to install software across distributions. Many applications were also available as AppImages – and yet I never saw wider adoption. It seems to me that it only got traction recently because Snapcraft and Flatpak entered the market and kind of dragged it along with them.

I’d love to understand why that is the case, or have an answer to the “why”. I only have a few ideas, but those are just ideas, not explanations of why AppImage, in all those years, never managed to become the Docker of the Linux desktop.

Maybe one problem was that it never featured a proper store: today we know from multiple examples on multiple platforms that a store can make the difference – a central place for users to browse, get a first idea of an app, leave comments and rate the application. Docker has a central “store”, Android and iOS have one, Flatpak and Snapcraft have one. However, AppImage never put a focus on that, and I do wonder if this was a missed opportunity. And no, appimage.github.io/apps is not a store.

Another difference from the other tools is that AppImage always focused on open source tools. Don’t get me wrong, I appreciate that – but open source tools like digiKam were available on every distribution anyway. If AppImage had focused on reaching out to closed source software vendors as well, together with marketing that aggressively, maybe things would have turned out differently. You not only need to make software easily available to users, you also need to make available the software people want.

Last but not least, AppImage always tried to provide as many features as possible, while it might have benefited from focusing on a few and marketing them more strongly. As an example, AppImage advertises that it can run with and without sandboxing. However, sandboxing is a large benefit of using such a solution to begin with. Another thing is integrated updates: there is a way to automatically update all AppImages on a system, but it is not built in. If both had been the default and not optional, things might have been different.

But again, these are just ideas, attempts to find explanations. I’d be happy if someone has better ideas.

Disadvantages of the Flatpak approach

There are some disadvantages with the Flatpak approach – or the Snapcraft one, or in general with any container approach. Most notably: libraries and dependencies.

The basic argument here is: all dependencies are kept in each package. This means:

  • Multiple copies of the same libraries on the system, leading to larger disk consumption
  • Multiple copies of the same library in the RAM during execution, leading to larger memory consumption
  • And probably most important: if a library has a security problem, each and every package has to be updated

Especially the last part is crucial: in case of a serious library security problem, the user has to rely on each and every package vendor to update the library in their package and release a new version. With a dependency based system this is usually not the case.

People often compare this problem to the Windows or Java world, where a similar situation exists. However, while the underlying problem is real and serious, with Flatpak there is at least a sandbox and a permission system – something which was not the case in former Windows versions.

A trade-off has to be made between the added security through permissions and sandboxing and the risk of outdated libraries in those packages. That trade-off is not easy.

But do we even need something like Flatpak?

This question might sound strange, given the needs I identified in the past and my obvious enthusiasm for the topic. However, these days more and more apps are created as web applications – the importance of the desktop is shrinking. The dominant platforms for users these days are mobile phones and tablets anyway. I would even go so far as to say that in the future desktops will still be there, but mainly to launch a web browser.

But we are not there yet, and today there is still a need for easy consumption of software on Linux desktops. I would have hoped, though, to see this technology with this much traction, distribution and vendor support 10 years ago.

Conclusion

Well – as I mentioned early on, I can get somewhat obsessed with the topic. And this much too long blog post shows this for sure 😉

But as a conclusion I’d say that the days of difficult-to-install software on Linux desktops are gone. I am not sure whether Snapcraft or Flatpak will “win” the race; we will have to see.

At the same time we have to face the fact that desktops in general are just not that important anymore. Until then, however, I am very happy that it has become so much easier for me to install certain pieces of software in up-to-date versions on my machine.

Current distribution of WhatsApp alternatives [Update]

Many people are discussing alternatives to WhatsApp right now. Here I just track how many installations the currently discussed, crypto-enabled alternatives have according to the app store.

WhatsApp was already bad before Facebook acquired it. But at least now people have woken up and are considering secure alternatives. Yes, this move could have come earlier, but I do welcome the new opportunity: it’s the first time widespread encryption actually has a chance in the consumer market. So for most people out there the question is more “which alternative should I use” instead of “should I use one”. Right now I do not have the faintest idea which crypto-enabled alternative will make the breakthrough – but you could say I am well prepared.

Screenshot: installed instant messengers

Well – that’s obviously not a long term solution. Thus, to shed some light on the various alternatives and how they stand right now, here is a quick statistical overview:

Secure Instant Messengers, state updated 2014-03-11

Name          Links                    Installed devices           Ratings    Google +1
ChatSecure    Website / Google Play    100 000 – 500 000           1 626      2 620
Kontalk       Website / Google Play    10 000 – 50 000             237        265
surespot      Website / Google Play    50 000 – 100 000            531        632
Telegram      Website / Google Play    10 000 000 – 50 000 000     273 089    97 641
Threema       Website / Google Play    500 000 – 1 000 000         9 368      12 594
TextSecure    Website / Google Play    100 000 – 500 000           2 478      2 589

The statistics are taken from Google’s Android Play Store. I would love to include iTunes statistics, but it seems they are not provided via the web page. If you know how to gather them please drop me a note and I’ll include them here.

These numbers just help to show how widely an application is spread – they do not say anything about the quality. For example, Threema is not open source and thus not a real alternative. So, if you want to know more details about the various options, please read appropriate reviews like the one from MissingM.