Containers are meant to keep processes contained. But there are ways to gather information about the host – like the actual execution environment you are running in.
Containers are pretty good at keeping everyone and everything inside their boundaries – thanks to SELinux, namespaces and so on. But they are not perfect. A recent Azure security flaw made me aware of a nice trick using the /proc/ file system to figure out which container runtime a container is running on.
The idea is that the started container inherits some /proc/ entries – among them the entries under /proc/self/, most notably /proc/self/exe, which during container startup still points to the runtime binary on the host. If we are able to load a malicious container, we can use this knowledge to execute the container host binary and by that get information about the runtime.
As an example, let’s take a Fedora container image with podman (and all its library dependencies) installed. We can run it and check the version of crun inside.
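A minimal sketch of how this looks – the image name and the version output are illustrative, and the call relies on crun’s shared libraries being available inside the container (which is why podman and its dependencies are installed in the image):

❯ podman run --rm -it fedora-with-podman /proc/self/exe --version
crun version 0.20.1
commit: [...]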
With this nice little trick we now have insight into the runtime version used – and as the detailed write-up of the Azure security problem mentioned above showed, this insight can be crucial to identify and use the right CVEs to start lateral movement.
Of course this requires us to be able to load containers of our choice, and to have the right libraries inside the container. This example is simplified because I knew about the target system and there was a container available with everything I needed. For a more realistic attack container image, check out whoc.
Cilium is a networking plugin for Kubernetes based on eBPF. If you want to give it a try, Minikube is a good option to get started.
Background
I just started with Isovalent – and since I am very much a beginner regarding everything related to Kubernetes I decided to get some hands-on experience with the technology I am going to work with for the foreseeable future.
Isovalent’s offering is an Enterprise version of Cilium, which basically manages and secures connections between containers and adds observability to them. It all runs on eBPF and thus is pretty performant. eBPF can run sandboxed programs in Linux kernel space without the need to recompile the kernel; a tiny bit like a “Kernel VM”. I always wanted to get my hands dirty with eBPF anyway, and Cilium is a very good way to approach it. But where to start? The answer is: with a small Kubernetes setup based on Minikube, a tiny Kubernetes distribution for testing and fooling around which leaves your main system almost unchanged.
Preparing the environment
Minikube runs itself in a tightly confined environment to not disturb your other systems. This abstraction is done via containers or VMs realized via so-called “drivers”. Drivers are available for Docker, VMware, KVM, Podman and others. I decided to go with the KVM driver, so the virtualization bits need to be installed first.
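On Fedora, something along these lines does the job – a sketch from memory, so double check the package group and service names:

❯ sudo dnf install @virtualization
❯ sudo systemctl enable --now libvirtd
❯ sudo usermod --append --groups libvirt (whoami)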
Note that the last of the above commands only works in Nushell and has to be slightly adjusted for Bash or Zsh: there, the subexpression is written as $(whoami).
Next we have to install Minikube itself:
❯ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 15.1M  100 15.1M    0     0  8304k      0  0:00:01  0:00:01 --:--:-- 8300k
❯ sudo rpm -Uvh minikube-latest.x86_64.rpm
Verifying... ################################# [100%]
Preparing... ################################# [100%]
Updating / installing...
1:minikube-1.22.0-0 ################################# [100%]
Also, to install and manage Cilium easily it makes sense to use the Cilium CLI. Unfortunately the CLI is currently not available as an RPM package for Fedora, so we have to download the binary and unpack it to /usr/local/bin:
❯ curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz{,.sha256sum}
[...]
❯ sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
cilium-linux-amd64.tar.gz: OK
❯ sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
Starting Minikube with CNI
We now need to start up our Kubernetes cluster, and it needs to happen in a way that lets us install and use Cilium in it. So we set the network plugin to CNI:
❯ minikube start --network-plugin=cni
😄 minikube v1.22.0 on Fedora 34
✨ Automatically selected the kvm2 driver. Other choices: podman, ssh
💾 Downloading driver docker-machine-driver-kvm2:
> docker-machine-driver-kvm2....: 65 B / 65 B [----------] 100.00% ? p/s 0s
> docker-machine-driver-kvm2: 11.47 MiB / 11.47 MiB 100.00% 12.50 MiB p/s
❗ With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
💿 Downloading VM boot image ...
> minikube-v1.22.0.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s
> minikube-v1.22.0.iso: 242.95 MiB / 242.95 MiB [ 100.00% 20.05 MiB p/s 12s
👍 Starting control plane node minikube in cluster minikube
🔥 Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.21.2 on Docker 20.10.6 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Installing Cilium into Kubernetes
Since the Cilium CLI is already installed, it is fairly easy to install Cilium into the cluster itself. The installation goes into the current kubectl context, so make sure you are running in the right context, for example by checking with kubectl get nodes. Afterwards, fire up the installation:
❯ cilium install
🔮 Auto-detected Kubernetes kind: minikube
✨ Running "minikube" validation checks
✅ Detected minikube version "1.22.0"
ℹ️ using Cilium version "v1.10.2"
🔮 Auto-detected cluster name: minikube
🔮 Auto-detected IPAM mode: cluster-pool
🔮 Auto-detected datapath mode: tunnel
🔮 Custom datapath mode: tunnel
🔑 Generating CA...
2021/07/13 14:09:33 [INFO] generate received request
2021/07/13 14:09:33 [INFO] received CSR
2021/07/13 14:09:33 [INFO] generating key: ecdsa-256
2021/07/13 14:09:33 [INFO] encoded CSR
2021/07/13 14:09:33 [INFO] signed certificate with serial number 122640105911298337607907666763746132599853501126
🔑 Generating certificates for Hubble...
2021/07/13 14:09:33 [INFO] generate received request
2021/07/13 14:09:33 [INFO] received CSR
2021/07/13 14:09:33 [INFO] generating key: ecdsa-256
2021/07/13 14:09:33 [INFO] encoded CSR
2021/07/13 14:09:33 [INFO] signed certificate with serial number 459020519400202498147292503280351877404424824247
🚀 Creating Service accounts...
🚀 Creating Cluster roles...
🚀 Creating ConfigMap...
🚀 Creating Agent DaemonSet...
🚀 Creating Operator Deployment...
⌛ Waiting for Cilium to be installed...
⌛ Waiting for Cilium to become ready before restarting unmanaged pods...
♻️ Restarting unmanaged pods...
♻️ Restarted unmanaged pod kube-system/coredns-558bd4d5db-8s4f6
♻️ Restarted unmanaged pod kubernetes-dashboard/dashboard-metrics-scraper-7976b667d4-ctq4p
♻️ Restarted unmanaged pod kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-5wkbx
✅ Cilium was successfully installed! Run 'cilium status' to view installation health
The installation went through flawlessly. But does it really work? As mentioned in the last line of the above listing, we can check the status of Cilium easily.
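The following is a shortened sketch of the status output – the exact lines depend on the versions involved:

❯ cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble:         disabled
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

DaemonSet         cilium             Desired: 1, Ready: 1/1, Available: 1/1
Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
[...]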
We can even get one step further and check the connectivity of the cluster – after all, Cilium is all about proper networking:
❯ cilium connectivity test
ℹ️ Single-node environment detected, enabling single-node connectivity test
ℹ️ Monitor aggregation detected, will skip some flow validation steps
[...]
..
✅ All 11 tests (76 actions) successful, 0 tests skipped, 0 scenarios skipped.
As you can see, Cilium creates a set of pods and a service in a dedicated namespace and runs tests on them afterwards.
Interacting with the Cilium agent
Let’s have a first look at our installed Cilium environment by running a few commands on the local Cilium agent. First we have to figure out the name of the actual Cilium pod:
❯ minikube kubectl -- -n kube-system get pods -l k8s-app=cilium
NAME READY STATUS RESTARTS AGE
cilium-8hx2v 1/1 Running 0 35m
With the name of the pod we can now reach into the pod and execute the Cilium command right inside, for example querying the list of endpoints.
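Heavily truncated, the call and its output look roughly like this (IDs and labels are illustrative):

❯ minikube kubectl -- -n kube-system exec cilium-8hx2v -- cilium endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])   IPv6   IPv4   STATUS
208        Disabled           Disabled          1          reserved:host                                ready
[...]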
This list is long, detailed and only really makes sense on a wide monitor. But it already tells us a lot about the current enforcement of ingress and egress policies (here they are not yet enforced).
But there is more: since Cilium is eBPF based, we can go one layer deeper, and for example look at the policy related eBPF maps:
❯ minikube kubectl -- -n kube-system exec cilium-8hx2v -- cilium bpf policy get --all
Defaulted container "cilium-agent" out of: cilium-agent, ebpf-mount (init), clean-cilium-state (init)
/sys/fs/bpf/tc/globals/cilium_policy_00208:
POLICY   DIRECTION   LABELS (source:key[=value])   PORT/PROTO   PROXY PORT   BYTES     PACKETS
Allow    Ingress     reserved:unknown              ANY          NONE         16959     183
Allow    Ingress     reserved:host                 ANY          NONE         1098509   4452
Allow    Egress      reserved:unknown              ANY          NONE         393706    4204
[...]
Note that the policy number is related to the endpoint ID in the Cilium endpoint list above.
We now have a running Cilium setup which can be used to run tests and examples!
Next: write and enforce policies, add observability
Doing a policy enforcement test goes beyond the scope of this blog post – but with all the data already shown above, a deep-dive into that topic certainly makes sense in the future.
The same is true for observability: if you wonder how deep the rabbit hole really goes, there is Hubble, which provides serious observability into the Kubernetes network, services and security, comes with a UI and can be installed quickly since it is tightly integrated with Cilium.
And if you have stories to share around eBPF, Cilium and similar topics I am finally getting an idea of what you are talking about. 😉
Podman is a daemonless container engine to develop, run and manage OCI containers. In a recent version the API was rewritten and now offers a REST interface as well as a Docker-compatible endpoint.
In case you have never heard of Podman before, it is certainly worth a look. Besides offering a more secure drop-in replacement for many docker functions, it can also manage pods and thus provides a container experience more aligned with what Kubernetes uses. It can even understand Kubernetes YAML (see podman-play-kube), easing the transition from single-host container development to fully fledged container management environments. Last but not least it is among the tools supporting the newest features in the container space, like cgroups v2.
Background: Podman API
Of course Podman is not perfect – due to the focus on Kubernetes YAML there is no support for docker-compose files (though alternatives exist), networking and routing based on names is not as simple as with Docker (read more about Podman container networking) and, last but not least, the API was different – making it hard to migrate solutions that depend on the docker API.
This changed: recently, a new API was merged:
The new API is a simpler implementation based on HTTP/REST. We provide two basic groups of endpoints. The first one is for libpod; the second is for Docker compatibility, to ease adoption.
So how can I access the new API and fool around with it?
If you are familiar with Podman, or read carefully, the first question is: where is this API running if Podman is daemonless? And in fact, an API service needs to be started explicitly:
$ podman system service --timeout 5000
This starts the API on a UNIX socket. Other options, like a TCP socket or running the service without a timeout, are also possible; the documentation provides examples.
How to use the Docker API endpoint
Let’s use the Docker API endpoint. To talk to a REST API over a UNIX socket, a recent curl (version >= 7.40) is quite helpful.
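A sketch of such a call – here assuming a rootless setup and user ID 1000, which defines the socket path:

$ curl --unix-socket /run/user/1000/podman/podman.sock http://localhost/images/json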
Note that here we are speaking to the rootless service, thus the UNIX domain socket is in the user runtime directory. Also, localhost has to be provided in the URL for very recent curl versions, otherwise it does not output anything!
The answer is a JSON listing, which is not easily readable. It can be simplified with the help of Python, silencing curl’s progress meter with the silent flag.
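For example like this – again assuming the UID 1000 socket path, with the output shortened:

$ curl -s --unix-socket /run/user/1000/podman/podman.sock http://localhost/images/json | python3 -m json.tool
[
    {
        "Id": "sha256:[...]",
        "RepoTags": [
            "registry.fedoraproject.org/fedora:latest"
        ],
[...]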
So what can you do with the API? Podman tries to recreate most of the docker API, so you can basically use the docker API documentation to see what should be possible. Note though that not all API endpoints are supported since Podman does not provide all functions Docker offers.
How to use the Podman API endpoint
As mentioned, the API provides two groups of endpoints: the Docker-compatible endpoint, and a Podman specific endpoint. This second API is necessary for multiple reasons: first, Podman has functions which are alien to Docker and thus not part of the Docker API – the pod function is the most notable here. Another reason is that an independent API enables the Podman developers to innovate in their own way and at their own velocity, and to change the API when needed or wanted.
The API for Podman can be reached via curl as mentioned above. However, there are two notable differences: first, the Podman endpoint is marked via an additional “libpod” string in the API URI, and second, the Podman API is always versioned. To list the images as shown above, but via Podman’s own API, a call along the following lines is necessary.
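A sketch of the versioned libpod call – note that the version segment in the URI depends on the Podman release in use and is illustrative here:

$ curl -s --unix-socket /run/user/1000/podman/podman.sock http://localhost/v1.0.0/libpod/images/json | python3 -m json.tool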
Currently there is no documentation of the API available – or at least none on the level of the current Docker API documentation. But hopefully that will change soon.
Takeaways
Podman providing a Docker API is a great step for people who depend on the Docker API but nevertheless want to switch to Podman. But providing its own, simple-to-consume REST API for Podman itself is equally great, because it makes it easy to integrate Podman processes into existing tools and frameworks.
Just don’t forget that the API is still in development!
Traefik is a great reverse proxy solution, and a perfect tool to direct traffic in container environments. However, to do that, it needs access to docker – and that is very dangerous and must be tightly secured!
The problem: access to the docker socket
Containers offer countless opportunities to improve the deployment and management of services. However, having multiple containers on one system, often re-deploying them on the fly, requires a dynamic way of routing traffic to them. Additionally, there might be reasons to have a front end reverse proxy to sort the traffic properly anyway.
In comes traefik – “the cloud native edge router”. Among many supported backends it knows how to listen to docker and create dynamic routes on the fly when new containers come up.
To do so traefik needs access to the docker socket. Many people decide to just provide that as a volume to traefik. This usually does not work because SELinux prevents it for a reason. The apparent workaround for many is to run traefik in a privileged container. But that is a really bad idea:
Docker currently does not have any Authorization controls. If you can talk to the docker socket or if docker is listening on a network port and you can talk to it, you are allowed to execute all docker commands. […] At which point you, or any user that has these permissions, have total control on your system.
But there are ways to securely provide traefik the access it needs – without handing over too many permissions. One way is to provide limited access to the docker socket via TCP through another container which cannot be reached from the outside that easily. One such solution is Tecnativa’s docker-socket-proxy:
What? This is a security-enhanced proxy for the Docker Socket. Why? Giving access to your Docker socket could mean giving root access to your host, or even to your whole swarm, but some services require hooking into that socket to react to events, etc. Using this proxy lets you block anything you consider those services should not do.
It is a container which connects to the docker socket and exposes the API in a secured and configurable way via TCP. At container startup, boolean environment variables define which API sections are accessible.
So basically you set up a docker proxy to support your proxy for docker containers. Well…
How to use it
The docker socket proxy is a container itself. Thus it needs to be launched as a privileged container with access to the docker socket. Also, it must not publish any ports to the outside. Instead it should run on a dedicated docker network shared with the traefik container. Ansible code to launch the container that way could look like this, for example.
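A sketch using Ansible’s docker_container module – the container and network names are assumptions and need to match your setup:

- name: Launch docker socket proxy
  docker_container:
    name: dockersocket
    image: tecnativa/docker-socket-proxy
    privileged: yes
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:z"
    env:
      CONTAINERS: "1"
    networks:
      - name: socket-proxy-net
    state: started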
Note the env section right in the middle: that is where the exported permissions are configured. CONTAINERS: 1 provides access to container relevant information. There are also SERVICES: 1 and SWARM: 1 to manage access to docker services and swarm.
Traefik needs to have access to the same network. Also, the traefik configuration needs to point to the docker socket proxy container via TCP.
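For a traefik v2 static configuration this could look like the following sketch – “dockersocket” is the assumed proxy container name from above, and 2375 is the port the proxy listens on:

providers:
  docker:
    endpoint: "tcp://dockersocket:2375"

In a traefik v1 TOML configuration, the same endpoint value goes into the [docker] section.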
This setup works surprisingly well and is easy to get running. It allows traefik to access the docker socket for the things it needs without exposing permissions that could be abused to take over the system. At the same time, full access to the docker socket is restricted to a non-public container, which makes it harder for attackers to exploit it.
If you have a simple container setup and use Ansible to start and stop the containers, I’ve written a role to get the above mentioned setup running.
Linux packaging was a nightmare for years. But recently serious contenders came up claiming to solve the challenge: first containers changed how code is deployed on servers for good. And now a solution for the desktops is within reach. Meet Flatpak!
Preface
In the beginning I probably should admit that over the years I identified packaging within the Linux ecosystem as a fundamental problem. It prevented wider adoption of Linux in general, but especially on the desktop. I was kind of obsessed with the topic.
The general arguments were/are:
Due to a missing standard it was not easy enough for developers to package software. If they used one of the formats out there they could only target a sub-set of distributions. This led to lower adoption of the software on Linux, making it a less attractive platform.
Since Linux was less attractive for developers, fewer applications were created on or ported to Linux. This led to a smaller ecosystem. Thus it was less attractive to users since they could not find appealing or helpful applications.
Due to missing packages in an easily accessible format, installing software was a challenge as soon as it was not packaged for the distribution in use. So Linux was a lot less attractive for users because little software was available.
History, server side
In hindsight I must say the situation was not as bad as I thought on the server level: Linux in the data center grew and grew. Packaging simply did not matter that much because admins were used to problems deploying applications on servers anyway and they had the proper knowledge (and time) to tackle challenges.
Additionally, the recent rise of container technologies like Docker had a massive impact: it made deploying apps much easier and added other benefits like sandboxing, detailed access permissions, clearer responsibilities especially with dev and ops teams involved, and fewer dependency hell problems. Together with Kubernetes it seems as if an actual standard is evolving of how software is deployed on Linux servers.
To summarize, in the server ecosystem things never were as bad, and are quite good these days. Given that Azure runs more Linux servers than Windows servers, there are reasons to believe that Linux is the dominant server platform these days and that Windows is more and more becoming a niche platform.
History, client side
On the desktop side things were bad right from the start. Distribution specific packaging made compatibility a serious problem, and incompatible packaging formats with RPMs and DEBs made it worse. One reason why no package format ever won was probably that no solution offered real benefits over the other. Compared with today’s packaging solutions, RPM and DEB are missing major advantages like sandboxing and permission systems. They are hopelessly outdated; I question whether they are suited for software packaging at all today.
There were attempts to solve the problem. There were attempts at standardization – for example via the LSB – but they did not gain enough traction. There were platform agnostic packaging solutions. Most notable is Klik, which started 15 years ago and was later renamed to AppImage. But despite the good intentions and the ease of use it never gained serious attention over the years.
But with the appearance of Docker things changed: people saw the benefits of container formats, and the technology for such approaches was widely available. So people gave the idea another try: Flatpak.
Flatpak
Flatpak is a “technology for building and distributing desktop applications on Linux”. It is an attempt to establish an application container format for Linux based desktops and make applications easy to consume.
According to the history of Flatpak the initial idea goes way back. Real work started in 2014, and the first release was in 2015. It was developed initially in the ecosystem of Fedora and Red Hat, but soon got attention from other distributions as well.
Many features look somewhat similar to the typical features associated with container tools like Docker:
Build for every distro
Consistent environments
Full control over dependencies
Easy to use build tools
Future-proof builds
Distribution of packages made easy
Additionally it features a sandboxing environment and a permissions system.
The most appealing features for end users are that packages are simple to install and that there are many packages available, because developers only have to build them once to support a huge range of distributions.
With Flatpak, the software version is also not tied to the distribution update cycle, and Flatpak can update all installed packages centrally as well.
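In day-to-day use this boils down to a handful of commands – a quick sketch, with GIMP as an arbitrary example application:

❯ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
❯ flatpak install flathub org.gimp.GIMP
❯ flatpak update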
Flathub
One thing I like about Flatpak is that it was built with repositories (“shops”) baked right in. There is a large repository called flathub.org where developers can submit their applications to be found and consumed by users:
(Screenshots: Flathub’s games category, the feed of new and updated applications, and the list of popular apps.)
The interface is simple but reasonably well designed. Each application features screenshots and a summary. The apps themselves are grouped by categories. The constantly changing list of new & updated apps shows that the catalog keeps growing. A list of the two dozen most popular apps is available as well.
I am a total fan of open source, yet I do like the fact that there are multiple closed source apps listed in the store. It shows that the format can be used for such use cases, which is a sign of a healthy ecosystem. Also, there are quite a few games, which is always good 😉
Of course there is lots of room for improvement: at the time of writing there is no way to change or filter the sorting order of the lists. There is no popularity rating visible and no way to rate applications or leave comments.
Last but not least, there is currently little support from external vendors. While you find many closed source applications in Flathub, hardly any of them were provided by the software vendors themselves. They were created by the community and are not affiliated with the vendors. For a broader acceptance of Flatpak the support of software vendors is crucial, and this needs to be highlighted on the web page as well (“verified vendor” or similar).
Hosting your own hub
As mentioned, Flatpak has repositories baked in, and it is well documented. It is easy to generate your own repository for your own flatpaks. This is especially appealing to projects or vendors who do not want to host their applications on a central market like Flathub.
While today it is more or less common to use a central market (Android, iOS, etc.), some still prefer to keep their code in their own hands. It sometimes makes it easier to provide testing and development versions. Other use cases are software which is developed and used only in-house, or the mirroring of existing repositories for security or offline reasons: such use cases require local hubs, and it is no problem at all to bring them up with Flatpak.
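A rough sketch of the workflow for such a self-hosted repository – the directory and remote names here are made up, and a real setup would also involve GPG signing:

❯ flatpak build-export my-repo my-app-build-dir
❯ flatpak remote-add --no-gpg-verify my-hub https://example.com/my-repo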
Flatpak, distribution support
Flatpak is currently supported on most distributions. Many of them ship the support built in right from the start; others, most notably Ubuntu, need to install some software first. But in general it is quite easy to get started – and once you did, there are hundreds of applications you can use.
What about the other solutions?
Of course Flatpak is not the only solution out there. After all, this is the open source world we are talking about, so there must be other solutions 😉
Snap & Snapcraft
Snapcraft is a way to “deliver and update your app on any Linux distribution – for desktop, cloud, and Internet of Things.” The concept and idea behind it is somewhat similar to Flatpak, with a few notable differences:
Snapcraft also targets servers, while Flatpak only targets desktops
Among the servers, Snapcraft additionally has IoT devices as a specific target group
Snapcraft does not support additional repositories; there is only one central market place everyone needs to use, and there is no real way to change that
Some more technical differences exist in the way packages are built, how the sandbox works and so on, but we will not focus on those in this post.
The Snapcraft market place, snapcraft.io, provides lists of applications, but is much more mature than Flathub: it has vendor testimonials and verified accounts, multiple versions like beta or development can be picked from within the market, there are case stories, additional blog posts are listed for each app, there is integration with social accounts, and you can even see the distribution of users by countries and Linux flavors.
And as you can see, Snapcraft is endorsed and supported by multiple companies today which are listed on the web page and which maintain their applications in the market.
(Screenshots: the snapcraft.io main page, a detailed app view with the option to download multiple versions, and snapcraft’s IoT page.)
Flathub has a lot to learn until it reaches the same level of maturity. However, while I’d say that snapcraft.io is much more mature than Flathub, it also misses the possibility to rate packages, or to list them by popularity. Am I the only one who wants that?
The main disadvantage I see is the monopoly. snapcraft.io is tightly controlled by a single company (not a foundation or similar). It is of course Canonical’s full right to do so, and the company and many others argue that this is not different from what Apple does with iOS. However, the Linux ecosystem is not the Apple ecosystem, and in the Linux ecosystem there are often strong opinions about monopolies, closed source solutions and related topics which might lead to acceptance problems in the long term.
Also, technically it is not possible to launch your own central server, for example for in-house development, for hosting a local mirror, or to support offline environments. To me this is particularly surprising given that Snapcraft specifically targets IoT devices – and I would run IoT devices in a closed network wherever I can, thus being unable to connect to snapcraft.io. The only solution I was able to identify was running an HTTP proxy, which is far from optimal.
Another slightly unusual feature of Snapcraft is that updates are installed automatically – thanks to theo.9dor for the hint:
The good news is that snaps are updated automatically in the background every day!
While in the end a development model with automatic deployments, even dozens per day, is a worthwhile goal, I am not sure if everyone is there yet.
So while Snapcraft has a mature market place, targets many more use cases and provides more packages to this date, I do wonder how it will turn out in the long run, given that we are talking about the Linux ecosystem here. And while Canonical has quite some experience developing their own solutions outside the “rest” of the community, those attempts seldom worked out.
AppImage
I’ve already mentioned AppImage above, and I’ve written about it in the past when it was still called Klik. AppImage is a “way for upstream developers to provide native binaries for Linux”. The result is basically a single file that contains your entire application and which you can copy everywhere. It has existed for more than a dozen years now.
The thing that is probably most worth mentioning about it is that it never caught on. After all, it provided many impressive features a long time ago already, and made it possible to install software across distributions. Many applications were also available as AppImages – and yet I never saw wider adoption. It seems to me that it only got traction recently because Snapcraft and Flatpak entered the market and kind of dragged it along with them.
I’d love to understand why that is the case, or have an answer to the “why”. I only have a few ideas, and those are just ideas, not explanations of why AppImage, in all these years, never managed to become the Docker of the Linux desktop.
Maybe one problem was that it never featured a proper store: today we know from multiple examples on multiple platforms that a store can make the difference – a central place for users to browse, get a first idea of an app, leave comments and rate the application. Docker has a central “store”, Android and iOS have one, Flatpak and Snapcraft have one. However, AppImage never put a focus on that, and I do wonder if this was a missed opportunity. And no, appimage.github.io/apps is not a store.
Another difference to the other tools is that AppImage always focused on open source tools. Don’t get me wrong, I appreciate it – but open source tools like Digikam were available on every distribution anyway. If AppImage had also focused on reaching out to closed source software vendors, and marketed this aggressively, maybe things would have turned out differently. You do not only need to make software easily available to users, you also need to make available the software people want.
Last but not least, AppImage always tried to provide as many features as possible, while it might have benefited from focusing on a few and marketing them more strongly. As an example, AppImage advertises that it can run with and without sandboxing – however, sandboxing is a large part of the benefit of using such a solution to begin with. Another example is integrated updates: there is a way to automatically update all AppImages on a system, but it is not built in. If both had been defaults instead of options, things might have turned out differently.
But again, these are just ideas, attempts to find explanations. I’d be happy if someone has better ideas.
Disadvantages of the Flatpak approach
There are some disadvantages with the Flatpak approach – or the Snapcraft one, or in general with any container approach. Most notably: libraries and dependencies.
The basic argument here is: all dependencies are kept in each package. This means:
Multiple copies of the same libraries on the system, leading to larger disk consumption
Multiple copies of the same library in RAM during execution, leading to larger memory consumption
And probably most importantly: if a library has a security problem, each and every package has to be updated
Especially the last part is crucial: in case of a serious library security problem the user has to rely on each and every package vendor to update the library in their package and release an updated version. With a dependency based system this is usually not the case.
People often compare this problem to the Windows or Java world, where a similar situation exists. However, while the underlying problem is real and serious, with Flatpak there is at least a sandbox and a permission system – something which was not the case in former Windows versions.
A trade-off has to be made between the added security through permissions and sandboxing and the risk of having outdated libraries in those packages. That trade-off is not easily done.
But do we even need something like Flatpak?
This question might sound strange, given the needs I identified in the past and my obvious enthusiasm for the topic. However, these days more and more apps are created as web applications – the importance of the desktop is shrinking. The dominant platforms for users these days are mobile phones and tablets anyway. I would even go so far as to say that in the future desktops will still be there, but mainly to launch a web browser.
But we are not there yet, and today there is still the need for easy consumption of software on Linux desktops. I only would have hoped to see this technology, with this much traction and this much distribution and vendor support, 10 years ago.
Conclusion
Well – as I mentioned early on, I can get somewhat obsessed with the topic. And this much too long blog post shows this for sure 😉
But as a conclusion I say that the days of difficult-to-install software on Linux desktops are gone. I am not sure whether Snapcraft or Flatpak will “win” the race; we will have to see.
At the same time we have to face that desktops in general are just not that important anymore. But until that shift is complete, I am very happy that it became so much easier for me to install certain pieces of software in up-to-date versions on my machine.