Getting Started with Ansible Security Automation: Investigation Enrichment

Last November we introduced Ansible security automation as our answer to the lack of integration across the IT security industry. Let’s have a closer look at one of the scenarios where Ansible can facilitate typical operational challenges of security practitioners.

A big portion of security practitioners’ daily activity is dedicated to investigative tasks. Enrichment is one of them, and since it can be both repetitive and time-consuming, it is a perfect candidate for automation. Streamlining these processes can free up analysts to focus on more strategic tasks, accelerate the response in time-sensitive situations and reduce human error. However, in many large organizations the multiple security solutions involved in these activities are not integrated with each other, and different teams may be in charge of different aspects of IT security, sometimes with no processes in common.

That often leads to manual work and interaction between people of different teams, which is error-prone and, above all, slow. So when something suspicious happens and further attention is needed, security teams spend a lot of valuable time operating many different security solutions and coordinating work with other teams, instead of focusing on the suspicious activity directly.

In this blog post we take a closer look at how Ansible can help to overcome these challenges and support investigation enrichment activities. In the following example we’ll see how Ansible can be used to enable programmatic access to information like logs coming from technologies that may not be integrated into a SIEM. As an example we’ll use enterprise firewalls and intrusion detection and prevention systems (IDPS).

Simple Demo Setup

To demonstrate the aforementioned scenario we created a simplified, very basic demo setup. It includes two security solutions providing information about suspicious traffic, as well as a SIEM: a Check Point Next Generation Firewall (NGFW) and a Snort IDPS provide the information, and IBM QRadar is the SIEM gathering and analyzing those data.

Also, from a machine called “attacker” we will simulate a potential attack pattern on the target machine on which the IDPS is running.

This is just a basic demo setup; a real-world Ansible security automation integration would look different and could feature other vendors and technologies.

Logs: crucial, but distributed

Now imagine you are a security analyst in an enterprise. You were just informed of an anomaly in an application, showing suspicious log activity. For example, in our little demo we curl a certain endpoint of the web server, which we conveniently called “web_attack_simulation”:

$ sudo grep web_attack /var/log/httpd/access_log
172.17.78.163 - - [22/Sep/2019:15:56:49 +0000] "GET /web_attack_simulation HTTP/1.1" 200 22 "-" "curl/7.29.0"
...

As a security analyst you know that anomalies can be the sign of a potential threat. You have to determine whether this is a false positive that can simply be dismissed, or an actual threat which requires a series of remediation activities to be stopped. Thus you need to collect more data points – for example from the firewall and the IDPS. Going through the logs of the firewall and IDPS manually takes a lot of time. In large organizations, the security analyst might not even have the necessary access rights and needs to contact the teams responsible for the enterprise firewall and the IDPS, asking them to manually go through the respective logs, check for anomalies and reply with the results. This could mean a phone call, a ticket, long explanations, exports or other actions consuming valuable time.

It is common in large organizations to centralize event management on a SIEM and use it as the primary dashboard for investigations. In our demo example the SIEM is QRadar, but the steps shown here are valid for any SIEM. To properly analyze security-related events multiple steps are necessary: the security technologies in question – here the firewall and the IDPS – need to be configured to stream their logs to the SIEM in the first place. But the SIEM also needs to be configured to ensure that those logs are parsed in the correct way and meaningful events are generated. Doing this manually is time-intensive and requires in-depth domain knowledge. Additionally it might require privileges a security analyst does not have.

But Ansible allows security organizations to create pre-approved automation workflows in the form of playbooks. Those can even be maintained centrally and shared across different teams to enable security workflows at the press of a button. 

Why don’t we add those logs to QRadar permanently? Because this could create alert fatigue: too much data in the system generates too many events, and analysts might miss the crucial ones. Additionally, sending all logs from all systems easily consumes a huge amount of cloud resources and network bandwidth.

So let’s write such a playbook to first configure the log sources to send their logs to the SIEM. We start the playbook with Snort and configure it to send all logs to the IP address of the SIEM instance:

---
- name: Configure snort for external logging
  hosts: snort
  become: true
  vars:
    ids_provider: "snort"
    ids_config_provider: "snort"
    ids_config_remote_log: true
    ids_config_remote_log_destination: "192.168.3.4"
    ids_config_remote_log_procotol: udp
    ids_install_normalize_logs: false

  tasks:
    - name: import ids_config role
      include_role:
        name: "ansible_security.ids_config"

Note that here we only have one task, which imports an existing role. Roles are an essential part of Ansible and help structure your automation content: they usually encapsulate the tasks and other data necessary for a clearly defined purpose. In the playbook shown above we use the role ids_config, which manages the configuration of various IDPS. It is provided as an example by the ansible-security team. This role, like the others mentioned in this blog post, is provided as guidance to help customers who may not be accustomed to Ansible become productive faster. These roles are not necessarily meant as a best practice or a reference implementation.

Using this role we only have to provide a few parameters; the domain knowledge of how to configure Snort itself is hidden away. Next, we do the very same thing with the Check Point firewall. Again an existing role is reused, log_manager:

- name: Configure Check Point to send logs to QRadar
  hosts: checkpoint

  tasks:
    - include_role:
        name: ansible_security.log_manager
        tasks_from: forward_logs_to_syslog
      vars:
        syslog_server: "192.168.3.4"
        checkpoint_server_name: "gw-2d3c54"
        firewall_provider: checkpoint

With these two snippets we are already able to reach out to two security solutions in an automated way and reconfigure them to send their logs to a central SIEM.

We can also automatically configure the SIEM to accept those logs and sort them into corresponding streams in QRadar:

- name: Add Snort log source to QRadar
  hosts: qradar
  collections:
    - ibm.qradar

  tasks:
    - name: Add snort remote logging to QRadar
      qradar_log_source_management:
        name: "Snort rsyslog source - 192.168.14.15"
        type_name: "Snort Open Source IDS"
        state: present
        description: "Snort rsyslog source"
        identifier: "ip-192-168-14-15"

- name: Add Check Point log source to QRadar
  hosts: qradar
  collections:
    - ibm.qradar

  tasks:
    - name: Add Check Point remote logging to QRadar
      qradar_log_source_management:
        name: "Check Point source - 192.168.23.24"
        type_name: "Check Point FireWall-1"
        state: present
        description: "Check Point log source"
        identifier: "192.168.23.24"

Here we use Ansible Content Collections: the new method of distributing, maintaining and consuming automation content. Collections can contain roles, but also modules and other code necessary to enable automation of certain environments. In our case the collection contains not only a role, but also the necessary modules and connection plugins to interact with QRadar.
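
By the way: the modules used in the playbooks above ship as part of the ibm.qradar collection. If the collection is not yet present on the Ansible control node, it can be installed from Ansible Galaxy with a single command:

$ ansible-galaxy collection install ibm.qradar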

Without any further intervention by the security analyst, Check Point logs start to appear in the QRadar log overview. Note that so far no logs are sent from Snort to QRadar: Snort does not know yet that this traffic is noteworthy! We will come back to this in a few moments.

Remember, from the perspective of the security analyst: we now have more data at our disposal and a better understanding of what could be causing the anomaly in the application behaviour. The firewall logs show who is sending traffic to whom. But this is still not enough data to fully qualify what is going on.

Fine-tuning the investigation

Given the data at your disposal you decide to implement a custom signature on the IDPS to get alert logs if a specific pattern is detected.

In a typical situation, implementing a new rule would require another interaction with the security operators in charge of Snort, who would likely have to manually configure multiple instances. But luckily we can again use an Ansible Playbook to achieve the same goal without the need for time-consuming manual steps or interactions with other team members.

There is also the option to pre-create a set of playbooks for customer-specific situations. Since the language of Ansible is YAML, even team members with little Ansible knowledge can contribute to the playbooks, making it possible to have agreed-upon playbooks ready to be used by the analysts.

Again we reuse a role, ids_rule. Note that this time some understanding of Snort rules is required to make the playbook work. Still, the actual knowledge of how to manage Snort as a service across various target systems is hidden away by the role.

---
- name: Add Snort rule
  hosts: snort
  become: yes

  vars:
    ids_provider: snort

  tasks:
    - name: Add snort web attack rule
      include_role:
        name: "ansible_security.ids_rule"
      vars:
        ids_rule: 'alert tcp any any -> any any (msg:"Attempted Web Attack"; uricontent:"/web_attack_simulation"; classtype:web-application-attack; sid:99000020; priority:1; rev:1;)'
        ids_rules_file: '/etc/snort/rules/local.rules'
        ids_rule_state: present
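
Assuming the playbook is stored as add_snort_rule.yml and run against an inventory file containing the snort hosts (both file names here are just examples), rolling out the new rule across all IDPS instances is a single command:

$ ansible-playbook -i hosts add_snort_rule.yml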

Finish the offense

Moments after the playbook is executed, we can check in QRadar whether we see alerts. And indeed, in our demo setup this is the case.

With this information in hand, we can now finally check all offenses of this type and verify that they are all coming from one single host – here, the attacker.

From here we can move on with the investigation. For our demo we assume that the behavior is intentional, and thus close the offense as false positive.

Rollback!

Last but not least, there is one step which is often overlooked, but is crucial: rolling back all the changes! After all, as discussed earlier, sending all logs into the SIEM all the time is resource-intensive.

With Ansible the rollback is quite easy: basically, the playbooks from above can be reused; they just need to be slightly altered to remove the log streams instead of creating them. That way, the entire process can be fully automated and at the same time made as resource-friendly as possible.
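
As a rough sketch of the idea – assuming the role and collection shown above, and assuming that switching ids_config_remote_log off is enough to revert the forwarding in the ids_config role – such a rollback playbook could look like this:

---
- name: Disable Snort remote logging
  hosts: snort
  become: true
  vars:
    ids_provider: "snort"
    ids_config_provider: "snort"
    # assumption: turning this flag off reverts the remote logging configuration
    ids_config_remote_log: false

  tasks:
    - name: import ids_config role
      include_role:
        name: "ansible_security.ids_config"

- name: Remove Snort log source from QRadar
  hosts: qradar
  collections:
    - ibm.qradar

  tasks:
    - name: Remove snort remote logging from QRadar
      qradar_log_source_management:
        name: "Snort rsyslog source - 192.168.14.15"
        type_name: "Snort Open Source IDS"
        # state: absent removes the log source created earlier
        state: absent
        identifier: "ip-192-168-14-15"

The Check Point log source can be removed the same way, again with state set to absent.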

Takeaways and where to go next

The job of a CISO and her team is often difficult even when all the necessary tools are in place, because those tools don’t integrate with each other. When there is a security threat, an analyst has to perform an investigation, chasing all relevant pieces of information across the entire infrastructure and consuming valuable time just to understand what’s going on before any remediation can begin.

Ansible security automation is designed to help enable integration and interoperability of security technologies to support security analysts’ ability to investigate and remediate security incidents faster.

As next steps there are plenty of resources to follow up on the topic.

Credits

This post was originally released on ansible.com/blog: Getting Started With Ansible Security Automation: Investigation Enrichment

Header image by Alexas_Fotos from Pixabay.

[Howto] Using toolbox in Fedora / RHEL 8 for easy management of CLI tools

Running CLI tools like ansible often requires a specific environment with dependencies on the core operating system libraries. That makes it hard to run different versions in parallel – or test the newest updates. And it might clutter the OS. Toolbox offers simple container management to avoid these shortcomings.

The recent development of Linux distributions has seen a shift away from all-purpose distributions towards stable core distributions with a limited package set and additional sandboxed tooling running on top to manage applications. One of the most advanced distributions here is surely Fedora Silverblue, but even the enterprise distribution Red Hat Enterprise Linux 8 brings a lot of changes which aim in the right direction. Technologies in this context are, for example, rpm-ostree for the management of immutable OS images and Flatpak for the management of GUI applications. Additionally, RHEL 8 comes with so-called app streams – and of course there is always the option of using containers with, for example, podman.

In this blog post I want to focus on the last one: using containers to manage your CLI tools, thus keeping them independent of your operating system packaging and libraries. With Fedora and RHEL, there is tooling provided which makes this even easier: Toolbox.

The rationale

The basic idea of using containers, and especially Toolbox, is similar to the one behind Flatpak: it addresses many aspects of the Linux packaging problem. This means essentially:

  • Independence from OS libraries and their versions
  • Sand-boxing, meaning better protection of the OS
  • Multi-version support
  • Less OS clutter through isolated installation of dependencies
  • Easy to recreate environments (think of “works on my machine”)
  • Immutable environments possible

Think of it this way: with complex applications, behavior sometimes depends on certain versions of some libraries. When those are managed by the OS packaging system, it is hard to keep them up2date or even just at the same version across multiple machines, not to speak of multiple distributions. Also, I don’t want my OS to be cluttered with weird dependencies which I might not even trust, just to satisfy a weird application’s requirements. And I might want to install different versions of a tool to test them – with different libraries as well, which is often impossible with OS package management.

Toolbox

In comes Toolbox:

Toolbox is a tool that offers a familiar package based environment for developing and debugging software that runs fully unprivileged using Podman.

The toolbox container is a fully mutable container; when you see yum install ansible for example, that’s something you can do inside your toolbox container, without affecting the base operating system.

Toolbox on Github

While Toolbox is particularly interesting for immutable systems like Fedora Silverblue, it even makes sense to run it on other distributions. I started using it on my regular Fedora for example just to have certain tools available in certain versions for tests.

And why use Toolbox, and not just the usual container tools? Toolbox takes care of volume mounting and all the other necessary bits of container management, and enables you to use a very basic set of commands to create – and reuse – your tool containers. It is simpler and easier than typing in fully fledged podman or docker commands all the time.
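
For comparison, here is a rough, simplified sketch of what the manual podman workflow would look like – the flags are illustrative only, and the invocations Toolbox actually generates are more involved:

$ # create the container once, with the home directory mounted in
$ podman create --name ansible --hostname toolbox --volume "$HOME":"$HOME" \
    registry.fedoraproject.org/f31/fedora-toolbox:31
$ podman start ansible
$ # enter it as your own user with an interactive shell
$ podman exec --interactive --tty --user "$USER" ansible /bin/bash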

You can read more about Toolbox in the Fedora Silverblue Toolbox docs or the Red Hat Enterprise Linux 8 Toolbox docs.

Getting started

It is very easy to get started with Toolbox. First, it needs to be installed on the system. For example, on Fedora 31, this can be done via:

$ sudo dnf install toolbox

After that, you are good to go. Since the idea is to have reusable containers, let’s create the first one. In my example I want a container with the newest Ansible version to run some automation. So we just create a new container called ansible:

$ toolbox create --container ansible
Image required to create toolbox container.
Download registry.fedoraproject.org/f31/fedora-toolbox:31 (500MB)? [y/N]: y
Created container: ansible

As you see, a base image for my distribution was downloaded, and the container created. Next, let’s access it and look around:

$ toolbox enter --container ansible

Welcome to the Toolbox; a container where you can install and run
all your tools.

 - Use DNF in the usual manner to install command line tools.
 - To create a new tools container, run 'toolbox create'.

For more information, see the documentation.

⬢[liquidat@toolbox ~]$

We are greeted with a short message and then dropped to a shell. Note the bubble at the start of the command prompt – a nice touch to differentiate if you are inside a toolbox or not. Next, let’s look at our environment:

⬢[liquidat@toolbox ~]$ pwd
/home/liquidat
⬢[liquidat@toolbox ~]$ ls
bin  development  documents  downloads  ...
⬢[liquidat@toolbox ~]$ ls /
README.md  bin  boot  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
⬢[liquidat@toolbox ~]$ cat /README.md 
# Toolbox — Unprivileged development environment

[Toolbox](https://github.com/debarshiray/toolbox) is a tool that offers a
[...]

As you see, the toolbox has access to the home file system: that way we can use the tools just like normal shell tools and interact with the files in our environment. At the same time, however, access to the root file system is limited: we see the container’s root file system (as identified by the README), not the host’s.

Getting my first tool ready

As mentioned I’d like to have a container with the newest Ansible. Let’s install it:

⬢[liquidat@toolbox ~]$ pip install --user ansible
Collecting ansible
Using cached https://files.pythonhosted.org/packages/ae/b7/c717363f767f7af33d90af9458d5f1e0960db9c2393a6c221c2ce97ad1aa/ansible-2.9.6.tar.gz
Collecting jinja2 (from ansible)
[...]
Running setup.py install for ansible … done
Successfully installed MarkupSafe-1.1.1 PyYAML-5.3 ansible-2.9.6 cffi-1.14.0 cryptography-2.8 jinja2-2.11.1 pycparser-2.20
⬢[liquidat@toolbox ~]$ ansible --version
ansible 2.9.6
config file = /home/liquidat/.ansible.cfg
configured module search path = ['/home/liquidat/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/liquidat/.local/lib/python3.7/site-packages/ansible
executable location = /home/liquidat/.local/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]

As you see, Ansible was properly installed. And with this we are already done – we have our first tool ready, named “ansible”.

Using our tool

Now let’s assume I use the container for some things, exit it – and want to reuse it later on. This is no problem at all, since that is exactly what Toolbox was built for. And we have a name, which makes it fairly easy to remember how to access it. But even if we do not remember the name, we can easily list all available tools:

$ toolbox list
IMAGE ID      IMAGE NAME                                        CREATED
64e68e194389  registry.fedoraproject.org/f31/fedora-toolbox:31  2 weeks ago

CONTAINER ID  CONTAINER NAME  CREATED         STATUS             IMAGE NAME
8ec117845e06  ansible         47 minutes ago  Up 47 minutes ago  registry.fedoraproject.org/f31/fedora-toolbox:31
$ toolbox enter -c ansible
⬢[liquidat@toolbox ~]$ ansible --version
ansible 2.9.6
  config file = /home/liquidat/.ansible.cfg
  configured module search path = ['/home/liquidat/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/liquidat/.local/lib/python3.7/site-packages/ansible
  executable location = /home/liquidat/.local/bin/ansible
  python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]

As you see, the container is in the same state as we left it: Ansible is still properly installed and ready to be used. And we can now do this with all kinds of other tools: be it another version of Ansible, or even some daemon we want to experiment with. It can all easily be installed, run and reused, without worrying about cluttering the OS, having the wrong library versions installed, or not being able to update some library because of a system dependency.
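
For instance, a second toolbox with an older Ansible release can live right next to the first one (the container name is arbitrary):

$ toolbox create --container ansible-28
$ toolbox enter --container ansible-28
⬢[liquidat@toolbox ~]$ pip install --user 'ansible<2.9'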

Summary

Toolbox is an interesting approach that simplifies container management for playing around with CLI-based tools. If you have an immutable environment like Fedora Silverblue, it might become a crucial piece of your daily operations, since it is a pain to install additional packages on top of Silverblue’s ostree infrastructure. But even for “normal” distributions it is worth a try!

[Howto] Get a Python virtual environment running on RHEL 8

RHEL 8 has a new way of installing and handling Python. How do you use it properly, especially when multiple versions are installed? Read on to learn how to properly set up a virtual environment.

Red Hat Enterprise Linux 8 was released in May this year – and comes with a lot of changes. Think of a really modern OS here. Among those changes is also that Python is, well, different: it is included, for sure. But at the same time, it isn’t.

The important piece is that when you work with Python in development environments – or, for example, when you are dealing with Ansible – it makes sense to run everything in a Python virtual environment.

Here is how this can be best done in RHEL 8:

First, install the Python 3.6 appstream:

$ sudo yum install -y python36

Afterwards, set up a Python virtual environment:

$ python3.6 -m venv myvirtual_venv

And that’s it already. Activate it with:

$ source myvirtual_venv/bin/activate

In case you are dealing with SELinux bindings, it might make sense to link those into your virtual environment:

$ cd myvirtual_venv/lib/python3.6/site-packages/
$ ln -s /usr/lib64/python3.6/site-packages/selinux
$ ln -s /usr/lib64/python3.6/site-packages/_selinux.cpython-36m-x86_64-linux-gnu.so

When different versions of Python are offered via app streams in the future, make sure to pick the matching SELinux bindings when you link them into your virtual environment.

Another way to get the SELinux libraries is to create the virtual environment with access to the system packages:

$ python3.6 -m venv --system-site-packages myvirtual_venv
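
Either way, a quick import test shows whether the bindings are actually usable inside the virtual environment (prompt shown for illustration):

$ source myvirtual_venv/bin/activate
(myvirtual_venv) $ python -c 'import selinux; print("selinux bindings found")'
selinux bindings found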

[Howto] Three commands to update Fedora

These days using Fedora Workstation there are multiple commands necessary to update the entire software on the system: not everything is installed as RPMs anymore – and some systems hardly use RPMs at all anyway.

Background

In the past all updates of a Fedora system were easily applied with one single command:

$ yum update

Later on, yum was replaced by DNF, but the idea stayed the same:

$ dnf update

Simple, right? But not these days: Fedora recently added capabilities to install and manage code in other ways. Flatpak packages are not managed by DNF, many firmware updates are handled via the dedicated management tool fwupd, and last but not least, Fedora Silverblue does not support DNF at all.

GUI solution Gnome Software – one tool to rule them all…

To properly update your Fedora system you have to check multiple sources. But before we dive into detailed CLI commands there is a simple way to do it all in one go: the Gnome Software tool. It checks all sources and presents the available updates in a single GUI.

Gnome Software simply shows the available updates and can manage them – the user does not even need to know where they come from.

A closer look at the repositories configured in Gnome Software shows that it covers the main Fedora repositories, third-party repositories, Flatpaks, firmware and so on.

Using the GUI alone is sufficient to take care of all update routines. However, if you want to know and understand what happens underneath, it is good to know the separate CLI commands for the various kinds of software resources. We will look at them in the rest of this post.
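
To give away the punch line: on a traditional, RPM-based Fedora Workstation the detailed routines below boil down to three commands – on Silverblue the first one is replaced by rpm-ostree, as discussed later:

$ sudo dnf upgrade
$ flatpak update
$ fwupdmgr update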

System packages

Each and every system is made up of at least a basic set of software: the kernel, a system for managing services like systemd, core libraries like libc and so on. With Fedora used as a workstation system there are two ways to manage system packages, because there are two totally different spins of Fedora: the normal one, traditionally based on DNF and thus composed of RPM packages, and the new Fedora Silverblue, based on immutable ostree system images.

Traditional: DNF

Updating an RPM-based system via DNF is easy:

$ dnf upgrade
[sudo] password for liquidat: 
Last metadata expiration check: 0:39:20 ago on Tue 18 Jun 2019 01:03:12 PM CEST.
Dependencies resolved.
================================================================================
 Package                      Arch       Version             Repository    Size
================================================================================
Installing:
 kernel                       x86_64     5.1.9-300.fc30      updates       14 k
 kernel-core                  x86_64     5.1.9-300.fc30      updates       26 M
 kernel-modules               x86_64     5.1.9-300.fc30      updates       28 M
 kernel-modules-extra         x86_64     5.1.9-300.fc30      updates      2.1 M
[...]

This is the traditional way to keep a Fedora system up2date. It has been used for years and is well known to everyone.

And in the end it is analogous to the way Linux distributions have been kept up2date for ages; only the command differs from system to system (apt-get, etc.).

Silverblue: OSTree

With the recent rise of container technologies the idea of immutable systems became prominent again. With Fedora Silverblue there is an implementation of that approach as a Fedora Workstation spin.

[Unlike] other operating systems, Silverblue is immutable. This means that every installation is identical to every other installation of the same version. The operating system that is on disk is exactly the same from one machine to the next, and it never changes as it is used.

Silverblue’s immutable design is intended to make it more stable, less prone to bugs, and easier to test and develop. Finally, Silverblue’s immutable design also makes it an excellent platform for containerized apps as well as container-based software development. In each case, apps and containers are kept separate from the host system, improving stability and reliability.

https://docs.fedoraproject.org/en-US/fedora-silverblue/

Since we are dealing with immutable images here, another tool is needed to manage them: OSTree. Basically, OSTree is a set of libraries and tools which helps to manage images and snapshots. The idea is to provide a basic system image to everyone, with all additional software on top in sandboxed formats like Flatpak.

Unfortunately, not all tools can be packaged as Flatpaks: especially command line tools are currently hardly usable at all as Flatpaks. Thus there is a way to install and manage RPMs on top of the OSTree image, but still baked right into it: rpm-ostree. In fact, on Fedora Silverblue, the images and all RPMs baked into them are managed by it.

Thus updating the system and all related RPMs requires the command rpm-ostree update:

$ rpm-ostree update
⠂ Receiving objects: 98% (4653/4732) 4,3 MB/s 129,7 MB 
Receiving objects: 98% (4653/4732) 4,3 MB/s 129,7 MB... done
Checking out tree 209dfbe... done
Enabled rpm-md repositories: fedora-cisco-openh264 rpmfusion-free-updates rpmfusion-nonfree fedora rpmfusion-free updates rpmfusion-nonfree-updates
rpm-md repo 'fedora-cisco-openh264' (cached); generated: 2019-03-21T15:16:16Z
rpm-md repo 'rpmfusion-free-updates' (cached); generated: 2019-06-13T10:31:33Z
rpm-md repo 'rpmfusion-nonfree' (cached); generated: 2019-04-16T21:53:39Z
rpm-md repo 'fedora' (cached); generated: 2019-04-25T23:49:41Z
rpm-md repo 'rpmfusion-free' (cached); generated: 2019-04-16T20:46:20Z
rpm-md repo 'updates' (cached); generated: 2019-06-17T18:09:33Z
rpm-md repo 'rpmfusion-nonfree-updates' (cached); generated: 2019-06-13T11:00:42Z
Importing rpm-md... done
Resolving dependencies... done
Checking out packages... done
Running pre scripts... done
Running post scripts... done
Running posttrans scripts... done
Writing rpmdb... done
Writing OSTree commit... done
Staging deployment... done
Freed: 50,2 MB (pkgcache branches: 0)
Upgraded:
  gcr 3.28.1-3.fc30 -> 3.28.1-4.fc30
  gcr-base 3.28.1-3.fc30 -> 3.28.1-4.fc30
  glib-networking 2.60.2-1.fc30 -> 2.60.3-1.fc30
  glib2 2.60.3-1.fc30 -> 2.60.4-1.fc30
  kernel 5.1.8-300.fc30 -> 5.1.9-300.fc30
  kernel-core 5.1.8-300.fc30 -> 5.1.9-300.fc30
  kernel-devel 5.1.8-300.fc30 -> 5.1.9-300.fc30
  kernel-headers 5.1.8-300.fc30 -> 5.1.9-300.fc30
  kernel-modules 5.1.8-300.fc30 -> 5.1.9-300.fc30
  kernel-modules-extra 5.1.8-300.fc30 -> 5.1.9-300.fc30
  plymouth 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-core-libs 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-graphics-libs 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-plugin-label 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-plugin-two-step 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-scripts 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-system-theme 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-theme-spinner 0.9.4-5.fc30 -> 0.9.4-6.fc30
Run "systemctl reboot" to start a reboot

Desktop applications: Flatpak

Installing software – especially desktop related software – on Linux is a major pain for distributors, users and developers alike. One attempt to solve this is the flatpak format, see also Flatpak – a solution to the Linux desktop packaging problem.

Basically, Flatpak is a distribution-independent packaging format targeted at desktop applications. It comes with sandboxing capabilities, and the packages usually have hardly any dependencies at all besides a common set provided to all of them.

Flatpak also provides its own repository format; thus Flatpak packages can come with their own repository and be released and updated independently of a distribution’s release cycle.

In fact, this is what happens with the large Flatpak community repository flathub.org: all packages installed from there can be updated via the flathub repos fully independently of Fedora – which, by the way, also means independently of the Fedora security teams.
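
Adding flathub as a package source is a one-time step; this is the standard setup command documented on flathub.org:

$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo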

So Flatpak makes developing and distributing desktop programs much easier – and provides a tool for that. Meet flatpak!

$ flatpak update
Looking for updates…

        ID                                            Arch              Branch            Remote            Download
 1. [✓] org.freedesktop.Platform.Locale               x86_64            1.6               flathub            1.0 kB / 177.1 MB
 2. [✓] org.freedesktop.Platform.Locale               x86_64            18.08             flathub            1.0 kB / 315.9 MB
 3. [✓] org.libreoffice.LibreOffice.Locale            x86_64            stable            flathub            1.0 MB / 65.7 MB
 4. [✓] org.freedesktop.Sdk.Locale                    x86_64            1.6               flathub            1.0 kB / 177.1 MB
 5. [✓] org.freedesktop.Sdk.Locale                    x86_64            18.08             flathub            1.0 kB / 319.3 MB

Firmware

And there is firmware: the binary blobs that keep some of our hardware running and which are often – unfortunately – closed source.

A lot of kernel-related firmware is managed as system packages and is thus part of the system image or packaged via RPM. But device-related firmware (laptops, docking stations and so on) is often only provided in Windows executable formats and difficult to handle.

Luckily, the Linux Vendor Firmware Service (LVFS) has recently gained quite some traction as the default way for many vendors to make their device firmware consumable to Linux users:

The Linux Vendor Firmware Service is a secure portal which allows hardware vendors to upload firmware updates.

This site is used by all major Linux distributions to provide metadata for clients such as fwupdmgr and GNOME Software.

https://fwupd.org/

End users can take advantage of this with a tool dedicated to identifying devices and managing the necessary firmware blobs for them: meet fwupdmgr!

$ fwupdmgr update
No upgrades for 20L8S2N809 System Firmware, current is 0.1.31: 0.1.25=older, 0.1.26=older, 0.1.27=older, 0.1.29=older, 0.1.30=older
No upgrades for UEFI Device Firmware, current is 184.65.3590: 184.55.3510=older, 184.60.3561=older, 184.65.3590=same
No upgrades for UEFI Device Firmware, current is 0.1.13: 0.1.13=same
No releases found for device: Not compatible with bootloader version: failed predicate [BOT01.0[0-3]_* regex BOT01.04_B0016]

In the above example there were no updates available – but multiple devices are supported and thus were checked.
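
Worth knowing in this context: fwupdmgr pulls its update metadata from LVFS, and that metadata can be refreshed explicitly before checking for new firmware:

$ fwupdmgr refresh
$ fwupdmgr update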

Forgot something? Gnome extensions…

The above examples cover the major ways to manage various bits of code. But they do not cover all cases, so for the sake of completeness I’d like to highlight a few more here.

For example, Gnome extensions can be installed as RPMs, but also via extensions.gnome.org. In that case the installation is done via a browser plugin.

The same is true for browser plugins themselves: they can be installed independently and extend the usage of the web browser. Think of the Chrome Web Store here, or Firefox Add-ons.

Conclusion

Keeping a system up2date was easier in the past – with a single command. However, at the same time that meant that those systems were limited by what RPM could actually deliver.

With the additional ways to update systems there is an additional burden on the system administrator, but at the same time much more software and firmware is available this way – code which was not available at all in the old RPM-only times. And with Silverblue an entirely new paradigm of system management is here – again something which would not have been possible with RPM alone.

At the same time it needs to be kept in mind that these are pure desktop systems – and there Gnome Software helps by being the single pane of glass.

So I fully understand if some people are a bit grumpy about the new needs for multiple tools. But I think the advantages by far outweigh the disadvantages.

[Howto] Rebasing Fedora Silverblue – even from Rawhide to Fedora 30

I recently switched to Fedora Silverblue, the immutable desktop version of Fedora. With Silverblue, rebasing is easy – even when I had to downgrade from Rawhide to a stable release!

Fedora Silverblue is an interesting attempt at providing an immutable operating system – targeted at desktop users. Using it on a daily basis helps me become more familiar with the toolset and the ideas behind it, which are also used in other projects like Fedora Atomic or Fedora CoreOS.

When Fedora 30 was released I decided to give it a try, went to the Silverblue download page – and unfortunately picked the wrong image: the one for Rawhide.

Rawhide is the rolling release/development branch of Fedora, and is way too unstable for my daily usage. But I only discovered this after I had already installed it and spent quite some time customizing it.

But Silverblue is an immutable distribution, so switching to a previous version should be no problem, right? And in fact, yes, it is very easy!

Silverblue supports rebasing, i.e. switching between different branches. To get a list of available branches, first list the name of the remote source, then query the available references/branches:

[liquidat@heisenberg ~]$ ostree remote list
fedora
[liquidat@heisenberg ~]$ ostree remote refs fedora
[...]
fedora:fedora/30/x86_64/silverblue
fedora:fedora/30/x86_64/testing/silverblue
fedora:fedora/30/x86_64/updates/silverblue
[...]
fedora:fedora/rawhide/x86_64/silverblue
fedora:fedora/rawhide/x86_64/workstation

The list is quite long and contains multiple operating system versions.

In my case I was on the rawhide branch and tried to rebase to version 30. That, however, failed:

[liquidat@heisenberg ~]$ rpm-ostree rebase fedora/30/x86_64/silverblue
1 metadata, 0 content objects fetched; 569 B transferred in 4 seconds
Checking out tree 7420c3a... done
Enabled rpm-md repositories: rawhide
Updating metadata for 'rawhide'... done
rpm-md repo 'rawhide'; generated: 2019-05-13T08:01:20Z
Importing rpm-md... done
⠁  
Forbidden base package replacements:
  libgcc 9.1.1-1.fc30 -> 9.1.1-1.fc31 (rawhide)
  libgomp 9.1.1-1.fc30 -> 9.1.1-1.fc31 (rawhide)
This likely means that some of your layered packages have requirements on newer or older versions of some base packages. `rpm-ostree cleanup -m` may help. For more details, see: https://gith
Resolving dependencies... done
error: Some base packages would be replaced

The problem was that I had installed additional packages in the meantime. Note that there are multiple ways to install packages in Silverblue:

– Flatpak apps: this is the primary way that apps get installed on Silverblue.
– Containers: which can be installed and used for development purposes.
– Toolbox containers: a special kind of container that are tailored to be used as a software development environment.

The other method of installing software on Silverblue is package layering. This is different from the other methods, and goes against the general principle of immutability. Package layering adds individual packages to the Silverblue system, and in so doing modifies the operating system.

https://docs.fedoraproject.org/en-US/fedora-silverblue/getting-started/

While Flatpak in itself is a pretty cool solution to the Linux desktop packaging problem, it usually comes with sandboxed environments, making it less suitable for integrated tools and libraries.

For that reason it is still possible to install RPMs on top, in a layered form. This, however, might result in dependency issues when the underlying image is supposed to change.

This is exactly what happened here: I had additional software installed, which depended on some specific versions of the underlying image. So I had to remove those:

[liquidat@heisenberg ~]$ rpm-ostree uninstall fedora-workstation-repositories golang pass vim zsh

Afterwards it was easy to rebase the entire system onto a different branch or – in my case – a different version of the same branch:

[liquidat@heisenberg ~]$ rpm-ostree rebase fedora/30/x86_64/silverblue
1 metadata, 0 content objects fetched; 569 B transferred in 2 seconds
Staging deployment... done
Freed: 47,4 MB (pkgcache branches: 0)
  liberation-fonts-common 1:2.00.3-3.fc30 -> 1:2.00.5-1.fc30
  [...]
Downgraded:
  GConf2 3.2.6-26.fc31 -> 3.2.6-26.fc30
  [...]
Removed:
  fedora-repos-rawhide-31-0.2.noarch
  [...]
Added:
  PackageKit-gstreamer-plugin-1.1.12-5.fc30.x86_64
  [...]
Run "systemctl reboot" to start a reboot

And that’s it – after a short systemctl reboot the machine was back, running Fedora 30. And since ostree works with images, the reboot was smooth and quick; long sessions of installing/updating software during shutdown or reboot are not necessary with such a setup!
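
A nice side effect of the image-based approach is that the previous deployment is kept around: rpm-ostree status lists all deployments, and if anything goes wrong the system can simply be switched back to the previous one:

$ rpm-ostree status
$ rpm-ostree rollback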

In conclusion I must say that I am pretty impressed – both by the concept as well as by its execution, and by how well Silverblue works in day-to-day use, even as a desktop. My next step will be to test it on a laptop on the road, and see if other problems come up there.