RHEL 8 changes the way Python is installed and handled. So how do you use it properly, especially when multiple versions are installed? Read on to learn how to properly set up a virtual environment.
The important piece is that, when you work with Python in development environments – or for example when you are dealing with Ansible – it makes sense to run everything in a Python virtual environment.
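A minimal sketch of such a setup – the path and the installed package are examples, and python3 is assumed to be available on the system:

$ python3 -m venv ~/venvs/ansible
$ source ~/venvs/ansible/bin/activate
(ansible) $ pip install ansible

Everything installed inside the virtual environment stays there, independent of the system Python packages.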
These days, on Fedora Workstation, multiple commands are necessary to update all the software on the system: not everything is installed as RPMs anymore – and some systems hardly use RPMs at all anyway.
In the past all updates of a Fedora system were easily applied with one single command: dnf upgrade.
Simple, right? But not these days: Fedora recently added capabilities to install and manage code in other ways: Flatpak packages are not managed by DNF. Also, many firmware updates are handled via the dedicated management tool fwupd. And last but not least, Fedora Silverblue does not support DNF at all.
GUI solution Gnome Software – one tool to rule them all…
To properly update your Fedora system you have to check multiple sources. But before we dive into detailed CLI commands, there is a simple way to do it all in one go: the Gnome Software tool. It checks all sources and provides the available updates in its single GUI:
The above screenshot highlights that Gnome Software simply shows available updates and can manage them. The user does not even need to know where they come from.
If we take a closer look at the configured repositories in Gnome Software, we see that it covers the main Fedora repositories, 3rd party repositories, Flatpaks, firmware and so on:
Using the GUI alone is sufficient to take care of all update routines. However, if you want to know and understand what happens underneath, it is good to know the separate CLI commands for each kind of software resource. We will look at them in the rest of this post.
Each and every system is made up of at least a basic set of software: the kernel, a service manager like systemd, core libraries like libc, and so on. With Fedora used as a workstation system there are two ways to manage system packages, because there are two totally different spins of Fedora: the normal one, traditionally based on DNF and thus composed of RPM packages, and the new Fedora Silverblue, based on immutable ostree system images.
Updating an RPM based system via DNF is easy:
$ dnf upgrade
[sudo] password for liquidat:
Last metadata expiration check: 0:39:20 ago on Tue 18 Jun 2019 01:03:12 PM CEST.
 Package               Arch      Version           Repository   Size
 kernel                x86_64    5.1.9-300.fc30    updates      14 k
 kernel-core           x86_64    5.1.9-300.fc30    updates      26 M
 kernel-modules        x86_64    5.1.9-300.fc30    updates      28 M
 kernel-modules-extra  x86_64    5.1.9-300.fc30    updates     2.1 M
This is the traditional way to keep a Fedora system up to date. It has been used for years and is well known to everyone.
And in the end it is analogous to the way Linux distributions have been kept up to date for ages now – only the command differs from system to system (apt-get, etc.).
With the recent rise of container technologies, the idea of immutable systems has become prominent again. Fedora Silverblue is an implementation of that approach as a Fedora Workstation spin.
[Unlike] other operating systems, Silverblue is immutable. This means that every installation is identical to every other installation of the same version. The operating system that is on disk is exactly the same from one machine to the next, and it never changes as it is used.
Silverblue’s immutable design is intended to make it more stable, less prone to bugs, and easier to test and develop. Finally, Silverblue’s immutable design also makes it an excellent platform for containerized apps as well as container-based software development. In each case, apps and containers are kept separate from the host system, improving stability and reliability.
Since we are dealing with immutable images here, another tool is needed to manage them: OSTree. Basically, OSTree is a set of libraries and tools which helps to manage images and snapshots. The idea is to provide a basic system image to all users, with all additional software on top in sandboxed formats like Flatpak.
Unfortunately, not all tools can be packaged as Flatpaks: especially command line tools are currently hardly usable at all as Flatpaks. Thus there is a way to install and manage RPMs on top of the OSTree image, but still baked right into it: rpm-ostree. In fact, on Fedora Silverblue, all images and the RPMs baked into them are managed by it.
Thus updating the system and all related RPMs needs the command rpm-ostree update:
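The invocation itself is a single command (output omitted here; it shows the fetched image and a package diff, much like the other rpm-ostree operations in this series):

$ rpm-ostree update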
Basically, Flatpak is a distribution independent packaging format targeted at desktop applications. It comes with sandboxing capabilities, and the packages usually have hardly any dependencies at all besides a common runtime provided to all of them.
Flatpak also provides its own repository format, thus Flatpak packages can come with their own repository to be released and updated independently of a distribution release cycle.
In fact, this is what happens with the large Flatpak community repository flathub.org: all packages installed from there can be updated via the flathub repos fully independently of Fedora – which also means independently of the Fedora security teams, by the way…
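For example, enabling flathub on a system boils down to adding its repository definition once, as documented on flathub.org:

$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo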
So Flatpak makes developing and distributing desktop programs much easier – and provides a tool for that. Meet flatpak!
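Updating all installed Flatpaks across all configured remotes is then again a single command:

$ flatpak update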
And there is firmware: the binary blobs that keep some of our hardware running and which are often – unfortunately – closed source.
A lot of kernel-related firmware is managed as system packages and is thus part of the system image or packaged via RPM. But device firmware (laptops, docking stations, and so on) is often only provided in Windows executable formats and difficult to handle.
End users can take advantage of this with a tool dedicated to identify devices and manage the necessary firmware blobs for them: meet fwupdmgr!
$ fwupdmgr update
No upgrades for 20L8S2N809 System Firmware, current is 0.1.31: 0.1.25=older, 0.1.26=older, 0.1.27=older, 0.1.29=older, 0.1.30=older
No upgrades for UEFI Device Firmware, current is 184.65.3590: 184.55.3510=older, 184.60.3561=older, 184.65.3590=same
No upgrades for UEFI Device Firmware, current is 0.1.13: 0.1.13=same
No releases found for device: Not compatible with bootloader version: failed predicate [BOT01.0[0-3]_* regex BOT01.04_B0016]
In the above example there were no updates available – but multiple devices are supported and thus were checked.
Forgot something? Gnome extensions…
The above examples cover the major ways to manage various bits of code. But they do not cover all cases, so for the sake of completeness I’d like to highlight a few more here.
For example, Gnome extensions can be installed as RPM, but can also be installed via extensions.gnome.org. In that case the installation is done via a browser plugin.
The same is true for browser plugins themselves: they can be installed independently and extend the usage of the web browser. Think of the Chrome Web Store here, or Firefox Add-ons.
Keeping a system up to date used to be easier – a single command was enough. However, at the same time those systems were limited by what RPM could actually deliver.
With the additional ways to update systems there is an additional burden on the system administrator, but at the same time much more software and firmware is available via these channels – code which was not available in the old RPM-only days at all. And with Silverblue an entirely new paradigm of system management has arrived – again something which would not have been possible with RPM alone.
At the same time it needs to be kept in mind that these are pure desktop systems – and there Gnome Software helps by being the single pane of glass.
So I fully understand if some people are a bit grumpy about the new need for multiple tools. But I think the advantages by far outweigh the disadvantages.
Roles are an essential part of Ansible and help in structuring your automation content. The idea is to have clearly defined roles for dedicated tasks. Within your automation code, the roles are then called from your Ansible playbooks.
Since roles usually have a well defined purpose, they make it easy to reuse your code – for yourself, but also within your team. And you can even share roles with the global community. In fact, the Ansible community created Ansible Galaxy as a central place to display, search and view Ansible roles from thousands of people.
So what does a role look like? Basically it is a predefined structure of folders and files to hold your automation code. There is a folder for your templates, a folder to keep files with tasks, one for handlers, another one for your default variables, and so on:
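A typical role skeleton – here generated with ansible-galaxy init for a hypothetical role named myrole – looks like this:

$ ansible-galaxy init myrole
- myrole was created successfully
$ tree myrole
myrole/
├── README.md
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── tasks
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml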
In folders which contain Ansible code – like tasks, handlers, vars, defaults – there are main.yml files. Those contain the relevant Ansible bits. In the case of the tasks directory, they often include other YAML files within the same directory. Roles even provide ways to test your automation code – in an automated fashion, of course.
This post will show how roles can be shared with others, how they can be used in your projects, and how all this works with Red Hat Ansible Tower.
Share Roles via Repositories
Roles can be part of your project repository. They usually sit underneath a dedicated roles/ directory. But keeping roles in your own repository makes it hard for others to reuse and improve them. If someone works on a different team, or on a different project, they might not have access to your repository – or they may use their own anyway. So even if you send them a copy of your role, they could only add it to their own repository, making it hard to exchange improvements, bug fixes and changes across totally different repositories.
For that reason, a better way is to keep a role in its own repository. That way it can be easily shared and improved. However, to be available to a playbook, the role still needs to be included. Technically there are multiple ways to do that.
For example there can be a global roles directory outside your project where all roles are kept. This can be referenced in ansible.cfg. However, this requires that all developer setups and also the environment in which the automation is finally executed have the same global directory structure. This is not very practical.
When Git is used as the version control system, there is also the possibility of importing roles from other repositories via Git submodules, or even using Git subtrees. However, this requires quite some knowledge about advanced Git features from each and every user – so it is far from simple.
The best way to make shared roles available to your playbooks is to use a function built into Ansible itself: the command ansible-galaxy can read a file specifying which external roles need to be imported for a successful Ansible run: requirements.yml. It lists external roles and their sources. If needed, it can also point to a specific version:
# from GitHub
- src: https://github.com/bennojoy/nginx

# from GitHub, overriding the name and specifying a version (tag, branch or commit)
- src: https://github.com/bennojoy/nginx
  version: master
  name: nginx_role

# from Bitbucket
- src: git+http://bitbucket.org/willthames/git-ansible-galaxy
  version: v1.4

# from galaxy
- src: yatesr.timezone
The file can be used via the command ansible-galaxy. It reads the file and downloads all specified roles to the appropriate path:
$ ansible-galaxy install -r roles/requirements.yml
- extracting nginx to /home/rwolters/ansible/roles/nginx
- nginx was installed successfully
- extracting nginx_role to
- nginx_role (master) was installed successfully
The output also highlights when a specific version was downloaded. You will find a copy of each role in your roles/ directory – so make sure that you do not accidentally add the downloaded roles to your repository! The best option is to add them to the .gitignore file.
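One way to do that, assuming the requirements file itself should stay under version control, is to ignore everything else beneath roles/:

$ cat .gitignore
roles/*
!roles/requirements.yml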
This way, roles can be imported into the project and are available to all playbooks while they are still shared via a central repository. Changes to a role need to be made in its dedicated repository – which ensures that no careless, project specific changes end up in the role.
At the same time the version attribute in requirements.yml ensures that the role in use can be pinned to a certain release tag value, commit hash, or branch name. This is useful in case the development of a role is quickly moving forward, but your project has longer development cycles.
Using Roles in Ansible Tower
If you use automation at larger, enterprise scale you will most likely start using Ansible Tower sooner or later. So how do roles work with Ansible Tower? In fact – just as described above. Each time Ansible Tower checks out a project it looks for a roles/requirements.yml. If such a file is present, a new version of each listed role is copied to the local checkout of the project and is thus available to the relevant playbooks.
That way shared roles can easily be reused in Ansible Tower – it is built in right from the start!
Best Practices and Things to Keep in Mind
There are a few best practices around sharing Ansible roles that make your life easier. The first concerns the naming and location of the roles directory. While it is possible to put the directory anywhere via the roles_path in ansible.cfg, we strongly recommend sticking to the directory name roles, sitting in the root of your project directory. Do not choose another name for it or move it to some subdirectory.
The same is true for requirements.yml: have one requirements.yml only, and keep it at roles/requirements.yml. While it is technically possible to have multiple files and spread them across your project, this will not work when the project is imported into Ansible Tower.
Also, if the roles are not only shared among multiple users, but are also developed together with others or not by you at all, it might make sense to pin each role to the actual commit you have tested your setup against. That way you avoid unwanted changes in the role’s behaviour.
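For example, an entry in roles/requirements.yml pinning a role to a commit could look like this (the hash is a hypothetical value):

$ cat roles/requirements.yml
- src: https://github.com/bennojoy/nginx
  name: nginx_role
  version: 8f994d9        # a commit hash – hypothetical value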
Ansible and Ansible Tower provide a powerful variable system. At the same time, some variables are reserved to one or the other; they cannot be used for your own purposes, but can be helpful to read. This post lists all reserved and magic variables and also important keywords.
Variables in Ansible are a powerful tool to influence and control your automation execution. In fact, I have dedicated a fair share of posts to the topic over the years:
The variable system is in fact so powerful that Ansible uses it itself. There are certain variables which are reserved: the so-called magic variables.
The official documentation lists many of them – but is missing the Tower ones. For that reason this post lists all magic variables in Ansible and Ansible Tower with references to more information.
Note that the variables and keywords might differ between Ansible versions. The lists provided here are for Ansible 2.8 – the current release, also shipped in Fedora – and Tower 3.4/3.5.
Reserved & Magic Variables
The following list shows true magic variables. They are reserved internally and are overwritten by Ansible if needed. A “(D)” highlights that the variable is deprecated.
Facts are not magic variables because they are not internal: they are collected during fact gathering or execution of the setup module, so it helps to keep them in mind. There are two “main” variables related to facts, and a lot of other variables depending on what the managed node has to offer. Since those differ from system to system, it is tricky to list them all. But they can be easily identified by the leading ansible_.
Keywords are, strictly speaking, not variables – you can even set a variable named like a keyword. Instead, they are the parts of a playbook that make a playbook work: think of the keys hosts, tasks, name or even the parameters of a module.
It is just important to keep those keywords in mind – and it certainly helps when you name your variables in a way that they are not mixed up with keywords by chance.
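A minimal, hypothetical playbook illustrating a few of these keywords next to an ordinary variable:

$ cat demo-playbook.yml
---
- name: demo play                # 'name' is a keyword
  hosts: all                     # 'hosts' is a keyword
  vars:                          # 'vars' is a keyword ...
    greeting: hello              # ... while 'greeting' is an ordinary variable
  tasks:                         # 'tasks' is a keyword
    - name: show the greeting
      debug:
        msg: "{{ greeting }}"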
The following lists show all keywords by where they can appear. Note that some keywords are listed multiple times because they can be used at different places.
I recently switched to Fedora Silverblue, the immutable desktop version of Fedora. With Silverblue, rebasing is easy – even when I had to downgrade from Rawhide to a stable release!
Fedora Silverblue is an interesting attempt at providing an immutable operating system – targeted at desktop users. Using it on a daily basis helps me get more familiar with the toolset and the ideas behind it, which are also used in other projects like Fedora Atomic or Fedora CoreOS.
Rawhide is the rolling release/development branch of Fedora, and is way too unstable for my daily usage. But I only discovered this after I had already installed it and spent quite some time customizing it.
But Silverblue is an immutable distribution, so switching to a previous version should be no problem, right? And in fact, yes, it is very easy!
Silverblue supports rebasing: switching between different branches. To get a list of available branches, first list the name of the remote source, and afterwards query the available references/branches:
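A sketch of both steps – assuming the default remote is named fedora, and with the ref list trimmed:

$ ostree remote list
fedora
$ ostree remote refs fedora
fedora:fedora/29/x86_64/silverblue
fedora:fedora/30/x86_64/silverblue
fedora:fedora/rawhide/x86_64/silverblue
…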
The list is quite long and covers multiple operating system versions.
In my case I was on the rawhide branch and tried to rebase to version 30. That however failed:
[liquidat@heisenberg ~]$ rpm-ostree rebase fedora/30/x86_64/silverblue
1 metadata, 0 content objects fetched; 569 B transferred in 4 seconds
Checking out tree 7420c3a... done
Enabled rpm-md repositories: rawhide
Updating metadata for 'rawhide'... done
rpm-md repo 'rawhide'; generated: 2019-05-13T08:01:20Z
Importing rpm-md... done
Forbidden base package replacements:
libgcc 9.1.1-1.fc30 -> 9.1.1-1.fc31 (rawhide)
libgomp 9.1.1-1.fc30 -> 9.1.1-1.fc31 (rawhide)
This likely means that some of your layered packages have requirements on newer or older versions of some base packages. `rpm-ostree cleanup -m` may help. For more details, see: https://gith...
Resolving dependencies... done
error: Some base packages would be replaced
The problem was that I had installed additional packages in the meantime. Note that there are multiple ways to install packages in Silverblue:
– Flatpak apps: this is the primary way that apps get installed on Silverblue.
– Containers: which can be installed and used for development purposes.
– Toolbox containers: a special kind of container tailored to be used as a software development environment.
The other method of installing software on Silverblue is package layering. This is different from the other methods, and goes against the general principle of immutability. Package layering adds individual packages to the Silverblue system, and in so doing modifies the operating system.
For that reason it is still possible to install RPMs on top, in a layered form. This however might result in dependency issues when the underlying image is supposed to change.
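Layering a package is a single command – shown here with vim, one of the packages that shows up again in the uninstall below:

$ rpm-ostree install vim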
This is exactly what happened here: I had additional software installed, which depended on some specific versions of the underlying image. So I had to remove those:
[liquidat@heisenberg ~]$ rpm-ostree uninstall fedora-workstation-repositories golang pass vim zsh
Afterwards it was easy to rebase the entire system onto a different branch or – in my case – a different version of the same branch:
[liquidat@heisenberg ~]$ rpm-ostree rebase fedora/30/x86_64/silverblue
1 metadata, 0 content objects fetched; 569 B transferred in 2 seconds
Staging deployment... done
Freed: 47,4 MB (pkgcache branches: 0)
liberation-fonts-common 1:2.00.3-3.fc30 -> 1:2.00.5-1.fc30
GConf2 3.2.6-26.fc31 -> 3.2.6-26.fc30
Run "systemctl reboot" to start a reboot
And that’s it – after a short systemctl reboot the machine was back, running Fedora 30. And since ostree works with images, the reboot went smoothly and quickly; long sessions of installing/updating software during shutdown or reboot are not necessary with such a setup!
In conclusion I must say that I am pretty impressed – both by the concept and by its execution: Silverblue works well in day-to-day use, even as a desktop. My next step will be to test it on a laptop on the road, and see if other problems come up there.
An SSH client configuration makes accessing servers much easier and more convenient. Until recently the configuration was done in one single file, which could be problematic. But newer versions support includes to read configuration from multiple places.
SSH is the default way to access servers remotely – Linux and other UNIX systems, and, since recently, Windows as well.
One feature of the OpenSSH client is to configure often used parameters for SSH connections in a central config file, ~/.ssh/config. This comes in especially handy when multiple remote servers require different parameters: varying ports, other user names, different SSH keys, and so on. It also provides the possibility to define aliases for host names to avoid the necessity to type in the FQDN each time. Since such a configuration is directly read by the SSH client, other tools which use the SSH client in the background – like Ansible – can benefit from the configuration as well.
A typical configuration in such a config file can look like this (host names and values below are illustrative):
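Host webserver
    HostName 192.0.2.10
    Port 2222
    User admin
    IdentityFile ~/.ssh/id_rsa_webserver

Host bastion
    HostName bastion.example.com
    User liquidat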
While this is very handy and helps a lot to maintain sanity even with very different and strange SSH configurations, a single huge file is hard to manage.
Cloud environments, for example, change constantly, so it makes sense to update/rebuild the configuration regularly. There are scripts out there, but they either just overwrite the existing configuration, or work entirely on an extra file which has to be referenced in each SSH client call with ssh -F .ssh/aws-config, or they require marked sections in .ssh/config like "### AZURE-SSH-CONFIG BEGIN ###". All these attempts are either clumsy or error prone.
Another use case is where parts of the SSH configuration are managed by configuration management systems or by software packages, for example by a company – again that requires changes to a single file and might alter or remove existing configuration for your other services and servers. After all, it is not uncommon to use your more-or-less private GitHub account for your company work, so that you have mixed entries in your .ssh/config.
The underlying problem of managing more complex software configurations in single files is not unique to OpenSSH; it is common across many software stacks which are configured in text files. Recently it became more and more common to write software in a way that configuration is not read from a single file, but from all files in a certain directory. Examples of this pattern include /etc/sudoers.d/, /etc/cron.d/ and systemd unit drop-in directories.
Since OpenSSH 7.3 the client supports the Include directive, so now it is possible to add one or even multiple files and directories from which additional configuration can be loaded.
Include the specified configuration file(s). Multiple pathnames may be specified and each pathname may contain glob(7) wildcards and, for user configurations, shell-like `~’ references to user home directories. Files without absolute paths are assumed to be in ~/.ssh if included in a user configuration file or /etc/ssh if included from the system configuration file. Include directive may appear inside a Match or Host block to perform conditional inclusion.
The following .ssh/config file defines a sub-directory from where additional configuration can be read in:
$ cat ~/.ssh/config
Include conf.d/*
Underneath ~/.ssh/conf.d there can be additional files, each containing one or more host definitions (the values below are again illustrative):
$ ls ~/.ssh/conf.d/
aws.conf
$ cat ~/.ssh/conf.d/aws.conf
Host aws-server
    HostName ec2-198-51-100-1.eu-central-1.compute.amazonaws.com
    User ec2-user
This feature made managing SSH configuration much easier for me, and I only have a few use cases and mainly require it to keep a simple overview over things. For more flexible (aka cloud based) setups this is crucial and can make things way easier.
Note that the additional config files should only contain host definitions! General SSH configuration should stay inside ~/.ssh/config and should come before the Include directive: any configuration provided after a “Host” keyword is interpreted as part of that exact host definition – until the next “Host” block or the next “Match” keyword.
Ara is a simple web server showing detailed information about Ansible runs. It is helpful in understanding and troubleshooting Ansible runs.
Ansible runs, especially on the command line, only provide limited information. Details about used variables, the timing of each task and other information are only available via additional plugins, and the details provided by those are usually narrowed to a specific use case.
A better way to provide information about Ansible runs is to collect the data and present them in a web framework. That is what Ansible Tower (or AWX, the upstream project of Tower) does, for example: collecting detailed data and providing them in the jobs overview.
But there are situations where a fully fledged Tower is too much, or where a comparative overview of the various runs is needed. This is where ara comes in:
ARA Records Ansible playbook runs and makes the recorded data available and intuitive for users and systems. It makes your Ansible playbooks easier to understand and troubleshoot.
ara was originally developed by people of the OpenStack community, and still has strong ties with it today. It does not replace Ansible Tower, since it does not manage the execution of playbooks at all. It complements the information and overview part, and in a way competes more with the logging solutions which can be connected to Ansible Tower.
How to install
The installation of ara is pretty straightforward and described in the documentation: the software is basically installed via pip, afterwards the server can be started as a locally running instance. The connection between Ansible and ara is made via action and callback plugins.
The installation of the ara package is quickly done. Note that on systems with both Python 2 and 3 you need to pick the right pip version – a sketch for a Python 3 setup, where --user matches the ~/.local/bin path mentioned below:
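$ pip3 install --user ara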
Notice that the binaries end up in ~/.local/bin. If that is not part of the $PATH variable, the server executable to start ara needs to be addressed directly, like ~/.local/bin/ara-manage runserver:
$ ~/.local/bin/ara-manage runserver
 * Serving Flask app "ara" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
2019-05-06 02:45:49,156 INFO werkzeug: * Running on http://127.0.0.1:9191/ (Press CTRL+C to quit)
2019-05-06 02:45:55,915 INFO werkzeug: 127.0.0.1 - - [06/May/2019 02:45:55] "GET / HTTP/1.1" 302 -
The web page can be accessed by pointing a web browser towards http://127.0.0.1:9191/. Since Ansible is not connected to ara yet, no data is shown:
As mentioned, a callback plugin is used to connect ara to Ansible. There are different ways to tell Ansible to use a callback plugin; the easiest is to set up an ansible.cfg with the appropriate data:
$ python3 -m ara.setup.ansible | tee -a ansible.cfg
Note here that this creates a new section named [defaults]. Check if your ansible.cfg already has a section called [defaults], and if so merge the entries manually. Now call a few playbooks and check the results:
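Any playbook of your own will do – demo-playbook.yml below is just a placeholder name:

$ ansible-playbook demo-playbook.yml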
ara provides easy access to all recorded runs, making it possible to easily compare different runs with each other. At the same time detailed information is provided for individual runs, making it easy to figure out what actually happened.
ara is an interesting attempt at better displaying the information from Ansible runs. It helps in analyzing what happens in each run, where problems might be hidden, and so on.
If you use Ansible Tower already, this information is available to you anyway. If you like the way it is presented in ara, you can even use both at the same time.