[Howto] Rebasing Fedora Silverblue – even from Rawhide to Fedora 30

I recently switched to Fedora Silverblue, the immutable desktop version of Fedora. With Silverblue, rebasing is easy – even when I had to downgrade from Rawhide to a stable release!

Fedora Silverblue is an interesting attempt at providing an immutable operating system – targeted at desktop users. Using it on a daily basis helps me get more familiar with the toolset and the ideas behind it, which are also used in other projects like Fedora Atomic or Fedora CoreOS.

When Fedora 30 was released I decided to give it a try, went to the Silverblue download page – and unfortunately picked the wrong image: the one for Rawhide.

Rawhide is the rolling release/development branch of Fedora and way too unstable for my daily usage. But I only discovered this after I had already installed it and spent quite some time customizing it.

But Silverblue is an immutable distribution, so switching to a previous version should be no problem, right? And in fact, yes, it is very easy!

Silverblue supports rebasing, i.e. switching between different branches. To get a list of available branches, first list the name of the remote source, and afterwards query the available references/branches:

[liquidat@heisenberg ~]$ ostree remote list
fedora
[liquidat@heisenberg ~]$ ostree remote refs fedora
[...]
fedora:fedora/30/x86_64/silverblue
fedora:fedora/30/x86_64/testing/silverblue
fedora:fedora/30/x86_64/updates/silverblue
[...]
fedora:fedora/rawhide/x86_64/silverblue
fedora:fedora/rawhide/x86_64/workstation

The list is quite long and covers multiple operating system versions.

In my case I was on the rawhide branch and tried to rebase to version 30. That however failed:

[liquidat@heisenberg ~]$ rpm-ostree rebase fedora/30/x86_64/silverblue
1 metadata, 0 content objects fetched; 569 B transferred in 4 seconds
Checking out tree 7420c3a... done
Enabled rpm-md repositories: rawhide
Updating metadata for 'rawhide'... done
rpm-md repo 'rawhide'; generated: 2019-05-13T08:01:20Z
Importing rpm-md... done
⠁  
Forbidden base package replacements:
  libgcc 9.1.1-1.fc30 -> 9.1.1-1.fc31 (rawhide)
  libgomp 9.1.1-1.fc30 -> 9.1.1-1.fc31 (rawhide)
This likely means that some of your layered packages have requirements on newer or older versions of some base packages. `rpm-ostree cleanup -m` may help. For more details, see: https://gith[...]
Resolving dependencies... done
error: Some base packages would be replaced

The problem was that I had installed additional packages in the meantime. Note that there are multiple ways to install packages in Silverblue:

– Flatpak apps: this is the primary way that apps get installed on Silverblue.
– Containers: which can be installed and used for development purposes.
– Toolbox containers: a special kind of container that are tailored to be used as a software development environment.

The other method of installing software on Silverblue is package layering. This is different from the other methods, and goes against the general principle of immutability. Package layering adds individual packages to the Silverblue system, and in so doing modifies the operating system.

https://docs.fedoraproject.org/en-US/fedora-silverblue/getting-started/

While Flatpak in itself is a pretty cool solution to the Linux desktop packaging problem, it usually comes with sandboxed environments, making it less usable for integrated tools and libraries.

For that reason it is still possible to install RPMs on top, in a layered form. This however might result in dependency issues when the underlying image is supposed to change.
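
Layering such a package is a single rpm-ostree call – shown here just as a sketch, with zsh as an example of a package I had layered:

$ rpm-ostree install zsh
$ rpm-ostree uninstall zsh

Each of these creates a new deployment, so the change only fully takes effect after a reboot.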

This is exactly what happened here: I had additional software installed, which depended on some specific versions of the underlying image. So I had to remove those:

[liquidat@heisenberg ~]$ rpm-ostree uninstall fedora-workstation-repositories golang pass vim zsh

Afterwards it was easy to rebase the entire system onto a different branch or – in my case – a different version of the same branch:

[liquidat@heisenberg ~]$ rpm-ostree rebase fedora/30/x86_64/silverblue
1 metadata, 0 content objects fetched; 569 B transferred in 2 seconds
Staging deployment... done
Freed: 47,4 MB (pkgcache branches: 0)
  liberation-fonts-common 1:2.00.3-3.fc30 -> 1:2.00.5-1.fc30
  [...]
Downgraded:
  GConf2 3.2.6-26.fc31 -> 3.2.6-26.fc30
  [...]
Removed:
  fedora-repos-rawhide-31-0.2.noarch
  [...]
Added:
  PackageKit-gstreamer-plugin-1.1.12-5.fc30.x86_64
  [...]
Run "systemctl reboot" to start a reboot

And that’s it – after a short systemctl reboot the machine was back, running Fedora 30. And since ostree works with images, the reboot was smooth and quick; long sessions of installing or updating software during shutdown or reboot are not necessary with such a setup!
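
A quick way to verify the result after the reboot is rpm-ostree status, which lists the available deployments and marks the currently booted one:

$ rpm-ostree status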

In conclusion I must say that I am pretty impressed – both by the concept and by its execution, and by how well Silverblue works in day-to-day use, even as a desktop. My next step will be to test it on a laptop on the road and see if other problems come up there.

Of debugging Ansible Tower and underlying cloud images

Recently I was experimenting with Tower’s isolated nodes feature – but somehow it did not work in my environment. Debugging told me a lot about Ansible Tower – and also why you should not trust arbitrary cloud images.

Background – Isolated Nodes

Ansible Tower has a nice feature called “isolated nodes”. Those are dedicated Tower instances which can manage nodes in separated environments – basically an Ansible Tower Proxy.

An Isolated Node is an Ansible Tower node that contains a small piece of software for running playbooks locally to manage a set of infrastructure. It can be deployed behind a firewall/VPC or in a remote datacenter, with only SSH access available. When a job is run that targets things managed by the isolated node, the job and its environment will be pushed to the isolated node over SSH, where it will run as normal.

Ansible Tower Feature Spotlight: Instance Groups and Isolated Nodes

Isolated nodes are especially handy when you set up your automation in security-sensitive environments. Think of DMZs here, of network separation and so on.

I was fooling around with a clustered Tower installation on RHEL 7 VMs in a cloud environment when I ran into trouble, though.

My problem – Isolated node unavailable

Isolated nodes – like instance groups – have a status inside Tower: if things are problematic, they are marked as unavailable. And this is what happened with my instance isonode.remote.example.com running in my lab environment:

Ansible Tower showing an instance node as unavailable

I tried to turn it “off” and “on” again with the button in the control interface. That made the node available, and it was even able to execute jobs – but it quickly became unavailable again soon after.

Analysis

So what happened? The Tower logs showed a Python error:

# tail -f /var/log/tower/tower.log
fatal: [isonode.remote.example.com]: FAILED! => {"changed": false,
"module_stderr": "Shared connection to isonode.remote.example.com
closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n
File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1552400585.04
-60203645751230/AnsiballZ_awx_capacity.py\", line 113, in <module>\r\n
_ansiballz_main()\r\n  File \"/var/lib/awx/.ansible/tmp/ansible-tmp
-1552400585.04-60203645751230/AnsiballZ_awx_capacity.py\", line 105, in
_ansiballz_main\r\n    invoke_module(zipped_mod, temp_path,
ANSIBALLZ_PARAMS)\r\n  File \"/var/lib/awx/.ansible/tmp/ansible-tmp
-1552400585.04-60203645751230/AnsiballZ_awx_capacity.py\", line 48, in
invoke_module\r\n    imp.load_module('__main__', mod, module, MOD_DESC)\r\n
File \"/tmp/ansible_awx_capacity_payload_6p5kHp/__main__.py\", line 74, in
<module>\r\n  File \"/tmp/ansible_awx_capacity_payload_6p5kHp/__main__.py\",
line 60, in main\r\n  File
\"/tmp/ansible_awx_capacity_payload_6p5kHp/__main__.py\", line 27, in
get_cpu_capacity\r\nAttributeError: 'module' object has no attribute
'cpu_count'\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact
error", "rc": 1}

PLAY RECAP *********************************************************************
isonode.remote.example.com : ok=0    changed=0    unreachable=0    failed=1  

Apparently a Python function was missing. If we check the code we see that indeed in line 27 of file awx_capacity.py the function psutil.cpu_count() is called:

def get_cpu_capacity():
    env_forkcpu = os.getenv('SYSTEM_TASK_FORKS_CPU', None)
    cpu = psutil.cpu_count()

Support for this function was added in version 2.0 of psutil:

2014-03-10
Enhancements
424: [Windows] installer for Python 3.X 64 bit.
427: number of logical and physical CPUs (psutil.cpu_count()).

psutil history

Note the date here: 2014-03-10 – pretty old! I checked the version of the installed package, and indeed it was pre-2.0:

$ rpm -q --queryformat '%{VERSION}\n' python-psutil
1.2.1

To be really sure and also to ensure that there was no weird function backporting, I checked the function call directly on the Tower machine:

# python
Python 2.7.5 (default, Sep 12 2018, 05:31:16) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import inspect
>>> import psutil as module
>>> functions = inspect.getmembers(module, inspect.isfunction)
>>> functions
[('_assert_pid_not_reused', <function _assert_pid_not_reused at
0x7f9eb10a8d70>), ('_deprecated', <function deprecated at 0x7f9eb38ec320>),
('_wraps', <function wraps at 0x7f9eb414f848>), ('avail_phymem', <function
avail_phymem at 0x7f9eb0c32ed8>), ('avail_virtmem', <function avail_virtmem at
0x7f9eb0c36398>), ('cached_phymem', <function cached_phymem at
0x7f9eb10a86e0>), ('cpu_percent', <function cpu_percent at 0x7f9eb0c32320>),
('cpu_times', <function cpu_times at 0x7f9eb0c322a8>), ('cpu_times_percent',
<function cpu_times_percent at 0x7f9eb0c326e0>), ('disk_io_counters',
<function disk_io_counters at 0x7f9eb0c32938>), ('disk_partitions', <function
disk_partitions at 0x7f9eb0c328c0>), ('disk_usage', <function disk_usage at
0x7f9eb0c32848>), ('get_boot_time', <function get_boot_time at
0x7f9eb0c32a28>), ('get_pid_list', <function get_pid_list at 0x7f9eb0c4b410>),
('get_process_list', <function get_process_list at 0x7f9eb0c32c08>),
('get_users', <function get_users at 0x7f9eb0c32aa0>), ('namedtuple',
<function namedtuple at 0x7f9ebc84df50>), ('net_io_counters', <function
net_io_counters at 0x7f9eb0c329b0>), ('network_io_counters', <function
network_io_counters at 0x7f9eb0c36500>), ('phymem_buffers', <function
phymem_buffers at 0x7f9eb10a8848>), ('phymem_usage', <function phymem_usage at
0x7f9eb0c32cf8>), ('pid_exists', <function pid_exists at 0x7f9eb0c32140>),
('process_iter', <function process_iter at 0x7f9eb0c321b8>), ('swap_memory',
<function swap_memory at 0x7f9eb0c327d0>), ('test', <function test at
0x7f9eb0c32b18>), ('total_virtmem', <function total_virtmem at
0x7f9eb0c361b8>), ('used_phymem', <function used_phymem at 0x7f9eb0c36050>),
('used_virtmem', <function used_virtmem at 0x7f9eb0c362a8>), ('virtmem_usage',
<function virtmem_usage at 0x7f9eb0c32de8>), ('virtual_memory', <function
virtual_memory at 0x7f9eb0c32758>), ('wait_procs', <function wait_procs at
0x7f9eb0c32230>)]

Searching for a package origin

So how to solve this issue? My first idea was to get it working by porting the code in question to the multiprocessing library:

# python
Python 2.7.5 (default, Sep 12 2018, 05:31:16) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import multiprocessing
>>> cpu = multiprocessing.cpu_count()
>>> cpu
4

But while I was filing a bug report I wondered why RHEL shipped such an ancient library. After all, RHEL 7 was released in June 2014, and psutil has had cpu_count available since early 2014! And indeed, a quick search for the package via the Red Hat package search showed a weird result: python-psutil was never part of base RHEL 7! It was only shipped as part of some very, very old OpenStack channels:

access.redhat.com package search, results for python-psutil

Newer OpenStack channels in fact come along with newer versions of python-psutil.

So how did this outdated package end up on this RHEL 7 image? Why was it never updated?

The cloud image is to blame! The package was installed on it – most likely during the creation of the image: python-psutil is needed for OpenStack Heat, so I assume that these RHEL 7 images were once created via OpenStack and then used as the default image in this demo environment.

And after the initial creation of the image the Heat packages were forgotten. In the meantime the image was updated to newer RHEL versions, snapshots were created as new defaults and so on. But since the package in question was never part of the main RHEL repos, it was never changed or removed. It just stayed there. Waiting, apparently, for me 😉

Conclusion

This issue showed me how tricky cloud images can be. Think about your own cloud images: have you really checked all of them and verified that no package, no start-up script, no configuration was changed from the Linux distribution vendor’s base setup?

With RPMs this is still manageable: you can track whether packages are installed which are not present in the enabled channels. But did someone install something with pip? Or in any other way?
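
A rough first check – just a sketch, the exact commands depend on distribution and setup – is to list installed RPMs that do not belong to any enabled repository, and to see what pip knows about:

$ sudo yum list extras   # installed packages not found in any enabled repository
$ sudo pip list          # Python packages visible to pip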

Take my case: an outdated version of a library was called instead of a much, much more recent one. If there had been a serious security issue with the library in the meantime, I would have been exposed even though my update management did not report any library that needed updating.

I learned my lesson: be more critical of cloud images and check them in more detail in the future, to avoid nasty surprises in production. And I can only recommend that you do the same.

Flatpak – a solution to the Linux desktop packaging problem [Update]

Linux packaging was a nightmare for years. But recently serious contenders came up claiming to solve the challenge: first containers changed how code is deployed on servers for good. And now a solution for the desktops is within reach. Meet Flatpak!

Preface

I should probably admit up front that over the years I identified packaging within the Linux ecosystem as a fundamental problem. It prevented wider adoption of Linux in general, but especially on the desktop. I was kind of obsessed with the topic.

The general arguments were/are:

  • Due to a missing standard it was not easy enough for developers to package software. If they used one of the formats out there they could only target a subset of distributions. This led to lower adoption of the software on Linux, making it a less attractive platform.
  • Since Linux was less attractive for developers, fewer applications were created for or ported to Linux. This led to a smaller ecosystem. Thus it was less attractive to users since they could not find appealing or helpful applications.
  • Due to missing packages in an easily accessible format, installing software was a challenge as soon as it was not packaged for the distribution in use. So Linux was a lot less attractive for users because little software was available.

History, server side

In hindsight I must say the situation was not as bad as I thought on the server level: Linux in the data center grew and grew. Packaging simply did not matter that much because admins were used to problems deploying applications on servers anyway and they had the proper knowledge (and time) to tackle challenges.

Additionally, the recent rise of container technologies like Docker had a massive impact: it made deploying apps much easier and added other benefits like sandboxing, detailed access permissions, clearer responsibilities especially with dev and ops teams involved, and fewer dependency hell problems. Together with Kubernetes it seems as if an actual standard is evolving for how software is deployed on Linux servers.

To summarize, in the server ecosystem things never were as bad, and are quite good these days. Given that Azure serves more Linux servers than Windows servers there are reasons to believe that Linux is these days the dominant server platform and that Windows is more and more becoming a niche platform.

History, client side

On the desktop side things were bad right from the start. Distribution-specific packaging made compatibility a serious problem, and the incompatible RPM and DEB packaging formats made it worse. One reason why no package format ever won was probably that no solution offered real benefits over the other. Compared to today’s solutions for packaging software, RPM and DEB are missing major advantages like sandboxing and permission systems. They are hopelessly outdated, and I question whether they are suited for software packaging at all today.

There were attempts to solve the problem. There were attempts at standardization – for example via the LSB – but they did not gain enough traction. There were platform-agnostic packaging solutions. Most notable is Klik, which started some 15 years ago and was later renamed to AppImage. But despite the good intentions and the ease of use it never gained serious attention over the years.

But with the arrival of Docker things changed: people saw the benefits of container formats, and the technology for such approaches was widely available. So people gave the idea another try: Flatpak.

Flatpak

Flatpak is a “technology for building and distributing desktop applications on Linux”. It is an attempt to establish an application container format for Linux-based desktops and make applications easily consumable.

According to the history of Flatpak the initial idea goes way back. Real work started in 2014, and the first release was in 2015. It was developed initially in the ecosystem of Fedora and Red Hat, but soon got attention from other distributions as well.

Many features look somewhat similar to the typical features associated with container tools like Docker:

  • Build for every distro
  • Consistent environments
  • Full control over dependencies
  • Easy-to-use build tools
  • Future-proof builds
  • Distribution of packages made easy

Additionally it features a sandboxing environment and a permissions system.

The most appealing feature for end users is that it makes it simple to install packages and that there are many packages available, because developers only have to build them once to support a huge range of distributions.

By using Flatpak the software version is also not tied to the distribution update cycle. Flatpak can update all installed packages centrally as well.
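
Keeping everything current is then a single, distribution-independent command – a minimal example:

$ flatpak list     # installed applications and runtimes
$ flatpak update   # update all of them centrally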

Flathub

One thing I like about Flatpak is that it was built with repositories (“shops”) baked right in. There is a large repository called flathub.org where developers can submit their applications to be found and consumed by users.

The interface is simple but reasonably well designed. Each application features screenshots and a summary. The apps themselves are grouped by categories. The constantly changing list of new & updated apps shows that the catalog keeps growing. A list of the two dozen most popular apps is available as well.

I am a total fan of Open Source, but I do like the fact that there are multiple closed source apps listed in the store. It shows that the format can be used for such use cases, and that is a sign of a healthy ecosystem. Also, there are quite a few games, which is always good 😉

Of course there is lots of room for improvement: at the time of writing there is no way to change or filter the sorting order of the lists. There is no popularity rating visible and no way to rate applications or leave comments.

Last but not least, there is currently little support from external vendors. While you find many closed source applications in Flathub, hardly any of them were provided by the software vendor itself. They were created by the community but are not affiliated with the vendors. For a broader acceptance of Flatpak the support of software vendors is crucial, and this needs to be highlighted on the web page as well (“verified vendor” or similar).

Hosting your own hub

As mentioned, Flatpak has repositories baked in, and it is well documented. It is easy to generate your own repository for your own flatpaks. This is especially appealing to projects or vendors who prefer to host their applications themselves instead of relying on a central hub.

While today it is more or less common to use a central market (Android, iOS, etc.), some still prefer to keep their code in their own hands. It sometimes makes it easier to provide testing and development versions. Other use cases are software which is only developed and used in-house, or the mirroring of existing repositories for security or offline reasons: such use cases require local hubs, and it is no problem at all to bring them up with Flatpak.
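
As a rough sketch of how such a local hub could look – the repository path and application ID are made up for illustration – a finished build is exported into an OSTree repository, which clients then add as a remote:

$ flatpak build-export ~/my-repo build-dir              # export a build into the repository ~/my-repo
$ flatpak remote-add --no-gpg-verify my-repo ~/my-repo  # add it as a remote on a client (unsigned, testing only)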

Flatpak, distribution support

Flatpak is currently supported on most distributions. Many of them have the support built in right from the start; others, most notably Ubuntu, need to install some software first. But in general it is quite easy to get started – and once you have, there are hundreds of applications you can use.
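
Getting started typically boils down to two commands – for example adding the Flathub remote and installing an application from it (GIMP here, purely as an illustration):

$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
$ flatpak install flathub org.gimp.GIMP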

What about the other solutions?

Of course Flatpak is not the only solution out there. After all, this is the open source world we are talking about, so there must be other solutions 😉

Snap & Snapcraft

Snapcraft is a way to “deliver and update your app on any Linux distribution – for desktop, cloud, and Internet of Things.” The concept and idea behind it is somewhat similar to Flatpak, with a few notable differences:

  • Snapcraft also targets servers, while Flatpak only targets desktops
  • Among the servers, Snapcraft additionally has IoT devices as a specific target group
  • Snapcraft does not support additional repositories; there is only one central market place everyone needs to use, and there is no real way to change that

Some more technical differences lie in the way packages are built, how the sandboxing works and so on, but we will not focus on those in this post.

The Snapcraft market place, snapcraft.io, provides lists of applications but is much more mature than Flathub: it has vendor testimonials, features verified accounts, multiple versions like beta or development can be picked from within the market, there are case stories, additional blog posts are listed for each app, there is integration with social accounts, and you can even see the distribution by countries and Linux flavors.

And as you can see, Snapcraft is endorsed and supported by multiple companies today which are listed on the web page and which maintain their applications in the market.

Flathub has a lot to learn until it reaches the same level of maturity. However, while I’d say that snapcraft.io is much more mature than Flathub, it also lacks the possibility to rate packages or simply list them by popularity. Am I the only one who wants that?

The main disadvantage I see is the monopoly. snapcraft.io is tightly controlled by a single company (not a foundation or similar). It is of course Canonical’s full right to do so, and the company and many others argue that this is not different from what Apple does with iOS. However, the Linux ecosystem is not the Apple ecosystem, and in the Linux ecosystem there are often strong opinions about monopolies, closed source solutions and related topics which might lead to acceptance problems in the long term.

Also, technically it is not possible to launch your own central server, for example for in-house development, for hosting a local mirror, to support offline environments or for other reasons. To me this is particularly surprising given that Snapcraft specifically targets IoT devices, and I would run IoT devices in a closed network wherever I can – thus being unable to connect to snapcraft.io. The only solution I was able to identify was running an HTTP proxy, which is far from optimal.

Another slightly unusual feature of Snapcraft is that updates are installed automatically – thanks to theo.9dor for the hint:

The good news is that snaps are updated automatically in the background every day! 

https://tutorials.ubuntu.com/tutorial/basic-snap-usage#2

While in the end a development model with automatic deployments, even dozens per day, is a worthwhile goal, I am not sure everyone is there yet.

So while Snapcraft has a mature market place, targets many more use cases and provides more packages to date, I do wonder how it will turn out in the long run, given that we are talking about the Linux ecosystem here. And while Canonical has quite some experience developing their own solutions outside the “rest” of the community, those attempts have seldom worked out.

AppImage

I’ve already mentioned AppImage above, and I’ve written about it in the past when it was still called Klik. AppImage is a “way for upstream developers to provide native binaries for Linux”. The result is basically a file that contains your entire application and which you can copy anywhere. It has existed for more than a dozen years now.

The thing that is probably most worth mentioning about it is that it never caught on. After all, it provided many impressive features a long time ago already, and made it possible to install software across distributions. Many applications were also available as AppImages – and yet I never saw wider adoption. It seems to me that it only got traction recently because Snapcraft and Flatpak entered the market and kind of dragged it along with them.

I’d love to understand why that is the case, or to have an answer to the “why”. I have only a few ideas, but those are just ideas, not explanations of why AppImage, in all those years, never managed to become the Docker of the Linux desktop.

Maybe one problem was that it never featured a proper store: today we know from multiple examples on multiple platforms that a store can make the difference. A central place for users to browse, get a first idea of an app, leave comments and rate the application. Docker has a central “store”, Android and iOS have one, Flatpak and Snapcraft have one. However, AppImage never put a focus on that, and I do wonder if this was a missed opportunity. And no, appimage.github.io/apps is not a store.

Another difference to the other tools is that AppImage always focused on Open Source tools. Don’t get me wrong, I appreciate it – but open source tools like Digikam were available on every distribution anyway. If AppImage had focused on reaching out to closed source software vendors as well, and marketed this aggressively, maybe things would have turned out differently. You not only need to make software easily available to users, you also need to make available the software people want.

Last but not least, AppImage always tried to provide as many features as possible, while it might have benefited from focusing on a few and marketing them more strongly. As an example, AppImage advertises that it can run with and without sandboxing. However, sandboxing is a large benefit of using such a solution to begin with. Another thing is integrated updates: there is a way to automatically update all AppImages on a system, but it is not built in. If both had been the default rather than optional, things might have been different.

But again, these are just ideas, attempts to find explanations. I’d be happy if someone has better ideas.

Disadvantages of the Flatpak approach

There are some disadvantages with the Flatpak approach – or the Snapcraft one, or in general with any container approach. Most notably: libraries and dependencies.

The basic argument here is: all dependencies are kept in each package. This means:

  • Multiple copies of the same libraries on the system, leading to larger disk consumption
  • Multiple copies of the same library in the RAM during execution, leading to larger memory consumption
  • And probably most important: if a library has a security problem, each and every package has to be updated

Especially the last part is crucial: in case of a serious library security problem the user has to rely on each and every package vendor to update the library in their package and release a new version. With a dependency-based system this is usually not the case.

People often compare this problem to the Windows or Java world, where a similar situation exists. However, while the underlying problem is real and serious, with Flatpak there is at least a sandbox and a permission system – something that was not the case in former Windows versions.

A trade-off has to be made between the added security of permissions and sandboxing and the risk of outdated libraries in those packages. That trade-off is not easily made.

But do we even need something like Flatpak?

This question might be strange, given the needs I identified in the past and my obvious enthusiasm for it. However, these days more and more apps are created as web applications – the importance of the desktop is shrinking. The dominant platforms for users these days are mobile phones and tablets anyway. I would even go so far as to say that in the future desktops will still be there, but mainly to launch a web browser.

But we are not there yet, and today there is still a need for easy consumption of software on Linux desktops. I just would have hoped to see this technology – and this much traction, distribution support and vendor support – 10 years ago.

Conclusion

Well – as I mentioned early on, I can get somewhat obsessed with the topic. And this much too long blog post shows this for sure 😉

But as a conclusion I’d say that the days of difficult-to-install software on Linux desktops are gone. I am not sure whether Snapcraft or Flatpak will “win” the race; we will have to see.

At the same time we have to face the fact that desktops in general are just not that important anymore. But until then, I am very happy that it has become so much easier for me to install certain pieces of software in up-to-date versions on my machine.

[Short Tip] Workaround MIT-SHM error when starting QT/KDE apps with SUDO

Starting GUI programs as root usually is not a problem. In the worst case, sudo inside a terminal should do the trick.

However, recently I had to start a Qt application with sudo from within GNOME. It was the YubiKey configuration GUI, a third-party tool and thus not part of any desktop environment. Executing the app failed: it only showed a gray window and printed multiple errors on the command line:

$ sudo /usr/bin/yubikey-personalization-gui 
X Error: BadAccess (attempt to access private resource denied) 10
  Extension:    130 (MIT-SHM)
  Minor opcode: 1 (X_ShmAttach)
  Resource id:  0x142
X Error: BadShmSeg (invalid shared segment parameter) 128
  Extension:    130 (MIT-SHM)
  Minor opcode: 5 (X_ShmCreatePixmap)
  Resource id:  0xfa
X Error: BadDrawable (invalid Pixmap or Window parameter) 9
  Major opcode: 62 (X_CopyArea)
  Resource id:  0x2800015

Workarounds like pkexec and adding a PolicyKit rule didn’t help either. The error indicates that there is a problem with the MIT Shared Memory Extension of X.

A good workaround is to deactivate the usage of the extension on the command line:

$ sudo QT_X11_NO_MITSHM=1 /usr/bin/yubikey-personalization-gui

It works like a charm.

[Howto] Writing an Ansible module for a REST API

Ansible comes along with a great set of modules. But maybe your favorite tool is not covered yet and you need to develop your own module. This guide shows you how to write an Ansible module – when you have a REST API to speak to.

Background: Ansible modules

Ansible is a great tool to automate almost everything in an IT environment. One of the huge benefits of Ansible are the so-called modules: they provide a way to address automation tasks in the native language of the problem. For example, given that a user needs to be created: this is usually done by calling certain commands on the shell. In that case the automation developer has to think about which command line tool needs to be used, which parameters and options need to be provided – and the result is most likely not idempotent. And it’s hard to run tests (“checks”) with such an approach.

Enter Ansible’s user module: with it the automation developer only has to provide the data needed for the actual problem, like the user name, group name, etc. There is no need to remember the user management tool of the target platform or to look up its parameters:

$ ansible server -m user -a "name=abc group=wheel" -b

Ansible comes along with hundreds of modules. But what if your favorite task or tool is not supported by any module? Then you have to write your own Ansible module. If your tool provides a REST API, there are a few things to know which make it much easier to get your module running fine with Ansible. These few things are outlined below.

REST APIs and Python libraries in Ansible modules

According to Wikipedia, REST is:

… the software architectural style of the World Wide Web.

In short, it’s a way to write, provide and access an API via the usual HTTP tools and libraries (Apache web server, curl, you name it), and it is very common in everything related to the WWW.

To access a REST API via an Ansible module, there are a few things to note. Ansible modules are usually written in Python. The library of choice to access URLs and thus REST APIs in Python is usually urllib. However, that library is not the easiest to use, and there are some security topics to keep in mind when it is used. For these reasons alternative libraries like Python requests came up in the past and are pretty common.

However, using an external library in an Ansible module would add an extra dependency, thus the Ansible developers added their own library inside Ansible to access URLs: ansible.module_utils.urls. This one is already shipped with Ansible – the code can be found at lib/ansible/module_utils/urls.py – and it covers the shortcomings and security concerns of urllib. If you submit a module to Ansible that calls REST APIs, the Ansible developers usually require that you use the built-in library.

Unfortunately, the documentation on the Ansible url library is currently sparse at best. If you need information about it, look at other modules like the Github, Kubernetes or a10 modules. To fill that documentation gap I will try to cover the most important basics in the following lines – at least as far as I know them.

Creating REST calls in an Ansible module

To access the Ansible urls library right in your modules, it needs to be imported in the same way as the basic library is imported in the module:

from ansible.module_utils.basic import *
from ansible.module_utils.urls import *

The main function call to access a URL via this library is open_url. It can take multiple parameters:

def open_url(url, data=None, headers=None, method=None, use_proxy=True,
        force=False, last_mod_time=None, timeout=10, validate_certs=True,
        url_username=None, url_password=None, http_agent=None,
        force_basic_auth=False, follow_redirects='urllib2'):

The parameters in detail are:

  • url: the actual URL, the communication endpoint of your REST API
  • data: the payload for the URL request, for example a JSON structure
  • headers: additional headers, often this includes the content-type of the data stream
  • method: a URL call can be of various methods: GET, DELETE, PUT, etc.
  • use_proxy: if a proxy is to be used or not
  • force: force an update even if a 304 indicates that nothing has changed (I think…)
  • last_mod_time: the time stamp to add to the header in case we get a 304
  • timeout: set a timeout
  • validate_certs: if certificates should be validated or not; important for test setups where you have self signed certificates
  • url_username: the user name to authenticate
  • url_password: the password for the above listed username
  • http_agent: if you want to set the HTTP agent
  • force_basic_auth: force the usage of basic authentication
  • follow_redirects: determine how redirects are handled

For example, to fire a simple GET to a given source like Google most parameters are not needed and it would look like:

open_url('https://www.google.com',method="GET")

A more sophisticated example is to push actual information to a REST API. For example, if you want to search for the domain example on a Satellite server you need to change the method to PUT, add a data structure to set the actual search string ({"search":"example"}) and add a corresponding content type as header information ({'Content-Type':'application/json'}). Also, a username and password must be provided. Given that we access a test system here, certificate validation needs to be turned off as well. The resulting call looks like this:

open_url('https://satellite-server.example.com/api/v2/domains',method="PUT",url_username="admin",url_password="abcd",data=json.dumps({"search":"example"}),force_basic_auth=True,validate_certs=False,headers={'Content-Type':'application/json'})

Beware that the data structure needs to be processed by json.dumps. The result of the query can be parsed as JSON and further used as a JSON structure:

resp = open_url(...)
resp_json = json.loads(resp.read())
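
By the way, before wiring such a call into a module it can help to verify the endpoint independently. Here is a rough curl equivalent of the Satellite call above, reusing the same test credentials and skipping certificate validation:

$ curl -k -u admin:abcd -X PUT \
    -H 'Content-Type: application/json' \
    -d '{"search":"example"}' \
    https://satellite-server.example.com/api/v2/domains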

Full example

In the following example, we query a Satellite server to find a so-called environment ID for two given parameters, an organization ID and an environment name. To create a REST call for this task in a module, multiple separate steps are needed: first, create the actual URL endpoint. This usually consists of the server name as a variable and the API endpoint as the flexible part which differs in each REST call.

server_name = 'https://satellite.example.com'
api_endpoint = '/katello/api/v2/environments/'
my_url = server_name + api_endpoint

Besides the actual URL, the payload must be pieced together and the headers need to be set according to the content type of the payload – here json:

headers = {'Content-Type':'application/json'}
payload = {"organization_id":orga_id,"name":env_name}

Other content types depend on the REST API itself and on what the developer prefers. JSON is widely accepted as a good way to go for REST calls.

Next, we set the user and password and launch the call. The data returned from the call is saved in a variable to be analyzed later on.

user = 'abc'
pwd = 'def'
resp = open_url(my_url,method="GET",headers=headers,url_username=user,url_password=pwd,force_basic_auth=True,data=json.dumps(payload))

Last but not least we transform the return value into a JSON construct and analyze it: if the return value does not contain any data – that means the value for the key total is zero – we want the module to exit with an error. Something went wrong, and the automation administrator needs to know that. The module calls the built-in error function module.fail_json. But if the total is not zero, we extract the actual environment ID we were looking for with this REST call from the beginning – it is deeply hidden in the JSON structure, by the way.

resp_json = json.loads(resp.read())
if resp_json["total"] == 0:
    module.fail_json(msg="Environment %s not found." % env_name)
env_id = resp_json["results"][0]["id"]

Summary

It is fairly easy to write Ansible modules to access REST APIs. The most important thing to know is that the internal, Ansible-provided library should be used instead of the better-known urllib or requests libraries. Also, the actual library documentation is still pretty limited, but that gap is partially filled by this post.

[Howto] Keeping temporary Ansible scripts

Ansible tasks are executed on the target machine via generated Python scripts. For debugging it might make sense to analyze the scripts – so Ansible must be told not to delete them.

When Ansible executes a command on a remote host, usually a Python script is copied, executed and removed immediately. For each task, a script is copied and executed, as shown in the logs:

Feb 25 07:40:44 ansible-demo-helium sshd[2395]: Accepted publickey for liquidat from 192.168.122.1 port 54108 ssh2: RSA 78:7c:4a:15:17:b2:62:af:0b:ac:34:4a:00:c0:9a:1c
Feb 25 07:40:44 ansible-demo-helium systemd[1]: Started Session 7 of user liquidat
Feb 25 07:40:44 ansible-demo-helium sshd[2395]: pam_unix(sshd:session): session opened for user liquidat by (uid=0)
Feb 25 07:40:44 ansible-demo-helium systemd-logind[484]: New session 7 of user liquidat.
Feb 25 07:40:44 ansible-demo-helium systemd[1]: Starting Session 7 of user liquidat.
Feb 25 07:40:45 ansible-demo-helium ansible-yum[2399]: Invoked with name=['httpd'] list=None install_repoquery=True conf_file=None disable_gpg_check=False state=absent disablerepo=None update_cache=False enablerepo=None exclude=None
Feb 25 07:40:45 ansible-demo-helium sshd[2398]: Received disconnect from 192.168.122.1: 11: disconnected by user
Feb 25 07:40:45 ansible-demo-helium sshd[2395]: pam_unix(sshd:session): session closed for user liquidat

However, for debugging it might make sense to keep the script and execute it locally. Ansible can be persuaded to keep a script by setting the environment variable ANSIBLE_KEEP_REMOTE_FILES to true on the command line:

$ ANSIBLE_KEEP_REMOTE_FILES=1 ansible helium -m yum -a "name=httpd state=absent"

The actually executed command – and the path of the created temporary file – is revealed when ansible is executed with increased verbosity:

$ ANSIBLE_KEEP_REMOTE_FILES=1 ansible helium -m yum -a "name=httpd state=absent" -vvv
...
<192.168.122.202> SSH: EXEC ssh -C -vvv -o ForwardAgent=yes -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -tt 192.168.122.202 'LANG=de_DE.UTF-8 LC_ALL=de_DE.UTF-8 LC_MESSAGES=de_DE.UTF-8 /usr/bin/python -tt /home/liquidat/.ansible/tmp/ansible-tmp-1456498240.12-1738868183958/yum'
...

Note that here the script is executed directly via Python. If the “become” flag is set, the Python execution is routed through a shell; the command then looks like /bin/sh -c 'sudo -u $SUDO_USER /bin/sh -c "/usr/bin/python $SCRIPT"'.

The temporary file is a Python script, as the header shows:

$ head yum 
#!/usr/bin/python -tt
# -*- coding: utf-8 -*-
# -*- coding: utf-8 -*-

# (c) 2012, Red Hat, Inc
# Written by Seth Vidal <skvidal at fedoraproject.org>
# (c) 2014, Epic Games, Inc.
#
# This file is part of Ansible
...

The script can afterwards be executed by /usr/bin/python yum or /bin/sh -c 'sudo -u $SUDO_USER /bin/sh -c "/usr/bin/python yum"' respectively:

$ /bin/sh -c 'sudo -u root /bin/sh -c "/usr/bin/python yum"'
{"msg": "", "invocation": {"module_args": {"name": ["httpd"], "list": null, "install_repoquery": true, "conf_file": null, "disable_gpg_check": false, "state": "absent", ...

More detailed information about debugging Ansible can be found at Will Thames’ article “Debugging Ansible for fun and no profit”.

[Howto] Look up of external sources in Ansible

Part of Ansible’s power comes from an easy integration with other systems. In this post I will cover how to look up data from external sources like DNS or Redis.

Background

A tool for automation is only as good as its ability to integrate with the already existing environment – thus with other tools. Among other ways, Ansible offers the possibility to look up variables from external stores like DNS, Redis, etcd or even generic INI or CSV files. This enables Ansible to easily access data which is stored – and changed, managed – outside of Ansible.

Setup

Ansible’s lookup feature is already installed by default.

Queries are executed on the host where the playbook is executed – in the case of Tower this would be the Tower host itself. Thus that node needs access to the resources which need to be queried.

Some lookup functions for example for DNS or Redis servers require additional python libraries – on the host actually executing the queries! On Fedora, the python-dns package is necessary for DNS queries and the package python-redis for Redis queries.
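
On Fedora this boils down to:

$ sudo dnf install python-dns python-redis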

Generic usage

The lookup function can be used the exact same way variables are used: curly brackets surround the lookup function, the result is placed where the variable would be. That means lookup functions can be used in the head of a playbook, inside the tasks, even in templates.

The lookup command itself has to list the plugin as well as the arguments for the plugin:

{{ lookup('plugin','arguments') }}

Examples

Files

Entire files can be used as content of a variable. This is simply done via:

vars:
  content: "{{ lookup('file','lorem.txt') }}"

As a result, the variable has the entire content of the file. Note that the lookup of files always searches the files relative to the path of the actual playbook, not relative to the path where the command is executed.

Also, the lookup might fail when the file itself contains quote characters.

CSV

While the file lookup is pretty simple and generic, the CSV lookup module gives the ability to access values of given keys in a CSV file. An optional parameter can identify the appropriate column. For example, if the following CSV file is given:

$ cat gamma.csv
daytime,time,meal
breakfast,7,soup
lunch,12,rice
tea,15,cake
dinner,18,noodles

Now the lookup function for CSV files can access the lines identified by keys which are compared to the values of the first column. The following example looks up the key dinner and gives back the entry of the third column: {{ lookup('csvfile','dinner file=gamma.csv delimiter=, col=2') }}.

Inserted in a playbook and executed, this looks like:

ansible-playbook examples/lookup.yml

PLAY [demo lookups] *********************************************************** 

GATHERING FACTS ***************************************************************
ok: [neon]

TASK: [lookup of a csv file] **************************************************
ok: [neon] => {
    "msg": "noodles"
}

PLAY RECAP ********************************************************************
neon                       : ok=2    changed=0    unreachable=0    failed=0

The corresponding playbook outputs the variable via the debug module:

---
- name: demo lookups
  hosts: neon

  tasks:
    - name: lookup of a csv file
      debug: msg="{{ lookup('csvfile','dinner file=gamma.csv delimiter=, col=2') }}"

DNS

The DNS lookup is particularly interesting in cases where the local DNS provides a lot of information like SSH fingerprints or the MX record.

The DNS lookup plugin is called dig – like the command line client dig. As arguments, the plugin takes a domain name and the DNS type: {{ lookup('dig', 'redhat.com. qtype=MX') }}. Another way to hand over the type argument is via slash: {{ lookup('dig', 'redhat.com./MX') }}

The result for this example is:

TASK: [lookup of dns dig entries] *********************************************
ok: [neon] => {
    "msg": "10 int-mx.corp.redhat.com."
}
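
For comparison, the same record can be queried manually with the dig command line client the plugin is named after:

$ dig +short redhat.com MX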

Redis

It gets even more interesting when existing databases are queried. Ansible lookups support for example Redis databases. The plugin takes the entire URL as its argument: redis://$URL:$PORT,$KEY.

For example, to query a local Redis server for the key dinner:

---
tasks:
  - name: lookup of redis entries
    debug: msg="{{ lookup('redis_kv', 'redis://localhost:6379,dinner') }}"

The result is:

TASK: [lookup of redis entries] ***********************************************
ok: [neon] => {
    "msg": "noodles"
}
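
For completeness: the key queried above can be created beforehand with the Redis command line client, for example:

$ redis-cli set dinner noodles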

Template

As already mentioned, lookups can not only be used in Playbooks, but also directly in templates. For example, given the template code:

$ cat template.j2
...
Red Hat MX: {{ lookup('dig', 'redhat.com./MX') }}
$ cat template.conf
...
Red Hat MX: 10 mx2.redhat.com.,5 mx1.redhat.com.

Conclusion

As shown, Ansible’s lookup plugins provide many possibilities to integrate Ansible with existing tools and environments which already contain valuable data about the systems. They are easy to use, fit well into the existing Ansible concepts and can be adopted quickly. Just drop a lookup where a variable would go, and it already works.

I am looking forward to more lookup plugins in the future – I’d love to see a generic “http” and a generic “SQL” plugin, even with the ability to provide credentials, although these features can already be somewhat realized with existing modules.