[Howto] Create your own cloud gaming server to stream games to Fedora

A few months back I wanted to give a game a try which only runs on Windows and requires a dedicated GPU. Since I have neither of those, I decided to set up my own Windows cloud gaming server to stream the game to my Linux machine.

Decades ago there was one game I played day and night, for weeks, months, maybe even years: UFO: Enemy Unknown. To this day I can remember its distinct soundtrack, which makes the hair stand up on the back of my neck. I loved the game! A few years ago I also spent quite some time with one of the open source games inspired by UFO, UFO: AI. That was fun.

Two sequels to the original game were released over the last couple of years. But they never really were an option since they required Windows (or so I thought) and, above all, time. However, a few months ago I realized that one of the sequels, XCOM: Enemy Unknown, was available for Android. Since I have a brand new flagship Android tablet I gave it a shot – and it was great! But since the Android version was seriously limited, I played it again on Linux. That barely worked with my limited Intel GPU, but it was playable, and I had fun.

I was infected with the urge to play the game more – and when a third sequel was announced, I at least wanted to play the second one, XCOM 2. But how? My GPU was too limited, and eGPUs are expensive and often involve a lot of hassle – even if I were willing to buy a Windows license. So I looked into whether cloud gaming could do the trick.

Cloud Gaming Services

The idea of cloud gaming is that heavy machines in the data center do the rendering, and the client machine only displays the end result. That shifts the need for a powerful GPU to the data center, and the client only needs simple graphics to show a stream of images. This does, however, require a rather responsive broadband connection between the client and the data center.

This principle is not new, but it got new attention recently when Google announced their cloud gaming offering Stadia. I checked if any cloud gaming services offered my game of choice – and were available on Linux. Unfortunately, the results were disappointing:

  • Stadia: no XCOM2; no native Linux client, though it can be used via the Chrome browser (thanks to zesoup for the hint)
  • GeForce Now: no XCOM2, no Linux client
  • Playstation Now: XCOM2 available, but no Linux client
  • Vortex: no XCOM2, no Linux client

Some of the above can be used on Linux with the help of Lutris, which uses Wine in the background. But for me that would only count as a last resort. I was not that desperate yet.

However, not all was lost yet: some services are not tied to a certain game catalog, but instead offer a generic server and client onto which you can install your own games. At first the research results were promising: shadow.tech offers machines for just that and has a working Linux client! Unfortunately, their service is not available in my region.

The solution: Parsec

So with all ready-to-consume options out of the picture, I was almost willing to give up (or give Lutris and Playstation Now a chance, or even buy an eGPU). But then I stumbled upon something interesting: Parsec, a client for interactive game streaming.

Parsec is a high performance, low latency 60 FPS remote access product connecting you to your computer from anywhere.

Parsec features

That itself didn’t solve my problem. But it opened a window to a new solution: in the past, the company offered cloud hosted game servers of their own. Players could connect to them with their Parsec client and play games on them together – or on their own. The Parsec promise is that their client is fast enough for a reasonably good experience.

The server offering was canceled some time ago – but nothing was stopping me from launching my own server and connecting the Parsec client to it. And that is what I did. Read on to learn how to do that yourself.

Step 1: Getting a Windows cloud server with a reasonable GPU

What is needed is a cloud hosted Windows machine with a reasonable GPU. Ideally the data center hosting the machine should not be on the other side of the planet. AWS, Azure, GCP and others have such offerings. But there is an even better route: during my research I found Paperspace, a company specialized in providing GPU cloud platforms for AI and similar workloads. That is perfect for this use case!

Paperspace does not really advertise their support for gaming platforms. But after I signed up and looked at what was needed to create my first cloud server, I found a Parsec template:

That makes the entire process very easy!

  • Sign up with Paperspace, get billing sorted out (yes, this stuff costs money)
  • Get to Core -> Compute -> Machines, create a new machine
  • From Public Templates, get the Parsec cloud gaming template
  • Pick the right size for your games; for me a P4000 was enough.
  • Make sure to add a public IP and enough storage. Many of today’s games easily consume dozens of GB.
  • Set the auto-shutdown timer. No need to waste money.
  • Start the machine.

And that’s it already. Once the machine starts, you will notice a Parsec icon on the home screen. Time to get that working.

Step 2: Get Parsec

Parsec has clients for Linux based operating systems such as Ubuntu and the Raspberry Pi. There is even an AppImage and a Snap – unfortunately no Flatpak yet. Update: there is now even a Flatpak package available! Thanks Sheogorath for the hint!
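
With the Flatpak package available, the installation boils down to a single command – a minimal sketch, assuming the Flathub remote is already configured and that the application ID on Flathub is com.parsecgaming.parsec:

$ flatpak install flathub com.parsecgaming.parsec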

And if you are not willing to use Flatpak, AppImage or Snap for whatever reason, you can download the Ubuntu deb and create an RPM out of it. There is even a handy script for that. Anyway, get it installed.
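
If you prefer to do the deb-to-RPM conversion by hand instead of using the script, a tool like alien can do it – a rough sketch, assuming alien is available in your repositories and that the downloaded package is named parsec-linux.deb:

# convert the downloaded Ubuntu package to an RPM and install it
$ sudo dnf install alien
$ sudo alien --to-rpm parsec-linux.deb
$ sudo dnf install ./parsec*.rpm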

Sign up to Parsec, start the client, log in, and you are almost there:

Step 3: Play

After Parsec is all set, just start the cloud server, start Parsec there (maybe log in to your Parsec account), connect to the session on your client – and you are good to go: You can start playing!

For a first test I just watched some YouTube videos and was surprised by the quality. Next I logged in to my Steam account, got XCOM 2 installed and played along happily!

Performance and user experience

But how good is the performance? Well, that depends mostly on one factor: network. Due to unfortunate circumstances I was “able” to test this setup with three very distinct networks in a short time frame:

  • A rather slowish, unstable WiFi with a lot of jitter
  • An LTE connection, provided to me via a WiFi hotspot
  • A top-notch, high performance mesh WiFi

If you have slow pings (anything above 25 ms) and/or a lot of jitter, I cannot recommend going down this path. Otherwise it can be a serious option!
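
A quick way to get a first impression of latency and jitter is a plain ping against the public IP of your cloud server before starting a session – the address below is just a placeholder:

$ ping -c 20 203.0.113.10

The summary line at the end shows min/avg/max/mdev round trip times – the mdev value is a rough indicator of jitter.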

The first network I was on was horribly slow, and the experience was horrible as well. XCOM 2 has basically permanent background music, and the constant interruptions in the music and audio sequences were in fact the worst part for me.

The LTE based network was slightly better, but still far from a native feeling. I was able to play and have some fun, but that was about it.

However, the third option, WiFi of almost wired quality, was so good that at times I forgot I was not playing the game natively. There was no visible lag, the graphics were crystal clear, the music was never interrupted, etc. I was impressed – and had great sessions that way!

I can only recommend always keeping an eye on the connection quality reported in the Parsec overlay:

As Parsec mentions:

At 60 frames per second, 1 frame is around 16ms. By combining decode, encode and network, you’ll have the amount of frames the client lags behind.

Parsec about lag latency

With this in mind, the above screenshot shows a connection with an unfortunate amount of lag, leading to a not-that-good experience.

Recap

If you don’t have the hardware and/or software to play your favorite game, cloud gaming can be a solution for your problem. And if there is no proper offering out there, it is possible to get this working on your own.

Running your own cloud gaming server is surprisingly easy and not too expensive. It does feel somewhat weird in the beginning, especially if you usually only use clouds for your professional work. But it is a fun experience, and the results can be staggering – if your network is up to the job!

Featured image by Martin Str from Pixabay

[Howto] Three commands to update Fedora

These days, using Fedora Workstation, multiple commands are necessary to update all the software on the system: not everything is installed as RPMs anymore – and some systems hardly use RPMs at all anyway.

Background

In the past all updates of a Fedora system were easily applied with one single command:

$ yum update

Later on, yum was replaced by DNF, but the idea stayed the same:

$ dnf update

Simple, right? But not these days: Fedora recently added capabilities to install and manage code in other ways: Flatpak packages are not managed by DNF. Also, many firmware updates are managed via the dedicated management tool fwupd. And last but not least, Fedora Silverblue does not support DNF at all.

GUI solution Gnome Software – one tool to rule them all…

To properly update your Fedora system you have to check multiple sources. But before we dive into detailed CLI commands, there is a simple way to do all that in one go: the Gnome Software tool does it for you. It checks all sources and just presents the available updates in its single GUI:

The above screenshot highlights that Gnome Software simply shows the available updates and can manage them. The user does not even need to know where they come from.

If we have a closer look at the configured repositories in Gnome Software we see that it covers main Fedora repositories, 3rd party repositories, flatpaks, firmware and so on:

Using the GUI alone is sufficient to take care of all update routines. However, if you want to know and understand what happens underneath, it is good to know the separate CLI commands for the various kinds of software resources. We will look at them in the rest of this post.

System packages

Each and every system is made up of at least a basic set of software: the kernel, a service manager like systemd, core libraries like libc and so on. With Fedora used as a workstation system there are two ways to manage system packages, because there are two totally different spins of Fedora: the normal one, traditionally based on DNF and thus composed of RPM packages, and the new Fedora Silverblue, based on immutable OSTree system images.

Traditional: DNF

Updating an RPM based system via DNF is easy:

$ dnf upgrade
[sudo] password for liquidat: 
Last metadata expiration check: 0:39:20 ago on Tue 18 Jun 2019 01:03:12 PM CEST.
Dependencies resolved.
================================================================================
 Package                      Arch       Version             Repository    Size
================================================================================
Installing:
 kernel                       x86_64     5.1.9-300.fc30      updates       14 k
 kernel-core                  x86_64     5.1.9-300.fc30      updates       26 M
 kernel-modules               x86_64     5.1.9-300.fc30      updates       28 M
 kernel-modules-extra         x86_64     5.1.9-300.fc30      updates      2.1 M
[...]

This is the traditional way to keep a Fedora system up to date. It has been used for years and is well known to everyone.

And in the end it is analogous to the way Linux distributions have been kept up to date for ages – only the command differs from system to system (apt-get, etc.).
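
For comparison, on Debian or Ubuntu the equivalent would be something along the lines of:

$ sudo apt-get update && sudo apt-get upgrade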

Silverblue: OSTree

With the recent rise of container technologies the idea of immutable systems became prominent again. With Fedora Silverblue there is an implementation of that approach as a Fedora Workstation spin.

[Unlike] other operating systems, Silverblue is immutable. This means that every installation is identical to every other installation of the same version. The operating system that is on disk is exactly the same from one machine to the next, and it never changes as it is used.

Silverblue’s immutable design is intended to make it more stable, less prone to bugs, and easier to test and develop. Finally, Silverblue’s immutable design also makes it an excellent platform for containerized apps as well as container-based software development. In each case, apps and containers are kept separate from the host system, improving stability and reliability.

https://docs.fedoraproject.org/en-US/fedora-silverblue/

Since we are dealing with immutable images here, another tool is needed to manage them: OSTree. Basically OSTree is a set of libraries and tools which helps to manage images and snapshots. The idea is to provide a basic system image to everyone, with all additional software delivered on top in sandboxed formats like Flatpak.

Unfortunately, not all tools can be packaged as Flatpaks: especially command line tools are currently hardly usable at all as Flatpaks. Thus there is a way to install and manage RPMs on top of the OSTree image, but still baked right into it: rpm-ostree. In fact, on Fedora Silverblue, all images and the RPMs baked into them are managed by it.
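
A quick way to see which image is currently booted – and which RPMs are layered on top of it – is rpm-ostree status:

$ rpm-ostree status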

Thus updating the system and all related RPMs requires the command rpm-ostree update:

$ rpm-ostree update
⠂ Receiving objects: 98% (4653/4732) 4,3 MB/s 129,7 MB 
Receiving objects: 98% (4653/4732) 4,3 MB/s 129,7 MB... done
Checking out tree 209dfbe... done
Enabled rpm-md repositories: fedora-cisco-openh264 rpmfusion-free-updates rpmfusion-nonfree fedora rpmfusion-free updates rpmfusion-nonfree-updates
rpm-md repo 'fedora-cisco-openh264' (cached); generated: 2019-03-21T15:16:16Z
rpm-md repo 'rpmfusion-free-updates' (cached); generated: 2019-06-13T10:31:33Z
rpm-md repo 'rpmfusion-nonfree' (cached); generated: 2019-04-16T21:53:39Z
rpm-md repo 'fedora' (cached); generated: 2019-04-25T23:49:41Z
rpm-md repo 'rpmfusion-free' (cached); generated: 2019-04-16T20:46:20Z
rpm-md repo 'updates' (cached); generated: 2019-06-17T18:09:33Z
rpm-md repo 'rpmfusion-nonfree-updates' (cached); generated: 2019-06-13T11:00:42Z
Importing rpm-md... done
Resolving dependencies... done
Checking out packages... done
Running pre scripts... done
Running post scripts... done
Running posttrans scripts... done
Writing rpmdb... done
Writing OSTree commit... done
Staging deployment... done
Freed: 50,2 MB (pkgcache branches: 0)
Upgraded:
  gcr 3.28.1-3.fc30 -> 3.28.1-4.fc30
  gcr-base 3.28.1-3.fc30 -> 3.28.1-4.fc30
  glib-networking 2.60.2-1.fc30 -> 2.60.3-1.fc30
  glib2 2.60.3-1.fc30 -> 2.60.4-1.fc30
  kernel 5.1.8-300.fc30 -> 5.1.9-300.fc30
  kernel-core 5.1.8-300.fc30 -> 5.1.9-300.fc30
  kernel-devel 5.1.8-300.fc30 -> 5.1.9-300.fc30
  kernel-headers 5.1.8-300.fc30 -> 5.1.9-300.fc30
  kernel-modules 5.1.8-300.fc30 -> 5.1.9-300.fc30
  kernel-modules-extra 5.1.8-300.fc30 -> 5.1.9-300.fc30
  plymouth 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-core-libs 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-graphics-libs 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-plugin-label 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-plugin-two-step 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-scripts 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-system-theme 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-theme-spinner 0.9.4-5.fc30 -> 0.9.4-6.fc30
Run "systemctl reboot" to start a reboot

Desktop applications: Flatpak

Installing software – especially desktop related software – on Linux is a major pain for distributors, users and developers alike. One attempt to solve this is the flatpak format, see also Flatpak – a solution to the Linux desktop packaging problem.

Basically Flatpak is a distribution independent packaging format targeted at desktop applications. It comes with sandboxing capabilities, and the packages usually have hardly any dependencies at all besides a common runtime shared by all of them.

Flatpak also provides its own repository format; thus Flatpak packages can come with their own repositories and be released and updated independently of a distribution’s release cycle.

In fact, this is what happens with the large Flatpak community repository flathub.org: all packages installed from there are updated via the Flathub repos, fully independently of Fedora – which, by the way, also means independently of the Fedora security teams…

So Flatpak makes developing and distributing desktop programs much easier – and provides a tool for that. Meet flatpak!

$ flatpak update
Looking for updates…

        ID                                            Arch              Branch            Remote            Download
 1. [✓] org.freedesktop.Platform.Locale               x86_64            1.6               flathub            1.0 kB / 177.1 MB
 2. [✓] org.freedesktop.Platform.Locale               x86_64            18.08             flathub            1.0 kB / 315.9 MB
 3. [✓] org.libreoffice.LibreOffice.Locale            x86_64            stable            flathub            1.0 MB / 65.7 MB
 4. [✓] org.freedesktop.Sdk.Locale                    x86_64            1.6               flathub            1.0 kB / 177.1 MB
 5. [✓] org.freedesktop.Sdk.Locale                    x86_64            18.08             flathub            1.0 kB / 319.3 MB
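
By the way, if Flathub is not configured on a system yet, the remote can be added with a single command – a sketch using the flatpakrepo file provided by the Flathub project:

$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
$ flatpak remotes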

Firmware

And there is firmware: the binary blobs that keep some of our hardware running and which are often – unfortunately – closed source.

A lot of kernel related firmware is managed via system packages and is thus part of the system image or packaged via RPM. But device firmware (laptops, docking stations, and so on) is often only provided in Windows executable formats and is difficult to handle.

Luckily, the Linux Vendor Firmware Service (LVFS) has recently gained quite some traction as the default way for many vendors to make their device firmware consumable to Linux users:

The Linux Vendor Firmware Service is a secure portal which allows hardware vendors to upload firmware updates.

This site is used by all major Linux distributions to provide metadata for clients such as fwupdmgr and GNOME Software.

https://fwupd.org/

End users can take advantage of this with a tool dedicated to identifying devices and managing the necessary firmware blobs for them: meet fwupdmgr!

$ fwupdmgr update
No upgrades for 20L8S2N809 System Firmware, current is 0.1.31: 0.1.25=older, 0.1.26=older, 0.1.27=older, 0.1.29=older, 0.1.30=older
No upgrades for UEFI Device Firmware, current is 184.65.3590: 184.55.3510=older, 184.60.3561=older, 184.65.3590=same
No upgrades for UEFI Device Firmware, current is 0.1.13: 0.1.13=same
No releases found for device: Not compatible with bootloader version: failed predicate [BOT01.0[0-3]_* regex BOT01.04_B0016]

In the above example there were no updates available – but multiple devices are supported and were thus checked.
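
For completeness, a typical fwupdmgr round trip – refreshing the metadata from LVFS, listing the detected devices, and checking for and applying updates – looks roughly like this (output omitted):

$ fwupdmgr refresh
$ fwupdmgr get-devices
$ fwupdmgr get-updates
$ fwupdmgr update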

Forgot something? Gnome extensions…

The above examples cover the major ways to manage various bits of code. But they do not cover all cases, so for the sake of completeness I’d like to highlight a few more here.

For example, Gnome extensions can be installed as RPM, but can also be installed via extensions.gnome.org. In that case the installation is done via a browser plugin.

The same is true for browser plugins themselves: they can be installed independently and extend the usage of the web browser. Think of the Chrome Web Store here, or Firefox Add-ons.

Conclusion

Keeping a system up to date was easier in the past – a single command was enough. However, at the same time those systems were limited by what RPM could actually deliver.

The additional ways to update systems put an additional burden on the system administrator, but at the same time much more software and firmware is available through them – code which was not available in the old RPM-only times at all. And with Silverblue there is an entirely new paradigm of system management – again something which would not have been possible with RPM alone.

At the same time it needs to be kept in mind that these are pure desktop systems – and there Gnome Software helps by being the single pane of glass.

So I fully understand if some people are a bit grumpy about the need for multiple tools. But I think the advantages by far outweigh the disadvantages.

[Howto] Rebasing Fedora Silverblue – even from Rawhide to Fedora 30

I recently switched to Fedora Silverblue, the immutable desktop version of Fedora. With Silverblue, rebasing is easy – even when I had to downgrade from Rawhide to a stable release!

Fedora Silverblue is an interesting attempt at providing an immutable operating system – targeted at desktop users. Using it on a daily basis helps me get more familiar with the toolset and the ideas behind it, which are also used in other projects like Fedora Atomic or Fedora CoreOS.

When Fedora 30 was released I decided to give it a try, went to the Silverblue download page – and unfortunately picked the wrong image: the one for Rawhide.

Rawhide is the rolling release/development branch of Fedora, and is way too unstable for my daily usage. But I only discovered this after I had already installed it and spent quite some time customizing it.

But Silverblue is an immutable distribution, so switching to a previous version should be no problem, right? And in fact, yes, it is very easy!

Silverblue supports rebasing, i.e. switching between different branches. To get a list of available branches, first list the name of the remote source, and afterwards query the available references/branches:

[liquidat@heisenberg ~]$ ostree remote list
fedora
[liquidat@heisenberg ~]$ ostree remote refs fedora
[...]
fedora:fedora/30/x86_64/silverblue
fedora:fedora/30/x86_64/testing/silverblue
fedora:fedora/30/x86_64/updates/silverblue
[...]
fedora:fedora/rawhide/x86_64/silverblue
fedora:fedora/rawhide/x86_64/workstation

The list is quite long and covers multiple operating system versions.

In my case I was on the Rawhide branch and tried to rebase to version 30. That, however, failed:

[liquidat@heisenberg ~]$ rpm-ostree rebase fedora/30/x86_64/silverblue
1 metadata, 0 content objects fetched; 569 B transferred in 4 seconds
Checking out tree 7420c3a... done
Enabled rpm-md repositories: rawhide
Updating metadata for 'rawhide'... done
rpm-md repo 'rawhide'; generated: 2019-05-13T08:01:20Z
Importing rpm-md... done
⠁  
Forbidden base package replacements:
  libgcc 9.1.1-1.fc30 -> 9.1.1-1.fc31 (rawhide)
  libgomp 9.1.1-1.fc30 -> 9.1.1-1.fc31 (rawhide)
This likely means that some of your layered packages have requirements on newer or older versions of some base packages. `rpm-ostree cleanup -m` may help. For more details, see: https://gith
Resolving dependencies... done
error: Some base packages would be replaced

The problem was that I had installed additional packages in the meantime. Note that there are multiple ways to install packages in Silverblue:

– Flatpak apps: this is the primary way that apps get installed on Silverblue.
– Containers: which can be installed and used for development purposes.
– Toolbox containers: a special kind of container that are tailored to be used as a software development environment.

The other method of installing software on Silverblue is package layering. This is different from the other methods, and goes against the general principle of immutability. Package layering adds individual packages to the Silverblue system, and in so doing modifies the operating system.

https://docs.fedoraproject.org/en-US/fedora-silverblue/getting-started/

While Flatpak in itself is a pretty cool solution to the Linux desktop packaging problem, it usually comes with sandboxed environments, making it less usable for integrated tools and libraries.

For that reason it is still possible to install RPMs on top, in layered form. This, however, can result in dependency issues when the underlying image is supposed to change.

This is exactly what happened here: I had additional software installed, which depended on some specific versions of the underlying image. So I had to remove those:

[liquidat@heisenberg ~]$ rpm-ostree uninstall fedora-workstation-repositories golang pass vim zsh

Afterwards it was easy to rebase the entire system onto a different branch or – in my case – a different version of the same branch:

[liquidat@heisenberg ~]$ rpm-ostree rebase fedora/30/x86_64/silverblue
1 metadata, 0 content objects fetched; 569 B transferred in 2 seconds
Staging deployment... done
Freed: 47,4 MB (pkgcache branches: 0)
Upgraded:
  liberation-fonts-common 1:2.00.3-3.fc30 -> 1:2.00.5-1.fc30
  [...]
Downgraded:
  GConf2 3.2.6-26.fc31 -> 3.2.6-26.fc30
  [...]
Removed:
  fedora-repos-rawhide-31-0.2.noarch
  [...]
Added:
  PackageKit-gstreamer-plugin-1.1.12-5.fc30.x86_64
  [...]
Run "systemctl reboot" to start a reboot

And that’s it – after a short systemctl reboot the machine was back, running Fedora 30. And since OSTree works with images, the reboot was smooth and quick; long sessions of installing/updating software during shutdown or reboot are not necessary with such a setup!

In conclusion I must say that I am pretty impressed – both by the concept and by its execution, and by how well Silverblue works in day-to-day use, even as a desktop. My next step will be to test it on a laptop on the road, and see if other problems come up there.

[Howto] ara – making Ansible runs easier to read and understand

Ara is a simple web server showing detailed information about Ansible runs. It is helpful in understanding and troubleshooting Ansible runs.

Background

Ansible runs, especially on the command line, provide only limited information. Details about the variables used, the timing of each task and other data are only available via additional plugins, and the details those provide are usually narrowed to a specific use case.

A better way to provide information about Ansible runs is to collect the data and provide them in a web framework. That is what Ansible Tower (or AWX, the upstream project to Tower) does for example: collecting detailed data and providing them in the jobs overview.

But there are situations where a fully fledged Tower is too much, or where a comparative overview of the various runs is needed. This is where ara comes in:

ARA Records Ansible playbook runs and makes the recorded data available and intuitive for users and systems.
It makes your Ansible playbooks easier to understand and troubleshoot.

https://ara.recordsansible.org/

ara was originally developed by people from the OpenStack community, and still has strong ties with it today. It does not replace Ansible Tower, since it does not manage execution at all. It complements the information and overview part, and in a way competes more with the logging solutions which can be connected to Ansible Tower.

How to install

The installation of ara is pretty straightforward and described in the documentation: the software is basically installed via pip; afterwards the server can be started as a locally running instance. The connection between Ansible and ara is made via action and callback plugins.

The installation of the ara package is quickly done. Note that on systems with both Python 2 and 3 you need to pick the right pip version:

$ pip3 install --user ara
...
$ python3 -m ara.setup.action_plugins
/home/liquidat/.local/lib/python3.7/site-packages/ara/plugins/actions
$ python3 -m ara.setup.callback_plugins
/home/liquidat/.local/lib/python3.7/site-packages/ara/plugins/callbacks

Notice that the binaries end up in ~/.local/bin. If that is not part of the $PATH variable, the server executable to start ara needs to be addressed directly, like ~/.local/bin/ara-manage runserver:

$ ~/.local/bin/ara-manage runserver
 * Serving Flask app "ara" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: off
2019-05-06 02:45:49,156 INFO werkzeug:  * Running on http://127.0.0.1:9191/ (Press CTRL+C to quit)
2019-05-06 02:45:55,915 INFO werkzeug: 127.0.0.1 - - [06/May/2019 02:45:55] "GET / HTTP/1.1" 302 -
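
If you prefer to call ara-manage without the full path, ~/.local/bin can of course be added to $PATH first, for example:

$ export PATH="$HOME/.local/bin:$PATH"
$ ara-manage runserver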

The web page can be accessed by pointing a web browser towards http://127.0.0.1:9191/. Since Ansible is not yet connected to ara, no data is shown:

As mentioned, to connect ara to Ansible a callback plugin is used. There are different ways to tell Ansible to use a callback plugin; the easiest is to set up an ansible.cfg with the appropriate data:

$ python3 -m ara.setup.ansible | tee -a ansible.cfg                                                                                        
[defaults]
callback_plugins=/home/liquidat/.local/lib/python3.7/site-packages/ara/plugins/callbacks
action_plugins=/home/liquidat/.local/lib/python3.7/site-packages/ara/plugins/actions

Note here that this creates a new section named [defaults]. Check if your ansible.cfg already has a section called [defaults] and if so, merge the entries manually. Now call a few playbooks and check the results:
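
For example, any existing playbook will do – site.yml and the inventory file here are just placeholders for whatever you have at hand:

$ ansible-playbook -i inventory site.yml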

ara provides easy access to all existing runs, making it possible to compare different runs with each other. At the same time detailed information is provided for individual runs, making it easy to figure out what actually happened.

Summary

ara is an interesting attempt at better displaying the information from Ansible runs. It helps with analyzing what happens in each run, where problems might be hidden, and so on.

If you already use Ansible Tower, the information is available to you anyway. If you like the way it is presented in ara, you can even use both at the same time.

[Howto] Fix ldap “protocol error” in Gitea (and other Go based apps)

I prefer self hosted solutions for some tasks. But this also means that I have to troubleshoot my problems on my own. Recently a go-ldap error gave me a headache. Here is the analysis of the protocol error – and how to solve it.

For certain projects I prefer a self hosted Git server. Solutions like Gitea, the fast developing and thriving fork of Gogs, make this painless and easy to do – especially in a containerized environment.

My users are managed in FreeIPA, and Gitea connects to it via LDAP. And this is a constant source of trouble. Gitea is written in Go, and the Go LDAP libraries seem to be far from perfect.

For example, after a recent update of my environment, login at Gitea stopped working:

[...gitea/models/user.go:1544 SyncExternalUsers()] [E] LDAP Search failed unexpectedly! (LDAP Result Code 2 "Protocol Error": )

At the same time the FreeIPA server indeed showed malformed requests:

[170978469] fd=112 slot=112 connection from 172.18.0.6 to 172.18.0.2
[171199824] op=0 BIND dn="uid=system,cn=sysaccounts,cn=etc,dc=bayz,dc=de" method=128 version=3
[223472706] op=0 RESULT err=0 tag=97 nentries=0 etime=0.0052434415 dn="uid=system,cn=sysaccounts,cn=etc,dc=bayz,dc=de"
[223738210] op=1 SRCH base="cn=users,cn=accounts,dc=bayz,dc=de" scope=2 filter="(&(objectClass=person)(uid=rwo))" attrs=ALL
[225467030] op=1 RESULT err=0 tag=101 nentries=1 etime=0.0001797298
[226078299] op=2 BIND dn="uid=rwo,cn=users,cn=accounts,dc=bayz,dc=de" method=128 version=3
[278423889] op=2 RESULT err=0 tag=97 nentries=0 etime=0.0052380180 dn="uid=rwo,cn=users,cn=accounts,dc=bayz,dc=de"
[278705323] op=3 SRCH base="(null)" scope=2 filter="(&(objectClass=person)(uid=rwo))", invalid attribute request
[278722888] op=3 RESULT err=2 tag=101 nentries=0 etime=0.0000084787
[279051788] op=-1 fd=112 closed - B1

Since LDAP login still worked fine with other tools, I assumed a problem in the new Gitea version and filed a bug report. Other users with the same problem joined soon after, but no one was able to provide a solution.
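
By the way, one way to verify that the directory itself behaves correctly, independently of Gitea, is a plain ldapsearch using the same bind account and filter – a sketch with placeholder DNs, not my actual setup:

$ ldapsearch -x -H ldaps://ipa.example.com \
    -D "uid=system,cn=sysaccounts,cn=etc,dc=example,dc=com" -W \
    -b "cn=users,cn=accounts,dc=example,dc=com" \
    "(&(objectClass=person)(uid=rwo))"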

After some research I figured out that the problem appeared to be related to an update on the FreeIPA server: a security update in the underlying 389 Directory Server led to protocol errors when empty attributes were part of the request.

An updated version of the go-ldap library was supposed to fix this – and indeed, after Gitea updated the library other users reported that the issue was fixed for them.

However, not for me: I still had the problem, and got frustrated over this for weeks.

It took me another evening of research until I found the important missing detail: a Grafana user had the same problem. The updated library did not help there either. But reducing the number of empty attributes – by providing values for default, otherwise empty attributes – did the trick:

For me the error occurs when I have less than 4 attributes.

https://github.com/grafana/grafana/issues/14432#issuecomment-452025337

In the end they figured out that sending one empty attribute was fine, but not two. With this information, the fix for my problem was easy: I previously had not set the attributes for First Name and Surname. I added those and was immediately able to log in again.

Gitea with LDAP attributes
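
For reference, this roughly corresponds to the following attribute mapping in Gitea's LDAP authentication source – the exact field names depend on your Gitea version, and givenName/sn/mail are the standard attribute names FreeIPA provides:

Username attribute:    uid
First name attribute:  givenName
Surname attribute:     sn
E-mail attribute:      mail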

So: if you ever run into the same problem with Go and LDAP, check if you are indeed sending more than one empty attribute!

This was the second LDAP problem I encountered using Gitea. Due to this experience I do not really feel comfortable with using this combination – and will not be surprised if it breaks again with the next update.

On the other hand LDAP is a rather complicated protocol and probably a bit overkill for a simple use case like this. If I ever re-do my setup I might change over to mail based authentication.