[Short Tip] Add a path entry to Nushell

Adding a path in nushell is pretty straightforward: the configuration is done in ~/.config/nu/config.toml in the [path] section.

If you don’t have that section yet, make sure that the default entries are listed there as well when you start bringing in your own directories. The fastest way to populate your configuration with the default entries is to ask nushell to do it for you: config set path $nu.path
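As a quick sketch – assuming the nushell 0.x configuration commands this tip was written for (newer releases moved the configuration to env.nu and $env.PATH):

config set path $nu.path
echo $nu.path

The first command writes the current default path entries into config.toml, the second prints what is now configured.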

Next, add the directories you need:

path = ["/usr/local/bin", "/usr/local/sbin", "/usr/bin", "/usr/sbin", "/home/rwolters/go/bin"]

In the above example I added the default go binary directory to the list.

[Short Tip] Define an alias in Nushell

A few days ago I decided to switch my main shell to nushell. It offers next-generation capabilities similar to other modern shells like PowerShell:

Rather than thinking of files and services as raw streams of text, Nu looks at each input as something with structure. For example, when you list the contents of a directory, what you get back is a table of rows, where each row represents an item in that directory.

Nushell Philosophy

This switch feels like the biggest shell-related change for me since I switched from the Windows Command Prompt to Bash. Even the switch from Bash to ZSH was small in comparison.

As part of this I have to relearn how to do a lot of things. For example, defining an alias is totally different.

The nushell configuration takes place in ~/.config/nu/config.toml. To add an alias there, add the startup section if it is not there already, and add a list item to it:

startup = [
    "alias evince = flatpak run org.gnome.Evince",
    "alias ll = ls -la",
]

In the above example two aliases are defined: one for launching Evince via Flatpak, and one to call ls with options.
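After starting a new nushell session, the aliases behave like any other command – document.pdf is just a hypothetical example file:

ll
evince document.pdf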

Image by julia roman from Pixabay

[Howto] Using systemd timers instead of /etc/cron entries

Executing certain commands at given intervals or times is a very typical task for system administrators. In the past it was common to use cron or some variation of it, in one way or another.

Background

Cron does the job it was written for. But that was years ago, and these days the kernel offers neat things like CPU quotas and memory limits. Cron has no means to use those – but other tools have.

Additionally, newer tools provide dependencies, a proper configuration language (instead of hard-to-maintain bash lines), multiple triggers, randomized delays and real logging.

Especially the last bit, real logging, is essential: Cron can forward log messages it thinks need to be forwarded. But without real kernel-backed process management (cgroups) there is no reliable way for Cron to see if a job is running or has finished, and which log lines belong to it.

Systemd has all this – and thus it makes sense to create new recurrent jobs in Systemd, and sometimes even to migrate old ones.

Setting up the timer

Two things are needed: a service file describing WHAT should be done, and a timer file describing WHEN to do it.

Let’s start with the WHAT: we create a typical service file named backupjob.service that executes a backup bash script:

[Unit]
Description=Backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

Note that we do not enable the service here! We can start it manually for quick and easy debugging – which is also way easier than with Cron.
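Assuming the unit file was placed under /etc/systemd/system/, such a test run looks like this:

sudo systemctl daemon-reload
sudo systemctl start backupjob.service
systemctl status backupjob.service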

Keep in mind that this is a typical systemd service. You can also add requirements, dependencies, performance options and so on in a standardized fashion. With Cron this is not possible: it would have to be done in the bash script itself, and thus would be messy, hard to maintain, and most likely duplicated between multiple Cron jobs. And by the way, that is not at all in line with the UNIX philosophy!

Next, the WHEN: we need a timer file, backupjob.timer, which describes when to execute the service.

[Unit]
Description=Run backup jobs regularly

[Timer]
OnCalendar=daily
AccuracySec=1h
Unit=backupjob.service

[Install]
WantedBy=timers.target

As you can see, this job is scheduled to run daily, with an accuracy of 1 hour. The accuracy is an interesting bit: systemd tries to avoid starting all services at the exact same time, to avoid massive system load at :00. Just recently a technician at a large cloud provider mentioned that data centers could be designed way more efficiently if not everyone put their Cron jobs on the full minute.

Speaking of triggers: besides OnCalendar there are also ways to start jobs relative to the boot-up time, or relative to when the unit was last activated. For example, to run something every 15 minutes, set OnUnitActiveSec=15min. More information can be found in the timer documentation.
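As an illustration, a minimal sketch of such a monotonic timer – the directives are standard systemd timer options, the 15-minute interval is just an example:

[Timer]
OnBootSec=15min
OnUnitActiveSec=15min

OnBootSec= schedules the first run after boot, while OnUnitActiveSec= re-triggers the job relative to the last activation of the unit – together they result in an every-15-minutes schedule.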

Starting and stopping the timer

As mentioned above, the service unit files are not activated; instead, the timers are: sudo systemctl enable --now backupjob.timer

Stopping a timer is equally simple: systemctl stop backupjob.timer. If you want to prevent it from starting again at the next boot, also disable it: systemctl disable backupjob.timer.

Additional tooling

One of the great things about using the systemd ecosystem is that it is very easy to work with timers: systemctl list-timers gives a nice and clear overview of the current state and the time of the next execution:

❯ sudo systemctl list-timers
[sudo] password for liquidat: 
NEXT                         LEFT          LAST                         PASSED       UNIT                         ACTIVATES                     
Thu 2021-04-15 18:06:34 CEST 1h 14min left Thu 2021-04-15 16:09:32 CEST 42min ago    dnf-makecache.timer          dnf-makecache.service         
Fri 2021-04-16 00:00:00 CEST 7h left       Thu 2021-04-15 00:01:01 CEST 16h ago      logrotate.timer              logrotate.service             
Fri 2021-04-16 00:00:00 CEST 7h left       Thu 2021-04-15 00:01:01 CEST 16h ago      mlocate-updatedb.timer       mlocate-updatedb.service      
Fri 2021-04-16 00:00:00 CEST 7h left       Thu 2021-04-15 00:01:01 CEST 16h ago      unbound-anchor.timer         unbound-anchor.service        
Fri 2021-04-16 12:32:32 CEST 19h left      Thu 2021-04-15 12:32:32 CEST 4h 19min ago systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Mon 2021-04-19 01:34:34 CEST 3 days left   Mon 2021-04-12 15:46:39 CEST 3 days ago   fstrim.timer                 fstrim.service                

6 timers listed.
Pass --all to see loaded but inactive timers, too.

At the same time, you can get the detailed status with systemctl status logrotate.timer – or of all of them via systemctl status '*timer' (quoted, so that the shell does not expand the glob itself).

Logs are simply available via journalctl -u logrotate.timer – and the logs for the executed service can be read via journalctl -u logrotate.service.

And if you just don’t want to deal with systemd unit files right now – but nevertheless want all the goodies – you can also launch systemd services or arbitrary commands as a one-time execution:

systemd-run --on-active="10h 30m" --unit myonetimescript.service
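The same works for ad-hoc commands. This sketch, taken close to the example in the systemd-run documentation, runs a command after 30 seconds – the touched file is just a placeholder:

systemd-run --on-active=30 /bin/touch /tmp/foo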

Final words

Writing systemd timers instead of traditional Cron jobs makes operating, maintaining and even writing recurrent jobs much easier.

The only thing you might miss is cron’s simple way of sending results out via mail.
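If mails are a hard requirement, a common pattern is an OnFailure= hook in the service unit – a sketch, assuming a hypothetical status-email@.service template unit that does the actual mailing:

[Unit]
Description=Backup job
OnFailure=status-email@%n.service

The %n specifier expands to the name of the failing unit, so the mail template knows what to report on.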

Image by Ryan McGuire from Pixabay

[Howto] Create your own cloud gaming server to stream games to Fedora

A few months back I wanted to give a game a try which only runs on Windows and requires a dedicated GPU. Since I have neither of those, I decided to set up my own Windows cloud gaming server to stream the game to my Linux machine.

Many years ago there was one game I played day and night. For weeks, months, maybe even years. To this day I can still remember the distinct soundtrack, which makes the hair stand up on the back of my neck: UFO: Enemy Unknown. I loved the game! A few years ago I also played one of the open source games inspired by UFO for quite some time, UFO: AI. That was fun.

Sequels to the original game were released, two of them over the last couple of years. But they never really were an option since they required Windows (or so I thought) and, above all, time. However, a few months ago I realized that one of the sequels, XCOM: Enemy Unknown, was available for Android. Since I have a brand-new flagship Android tablet I gave it a shot – and it was great! But since the Android version was seriously limited, I played it again on Linux. That barely worked with my limited Intel GPU. But it was playable, and I had fun.

I was infected with the urge to play the game more – and when a third sequel was announced, I at least wanted to play the second one, XCOM 2. But how? My GPU was too limited, and eGPUs are expensive and often involve a lot of hassle – even if I were willing to buy a Windows license. So I looked into whether cloud gaming could do the trick.

Cloud Gaming Services

The idea of cloud gaming is that heavy machines in the data center do the rendering, and the client machine only displays the end result. That shifts the burden of the powerful GPU to the data center, and the client only needs enough graphics capability to display a stream of images. This does, however, require a rather responsive broadband connection between the client and the data center.

This principle is not new, but it got fresh attention when Google announced their cloud gaming offering Stadia. I checked if any cloud gaming service offered my game of choice – and was available on Linux. Unfortunately, the results were disappointing:

  • Stadia: no XCOM2; a Linux client exists in the form of the Chrome browser (thanks to zesoup)
  • GeForce Now: no XCOM2, no Linux client
  • Playstation Now: XCOM2 available, but no Linux client
  • Vortex: no XCOM2, no Linux client

Some of the above can be used on Linux with the help of Lutris, which uses Wine in the background. But for me that would only count as a last resort. I was not that desperate yet.

However, not all was lost: some services are not tied to a certain game catalog, but instead offer a generic server and client onto which you can install your own games. At first the research results were promising: shadow.tech offers machines for just that and a working Linux client! However, they are not available at my place.

The solution: Parsec

So with all ready-to-consume options out of the picture, I was almost willing to give up (or give Lutris and Playstation Now a chance, or even buy an eGPU). But then I stumbled upon something interesting: Parsec, a client for interactive game streaming.

Parsec is a high performance, low latency 60 FPS remote access product connecting you to your computer from anywhere.

Parsec features

That in itself didn’t solve my problem. But it opened a window to a new solution: in the past, the company offered cloud-hosted game servers of their own. Players could connect to them with the Parsec client and play games on them together – or on their own. The Parsec promise is that their client is fast enough for a reasonably good experience.

The server offering was canceled some time ago – but there was nothing stopping me from launching my own server and connecting the Parsec client to it. And that is what I did. Read on to learn how to do that yourself.

Step 1: Getting a Windows cloud server with a reasonable GPU

What is needed is a cloud-hosted Windows machine with a reasonable GPU. Ideally, the data center hosting the machine should not be on the other side of the planet. AWS, Azure, GCP and others have such offers. But there is an even better route: during my research I found Paperspace, a company specializing in providing access to GPU and AI cloud platforms. That is perfect for this use case!

Paperspace does not really advertise their support for gaming platforms. But after I signed up and looked at what was needed to create my first cloud server, I found a Parsec template:

That makes the entire process very easy!

  • Sign up with Paperspace and get billing sorted out (yes, this stuff costs money)
  • Go to Core -> Compute -> Machines and create a new machine
  • From Public Templates, pick the Parsec cloud gaming template
  • Pick the right size for your games; for me a P4000 was enough
  • Make sure to add a public IP and enough storage – many of today’s games easily consume dozens of GB
  • Set the auto-shutdown timer; no need to waste money
  • Start the machine

And that’s it already. Once the machine starts, you will notice a Parsec icon on the home screen. Time to get that working.

Step 2: Get Parsec

Parsec has clients for Linux-based operating systems such as Ubuntu and Raspberry Pi OS. There is even an AppImage and a Snap – unfortunately no Flatpak yet. Update: there is now even a Flatpak package available! Thanks Sheogorath for the hint!

And if you are not willing to use Flatpak, AppImage or Snap for whatever reason, you can download the Ubuntu deb and create an RPM out of it. There is even a handy script for that. Anyway, get it installed.

Sign up to Parsec, start the client, log in, and you are almost there:

Step 3: Play

After Parsec is all set, just start the cloud server, start Parsec there (maybe log in to your Parsec account), connect to the session from your client – and you are good to go: you can start playing!

For a first test I just watched some YouTube videos and was surprised by the quality. Next I logged in to my Steam account, got XCOM 2 installed and played along happily!

Performance and user experience

But how good is the performance? Well, that depends mostly on one factor: the network. Due to unfortunate circumstances I was “able” to test this setup on three very distinct networks within a short time frame:

  • A rather slow, unstable WiFi with a lot of jitter
  • An LTE connection, provided to me via a WiFi hotspot
  • A top-notch, high-performance mesh WiFi

When you have slow pings (anything above 25 ms) and/or a lot of jitter, I cannot recommend going down this path. Otherwise it can be a serious option!

The first network I was on was horribly slow, and the experience was horrible as well. XCOM2 has basically permanent background music, and the constant interruptions in the music and audio sequences were in fact the worst part for me.

The LTE-based network was slightly better, but still far from a native feeling. I was able to get a decent experience out of it and have fun, but that was about it.

However, the third option, WiFi of almost wired quality, was so good that at times I forgot I was not playing the game natively. There was no visible lag, the graphics were crystal clear, the music was never interrupted, etc. I was impressed – and had great sessions that way!

I can only recommend always keeping an eye on the connection quality reported in the Parsec overlay:

As Parsec mentions:

At 60 frames per second, 1 frame is around 16ms. By combining decode, encode and network, you’ll have the amount of frames the client lags behind.

Parsec on lag and latency

Having this in mind, the above screenshot shows a connection with an unfortunate lag, leading to a not-that-good experience.

Recap

If you don’t have the hardware and/or software to play your favorite game, cloud gaming can be a solution for your problem. And if there is no proper offering out there, it is possible to get this working on your own.

Running your own cloud gaming server is surprisingly easy and not too expensive. It does feel somewhat weird in the beginning, especially if you usually only use clouds for your professional work. But it is a fun experience, and the results can be staggering – if your network is up to the job!

Featured image by Martin Str from Pixabay

[Short Tip] Flatten nested dict/list structures in Ansible with json_query

A few days ago I was asked how to best deal with structures in Ansible which mix dictionaries and lists. Basically, the following example was provided, and the question remained how to deal with it – for example, how to flatten it:

    myhash:
      cloud1:
        region1:
          - name: "city1"
          - size: "large"
          - param: "alpha"
        region2:
          - name: "city2"
          - size: "small"
          - param: "beta"
      cloud2:
        region1:
          - name: "city1"
          - size: "large"
          - param: "gamma"

I wondered a lot about how to deal with this – after all, dict2items only deals with dicts and fails when it reaches the lists inside. I also fooled around with the map filter, but most of my results required previous knowledge about the data structure and only worked by providing “cloud1.region1” or similar.

The solution was the json_query filter: it is based on JMESPath and can deal with the above-mentioned structure via list and object projections:

  tasks:
  - name: Projections using json_query
    debug:
      msg: "Item value is: {{ item }}"
    loop: "{{ myhash|json_query(projection_query)|list }}"
    vars:
      projection_query: "*.*[]"
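A side note: the json_query filter needs the jmespath Python library on the Ansible control node, so it might have to be installed first:

pip install jmespath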

And indeed, the loop does create a simplified output of all the elements in this nested structure:

TASK [Projections using json_query] **********************************************************
ok: [localhost] => (item=[{'name': 'city1'}, {'size': 'large'}, {'param': 'alpha'}]) => {
    "msg": "Item value is: [{'name': 'city1'}, {'size': 'large'}, {'param': 'alpha'}]"
}
ok: [localhost] => (item=[{'name': 'city2'}, {'size': 'small'}, {'param': 'beta'}]) => {
    "msg": "Item value is: [{'name': 'city2'}, {'size': 'small'}, {'param': 'beta'}]"
}
ok: [localhost] => (item=[{'name': 'city1'}, {'size': 'large'}, {'param': 'gamma'}]) => {
    "msg": "Item value is: [{'name': 'city1'}, {'size': 'large'}, {'param': 'gamma'}]"
}

Of course, some knowledge is still needed to make this work: you need to know whether you are projecting on a list or on a dictionary. So if your data structure changes at that level between executions, you might need something else.
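If the top-level key is known, the projection can also be narrowed down to a single cloud – a sketch based on the structure above, where only the query string changes:

    vars:
      projection_query: "cloud1.*"

This projects over all regions of cloud1 and yields the same per-region item lists as above, just restricted to that one cloud.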

Image by Andrew Martin from Pixabay