[Short Tip] Plot live-data in Linux terminal

Recently I realized that one of the disks in my server had died. After the replacement, the RAID sync started – and I quickly learned that it was going to take days (!). I also learned that the estimated time to completion jumped up and down massively.

Thus I thought it would be fun to monitor the progress of this. First, I just created a command to check the estimated remaining time (in minutes, converted to days) every few seconds with watch:

watch 'cat /proc/mdstat |grep recovery|cut -d " " -f 13|cut -d "=" -f 2|cut -d "." -f 1|xargs -n 1 -I {} echo "{}/60/24"|bc'

But since it was jumping around so much, I wondered if I could live-plot the data in the terminal (remote server, after all). There are many ways to do that – even gnuplot seems to have an option for it – but I wanted something simpler. Enter: pipeplot

First I tried to use watch together with pipeplot, but it was easier to just write a short while loop around it:

while true;
do
  cat /proc/mdstat |grep recovery|cut -d " " -f 13|cut -d "=" -f 2|cut -d "." -f 1|xargs -n 1 -I {} echo "{}/60/24"|bc;
  sleep 5;
done \
| pipeplot
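
If the field position in /proc/mdstat ever shifts, a variant that matches the finish= token directly instead of a fixed field number might be more robust – a sketch, assuming the usual finish=1234.5min format of the recovery line:

while true;
do
  # pick the "finish=…" estimate from the recovery line and convert minutes to days
  grep -o 'finish=[0-9.]*' /proc/mdstat | cut -d '=' -f 2 | xargs -n 1 -I {} echo "{}/60/24" | bc;
  sleep 5;
done \
| pipeplot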

And the result is rather nice (also shown in the header image):

[Howto] My own mail & groupware server, part 3: Git server

Running your own mail and groupware server can be challenging. I recently had to re-create my own setup from the ground up, and I am describing the steps in a blog post series. This post is #3 of the series and covers the integration of an additional Git server.

This post is all about adding a Git server to the existing mail server. Read about the background of this setup in the first post, part 1: what, why, how?, and all about the mail server setup itself in the second post, part 2: initial mail server setup.

Gitea as Git server

I use Git heavily, all the time. And there is enough information in my repositories that I feel more comfortable hosting them only on infrastructure I control. So for the new setup I also wanted to have a Git server again, as fast as possible.

In the past I used GitLab as my Git server, but it is very resource intensive and simply overkill for my use cases. Thus, years ago, I already replaced GitLab with Gitea – a lightweight, painless self-hosted Git service. It is quickly set up, simple to use, nevertheless offers all relevant Git features, and simply does its job. Gitea itself is a fork of Gogs, which was not really community friendly. These days Gitea is a far more active and prospering project than Gogs.

Background: Nginx as reverse proxy in Mailu

So how do you “attach” Gitea to a running Mailu infrastructure? Mailu itself comes with a set of defined services, and that’s it. There is no plugin or module system to extend it. However, the project does offer special “overrides” directories where additional configuration can be placed – and this applies to Nginx as well! That way, a service can be placed right next to the other Mailu services, behind the same reverse proxy, and benefit from the already existing setup, for example the certificate renewal. There is also no problem with the already used ports 80 and 443, etc.

Overrides can be placed in /data/mailu/overrides/nginx. They are basically just snippets of Nginx configuration. Note though that they are included within the main server block! That means they can only work on locations, not on server names. This is somewhat unfortunate since I used to address all my old services via sub-domains: git.bayz.de, nc.bayz.de, etc. With the new setup and the limitation to locations this is not an option anymore; everything has to work on different paths instead: bayz.de/git, bayz.de/nc, etc.

This is unfortunate because it also meant that I had to reconfigure clients – and ask others to reconfigure their clients when using my infrastructure. I would be happy to get back to pure sub-domain based addressing, but I don’t see how this could be possible without changing the actual Nginx image.

Adding Gitea entry to Nginx Override

Having said that, to add Gitea to Nginx, create this file: /data/mailu/overrides/nginx/git.conf

location /gitea/ {
  proxy_pass http://git:3000/;
}

And that’s it already. No further configuration is needed since Mailu already configures Nginx with reasonable defaults.

This also gives a first hint that it is pretty easy to add further services – I will cover more examples in this ongoing blog post series.

Additional entry to docker compose

To start Gitea itself, add it to Mailu’s docker-compose.yml:

  # gitea

  git:
    image: gitea/gitea:latest
    restart: always
    env_file: mailu.env
    depends_on:
      - resolver
    dns:
      - 192.168.203.254
    volumes:
      - /data/gitea:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "22:22"

Note the shared volumes: that way the Gitea configuration file will be written to your storage at /data/gitea/gitea/conf/app.ini.

Also, we want to set the UID and GID for Gitea via environment variables. Set this in your mailu.env:

###################################
# Gitea settings
###################################
USER_UID=1000
USER_GID=1000

Setting up Gitea basic configuration

Now let’s configure Gitea. It is possible to pre-create a full Gitea configuration and start the container with it. However, documentation on that is sparse and in my tests there were always problems.

So in my case, I just started and stopped the container (docker compose down and up) a few times, edited some configuration, registered an admin user once via the GUI, and was done. While this worked, I can only recommend closely tracking the logs during that time to ensure that no one else is accessing the container and doing mischief!

So, the first step is to start the new Docker compose service. This will write a first vanilla Gitea configuration. Afterwards, add the correct domain information to the [server] section of the Gitea configuration file app.ini.
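
What the relevant values might look like is sketched below – assuming the host name git.bayz.de used for the login later on and the /gitea location from the Nginx override above; adjust DOMAIN and ROOT_URL to your own setup:

[server]
DOMAIN = git.bayz.de
SSH_DOMAIN = git.bayz.de
HTTP_PORT = 3000
ROOT_URL = https://git.bayz.de/gitea/
SSH_PORT = 22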

Note that the ROOT_URL value ensures the required rewriting of all requests and links so that the setup works flawlessly with the above-mentioned Nginx configuration!

Next, bring the service down and up again (docker compose down and up), log in to your new service (here: git.bayz.de/gitea ) and register a new admin user. Note that here you also have to pick the database option. For small systems with only very few concurrent connections SQLite is fine. If you will serve more users, or automated access, pick PostgreSQL. However, to make that work you need to bring up another PostgreSQL container. One of the next posts will introduce one, so you might want to re-think your setup then.

Directly after this admin registration is done, change the value of DISABLE_REGISTRATION to true in the [service] section of the Gitea configuration file app.ini. Stop and start the service again, and no new (external) users can register anymore.
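
For reference, a sketch of the corresponding snippet in app.ini:

[service]
DISABLE_REGISTRATION = true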

But how do we register new users now?

Central services authentication in Mailu: mail

One of the major hassles with my last setup was authentication. I started with a full-blown OpenLDAP years ago, which was already a pain to manage and maintain. Moving over to FreeIPA meant that I had better interfaces and even a UI, but it was still a complex, tricky service. Also, while almost every service out there can be connected to LDAP, that is not always easy or a pleasant experience. And given that I only have a few users on my system, it is hardly worth the trouble.

Mailu offers an interesting approach here: users are stored in a DB, and external services are asked to authenticate against the email services (IMAP, SMTP). I was surprised to learn that indeed many services out there support this or have plugins for that.

Gitea can authenticate against SMTP sources, and I decided to go that route:

  • In Gitea, access “Site Administration”
  • Click on “Authentication Sources”
  • Pick the blue button “Add Authentication Source”
  • As “Authentication Type”, choose SMTP
  • Give it a name
  • As “SMTP Authentication Type”, enter LOGIN
  • As “SMTP Host”, provide the external host name, here lisa.bayz.de (more on that further down below)
  • Pick the right “SMTP Port”, 587
  • Limit the “Allowed Domains” if you want, in my case to bayz.de
  • Of course, tick the check box “Enable TLS Encryption” and also the check box “This Authentication Source is Activated”

After this is done, log out of Gitea and log in with an existing mail user. It should just work! And all that without any trace of LDAP! Awesome, right?

A word about the SMTP host in the above configuration: do not try to enter the SMTP docker compose service directly here. This will not work: port 587 is managed by the Nginx proxy, which acts as a mail proxy here and redirects mail auth requests to the admin portal. The internal SMTP container does not even listen on port 587.

What’s next?

With my private Git server back to life I felt slightly better again. Now that I had the infrastructure at hand, I needed to tackle the cloud/file sharing part of it all, to also lay the foundation for the groupware pieces: Nextcloud.

More about that in the next post.

Featured image by Myriam Zilles from Pixabay

[Howto] Create your own cloud gaming server to stream games to Fedora

A few months back I wanted to give a game a try which only runs on Windows and requires a dedicated GPU. Since I have neither of those, I decided to set up my own Windows cloud gaming server to stream the game to my Linux machine.

Many years ago there was one game I played day and night. For weeks, months, maybe even years. To this day I remember the distinct soundtrack, which makes the hair stand up on the back of my neck: UFO: Enemy Unknown. I loved the game! A few years ago I also spent quite some time playing one of the open source games inspired by UFO, UFO: AI. That was fun.

Sequels to the original game were released, two of them over the last couple of years. But they never really were an option since they required Windows (or so I thought) and, above all, time. However, a few months ago I realized that one of the sequels, XCOM: Enemy Unknown, was available for Android. Since I have a brand new flagship Android tablet I gave it a shot – and it was great! But since the Android version was seriously limited, I played it again on Linux. That barely worked with my limited Intel GPU. But it was playable, and I had fun.

I was infected with the urge to play the game more – and when a third sequel was announced, I at least wanted to play the second one, XCOM 2. But how? My GPU was too limited, and eGPUs are expensive and often involve a lot of hassle – even if I were willing to buy a Windows license. So I looked into whether cloud gaming could do the trick.

Cloud Gaming Services

The idea of cloud gaming is that heavy machines in the data center do the rendering, and the client machine only displays the end result. That shifts the burden of the powerful GPU towards the data center, and the client only needs simple graphics to show a stream of images. This does, however, require a rather responsive broadband connection between the client and the data center.

This principle is not new, but it got new attention recently when Google announced their cloud gaming offer Stadia. I checked if any cloud gaming service offered my game of choice – and was available on Linux. Unfortunately, the results were disappointing:

  • Stadia: no XCOM2, but a Linux client via the Chrome browser (thanks to zesoup)
  • GeForce Now: no XCOM2, no Linux client
  • Playstation Now: XCOM2 available, but no Linux client
  • Vortex: no XCOM2, no Linux client

Some of the above can be used on Linux with the help of Lutris, which uses Wine in the background. But for me that would only count as a last resort. I was not that desperate yet.

However, not all was lost: some services are not tied to a certain game catalog, but instead offer a generic server and client onto which you can install your own games. The research results were promising at first: shadow.tech offers machines for just that and a working Linux client! However, they are not available at my location.

The solution: Parsec

So with all ready-to-consume options out of the picture, I was almost willing to give up (or give Lutris and Playstation Now a chance, or even buy an eGPU). But then I stumbled upon something interesting: Parsec, a client for interactive game streaming.

Parsec is a high performance, low latency 60 FPS remote access product connecting you to your computer from anywhere.

Parsec features

That in itself didn’t solve my problem. But it opened a window to a new solution: in the past, the company offered cloud hosted game servers on their own. Players could connect to them with their Parsec client and play games on them together – or on their own. The Parsec promise is that their client is fast enough for a reasonably good experience.

The server offer was canceled some time ago – but there was nothing stopping me from launching my own server and connecting the Parsec client to it. And that is what I did. Read on to learn how to do that yourself.

Step 1: Getting a Windows cloud server with a reasonable GPU

What is needed is a cloud hosted Windows machine with a reasonable GPU. Ideally, the data center hosting the machine should not be on the other side of the planet. AWS, Azure, GCP and others have such offers. But there is an even better route: during my research I found Paperspace, a company specializing in providing access to GPU and AI cloud platforms. That is perfect for this use case!

Paperspace does not really advertise their support for gaming platforms. But after I signed up and looked at what was needed to create my first cloud server, I found a Parsec template:

That makes the entire process very easy!

  • Sign up with Paperspace, get billing sorted out (yes, this stuff costs money)
  • Get to Core -> Compute -> Machines, create a new machine
  • From Public Templates, get the Parsec cloud gaming template
  • Pick the right size for your games; for me a P4000 was enough.
  • Make sure to add a public IP and enough storage. Many of today’s games easily consume dozens of GB
  • Set the auto-shutdown timer. No need to waste money.
  • Start the machine.

And that’s it already. Once the machine starts, you will notice a Parsec icon on the home screen. Time to get that working.

Step 2: Get Parsec

Parsec has clients for Linux based operating systems such as Ubuntu and Raspbian. There is even an AppImage or a Snap – unfortunately not a Flatpak yet. Update: there is now even a Flatpak package available! Thanks Sheogorath for the hint!

And if you are not willing to use Flatpak, AppImage or Snap for whatever reason, you can download the Ubuntu deb and create an RPM out of it. There is even a handy script for that. Anyway, get it installed.
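
With the Flatpak route, for example, installation might be as simple as this (assuming the Flathub identifier com.parsecgaming.parsec):

flatpak install flathub com.parsecgaming.parsec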

Sign up to Parsec, start the client, log in, and you are almost there:

Step 3: Play

After Parsec is all set, just start the cloud server, start Parsec there (maybe log in to your Parsec account), connect to the session on your client – and you are good to go: You can start playing!

For a first test I just watched some YouTube videos and was surprised by the quality. Next I logged in to my Steam account, got XCOM2 installed and played along happily!

Performance and user experience

But how good is the performance? Well, that depends mostly on one factor: network. Due to unfortunate circumstances I was “able” to test this setup with three very distinct networks in a short time frame:

  • A rather slowish, unstable WiFi with a lot of jitter
  • A LTE connection, provided to me via WiFi hotspot
  • A top-notch, high performance mesh WiFi

If you do not have fast pings (everything below 25 ms) and/or have a lot of jitter, I cannot recommend going down this path. Otherwise it can be a serious option!

The first network I was on was horribly slow, and the experience was accordingly bad. XCOM2 has basically permanent background music, and the constant interruptions in the music and audio sequences were in fact the worst part for me.

The LTE based network was slightly better, but still far from a native feeling. I was able to get a decent experience out of it and have fun, but that was about it.

However, the third option, WiFi of almost wired quality, was so good that at times I forgot I was not playing the game natively. There was no visible lag, the graphics were crystal clear, the music was never interrupted, etc. I was impressed – and had great sessions that way!

I can only recommend always keeping an eye on the connection quality reported in the Parsec overlay:

As Parsec mentions:

At 60 frames per second, 1 frame is around 16ms. By combining decode, encode and network, you’ll have the amount of frames the client lags behind.

Parsec about lag latency
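
To make this concrete: with, say, 10 ms encode, 8 ms decode and 30 ms network time, you end up at 48 ms – roughly three 16 ms frames that the client lags behind the server.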

Having this in mind, the above screenshot shows a connection with an unfortunate lag, leading to a not-that-good experience.

Recap

If you don’t have the hardware and/or software to play your favorite game, cloud gaming can be a solution for your problem. And if there is no proper offering out there, it is possible to get this working on your own.

Running your own cloud gaming server is surprisingly easy and not too expensive. It does feel somewhat weird in the beginning, especially if you usually only use clouds for your professional work. But it is a fun experience, and the results can be staggering – if your network is up to the job!

Featured image by Martin Str from Pixabay

[Howto] My own mail & groupware server, part 2: initial mail server setup

Running your own mail and groupware server can be challenging. I recently had to re-create my own setup from the ground up, and I am describing the steps in a blog post series. This post is #2 of the series and covers the initial mail server setup.

This post is all about setting up an initial mail server. Read about the background to this setup and the decisions I took in the first post, My own mail & groupware server, part 1: what, why, how?

Getting a server

The first question when hosting your own mail server is a simple one: where? Do you have an internet connection at home with fast uploads and maybe even a fixed IPv4 address? Or is the cloud the only option? And if you take the cloud route, will it be a virtual server or a root server?

My home connection doesn’t really allow for a larger server setup, so the cloud is the only option. Cloud always means someone else’s computer, never forget that! But if you pick a hardware machine it is at least harder to access or copy that machine without your knowledge, compared to a virtual instance.

My mail setup has always run on root servers, and for the last few years I have picked Hetzner as the hosting provider. Their server auction often has appealing machines on sale, so you can get something “decent enough” for around $30 per month.

So, for my new server I got one from the server auction again:

Server provisioning

The server was up quickly. The next step was to provision it properly. Mailu requires docker compose, and since Docker is not properly supported on CentOS 8, I decided to go with CentOS 7. Thus I rebooted the server into rescue mode and started Hetzner’s custom installer to install a minimal CentOS 7.7. RAID 1 was already configured; I just altered the partition sizes and moved most of the storage to the custom mount point /data.

DNS

Besides the basic provisioning I added rDNS entries for IPv4 and IPv6 – don’t forget those, they are important for many spam filters!

Speaking of DNS, the domain someone wants to use for mail needs to be set up in DNS as well. At least the following entries should be created (a zone-file sketch follows after the list):

  • create an A entry named @ for your server IPv4
  • create an AAAA entry named @ for your server IPv6
  • create an A entry with your server’s host name for the server IPv4
  • create an AAAA entry with your server’s host name for the server IPv6
  • create an MX entry pointing to the A entry of the server host name (not a CNAME!)
  • add a CAA entry for Let’s Encrypt: 0 issue "letsencrypt.org"
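
In BIND zone-file syntax, these entries might look like the following sketch – host name and domain from my setup, IP addresses replaced by documentation placeholders:

bayz.de.         IN A      192.0.2.10
bayz.de.         IN AAAA   2001:db8::10
lisa.bayz.de.    IN A      192.0.2.10
lisa.bayz.de.    IN AAAA   2001:db8::10
bayz.de.         IN MX     10 lisa.bayz.de.
bayz.de.         IN CAA    0 issue "letsencrypt.org"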

Many of those entries were still there from my previous setup, but I had to adjust the IP addresses to the new server and add the new host – host name “lisa”, named after the Simpsons.

Basic user and SSH

After setting up a new server, the next step usually is to add a new user: adduser liquidat creates it, usermod -aG wheel liquidat adds my user to the sudo group, and additionally I set the group wheel to NOPASSWD via visudo.

Also, copy the authorized keys from the root user to the new user – cp -r /root/.ssh /home/liquidat/ – and correct their ownership: chown -R liquidat:liquidat /home/liquidat/.ssh

Just to be sure the “right” ssh keys are there, copy your usual set over: ssh-copy-id liquidat@lisa.bayz.de . Personally, I afterwards removed all keys besides the currently most trusted one (ssh-ed25519…).

Last but not least, deactivate root login via ssh (the combined snippet follows after this list):

  • Set PermitRootLogin no in /etc/ssh/sshd_config
  • Also, in /etc/ssh/sshd_config set Port 2222 (we want to use port 22 for the git server later on)
  • Restart sshd: systemctl restart sshd
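
Taken together, the relevant lines in /etc/ssh/sshd_config look like this:

# deactivate root login, move sshd out of the way of the git server
PermitRootLogin no
Port 2222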

Encrypted partition

One of the most important steps for me (YMMV) is an encrypted hard drive. In my personal risk assessment this impedes many possible hardware attacks from certain actors. Of course certain risks remain.

As mentioned, most of the storage is set up in a partition on /data. So it has to be properly encrypted, formatted and made available again. Note that this requires you to log into the machine after every reboot and actively decrypt the partition. If your server ever goes down, all services on it are down until you decrypt the partition!

  • Unmount the existing mount: sudo umount /data/
  • Remove /data entry from /etc/fstab
  • Set up crypted device: sudo cryptsetup luksFormat /dev/md3
  • Get passphrase via pwgen -y 50
  • Decrypt the device: sudo cryptsetup luksOpen /dev/md3 verysecret
  • Create a file system on it: sudo mkfs.ext4 /dev/mapper/verysecret
  • Mount it: sudo mount /dev/mapper/verysecret /data

I can only recommend verifying the decryption afterwards:

  • Reboot server
  • Decrypt storage: sudo cryptsetup luksOpen /dev/md3 verysecret
  • Mount storage: sudo mount /dev/mapper/verysecret /data

Docker Compose

Next I had to install Docker. It is personally not my first choice for running containers – I would rather use Podman or even get my hands dirty with Kubernetes. But alas, time was short and pressure was high.

To get Docker onto the machine:

  • Remove CentOS’ Docker packages: sudo yum remove -y docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
  • Install tooling to more easily add third-party repos: sudo yum install -y yum-utils
  • Add Docker’s third party repo for CentOS: sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
  • Install Docker: sudo yum install -y docker-ce docker-ce-cli containerd.io
  • We don’t want autostarts of Docker since the device is still encrypted: sudo systemctl disable docker
  • Get Docker up: sudo systemctl start docker
  • Create Docker group: sudo groupadd docker
  • Add user to it: sudo usermod -aG docker $USER
  • Load new group immediately: newgrp docker
  • Get compose: sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  • Make compose executable: sudo chmod +x /usr/local/bin/docker-compose

I will always wonder why they never managed to get compose out there as a package. Not that hard, I’d say?!

But anyhow, the stage was set now: the server was up and running, a user was ready to do work, the device was properly secured, I was ready to set up the mail server!

Installing Mailu

Getting Mailu up and running is really a matter of minutes. I must admit I was impressed – especially since I knew how much time my own setup ate over the years. Basically you use their config file generator, which generates a docker-compose.yml, and then you start it. That’s really all!

  • sudo mkdir /data/mailu
  • create config file via https://setup.mailu.io/master/
    • make sure to add all kinds of subdomains you will be using
    • in my case: don’t activate webmail or caldav, I will be using Nextcloud for that
  • download the config files via wget as instructed: one docker-compose.yml and a mailu.env containing all the entered variables
  • verify that ANTIVIRUS is indeed defined and not commented out in mailu.env (thanks to dhoppe for that)
  • docker-compose -p mailu up -d

And that’s it, really! My mail server setup was already running, after minutes. Next I added an admin user:

  • create a password for the admin user via pwgen 20
  • create the admin user with that password: docker-compose -p mailu exec admin flask mailu admin postmaster bayz.de $PASSWORD
  • log in to admin interface: https://lisa.bayz.de/admin/, login is postmaster@$DOMAIN

The Mailu admin interface is nothing spectacular, but it does its job.

After this, I did some housekeeping – not strictly necessary, but helpful:

  • Add abuse alias to postmaster
  • Add an admin alias to postmaster (needed for RUA, DMARC aggregate reports)
  • Generate DNS entries for SPF, DKIM, etc., and add them to your DNS domain entries (see the sketch after this list)
  • Add other users or even domains at will; all domains entered must be present in the mailu.env config file!
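
The SPF and DMARC entries are plain TXT records; a sketch of what they might look like for this setup (the DKIM key is generated by Mailu and therefore omitted here):

bayz.de.         IN TXT  "v=spf1 mx a:lisa.bayz.de -all"
_dmarc.bayz.de.  IN TXT  "v=DMARC1; p=quarantine; rua=mailto:admin@bayz.de"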

And that’s it! I was able to send myself mail via some freemail accounts. And with a classic mail client (Thunderbird, or something on the phone) I could also send mails. It all just worked!

Get word out there

However, I still had to get the word out that there is a new mail server and that it will be sending valid mails.

For example, I registered the new mail server at the DNS whitelist DNSWL: many spam filters check against that.

Next, I let Microsoft know of the new machine and registered it at postmaster.live.com.

Last but not least I checked in with Google’s postmaster service.

Verifying and testing the setup

Now it was time for serious testing. I already said mail is hard, right? You had better not make mistakes in your configuration, otherwise your mail will quickly be marked as spam. So how about some online tests to check how well my new server scored against various spam filters? There is a whole list of online checks of all kinds, including services to which you can send mails to get them analyzed.

All these tests were green. And they should always be! As a small private mail server I cannot afford even the tiniest error. If you decide to set up something like this: do not proceed in your mail setup if some test shows something like “9/10” or other inferior results. Fix them all! I cannot stress this enough.

Having said that, you will realize that this setup is indeed not perfect: first and foremost, we are not accepting mails via IPv6. Thus services testing delivery via IPv6 will report problems. Second, DANE is not working out of the box with Mailu. In the long term I hope that I will be able to update this guide to include both functions properly.

What’s next?

So the mail server was up and running. I was already able to use it with IMAP clients. And given my story leading to this setup you cannot believe how relieved I was once everything worked again and mails were coming in.

I knew that there was still a lot to do – and I will cover the other steps in further blog posts – but the most important task was accomplished.

I’d like to thank the Mailu team for their awesome work on this piece of code – it is really great and I highly appreciate the ease of use and the simple admin capabilities.

Featured image by Felix Lichtenfeld from Pixabay

Getting Started with Ansible Security Automation: Investigation Enrichment

Last November we introduced Ansible security automation as our answer to the lack of integration across the IT security industry. Let’s have a closer look at one of the scenarios where Ansible can facilitate typical operational challenges of security practitioners.

A big portion of security practitioners’ daily activity is dedicated to investigative tasks. Enrichment is one of those tasks, and it can be both repetitive and time-consuming, making it a perfect candidate for automation. Streamlining these processes can free up analysts to focus on more strategic tasks, accelerate the response in time-sensitive situations and reduce human errors. However, in many large organizations the multiple security solutions involved in these activities are not integrated with each other. Hence, different teams may be in charge of different aspects of IT security, sometimes with no processes in common.

That often leads to manual work and interaction between people of different teams, which can be error-prone and, above all, slow. So when something suspicious happens and further attention is needed, security teams spend a lot of valuable time operating many different security solutions and coordinating work with other teams, instead of focusing on the suspicious activity directly.

In this blog post we have a closer look at how Ansible can help to overcome these challenges and support investigation enrichment activities. In the following example we’ll see how Ansible can be used to enable programmatic access to information like logs coming from technologies that may not be integrated into a SIEM. As an example we’ll use enterprise firewalls and intrusion detection and protection systems (IDPS).

Simple Demo Setup

To showcase the aforementioned scenario we created a simplified, very basic demo setup illustrating the interactions. This setup includes two security solutions providing information about suspicious traffic, as well as a SIEM: we use a Check Point Next Generation Firewall (NGFW) and a Snort IDPS as the security solutions providing information. The SIEM to gather and analyze those data is IBM QRadar.

Also, from a machine called “attacker” we will simulate a potential attack pattern on the target machine on which the IDPS is running.

(Figure: overview of the demo setup)

This is just a basic demo setup; a real-world setup of an Ansible security automation integration would look different and could feature other vendors and technologies.

Logs: crucial, but distributed

Now imagine you are a security analyst in an enterprise. You were just informed of an anomaly in an application, showing suspicious log activities. For example, in our little demo we curl a certain endpoint of the web server, which we conveniently called “web_attack_simulation”:

$ sudo grep web_attack /var/log/httpd/access_log
172.17.78.163 - - [22/Sep/2019:15:56:49 +0000] "GET /web_attack_simulation HTTP/1.1" 200 22 "-" "curl/7.29.0"
...

As a security analyst you know that anomalies can be the sign of a potential threat. You have to determine if this is a false positive that can simply be dismissed, or an actual threat which requires a series of remediation activities to be stopped. Thus you need to collect more data points – for example from the firewall and the IDPS. Going through the logs of the firewall and IDPS manually takes a lot of time. In large organizations, the security analyst might not even have the necessary access rights, and instead needs to contact the teams responsible for the enterprise firewall and the IDPS, asking them to manually go through the respective logs, check for anomalies on their own, and then reply with the results. This could imply a phone call, a ticket, long explanations, necessary exports or other actions consuming valuable time.

It is common in large organizations to centralize event management on a SIEM and use it as the primary dashboard for investigations. In our demo example the SIEM is QRadar, but the steps shown here are valid for any SIEM. To properly analyze security-related events, multiple steps are necessary: the security technologies in question – here the firewall and the IDPS – need to be configured to stream their logs to the SIEM in the first place. But the SIEM also needs to be configured to help ensure that those logs are parsed in the correct way and meaningful events are generated. Doing this manually is time-intensive and requires in-depth domain knowledge. Additionally it might require privileges a security analyst does not have.

But Ansible allows security organizations to create pre-approved automation workflows in the form of playbooks. Those can even be maintained centrally and shared across different teams to enable security workflows at the press of a button. 

Why don’t we add those logs to QRadar permanently? This could create alert fatigue, where too much data in the system generates too many events, and analysts might miss the crucial ones. Additionally, sending all logs from all systems easily consumes a huge amount of cloud resources and network bandwidth.

So let’s write such a playbook to first configure the log sources to send their logs to the SIEM. We start the playbook with Snort and configure it to send all logs to the IP address of the SIEM instance:

---
- name: Configure snort for external logging
  hosts: snort
  become: true
  vars:
    ids_provider: "snort"
    ids_config_provider: "snort"
    ids_config_remote_log: true
    ids_config_remote_log_destination: "192.168.3.4"
    ids_config_remote_log_procotol: udp
    ids_install_normalize_logs: false

  tasks:
    - name: import ids_config role
      include_role:
        name: "ansible_security.ids_config"

Note that here we only have one task, which imports an existing role. Roles are an essential part of Ansible and help in structuring your automation content. Roles usually encapsulate the tasks and other data necessary for a clearly defined purpose. In the case of the playbook shown above, we use the role ids_config, which manages the configuration of various IDPS. It is provided as an example by the ansible-security team. This role, like the others mentioned in this blog post, is provided as guidance to help customers who may not be accustomed to Ansible become productive faster. They are not necessarily meant as a best practice or a reference implementation.

Using this role we only have to set a few parameters; the domain knowledge of how to configure Snort itself is hidden away. Next, we do the very same thing with the Check Point firewall. Again an existing role is re-used, log_manager:

- name: Configure Check Point to send logs to QRadar
  hosts: checkpoint

  tasks:
    - include_role:
        name: ansible_security.log_manager
        tasks_from: forward_logs_to_syslog
      vars:
        syslog_server: "192.168.3.4"
        checkpoint_server_name: "gw-2d3c54"
        firewall_provider: checkpoint

With these two snippets we are already able to reach out to two security solutions in an automated way and reconfigure them to send their logs to a central SIEM.

We can also automatically configure the SIEM to accept those logs and sort them into corresponding streams in QRadar:

- name: Add Snort log source to QRadar
  hosts: qradar
  collections:
    - ibm.qradar

  tasks:
    - name: Add snort remote logging to QRadar
      qradar_log_source_management:
        name: "Snort rsyslog source - 192.168.14.15"
        type_name: "Snort Open Source IDS"
        state: present
        description: "Snort rsyslog source"
        identifier: "ip-192-168-14-15"

- name: Add Check Point log source to QRadar
  hosts: qradar
  collections:
    - ibm.qradar

  tasks:
    - name: Add Check Point remote logging to QRadar
      qradar_log_source_management:
        name: "Check Point source - 192.168.23.24"
        type_name: "Check Point FireWall-1"
        state: present
        description: "Check Point log source"
        identifier: "192.168.23.24"

Here we use Ansible Content Collections: the new method of distributing, maintaining and consuming automation content. Collections can contain roles, but also modules and other code necessary to enable automation of certain environments. In our case the collection contains not only a role, but also the necessary modules and connection plugins to interact with QRadar.
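
If you want to try this yourself, such a collection can be installed from Ansible Galaxy, for example:

ansible-galaxy collection install ibm.qradar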

Without any further intervention by the security analyst, Check Point logs start to appear in the QRadar log overview. Note that so far no logs are sent from Snort to QRadar: Snort does not know yet that this traffic is noteworthy! We will come to this in a few moments.

(Screenshot: Check Point logs appearing in the QRadar log overview)

Remember, taking the perspective of a security analyst: now we have more data at our disposal. We have a better understanding of what could be the cause of the anomaly in the application behaviour. The firewall logs show who is sending traffic to whom. But this is still not enough data to fully qualify what is going on.

Fine-tuning the investigation

Given the data at your disposal you decide to implement a custom signature on the IDPS to get alert logs if a specific pattern is detected.

In a typical situation, implementing a new rule would require another interaction with the security operators in charge of Snort who would likely have to manually configure multiple instances. But luckily we can again use an Ansible Playbook to achieve the same goal without the need for time consuming manual steps or interactions with other team members.

There is also the option to have a set of playbooks pre-created for customer-specific situations. Since the language of Ansible is YAML, even team members with little Ansible knowledge can contribute to the playbooks, making it possible to have agreed-upon playbooks ready to be used by the analysts.

Again we reuse a role, ids_rule. Note that this time some understanding of Snort rules is required to make the playbook work. Still, the actual knowledge of how to manage Snort as a service across various target systems is shielded away by the role.

---
- name: Add Snort rule
  hosts: snort
  become: yes

  vars:
    ids_provider: snort

  tasks:
    - name: Add snort web attack rule
      include_role:
        name: "ansible_security.ids_rule"
      vars:
        ids_rule: 'alert tcp any any -> any any (msg:"Attempted Web Attack"; uricontent:"/web_attack_simulation"; classtype:web-application-attack; sid:99000020; priority:1; rev:1;)'
        ids_rules_file: '/etc/snort/rules/local.rules'
        ids_rule_state: present

Finish the offense

Moments after the playbook is executed, we can check in QRadar if we see alerts. And indeed, in our demo setup this is the case:

(Screenshot: the new alerts showing up in QRadar)

With this information in hand, we can now finally check all offenses of this type, and verify that they are all coming from one single host – here, the attacker.

From here we can move on with the investigation. For our demo we assume that the behavior is intentional, and thus close the offense as false positive.

Rollback!

Last but not least, there is one step which is often overlooked, but is crucial: rolling back all the changes! After all, as discussed earlier, sending all logs into the SIEM all the time is resource-intensive.

With Ansible the rollback is quite easy: basically, the playbooks from above can be reused; they just need to be slightly altered to not create log streams but remove them again. That way the entire process can be fully automated and at the same time made as resource-friendly as possible.
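
As a sketch of this idea, the QRadar log source created above could be removed by the very same module, just with state: absent – assuming the module supports the absent state, which is the usual Ansible pattern:

- name: Remove Snort log source from QRadar
  hosts: qradar
  collections:
    - ibm.qradar

  tasks:
    - name: Remove snort remote logging from QRadar
      qradar_log_source_management:
        name: "Snort rsyslog source - 192.168.14.15"
        type_name: "Snort Open Source IDS"
        state: absent
        identifier: "ip-192-168-14-15"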

Takeaways and where to go next

The job of a CISO and her team can be difficult even if they have all the necessary tools in place, because the tools often don’t integrate with each other. When there is a security threat, an analyst has to perform an investigation, chasing all relevant pieces of information across the entire infrastructure, consuming valuable time to understand what’s going on and ultimately perform any sort of remediation.

Ansible security automation is designed to help enable integration and interoperability of security technologies to support security analysts’ ability to investigate and remediate security incidents faster.

As next steps, there are plenty of resources out there to follow up on the topic.

Credits

This post was originally released on ansible.com/blog: GETTING STARTED WITH ANSIBLE SECURITY AUTOMATION: INVESTIGATION ENRICHMENT

Header image by Alexas_Fotos from Pixabay.