[Howto] Monitoring OpenVPN ports with Nagios/Icinga

An OpenVPN server is usually a crucial part of the IT infrastructure, and thus should be monitored properly. But monitoring UDP services is sometimes not that easy, so I wrote a script which can be used in Nagios/Icinga.

OpenVPN is usually accessed via UDP. Since UDP ports are not as easy to monitor as TCP ports, many administrators limit themselves to checking whether an OpenVPN process is running on the OpenVPN server. However, that does not unveil network problems, and it only works on machines you have proper access to: 3rd party machines or appliances are out of your reach with this approach. Another approach is to monitor the management port. However, that requires that the port is reachable by the monitoring server, which might not be the best idea in the case of distributed monitoring. And it is still no option in the case of 3rd party machines or other black boxes.

A customer of my employer credativ GmbH had exactly that kind of problem, so I wrote a script in Python. It checks the UDP port of a given server. If the server responds, the script returns the state “OK” together with the hex form of the response. The script can be tested on the command line:

$ python check_openvpn openvpn.example.com
OK: OpenVPN server response (hex): 4018062d97f85c21d50000000000

The port can be changed with the “-p” flag, as the built-in help shows:

$ python check_openvpn -h
usage: check_openvpn [-h] [-p PORT] [-t] host

positional arguments:
  host                  the OpenVPN host name or ip

optional arguments:
  -h, --help            show this help message and exit
  -p PORT, --port PORT  set port number
  -t, --tcp             use tcp instead of udp

As you can see, it also supports testing TCP ports. However, in that case we do not get a response to evaluate, we effectively just test if the given TCP port can be reached. Here we switch on TCP support and also change the port to 443:

$ python check_openvpn -t -p 443 openvpn-tcp.example.com
OK: OpenVPN tcp port reachable.

If the server does not respond within a given time period – 5 seconds – the script reports an error:

$ python check_openvpn slowserver.example.com
CRIT: Request timed out
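The core idea of the script can be sketched in a few lines of Python. Note that this is not the original script: the packet layout below is my assumption, modelled after the initial OpenVPN handshake packet (P_CONTROL_HARD_RESET_CLIENT_V2), and 1194 is simply the usual OpenVPN default port.

```python
import binascii
import socket

def build_probe():
    """Build a minimal OpenVPN session initiation packet.

    Assumed layout of a plain UDP handshake packet: one opcode byte
    (opcode 7 = P_CONTROL_HARD_RESET_CLIENT_V2, shifted left by 3 bits,
    key id 0), an 8 byte session id, an empty packet-id ACK array (one
    zero byte) and a 4 byte packet id.
    """
    opcode = 7 << 3                  # 0x38: hard reset from client, key id 0
    session_id = b"\x00" * 8         # arbitrary session id for the probe
    return bytes([opcode]) + session_id + b"\x00" + b"\x00\x00\x00\x00"

def check_openvpn_udp(host, port=1194, timeout=5):
    """Send the probe and report the server response as a hex string."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_probe(), (host, port))
        data, _ = sock.recvfrom(1024)
        return "OK: OpenVPN server response (hex): " + binascii.hexlify(data).decode()
    except socket.timeout:
        return "CRIT: Request timed out"
    finally:
        sock.close()
```

A real Nagios/Icinga plugin additionally has to exit with the matching return code, 0 for OK and 2 for CRITICAL.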

The script was also uploaded to Monitoringexchange. Since my employer strongly supports the ideas behind Open Source, I was able to publish the script under the MIT licence. I also wrote a German blog post about the script on my company’s blog.


[Howto] Remapping buttons with xbindkeys and xte

Sometimes you buy devices like a mouse or a keyboard which provide additional buttons for special functions – or which have buttons which do not behave as expected. In such cases the button actions can be mapped to other functions, or even to other buttons.

I recently bought a Logitech M705 mouse which works almost perfectly. However, there is one nagging bug: the middle mouse button – pressing the scroll wheel – does not trigger the usual “button 2” event a middle click is supposed to generate and which is crucial, for example, when browsing the web. Nothing happens at all.

In such cases the first step is to check whether the operating system receives any input at all. Fire up a shell and start the program xev. It opens a small, white window into which you can move your cursor. As soon as the cursor enters the window, you will see a lot of log data in the shell: xev shows you all X events, thus all input you generate via keyboard or mouse.

In my case I pressed the middle button, and saw:

ButtonPress event, serial 40, synthetic NO, window 0x5800001,
    root 0xc7, subw 0x5800002, time 19610234, (46,38), root:(1784,61),
    state 0x10, button 6, same_screen YES
[...]
ButtonRelease event, serial 40, synthetic NO, window 0x5800001,
    root 0xc7, subw 0x5800002, time 19610234, (46,38), root:(1784,61),
    state 0x210, button 6, same_screen YES

The interesting part is that the button press was not reported as “button 2”, as I would have expected, but as “button 6”. Thus the button must be remapped to the event “button 2”.

Mapping of keys and buttons can be done via xbindkeys. In the case of KDE I created a symlink to start the program at each startup:

$ ls -la ~/.kde/env/xbindkeys
lrwxrwxrwx 1 liquidat users 18 Aug  1 10:56 .kde/env/xbindkeys -> /usr/bin/xbindkeys

xbindkeys reads its configuration from ~/.xbindkeysrc, so that’s the place where we need to configure the actual mapping. The syntax is:

#    "command to start"
#       associated key

The most interesting part of the mapping is: how do we trigger the action “button 2”? That is done by the program xte, which generates fake input events. Thus the final configuration is:

"xte 'mouseclick 2'"                                                                                                                                                                                           
  b:6  

And you are done: pressing mouse button 6 on the Logitech M705 now triggers a middle click (“mouseclick 2”).
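For completeness: the same configuration syntax covers keyboard shortcuts as well. A made-up example that starts xterm on Ctrl+Shift+t would look like this in ~/.xbindkeysrc:

```
# hypothetical binding: start xterm with Ctrl+Shift+t
"xterm"
  control+shift + t
```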

However, as stated correctly in the comments below, this is just an intermediate solution! A long-term solution is to fix the mapping in evdev, the Linux input handling layer.

[Short Tip] Use SSH agent forwarding on remote servers

When you administrate machines it sometimes makes sense to forward your SSH agent from your client A to the server B. Using agent forwarding you can use the authentication keys from client A on server B, for example to properly authenticate against a server C – without the need to copy your private SSH key to server B. One common example in my case is that I sometimes need to access Gitolite/Github repositories on server B, but I do not want to copy my SSH key there.

Keep in mind that you first have to add the wanted SSH key on client A via ssh-add!

$ ssh-add -c
Identity added: /home/liquidat/.ssh/id_rsa (/home/liquidat/.ssh/id_rsa)
The user must confirm each use of the key
$ ssh -A server_b.example.net
liquidat@server_b.example.net's password: 
Last login: Fri May 24 17:11:17 2013 from somewhereovertherainbow.example.com
$ ssh git@git.example.com info
hello liquidat, this is git@git.example.com running gitolite3 3.5.1-1.el6 on git 1.7.1

[...]

(Thanks to Evgeni for reminding me of the ‘-c’ flag.)
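Instead of passing -A on every call, agent forwarding can also be enabled per host in ~/.ssh/config; the host name here is just an example. Only forward your agent to hosts you trust, since root on such a host can use the forwarded agent while you are logged in:

```
# ~/.ssh/config
Host server_b.example.net
    ForwardAgent yes
```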

[Howto] Managing dotfiles with dfm

Most system administrators have a set of personalized dotfiles like .vimrc and .bashrc. Taking these files with you from host to host and keeping them up2date everywhere can be quite a wearisome task. There are various tools to ease the pain, and I’d like to shed some light on one of them: dfm – the dotfile manager.

My background

On my machines I usually keep a set of personalized dotfiles which I don’t want to miss on any other server I have to administrate:

.screenrc
.bashrc
.inputrc
.vimrc
.vim/colors/jellybeans.vim
.vim/colors/desert256.vim

I need these files on all machines which I regularly work on – and since there are quite a few customer machines I access regularly, years ago I wrote my own, git-backed Python script to keep these files synced and up2date on each machine. While it was fun to write the script, I always knew that it did not cover all my use cases regarding dotfiles, and it was not really flexible in terms of complex directory structures and so on. Also, I knew there must be other people out there with the same problem – and thus I was sure better solutions already existed.

And boy, there are so many of them!

Some interesting solutions for dotfile management

Many people have looked at this problem before – and solved it in their own ways. Most often the basic principle is that the files are stored and tracked via git in a hidden directory, and the tool of your choice manages symlinks between the files in the store and in $HOME.

For example, a very interesting idea is to use GNU Stow to manage dotfiles. It tracks the necessary files in subdirectories and of course links the files from there to the ‘real’ places in $HOME. I like reusing existing tools, so the idea of using GNU Stow appealed to me immediately. Also, the ‘packages’ or ‘group’ support it offers is tempting. Unfortunately, on most systems GNU Stow is not installed by default, and I cannot install new software on customer machines.

The problem of necessary software installation is also relevant for another often mentioned solution: Homesick. Homesick is Ruby based and works similarly to the GNU Stow solution mentioned above: files are stored in a hidden subdirectory, tracked with git, and linked in $HOME. The main feature here is that it can keep the configuration files in various git repositories, called ‘castles’, so you can integrate the work of projects like oh-my-zsh.
While Homesick does offer quite some features, it is Ruby based – and I cannot expect a working Ruby environment on each system, so it is out of the question. I can go with Perl or Python, but that’s about it.

Other people had the same Ruby problem and created Homeshick – a Homesick clone spelled with an additional ‘h’ and, besides, written in Bash. It is quite straightforward and offers all the necessary features, like listing and tracking various git repositories as sources for dotfiles, linking the actual dotfiles into your home, and so on. This one is almost my favorite! I wouldn’t be surprised if it is the favorite for most users out there.

But Homeshick is only almost my favorite – meet dfm, a utility to manage dotfiles! It is written in Perl and mainly does the same as the tools mentioned above, minus the support for more than one repository. But on the plus side it is capable of ensuring file rights via chmod – I haven’t seen that in any other solution. Additionally it supports arbitrary scripts executed during the update process, for example for host-specific commands. And last but not least, using a three letter program feels, somehow, right 😉

Starting with dfm

So, first of all you have to get dfm. If you are hosting your dotfiles on github anyway, just fork the dfm starter repo and clone it. Otherwise, if you want to host it yourself later on, clone the main dfm repo and change the remote URL. My choice was the second way:

$ git clone git@github.com:justone/dotfiles.git .dotfiles
Cloning into '.dotfiles'...
remote: Counting objects: 3212, done.
remote: Compressing objects: 100% (1531/1531), done.
remote: Total 3212 (delta 1413), reused 3096 (delta 1397)
Receiving objects: 100% (3212/3212), 4.22 MiB | 202 KiB/s, done.
Resolving deltas: 100% (1413/1413), done.

Next I configured the freshly cloned repository to use my own URL, since my dotfiles are not on github:

$ cd .dotfiles/
$ git remote -v
origin  git@github.com:justone/dotfiles.git (fetch)
origin  git@github.com:justone/dotfiles.git (push)
$ git remote set-url origin git@git.example.net:dotfiles
$ git push origin master
Counting objects: 402, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (139/139), done.
Writing objects: 100% (402/402), 58.03 KiB, done.
Total 402 (delta 207), reused 389 (delta 195)
To git@git.example.net:dotfiles
 * [new branch]      master -> master

You now have the repository up and ready. So let’s install dfm as a tool available in $PATH: this means creating a symlink between ~/bin and ~/.dotfiles/bin and also extending the $PATH variable in .bashrc.load, which is sourced from .bashrc:

$ ./.dotfiles/bin/dfm
INFO: Installing dotfiles...
INFO:   Symlinking bin (.dotfiles/bin).
INFO:   Symlinking .bashrc.load (.dotfiles/.bashrc.load).
INFO: Appending loader to .bashrc

The .bashrc itself is barely modified:

$ tail -n 1 .bashrc
. $HOME/.bashrc.load

As a side note, I am not sure if I really want to base all my customizations on the bashrc loader, but the reasoning behind that move from the dfm author is sound:

Why .bashrc.load instead of .bashrc?

Each OS or distribution generally has its own way of populating a default .bashrc in each new user’s home directory. This file works with the rest of the OS to load in special things like bash completion scripts or aliases. The idea behind using .bashrc.load is that dotfiles should add new things to a system rather than overwriting built-in functionality.

For instance, if a system sources bash completion files for you, and your dotfiles overwrites the system-provided .bashrc, then you would have to replicate that functionality on your own.

But no matter whether you agree with this or not, the next step is to add further files to your dfm repository, which is quite easy because dfm comes with an import function:

$ dfm import .vimrc
INFO: Importing .vimrc from /home/liquidat into /home/liquidat/.dotfiles
INFO:   Symlinking .vimrc (.dotfiles/.vimrc).
INFO: Committing with message 'importing .vimrc'
[master d7de67a] importing .vimrc
 1 file changed, 29 insertions(+)
 create mode 100644 .vimrc

The usage is pretty straightforward, and supports directories as well:

$ dfm import .vim
INFO: Importing .vim from /home/liquidat into /home/liquidat/.dotfiles
INFO:   Symlinking .vim (.dotfiles/.vim).
INFO: Committing with message 'importing .vim'
[master e9bd60a] importing .vim
 3 files changed, 875 insertions(+)
 create mode 100644 .vim/colors/desert256.vim
 create mode 100644 .vim/colors/jellybeans.vim

Using dfm on a new system

Using dfm on a new system is straightforward as well: clone the repo, invoke dfm, and you are done:

$ git clone git@git.example.com:dotfiles .dotfiles
Cloning into '.dotfiles'...
remote: Counting objects: 418, done.
remote: Compressing objects: 100% (142/142), done.
remote: Total 418 (delta 211), reused 401 (delta 207)
Receiving objects: 100% (418/418), 66.83 KiB, done.
Resolving deltas: 100% (211/211), done.
$ ./.dotfiles/bin/dfm
INFO: Installing dotfiles...
INFO:   Backing up .vimrc.
INFO:   Symlinking .vimrc (.dotfiles/.vimrc).
INFO:   Backing up bin.
INFO:   Symlinking bin (.dotfiles/bin).
INFO:   Symlinking .bashrc.load (.dotfiles/.bashrc.load).
INFO:   Backing up .inputrc.
INFO:   Symlinking .inputrc (.dotfiles/.inputrc).
INFO:   Backing up .vim.
INFO:   Symlinking .vim (.dotfiles/.vim).
INFO: Appending loader to .bashrc

As you can see, quite a few files are backed up. That just means they are moved to .backup, so in the worst case you know where to look.

Now let’s see what happens when you change something:

$ cd ~/bin
$ ln -s /usr/bin/gnome-terminal gt
$ dfm add bin/gt
$ dfm commit -m "Added gt symlink for gnome-terminal."
[master 441c067] Added gt symlink for gnome-terminal.
 1 file changed, 1 insertion(+)
 create mode 120000 bin/gt
$ dfm push
Counting objects: 6, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (4/4), 363 bytes, done.
Total 4 (delta 1), reused 0 (delta 0)
To git@sinoda:dotfiles
   b28dc11..441c067  master -> master

As you can see, dfm supports git pass-through: git commands are directly handed over to git. The changes were added to the git repository, and the repository was pushed to the remote URL.

So, to get the changes onto another system you just have to ask dfm to update the files via dfm umi. In this case I called it after I made changes to .screenrc:

$ dfm umi
[...]
INFO: re-installing dotfiles
INFO: Installing dotfiles...
INFO:   Symlinking .screenrc (.dotfiles/.screenrc).

dfm special features

As mentioned above, the strongest features of dfm are its ability to ensure file system rights and to run scripts after an update. The first option comes in handy when you are sharing files in your SSH config directory. The second is useful whenever you have to alter files or do anything based on, for example, host names. Imagine that you have various build machines to build RPM files, but you have to use different package names in each build environment (think of customer-specific e-mail addresses here).

It should be possible to create a script that fills in the necessary details in the rpmmacros file based on IP or host name. I haven’t given that a try yet, but it should be worth it…
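As a sketch of that idea in Python – all host names and macro values here are made up, and how exactly to hook the script into dfm’s update process is left to the dfm documentation:

```python
import os
import socket

# Hypothetical mapping from build host name to the packager line
# that should end up in ~/.rpmmacros (all values are examples).
PACKAGER = {
    "build-internal": "Build <build@example.net>",
    "build-customer": "Build <build@customer.example.com>",
}

def rpmmacros_for(hostname, default="Build <build@example.net>"):
    """Return the .rpmmacros content for the given build host."""
    return "%%packager %s\n" % PACKAGER.get(hostname, default)

def write_rpmmacros(path=None):
    """Write the host-specific macros; dfm would call this after an update."""
    path = path or os.path.expanduser("~/.rpmmacros")
    hostname = socket.gethostname().split(".")[0]  # short host name
    with open(path, "w") as macros:
        macros.write(rpmmacros_for(hostname))
```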

Keeping dfm up2date

Last but not least, it is of course desirable to keep dfm itself up2date. The dfm wiki proposes the following workflow for that:

$ dfm remote add upstream git://github.com/justone/dotfiles.git
$ dfm checkout master
$ dfm fetch upstream 
$ dfm merge upstream/master

It is a pretty neat way – using the git tools as they are meant to be used – and still easy enough to handle.

Summary

Summarizing, I can say that dfm offers a quite neat and easily understandable solution for managing dotfiles while not relying on languages or tools you probably cannot install on the systems you are working on. However, Homeshick comes in as a close second, and I might give it a try at some point in the future. In the end, both solutions are much better than self-written ones – or no solution at all.

Short Tip: Fix qdbus problems during a Kubuntu upgrade to 13.04

I just performed an upgrade of my Kubuntu installation from 12.10 to 13.04 and had problems with the starting KDE session: it wasn’t able to bring up dbus, asked if I was able to call qdbus, and quit afterwards.

A short test on the command line calling qdbus brought up a strange error: the binary was there and could be called, but it looked for another binary called qdbus in /usr/lib/x86_64-linux-gnu/qt4/bin/ which wasn’t there. However, there was a binary /usr/lib/i386-linux-gnu/qt4/bin/qdbus, and thus I realized that the i386 version of qdbus was installed rather than the x86_64 version I needed. Thus the fix was easy:

apt-get install qdbus

The i386 version was automatically removed, the x86_64 version was installed, and KDE was able to start up properly.

Short Tip: Check configured virtual hosts in Apache

Whenever you have to debug virtual host setups in Apache, checking the actually loaded virtual host configuration is a good first step. This can be done with the -S option of the Apache binary: it lists all configured virtual hosts and performs a syntax check.

On Fedora, RHEL and CentOS the Apache binary can be found at /usr/sbin/httpd:

# /usr/sbin/httpd -S
VirtualHost configuration:                                                        
1.2.3.4:80      me.example.net (/etc/httpd/conf.d/me.conf:5)
2.3.4.5:80      others.example.net (/etc/httpd/conf.d/others.conf:1)
2.3.4.5:443     others.example.net (/etc/httpd/conf.d/others.conf:38)
Syntax OK

On Debian systems the call is almost the same, you just have to source the environment variables upfront, and the binary has a different name for historical reasons:

# source /etc/apache2/envvars
# /usr/sbin/apache2 -S
VirtualHost configuration:                                                        
1.2.3.4:80      me.example.net (/etc/apache2/sites-enabled/me.conf:5)
2.3.4.5:80      others.example.net (/etc/apache2/sites-enabled/others.conf:1)
2.3.4.5:443     others.example.net (/etc/apache2/sites-enabled/others.conf:38)
Syntax OK

If you forget to source the environment variables, you might run into an error about user names; in such cases it helps to call source /etc/apache2/envvars upfront.