[Howto] Managing dotfiles with dfm

Most system administrators have a set of personalized dotfiles like .vimrc and .bashrc. Taking these files with you from host to host and keeping them up to date everywhere can be quite a wearisome task. There are various tools to ease the pain, and I would like to shed some light on one of them: dfm – the dotfile manager.

My background

On my machines I usually keep a set of personalized dotfiles – among them .vimrc and .vim, .inputrc, .screenrc and a small bin directory – which I don’t want to miss on any other server I have to administrate.

I need these files on all machines I regularly work on – and since there are quite a few customer machines I access regularly, I wrote my own git-backed Python script years ago to keep these files synced and up to date on each machine. While it was fun to write the script, I always knew that it did not cover all my use cases regarding dotfiles, and it was not really flexible in terms of complex directory structures and so on. Also, I knew there must be other people out there with the same problem – and thus I was sure better solutions already existed.

And boy, there are so many of them!

Some interesting solutions for dotfile management

Many people have looked at this problem before – and solved it in their own ways. Most often the basic principle is that the files are stored and tracked via git in a hidden directory, and the tool of your choice manages symlinks between the files in the store and in $HOME.
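
In its most basic form the principle boils down to something like this – a hand-rolled illustration, not tied to any particular tool:

$ mkdir -p ~/.dotfiles
$ mv ~/.vimrc ~/.dotfiles/.vimrc
$ ln -s ~/.dotfiles/.vimrc ~/.vimrc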

For example, a very interesting idea is to use GNU Stow to manage dotfiles. It tracks the necessary files in subdirectories and of course links the files from there to the ‘real’ places in $HOME. I like reusing existing tools, so the idea of using GNU Stow appealed to me immediately. Also, the ‘package’ or ‘group’ support it offers is tempting. Unfortunately, on most systems GNU Stow is not installed by default, and I cannot install new software on customer machines.
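
Just for illustration, a Stow-based setup would look roughly like this – the package layout (‘vim’, ‘bash’) is an assumption on my side:

$ cd ~/dotfiles
$ stow --target="$HOME" vim bash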

The problem of necessary software installation is also relevant for another often-mentioned solution: Homesick. Homesick is Ruby based and works similarly to the GNU Stow solution mentioned above: files are stored in a hidden subdirectory, tracked with git, and linked in $HOME. The main feature here is that it can keep the configuration files in various git repositories, called ‘castles’, so you can integrate the work of projects like oh-my-zsh.
While Homesick does offer quite a few features, it is Ruby based – and I cannot expect a working Ruby environment on each system, so it is out of the question. I can go with Perl or Python, but that’s about it.
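
For completeness, the usual Homesick workflow would look roughly like this – assuming a working Ruby environment and a repository named ‘dotfiles’:

$ gem install homesick
$ homesick clone git@github.com:yourname/dotfiles.git
$ homesick symlink dotfiles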

Other people had the same Ruby problem and created Homeshick – a Homesick clone spelled with an additional ‘h’ and written in Bash instead. It is quite straightforward and offers all the necessary features: listing and tracking various git repositories as sources for dotfiles, linking the actual dotfiles into your home directory, and so on. This one is almost my favorite! I wouldn’t be surprised if it is the favorite for most users out there.
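
A typical Homeshick session looks roughly like this – the repository name is of course just a placeholder:

$ git clone https://github.com/andsens/homeshick.git $HOME/.homesick/repos/homeshick
$ source $HOME/.homesick/repos/homeshick/homeshick.sh
$ homeshick clone yourname/dotfiles
$ homeshick link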

But Homeshick is only almost my favorite – meet dfm, a Utility to Manage Dotfiles! It is written in Perl and does largely the same as the tools mentioned above, minus the support for more than one repository. On the plus side, it can ensure file permissions via chmod – I haven’t seen that in any other solution. Additionally, it supports arbitrary scripts executed during the update process, for example for host-specific commands. And last but not least, using a three-letter program somehow just feels right 😉

Starting with dfm

So, first of course you have to get dfm. If you are hosting your dotfiles on github anyway, just fork the dfm starter repo and clone it. Otherwise, if you later want to host it yourself, clone the main dfm repo and change the remote URL. My choice was the second way:

$ git clone git@github.com:justone/dotfiles.git .dotfiles
Cloning into '.dotfiles'...
remote: Counting objects: 3212, done.
remote: Compressing objects: 100% (1531/1531), done.
remote: Total 3212 (delta 1413), reused 3096 (delta 1397)
Receiving objects: 100% (3212/3212), 4.22 MiB | 202 KiB/s, done.
Resolving deltas: 100% (1413/1413), done.

Next I configured the freshly cloned repository to use my own URL, since my dotfiles are not on github:

$ cd .dotfiles/
$ git remote -v
origin  git@github.com:justone/dotfiles.git (fetch)
origin  git@github.com:justone/dotfiles.git (push)
$ git remote set-url origin git@git.example.net:dotfiles
$ git push origin master
Counting objects: 402, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (139/139), done.
Writing objects: 100% (402/402), 58.03 KiB, done.
Total 402 (delta 207), reused 389 (delta 195)
To git@git.example.net:dotfiles
 * [new branch]      master -> master

You now have the repository up and ready. So let’s install dfm as a tool available in $PATH, which means creating a symlink from ~/bin to ~/.dotfiles/bin and extending the $PATH variable in .bashrc.load, which in turn is sourced from .bashrc:

$ ./.dotfiles/bin/dfm
INFO: Installing dotfiles...
INFO:   Symlinking bin (.dotfiles/bin).
INFO:   Symlinking .bashrc.load (.dotfiles/.bashrc.load).
INFO: Appending loader to .bashrc

The .bashrc itself is barely modified:

$ tail -n 1 .bashrc
. $HOME/.bashrc.load
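
The PATH extension itself lives in .bashrc.load; in the starter repository it contains something along these lines (your copy may of course differ):

$ cat ~/.bashrc.load
export PATH="$HOME/bin:$PATH"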

As a side note, I am not sure if I really want to move all my customizations into the bashrc loader, but the dfm author’s reasoning behind that decision is sound:

Why .bashrc.load instead of .bashrc?

Each OS or distribution generally has its own way of populating a default .bashrc in each new user’s home directory. This file works with the rest of the OS to load in special things like bash completion scripts or aliases. The idea behind using .bashrc.load is that dotfiles should add new things to a system rather than overwriting built-in functionality.

For instance, if a system sources bash completion files for you, and your dotfiles overwrite the system-provided .bashrc, then you would have to replicate that functionality on your own.

But no matter whether you agree with that reasoning or not, the next step is to add further files to your dfm repository, which is quite easy because dfm comes with an import function:

$ dfm import .vimrc
INFO: Importing .vimrc from /home/liquidat into /home/liquidat/.dotfiles
INFO:   Symlinking .vimrc (.dotfiles/.vimrc).
INFO: Committing with message 'importing .vimrc'
[master d7de67a] importing .vimrc
 1 file changed, 29 insertions(+)
 create mode 100644 .vimrc

The usage is pretty straightforward, and directories are supported as well:

$ dfm import .vim
INFO: Importing .vim from /home/liquidat into /home/liquidat/.dotfiles
INFO:   Symlinking .vim (.dotfiles/.vim).
INFO: Committing with message 'importing .vim'
[master e9bd60a] importing .vim
 3 files changed, 875 insertions(+)
 create mode 100644 .vim/colors/desert256.vim
 create mode 100644 .vim/colors/jellybeans.vim

Using dfm on a new system

Using dfm on a new system is straightforward as well: clone the repo, invoke dfm, and you are done:

$ git clone git@git.example.com:dotfiles .dotfiles
Cloning into '.dotfiles'...
remote: Counting objects: 418, done.
remote: Compressing objects: 100% (142/142), done.
remote: Total 418 (delta 211), reused 401 (delta 207)
Receiving objects: 100% (418/418), 66.83 KiB, done.
Resolving deltas: 100% (211/211), done.
$ ./.dotfiles/bin/dfm
INFO: Installing dotfiles...
INFO:   Backing up .vimrc.
INFO:   Symlinking .vimrc (.dotfiles/.vimrc).
INFO:   Backing up bin.
INFO:   Symlinking bin (.dotfiles/bin).
INFO:   Symlinking .bashrc.load (.dotfiles/.bashrc.load).
INFO:   Backing up .inputrc.
INFO:   Symlinking .inputrc (.dotfiles/.inputrc).
INFO:   Backing up .vim.
INFO:   Symlinking .vim (.dotfiles/.vim).
INFO: Appending loader to .bashrc

As you can see, quite a few files are backed up. That just means they are moved to .backup, so in the worst case you know where to look.

Now let’s see what happens when you change something:

$ cd ~/bin
$ ln -s /usr/bin/gnome-terminal gt
$ dfm add bin/gt
$ dfm commit -m "Added gt symlink for gnome-terminal."
[master 441c067] Added gt symlink for gnome-terminal.
 1 file changed, 1 insertion(+)
 create mode 120000 bin/gt
$ dfm push
Counting objects: 6, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (4/4), 363 bytes, done.
Total 4 (delta 1), reused 0 (delta 0)
To git@sinoda:dotfiles
   b28dc11..441c067  master -> master

As you can see, dfm supports git pass-through: git commands are directly handed over to git. The changes were added to the git repository, and the repository was pushed to the remote URL.
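
That also means a quick status check or a look at the history can be done through dfm directly – assuming the pass-through covers those subcommands as well:

$ dfm status
$ dfm log --oneline -3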

So, to get the changes onto the other system you just have to ask dfm to update the files via dfm umi. In this case I called it after I made changes to .screenrc:

$ dfm umi
INFO: re-installing dotfiles
INFO: Installing dotfiles...
INFO:   Symlinking .screenrc (.dotfiles/.screenrc).

dfm special features

As mentioned above, the strongest features of dfm are its ability to ensure file system permissions and to run scripts after an update. The first option comes in handy when you are sharing files in your ssh config directory. The second is useful whenever you have to alter files or do anything based, for example, on host names. Imagine you have various build machines to build rpm files, but you have to use different package names in each build environment (think of customer-specific e-mail addresses here).

It should be possible to create a script that fills in the necessary details in the .rpmmacros file based on IP address or hostname. I haven’t given that a try yet, but it should be worth it…
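
A minimal sketch of such a script – the hostnames and e-mail addresses are made up, and how exactly it is hooked into dfm’s update process is left to the dfm documentation:

#!/bin/bash
# pick a packager address depending on the build host
case "$(hostname -s)" in
  build-customer-a) PACKAGER="liquidat@customer-a.example.com" ;;
  build-customer-b) PACKAGER="liquidat@customer-b.example.com" ;;
  *)                PACKAGER="liquidat@example.net" ;;
esac

# write the host specific value into ~/.rpmmacros
echo "%packager ${PACKAGER}" > "$HOME/.rpmmacros"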

Keeping dfm up to date

Last but not least, it is of course desirable to keep dfm itself up to date. The dfm wiki proposes the following workflow for that:

$ dfm remote add upstream git://github.com/justone/dotfiles.git
$ dfm checkout master
$ dfm fetch upstream 
$ dfm merge upstream/master

It is a pretty neat way, using git tools as they should be used, and is still easy enough to handle.


So, summarizing, I can say dfm offers a quite neat and easily understandable solution for managing dotfiles while not relying on languages or tools you probably cannot install on the systems you are working on. However, Homeshick comes in as a close second, and I might give that one a try at some point in the future. In the end, both solutions are much better than self-written scripts – or no solution at all.

Short Tip: egrep – using grep with more than one expression

I stumbled across an old blog post of mine about using grep with more than one expression: in the old days I used -e several times, one for each new expression. But as stressed in the comments, that way is neither convenient nor reliable on all platforms. I have developed as well, so today I usually use egrep if I need to grep for several expressions. Thus, here are some short notes about using it.

The multiple expressions you are searching for are passed to egrep separated by pipes. For example, if you want to grep the output of lspci for all audio and video controllers, the correct command is:

$ lspci|egrep -i 'audio|vga'
00:05.0 Audio device: NVIDIA Corporation MCP61 High Definition Audio (rev a2)
00:0d.0 VGA compatible controller: NVIDIA Corporation C61 [GeForce 6150SE nForce 430] (rev a2)

( Yes, I know, I write my blog post on pretty old hardware right now 😉 )

egrep understands more than two expressions, so you can extend the pattern like $STRING_1|$STRING_2|$STRING_3|.... But don’t forget to include the single quotes ' in the command: these ensure that the pipe is used as a separator instead of being interpreted by your shell.
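
For example, to fish three different daemons out of the process list – the process names are just placeholders for whatever you are looking for:

$ ps aux | egrep 'sshd|crond|ntpd'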

[Howto] Sending test mails with Swaks

Setting up e-mail servers can become a time-consuming and complex task. Test mails help verify the functionality of the system – and here Swaks comes into play, the “Swiss Army Knife for SMTP”.

Swaks can be used to send test mails of all kinds. The advantage of Swaks compared to sending mails with your normal e-mail client is that you are able to alter almost any part of the e-mail: to, from, header, attachments, which server to speak to, etc.

Here are some common Swaks usage examples:

The first, basic example is sending a mail to your own domain (here “example.com”):

$ swaks -f someone@example.net -t liquidat@example.com

If you need more recipients, add them separated by commas:

$ swaks -f someone@example.net -t liquidat@example.com,testme@example.com

It gets more interesting if you change the “TO” to another domain, but still send the mail via the server for “example.com”. If that works for arbitrary domains, and if the mails are forwarded to them, you have a big problem: an open relay.

$ swaks -f liquidat@example.com -t someone@example.net --server mail.example.com

Or do you need to know if a certain recipient is actually available?

$ swaks -f someone@example.net -t liquidat@example.com --quit-after RCPT

But you can also use Swaks to test a spam filter: if any of the bigger spam filters out there identifies the GTUBE string in an e-mail, it will mark it as spam:

$ swaks -f someone@example.net -t liquidat@example.com --body /path/to/gtube/file

The same is true for antivirus programs and the EICAR test file:

$ swaks -f someone@example.net -t liquidat@example.com --body /path/to/eicar/file

But Swaks can also be used to test user authentication and TLS:

$ swaks -tls --server example.com -f liquidat@example.com -t someone@example.net  -ao --auth-user=liquidat

And this can of course be used to test authentication between servers:

$ swaks -tls -s example.com -f someone@example.net -t liquidat@example.com --ehlo $(host $(wget http://automation.whatismyip.com/n09230945.asp -O - -q))

The last bit makes sure your local test machine provides a correct FQDN during the EHLO.

But in case your MTA setup relies on or uses custom headers, how about adding some of those?

$ swaks -f someone@example.net -t liquidat@example.com --add-header "X-Custom-Header: Swaks-Tested"
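
Attachments, mentioned at the beginning, are possible as well – the file path is of course just a placeholder:

$ swaks -f someone@example.net -t liquidat@example.com --attach /path/to/report.pdf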

If you have other interesting examples, don’t hesitate to drop them in the comments; I am happy to add them here.

Short Tip: Changing the original time of a photo at cli level

Sometimes it happens that you take photos with a camera and realize right in the middle of your session that the camera’s time is totally off. In such cases: just keep taking photos and make sure that you take a photo of a clock at some point. You can correct the time stamps later on in the shell, even processing multiple images at once.

Afterwards, download the images, check the actual time offset by comparing the photographed clock with the time and date recorded in that image, and use exiftool to correct the time stamps of the photos. For example, imagine you have to change the time by adding two hours and fifteen minutes; the cli command is:

$ exiftool -AllDates+='2:15' *.JPG

You can check the actual date of an image either with the usual GUI programs or on the command line:

$ exiftool MyImage.jpg|grep Time
File Modification Date/Time     : 2011:11:03 13:00:39+01:00
Exposure Time                   : 1/100
Date/Time Original              : 2009:09:05 07:07:49

If you have to process a batch of pictures, you can just wrap a for loop around the command.
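
A minimal sketch of such a loop – assuming all images carry the same offset and sit in the current directory:

$ for img in *.JPG; do exiftool -AllDates+='2:15' "$img"; done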