Most system administrators have a set of personalized dotfiles like .vimrc and .bashrc. Taking these files with you from host to host and keeping them up to date everywhere can be quite a wearisome task. There are various tools to ease the pain, and I would like to shed some light on one of them: dfm – the dotfile manager.
On my machines I usually keep a set of personalized dotfiles which I don’t want to miss on any other server I have to administrate:
.screenrc
.bashrc
.inputrc
.vimrc
.vim/colors/jellybeans.vim
.vim/colors/desert256.vim
I need these files on all machines I regularly work on – and since there are quite a few customer machines I have access to regularly, I wrote my own git-backed Python script years ago to keep these files synced and up to date on each machine. While it was fun to write the script, I always knew it did not cover all my use cases regarding dotfiles, and it was not really flexible in terms of complex directory structures and so on. Also, I knew there must be other people out there with the same problem – and thus I was sure better solutions already existed.
And boy, there are so many of them!
Some interesting solutions for dotfile management
Many people have looked at this problem before – and solved it in their own ways. Most often the basic principle is that the files are stored and tracked via git in a hidden directory, and the tool of your choice manages symlinks between the files in the store and their usual locations in $HOME.
For example, a very interesting idea is to use GNU Stow to manage dotfiles. It tracks the necessary files in subdirectories and of course links the files from there to the ‘real’ places in $HOME. I like reusing existing tools, so the idea of using GNU Stow appealed to me immediately. Also, the ‘packages’ or ‘group’ support it offers is tempting. Unfortunately, on most systems GNU Stow is not installed by default, and I cannot install new software on customer machines.
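To illustrate the symlink idea behind GNU Stow: each ‘package’ is a subdirectory whose contents are linked into the target directory. The sketch below mimics by hand what a single `stow` invocation automates; all directory and file names are made up for the example.

```shell
#!/bin/sh
# Sketch of the GNU Stow layout: a 'vim' package directory whose
# files get symlinked into a (fake) home directory.
set -e

DOTDIR=$(mktemp -d)/dotfiles   # stand-in for ~/dotfiles
FAKEHOME=$(mktemp -d)          # stand-in for $HOME

# The package 'vim' contains the files as they should appear in $HOME
mkdir -p "$DOTDIR/vim"
echo 'set number' > "$DOTDIR/vim/.vimrc"

# With GNU Stow installed, this single command would create the links:
#   cd "$DOTDIR" && stow -t "$FAKEHOME" vim
# Done manually, it boils down to:
ln -s "$DOTDIR/vim/.vimrc" "$FAKEHOME/.vimrc"

ls -l "$FAKEHOME/.vimrc"
```

The nice part is that removing a package (`stow -D`) cleanly removes exactly the links that belong to it.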
The problem of having to install software first is also relevant for another often-mentioned solution: Homesick. Homesick is Ruby-based and works similarly to the GNU Stow solution mentioned above: files are stored in a hidden subdirectory, tracked with git, and linked into $HOME. The main feature here is that it can keep the configuration files in various git repositories, called ‘castles’, so you can integrate the work of projects like oh-my-zsh.
While Homesick does offer quite a few features, it is Ruby-based – and I cannot expect a working Ruby environment on every system, so it is out of the question. I can count on Perl or Python, but that’s about it.
Other people had the same Ruby problem and created Homeshick – a Homesick clone spelled with an additional ‘h’ and, besides, written in Bash. It is quite straightforward and offers all the necessary features, like listing and tracking various git repositories as sources for dotfiles, linking the actual dotfiles into your home directory, and so on. This one is almost my favorite! I wouldn’t be surprised if it were the favorite of most users out there.
But Homeshick is only almost my favorite – meet dfm, a utility to manage dotfiles! It is written in Perl and does mostly the same as the tools mentioned above, even minus the support for more than one repository. But on the plus side it is capable of ensuring file permissions via chmod – I haven’t seen that in any other solution. Additionally, it supports arbitrary scripts executed during the update process, for example for host-specific commands. And last but not least, using a three-letter program somehow feels right 😉
Starting with dfm
So, first of course you have to get dfm. If you are hosting your dotfiles on GitHub anyway, just fork the dfm starter repo and clone it. Otherwise, if you later want to host it yourself, clone the main dfm repo and change the remote URL. My choice was the second way:
$ git clone firstname.lastname@example.org:justone/dotfiles.git .dotfiles
Cloning into '.dotfiles'...
remote: Counting objects: 3212, done.
remote: Compressing objects: 100% (1531/1531), done.
remote: Total 3212 (delta 1413), reused 3096 (delta 1397)
Receiving objects: 100% (3212/3212), 4.22 MiB | 202 KiB/s, done.
Resolving deltas: 100% (1413/1413), done.
Next I configured the freshly cloned repository to use my own URL, since my dotfiles are not on GitHub:
$ cd .dotfiles/
$ git remote -v
origin  email@example.com:justone/dotfiles.git (fetch)
origin  firstname.lastname@example.org:justone/dotfiles.git (push)
$ git remote set-url origin email@example.com:dotfiles
$ git push origin master
Counting objects: 402, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (139/139), done.
Writing objects: 100% (402/402), 58.03 KiB, done.
Total 402 (delta 207), reused 389 (delta 195)
To firstname.lastname@example.org:dotfiles
 * [new branch]      master -> master
You now have the repository up and ready. So let’s install dfm as a tool available in $PATH: this means creating a symlink to ~/.dotfiles/bin and extending the $PATH variable in .bashrc.load, which in turn is added to .bashrc:
$ ./.dotfiles/bin/dfm
INFO: Installing dotfiles...
INFO: Symlinking bin (.dotfiles/bin).
INFO: Symlinking .bashrc.load (.dotfiles/.bashrc.load).
INFO: Appending loader to .bashrc
The .bashrc itself is hardly modified:
$ tail -n 1 .bashrc
. $HOME/.bashrc.load
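For reference, a minimal .bashrc.load could look like the following. This is an assumed example, not the content of the starter repo; the point is simply that your personal additions live here rather than in .bashrc:

```shell
# ~/.bashrc.load -- sourced from .bashrc (hypothetical minimal example)

# Put the dfm-managed bin directory (a symlink to ~/.dotfiles/bin) first in PATH
export PATH="$HOME/bin:$PATH"

# Personal aliases and settings go here instead of in .bashrc itself
alias ll='ls -l'
```

Everything a distribution ships in the default .bashrc keeps working untouched.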
As a side note, I am not sure if I really want to put all my customizations into the bashrc loader, but the dfm author’s reasoning behind that decision is sound:
Why .bashrc.load instead of .bashrc?
Each OS or distribution generally has its own way of populating a default .bashrc in each new user’s home directory. This file works with the rest of the OS to load in special things like bash completion scripts or aliases. The idea behind using .bashrc.load is that dotfiles should add new things to a system rather than overwriting built-in functionality.
For instance, if a system sources bash completion files for you, and your dotfiles overwrites the system-provided .bashrc, then you would have to replicate that functionality on your own.
But whether you agree with that or not, the next step is to add further files to your dfm repository. That is quite easy, because dfm comes with an import function:
$ dfm import .vimrc
INFO: Importing .vimrc from /home/liquidat into /home/liquidat/.dotfiles
INFO: Symlinking .vimrc (.dotfiles/.vimrc).
INFO: Committing with message 'importing .vimrc'
[master d7de67a] importing .vimrc
 1 file changed, 29 insertions(+)
 create mode 100644 .vimrc
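Judging from the output above, an import essentially moves the file into the repository, links it back into $HOME, and commits. The hand-rolled sketch below reproduces the move-and-link part (minus the git commit) in temp directories standing in for $HOME and ~/.dotfiles:

```shell
#!/bin/sh
# Rough sketch of what 'dfm import .vimrc' does under the hood,
# minus the git commit. Paths are stand-ins for illustration.
set -e

FAKEHOME=$(mktemp -d)            # stand-in for $HOME
DOTFILES="$FAKEHOME/.dotfiles"   # stand-in for ~/.dotfiles
mkdir -p "$DOTFILES"

echo 'set number' > "$FAKEHOME/.vimrc"

# Move the real file into the repository...
mv "$FAKEHOME/.vimrc" "$DOTFILES/.vimrc"
# ...and link it back to its old location (relative symlink)
ln -s ".dotfiles/.vimrc" "$FAKEHOME/.vimrc"
```

After this, editing ~/.vimrc transparently edits the tracked copy in the repository.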
The usage is pretty straightforward, and supports directories as well:
$ dfm import .vim
INFO: Importing .vim from /home/liquidat into /home/liquidat/.dotfiles
INFO: Symlinking .vim (.dotfiles/.vim).
INFO: Committing with message 'importing .vim'
[master e9bd60a] importing .vim
 3 files changed, 875 insertions(+)
 create mode 100644 .vim/colors/desert256.vim
 create mode 100644 .vim/colors/jellybeans.vim
Using dfm on a new system
Using dfm on a new system is straightforward as well: clone the repo, invoke dfm, and you are done:
$ git clone email@example.com:dotfiles .dotfiles
Cloning into '.dotfiles'...
remote: Counting objects: 418, done.
remote: Compressing objects: 100% (142/142), done.
remote: Total 418 (delta 211), reused 401 (delta 207)
Receiving objects: 100% (418/418), 66.83 KiB, done.
Resolving deltas: 100% (211/211), done.
$ ./.dotfiles/bin/dfm
INFO: Installing dotfiles...
INFO: Backing up .vimrc.
INFO: Symlinking .vimrc (.dotfiles/.vimrc).
INFO: Backing up bin.
INFO: Symlinking bin (.dotfiles/bin).
INFO: Symlinking .bashrc.load (.dotfiles/.bashrc.load).
INFO: Backing up .inputrc.
INFO: Symlinking .inputrc (.dotfiles/.inputrc).
INFO: Backing up .vim.
INFO: Symlinking .vim (.dotfiles/.vim).
INFO: Appending loader to .bashrc
As you can see, quite a few existing files are backed up. That just means they are moved to .backup, so in the worst case you know where to look.
Now let’s see what happens when you change something.
$ cd ~/bin
$ ln -s /usr/bin/gnome-terminal gt
$ dfm add bin/gt
$ dfm commit -m "Added gt symlink for gnome-terminal."
[master 441c067] Added gt symlink for gnome-terminal.
 1 file changed, 1 insertion(+)
 create mode 120000 bin/gt
$ dfm push
Counting objects: 6, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (4/4), 363 bytes, done.
Total 4 (delta 1), reused 0 (delta 0)
To git@sinoda:dotfiles
   b28dc11..441c067  master -> master
As you can see, dfm supports git pass-through: git commands are handed over directly to git. The changes were added to the git repository, and the repository was pushed to the remote URL.
So, to get the changes onto another system you just have to ask dfm to update the files via dfm umi. In this case I called it after I had made changes to .screenrc:
$ dfm umi
[...]
INFO: re-installing dotfiles
INFO: Installing dotfiles...
INFO: Symlinking .screenrc (.dotfiles/.screenrc).
dfm special features
As mentioned above, the strongest features of dfm are its ability to ensure file permissions and to run scripts after an update. The first comes in handy when you share files in your ssh config directory, which must not be readable by others. The second is useful whenever you have to alter files based on, for example, the host name. Imagine you have various build machines for building rpm files, but have to use different package names in each build environment (think of customer-specific e-mail addresses here).
It should be possible to create a script that fills in the necessary details in the rpmmacros file based on the IP address or host name. I haven’t given that a try yet, but it should be worth it…
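Such a script could look roughly like the sketch below. To be clear: the host names, e-mail addresses and the RPMMACROS variable are all invented for illustration, and how exactly dfm would invoke the script is assumed rather than verified.

```shell
#!/bin/sh
# Hypothetical post-update script: write an rpmmacros file whose
# packager line depends on the current build host. All values are
# made up for the example.
set -e

# Target file; overridable so the sketch does not have to touch $HOME
RPMMACROS="${RPMMACROS:-$HOME/.rpmmacros}"

case "$(hostname -s)" in
    build-customer-a*)
        PACKAGER="liquidat <builds@customer-a.example.com>" ;;
    build-customer-b*)
        PACKAGER="liquidat <builds@customer-b.example.com>" ;;
    *)
        PACKAGER="liquidat <builds@example.com>" ;;
esac

cat > "$RPMMACROS" <<EOF
%packager ${PACKAGER}
%_topdir ${HOME}/rpmbuild
EOF
```

Since the script regenerates the whole file, the rpmmacros file itself would stay out of the git repository while the script is tracked.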
Keeping dfm up to date
Last but not least, it is of course desirable to keep dfm itself up to date. The dfm wiki proposes the following workflow for that:
$ dfm remote add upstream git://github.com/justone/dotfiles.git
$ dfm checkout master
$ dfm fetch upstream
$ dfm merge upstream/master
It is a pretty neat way, using git tools as they should be used, and is still easy enough to handle.
Summarizing, I can say dfm offers a neat and easily understandable solution for managing dotfiles without relying on languages or tools you probably cannot install on the systems you work on. However, Homeshick comes in a close second, and I might give it a try at some point in the future. In the end, both solutions are much better than self-written ones – or no solution at all.