[Howto] Use Powerline on Fedora

Powerline is a status line plugin for Vim, but also a prompt plugin for Bash, Zsh and others. It can easily be installed on Fedora via the provided packages.

The status line plugin Powerline is available via the Fedora repositories. There has just been an update which is already available in the testing repository:

$ sudo yum install --enablerepo=updates-testing powerline

The Powerline documentation is rather good and explains all steps necessary to configure the various Powerline plugins. However, note that the string {repository_root} in the examples has to be replaced by /usr/lib/python2.7/site-packages/, so for example {repository_root}/powerline/bindings/vim becomes /usr/lib/python2.7/site-packages/powerline/bindings/vim/. This is because the Powerline rpm installs the Powerline code into this specific directory.

So to use Powerline in Vim, just add the following line to the top of your ~/.vimrc:

set rtp+=/usr/lib/python2.7/site-packages/powerline/bindings/vim/

If you previously used other Vim plugins that also alter the status line, make sure to deactivate them.
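
In addition, two standard Vim options help here: always showing the status line even with a single window, and hiding Vim's own mode message since Powerline already displays the mode. A small sketch of the corresponding ~/.vimrc lines:

" always show the status line, even with only one window
set laststatus=2
" hide Vim's own -- INSERT -- message, Powerline already shows the mode
set noshowmode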

To use Powerline in Zsh, simply add the following lines to your ~/.zshrc:

# Powerline
if [[ -r /usr/share/powerline/zsh/powerline.zsh ]]; then
  source /usr/share/powerline/zsh/powerline.zsh
fi
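
The corresponding snippet for Bash looks almost the same. A sketch to be added to your ~/.bashrc, assuming the Fedora package ships the Bash binding at an analogous path – if it does not, check the bindings directory under site-packages mentioned above:

# Powerline
if [ -r /usr/share/powerline/bash/powerline.sh ]; then
  source /usr/share/powerline/bash/powerline.sh
fi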

In case you use Zsh and want to get rid of the EMACS mode indicator at the beginning of the prompt, you need to create a configuration path for Powerline, copy the necessary shell theme files and alter them accordingly:

$ mkdir -p ~/.config/powerline/themes/shell
$ cp -a /usr/lib/python2.7/site-packages/powerline/config_files/themes/shell/* ~/.config/powerline/themes/shell/

Open the copied file ~/.config/powerline/themes/shell/default.json and remove the following lines:

      {
        "function": "powerline.segments.shell.mode"
      },

You might have to restart the Powerline daemon with powerline-daemon -r, but afterwards the prompt in Zsh does not contain the current mode anymore. Have fun!

PS: In case you use Ubuntu, an almost perfect Howto can be found at AskUbuntu: How can I install and use powerline plugin?.

[Short Tip] Ansible Cheat Sheet


I created an Ansible Cheat Sheet for Wall-Skills.com which was published today. It covers most of the important bits and pieces on a single neat page and thus should hang on your office wall. And since even customers recently approached me about using Ansible on Ubuntu/Debian, I figure and hope that this cheat sheet will be of help to others.

By the way, thanks to pastjean, the creator of the famous Git Cheat Sheet which was published on Wall-Skills in an adapted version not long ago: his Git Cheat Sheet inspired me to write my own cheat sheet for Ansible, and the design follows similar principles.

[Howto] First Steps With Ansible

Ansible is a tool to manage systems and their configuration. Since it needs no agent installed on the clients and can launch programs from the command line, it seems to fit between classic configuration management like Puppet on the one hand and ssh/dsh on the other.

Background

System/Configuration management is a hot topic right now. At Fosdem 2014 there was an entire track dedicated to the topic – and the rooms were constantly overcrowded. There are more and more large server installations out there these days. With virtualization it again becomes sensible and possible to have one server for each service. All these often rather similar machines need to be managed, and thus central configuration management tools like Puppet or Chef became very popular. They keep all configuration stored in recipes on a central server, and the clients connect to it and pull the recipes regularly to ensure everything is in the desired state.

But sometimes there are smaller tasks: tasks which only need to be done once or once in a while, but for which a configuration management recipe might be too much. Also, it might happen that you have machines where you cannot easily install a Puppet client, or machines which cannot contact your configuration management server via pull due to security concerns. In those situations ssh is often the tool of the sysadmin's choice. There are also cluster or distributed versions available like dsh.

Ansible now fits right in between these two classes of tools: it provides the possibility to serve recipes from a central server, but does not require the clients to run anything besides an ssh server.

Basic configuration, simple commands

First of all Ansible needs to know the hosts it is going to serve. They can be managed on the central server in /etc/ansible/hosts or in a file referenced by the shell variable ANSIBLE_HOSTS. The hosts can be listed as IP addresses or host names, and can carry additional information like user names, ssh port and so on:

[web-servers]
www.example.net ansible_ssh_port=222
www.example.com ansible_ssh_user=liquidat

[db-servers]
192.168.1.1
blue ansible_ssh_host=192.168.1.50
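
If you do not want to touch /etc/ansible/hosts, you can keep the inventory anywhere else and point Ansible at it; a short sketch assuming the file from above was saved as ~/ansible-hosts:

$ export ANSIBLE_HOSTS=~/ansible-hosts
$ ansible all -m ping

Alternatively, the inventory file can be passed per call via the -i flag, for example ansible all -i ~/ansible-hosts -m ping.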

As soon as the hosts are defined, an Ansible "ping" can be used to see if they all can be reached. This is done from the central server – Ansible is by default a pushing service, not a pulling one.

$ ansible all -m ping
www.example.net | success >> {
    "changed": false, 
    "ping": "pong"
}
...

As seen above, Ansible was called with the flag "-m", which specifies the module – the module "ping" just contacts the servers and checks if everything is OK. In this case the servers answered successfully. Also, as you see, the output is formatted as JSON, which is helpful in case the results need to be parsed anywhere.
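
Instead of all, any host or group from the inventory can be targeted. For example, pinging only the web servers defined above looks like this (output shortened):

$ ansible web-servers -m ping
www.example.net | success >> {
    "changed": false, 
    "ping": "pong"
}
...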

In case you want to call arbitrary commands, the flag "-a" is needed:

$ ansible all -a "whoami" --sudo -K
sudo password: 
www.example.net | success | rc=0 >>
root
...

The "-a" flag provides the arguments to the invoked module. In case no module is given, the argument is executed as a command on the machine directly. The flag "--sudo" runs the command with sudo rights, "-K" asks for the sudo password. Btw., note that this requires all servers to use the same sudo password, so to run Ansible comfortably you should think about configuring sudo with NOPASSWD.
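
A minimal sketch of such a sudoers entry, assuming the remote user liquidat from the inventory example above (best created as a drop-in file via visudo -f /etc/sudoers.d/liquidat):

liquidat ALL=(ALL) NOPASSWD: ALL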

More modules

There are dozens of modules provided with Ansible. For example, the file module can change permissions and ownership of a file or delete files and directories (an example follows below). The service module can check and change the state of services:

$ ansible www.example.com -m service -a "name=sshd state=restarted" --sudo -K
sudo password: 
www.example.com | success >> {
    "changed": true, 
    "name": "sshd", 
    "state": "started"
}
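
The file module mentioned above is invoked the same way; for example, the following sketch makes sure a directory exists with tight permissions (path and mode are just an illustration):

$ ansible www.example.com -m file -a "path=/home/liquidat/.ssh state=directory mode=0700"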

There are modules to send e-mails, copy files, install software via various package managers, manage cloud resources, manage different databases, and so on. For example, the copy module can be used to copy files – and it demonstrates that files are only transferred if they are not already there:

$ ansible www.example.com -m copy -a "src=/home/liquidat/tmp/test.yml dest=/home/liquidat/text.yaml"
www.example.com | success >> {
    "changed": <strong>true</strong>, 
    "dest": "/home/liquidat/text.yaml", 
    "gid": 500, 
    "group": "liquidat", 
    "md5sum": "504e549603f616826707d60be0d9cd40", 
...

$ ansible www.example.com -m copy -a "src=/home/liquidat/tmp/test.yml dest=/home/liquidat/text.yaml"
www.example.com | success >> {
    "changed": <strong>false</strong>, 
...
}

In the second attempt the "changed" status is "false", indicating that the file was not transferred again since it was already there.

Playbooks

However, Ansible can be used for more than a distributed shell on steroids: it also covers configuration management and system orchestration. Both are realized in Ansible via so-called Playbooks. These YAML files store all the tasks necessary to either ensure a given configuration or set up a specific system. In the end the Playbooks just list the Ansible commands and modules which could also be called via the command line. However, Playbooks also offer a dependency/notification system where certain tasks are only executed if other tasks changed anything. Playbooks are called with a dedicated command line tool: ansible-playbook $PLAYBOOK.yml

For example, imagine a setup where you copy a file, and if that file was actually copied (because it was not there before or changed in the meantime) you need to restart sshd:

---
- hosts: www.example.com
  remote_user: liquidat
  tasks:
      - name: copy file
        copy: src=~/tmp/test.txt dest=~/test.txt
        notify:
            - restart sshd
  handlers:
      - name: restart sshd
        service: name=sshd state=restarted
        sudo: yes

As you see, the host and user are configured in the beginning. Host groups could also be used if needed. This is followed by the actual task – copying the file. All tasks of a Playbook are usually executed. This particular task definition has a notifier: if the task is executed with a "changed" state of "true", then a "handler" is notified. A handler is a task which is only executed if it is called for. In this case, sshd is restarted after we copied over the file.

And the output is clear as well:

$ ansible-playbook tmp/test.yml -K
sudo password: 

PLAY [www.example.com] ********************************************************* 

GATHERING FACTS *************************************************************** 
ok: [www.example.com]

TASK: [copy file] ************************************************************* 
changed: [www.example.com]

NOTIFIED: [restart sshd] ****************************************************** 
changed: [www.example.com]

PLAY RECAP ******************************************************************** 
www.example.com             : ok=3    changed=2    unreachable=0    failed=0

The above example is a simple Playbook – but Playbooks offer many more functions: templates, variables based on various sources like the machine facts, conditions, and even looping the same set of tasks over different sets of variables. For example, here the copy task loops over a set of file names, each of which should get a different name on the target system:

- name: copy files
  copy: src=~/tmp/{{ item.src_name }} dest=~/{{ item.dest_name }}                               
  with_items:                                                                                   
    - { src_name: file1.txt, dest_name: dest-file1.txt }                                      
    - { src_name: file2.txt, dest_name: dest-file2.txt }  

Also, Playbooks can include other Playbooks, so you can have a set of ready-made Playbooks at hand and combine them as you like. As you see, Ansible is incredibly powerful and provides the ability to write Playbooks for very complex management tasks and system setups.
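
A sketch of what such an include can look like – the file names are just examples, and the top-level include keyword is the syntax used by the Ansible version current at the time of writing:

---
- include: webservers.yml
- include: dbservers.yml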

Outlook

Ansible is a tempting solution for configuration management since it combines direct access with configuration management. If you already have your large server data center described in an ansible-hosts file, you can use it both for system configuration and for performing direct tasks. This is a big advantage compared to, for example, Puppet setups. Also, you can write Playbooks which you only need once in a while, store them somewhere – and use them for orchestration purposes. That is not easily available with Puppet, but very simple with Ansible. Additionally, Ansible can be used in push or pull mode – there are tools for both – which makes it much more flexible compared to other solutions out there.

And since you can use Ansible right from the start, even without writing complex recipes beforehand, the learning curve is not that steep – and the adoption of Ansible is much quicker. There are already customers who use Ansible together with Puppet because Ansible is so much easier and quicker to learn.

So in the end I can only recommend Ansible to anyone who is dealing with configuration management. It is certainly a helpful tool, and even if you don't start using it, it is interesting to know what other approaches to system and configuration management look like.

[Howto] Managing dotfiles with dfm

Most system administrators have a set of personalized dotfiles like .vimrc and .bashrc. Taking these files with you from host to host and keeping them up2date everywhere can be a quite wearisome task. There are various tools to ease the pain, and I'd like to shed some light on one of them: dfm – the dotfile manager.

My background

On my machines I usually keep a set of personalized dotfiles which I don’t want to miss on any other server I have to administrate:

.screenrc
.bashrc
.inputrc
.vimrc
.vim/colors/jellybeans.vim
.vim/colors/desert256.vim

I need these files on all machines which I regularly work on – and since there are quite some customer machines I have access to regularly, I wrote my own git-backed Python script years ago to keep these files synced and up2date on each machine. While it was fun to write the script, I always knew that it did not cover all my use cases regarding dotfiles, and it was not really flexible in terms of complex directory structures and so on. Also, I knew there must be other people with the same problem out there – and thus I was sure better solutions already existed.

And boy, there are so many of them!

Some interesting solutions for dotfile management

Many people have looked at this problem before – and solved it in their own ways. Most often the basic principle is that the files are stored and tracked via git in a hidden directory, and the tool of your choice manages symlinks between the files in that store and their places in $HOME.
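
Done by hand, that principle boils down to something like the following – just an illustration of the idea, not any particular tool:

$ mkdir ~/.dotfiles && cd ~/.dotfiles && git init
$ mv ~/.vimrc ~/.dotfiles/.vimrc
$ ln -s ~/.dotfiles/.vimrc ~/.vimrc
$ git add .vimrc && git commit -m "track .vimrc"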

For example, a very interesting idea is to use GNU Stow to manage dotfiles. It tracks the necessary files in subdirectories and links the files from there to their ‘real’ places in $HOME. I like reusing existing tools, so the idea of using GNU Stow appealed to me immediately. Also, the ‘packages’ or ‘groups’ support it offers is tempting. Unfortunately, on most systems GNU Stow is not installed by default, and I cannot install new software on customer machines.

The problem of required software installations is also relevant for another often mentioned solution: Homesick. Homesick is Ruby based and works similarly to the GNU Stow solution mentioned above: files are stored in a hidden subdirectory, tracked with git, and linked into $HOME. The main feature here is that it can keep the configuration files in various git repositories, called ‘castles’, so you can integrate the work of projects like oh-my-zsh.
While Homesick offers quite some features, it is Ruby based – and I cannot expect a working Ruby environment on every system, so it is out of the question. I can go with Perl or Python, but that's about it.

Other people had the same Ruby problem and created Homeshick – a Homesick clone spelled with an additional ‘h’ and, apart from that, written in Bash. It is quite straightforward and offers all necessary features like listing and tracking various git repositories as sources for dotfiles, linking the actual dotfiles into your home directory, and so on. This one is almost my favorite! I wouldn't be surprised if it is the favorite of most users out there.

But Homeshick is only almost my favorite – meet dfm, a utility to manage dotfiles! It is written in Perl and mainly does the same as the tools mentioned above, even minus the support for more than one repository. But on the plus side it is capable of ensuring file rights via chmod. I haven't seen that in any other solution. Additionally, it supports arbitrary scripts executed during the update process, for example for host specific commands. And last but not least, using a three letter program feels, somehow, right 😉

Starting with dfm

So, first of all you of course have to get dfm. If you are hosting your dotfiles on GitHub anyway, just fork the dfm starter repo and clone it. Otherwise, if you want to host it yourself later on, clone the main dfm repo and change the remote URL. My choice was the second way:

$ git clone git@github.com:justone/dotfiles.git .dotfiles
Cloning into '.dotfiles'...
remote: Counting objects: 3212, done.
remote: Compressing objects: 100% (1531/1531), done.
remote: Total 3212 (delta 1413), reused 3096 (delta 1397)
Receiving objects: 100% (3212/3212), 4.22 MiB | 202 KiB/s, done.
Resolving deltas: 100% (1413/1413), done.

Next I configured the just cloned repository to use my own URL since my dotfiles are not on github:

$ cd .dotfiles/
$ git remote -v
origin  git@github.com:justone/dotfiles.git (fetch)
origin  git@github.com:justone/dotfiles.git (push)
$ git remote set-url origin git@git.example.net:dotfiles
$ git push origin master
Counting objects: 402, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (139/139), done.
Writing objects: 100% (402/402), 58.03 KiB, done.
Total 402 (delta 207), reused 389 (delta 195)
To git@git.example.net:dotfiles
 * [new branch]      master -> master

You now have the repository up and ready. So let's install dfm as a tool available in $PATH, which means creating a symlink between ~/bin and ~/.dotfiles/bin and also extending the $PATH variable in .bashrc.load, which is sourced from .bashrc:

$ ./.dotfiles/bin/dfm
INFO: Installing dotfiles...
INFO:   Symlinking bin (.dotfiles/bin).
INFO:   Symlinking .bashrc.load (.dotfiles/.bashrc.load).
INFO: Appending loader to .bashrc

The .bashrc is hardly modified:

$ tail -n 1 .bashrc
. $HOME/.bashrc.load

As a side note, I am not sure if I really want to move all my customizations into the bashrc loader, but the reasoning behind that choice from the dfm author is sound:

Why .bashrc.load instead of .bashrc?

Each OS or distribution generally has its own way of populating a default .bashrc in each new user’s home directory. This file works with the rest of the OS to load in special things like bash completion scripts or aliases. The idea behind using .bashrc.load is that dotfiles should add new things to a system rather than overwriting built-in functionality.

For instance, if a system sources bash completion files for you, and your dotfiles overwrites the system-provided .bashrc, then you would have to replicate that functionality on your own.

But no matter whether you agree with it or not, the next step is to add further files to your dfm repository, which is quite easy because dfm comes with an import function:

$ dfm import .vimrc
INFO: Importing .vimrc from /home/liquidat into /home/liquidat/.dotfiles
INFO:   Symlinking .vimrc (.dotfiles/.vimrc).
INFO: Committing with message 'importing .vimrc'
[master d7de67a] importing .vimrc
 1 file changed, 29 insertions(+)
 create mode 100644 .vimrc

The usage is pretty straightforward, and supports directories as well:

$ dfm import .vim
INFO: Importing .vim from /home/liquidat into /home/liquidat/.dotfiles
INFO:   Symlinking .vim (.dotfiles/.vim).
INFO: Committing with message 'importing .vim'
[master e9bd60a] importing .vim
 3 files changed, 875 insertions(+)
 create mode 100644 .vim/colors/desert256.vim
 create mode 100644 .vim/colors/jellybeans.vim

Using dfm on a new system

Using dfm on a new system is straightforward as well: clone the repo, invoke dfm, and you are done:

$ git clone git@git.example.com:dotfiles .dotfiles
Cloning into '.dotfiles'...
remote: Counting objects: 418, done.
remote: Compressing objects: 100% (142/142), done.
remote: Total 418 (delta 211), reused 401 (delta 207)
Receiving objects: 100% (418/418), 66.83 KiB, done.
Resolving deltas: 100% (211/211), done.
$ ./.dotfiles/bin/dfm
INFO: Installing dotfiles...
INFO:   Backing up .vimrc.
INFO:   Symlinking .vimrc (.dotfiles/.vimrc).
INFO:   Backing up bin.
INFO:   Symlinking bin (.dotfiles/bin).
INFO:   Symlinking .bashrc.load (.dotfiles/.bashrc.load).
INFO:   Backing up .inputrc.
INFO:   Symlinking .inputrc (.dotfiles/.inputrc).
INFO:   Backing up .vim.
INFO:   Symlinking .vim (.dotfiles/.vim).
INFO: Appending loader to .bashrc

As you see quite some files are backed up – that just means they are moved to .backup, so in the worst case you know where to look.

Now let's see what happens when you change something:

$ cd ~/bin
$ ln -s /usr/bin/gnome-terminal gt
$ dfm add bin/gt
$ dfm commit -m "Added gt symlink for gnome-terminal."
[master 441c067] Added gt symlink for gnome-terminal.
 1 file changed, 1 insertion(+)
 create mode 120000 bin/gt
$ dfm push
Counting objects: 6, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (4/4), 363 bytes, done.
Total 4 (delta 1), reused 0 (delta 0)
To git@sinoda:dotfiles
   b28dc11..441c067  master -> master

As you see, dfm supports git pass-through: git commands are directly handed over to git. The changes were added to the git repository, and the repository was pushed to the remote URL.

So, to get the changes onto the other system you just have to ask dfm to update the files via dfm umi. In this case I called it after I made changes to .screenrc:

$ dfm umi
[...]
INFO: re-installing dotfiles
INFO: Installing dotfiles...
INFO:   Symlinking .screenrc (.dotfiles/.screenrc).

dfm special features

As mentioned above, the strongest features of dfm are its ability to ensure file system rights and to start scripts after an update. The first option comes in handy when you are sharing files in your ssh config directory. The second is useful whenever you have to alter files or do anything based, for example, on host names. Imagine that you have various build machines to build rpm files, but you have to use different package names in each build environment (think of customer specific e-mail addresses here).

It should be possible to create a script that fills in the necessary details in the rpmmacros file based on IP address or host name. I haven't given that a try yet, but it should be worth it…
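
A minimal sketch of what such a script could look like – plain Bash writing ~/.rpmmacros depending on the host name; the host names and e-mail addresses are made up, and wiring the script into dfm's update process is left out here:

#!/bin/bash
# pick the packager address depending on the build host
case "$(hostname -s)" in
  build-customer1*) packager="liquidat <builds@customer1.example.com>" ;;
  build-customer2*) packager="liquidat <builds@customer2.example.com>" ;;
  *)                packager="liquidat <liquidat@example.net>" ;;
esac

# write the rpm macros file accordingly
cat > ~/.rpmmacros <<EOF
%packager ${packager}
%_topdir ${HOME}/rpmbuild
EOF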

Keeping dfm up2date

Last but not least, it is of course desirable to keep dfm itself up2date. The dfm wiki proposes the following workflow for that:

$ dfm remote add upstream git://github.com/justone/dotfiles.git
$ dfm checkout master
$ dfm fetch upstream 
$ dfm merge upstream/master

It is a pretty neat way – it uses the git tools as they should be used – and is still easy enough to handle.

Summary

So, summarizing, I can say that dfm offers a quite neat and easily understandable solution for managing dotfiles while not relying on languages or tools you probably cannot install on the systems you are working on. However, Homeshick comes in as a close second, and I might give it a try at some point in the future. In the end, both solutions are much better than self-written solutions – or no solution at all.

Short Tip: egrep – using grep with more than one expression

I stumbled across an old blog post of mine about using grep with more than one expression: in the old days I used -e several times, once for each expression. But as stressed in the comments, that approach is neither convenient nor reliable on all platforms. My habits have developed as well, and today I usually use egrep if I need to grep for several expressions. Thus, here are some short notes about using it.

The multiple expressions you are searching for are passed to egrep separated by pipes. For example, if you want to grep the output of lspci for all audio and video controllers, the correct command is:

$ lspci|egrep -i 'audio|vga'
00:05.0 Audio device: NVIDIA Corporation MCP61 High Definition Audio (rev a2)
00:0d.0 VGA compatible controller: NVIDIA Corporation C61 [GeForce 6150SE nForce 430] (rev a2)

( Yes, I know, I write my blog post on pretty old hardware right now 😉 )

egrep understands more than two expressions, so you can pass patterns like $STRING_1|$STRING_2|$STRING_3|.... But don't forget the single quotes ' around the pattern: they ensure that the pipes are passed to egrep as separators instead of being interpreted by your shell.
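
For example, extending the command from above to also list the network controller is just a matter of adding a third expression – the output of course depends on your hardware, this is merely a sketch:

$ lspci|egrep -i 'audio|vga|ethernet'
00:05.0 Audio device: NVIDIA Corporation MCP61 High Definition Audio (rev a2)
00:07.0 Bridge: NVIDIA Corporation MCP61 Ethernet (rev a2)
00:0d.0 VGA compatible controller: NVIDIA Corporation C61 [GeForce 6150SE nForce 430] (rev a2)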