
[Howto] Using D-BUS to query status information from NetworkManager (or others)

Most of the current Linux installations rely on the inter-process communication framework D-Bus. D-Bus can be used to gather quite some information about the system – however, the usage can be a bit troublesome. This howto sheds some light on the usage of D-Bus, with querying the NetworkManager interface as the example.

Background

D-Bus enables tools and programs to talk to each other. For example, tools like NetworkManager, systemd or firewalld all provide methods and information via D-Bus to query their state, change their configuration or trigger some specific behavior. And of course all these operations can also be performed on the command line. This can be handy in case you want to include them in a Bash script or, for example, in your monitoring setup. It also helps in understanding the basic principles behind D-Bus in case you want to use it in more complex scripts and programs.

First steps: qdbus

For this example I use qdbus, which is shipped with Qt. There are corresponding tools like gdbus and others available in case you don’t want to install Qt on your machine for whatever reason.
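
The queries shown below translate to these tools almost one to one. For instance, introspecting the NetworkManager object used throughout this howto would look roughly like this with gdbus (shipped with GLib) – a sketch:

$ gdbus introspect --system --dest org.freedesktop.NetworkManager --object-path /org/freedesktop/NetworkManager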

When you first launch qdbus it shows you a list of strange names which roughly remind you of the apps currently running on your desktop/user session. The point is that you are querying your own session bus – but in case of NetworkManager or other system tools you need to query the system bus:

$ qdbus --system
...
 org.freedesktop.NetworkManager
...

This output shows a list of all available services or, better said, interfaces. You can connect to these and get a list of the objects they have:

$ qdbus --system org.freedesktop.NetworkManager
...
/org
/org/freedesktop
/org/freedesktop/NetworkManager
/org/freedesktop/NetworkManager/AccessPoint
/org/freedesktop/NetworkManager/AccessPoint/0
...

Each object has an object path which identifies, well, the path to the object. That’s how you address the object and everything which is connected to it.

Querying objects

Now that we have a list of objects, we can check which members belong to an object. Members can be methods which can be called, properties describing the current state, signals, etc. – when we have access to the members, things get interesting. In this case we query the object NetworkManager itself, not one of its sub-objects:

$ qdbus --system org.freedesktop.NetworkManager /org/freedesktop/NetworkManager
...
method QDBusVariant org.freedesktop.DBus.Properties.Get(QString interface, QString propname)
method QVariantMap org.freedesktop.DBus.Properties.GetAll(QString interface)
...
property read QList<QDBusObjectPath> org.freedesktop.NetworkManager.ActiveConnections
...

The output shows a list of various members. In the above code snippet you see the methods to get information – and a property which is called org.freedesktop.NetworkManager.ActiveConnections. Guess what, that property holds the information about the currently active connections (there can be more than one!) of NetworkManager. And we can ask for this information (using --literal because otherwise qdbus cannot print the complex return type):

$ qdbus --system --literal org.freedesktop.NetworkManager /org/freedesktop/NetworkManager org.freedesktop.DBus.Properties.Get org.freedesktop.NetworkManager ActiveConnections
[Variant: [Argument: ao {[ObjectPath: /org/freedesktop/NetworkManager/ActiveConnection/0]}]]
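
By the way, reading a property this way always follows the same general pattern:

$ qdbus --system --literal <service> <object path> org.freedesktop.DBus.Properties.Get <interface> <property name>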

Please note that we did not pass the entire property name as one argument, but split it at the last dot: formally we asked for the content of the property ActiveConnections on the interface org.freedesktop.NetworkManager. The interface and the property are merged in the output, but the query always needs them separated by a space. I’m not sure why…
But well, now we know that our active connection is actually a NetworkManager object with the path given above. We can again query that object to get a list of all members:

$ qdbus --system --literal org.freedesktop.NetworkManager /org/freedesktop/NetworkManager/ActiveConnection/0
...
method QDBusVariant org.freedesktop.DBus.Properties.Get(QString interface, QString propname)
...
property read QDBusObjectPath org.freedesktop.NetworkManager.Connection.Active.Ip4Config
...

There is again a member to get properties – and the interesting property again is an object path:

$ qdbus --system --literal org.freedesktop.NetworkManager /org/freedesktop/NetworkManager/ActiveConnection/0 org.freedesktop.DBus.Properties.Get org.freedesktop.NetworkManager.Connection.Active Ip4Config
[Variant: [ObjectPath: /org/freedesktop/NetworkManager/IP4Config/1]]

We again query the given object path and see rather promising members:

$ qdbus --system --literal org.freedesktop.NetworkManager /org/freedesktop/NetworkManager/IP4Config/1
property read QDBusRawType::aau org.freedesktop.NetworkManager.IP4Config.Addresses
property read QStringList org.freedesktop.NetworkManager.IP4Config.Domains
property read QString org.freedesktop.NetworkManager.IP4Config.Gateway
...

And indeed: if we now query these members, we get for example the current Gateway:

$ qdbus --system --literal org.freedesktop.NetworkManager /org/freedesktop/NetworkManager/IP4Config/1 org.freedesktop.DBus.Properties.Get org.freedesktop.NetworkManager.IP4Config Gateway
[Variant(QString): "192.168.178.1"]

That’s it. Now you know the gateway I have configured right now. If you do not want to query each member individually, you can simply get all properties of an interface at once:

$ qdbus --system --literal org.freedesktop.NetworkManager /org/freedesktop/NetworkManager/IP4Config/1 org.freedesktop.DBus.Properties.GetAll org.freedesktop.NetworkManager.IP4Config|sed 's/, /\n/g'
[Argument: a{sv} {"Gateway" = [Variant(QString): "192.168.178.1"]
"Addresses" = [Variant: [Argument: aau {[Argument: au {565356736
24
28485824}]}]]
"Routes" = [Variant: [Argument: aau {}]]
"Nameservers" = [Variant: [Argument: au {28485824}]]
"Domains" = [Variant(QStringList): {"example.com"}]
"Searches" = [Variant(QStringList): {}]
"WinsServers" = [Variant: [Argument: au {}]]}]

As you see, the IPv4 addresses are encoded as plain 32-bit integers in network byte order – printed as decimal numbers on a little-endian machine they therefore look reversed. I am sure there is a reason for that. A good one. Surely. But well, that’s just an encoding problem, nothing else. In the end, the queries worked: the current gateway was successfully identified via D-Bus.
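
If you need dotted quads, the integers can be decoded directly in the shell. A minimal sketch for the Nameservers value from above, extracting the lowest byte first (valid on a little-endian machine):

$ ip=28485824
$ printf '%d.%d.%d.%d\n' $((ip & 255)) $((ip >> 8 & 255)) $((ip >> 16 & 255)) $((ip >> 24 & 255))
192.168.178.1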

Methods: calling panic mode in firewalld

As mentioned above there are also methods which influence the behavior of an application. One simple example I came across is to kill all networking by enabling the firewalld panic mode. For that you need the interface org.fedoraproject.FirewallD1, the object /org/fedoraproject/FirewallD1 and the method org.fedoraproject.FirewallD1.enablePanicMode:

$ qdbus --system --literal org.fedoraproject.FirewallD1 /org/fedoraproject/FirewallD1 org.fedoraproject.FirewallD1.enablePanicMode
[]

And your internet connection is gone. It comes back by disabling the panic mode again:

$ qdbus --system --literal org.fedoraproject.FirewallD1 /org/fedoraproject/FirewallD1 org.fedoraproject.FirewallD1.disablePanicMode
[]

Rights

You should also be aware that there is rights management embedded in D-Bus – not every user is allowed to do everything. For example, as a normal user you cannot simply query all configured chains. If you call the following method:

$ qdbus --system --literal org.fedoraproject.FirewallD1 /org/fedoraproject/FirewallD1 org.fedoraproject.FirewallD1.direct.getAllChains
[Argument: a(sss) {}]

you are greeted with a password dialog before the command is executed.

Summary

D-Bus is used for inter-process communication and thus can help when various programs are supposed to work together. It can also be used on the shell to query information or to call specific methods, as long as they are provided via the D-Bus interface. That might come in handy – some applications have rather strange ways to provide data or procedures via their user interfaces, and D-Bus offers a very generic way to interact with them without having to deal with any user interface at all.

[Short Tip] Ansible Cheat Sheet


I created an Ansible Cheat Sheet for Wall-Skills.com which was published today. It covers most of the important bits and pieces on one neat single page and thus should hang on your office wall. And since even customers recently approached me regarding using Ansible on Ubuntu/Debian, I figure and hope that this cheat sheet will be of help to others.

By the way, thanks to pastjean, the creator of the famous Git Cheat Sheet which was published on Wall-Skills not long ago in an adapted version: the Git Cheat Sheet inspired me to write my own cheat sheet for Ansible, and the design follows similar principles.

[Howto] Vagrant: libvirt, (Multi-)Multi-Machine, Ansible and Puppet

Vagrant is a tool to create and configure virtual development environments. As a wrapper around common virtual machine solutions it helps bringing up and disposing of a virtual machine in a glimpse. I played around with it – and, well, got a bit carried away…

What all the fuss is about… – Vagrant Basics

Simply put, Vagrant is nothing more than a wrapper around your average virtualization solution. While it was mainly developed for VirtualBox, these days it also supports libvirt/kvm, Amazon, VMware and others. The aim is to have one single description file to bring up and tear down an entire virtual machine with simple commands. That way each developer or tester can use the same description file and thus the same environment – Vagrant wants to fight the “works on my machine” excuse. Also, since bringing up and disposing of images is a matter of seconds, all testing and development can be done against clean environments.

Another big feature is that Vagrant seamlessly integrates with configuration management systems like Puppet, Chef, Ansible and others. Thus the deployment of the virtual development machines can be done with the same configuration scripts you use for your production environment.

The Vagrant VMs are always spawned from a base image, which is in turn derived from a so-called “box”. There are many of these boxes freely available on the internet. Most of them share the same features: only basic packages installed plus Puppet, Chef and similar clients, a user vagrant with the password vagrant, sudo rights and an insecure ssh public key in .ssh/authorized_keys.

Why? – The motivation

The question quickly arises why anyone should use Vagrant. After all, virsh already is a proper abstraction layer for qemu and kvm and provides quite some functionality. The rest can be done manually or by helper scripts: bringing up a VM, installing configuration management, throwing machines away, bringing new ones up, etc.

However, Vagrant is much simpler than virsh, and also offers a simple but reasonably sane configuration file. And it is easier to use Vagrant, which is continuously improved, than to develop and maintain your own set of scripts.

There are of course other solutions – Docker is rather often mentioned in this regard. I have not really looked at Docker or other alternatives in detail, so I won’t do a comparison right now. Maybe I will find time in the future to really dive into Docker and then compare them – if possible at all.

Can we start now? – Basic usage

Vagrant was originally developed around VirtualBox. Since I am used to libvirt, I decided to test Vagrant with libvirt. James has written an incredibly helpful howto describing what to do and where users should be careful.

The general steps for Fedora are (and similar on other distributions like Debian, etc.):

  • get a recent Vagrant rpm and install it
  • throw in some dependencies for later: libvirt-devel, libxslt-devel, libxml2-devel, virsh, qemu-img
  • call vagrant and install a plugin to convert VirtualBox images to libvirt ones: vagrant plugin install vagrant-mutate
  • get the first Vagrant box: vagrant box add precise32 http://files.vagrantup.com/precise32.box – the default example; it is a VirtualBox image
  • transform it to a VMDK2 image, because Fedora’s qemu right now cannot handle VMDK3 images: wget https://raw.github.com/erik-smit/one-liners/master/qemu-img.vmdk3.hack.sh, chmod u+x qemu-img.vmdk3.hack.sh, ./qemu-img.vmdk3.hack.sh ~/.vagrant.d/boxes/precise32/virtualbox/box-disk1.vmdk
  • mutate the VirtualBox image to libvirt: vagrant mutate precise32 libvirt
  • initiate a Vagrant project: vagrant init precise32 – this creates the Vagrant configuration file called Vagrantfile
  • launch your first Vagrant box: vagrant up

And you are done! You’ve created your first VM with Vagrant. You can get access via vagrant ssh:

$ vagrant ssh
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic-pae i686)

 * Documentation:  https://help.ubuntu.com/
Welcome to your Vagrant-built virtual machine.
Last login: Fri Sep 14 06:22:31 2012 from 10.0.2.2
vagrant@precise32:~$

The command drops to a normal shell on the machine as the user “vagrant”.

And now comes the beauty of Vagrant: with a single vagrant destroy you can get rid of the entire VM including the used disk space. If you want to try it again: vagrant up, box goes up, vagrant destroy, box goes down. Box goes up, box goes down. Box goes up, box goes down.

Each time all modifications to the VM are lost, thus you can ensure all your changes in configuration management and development work against a clean environment. And it only takes half a minute to bring up the machine!

By the way, before you get too experimental: your current working directory is exported to the VM as an NFS share. Thus exchanging files between host and guest is rather easy. But it also means that you should not try rm -rf / on a Vagrant machine.

Got problems? – Troubleshooting

What, already? Well… In that case, launch virt-manager and look at the machine. After all, Vagrant is just a wrapper!

For example if DHCP failed or NFS timed out it might be that there are problems with the firewall rules of the host machine. The way to allow the proper connections for recent Fedora versions is explained in Vagrant issue #2447. Basically, the Vagrant libvirt network bridge should be assigned to the internal zone, and services like NFS, DHCP, mountd and rpc for both TCP and UDP should be added there.

Also, if you are asked for your user password all the time – which is there to ensure only authorized access to the virtual machine management – PolicyKit rules can permanently grant the proper rights. James’ blog post mentioned above explains the details.
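
Such a rule could look like the following – a minimal sketch, assuming your user is a member of a group called vagrant; the file name is arbitrary:

$ sudo tee /etc/polkit-1/rules.d/50-libvirt-vagrant.rules <<'EOF'
// allow members of the group "vagrant" to manage libvirt without a password prompt
polkit.addRule(function(action, subject) {
    if (action.id == "org.libvirt.unix.manage" &&
        subject.isInGroup("vagrant")) {
        return polkit.Result.YES;
    }
});
EOF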

And in case each machine gets two adapters with two IP addresses… well, that happened to me as well, and so far I have no solution. I am happy for any help regarding that topic.

Storage! – Where to put the images and boxes

Running virtual machines eats your storage. Usually the boxes are managed in ~/.vagrant.d/boxes, while the actual VM templates and the machines themselves are stored in the default storage of the virtual machine provider – in this case, the default storage pool of libvirt.

It makes sense to dedicate an extra storage pool, like an extra partition, to Vagrant. That can be achieved by creating a storage pool in libvirt and telling Vagrant about it in the Vagrantfile. The “provider” section is the right place for that:

# Provider-specific configuration
config.vm.provider :libvirt do |libvirt|
    libvirt.driver = "qemu"
    libvirt.connect_via_ssh = false
    libvirt.storage_pool_name = "vm-images"
end

Providers are Vagrant’s name for virtual environments like VirtualBox, libvirt and so on. In this example the provider libvirt is ensured in case other providers are installed on the host as well. Also, the storage pool name is given: “vm-images”.
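
The pool itself must of course exist in libvirt beforehand. Creating it with virsh could look like this – a sketch; the pool name matches the Vagrantfile above, the target path is an assumption:

$ sudo virsh pool-define-as vm-images dir --target /var/lib/libvirt/vm-images
$ sudo virsh pool-build vm-images
$ sudo virsh pool-start vm-images
$ sudo virsh pool-autostart vm-images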

We want more! – Getting other boxes

In the above example an Ubuntu Precise machine was used. But it might rather well be that someone would like to use other operating systems or versions as a base. Quite a few are listed at vagrantbox.es. Also, the Puppet project provides some useful boxes, even “clean” ones without any further additions installed.

However, using boxes from others is of course a security risk. But there is a way to build boxes yourself. I found James’ Makefile very handy: it uses virt-builder from the libguestfs project to build a CentOS 6 machine. Of course, using images from libguestfs only reduces your security risk; best would be to build your own images from scratch.

I built my CentOS 6 images using the above mentioned Makefile with two smaller adjustments, to avoid problems with SSH fingerprints and for a better Ansible integration.

One, one-two, one-two-three! – Multi machines

Vagrant is fun already – but things get more interesting with the possibility to start more than one machine. The Vagrant documentation about multi-machines shows how further machines can be added – and named – in the Vagrantfile:

  config.vm.define "staging" do |staging|
    staging.vm.network private_network, ip: "192.168.121.101"
  end

  config.vm.define "prod" do |prod|
    prod.vm.network private_network, ip: "192.168.121.111"
  end

In this example two machines are now available: one named “staging” and one called “prod”, both with different IP addresses. Now all the Vagrant commands like up and ssh need the name of the machine to know on which machine they should act: vagrant up staging, vagrant ssh staging and vagrant destroy staging.

If no machine name is given, tasks like up and destroy are run on all machines at the same time, in parallel. That can lead to problems like frozen machines, for unknown reasons. In such cases a workaround is to start the machines with the option --no-parallel.

Do you YAML? – Defining machines in external files

Being excited by the simplicity of Vagrant I got playful – and decided that specifying machines in the Vagrantfile is not very handy. Also, I have no idea of Ruby and am bound to make errors when I have to alter the file often. Thus I decided to specify the machines in a YAML file, including name and IP. I picked YAML because I know it from Hiera/Puppet, Ansible and others. The file looks like the following:

---
- name: prod
  ip: 192.168.121.101
  environment: prod
- name: staging
  ip: 192.168.121.102
  environment: staging

This must be read into Vagrant – and luckily the entire script to read in the YAML data can be placed in the Vagrantfile itself:

  require 'yaml'
  servers = YAML.load_file('servers.yaml')

  # define one machine per entry of servers.yaml
  servers.each do |server|
      config.vm.define server["name"] do |serv|
          serv.vm.network "private_network", ip: server["ip"]
      end
  end

That’s it already. Now Vagrant can run as many machines as are specified in the YAML file.
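
For completeness: embedded in a full Vagrantfile the whole thing could look like this – a minimal sketch, reusing the precise32 box from above:

require 'yaml'
servers = YAML.load_file('servers.yaml')

Vagrant.configure("2") do |config|
  config.vm.box = "precise32"

  servers.each do |server|
    config.vm.define server["name"] do |serv|
      serv.vm.network "private_network", ip: server["ip"]
    end
  end
end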

Multi multi multi – Specifying multi-machines with different boxes

But that wasn’t enough – I work in two worlds, Debian/Ubuntu and Fedora/CentOS, and thus needed to be able to spawn VMs with CentOS and Ubuntu images. Thus I added the box name to the YAML file: “centos64” is the base box for a recent CentOS, and “saucy64” the base box for the latest Ubuntu release.

- name: cstaging
  box: centos64
  ip: 192.168.121.102
- name: ustaging
  box: saucy64
  ip: 192.168.121.150

The names need to be different so that Vagrant can differentiate between the machines. In the above outlined example the names are prefixed with “u” for the Ubuntu machines and “c” for the CentOS machines.

The source code to read in these details in the Vagrantfile is:

  servers.each do |server|
      config.vm.define server["name"] do |serv|
          serv.vm.box = server["box"]
          serv.vm.network "private_network", ip: server["ip"]
      end
  end

Do what I want! – Configuration management

As mentioned in the beginning, the real strength of Vagrant shines when it comes to integrating configuration management: imagine a setup where all the developers work only on the main Git repositories of the configuration management and the application. When they want to see if their changes work, they just fire up Vagrant: upon vagrant up development a new VM is deployed, all configurations and the application code of the stage “development” are pulled and installed automatically, and the developer just checks if everything is alright. Afterwards the developer destroys the VM and pushes the just tested changes to the staging repo.

To make that come true we need to integrate configuration management with Vagrant. By the way: Vagrant speaks of “provisioning” here – not the best naming, but you have to live with it. Anyway, as mentioned there are multiple ways to integrate configuration management. Puppet alone can be run via puppet apply or as a real Puppet client, and multiple configuration management systems can even be combined and run after each other. Here I will only shed some light on integrating either Ansible or Puppet Apply.

The decision which machine is to be managed by which solution will – of course – be done in the YAML file:

- name: cclean
  box: centos64
  ip: 192.168.121.102
  prov: ansible
  environment: staging
- name: uclean
  box: saucy64
  ip: 192.168.121.150
  prov: puppet
  environment: common

The CentOS machines will be managed by Ansible, the Ubuntu ones by Puppet. Also, please note that additional variables for the stages are defined, which might come in handy later on.

Simple and fast – integrating Ansible

I like Ansible. It’s really fast and easy to deploy. The same is true for its integration with Vagrant:

  servers.each do |server|
      config.vm.define server["name"] do |serv|
          serv.vm.box = server["box"]
          serv.vm.network "private_network", ip: server["ip"]
          if server["prov"] == "ansible"
              serv.vm.provision "ansible" do |ansible|
                  ansible.playbook = "ansible/playbook.yaml"
              end
          end
      end
  end

A subdirectory called “ansible” contains the playbook which is needed by Ansible. And, well, that’s it. The Vagrant Ansible documentation covers some more options like additional variables which can be set. These come in handy when you need to differentiate between various environments like staging and production.
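
Reusing the environment field from the YAML file, that could look like this – a sketch; the name of the variable passed to Ansible is an assumption:

          if server["prov"] == "ansible"
              serv.vm.provision "ansible" do |ansible|
                  ansible.playbook = "ansible/playbook.yaml"
                  # pass the stage of the machine to the playbook as an extra variable
                  ansible.extra_vars = { environment: server["environment"] }
              end
          end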

But in general, you can now write your playbook as you like and watch how Ansible works its way. It is automatically launched after the NFS share is made available.
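
In case you need a starting point, a minimal ansible/playbook.yaml could look like this – a sketch; the actual task is just an example:

---
- hosts: all
  sudo: yes
  tasks:
    - name: ensure ntp is installed
      yum: name=ntp state=present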

Big and heavy – Puppet

Compared to Ansible, Puppet is rather heavy, and not that quick&dirty. Thus the integration with Vagrant is a bit more difficult. But in general it is what anyone would expect of a Puppet setup: a Hiera configuration, a modules directory and a basic Puppet manifest file to start with. The integration with Vagrant is done entirely in the Vagrantfile and is shown below, together with the already mentioned Ansible configuration:

  servers.each do |server|
      config.vm.define server["name"] do |serv|
          serv.vm.box = server["box"]
          serv.vm.network "private_network", ip: server["ip"]
          if server["prov"] == "ansible"
              serv.vm.provision "ansible" do |ansible|
                  ansible.playbook = "ansible/playbook.yaml"
              end
          elsif server["prov"] == "puppet"
              serv.vm.provision :puppet do |puppet|
                  puppet.manifest_file  = "site.pp"
                  puppet.manifests_path = "puppet/manifests"
                  puppet.module_path = "puppet/modules"
                  puppet.hiera_config_path = "puppet/hiera.yaml"
                  puppet.working_directory = "/vagrant/puppet"
                  puppet.facter = {
                      "environment" => server["environment"]
                  }
              end
          end
      end
  end

In the example above an environment variable is provided to the Puppet Apply run in the form of a facter variable. Also, please note that a working directory has to be given so that all necessary files like the Hiera data store can be found by Puppet. The rest is, however, pure Puppet: getting proper modules, setting up a well designed hierarchy with Hiera, etc.
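
The referenced puppet/hiera.yaml could be as simple as the following – a sketch for Hiera 1, where the datadir is an assumption; it resolves the environment fact set above and falls back to common:

---
:backends:
  - yaml
:yaml:
  :datadir: /vagrant/puppet/hieradata
:hierarchy:
  - "%{::environment}"
  - common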

Sooo…. What now? – Conclusion

Well… in the beginning I just wanted to understand what Vagrant exactly does and how and if it can help me. After getting this far I can say it will help me a lot in the future: creating and dumping virtual machines was never so quick and easy.

With the option of integrating provisioning methods it can really change the way people develop code in business environments, for example for customers at my work at credativ. Often enough customers are simply too busy to bring up a VM because it takes too much time and is too tedious even with predefined scripts. They end up developing on their own machines, and later on run into problems in the further development. At such points Vagrant’s ease and simplicity can be incredibly helpful to test configuration management recipes or development code. It could greatly ease the pain of providing an automated QA workflow where the QA testers get their own disposable VMs which are automatically brought up with the current testing stage.

But even for smaller setups like “personal” development Vagrant can really make things easier: as mentioned in the beginning, many people use virtualization in ways more or less similar to Vagrant. But keeping together personal scripts to bring up and destroy machines is tedious work. Vagrant makes it quicker, simpler, and thus more reliable. Only for one-time tasks, where for example I would need to create an entire new box for Vagrant, might I still use virt-manager.

Of course, some bits are still missing. For example the provisioning done by YAML file could be improved so that Ansible, Puppet and all the others could share a single server configuration base with Vagrant. And as mentioned above, a comparison with other solutions would be nice as well.

But so far Vagrant has proven itself: Whenever I need to repeatedly test some configuration or integration I will use Vagrant. And I am already looking forward to all the shiny things which I can test now just a bit easier.

[Howto] First Steps With Ansible

Ansible is a tool to manage systems and their configuration. Without the need for an agent installed on the client and with the ability to launch commands ad hoc from the command line, it seems to fit between classic configuration management like Puppet on one hand and ssh/dsh on the other.

Background

System/configuration management is a hot topic right now. At FOSDEM 2014 there was an entire track dedicated to the topic – and the rooms were constantly overcrowded. There are more and more large server installations out there these days. With virtualization it again gets sensible and possible to have one server for each service. All these often rather similar machines need to be managed, and thus central configuration management tools like Puppet or Chef became very popular. They keep all configuration stored in recipes on a central server; the clients connect to it and pull the recipes regularly to ensure everything is fine.

But sometimes there are smaller tasks: tasks which only need to be done once or once in a while, but for which a configuration management recipe might be too much. Also, it might happen that you have machines where you cannot easily install a Puppet client, or machines which cannot contact your configuration management server via pull due to security concerns. For those situations ssh is often the tool of the sysadmin’s choice. There are also cluster or distributed versions available, like dsh.

Ansible now fits right in between these two classes of tools: it does provide the possibility to serve recipes from a central server, but does not require the clients to run anything besides an ssh daemon.

Basic configuration, simple commands

First of all Ansible needs to know the hosts it is going to serve. They can be managed on the central server in /etc/ansible/hosts or in a file configured in the shell variable ANSIBLE_HOSTS. The hosts can be listed as IP addresses or host names, and can carry additional information like user names, ssh port and so on:

[web-servers]
www.example.net ansible_ssh_port=222
www.example.com ansible_ssh_user=liquidat

[db-servers]
192.168.1.1
blue ansible_ssh_host=192.168.1.50

As soon as the hosts are defined, an Ansible “ping” can be used to see if they can all be reached. This is done from the central server – Ansible is by default a push service, not a pull one.

$ ansible all -m ping
www.example.net | success >> {
    "changed": false, 
    "ping": "pong"
}
...

As seen above, Ansible was called with the flag “m” which means module – the module “ping” just contacts the servers and checks if everything is ok. In this case the servers answered successfully. Also, as you see, the output is formatted in JSON, which is helpful in case the results need to be parsed elsewhere.

In case you want to call arbitrary commands the flag “a” is needed:

$ ansible all -a "whoami" --sudo -K
sudo password: 
www.example.net | success | rc=0 >>
root
...

The “a” flag provides arguments to the invoked module. In case no module is given, the argument of the flag is executed on the machine directly. The flag “sudo” calls the command with sudo rights, “K” asks for the sudo password. Btw., note that this requires all servers to use the same sudo password, so to run Ansible you should think about configuring sudo with NOPASSWD.
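
Such a sudoers entry could look like this – a sketch, assuming the remote user liquidat from the inventory above; add it via visudo:

liquidat ALL=(ALL) NOPASSWD: ALL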

More modules

There are dozens of modules provided with Ansible. For example, the file module can change permissions and ownership of a file or delete files and directories. The service module can manage the state of services:

$ ansible www.example.com -m service -a "name=sshd state=restarted" --sudo -K
sudo password: 
www.example.com | success >> {
    "changed": true, 
    "name": "sshd", 
    "state": "started"
}

There are modules to send e-mails, copy files, install software via various package managers, manage cloud resources, manage different databases, and so on. For example, the copy module can be used to copy files – and it shows that files are only transferred if they are not already there:

$ ansible www.example.com -m copy -a "src=/home/liquidat/tmp/test.yml dest=/home/liquidat/text.yaml"
www.example.com | success >> {
    "changed": <strong>true</strong>, 
    "dest": "/home/liquidat/text.yaml", 
    "gid": 500, 
    "group": "liquidat", 
    "md5sum": "504e549603f616826707d60be0d9cd40", 
...

$ ansible www.example.com -m copy -a "src=/home/liquidat/tmp/test.yml dest=/home/liquidat/text.yaml"
www.example.com | success >> {
    "changed": <strong>false</strong>, 
...
}

In the second attempt the “changed” status is “false”, indicating that the file was not actually transferred since it was already there.

Playbooks

However, Ansible can be used for more than a distributed shell on steroids: configuration management and system orchestration. Both are realized in Ansible via so-called Playbooks. In such YAML files all the necessary tasks are stored which either ensure a given configuration or set up a specific system. In the end the Playbooks just list the Ansible commands and modules which could also be called via the command line. However, Playbooks also offer a dependency/notification system where given tasks are only executed if other tasks changed anything. Playbooks are called with a specific command line: ansible-playbook $PLAYBOOK.yml

For example, imagine a setup where you copy a file, and if that file was copied (meaning it was not there before or was changed in the meantime) you need to restart sshd:

---
- hosts: www.example.com
  remote_user: liquidat
  tasks:
      - name: copy file
        copy: src=~/tmp/test.txt dest=~/test.txt
        notify:
            - restart sshd
  handlers:
      - name: restart sshd
        service: name=sshd state=restarted
        sudo: yes

As you see, the host and user are configured in the beginning. There could also be host groups if needed. This is followed by the actual task: copying the file. All tasks of a Playbook are usually executed. This given task definition does have a notifier: if the task is executed with a “change” state of “true”, then a “handler” is notified. A handler is a task which is only executed if it is called for. In this case, sshd is restarted after we copied over a file.

And the output is clear as well:

$ ansible-playbook tmp/test.yml -K
sudo password: 

PLAY [www.example.com] ********************************************************* 

GATHERING FACTS *************************************************************** 
ok: [www.example.com]

TASK: [copy file] ************************************************************* 
changed: [www.example.com]

NOTIFIED: [restart sshd] ****************************************************** 
changed: [www.example.com]

PLAY RECAP ******************************************************************** 
www.example.com             : ok=3    changed=2    unreachable=0    failed=0

The above example is a simple Playbook – but Playbooks offer many more functions: templates, variables based on various sources like the machine facts, conditions, and even looping the same set of tasks over different sets of variables. For example, here we take the copy task but loop over a set of file names, each of which should get a different name on the target system:

- name: copy files
  copy: src=~/tmp/{{ item.src_name }} dest=~/{{ item.dest_name }}
  with_items:
    - { src_name: file1.txt, dest_name: dest-file1.txt }
    - { src_name: file2.txt, dest_name: dest-file2.txt }

Also, Playbooks can include other Playbooks, so you can have a set of ready-made Playbooks at hand and combine them as you like. As you see, Ansible is incredibly powerful and provides the ability to write Playbooks for very complex management tasks and system setups.
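
Such an include could look like this – a sketch; the file names are assumptions:

---
- include: webservers.yml
- include: dbservers.yml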

Outlook

Ansible is a tempting solution for configuration management since it combines direct access with configuration management. If you have your large server data center already configured in an ansible-hosts file, you can use it both for system configuration as well as for performing direct tasks. This is a big advantage compared to, for example, Puppet setups. Also, you can write Playbooks which you only need once in a while, store them at some place – and use them for orchestration purposes. Something which is not easily available with Puppet, but very simple with Ansible. Additionally, Ansible can be used either pushing or pulling – there are tools for both – which makes it much more flexible compared to other solutions out there.

And since you can use Ansible right from the start, even without writing complex recipes first, the learning curve is not that steep – and the adoption of Ansible is much quicker. There are already customers who use Ansible together with Puppet since Ansible is so much easier and much quicker to learn.

So in the end I can only recommend Ansible to anyone who is dealing with configuration management. It is certainly a helpful tool, and even if you don’t start using it, it might be interesting to know how other approaches to system and configuration management look.

First look at cockpit, a web based server management interface [Update]

Only recently the Cockpit project was launched, aiming at providing a web based management interface for various servers. It already leaves an interesting impression for simple management tasks – and the design is actually well done.

I just recently came across the only three months old Cockpit project. The mission statement is clear:

Cockpit is a server manager that makes it easy to administer your GNU/Linux servers via a web browser.

The web page also states three aims: a beginner-friendly interface, multi-server management – and that there should be no interference in mixed usage of web interface and shell. Especially the last point caught my attention: many other web based solutions introduce their own magic, thus making it sometimes tricky to co-administrate the system manually via the shell. The listed objectives also make clear that Cockpit does not try to replace tools that go much deeper into the configuration of servers, like Webmin, which for example offers modules to configure Apache servers in a quite detailed manner. Cockpit tries to simply administrate the server, not the applications. I must admit that I would always do such an application configuration manually anyway…

The installation of Cockpit is a bit bumpy: besides the requirement of tools like systemd, which limits the usage to only very recent distributions (excluding Ubuntu, I guess), there are no packages yet, so some manual steps are required. A post at unshut.me highlights the necessary steps for Fedora which I followed: it includes installing dependencies, setting firewall rules, etc. – and in the end it just works. But please note, in case you wanna give it a try: it is not ready for production. Not at all. Use virtual machines!

What I saw after the installation was actually rather appealing: a clean, yet modern web interface offering the most important and simple tasks a sysadmin might need in a daily routine: quickly showing the current health state, providing logs, starting and stopping services, creating new users, switching between servers, etc. And: there is even a working rescue console!

And wherever you click you quickly see what the foundation of Cockpit is: systemd. The log viewer shows systemd journal logs, services are displayed as seen and managed by systemd, and so on. That is the reason why one goal – no interference between shell and web interface – can be reached rather easily: the web interface communicates with systemd, just like an administrator on such a machine would do. <Update> Speaking of which: if you want to get an idea of *how* Cockpit communicates with its components, have a look at their transport graphic. </Update> Systemd, by the way, also explains why Cockpit is currently developed on Fedora: it ships with fully activated systemd.

But back to Cockpit itself: some people might note that running a web server on a machine which is not meant to provide web pages is a security issue. And they are right. Each additional service on a server is a potential threat. But also keep in mind that many simple server installations already have an additional web server, for example to show Munin statistics. So as always you have to carefully balance the pros of usable system management against the cons of an additional service and a web reachable system console…

To summarize: the interface is slick and easy to use; for simple server setups it could come in handy as a server management tool, for example for beginners, and accessible from the internal network only. A downside currently is the already mentioned limitation to certain distributions: as far as I can tell, only Fedora 18 and 20 are supported yet. But the project has just begun and will most certainly pick up more support in the near future, as long as the foundations (systemd) are properly supported in the distribution of your choice. And in the meantime Cockpit might be an extra bonus for people testing the coming Fedora Server. ;-)

Last but not least, in case you wonder how server management looks like with systemd, Cockpit can give you a first impression: it uses systemd and almost nothing else for exactly that.