Ansible Tower 3.1 – screenshot tour

Ansible Tower 3.1 was just released. Time to have a closer look at some of the new features like the workflow editor.

Just a few days ago, Ansible Tower 3.1 was released. Besides the usual bug fixes, refinements of the UI and similar things, this Tower version comes with major new features: a workflow editor, scale-out clustering, integration with logging providers, and a new job details page.

The basic idea of a workflow is to link multiple job templates so that they run one after the other. They may or may not share inventory, playbooks or even permissions. The links can be conditional: if job template A succeeds, job template B is automatically executed afterwards, but in case of failure, job template C will be run. And workflows are not even limited to job templates: they can also include project or inventory updates.
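
For the command line affine: newer versions of the tower-cli tool can express such chains as a YAML schema. The following is only a rough sketch under the assumption that a tower-cli version with workflow support is installed and that the mentioned job templates already exist – check the tower-cli documentation for the exact invocation:

$ tower-cli workflow schema my-workflow @schema.yml
# schema.yml – hypothetical example: run B after A succeeds, C after A fails
- job_template: job template A
  success_nodes:
    - job_template: job template B
  failure_nodes:
    - job_template: job template C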

This enables new applications for Tower: besides the rather simple execution of prepared job templates, now different workflows can build upon each other. Imagine a networking team which creates playbooks with its own content, in its own Git repository, even targeting its own inventory, while the operations team also has its own repos, playbooks and inventory. With older Tower versions there was no simple way to bring these totally separate worlds together – with 3.1 this can be done even with a graphical editor.

Workflows can be created right from the job template page. As can be seen, that page got an overhaul:

templates

The button to add a new template offers a small arrow to get a menu from which a workflow can be set up.

Afterwards, the workflow needs to be defined – name, organization, etc. This is a necessary step before the actual links can be created:

WorkflowEditorStart.png

As shown in the screenshot above, the actual editor can be started from this screen. I must admit that I was surprised by how simple and yet rather elegant the editor looks and works. It takes hardly any time to get used to, and the result is visually appealing and easily understandable:

WorkflowEditor.png

The above screenshot shows the major highlights: links depending on the result of the previous job template in red and green, blue links which are executed every time, a task in the workflow to update a project (indicated by the “P”), and the actual editor.

As mentioned at the beginning, there are more features in this new Tower release. The clustering feature is especially interesting for load balancing and HA setups, though I have not tested it yet. Another new possibility is the integration of logging providers right into the UI:

logging

As shown above, a Logstash logging provider was configured to gather all the Tower logs. Other possible providers are Splunk and, in general, everything which understands REST calls.
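
Since Tower 3.1 also exposes its configuration via the settings API, the logging integration can presumably be set up via REST as well. A sketch only – the endpoint slug and the LOG_AGGREGATOR_* setting names are my assumption based on the 3.1 settings API, and host, port and credentials are placeholders:

$ curl -k -u admin:password -X PATCH \
    -H "Content-Type: application/json" \
    -d '{"LOG_AGGREGATOR_HOST": "logstash.example.com", "LOG_AGGREGATOR_PORT": 8085, "LOG_AGGREGATOR_TYPE": "logstash", "LOG_AGGREGATOR_ENABLED": true}' \
    https://tower.example.com/api/v1/settings/logging/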

A change I have yet to get familiar with is the new view on the jobs page, showing running or completed jobs:

The new view is much more tailored to the output of ansible-playbook, showing the time at each task. Also, a search bar has been added which can be used to search through the results rather easily. Each task can be clicked to get many more details about it. However, in the old view I liked the possibility to simply click through a play and the single tasks, getting the list of hosts adjusted automatically, and so on. I can already see that the change will be for the better – but I have to get used to it first 😉

Overall the new release is pretty impressive. Especially the workflow editor will massively help bring different teams even closer together in automation (DevOps, anyone?). Also, the cluster feature will certainly help create stable, HA-like setups of Tower. The UI might take some time to get used to, but that's ok, since there will be a benefit in the end.

So, it is a great release – get started now!

[Howto] Solaris 11 on KVM

Recently I had to test a few things on Solaris 11 and wondered how well it works virtualized with KVM. It does – with a few tweaks.

Preface

Testing various versions of operating systems is easy these days thanks to virtualization. However, I'm mainly used to Linux variants and hardly ever install any other kind of UNIX-based OS. Thus I was curious whether an installation of Solaris 11 on KVM/libvirt works.

For the test I actually used virt-manager since it provides neat defaults during the VM setup. But the same comments and lessons learned are true for the command line tool as well.

Setting up the VM

virt-manager usually does not provide Solaris as an operating system type by default in the VM setup dialog. You first have to click on “OS Type” and then “Show all OS options”, as shown here:
virt-manager Solaris guest picker

Note that a Solaris 11 guest should have at least 2 GB of RAM; otherwise the installation and also booting might take very long or run into problems of their own.
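
For those preferring the command line, a roughly equivalent setup might look like the following – a sketch only: the ISO path is a placeholder, and the os-variant name depends on the libosinfo version on your system:

$ sudo virt-install --name solaris11 --ram 2048 --vcpus 2 \
    --disk size=20 --cdrom /path/to/sol-11-text-x86.iso \
    --os-variant solaris11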

The installation runs through – although quite a few errors clutter the screen (see below).

Errors and problems

As soon as the machine is started several error messages are shown:

WARNING: /pci@0,0/pci1af4,1100@6,1 (uhci1): No SOF interrupts have been received, this USB UHCI host controller is unusable
WARNING: /pci@0,0/pci1af4,1100@6,2 (uhci2): No SOF interrupts have been received, this USB UHCI host controller is unusable

This shows that something is wrong with the interrupts and thus with the “hardware” of the machine – or at least with the way the guest machine discovers the hardware.

Additionally, even if DHCP is configured, the machine is unable to obtain the networking configuration. A fixed IP address and gateway do not help here, either. The host system might even report that it provides DHCP data, while the guest system keeps requesting it:

Dez 23 11:11:05 liquidat dnsmasq-dhcp[13997]: DHCPDISCOVER(virbr0) 52:54:00:31:31:4b
Dez 23 11:11:05 liquidat dnsmasq-dhcp[13997]: DHCPOFFER(virbr0) 192.168.122.205 52:54:00:31:31:4b
Dez 23 11:11:09 liquidat dnsmasq-dhcp[13997]: DHCPDISCOVER(virbr0) 52:54:00:31:31:4b
Dez 23 11:11:09 liquidat dnsmasq-dhcp[13997]: DHCPOFFER(virbr0) 192.168.122.205 52:54:00:31:31:4b
Dez 23 11:11:17 liquidat dnsmasq-dhcp[13997]: DHCPDISCOVER(virbr0) 52:54:00:31:31:4b
Dez 23 11:11:17 liquidat dnsmasq-dhcp[13997]: DHCPOFFER(virbr0) 192.168.122.205 52:54:00:31:31:4b
...

Also, when the machine is shutting down and ready to be powered off, the CPU usage spikes to 100 %.

The solution: APIC

The solution for the “hardware” problems mentioned above and also for the networking trouble is to deactivate an APIC feature inside the VM: x2APIC, a mode of Intel's programmable interrupt controller. Some more details about the problem can be found in Red Hat Bugzilla entry #1040500.

To apply the fix, the virtual machine definition needs to be edited to disable the feature. The XML definition can be edited with the command sudo virsh edit, with the machine name as command line option; the change needs to be done in the cpu section as shown below. Make sure the VM is stopped before the changes are made.

$ sudo virsh edit krypton
...
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Broadwell</model>
    <feature policy='disable' name='x2apic'/>
  </cpu>
$ sudo virsh start krypton

After this change Solaris does not report any interrupt problems anymore and DHCP works without flaws. Note however that the CPU still spikes at power off. If anyone knows a solution to that problem I would be happy to hear about it and add it to this post.
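
To double check that the feature is really disabled, the definition can simply be dumped again:

$ sudo virsh dumpxml krypton | grep x2apic
    <feature policy='disable' name='x2apic'/>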

[Howto] Managing Solaris 11 via Ansible

Ansible can be used to manage various kinds of server operating systems – among them Solaris 11.

Managing Solaris 11 servers via Ansible from my Fedora machine is actually less exciting than I previously thought. Since the number of blog articles covering it is limited, I expected it to be a nice challenge.

However, the opposite is the case: it just works. On a fresh Solaris installation, out of the box. There is not even a need for additional configuration or software. Of course, ssh access must be available – but the same is true for Linux machines as well. It's almost boring 😉

Here is an example of installing and removing software on Solaris 11, using IPS, the new packaging system introduced in Solaris 11:

$ ansible solaris -s -m pkg5 -a "name=web/server/apache-24"
$ ansible solaris -s -m pkg5 -a "state=absent name=/text/patchutils"

While Ansible uses a special module, pkg5, to manage Solaris packages, managing services is even easier because the usual service module works for Linux as well as for Solaris machines:

$ ansible solaris -s -m service -a "name=apache24 state=started"
$ ansible solaris -s -m service -a "name=apache24 state=stopped"

So far so good – of course things get really interesting when playbooks perform tasks on Solaris and Linux machines at the same time. For example, imagine Apache needs to be deployed and started on Linux as well as on Solaris. This is where conditions come in handy:

---
- name: install and start Apache
  hosts: clients
  vars_files:
    - "vars/{{ ansible_os_family }}.yml"
  sudo: yes

  tasks:
    - name: install Apache on Solaris
      pkg5: name=web/server/apache-24
      when: ansible_os_family == "Solaris"

    - name: install Apache on RHEL
      yum:  name=httpd
      when: ansible_os_family == "RedHat"

    - name: start Apache
      service: name={{ apache }} state=started

Since the service name is not the same on different operating systems (or even on different Linux distributions), it is a variable defined in a family-specific YAML file.
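
For example, with the playbook above the two variable files could look like this – a minimal sketch, with the file names matching the ansible_os_family values:

$ cat vars/Solaris.yml
---
apache: apache24
$ cat vars/RedHat.yml
---
apache: httpd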

It's also interesting to note that the same Ansible module works differently on different operating systems: when a service is ordered to be stopped but is not even available – because the corresponding package and thus the service definition is not installed – the return code on Linux is OK, while Solaris returns an error:

TASK: [stop Apache on Solaris] ************************************************
failed: [argon] => {"failed": true}
msg: svcs: Pattern 'apache24' doesn't match any instances

FATAL: all hosts have already failed -- aborting

It would be nice to catch the error; however, as far as I know, error handling in Ansible can only specify when to fail, not which messages/errors should be ignored.
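
One possible workaround – a sketch I have not tested on Solaris – would be to register the result and override the failure condition via failed_when, so that only unexpected error messages abort the play:

    - name: stop Apache on Solaris
      service: name=apache24 state=stopped
      register: result
      failed_when: "result|failed and 'match any instances' not in result.msg|default('')"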

But besides this problem, managing Solaris via Ansible works smoothly for me. And it even works with Ansible Tower, of course:

Tower-Ansible-Solaris.png

I haven’t tried to install Ansible on Solaris itself, but since packages are available that shouldn’t be much of an issue.

So in case you have a mixed environment including Solaris and Linux machines (Red Hat, Fedora, Ubuntu, Debian, Suse, you name it) I can only recommend starting to use Ansible as soon as possible. It simply works and can ease the pain of day-to-day tasks substantially.

Ansible Galaxy just added Solaris platform support

While Ansible is mostly used in Linux environments, it can also be used to manage other UNIX variants like Solaris. Now the central hub for Ansible roles, Ansible Galaxy, has also added support for the Solaris platform.

Ansible is a handy tool to manage multiple servers. Besides the usual Linux distributions it features support for BSD variants, Solaris and even Windows. However, the central hub to share Ansible roles, Ansible Galaxy, was still missing Solaris support until now: support has been added for version 10 as well as version 11.

That can already be seen when a role template is generated with the Galaxy tools:

$ ansible-galaxy init acme --force
- acme was created successfully
$ grep -B 1 -A 8 Solaris acme/meta/main.yml
  #  - any
  #- name: Solaris
  #  versions:
  #  - all
  #  - 10
  #  - 11.0
  #  - 11.1
  #  - 11.2
  #  - 11.3
  #- name: Fedora

This opens up the possibility to provide Ansible roles including Solaris support in a central place. Right now I already have a pull request open to enable Solaris support in a very powerful Apache role. In a following blog post I'll describe the (surprisingly few) steps which were necessary to adjust the role to support Solaris.

It is great that Ansible Galaxy adds more and more platforms and thus broadens the usage of the central hub to cover more and more use cases. I'm looking forward to seeing more and more Solaris roles in the Galaxy. If you need help porting a role, don't hesitate to contact me.

The support is so far still in internal testing and will be made final when the above-mentioned GitHub issue is closed.

Current distribution of WhatsApp alternatives [Update]

Many people are discussing alternatives to WhatsApp right now. Here I just track how many installations the currently discussed, crypto-enabled alternatives have according to the app store.

WhatsApp was already bad before Facebook acquired it. But at least now people have woken up and are considering secure alternatives. Yes, this move could have come earlier, but I do welcome the new opportunity: it's the first time widespread encryption actually has a chance in the consumer market. So for most of the people out there the question is more “which alternative should I use” than “should I use one”. Right now I do not have the faintest idea which crypto-enabled alternative will make the breakthrough – but you could say I am well prepared.

Screenshot: installed instant messengers

Well – that's obviously not a long-term solution. Thus, to shed some light on the various alternatives and where they stand right now, here is a quick statistical overview:

Secure Instant Messengers, state updated 2014-03-11
Name        Links                   Installed devices         Ratings   Google +1
ChatSecure  Website / Google Play   100 000 – 500 000         1 626     2 620
Kontalk     Website / Google Play   10 000 – 50 000           237       265
surespot    Website / Google Play   50 000 – 100 000          531       632
Telegram    Website / Google Play   10 000 000 – 50 000 000   273 089   97 641
Threema     Website / Google Play   500 000 – 1 000 000       9 368     12 594
TextSecure  Website / Google Play   100 000 – 500 000         2 478     2 589

The statistics are taken from Google’s Android Play Store. I would love to include iTunes statistics, but it seems they are not provided via the web page. If you know how to gather them please drop me a note and I’ll include them here.

These numbers just help to show how far an application has spread – they do not say anything about its quality. For example, Threema is not Open Source and thus not a real alternative. So, if you want to know more details about the various options, please read appropriate reviews like the one from MissingM.

First look at cockpit, a web based server management interface [Update]

Only recently the Cockpit project was launched, aiming at providing a web based management interface for various servers. It already leaves an interesting impression for simple management tasks – and the design is actually well done.

I just recently came across the Cockpit project, which is only three months old. The mission statement is clear:

Cockpit is a server manager that makes it easy to administer your GNU/Linux servers via a web browser.

The web page also states three aims: a beginner-friendly interface, multi-server management – and no interference in mixed usage of web interface and shell. Especially the last point caught my attention: many other web based solutions introduce their own magic, thus making it sometimes tricky to co-administrate the system manually via the shell. The listed objectives also make clear that Cockpit does not try to replace tools that go much deeper into the configuration of servers, like Webmin, which for example offers modules to configure Apache servers in quite some detail. Cockpit tries to simply administrate the server, not the applications. I must admit that I would always do such an application configuration manually anyway…

The installation of Cockpit is a bit bumpy: besides the requirement of tools like systemd, which limits the usage to only very recent distributions (excluding Ubuntu, I guess), there are no packages yet, so some manual steps are required. A post at unshut.me highlights the necessary steps for Fedora, which I followed: it includes installing dependencies, setting firewall rules, etc. – and in the end it just works. But please note, in case you want to give it a try: it is not ready for production. Not at all. Use virtual machines!

What I did see after the installation was actually rather appealing: a clean, yet modern web interface offering the most important and simple tasks a sysadmin might need in a daily routine: quickly showing the current health state, providing logs, starting and stopping services, creating new users, switching between servers, etc. And: there is even a working rescue console!

And wherever you click you quickly see what the foundation of Cockpit is: systemd. The log viewer shows systemd journal logs, services are displayed as seen and managed by systemd, and so on. That is the reason why one goal – no interference between shell and web interface – can be reached rather easily: the web interface communicates with systemd, just like an administrator on such a machine would do. <Update> Speaking of which: if you want to get an idea of *how* Cockpit communicates with its components, have a look at their transport graphic. </Update> Systemd, by the way, also explains why Cockpit is currently developed on Fedora: it ships with fully activated systemd.
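
To illustrate, the shell equivalents of what the web interface shows are just the usual systemd commands – sshd.service serves as an arbitrary example unit here:

$ systemctl status sshd.service     # the service state as shown in the UI
$ journalctl -u sshd.service        # the entries the log viewer displays
$ systemctl restart sshd.service    # what a restart click boils down to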

But back to Cockpit itself: some people might note that running a web server on a machine which is not meant to provide web pages is a security issue. And they are right. Each additional service on a server is a potential threat. But also keep in mind that many simple server installations already have an additional web server anyway, for example to show Munin statistics. So as always you have to carefully balance the pros of usable system management against the cons of an additional service and a web-reachable system console…

To summarize: the interface is slick and easy to use; for simple server setups it could come in handy as a management tool, for example for beginners, accessible from the internal network only. A downside currently is the already mentioned limit on distributions: as far as I can tell, only Fedora 18 and 20 are supported yet. But the project has just begun and will most certainly pick up more support in the near future, as long as the foundation (systemd) is properly supported in the distribution of your choice. And in the meantime Cockpit might be an extra bonus for people testing the coming Fedora Server. 😉

Last but not least, in case you wonder how server management looks with systemd, Cockpit can give you a first impression: it uses systemd and almost nothing else for exactly that.

[Howto] Share Ethernet via Wifi with NetworkManager in KDE

I recently took part in a training at a place which had poor cellular reception, no wifi, and only one single ethernet connection. Thus we had to share the ethernet via wifi. I tried to do just that with my laptop via NetworkManager – and it worked out of the box.

The basic situation is rather common: you have one single network connection and want to share it with other people or devices via wifi. If you want to do that manually you have to set up the wifi network on your own, including encryption, bring up a DHCP server, configure routing and NAT, and so on. That can take quite some time, and it is nothing you want to do during precious training hours.

Thus I simply tried to bring up a shared wifi via the NetworkManager applet in KDE:
Share-Wireless
After providing an SSID and configuring some security credentials the process was already done, and I was notified that the network was set up and ready. It was also shown in the plasma applet alongside the ethernet connection:
Plasma-Connection-Established
Plasma-Applet-Conncetions

And that was it: everyone was able to connect to the network without any problems – and it didn't even take me a minute to bring it all up. Since I know how much trouble it can be to set up such a connection manually, I was really impressed!
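
For the record, the same shared connection can also be created on the command line. A sketch with a reasonably recent nmcli – interface name, connection name, SSID and password are placeholders:

$ nmcli connection add type wifi ifname wlp3s0 con-name hotspot \
    ssid myhotspot 802-11-wireless.mode ap ipv4.method shared \
    wifi-sec.key-mgmt wpa-psk wifi-sec.psk "mypassword"
$ nmcli connection up hotspot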

In case you want to give it a try, make sure that your wifi hardware – and thus the appropriate driver – does support Access Point (AP) mode, which is needed to bring up the wifi. If it says “for some devices only” you have no choice but to give it a try.
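
Whether the driver supports AP mode can be checked with iw; the output will of course vary with the hardware:

$ iw list | grep -A 8 "Supported interface modes"
        Supported interface modes:
                 * IBSS
                 * managed
                 * AP
                 * AP/VLAN
                 * monitor
                 * P2P-client
                 * P2P-GO

If “* AP” shows up in that list, bringing up an access point should work.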

By the way, in case you wonder about DNS and DHCP: both are handled by dnsmasq running as a local server. The DNS queries are forwarded to the DNS servers you got via DHCP from the ethernet connection (or, presumably, the ones you configured in NetworkManager).
However, I was not able to find any temporary configuration in /run or /var/ which showed the actual DNS servers – I had to call nm-tool to figure that one out:

$ nm-tool
- Device: eth0  [Standard-Ethernet] --------------------------------------------
[...]

  IPv4 Settings:
    Address:         192.168.3.27
    Prefix:          24 (255.255.255.0)
    Gateway:         192.168.3.1

    DNS:             192.168.2.4
    DNS:             192.168.2.3

If you know of any other way to find out this information – or, even better, simply the entire configuration of dnsmasq for that purpose – please let me know =)
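
One place that at least reveals the runtime parameters is the process list, since NetworkManager passes most of the dnsmasq configuration as command line arguments:

$ ps aux | grep dnsmasq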

Besides, while the Plasma applet gave me the option to shut down the shared wifi network, I wasn't able to bring it up again. There simply is no option in the network overview to fire up such a network again, thus I filed a bug report.

But, besides these two smaller issues, the plasma-nm applet and thus NetworkManager did a great job of making network sharing very easy.