Ansible package moved from EPEL to extras

A few days ago the Ansible package was removed from EPEL, and many asked why that happened. The background is that Ansible is now provided in certain Red Hat channels.

What happened?

In the past (pre-2017-10) most people who were on RHEL or CentOS or similar RHEL-based systems used to install Ansible from the EPEL repository. This way the package was updated regularly and it was ensured that it met the quite high packaging standards of the EPEL project.

However, a few days ago someone noticed that the EPEL repositories no longer contain an Ansible rpm package:

I'm running RHEL 7.3, and have installed the latest epel-release-latest-7.noarch.rpm. However, I'm unable to install ansible from this repo.

This caused some confusion and questions about the reasons behind that move.

EPEL repository policy

To better understand what happened it is important to understand EPEL’s package policy:

EPEL strives to never replace or interfere with packages shipped by Enterprise Linux.

While the idea of EPEL is to provide cool additional packages for RHEL, they will never replace anything that is shipped.

Change at Red Hat Enterprise Linux

That philosophy regularly requires the EPEL project to remove packages: each time RHEL adds a package, EPEL needs to check whether it ships the same package – and if so, remove it.

And a few weeks ago exactly that happened: Ansible was included in RHEL's Extras repository.

The reason behind that move is that the newest incarnation of RHEL now comes along with so-called system roles – which require Ansible to execute them.

But where to get it now?

Ansible is now directly available to RHEL users as mentioned above. Also, CentOS picked up Ansible in their Extras repository, and there are plenty of other ways available.

The only case where something actually changes for people is when the EPEL repository is activated – but the Extras repository is not.
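
In that case the Extras repository needs to be enabled to keep receiving Ansible updates. On RHEL 7 this can be done via subscription-manager – note that the repository id below is the usual one for RHEL 7 servers, but it may differ depending on your product and subscription:

$ sudo subscription-manager repos --enable rhel-7-server-extras-rpms
$ sudo yum install ansible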


[Howto] Adapting Ansible Galaxy roles for Solaris

It is pretty easy to manage Solaris with Ansible. However, the Ansible roles available at Ansible Galaxy usually target Linux-based OSes only. Luckily, adapting them is rather simple.

Background

As mentioned earlier, Solaris machines can be managed via Ansible pretty well: it works out of the box, and many already existing modules are incredibly helpful in managing Solaris installations.

At the same time, the Ansible Best Practices guide strongly recommends using roles to organize your IT with Ansible. Ansible Galaxy is a central repository for various roles written by the community, and many of them are ready to be used by the admin in need.
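
Pulling a role from Galaxy is a single command, for example:

$ ansible-galaxy install geerlingguy.apache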

However, Ansible Galaxy only recently added support for Solaris. There are currently hardly any roles with Solaris platform support available.

Luckily expanding existing Ansible roles towards Solaris is not that hard.

Example: Apache role

For example, the Apache role from geerlingguy is one of the highest rated roles on Ansible Galaxy. It installs Apache, starts the service, has support for vhosts and custom ports, and is above all pretty well documented. Yet there is no Solaris support right now – although geerlingguy just accepted a pull request, so it won't be long until the new version surfaces at Ansible Galaxy.

The best way to adapt a given role for another OS is to extend the current role for an additional OS – in contrast to deleting the original OS support and replacing it with new, again OS-specific configuration. This keeps the role re-usable on other OSes and enables the community to maintain and improve a shared, common role.
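
Multi-OS roles typically dispatch on Ansible's ansible_os_family fact to load OS-specific variables and task files. A minimal sketch of that common pattern (not a verbatim quote from the role):

- name: Include OS-specific variables.
  include_vars: "{{ ansible_os_family }}.yml"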

With a bit of knowledge about how services are started and stopped on Linux as well as on Solaris, one major difference quickly comes up: on Linux the name of the controlled service is usually exactly the same as the name of the binary behind the service. The same name string is also part of the path to the program's files and, for example, to the configuration files. On Solaris that is often not the case!

So the best approach is to check whether the given role starts or stops the service at any point, whether a variable is used there, and whether this variable is also used elsewhere – for example to create a path name or to identify a binary.
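
On Solaris 11, services are managed by SMF, so the actual service name can be verified with svcs, while the binary usually keeps its upstream name – for example (output shortened, the exact FMRI may vary with your installation):

$ svcs -a | grep apache
disabled       10:01:42 svc:/network/http:apache24
$ ls /usr/apache2/2.4/bin/httpd
/usr/apache2/2.4/bin/httpd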

The given example indeed controls a service. Thus we add another variable, the service name:

tasks/main.yml
@@ -41,6 +41,6 @@
 - name: Ensure Apache has selected state and enabled on boot.
   service:
-    name: "{{ apache_daemon }}"
+    name: "{{ apache_service }}"
     state: "{{ apache_state }}"
     enabled: yes

Next, we need to add the new variable to the existing OS support:

vars/Debian.yml
@@ -1,4 +1,5 @@
 ---
+apache_service: apache2
 apache_daemon: apache2
 apache_daemon_path: /usr/sbin/
 apache_server_root: /etc/apache2
vars/RedHat.yml
@@ -1,4 +1,5 @@
 ---
+apache_service: httpd
 apache_daemon: httpd
 apache_daemon_path: /usr/sbin/
 apache_server_root: /etc/httpd

Now would be a good time to test the role – it should still work on the supported platforms.
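
A quick test can be done with a minimal playbook applying the role – the host group name here is just an example:

---
- hosts: webservers
  sudo: yes
  roles:
    - geerlingguy.apache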

The next step is to add the necessary variables for Solaris. The best way is to copy an already existing variable file and to modify it afterwards to fit Solaris:

vars/Solaris.yml
@@ -0,0 +1,19 @@
+---
+apache_service: apache24
+apache_daemon: httpd
+apache_daemon_path: /usr/apache2/2.4/bin/
+apache_server_root: /etc/apache2/2.4/
+apache_conf_path: /etc/apache2/2.4/conf.d
+
+apache_vhosts_version: "2.2"
+
+__apache_packages:
+  - web/server/apache-24
+  - web/server/apache-24/module/apache-ssl
+  - web/server/apache-24/module/apache-security
+
+apache_ports_configuration_items:
+  - regexp: "^Listen "
+    line: "Listen {{ apache_listen_port }}"
+  - regexp: "^#?NameVirtualHost "
+    line: "NameVirtualHost *:{{ apache_listen_port }}"

This specific role provides two task files per supported platform: one to set the platform up and one to configure it. The easiest way to create these two files for a new platform is again to copy existing ones and to modify them afterwards according to the specifics of Solaris.

The configuration looks like:

tasks/configure-Solaris.yml
@@ -0,0 +1,19 @@
+---
+- name: Configure Apache.
+  lineinfile:
+    dest: "{{ apache_server_root }}/conf/{{ apache_daemon }}.conf"
+    regexp: "{{ item.regexp }}"
+    line: "{{ item.line }}"
+    state: present
+  with_items: apache_ports_configuration_items
+  notify: restart apache
+
+- name: Add apache vhosts configuration.
+  template:
+    src: "vhosts-{{ apache_vhosts_version }}.conf.j2"
+    dest: "{{ apache_conf_path }}/{{ apache_vhosts_filename }}"
+    owner: root
+    group: root
+    mode: 0644
+  notify: restart apache
+  when: apache_create_vhosts

The setup thus can look like:

tasks/setup-Solaris.yml
@@ -0,0 +1,6 @@
+---
+- name: Ensure Apache is installed.
+  pkg5:
+    name: "{{ item }}"
+    state: installed
+  with_items: apache_packages

Last but not least, the platform support must be activated in the tasks/main.yml file:

tasks/main.yml
@@ -15,6 +15,9 @@
 - include: setup-Debian.yml
   when: ansible_os_family == 'Debian'
 
+- include: setup-Solaris.yml
+  when: ansible_os_family == 'Solaris'
+
 # Figure out what version of Apache is installed.
 - name: Get installed version of Apache.
   shell: "{{ apache_daemon_path }}{{ apache_daemon }} -v"

When you now run the role on a Solaris machine, it should install Apache right away.
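
For example – inventory and playbook names are placeholders:

$ ansible-playbook -i hosts apache.yml --limit solaris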

Conclusion

Adapting a given role from Ansible Galaxy for Solaris is rather easy – if the role is already prepared for multi-OS support. In such cases adding another OS is a trivial task.

If the role is not prepared for multi-OS support, try to get in contact with the developers; they often appreciate feedback and multi-OS support pull requests.

[Howto] Solaris 11 on KVM

Recently I had to test a few things on Solaris 11 and wondered how well it works virtualized with KVM. It does – with a few tweaks.

Preface

Testing different versions of operating systems is easy these days thanks to virtualization. However, I'm mainly used to Linux variants and hardly ever install any other kind of UNIX-based OS. Thus I was curious whether an installation of Solaris 11 on KVM/libvirt works.

For the test I actually used virt-manager since it does provide neat defaults during the VM setup. But the same comments and lessons learned are true for the command line tool as well.

Setting up the VM

virt-manager usually does not provide Solaris as an operating system type by default in the VM setup dialog. You first have to click on “OS Type”, “Show all OS options” as shown here:
virt-manager Solaris guest picker

Note that a Solaris 11 guest should have at least 2 GB of RAM, otherwise the installation as well as booting might take very long or run into problems.
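
The rough command line equivalent would look like the following – ISO file name and os-variant are placeholders that depend on your setup and libosinfo version:

$ virt-install --name solaris11 --ram 2048 --vcpus 2 \
      --disk size=20 --cdrom sol-11-text-x86.iso \
      --os-variant solaris11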

The installation runs through – although quite a few errors clutter the screen (see below).

Errors and problems

As soon as the machine is started several error messages are shown:

WARNING: /pci@0,0/pci1af4,1100@6,1 (uhci1): No SOF interrupts have been received, this USB UHCI host controller is unusable
WARNING: /pci@0,0/pci1af4,1100@6,2 (uhci2): No SOF interrupts have been received, this USB UHCI host controller is unusable

This shows that something is wrong with the interrupts and thus with the "hardware" of the machine – or at least with the way the guest machine discovers the hardware.

Additionally, even if DHCP is configured, the machine is unable to obtain the networking configuration. A fixed IP address and gateway do not help here, either. The host system might even report that it provides DHCP data, but the guest system keeps requesting it:

Dez 23 11:11:05 liquidat dnsmasq-dhcp[13997]: DHCPDISCOVER(virbr0) 52:54:00:31:31:4b
Dez 23 11:11:05 liquidat dnsmasq-dhcp[13997]: DHCPOFFER(virbr0) 192.168.122.205 52:54:00:31:31:4b
Dez 23 11:11:09 liquidat dnsmasq-dhcp[13997]: DHCPDISCOVER(virbr0) 52:54:00:31:31:4b
Dez 23 11:11:09 liquidat dnsmasq-dhcp[13997]: DHCPOFFER(virbr0) 192.168.122.205 52:54:00:31:31:4b
Dez 23 11:11:17 liquidat dnsmasq-dhcp[13997]: DHCPDISCOVER(virbr0) 52:54:00:31:31:4b
Dez 23 11:11:17 liquidat dnsmasq-dhcp[13997]: DHCPOFFER(virbr0) 192.168.122.205 52:54:00:31:31:4b
...

Also, when the machine is shutting down and ready to be powered off, the CPU usage spikes to 100 %.

The solution: APIC

The solution for the "hardware" problems mentioned above and also for the networking trouble is to deactivate an APIC feature inside the VM: x2APIC, Intel's programmable interrupt controller. Some more details about the problem can be found in the Red Hat Bugzilla entry #1040500.

To apply the fix, the virtual machine definition needs to be edited to disable the feature: the XML definition can be edited with the command sudo virsh edit with the machine name as command line option, and the change needs to be made in the cpu section as shown below. Make sure the VM is stopped before the changes are made.

$ sudo virsh edit krypton
...
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Broadwell</model>
    <feature policy='disable' name='x2apic'/>
  </cpu>
$ sudo virsh start krypton

After this change Solaris does not report any interrupt problems anymore and DHCP works without flaws. Note however that the CPU still spikes at power-off. If anyone knows a solution to that problem I would be happy to hear about it and add it to this post.

[Howto] Managing Solaris 11 via Ansible

Ansible can be used to manage various kinds of server operating systems – among them Solaris 11.

Managing Solaris 11 servers via Ansible from my Fedora machine is actually less exciting than I previously thought. Since the number of blog articles covering that is limited, I expected it to be a nice challenge.

However, the opposite is the case: it just works. On a fresh Solaris installation, out of the box. There is not even a need for additional configuration or additional software. Of course, ssh access must be available – but the same is true for Linux machines as well. It's almost boring 😉

Here is an example of installing and removing software on Solaris 11, using IPS, the new package system introduced with this version:

$ ansible solaris -s -m pkg5 -a "name=web/server/apache-24"
$ ansible solaris -s -m pkg5 -a "state=absent name=/text/patchutils"

While Ansible uses a special module, pkg5, to manage Solaris packages, managing services is even easier because the usual service module works for Linux as well as Solaris machines:

$ ansible solaris -s -m service -a "name=apache24 state=started"
$ ansible solaris -s -m service -a "name=apache24 state=stopped"

So far so good – of course things get really interesting if playbooks can perform tasks on Solaris and Linux machines at the same time. For example, imagine Apache needs to be deployed and started on Linux as well as on Solaris. Here conditions come in handy:

---
- name: install and start Apache
  hosts: clients
  vars_files:
    - "vars/{{ ansible_os_family }}.yml"
  sudo: yes

  tasks:
    - name: install Apache on Solaris
      pkg5: name=web/server/apache-24
      when: ansible_os_family == "Solaris"

    - name: install Apache on RHEL
      yum:  name=httpd
      when: ansible_os_family == "RedHat"

    - name: start Apache
      service: name={{ apache }} state=started

Since the service name is not the same on different operating systems (or even on different Linux distributions), it is kept in a variable defined in a family-specific YAML file, for example:
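
These vars files can be as simple as the following – the file names match the ansible_os_family values used above:

vars/RedHat.yml:
---
apache: httpd

vars/Solaris.yml:
---
apache: apache24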

It’s also interesting to note that the same Ansible module works different on the different operating systems: when a service is ordered to be stopped, but is not even available because the corresponding package and thus service definition is not even installed, the return code on Linux is OK, while on Solaris an error is returned:

TASK: [stop Apache on Solaris] ************************************************
failed: [argon] => {"failed": true}
msg: svcs: Pattern 'apache24' doesn't match any instances

FATAL: all hosts have already failed -- aborting

It would be nice to catch the error; however, as far as I know, error handling in Ansible can only specify when to fail, not which messages/errors should be ignored.
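
The closest workaround I am aware of is to register the result and whitelist the specific message via failed_when (ignore_errors: yes would swallow all errors, which is usually too coarse) – a hedged sketch, not verified on Solaris:

- name: stop Apache on Solaris
  service: name=apache24 state=stopped
  register: result
  failed_when: result|failed and "doesn't match any instances" not in result.msg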

But besides this problem managing Solaris via Ansible works smoothly for me. And it even works on Ansible Tower, of course:

(Screenshot: Ansible Tower managing a Solaris machine)

I haven’t tried to install Ansible on Solaris itself, but since packages are available that shouldn’t be much of an issue.

So in case you have a mixed environment including Solaris and Linux machines (Red Hat, Fedora, Ubuntu, Debian, Suse, you name it) I can only recommend starting to use Ansible as soon as possible. It simply works and can substantially ease the pain of day-to-day tasks.

Ansible Galaxy just added Solaris platform support

While Ansible is mostly used in Linux environments, it can also be used to manage other UNIX variants like Solaris. Now the central hub for Ansible roles, Ansible Galaxy, has added support for the Solaris platform as well.

Ansible is a handy tool to manage multiple servers. Besides the usual Linux distributions it features support for BSD variants, Solaris and even Windows. However, the central hub to share Ansible roles, Ansible Galaxy, was still missing Solaris support until now: support was added for version 10 as well as version 11.

That can already be seen when a role template is generated with the Galaxy tools:

$ ansible-galaxy init acme --force
- acme was created successfully
$ grep -B 1 -A 8 Solaris acme/meta/main.yml
  #  - any
  #- name: Solaris
  #  versions:
  #  - all
  #  - 10
  #  - 11.0
  #  - 11.1
  #  - 11.2
  #  - 11.3
  #- name: Fedora
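
To actually declare Solaris support in a role, the relevant lines just need to be uncommented in meta/main.yml – for example (version list shortened):

galaxy_info:
  platforms:
    - name: Solaris
      versions:
        - 11.2
        - 11.3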

This opens up the possibility to provide Ansible roles including Solaris support at a central place. Right now I already have a pull request open to enable Solaris support on a very powerful Apache role. In a following blog post I'll describe the (surprisingly few) steps which were necessary to adjust the role to support Solaris.

It is great that Ansible Galaxy adds more and more platforms and thus broadens the usage of the central hub to cover more use cases. I'm looking forward to seeing more Solaris roles in the Galaxy. If you need help porting a role, don't hesitate to contact me.

The support is so far still in internal testing and will be made final when the corresponding GitHub issue is closed.

[Howto] Upgrading CentOS 6 to CentOS 7

CentOS 7 is out. With systemd, Docker integration and many more fixes and new features I wanted to see if the new remote upgrade possibility already works. I've already posted the procedure to Vexxhost's blog and wanted to share it here as well. But beware: don't try this on production servers!

CentOS 7 was released only a few weeks after Red Hat Enterprise Linux 7, including the same exciting features RHEL ships. Besides the long-awaited systemd and the currently much-discussed Docker, this release also features the possibility to perform upgrades from version 6 to version 7 automatically, without the need for installation images. And although the upgrade still requires a reboot and thus is not a live upgrade as such, it comes in very handy for servers which can only be reached remotely.

Red Hat has already released and documented the necessary tools. The CentOS team didn’t have time yet to import, test and rebuild the tools but the developers are already on it – and they provide untested binaries.

Please note: since the packages are not tested yet you should not, by any means, try these on anything other than spare test machines which you can easily re-deploy and which do not hold any valuable data. Do not try this on your production machines!

But if you want to get a first idea of how the tools basically work, I recommend setting up a simple virtual machine with a fully updated CentOS 6 and as few packages as possible. Next, install the rpms from the CentOS repository mentioned above. Among these is the Preupgrade Assistant, which can be run on a system without doing any harm: preupg just analyses the system and gives hints on what to look out for during an upgrade, without performing any tasks. Since I only tested systems with hardly any services installed I got no real results from preupg. Even a test run on a system with more services installed brought the same output (only showing some examples of the dozens and dozens of lines):

$ sudo preupg
Preupg tool doesn't do the actual upgrade.
Please ensure you have backed up your system and/or data in the event of a failed upgrade
 that would require a full re-install of the system from installation media.
Do you want to continue? y/n
y
Gathering logs used by preupgrade assistant:
All installed packages : 01/10 ...finished (time 00:00s)
All changed files      : 02/10 ...finished (time 00:48s)
Changed config files   : 03/10 ...finished (time 00:00s)
All users              : 04/10 ...finished (time 00:00s)
...
042/100 ...done    (samba shared directories selinux)
043/100 ...done    (CUPS Browsing/BrowsePoll configuration)
044/100 ...done    (CVS Package Split)
...
|samba shared directories selinux          |notapplicable  |
|CUPS Browsing/BrowsePoll configuration    |notapplicable  |
|CVS Package Split                         |notapplicable  |
...

As mentioned above the Preupgrade Assistant only helps evaluate what problems might come up during the upgrade – the real step must be done with the tool redhat-upgrade-tool-cli. For that to work, the CentOS 7 key must be imported first:

$ sudo rpm --import http://isoredirect.centos.org/centos/7/os/x86_64/RPM-GPG-KEY-CentOS-7

Afterwards, the actual upgrade tool can be called. As options it takes the future distribution version and a URL to pull the data from. Additionally I had to add the option --force since the tool complained that preupg was not run previously – although it was. As soon as the upgrade tool is called, it starts downloading all necessary information, packages and images, and afterwards asks for a reboot – the reboot does not happen automatically.

$ sudo /usr/bin/redhat-upgrade-tool-cli --force --network 7 --instrepo=http://mirror.centos.org/centos/7/os/x86_64
setting up repos...
.treeinfo                                                                                                                                            | 1.1 kB     00:00     
getting boot images...

After the reboot the machine updates itself with the help of the downloaded packages. Note that this phase does take some time; depending on the speed of the machine, expect minutes, not seconds. However, if everything turns out right, the next login will be into a CentOS 7 machine:

$ cat /etc/os-release 
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

Concluding, it can be said that the upgrade tool worked quite nicely. While it is not comparable to a real live upgrade, it offers a decent way to upgrade remote servers. I've tested it with a clean VM and also with a bare-metal, remote server, and it worked surprisingly well. The analysis tool unfortunately did not perform as I expected, but that might be due to the untested state – or I was not using it properly. I'm looking forward to seeing how this develops and improves over time.

But, again, and as mentioned before – don’t try this on your own prod servers.

First look at cockpit, a web based server management interface [Update]

Only recently the Cockpit project was launched, aiming at providing a web-based management interface for various servers. It already leaves an interesting impression for simple management tasks – and the design is actually well done.

I just recently came across the only three-month-old Cockpit project. The mission statement is clear:

Cockpit is a server manager that makes it easy to administer your GNU/Linux servers via a web browser.

The web page also states three aims: a beginner-friendly interface, multi-server management – and no interference in mixed usage of web interface and shell. Especially the last point caught my attention: many other web-based solutions introduce their own magic, thus making it sometimes tricky to co-administrate the system manually via the shell. The listed objectives also make clear that Cockpit does not try to replace tools that go much deeper into the configuration of servers, like Webmin, which for example offers modules to configure Apache servers in quite a detailed manner. Cockpit tries to simply administrate the server, not the applications. I must admit that I would always do such an application configuration manually anyway…

The installation of Cockpit is a bit bumpy: besides the requirement of tools like systemd, which limits usage to only very recent distributions (excluding Ubuntu, I guess), there are no packages yet, so some manual steps are required. A post at unshut.me highlights the necessary steps for Fedora, which I followed: it includes installing dependencies, setting firewall rules, etc. – and in the end it just works. But please note, in case you want to give it a try: it is not ready for production. Not at all. Use virtual machines!
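
For reference, the firewall part boils down to opening Cockpit's web port, 9090 by default – on a firewalld-based Fedora that would be something like:

$ sudo firewall-cmd --add-port=9090/tcp
$ sudo firewall-cmd --permanent --add-port=9090/tcp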

What I did see after the installation was actually rather appealing: a clean yet modern web interface offering the most important and simple tasks a sysadmin might need in a daily routine – quickly showing the current health state, providing logs, starting and stopping services, creating new users, switching between servers, etc. And: there is even a working rescue console!

And wherever you click you quickly see what the foundation of Cockpit is: systemd. The log viewer shows systemd journal logs, services are displayed as seen and managed by systemd, and so on. That is the reason why one goal – no interference between shell and web interface – can be reached rather easily: the web interface communicates with systemd, just like an administrator on such a machine would do. <Update> Speaking of which: if you want to get an idea of *how* Cockpit communicates with its components, have a look at their transport graphic. </Update> Systemd, by the way, also explains why Cockpit is currently developed on Fedora: it ships with fully activated systemd.

But back to Cockpit itself: some people might note that running a web server on a machine which is not meant to serve web pages is a security issue. And they are right. Each additional service on a server is a potential threat. But also keep in mind that many simple server installations already run an additional web server anyway, for example to show Munin statistics. So as always, you have to carefully balance the pros of usable system management against the cons of an additional service and a web-reachable system console…

To summarize: the interface is slick and easy to use; for simple server setups it could come in handy as a server management tool, for example for beginners, accessible from the internal network only. A downside currently is the already mentioned limit on distributions: as far as I can tell, only Fedora 18 and 20 are supported so far. But the project has just begun, and will most certainly pick up more support in the near future, as long as the foundations (systemd) are properly supported in the distribution of your choice. And in the meantime Cockpit might be an extra bonus for people testing the coming Fedora Server. 😉

Last but not least, in case you wonder what server management looks like with systemd, Cockpit can give you a first impression: it uses systemd and almost nothing else for exactly that.