[Howto] Get a Python virtual environment running on RHEL 8

RHEL 8 comes with a new way of installing and handling Python. So how do you use it properly, especially when multiple versions are installed? Read on to learn how to properly set up a virtual environment nevertheless.

Red Hat Enterprise Linux 8 was released in May this year – and comes with a lot of changes. Think of a really modern OS here. Among those changes is also that Python is, well, different: it is included, for sure. But at the same time, it isn't – at least not in the way it used to be.

The important piece is that when you work with Python in development environments – or, for example, when you are dealing with Ansible – it makes sense to run everything in a Python virtual environment.

Here is how this can be best done in RHEL 8:

First, install the Python 3.6 appstream:

$ sudo yum install -y python36
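
To double check that everything is in place, you can verify that the application stream and the interpreter are available. A quick sketch – the exact output will of course vary on your machine:

$ yum module list python36
$ command -v python3.6
/usr/bin/python3.6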

Afterwards, set up a Python virtual environment:

$ python3.6 -m venv myvirtual_venv

And that’s it already. Activate it with:

$ source myvirtual_venv/bin/activate
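
Inside the activated environment, python and pip point to the environment's own copies, so anything you install there stays out of the system Python. A minimal sketch of typical usage – installing Ansible is just an example:

(myvirtual_venv) $ pip install --upgrade pip
(myvirtual_venv) $ pip install ansible
(myvirtual_venv) $ python -c "import ansible; print(ansible.__version__)"
(myvirtual_venv) $ deactivate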

In case you are dealing with SELinux bindings, it might make sense to link those into your virtual environment:

$ cd myvirtual_venv/lib/python3.6/site-packages/
$ ln -s /usr/lib64/python3.6/site-packages/selinux
$ ln -s /usr/lib64/python3.6/site-packages/_selinux.cpython-36m-x86_64-linux-gnu.so
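
To verify that the linked bindings are actually picked up inside the environment, a quick import test should do – a sketch, with is_selinux_enabled() coming from the libselinux Python bindings; the output depends on whether SELinux is enabled on your system:

(myvirtual_venv) $ python -c "import selinux; print(selinux.is_selinux_enabled())"
1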

When different versions of Python are offered via application streams in the future, make sure to link the SELinux bindings that match the Python version of your virtual environment.

Of debugging Ansible Tower and underlying cloud images

Recently I was experimenting with Tower’s isolated nodes feature – but somehow it did not work in my environment. Debugging told me a lot about Ansible Tower – and also why you should not trust arbitrary cloud images.

Background – Isolated Nodes

Ansible Tower has a nice feature called “isolated nodes”. Those are dedicated Tower instances which can manage nodes in separated environments – basically an Ansible Tower Proxy.

An Isolated Node is an Ansible Tower node that contains a small piece of software for running playbooks locally to manage a set of infrastructure. It can be deployed behind a firewall/VPC or in a remote datacenter, with only SSH access available. When a job is run that targets things managed by the isolated node, the job and its environment will be pushed to the isolated node over SSH, where it will run as normal.

Ansible Tower Feature Spotlight: Instance Groups and Isolated Nodes

Isolated nodes are especially handy when you set up your automation in security-sensitive environments. Think of DMZs here, of network separation and so on.

I was fooling around with a clustered Tower installation on RHEL 7 VMs in a cloud environment when I ran into trouble though.

My problem – Isolated node unavailable

Isolated nodes – like instance groups – have a status inside Tower: if things are problematic, they are marked as unavailable. And this is what happened with my instance isonode.remote.example.com running in my lab environment:

Ansible Tower showing an instance node as unavailable

I tried to turn it "off" and "on" again with the button in the control interface. That made the node available, and it was even able to execute jobs – but it quickly became unavailable again soon after.

Analysis

So what happened? The Tower logs showed a Python error:

# tail -f /var/log/tower/tower.log
fatal: [isonode.remote.example.com]: FAILED! => {"changed": false,
"module_stderr": "Shared connection to isonode.remote.example.com
closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n
File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1552400585.04
-60203645751230/AnsiballZ_awx_capacity.py\", line 113, in <module>\r\n
_ansiballz_main()\r\n  File \"/var/lib/awx/.ansible/tmp/ansible-tmp
-1552400585.04-60203645751230/AnsiballZ_awx_capacity.py\", line 105, in
_ansiballz_main\r\n    invoke_module(zipped_mod, temp_path,
ANSIBALLZ_PARAMS)\r\n  File \"/var/lib/awx/.ansible/tmp/ansible-tmp
-1552400585.04-60203645751230/AnsiballZ_awx_capacity.py\", line 48, in
invoke_module\r\n    imp.load_module('__main__', mod, module, MOD_DESC)\r\n
File \"/tmp/ansible_awx_capacity_payload_6p5kHp/__main__.py\", line 74, in
<module>\r\n  File \"/tmp/ansible_awx_capacity_payload_6p5kHp/__main__.py\",
line 60, in main\r\n  File
\"/tmp/ansible_awx_capacity_payload_6p5kHp/__main__.py\", line 27, in
get_cpu_capacity\r\nAttributeError: 'module' object has no attribute
'cpu_count'\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact
error", "rc": 1}

PLAY RECAP *********************************************************************
isonode.remote.example.com : ok=0    changed=0    unreachable=0    failed=1  

Apparently a Python function was missing. If we check the code we see that indeed in line 27 of file awx_capacity.py the function psutil.cpu_count() is called:

def get_cpu_capacity():
    env_forkcpu = os.getenv('SYSTEM_TASK_FORKS_CPU', None)
    cpu = psutil.cpu_count()

Support for this function was added in version 2.0 of psutil:

2014-03-10
Enhancements
424: [Windows] installer for Python 3.X 64 bit.
427: number of logical and physical CPUs (psutil.cpu_count()).

psutil history

Note the date here: 2014-03-10 – pretty old! I checked the version of the installed package, and indeed the version was pre-2.0:

$ rpm -q --queryformat '%{VERSION}\n' python-psutil
1.2.1
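
If you want to reproduce such a check yourself, a one-liner against the installed library is enough – a small sketch; the output shown is what I would expect on this machine:

$ python -c "import psutil; print(psutil.__version__); print(hasattr(psutil, 'cpu_count'))"
1.2.1
False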

To be really sure and also to ensure that there was no weird function backporting, I checked the function call directly on the Tower machine:

# python
Python 2.7.5 (default, Sep 12 2018, 05:31:16) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import inspect
>>> import psutil as module
>>> functions = inspect.getmembers(module, inspect.isfunction)
>>> functions
[('_assert_pid_not_reused', <function _assert_pid_not_reused at
0x7f9eb10a8d70>), ('_deprecated', <function deprecated at 0x7f9eb38ec320>),
('_wraps', <function wraps at 0x7f9eb414f848>), ('avail_phymem', <function
avail_phymem at 0x7f9eb0c32ed8>), ('avail_virtmem', <function avail_virtmem at
0x7f9eb0c36398>), ('cached_phymem', <function cached_phymem at
0x7f9eb10a86e0>), ('cpu_percent', <function cpu_percent at 0x7f9eb0c32320>),
('cpu_times', <function cpu_times at 0x7f9eb0c322a8>), ('cpu_times_percent',
<function cpu_times_percent at 0x7f9eb0c326e0>), ('disk_io_counters',
<function disk_io_counters at 0x7f9eb0c32938>), ('disk_partitions', <function
disk_partitions at 0x7f9eb0c328c0>), ('disk_usage', <function disk_usage at
0x7f9eb0c32848>), ('get_boot_time', <function get_boot_time at
0x7f9eb0c32a28>), ('get_pid_list', <function get_pid_list at 0x7f9eb0c4b410>),
('get_process_list', <function get_process_list at 0x7f9eb0c32c08>),
('get_users', <function get_users at 0x7f9eb0c32aa0>), ('namedtuple',
<function namedtuple at 0x7f9ebc84df50>), ('net_io_counters', <function
net_io_counters at 0x7f9eb0c329b0>), ('network_io_counters', <function
network_io_counters at 0x7f9eb0c36500>), ('phymem_buffers', <function
phymem_buffers at 0x7f9eb10a8848>), ('phymem_usage', <function phymem_usage at
0x7f9eb0c32cf8>), ('pid_exists', <function pid_exists at 0x7f9eb0c32140>),
('process_iter', <function process_iter at 0x7f9eb0c321b8>), ('swap_memory',
<function swap_memory at 0x7f9eb0c327d0>), ('test', <function test at
0x7f9eb0c32b18>), ('total_virtmem', <function total_virtmem at
0x7f9eb0c361b8>), ('used_phymem', <function used_phymem at 0x7f9eb0c36050>),
('used_virtmem', <function used_virtmem at 0x7f9eb0c362a8>), ('virtmem_usage',
<function virtmem_usage at 0x7f9eb0c32de8>), ('virtual_memory', <function
virtual_memory at 0x7f9eb0c32758>), ('wait_procs', <function wait_procs at
0x7f9eb0c32230>)]

Searching for a package origin

So how to solve this issue? My first idea was to get this working by porting the code in question over to the multiprocessing library:

# python
Python 2.7.5 (default, Sep 12 2018, 05:31:16) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import multiprocessing
>>> cpu = multiprocessing.cpu_count()
>>> cpu
4

But while I was filing a bug report I wondered why RHEL shipped such an ancient library. After all, RHEL 7 was released in June 2014, and psutil had cpu_count available since early 2014! And indeed, a quick search for the package via the Red Hat package search showed a weird result: python-psutil was never part of base RHEL 7! It was only shipped as part of some very, very old OpenStack channels:

access.redhat.com package search, results for python-psutil

Newer OpenStack channels in fact come along with newer versions of python-psutil.

So how did this outdated package end up on this RHEL 7 image? Why was it never updated?

The cloud image is to blame! The package was installed on it – most likely during the creation of the image: python-psutil is needed for OpenStack Heat, so I assume that these RHEL 7 images were once created via OpenStack and then used as the default image in this demo environment.

And after the initial creation of the image the Heat packages were forgotten. In the meantime the image was updated to newer RHEL versions, snapshots were created as new defaults and so on. But since the package in question was never part of the main RHEL repos, it was never changed or removed. It just stayed there. Waiting, apparently, for me 😉

Conclusion

This issue showed me how tricky cloud images can be. Think about your own cloud images: have you really checked all of them and verified that no package, no start-up script, no configuration was changed from the Linux distribution vendor's base setup?

With RPMs this is still manageable: you can track whether packages are installed which are not present in the existing channels. But did someone install something with pip? Or in any other way?
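
For the RPM side, yum itself can give a first hint, and pip can at least list what it installed – a rough sketch, assuming yum-utils is available for the yumdb command; it will not catch everything, but it is a start:

# list installed packages that are not available from any enabled repository
$ yum list extras
# show which repository an installed package came from
$ yumdb get from_repo python-psutil
# list everything installed via pip
$ pip list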

Take my case: an outdated version of a library was called instead of a much, much more recent one. If there had been a serious security issue with the library in the meantime, I would have been exposed even though my update management did not report any library to be updated.

I learned my lesson to be more critical with cloud images and to check them in more detail in the future to avoid nasty surprises in production. And I can only recommend that you do the same.

[Howto] Using Ansible to manage RHEL 5 systems

With the release of Ansible 2.4, we now require that managed nodes have a Python version of at least 2.6. Most notably, this leaves RHEL 5 users asking how to manage RHEL 5 systems in the future – since RHEL 5 only provides Python 2.4.

(I published this post originally at ansible.com/blog/ .)

Background

With the release of Ansible 2.4 in September 2017, we have moved to support Python 2.6 or higher on the managed nodes. This means previous support for Python-2.4 or Python-2.5 is no longer available:

Support for Python-2.4 and Python-2.5 on the managed system’s side was dropped. If you need to manage a system that ships with Python-2.4 or Python-2.5, you’ll need to install Python-2.6 or better on the managed system.

This was bound to happen at some point in time because Python 2.6 was released almost 10 years ago, and most systems in production these days are based upon 2.6 or a newer version. Furthermore, Python 3 is getting more and more traction, and in the long term we need to be able to support it. However, as the official Python documentation shows, code that runs on both Python 2.x and Python 3.x requires at least Python 2.6:

If you are able to skip Python 2.5 and older, then the required changes to your code should continue to look and feel like idiomatic Python code.

Thus the Ansible project had to make the change.

As a result, older Linux and UNIX releases only providing Python 2.4 are now faced with a challenge: How do I automate the management of my older environments that only provide Python-2.4?

We know organizations want to run their business critical applications for as long as possible, and this means running on older versions of Linux. We have seen this with Red Hat Enterprise Linux (RHEL) 5 customers who opted for the extended life cycle support until 2020 once the version reaches its end of life. However, RHEL 5 ships with Python-2.4, and thus users will see an error message even with the simplest Ansible 2.4 module:

$ ansible rhel5.qxyz.de -m ping  
rhel5.qxyz.de | FAILED! => {
    "changed": false, 
    "module_stderr": "Shared connection to rhel5.qxyz.de closed.\r\n", 
    "module_stdout": "Traceback (most recent call last):\r\n  File\"/home/rwolters/.ansible/tmp/ansible-tmp-1517240216.46-158969762588665/ping.py\", line 133, in ?\r\n    exitcode = invoke_module(module, zipped_mod, ANSIBALLZ_PARAMS)\r\n  File \"/home/rwolters/.ansible/tmp/ansible-tmp-1517240216.46-158969762588665/ping.py\", line 38, in invoke_module\r\n    (stdout, stderr) = p.communicate(json_params)\r\n  File \"/usr/lib64/python2.4/subprocess.py\", line 1050, in communicate\r\n    stdout, stderr = self._communicate_with_poll(input)\r\n  File \"/usr/lib64/python2.4/subprocess.py\", line 1113, in _communicate_with_poll\r\n    input_offset += os.write(fd, chunk)\r\nOSError: [Errno 32] Broken pipe\r\n", 
    "msg": "MODULE FAILURE", 
    "rc": 0
}

This post will show three different ways to solve these errors and ease the migration until your servers can be upgraded.

Solutions

1. Use Ansible 2.3

It is perfectly fine to use an older Ansible version. Some features and modules might be missing, but if you have to manage older Linux distributions and cannot install a newer Python version, using a slightly older Ansible version is a way to go. All old releases can be found at release.ansible.com/ansible.

As long as you are able to make the downgrade, and are only running the outdated Ansible version for a limited time, it is probably the easiest way to still automate your RHEL 5 machines.
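
Installing such an older release on the control node is straightforward, for example via pip – a sketch, with the version pin just being an example:

$ pip install --user 'ansible>=2.3,<2.4'
$ ~/.local/bin/ansible --version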

2. Upgrade to a newer Python version

If you cannot use Ansible 2.3 but need to use a newer Ansible version with Red Hat Enterprise Linux 5 – upgrade the Python version on the managed nodes! However, as shown in the example below, an updated Python version is usually installed to an alternative path so that system tools are not broken. And Ansible needs to know where this is.

For example, the EPEL project provides Python 2.6 packages for Red Hat Enterprise Linux in their archives and we will show how to install and use them as an example. Note though that Python 2.6 as well as the EPEL packages for Red Hat Enterprise Linux 5 both reached end of life already, so you are on your own when it comes to support.

To install the packages on a managed node, the appropriate key and EPEL release package need to be installed. Then the package python26 is available for installation. 

$ wget https://archives.fedoraproject.org/pub/archive/epel/5/x86_64/epel-release-5-4.noarch.rpm
$ wget https://archives.fedoraproject.org/pub/archive/epel/RPM-GPG-KEY-EPEL-5
$ sudo rpm --import RPM-GPG-KEY-EPEL-5
$ sudo yum install epel-release-5-4.noarch.rpm
$ sudo yum install python26

The package does not overwrite the Python binary, but installs the binary at an alternative path, /usr/bin/python2.6. So we need to tell Ansible to look at a different place for the Python interpreter, which can be done with the variable ansible_python_interpreter, for example directly in the inventory:

[old]  
rhel5.qxyz.de ansible_python_interpreter=/usr/bin/python2.6 

That way, the commands work again:

$ ansible rhel5.qxyz.de  -m ping      
rhel5.qxyz.de | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
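
If you do not want to touch the inventory, the same variable can also be set in group_vars or passed on the command line as an extra variable – an ad-hoc sketch:

$ ansible rhel5.qxyz.de -m ping -e ansible_python_interpreter=/usr/bin/python2.6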

3. Use the power of RAW

Last but not least, there is another way to deal with old Python 2.4-only systems – or systems with no Python at all. Due to the way Ansible is built, almost all modules (for Linux/Unix systems, anyway) require Python to run. However, there are two notable exceptions: the raw module and the script module. They both only send basic commands via the SSH connection, without invoking the underlying module subsystem.

That way, at least basic actions can be performed and managed via Ansible on legacy systems. Additionally, you can use the template module on the local control machine to create scripts dynamically on the fly before you execute them on the target:

---
- name: control remote legacy system
  hosts: legacy
  gather_facts: no
  vars_files:
  - script_vars.yml

  tasks:
  - name: create script on the fly to manage system
    template:
      src: manage_script.sh.j2
      dest: "/manage_script.sh”
    delegate_to: localhost
  - name: execute management script on target system
    script:
      manage_script.sh
  - name: execute raw command to upgrade system
    raw:
      yum upgrade -y
    become: yes 

One thing to note here: since Python is not working on the target system, we cannot collect facts and thus we must work with gather_facts: no.

As you see, that way we can include legacy systems in the automation, despite the fact that we are limited to a few modules to work with.
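
The same works ad hoc for quick one-off tasks – a sketch using the inventory group from the playbook above:

$ ansible legacy -m raw -a "cat /etc/redhat-release"
$ ansible legacy -m raw -a "yum upgrade -y" -b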

Conclusion

Many customers want to support their business critical applications for as long as possible, and this often includes running them on an older version of Red Hat Enterprise Linux. "If it ain't broke, don't fix it" is a common mantra within IT departments. But automating these older, traditional systems which do not provide standard libraries can be challenging. There are multiple ways to deal with that – deciding which way is best depends on the overall situation and the possibilities of the automation setup. Until organizations are faced with the pressing business need to modernize these older systems, Ansible is there to help.

[Howto] Adopting Ansible Galaxy roles for Solaris

It is pretty easy to manage Solaris with Ansible. However, the Ansible roles available at Ansible Galaxy usually target Linux-based OSes only. Luckily, adopting them is rather simple.

Background

As mentioned earlier, Solaris machines can be managed via Ansible pretty well: it works out of the box, and many already existing modules are incredibly helpful in managing Solaris installations.

At the same time, the Ansible Best Practices guide strongly recommends using roles to organize your IT with Ansible. Many roles are already available at the Ansible Galaxy ready to be used by the admin in need. Ansible Galaxy is a central repository for various roles written by the community.

However, Ansible Galaxy only recently added support for Solaris. There are currently hardly any roles with Solaris platform support available.

Luckily, extending existing Ansible roles to cover Solaris is not that hard.

Example: Apache role

For example, the Apache role from geerlingguy is one of the highest rated roles on Ansible Galaxy. It installs Apache, starts the service, has support for vhosts and custom ports, and is above all pretty well documented. Yet there is no Solaris support right now… although geerlingguy just accepted a pull request, so it won't be long until the new version surfaces at Ansible Galaxy.

The best way to adapt a given role to another OS is to extend the current role for an additional OS – in contrast to deleting the original OS support and replacing it with new, again OS-specific configuration. This keeps the role reusable on other OSes and enables the community to maintain and improve a shared, common role.

With a bit of knowledge about how services are started and stopped on Linux as well as on Solaris, one major difference quickly comes up: on Linux the name of the controlled service is usually exactly the same as the name of the binary behind the service. The same name string is also part of the path to the program's files and, for example, to the configuration files. On Solaris that is often not the case!

So it is best to check whether the given role starts or stops a service at any point, whether a variable is used there, and whether that variable is also used somewhere else, for example to create a path name or to identify a binary.
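
A simple grep over the role usually answers these questions quickly – a sketch, assuming the role is checked out locally into a directory called geerlingguy.apache:

$ grep -rn "apache_daemon" geerlingguy.apache/
$ grep -rn "service:" geerlingguy.apache/tasks/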

The given example indeed controls a service. Thus we add another variable, the service name:

tasks/main.yml
@@ -41,6 +41,6 @@
 - name: Ensure Apache has selected state and enabled on boot.
   service:
-    name: "{{ apache_daemon }}"
+    name: "{{ apache_service }}"
     state: "{{ apache_state }}"
     enabled: yes

Next, we need to add the new variable to the existing OS support:

vars/Debian.yml
@@ -1,4 +1,5 @@
 ---
+apache_service: apache2
 apache_daemon: apache2
 apache_daemon_path: /usr/sbin/
 apache_server_root: /etc/apache2
vars/RedHat.yml
@@ -1,4 +1,5 @@
 ---
+apache_service: httpd
 apache_daemon: httpd
 apache_daemon_path: /usr/sbin/
 apache_server_root: /etc/httpd

Now would be a good time to test the role – it should still work on the supported platforms.

The next step is to add the necessary variables for Solaris. The best way is to copy an already existing variable file and to modify it afterwards to fit Solaris:

vars/Solaris.yml
@@ -0,0 +1,19 @@
+---
+apache_service: apache24
+apache_daemon: httpd
+apache_daemon_path: /usr/apache2/2.4/bin/
+apache_server_root: /etc/apache2/2.4/
+apache_conf_path: /etc/apache2/2.4/conf.d
+
+apache_vhosts_version: "2.2"
+
+__apache_packages:
+  - web/server/apache-24
+  - web/server/apache-24/module/apache-ssl
+  - web/server/apache-24/module/apache-security
+
+apache_ports_configuration_items:
+  - regexp: "^Listen "
+    line: "Listen {{ apache_listen_port }}"
+  - regexp: "^#?NameVirtualHost "
+    line: "NameVirtualHost *:{{ apache_listen_port }}"

This specific role provides two task files to set up and configure each supported platform. The easiest way to create these two files for a new platform is again to copy existing ones and to modify them afterwards according to the specifics of Solaris.

The configuration looks like:

tasks/configure-Solaris.yml
@@ -0,0 +1,19 @@
+---
+- name: Configure Apache.
+  lineinfile:
+    dest: "{{ apache_server_root }}/conf/{{ apache_daemon }}.conf"
+    regexp: "{{ item.regexp }}"
+    line: "{{ item.line }}"
+    state: present
+  with_items: apache_ports_configuration_items
+  notify: restart apache
+
+- name: Add apache vhosts configuration.
+  template:
+    src: "vhosts-{{ apache_vhosts_version }}.conf.j2"
+    dest: "{{ apache_conf_path }}/{{ apache_vhosts_filename }}"
+    owner: root
+    group: root
+    mode: 0644
+  notify: restart apache
+  when: apache_create_vhosts

The setup thus can look like:

tasks/setup-Solaris.yml
@@ -0,0 +1,6 @@
+---
+- name: Ensure Apache is installed.
+  pkg5:
+    name: "{{ item }}"
+    state: installed
+  with_items: apache_packages

Last but not least, the platform support must be activated in the tasks/main.yml file:

tasks/main.yml
@@ -15,6 +15,9 @@
 - include: setup-Debian.yml
   when: ansible_os_family == 'Debian'
 
+- include: setup-Solaris.yml
+  when: ansible_os_family == 'Solaris'
+
 # Figure out what version of Apache is installed.
 - name: Get installed version of Apache.
   shell: "{{ apache_daemon_path }}{{ apache_daemon }} -v"

When you now run the role on a Solaris machine, it should install Apache right away.
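
To give it a try with your adapted copy of the role (or, once the new version has surfaced on Galaxy, with ansible-galaxy install geerlingguy.apache), a tiny playbook is enough – a sketch, where the playbook name, inventory file and host group are just examples:

$ cat apache.yml
---
- hosts: solaris
  sudo: yes
  roles:
    - geerlingguy.apache
$ ansible-playbook -i hosts apache.yml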

Conclusion

Adapting a given role from Ansible Galaxy for Solaris is rather easy – if the given role is already prepared for multi-OS support. In such cases adding another OS is a trivial task.

If the role is not prepared for multi-OS support, try to get in contact with the developers; they often appreciate feedback and pull requests adding multi-OS support.

[Howto] Solaris 11 on KVM

Recently I had to test a few things on Solaris 11 and wondered how well it works virtualized with KVM. It does – with a few tweaks.

Preface

Testing various different versions of operating systems is easy these days thanks to virtualization. However, I’m mainly used to Linux variants and hardly ever install any other kind of UNIX based OS. Thus I was curious if an installation of Solaris 11 on KVM / libvirt works.

For the test I actually used virt-manager since it does provide neat defaults during the VM setup. But the same comments and lessons learned are true for the command line tool as well.
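
For reference, the command line equivalent of such a setup could look roughly like this – a sketch, where the ISO path, disk size and os-variant value are just examples; osinfo-query os lists the variants your libosinfo version actually knows:

$ osinfo-query os | grep -i solaris
$ sudo virt-install --name solaris11 --memory 2048 --vcpus 2 \
    --cdrom /path/to/sol-11-text-x86.iso \
    --disk size=20 --os-variant solaris11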

Setting up the VM

virt-manager usually does not provide Solaris as an operating system type by default in the VM setup dialog. You first have to click on “OS Type”, “Show all OS options” as shown here:
virt-manager Solaris guest picker

Note that a Solaris 11 VM should have at least 2 GB of RAM, otherwise the installation and also booting might take very long or run into their very own problems.

The installation runs through – although quite a few errors clutter the screen (see below).

Errors and problems

As soon as the machine is started several error messages are shown:

WARNING: /pci@0,0/pci1af4,1100@6,1 (uhci1): No SOF interrupts have been received, this USB UHCI host controller is unusable
WARNING: /pci@0,0/pci1af4,1100@6,2 (uhci2): No SOF interrupts have been received, this USB UHCI host controller is unusable

This shows that something is wrong with the interrupts and thus with the "hardware" of the machine – or at least with the way the guest machine discovers the hardware.

Additionally, even if DHCP is configured, the machine is unable to obtain the networking configuration. A fixed IP address and gateway do not help here, either. The host system might even report that it provides DHCP data, but the guest system continues to request these:

Dez 23 11:11:05 liquidat dnsmasq-dhcp[13997]: DHCPDISCOVER(virbr0) 52:54:00:31:31:4b
Dez 23 11:11:05 liquidat dnsmasq-dhcp[13997]: DHCPOFFER(virbr0) 192.168.122.205 52:54:00:31:31:4b
Dez 23 11:11:09 liquidat dnsmasq-dhcp[13997]: DHCPDISCOVER(virbr0) 52:54:00:31:31:4b
Dez 23 11:11:09 liquidat dnsmasq-dhcp[13997]: DHCPOFFER(virbr0) 192.168.122.205 52:54:00:31:31:4b
Dez 23 11:11:17 liquidat dnsmasq-dhcp[13997]: DHCPDISCOVER(virbr0) 52:54:00:31:31:4b
Dez 23 11:11:17 liquidat dnsmasq-dhcp[13997]: DHCPOFFER(virbr0) 192.168.122.205 52:54:00:31:31:4b
...

Also, when the machine is shutting down and ready to be powered off, the CPU usage spikes to 100 %.

The solution: APIC

The solution for the "hardware" problems mentioned above and also for the networking trouble is to deactivate an APIC feature inside the VM: x2APIC, an extension of Intel's programmable interrupt controller. Some more details about the problem can be found in the Red Hat Bugzilla entry #1040500.

To apply the fix, the virtual machine definition needs to be edited to disable the feature. The XML definition can be edited with the command sudo virsh edit, with the machine name as command line argument; the change needs to be done in the cpu section as shown below. Make sure the VM is stopped before the changes are made.

$ sudo virsh edit krypton
...
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Broadwell</model>
    <feature policy='disable' name='x2apic'/>
  </cpu>
$ sudo virsh start krypton
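
To double check that the change really ended up in the machine definition, dumping the XML again is a quick way – a sketch:

$ sudo virsh dumpxml krypton | grep -A 2 '<cpu'
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Broadwell</model>
    <feature policy='disable' name='x2apic'/>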

After these changes Solaris does not report any interrupt problems anymore and DHCP works without flaws. Note however that the CPU still spikes at power off. If anyone knows a solution to that problem I would be happy to hear about it and add it to this post.

[Howto] Managing Solaris 11 via Ansible

Ansible can be used to manage various kinds of server operating systems – among them Solaris 11.

Managing Solaris 11 servers via Ansible from my Fedora machine is actually less exciting than I previously thought. Since the number of blog articles covering that is limited, I thought it might be a nice challenge.

However, the opposite is the case: it just works. On a fresh Solaris installation, out of the box. There is not even a need for additional configuration or additional software. Of course, SSH access must be available – but the same is true for Linux machines as well. It's almost boring 😉

Here is an example of installing and removing software on Solaris 11, using IPS, the new packaging system introduced in Solaris 11:

$ ansible solaris -s -m pkg5 -a "name=web/server/apache-24"
$ ansible solaris -s -m pkg5 -a "state=absent name=/text/patchutils"

While Ansible uses a special module, pkg5, to manage Solaris packages, managing services is even easier because the usual service module is used for Linux as well as for Solaris machines:

$ ansible solaris -s -m service -a "name=apache24 state=started"
$ ansible solaris -s -m service -a "name=apache24 state=stopped"

So far so good – of course things get really interesting if playbooks can perform tasks on Solaris and Linux machines at the same time. For example, imagine Apache needs to be deployed and started on Linux as well as on Solaris. Here conditionals come in handy:

---
- name: install and start Apache
  hosts: clients
  vars_files:
    - "vars/{{ ansible_os_family }}.yml"
  sudo: yes

  tasks:
    - name: install Apache on Solaris
      pkg5: name=web/server/apache-24
      when: ansible_os_family == "Solaris"

    - name: install Apache on RHEL
      yum:  name=httpd
      when: ansible_os_family == "RedHat"

    - name: start Apache
      service: name={{ apache }} state=started

Since the service name is not the same on different operating systems (or even different Linux distributions) the service name is a variable defined in a family specific Yaml file.
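
Such variable files could look roughly like this – a sketch of hypothetical contents matching the {{ apache }} variable and the vars_files pattern from the playbook above:

$ cat vars/Solaris.yml
---
apache: apache24
$ cat vars/RedHat.yml
---
apache: httpd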

It’s also interesting to note that the same Ansible module works different on the different operating systems: when a service is ordered to be stopped, but is not even available because the corresponding package and thus service definition is not even installed, the return code on Linux is OK, while on Solaris an error is returned:

TASK: [stop Apache on Solaris] ************************************************
failed: [argon] => {"failed": true}
msg: svcs: Pattern 'apache24' doesn't match any instances

FATAL: all hosts have already failed -- aborting

It would be nice to catch the error; however, as far as I know, error handling in Ansible can only specify when to fail, and not which messages or errors should be ignored.

But besides this problem managing Solaris via Ansible works smoothly for me. And it even works on Ansible Tower, of course:

Tower-Ansible-Solaris.png

I haven’t tried to install Ansible on Solaris itself, but since packages are available that shouldn’t be much of an issue.

So in case you have a mixed environment including Solaris and Linux machines (Red Hat, Fedora, Ubuntu, Debian, Suse, you name it) I can only recommend starting to use Ansible as soon as possible. It simply works and can ease the pain of day-to-day tasks substantially.

Ansible Galaxy just added Solaris platform support

While Ansible is mostly used in Linux environments, it can also be used to manage other UNIX variants like Solaris. Now the central hub for Ansible roles, Ansible Galaxy, has also added support for the Solaris platform.

Ansible is a handy tool to manage multiple servers. Besides the usual Linux distributions it features support for BSD variants, Solaris and even Windows. However, the central hub to share Ansible roles, Ansible Galaxy, was still missing Solaris support until now: support was added for version 10 as well as version 11.

That can already be seen when a role template is generated with the Galaxy tools:

$ ansible-galaxy init acme --force
- acme was created successfully
$ grep -B 1 -A 8 Solaris acme/meta/main.yml
  #  - any
  #- name: Solaris
  #  versions:
  #  - all
  #  - 10
  #  - 11.0
  #  - 11.1
  #  - 11.2
  #  - 11.3
  #- name: Fedora

This opens up the possibility to provide Ansible roles including Solaris support at a central place. Right now I already have a pull request open to enable Solaris support in a very powerful Apache role. In a following blog post I'll describe the (surprisingly few) steps which were necessary to adjust the role to support Solaris.

It is great that Ansible Galaxy adds more and more platforms and thus broadens the usage of the central hub to cover more and more use cases. I'm looking forward to seeing more and more Solaris roles in the Galaxy. If you need help porting a role, don't hesitate to contact me.

The support is so far still in internal testing and will be made final once the corresponding GitHub issue is closed.