[Howto] Using Ansible to manage RHEL 5 systems

With the release of Ansible 2.4, Ansible requires that managed nodes have a Python version of at least 2.6. Most notably, this leaves RHEL 5 users wondering how to manage RHEL 5 systems in the future – given that RHEL 5 only provides Python 2.4.

I covered this topic in a recent blog post at ansible.com/blog; read more at “Using Ansible to manage RHEL 5 yesterday, today and tomorrow”.
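
The linked post goes into the details; as a rough idea of one possible workaround, the raw module executes commands over plain SSH and therefore does not need any Python on the managed node at all. A minimal sketch (not taken from the post; the inventory group rhel5nodes is hypothetical):

---
- name: manage a RHEL 5 node without relying on regular modules
  hosts: rhel5nodes
  # fact gathering uses the setup module, which needs a recent Python
  gather_facts: false
  tasks:
    - name: check the OS release via plain SSH
      raw: cat /etc/redhat-release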

[HowTo] Combine Python methods with Jinja filters in Ansible

Ansible has a lot of ways to manipulate variables and their content. We shed some light on the different possibilities – and how to combine them.

Ansible inbuilt filters

One way to manipulate variables in Ansible is to use filters. Filters are connected to variables via pipes, |, and the result is the modified variable. Ansible offers a set of inbuilt filters. For example, the ipaddr filter – which requires the Python netaddr library on the control host – can be used to find IP addresses with certain properties in a list of given values:

# Example list of values
test_list = ['192.24.2.1', 'host.fqdn', '::1', '192.168.32.0/24', 'fe80::100/10', True, '', '42540766412265424405338506004571095040/64']

# {{ test_list | ipaddr }}
['192.24.2.1', '::1', '192.168.32.0/24', 'fe80::100/10', '2001:db8:32c:faad::/64']

Jinja2 filters

Another set of filters which can be used in Ansible are the filters shipped with Jinja2, Ansible’s default templating engine.

For example, the map filter can be used to pick certain values from a given dictionary. In the following snippet, only the first names are extracted from a list of names: the map filter does the picking, and the list filter turns the result into a list for output.

vars:
  names:
    - first: Foo
      last: Bar
    - first: John
      last: Doe

tasks:
- debug:
    msg: "{{ names | map(attribute='first') | list }}"
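
Running this play should produce debug output along these lines:

TASK [debug] *****************************************************************
ok: [localhost] => {
 "msg": [
  "Foo",
  "John"
 ]
}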

Python methods

Besides filters, variables can also be modified by Python’s string methods: Python is the language Ansible is written in, and its string manipulation methods can be used directly. In contrast to filters, methods are attached to variables not with a pipe, but with dot notation:

vars:
  - mystring: foobar something

tasks:
- name: endswith method
  debug:
    msg: "{{ mystring.endswith('thing') }}"

...

TASK [endswith method] *****************************************************************
ok: [localhost] => {
 "msg": true
}

Due to the close relation between Python and Jinja2, many of the above-mentioned Jinja2 filters are quite similar to Python’s string methods. As a result, some capabilities like capitalize are available both as a filter and as a method:

vars:
  - mystring: foobar something

tasks:
- name: capitalize filter
  debug:
    msg: "{{ mystring | capitalize }}"

- name: capitalize method
  debug:
    msg: "{{ mystring.capitalize() }}"
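
Both return the same result here – the output should look roughly like this:

TASK [capitalize filter] *****************************************************
ok: [localhost] => {
 "msg": "Foobar something"
}

TASK [capitalize method] *****************************************************
ok: [localhost] => {
 "msg": "Foobar something"
}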

Connecting filters and methods

Due to the different ways of invoking filters and methods, it is sometimes difficult to bring both together – some caution is needed when mixing them.

For example, given a list of IP addresses, suppose we want the last octet of the address that falls within the range 10.0.0.0/8. We can first use the ipaddr filter to output only the IP within that range, and afterwards use the split method to break the address up into a list of four elements:

vars:
  - myaddresses: ['192.24.2.1', '10.0.3.5', '171.17.32.1']

tasks:
- name: get last element of 10* IP
  debug:
    msg: "{{ (myaddresses|ipaddr('10.0.0.0/8'))[0].split('.')[-1] }}"

...

TASK [get last element of 10* IP] **************************************************************
ok: [localhost] => {
 "msg": "5"
}

As can be seen above, to attach a method to a filtered object, another set of brackets – ( ) – is needed. Also, since the result of the filter is a list, we need to pick a list element – easy in this case, since there is only one result, so we take element 0. Afterwards, the split method is called on the result and returns a list of elements, of which we take the last one (index -1, though index 3 would have worked here as well).
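
As a side note, the same result can be expressed entirely with filters, using Jinja2’s built-in first and last filters instead of index notation – an equivalent sketch:

- name: get last element of 10* IP, filter variant
  debug:
    msg: "{{ (myaddresses | ipaddr('10.0.0.0/8') | first).split('.') | last }}"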

 

Conclusion

There are many ways in Ansible to manipulate strings. However, since they come from various sources, it is sometimes a little tricky to find what is actually needed.

Ansible package moved from EPEL to extras

A few days ago the Ansible package was removed from EPEL, and many asked why that happened. The background is that Ansible is now provided in certain Red Hat channels.

What happened?

In the past (pre-2017-10) most people on RHEL, CentOS or similar RHEL-based systems used to install Ansible from the EPEL repository. This way the package was updated regularly, and it was ensured that it met the quite high packaging standards of the EPEL project.

However, a few days ago someone noticed that the EPEL repositories no longer contain an Ansible rpm package:

I'm running RHEL 7.3, and have installed the latest epel-release-latest-7.noarch.rpm. However, I'm unable to install ansible from this repo.

This caused some confusion and questions about the reasons behind that move.

EPEL repository policy

To better understand what happened it is important to understand EPEL’s package policy:

EPEL strives to never replace or interfere with packages shipped by Enterprise Linux.

While the idea of EPEL is to provide cool additional packages for RHEL, it will never replace anything that is shipped with RHEL itself.

Change at Red Hat Enterprise Linux

That philosophy regularly requires the EPEL project to remove packages: each time RHEL adds a package, EPEL needs to check whether it provides that package as well – and if so, remove it.

And a few weeks ago exactly that happened: Ansible was included in RHEL’s extras repository.

The reason behind that move is that the newest incarnation of RHEL comes along with so-called system roles – which require Ansible to execute them.

But where to get it now?

Ansible is now directly available to RHEL users as mentioned above. Also, CentOS picked up Ansible in their extras repository, and there are plenty of other ways available.
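
For RHEL 7 this boils down to enabling the extras repository and installing the package – a rough sketch, assuming a subscription-manager based setup (the exact repository name can differ between RHEL variants):

$ subscription-manager repos --enable rhel-7-server-extras-rpms
$ yum install ansible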

The only case where something actually changes for people is when the EPEL repository is activated – but the extras repository is not.

[Howto] Reference Ansible variables between plays

Ansible’s strength is to work with all kinds of devices and services – in one go. To use a variable value from one server while working on another host, the variable needs to be referenced properly.

One of the major strengths of Ansible is the capability to almost seamlessly talk to different hosts, devices and services. That’s agent-less at its best!

However, to do that often variables of one host need to be referenced on another. For the sake of an example, imagine a monitoring server which needs to ssh to the managed nodes. The task is to first collect the public SSH key of the monitoring server and afterwards add it to the managed nodes.

First you need a play to collect the SSH key:

---
- name: fetch ssh key 
  hosts: monitoringserver

  tasks:
    - name: fetch ssh key from monitoring server
      slurp:
        src: ~/.ssh/id_rsa.pub
      register: monitoringsshkey

After that, the key needs to be distributed. It makes sense to just add a second play to the same playbook. However, since the ssh key was fetched in the first play, it is not possible to just reference it as {{ monitoringsshkey }}. That would lead to an error:

fatal: [managednode.qxyz.de]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'monitoringsshkey' is undefined\n\nThe error appears to have been in '/home/liquidat/ansible/sshkey.yml': line 19, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n  tasks:\n    - name: Distribute SSH to nodes\n      ^ here\n"}

Instead, the variable needs to be referenced properly, highlighting the actual host it is coming from:

- name: provide ssh key
  hosts: managednode.qxyz.de

  tasks:
    - name: Distribute SSH to nodes
      authorized_key: 
        user: liquidat
        key: "{{ hostvars['monitoringserver']['monitoringsshkey']['content'] | b64decode }}"

The reason for this is simple: in this example we targeted only one host in the first play – but it could just as easily have been five hosts. In that case, Ansible cannot reliably know which variable value to pick if we do not specify the actual host.
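
If the first play had indeed targeted a whole group, a concrete host would still have to be picked – for example the first member of the group, via the groups magic variable (a sketch, assuming a hypothetical group named monitoring):

key: "{{ hostvars[groups['monitoring'][0]]['monitoringsshkey']['content'] | b64decode }}"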

[Short Tip] Call Ansible or Ansible Playbooks without an inventory

Ansible is a great tool to automate almost anything in IT. One of its core concepts is the inventory, which lists the nodes to be managed. In some situations, however, setting up a dedicated inventory is overkill.

For example, there are many situations where admins just want to ssh to a machine or two to figure something out. Ansible modules can often make such SSH calls in a much more efficient way, rendering them unnecessary – but creating an inventory first is a waste of time for such short tasks.

In such cases it is handy to call Ansible or Ansible playbooks without an inventory. With plain Ansible this can be done by addressing all nodes while at the same time limiting them to an actual host list:

$ ansible all -i jenkins.qxyz.de, -m wait_for -a "host=jenkins.qxyz.de port=8080"
jenkins.qxyz.de | SUCCESS => {
    "changed": false, 
    "elapsed": 0, 
    "path": null, 
    "port": 8080, 
    "search_regex": null, 
    "state": "started"
}

The comma is needed since Ansible expects a list of hosts – and a list of one host still needs the comma.

For Ansible playbooks the syntax is slightly different:

$ ansible-playbook -i neon.qxyz.de, my_playbook.yml

Here the “all” is missing since the playbook already contains a hosts directive. But the comma still needs to be there to mark a list of hosts.
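
For this to work, the hosts directive of the playbook must match the host given on the command line. A minimal, hypothetical my_playbook.yml could look like this:

---
- name: ad-hoc play without a static inventory
  hosts: all
  tasks:
    - ping: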

Ansible Tower 3.1 – screenshot tour

Ansible Tower 3.1 was just released. Time to have a closer look at some of the new features, like the workflow editor.

Just a few days ago, Ansible Tower 3.1 was released. Besides the usual bug fixes, UI refinements and similar things, this Tower version comes with major new features: a workflow editor, scale-out clustering, integration with logging providers and a new job details page.

The basic idea of a workflow is to link multiple job templates so that they run one after the other. They may or may not share inventory, playbooks or even permissions. The links can be conditional: if job template A succeeds, job template B is automatically executed afterwards, but in case of failure, job template C will be run. And workflows are not even limited to job templates; they can also include project or inventory updates.

This enables new applications for Tower: besides the rather simple execution of prepared job templates, different workflows can now build upon each other. Imagine a networking team which creates playbooks with their own content, in their own Git repository and even targeting their own inventory, while the operations team also has its own repos, playbooks and inventory. With older Tower versions there was no simple way to bring these totally separated worlds together – with 3.1 this can be done even with a graphical editor.

Workflows can be created right from the job template page, which got an overhaul as well:

[Screenshot: the overhauled job templates page]

The button to add a new template offers a small arrow to get a menu from which a workflow can be set up.

Afterwards, the workflow needs to be defined – name, organization, etc. This is a necessary step before the actual links can be created:

[Screenshot: setting up a new workflow]

As shown in the screenshot above, the actual editor can be started from this screen. And I must admit that I was surprised by how simple yet elegant the editor looks and works. It takes hardly any time to get used to, and the result is visually appealing and easily understandable:

[Screenshot: the workflow editor]

The above screenshot shows the major highlights: links depending on the result of the previous job template in red and green, blue links which are executed every time, a task in the workflow to update a project (indicated by the “P”), and the actual editor.

As mentioned at the beginning, there are more features in this new Tower release. The clustering feature is especially interesting for load balancing and HA setups, though I have not tested it yet. Another addition is the integration of logging providers right into the UI:

[Screenshot: logging provider configuration]

As shown above, a Logstash logging provider was configured to gather all the Tower logs. Other possible providers are Splunk, and in general everything which understands REST calls.

A change I still have to get familiar with is the new view on the jobs page, showing running or completed jobs:

The new view is much more tailored to the output of ansible-playbook, showing the time for each task. Also, a search bar has been added which can be used to search through the results rather easily. Each task can be clicked to get many more details about it. However, in the old view I liked the possibility to simply click through a play and the single tasks, getting the list of hosts adjusted automatically, etc. I can already see that the change will be for the better – but I have to get used to it first 😉

Overall the new release is pretty impressive. Especially the workflow editor will massively help to bring different teams even closer together in automation (DevOps, anyone?). Also, the cluster feature will certainly help create stable, HA-like setups of Tower. The UI might take some time to get used to, but that’s ok, since there will be a benefit in the end.

So, it is a great release – get started now!