[Howto] Reference Ansible variables between plays

Ansible’s strength is to work with all kinds of devices and services – in one go. To call a variable value from one server while working on another host, the variable needs to be referenced properly.

One of the major strengths of Ansible is the capability to almost seamlessly talk to different hosts, devices and services. That’s agent-less at its best!

However, to do that, variables of one host often need to be referenced on another. For the sake of an example, imagine a monitoring server which needs to ssh to the managed nodes. The task is to first collect the public SSH key of the monitoring server and afterwards add it to the managed nodes.

First you need a play to collect the SSH key:

---
- name: fetch ssh key 
  hosts: monitoringserver

  tasks:
    - name: fetch ssh key from monitoring server
      slurp:
        src: ~/.ssh/id_rsa.pub
      register: monitoringsshkey

After that, the key needs to be distributed. It makes sense to simply add a second play to the same playbook. However, since the SSH key was fetched in the first play, it cannot just be referenced as {{ monitoringsshkey }} in the second one. That would lead to an error:

fatal: [managednode.qxyz.de]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'monitoringsshkey' is undefined\n\nThe error appears to have been in '/home/liquidat/ansible/sshkey.yml': line 19, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n  tasks:\n    - name: Distribute SSH to nodes\n      ^ here\n"}

Instead, the variable needs to be referenced properly, highlighting the actual host it is coming from:

- name: provide ssh key
  hosts: managednode.qxyz.de

  tasks:
    - name: Distribute SSH to nodes
      authorized_key: 
        user: liquidat
        key: "{{ hostvars['monitoringserver']['monitoringsshkey']['content'] | b64decode }}"

The reason for this is simple: in this example only one host was targeted in the first play – but it could just as easily be five hosts. In that case, Ansible cannot reliably know which variable value to pick unless we specify the actual host.
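
If several monitoring servers were indeed targeted in the first play, their keys could be distributed by looping over the corresponding inventory group – a minimal sketch, assuming the group is named monitoringserver:

- name: provide ssh keys
  hosts: managednode.qxyz.de

  tasks:
    - name: Distribute SSH keys of all monitoring servers
      authorized_key:
        user: liquidat
        key: "{{ hostvars[item]['monitoringsshkey']['content'] | b64decode }}"
      with_items: "{{ groups['monitoringserver'] }}"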

[Howto] Automated DNS resolution for KVM/libvirt guests with a local domain

I often run demos on my laptop with the help of libvirt. Managing 20+ machines that way is annoying when you have no DNS resolution for those. Luckily, with libvirt and NetworkManager, that can be easily solved.

The problem

Imagine you want to test something in a demo setup with 5 machines. You create the necessary VMs in your local KVM/libvirt environment – but you cannot address them properly by name. With 5 machines you also need to write down the appropriate IP addresses – that’s hardly practical.

It is possible to create static entries in the libvirt network configuration – however, that is still very inflexible, difficult to automate and only works for name resolution inside the libvirt environment. When you want to ssh into a running VM from the host, you again have to look up the IP.

Name resolution in the host network would be possible by additionally adding each entry to /etc/hosts. But that would require managing two lists at the same time. Not automated, far from dynamic, and very cumbersome.

The solution

Luckily, there is an elegant solution: libvirt comes with its own built-in DNS server, dnsmasq. Configured properly, it can be used to serve DHCP and DNS to servers under a previously defined domain. Additionally, NetworkManager can be configured to use its own dnsmasq instance to resolve DNS entries – forwarding requests to the libvirt instance if needed.

That way, the only thing which has to be done is setting a proper host name inside the VMs. Everything else just works out of the box (with a recent Linux, see below).

The solution presented here is based on a great post from Dominic Cleal.

Configuring libvirt

First of all, libvirt needs to be configured. Given that the network “default” is assigned to the relevant VMs, the configuration should look like this:

$ sudo virsh net-dumpxml default
<network connections='1'>
  <name>default</name>
  <uuid>158880c3-9adb-4a44-ab51-d0bc1c18cddc</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:fa:cb:e5'/>
  <domain name='qxyz.de' localOnly='yes'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.128' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>

The interesting part is below the mac address: a local domain is defined and marked as localOnly. That domain will be the authoritative domain for the relevant VMs, and libvirt will configure dnsmasq to act as a resolver for that domain. The attribute makes sure that DNS requests regarding that domain will never be forwarded upstream. This is important to avoid forwarding loops.
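
The domain line can be added by editing the network definition with virsh – a quick sketch, assuming the network is named default; note that restarting the network briefly interrupts the connectivity of running guests:

$ sudo virsh net-edit default
$ sudo virsh net-destroy default
$ sudo virsh net-start default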

Configuring the VM guests

Once the domain is set, the host names of the guest VMs need to be defined. With recent Linux releases this is as simple as setting the host name:

$ sudo hostnamectl set-hostname neon.qxyz.de

There is no need to enter the host name anywhere else: the command above takes care of that. And the default configuration of DHCP clients of recent Linux releases sends this hostname together with the DHCP request – dnsmasq automatically picks the host name up if the domain matches.

If you are on a Linux where the hostnamectl command does not work, or where the DHCP client does not send the host name with the request – switch to a recent version of Fedora or RHEL 😉

In such cases the host name must be set manually, according to the documentation of the OS. Just ensure that the resolution of the name works locally. Also, the DHCP configuration must be altered to send along the host name. In older RHEL and Fedora versions for example the option

DHCP_HOSTNAME=neon.qxyz.de

had to be added to /etc/sysconfig/network-scripts/ifcfg-eth0.

At this point automatic name resolution between VMs should already work after a restart of libvirt.

Configuring NetworkManager

The last missing piece is the configuration of the actual KVM/libvirt host, so that the local domain, here qxyz.de, is properly resolved. Adding another name server to /etc/resolv.conf might work for a workstation with a fixed network connection, but certainly does not work for laptops which have changing network connections and DNS servers all the time. In such cases, NetworkManager is often used anyway, so we take advantage of its capabilities.

First of all, NetworkManager needs to start its own version of dnsmasq. That can be achieved with a simple configuration option:

$ cat /etc/NetworkManager/conf.d/localdns.conf 
[main]
dns=dnsmasq

This second dnsmasq instance just works out of the box. All DNS requests will automatically be forwarded to DNS servers acquired by NetworkManager via DHCP, for example. The only notable difference is that the entry in /etc/resolv.conf is different:

# Generated by NetworkManager
search whatever
nameserver 127.0.0.1

As a second step, this NetworkManager dnsmasq instance needs to know that for all requests regarding qxyz.de the libvirt dnsmasq instance has to be queried. This can be achieved with another rather simple configuration option, given the domain and the IP from the libvirt network configuration at the top of this blog post:

$ cat /etc/NetworkManager/dnsmasq.d/libvirt_dnsmasq.conf 
server=/qxyz.de/192.168.122.1

And that’s it, already. Restart NetworkManager and everything should be working fine.
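
A quick way to verify the setup – assuming the guest neon.qxyz.de from above is up and running:

$ sudo systemctl restart NetworkManager
$ ping -c 1 neon.qxyz.de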

As a side note: if the attribute localOnly had not been set in the libvirt network configuration, queries for unknown qxyz.de entries would be forwarded from the libvirt dnsmasq to the NetworkManager dnsmasq – which would again forward them to the libvirt dnsmasq, and so on. That would quickly overload the dnsmasq servers, resulting in error messages:

dnsmasq[15426]: Maximum number of concurrent DNS queries reached (max: 150)

Summary

With these rather few and simple changes a local domain is established for both guest and host, making it easy to resolve their names everywhere. There is no need to maintain one or even two lists of static IP entries, everything is done automatically.

For me this is a huge relief, making it much easier in the future to set up demo and test environments. Also, it looks much nicer during a demo if you have FQDNs and not IP addresses. I can only recommend this setup to everyone who often uses libvirt/KVM on a local machine for test/demo environments.

[Howto] Writing an Ansible module for a REST API

Ansible comes with a great set of modules. But maybe your favorite tool is not covered yet and you need to develop your own module. This guide shows you how to write an Ansible module – when you have a REST API to speak to.

Background: Ansible modules

Ansible is a great tool to automate almost everything in an IT environment. One of the huge benefits of Ansible are the so-called modules: they provide a way to address automation tasks in the native language of the problem. For example, given a user needs to be created: this is usually done by calling certain commands on the shell. In that case the automation developer has to think about which command line tool needs to be used, which parameters and options need to be provided, and the result is most likely not idempotent. And it’s hard to run tests (“checks”) with such an approach.

Enter Ansible user modules: with them the automation developer only has to provide the data needed for the actual problem like the user name, group name, etc. There is no need to remember the user management tool of the target platform or to look up parameters:

$ ansible server -m user -a "name=abc group=wheel" -b

Ansible comes with hundreds of modules. But what if your favorite task or tool is not supported by any module? Then you have to write your own Ansible module. If your tool provides a REST API, there are a few things to know which make it much easier to get your module running fine with Ansible. These few things are outlined below.

REST APIs and Python libraries in Ansible modules

According to Wikipedia, REST is:

… the software architectural style of the World Wide Web.

In short, it’s a way to write, provide and access an API via usual HTTP tools and libraries (Apache web server, Curl, you name it), and it is very common in everything related to the WWW.

To access a REST API via an Ansible module, there are a few things to note. Ansible modules are usually written in Python. The library of choice to access URLs and thus REST APIs in Python is usually urllib. However, the library is not the easiest to use and there are some security topics to keep in mind when it is used. For these reasons alternative libraries like Python requests came up in the past and are pretty common.

However, using an external library in an Ansible module would add an extra dependency, thus the Ansible developers added their own library inside Ansible to access URLs: ansible.module_utils.urls. This one is already shipped with Ansible – the code can be found at lib/ansible/module_utils/urls.py – and it covers the shortcomings and security concerns of urllib. If you submit a module to Ansible calling REST APIs the Ansible developers usually require that you use the inbuilt library.

Unfortunately, currently the documentation on the Ansible url library is sparse at best. If you need information about it, look at other modules like the Github, Kubernetes or a10 modules. To cover that documentation gap I will try to cover the most important basics in the following lines – at least as far as I know.

Creating REST calls in an Ansible module

To access the Ansible urls library right in your modules, it needs to be imported in the same way as the basic library is imported in the module:

from ansible.module_utils.basic import *
from ansible.module_utils.urls import *

The main function call to access a URL via this library is open_url. It can take multiple parameters:

def open_url(url, data=None, headers=None, method=None, use_proxy=True,
             force=False, last_mod_time=None, timeout=10, validate_certs=True,
             url_username=None, url_password=None, http_agent=None,
             force_basic_auth=False, follow_redirects='urllib2'):

The parameters in detail are:

  • url: the actual URL, the communication endpoint of your REST API
  • data: the payload for the URL request, for example a JSON structure
  • headers: additional headers, often this includes the content-type of the data stream
  • method: a URL call can be of various methods: GET, DELETE, PUT, etc.
  • use_proxy: if a proxy is to be used or not
  • force: force an update even if a 304 indicates that nothing has changed (I think…)
  • last_mod_time: the time stamp to add to the header in case we get a 304
  • timeout: set a timeout
  • validate_certs: if certificates should be validated or not; important for test setups where you have self signed certificates
  • url_username: the user name to authenticate
  • url_password: the password for the above listed username
  • http_agent: if you want to set the http agent
  • force_basic_auth: force the usage of basic authentication
  • follow_redirects: determine how redirects are handled

For example, to fire a simple GET to a given source like Google most parameters are not needed and it would look like:

open_url('https://www.google.com',method="GET")

A more sophisticated example is to push actual information to a REST API. For example, if you want to search for the domain example on a Satellite server, you need to change the method to PUT, add a data structure to set the actual search string ({"search":"example"}) and add a corresponding content type as header information ({'Content-Type':'application/json'}). Also, a username and password must be provided. Given that we access a test system here, certificate validation needs to be turned off as well. The resulting call looks like this:

open_url('https://satellite-server.example.com/api/v2/domains',
         method="PUT", url_username="admin", url_password="abcd",
         data=json.dumps({"search":"example"}), force_basic_auth=True,
         validate_certs=False, headers={'Content-Type':'application/json'})

Beware that the data JSON structure needs to be processed by json.dumps. The result of the query can be parsed as JSON and further used as a JSON structure:

resp = open_url(...)
resp_json = json.loads(resp.read())

Full example

In the following example, we query a Satellite server to find a so-called environment ID for two given parameters, an organization ID and an environment name. To create a REST call for this task in a module, multiple separate steps have to be taken: first, create the actual URL endpoint. It usually consists of the server name as a variable and the API endpoint as the flexible part which is different in each REST call.

server_name = 'https://satellite.example.com'
api_endpoint = '/katello/api/v2/environments/'
my_url = server_name + api_endpoint

Besides the actual URL, the payload must be pieced together and the headers need to be set according to the content type of the payload – here json:

headers = {'Content-Type':'application/json'}
payload = {"organization_id":orga_id,"name":env_name}

Other content types depend on the REST API itself and on what the developer prefers. JSON is widely accepted as a good way to go for REST calls.

Next, we set the user and password and launch the call. The return data from the call are saved in a variable to analyze later on.

user = 'abc'
pwd = 'def'
resp = open_url(my_url, method="GET", headers=headers,
                url_username=user, url_password=pwd,
                force_basic_auth=True, data=json.dumps(payload))

Last but not least we transform the return value into a JSON construct and analyze it: if the return value does not contain any data – that means the value for the key total is zero – we want the module to exit with an error. Something went wrong, and the automation administrator needs to know that. The module calls the built-in error function module.fail_json. But if the total is not zero, we extract the actual environment ID this REST call was about from the beginning – it is deeply hidden in the JSON structure, btw.

resp_json = json.loads(resp.read())
if resp_json["total"] == 0:
    module.fail_json(msg="Environment %s not found." % env_name)
env_id = resp_json["results"][0]["id"]
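
Putting all the pieces together, a complete module could look like the following minimal sketch – the argument names (server, user, pwd, orga_id, env_name) and the overall layout are illustrative assumptions, not the only way to structure such a module:

#!/usr/bin/python
# -*- coding: utf-8 -*-
# Minimal sketch of a Satellite query module; argument names are
# illustrative assumptions, not a fixed API.

import json

from ansible.module_utils.basic import *
from ansible.module_utils.urls import *

def main():
    module = AnsibleModule(
        argument_spec=dict(
            server=dict(required=True),
            user=dict(required=True),
            pwd=dict(required=True, no_log=True),
            orga_id=dict(required=True),
            env_name=dict(required=True),
        )
    )

    # build the URL from the server name and the API endpoint
    my_url = 'https://' + module.params['server'] + '/katello/api/v2/environments/'

    # payload and matching content type header
    headers = {'Content-Type': 'application/json'}
    payload = {"organization_id": module.params['orga_id'],
               "name": module.params['env_name']}

    # launch the call; validate_certs=False only for test setups
    resp = open_url(my_url, method="GET", headers=headers,
                    url_username=module.params['user'],
                    url_password=module.params['pwd'],
                    force_basic_auth=True, validate_certs=False,
                    data=json.dumps(payload))

    # parse the returned JSON and hand the result back to Ansible
    resp_json = json.loads(resp.read())
    if resp_json["total"] == 0:
        module.fail_json(msg="Environment %s not found." % module.params['env_name'])
    module.exit_json(changed=False, env_id=resp_json["results"][0]["id"])

main()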

Summary

It is fairly easy to write Ansible modules to access REST APIs. The most important part to know is that an internal, Ansible provided library should be used, instead of the better known urllib or requests library. Also, the actual library documentation is still pretty limited, but that gap is partially filled by the above post.

Useful command line options for ansible-playbook

Ansible provides quite a few useful command line options. Most of them are especially interesting during debugging.

Background

There are three major ways to work with Ansible:

  • launching single tasks with the ansible command
  • executing playbooks via ansible-playbook
  • using Tower to manage and run playbooks

While Tower might be the better option to run Ansible in day-to-day business, and the ansible CLI itself is most likely used only for one-off runs, playbooks are often executed on the command line during their development, when no Tower is available – or during debugging. In such cases, there are quite a few useful command line options which might not even be known to the seasoned Ansible user.

Do I say this right? – Syntax checking

Playbooks are written in YAML, and in YAML syntax is crucial – especially indentation:

Data structure hierarchy is maintained by outline indentation.

To check if a playbook is correctly formatted, the option --syntax-check looks at all involved playbooks and verifies the correct syntax. During a syntax check, no playbooks are actually executed.

$ ansible-playbook --syntax-check oraclejdk-destroy.yml
ERROR! Syntax Error while loading YAML.

The error appears to have been in '/home/liquidat/Gits/github/ansible-demo-oraclejdk/oracle-windows-destroy.yml': line 10, column 11,
but may be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

  win_template: src=data/remove-program.j2 dest=C:\\temp\\remove-program.ps1
    - name: remove application
          ^ here

The syntax check helps if a playbook fails for no apparent reason – or if a playbook was edited a lot and you are simply not sure if everything was moved around correctly.

Whom am I talking to? – Listing affected hosts

With complex playbooks and dynamic inventories it sometimes is hard to say against which hosts a playbook will actually be executed. In such cases, the option --list-hosts will output a list of affected hosts, including the name of the actual play and the pattern with which the hosts were chosen:

$ ansible-playbook --list-hosts oraclejdk-destroy.yml

playbook: oraclejdk-destroy.yml

  play #1 (windows): remove OracleJDK on Windows	TAGS: []
    pattern: [u'windows']
    hosts (1):
      radon

  play #2 (rhel): remove OracleJDK on RHEL	TAGS: []
    pattern: [u'rhel']
    hosts (2):
      neon
      helium
...

This also works together with the -l option and might help with debugging your inventory.
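
For example, to check which hosts a limit to the rhel pattern would leave over:

$ ansible-playbook --list-hosts -l rhel oraclejdk-destroy.yml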

Again, no tasks are actually executed when the list of hosts is queried.

What’s going on here? – List tasks

Another thing which can get pretty complicated is the list of tasks actually executed: think of complex playbooks including other complex playbooks. That can quickly become difficult to understand – here the option --list-tasks comes in handy. It lists what will be done, showing the names of the tasks but not executing any of them on the target nodes:

$ ansible-playbook --list-tasks oraclejdk-destroy.yml

playbook: oraclejdk-destroy.yml

  play #1 (windows): remove OracleJDK on Windows	TAGS: []
    tasks:
      copy Java remove script to temp	TAGS: []
      remove application	TAGS: []
      remove temp dir in Windows	TAGS: []

  play #2 (rhel): remove OracleJDK on RHEL	TAGS: []
    tasks:
      remove java dir	TAGS: []
...

What’s that thing? – List all tags

Besides all tasks, the used tags can be listed as well.

$ ansible-playbook --list-tags setup-control.yml

playbook: setup-control.yml

  play #1 (tuzak): 	TAGS: []
      TASK TAGS: [base_setup, db, imap, ldap, mail, oc, smtp]
...

Again, this option helps provide an overview of what a playbook has to offer and how to use it. And again this option does not execute any task on the target node.

Are you sure? – Running in test mode

Ansible provides a so called check mode, also called dry run mode (in Tower, for example). Invoked via --check, the check mode does not alter the target nodes, but tries to output what would change and what not. Note however that this needs to be supported by the modules used – and not all modules support it.

For example, the following listing shows several tasks not supporting the dry run, which is indicated by the “skipping” line.

$ ansible-playbook --check oraclejdk-setup.yml

PLAY [set up OracleJDK on Windows] *********************************************

TASK [setup] *******************************************************************
ok: [radon]

TASK [set up temp dir in Windows] **********************************************
skipping: [radon]

TASK [copy JDK to Windows client] **********************************************
skipping: [radon]

TASK [run exe installer] *******************************************************
skipping: [radon]
...
PLAY [set up OracleJDK on RHEL] ************************************************

TASK [setup] *******************************************************************
ok: [helium]
ok: [neon]

TASK [copy JDK to RHEL client] *************************************************
skipping: [helium]
skipping: [neon]
....

This is quite useful to get an idea of what impact the run of a playbook might have on target nodes. The lack of support in several modules dampens the positive effect a bit, though.

But since the --diff option (see below) supports it, it can be quite handy in certain situations.

Let me have a look at that… – Going through tasks step by step

Imagine that a playbook runs without errors, but somehow the result is not exactly what was expected. In such cases one way to debug everything is to go through each task at a time, step by step, checking the state of all involved components after each task. This can be done with the option --step.

$ ansible-playbook --step oraclejdk-setup.yml

PLAY [set up OracleJDK on Windows] *********************************************
Perform task: TASK: setup (y/n/c): y

Perform task: TASK: setup (y/n/c): *********************************************

TASK [setup] *******************************************************************
ok: [radon]
Perform task: TASK: set up temp dir in Windows (y/n/c): y

Perform task: TASK: set up temp dir in Windows (y/n/c): ************************

TASK [set up temp dir in Windows] **********************************************
changed: [radon]
Perform task: TASK: copy JDK to Windows client (y/n/c): 
...

This is incredibly helpful on complex setups involving multiple nodes.

And yes, this time the tasks are actually executed on the target node!

Get me right there! – Starting playbooks in the middle

During debugging and development it might make sense to start playbooks not at the beginning, but somewhere in between. For example, because a playbook failed at task 14, and you don’t want to go through the first 13 tasks again. Starting at a given task requires the appropriate name of the task – and the option --start-at-task:

$ ansible-playbook --start-at-task="run exe installer" oraclejdk-setup.yml

PLAY [set up OracleJDK on Windows] *********************************************

TASK [setup] *******************************************************************
ok: [radon]

TASK [run exe installer] *******************************************************
ok: [radon]
...

In this example, the two tasks “set up temp dir in Windows” and “copy JDK to Windows client” are skipped, and the playbook starts directly at “run exe installer”. Note that skipped tasks are not shown or listed at all, and that the setup is run nevertheless.

As shown above, the proper name of each task is listed with the --list-tasks option.

Get down to business! – Showing diffs

Ansible is often used to deploy files, especially using templates. Usually, when a file is changed, Ansible just highlights that a change occurred – but not what was actually changed. In such cases, the option --diff comes in handy: it shows the diff in typical patch form:

$ ansible-playbook --diff examples/template.yml

PLAY [template example] ********************************************************

TASK [setup] *******************************************************************
ok: [helium]

TASK [copy template] ***********************************************************
changed: [helium]
--- before: /tmp/template.conf
+++ after: dynamically generated
@@ -1,2 +1,3 @@
 hostname: ansible-demo-helium
-bumble: bee
+foo: bar
+MX: 10 mx2.redhat.com.,5 mx1.redhat.com.

PLAY RECAP *********************************************************************
helium                     : ok=2    changed=1    unreachable=0    failed=0

This can even be combined with the option --check: in such cases, the diff is printed, but the change is not performed on the target node. That’s pretty handy indeed.
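
A sketch of such a combined call, using the template playbook from above:

$ ansible-playbook --check --diff examples/template.yml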

That was interesting! – Summary

To summarize, ansible-playbook has quite a few options to help debugging playbooks. The fact that many do not alter the target nodes makes it possible to use them on productive systems as well (but with care, as always). They also help a lot when it comes to understanding unknown playbooks, for example from other departments or coworkers.

[Howto] Keeping temporary Ansible scripts

Ansible tasks are executed locally on the target machine via generated Python scripts. For debugging it might make sense to analyze the scripts – so Ansible must be told not to delete them.

When Ansible executes a command on a remote host, usually a Python script is copied, executed and removed immediately. For each task, a script is copied and executed, as shown in the logs:

Feb 25 07:40:44 ansible-demo-helium sshd[2395]: Accepted publickey for liquidat from 192.168.122.1 port 54108 ssh2: RSA 78:7c:4a:15:17:b2:62:af:0b:ac:34:4a:00:c0:9a:1c
Feb 25 07:40:44 ansible-demo-helium systemd[1]: Started Session 7 of user liquidat
Feb 25 07:40:44 ansible-demo-helium sshd[2395]: pam_unix(sshd:session): session opened for user liquidat by (uid=0)
Feb 25 07:40:44 ansible-demo-helium systemd-logind[484]: New session 7 of user liquidat.
Feb 25 07:40:44 ansible-demo-helium systemd[1]: Starting Session 7 of user liquidat.
Feb 25 07:40:45 ansible-demo-helium ansible-yum[2399]: Invoked with name=['httpd'] list=None install_repoquery=True conf_file=None disable_gpg_check=False state=absent disablerepo=None update_cache=False enablerepo=None exclude=None
Feb 25 07:40:45 ansible-demo-helium sshd[2398]: Received disconnect from 192.168.122.1: 11: disconnected by user
Feb 25 07:40:45 ansible-demo-helium sshd[2395]: pam_unix(sshd:session): session closed for user liquidat

However, for debugging it might make sense to keep the script and execute it locally. Ansible can be persuaded to keep a script by setting the variable ANSIBLE_KEEP_REMOTE_FILES to true at the command line:

$ ANSIBLE_KEEP_REMOTE_FILES=1 ansible helium -m yum -a "name=httpd state=absent"
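
If the scripts should be kept more permanently, the same switch can also be set in the [defaults] section of ansible.cfg – to my knowledge the corresponding key is keep_remote_files:

[defaults]
keep_remote_files = True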

The actually executed command – and the path of the created temporary file – are revealed when ansible is executed with the debug option:

$ ANSIBLE_KEEP_REMOTE_FILES=1 ansible helium -m yum -a "name=httpd state=absent" -vvv
...
<192.168.122.202> SSH: EXEC ssh -C -vvv -o ForwardAgent=yes -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -tt 192.168.122.202 'LANG=de_DE.UTF-8 LC_ALL=de_DE.UTF-8 LC_MESSAGES=de_DE.UTF-8 /usr/bin/python -tt /home/liquidat/.ansible/tmp/ansible-tmp-1456498240.12-1738868183958/yum'
...

Note that here the script is executed directly via Python. If the “become” flag is set, the Python execution is routed through a shell, and the command looks like /bin/sh -c 'sudo -u $SUDO_USER /bin/sh -c "/usr/bin/python $SCRIPT"'.

The temporary file is a Python script, as the header shows:

$ head yum 
#!/usr/bin/python -tt
# -*- coding: utf-8 -*-

# (c) 2012, Red Hat, Inc
# Written by Seth Vidal <skvidal at fedoraproject.org>
# (c) 2014, Epic Games, Inc.
#
# This file is part of Ansible
...

The script can afterwards be executed by /usr/bin/python yum or /bin/sh -c 'sudo -u $SUDO_USER /bin/sh -c "/usr/bin/python yum"' respectively:

$ /bin/sh -c 'sudo -u root /bin/sh -c "/usr/bin/python yum"'
{"msg": "", "invocation": {"module_args": {"name": ["httpd"], "list": null, "install_repoquery": true, "conf_file": null, "disable_gpg_check": false, "state": "absent", ...

More detailed information about debugging Ansible can be found at Will Thames’ article “Debugging Ansible for fun and no profit”.

[Howto] Access Red Hat Satellite REST API via Ansible [Update]

As with many tools these days, Red Hat Satellite offers a REST API. Ansible offers a simple way to access the API.

Background

Most of the programs and services developed these days offer a REST API. Red Hat, for example, usually follows the “API first” methodology with most of its products, thus all functions of all programs can be accessed via REST API calls. For example I covered how to access the CloudForms REST API via Python.

While exploring a REST API via Python teaches a lot about the API and how to deal with all the basic tasks around REST communication, in daily enterprise business API calls should be automated – say hello to Ansible.

Ansible offers the URI module to work with generic HTTP requests. It offers various authentication methods, can pass general headers, provides ways to deal with different return codes, and has a generic body field. Together with Ansible’s extensive variable features this makes it the ideal combination for automated REST queries.

Setup

The setup is fairly simple: a Red Hat Satellite server in a newer version (6.1 or newer), Ansible, and that’s it. The URI module comes pre-installed with Ansible.

Since the URI module accesses the target hosts via HTTP, the actual host executing the HTTP calls is the host on which the playbooks run. As a result, the host definition in the playbook needs to be localhost. In that case it doesn’t make sense to gather facts either, so gather_facts: no can be set to save time.

In the module definition itself, it might make sense for test environments to ignore certificate errors if the Satellite server certificate is not properly signed: validate_certs: no. Also, sometimes the Python library stumbles upon the status code 401 to initiate authentication. In that case, the option force_basic_auth: yes might help.

Last but not least, the API itself must be understood. The appropriate documentation is pretty helpful here: Red Hat Satellite API Guide. Especially the numerous examples at the end are a good start to build your own REST calls in Ansible.

Getting values

Getting values via the REST API is pretty easy – the usual URL needs to be queried, and the result is provided as JSON (in this case). The following example playbook asks the Satellite for information about a given host. The output is reduced to the Puppet classes, the number of classes is counted and the result is printed out.

$ cat api-get.yml
---
- name: call API from Satellite
  hosts: localhost
  gather_facts: no
  vars:
    satelliteurl: satellite-server.example.com
    client: helium.example.com

  tasks:
    - name: get modules for given host from satellite 
      uri:
        url: https://{{ satelliteurl }}/api/v2/hosts/{{ client }}
        method: GET 
        user: admin
        password: password
        force_basic_auth: yes 
        validate_certs: no
      register: restdata
    - name: output rest data
      debug: msg="{{ restdata.json.all_puppetclasses | count }}" 

The execution of the playbook shows the number of the installed Puppet classes:

$ ansible-playbook api-get.yml

PLAY [call API from Satellite] ************************************************ 

TASK: [get modules for given host from satellite] ***************************** 
ok: [localhost]

TASK: [output rest data] ****************************************************** 
ok: [localhost] => {
    "msg": "8"
}

PLAY RECAP ******************************************************************** 
localhost                  : ok=2    changed=0    unreachable=0    failed=0

If the Jinja filter string | count is removed, the actual Puppet classes are listed.
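
For example – the same task as above, just without the counting filter:

    - name: output rest data
      debug: msg="{{ restdata.json.all_puppetclasses }}"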

Performing searches

Performing searches is simply another URL, and thus works the exact same way. The following playbook shows a search for all servers which are part of a given Puppet environment:

---
- name: call API from Satellite
  hosts: localhost
  gather_facts: no
  vars:
    satelliteurl: satellite-server.example.com
    clientenvironment: production

  tasks:
    - name: get Puppet environment from Satellite 
      uri:
        url: https://{{ satelliteurl }}/api/v2/hosts/?search=environment={{ clientenvironment }}
        method: GET 
        user: admin
        password: password
        force_basic_auth: yes 
        validate_certs: no
      register: restdata
    - name: output rest data
      debug: msg="{{ restdata.json }}"

Changing configuration: POST

While querying the REST API can already be pretty interesting, automation requires the ability to change values as well. This can be done by changing the method: in the playbook to POST. Also, additional headers are necessary, and a body defining what data will be posted to Satellite.

The following example implements the example curl call from the API guide mentioned above to add another architecture to Satellite:

$ cat api-post.yml
---
- name: call API from Satellite
  hosts: localhost
  gather_facts: no
  vars:
    satelliteurl: satellite-server.example.com

  tasks:
    - name: set additional architecture in Satellite 
      uri:
        url: https://{{ satelliteurl }}/api/architectures
        method: POST
        user: admin
        password: password
        force_basic_auth: yes 
        validate_certs: no
        HEADER_Content-Type: application/json
        HEADER_Accept: application/json,version=2
        body: >
          {"architecture":{"name":"i686"}}
      register: restdata
    - name: output rest data
      debug: msg="{{ restdata }}"

The result can be looked up in the web interface: an architecture of the type i686 can now be found.

Update
Note that the body: > notation, folded scalars, makes it much easier to paste payloads. If the payload is instead provided directly on the same line as body:, all the quotation marks need to be escaped:

body: "{\"architecture\":{\"name\":\"i686\"}}"

Conclusion

Ansible can easily access, query and control the Red Hat Satellite REST API and thus other REST APIs out there as well.

Ansible offers the possibility to automate almost any tool which exposes a REST API. Together with the dynamic variable system, results from one tool can easily be re-used to perform actions in another tool. That way even complex setups can be integrated with each other via Ansible rather easily.

[Howto] Look up of external sources in Ansible

Part of Ansible’s power comes from its easy integration with other systems. In this post I will cover how to look up data from external sources like DNS or Redis.

Background

An automation tool is only as good as its capability to integrate with the already existing environment – thus with other tools. Among various integration options, Ansible offers the possibility to look up variables from external stores like DNS, Redis, etcd or even generic INI or CSV files. This enables Ansible to easily access data which is stored – and changed, managed – outside of Ansible.

Setup

Ansible’s lookup feature is already installed by default.

Queries are executed on the host where the playbook is executed – in the case of Tower this would be the Tower host itself. Thus the node needs access to the resources which need to be queried.

Some lookup functions, for example for DNS or Redis servers, require additional Python libraries – on the host actually executing the queries! On Fedora, the python-dns package is necessary for DNS queries and the python-redis package for Redis queries.

Generic usage

The lookup function can be used the exact same way variables are used: curly brackets surround the lookup function, the result is placed where the variable would be. That means lookup functions can be used in the head of a playbook, inside the tasks, even in templates.

The lookup command itself has to list the plugin as well as the arguments for the plugin:

{{ lookup('plugin','arguments') }}
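
For example, the env plugin reads an environment variable of the host executing the lookup – a quick way to test the syntax:

{{ lookup('env','HOME') }}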

Examples

Files

Entire files can be used as content of a variable. This is simply done via:

vars:
  content: "{{ lookup('file','lorem.txt') }}"

As a result, the variable has the entire content of the file. Note that the lookup of files always searches the files relative to the path of the actual playbook, not relative to the path where the command is executed.

Also, the lookup might fail when the file itself contains quote characters.

CSV

While the file lookup is pretty simple and generic, the CSV lookup module gives the ability to access values of given keys in a CSV file. An optional parameter can identify the appropriate column. For example, if the following CSV file is given:

$ cat gamma.csv
daytime,time,meal
breakfast,7,soup
lunch,12,rice
tea,15,cake
dinner,18,noodles

Now the lookup function for CSV files can access the lines identified by keys which are compared to the values of the first column. The following example looks up the key dinner and gives back the entry of the third column: {{ lookup('csvfile','dinner file=gamma.csv delimiter=, col=2') }}.

Inserted in a playbook, this looks like:

$ ansible-playbook examples/lookup.yml

PLAY [demo lookups] *********************************************************** 

GATHERING FACTS *************************************************************** 
ok: [neon]

TASK: [lookup of a csv file] ************************************************** 
ok: [neon] => {
    "msg": "noodles"
}

PLAY RECAP ******************************************************************** 
neon                       : ok=2    changed=0    unreachable=0    failed=0

The corresponding playbook prints the variable via the debug module:

---
- name: demo lookups
  hosts: neon

  tasks:
    - name: lookup of a csv file
      debug: msg="{{ lookup('csvfile','dinner file=gamma.csv delimiter=, col=2') }}"

DNS

The DNS lookup is particularly interesting in cases where the local DNS provides a lot of information like SSH fingerprints or the MX record.

The DNS lookup plugin is called dig – like the command line client dig. As arguments, the plugin takes a domain name and the DNS type: {{ lookup('dig', 'redhat.com. qtype=MX') }}. Another way to hand over the type argument is via slash: {{ lookup('dig', 'redhat.com./MX') }}.

The result for this example is:

TASK: [lookup of dns dig entries] ********************************************* 
ok: [neon] => {
    "msg": "10 int-mx.corp.redhat.com."
}

Redis

It gets even more interesting when existing databases are queried. Ansible’s lookup supports, for example, Redis databases. The plugin takes the entire URL as argument: redis://$URL:$PORT,$KEY.

For example, to query a local Redis server for the key dinner:

---
tasks:
  - name: lookup of redis entries
    debug: msg="{{ lookup('redis_kv', 'redis://localhost:6379,dinner') }}" 

The result is:

TASK: [lookup of redis entries] *********************************************** 
ok: [neon] => {
    "msg": "noodles"
}

Template

As already mentioned, lookups can not only be used in Playbooks, but also directly in templates. For example, given the template code:

$ cat template.j2
...
Red Hat MX: {{ lookup('dig', 'redhat.com./MX') }}
$ cat template.conf
...
Red Hat MX: 10 mx2.redhat.com.,5 mx1.redhat.com.

Conclusion

As shown, Ansible’s lookup plugins provide many possibilities to integrate Ansible with existing tools and environments which already contain valuable data about the systems. They are easy to use, fit well into the existing Ansible concepts and can be adopted quickly. Just drop a lookup where a variable would be dropped, and it already works.

I am looking forward to more lookup plugins in the future – I’d love to see a generic “http” and a generic “SQL” plugin, even with the ability to provide credentials, although these features can be somewhat realized with already existing plugins.
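
Until then, a generic HTTP lookup can be approximated with the existing pipe plugin and curl – a rough sketch, with a placeholder URL:

vars:
  api_data: "{{ lookup('pipe','curl -s https://www.example.com/api/v2/status') }}"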