[Short Tip] Retrieve your public IP with Ansible

There are multiple situations where you need to know your public IP: be it that you set up your home IT server behind a NAT, or that your legacy enterprise business solution does not work properly without this information because the original developers 20 years ago never expected it to run behind a NAT.

Of course, Ansible can help here as well: there is a tiny, neat module called ipify_facts which does nothing else but retrieve your public IP:

$ ansible localhost -m ipify_facts
localhost | SUCCESS => {
    "ansible_facts": {
        "ipify_public_ip": "23.161.144.221"
    }, 
    "changed": false
}

The return value can be registered as a variable and reused in other tasks:

---
- name: get public IP
  hosts: all 

  tasks:
    - name: get public IP
      ipify_facts:
      register: public_ip
    - name: output
      debug: msg="{{ public_ip }}"
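
Since ipify_facts is a fact module, the IP does not only end up in the registered variable, it is also injected into the host facts. A short sketch of both ways to access the plain value in a follow-up task:

    - name: output plain IP
      debug: msg="{{ public_ip.ansible_facts.ipify_public_ip }} equals {{ ipify_public_ip }}"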

The module by default accesses https://api.ipify.org to get the IP address, but the API URL can be changed via the api_url parameter.
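
For example, if you run your own ipify-compatible service – the URL below is just a placeholder – the call could look like this:

$ ansible localhost -m ipify_facts -a "api_url=https://ipify.internal.example.com"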

[Short Tip] Show all variables of a host

There are multiple sources where variables for Ansible can be defined. Most of them can be shown via the setup module, but there are more.

For example, if you use a dynamic inventory script to access a Satellite server, many variables like the organization are provided via the inventory script – and these are usually not shown by setup.

To get all variables of a host use the following notation:

---
- name: dump all
  hosts: all

  tasks:
  - name: get variables
    debug: var=hostvars[inventory_hostname]

Use this during debug to find out if the variables you’ve set somewhere are actually accessible in your playbooks.
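
The same information can also be queried ad hoc, without writing a playbook:

$ ansible all -m debug -a "var=hostvars[inventory_hostname]"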

I even created a small GitHub repository for this to easily integrate it with Tower.

[Howto] Writing an Ansible module for a REST API

Ansible comes along with a great set of modules. But maybe your favorite tool is not covered yet and you need to develop your own module. This guide shows you how to write an Ansible module – when you have a REST API to speak to.

Background: Ansible modules

Ansible is a great tool to automate almost everything in an IT environment. One of the huge benefits of Ansible is its so-called modules: they provide a way to address automation tasks in the native language of the problem. For example, given a user needs to be created: this is usually done by calling certain commands on the shell. In that case the automation developer has to think about which command line tool needs to be used and which parameters and options need to be provided, and the result is most likely not idempotent. And it's hard to run tests (“checks”) with such an approach.

Enter Ansible's user module: with it the automation developer only has to provide the data needed for the actual problem, like the user name, group name, etc. There is no need to remember the user management tool of the target platform or to look up parameters:

$ ansible server -m user -a "name=abc group=wheel" -b

Ansible comes along with hundreds of modules. But what if your favorite task or tool is not supported by any module? Then you have to write your own Ansible module. If your tool offers a REST API, there are a few things to know that make it much easier to get your module running fine with Ansible. These few things are outlined below.

REST APIs and Python libraries in Ansible modules

According to Wikipedia, REST is:

… the software architectural style of the World Wide Web.

In short, it's a way to write, provide and access an API via usual HTTP tools and libraries (Apache web server, curl, you name it), and it is very common in everything related to the WWW.

To access a REST API via an Ansible module, there are a few things to note. Ansible modules are usually written in Python. The library of choice to access URLs and thus REST APIs in Python is usually urllib. However, the library is not the easiest to use and there are some security aspects to keep in mind when it is used. For these reasons alternative libraries like Python requests came up in the past and are pretty common.

However, using an external library in an Ansible module would add an extra dependency, thus the Ansible developers added their own library inside Ansible to access URLs: ansible.module_utils.urls. This one is already shipped with Ansible – the code can be found at lib/ansible/module_utils/urls.py – and it covers the shortcomings and security concerns of urllib. If you submit a module to Ansible that calls REST APIs, the Ansible developers usually require that you use the built-in library.

Unfortunately, the documentation on the Ansible URL library is currently sparse at best. If you need information about it, look at other modules which use it, like the GitHub, Kubernetes or a10 modules. To cover that documentation gap I will try to cover the most important basics in the following lines – at least as far as I know them.

Creating REST calls in an Ansible module

To access the Ansible URL library right in your module, it needs to be imported in the same way as the basic library:

from ansible.module_utils.basic import *
from ansible.module_utils.urls import *

The main function call to access a URL via this library is open_url. It can take multiple parameters:

def open_url(url, data=None, headers=None, method=None, use_proxy=True,
             force=False, last_mod_time=None, timeout=10, validate_certs=True,
             url_username=None, url_password=None, http_agent=None,
             force_basic_auth=False, follow_redirects='urllib2'):

The parameters in detail are:

  • url: the actual URL, the communication endpoint of your REST API
  • data: the payload for the URL request, for example a JSON structure
  • headers: additional headers, often this includes the content-type of the data stream
  • method: a URL call can be of various methods: GET, DELETE, PUT, etc.
  • use_proxy: if a proxy is to be used or not
  • force: force an update even if a 304 indicates that nothing has changed (I think…)
  • last_mod_time: the time stamp to add to the header in case we get a 304
  • timeout: set a timeout
  • validate_certs: whether certificates should be validated or not; important for test setups where you have self-signed certificates
  • url_username: the user name to authenticate with
  • url_password: the password for the above listed user name
  • http_agent: if you want to set the HTTP agent
  • force_basic_auth: force the usage of basic authentication
  • follow_redirects: determine how redirects are handled

For example, to fire a simple GET to a given source like Google most parameters are not needed and it would look like:

open_url('https://www.google.com',method="GET")

A more sophisticated example is to push actual information to a REST API. For example, if you want to search for the domain example on a Satellite server, you need to change the method to PUT, add a data structure to set the actual search string ({"search":"example"}) and add a corresponding content type as header information ({'Content-Type':'application/json'}). Also, a user name and password must be provided. Given that we access a test system here, certificate validation needs to be turned off as well. The resulting call looks like this:

open_url('https://satellite-server.example.com/api/v2/domains',
         method="PUT",
         url_username="admin",
         url_password="abcd",
         data=json.dumps({"search":"example"}),
         force_basic_auth=True,
         validate_certs=False,
         headers={'Content-Type':'application/json'})

Beware that the data JSON structure needs to be processed by json.dumps – so the module also needs to import the json library. The result of the query can be parsed as JSON and further used as a JSON structure:

resp = open_url(...)
resp_json = json.loads(resp.read())
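
The returned object behaves like a standard urllib response, so – as a sketch, assuming the call went through – the HTTP status code can be checked before parsing:

resp = open_url(...)
if resp.getcode() == 200:
    resp_json = json.loads(resp.read())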

Full example

In the following example, we query a Satellite server to find a so-called environment ID for two given parameters: an organization ID and an environment name. To create a REST call for this task in a module, multiple separate steps have to be taken: first, create the actual URL endpoint. It usually consists of the server name as a variable and the API endpoint as the flexible part which is different in each REST call.

server_name = 'https://satellite.example.com'
api_endpoint = '/katello/api/v2/environments/'
my_url = server_name + api_endpoint

Besides the actual URL, the payload must be pieced together and the headers need to be set according to the content type of the payload – here JSON:

headers = {'Content-Type':'application/json'}
payload = {"organization_id":orga_id,"name":env_name}

Other content types depend on the REST API itself and on what the developer prefers. JSON is widely accepted as a good way to go for REST calls.

Next, we set the user and password and launch the call. The return data from the call is saved in a variable for later analysis.

user = 'abc'
pwd = 'def'
resp = open_url(my_url,
                method="GET",
                headers=headers,
                url_username=user,
                url_password=pwd,
                force_basic_auth=True,
                data=json.dumps(payload))

Last but not least we transform the return value into a JSON construct and analyze it: if the return value does not contain any data – that means the value for the key total is zero – we want the module to exit with an error. Something went wrong, and the automation administrator needs to know that. The module calls the built-in error function module.fail_json. But if the total is not zero, we get the actual environment ID we were looking for with this REST call from the beginning – it is deeply hidden in the JSON structure, btw.

resp_json = json.loads(resp.read())
if resp_json["total"] == 0:
    module.fail_json(msg="Environment %s not found." % env_name)
env_id = resp_json["results"][0]["id"]
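
To see how these pieces fit together in an actual module, here is a minimal, hypothetical skeleton built from the snippets above – the parameter names and the hard-coded server are made-up examples for illustration, not a finished module:

#!/usr/bin/python
# Minimal sketch: look up a Satellite environment ID via the REST API.
# Parameter names and the server URL are examples, not a published module.
import json

from ansible.module_utils.basic import *
from ansible.module_utils.urls import *

def main():
    module = AnsibleModule(
        argument_spec=dict(
            user=dict(required=True),
            pwd=dict(required=True, no_log=True),
            orga_id=dict(required=True),
            env_name=dict(required=True),
        )
    )
    env_name = module.params.get('env_name')
    my_url = 'https://satellite.example.com' + '/katello/api/v2/environments/'
    headers = {'Content-Type':'application/json'}
    payload = {"organization_id":module.params.get('orga_id'),"name":env_name}

    resp = open_url(my_url, method="GET", headers=headers,
                    url_username=module.params.get('user'),
                    url_password=module.params.get('pwd'),
                    force_basic_auth=True, data=json.dumps(payload))

    resp_json = json.loads(resp.read())
    if resp_json["total"] == 0:
        module.fail_json(msg="Environment %s not found." % env_name)
    module.exit_json(changed=False, env_id=resp_json["results"][0]["id"])

if __name__ == '__main__':
    main()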

Summary

It is fairly easy to write Ansible modules to access REST APIs. The most important part to know is that the internal, Ansible-provided library should be used instead of the better known urllib or requests libraries. Also, the actual library documentation is still pretty limited, but that gap is partially filled by the above post.

Ways to provide body payload in Ansible’s URI module [2. Update]

Talking to a REST API requires providing some information, usually in the form of JSON payload. Ansible offers various ways to do that in the URI module in playbooks.

In modern applications, REST APIs are often the main interface to integrate the given app with the existing infrastructure. REST often requires posting JSON structures as payload.

Ansible offers the URI module to talk to REST APIs, and there are multiple ways to add JSON payload to a playbook task, which are shown below.

For example, given that the following arbitrary JSON payload needs to be provided to a REST API via POST:

{
  "mainlevel": {
    "subkey": "finalvalue"
  }
}

The first and for me preferred way to provide JSON payloads is to write down the structure in plain YAML (if possible) and afterwards tell the module to format it as JSON:

HEADER_Content-Type: application/json
status_code: 202
body: 
  mainlevel:
    subkey: finalvalue
body_format: json

Among various reasons this works well because variables can be easily used.
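
For reference, a complete task around this snippet could look like the following sketch – the URL and the task name are placeholders (newer Ansible versions also accept a headers dictionary instead of the HEADER_ prefix):

- name: provide payload to REST API
  uri:
    url: https://app.example.com/api/v1/example
    method: POST
    HEADER_Content-Type: application/json
    status_code: 202
    body:
      mainlevel:
        subkey: finalvalue
    body_format: json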

Another way is to define a variable and then use Jinja2 to format it:

vars:
  mainlevel:
    "subkey": finalvalue
...
    body: ' {{mainlevel|to_json}}'

Caution: note the empty space at the beginning of the body line. It avoids type detection, which tries to check if a string begins with { or [.

Sometimes the payload is the content of a file generated somewhere else. In these cases the best way is to use the lookup plugin to read the file:

body: "{{ lookup('file','myvalues.json') }}"

Of course the lookup plugin can access data from other places as well – for example from a database or a config store, which is a nice way of integrating existing infrastructure with each other via Ansible.
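
If the payload file itself needs to contain variables, the template lookup can be used instead of the file lookup – the file name here is just an example:

body: "{{ lookup('template','myvalues.json.j2') }}"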

A quicker, shorter way is to use folded style:

body: >
  {"mainlevel":{"subkey":"finalvalue"}}

Note that folded style pastes things as they are – but folds single newlines into spaces. So it might be difficult to add variables here.

Therefore, a little bit better suited is the literal style, indicated by the pipe:

body: |
  {
    "mainlevel": {
      "subkey": "finalvalue"
    }
  }

This is probably the easiest way to deal with in many debugging situations where you need to be able to quickly change things in your code.

Last, and honestly something I would try to avoid, is the plain one-liner:

body: "{\"mainlevel\":{\"subkey\":\"finalvalue\"}}

Note that all quotation marks need to be escaped, which makes the line hard to read, hard to maintain and easy to get wrong.

As shown, Ansible is powerful and simple. Thus there are always multiple ways to reach the goal you are aiming for – and it depends on the requirements which solution is the best one.

For more details I can only recommend Understanding multi line strings in YAML and Ansible (Part I & Part II) from adminswerk.de.

Update:
Added how to add body payload from existing files.

2. Update:
Added details about literal style.

Insights into Ansible: environments of executed playbooks

Usually when Ansible Tower executes a playbook everything works just as on the command line. However, in some corner cases the behavior might be different: Ansible Tower runs its playbooks in a specific environment.

Different playbook results in Tower vs CLI

Ansible is a great tool for automation, and Ansible Tower enhances these capabilities by adding centralization, a UI, role-based access control and a REST API. To take advantage of Tower, just import your playbooks and press start – it just works.

At least most of the time: lately I was playing around with the Google Compute Engine (GCE). Ansible provides several GCE modules, thus writing playbooks to control the setup was pretty easy. But while the GCE related playbooks worked on the plain command line, they failed in Tower:

PLAY [create node on GCE] ******************************************************

TASK [launch instance] *********************************************************
task path: /var/lib/awx/projects/_43__gitolite_gce_node_tower_pem_file/gce-node.yml:13
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
  File "/var/lib/awx/.ansible/tmp/ansible-tmp-1461919385.95-6521356859698/gce", line 2573, in <module>
    main()
  File "/var/lib/awx/.ansible/tmp/ansible-tmp-1461919385.95-6521356859698/gce", line 506, in main
    module, gce, inames)
  File "/var/lib/awx/.ansible/tmp/ansible-tmp-1461919385.95-6521356859698/gce", line 359, in create_instances
    external_ip=external_ip, ex_disk_auto_delete=disk_auto_delete, ex_service_accounts=ex_sa_perms)
TypeError: create_node() got an unexpected keyword argument 'ex_can_ip_forward'

fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "gce"}, "parsed": false}

NO MORE HOSTS LEFT *************************************************************
	to retry, use: --limit @gce-node.retry

PLAY RECAP *********************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1 

To me that didn’t make sense at all: the exact same playbook was running on command line. How could that fail in Tower when Tower is only a UI to Ansible itself?

Environment variables during playbook runs

The answer is that Tower runs playbooks within a specific set of environment variables. For example, the GCE login credentials are provided to the playbook and thus to the modules via environment variables:

GCE_EMAIL
GCE_PROJECT
GCE_PEM_FILE_PATH

That means, if you want to debug a playbook and want to provide the login credentials just the way Tower does, the shell command has to be:

GCE_EMAIL=myuser@myproject.iam.gserviceaccount.com GCE_PROJECT=myproject GCE_PEM_FILE_PATH=/tmp/mykey.pem ansible-playbook myplaybook.yml

The error at hand was also caused by an environment variable, though: PYTHONPATH. Tower comes along with a set of Python libraries needed for Ansible. Among them are some which are required by specific modules. In this case, the GCE modules require Apache Libcloud, which is installed with the Ansible Tower bundle. The libraries are installed at /usr/lib/python2.7/site-packages/awx/lib/site-packages – which is not a typical Python path.

For that reason, each playbook is run from within Tower with the environment variable PYTHONPATH="/usr/lib/python2.7/site-packages/awx/lib/site-packages:". Thus, to run a playbook just the same way it is run from within Tower, the shell command needs to be:

PYTHONPATH="/usr/lib/python2.7/site-packages/awx/lib/site-packages:" ansible-playbook myplaybook.yml

This way the GCE error shown above could be reproduced on the command line. So the environment provided by Tower was the problem, while the environment of plain Ansible (and thus plain Python) caused no errors. Tower bundles the library because you cannot expect it to be available in, for example, the default RHEL repositories.

The root cause is that right now Tower still ships with an older version of the libcloud library which is not fully compatible with GCE anymore (GCE is a fast moving target). If you run Ansible on the command line you most likely install libcloud via pip or RPM which in most cases provides a pretty current version.

Workaround for Tower

While upgrading the library makes sense in the mid term, a short term workaround is needed as well. The best way is to first install a recent version of libcloud and second identify the actual task which fails and point that exact task to the new library.

In case of RHEL, enable the EPEL repository, install python-libcloud, and then add the environment variable PYTHONPATH: "/usr/lib/python2.7/site-packages" to the task via the environment option:

- name: launch instance
  gce:
    name: "{{ node_name }}"
    zone: europe-west1-c
    machine_type: "{{ machine_type }}"
    image: "{{ image }}"
  environment:
    PYTHONPATH: "/usr/lib/python2.7/site-packages"