Ansible community modules for Oracle DB & ASM

Besides the almost one thousand modules shipped with Ansible, there are many more community modules out there developed independently. A remarkable example is a set of modules to manage Oracle DBs.

The Ansible module system is a great way to improve data center automation: automation tasks do not have to be programmed “manually” in shell code, but can simply be executed by calling the appropriate module with the necessary parameters. Besides the fact that an automation user does not have to remember the shell code, the modules are usually also idempotent: a module can be called multiple times and only changes something when it is needed.

This only works when a module for the given task exists. The list of Ansible modules is huge, but does not cover all tasks out there. For example, quite a few middleware products are not covered by Ansible modules (yet?). But there are also community modules out there which are not part of the Ansible package, but are nevertheless of high quality and actively developed.

A good example of such 3rd party modules are the Oracle DB & ASM modules developed by oravirt aka Mikael Sandström, in a community fashion. Oracle DBs are quite common in the daily enterprise IT business. And since automation is not about configuring single servers, but about integrating all parts of a business process, Oracle DBs should also be part of the automation. Here the extensive set of Ansible modules comes in handy. According to the README (shortened):

  • oracle_user
    • Creates & drops a user.
    • Grants privileges only
  • oracle_tablespace
    • Manages normal (permanent), temp & undo tablespaces (create, drop, make read only/read write, offline/online)
    • Tablespaces can be created as bigfile, autoextended
  • oracle_grants
    • Manages privileges for a user
    • Grants/revokes privileges
    • Handles roles/sys privileges properly.
    • The grants can be added as a string (dba,’select any dictionary’,’create any table’), or in a list (e.g. for use with with_items)
  • oracle_role
    • Manages roles in the database
  • oracle_parameter
    • Manages init parameters in the database (i.e. alter system set parameter…)
    • Also handles underscore parameters. That will require using mode=sysdba, to be able to read the X$ tables needed to verify the existence of the parameter.
  • oracle_services
    • Manages services in an Oracle database (RAC/Single instance)
  • oracle_pdb
    • Manages pluggable databases in an Oracle container database
    • Creates/deletes/opens/closes the pdb
    • saves the state if you want it to. Default is yes
    • Can place the datafiles in a separate location
  • oracle_sql
    • 2 modes: sql or script
    • Executes arbitrary sql or runs a script
  • oracle_asmdg
    • Manages ASM diskgroup state. (absent/present)
    • Takes a list of disks and makes sure those disks are part of the DG. If a disk is removed from the list it will be removed from the DG.
  • oracle_asmvol
    • Manages ASM volumes. (absent/present)
  • oracle_ldapuser
    • Synchronises users/role grants from LDAP/Active Directory to the database
  • oracle_privs
    • Manages system and object level grants
    • Object level grants support wildcards, so now it is possible to grant access to all tables in a schema and maintain it automatically!
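
To give a first impression, a task using one of these modules could look like the following sketch. Note that this is for illustration only – the parameter names used here (hostname, service_name, schema, and so on) are assumptions in the style of the README, so check the actual module documentation before use:

- name: ensure that a schema user exists
  oracle_user:
    hostname: db01.example.com     # assumed: DB host to connect to
    service_name: orcl             # assumed: service name of the instance
    user: system                   # assumed: privileged account used to connect
    password: manager
    schema: myapp                  # assumed: the user/schema to manage
    schema_password: secret123
    state: present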

I have not yet had the chance to test the modules, but I think they are worth a look. The amount of quality code, the existing documentation and also the ongoing development show an active and healthy project, developing important and certainly relevant modules. Please note: these modules are not part of the Ansible package, nor part of any offering from Oracle or anyone else. So use them at your own risk, they probably will eat your data. And kittens!

So, if you are dealing with Oracle DBs, these modules might be worth a look. And I hope they will be pushed upstream soon.


[Short Tip] Fix mount problems in RHV during GlusterFS mounts


When using Red Hat Virtualization or oVirt together with GlusterFS, there might be a strange error during the first creation of a storage domain:

Failed to add Storage Domain xyz.

One of the rather easy to fix reasons might be a permission problem: an initially exported Gluster file system belongs to the user root. However, the virtualization manager (oVirt-M or RHV-M, respectively) does not have root rights and as such needs another ownership.

In such cases, the fix is to mount the exported volume & set the ownership to the RHV-M user:

$ sudo mount -t glusterfs 192.168.122.241:my-vol /mnt
$ cd /mnt/
$ sudo chown -R 36:36 .

Afterwards, the volume can be mounted properly. Some more general details can be found at RH KB 78503.

[Howto] Writing an Ansible module for a REST API

Ansible comes along with a great set of modules. But maybe your favorite tool is not covered yet and you need to develop your own module. This guide shows you how to write an Ansible module – when you have a REST API to speak to.

Background: Ansible modules

Ansible is a great tool to automate almost everything in an IT environment. One of the huge benefits of Ansible are the so-called modules: they provide a way to address automation tasks in the native language of the problem. For example, given a user needs to be created: this is usually done by calling certain commands on the shell. In that case the automation developer has to think about which command line tool needs to be used, which parameters and options need to be provided, and the result is most likely not idempotent. And it is hard to run tests (“checks”) with such an approach.

Enter Ansible user modules: with them the automation developer only has to provide the data needed for the actual problem like the user name, group name, etc. There is no need to remember the user management tool of the target platform or to look up parameters:

$ ansible server -m user -a "name=abc group=wheel" -b

Ansible comes along with hundreds of modules. But what if your favorite task or tool is not supported by any module? Then you have to write your own Ansible module. If your tool supports a REST API, there are a few things to know which make it much easier to get your module running fine with Ansible. These few things are outlined below.

REST APIs and Python libraries in Ansible modules

According to Wikipedia, REST is:

… the software architectural style of the World Wide Web.

In short, it’s a way to write, provide and access an API via usual HTTP tools and libraries (Apache web server, curl, you name it), and it is very common in everything related to the WWW.

To access a REST API via an Ansible module, there are a few things to note. Ansible modules are usually written in Python. The library of choice to access URLs and thus REST APIs in Python is usually urllib. However, the library is not the easiest to use and there are some security topics to keep in mind when it is used. For these reasons alternative libraries like Python requests came up in the past and are pretty common.

However, using an external library in an Ansible module would add an extra dependency, thus the Ansible developers added their own library inside Ansible to access URLs: ansible.module_utils.urls. This one is already shipped with Ansible – the code can be found at lib/ansible/module_utils/urls.py – and it covers the shortcomings and security concerns of urllib. If you submit a module to Ansible calling REST APIs the Ansible developers usually require that you use the inbuilt library.

Unfortunately, the documentation on the Ansible URL library is currently sparse at best. If you need information about it, look at other modules like the GitHub, Kubernetes or a10 modules. To cover that documentation gap I will try to cover the most important basics in the following lines – at least as far as I know them.

Creating REST calls in an Ansible module

To access the Ansible urls library right in your modules, it needs to be imported in the same way as the basic library is imported in the module:

from ansible.module_utils.basic import *
from ansible.module_utils.urls import *

The main function call to access a URL via this library is open_url. It can take multiple parameters:

def open_url(url, data=None, headers=None, method=None, use_proxy=True,
             force=False, last_mod_time=None, timeout=10, validate_certs=True,
             url_username=None, url_password=None, http_agent=None,
             force_basic_auth=False, follow_redirects='urllib2'):

The parameters in detail are:

  • url: the actual URL, the communication endpoint of your REST API
  • data: the payload for the URL request, for example a JSON structure
  • headers: additional headers, often this includes the content-type of the data stream
  • method: a URL call can be of various methods: GET, DELETE, PUT, etc.
  • use_proxy: if a proxy is to be used or not
  • force: force an update even if a 304 indicates that nothing has changed (I think…)
  • last_mod_time: the time stamp to add to the header in case we get a 304
  • timeout: set a timeout
  • validate_certs: if certificates should be validated or not; important for test setups where you have self signed certificates
  • url_username: the user name to authenticate
  • url_password: the password for the above listed username
  • http_agent: if you want to set the HTTP agent
  • force_basic_auth: force the usage of basic authentication
  • follow_redirects: determine how redirects are handled

For example, to fire a simple GET at a given source like Google, most parameters are not needed and the call would look like:

open_url('https://www.google.com',method="GET")

A more sophisticated example is to push actual information to a REST API. For example, if you want to search for the domain example on a Satellite server you need to change the method to PUT, add a data structure to set the actual search string ({"search":"example"}) and add a corresponding content type as header information ({'Content-Type':'application/json'}). Also, a username and password must be provided. Given that we access a test system here, certificate validation needs to be turned off as well. The resulting call looks like this:

open_url('https://satellite-server.example.com/api/v2/domains',
         method="PUT",
         url_username="admin",
         url_password="abcd",
         data=json.dumps({"search":"example"}),
         force_basic_auth=True,
         validate_certs=False,
         headers={'Content-Type':'application/json'})

Beware that the data JSON structure needs to be processed by json.dumps. The result of the query can be parsed as JSON and further used as a data structure:

resp = open_url(...)
resp_json = json.loads(resp.read())

Full example

In the following example, we query a Satellite server to find a so-called environment ID for two given parameters, an organization ID and an environment name. To create a REST call for this task in a module, multiple separate steps have to be done: first, create the actual URL endpoint. This usually consists of the server name as a variable and the API endpoint as the flexible part which is different in each REST call.

server_name = 'https://satellite.example.com'
api_endpoint = '/katello/api/v2/environments/'
my_url = server_name + api_endpoint

Besides the actual URL, the payload must be pieced together and the headers need to be set according to the content type of the payload – here json:

headers = {'Content-Type':'application/json'}
payload = {"organization_id":orga_id,"name":env_name}

Other content types depend on the REST API itself and on what the developer prefers. JSON is widely accepted as a good way to go for REST calls.

Next, we set the user and password and launch the call. The return data from the call are saved in a variable to analyze later on.

user = 'abc'
pwd = 'def'
resp = open_url(my_url,
                method="GET",
                headers=headers,
                url_username=user,
                url_password=pwd,
                force_basic_auth=True,
                data=json.dumps(payload))

Last but not least we transform the return value into a JSON construct and analyze it: if the return value does not contain any data – that means the value for the key total is zero – we want the module to exit with an error. Something went wrong, and the automation administrator needs to know that. The module calls the built-in error function module.fail_json. But if the total is not zero, we get the actual environment ID we were looking for with this REST call – it is deeply hidden in the JSON structure, btw.

resp_json = json.loads(resp.read())
if resp_json["total"] == 0:
    module.fail_json(msg="Environment %s not found." % env_name)
env_id = resp_json["results"][0]["id"]
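
Putting all the pieces together, a complete module could look like the following minimal sketch. Everything here is illustrative: the module name, the argument names and the final exit handling are assumptions to show the overall structure, not the exact code of an existing module:

#!/usr/bin/python
# minimal sketch of an Ansible module querying a REST API
import json

from ansible.module_utils.basic import *
from ansible.module_utils.urls import *

def main():
    module = AnsibleModule(
        argument_spec=dict(
            server=dict(required=True),
            user=dict(required=True),
            pwd=dict(required=True, no_log=True),
            orga_id=dict(required=True),
            env_name=dict(required=True),
        )
    )

    # build the endpoint URL, payload and headers as shown above
    my_url = module.params['server'] + '/katello/api/v2/environments/'
    payload = {"organization_id": module.params['orga_id'],
               "name": module.params['env_name']}
    headers = {'Content-Type': 'application/json'}

    resp = open_url(my_url, method="GET", headers=headers,
                    url_username=module.params['user'],
                    url_password=module.params['pwd'],
                    force_basic_auth=True, data=json.dumps(payload))

    # analyze the result: fail if nothing was found, else return the ID
    resp_json = json.loads(resp.read())
    if resp_json["total"] == 0:
        module.fail_json(msg="Environment %s not found." % module.params['env_name'])
    module.exit_json(changed=False, env_id=resp_json["results"][0]["id"])

if __name__ == '__main__':
    main()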

Summary

It is fairly easy to write Ansible modules to access REST APIs. The most important part to know is that the internal, Ansible-provided library should be used, instead of the better known urllib or requests libraries. Also, the actual library documentation is still pretty limited, but that gap is hopefully partially filled by this post.

Impressions from #AnsibleFest London 2016 [Update]

The #AnsibleFest took place in London, and I was lucky enough to attend. This post shares some impressions from the event, together with interesting announcements and stories.

Update: The slides of the various presentations are now available.

Preface

The #AnsibleFest London 2016 took place near the O2 Arena and lasted the entire day. The main highlight of the conference was the network automation now coming along with Ansible. Other very interesting talks covered very helpful tips about managing Windows servers, the 101 on modules, how to implement continuous deployment, the journey of a French bank towards DevOps, how Cisco devices can be managed and how to handle immutable infrastructure. All focused on Ansible, of course.

But while the conference took place on Thursday, the #AnsibleFest had already started the evening before: at the social event Ansible Social.
And it was a wonderful evening: many people from Ansible, partners, coworkers from Red Hat and others were there to enjoy drinks, food and chats. Getting to know many of the people there went pretty well – it was a friendly bunch meeting at a pretty nice place.

Keynote

Upon arrival at the conference area one of the sponsor desks immediately caught the eye: Cisco!
For everyone following Ansible news closely it was obvious that networking would be a big topic, especially since it was about to be featured twice during the day, once by Peter Sprygada from Ansible and later on by Fabrizio Maccioni from Cisco.

And this impression was confirmed when Todd Barr came to the stage and talked about the current state of Ansible and what to expect in the near future: networking is a big topic for Ansible right now, they are pushing resources into it and already hinted that there would be a larger announcement during the #AnsibleFest. During the presentation the strengths of Ansible were of course emphasized again: that it is simple to set up, to understand and to deploy. And that it does not require agents. While I do have my past with Puppet and still like it as a tool in certain circumstances, I must admit that I had to smile at the slide about agents.
Todd Barr at AnsibleFest
I have to admit, for many customers and many setups this is in fact true: they do not want agents for various reasons. And Ansible can deliver actual results without any need for a client.

The future of Ansible

Next up was Bill Nottingham, setting out the road for the future of Ansible. A focus is certainly better integration of Windows (no beta tag anymore!), better testing – and Python 3 support! It was acknowledged that there are more and more distributions out there not providing any Python 2 anymore, and that they need to be catered for.
Future of Ansible by Bill Nottingham
Ansible Tower was also covered, of course, and has very promising improvements coming up as well: the interface will be streamlined, the credentials and rights system will be improved, and there will be (virtual) appliances to get Ansible Tower out of the box in an instant. But the really exciting part is more large-scale, enterprise focused: Ansible Tower will be able to cater for federated setups, meaning distributed replication of Ansible Tower commands via proxy Towers.
Federated Ansible Tower
Don’t expect this all in the next weeks, but we might see many of these features already in Ansible Tower 3.0. And it was mentioned that there might be a release in early fall.

Scaling abilities are indeed needed – many data centers these days have more than one location, or are spread over several departments and thus need partially independent setups to manage the infrastructure. At the same time, there are Ansible customers who are using Ansible with 50k nodes and more out there, and they have a demand for fine grained, federated infrastructure setups as well.

Networking with Ansible

While the upcoming Ansible Tower had some exciting news, the talk about networking support by Peter Sprygada really blew everyone away. Right at the moment of presentation Red Hat issued a press release that they bring DevOps to the network via Ansible:

[Red Hat] is bringing DevOps to networking by extending Ansible – its powerful IT automation and DevOps platform – to include native agentless support for automating heterogeneous network infrastructure devices using the same simple human and machine readable automation language that Ansible provides to IT teams.

Peter picked that up and presented a whole lot of technical details. The most important one was that there are now several networking core modules for commands, configuration and templates.
Ansible networking automation support
They cover a huge load of devices:

  • Arista EOS
  • Cisco NXOS
  • Cisco IOS
  • Cisco IOSXR
  • Cumulus Linux
  • Juniper Junos
  • OpenSwitch

Some of these devices were already supported by the raw module or by libraries out there, but fully integrated modules supported by Ansible and the network device manufacturers themselves take networking automation to a whole new level. If you are interested, get the latest Ansible networking right away.

Ansible in a visual effects studio

The next talk was by the customer Industrial Light and Magic, a visual effects studio using Ansible to handle their massive setup. It showed in particular how many obstacles you face in your daily routine running data centers and deploying software all the time – and how to tackle them using Ansible and Ansible’s features. I forgot to take a photo, though…

Ansible & Windows

John Hawkesworth from M*Modal came up to the stage next and delivered a brilliant speech about all the things you need to know when managing Windows with Ansible. Briefly talking about the differences between Ansible 1.9 and 2.0, he went over lessons learned like why the backslash should be escaped every time just to be sure (\t …), but also gave his favourite developments and modules quite some attention. Turns out the registration module can come in very handy!
Ansible and Windows

Writing modules, 101

Next up, James Cammarata introduced how to write modules for Ansible. Impressively, this was demonstrated live with a module he had written the days before to control his Philips Hue lights – they could be controlled via Ansible right on stage.
Ansible Modules 101
Besides the great live demo the major points of the presentation were:

  • It is quite easy to develop modules.
  • Start off simply, get more complex the further you go down the road.
  • Write a module when your Playbook for a single task exceeds ten lines of code.
  • Write in Python/Powershell when you want it to be integrated with Ansible Core.
  • Write in any language you want if you won’t share it anyway.

While I am sure that other module developers might see some of these points differently, it gives a rather good idea of what to keep in mind when approaching the topic.

Of course, the code for the Philips Hue Ansible module is available on Github.

Continuous deployment

Continuous integration is a huge topic in DevOps, and thus especially with Ansible. Steve Smith of Atlassian picked up the topic and discussed what needs to be taken into account when Ansible is used to enable continuous integration.
Continuous Integration with Ansible
And there were many memorable quotes during the talk which made it simply fun to watch. I particularly like this one:

Release features, not dumps.

It means: do release when you have something worth releasing – not at arbitrary dates. It is a strong statement against release or maintenance windows and does make sense: after all, why should you release when it’s not worth it? And you certainly will not wait if it is important!

Also, since many maintenance windows are implemented because doing maintenance is hard:

Everything which is hard should be done more often, not less.

Combined with the fact that very complex, but successful enterprises do 300 releases an hour, it is clear that continuous deployment is possible – but what is often needed is the right culture and probably, at some point, a great and simple to use tool able to cater for the needs of complex infrastructure.

Ansible accelerates deployment

The next talk focused on a vertical which might usually say that it is too regulated and “special” to integrate DevOps: finance. Fabrice Bernhard presented the journey of the bank Société Générale introducing DevOps principles with the help of Ansible to become more agile, more flexible and able to respond quicker to changes. The reason for that was summarized in a very good quote:

It’s not the big that eat the small. It’s the fast that eat the slow.

This is true for all the enterprises out there: software enabled companies have attacked almost any given business out there (Amazon vs Walmart, Uber vs cabs, Airbnb vs hotels and hostels, etc.). And there are enough analysts right now who see the banking market as the next big thing which might be seriously disrupted due to mobile payment, blockchain technology and other IT based developments.
Ansible and the challenges for businesses

But that also shows what the actual change must be about: the new companies do not take over because they have the better technology. They take over because they have a different culture and approach problems totally differently. And thus, to keep up with the development, change your culture. Or, as said on stage:

Automation is about cultural change. Move fast and break things!

DevOps discussion

After these two powerful talks the audience had a chance to catch some breath during the interactive DevOps discussion. It mainly picked up the topics from the previous talks, and it showed that everyone in the room was pretty sure that DevOps as such is more or less a name for the underlying change that enterprises need to adopt – or they will fail in the long term, no matter how big they are.

Managing your Cisco data center – with Ansible

As already mentioned, Fabrizio Maccioni from Cisco had the second talk about managing networks with Ansible.
Ansible and Cisco
Interestingly enough, he mentioned that the interest in supporting Ansible was brought to them by customers who were already managing part of their infrastructure with Ansible. A key point is that Ansible does not require an agent: while Cisco does support some configuration management agents on their hardware, it seems that most of their customers would rather not use them.
Ansible is good because agentless

Immutable infrastructure

The last presentation was held by Vik Bhatti from Beamly. Their problem is that sometimes they have to massively scale in seconds. Literally, in seconds. That requires them to have images of machines up and running in no time. They do this with Ansible, having the playbooks right on the images on one hand, and using Ansible to control their image build process on the other. Actually, the image builder is Packer and it uses Ansible to partially build the image.

As a result, down the line they have images ready to deploy and can extend their environment very, very, very quickly. Since they are able to respond that fast, they were able to cut down hardware costs massively.

Final discussions, happy hour

The final panel dealt mainly with questions about Open Source Tower (it will be there eventually, but no fixed date) and similar questions. After that, everyone went to enjoy drinks and a beautiful skyline.
AnsibleFest skyline and happy hour

Conclusion

In conclusion the #AnsibleFest was a great success, in terms of the people I met as well as in terms of the technical discussions. I can’t wait to get my hands on the networking modules. I’d like to thank the people from Ansible for making this event possible, and of course my employer Red Hat for making it possible to visit it.

[Howto] Access Red Hat Satellite REST API via Ansible [Update]

As with most tools these days, Red Hat Satellite offers a REST API. Ansible offers a simple way to access the API.

Background

Most of the programs and functions developed these days offer a REST API. Red Hat, for example, usually follows the “API first” methodology with most of its products, thus all functions of all programs can be accessed via REST API calls. For example, I have already covered how to access the CloudForms REST API via Python.

While exploring a REST API via Python teaches a lot about the API and how to deal with all the basic tasks around REST communication, in daily enterprise business API calls should be automated – say hello to Ansible.

Ansible offers the URI module to work with generic HTTP requests. It supports various authentication methods, can pass along general headers, provides ways to deal with different return codes and has a generic body field. Together with Ansible’s extensive variable features this makes the ideal combination for automated REST queries.

Setup

The setup is fairly simple: a Red Hat Satellite server in a newer version (6.1 or newer), Ansible, and that’s it. The URI module comes included with Ansible.

Since the URI module accesses the target hosts via HTTP, the actual host executing the HTTP commands is the host on which the playbook runs. As a result, the host definition in the playbook needs to be localhost. In that case it doesn’t make sense to gather facts, either, so gather_facts: no can be set to save time.

In the module definition itself, it might make sense for test environments to ignore certificate errors if the Satellite server certificate is not properly signed: validate_certs: no. Also, sometimes the Python library stumbles upon the status code 401 when it should initiate authentication. In that case, the option force_basic_auth: yes might help.

Last but not least, the API itself must be understood. The appropriate documentation is pretty helpful here: Red Hat Satellite API Guide. Especially the numerous examples at the end are a good start to build your own REST calls in Ansible.

Getting values

Getting values via the REST API is pretty easy – the usual URL needs to be queried and the result is provided as JSON (in this case). The following example playbook asks the Satellite server for information about a given host. The output is reduced to the Puppet classes, the number of classes is counted, and the result is printed out.

$ cat api-get.yml
---
- name: call API from Satellite
  hosts: localhost
  gather_facts: no
  vars:
    satelliteurl: satellite-server.example.com
    client: helium.example.com

  tasks:
    - name: get modules for given host from satellite 
      uri:
        url: https://{{ satelliteurl }}/api/v2/hosts/{{ client }}
        method: GET 
        user: admin
        password: password
        force_basic_auth: yes 
        validate_certs: no
      register: restdata
    - name: output rest data
      debug: msg="{{ restdata.json.all_puppetclasses | count }}" 

The execution of the playbook shows the number of installed Puppet classes:

$ ansible-playbook api-get.yml

PLAY [call API from Satellite] ************************************************ 

TASK: [get modules for given host from satellite] ***************************** 
ok: [localhost]

TASK: [output rest data] ****************************************************** 
ok: [localhost] => {
    "msg": "8"
}

PLAY RECAP ******************************************************************** 
localhost                  : ok=2    changed=0    unreachable=0    failed=0

If the Jinja filter string | count is removed, the actual Puppet classes are listed.
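
For example, the debug task from the playbook above could simply output the list itself:

    - name: output rest data
      debug: msg="{{ restdata.json.all_puppetclasses }}"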

Performing searches

Performing searches is simply a matter of querying another URL, and thus works the exact same way. The following playbook shows a search for all servers which are part of a given Puppet environment:

---
- name: call API from Satellite
  hosts: localhost
  gather_facts: no
  vars:
    satelliteurl: satellite-server.example.com
    clientenvironment: production

  tasks:
    - name: get Puppet environment from Satellite 
      uri:
        url: https://{{ satelliteurl }}/api/v2/hosts/?search=environment={{ clientenvironment }}
        method: GET 
        user: admin
        password: password
        force_basic_auth: yes 
        validate_certs: no
      register: restdata
    - name: output rest data
      debug: msg="{{ restdata.json }}"

Changing configuration: POST

While querying the REST API can already be pretty interesting, automation requires the ability to change values as well. This can be done by changing the method: in the playbook to POST. Also, additional headers are necessary, and a body defining what data will be posted to Satellite.

The following example implements the curl example from the API guide mentioned above to add another architecture to Satellite:

$ cat api-post.yml
---
- name: call API from Satellite
  hosts: localhost
  gather_facts: no
  vars:
    satelliteurl: satellite-server.example.com

  tasks:
    - name: set additional architecture in Satellite 
      uri:
        url: https://{{ satelliteurl }}/api/architectures
        method: POST
        user: admin
        password: password
        force_basic_auth: yes 
        validate_certs: no
        HEADER_Content-Type: application/json
        HEADER_Accept: application/json,version=2
        body: >
          {"architecture":{"name":"i686"}}
      register: restdata
    - name: output rest data
      debug: msg="{{ restdata }}"

The result can be looked up in the web interface: an architecture of the type i686 can now be found.

Update
Note that the body: > notation (a folded scalar) makes it much easier to paste the payload. If you provide the payload directly on the same line instead, all the quotation marks need to be escaped:

body: "{\"architecture\":{\"name\":\"i686\"}}"

Conclusion

Ansible can easily access, query and control the Red Hat Satellite REST API and thus other REST APIs out there as well.

Ansible offers the possibility to automate almost any tool which exposes a REST API. Together with the dynamic variable system, results from one tool can easily be re-used to perform actions in another tool. That way even complex setups can be integrated with each other via Ansible rather easily.

[Howto] Look up of external sources in Ansible

Part of Ansible’s power comes from its easy integration with other systems. In this post I will cover how to look up data from external sources like DNS or Redis.

Background

A tool for automation is only as good as its ability to integrate with the already existing environment – thus with other tools. Among various ways, Ansible offers the possibility to look up Ansible variables from external stores like DNS, Redis, etcd or even generic INI or CSV files. This enables Ansible to easily access data which are stored – and changed, managed – outside of Ansible.

Setup

Ansible’s lookup feature is already installed by default.

Queries are executed on the host where the playbook is executed – in case of Tower this would be the Tower host itself. Thus this node needs access to the resources which need to be queried.

Some lookup functions, for example for DNS or Redis servers, require additional Python libraries – on the host actually executing the queries! On Fedora, the python-dns package is necessary for DNS queries and the package python-redis for Redis queries.

Generic usage

The lookup function can be used the exact same way variables are used: curly brackets surround the lookup function, the result is placed where the variable would be. That means lookup functions can be used in the head of a playbook, inside the tasks, even in templates.

The lookup command itself has to list the plugin as well as the arguments for the plugin:

{{ lookup('plugin','arguments') }}

Examples

Files

Entire files can be used as content of a variable. This is simply done via:

vars:
  content: "{{ lookup('file','lorem.txt') }}"

As a result, the variable has the entire content of the file. Note that the lookup of files always searches the files relative to the path of the actual playbook, not relative to the path where the command is executed.

Also, the lookup might fail when the file itself contains quote characters.

CSV

While the file lookup is pretty simple and generic, the CSV lookup module gives the ability to access values of given keys in a CSV file. An optional parameter can identify the appropriate column. For example, if the following CSV file is given:

$ cat gamma.csv
daytime,time,meal
breakfast,7,soup
lunch,12,rice
tea,15,cake
dinner,18,noodles

Now the lookup function for CSV files can access the lines identified by keys which are compared to the values of the first column. The following example looks up the key dinner and gives back the entry of the third column: {{ lookup('csvfile','dinner file=gamma.csv delimiter=, col=2') }}.

Run inside a playbook, this produces the following output:

$ ansible-playbook examples/lookup.yml

PLAY [demo lookups] *********************************************************** 

GATHERING FACTS *************************************************************** 
ok: [neon]

TASK: [lookup of a csv file] ************************************************** 
ok: [neon] => {
    "msg": "noodles"
}

PLAY RECAP ******************************************************************** 
neon                       : ok=2    changed=0    unreachable=0    failed=0

The corresponding playbook gives out the variable via the debug module:

---
- name: demo lookups
  hosts: neon

  tasks:
    - name: lookup of a csv file
      debug: msg="{{ lookup('csvfile','dinner file=gamma.csv delimiter=, col=2') }}"

DNS

The DNS lookup is particularly interesting in cases where the local DNS provides a lot of information like SSH fingerprints or the MX record.

The DNS lookup plugin is called dig – like the command line client dig. As arguments, the plugin takes a domain name and the DNS type: {{ lookup('dig', 'redhat.com. qtype=MX') }}. Another way to hand over the type argument is via slash: {{ lookup('dig', 'redhat.com./MX') }}
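
Embedded in a playbook, such a lookup could look like the following task – remember that the executing host needs the python-dns library mentioned in the setup section:

tasks:
  - name: lookup of dns dig entries
    debug: msg="{{ lookup('dig', 'redhat.com. qtype=MX') }}"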

The result for this example is:

TASK: [lookup of dns dig entries] ********************************************* 
ok: [neon] => {
    "msg": "10 int-mx.corp.redhat.com."
}

Redis

It gets even more interesting when existing databases are queried. Ansible lookup supports for example Redis databases. The plugin takes as argument the entire URL: redis://$URL:$PORT,$KEY.

For example, to query a local Redis server for the key dinner:

---
- name: demo lookups
  hosts: neon

  tasks:
    - name: lookup of redis entries
      debug: msg="{{ lookup('redis_kv', 'redis://localhost:6379,dinner') }}"

The result is:

TASK: [lookup of redis entries] *********************************************** 
ok: [neon] => {
    "msg": "noodles"
}

Template

As already mentioned, lookups can not only be used in Playbooks, but also directly in templates. For example, given the template code:

$ cat template.j2
...
Red Hat MX: {{ lookup('dig', 'redhat.com./MX') }}
$ cat template.conf
...
Red Hat MX: 10 mx2.redhat.com.,5 mx1.redhat.com.

Conclusion

As shown, the lookup plugins of Ansible provide many possibilities to integrate Ansible with existing tools and environments which already contain valuable data about the systems. The feature is easy to use and integrates well with the existing Ansible concepts: just drop a lookup where a variable would be dropped, and it already works.

I am looking forward to more lookup plugins in the future – I’d love to see a generic “HTTP” and a generic “SQL” plugin, ideally with the ability to provide credentials, although these features can be somewhat realized with already existing modules.

[Howto] Introduction to Ansible variables

To become more flexible, Ansible offers the possibility to use variables in loops, but also to use information the target system provides.

Background

Ansible uses variables to enable more flexibility in playbooks and roles. They can be used to loop through a set of given values, access various information like the hostname of a system and replace certain strings in templates by system specific values. Variables are provided through the inventory, by variable files, overwritten on the command line and set in Tower.

But note that variable names have some restrictions in Ansible:

Variable names should be letters, numbers, and underscores. Variables should always start with a letter.

If the variable in question is a dictionary, a key/value pair, it can be referenced both by bracket and dot notation:

foo.bar
foo['bar']
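
A short, hypothetical example with a dictionary variable illustrates both notations:

vars:
  foo:
    bar: baz

tasks:
  - name: access a dictionary value
    debug: msg="{{ foo.bar }} is the same as {{ foo['bar'] }}"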

The question is however: how to use variables? And how to get them in the first place?

Variables and loops

Loops are arguably among the most common use cases of variables in general – the same is true for Ansible. While loops do not use variables provided externally, I’d like to start with them to give a first idea of how variables are used. To copy a set of files it is either possible to write a task for each file or to just loop through them:

tasks:
  - name: copy files
    copy: src={{ item }} dest=/tmp/{{ item }}
    with_items:
      - alpha
      - beta

This already shows simple basics: variables can be used in module arguments, and they are referenced via curly brackets {{ }}.

Variables and templates

More importantly, variables can be used to substitute keys in configuration files with system or run-time specific values. Imagine a file containing the actual host name which is simply copied over from a central source to the clients. In case of one hundred machines there would be one hundred copies of the configuration file on the server, all almost identical except for the corresponding host name. That is a waste of time and space.

It’s better to have one copy of the configuration file, with a variable as placeholder instead of the host names – this is where templates come into play:

$ cat template.j2
My host name is {{ ansible_hostname }}.

The Ansible module to use templates and activate variable substitution is the template module:

tasks:
  - name: copy template
    template: src=template.j2 dest="/tmp/abcapp.conf"

When this task is run against a set of systems, they all get a file called abcapp.conf containing the individual host name of the given system.

This is again a simple example, but shows the basics: in templates variables are also referenced by {{ }}. Templates can become much more sophisticated, but I will try to cover that in another post.

Using variables in conditions

Variables can be used as conditions, thus ensuring that certain tasks are only run when, for example, the requested variable on a given host is set to a certain value:

tasks:
  - name: install Apache on Solaris
    pkg5: name=web/server/apache-24
    when: ansible_os_family == "Solaris"

  - name: install Apache on RHEL
    yum:  name=httpd
    when: ansible_os_family == "RedHat"

In this case, the first task is only applied on Solaris machines, while the second is only run on Red Hat Enterprise Linux servers.

Getting variables from the system

Using variables is one thing, but they need to be defined first. So far I have covered some use cases of variables, but not where to get them.

Ansible already defines a rich set of variables, individual for each system: whenever Ansible is run on a system all facts and information about the system are gathered and set as a variable. The available variables can be output via the setup module:

$ ansible neon -m setup
neon | success >> {
    "ansible_facts": {
        "ansible_all_ipv4_addresses": [
            "192.168.122.203"
        ], 
        "ansible_all_ipv6_addresses": [
            "fe80::5054:ff:feba:9db3"
        ], 
        "ansible_architecture": "x86_64", 
        "ansible_bios_date": "04/01/2014", 
...

All these variables can be used in templates, but also in the playbooks themselves as mentioned above in the conditions example. Roles can take advantage of these variables as well, of course.

Getting variables from the command line

Another way to define variables is to call Ansible playbooks with the option --extra-vars:

$ ansible-playbook --extra-vars "cli_var=production" system-setup.yml

The reference is again via the {{ }} brackets:

$ cat template.j2
environment: {{ cli_var }}.
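
Note that --extra-vars also accepts JSON structures as well as a reference to a variable file prefixed with @, which comes in handy for more complex data:

$ ansible-playbook --extra-vars '{"cli_var":"production"}' system-setup.yml
$ ansible-playbook --extra-vars "@setupvariables.yml" system-setup.yml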

Setting variables in playbooks

A more direct way is to define variables in playbooks directly, with the key vars:

...
hosts: all 
vars:
  play_var: bar 

tasks:
...

This can be extended by including an actual YAML file containing variables via vars_files (note that include_vars, in contrast, is a task, not a play keyword):

...
hosts: all
vars_files:
  - setupvariables.yml

tasks:
...

Including further files with more variables comes in extremely handy when the variables for each system are kept in a system specific file, like $HOSTNAME.yml, because the included file name can again be a variable:

...
hosts: all
vars_files:
  - "{{ ansible_hostname }}.yml"

tasks:
...

In this case, the playbook reads in specific variables for each corresponding host – this way it is possible to set for example different virtual host names for Apache servers, or different backup systems in case of separated data centers.

Setting variables in the inventory

Sometimes it might make more sense to define specific variables in the already set up inventory:

[clients]
helium invent_var=helium_123
neon invent_var=bar

The inventory more or less treats any argument which is not Ansible specific as a host variable.
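
Such a host variable can afterwards be referenced in playbooks and templates just like any other variable:

tasks:
  - name: show the inventory variable
    debug: msg="{{ invent_var }}"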

It is also possible to set variables for entire host groups:

[clients]
helium
neon

[clients:vars]
invent_var=group-foo

Setting variables in the inventory makes sense when different teams use the same set of Ansible roles and playbooks but have different machine setups which need specific treatment at each location.

Setting variables on a system: local facts

Variables can also be set by dumping specialized files onto the system: when Ansible accesses a remote system it checks for the directory /etc/ansible/facts.d and all files ending in .fact are read. The files can be of various formats – INI, JSON, or even executables which return JSON code. This offers the possibility to add even generic fact providers via scripting.

For example:

$ cat /etc/ansible/facts.d/variables.fact 
[system]
foo=bar
dim=dum

The variables can be displayed via the setup module:

$ ansible neon -m setup
...
      "type": "loopback"
  }, 
  "ansible_local": {
      "variables": {
          "system": {
              "dim": "dum", 
              "foo": "bar"
          }
     }
  }, 
  "ansible_machine": "x86_64", 
...
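
As mentioned above, a fact file can also be an executable returning JSON – for example a small Python script. The following sketch is hypothetical; the file must be executable and print valid JSON to stdout:

$ cat /etc/ansible/facts.d/dynamic.fact
#!/usr/bin/python
# hypothetical executable fact provider: computes a value at run time
# and hands it to Ansible as JSON on stdout
import json

print(json.dumps({"system": {"role": "webserver"}}))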

Using the results of tasks: registered variables

The results of a task during a playbook run can also be stored inside a variable – together with conditionals this enables a playbook to react to the results of the given task with other tasks.

For example, given that an httpd service needs to run. If the service does not run, the entire server should be powered down immediately to ensure that no data corruption takes place. The corresponding playbook checks the httpd service, ignores errors but instead analyses the result and powers down the machine if the service cannot be started:

---
- name: register example
  hosts: all 
  sudo: yes

  tasks:
    - name: start service
      service: name=httpd state=started
      ignore_errors: True
      register: service_result

    - name: shutdown
      command: "shutdown -h +1m"
      when: service_result | failed

Another example would be to trigger a function to remove the host from the loadbalancer. Or to check if the database is available and, if not, immediately close down the front firewall.

Accessing variables of other hosts

Last but not least it might be interesting to access the variables (read: facts) of other hosts. This can of course only be done if the facts are actually available.

If this is given, the data can be accessed via the hostvars key. The value for the key is the name given in the inventory.

Groups from within the inventory can also be accessed. That makes it possible to loop through a list of hosts in a group and for example gather all IP addresses or host names or other data.

The following template first accesses the host name of the machine “tower” and afterwards collects all the epoch data of all machines:

Tower name is: {{ hostvars['tower']['ansible_hostname'] }}
The epochs of the clients are:
{% for host in groups['clients'] %}
- {{ hostvars[host]['ansible_date_time']['epoch'] }}
{% endfor %}

The code for the loop is an actual statement of the Jinja2 template engine. The engine also offers if statements and other common operations.

This comes in handy in simple cases like filling /etc/hosts, but can also empower admins to for example fill in the data of a loadbalancer or a firewall configuration.

Conclusion

Variables are a very powerful feature within Ansible to enrich the functionality it provides. Together with templates and the Jinja2 template engine, the possibilities are almost endless. And sooner or later every admin will leave the simple Ansible calls and playbooks behind and start diving into variables, templates, and loops through variables of other hosts – to make system automation even easier.