[Short Tip] Call Ansible Tower REST URI – with Ansible


It might sound strange to call the Ansible Tower API right from within Ansible itself. However, if you want to connect several playbooks with each other, or if you use Ansible Tower mainly as an API, this indeed makes sense. To me this use case is interesting because it documents how to access and use the Ansible Tower API.

The following playbook is an example of launching a job in Ansible Tower. The body payload contains an extra variable needed by the job itself – in this example, the name of the VM to be launched.

---
- name: POST to Tower API
  hosts: localhost
  vars:
    job_template_id: 44
    vmname: myvmname

  tasks:
    - name: create additional node in GCE
      uri:
        url: https://tower.example.com/api/v1/job_templates/{{ job_template_id }}/launch/
        method: POST
        user: admin
        password: $PASSWORD
        status_code: 202
        body: 
          extra_vars:
            node_name: "{{ vmname }}"
        body_format: json

Note the status code (202): the URI module needs to know that a non-200 status code signals the proper acceptance of the API call. Also, the job template is identified by its ID. Since Tower shows the ID in the web interface, it is no problem to get the correct one.
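
If the task above is additionally extended with register: launch_result, a follow-up task can poll the job until it has finished. The following is only a sketch: it assumes that the launch response carries the new job ID in a job field and that jobs can be queried at /api/v1/jobs/, which may differ depending on the Tower version.

    - name: wait until the launched job has finished
      uri:
        url: https://tower.example.com/api/v1/jobs/{{ launch_result.json.job }}/
        method: GET
        user: admin
        password: $PASSWORD
      register: job_status
      # assumption: these are the transient job states in this Tower version
      until: job_status.json.status not in ["pending", "waiting", "running"]
      retries: 30
      delay: 10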

[Howto] Workaround failing MongoDB on RHEL/CentOS 7

MongoDB is often installed right from the upstream-provided repositories. In such cases, after recent updates the service might fail to start via systemctl. A workaround requires some SELinux work.

Ansible Tower collects system data inside a MongoDB database. Since MongoDB is not part of RHEL/CentOS, it is installed directly from the upstream MongoDB repositories. However, with recent versions of MongoDB the database might not come up via systemctl:

[root@ansible-demo-tower init.d]# systemctl start mongod
Job for mongod.service failed because the control process exited with error code. See "systemctl status mongod.service" and "journalctl -xe" for details.
[root@ansible-demo-tower init.d]# journalctl -xe
May 03 08:26:00 ansible-demo-tower systemd[1]: Starting SYSV: Mongo is a scalable, document-oriented database....
-- Subject: Unit mongod.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit mongod.service has begun starting up.
May 03 08:26:00 ansible-demo-tower runuser[7266]: pam_unix(runuser:session): session opened for user mongod by (uid=0)
May 03 08:26:00 ansible-demo-tower runuser[7266]: pam_unix(runuser:session): session closed for user mongod
May 03 08:26:00 ansible-demo-tower mongod[7259]: Starting mongod: [FAILED]
May 03 08:26:00 ansible-demo-tower systemd[1]: mongod.service: control process exited, code=exited status=1
May 03 08:26:00 ansible-demo-tower systemd[1]: Failed to start SYSV: Mongo is a scalable, document-oriented database..
-- Subject: Unit mongod.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit mongod.service has failed.
-- 
-- The result is failed.
May 03 08:26:00 ansible-demo-tower systemd[1]: Unit mongod.service entered failed state.
May 03 08:26:00 ansible-demo-tower systemd[1]: mongod.service failed.
May 03 08:26:00 ansible-demo-tower polkitd[11436]: Unregistered Authentication Agent for unix-process:7254:1405622 (system bus name :1.184, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_

The root cause of the problem is that the MongoDB developers do not provide a proper SELinux configuration with their packages; see the corresponding bug report.

A short workaround is to create a proper (more or less) SELinux rule and install it to the system:

[root@ansible-demo-tower ~]# grep mongod /var/log/audit/audit.log | audit2allow -m mongod > mongod.te
[root@ansible-demo-tower ~]# cat mongod.te 

module mongod 1.0;

require {
	type locale_t;
	type mongod_t;
	type ld_so_cache_t;
	class file execute;
}

#============= mongod_t ==============
allow mongod_t ld_so_cache_t:file execute;
allow mongod_t locale_t:file execute;
[root@ansible-demo-tower ~]# grep mongod /var/log/audit/audit.log | audit2allow -M mongod
******************** IMPORTANT ***********************
To make this policy package active, execute:

semodule -i mongod.pp

[root@ansible-demo-tower ~]# semodule -i mongod.pp 
[root@ansible-demo-tower ~]# sudo service mongod start
                                                           [  OK  ]

Keep in mind that rule sets generated by audit2allow are not to be used on production systems. The generated SELinux rules need to be analyzed manually to verify that they cover nothing but the problematic use case.
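
Since the workaround might have to be applied to more than one Tower host, it can of course be rolled out with Ansible itself. The following playbook is a minimal sketch, assuming the generated and manually reviewed mongod.pp lies next to the playbook; the group name towerhosts is a placeholder:

---
- name: install reviewed SELinux policy for MongoDB
  hosts: towerhosts
  become: yes

  tasks:
    - name: copy the reviewed policy package to the host
      copy:
        src: mongod.pp
        dest: /tmp/mongod.pp

    - name: install the SELinux policy module
      command: semodule -i /tmp/mongod.pp

    - name: start MongoDB again
      service:
        name: mongod
        state: started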

Insights into Ansible: environments of executed playbooks

Usually when Ansible Tower executes a playbook, everything works just as on the command line. However, in some corner cases the behavior might differ: Ansible Tower runs its playbooks in a specific environment.

Different playbook results in Tower vs CLI

Ansible is a great tool for automation, and Ansible Tower enhances these capabilities by adding centralization, a UI, role-based access control and a REST API. To take advantage of Tower, just import your playbooks and press start – it just works.

At least most of the time: lately I was playing around with Google Compute Engine (GCE). Ansible provides several GCE modules, so writing playbooks to control the setup was pretty easy. But while the GCE-related playbooks worked on the plain command line, they failed in Tower:

PLAY [create node on GCE] ******************************************************

TASK [launch instance] *********************************************************
task path: /var/lib/awx/projects/_43__gitolite_gce_node_tower_pem_file/gce-node.yml:13
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
  File "/var/lib/awx/.ansible/tmp/ansible-tmp-1461919385.95-6521356859698/gce", line 2573, in <module>
    main()
  File "/var/lib/awx/.ansible/tmp/ansible-tmp-1461919385.95-6521356859698/gce", line 506, in main
    module, gce, inames)
  File "/var/lib/awx/.ansible/tmp/ansible-tmp-1461919385.95-6521356859698/gce", line 359, in create_instances
    external_ip=external_ip, ex_disk_auto_delete=disk_auto_delete, ex_service_accounts=ex_sa_perms)
TypeError: create_node() got an unexpected keyword argument 'ex_can_ip_forward'

fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "gce"}, "parsed": false}

NO MORE HOSTS LEFT *************************************************************
	to retry, use: --limit @gce-node.retry

PLAY RECAP *********************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1 

To me that didn’t make sense at all: the exact same playbook was running fine on the command line. How could it fail in Tower when Tower is only a UI to Ansible itself?

Environment variables during playbook runs

The answer is that Tower runs playbooks with a specific set of environment variables. For example, the GCE login credentials are provided to the playbook – and thus to the modules – via environment variables:

GCE_EMAIL
GCE_PROJECT
GCE_PEM_FILE_PATH

That means, if you want to debug a playbook and provide the login credentials just the way Tower does, the shell command has to be:

GCE_EMAIL=myuser@myproject.iam.gserviceaccount.com GCE_PROJECT=myproject GCE_PEM_FILE_PATH=/tmp/mykey.pem ansible-playbook myplaybook.yml

The error at hand, though, was caused by another environment variable: PYTHONPATH. Tower comes along with a set of Python libraries needed for Ansible, among them some which are required by specific modules. In this case, the GCE modules require Apache Libcloud, which is installed with the Ansible Tower bundle. The libraries are installed at /usr/lib/python2.7/site-packages/awx/lib/site-packages – which is not a typical Python path.

For that reason, each playbook is run from within Tower with the environment variable PYTHONPATH="/usr/lib/python2.7/site-packages/awx/lib/site-packages:". Thus, to run a playbook just the same way it is run from within Tower, the shell command needs to be:

PYTHONPATH="/usr/lib/python2.7/site-packages/awx/lib/site-packages:" ansible-playbook myplaybook.yml

This way the GCE error shown above could be reproduced on the command line. So the environment provided by Tower was the problem, while the environment of plain Ansible (and thus plain Python) caused no errors. Tower bundles the library because it cannot be expected to be available everywhere, for example in the RHEL default repositories.

The root cause is that right now Tower still ships with an older version of the libcloud library which is no longer fully compatible with GCE (GCE is a fast-moving target). If you run Ansible on the command line, you most likely installed libcloud via pip or RPM, which in most cases provides a pretty current version.

Workaround for Tower

While upgrading the library makes sense in the mid term, a short-term workaround is needed as well. The best way is to first install a recent version of libcloud and then identify the task which fails and point that exact task to the new library.

In the case of RHEL, enable the EPEL repository, install python-libcloud and then add the environment variable PYTHONPATH: "/usr/lib/python2.7/site-packages" to the failing task via the environment option, as shown in the example below; a sketch of the package setup follows after it.

- name: launch instance
  gce:
    name: "{{ node_name }}"
    zone: europe-west1-c
    machine_type: "{{ machine_type }}"
    image: "{{ image }}"
  environment:
    PYTHONPATH: "/usr/lib/python2.7/site-packages"
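
The preparatory steps mentioned above can be automated as well. A rough sketch, assuming the epel-release package can be installed directly (on RHEL the repository has to be enabled differently) and that the library is packaged as python-libcloud:

- name: enable the EPEL repository
  yum:
    name: epel-release
    state: present

- name: install a recent libcloud
  yum:
    name: python-libcloud
    state: present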

Useful command line options for ansible-playbook

Ansible provides quite a few useful command line options. Most of them are especially interesting during debugging.

Background

There are three major ways to work with Ansible:

  • launching single tasks with the ansible command
  • executing playbooks via ansible-playbook
  • using Tower to manage and run playbooks

While Tower might be the better option to run Ansible in day-to-day business, and the ansible CLI itself is most likely used only for one-off runs, executing playbooks on the command line often happens during the development of playbooks, when no Tower is available – or during debugging. In such cases there are quite a few useful command line options which might not even be known to the seasoned Ansible user.

Do I say this right? – Syntax checking

Playbooks are written in YAML, and in YAML syntax is crucial – especially indentation:

Data structure hierarchy is maintained by outline indentation.

To check if a playbook is correctly formatted, the option --syntax-check looks at all involved playbooks and verifies the correct syntax. During a syntax check, no playbooks are actually executed.

$ ansible-playbook --syntax-check oraclejdk-destroy.yml
ERROR! Syntax Error while loading YAML.

The error appears to have been in '/home/liquidat/Gits/github/ansible-demo-oraclejdk/oracle-windows-destroy.yml': line 10, column 11,
but may be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

  win_template: src=data/remove-program.j2 dest=C:\\temp\\remove-program.ps1
    - name: remove application
          ^ here

The syntax check helps if a playbook fails for no apparent reason – or if a playbook was edited a lot and it is simply not clear whether everything was moved around correctly.

Whom am I talking to? – Listing affected hosts

With complex playbooks and dynamic inventories it sometimes is hard to say against which hosts a playbook will actually be executed. In such cases, the option --list-hosts will output a list of affected hosts, including the name of the actual play and the pattern with which the hosts were chosen:

$ ansible-playbook --list-hosts oraclejdk-destroy.yml

playbook: oraclejdk-destroy.yml

  play #1 (windows): remove OracleJDK on Windows	TAGS: []
    pattern: [u'windows']
    hosts (1):
      radon

  play #2 (rhel): remove OracleJDK on RHEL	TAGS: []
    pattern: [u'rhel']
    hosts (2):
      neon
      helium
...

This also works together with the -l option and might help debugging your inventory.
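
For example, to check which hosts of the rhel group would actually be affected, --list-hosts can be combined with -l:

$ ansible-playbook --list-hosts -l rhel oraclejdk-destroy.yml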

Again, no tasks are actually executed when the list of hosts is queried.

What’s going on here? – List tasks

Another thing which can get pretty complicated is the list of tasks actually executed: think of complex playbooks including other complex playbooks. This can quickly become difficult to follow – here the option --list-tasks comes in handy. It lists what will be done, showing the names of the tasks but not executing any of them on the target nodes:

$ ansible-playbook --list-tasks oraclejdk-destroy.yml

playbook: oraclejdk-destroy.yml

  play #1 (windows): remove OracleJDK on Windows	TAGS: []
    tasks:
      copy Java remove script to temp	TAGS: []
      remove application	TAGS: []
      remove temp dir in Windows	TAGS: []

  play #2 (rhel): remove OracleJDK on RHEL	TAGS: []
    tasks:
      remove java dir	TAGS: []
...

What’s that thing? – List all tags

Besides the tasks, the tags used in a playbook can be listed as well.

$ ansible-playbook --list-tags setup-control.yml

playbook: setup-control.yml

  play #1 (tuzak): 	TAGS: []
      TASK TAGS: [base_setup, db, imap, ldap, mail, oc, smtp]
...

Again, this option helps provide an overview of what a playbook has to offer and how to use it. And again, this option does not execute any task on the target node.

Are you sure? – Running in test mode

Ansible provides a so-called check mode, also called dry run mode (in Tower, for example). Invoked via --check, the check mode does not alter the target nodes, but tries to output what would change and what would not. Note however that this needs to be supported by the modules used, and not all modules support it.

For example, the following listing shows several tasks not supporting the dry run, which is indicated by the “skipping” line.

$ ansible-playbook --check oraclejdk-setup.yml

PLAY [set up OracleJDK on Windows] *********************************************

TASK [setup] *******************************************************************
ok: [radon]

TASK [set up temp dir in Windows] **********************************************
skipping: [radon]

TASK [copy JDK to Windows client] **********************************************
skipping: [radon]

TASK [run exe installer] *******************************************************
skipping: [radon]
...
PLAY [set up OracleJDK on RHEL] ************************************************

TASK [setup] *******************************************************************
ok: [helium]
ok: [neon]

TASK [copy JDK to RHEL client] *************************************************
skipping: [helium]
skipping: [neon]
....

This is quite useful to get an idea of what impact the run of a playbook might have on target nodes. The lack of support in several modules dampens the positive effect a bit, though.

But since it can be combined with the --diff option (see below), it can be quite handy in certain situations.

Let me have a look at that… – Going through tasks step by step

Imagine that a playbook runs without errors, but somehow the result is not exactly what was expected. In such cases one way to debug everything is to go through the tasks one at a time, step by step, checking the state of all involved components after each task. This can be done with the option --step.

$ ansible-playbook --step oraclejdk-setup.yml

PLAY [set up OracleJDK on Windows] *********************************************
Perform task: TASK: setup (y/n/c): y

Perform task: TASK: setup (y/n/c): *********************************************

TASK [setup] *******************************************************************
ok: [radon]
Perform task: TASK: set up temp dir in Windows (y/n/c): y

Perform task: TASK: set up temp dir in Windows (y/n/c): ************************

TASK [set up temp dir in Windows] **********************************************
changed: [radon]
Perform task: TASK: copy JDK to Windows client (y/n/c): 
...

This is incredibly helpful on complex setups involving multiple nodes.

And yes, this time the tasks are actually executed on the target node!

Get me right there! – Starting playbooks in the middle

During debugging and development it might make sense to start playbooks not at the beginning, but somewhere in between. For example, because a playbook failed at task 14, and you don’t want to go through the first 13 tasks again. Starting at a given task requires the appropriate name of the task – and the option --start-at-task:

$ ansible-playbook --start-at-task="run exe installer" oraclejdk-setup.yml

PLAY [set up OracleJDK on Windows] *********************************************

TASK [setup] *******************************************************************
ok: [radon]

TASK [run exe installer] *******************************************************
ok: [radon]
...

In this example, the two tasks “set up temp dir in Windows” and “copy JDK to Windows client” are skipped, and the playbook starts directly at “run exe installer”. Note that skipped tasks are not shown or listed at all, and that the setup is run nevertheless.

As shown above, the proper name of each task is listed with the --list-tasks option.

Get down to business! – Showing diffs

Ansible is often used to deploy files, especially using templates. Usually, when a file is changed, Ansible just highlights that a change occurred – but not what was actually changed. In such cases, the option --diff comes in handy: it shows the diff in typical patch form:

$ ansible-playbook --diff examples/template.yml

PLAY [template example] ********************************************************

TASK [setup] *******************************************************************
ok: [helium]

TASK [copy template] ***********************************************************
changed: [helium]
--- before: /tmp/template.conf
+++ after: dynamically generated
@@ -1,2 +1,3 @@
 hostname: ansible-demo-helium
-bumble: bee
+foo: bar
+MX: 10 mx2.redhat.com.,5 mx1.redhat.com.

PLAY RECAP *********************************************************************
helium                     : ok=2    changed=1    unreachable=0    failed=0

This can even be combined with the option --check: in such cases, the diff is printed, but the change is not performed on the target node. That's pretty handy indeed.
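
For example, to see what a template deployment would change without touching the node, both options can be given at once:

$ ansible-playbook --check --diff examples/template.yml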

That was interesting! – Summary

To summarize, ansible-playbook has quite a few options to help debug playbooks. The fact that many of them do not alter the target nodes makes it possible to use them on production systems as well (but with care, as always). They also help a lot when it comes to understanding unknown playbooks, for example from other departments or coworkers.

[Howto] Access Red Hat Satellite REST API via Ansible [Update]

Like most tools these days, Red Hat Satellite offers a REST API. Ansible offers a simple way to access that API.

Background

Most programs and functions developed these days offer a REST API. Red Hat, for example, usually follows an "API first" methodology with most of its current products, so all functions of all programs can be accessed via REST API calls. For example, I covered how to access the CloudForms REST API via Python.

While exploring a REST API via Python teaches a lot about the API and how to deal with all the basic tasks around REST communication, in daily enterprise business API calls should be automated – say hello to Ansible.

Ansible offers the URI module to work with generic HTTP requests. It supports various authentication mechanisms, can pass custom headers, provides ways to deal with different return codes and has a generic body field. Together with Ansible’s extensive variable features this makes it the ideal combination for automated REST queries.

Setup

The setup is fairly simple: a Red Hat Satellite server in a recent version (6.1 or newer), Ansible, and that's it. The URI module comes with Ansible out of the box.

Since the URI module sends its HTTP requests from the machine executing the playbook, the host definition in the playbook needs to be localhost. In that case it doesn't make sense to gather facts either, so gather_facts: no can be set to save time.

In the module definition itself, it might make sense for test environments to ignore certificate errors if the Satellite server certificate is not properly signed: validate_certs: no. Also, sometimes the underlying Python library stumbles over the 401 status code used to initiate authentication. In that case the option force_basic_auth: yes might help.

Last but not least, the API itself must be understood. The appropriate documentation is pretty helpful here: the Red Hat Satellite API Guide. Especially the numerous examples at the end are a good starting point for building your own REST calls in Ansible.

Getting values

Getting values via the REST API is pretty easy – the appropriate URL needs to be queried and the result is provided as JSON (in this case). The following example playbook asks the Satellite for the information about a given host. The output is reduced to the Puppet classes, the number of classes is counted, and the result is printed out.

$ cat api-get.yml
---
- name: call API from Satellite
  hosts: localhost
  gather_facts: no
  vars:
    satelliteurl: satellite-server.example.com
    client: helium.example.com

  tasks:
    - name: get modules for given host from satellite 
      uri:
        url: https://{{ satelliteurl }}/api/v2/hosts/{{ client }}
        method: GET 
        user: admin
        password: password
        force_basic_auth: yes 
        validate_certs: no
      register: restdata
    - name: output rest data
      debug: msg="{{ restdata.json.all_puppetclasses | count }}" 

The execution of the playbook shows the number of installed Puppet classes:

$ ansible-playbook api-get.yml

PLAY [call API from Satellite] ************************************************ 

TASK: [get ip and name from satellite] **************************************** 
ok: [localhost]

TASK: [output rest data] ****************************************************** 
ok: [localhost] => {
    "msg": "8"
}

PLAY RECAP ******************************************************************** 
localhost                  : ok=2    changed=0    unreachable=0    failed=0

If the Jinja filter string | count is removed, the actual Puppet classes are listed.

Performing searches

Performing a search simply means querying another URL, and thus works the exact same way. The following playbook shows a search for all servers which are part of a given Puppet environment:

---
- name: call API from Satellite
  hosts: localhost
  gather_facts: no
  vars:
    satelliteurl: satellite-server.example.com
    clientenvironment: production

  tasks:
    - name: get Puppet environment from Satellite 
      uri:
        url: https://{{ satelliteurl }}/api/v2/hosts/?search=environment={{ clientenvironment }}
        method: GET 
        user: admin
        password: password
        force_basic_auth: yes 
        validate_certs: no
      register: restdata
    - name: output rest data
      debug: msg="{{ restdata.json }}"

Changing configuration: POST

While querying the REST API can already be pretty interesting, automation requires the ability to change values as well. This can be done by changing the method: in the playbook to POST. Additional headers are also necessary, as well as a body defining what data will be posted to Satellite.

The following example implements the curl call from the API guide mentioned above to add another architecture to Satellite:

$ cat api-post.yml
---
- name: call API from Satellite
  hosts: localhost
  gather_facts: no
  vars:
    satelliteurl: satellite-server.example.com

  tasks:
    - name: set additional architecture in Satellite 
      uri:
        url: https://{{ satelliteurl }}/api/architectures
        method: POST
        user: admin
        password: password
        force_basic_auth: yes 
        validate_certs: no
        HEADER_Content-Type: application/json
        HEADER_Accept: application/json,version=2
        body: >
          {"architecture":{"name":"i686"}}
      register: restdata
    - name: output rest data
      debug: msg="{{ restdata }}"

The result can be looked up in the web interface: an architecture of the type i686 can now be found.

Update
Note that the body: > notation, a folded scalar, makes it much easier to paste the payload. If the payload is instead provided directly on the same line as body:, all the quotation marks need to be escaped:

body: "{\"architecture\":{\"name\":\"i686\"}}"

Conclusion

Ansible can easily access, query and control the Red Hat Satellite REST API and thus other REST APIs out there as well.

Ansible makes it possible to automate almost any tool which exposes a REST API. Together with the dynamic variable system, results from one tool can easily be re-used to perform actions in another tool. That way even complex setups can be integrated with each other via Ansible rather easily.

[Howto] Look up of external sources in Ansible

Part of Ansible’s power comes from its easy integration with other systems. In this post I will cover how to look up data from external sources like DNS or Redis.

Background

An automation tool is only as good as its ability to integrate with the already existing environment – and thus with other tools. Among other options, Ansible offers the possibility to look up variables from external stores like DNS, Redis, etcd or even generic INI or CSV files. This enables Ansible to easily access data which is stored – and changed, managed – outside of Ansible.

Setup

Ansible’s lookup feature is already installed by default.

Queries are executed on the host where the playbook is run – in the case of Tower this would be the Tower host itself. Thus that node needs access to the resources which need to be queried.

Some lookup functions, for example for DNS or Redis servers, require additional Python libraries – on the host actually executing the queries! On Fedora, the python-dns package is necessary for DNS queries and the python-redis package for Redis queries.
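
For example, on Fedora both libraries can be installed in one go (on RHEL/CentOS the package names might differ and EPEL might be needed):

$ dnf install python-dns python-redis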

Generic usage

The lookup function can be used the exact same way variables are used: curly brackets surround the lookup function, the result is placed where the variable would be. That means lookup functions can be used in the head of a playbook, inside the tasks, even in templates.

The lookup command itself has to list the plugin as well as the arguments for the plugin:

{{ lookup('plugin','arguments') }}

Examples

Files

Entire files can be used as content of a variable. This is simply done via:

vars:
  content: "{{ lookup('file','lorem.txt') }}"

As a result, the variable has the entire content of the file. Note that the lookup of files always searches the files relative to the path of the actual playbook, not relative to the path where the command is executed.

Also, the lookup might fail when the file itself contains quote characters.

CSV

While the file lookup is pretty simple and generic, the CSV lookup module gives the ability to access the value of a given key in a CSV file. An optional parameter can identify the appropriate column. For example, given the following CSV file:

$ cat gamma.csv
daytime,time,meal
breakfast,7,soup
lunch,12,rice
tea,15,cake
dinner,18,noodles

Now the lookup function for CSV files can access lines identified by a key, which is compared against the values of the first column. The following example looks up the key dinner and returns the entry of the third column: {{ lookup('csvfile','dinner file=gamma.csv delimiter=, col=2') }}.

Inserted in a playbook, this looks like:

$ ansible-playbook examples/lookup.yml

PLAY [demo lookups] *********************************************************** 

GATHERING FACTS *************************************************************** 
ok: [neon]

TASK: [lookup of a cvs file] ************************************************** 
ok: [neon] => {
    "msg": "noodles"
}

PLAY RECAP ******************************************************************** 
neon                       : ok=2    changed=0    unreachable=0    failed=0

The corresponding playbook prints the variable via the debug module:

---
- name: demo lookups
  hosts: neon

  tasks:
    - name: lookup of a cvs file
      debug: msg="{{ lookup('csvfile','dinner file=gamma.csv delimiter=, col=2') }}"

DNS

The DNS lookup is particularly interesting in cases where the local DNS provides a lot of information like SSH fingerprints or the MX record.

The DNS lookup plugin is called dig – like the command line client dig. As arguments, the plugin takes a domain name and the DNS type: {{ lookup('dig', 'redhat.com. qtype=MX') }}. Another way to hand over the type argument is via slash: {{ lookup('dig', 'redhat.com./MX') }}
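
A corresponding task – sketched here with the debug module – could look like this:

- name: lookup of dns dig entries
  debug: msg="{{ lookup('dig', 'redhat.com./MX') }}"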

The result for this example is:

TASK: [lookup of dns dig entries] ********************************************* 
ok: [neon] => {
    "msg": "10 int-mx.corp.redhat.com."
}

Redis

It gets even more interesting when existing databases are queried. Ansible lookups support, for example, Redis databases. The plugin takes the entire URL as its argument: redis://$URL:$PORT,$KEY.

For example, to query a local Redis server for the key dinner:

---
tasks:
  - name: lookup of redis entries
    debug: msg="{{ lookup('redis_kv', 'redis://localhost:6379,dinner') }}" 

The result is:

TASK: [lookup of redis entries] *********************************************** 
ok: [neon] => {
    "msg": "noodles"
}

Template

As already mentioned, lookups can not only be used in playbooks, but also directly in templates. For example, given the template code:

$ cat template.j2
...
Red Hat MX: {{ lookup('dig', 'redhat.com./MX') }}
$ cat template.conf
...
Red Hat MX: 10 mx2.redhat.com.,5 mx1.redhat.com.

Conclusion

As shown, the lookup plugins of Ansible provide many possibilities to integrate Ansible with existing tools and environments which already contain valuable data about the systems. The feature is easy to use and fits well into the existing Ansible concepts: just drop a lookup where a variable would be dropped, and it already works.

I am looking forward to more lookup plugins in the future – I’d love to see a generic “http” and a generic “SQL” plugin, ideally with the ability to provide credentials, although these features can be somewhat realized with already existing modules.
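
As a stopgap, a generic http lookup can be approximated with existing pieces: the uri module together with register provides similar data, although only within tasks and not directly in variable definitions or templates. A minimal sketch, with example.com standing in for a real endpoint:

- name: fetch data via the uri module instead of a lookup
  uri:
    url: https://www.example.com/api/value
    return_content: yes
  register: httpdata

- name: use the fetched content like a looked-up variable
  debug: msg="{{ httpdata.content }}"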