[Short Tip] Flatten nested dict/list structures in Ansible with json_query

A few days ago I was asked how to best deal with structures in Ansible which mix dictionaries and lists. Basically, the following example was provided, and the question remained how to deal with it – for example, how to flatten it:

    myhash:
      cloud1:
        region1:
          - name: "city1"
          - size: "large"
          - param: "alpha"
        region2:
          - name: "city2"
          - size: "small"
          - param: "beta"
      cloud2:
        region1:
          - name: "city1"
          - size: "large"
          - param: "gamma"

I was wondering a lot how to deal with this – after all, dict2items only deals with dicts and fails when it reaches the lists in there. I also fooled around with the map filter, but most of my results required previous knowledge about the data structure and only worked when providing “cloud1.region1” or similar.

The solution was the json_query filter: it is based on JMESPath and can deal with the above mentioned structure via list and object projections:

  tasks:
  - name: Projections using json_query
    debug:
      msg: "Item value is: {{ item }}"
    loop: "{{ myhash|json_query(projection_query)|list }}"
    vars:
      projection_query: "*.*[]"

And indeed, the loop does create a simplified output of all the elements in this nested structure:

TASK [Projections using json_query] **********************************************************
ok: [localhost] => (item=[{'name': 'city1'}, {'size': 'large'}, {'param': 'alpha'}]) => {
    "msg": "Item value is: [{'name': 'city1'}, {'size': 'large'}, {'param': 'alpha'}]"
}
ok: [localhost] => (item=[{'name': 'city2'}, {'size': 'small'}, {'param': 'beta'}]) => {
    "msg": "Item value is: [{'name': 'city2'}, {'size': 'small'}, {'param': 'beta'}]"
}
ok: [localhost] => (item=[{'name': 'city1'}, {'size': 'large'}, {'param': 'gamma'}]) => {
    "msg": "Item value is: [{'name': 'city1'}, {'size': 'large'}, {'param': 'gamma'}]"
}

Of course, some knowledge is still needed to make this work: you need to know if you are projecting on a list or on a dictionary. So if your data structure changes on that level between executions, you might need something else.

Image by Andrew Martin from Pixabay

[Howto] Using the new Podman API

Podman is a daemonless container engine to develop, run and manage OCI containers. In a recent version the API was rewritten and now offers a REST interface as well as a docker compatible endpoint.

In case you never heard of Podman before, it is certainly worth a look. Besides offering a more secure drop-in replacement for many Docker functions, it can also manage pods and thus provides a container experience more aligned with what Kubernetes uses. It can even understand Kubernetes YAML (see podman-play-kube), easing the transition from single host container development over to fully fledged container management environments. Last but not least it is among the tools supporting the newest features in the container space like cgroups v2.

Background: Podman API

Of course Podman is not perfect – due to the focus on Kubernetes YAML there is no support for docker-compose files (though alternatives exist), networking and routing based on names is not as simple as on Docker (read more about Podman container networking), and last but not least, the API used to be different – making it hard to migrate solutions depending on the Docker API.

This changed: recently, a new API was merged:

The new API is a simpler implementation based on HTTP/REST. We provide two basic groups of endpoints. The first one is for libpod; the second is for Docker compatibility, to ease adoption. 

New API coming for Podman

So how can I access the new API and fool around with it?

If you are familiar with Podman, or read carefully, the first question is: where is this API running if Podman is daemonless? And in fact, an API service needs to be started explicitly:

$ podman system service --timeout 5000

This starts the API on a UNIX socket. Other options, like a TCP socket or running the service without a timeout, are also possible; the documentation provides examples.
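
For example, exposing the API on a TCP socket should look roughly like this – note that the exact URI syntax may differ between Podman versions, so check the documentation of your version:

$ podman system service --timeout 5000 tcp:localhost:8888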

How to use the Docker API endpoint

Let’s use the Docker API endpoint. To talk to a UNIX socket based REST API a recent curl (version >= 7.40) is quite helpful:

$ curl --unix-socket $XDG_RUNTIME_DIR/podman/podman.sock http://localhost/images/json
[{"Containers":1,"Created":1583300892,"Id":"8c2e0da7c436e45be5ebf2adf26b41d13939190bd186214a4d45c30485071f9f","Labels":{"license":"MIT","name":"fedora","vendor":"Fedora Project","version":"31"},"ParentId":...

Note that here we are speaking to the rootless service, thus the UNIX domain socket is in the user runtime directory. Also, with very recent curl versions localhost has to be provided in the URL, otherwise curl does not output anything!

The response is a JSON listing, which is not easily readable. Simplify it with the help of Python (and silence curl’s progress output with the silent flag):

$ curl -s --unix-socket $XDG_RUNTIME_DIR/podman/podman.sock http://localhost/containers/json|python -m json.tool
[
    {
        "Id": "4829e030ab1beb83db07dbc5e51481cb66562f57b79dd9eb3069dfcde91019ed",
        "Names": [
            "/87faf76aea6a-infra"
...

So what can you do with the API? Podman tries to recreate most of the Docker API, so you can basically use the Docker API documentation to see what should be possible. Note though that not all API endpoints are supported, since Podman does not provide all functions Docker offers.
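
For example, pulling an image through the Docker-compatible endpoint should roughly work like this – a sketch based on the Docker API documentation, using the image name from above:

$ curl -s --unix-socket $XDG_RUNTIME_DIR/podman/podman.sock \
    -X POST "http://localhost/images/create?fromImage=registry.fedoraproject.org/fedora:31"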

How to use the Podman API endpoint

As mentioned, the API provides two groups of endpoints: the Docker-compatible endpoint, and a Podman-specific endpoint. This second API is necessary for multiple reasons: first, Podman has functions which are alien to Docker and thus not part of the Docker API. The pod function is the most notable here. Another reason is that an independent API enables the Podman developers to further innovate at their own pace, and to change the API when needed or wanted.

The API for Podman can be reached via curl as mentioned above. However, there are two notable differences: first, the Podman endpoint is marked via an additional “libpod” string in the API URI, and second, the Podman API is always versioned. To list the images as shown above, but via Podman’s own API, the following call is necessary:

$ curl -s --unix-socket $XDG_RUNTIME_DIR/podman/podman.sock http://localhost/v1.24/libpod/images/json
[{"Id":"8c2e0da7c436e45be5ebf2adf26b41d13939190bd186214a4d45c30485071f9f","RepoTags":["registry.fedoraproject.org/fedora:latest"],"Created":1583300892,"Size":199632198,"Labels":{"license":"MIT","name":"fedora","vendor":"Fedora ...

For pods, the endpoint is for example /pods instead of /images:

$ curl -s --unix-socket $XDG_RUNTIME_DIR/podman/podman.sock http://localhost/v1.24/libpod/pods/json|python -m json.tool
[
    {
        "Cgroup": "user.slice",
        "Containers": [
            {
                "Id": "1510dca23d2d15ae8be1eeadcdbfb660cbf818a69d5780705cd6535d97a4a578",
                "Names": "wonderful_ardinghelli",
                "Status": "running"
            },
            {
                "Id": "6c05c20a42e6987ac9f78b277a9d9152ab37dd05e3bfd5ec9e675979eb93bf0e",
                "Names": "eff81a37b4b8-infra",
                "Status": "running"
            }
        ],
        "Created": "2020-04-19T21:45:17.838549003+02:00",
        "Id": "eff81a37b4b85e92916613239001cddc2ba42f3595236586f7462492be0ac5fc",
        "InfraId": "6c05c20a42e6987ac9f78b277a9d9152ab37dd05e3bfd5ec9e675979eb93bf0e",
        "Name": "testme",
        "Namespace": "",
        "Status": "Running"
    }
]

Currently there is no documentation of the API available – or at least none at the level of the current Docker API documentation. But hopefully that will change soon.

Takeaways

Podman providing a Docker API is a great step for people who depend on the Docker API but nevertheless want to switch to Podman. But providing a unique, yet simple to consume REST API for Podman itself is equally great, because it makes it easy to integrate Podman processes into existing tools and frameworks.

Just don’t forget that the API is still in development!

Featured image by Magnascan from Pixabay

Getting Started with Ansible Security Automation: Investigation Enrichment

Last November we introduced Ansible security automation as our answer to the lack of integration across the IT security industry. Let’s have a closer look at one of the scenarios where Ansible can facilitate typical operational challenges of security practitioners.

A big portion of security practitioners’ daily activity is dedicated to investigative tasks. Enrichment is one of those tasks, and it can be both repetitive and time-consuming, making it a perfect candidate for automation. Streamlining these processes can free up analysts to focus on more strategic tasks, accelerate the response in time-sensitive situations and reduce human errors. However, in many large organizations the multiple security solutions involved in these activities are not integrated with each other. Hence, different teams may be in charge of different aspects of IT security, sometimes with no processes in common.

That often leads to manual work and interaction between people of different teams, which can be error-prone and, above all, slow. So when something suspicious happens and further attention is needed, security teams spend a lot of valuable time operating many different security solutions and coordinating work with other teams, instead of focusing on the suspicious activity directly.

In this blog post we have a closer look at how Ansible can help to overcome these challenges and support investigation enrichment activities. In the following example we’ll see how Ansible can be used to enable programmatic access to information like logs coming from technologies that may not be integrated into a SIEM – here enterprise firewalls and an intrusion detection and prevention system (IDPS).

Simple Demo Setup

To showcase the aforementioned scenario we created a simplified, very basic demo setup. It includes two security solutions providing information about suspicious traffic, as well as a SIEM: we use a Check Point Next Generation Firewall (NGFW) and a Snort IDPS as the security solutions providing information. The SIEM to gather and analyze those data is IBM QRadar.

Also, from a machine called “attacker” we will simulate a potential attack pattern on the target machine on which the IDPS is running.

This is just a basic demo setup, a real world setup of an Ansible security automation integration would look different, and can feature other vendors and technologies.

Logs: crucial, but distributed

Now imagine you are a security analyst in an enterprise. You were just informed of an anomaly in an application, showing suspicious log activities. For example, in our little demo we curl a certain endpoint of the web server which we conveniently called “web_attack_simulation”:

$ sudo grep web_attack /var/log/httpd/access_log
172.17.78.163 - - [22/Sep/2019:15:56:49 +0000] "GET /web_attack_simulation HTTP/1.1" 200 22 "-" "curl/7.29.0"
...

As a security analyst you know that anomalies can be the sign of a potential threat. You have to determine if this is a false positive that can simply be dismissed, or an actual threat which requires a series of remediation activities to be stopped. Thus you need to collect more data points – like from the firewall and the IDPS. Going through the logs of the firewall and IDPS manually takes a lot of time. In large organizations, the security analyst might not even have the necessary access rights and needs to contact the teams responsible for the enterprise firewall and the IDPS respectively, asking them to manually go through the logs, check for anomalies on their own and then reply with the results. This could imply a phone call, a ticket, long explanations, necessary exports or other actions consuming valuable time.

It is common in large organizations to centralize event management on a SIEM and use it as the primary dashboard for investigations. In our demo example the SIEM is QRadar, but the steps shown here are valid for any SIEM. To properly analyze security-related events, multiple steps are necessary: the security technologies in question – here the firewall and the IDPS – need to be configured to stream their logs to the SIEM in the first place. But the SIEM also needs to be configured to help ensure that those logs are parsed in the correct way and meaningful events are generated. Doing this manually is time-intensive and requires in-depth domain knowledge. Additionally it might require privileges a security analyst does not have.

But Ansible allows security organizations to create pre-approved automation workflows in the form of playbooks. Those can even be maintained centrally and shared across different teams to enable security workflows at the press of a button. 

So why don’t we add those logs to QRadar permanently? Because this could create alert fatigue, where too much data in the system generates too many events, and analysts might miss the crucial ones. Additionally, sending all logs from all systems easily consumes a huge amount of cloud resources and network bandwidth.

So let’s write such a playbook to first configure the log sources to send their logs to the SIEM. We start the playbook with Snort and configure it to send all logs to the IP address of the SIEM instance:

---
- name: Configure snort for external logging
  hosts: snort
  become: true
  vars:
    ids_provider: "snort"
    ids_config_provider: "snort"
    ids_config_remote_log: true
    ids_config_remote_log_destination: "192.168.3.4"
    ids_config_remote_log_procotol: udp
    ids_install_normalize_logs: false

  tasks:
    - name: import ids_config role
      include_role:
        name: "ansible_security.ids_config"

Note that here we only have one task, which imports an existing role. Roles are an essential part of Ansible, and help in structuring your automation content. Roles usually encapsulate the tasks and other data necessary for a clearly defined purpose. In the case of the playbook shown above, we use the role ids_config, which manages the configuration of various IDPS. It is provided as an example by the ansible-security team. This role, like others mentioned in this blog post, is provided as guidance to help customers that may not be accustomed to Ansible to become productive faster. They are not necessarily meant as a best practice or a reference implementation.

Using this role we only have to set a few parameters; the domain knowledge of how to configure Snort itself is hidden away. Next, we do the very same thing with the Check Point firewall. Again an existing role is re-used, log_manager:

- name: Configure Check Point to send logs to QRadar
  hosts: checkpoint

  tasks:
    - include_role:
        name: ansible_security.log_manager
        tasks_from: forward_logs_to_syslog
      vars:
        syslog_server: "192.168.3.4"
        checkpoint_server_name: "gw-2d3c54"
        firewall_provider: checkpoint

With these two snippets we are already able to reach out to two security solutions in an automated way and reconfigure them to send their logs to a central SIEM.

We can also automatically configure the SIEM to accept those logs and sort them into corresponding streams in QRadar:

- name: Add Snort log source to QRadar
  hosts: qradar
  collections:
    - ibm.qradar

  tasks:
    - name: Add snort remote logging to QRadar
      qradar_log_source_management:
        name: "Snort rsyslog source - 192.168.14.15"
        type_name: "Snort Open Source IDS"
        state: present
        description: "Snort rsyslog source"
        identifier: "ip-192-168-14-15"

- name: Add Check Point log source to QRadar
  hosts: qradar
  collections:
    - ibm.qradar

  tasks:
    - name: Add Check Point remote logging to QRadar
      qradar_log_source_management:
        name: "Check Point source - 192.168.23.24"
        type_name: "Check Point FireWall-1"
        state: present
        description: "Check Point log source"
        identifier: "192.168.23.24"

Here we do use Ansible Content Collections: the new method of distributing, maintaining and consuming automation content. Collections can contain roles, but also modules and other code necessary to enable automation of certain environments. In our case the collection contains not only a role, but also the necessary modules and connection plugins to interact with QRadar.

Without any further intervention by the security analyst, Check Point logs start to appear in the QRadar log overview. Note that so far no logs are sent from Snort to QRadar: Snort does not know yet that this traffic is noteworthy! We will come to this in a few moments.

Remember, taking the perspective of a security analyst: now we have more data at our disposal. We have a better understanding of what could be the cause of the anomaly in the application behaviour. Logs from the firewall are shown, revealing who is sending traffic to whom. But this is still not enough data to fully qualify what is going on.

Fine-tuning the investigation

Given the data at your disposal you decide to implement a custom signature on the IDPS to get alert logs if a specific pattern is detected.

In a typical situation, implementing a new rule would require another interaction with the security operators in charge of Snort, who would likely have to manually configure multiple instances. But luckily we can again use an Ansible Playbook to achieve the same goal without the need for time-consuming manual steps or interactions with other team members.

There is also the option to have a set of playbooks for customer-specific situations pre-created. Since the language of Ansible is YAML, even team members with little Ansible knowledge can contribute to the playbooks, making it possible to have agreed-upon playbooks ready to be used by the analysts.

Again we reuse a role, ids_rule. Note that this time some understanding of Snort rules is required to make the playbook work. Still, the actual knowledge of how to manage Snort as a service across various target systems is shielded away by the role.

---
- name: Add Snort rule
  hosts: snort
  become: yes

  vars:
    ids_provider: snort

  tasks:
    - name: Add snort web attack rule
      include_role:
        name: "ansible_security.ids_rule"
      vars:
        ids_rule: 'alert tcp any any -> any any (msg:"Attempted Web Attack"; uricontent:"/web_attack_simulation"; classtype:web-application-attack; sid:99000020; priority:1; rev:1;)'
        ids_rules_file: '/etc/snort/rules/local.rules'
        ids_rule_state: present

Finish the offense

Moments after the playbook is executed, we can check in QRadar if we see alerts. And indeed, in our demo setup this is the case:

With this information in hand, we can now finally check all offenses of this type, and verify that they are all coming from one single host – here the attacker.

From here we can move on with the investigation. For our demo we assume that the behavior is intentional, and thus close the offense as false positive.

Rollback!

Last but not least, there is one step which is often overlooked, but is crucial: rolling back all the changes! After all, as discussed earlier, sending all logs into the SIEM all the time is resource-intensive.

With Ansible the rollback is quite easy: basically, the playbooks from above can be reused, they just need to be slightly altered to not create log streams, but to remove them again. That way, the entire process can be fully automated and at the same time made as resource-friendly as possible.
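
As an illustration, removing the Snort log source from QRadar again could look like the following sketch – the very same module as above, just with state set to absent:

- name: Remove Snort log source from QRadar
  hosts: qradar
  collections:
    - ibm.qradar

  tasks:
    - name: Remove snort remote logging from QRadar
      qradar_log_source_management:
        name: "Snort rsyslog source - 192.168.14.15"
        type_name: "Snort Open Source IDS"
        state: absent
        identifier: "ip-192-168-14-15"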

Takeaways and where to go next

The job of a CISO and her team can be difficult even if they have all necessary tools in place, because the tools often don’t integrate with each other. When there is a security threat, an analyst has to perform an investigation, chasing all relevant pieces of information across the entire infrastructure, consuming valuable time to understand what’s going on and ultimately perform any sort of remediation.

Ansible security automation is designed to help enable integration and interoperability of security technologies to support security analysts’ ability to investigate and remediate security incidents faster.

As next steps there are plenty of resources to follow up on the topic.

Credits

This post was originally released on ansible.com/blog: Getting Started With Ansible Security Automation: Investigation Enrichment

Header image by Alexas_Fotos from Pixabay.

[Short Tip] Accessing Nautilus mounted locations via shell/terminal

When using Gnome, it is quite convenient to mount remote locations directly in Nautilus. As an example, I often mount my Google Drive work folder, as well as my personal NextCloud instance.

While this makes it easy to work with remotely accessible files from within Gnome tools, it is less obvious how to access those files from within a shell like gnome-terminal, for example.

Nautilus uses FUSE to mount remote locations. You can find more in the Gnome documentation GVfs.

With this knowledge, the solution is this directory:

$XDG_RUNTIME_DIR/gvfs

In that directory you will find the actual fuse mounts of the remote locations linked from within Nautilus:

$ ls -1 $XDG_RUNTIME_DIR/gvfs
'dav:host=nc.bayz.de,ssl=true,prefix=%2Fremote.php%2Fwebdav'
'google-drive:host=redhat.com,user=ABCDEF'

Those are directories, so you can work with them just as with usual directories: change into them, edit files, etc. Of course, depending on the internet connection, the endpoint and the protocol, the interactions will not be comparable to working on files on your local SSD. But it might be just enough for your use cases.
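
For example, copying a file out of the NextCloud mount from above could look like this – the file name is made up, and the quotes matter since the mount names contain colons and commas:

$ cd "$XDG_RUNTIME_DIR/gvfs/dav:host=nc.bayz.de,ssl=true,prefix=%2Fremote.php%2Fwebdav"
$ cp some-document.odt ~/Documents/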

Thanks to strugee for this detailed comment on StackExchange.

[Short Tip] Handling “can’t concat str to bytes” error in Ansible’s uri module

When working with web services, especially REST APIs, Ansible can be of surprising help when you need to automate them or want to integrate them into your automation.

However, today I ran into a strange Python error while I tried to use the uri module:

An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: can't concat str to bytes
fatal: [localhost]: FAILED! => {"changed": false, "content": "", "elapsed": 0, "msg": "Status code was -1 and not [200]: An unknown error occurred: can't concat str to bytes", "redirected": false, "status": -1, "url": "https://www.ansible.com"}

This drove me almost nuts, because it happened on all kinds of machines I tested, even with Ansible’s devel upstream version. It was even independent of the service I targeted – the error happened way earlier. And the playbook to showcase this was suspiciously short and simple:

---
- name: Show concat str byte error
  hosts: localhost
  connection: local
  gather_facts: no

  tasks:
    - name: call problematic URL call
      uri:
        url: "https://www.ansible.com"
        method: POST
        body:
          name: "myngfw"

When I was about to file an issue at Ansible’s GitHub page I thought again: this was too simple, and I couldn’t imagine that I was the only one hitting this problem. I realized that the error had to be on my side – something had to be missing.

And indeed: the body_format option was not explicitly stated, so Ansible assumed “raw”, while my body data was provided in JSON format. A simple

        body_format: json

solved my problems.
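
For reference, the corrected task looks like this – the target web server may still reject the POST itself, but the str/bytes error is gone:

    - name: call URL with explicit body format
      uri:
        url: "https://www.ansible.com"
        method: POST
        body_format: json
        body:
          name: "myngfw"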

Never dare to ask me how long it took me to figure this one out. And that from the person who wrote an entire howto about how to provide payloads with the Ansible URI module…

[Short Tip] Exclude files in Git diff

What if your git diff shows changes to files you want to omit? Use pathspecs to filter those out.

Did you ever see a git diff with file changes you wanted to omit? I recently had to go through a very large git diff – and I realized that some modified files were not needed and I had to somehow remove them from the diff.

To go through the huge diff and remove all patches to certain files manually would be way too much work. Luckily since Git 1.9 you can use pathspec patterns to limit the output of git – and thus the output of git-diff:

git diff devel..feature ':!path/to/file'
git diff devel..feature ':!path/to/*manyfiles'
git diff devel..feature ':!just/a/path'

The exclamation mark is just a short form for (exclude). If you use the bracket style you can even add additional “magic words” – for example to make the exclusion case insensitive:

git diff devel..feature ':(exclude,icase)PATH/TO/FILE'

Other magic words are literal, top and glob.

Git pathspecs also work with git-grep, git-log and others.
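
For example, the following should work just the same – with made-up paths, of course:

git grep TODO -- ':!vendor'
git log --oneline -- ':(exclude)docs'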

[Howto] Using Ansible and Ansible Tower with shared Roles

Roles are a neat way in Ansible to make playbooks and everything related to them re-usable. If used with Tower, they can be even more powerful.

(I published this post originally at ansible.com/blog.)

Roles are an essential part of Ansible, and help in structuring your automation content. The idea is to have clearly defined roles for dedicated tasks. During your automation code, the roles will be called by the Ansible Playbooks.

Since roles usually have a well defined purpose, they make it easy to reuse your code for yourself, but also in your team. And you can even share roles with the global community. In fact, the Ansible community created Ansible Galaxy as a central place to display, search and view Ansible roles from thousands of people.

So what does a role look like? Basically it is a predefined structure of folders and files to hold your automation code. There is a folder for your templates, a folder to keep files with tasks, one for handlers, another one for your default variables, and so on:

tasks/ 
handlers/ 
files/ 
templates/ 
vars/ 
defaults/ 
meta/

In folders which contain Ansible code – like tasks, handlers, vars, defaults – there are main.yml files. Those contain the relevant Ansible bits. In case of the tasks directory, they often include other yaml files within the same directory. Roles even provide ways to test your automation code – in an automated fashion, of course.

This post will show how roles can be shared with others, be used in your projects and how this works with Red Hat Ansible Tower.

Share Roles via Repositories

Roles can be part of your project repository. They usually sit underneath a dedicated roles/ directory. But keeping roles in your own repository makes it hard to share them with others, to be reused and improved by them. If someone works on a different team, or on a different project, they might not have access to your repository – or they may use their own anyway. So even if you send them a copy of your role, they could add it to their own repository, making it hard to exchange improvements, bug fixes and changes across totally different repositories.

For that reason, a better way is to keep a role in its own repository. That way it can be easily shared and improved. However, to be available to a playbook, the role still needs to be included. Technically there are multiple ways to do that.

For example, there can be a global roles directory outside your project where all roles are kept. This can be referenced in ansible.cfg. However, this requires that all developer setups and also the environment in which the automation is finally executed have the same global directory structure. This is not very practical.
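
For illustration only, such a global setup would mean something like this in everyone’s ansible.cfg – the path is made up:

[defaults]
roles_path = /opt/shared-roles:./roles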

When Git is used as the version control system, there is also the possibility of importing roles from other repositories via Git submodules, or even using Git subtrees. However, this requires quite some knowledge about advanced Git features by each and everyone using it – so it is far from simple.

The best way to make shared roles available to your playbooks is to use a function built into Ansible itself: the ansible-galaxy command can read a file specifying which external roles need to be imported for a successful Ansible run: requirements.yml. It lists external roles and their sources. If needed, it can also point to a specific version:

# from GitHub
- src: https://github.com/bennojoy/nginx
# from GitHub, overriding the name and specifying a version
- src: https://github.com/bennojoy/nginx
  version: master
  name: nginx_role
# from Bitbucket
- src: git+http://bitbucket.org/willthames/git-ansible-galaxy
  version: v1.4
# from Galaxy
- src: yatesr.timezone

The file is consumed by the command ansible-galaxy: it reads the file and downloads all specified roles to the appropriate path:

ansible-galaxy install -r roles/requirements.yml 
- extracting nginx to /home/rwolters/ansible/roles/nginx 
- nginx was installed successfully 
- extracting nginx_role to 
/home/rwolters/ansible/roles/nginx_role 
- nginx_role (master) was installed successfully 
...

The output also highlights when a specific version was downloaded. You will find a copy of each role in your roles/ directory – so make sure that you do not accidentally add the downloaded roles to your repository! The best option is to add them to the .gitignore file.
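
A common pattern for this – assuming the recommended layout with requirements.yml inside the roles directory – is to ignore everything in roles/ except the requirements file:

# .gitignore: do not commit downloaded roles
roles/*
!roles/requirements.yml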

This way, roles can be imported into the project and are available to all playbooks while they are still shared via a central repository. Changes to the role need to be made in the dedicated repository – which ensures that no careless, project-specific changes end up in the role.

At the same time the version attribute in requirements.yml ensures that the role in use can be pinned to a certain release tag value, commit hash, or branch name. This is useful in case the development of a role is quickly moving forward, but your project has longer development cycles.

Using Roles in Ansible Tower

If you use automation on larger, enterprise scales you most likely will start using Ansible Tower sooner or later. So how do roles work with Ansible Tower? In fact – just like mentioned above. Each time Ansible Tower checks out a project it looks for a roles/requirements.yml. If such a file is present, a new version of each listed role is copied to the local checkout of the project and thus available to the relevant playbooks.

That way shared roles can easily be reused in Ansible Tower – it is built in right from the start!

Best Practices and Things to Keep in Mind

There are a few best practices around sharing of Ansible roles that make your life easier. The first is the naming and location of the roles directory. While it is possible to name the directory any way via the roles_path in ansible.cfg, we strongly recommend sticking to the directory name roles, sitting in the root of your project directory. Do not choose another name for it or move it to some subdirectory.

The same is true for requirements.yml: have one requirements.yml only, and keep it at roles/requirements.yml. While it is technically possible to have multiple files and spread them across your project, this will not work when the project is imported into Ansible Tower.

Also, if the roles are not only shared among multiple users, but are also developed with others or not by you at all, it might make sense to pin the role to the actual commit you’ve tested your setup against. That way you will avoid unwanted changes in the role behaviour.
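
In requirements.yml, pinning a role to a commit could look like this – the hash is just a placeholder:

# pin to the commit you tested against (placeholder hash)
- src: https://github.com/bennojoy/nginx
  version: 27e27cfc543b2e2ba0a5d2d8cc1655fb1f277572
  name: nginx_role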

More Information

Find, reuse, and share the best Ansible content on Ansible Galaxy.

Learn more about roles on Ansible Docs.