[Howto] Using Ansible and Ansible Tower with shared Roles

Roles are a neat way in Ansible to make playbooks and everything related to them re-usable. If used with Tower, they can be even more powerful.

(I originally published this post at ansible.com/blog.)

Roles are an essential part of Ansible and help in structuring your automation content. The idea is to have clearly defined roles for dedicated tasks. In your automation code, the roles are then called by your Ansible playbooks.

Since roles usually have a well-defined purpose, they make it easy to reuse your code – for yourself, but also within your team. And you can even share roles with the global community. In fact, the Ansible community created Ansible Galaxy as a central place to display, search and view Ansible roles from thousands of people.

So what does a role look like? Basically it is a predefined structure of folders and files to hold your automation code. There is a folder for your templates, a folder to keep files with tasks, one for handlers, another one for your default variables, and so on:

tasks/ 
handlers/ 
files/ 
templates/ 
vars/ 
defaults/ 
meta/

Folders which contain Ansible code – like tasks, handlers, vars, defaults – hold main.yml files with the relevant Ansible bits. In the case of the tasks directory, those often include other YAML files within the same directory. Roles even provide ways to test your automation code – in an automated fashion, of course.
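
For illustration, a minimal tasks/main.yml could look like the following sketch – the role name webserver and the distribution specific include are made up for this example:

---
# roles/webserver/tasks/main.yml
- name: Install nginx
  package:
    name: nginx
    state: present

# pull in further task files from the same directory, e.g. RedHat.yml
- name: Include distribution specific tasks
  include_tasks: "{{ ansible_os_family }}.yml"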

This post will show how roles can be shared with others, be used in your projects and how this works with Red Hat Ansible Tower.

Share Roles via Repositories

Roles can be part of your project repository, usually sitting underneath a dedicated roles/ directory. But keeping roles in your own repository makes it hard for others to reuse and improve them. If someone works on a different team, or on a different project, they might not have access to your repository – or they may use their own anyway. So even if you send them a copy of your role, they would add it to their own repository, making it hard to exchange improvements, bug fixes and changes across totally different repositories.

For that reason, a better way is to keep a role in its own repository. That way it can be easily shared and improved. However, to be available to a playbook, the role still needs to be included. Technically there are multiple ways to do that.

For example, there can be a global roles directory outside your project where all roles are kept; this directory can be referenced in ansible.cfg. However, this requires that all developer setups – and also the environment in which the automation is finally executed – have the same global directory structure. This is not very practical.
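
Just to illustrate the idea, such a global directory could be configured via the roles_path setting in ansible.cfg – the path is purely an example:

[defaults]
# every developer machine and execution environment needs this exact path
roles_path = /opt/shared-ansible/roles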

When Git is used as the version control system, there is also the possibility of importing roles from other repositories via Git submodules, or even via Git subtrees. However, this requires quite some knowledge about advanced Git features from everyone using it – so it is far from simple.

The best way to make shared roles available to your playbooks is to use a function built into Ansible itself: the command ansible-galaxy. It can read a file specifying which external roles need to be imported for a successful Ansible run: requirements.yml. That file lists external roles and their sources. If needed, it can also point to a specific version:

# from GitHub
- src: https://github.com/bennojoy/nginx 
# from GitHub, overriding the name and specifying a version (here: the master branch) 
- src: https://github.com/bennojoy/nginx 
  version: master 
  name: nginx_role 
# from Bitbucket 
- src: git+http://bitbucket.org/willthames/git-ansible-galaxy 
  version: v1.4 
# from Galaxy 
- src: yatesr.timezone

The file is read by the ansible-galaxy command, which downloads all specified roles to the appropriate path:

ansible-galaxy install -r roles/requirements.yml 
- extracting nginx to /home/rwolters/ansible/roles/nginx 
- nginx was installed successfully 
- extracting nginx_role to /home/rwolters/ansible/roles/nginx_role 
- nginx_role (master) was installed successfully 
...

The output also highlights when a specific version was downloaded. You will find a copy of each role in your roles/ directory – so make sure that you do not accidentally add the downloaded roles to your repository! The best option is to exclude them via the .gitignore file.
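
A minimal sketch of the corresponding .gitignore entries – assuming requirements.yml itself should stay in the repository:

# ignore all downloaded roles, but keep the requirements file
roles/*
!roles/requirements.yml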

This way, roles can be imported into the project and are available to all playbooks, while they are still shared via a central repository. Changes to a role need to be made in its dedicated repository – which ensures that no careless, project-specific changes end up in the role.

At the same time, the version attribute in requirements.yml ensures that the used role can be pinned to a certain release tag, commit hash, or branch name. This is useful in case the development of a role is quickly moving forward, but your project has longer development cycles.
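
As a sketch, a pinned entry in requirements.yml could look like this – the commit hash is made up for illustration:

# pin the role to the exact commit the setup was tested against
- src: https://github.com/bennojoy/nginx
  version: 27e79e3
  name: nginx_role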

Using Roles in Ansible Tower

If you use automation on larger, enterprise scales you will most likely start using Ansible Tower sooner or later. So how do roles work with Ansible Tower? In fact, just as described above: each time Ansible Tower checks out a project it looks for a roles/requirements.yml. If such a file is present, a new version of each listed role is copied to the local checkout of the project and is thus available to the relevant playbooks.
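
The layout of such a project repository could thus look like this simple sketch – the playbook name is just an example:

project-repo/
    site.yml
    roles/
        requirements.yml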

That way shared roles can easily be reused in Ansible Tower – it is built in right from the start!

Best Practices and Things to Keep in Mind

There are a few best practices around sharing Ansible roles that make your life easier. The first is the naming and location of the roles directory. While it is possible to name the directory any way via the roles_path setting in ansible.cfg, we strongly recommend sticking to the directory name roles, sitting in the root of your project directory. Do not choose another name for it or move it to some subdirectory.

The same is true for requirements.yml: have one requirements.yml only, and keep it at roles/requirements.yml. While it is technically possible to have multiple files and spread them across your project, this will not work when the project is imported into Ansible Tower.

Also, if the roles are not only shared among multiple users, but are also developed together with others or even not by you at all, it might make sense to pin each role to the actual commit you’ve tested your setup against. That way you avoid unwanted changes in the role behaviour.

More Information

Find, reuse, and share the best Ansible content on Ansible Galaxy.

Learn more about roles on Ansible Docs.

Ansible and Ansible Tower special variables

Ansible and Ansible Tower provide a powerful variable system. At the same time, there are some variables reserved to one or the other, which cannot be used by others, but can be helpful. This post lists all reserved and magic variables and also important keywords.

Ansible Variables

Variables in Ansible are a powerful tool to influence and control your automation execution. In fact, I’ve dedicated a fair share of posts to the topic over the years:

The official documentation of Ansible variables is also quite comprehensive.

The variable system is in fact so powerful that Ansible uses it itself. There are certain variables which are reserved: the so-called magic variables.

The documentation mentioned above lists many of them – but it is missing the Tower ones. For that reason this post lists all magic variables in Ansible and Ansible Tower, with references to more information.

Note that the variables and keywords might differ between Ansible versions. The lists provided here are for Ansible 2.8 – the current release, which is also shipped in Fedora – and Tower 3.4/3.5.

Reserved & Magic Variables

Magic Variables

The following list shows true magic variables. They are reserved internally and are overwritten by Ansible if needed. A “(D)” highlights that the variable is deprecated.

ansible_check_mode
ansible_dependent_role_names
ansible_diff_mode
ansible_forks
ansible_inventory_sources
ansible_limit
ansible_loop
ansible_loop_var
ansible_play_batch (D)
ansible_play_hosts (D)
ansible_play_hosts_all
ansible_play_role_names
ansible_playbook_python
ansible_role_names
ansible_run_tags
ansible_search_path
ansible_skip_tags
ansible_verbosity
ansible_version
group_names
groups
hostvars
inventory_hostname
inventory_hostname_short
inventory_dir
inventory_file
omit
play_hosts (D)
ansible_play_name
playbook_dir
role_name
role_names
role_path

Source: docs.ansible.com/ansible/latest/reference_appendices/special_variables.html

Facts

Facts are not magic variables because they are not internal. But they are collected during fact gathering or the execution of the setup module, so it helps to keep them in mind. There are two “main” variables related to facts, and a lot of other variables depending on what the managed node has to offer. Since those differ from system to system, it is tricky to list them all. But they can be easily identified by the leading ansible_.

ansible_facts
ansible_local
ansible_*

Sources: docs.ansible.com/ansible/latest/reference_appendices/special_variables.html & docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html
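
To get an idea of how to work with facts, here is a minimal example task – assuming fact gathering is enabled for the play:

- name: Show which distribution the managed node runs
  debug:
    var: ansible_facts['distribution']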

Connection Variables

Connection variables control the way Ansible connects to target machines: what connection plugin to use, etc.

ansible_become_user
ansible_connection
ansible_host
ansible_python_interpreter
ansible_user

Source: docs.ansible.com/ansible/latest/reference_appendices/special_variables.html
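
For illustration, this is how such connection variables are typically set in an inventory file – host name and values are examples:

[webservers]
web01 ansible_host=192.0.2.10 ansible_user=deploy ansible_python_interpreter=/usr/bin/python3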

Ansible Tower

Tower has its own set of magic variables which are used internally to control the execution of the automation. Note that those variables can optionally start with awx_ instead of tower_.

tower_job_id
tower_job_launch_type
tower_job_template_id
tower_job_template_name
tower_user_id
tower_user_name
tower_schedule_id
tower_schedule_name
tower_workflow_job_id
tower_workflow_job_name

Source: docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html
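
As a small sketch of how these can be used: since the variables are only defined when the playbook is executed from within Tower, it makes sense to guard them with a default:

- name: Show which Tower job started this run, if any
  debug:
    msg: "Launched by Tower job {{ tower_job_id | default('n/a') }}"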

Keywords

Keywords are, strictly speaking, not variables – in fact, you can even set a variable named like a keyword. Instead, they are the parts of a playbook that make a playbook work: think of the keys hosts, tasks, name or even the parameters of a module.

It is important to keep those keywords in mind – and it certainly helps to name your variables in a way that they cannot be mixed up with keywords by chance.
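
To make the difference more tangible, here is a tiny sketch of a play where the keywords are marked in a comment – everything else is a value:

- hosts: webservers        # hosts, become, tasks and name are keywords
  become: true
  tasks:
    - name: Ping all hosts
      ping: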

The following lists show all keywords by where they can appear. Note that some keywords are listed multiple times because they can be used in different places.

Play

any_errors_fatal
become
become_flags
become_method
become_user
check_mode
collections
connection
debugger
diff
environment
fact_path
force_handlers
gather_facts
gather_subset
gather_timeout
handlers
hosts
ignore_errors
ignore_unreachable
max_fail_percentage
module_defaults
name
no_log
order
port
post_tasks
pre_tasks
remote_user
roles
run_once
serial
strategy
tags
tasks
vars
vars_files
vars_prompt

Source: docs.ansible.com/ansible/latest/reference_appendices/playbooks_keywords.html

Role

any_errors_fatal
become
become_flags
become_method
become_user
check_mode
collections
connection
debugger
delegate_facts
delegate_to
diff
environment
ignore_errors
ignore_unreachable
module_defaults
name
no_log
port
remote_user
run_once
tags
vars
when

Source: docs.ansible.com/ansible/latest/reference_appendices/playbooks_keywords.html

Block

always
any_errors_fatal
become
become_flags
become_method
become_user
block
check_mode
collections
connection
debugger
delegate_facts
delegate_to
diff
environment
ignore_errors
ignore_unreachable
module_defaults
name
no_log
port
remote_user
rescue
run_once
tags
vars
when

Source: docs.ansible.com/ansible/latest/reference_appendices/playbooks_keywords.html

Task

action
any_errors_fatal
args
async
become
become_flags
become_method
become_user
changed_when
check_mode
collections
connection
debugger
delay
delegate_facts
delegate_to
diff
environment
failed_when
ignore_errors
ignore_unreachable
local_action
loop
loop_control
module_defaults
name
no_log
notify
poll
port
register
remote_user
retries
run_once
tags
until
vars
when
with_<lookup_plugin>

Source: docs.ansible.com/ansible/latest/reference_appendices/playbooks_keywords.html

[Short Tip] Exit bad/broken/locked ssh sessions

Sometimes it happens that SSH connections lock up: due to a weird SSH server configuration or bad connectivity on your side, suddenly your SSH connection is broken. You cannot send any more commands via the SSH connection. The terminal just doesn’t react.

And that includes the typical exit commands: Ctrl+z or Ctrl+d are not working anymore. So you are only left with the choice to close the terminal – right? In fact, no, you can just exit the SSH session.

The trick is:
Enter+~+.

Why does this work? Because it is one of the defined escape sequences:

The supported escapes (assuming the default ‘~’) are:
~. Disconnect.
~^Z Background ssh.
~# List forwarded connections.
~& Background ssh at logout when waiting for forwarded connection / X11 sessions to terminate.
[…]

https://man.openbsd.org/ssh#EXIT_STATUS

To many of you this is probably nothing new – but I never knew it, even after years of using SSH on a daily basis, so I had the urge to share this.

[Howto] Rebasing Fedora Silverblue – even from Rawhide to Fedora 30

I recently switched to Fedora Silverblue, the immutable desktop version of Fedora. With Silverblue, rebasing is easy – even when I had to downgrade from Rawhide to a stable release!

Fedora Silverblue is an interesting attempt at providing an immutable operating system – targeted at desktop users. Using it on a daily basis helps me to get more familiar with the toolset and the ideas behind it, which are also used in other projects like Fedora Atomic or Fedora CoreOS.

When Fedora 30 was released I decided to give it a try, went to the Silverblue download page – and unfortunately picked the wrong image: the one for Rawhide.

Rawhide is the rolling release/development branch of Fedora, and is way too unstable for my daily usage. But I only discovered this after I had already installed it and spent quite some time on customizing it.

But Silverblue is an immutable distribution, so switching to a previous version should be no problem, right? And in fact, yes, it is very easy!

Silverblue supports rebasing, switching between different branches. To get a list of available branches, first list the name of the remote source, and afterwards query the available references/branches:

[liquidat@heisenberg ~]$ ostree remote list
fedora
[liquidat@heisenberg ~]$ ostree remote refs fedora
[...]
fedora:fedora/30/x86_64/silverblue
fedora:fedora/30/x86_64/testing/silverblue
fedora:fedora/30/x86_64/updates/silverblue
[...]
fedora:fedora/rawhide/x86_64/silverblue
fedora:fedora/rawhide/x86_64/workstation

The list is quite long and covers multiple operating system versions.

In my case I was on the Rawhide branch and tried to rebase to version 30. That, however, failed:

[liquidat@heisenberg ~]$ rpm-ostree rebase fedora/30/x86_64/silverblue
1 metadata, 0 content objects fetched; 569 B transferred in 4 seconds
Checking out tree 7420c3a... done
Enabled rpm-md repositories: rawhide
Updating metadata for 'rawhide'... done
rpm-md repo 'rawhide'; generated: 2019-05-13T08:01:20Z
Importing rpm-md... done
Forbidden base package replacements:
  libgcc 9.1.1-1.fc30 -> 9.1.1-1.fc31 (rawhide)
  libgomp 9.1.1-1.fc30 -> 9.1.1-1.fc31 (rawhide)
This likely means that some of your layered packages have requirements on newer or older versions of some base packages. `rpm-ostree cleanup -m` may help. For more details, see: https://gith[...]
Resolving dependencies... done
error: Some base packages would be replaced

The problem was that I had installed additional packages in the meantime. Note that there are multiple ways to install packages in Silverblue:

– Flatpak apps: this is the primary way that apps get installed on Silverblue.
– Containers: which can be installed and used for development purposes.
– Toolbox containers: a special kind of container that are tailored to be used as a software development environment.

The other method of installing software on Silverblue is package layering. This is different from the other methods, and goes against the general principle of immutability. Package layering adds individual packages to the Silverblue system, and in so doing modifies the operating system.

https://docs.fedoraproject.org/en-US/fedora-silverblue/getting-started/

While Flatpak in itself is a pretty cool solution to the Linux desktop packaging problem, it usually comes with sandboxed environments, making it less usable for integrated tools and libraries.

For that reason it is still possible to install RPMs on top, in a layered form. This however might result in dependency issues when the underlying image is supposed to change.

This is exactly what happened here: I had additional software installed, which depended on some specific versions of the underlying image. So I had to remove those:

[liquidat@heisenberg ~]$ rpm-ostree uninstall fedora-workstation-repositories golang pass vim zsh

Afterwards it was easy to rebase the entire system onto a different branch or – in my case – a different version of the same branch:

[liquidat@heisenberg ~]$ rpm-ostree rebase fedora/30/x86_64/silverblue
1 metadata, 0 content objects fetched; 569 B transferred in 2 seconds
Staging deployment... done
Freed: 47,4 MB (pkgcache branches: 0)
Upgraded:
  liberation-fonts-common 1:2.00.3-3.fc30 -> 1:2.00.5-1.fc30
  [...]
Downgraded:
  GConf2 3.2.6-26.fc31 -> 3.2.6-26.fc30
  [...]
Removed:
  fedora-repos-rawhide-31-0.2.noarch
  [...]
Added:
  PackageKit-gstreamer-plugin-1.1.12-5.fc30.x86_64
  [...]
Run "systemctl reboot" to start a reboot

And that’s it – after a short systemctl reboot the machine was back, running Fedora 30. And since ostree works with images, the reboot was smooth and quick; long sessions of installing/updating software during shutdown or reboot are not necessary with such a setup!

In conclusion I must say that I am pretty impressed – both by the concept and by its execution – and by how well Silverblue works in day-to-day use, even as a desktop. My next step will be to test it on a laptop on the road, and see if other problems come up there.

[Howto] Using include directive with ssh client configuration

An SSH client configuration makes accessing servers much easier and more convenient. Until recently the configuration had to be done in one single file, which could be problematic. But newer versions support includes to read configuration from multiple places.

SSH is the default way to access servers remotely – Linux and other UNIX systems, and since recently Windows as well.

One feature of the OpenSSH client is to configure often used parameters for SSH connections in a central config file, ~/.ssh/config. This comes in especially handy when multiple remote servers require different parameters: varying ports, other user names, different SSH keys, and so on. It also provides the possibility to define aliases for host names to avoid the necessity to type in the FQDN each time. Since such a configuration is directly read by the SSH client, other tools which use the SSH client in the background – like Ansible – can benefit from it as well.

A typical configuration of such a config file can look like this:

Host webapp
    HostName webapp.example.com
    User mycorporatelogin
    IdentityFile ~/.ssh/id_corporate
    IdentitiesOnly yes
Host github
    HostName github.com
    User mygithublogin
    IdentityFile ~/.ssh/id_ed25519
Host gitlab
    HostName gitlab.corporate.com
    User mycorporatelogin
    IdentityFile ~/.ssh/id_corporate
Host myserver
    HostName myserver.example.net
    Port 1234
    User myuser
    IdentityFile ~/.ssh/id_ed25519
Host aws-dev
    HostName 12.24.33.66
    User ec2-user
    IdentityFile ~/.ssh/aws.pem
    IdentitiesOnly yes
    StrictHostKeyChecking no
Host azure-prod
    HostName 4.81.234.19
    User azure-prod
    IdentityFile ~/.ssh/azure_ed25519

While this is very handy and helps a lot to maintain sanity even with very different and strange SSH configurations, a single huge file is hard to manage.

Cloud environments, for example, change constantly, so it makes sense to update/rebuild the configuration regularly. There are scripts out there, but they either just overwrite existing configuration, or work entirely on an extra file which has to be referenced in each SSH client call via ssh -F .ssh/aws-config, or they require marked sections in the .ssh/config like "### AZURE-SSH-CONFIG BEGIN ###". All these approaches are either clumsy or error prone.

Another use case is where parts of the SSH configuration are managed by configuration management systems or by software packages, for example by a company – again, that requires changes to a single file and might alter or remove existing configuration for your other services and servers. After all, it is not uncommon to use your more-or-less private GitHub account for your company work, so that you have mixed entries in your .ssh/config.

The underlying problem – managing more complex software configurations in single files – is not unique to OpenSSH, but more or less common across many software stacks which are configured via text files. Recently it became more and more common to write software in a way that configuration is not read from a single file, but that all files from a certain directory are read in. Examples of this include:

  • sudo with the directory /etc/sudoers.d/
  • Apache with /etc/httpd/conf.d
  • Nginx with /etc/nginx/conf.d/
  • Systemd with /etc/systemd/system.conf.d/*
  • and so on…

Initially such an approach was not possible with the SSH client configuration in OpenSSH, but a corresponding bug was reported – even including a patch – quite some years ago. Luckily, almost three years ago OpenSSH version 7.3 was released, and that version did come with the solution:

* ssh(1): Add an Include directive for ssh_config(5) files.


https://www.openssh.com/txt/release-7.3

So now it is possible to add one or even multiple files and directories from where additional configuration can be loaded.

Include

Include the specified configuration file(s). Multiple pathnames may be specified and each pathname may contain glob(7) wildcards and, for user configurations, shell-like `~’ references to user home directories. Files without absolute paths are assumed to be in ~/.ssh if included in a user configuration file or /etc/ssh if included from the system configuration file. Include directive may appear inside a Match or Host block to perform conditional inclusion.

https://www.freebsd.org/cgi/man.cgi?ssh_config(5)

The following .ssh/config file defines a sub-directory from where additional configuration can be read in:

$ cat ~/.ssh/config 
Include ~/.ssh/conf.d/*

Underneath ~/.ssh/conf.d there can be additional files, each containing one or more host definitions:

$ ls ~/.ssh/conf.d/
corporate.conf
github.conf
myserver.conf
aws.conf
azure.conf
$ cat ~/.ssh/conf.d/aws.conf
Host aws-dev
    HostName 12.24.33.66
    User ec2-user
    IdentityFile ~/.ssh/aws.pem
    IdentitiesOnly yes
    StrictHostKeyChecking no

This feature made managing SSH configuration much easier for me – and I only have a few use cases and mainly need it to keep a simple overview of things. For more flexible (aka cloud based) setups this is crucial and can make things way easier.

Note that the additional config files should only contain host definitions! General SSH configuration should be inside ~/.ssh/config and should come before the Include directive: any configuration provided after a “Host” keyword is interpreted as part of that exact host definition – until the next host block or until the next “Match” keyword.
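
As a small sketch, a complete ~/.ssh/config following this rule could look like this – the ServerAliveInterval option is just an example of a general setting:

$ cat ~/.ssh/config
# general options first, they apply to all connections
ServerAliveInterval 60
# host definitions are then read from the included files
Include ~/.ssh/conf.d/*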

[Howto] ara – making Ansible runs easier to read and understand

Ara is a simple web server showing detailed information about Ansible runs. It is helpful in understanding and troubleshooting Ansible runs.

Background

Ansible runs, especially on the command line, provide only limited information. Details about used variables, the timing of each task or other information are only available via additional plugins – and the details provided by those are usually narrowed to one use case.

A better way to provide information about Ansible runs is to collect the data and provide it in a web framework. That is, for example, what Ansible Tower (or AWX, the upstream project of Tower) does: collecting detailed data and providing it in the jobs overview.

But there are situations where a fully fledged Tower is too much, or where a comparative overview of the various runs is needed. This is where ara comes in:

ARA Records Ansible playbook runs and makes the recorded data available and intuitive for users and systems.
It makes your Ansible playbooks easier to understand and troubleshoot.

https://ara.recordsansible.org/

ara was originally developed by people from the OpenStack community, and still has strong ties with it today. It does not replace Ansible Tower, since it does not manage the execution at all. Instead it complements the information and overview part, and in a way competes more with the logging solutions which can be connected to Ansible Tower.

How to install

The installation of ara is pretty straightforward and described in the documentation: the software is basically installed via pip; afterwards the server can be started as a locally running instance. The connection between Ansible and ara is done via action and callback plugins.

The installation of the ara package is quickly done. Note that on systems with both Python 2 and 3 you need to pick the right pip version:

$ pip3 install --user ara
...
$ python3 -m ara.setup.action_plugins
/home/liquidat/.local/lib/python3.7/site-packages/ara/plugins/actions
$ python3 -m ara.setup.callback_plugins
/home/liquidat/.local/lib/python3.7/site-packages/ara/plugins/callbacks

Notice that the binaries end up in ~/.local/bin. If that is not part of the $PATH variable, the server executable to start ara needs to be addressed directly, like ~/.local/bin/ara-manage runserver:

$ ~/.local/bin/ara-manage runserver
 * Serving Flask app "ara" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: off
2019-05-06 02:45:49,156 INFO werkzeug:  * Running on http://127.0.0.1:9191/ (Press CTRL+C to quit)
2019-05-06 02:45:55,915 INFO werkzeug: 127.0.0.1 - - [06/May/2019 02:45:55] "GET / HTTP/1.1" 302 -

The web page can be accessed by pointing a web browser towards http://127.0.0.1:9191/. Since Ansible is not connected to ara yet, no data is shown:

As mentioned, to connect ara to Ansible a callback plugin is used. There are different ways to tell Ansible to use a callback plugin; the easiest is to set up an ansible.cfg with the appropriate data:

$ python3 -m ara.setup.ansible | tee -a ansible.cfg                                                                                        
[defaults]
callback_plugins=/home/rwolters/.local/lib/python3.7/site-packages/ara/plugins/callbacks
action_plugins=/home/rwolters/.local/lib/python3.7/site-packages/ara/plugins/actions

Note here that this creates a new section named [defaults]. Check if your ansible.cfg already has a section called [defaults] and if so merge the entries manually. Now call a few playbooks and check the results:

ara provides easy access to all existing runs, making it possible to easily compare different runs with each other. At the same time detailed information is provided for individual runs, making it easy to figure out what actually happened.

Summary

ara is an interesting attempt at better displaying the information from Ansible runs. It helps to analyze what happens in each run, where problems might be hidden, and so on.

If you already use Ansible Tower, the information is available to you anyway. But if you like the way it is presented in ara, you can even use both at the same time.

[Howto] Adding SSH keys to Ansible Tower via tower-cli [Update]

The tool tower-cli is often used to pre-configure Ansible Tower in a scripted way. It provides a convenient way to bootstrap a Tower configuration. But adding SSH keys as machine credentials is far from easy.

Bootstrapping Ansible Tower can become necessary in testing and QA environments where the same setup is created and destroyed multiple times. Other use cases are when multiple Tower installations need to be configured in the same way or at least share a larger part of the configuration.

One of the necessary tasks in such setups is to create machine credentials in Ansible Tower so that Ansible is able to connect properly to a target machine. In a Linux environment, this is often done via SSH keys.

However, tower-cli calls the Tower API in the background – and the JSON POST data needs to be on one line. But SSH keys span multiple lines, so providing the file via $(cat ssh_file) does not work:

tower-cli credential create --name "Example Credentials" \
                     --organization "Default" --credential-type "Machine" \
                     --inputs="{\"username\":\"ansible\",\"ssh_key_data\":\"$(cat .ssh/id_rsa)\",\"become_method\":\"sudo\"}"

Multiple workarounds can be found on the net, like manually editing the file to remove the new lines or creating a dedicated variables file containing the SSH key. There is even a bug report discussing that.

But for my use case I needed to read an existing SSH key file directly, and did not want to add another manual step or create an additional variables file. The trick is a rather complex piece of sed:

$(sed -E ':a;N;$!ba;s/\r{0,1}\n/\\n/g' /home/ansible/.ssh/id_rsa)

This basically reads in the entire file (instead of just line by line), removes the line breaks and replaces them with the literal string \n. To be precise:

  • we first create a label "a"
  • append the next line to the pattern space ("N")
  • find out if this is the last line or not ("$!"), and if not
  • branch back to label a ("ba")
  • after that, we search for the line breaks ("\r{0,1}\n", matching both Unix and Windows line endings)
  • and replace them with the literal string "\n"
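
To verify what the expression produces, a quick test against a throwaway two-line file helps – the file name is arbitrary:

$ printf 'line1\nline2\n' > /tmp/key-test
$ sed -E ':a;N;$!ba;s/\r{0,1}\n/\\n/g' /tmp/key-test
line1\nline2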

Note that this needs to be accompanied with proper line endings and quotation marks. The full call of tower-cli with the sed command inside is:

tower-cli credential create --name "Example Credentials" \
                     --organization "Default" --credential-type "Machine" \
                     --inputs="{\"username\":\"ansible\",\"ssh_key_data\":\"$(sed -E ':a;N;$!ba;s/\r{0,1}\n/\\n/g' /home/ansible/.ssh/id_rsa)\n\",\"become_method\":\"sudo\"}"

Note all the escaped quotation marks.

Update

Another way to add the keys is to provide yaml in the shell command:

tower-cli credential create --name "Example Credentials" \
                     --organization "Default" --credential-type "Machine" \
                     --inputs='username: ansible
become_method: sudo
ssh_key_data: |
'"$(sed 's/^/    /' /home/ansible/.ssh/id_rsa)"

This method is appealing since the corresponding sed call is a little bit easier to understand. But make sure to indent the variables exactly as shown above.

Thanks to @ericzolf of the Red Hat Automation Community of Practice for pointing me to that solution. If you are interested in the Red Hat Communities of Practice, you can read more about them in the blog post “Communities of practice: Straight from the open source”.