[Short Tip] Call Ansible or Ansible Playbooks without an inventory

Ansible is a great tool to automate almost anything in IT. One of its core concepts, however, is the inventory, where the nodes to be managed are listed. In some situations, setting up a dedicated inventory is overkill.

For example, there are many situations where admins just want to ssh to a machine or two to figure something out. Ansible modules can often make such SSH calls much more efficient, rendering them unnecessary – but creating an inventory first is a waste of time for such short tasks.

In such cases it is handy to call Ansible or Ansible playbooks without an inventory. With plain Ansible this can be done by addressing all nodes while at the same time limiting them to an actual host list:

$ ansible all -i jenkins.qxyz.de, -m wait_for -a "host=jenkins.qxyz.de port=8080"
jenkins.qxyz.de | SUCCESS => {
    "changed": false, 
    "elapsed": 0, 
    "path": null, 
    "port": 8080, 
    "search_regex": null, 
    "state": "started"
}

The comma is needed since Ansible expects a list of hosts – and a list of one host still needs the comma.

For Ansible playbooks the syntax is slightly different:

$ ansible-playbook -i neon.qxyz.de, my_playbook.yml

Here the “all” is missing since the playbook already contains a hosts directive. But the comma still needs to be there to mark a list of hosts.
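For reference, a minimal sketch of what my_playbook.yml could look like – the hosts directive simply matches whatever is passed via -i:

---
- name: short maintenance task
  hosts: all

  tasks:
    - name: check connectivity
      ping: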

[Howto] Automated DNS resolution for KVM/libvirt guests with a local domain

I often run demos on my laptop with the help of libvirt. Managing 20+ machines that way is annoying when you have no DNS resolution for them. Luckily, with libvirt and NetworkManager, that can be easily solved.

The problem

Imagine you want to test something in a demo setup with 5 machines. You create the necessary VMs in your local KVM/libvirt environment – but you cannot address them properly by name. With 5 machines you also need to write down the appropriate IP addresses – that’s hardly practical.

It is possible to create static entries in the libvirt network configuration – however, that is still very inflexible, difficult to automate and only works for name resolution inside the libvirt environment. When you want to ssh into a running VM from the host, you again have to look up the IP.

Name resolution in the host network would be possible by additionally adding each entry to /etc/hosts. But that would require managing two lists at the same time – not automated, far from dynamic, and very cumbersome.

The solution

Luckily, there is an elegant solution: libvirt comes with its own built-in DNS server, dnsmasq. Configured properly, it can serve DHCP and DNS to the VMs, respecting a previously defined domain. Additionally, NetworkManager can be configured to use its own dnsmasq instance to resolve DNS entries – forwarding requests to the libvirt instance if needed.

That way, the only thing which has to be done is setting a proper host name inside the VMs. Everything else just works out of the box (with a recent Linux, see below).

The solution presented here is based on a great post from Dominic Cleal.

Configuring libvirt

First of all, libvirt needs to be configured. Given that the network “default” is assigned to the relevant VMs, the configuration should look like this:

$ sudo virsh net-dumpxml default
<network connections='1'>
  <name>default</name>
  <uuid>158880c3-9adb-4a44-ab51-d0bc1c18cddc</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:fa:cb:e5'/>
  <domain name='qxyz.de' localOnly='yes'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.128' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>

The interesting part is below the mac address: a local domain is defined and marked as localOnly. That domain will be the authoritative domain for the relevant VMs, and libvirt will configure dnsmasq to act as a resolver for that domain. The attribute makes sure that DNS requests regarding that domain will never be forwarded upstream. This is important to avoid forwarding loops.
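One way to add that domain line is to edit the network definition directly with virsh – a sketch, assuming the network is called “default” (note that restarting the network briefly disconnects running VMs):

$ sudo virsh net-edit default
# add inside the <network> element:
#   <domain name='qxyz.de' localOnly='yes'/>
$ sudo virsh net-destroy default
$ sudo virsh net-start default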

Configuring the VM guests

Once the domain is set, the guests inside the VMs need to be configured accordingly. With recent Linux releases this is as simple as setting the host name:

$ sudo hostnamectl set-hostname neon.qxyz.de

There is no need to enter the host name anywhere else: the command above takes care of that. And the default DHCP client configuration of recent Linux releases sends this host name along with the DHCP request – dnsmasq automatically picks the host name up if the domain matches.

If you are on a Linux where the hostnamectl command does not work, or where the DHCP client does not send the host name with the request – switch to a recent version of Fedora or RHEL 😉

In such cases the host name must be set manually, according to the documentation of the OS. Just ensure that the resolution of the name works locally. Also, the DHCP configuration must be altered to send along the host name. In older RHEL and Fedora versions, for example, the option

DHCP_HOSTNAME=neon.qxyz.de

had to be added to /etc/sysconfig/network-scripts/ifcfg-eth0.

At this point automatic name resolution between VMs should already work after a restart of libvirt.
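To verify this from the host, the libvirt dnsmasq instance can be queried directly – assuming dig is installed, the answer should be the DHCP address of the VM:

$ dig +short @192.168.122.1 neon.qxyz.de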

Configuring NetworkManager

The last missing piece is the configuration of the actual KVM/libvirt host, so that the local domain, here qxyz.de, is properly resolved. Adding another name server to /etc/resolv.conf might work for a workstation with a fixed network connection, but certainly does not work for laptops which change network connections and DNS servers all the time. In such cases NetworkManager is often used anyway, so we take advantage of its capabilities.

First of all, NetworkManager needs to start its own version of dnsmasq. That can be achieved with a simple configuration option:

$ cat /etc/NetworkManager/conf.d/localdns.conf 
[main]
dns=dnsmasq

This second dnsmasq instance just works out of the box. All DNS requests will automatically be forwarded to the DNS servers acquired by NetworkManager via DHCP, for example. The only notable difference is the entry in /etc/resolv.conf:

# Generated by NetworkManager
search whatever
nameserver 127.0.0.1

Now, as a second step, this second dnsmasq instance needs to know that all requests regarding qxyz.de have to be sent to the libvirt dnsmasq instance. This can be achieved with another rather simple configuration option, given the domain and the IP from the libvirt network configuration shown above:

$ cat /etc/NetworkManager/dnsmasq.d/libvirt_dnsmasq.conf 
server=/qxyz.de/192.168.122.1

And that’s already it. Restart NetworkManager and everything should be working fine.
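On current distributions with systemd that means:

$ sudo systemctl restart NetworkManager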

As a side note: if the attribute localOnly had not been set in the libvirt network configuration, queries for unknown qxyz.de entries would be forwarded from the libvirt dnsmasq to the NetworkManager dnsmasq – which would forward them back to the libvirt dnsmasq, and so on. That would quickly overload your dnsmasq servers, resulting in error messages:

dnsmasq[15426]: Maximum number of concurrent DNS queries reached (max: 150)

Summary

With these rather few and simple changes a local domain is established for both guests and host, making it easy to resolve their names everywhere. There is no need to maintain one or even two lists of static IP entries – everything is done automatically.

For me this is a huge relief, making it much easier to set up demo and test environments in the future. Also, it looks much nicer during a demo if you have FQDNs instead of IP addresses. I can only recommend this setup to everyone who often uses libvirt/KVM on a local machine for test/demo environments.

Ansible community modules for Oracle DB & ASM

Besides the almost one thousand modules shipped with Ansible, there are many more community modules out there, developed independently. A remarkable example is a set of modules to manage Oracle DBs.

The Ansible module system is a great way to improve data center automation: automation tasks do not have to be programmed “manually” in shell code, but can simply be executed by calling the appropriate module with the necessary parameters. Besides the fact that the automation user does not have to remember the shell code, the modules are usually also idempotent: a module can be called multiple times and only changes something when it is needed.

This only works when a module for the given task exists. The list of Ansible modules is huge, but it does not cover all tasks out there. For example, quite a few middleware products are not covered by Ansible modules (yet?). But there are also community modules out there – not part of the Ansible package, but nevertheless of high quality and actively developed.

A good example of such 3rd party modules is the set of Oracle DB & ASM modules developed in a community fashion by oravirt aka Mikael Sandström. Oracle DBs are quite common in the daily enterprise IT business. And since automation is not about configuring single servers, but about integrating all parts of a business process, Oracle DBs should also be part of the automation. Here this extensive set of Ansible modules comes in handy. According to the README (shortened; a usage sketch follows the list):

  • oracle_user
    • Creates & drops a user.
    • Grants privileges only
  • oracle_tablespace
    • Manages normal(permanent), temp & undo tablespaces (create, drop, make read only/read write, offline/online)
    • Tablespaces can be created as bigfile, autoextended
  • oracle_grants
    • Manages privileges for a user
    • Grants/revokes privileges
    • Handles roles/sys privileges properly.
    • The grants can be added as a string (dba,’select any dictionary’,’create any table’), or in a list (e.g. for use with with_items)
  • oracle_role
    • Manages roles in the database
  • oracle_parameter
    • Manages init parameters in the database (i.e. alter system set parameter…)
    • Also handles underscore parameters. That will require using mode=sysdba, to be able to read the X$ tables needed to verify the existence of the parameter.
  • oracle_services
    • Manages services in an Oracle database (RAC/Single instance)
  • oracle_pdb
    • Manages pluggable databases in an Oracle container database
    • Creates/deletes/opens/closes the pdb
    • Saves the state if you want it to (default is yes)
    • Can place the datafiles in a separate location
  • oracle_sql
    • 2 modes: sql or script
    • Executes arbitrary sql or runs a script
  • oracle_asmdg
    • Manages ASM diskgroup state. (absent/present)
    • Takes a list of disks and makes sure those disks are part of the DG. If a disk is removed from the list it will be removed from the DG.
  • oracle_asmvol
    • Manages ASM volumes. (absent/present)
  • oracle_ldapuser
    • Synchronises users/role grants from LDAP/Active Directory to the database
  • oracle_privs
    • Manages system and object level grants
    • Object level grants support wildcards, so it is now possible to grant access to all tables in a schema and maintain it automatically!
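To give an idea of how these modules feel in practice, here is a hypothetical task along the lines of the README examples – all parameter names and values are illustrative assumptions, so double-check them against the module documentation (according to the README the modules also need cx_Oracle on the managed host):

---
- name: manage an Oracle schema user
  hosts: oradb01.qxyz.de

  tasks:
    - name: ensure a schema user exists
      oracle_user:
        hostname: localhost
        service_name: orcl
        user: system
        password: manager
        schema: demo_user
        schema_password: demo_pw
        state: present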

I have not yet had the chance to test the modules, but I think they are worth a look. The amount of quality code, the existing documentation and also the ongoing development show an active and healthy project, developing important and certainly relevant modules. Please note: these modules are neither part of the Ansible package, nor part of any offering from Oracle or anyone else. So use them at your own risk, they probably will eat your data. And kittens!

So, if you are dealing with Oracle DBs, these modules might be worth a look. And I hope they will be pushed upstream soon.

[Short Tip] Retrieve your public IP with Ansible

There are multiple situations where you need to know your public IP: be it that you set up your home IT server behind a NAT, be it that your legacy enterprise business solution does not work properly without this information because the original developers 20 years ago never expected to be behind a NAT.

Of course, Ansible can help here as well: there is a tiny, neat module called ipify_facts which does nothing else but retrieve your public IP:

$ ansible localhost -m ipify_facts
localhost | SUCCESS => {
    "ansible_facts": {
        "ipify_public_ip": "23.161.144.221"
    }, 
    "changed": false
}

The return value can be registered as a variable and reused in other tasks:

---
- name: get public IP
  hosts: all 

  tasks:
    - name: get public IP
      ipify_facts:
      register: public_ip
    - name: output
      debug: msg="{{ public_ip }}"
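Since ipify_facts is a facts module, the value also ends up in the host facts after the task has run, so a subsequent task could reference it directly – a small sketch appended to the tasks above:

    - name: use the IP directly
      debug: msg="My public IP is {{ ipify_public_ip }}"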

The module by default accesses https://api.ipify.org to get the IP address, but the API URL can be changed via a parameter.
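For example, to point the module at a self-hosted mirror of the service (the URL is just a placeholder):

$ ansible localhost -m ipify_facts -a "api_url=https://ipify.example.com"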

[Short Tip] Show all variables of a host

There are multiple sources where variables for Ansible can be defined. Most of them can be shown via the setup module, but there are more.

For example, if you use a dynamic inventory script to access a Satellite server, many variables like the organization are provided via the inventory script – and these are usually not shown by the setup module.

To get all variables of a host use the following notation:

---
- name: dump all
  hosts: all

  tasks:
  - name: get variables
    debug: var=hostvars[inventory_hostname]

Use this during debug to find out if the variables you’ve set somewhere are actually accessible in your playbooks.
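The same dump also works ad hoc, without a playbook:

$ ansible all -m debug -a "var=hostvars[inventory_hostname]"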

I even created a small github repository for this to easily integrate it with Tower.

[Short Tip] Fix mount problems in RHV during GlusterFS mounts

When using Red Hat Virtualization or oVirt together with GlusterFS, there might be a strange error during the first creation of a storage domain:

Failed to add Storage Domain xyz.

One of the rather easy to fix reasons might be a permission problem: a freshly exported Gluster file system belongs to the user root. However, the virtualization manager (ovirt-m or RHV-M, respectively) does not have root rights and thus needs another ownership.

In such cases, the fix is to mount the exported volume and set the user rights to the rhv-m user:

$ sudo mount -t glusterfs 192.168.122.241:my-vol /mnt
$ cd /mnt/
$ sudo chown -R 36.36 .

Afterwards, the volume can be mounted properly. Some more general details can be found at RH KB 78503.