[Howto] Rebase feature branches in Git/GitHub

Updating a feature branch to the current state of the upstream main branch can be troublesome. Here is a workflow that works – at least for me.

Developing with Git is amazing, due to the possibilities to work with feature branches, remote repositories, and so on. However, at some point, after some hours of development, the base of a feature branch will be outdated and it makes sense to update it before a pull request is sent upstream. This is best done via rebasing. Here is a short workflow for a typical feature branch rebase I often need when developing Ansible modules, for example.

  1. First, check out the main branch, here devel.
  2. Update the main branch from the upstream repository.
  3. Rebase the local copy of the main branch.
  4. Push it to the remote origin, most likely your personal fork of the Git repo.
  5. Check out the feature branch.
  6. Rebase the feature branch onto the main branch.
  7. Force push the new history to the remote feature branch, most likely again your personal fork of the Git repo.

In terms of code this means:

$ git checkout devel
$ git fetch upstream devel
$ git rebase upstream/devel
$ git push
$ git checkout feature_branch
$ git rebase origin/devel
$ git push -f

This looks rather clean and easy – but I have to admit it took me quite a few errors and Git cherry-picking sessions to finally figure out what is needed and what actually works.

[Short Tip] Workaround MIT-SHM error when starting Qt/KDE apps with sudo


Starting GUI programs as root usually is not a problem. In the worst case, sudo inside a terminal should do the trick.

However, recently I had to start a Qt application with sudo from within GNOME: the Yubikey personalization GUI, a third-party tool and thus not part of any desktop environment. Executing the app failed; it only showed a gray window and printed multiple errors on the command line:

$ sudo /usr/bin/yubikey-personalization-gui 
X Error: BadAccess (attempt to access private resource denied) 10
  Extension:    130 (MIT-SHM)
  Minor opcode: 1 (X_ShmAttach)
  Resource id:  0x142
X Error: BadShmSeg (invalid shared segment parameter) 128
  Extension:    130 (MIT-SHM)
  Minor opcode: 5 (X_ShmCreatePixmap)
  Resource id:  0xfa
X Error: BadDrawable (invalid Pixmap or Window parameter) 9
  Major opcode: 62 (X_CopyArea)
  Resource id:  0x2800015

Workarounds like pkexec and adding a PolicyKit rule didn’t help either. The error indicates that there is a problem with the MIT Shared Memory Extension of X.

A good workaround is to deactivate the usage of the extension on the command line:

$ sudo QT_X11_NO_MITSHM=1 /usr/bin/yubikey-personalization-gui

It works like a charm.

[Short Tip] Git, cherry-pick and push


Much too often, when working with Git on long-running pull requests, I tend to screw up the Git history. For that reason, I often have to cherry-pick a commit into a new branch and push that one upstream to the feature branch – with force.

First, identify the actual commit you want to cherry-pick – you need to get the correct hash. Then create a new branch, cherry-pick the commit, force-push the new branch upstream, delete the old branch, and rename the new one.

git log --author liquidat
git branch mywork_feature_tmp
git cherry-pick abcdefgh123456
git push --force origin HEAD:mywork_feature
git checkout devel
git branch -D mywork_feature
git branch -m mywork_feature_tmp mywork_feature

My hope is that at some point in the future I will be able to fix such broken Git histories without cherry-picking. But until then, the current way works for me…

[Howto] Writing an Ansible module for a REST API

Ansible comes along with a great set of modules. But maybe your favorite tool is not covered yet and you need to develop your own module. This guide shows you how to write an Ansible module – when you have a REST API to speak to.

Background: Ansible modules

Ansible is a great tool to automate almost everything in an IT environment. One of the huge benefits of Ansible is its so-called modules: they provide a way to address automation tasks in the native language of the problem. For example, given that a user needs to be created: this is usually done by calling certain commands on the shell. In that case the automation developer has to think about which command line tool needs to be used and which parameters and options need to be provided – and the result is most likely not idempotent. It is also hard to run tests (“checks”) with such an approach.

Enter Ansible's user module: with it, the automation developer only has to provide the data needed for the actual problem, like the user name, group name, etc. There is no need to remember the user management tool of the target platform or to look up parameters:

$ ansible server -m user -a "name=abc group=wheel" -b

Ansible comes along with hundreds of modules. But what if your favorite task or tool is not supported by any module? Then you have to write your own Ansible module. If your tool offers a REST API, there are a few things to know which make it much easier to get your module running fine with Ansible. These few things are outlined below.

REST APIs and Python libraries in Ansible modules

According to Wikipedia, REST is:

… the software architectural style of the World Wide Web.

In short, it is a way to write, provide, and access an API via usual HTTP tools and libraries (the Apache web server, curl, you name it), and it is very common in everything related to the WWW.

To access a REST API via an Ansible module, there are a few things to note. Ansible modules are usually written in Python. The library of choice to access URLs and thus REST APIs in Python is usually urllib. However, that library is not the easiest to use, and there are some security aspects to keep in mind. For these reasons, alternative libraries like Python requests came up in the past and are pretty common.

However, using an external library in an Ansible module would add an extra dependency, thus the Ansible developers added their own library inside Ansible to access URLs: ansible.module_utils.urls. This one is already shipped with Ansible – the code can be found at lib/ansible/module_utils/urls.py – and it covers the shortcomings and security concerns of urllib. If you submit a module that calls REST APIs to Ansible, the developers usually require that you use the built-in library.

Unfortunately, the documentation on the Ansible urls library is currently sparse at best. If you need information about it, look at other modules like the GitHub, Kubernetes, or a10 modules. To cover that documentation gap I will try to explain the most important basics in the following lines – at least as far as I know them.

Creating REST calls in an Ansible module

To access the Ansible urls library right in your module, it needs to be imported in the same way as the basic library:

from ansible.module_utils.basic import *
from ansible.module_utils.urls import *
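
The examples further down also reference a module object – that is the AnsibleModule instance every module creates from its parameter definition. As a minimal sketch, assuming purely hypothetical parameter names for illustration, this could look like:

module = AnsibleModule(
    argument_spec=dict(
        user=dict(required=True),             # hypothetical parameters,
        pwd=dict(required=True, no_log=True)  # adapt them to your module
    )
)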

The main function call to access a URL via this library is open_url. It can take multiple parameters:

def open_url(url, data=None, headers=None, method=None, use_proxy=True,
             force=False, last_mod_time=None, timeout=10, validate_certs=True,
             url_username=None, url_password=None, http_agent=None,
             force_basic_auth=False, follow_redirects='urllib2'):

The parameters in detail are:

  • url: the actual URL, the communication endpoint of your REST API
  • data: the payload for the URL request, for example a JSON structure
  • headers: additional headers, often this includes the content-type of the data stream
  • method: a URL call can be of various methods: GET, DELETE, PUT, etc.
  • use_proxy: if a proxy is to be used or not
  • force: force an update even if a 304 indicates that nothing has changed (I think…)
  • last_mod_time: the time stamp to add to the header in case we get a 304
  • timeout: set a timeout
  • validate_certs: whether certificates should be validated or not; important for test setups where you have self-signed certificates
  • url_username: the user name to authenticate with
  • url_password: the password for the above listed user name
  • http_agent: if you want to set the HTTP agent
  • force_basic_auth: force the usage of basic authentication
  • follow_redirects: determine how redirects are handled

For example, to fire a simple GET to a given source like Google, most parameters are not needed and the call would look like:

open_url('https://www.google.com', method="GET")

A more sophisticated example is to push actual information to a REST API. For example, if you want to search for the domain example on a Satellite server, you need to change the method to PUT, add a data structure to set the actual search string ({"search":"example"}), and add a corresponding content type as header information ({'Content-Type':'application/json'}). Also, a user name and password must be provided. Since we access a test system here, the certificate validation needs to be turned off as well. The resulting call looks like this:

open_url('https://satellite-server.example.com/api/v2/domains', method="PUT",
         url_username="admin", url_password="abcd", force_basic_auth=True,
         validate_certs=False, data=json.dumps({"search":"example"}),
         headers={'Content-Type':'application/json'})

Beware that the JSON structure handed over as data needs to be processed by json.dumps. The result of the query can be parsed as JSON and further used as a JSON structure:

resp = open_url(...)
resp_json = json.loads(resp.read())
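
Also note that open_url raises an exception when the connection fails or the server returns an HTTP error code, so it is a good idea to wrap the call accordingly. A minimal sketch, reusing the module object from above:

try:
    resp = open_url('https://www.google.com', method="GET")
except Exception as e:
    # report the failure back to Ansible instead of leaving a traceback
    module.fail_json(msg="REST call failed: %s" % str(e))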

Full example

In the following example, we query a Satellite server to find a so-called environment ID for two given parameters: an organization ID and an environment name. To create a REST call for this task in a module, multiple separate steps have to be taken. First, create the actual URL endpoint. This usually consists of the server name as a variable and the API endpoint as the flexible part, which is different in each REST call.

server_name = 'https://satellite.example.com'
api_endpoint = '/katello/api/v2/environments/'
my_url = server_name + api_endpoint

Besides the actual URL, the payload must be pieced together and the headers need to be set according to the content type of the payload – here JSON:

headers = {'Content-Type':'application/json'}
payload = {"organization_id":orga_id,"name":env_name}

Other content types depend on the REST API itself and on what the developer prefers. JSON is widely accepted as a good way to go for REST calls.

Next, we set the user and password and launch the call. The return data from the call is saved in a variable to analyze later on.

user = 'abc'
pwd = 'def'
resp = open_url(my_url, method="GET", headers=headers,
                url_username=user, url_password=pwd,
                force_basic_auth=True, data=json.dumps(payload))

Last but not least, we transform the return value into a JSON construct and analyze it: if the return value does not contain any data – that means the value for the key total is zero – we want the module to exit with an error. Something went wrong, and the automation administrator needs to know that. For this the module calls the built-in error function module.fail_json. But if the total is not zero, we extract the actual environment ID we were looking for with this REST call from the beginning – it is deeply hidden in the JSON structure, by the way.

resp_json = json.loads(resp.read())
if resp_json["total"] == 0:
    module.fail_json(msg="Environment %s not found." % env_name)
env_id = resp_json["results"][0]["id"]
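
Putting all of these pieces together, a complete module could look like the following sketch. Keep in mind that this is only a rough outline under the assumptions made above – the parameter names are hypothetical and the error handling is kept to a minimum:

#!/usr/bin/python
import json

from ansible.module_utils.basic import *
from ansible.module_utils.urls import *


def main():
    # hypothetical parameter names, adapt them to your needs
    module = AnsibleModule(
        argument_spec=dict(
            server=dict(required=True),
            user=dict(required=True),
            pwd=dict(required=True, no_log=True),
            orga_id=dict(required=True, type='int'),
            env_name=dict(required=True)
        )
    )

    my_url = module.params['server'] + '/katello/api/v2/environments/'
    headers = {'Content-Type': 'application/json'}
    payload = {"organization_id": module.params['orga_id'],
               "name": module.params['env_name']}

    try:
        resp = open_url(my_url, method="GET", headers=headers,
                        url_username=module.params['user'],
                        url_password=module.params['pwd'],
                        force_basic_auth=True, data=json.dumps(payload))
    except Exception as e:
        module.fail_json(msg="REST call failed: %s" % str(e))

    resp_json = json.loads(resp.read())
    if resp_json["total"] == 0:
        module.fail_json(msg="Environment %s not found." % module.params['env_name'])

    # hand the found ID back to Ansible
    module.exit_json(changed=False, env_id=resp_json["results"][0]["id"])


if __name__ == '__main__':
    main()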

Summary

It is fairly easy to write Ansible modules to access REST APIs. The most important part to know is that an internal, Ansible-provided library should be used, instead of the better-known urllib or requests libraries. Also, the actual library documentation is still pretty limited, but that gap is partially filled by the above post.

[Short Tip] Call Ansible Tower REST URI – with Ansible


It might sound strange to call the Ansible Tower API right from within Ansible itself. However, if you want to connect several playbooks with each other, or if you use Ansible Tower mainly as an API, this indeed makes sense. To me this use case is interesting since it is a way to document how to access and use the Ansible Tower API.

The following playbook is an example of launching a job in Ansible Tower. The body payload contains an extra variable needed by the job itself; in this example it is the name of a to-be-launched VM.

---
- name: POST to Tower API
  hosts: localhost
  vars:
    job_template_id: 44
    vmname: myvmname

  tasks:
    - name: create additional node in GCE
      uri:
        url: https://tower.example.com/api/v1/job_templates/{{ job_template_id }}/launch/
        method: POST
        user: admin
        password: $PASSWORD
        status_code: 202
        body: 
          extra_vars:
            node_name: "{{ vmname }}"
        body_format: json

Note the status code (202) – the URI module needs to know that a non-200 status code indicates the proper acceptance of the API call. Also, the job template is identified by its ID. But since Tower shows the ID in the web interface, it is no problem to get the correct one.
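
For comparison, the very same call can also be fired from plain Python with the open_url helper described in the howto above – a minimal sketch, with placeholder URL and credentials and the job template ID from the playbook:

import json

from ansible.module_utils.urls import open_url

# placeholder URL, credentials and extra variables
resp = open_url('https://tower.example.com/api/v1/job_templates/44/launch/',
                method="POST", url_username="admin", url_password="$PASSWORD",
                force_basic_auth=True,
                data=json.dumps({"extra_vars": {"node_name": "myvmname"}}),
                headers={'Content-Type': 'application/json'})
print(resp.getcode())  # expect 202 if the job launch was accepted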

Ways to provide body payload in Ansible’s URI module

Talking to a REST API requires providing some information, usually in the form of a JSON payload. Ansible offers various ways to do that in the URI module in playbooks.

In modern applications, REST APIs are often the main way to integrate the given app with the existing infrastructure. REST often requires posting JSON structures as payload.

Ansible offers the URI module to talk to REST APIs, and there are multiple ways to add a JSON payload to a playbook task, which are shown below.

For example, given that the following arbitrary JSON payload needs to be provided to a REST API via POST:

{
  "mainlevel": {
    "subkey": "finalvalue"
  }
}

The first way, and the one I prefer, is to write down the structure in plain YAML (if possible) and afterwards tell the module to format it as JSON:

HEADER_Content-Type: application/json
status_code: 202
body: 
  mainlevel:
    subkey: finalvalue
body_format: json

Among other reasons, this works well because variables can be used easily.

Another way is to define a variable and then use Jinja2 to format it:

vars:
  mainlevel:
    "subkey": finalvalue
...
    body: ' {{mainlevel|to_json}}'

Caution: note the empty space at the beginning of the body line. It avoids the type detection, which checks whether a string begins with { or [.

A quicker, shorter way is to use folded style:

body: >
  {"mainlevel":{"subkey":"finalvalue"}}

However, it might be difficult to add variables here.

Last, and honestly something I would try to avoid is the plain one-liner:

body: "{\"mainlevel\":{\"subkey\":\"finalvalue\"}}

As can be seen, all quotation marks need to be escaped, which makes the line hard to read, hard to maintain, and prone to errors.

As shown, Ansible is powerful and simple. Thus there are always multiple ways to reach the goal you are aiming for – and it depends on the requirements which solution is the best one.

[Howto] Workaround failing MongoDB on RHEL/CentOS 7

MongoDB is often installed right from upstream-provided repositories. In such cases, with recent updates the service might fail to start via systemctl. A workaround requires some SELinux work.

Ansible Tower collects system data inside a MongoDB. Since MongoDB is not part of RHEL/CentOS, it is installed directly from the upstream MongoDB repositories. However, with recent versions of MongoDB the database might not come up via systemctl:

[root@ansible-demo-tower init.d]# systemctl start mongod
Job for mongod.service failed because the control process exited with error code. See "systemctl status mongod.service" and "journalctl -xe" for details.
[root@ansible-demo-tower init.d]# journalctl -xe
May 03 08:26:00 ansible-demo-tower systemd[1]: Starting SYSV: Mongo is a scalable, document-oriented database....
-- Subject: Unit mongod.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit mongod.service has begun starting up.
May 03 08:26:00 ansible-demo-tower runuser[7266]: pam_unix(runuser:session): session opened for user mongod by (uid=0)
May 03 08:26:00 ansible-demo-tower runuser[7266]: pam_unix(runuser:session): session closed for user mongod
May 03 08:26:00 ansible-demo-tower mongod[7259]: Starting mongod: [FAILED]
May 03 08:26:00 ansible-demo-tower systemd[1]: mongod.service: control process exited, code=exited status=1
May 03 08:26:00 ansible-demo-tower systemd[1]: Failed to start SYSV: Mongo is a scalable, document-oriented database..
-- Subject: Unit mongod.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit mongod.service has failed.
-- 
-- The result is failed.
May 03 08:26:00 ansible-demo-tower systemd[1]: Unit mongod.service entered failed state.
May 03 08:26:00 ansible-demo-tower systemd[1]: mongod.service failed.
May 03 08:26:00 ansible-demo-tower polkitd[11436]: Unregistered Authentication Agent for unix-process:7254:1405622 (system bus name :1.184, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_

The root cause of the problem is that the MongoDB developers do not provide a proper SELinux configuration with their packages; see the corresponding bug report.

A short workaround is to create a proper (more or less) SELinux rule and install it to the system:

[root@ansible-demo-tower ~]# grep mongod /var/log/audit/audit.log | audit2allow -m mongod > mongod.te
[root@ansible-demo-tower ~]# cat mongod.te 

module mongod 1.0;

require {
	type locale_t;
	type mongod_t;
	type ld_so_cache_t;
	class file execute;
}

#============= mongod_t ==============
allow mongod_t ld_so_cache_t:file execute;
allow mongod_t locale_t:file execute;
[root@ansible-demo-tower ~]# grep mongod /var/log/audit/audit.log | audit2allow -M mongod
******************** IMPORTANT ***********************
To make this policy package active, execute:

semodule -i mongod.pp

[root@ansible-demo-tower ~]# semodule -i mongod.pp 
[root@ansible-demo-tower ~]# sudo service mongod start
                                                           [  OK  ]

Keep in mind that audit2allow-generated rule sets are not to be used on production systems. The generated SELinux rules need to be analyzed manually to verify that they cover nothing but the problematic use case.