[Howto] Accessing CloudForms updated REST API with Python

CloudForms comes with a REST API which was updated to version 2.0 in CloudForms 3.2. It covers substantially more functions compared to v1.0 and also offers feature parity with the old SOAP interface. This post gives a short introduction to calling the REST API from Python.

Introduction

Red Hat CloudForms is a manager to “manage virtual, private, and hybrid cloud infrastructures” – it provides a single interface to manage OpenStack, Amazon EC2, Red Hat Enterprise Virtualization Manager, VMware vCenter and Microsoft System Center Virtual Machine Manager. Simply said, it is a manager of managers. While CloudForms focuses on virtual environments of all kinds, its abilities are not limited to simple deployment and starting/stopping of VMs, but cover the entire business process and workflow surrounding larger deployments of virtual machines in large or distributed data centers. Tasks can be highly automated, chargeback and optimization features enable admins to put workloads where they make the most sense, an almost unbelievable amount of reports helps operations and pleases management, and entire service catalogs can help provision not single VMs but whole setups of interrelated instances. And of course, as with all Red Hat products and technologies, CloudForms is fully Open Source and based upon a community project: ManageIQ.

One of the major use cases of CloudForms is to keep an overview of all the various types of “clouds” used in production and the VMs running on them during the day-to-day work. Many companies have VMware instances in their data centers, but also run a second virtual environment like RHEV or Hyper-V. Additionally they use public cloud offerings, for example to cover the load during peak times, or integrate OpenStack to provide their own private cloud. In such cases CloudForms is the one interface to rule them all, one interface to manage them. ;-)

CloudForms itself can be managed via the web interface, but also via an API. Up until recently the focus was on a SOAP API. Since Ruby on Rails – the base of CloudForms – will not support SOAP anymore in the future, the developers decided to switch to a REST API. With CloudForms 3.2 this move is complete in so far as the REST API has reached feature parity. The API offers quite a lot of functions – besides gathering information it can also be used to trigger actions, define or delete services, etc.

Python examples

In general the API can be called by any REST compatible tool – which means by almost any HTTP client. For my tests with the new API I decided to use Python, more specifically IPython, together with the requests and json libraries. All examples are wrapped in a json.dumps() call to prettify the output.
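
The setup assumed throughout the examples is minimal – the following is just a sketch: the hostname cf.example.com is a placeholder, and the pp() helper is only a convenience that does the same pretty-printing as the inline json.dumps() calls shown below:

import json
import requests

# convenience helper: pretty-print a requests response the same way
# as the inline json.dumps() calls in the examples below
def pp(response):
    print json.dumps(json.loads(response.text),
                     sort_keys=True, indent=4, separators=(',', ': '))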

The REST authentication is handled by standard HTTP means. The normal way is to authenticate once, get a token in return and use that token for all further calls. The default API URL is https://cf.example.com/api, which shows which collections can be queried via the API, for example: vms, clusters, providers, etc. Please note that the role based access control of CloudForms also applies to the API: you can only query collections and modify objects when you have the proper rights to do so.

current_token=json.loads(requests.get('https://cf.example.com/api/auth',auth=("admin",'password')).text)['auth_token']
print json.dumps(json.loads(requests.get('https://cf.example.com/api',headers={'X-Auth-Token' : current_token}).text),sort_keys=True,indent=4,separators=(',', ': '))
{
    "collections": [
        {
            "description": "Automation Requests",
            "href": "https://cf.example.com/api/automation_requests",
            "name": "automation_requests"
        },
...

This shows that the basic access works. Next we want to query a certain collection, for example the vms:

print json.dumps(json.loads(requests.get('https://cf.example.com/api/vms',headers={'X-Auth-Token' : current_token}).text),sort_keys=True,indent=4,separators=(',', ': '))
...
    "count": 2,
    "name": "vms",
    "resources": [
        {
            "href": "https://cf.example.com/api/vms/602000000000007"
        },
        {
            "href": "https://cf.example.com/api/vms/602000000000006"
        }
    ],
    "subcount": 2
}

While all VMs are listed, the information shown above is not enough to tell which VM is actually which: at least the name should be shown. So we need to expand the information about each VM and then restrict the output to, for example, the name and the vendor via ?expand=resources&attributes=name,vendor:

print json.dumps(json.loads(requests.get('https://cf.example.com/api/vms?expand=resources&attributes=name,vendor',headers={'X-Auth-Token' : current_token}).text),sort_keys=True,indent=4,separators=(',', ': '))
...
"resources": [
    {
        "href": "https://cf.example.com/api/vms/602000000000007",
        "id": 602000000000007,
        "name": "my-vm",
        "vendor": "redhat"
    },
    {
        "href": "https://cf.example.com/api/vms/602000000000006",
        "id": 602000000000006,
        "name": "myvm-clone",
        "vendor": "redhat"
    }
],

This works of course for 2 VMs, but not if you manage 20,000. Thus it is better to use a filter: &filter[]='name="my-vm"'. Since filters need a lot of quotation marks depending on the strings you use, it is best to define a string containing the filter argument and afterwards add it to the URL:

filter="name='my-vm'"
print json.dumps(json.loads(requests.get('https://cf.example.com/api/vms?expand=resources&attributes=name&filter[]='+filter,headers={'X-Auth-Token' : current_token}).text),sort_keys=True,indent=4,separators=(',', ': '))
...
"count": 2,
"name": "vms",
"resources": [
    {
        "href": "https://cf.example.com/api/vms/602000000000007",
        "id": 602000000000007,
        "name": "my-vm"
    }
],
"subcount": 1

Note the subcount, which shows how many VMs with the given name were found. If you want to combine more than one filter, simply add all of them to the URL: &filter[]='name="my-vm"'&filter[]='power_state=on'.
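
Applied to the example from above, such a combined filter could look like the following sketch, reusing the token from before; treating power_state as a displayable attribute here is an assumption about your appliance:

name_filter="name='my-vm'"
state_filter="power_state=on"
print json.dumps(json.loads(requests.get('https://cf.example.com/api/vms?expand=resources&attributes=name,power_state&filter[]='+name_filter+'&filter[]='+state_filter,headers={'X-Auth-Token' : current_token}).text),sort_keys=True,indent=4,separators=(',', ': '))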

With the href of the correct VM at hand you can shut it down: use the HTTP POST method and provide a JSON payload calling the action “stop”.

print json.dumps(json.loads(requests.post('https://cf.example.com/api/vms/602000000000007',headers={'X-Auth-Token' : current_token},data=json.dumps({'action':'stop'})).text),sort_keys=True,indent=4,separators=(',', ': '))
{
    "href": "https://cf.example.com/api/vms/602000000000007",
    "message": "VM id:602000000000007 name:'my-vm' stopping",
    "success": true,
    "task_href": "https://cf.example.com/api/tasks/602000000000097",
    "task_id": 602000000000097
}

If you want to call an action on more than one instance, POST to the corresponding collection instead and include the hrefs of the VMs in a resources array in the payload (a short sketch after the output below shows how to build that array from a query result):

print json.dumps(json.loads(requests.post('https://cf.example.com/api/vms',headers={'X-Auth-Token' : current_token},data=json.dumps({'action':'stop', 'resources': [{'href':'https://cf.example.com/api/vms/602000000000007'},{'href':'https://cf.example.com/api/vms/602000000000006'}]})).text),sort_keys=True,indent=4,separators=(',', ': '))
"results": [
    {
        "href": "https://cf.example.com/api/vms/602000000000007",
        "message": "VM id:602000000000007 name:'my-vm' stopping",
        "success": true,
        "task_href": "https://cf.example.com/api/tasks/602000000000104",
        "task_id": 602000000000104
    },
    {
        "href": "https://cf.example.com/api/vms/602000000000006",
        "message": "VM id:602000000000006 name:'myvm-clone' stopping",
        "success": true,
        "task_href": "https://cf.example.com/api/tasks/602000000000105",
        "task_id": 602000000000105
    }
]
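
The resources array does not have to be written by hand; it can also be built from the result of a previous query. A short sketch, reusing the filter variable defined earlier:

# query the matching VMs first, then stop all of them in one bulk call
vm_list=json.loads(requests.get('https://cf.example.com/api/vms?expand=resources&attributes=name&filter[]='+filter,headers={'X-Auth-Token' : current_token}).text)
payload={'action':'stop','resources':[{'href':vm['href']} for vm in vm_list['resources']]}
print json.dumps(json.loads(requests.post('https://cf.example.com/api/vms',headers={'X-Auth-Token' : current_token},data=json.dumps(payload)).text),sort_keys=True,indent=4,separators=(',', ': '))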

The last example shows how several calls to the API can be chained together: we call the API to scan a VM, get a task id in return and query that task to see whether it was executed successfully. So first we call the API to start the scan:

print json.dumps(json.loads(requests.post('https://cf.example.com/api/vms/602000000000006',headers={'X-Auth-Token' : current_token},verify=False,data=json.dumps({'action':'scan'})).text),sort_keys=True,indent=4,separators=(',', ': '))
{
    "href": "https://cf.example.com/api/vms/602000000000006",
    "message": "VM id:602000000000006 name:'my-vm' scanning",
    "success": true,
    "task_href": "https://cf.example.com/api/tasks/602000000000106",
    "task_id": 602000000000106
}

Next, we take the given id 602000000000106 and query the state:

print json.dumps(json.loads(requests.get('https://cf.example.com/api/tasks/602000000000106',headers={'X-Auth-Token' : current_token},verify=False).text),sort_keys=True,indent=4,separators=(',', ': '))
{
    "created_on": "2015-08-25T15:00:16Z",
    "href": "https://cf.example.com/api/tasks/602000000000106",
    "id": 602000000000106,
    "message": "Task completed successfully",
    "name": "VM id:602000000000006 name:'my-vm' scanning",
    "state": "Finished",
    "status": "Ok",
    "updated_on": "2015-08-25T15:00:20Z",
    "userid": "admin"
}

However, please note that “Finished” here only means that the task itself was executed successfully – it does not necessarily say anything about the outcome of the scan. For that you would have to query the VM itself.
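
How such a check could look is sketched below: the loop polls the task until it reports the Finished state, then the VM record is queried directly. The attributes power_state and last_scan_on are assumptions about what your appliance exposes:

import time

# poll the scan task until it reports the Finished state
task=json.loads(requests.get('https://cf.example.com/api/tasks/602000000000106',headers={'X-Auth-Token' : current_token}).text)
while task['state'] != 'Finished':
    time.sleep(5)
    task=json.loads(requests.get('https://cf.example.com/api/tasks/602000000000106',headers={'X-Auth-Token' : current_token}).text)

# 'Finished'/'Ok' only says the task ran - check the VM record for the actual result
print json.dumps(json.loads(requests.get('https://cf.example.com/api/vms/602000000000006?attributes=name,power_state,last_scan_on',headers={'X-Auth-Token' : current_token}).text),sort_keys=True,indent=4,separators=(',', ': '))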

Final words

The REST API of CloudForms offers quite a few useful functions to integrate CloudForms with your own programs, scripts and applications. The REST API documentation is also quite extensive, and the community documentation for ManageIQ has a lot of API usage examples.

So if you used to call your CloudForms installation via SOAP you will be happy to find the new REST API in CloudForms 3.2. If you have never used the API you might want to start today – as you have seen, it’s quite simple to get results quickly.

[Short Tip] show processes accessing a file: fuser & lsof

Sometimes it’s good to know which processes access certain files, paths or devices. Think of debugging, but also of files that are blocked because running processes still have them open. There are two commonly used tools to get information about which process accesses which file: lsof and fuser.

The best known tool is arguably lsof – it lists all open files, and the output can be limited with options and arguments. Typical commands are:

$ lsof -p $PROCESSID
$ lsof $FILENAME
$ lsof |grep $EXPRESSION

The other well known tool is fuser. Compared to lsof it works the other way around: it lists the processes accessing a given target. Unlike with lsof, the target must always be supplied; it is not optional. Typical commands are:

$ fuser $FILENAME
$ fuser -mva $MOUNTPOINT
$ fuser -k $FILENAME

The last command kills the processes accessing the file directly.

In day-to-day work fuser feels slightly less bloated and is handier for simply killing processes right away. On the other hand, if you are used to lsof there is hardly any reason to switch to fuser.

First days at Red Hat

As I mentioned in my last post I left my previous employer after quite some years – since July 1st I work for Red Hat.

So, it’s been one month since I joined Red Hat and it has been quite an experience so far. Keeping in mind where I come from – infrastructure focused, a couple dozen people – Red Hat is something entirely different. They are huge. Like, *really* big. And that shows everywhere. Organization, processes, structure, reach, customers, employees, possibilities, etc. Also, these days Red Hat is much more than just Linux: other huge chunks of Red Hat are Middleware, there are several virtualization products, they are serious about software defined storage, and they indeed have a very specific idea of what Cloud means and how to do it – and it’s all backed up by products which are again backed by pretty vivid community projects (with colorful names such as Drools, Byteman and CapeDwarf).

All in all, it’s a lot to learn – and as usual I will use the blog to try to digest everything. Most likely this will focus on technologies I don’t even have a clue about yet – like the aforementioned drooling midgets. But I might also reiterate everything else I have to know in my own words to learn it better – subscription model, product variations, all the shiny stuff you print glossy papers about but have to explain anyway.

It might not be the most interesting read for others – but it is vital for me. And I’m actually looking forward to learning, well, really a lot in a short time :)

Hello Red Hat

As I mentioned in my last post I left my previous employer after quite some years – since July 1st I work for Red Hat.

In my new position I will be a Solutions Architect – so basically a sales engineer: the one talking to customers on a more technical level, providing details or proofs of concept where they need them.

Since it’s my first day I don’t really know how it will be – but I’m very much looking forward to it; it’s an amazing opportunity! =)

Good bye credativ

As you might know, 7 years ago I joined a company called credativ. credativ was and is a German IT company specialized in Open Source support around Debian solutions.

And it was a great opportunity for me: having no business or enterprise experience whatsoever, I had much to learn – dealing with various enterprise and public sector customers, learning and executing project management, supporting sales as a technician/pre-sales engineer, and so on. Without credativ I wouldn’t be who I am today. So thanks, credativ, for 7 wonderful years!

However, everything must come to an end: recently I realized that it is time for me to try something different – to see what else I am capable of, to explore new opportunities, and to dive into more aspects of the ever growing open source ecosystem.

And thus I decided to look for a new job. My future is still with Linux, and it might not be that surprising for some readers – but more about that in another post.

Today, I’d just like to say thanks to credativ. Good bye, and all the best for the future! =)

[Howto] ownCloud auto setup including LDAP

The self-hosted file sharing solution ownCloud is becoming increasingly popular; even in companies you regularly come across installations. To make the automated setup of ownCloud easier, the following howto shows the steps to automatically connect it to an LDAP server.

File exchange services like Dropbox or Google Drive offer a neat and quick way to exchange even large amounts of data. However, they only work because the data is uploaded to the servers of these corporations in the first place, which is at times a bit questionable when you deal with sensitive data.

Here ownCloud comes into play: it offers the possibility to self-host a file sharing service on infrastructure you trust. Additionally it is Open Source, which provides at least a minimum level of trust. And it is no longer a solution only used by a few people for their private servers: these days ownCloud is used in the public sector, at universities and at companies of all sizes. For example, the sciebo project offers ownCloud-based file exchange services for 300k students with 5 PB of storage.

It is thus no wonder that the interest in hosting ownCloud services remains strong. Here at credativ we often see corresponding requests from customers who want support in setting up such installations.

Among the challenges of setting up ownCloud in a business environment, two of the biggest are the connection to a central authentication service like LDAP and unattended installation. The first task is important to fully integrate ownCloud into the existing user base and make it a first class citizen in the existing infrastructure. The second task is especially relevant if you want to deploy the service reproducibly: think of containers, Docker, VMs, etc.

Especially the combination of both tasks is challenging: usually ownCloud expects the admin to go through several manual steps which involve a lot of clicking and entering data until it is up, running and connected to the LDAP server. But it is possible to avoid these point-and-click adventures: configuration templates can help pre-configure the ownCloud service, and the setup of the LDAP connection can be automated using ownCloud’s configuration command line tool occ.

So let’s go through the process step by step: first, ownCloud has to be installed – that can be done with the usual package management tools like yum, apt, etc. After the installation, the ownCloud URL is normally opened in a browser to start the first run wizard. This can be automated by providing the configuration template $owncloud/config/autoconfig.php, which contains all the information usually queried in the first run wizard: admin user, password, database type, database user, database password, etc. ownCloud checks at startup whether the file is present and, if so, skips the first run wizard. Here is an example of such an autoconfig template:

<?php
$AUTOCONFIG = array (
  'directory' => '/var/www/html/owncloud/data',
  'adminlogin'    => 'mmu',
  'adminpass'     => '123456',
  'dbtype'        => 'pgsql',
  'dbname'        => 'owncloud',
  'dbuser'        => 'postgres',
  'dbpass'        => '123456',
  'dbhost'        => '192.168.123.45',
  'dbtableprefix' => 'oc_',
);

Note that further configuration of your ownCloud can also be placed in the usual config.php file: the values of the autoconfig file will be merged into the existing configuration file. This way you can pre-configure most parts of your entire server. More details can be found in the admin documentation.

To actually start the processing of the autoconfig file, the ownCloud URL must be called at least once. This can be done from the server itself with the help of curl: curl -s -k 127.0.0.1/owncloud/ > /dev/null.
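
If the provisioning is scripted in Python anyway, the same trigger can be issued with the requests library instead of curl – a small sketch; the URL scheme and the disabled certificate check (mirroring curl’s -k) are assumptions about your setup:

import requests

# one GET against the ownCloud URL makes the server process autoconfig.php
requests.get('https://127.0.0.1/owncloud/', verify=False)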

When the basic configuration is done, the next step is to connect the server to LDAP. This would usually be done by opening the ownCloud URL, activating the LDAP app and configuring it. Instead of clicking through the web page, these tasks can be accomplished with the help of the occ tool. It can be used to activate the app, write an empty configuration (thanks mark0n for this) and also set the basic LDAP data. Make sure to call all commands as the user the web server runs as – otherwise you might get all kinds of problems. The individual steps are:

php -f $ocpath/occ app:enable user_ldap
php -f $ocpath/occ ldap:create-empty-config
php -f $ocpath/occ ldap:set-config "" ldapHost 192.168.123.45
php -f $ocpath/occ ldap:set-config "" ldapPort 389
php -f $ocpath/occ ldap:set-config "" ldapBase \"dc=example,dc=net\"
php -f $ocpath/occ ldap:set-config "" ldapConfigurationActive 1

In case you are debugging problems, check the configuration of the ownCloud server via php -f $ocpath/occ ldap:show-config.

And that’s it already – your ownCloud should be connected to your LDAP server now. If you script all the commands, for example with Ansible, or write a Puppet module, the setup even becomes easily reproducible.

In case you are interested, I also wrote a German blog article about the problem on credativ’s blog: Owncloud Auto-Setup mit LDAP-Anbindung.

[Howto] LDAP schema for Postfix

The official Postfix documentation on using LDAP for user and alias lookup mentions certain LDAP attributes which are not part of the default OpenLDAP installation. In this article I will briefly explain a basic schema providing these attributes and the corresponding object class.

Postfix can easily be connected to LDAP to look up addresses and aliases. The Postfix LDAP documentation covers all the details. As mentioned there, the default configuration of Postfix expects two LDAP attributes in the LDAP schema: mailacceptinggeneralid and maildrop. This also shows in the code in src/global/dict_ldap.c:

dict_ldap->query =
    cfg_get_str(dict_ldap->parser, "query_filter",
        "(mailacceptinggeneralid=%s)", 0, 0);

However, these attributes are not part of the default OpenLDAP installation, and the Postfix documentation does not mention how exactly such a schema has to look or where to get it. For that reason we at my employer credativ provide such a schema on GitHub: github.com/credativ/postfix-ldap-schema. The repository contains the schema, the corresponding licence and a short documentation. A German introduction to the schema can also be found on credativ’s blog: LDAP-Schema für Postfix-Abfragen.

The provided schema defines the necessary attribute types mailacceptinggeneralid and maildrop as well as the object class postfixUser. Please note that the OIDs used in this schema are of the experimental OpenLDAP type; see also the OID database.

To use the schema it must be loaded by OpenLDAP, for example by including it in slapd.conf. A corresponding LDAP entry could look like this:

dn: uid=mmu,ou=accounts,dc=example,dc=net
objectclass: top
objectclass: person
objectclass: posixAccount
objectclass: postfixUser
cn: Max Mustermann
sn: Mustermann
uid: mmu
uidNumber: 5001
gidNumber: 5000
homeDirectory: /home/vmail
mailacceptinggeneralid: mmu
mailacceptinggeneralid: max.mustermann
mailacceptinggeneralid: m.mustermann
mailacceptinggeneralid: bugs
maildrop: mmu

As you can see, the example covers multiple aliases. Also, the final mailbox is a domain-less entity: maildrop: mmu does not mention any domain name. This only works if your mailboxes actually do not require (or even allow) domain names – in this case that was true since the mail is finally transported to a Dovecot server which does not know about the various domains.

Please note that this schema can only be the foundation for a more sophisticated, more complex schema which needs to be tailored to fit the individual needs of the corresponding setup.