[Short Tip] Fix mount problems in RHV during GlusterFS mounts

Gluster Logo

When using Red Hat Virtualization or oVirt together with GlusterFS, there might be a strange error during the first creation of a storage domain:

Failed to add Storage Domain xyz.

One of the easier to fix reasons is a permission problem: an initially exported Gluster file system belongs to the user root. However, the virtualization manager (oVirt-m or RHV-M, respectively) does not have root rights and thus needs a different ownership.

In such cases, the fix is to mount the exported volume and set the ownership to the rhv-m user:

$ sudo mount -t glusterfs 192.168.122.241:my-vol /mnt
$ cd /mnt/
$ sudo chown -R 36:36 .
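
To verify the result, something like the following should do – note that 36:36 corresponds to the vdsm user and kvm group on RHV hosts, so double-check the IDs in your environment:

$ ls -ln /mnt
$ sudo umount /mnt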

Afterwards, the volume can be mounted properly by RHV. Some more general details can be found in RH KB 78503.

Impressions from #AnsibleFest London 2016 [Update]

Ansible Logo

The #AnsibleFest took place in London, and I was lucky enough to attend. This post shares some impressions from the event, together with interesting announcements and stories.

Update: The slides of the various presentations are now available.

Preface

The #AnsibleFest London 2016 took place near the O2 Arena and lasted the entire day. The main highlight of the conference was the network automation support now coming to Ansible. Other interesting talks covered helpful tips about managing Windows servers, a 101 on modules, how to implement continuous deployment, the journey of a French bank towards DevOps, how Cisco devices can be managed, and how to handle immutable infrastructure. All focused on Ansible, of course.

But while the conference itself took place on Thursday, the #AnsibleFest already started the evening before: at the social event Ansible Social.
Ansible Social
And it was a wonderful evening: many people from Ansible, partners, coworkers from Red Hat and others were there to enjoy drinks, food and chats throughout the evening. Getting to know many of the people there went pretty well – it was a friendly bunch meeting at a rather nice place.
Ansible Social

Keynote

Upon arrival at the conference area one of the sponsor desks immediately caught the eye: Cisco!
Cisco sponsor desk at AnsibleFest
For everyone following Ansible news closely it was obvious that networking would be a big topic, especially since it was about to be featured twice during the day, once by Peter Sprygada from Ansible and later on by Fabrizio Maccioni from Cisco.

And this impression was confirmed when Todd Barr came to the stage and talked about the current state of Ansible and what to expect in the near future: networking is a big topic for Ansible right now, they are pushing resources into it, and he already hinted that there would be a larger announcement during the #AnsibleFest. During the presentation the strengths of Ansible were of course emphasized again: that it is simple to set up, to understand and to deploy. And that it does not require agents. While I do have my past with Puppet and still like it as a tool in certain circumstances, I must admit that I had to smile at the slide about agents.
Todd Barr at AnsibleFest
For many customers and many setups this is in fact true: they do not want agents, for various reasons. And Ansible can deliver actual results without any need for a client.

The future of Ansible

Next up was Bill Nottingham, setting out the road for the future of Ansible. A focus is certainly better integration of Windows (no beta tag anymore!), better testing – and Python 3 support! It was acknowledged that there are more and more distributions out there which do not provide Python 2 anymore, and these need to be catered to.
Future of Ansible by Bill Nottingham
Ansible Tower was also covered, of course, and has very promising improvements coming up as well: the interface will be streamlined, the credentials and rights system will be improved, and there will be (virtual) appliances to get Ansible Tower up and running in an instant. But the really exciting part is more large-scale and enterprise-focused: Ansible Tower will be able to support federated setups, meaning distributed replication of Ansible Tower commands via proxy Towers.
Federated Ansible Tower
Don’t expect all of this in the next few weeks, but we might see many of these features already in Ansible Tower 3.0. It was also mentioned that there might be a release in early fall.

Scaling abilities are indeed needed – many data centers these days have more than one location, or are spread over several departments and thus need partially independent setups to manage the infrastructure. At the same time, there are customers out there using Ansible with 50k nodes and more, and they demand fine-grained, federated infrastructure setups as well.

Networking with Ansible

While the upcoming Ansible Tower brought some exciting news, the talk about networking support by Peter Sprygada really blew everyone away. Right at the moment of the presentation, Red Hat issued a press release announcing that they are bringing DevOps to the network via Ansible:

[Red Hat] is bringing DevOps to networking by extending Ansible – its powerful IT automation and DevOps platform – to include native agentless support for automating heterogeneous network infrastructure devices using the same simple human and machine readable automation language that Ansible provides to IT teams.

Peter picked that up and presented a whole lot of technical details. The most important one was that there are now several networking core modules for commands, configuration and templates.
Ansible networking automation support
They cover a wide range of devices:

  • Arista EOS
  • Cisco NXOS
  • Cisco IOS
  • Cisco IOSXR
  • Cumulus Linux
  • Juniper Junos
  • OpenSwitch

Some of these devices were already supported by the raw module or by third-party libraries, but fully integrated modules backed by Ansible and the network device manufacturers themselves take network automation to a whole new level. If you are interested, get the latest Ansible networking modules right away.

Ansible in a visual effects studio

The next talk was by the customer Industrial Light & Magic, a visual effects studio using Ansible to handle their massive setup. It showed in particular how many obstacles you face in your daily routine when running data centers and deploying software all the time – and how to tackle them using Ansible’s features. I forgot to take a photo, though…

Ansible & Windows

John Hawkesworth from M*Modal came up to the stage next and delivered a brilliant talk about all the things you need to know when managing Windows with Ansible. After briefly covering the differences between Ansible 1.9 and 2.0, he went over lessons learned – like why the backslash should be escaped every time, just to be sure (\t …) – but also gave his favourite developments and modules quite some attention. Turns out the registration module can come in very handy!
Ansible and Windows

Writing modules, 101

Next up, James Cammarata introduced how to write modules for Ansible. Impressively, this was demonstrated live with a module he had written in the days before to control his Philips Hue lights – they could be controlled via Ansible right on stage.
Ansible Modules 101
Besides the great live demo the major points of the presentation were:

  • It is quite easy to develop modules.
  • Start off simple, get more complex the further you go down the road.
  • Write a module when your Playbook for a single task exceeds ten lines of code.
  • Write in Python/PowerShell when you want it to be integrated with Ansible Core.
  • Write in any language you want if you won’t share it anyway.

While I am sure that other module developers might see some of these points differently, this gives a rather good idea of what to keep in mind when approaching the topic.

Of course, the code for the Philips Hue Ansible module is available on GitHub.

Continuous deployment

Continuous integration is a huge topic in DevOps, and thus also with Ansible. Steve Smith of Atlassian picked up the topic and discussed what needs to be taken into account when Ansible is used to enable continuous integration.
Continuous Integration with Ansible
And there were many memorable quotes during the talk which made it simply fun to watch. I particularly like this one:

Release features, not dumps.

It means: release when you have something worth releasing – not on arbitrary dates. It is a strong statement against release or maintenance windows, and it does make sense: after all, why should you release when it’s not worth it? And you certainly will not want to wait if it is important!

Also, since many maintenance windows are implemented because doing maintenance is hard:

Everything which is hard should be done more often, not less.

Combined with the fact that very complex, but successful enterprises do 300 releases an hour, it is clear that continuous deployment is possible – but what is often needed is the right culture and, probably at some point, a great, simple-to-use tool able to cater to the needs of complex infrastructures.

Ansible accelerates deployment

The next talk focused on a vertical which might usually claim to be too regulated and “special” to integrate DevOps: finance. Fabrice Bernhard presented the journey of the bank Société Générale, which introduced DevOps principles with the help of Ansible to become more agile and flexible and to be able to respond quicker to changes. The reason for that was summarized in a very good quote:

It’s not the big that eat the small. It’s the fast that eat the slow.

This is true for all the enterprises out there: software-enabled companies have attacked almost every given business (Amazon vs Walmart, Uber vs cabs, Airbnb vs hotels and hostels, etc.). And there are enough analysts right now who see the banking market as the next big thing to be seriously disrupted by mobile payment, blockchain technology and other IT-based developments.
Ansible and the challenges for businesses

But that also shows what the actual change must be about: the new companies do not take over because they have the better technology. They take over because they have a different culture and approach problems totally differently. Thus, to keep up with the development, change your culture. Or, as said on stage:

Automation is about cultural change. Move fast and break things!

DevOps discussion

After these two powerful talks the audience had a chance to catch some breath during the interactive DevOps discussion. It mainly picked up the topics from the previous talks, and it showed that everyone in the room was pretty sure that DevOps is more or less a name for an underlying shift that enterprises need to embrace – or they will fail in the long term, no matter how big they are.

Managing your Cisco data center – with Ansible

As already mentioned, Fabrizio Maccioni from Cisco gave the second talk about managing networks with Ansible.
Ansible and Cisco
Interestingly enough, he mentioned that the interest in supporting Ansible was brought to them by customers who were already managing part of their infrastructure with Ansible. A key point is that Ansible does not require an agent: while Cisco does support some configuration management agents on their hardware, it seems that most customers would rather not use them.
Ansible is good because agentless

Immutable infrastructure

The last presentation was held by Vik Bhatti from Beamly. Their challenge is that they sometimes have to scale massively in seconds – literally. That requires them to have machine images up and running in no time. They tackle this with Ansible, having the playbooks right on the images on one hand, and using Ansible to control their image build process on the other. The actual image builder is Packer, which uses Ansible to build part of the image.

As a result, down the line they have images ready to deploy and can extend their environment very, very quickly. Being able to respond that fast also allowed them to cut down hardware costs massively.

Final discussions, happy hour

The final panel dealt mainly with questions about an Open Source Tower (it will come eventually, but there is no fixed date) and similar topics. After that, everyone went on to enjoy drinks and a beautiful skyline.
AnsibleFest skyline and happy hour

Conclusion

In conclusion, the #AnsibleFest was a great success – in terms of the people I met as well as in terms of the technical discussions. I can’t wait to get my hands on the networking modules. I’d like to thank the people from Ansible for making this event possible, and of course my employer Red Hat for enabling me to attend.

[Short Tip] Debug Spamassassin within Amavisd

Filtering e-mail for spam and viruses can be done efficiently with Amavisd-New. Besides its own technologies to identify and filter out spam, it can also make use of Spamassassin and its results. However, since Amavisd starts Spamassassin itself, it is sometimes hard to debug when something is not working.

For example in a recent case I saw that the Bayes database was not used when checking for spam characteristics, although the database was properly trained with ham and spam.

Thus first I checked Spamassassin itself:

$ su -s /bin/bash mailuser -c "spamassassin -D -t < ExampleSpam.eml 2>&1"  | tee sa.out

That worked well: the Bayes database was queried and results were shown.

Next, I added $sa_debug = '1,all'; to the Amavisd configuration and ran Amavisd in debug mode:

$ amavisd -c /etc/amavisd/amavisd.conf debug

And that showed the problem: one of the Bayes files had wrong permissions. After fixing those, the filter ran properly again.
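
In case you hit the same problem, a quick check and fix might look like the following sketch – note that the Bayes path and the amavis user name are assumptions and depend on your distribution and configuration:

$ sudo ls -l /var/spool/amavisd/.spamassassin/bayes_*
$ sudo chown amavis:amavis /var/spool/amavisd/.spamassassin/bayes_*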

[Howto] Solaris 11 on KVM

solaris

Recently I had to test a few things on Solaris 11 and wondered how well it works virtualized with KVM. It does – with a few tweaks.

Preface

Testing various versions of operating systems is easy these days thanks to virtualization. However, I’m mainly used to Linux variants and hardly ever install any other kind of UNIX-based OS. Thus I was curious whether an installation of Solaris 11 on KVM/libvirt works.

For the test I actually used virt-manager since it provides neat defaults during the VM setup. But the same comments and lessons learned hold true for the command line tools as well.
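
For reference, a command line setup could look like the following sketch – treat the values, especially the --os-variant, as assumptions that depend on your ISO and libosinfo version:

$ sudo virt-install --name solaris11 --ram 4096 --vcpus 2 \
--cdrom /var/lib/libvirt/images/sol-11-text-x86.iso \
--disk size=20 --os-variant solaris11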

Setting up the VM

virt-manager usually does not provide Solaris as an operating system type by default in the VM setup dialog. You first have to click on “OS Type”, “Show all OS options” as shown here:
virt-manager Solaris guest picker

Note that a Solaris 11 VM should have at least 2 GB of RAM, otherwise the installation and booting might take very long or run into problems of their own.

The installation runs through – although quite some errors clutter the screen (see below).

Errors and problems

As soon as the machine is started several error messages are shown:

WARNING: /pci@0,0/pci1af4,1100@6,1 (uhci1): No SOF interrupts have been received, this USB UHCI host controller is unusable
WARNING: /pci@0,0/pci1af4,1100@6,2 (uhci2): No SOF interrupts have been received, this USB UHCI host controller is unusable

This shows that something is wrong with the interrupts and thus with the “hardware” of the machine – or at least with the way the guest machine discovers the hardware.

Additionally, even if DHCP is configured, the machine is unable to obtain the networking configuration. A fixed IP address and gateway do not help here, either. The host system might even report that it provides DHCP data, but the guest system keeps requesting them:

Dez 23 11:11:05 liquidat dnsmasq-dhcp[13997]: DHCPDISCOVER(virbr0) 52:54:00:31:31:4b
Dez 23 11:11:05 liquidat dnsmasq-dhcp[13997]: DHCPOFFER(virbr0) 192.168.122.205 52:54:00:31:31:4b
Dez 23 11:11:09 liquidat dnsmasq-dhcp[13997]: DHCPDISCOVER(virbr0) 52:54:00:31:31:4b
Dez 23 11:11:09 liquidat dnsmasq-dhcp[13997]: DHCPOFFER(virbr0) 192.168.122.205 52:54:00:31:31:4b
Dez 23 11:11:17 liquidat dnsmasq-dhcp[13997]: DHCPDISCOVER(virbr0) 52:54:00:31:31:4b
Dez 23 11:11:17 liquidat dnsmasq-dhcp[13997]: DHCPOFFER(virbr0) 192.168.122.205 52:54:00:31:31:4b
...

Also, when the machine is shutting down and ready to be powered off, the CPU usage spikes to 100 %.

The solution: APIC

The solution for the “hardware” problems mentioned above, and also for the networking trouble, is to deactivate an APIC feature inside the VM: x2APIC, an extension of Intel’s programmable interrupt controller. Some more details about the problem can be found in Red Hat Bugzilla entry #1040500.

To apply the fix, the virtual machine definition needs to be edited to disable the feature. The XML definition can be edited with the command sudo virsh edit, with the machine name as command line argument; the change needs to be made in the cpu section as shown below. Make sure the VM is stopped before making the changes.

$ sudo virsh edit krypton
...
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Broadwell</model>
    <feature policy='disable' name='x2apic'/>
  </cpu>
$ sudo virsh start krypton

After these changes Solaris does not report any interrupt problems anymore and DHCP works without flaws. Note however that the CPU still spikes at power off. If anyone knows a solution to that problem, I would be happy to hear about it and add it to this post.

[Howto] OpenSCAP – basics and how to use in Satellite

Open-SCAP logo

Security compliance policies are common in enterprise environments and must be evaluated regularly. This is best done automatically – especially when you are talking about hundreds of machines. The Security Content Automation Protocol provides the necessary standards around compliance testing – and OpenSCAP implements these standards in Open Source, with integration in tools like Satellite.

Background

Security can be ensured by various means. One of the processes in enterprise environments is to establish and enforce sets of default security policies to ensure that all systems at least follow the same IT baseline protection.

Part of such a process is to check the compliance of the affected systems regularly and document the outcome, positive or negative.

To avoid checking each system manually – repeating the same steps again and again – a defined method to describe policies and how to test them was developed: the Security Content Automation Protocol, SCAP. In simple words, SCAP is a protocol that describes how to write security compliance checklists. In the real world, the concept behind SCAP is a little bit more complicated, and it is worth reading through the home page to understand it.

OpenSCAP is a certified Open Source implementation of the Security Content Automation Protocol and enables users to run the mentioned checklists against Linux systems. It is developed in the broader ecosystem of the Fedora Project.

How to use OpenSCAP on Fedora, RHEL, etc.

Checking the security compliance of systems requires, first and foremost, a given set of compliance rules. In a real world environment the requirements of the given business would be evaluated and the necessary rules derived from them. For many industries there are also pre-defined rule sets.

For a start it is sufficient to utilize one of the existing rule sets. Luckily, the OpenSCAP packages in Fedora, Red Hat Enterprise Linux and related distributions ship with a predefined set of compliance checks.

So, first install the necessary software and compliance checks:

$ sudo dnf install scap-security-guide openscap-scanner

Check which profiles (checklists, more or less) are installed:

$ sudo oscap info /usr/share/xml/scap/ssg/content/ssg-fedora-ds.xml
Document type: Source Data Stream
Imported: 2015-10-20T09:01:27

Stream: scap_org.open-scap_datastream_from_xccdf_ssg-fedora-xccdf-1.2.xml
Generated: (null)
Version: 1.2
Checklists:
    Ref-Id: scap_org.open-scap_cref_ssg-fedora-xccdf-1.2.xml
        Profiles:
            xccdf_org.ssgproject.content_profile_common
        Referenced check files:
            ssg-fedora-oval.xml
                system: http://oval.mitre.org/XMLSchema/oval-definitions-5
Checks:
    Ref-Id: scap_org.open-scap_cref_ssg-fedora-oval.xml
No dictionaries.

Run a test with the available profile:

$ sudo oscap xccdf eval \
--profile xccdf_org.ssgproject.content_profile_common \
--report /tmp/report.html \
/usr/share/xml/scap/ssg/content/ssg-fedora-ds.xml

In this example, the report will be written to /tmp/report.html and roughly looks like this:

Report

If an entry in the report is clicked, more details are shown:

Details

The details are particularly interesting if a test fails: they contain rich information about the test itself – the rationale behind the compliance policy, to help auditors understand the severity of the failing test, as well as detailed technical information about what was actually checked, so that sysadmins can verify the test on their own. Also, linked identifiers provide further information like CVEs and other sources.
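
By the way, oscap can also generate a remediation script for failing rules from the same content. A sketch with the profile used above – newer oscap versions accept the data stream directly, older ones may require the plain XCCDF file:

$ sudo oscap xccdf generate fix \
--profile xccdf_org.ssgproject.content_profile_common \
--output /tmp/fix.sh \
/usr/share/xml/scap/ssg/content/ssg-fedora-ds.xml

As always with automated remediation: review the script before running it.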

Usage in Satellite

Red Hat Satellite, Red Hat’s system management solution to deploy and manage RHEL instances, has the ability to integrate OpenSCAP. The same is true for Foreman, one of the Open Source projects Satellite is based upon.

The OpenSCAP packages need to be installed separately on a Satellite server, but the procedure is fairly simple:

$ sudo yum install ruby193-rubygem-foreman_openscap puppet-foreman_scap_client -y
...
$ sudo systemctl restart httpd && sudo systemctl restart foreman-proxy

Afterwards, SCAP policies can be configured directly in the web interface, under Hosts -> Policies:

Satellite-SCAP

Beforehand you might want to check whether proper SCAP content is already provided under Hosts -> SCAP Contents. If no content is shown, change the Organization to “Any Context” – there is currently a bug in Satellite making this step necessary.

When a policy has been created, hosts need to be assigned to the policy. Also, the given hosts should be supplied with the appropriate Puppet modules:

SCAP-Puppet

Due to the Puppet class the given host will be configured automatically, including the SCAP content and all necessary packages. There is no need to perform any task on the host itself.

However, SCAP policies are usually checked once a week, and shortly after installation the admin will probably want to test the new capabilities right away. Thus there is also a manual way to start a SCAP run on the hosts. First, Puppet must be triggered to run at least once to download the new module, install the packages, and so on. Afterwards, the configuration must be checked for the internal policy id, and the OpenSCAP client needs to be run with that id as argument:

$ sudo puppet agent -t
...
$ sudo cat /etc/foreman_scap_client/config.yaml
...
# policy (key is id as in Foreman)
2:
  :profile: 'xccdf_org.ssgproject.content_profile_stig-rhel7-server-upstream'
...
$ sudo foreman_scap_client 2
DEBUG: running: oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_stig-rhel7-server-upstream --results-arf /tmp/d20151211-2610-1h5ysfc/results.xml /var/lib/openscap/content/96c2a9d5278d5da905221bbb2dc61d0ace7ee3d97f021fccac994d26296d986d.xml
DEBUG: running: /usr/bin/bzip2 /tmp/d20151211-2610-1h5ysfc/results.xml
Uploading results to ...

If a Capsule is involved as well, the proper command to upload the report to the central server is smart-proxy-openscap-send.

After these steps Satellite provides a good overview of all reports, even on the dashboard:

SCAP-Reports

As you see: my demo system is certainly out of shape! =D

Conclusion

SCAP is a very convenient and widely approved way to evaluate security compliance policies on given systems. OpenSCAP is not only compatible with the SCAP standards but even a certified implementation; it also provides appealing reports which can be used to document the compliance of systems while at the same time incorporating enough information to help sysadmins do their job.

Last but not least, the integration with Satellite is quite nice: proper checklists are already provided for RHEL and others, Puppet makes sure everything just falls into place, and there is a neat integration into the management interface, which for example offers RBAC to enable auditors to access the reports.

So: if you are dealing with security compliance policies in your IT environment, definitely check out OpenSCAP. And if you have RHEL instances, take your Satellite and start using it – right away!!

[Howto] Accessing CloudForms updated REST API with Python

Red Hat CloudForms Logo

CloudForms comes with a REST API which was updated to version 2.0 in CloudForms 3.2. It covers substantially more functions compared to v1.0 and also offers feature parity with the old SOAP interface. This post holds a short introduction to calling the REST API via Python.

Introduction

Red Hat CloudForms is a manager to “manage virtual, private, and hybrid cloud infrastructures” – it provides a single interface to manage OpenStack, Amazon EC2, Red Hat Enterprise Virtualization Management, VMware vCenter and Microsoft System Center Virtual Machine Manager. Simply said, it is a manager of managers. While CloudForms focusses on virtual environments of all kinds, its abilities are not limited to simple deployment and starting/stopping of VMs, but cover the entire business process and workflow surrounding larger deployments of virtual machines in large or distributed data centers. Tasks can be highly automated, chargeback and optimization features enable admins to put workloads where they make the most sense, an almost unbelievable amount of reports helps operations and pleases management, and entire service catalogs can help provision not single VMs, but setups of interrelated instances. And of course, as with all Red Hat products and technologies, CloudForms is fully Open Source and based upon a community project: ManageIQ.

One of the major use cases of CloudForms is to keep an overview of all the various types of “clouds” used in production and the VMs running on them during the day to day work. Many companies have VMware instances in their data centers, but also run a second virtual environment like RHEV or Hyper-V. Additionally they use public cloud offerings, for example to cover the load during peak times, or integrate OpenStack to provide their own, private cloud. In such cases CloudForms is the one interface to rule them all, one interface to manage them. 😉

CloudForms itself can be managed via the web interface, but also via the API. Up until recently, the focus was on a SOAP API. Since Ruby on Rails – the base of CloudForms – will not support SOAP anymore in the future, the developers decided to switch to a REST API. With CloudForms 3.2 this move is completed insofar as the REST API has reached feature parity. The API offers quite a lot of functions – besides gathering information it can also be used to trigger actions, define or delete services, etc.

Python examples

In general the API can be called by any REST compatible tool – which means by almost any HTTP client. For my tests with the new API I decided to use Python, more specifically IPython, together with the requests and json libraries. All examples are wrapped in a json.dumps statement to prettify the output.
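
To follow along, the examples assume that both libraries are imported:

import json
import requests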

The REST authentication is provided by default HTTP means. The normal way is to authenticate once, get a token in return and use the token for all further calls. The default API URL is https://cf.example.com/api, which shows which collections can be queried via the API, for example: vms, clusters, providers, etc. Please note that the role-based access control of CloudForms is also present in the API: you can only query collections and modify objects when you have proper rights to do so.

current_token=json.loads(requests.get('https://cf.example.com/api/auth',auth=("admin",'password')).text)['auth_token']
print json.dumps(json.loads(requests.get('https://cf.example.com/api',headers={'X-Auth-Token' : current_token}).text),sort_keys=True,indent=4,separators=(',', ': '))
{
    "collections": [
        {
            "description": "Automation Requests",
            "href": "https://cf.example.com/api/automation_requests",
            "name": "automation_requests"
        },
...

This shows that the basic access works. Next we want to query a certain collection, for example the vms:

print json.dumps(json.loads(requests.get('https://cf.example.com/api/vms',headers={'X-Auth-Token' : current_token}).text),sort_keys=True,indent=4,separators=(',', ': '))
...
    "count": 2,
    "name": "vms",
    "resources": [
        {
            "href": "https://cf.example.com/api/vms/602000000000007"
        },
        {
            "href": "https://cf.example.com/api/vms/602000000000006"
        }
    ],
    "subcount": 2
}

While all VMs are listed, the information shown above is not enough to tell which VM is actually which: at least the name should be shown. So we need to expand the information about each VM and additionally restrict the shown attributes to name and, for example, vendor – via ?expand=resources&attributes=name,vendor:

print json.dumps(json.loads(requests.get('https://cf.example.com/api/vms?expand=resources&attributes=name,vendor',headers={'X-Auth-Token' : current_token}).text),sort_keys=True,indent=4,separators=(',', ': '))
...
"resources": [
    {
        "href": "https://cf.example.com/api/vms/602000000000007",
        "id": 602000000000007,
        "name": "my-vm",
        "vendor": "redhat"
    },
    {
        "href": "https://cf.example.com/api/vms/602000000000006",
        "id": 602000000000006,
        "name": "myvm-clone",
        "vendor": "redhat"
    }
],

This works of course for 2 VMs, but not if you manage 20,000. Thus it’s better to use a filter: &filter[]='name="my-vm"'. Since filters use a lot of quotation marks, it is best to define a string containing the filter argument and afterwards add it to the URL:

filter="name='my-vm'"
print json.dumps(json.loads(requests.get('https://cf.example.com/api/vms?expand=resources&attributes=name&filter[]='+filter,headers={'X-Auth-Token' : current_token}).text),sort_keys=True,indent=4,separators=(',', ': '))
...
"count": 2,
"name": "vms",
"resources": [
    {
        "href": "https://cf.example.com/api/vms/602000000000007",
        "id": 602000000000007,
        "name": "my-vm"
    }
],
"subcount": 1

Note the subcount, which shows how many VMs with the given name were found. If you want to combine more than one filter, simply add them to the URL: &filter[]='name="my-vm"'&filter[]='power_state=on'. In Python, this could look like the sketch below.
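
A sketch following the same pattern as above – only the variable names filter_name and filter_state are made up for illustration:

filter_name="name='my-vm'"
filter_state="power_state=on"
print json.dumps(json.loads(requests.get('https://cf.example.com/api/vms?expand=resources&attributes=name,power_state&filter[]='+filter_name+'&filter[]='+filter_state,headers={'X-Auth-Token' : current_token}).text),sort_keys=True,indent=4,separators=(',', ': '))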

With the href of the correct VM you can shut it down: use the HTTP POST method and provide a JSON payload calling the action “stop”.

print json.dumps(json.loads(requests.post('https://cf.example.com/api/vms/602000000000007',headers={'X-Auth-Token' : current_token},data=json.dumps({'action':'stop'})).text),sort_keys=True,indent=4,separators=(',', ': '))
{
    "href": "https://cf.example.com/api/vms/602000000000007",
    "message": "VM id:602000000000007 name:'my-vm' stopping",
    "success": true,
    "task_href": "https://cf.example.com/api/tasks/602000000000097",
    "task_id": 602000000000097
}

If you want to call an action on more than one instance, change the href to the corresponding collection and include the actual hrefs of the VMs in the payload in a resources array:

print json.dumps(json.loads(requests.post('https://cf.example.com/api/vms',headers={'X-Auth-Token' : current_token},data=json.dumps({'action':'stop', 'resources': [{'href':'https://cf.example.com/api/vms/602000000000007'},{'href':'https://cf.example.com/api/vms/602000000000006'}]})).text),sort_keys=True,indent=4,separators=(',', ': '))
"results": [
    {
        "href": "https://cf.example.com/api/vms/602000000000007",
        "message": "VM id:602000000000007 name:'my-vm' stopping",
        "success": true,
        "task_href": "https://cf.example.com/api/tasks/602000000000104",
        "task_id": 602000000000104
    },
    {
        "href": "https://cf.example.com/api/vms/602000000000006",
        "message": "VM id:602000000000006 name:'myvm-clone' stopping",
        "success": true,
        "task_href": "https://cf.example.com/api/tasks/602000000000105",
        "task_id": 602000000000105
    }
]

The last example shows how multiple calls to the API can be connected with each other: we call the API to scan a VM, get a task id back and query that task to see if it was called successfully. So first we call the API to start the scan:

print json.dumps(json.loads(requests.post('https://cf.example.com/api/vms/602000000000006',headers={'X-Auth-Token' : current_token},verify=False,data=json.dumps({'action':'scan'})).text),sort_keys=True,indent=4,separators=(',', ': '))
{
    "href": "https://cf.example.com/api/vms/602000000000006",
    "message": "VM id:602000000000006 name:'my-vm' scanning",
    "success": true,
    "task_href": "https://cf.example.com/api/tasks/602000000000106",
    "task_id": 602000000000106
}

Next, we take the given id 602000000000106 and query the state:

print json.dumps(json.loads(requests.get('https://cf.example.com/api/tasks/602000000000106',headers={'X-Auth-Token' : current_token},verify=False).text),sort_keys=True,indent=4,separators=(',', ': '))
{
    "created_on": "2015-08-25T15:00:16Z",
    "href": "https://cf.example.com/api/tasks/602000000000106",
    "id": 602000000000106,
    "message": "Task completed successfully",
    "name": "VM id:602000000000006 name:'my-vm' scanning",
    "state": "Finished",
    "status": "Ok",
    "updated_on": "2015-08-25T15:00:20Z",
    "userid": "admin"
}

However, please note that “Finished” here means that the call of the task was successful – not necessarily that the task itself achieved the desired outcome. For that you would have to query the VM itself.
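
For example, a query like the following sketch could be used – note that the attribute names, especially last_scan_on, are assumptions and might differ between CloudForms versions:

print json.dumps(json.loads(requests.get('https://cf.example.com/api/vms/602000000000006?attributes=name,power_state,last_scan_on',headers={'X-Auth-Token' : current_token},verify=False).text),sort_keys=True,indent=4,separators=(',', ': '))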

Final words

The REST API of CloudForms offers quite some useful functions to integrate CloudForms with your own programs, scripts and applications. The REST API documentation is also quite extensive, and the community documentation for ManageIQ has a lot of API usage examples.

So if you used to call your CloudForms via SOAP you will be happy to find the new REST API in CloudForms 3.2. If you never used the API you might want to start today – as you have seen, it’s quite simple to get results quickly.

[Short Tip] show processes accessing a file: fuser & lsof

Sometimes it’s good to know which processes access certain files, paths or devices. Think of debugging, but also of files blocked by currently running processes. There are two commonly used tools to find out which process accesses which file: lsof and fuser.

The best known tool is arguably lsof – it lists all open files, and the output can be limited with options and arguments. Typical commands are:

$ lsof -p $PROCESSID
$ lsof $FILENAME
$ lsof |grep $EXPRESSION

The other well known tool is fuser. Compared to lsof it works the other way around: it lists the processes accessing a given target. The target must always be supplied; it is not optional. Typical commands are:

$ fuser $FILENAME
$ fuser -mva $MOUNTPOINT
$ fuser -k $FILENAME

The last command kills all processes accessing the given file directly.
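
For instance, to find out what blocks a mount point from being unmounted, both tools can be used – /mnt is just an example here:

$ sudo fuser -mva /mnt
$ sudo lsof +D /mnt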

In day-to-day work fuser feels slightly less bloated and is handier for simply killing processes right away. On the other hand, if you are used to lsof there is hardly any reason to switch to fuser.