[Howto] Automated DNS resolution for KVM/libvirt guests with a local domain [Update]

I often run demos on my laptop with the help of libvirt. Managing 20+ machines that way is annoying when you have no DNS resolution for those. Luckily, with libvirt and NetworkManager, that can be easily solved.

The problem

Imagine you want to test something in a demo setup with 5 machines. You create the necessary VMs in your local KVM/libvirt environment – but you cannot address them properly by name. With 5 machines you also need to write down the appropriate IP addresses – that’s hardly practical.

It is possible to create static entries in the libvirt network configuration – however, that is still very inflexible, difficult to automate and only works for name resolution inside the libvirt environment. When you want to ssh into a running VM from the host, you again have to look up the IP.

Name resolution in the host network would be possible by additionally adding each entry to /etc/hosts. But that would mean managing two lists at the same time – not automated, far from dynamic, and very cumbersome.

The solution

Luckily, there is an elegant solution: libvirt comes with its own built-in DNS server, dnsmasq. Configured properly, it can serve DHCP and DNS to the guests of a previously defined domain. Additionally, NetworkManager can be configured to run its own dnsmasq instance to resolve DNS entries – forwarding requests to the libvirt instance if needed.

That way, the only thing that has to be done is setting a proper host name inside the VMs. Everything else just works out of the box (with a recent Linux, see below).

The solution presented here is based on a great post by Dominic Cleal.

Configuring libvirt

First of all, libvirt needs to be configured. Given that the network “default” is assigned to the relevant VMs, the configuration should look like this:

$ sudo virsh net-dumpxml default
<network connections='1'>
  <name>default</name>
  <uuid>158880c3-9adb-4a44-ab51-d0bc1c18cddc</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:fa:cb:e5'/>
  <domain name='qxyz.de' localOnly='yes'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.128' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>

You can modify the network for example with the command virsh net-edit default. The interesting part is below the mac address: a local domain is defined and marked as localOnly. That domain will be the authoritative domain for the relevant VMs, and libvirt will configure dnsmasq to act as a resolver for that domain. The localOnly attribute makes sure that DNS requests regarding that domain are never forwarded upstream. This is important to avoid forwarding loops (more on that below).

Note, however: as mentioned in the comment by taurus, your domain should not be named “local” because this might cause trouble in relation to mDNS.
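
To double-check that the domain made it into the generated dnsmasq configuration, you can peek at the files libvirt writes for the network – on many distributions they end up under /var/lib/libvirt/dnsmasq/, though the exact path may differ depending on the libvirt version:

$ sudo grep -E 'domain|local' /var/lib/libvirt/dnsmasq/default.conf

If the configuration was picked up, you should see domain= and – with localOnly enabled – local= entries referring to qxyz.de.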

Configuring the VM guests

Once the domain is set, the host names of the guests need to be defined. With recent Linux releases this is as simple as setting the host name:

$ sudo hostnamectl set-hostname neon.qxyz.de

There is no need to enter the host name anywhere else: the command above takes care of that. And the default configuration of DHCP clients of recent Linux releases sends this host name together with the DHCP request – dnsmasq automatically picks the host name up if the domain matches.

If you are on a Linux where the hostnamectl command does not work, or where the DHCP client does not send the host name with the request – switch to a recent version of Fedora or RHEL 😉

That is because on such systems the host name must be set manually. To do so, follow the documentation of your OS and make sure that the resolution of the name works locally. Additionally, the DHCP configuration must be altered to send the host name along with the request. For example, in older RHEL and Fedora versions the option

DHCP_HOSTNAME=neon.qxyz.de

has to be added to /etc/sysconfig/network-scripts/ifcfg-eth0.
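
On guests that are managed by NetworkManager instead of the legacy network scripts, something along the following lines should achieve the same – a sketch only: the connection name eth0 is an assumption and has to be replaced by the actual name shown by nmcli connection show:

$ sudo nmcli connection modify eth0 ipv4.dhcp-send-hostname yes ipv4.dhcp-hostname neon.qxyz.de
$ sudo nmcli connection up eth0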

At this point automatic name resolution between VMs should already work after a restart of libvirt.
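
Restarting the libvirt network and querying its dnsmasq directly from the host is a quick way to test this – a sketch using the values from above; note that running VMs may need their network interfaces reattached after the restart:

$ sudo virsh net-destroy default && sudo virsh net-start default
$ host neon.qxyz.de 192.168.122.1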

Configuring NetworkManager

The last missing piece is the configuration of the actual KVM/libvirt host, so that the local domain, here qxyz.de, is properly resolved. Adding another name server to /etc/resolv.conf might work for a workstation with a fixed network connection, but certainly does not work for laptops which have changing network connections and DNS servers all the time. In such cases NetworkManager is often used anyway, so we can take advantage of its capabilities.

First of all, NetworkManager needs to start its own version of dnsmasq. That can be achieved with a simple configuration option:

$ cat /etc/NetworkManager/conf.d/localdns.conf 
[main]
dns=dnsmasq

This second dnsmasq instance works out of the box. All DNS requests are automatically forwarded to the DNS servers NetworkManager acquired, for example via DHCP. The only notable difference is the entry in /etc/resolv.conf:

# Generated by NetworkManager
search whatever
nameserver 127.0.0.1

As a second step, NetworkManager's dnsmasq instance needs to know that all requests regarding qxyz.de have to be forwarded to the libvirt dnsmasq instance. This can be achieved with another rather simple configuration option, using the domain and the IP from the libvirt network configuration at the top of this blog post:

$ cat /etc/NetworkManager/dnsmasq.d/libvirt_dnsmasq.conf 
server=/qxyz.de/192.168.122.1

And that's already it. Restart NetworkManager and everything should work fine.
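
For the sake of completeness, the restart and a quick test from the host could look like this – the query goes to 127.0.0.1, where the NetworkManager dnsmasq forwards it to the libvirt instance:

$ sudo systemctl restart NetworkManager
$ host neon.qxyz.de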

As a side note: if the attribute localOnly had not been set in the libvirt network configuration, queries for unknown qxyz.de entries would be forwarded from the libvirt dnsmasq to the NetworkManager dnsmasq – which would forward them right back to the libvirt dnsmasq, and so on. That would quickly overload both dnsmasq instances, resulting in error messages like:

dnsmasq[15426]: Maximum number of concurrent DNS queries reached (max: 150)

Summary

With these rather few and simple changes a local domain is established for both guest and host, making it easy to resolve their names everywhere. There is no need to maintain one or even two lists of static IP entries, everything is done automatically.

For me this is a huge relief, making it much easier in the future to set up demo and test environments. Also, it looks much nicer during a demo if you have FQDNs and not IP addresses. I can only recommend this setup to everyone who often uses libvirt/KVM on a local machine for test/demo environments.

[Howto] Solaris 11 on KVM

Recently I had to test a few things on Solaris 11 and wondered how well it works virtualized with KVM. It does – with a few tweaks.

Preface

Testing various different versions of operating systems is easy these days thanks to virtualization. However, I’m mainly used to Linux variants and hardly ever install any other kind of UNIX based OS. Thus I was curious if an installation of Solaris 11 on KVM / libvirt works.

For the test I actually used virt-manager since it does provide neat defaults during the VM setup. But the same comments and lessons learned are true for the command line tool as well.

Setting up the VM

virt-manager usually does not provide Solaris as an operating system type by default in the VM setup dialog. You first have to click on “OS Type”, “Show all OS options” as shown here:
virt-manager Solaris guest picker

Note that a Solaris 11 guest should have at least 2 GB of RAM, otherwise the installation and also booting might take very long or run into problems of their own.

The installation runs through – although quite a few errors clutter the screen (see below).

Errors and problems

As soon as the machine is started several error messages are shown:

WARNING: /pci@0,0/pci1af4,1100@6,1 (uhci1): No SOF interrupts have been received, this USB UHCI host controller is unusable
WARNING: /pci@0,0/pci1af4,1100@6,2 (uhci2): No SOF interrupts have been received, this USB UHCI host controller is unusable

This shows that something is wrong with the interrupts and thus with the “hardware” of the machine – or at least with the way the guest machine discovers the hardware.

Additionally, even if DHCP is configured, the machine is unable to obtain the networking configuration. A fixed IP address and gateway do not help here, either. The host system might even report that it provides DHCP data, but the guest system continues to request these:

Dez 23 11:11:05 liquidat dnsmasq-dhcp[13997]: DHCPDISCOVER(virbr0) 52:54:00:31:31:4b
Dez 23 11:11:05 liquidat dnsmasq-dhcp[13997]: DHCPOFFER(virbr0) 192.168.122.205 52:54:00:31:31:4b
Dez 23 11:11:09 liquidat dnsmasq-dhcp[13997]: DHCPDISCOVER(virbr0) 52:54:00:31:31:4b
Dez 23 11:11:09 liquidat dnsmasq-dhcp[13997]: DHCPOFFER(virbr0) 192.168.122.205 52:54:00:31:31:4b
Dez 23 11:11:17 liquidat dnsmasq-dhcp[13997]: DHCPDISCOVER(virbr0) 52:54:00:31:31:4b
Dez 23 11:11:17 liquidat dnsmasq-dhcp[13997]: DHCPOFFER(virbr0) 192.168.122.205 52:54:00:31:31:4b
...

Also, when the machine is shutting down and ready to be powered off, the CPU usage spikes to 100 %.

The solution: APIC

The solution for the “hardware” problems mentioned above and also for the networking trouble is to deactivate an APIC feature inside the VM: x2APIC, an extension of Intel’s programmable interrupt controller. Some more details about the problem can be found in Red Hat Bugzilla entry #1040500.

To apply the fix, the virtual machine definition needs to be edited to disable the feature. The XML definition can be edited with the command sudo virsh edit, with the machine name as command line argument; the change needs to be made in the cpu section as shown below. Make sure the VM is stopped before the changes are made.

$ sudo virsh edit krypton
...
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Broadwell</model>
    <feature policy='disable' name='x2apic'/>
  </cpu>
$ sudo virsh start krypton

After this change Solaris does not report any interrupt problems anymore and DHCP works without flaws. Note, however, that the CPU still spikes at power off. If anyone knows a solution to that problem I would be happy to hear about it and add it to this post.
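
To quickly check that the change stuck, the machine definition can be dumped and the cpu section inspected – a simple sketch:

$ sudo virsh dumpxml krypton | grep -A 3 '<cpu'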

[Short Tip] dnsmasq and /etc/hosts

In case you do simple network tests with KVM virtual machines on your host, you might want to add some host names and IPs to /etc/hosts. However, that may not work right away: KVM – or rather libvirt – ignores new entries in /etc/hosts. That is because dnsmasq reads the entries of the file only once: at startup. So you need to restart dnsmasq, or simply send it the SIGHUP signal:

killall -HUP dnsmasq
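
Alternatively, entries can be added to libvirt’s own dnsmasq at runtime with virsh net-update, which avoids touching /etc/hosts at all – a sketch, assuming the network “default”; the IP and the host name “example” are only placeholders:

$ sudo virsh net-update default add dns-host \
    '<host ip="192.168.122.45"><hostname>example</hostname></host>' \
    --live --config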

[Howto] Vagrant: libvirt, (Multi-)Multi-Machine, Ansible and Puppet

Vagrant is a tool to create and configure virtual development environments. As a wrapper around common virtual machine solutions it helps bring up and dispose of a virtual machine in no time. I played around with it – and, well, got a bit carried away…

What all the fuss is about… – Vagrant Basics

Simply said, Vagrant is nothing more than a wrapper around your average virtualization solution. While it was mainly developed for VirtualBox, these days it also supports libvirt/KVM, Amazon, VMware and others. The aim is to have one single description file to bring up and tear down an entire virtual machine with simple commands. That way each developer or tester can use the same description file and thus the same environment – Vagrant wants to fight the “works on my machine” excuse. Also, since bringing up and disposing of images is a matter of seconds, all testing and development can be done against clean environments.

Another big feature is that Vagrant seamlessly integrates with configuration management systems like Puppet, Chef, Ansible and others. Thus the deployment of the virtual development machines can be done with the same configuration scripts you use for your production environment.

The Vagrant VMs are always spawned from a base image, which is in turn derived from a so-called “box”. There are many of these boxes freely available on the internet. Most of them have the same features: only basic packages installed plus Puppet, Chef and similar clients, a user vagrant with the password vagrant, sudo rights and an insecure SSH public key in .ssh/authorized_keys.

Why? – The motivation

The question quickly arises why anyone should use Vagrant. After all, virsh already is a proper abstraction layer for qemu and kvm and provides quite some functionality. The rest can be done manually or by helper scripts: bringing up a VM, installing configuration management, throwing machines away, bringing new ones up, etc.

However, Vagrant is much simpler than virsh, and also offers a simple but reasonably sane configuration file for it. And it’s easier to use Vagrant, which is continuously improved, than to develop and maintain your own set of scripts.

There are of course other solutions – Docker is rather often mentioned in this regard. I have not really looked at Docker or other alternatives in detail, so I won’t do a comparison right now. Maybe I will find time in the future to really dive into Docker and then compare them – if possible at all.

Can we start now? – Basic usage

Vagrant was originally developed around VirtualBox. Since I am used to libvirt, I decided to test Vagrant on libvirt. James has written an incredibly helpful howto describing what to do and where users should be careful.

The general steps on Fedora are as follows (similar on other distributions like Debian, etc.); a consolidated command sketch follows the list:

  • get a recent Vagrant rpm and install it
  • throw in some dependencies for later: libvirt-devel, libxslt-devel, libxml2-devel, virsh, qemu-img
  • call vagrant and install a plugin to convert VirtualBox images to libvirt ones: vagrant plugin install vagrant-mutate
  • get the first Vagrant box: vagrant box add precise32 http://files.vagrantup.com/precise32.box – the default example, it is a VirtualBox image
  • transform it to a VMDK2 image, because Fedora’s qemu right now cannot handle VMDK3 images: wget https://raw.github.com/erik-smit/one-liners/master/qemu-img.vmdk3.hack.sh, chmod u+x qemu-img.vmdk3.hack.sh, ./qemu-img.vmdk3.hack.sh ~/.vagrant.d/boxes/precise32/virtualbox/box-disk1.vmdk
  • mutate the VirtualBox image to libvirt: vagrant mutate precise32 libvirt
  • initiate a Vagrant project: vagrant init precise32, as you see the Vagrant configuration file called Vagrantfile is created
  • launch your first Vagrant box: vagrant up
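
Put together, the whole sequence roughly looks like this – a condensed sketch of the steps above (after the Vagrant rpm and the listed dependencies are installed), not a copy-and-paste script:

$ vagrant plugin install vagrant-mutate
$ vagrant box add precise32 http://files.vagrantup.com/precise32.box
$ wget https://raw.github.com/erik-smit/one-liners/master/qemu-img.vmdk3.hack.sh
$ chmod u+x qemu-img.vmdk3.hack.sh
$ ./qemu-img.vmdk3.hack.sh ~/.vagrant.d/boxes/precise32/virtualbox/box-disk1.vmdk
$ vagrant mutate precise32 libvirt
$ vagrant init precise32
$ vagrant up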

And you are done! You’ve created your first VM with Vagrant. You can get access via vagrant ssh:

$ vagrant ssh
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic-pae i686)

 * Documentation:  https://help.ubuntu.com/
Welcome to your Vagrant-built virtual machine.
Last login: Fri Sep 14 06:22:31 2012 from 10.0.2.2
vagrant@precise32:~$

The command drops to a normal shell on the machine as the user “vagrant”.

And now comes the beauty of Vagrant: with a single vagrant destroy you can get rid of the entire VM including the used disk space. If you want to try it again: vagrant up, box goes up, vagrant destroy, box goes down. Box goes up, box goes down. Box goes up, box goes down.

Each time all modifications to the VM are lost, thus you can ensure all your changes in configuration management and development work against a clean environment. And it only takes half a minute to bring up the machine!

By the way, before you get too experimental: your current working directory is exported as an NFS share. Thus exchanging files between host and guest is rather easy. But it also means that you should not try rm -rf / on a Vagrant machine.

Got problems? – Troubleshooting

What, already? Well… In that case, launch virt-manager and look at the machine. After all, Vagrant is just a wrapper!

For example, if DHCP fails or NFS times out, there might be problems with the firewall rules of the host machine. The way to allow the proper connections for recent Fedora versions is explained in Vagrant issue #2447. Basically, the Vagrant libvirt network bridge should be assigned to the internal zone, and services like NFS, DHCP, mountd and rpc should be allowed there for both TCP and UDP.
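
On a firewalld based host this roughly translates to the following commands – a sketch only: the bridge name virbr1 is an assumption and depends on your Vagrant libvirt network, and the exact service list may vary:

$ sudo firewall-cmd --permanent --zone=internal --change-interface=virbr1
$ sudo firewall-cmd --permanent --zone=internal --add-service=nfs --add-service=mountd --add-service=rpc-bind --add-service=dhcp
$ sudo firewall-cmd --reload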

Also, if the password prompt to authorize access to the virtual machine management comes up all the time, PolicyKit rules can permanently grant the proper rights. James’ blog post mentioned above explains the details.

And in case each machine gets two adapters with two IP addresses… well, that happened to me as well, and so far I have no solution. I am happy for any help regarding that topic.

Storage! – Where to put the images and boxes

Running virtual machines eats your storage. Usually, the boxes are managed in ~/.vagrant.d/boxes, while the actual VM templates and the machines themselves are stored in the default storage of the default virtual machine provider. In this case that is the default storage of libvirt.

It makes sense to dedicate an extra storage pool, for example on an extra partition, to Vagrant. That can be achieved by creating a storage pool in libvirt and telling Vagrant about it in the Vagrantfile. The “provider” section is the right place for that:

# Provider-specific configuration
config.vm.provider :libvirt do |libvirt|
    libvirt.driver = "qemu"
    libvirt.connect_via_ssh = false
    libvirt.storage_pool_name = "vm-images"
end

“Provider” is Vagrant’s name for virtualization backends like VirtualBox, libvirt and so on. In this example the provider libvirt is enforced in case other providers are installed on the host as well. Also, the storage pool name, “vm-images”, is given.
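
The pool itself can be created with virsh before Vagrant is started – a minimal sketch, assuming a directory based pool under /srv/vm-images:

$ sudo virsh pool-define-as vm-images dir --target /srv/vm-images
$ sudo virsh pool-build vm-images
$ sudo virsh pool-start vm-images
$ sudo virsh pool-autostart vm-images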

We want more! – Getting other boxes

In the above example an Ubuntu Precise machine was used. But it might rather well be that someone would like to use other operating systems or versions as a base. Quite some are listed at vagrantbox.es. Also, the Puppet project provides some useful boxes, even “clean” ones without any further additions installed.

However, using boxes from others is of course a security risk. But there is a way to build boxes yourself. I found James’ Makefile very handy: it uses virt-builder from the libguestfs project to build a CentOS 6 machine. Of course, using images from libguestfs only reduces your security risk; it would be best to build your own images from scratch.

I built my CentOS 6 images using the above mentioned Makefile with two small adjustments to avoid problems with SSH fingerprints and for better Ansible integration.

One, one-two, one-two-three! – Multi machines

Vagrant makes fun already – but things get more interesting with the possibility to start more than one machine. The Vagrant Documentation about multi-machines shows how further machines can be added – and named – in the Vagrantfile:

  config.vm.define "staging" do |staging|
    staging.vm.network private_network, ip: "192.168.121.101"
  end

  config.vm.define "prod" do |prod|
    prod.vm.network private_network, ip: "192.168.121.111"
  end

In this example two machines are now available: one named “staging” and one called “prod”, both with different IP addresses. Now all the Vagrant commands like up and ssh need the name of the machine to know on which machine they should act: vagrant up staging, vagrant ssh staging and vagrant destroy staging.

If no machine name is given, tasks like up and destroy are run on all machines at the same time, in parallel. That can lead to problems like frozen machines, for reasons unknown to me. In such cases a workaround is to start the machines with the option --no-parallel.
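
So bringing up all machines one after the other is simply:

$ vagrant up --no-parallel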

Do you YAML? – Defining machines in external files

Being excited by the simplicity of Vagrant I got playful – and decided that specifying machines in the Vagrantfile is not very handy. Also, since I have no idea of Ruby, I am bound to make errors when I have to alter the file often. Thus I decided to specify the machines in a YAML file, including name and IP. I picked YAML because I know it from Hiera/Puppet, Ansible and others. The file looks like the following:

---
- name: prod
  ip: 192.168.121.101
  environment: prod
- name: staging
  ip: 192.168.121.102
  environment: staging

This must be read into Vagrant – and luckily the entire script to read in these YAML data can be placed in the Vagrantfile itself:

  require 'yaml'
  servers = YAML.load_file('servers.yaml')

  servers.each do |servers|
      config.vm.define servers["name"] do |serv|
          serv.vm.network "private_network", ip: servers["ip"]
      end 
  end 

That’s it already. Now Vagrant can run as many machines as are specified in the YAML file.

Multi multi multi – Specifying multi-machines with different boxes

But that wasn’t enough – I work in two worlds, Debian/Ubuntu and Fedora/CentOS, and thus needed to be able to spawn VMs from both CentOS and Ubuntu images. So I added the box name to the YAML file: “centos64” is the base box for a recent CentOS, and “saucy64” the base box for the latest Ubuntu release.

- name: cstaging
  box: centos64
  ip: 192.168.121.102
- name: ustaging
  box: saucy64
  ip: 192.168.121.150

The names need to be different so that Vagrant can differentiate between the machines. In the above outlined example the names are prefixed with “u” for the Ubuntu machines and “c” for the CentOS machines.

The source code to read in these details in the Vagrantfile is:

  servers.each do |servers|
      config.vm.define servers["name"] do |serv|
          serv.vm.box = servers["box"]
          serv.vm.network "private_network", ip: servers["ip"]
      end 
  end

Do what I want! – Configuration management

As mentioned in the beginning, the real strength of Vagrant shines when it comes to integrating configuration management: imagine a setup where all developers work only on the main Git repositories of the configuration management and the application. When they want to see if their changes work, they just fire up Vagrant: upon vagrant up development a new VM is deployed, all configuration and application code of the stage “development” is pulled and installed automatically, and the developer just checks if everything is alright. Afterwards, the developer destroys the VM and pushes the just-tested changes to the staging repo.

To make that come true we need to integrate configuration management with Vagrant. By the way: Vagrant speaks of “provisioning” here – not the best naming, but you have to live with it. Anyway, as mentioned there are multiple ways to integrate configuration management. Puppet alone can be run via puppet apply or as a real Puppet client, and multiple configuration management systems can even be combined and run after each other. Here I will only shed some light on integrating either Ansible or Puppet Apply.

The decision which machine is to be managed by which solution will – of course – be done in the YAML file:

- name: cclean
  box: centos64
  ip: 192.168.121.102
  prov: ansible
  environment: staging
- name: uclean
  box: saucy64
  ip: 192.168.121.150
  prov: puppet
  environment: common

The CentOS machines will be managed by Ansible, the Ubuntu ones by Puppet. Also, please note that additional variables for stages are defined which might come in handy later on.

Simple and fast – integrating Ansible

I like Ansible. It’s really fast and easy to deploy. The same is true for its integration with Vagrant:

  servers.each do |servers|
      config.vm.define servers["name"] do |serv|
          serv.vm.box = servers["box"]
          serv.vm.network "private_network", ip: servers["ip"]
          if servers["prov"] == "ansible"
              serv.vm.provision "ansible" do |ansible|
                  ansible.playbook = "ansible/playbook.yaml"
              end 
          end 
      end 
  end

A subdirectory called “ansible” contains the playbook which is needed by Ansible. And, well, that’s it. The Vagrant Ansible documentation covers some more options like additional variables which can be set. These come in handy when you need to differentiate between various environments like staging and production.

But in general, you can now write your playbook as you like and see how Ansible works its way. It is automatically launched after the NFS share has been made ready.

Big and heavy – Puppet

Compared to Ansible, Puppet is rather heavy, and not that quick&dirty. Thus, the integration with Vagrant is a bit more difficult. But in general it is what anyone would expect of a Puppet setup: a Hiera configuration, a modules directory and a basic Puppet manifest file to start with. The integration with Vagrant is done entirely in the Vagrantfile, and is shown below together with the already mentioned Ansible configuration:

  servers.each do |servers|
      config.vm.define servers["name"] do |serv|
          serv.vm.box = servers["box"]
          serv.vm.network "private_network", ip: servers["ip"]
          if servers["prov"] == "ansible"
              serv.vm.provision "ansible" do |ansible|
                  ansible.playbook = "ansible/playbook.yaml"
              end 
          elsif servers["prov"] == "puppet"
              serv.vm.provision :puppet do |puppet|
                  puppet.manifest_file  = "site.pp"
                  puppet.manifests_path = "puppet/manifests"
                  puppet.module_path = "puppet/modules"
                  puppet.hiera_config_path = "puppet/hiera.yaml"
                  puppet.working_directory = "/vagrant/puppet"
                  puppet.facter = { 
                      "environment" => servers["environment"]
                  }   
              end 
          end 
      end 
  end

In the example above an environment variable is provided to the Puppet Apply routine in the form of a facter variable. Also, please note that a working directory has to be given so that all necessary files like the hieradata store can be found by Puppet. The rest is, however, pure Puppet: getting proper modules, setting up a well-designed hierarchy with Hiera, etc.

Sooo…. What now? – Conclusion

Well… in the beginning I just wanted to understand what Vagrant exactly does and how and if it can help me. After getting this far I can say it will help me a lot in the future: creating and dumping virtual machines was never so quick and easy.

With the option of integrating provisioning methods it can really change the way people develop code in business environments, for example for customers at my work at credativ. Often enough customers are simply too busy to bring up a VM because it takes too much time and is too tedious even with predefined scripts. They end up developing on their own machine, and later run into problems during further development. At such points Vagrant’s ease and simplicity can be incredibly helpful to test configuration management recipes or development code. It could greatly ease the pain of providing an automated QA workflow where the QA testers get their own disposable VMs which are automatically brought up with the current testing stage.

But even for smaller setups like “personal” development Vagrant can really make things easier: as mentioned in the beginning, many people use virtualization in ways more or less similar to Vagrant. But keeping together personal scripts to bring up and destroy machines is tedious work. Vagrant makes it quicker, simpler, and thus more reliable. Only for one-time tasks, where for example I would need to create an entirely new box in Vagrant, I might still use virt-manager.

Of course, some bits are still missing. For example the provisioning done by YAML file could be improved so that Ansible, Puppet and all the others could share a single server configuration base with Vagrant. And as mentioned above, a comparison with other solutions would be nice as well.

But so far Vagrant has proven itself: Whenever I need to repeatedly test some configuration or integration I will use Vagrant. And I am already looking forward to all the shiny things which I can test now just a bit easier.

[Howto] Connecting a USB GSM modem to a KVM guest – USB pass through

With current virtualization technologies it is possible to pass devices through from the host to the guest, called USB pass through. KVM is no exception here – it even works with a USB GSM modem.

Many of the customers I work with are migrating old IT systems and existing servers over to a newer and virtualized infrastructure. That often works without any problems. However, some services do depend on extra hardware like additional PCI cards – or, as in my case, on an external USB GSM modem.

To pass such a device through to the VM guest, first the vendor and product IDs must be identified on the VM host:

$ lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 002: ID 0557:2221 ATEN International Co., Ltd Winbond Hermon
Bus 002 Device 003: ID 12d1:1003 Huawei Technologies Co., Ltd. E220 HSDPA Modem / E230/E270/E870 HSDPA/HSUPA Modem

The last entry shows the mentioned GSM modem, built by Huawei. The interesting numbers are the vendor ID 12d1 and the product ID 1003. The VM guest is oblivious of the device right now:

$ lsusb
Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd 
Bus 001 Device 003: ID 0409:55aa NEC Corp. Hub

Next, the device must be defined in the VM guest XML. This can be done by directly editing the XML file within virsh: $ sudo virsh edit example-server. The command brings up an editor with the content of the XML definition file of the guest. The USB device must be added in the devices section:

  <devices>
    [...]
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x12d1'/>
        <product id='0x1003'/>
      </source>
    </hostdev>
  </devices>

Please note that a leading 0x must be added to the IDs! Save the file, reboot the VM guest, and check if the guest now shows the new device:

$ lsusb
Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd 
Bus 001 Device 003: ID 0409:55aa NEC Corp. Hub
Bus 001 Device 004: ID 12d1:1003 Huawei Technologies Co., Ltd. E220 HSDPA Modem / E230/E270/E870 HSDPA/HSUPA Modem

There it is. And the syslog shows that it was properly detected and can now be used in the usual ways – that’s it.

USB Serial support registered for GSM modem (1-port)
option 1-2.1:1.0: GSM modem (1-port) converter detected
usb 1-2.1: GSM modem (1-port) converter now attached to ttyUSB0
option 1-2.1:1.1: GSM modem (1-port) converter detected
usb 1-2.1: GSM modem (1-port) converter now attached to ttyUSB1
usbcore: registered new interface driver option
option: v0.7.2:USB Driver for GSM modems
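
As a side note: instead of rebooting, the device can usually also be hot-plugged into the running guest with virsh attach-device – a sketch, assuming the hostdev snippet from above was saved to a file called usb-modem.xml (the file name is just an example):

$ sudo virsh attach-device example-server usb-modem.xml --live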