[Howto] ownCloud auto setup including LDAP

The self-hosting file sharing solution ownCloud is becoming increasingly popular; you regularly come across installations even in companies. To make the automated setup of ownCloud easier, the following howto shows the steps to automatically connect it to an LDAP server.

File exchange services like Dropbox or Google Drive offer a neat and quick way to exchange even large amounts of data. However, they only work because the data is uploaded to the servers of these corporations in the first place, which is at times questionable when you deal with sensitive data.

Here ownCloud comes into play: it offers the possibility to self-host a file sharing service on infrastructure you trust. Additionally it is Open Source, which provides at least a basic level of trust. And it is no longer a solution used only by a few people for their private servers: these days ownCloud is used in the public sector, at universities and in companies of all sizes. For example, the sciebo project offers ownCloud-based file exchange services for 300k students with 5 PB of storage.

It is thus no wonder that the interest in hosting ownCloud services remains unbroken. Here at credativ we often see corresponding requests from customers who want support in setting up such installations.

Among the challenges of setting up ownCloud in a business environment, two of the biggest are the connection to a central authentication service like LDAP and the unattended installation. The first task is important to fully integrate ownCloud into the existing user space and make it a first-class citizen in the existing infrastructure. The second task is especially relevant if you want to deploy the service reproducibly: think of containers, Docker, VMs, etc. here.

Especially the combination of both tasks is challenging: usually ownCloud expects the admin to follow through several steps manually which involve a lot of clicking and entering data until it is up, running and connected to the LDAP server. But it is possible to avoid these point-and-click adventures: configuration templates can help pre-configure the ownCloud service, and the setup of the LDAP connection can be automated using ownCloud’s configuration command line tool occ.

So let’s go through the process step by step: at first, ownCloud has to be installed – that can usually be done with the usual package management tools like yum, apt, etc. After the installation, the ownCloud URL is usually opened in a browser to start the first run wizard. This can be automated by providing the configuration template $owncloud/config/autoconfig.php which contains all the information usually queried in the first run wizard: admin user, password, db type, db user, db password, etc. ownCloud checks at start if the file is present, and if so, skips the first run wizard. Here is an example of such an autoconfig template:

<?php
$AUTOCONFIG = array (
  'directory' => '/var/www/html/owncloud/data',
  'adminlogin'    => 'mmu',
  'adminpass'     => '123456',
  'dbtype'        => 'pgsql',
  'dbname'        => 'owncloud',
  'dbuser'        => 'postgres',
  'dbpass'        => '123456',
  'dbhost'        => '192.168.123.45',
  'dbtableprefix' => 'oc_',
);

Note that further configuration of your ownCloud can also be placed in the usual config.php file: the values of the autoconfig file will be merged into the existing configuration file. This way you can pre-configure most parts of your entire server. More details can be found in the admin documentation.
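For instance, a minimal config.php pre-setting values beyond what the first run wizard asks for could look like the following – a sketch only, and all values shown here are illustrative examples, not part of the original setup:

<?php
// Sketch of a pre-configured config/config.php; the values from
// autoconfig.php will be merged into this file on first run.
// All values below are illustrative examples.
$CONFIG = array (
  'trusted_domains' =>
  array (
    0 => 'cloud.example.net',
  ),
  'default_language' => 'en',
);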

To actually start the processing of the autoconfig file, the ownCloud URL must be called at least once. This can be done from the server itself with the help of curl: curl -s -k 127.0.0.1/owncloud/ > /dev/null.

When the basic configuration is done, the next step is to connect the server to LDAP. This would usually be done by opening the ownCloud URL, activating the LDAP app and configuring it. Instead of clicking through the web page, these tasks can be accomplished with the help of the occ tool. It can be used to activate the app, write an empty configuration (thanks mark0n for this) and also to set the basic LDAP data. Make sure to call all commands as the user the webserver runs as – otherwise you might run into all kinds of problems. The individual steps are:

php -f $ocpath/occ app:enable user_ldap
php -f $ocpath/occ ldap:create-empty-config
php -f $ocpath/occ ldap:set-config "" ldapHost 192.168.123.45
php -f $ocpath/occ ldap:set-config "" ldapPort 389
php -f $ocpath/occ ldap:set-config "" ldapBase "dc=example,dc=net"
php -f $ocpath/occ ldap:set-config "" ldapConfigurationActive 1

In case you are debugging problems, check the configuration of the ownCloud server via php -f $ocpath/occ ldap:show-config.

And that’s it already – your ownCloud should be connected to your LDAP server now. If you script all the commands, for example in Ansible or in a Puppet module, the setup is even easily reproducible.
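As a rough sketch, the whole procedure could also be wrapped into a small shell script – the paths, the LDAP data and the assumption that the webserver runs as the user apache all have to be adapted to your environment:

#!/bin/bash
# Sketch of the setup automation described above. Assumptions:
# autoconfig.php is already in place, the webserver runs as the
# user "apache", and ownCloud lives in /var/www/html/owncloud.
ocpath=/var/www/html/owncloud

# trigger the processing of autoconfig.php
curl -s -k 127.0.0.1/owncloud/ > /dev/null

# enable and configure the LDAP app as the webserver user
sudo -u apache php -f $ocpath/occ app:enable user_ldap
sudo -u apache php -f $ocpath/occ ldap:create-empty-config
sudo -u apache php -f $ocpath/occ ldap:set-config "" ldapHost 192.168.123.45
sudo -u apache php -f $ocpath/occ ldap:set-config "" ldapPort 389
sudo -u apache php -f $ocpath/occ ldap:set-config "" ldapBase "dc=example,dc=net"
sudo -u apache php -f $ocpath/occ ldap:set-config "" ldapConfigurationActive 1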

In case you are interested, I also wrote a German blog article about the problem on credativ’s blog: Owncloud Auto-Setup mit LDAP-Anbindung.

[Howto] LDAP schema for Postfix

The official Postfix documentation on using LDAP for user and alias lookups mentions certain LDAP attributes which are not part of a default OpenLDAP installation. In this article I will shortly explain a basic schema providing these attributes and the corresponding object class.

Postfix can easily be connected to LDAP to look up addresses and aliases. The Postfix LDAP documentation covers all the details. As mentioned there, the default configuration of Postfix expects two LDAP attributes in the LDAP schema: mailacceptinggeneralid and maildrop. This also shows in the code in src/global/dict_ldap.c:

dict_ldap->query =
    cfg_get_str(dict_ldap->parser, "query_filter",
        "(mailacceptinggeneralid=%s)", 0, 0);

However, these attributes are not part of a default OpenLDAP installation, and the Postfix documentation does not mention what exactly the schema has to look like or where to get it. For that reason we at my employer credativ provide such a schema on GitHub: github.com/credativ/postfix-ldap-schema. The repository contains the schema, the corresponding licence and a short documentation. A German introduction to the schema can also be found on credativ’s blog: LDAP-Schema für Postfix-Abfragen.

The provided schema defines the necessary attribute types mailacceptinggeneralid and maildrop as well as the object class postfixUser. Please note that the OIDs used in this schema come from the Experimental OpenLDAP arc, see also the OID database.
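To give a rough idea of the structure, such a schema follows the standard OpenLDAP schema syntax – the sketch below is not the verbatim content of the repository, and the OIDs are placeholders which must be taken from the actual schema file:

# sketch only: the OIDs below are placeholders, the real
# definitions are found in the credativ repository
attributetype ( 1.3.6.1.4.1.4203.666.100.1 NAME 'mailacceptinggeneralid'
    DESC 'Mail address or alias this account accepts mail for'
    EQUALITY caseIgnoreIA5Match
    SUBSTR caseIgnoreIA5SubstringsMatch
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 )

attributetype ( 1.3.6.1.4.1.4203.666.100.2 NAME 'maildrop'
    DESC 'Mailbox the mail is finally delivered to'
    EQUALITY caseIgnoreIA5Match
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 )

objectclass ( 1.3.6.1.4.1.4203.666.100.3 NAME 'postfixUser'
    DESC 'Account receiving mail via Postfix'
    SUP top AUXILIARY
    MAY ( mailacceptinggeneralid $ maildrop ) )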

To use the schema it must be loaded by OpenLDAP, for example by including it in slapd.conf. A corresponding LDAP entry could look like this:

dn: uid=mmu,ou=accounts,dc=example,dc=net
objectclass: top
objectclass: person
objectclass: posixAccount
objectclass: postfixUser
cn: Max Mustermann
sn: Mustermann
uid: mmu
uidNumber: 5001
gidNumber: 5000
homeDirectory: /home/vmail
mailacceptinggeneralid: mmu
mailacceptinggeneralid: max.mustermann
mailacceptinggeneralid: m.mustermann
mailacceptinggeneralid: bugs
maildrop: mmu

As you see, the example covers multiple aliases. Also, the final mailbox is a domain-less entity: maildrop: mmu does not mention any domain name. This only works if your mailboxes actually do not require (or even allow) domain names – in this case that was true since the mail is finally transported to a Dovecot server which does not know about the various domains.
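On the Postfix side, the lookup against such entries can be sketched as an LDAP table file – the file name and the server details here are assumptions mirroring the example entry above:

# /etc/postfix/ldap-aliases.cf -- sketch, adapt host and base
server_host = ldap.example.net
search_base = ou=accounts,dc=example,dc=net
query_filter = (mailacceptinggeneralid=%s)
result_attribute = maildrop

Such a map would then be referenced in main.cf, for example via virtual_alias_maps = ldap:/etc/postfix/ldap-aliases.cf.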

Please note that this schema can only be the foundation for a more sophisticated, more complex schema which needs to be tailored to fit the individual needs of the corresponding setup.

[Short Tip] Use host names for Docker links

Whenever you link Docker containers together, the question comes up how to access the services provided by the linked container: the actual IP address of the container is not static and cannot be guessed beforehand. Sure, the IP address can be looked up via the environment variables Docker provides ($ env), but not all programs can be modified to understand these variables. This is even more true for containers which you receive from the Docker registry.

Thus the quickest way is to define a host name when calling docker run, together with a container name the link can refer to. The container can afterwards be reached via exactly that host name.

$ docker run --name db --hostname=db-container -d postgres
...
$ docker run -it --link db:dbtestlink centos /bin/bash
# ping db-container
PING dbtestlink (172.17.0.13) 56(84) bytes of data.
64 bytes from dbtestlink (172.17.0.13): icmp_seq=1 ttl=64 time=0.178 ms
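This works because Docker writes the address of the linked container together with the link alias – and, as the ping above shows, the defined host name – into /etc/hosts of the linking container. You can verify this from inside the container; note that the exact entry format varies between Docker versions, so the line below is only illustrative:

# grep 172.17 /etc/hosts
172.17.0.13    dbtestlink db-container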

10 years of /home/liquidat

It’s time for an anniversary: the oldest blog post on my blog is ten years old today. Hooray! =D I’d like to take the opportunity to write down some thoughts about the blog itself.

First I should clarify what the anniversary is actually about: I have been blogging for more than 10 years now. But the oldest blog post still in existence turns exactly ten years old today. Older blog posts were on the platform blogger.de, and there was no way to take the posts with me when I moved over to blogspot.com ten years ago. By the way, as you might notice I also left blogspot.com behind a year later when I migrated over to wordpress.com. The first post published there was Partitioning with Linux; the first post written, processed and published on wordpress.com itself was APT-RPM lives.

Ten years ago I blogged in German – my native language. At that time my English was, well, not the best. If you want to get an idea of my English skills back then, have a look at one of my earliest attempts: The desktop of tomorrow. That’s a looooong time ago… :D

Actually my poor language skills were the reason why I decided to post all future entries in English back in August 2005: I had just moved to Scandinavia and needed to improve my English drastically. And nothing is better than practicing all the time. Thus, beginning with a screenshot tour of KDE 3.5 Alpha 1, I wrote all my entries in English.

While I am at screenshot tours: these always drew attention. The most successful blog post in terms of visits in one day was the screenshot tour of KDE 4 Beta 3: 74,000 visits in a single day. And even these days screenshot tours are a visitor magnet: for example the screenshot tour of the web-based Systemd server management tool Cockpit got thousands of views in one day.

In terms of overall success, the probably most successful post of all time was Short Tip: Get UUID of Hard Disks. It generates hundreds of visits each day – tens of thousands each year, and that for 8 years now. Actually the topic still seemed so important day after day that I wrote an update post with all possible details about UUIDs on Linux some years later. But the short tip is still the most visited post ever.

However, success and many visits are not always positive: a short blog post about the then new Dolphin turned out to stir up quite some reactions about the future of KDE, so that even official KDE developers had to comment on the ongoing development and make clear that Konqueror was not going to die (back then). That taught me to be more careful with my posts in the future.

Over the years the time I had for blogging varied. Particularly in the last years I blogged less and less, due to my job at my current employer credativ – in March 2010 I even thought that I had to stop blogging altogether. But only a few months later I missed it already, and so I re-started in February 2013. And while there are strong and weak months, I still love doing it.

So, as a summary: quite some interesting ten years! I’d like to thank everyone who supported me in the last ten years and who accompanied me during that time. First and foremost thanks to my friends, but also to all the people who helped me with suggestions, and to all the readers of my blog who paid me a visit and/or left comments. Let’s see what the future holds for the blog and also for me =)

[Short Tip] Splitting and merging PDF files

I recently had to modify quite a stack of PDF files. Many of them were scanned documents, and sometimes I only needed certain pages, or had to re-arrange parts of some files in new documents. A set of handy tools to perform such low-level tasks quickly and easily comes along in the package poppler-utils. The package is available via the default package managers on Fedora, RHEL/CentOS, Ubuntu, Debian and others.

The command pdfseparate can be used to extract certain pages from large PDFs – in this example all pages from the third up to the fifth are separated into single-page PDFs:

$ pdfseparate -f 3 -l 5 Scanned-Document.pdf Separated%d.pdf
$ ls
Scanned-Document.pdf  Separated3.pdf  Separated4.pdf  Separated5.pdf

If you want to combine, for example, the fifth and the third page in that order in one single, new PDF, you can use pdfunite:

$ pdfunite Separated5.pdf Separated3.pdf NewDocument.pdf

Note that there is usually no output on the shell as long as everything works out fine. You can check the results with the PDF viewer of your choice, like Okular on KDE or Evince on Gnome.
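For a quick check on the shell instead, the same package also ships pdfinfo, which among other things reports the page count – the combined document from above should have two pages:

$ pdfinfo NewDocument.pdf | grep Pages
Pages:          2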

[Howto] Use Powerline on Fedora

Powerline is a status line plugin for Vim, but also a prompt plugin for Bash, Zsh and others. It can easily be installed in Fedora via the provided packages.

The status line plugin Powerline is available via the Fedora repositories. There has just been an update which is already available in the testing repository:

$ sudo yum install --enablerepo=updates-testing powerline

The Powerline documentation is rather good and explains all the steps necessary to configure the various Powerline plugins. However, note that the string {repository_root} in the examples has to be replaced by /usr/lib/python2.7/site-packages/, so for example {repository_root}/powerline/bindings/vim becomes /usr/lib/python2.7/site-packages/powerline/bindings/vim/. This is due to the fact that the Powerline rpm installs the Powerline code into this specific directory.

So to use Powerline in Vim, just add the following line to the top of your ~/.vimrc:

set rtp+=/usr/lib/python2.7/site-packages/powerline/bindings/vim/

If you previously used other Vim plugins that also alter the status line, make sure to deactivate those.

To use Powerline in Zsh, simply add the following lines to your ~/.zshrc:

# Powerline
if [[ -r /usr/share/powerline/zsh/powerline.zsh ]]; then
  source /usr/share/powerline/zsh/powerline.zsh
fi
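Powerline can be used in Bash in a similar fashion. Assuming the Fedora package lays out the Bash binding analogous to the Zsh one shown above – verify the path on your system – the following lines in your ~/.bashrc should do:

# Powerline (path assumed analogous to the Zsh binding above)
if [ -r /usr/share/powerline/bash/powerline.sh ]; then
  source /usr/share/powerline/bash/powerline.sh
fi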

In case you use Zsh and want to get rid of the EMACS mode indicator at the beginning of the prompt, you need to create a configuration path for Powerline, copy the necessary shell theme files and alter them accordingly:

$ mkdir -p ~/.config/powerline/themes/shell
$ cp -a /usr/lib/python2.7/site-packages/powerline/config_files/themes/shell/* ~/.config/powerline/themes/shell/

Open the copied file ~/.config/powerline/themes/shell/default.json and remove the lines:

      {
        "function": "powerline.segments.shell.mode"
      },

You might have to restart the powerline-daemon (powerline-daemon -r), but afterwards the shell prompt in Zsh does not contain the current mode anymore. Have fun!

PS: In case you use Ubuntu, an almost perfect Howto can be found at AskUbuntu: How can I install and use powerline plugin?.

[Howto] Upgrading CentOS 6 to CentOS 7

CentOS 7 is out. With systemd, Docker integration and many more fixes and new features I wanted to see if the new remote upgrade possibility already works. I’ve already posted the procedure to Vexxhost’s blog and wanted to share it here as well. But beware: don’t try this on production servers!

CentOS 7 was released only a few weeks after Red Hat Enterprise Linux 7, including the same exciting features RHEL ships. Besides the long awaited Systemd and the currently much discussed Docker, this release also features the possibility to perform upgrades from version 6 to version 7 automatically, without the need for installation images. And although the upgrade still requires a reboot and thus is not a live upgrade as such, it comes in very handy for servers which can only be reached remotely.

Red Hat has already released and documented the necessary tools. The CentOS team hasn’t had time yet to import, test and rebuild the tools, but the developers are already on it – and they provide untested binaries.

Please note: since the packages are not tested yet you should not, by any means, try them on anything other than spare test machines which you can easily re-deploy and which do not hold any valuable data. Do not try this on your production machines!

But if you want to get a first idea of how the tools basically work, I recommend setting up a simple virtual machine with a fully updated CentOS 6 and as few packages as possible. Next, install the rpms from the CentOS repository mentioned above. Among these is the Preupgrade Assistant, which can be run on a system without doing any harm: preupg just analyses the system and gives hints what to look out for during an upgrade, without performing any tasks. Since I only tested systems with hardly any services installed, I got no real results from preupg. Even a test run on a system with more services installed brought the same output (only showing some examples of the dozens and dozens of lines):

$ sudo preupg
Preupg tool doesn't do the actual upgrade.
Please ensure you have backed up your system and/or data in the event of a failed upgrade
 that would require a full re-install of the system from installation media.
Do you want to continue? y/n
y
Gathering logs used by preupgrade assistant:
All installed packages : 01/10 ...finished (time 00:00s)
All changed files      : 02/10 ...finished (time 00:48s)
Changed config files   : 03/10 ...finished (time 00:00s)
All users              : 04/10 ...finished (time 00:00s)
...
042/100 ...done    (samba shared directories selinux)
043/100 ...done    (CUPS Browsing/BrowsePoll configuration)
044/100 ...done    (CVS Package Split)
...
|samba shared directories selinux          |notapplicable  |
|CUPS Browsing/BrowsePoll configuration    |notapplicable  |
|CVS Package Split                         |notapplicable  |
...

As mentioned above, the Preupgrade Assistant only helps evaluating what problems might come up during the upgrade – the real step must be done with the tool redhat-upgrade-tool-cli. For that to work, the CentOS 7 key must be imported first:

$ sudo rpm --import http://isoredirect.centos.org/centos/7/os/x86_64/RPM-GPG-KEY-CentOS-7

Afterwards, the actual upgrade tool can be called. As options it takes the future distribution version and a URL to pull the data from. Additionally, I had to add the option --force since the tool complained that preupg was not run previously – although it was. As soon as the upgrade tool is called it starts downloading all necessary information, packages and images, and afterwards asks for a reboot – the reboot does not happen automatically.

$ sudo /usr/bin/redhat-upgrade-tool-cli --force --network 7 --instrepo=http://mirror.centos.org/centos/7/os/x86_64
setting up repos...
.treeinfo                                                                                                                                            | 1.1 kB     00:00     
getting boot images...

After the reboot the machine updates itself with the help of the downloaded packages. Note that this phase does take some time depending on the speed of the machine – expect minutes, not seconds. However, if everything turns out right, the next login will be into a CentOS 7 machine:

$ cat /etc/os-release 
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

Concluding, it can be said that the upgrade tool worked quite nicely. While it is not comparable to a real live upgrade, it offers a decent way to upgrade remote servers. I’ve tested it with a clean VM and also with a bare-metal remote server, and it worked surprisingly well. The analysis tool unfortunately did not perform as I expected, but that might be due to its untested state, or I was not using it properly. I’m looking forward to seeing how that develops and improves over time.

But, again, and as mentioned before – don’t try this on your own prod servers.