Category Archives: Fedora

Hello Red Hat

As I mentioned in my last post I left my previous employer after quite some years – as of July 1st I work for Red Hat.

In my new position I will be a Solutions Architect – so basically a sales engineer, the one talking to customers on a more technical level, providing details or proofs of concept where they are needed.

Since it's my first day I don't really know how it will be – but I'm very much looking forward to it, it's an amazing opportunity! =)

[Howto] ownCloud auto setup including LDAP

The self-hosted file sharing solution ownCloud is becoming increasingly popular – you regularly come across installations even in companies. To make automated setups of ownCloud easier, the following howto shows the steps to automatically connect it to an LDAP server.

File exchange services like Dropbox or Google Drive offer a neat and quick way to exchange even large amounts of data. However, they only work because the data is uploaded to the servers of such corporations in the first place – which is at times questionable when you deal with sensitive data.

Here ownCloud comes into play: it offers the possibility to self-host a file sharing service on infrastructure you trust. Additionally it is Open Source, which provides at least a minimum amount of trust. And it is no longer a solution used only by a few people on their private servers: these days ownCloud is used in the public sector, at universities and in companies of all sizes. For example the sciebo project offers ownCloud based file exchange services for 300k students with 5 PB of storage.

It is thus no wonder that the interest in hosting ownCloud services remains strong. Here at credativ we often see corresponding requests from customers who want support in setting up such installations.

Among the challenges of setting up ownCloud in a business environment, two of the biggest are the connection to a central authentication service like LDAP and unattended installation. The first task is important to fully integrate ownCloud into the existing user space and make it a first class citizen in the existing infrastructure. The second task is especially relevant if you want to deploy the service reproducibly: think of containers, Docker, VMs, etc. here.

Especially the combination of both tasks is challenging: usually ownCloud expects the admin to follow through several steps manually which involve a lot of clicking and entering data until it is up, running and connected to the LDAP server. But it is possible to avoid these point-and-click adventures: configuration templates can help pre-configuring the ownCloud service, and the setup of the LDAP connection can be automated using ownCloud's command line configuration tool occ.

So let's go through the process step by step: at first, ownCloud has to be installed – that can usually be done with the usual package management tools like yum, apt, etc. After the installation, the ownCloud URL would normally be opened in a browser to start the first run wizard. This can be automated by providing the configuration template $owncloud/config/autoconfig.php which contains all the information usually queried in the first run wizard: admin user, password, db type, db user, db password, etc. ownCloud checks at start if the file is present and, if so, skips the first run wizard. Here is an example of such an autoconfig template:

<?php
$AUTOCONFIG = array (
  'directory' => '/var/www/html/owncloud/data',
  'adminlogin'    => 'mmu',
  'adminpass'     => '123456',
  'dbtype'        => 'pgsql',
  'dbname'        => 'owncloud',
  'dbuser'        => 'postgres',
  'dbpass'        => '123456',
  'dbhost'        => '192.168.123.45',
  'dbtableprefix' => 'oc_',
);

Note that further configuration of your ownCloud can also be placed in the usual config.php file: the values of the autoconfig file will be merged into the existing configuration file. This way you can pre-configure most parts of your entire server. More details can be found in the admin documentation.

To actually start the processing of the autoconfig file the ownCloud URL must be called at least once. This can be done from the server itself with the help of curl: curl -s -k 127.0.0.1/owncloud/ > /dev/null.

When the basic configuration is done, the next step is to connect the server to LDAP. This would usually be done by opening the ownCloud URL, activating the LDAP app and configuring it. Instead of clicking through the web page, these tasks can be accomplished with the help of the occ tool. It can be used to activate the app, write an empty configuration (thanks mark0n for this) and also to set the basic LDAP data. Make sure to call all commands as the user the web server runs as – otherwise you might get all kinds of problems (see the example after the commands below). The individual steps are:

php -f $ocpath/occ app:enable user_ldap
php -f $ocpath/occ ldap:create-empty-config
php -f $ocpath/occ ldap:set-config "" ldapHost 192.168.123.45
php -f $ocpath/occ ldap:set-config "" ldapPort 389
php -f $ocpath/occ ldap:set-config "" ldapBase "dc=example,dc=net"
php -f $ocpath/occ ldap:set-config "" ldapConfigurationActive 1
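
On Fedora or RHEL based systems the web server usually runs as the apache user (on Debian based systems it is www-data – adjust this to your setup), so each of the commands above can be wrapped accordingly, for example:

$ sudo -u apache php -f $ocpath/occ app:enable user_ldap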

In case you are debugging problems, check the configuration of the ownCloud server via php -f $ocpath/occ ldap:show-config.

And that's it already – your ownCloud should be connected to your LDAP server now. If you script all the commands, for example in Ansible or a Puppet module, the setup becomes easily reproducible.
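
As a starting point, here is a minimal, untested sketch of such a script – it assumes the paths used above, that the autoconfig template lies in the current directory, and that the web server runs as apache:

#!/bin/bash
# Sketch: unattended ownCloud setup with LDAP connection.
set -e
ocpath=/var/www/html/owncloud

# Place the autoconfig template (assumed to lie in the current directory)
# before the first request is made.
install -o apache -g apache autoconfig.php "$ocpath/config/autoconfig.php"

# Trigger the processing of the autoconfig file.
curl -s -k 127.0.0.1/owncloud/ > /dev/null

# Run all occ commands as the web server user.
occ() { sudo -u apache php -f "$ocpath/occ" "$@"; }
occ app:enable user_ldap
occ ldap:create-empty-config
occ ldap:set-config "" ldapHost 192.168.123.45
occ ldap:set-config "" ldapPort 389
occ ldap:set-config "" ldapBase "dc=example,dc=net"
occ ldap:set-config "" ldapConfigurationActive 1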

In case you are interested, I also wrote a German blog article about the problem on credativ’s blog: Owncloud Auto-Setup mit LDAP-Anbindung.

[Short Tip] Use host names for Docker links


Whenever you link Docker containers together, the question comes up of how to access the services provided by the linked container: the actual IP address of the container is not static and cannot be guessed beforehand. Sure, the IP address can be looked up in the environment variables ($ env), but not all programs can be modified to understand these variables. This is even more true for containers which you receive from the Docker registry.

Thus the quickest way is to give the container a name via docker run and to define an alias for it in the --link option of the client container. The linked container can be reached afterwards via that exact alias, which acts as a host name inside the client:

$ docker run --name db -d postgres
...
$ docker run -it --link db:dbtestlink centos /bin/bash
# ping dbtestlink
PING dbtestlink (172.17.0.13) 56(84) bytes of data.
64 bytes from dbtestlink (172.17.0.13): icmp_seq=1 ttl=64 time=0.178 ms
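
The name resolution works because Docker adds the link alias to the client container's /etc/hosts – the address below is of course just an example:

# cat /etc/hosts
172.17.0.13    dbtestlink
[...]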

[Short Tip] Splitting and merging PDF files


I recently had to modify quite a stack of PDF files. Many of them were scanned documents, and sometimes I only needed certain pages, or had to re-arrange parts of some files in new documents. A set of handy tools to perform such low level tasks quickly and easily comes along in the package poppler-utils. The package is available via the default package managers on Fedora, RHEL/CentOS, Ubuntu, Debian and others.

The command pdfseparate can be used to extract certain pages of large PDFs – in this example all pages from the third up to the fifth are separated into single page PDFs:

$ pdfseparate -f 3 -l 5 Scanned-Document.pdf Separated%d.pdf
$ ls
Scanned-Document.pdf  Separated3.pdf  Separated4.pdf  Separated5.pdf

If you want to combine, for example, the fifth and the third page in that order in a single new PDF, you can use pdfunite:

$ pdfunite Separated5.pdf Separated3.pdf NewDocument.pdf

Note that there is usually no output on the shell as long as everything works out fine. You can check the results with the PDF viewer of your choice, like Okular on KDE or Evince on Gnome.
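
If you only want to verify the number of pages on the shell, pdfinfo from the same package helps, too:

$ pdfinfo NewDocument.pdf | grep Pages
Pages:          2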

[Howto] Use Powerline on Fedora

Powerline is a status line plugin for Vim, but also a prompt plugin for Bash, Zsh and others. It can easily be installed in Fedora via the provided packages.

The status line plugin Powerline is available via the Fedora repositories. There has just been an update which is already available in the testing repository:

$ sudo yum install --enablerepo=updates-testing powerline

The Powerline documentation is rather good and explains all steps necessary to configure the various Powerline plugins. However, note that the string {repository_root} in the examples has to be replaced by /usr/lib/python2.7/site-packages/, so for example {repository_root}/powerline/bindings/vim becomes /usr/lib/python2.7/site-packages/powerline/bindings/vim/. This is due to the fact that the Powerline rpm installs the Powerline code into this specific directory.
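
If you are unsure whether this path is correct for your installation, you can query the rpm for the actual location of the bindings – the grep pattern is just an example:

$ rpm -ql powerline | grep bindings/vim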

So to use Powerline in Vim, just add the following line to the top of your ~/.vimrc:

set rtp+=/usr/lib/python2.7/site-packages/powerline/bindings/vim/

If you previously used other Vim plugins that also alter the status line, make sure to deactivate them.

To use Powerline in Zsh, simply add the following lines to your ~/.zshrc:

# Powerline
if [[ -r /usr/share/powerline/zsh/powerline.zsh ]]; then
  source /usr/share/powerline/zsh/powerline.zsh
fi
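
Bash users can do the same in their ~/.bashrc – assuming the Fedora package ships the Bash binding at the analogous path, which can be verified with rpm -ql powerline as mentioned above:

# Powerline
if [[ -r /usr/share/powerline/bash/powerline.sh ]]; then
  source /usr/share/powerline/bash/powerline.sh
fi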

In case you use Zsh and want to get rid of the EMACS mode indicator at the beginning of the prompt, you need to create a configuration path for Powerline, copy the necessary shell theme files and alter them accordingly:

$ mkdir -p ~/.config/powerline/themes/shell
$ cp -a /usr/lib/python2.7/site-packages/powerline/config_files/themes/shell/* ~/.config/powerline/themes/shell/

Open the file ~/.config/powerline/themes/shell/default.json and remove the lines:

      {
        "function": "powerline.segments.shell.mode"
      },

You might have to restart the Powerline daemon (powerline-daemon -r), but afterwards the shell prompt in Zsh does not contain the current mode anymore. Have fun!

PS: In case you use Ubuntu, an almost perfect Howto can be found at AskUbuntu: How can I install and use powerline plugin?.

[Howto] Upgrading CentOS 6 to CentOS 7

CentOS 7 is out. With systemd, Docker integration and many more fixes and new features I wanted to see if the new remote upgrade possibility already works. I've already posted the procedure to Vexxhost's blog and wanted to share it here as well. But beware: don't try this on production servers!

CentOS 7 was released only a few weeks after Red Hat Enterprise Linux 7 and includes the same exciting features RHEL ships. Besides the long awaited systemd and the currently much discussed Docker, this release also features the possibility to perform upgrades from version 6 to version 7 automatically, without the need for installation images. And although the upgrade still requires a reboot and thus is not a live upgrade as such, it comes in very handy for servers which can only be reached remotely.

Red Hat has already released and documented the necessary tools. The CentOS team hasn't had time yet to import, test and rebuild the tools, but the developers are already on it – and they provide untested binaries.

Please note: since the packages are not tested yet you should not, by any means, try these on anything other than spare test machines which you can easily re-deploy and which do not hold any valuable data. Do not try this on your production machines!

But if you want to get a first idea of how the tools basically work, I recommend setting up a simple virtual machine with a fully updated CentOS 6 and as few packages as possible. Next, install the rpms from the CentOS repository mentioned above. Among these is the Preupgrade Assistant, which can be run on a system without doing any harm: preupg just analyses the system and gives hints what to look out for during an upgrade, without performing any tasks. Since I only tested systems with hardly any services installed I got no real results from preupg. Even a test run on a system with more services installed brought the same output (only showing some examples of the dozens and dozens of lines):

$ sudo preupg
Preupg tool doesn't do the actual upgrade.
Please ensure you have backed up your system and/or data in the event of a failed upgrade
 that would require a full re-install of the system from installation media.
Do you want to continue? y/n
y
Gathering logs used by preupgrade assistant:
All installed packages : 01/10 ...finished (time 00:00s)
All changed files      : 02/10 ...finished (time 00:48s)
Changed config files   : 03/10 ...finished (time 00:00s)
All users              : 04/10 ...finished (time 00:00s)
...
042/100 ...done    (samba shared directories selinux)
043/100 ...done    (CUPS Browsing/BrowsePoll configuration)
044/100 ...done    (CVS Package Split)
...
|samba shared directories selinux          |notapplicable  |
|CUPS Browsing/BrowsePoll configuration    |notapplicable  |
|CVS Package Split                         |notapplicable  |
...

As mentioned above the Preupgrade Assistant only helps to evaluate what problems might come up during the upgrade – the real step must be done with the tool redhat-upgrade-tool-cli. For that to work the CentOS 7 key must be imported first:

$ sudo rpm --import http://isoredirect.centos.org/centos/7/os/x86_64/RPM-GPG-KEY-CentOS-7

Afterwards, the actual upgrade tool can be called. As options it takes the future distribution version and a URL to pull the data from. Additionally I had to add the option --force since the tool complained that preupg had not been run previously – although it had. As soon as the upgrade tool is called, it starts downloading all necessary information, packages and images, and afterwards asks for a reboot – the reboot does not happen automatically.

$ sudo /usr/bin/redhat-upgrade-tool-cli --force --network 7 --instrepo=http://mirror.centos.org/centos/7/os/x86_64
setting up repos...
.treeinfo                                                                                                                                            | 1.1 kB     00:00     
getting boot images...
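
Since the tool does not reboot on its own, the reboot has to be triggered manually once the downloads have finished:

$ sudo reboot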

After the reboot the machine upgrades itself with the help of the downloaded packages. Note that this phase does take some time depending on the speed of the machine – expect minutes, not seconds. However, if everything turns out right, the next login will be into a CentOS 7 machine:

$ cat /etc/os-release 
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

In conclusion it can be said that the upgrade tool worked quite nicely. While it is not comparable to a real live upgrade, it offers a decent way to upgrade remote servers. I've tested it with a clean VM and also with a bare metal, remote server, and it worked surprisingly well. The analysis tool unfortunately did not perform as I expected, but that might be due to its untested state – or I was not using it properly. I'm looking forward to seeing how that develops and improves over time.

But, again, and as mentioned before – don’t try this on your own prod servers.

[Howto] Testing Docker container links with netcat

Playing around with Docker is fun. Things get even more exciting when you start linking containers together. This can easily be tested with the help of Netcat.

First of all, make sure that you have an image with Netcat installed. In this example the basic centos image is used, but really any Linux image like Ubuntu, Debian and so on should be fine as long as the package manager provides Netcat.

A CentOS image with Netcat installed is created and committed with the name "nc-image":

$ docker run -t centos yum -y install nc
...
$ docker ps -a
CONTAINER ID        IMAGE               COMMAND ...
69d6c32dd258        centos:centos6      yum -y install nc ...
$ docker commit 69d6c32dd258 nc-image
a3bc8d7d736f6f03e8a8678e0ff31c1c31176792d4e4374c8873e51d13a3e78b
$ docker images
REPOSITORY          TAG                 IMAGE ID ...
nc-image            latest              a3bc8d7d736f ...

To test linking we now set up the server container with nc listening on port 8080:

$ docker run -d -p 8080 --name nc-server nc-image nc -l 8080
3d1962adf47b98de71342ca87c2b56a4e8c8ecbec4191c0e0f1a3131c3a99fe4
$ docker ps
... STATUS              PORTS                     NAMES
... Up 6 seconds        0.0.0.0:49155->8080/tcp   nc-server 

As can be seen above the ps command also shows the port forwarding.

The link can now be created by starting another container and telling Docker in the run command that it should be linked to the “server”. In this example the container is run interactively with /bin/bash to show how the connection is seen from inside the “client” container: Docker defines a set of environment variables to tell the container how ports on other containers can be reached.

$ docker run -i -t --rm=true --name nc-client --link nc-server:nc nc-image /bin/bash
# env
[...]
NC_PORT_8080_TCP_ADDR=172.17.0.2
NC_PORT_8080_TCP_PORT=8080

At the same time, the Docker process monitor docker ps now shows that these containers are linked:

... COMMAND             ... PORTS                     NAMES
... /bin/bash           ...                           nc-client                
... nc -l 8080          ... 0.0.0.0:49155->8080/tcp   nc-client/nc,nc-server 

The client container can now talk directly to the port of the server container. Inside the client container we call nc with the IP address and port of the server:

bash-4.1# nc 172.17.0.2 8080
foobar

The logs of the server container do show that the text is indeed received:

$ docker logs nc-server
foobar
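
By the way, instead of hard-coding the IP address, the nc call inside the client could also use the environment variables shown above – a small variation, not part of the session shown here:

bash-4.1# nc $NC_PORT_8080_TCP_ADDR $NC_PORT_8080_TCP_PORT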

This simple example shows how to link two containers together. The next step is now to try this with real applications and daemons.