[Howto] My own mail & groupware server, part 1: what, why, how?

Running your own mail and groupware server can be challenging. I recently had to re-create my own setup from the ground up, and this blog post is the first in a series describing all the steps.

Running mail servers on your own

Running your own mail server sounds tempting: having control over a central piece of your own communication does sound good, right?

But it can be quite challenging: the mail standards were not written for the way we operate technology today. Many of them are vague, too generic to be really helpful, or simply not widely deployed in practice. Other things are not standardized at all, so you have to guess and test (maximum message sizes, anyone?). And even the biggest providers have their own interpretations of the standards and sometimes aggressively ignore them – which you are forced to accept.

Then there is the spam problem: there is still a lot of spam out there. And since this is an ongoing fight, mail server admins have to constantly adjust their systems to the newest tricks and requirements. Think of SPF, DKIM, DMARC and DANE here.

Last but not least, the market is increasingly dominated by large corporations. If your email is tagged as spam by one of those, you often have no way to figure out what the problem is – or how to fix it. They simply will not talk to you unless you are of equal size (or otherwise important). In fact, taking a pessimistic look into the future of email, it might happen that all small mail service providers die and we all have to use the big services.

Thus the question is whether anyone should run their own mail server at all. Frankly, I would not recommend it unless you are really motivated to do so. So be warned.

However, if you do decide to run one on your own, you will learn a lot about the underlying technology, about how a core technology of “the internet” works, and about how companies work and behave – and you will have considerable control over a central piece of today's communication: mail is still a cornerstone of how we communicate, even if we all hate it.

My background

To better understand my motivation it helps to know where I come from: in my past job at credativ I was project manager for a team dealing with large mail clusters. Like, really large. The people in the team were and are awesome folks who *really* understand mail servers. If you ever need help running your own open source mail infrastructure, get them on board – I would vouch for them anytime.

And while I never reached and never will reach the level of understanding the people in my team had, I got my fair share of knowledge: about the technological components, the developments in the field, the challenges and so on. Out of this I decided at some point that it would be fun to run my own mail server (yeah, not the brightest day of my life, in hindsight…).

Thus at some point I set up my own domain and mail server. And right from the start I wanted more than a mail server: I wanted a groupware server. Calendars, address books and the like. I do not recall how it all started or what the first setup looked like, but I know that there was a Zarafa instance once, in 2013. I also used OpenLDAP for a while, Munin was in there as well, even a Trac service to host a Git repository. Certificates were shipped via StartSSL. Yeah, good times.

In summer 2017 this changed: I moved Zarafa out of the picture and SOGo came in. Also, Trac was replaced by GitLab and that in turn by Gitea. The mail server was completely based on Postfix, Dovecot and the like (Amavisd, SpamAssassin, ClamAV). OpenLDAP was replaced by FreeIPA, StartSSL by Let's Encrypt. All this was set up via Docker containers, for easier separation of services and simpler management. Nginx was the reverse proxy. Besides the groupware components and the Git server there was also an ownCloud (later Nextcloud) instance. Some of the container images were upstream, some I built myself. There was even a secondary mail server for emergencies, though that one was always somewhat out of date in terms of configuration.

This all served me well for years. Well, more or less. It was never perfect and lacked a lot of features. But most mail got through.

Why the restart?

If it all served me well, why did I have to re-create the setup? Well, a few days ago I had to run an update of the certificates (still done manually at that time). Since I had to bring down the reverse proxy for it, I decided to run a full update of the underlying OS as well as of the Docker images, and to reboot the machine.

The update went fine and the machine came back up – but something was wrong. Postfix had problems accepting mails. The more I dug, the deeper the rabbit hole got. Postfix simply didn't answer after the “DATA” part of the SMTP communication anymore. Somehow I got that fixed – but then Dovecot didn't accept the mails for unknown reasons, and bounces were created!

I debugged for hours. But every time I thought I had figured it out, another problem came up. At one point I realized that the underlying FreeIPA service was restarting erratically, and I had no idea why.

After three or four days I still had no idea what was going on or why my system was behaving that badly. Even with a verified working configuration restored from backup, things broke randomly. I was not able to receive or send mails reliably. My three major suspects were:

  • FreeIPA had a habit in the past of introducing new problems with new images – maybe this image was broken as well? But I wasn't able to find any obvious issues or reports.
  • Docker was updated from an outdated version to something newer – and Docker was never a friend of CentOS firewall rules. Maybe the recent update screwed up my delicate network setup?
  • Faulty RAM? Weird, hard-to-reproduce and ever-changing errors in known-to-be-working setups can be a sign of faulty RAM. Maybe the hardware was done for.

I realized I had to make a decision: abandon my own mail hosting approaches (the more sensible option) – or get a new setup running fast.

Well – guess what I did?

Running your own mail server: there is a project for that!

I decided to re-create my setup. And this time I decided not to do it all by myself: over the years I had noticed that I was not the only person with the crazy idea of running their own mail server in containers. Others had started entire projects around this, with many contributors and additional tooling. I realized that I would lose little by using code from such existing projects, but would gain a lot: better tested code, more people to ask and discuss problems with, more features added by others, etc.

Two projects had caught my interest over time, and I had been following them on GitHub for quite a while already: Mailu and mailcow. Indeed, my original plan was to migrate to one of them in the long term, like in 2021 or so, maybe even hosted on Kubernetes or at least Podman. However, with the recent outage of my mail server I had to act quickly, and decided to go with a Docker based setup again.

Both projects mentioned above are basically built around Docker Compose, Postfix, Dovecot, Rspamd and some custom admin tooling to make things easier. If you look closer, each has its own advantages and special features, so if you are thinking of running your own mail server I suggest you look into both of them yourself.

For me the final decision was to go with Mailu: Mailu does support Kubernetes, and I wanted to be prepared for a Kubernetes based future.

What’s next?

So with all this background you already know what to expect from the next posts: how to bring up Mailu as a mail server, how to add Nextcloud and Gitea to the picture, and a few other gimmicks.

This will all be tailored to my needs – but I will try to keep it as close to the defaults as possible, first to keep it simple, but also to make this content reusable for others. I do hope that this will help others to start their own setups or to fine tune what they already have.

Image by Gerhard Gellinger from Pixabay

[Howto] Writing an Ansible module for a REST API

Ansible comes along with a great set of modules. But maybe your favorite tool is not covered yet and you need to develop your own module. This guide shows you how to write an Ansible module – when you have a REST API to speak to.

Background: Ansible modules

Ansible is a great tool to automate almost everything in an IT environment. One of the huge benefits of Ansible are the so-called modules: they provide a way to address automation tasks in the native language of the problem. For example, say a user needs to be created: this is usually done by calling certain commands on the shell. In that case the automation developer has to think about which command line tool needs to be used, which parameters and options need to be provided, and the result is most likely not idempotent. And it's hard to run tests (“checks”) with such an approach.

Enter the Ansible user module: with it the automation developer only has to provide the data needed for the actual task, like the user name, group name, etc. There is no need to remember the user management tool of the target platform or to look up its parameters:

$ ansible server -m user -a "name=abc group=wheel" -b

Ansible comes along with hundreds of modules. But what if your favorite task or tool is not supported by any module? Then you have to write your own Ansible module. If your tool offers a REST API, there are a few things to know that make it much easier to get your module running well with Ansible. These few things are outlined below.

REST APIs and Python libraries in Ansible modules

According to Wikipedia, REST is:

… the software architectural style of the World Wide Web.

In short, it's a way to write, provide and access an API via the usual HTTP tools and libraries (the Apache web server, curl, you name it), and it is very common in everything related to the WWW.

To access a REST API from an Ansible module, there are a few things to note. Ansible modules are usually written in Python. The library of choice to access URLs and thus REST APIs in Python is usually urllib. However, that library is not the easiest to use, and there are some security aspects to keep in mind when using it. For these reasons alternative libraries like Python requests came up in the past and are pretty common.

However, using an external library in an Ansible module would add an extra dependency, so the Ansible developers added their own library inside Ansible to access URLs: ansible.module_utils.urls. It is already shipped with Ansible – the code can be found at lib/ansible/module_utils/urls.py – and it addresses the shortcomings and security concerns of urllib. If you submit a module calling REST APIs to Ansible, the developers usually require that you use this built-in library.

Unfortunately, the documentation of the Ansible URL library is currently sparse at best. If you need information about it, look at other modules like the GitHub, Kubernetes or A10 modules. To fill that documentation gap I will try to cover the most important basics in the following lines – at least as far as I know them.

Creating REST calls in an Ansible module

To use the Ansible urls library in your module, it needs to be imported in the same way as the basic library:

from ansible.module_utils.basic import *
from ansible.module_utils.urls import *

The main function call to access a URL via this library is open_url. It can take multiple parameters:

def open_url(url, data=None, headers=None, method=None, use_proxy=True,
             force=False, last_mod_time=None, timeout=10, validate_certs=True,
             url_username=None, url_password=None, http_agent=None,
             force_basic_auth=False, follow_redirects='urllib2'):

The parameters in detail are:

  • url: the actual URL, the communication endpoint of your REST API
  • data: the payload for the URL request, for example a JSON structure
  • headers: additional headers, often including the content type of the data stream
  • method: the HTTP method of the call: GET, DELETE, PUT, etc.
  • use_proxy: whether a proxy is to be used or not
  • force: force an update even if a 304 indicates that nothing has changed (I think…)
  • last_mod_time: the time stamp to add to the header in case we get a 304
  • timeout: set a timeout
  • validate_certs: whether certificates should be validated or not; important for test setups where you have self-signed certificates
  • url_username: the user name to authenticate with
  • url_password: the password for the above listed user name
  • http_agent: if you want to set the HTTP agent
  • force_basic_auth: force the usage of basic authentication
  • follow_redirects: determine how redirects are handled

For example, to fire a simple GET at a given source like Google, most parameters are not needed and the call would look like this:

open_url('https://www.google.com',method="GET")

A more sophisticated example is to push actual information to a REST API. For example, if you want to search for the domain example on a Satellite server, you need to change the method to PUT, add a data structure with the actual search string ({"search":"example"}) and add a corresponding content type as header information ({'Content-Type':'application/json'}). Also, a user name and password must be provided. Since we access a test system here, certificate validation needs to be turned off as well. The resulting call looks like this:

open_url('https://satellite-server.example.com/api/v2/domains',
         method="PUT",
         url_username="admin",
         url_password="abcd",
         data=json.dumps({"search":"example"}),
         force_basic_auth=True,
         validate_certs=False,
         headers={'Content-Type':'application/json'})

Beware that the JSON payload for data needs to be serialized with json.dumps. The result of the query can in turn be parsed as JSON and used as a JSON structure:

resp = open_url(...)
resp_json = json.loads(resp.read())
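
One more thing to note: open_url raises an exception if something goes wrong – a connection problem, an SSL validation error or an HTTP error status, for example. Inside a module you usually want to turn that into a proper module failure instead of a Python traceback. A minimal sketch, assuming module is the usual AnsibleModule instance of your module and json has been imported:

try:
    resp = open_url('https://satellite-server.example.com/api/v2/domains',
                    method="GET", validate_certs=False)
    resp_json = json.loads(resp.read())
except Exception as e:
    # report the problem back to Ansible instead of crashing the module
    module.fail_json(msg="REST call failed: %s" % e)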

Full example

In the following example, we query a Satellite server to find a so-called environment ID for two given parameters: an organization ID and an environment name. To create a REST call for this task in a module, several separate steps are needed. First, build the actual URL: it usually consists of the server name kept in a variable and the API endpoint as the flexible part, which differs for each REST call.

server_name = 'https://satellite.example.com'
api_endpoint = '/katello/api/v2/environments/'
my_url = server_name + api_endpoint

Besides the actual URL, the payload must be pieced together, and the headers need to be set according to the content type of the payload – here JSON:

headers = {'Content-Type':'application/json'}
payload = {"organization_id":orga_id,"name":env_name}

Other content types depend on the REST API itself and on what the developer prefers. JSON is widely accepted as a good choice for REST calls.

Next, we set the user and password and launch the call. The return data of the call is saved in a variable for later analysis.

user = 'abc'
pwd = 'def'
resp = open_url(my_url, method="GET", headers=headers,
                url_username=user, url_password=pwd,
                force_basic_auth=True, data=json.dumps(payload))

Last but not least we transform the return value into a JSON structure and analyze it: if the return value does not contain any data – that means the value for the key total is zero – we want the module to exit with an error. Something went wrong, and the automation administrator needs to know that. The module calls the built-in error function module.fail_json. But if total is not zero, we extract the actual environment ID we were looking for with this REST call – it is deeply hidden in the JSON structure, by the way.

resp_json = json.loads(resp.read())
if resp_json["total"] == 0:
    module.fail_json(msg="Environment %s not found." % env_name)
env_id = resp_json["results"][0]["id"]
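
Putting all the pieces together, a complete module could look roughly like the following sketch. This is only a sketch: the parameter names (server, user, pwd, orga_id, env_name) are assumptions made for this example, and the environments endpoint is the one used above – adapt both to your own API.

#!/usr/bin/python
# Minimal sketch of a module querying a REST API via open_url.
# The parameter names and the endpoint are example assumptions.

import json

from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.urls import open_url


def main():
    module = AnsibleModule(
        argument_spec=dict(
            server=dict(required=True),
            user=dict(required=True),
            pwd=dict(required=True, no_log=True),
            orga_id=dict(required=True, type='int'),
            env_name=dict(required=True),
        ),
        supports_check_mode=True,
    )

    my_url = module.params['server'] + '/katello/api/v2/environments/'
    headers = {'Content-Type': 'application/json'}
    payload = {'organization_id': module.params['orga_id'],
               'name': module.params['env_name']}

    try:
        resp = open_url(my_url, method="GET", headers=headers,
                        url_username=module.params['user'],
                        url_password=module.params['pwd'],
                        force_basic_auth=True,
                        data=json.dumps(payload))
        resp_json = json.loads(resp.read())
    except Exception as e:
        module.fail_json(msg="REST call failed: %s" % e)

    if resp_json["total"] == 0:
        module.fail_json(msg="Environment %s not found." % module.params['env_name'])

    module.exit_json(changed=False, env_id=resp_json["results"][0]["id"])


if __name__ == '__main__':
    main()

Saved for example as library/satellite_env.py next to a playbook, Ansible picks such a file up like any other module.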

Summary

It is fairly easy to write Ansible modules that access REST APIs. The most important thing to know is that the internal, Ansible provided library should be used instead of the better known urllib or requests libraries. Also, the documentation of that library is still pretty limited, but hopefully the above post fills part of that gap.

[Short Tip] Call Ansible Tower REST URI – with Ansible

It might sound strange to call the Ansible Tower API right from within Ansible itself. However, if you want to connect several playbooks with each other, or if you use Ansible Tower mainly as an API, this does make sense. To me this use case is also interesting because it documents how to access and use the Ansible Tower API.

The following playbook is an example of launching a job in Ansible Tower. The body payload contains an extra variable needed by the job itself; in this example it is the name of the VM to be launched.

---
- name: POST to Tower API
  hosts: localhost
  vars:
    job_template_id: 44
    vmname: myvmname

  tasks:
    - name: create additional node in GCE
      uri:
        url: https://tower.example.com/api/v1/job_templates/{{ job_template_id }}/launch/
        method: POST
        user: admin
        password: $PASSWORD
        status_code: 202
        body: 
          extra_vars:
            node_name: "{{ vmname }}"
        body_format: json

Note the status code (202): the uri module needs to know that a status code other than 200 signals the successful acceptance of the API call. Also, the job template is identified by its ID; since Tower shows the ID in the web interface, it is no problem to find the correct one.

Impressions from #AnsibleFest London 2016 [Update]

The #AnsibleFest took place in London, and luckily I was able to attend. This post shares some impressions from the event, together with interesting announcements and stories.

Update: The slides of the various presentations are now available.

Preface

The #AnsibleFest London 2016 took place near the O2 Arena and lasted the entire day. The main highlight of the conference was the network automation now coming to Ansible. Other very interesting talks covered helpful tips about managing Windows servers, a 101 on modules, how to implement continuous deployment, the journey of a French bank towards DevOps, how Cisco devices can be managed, and how to handle immutable infrastructure. All focused on Ansible, of course.

But while the conference itself took place on Thursday, the #AnsibleFest already started the evening before: at the social event Ansible Social.
Ansible Social
And it was a wonderful evening: many people from Ansible, partners, coworkers from Red Hat and others were there to enjoy drinks, food and chatting through the evening. Getting to know many of the people there went pretty well; it was a friendly bunch meeting at a pretty nice place.
Ansible Social

Keynote

Upon arrival at the conference area one of the sponsor desks immediately caught the eye: Cisco!
For everyone following Ansible news closely it was obvious that networking would be a big topic, especially since it was about to be featured twice during the day, once by Peter Sprygada from Ansible and later on by Fabrizio Maccioni from Cisco.

And this impression was confirmed when Todd Barr came on stage and talked about the current state of Ansible and what to expect in the near future: networking is a big topic for Ansible right now, they are pushing resources into it, and he already hinted that there would be a larger announcement during the #AnsibleFest. During the presentation the strengths of Ansible were of course emphasized again: that it is simple to set up, to understand and to deploy. And that it does not require agents. While I do have my past with Puppet and still like it as a tool in certain circumstances, I must admit that I had to smile at the slide about agents.
Todd Barr at AnsibleFest
I have to admit, for many customers and many setups this is in fact true: they do not want agents for various reasons. And Ansible can deliver actual results without any need for a client.

The future of Ansible

Next up was Bill Nottingham, setting out the road for the future of Ansible. One focus is certainly better integration of Windows (no beta tag anymore!), better testing – and Python 3 support! It was acknowledged that there are more and more distributions out there not providing Python 2 anymore and that they need to be catered for.
Future of Ansible by Bill Nottingham
Ansible Tower was also covered, of course, and has very promising improvements coming up as well: the interface will be streamlined, the credentials and rights system will be improved, and there will be (virtual) appliances to get Ansible Tower out of the box in an instant. But the really exciting part is more large-scale, enterprise focused: Ansible Tower will be able to cater for federated setups, meaning distributed replication of Ansible Tower commands via proxy Towers.
Federated Ansible Tower
Don’t expect all of this in the next few weeks, but we might see many of these features already in Ansible Tower 3.0. And it was mentioned that there might be a release in early fall.

Scaling abilities are indeed needed – many data centers these days have more than one location, or are spread over several departments and thus need partially independent setups to manage the infrastructure. At the same time, there are Ansible customers who are using Ansible with 50k nodes and more out there, and they have a demand for fine grained, federated infrastructure setups as well.

Networking with Ansible

While the upcoming Ansible Tower had some exciting news, the talk about networking support by Peter Sprygada really blew everyone away. Right at the moment of the presentation, Red Hat issued a press release announcing that they are bringing DevOps to the network via Ansible:

[Red Hat] is bringing DevOps to networking by extending Ansible – its powerful IT automation and DevOps platform – to include native agentless support for automating heterogeneous network infrastructure devices using the same simple human and machine readable automation language that Ansible provides to IT teams.

Peter picked that up and presented a whole lot of technical details. The most important one was that there are now several networking core modules for commands, configuration and templates.
Ansible networking automation support
They cover a huge load of devices:

  • Arista EOS
  • Cisco NXOS
  • Cisco IOS
  • Cisco IOSXR
  • Cumulus Linux
  • Juniper Junos
  • OpenSwitch

While some of these devices were already supported by the raw module or by some libraries out there, fully integrated modules supported by Ansible and the network device manufacturers themselves take networking automation to a whole new level. If you are interested, get the latest Ansible networking modules right away.

Ansible in a visual effects studio

The next talk was by the customer Industrial Light & Magic, a visual effects studio using Ansible to handle their massive setup. It showed in particular how many obstacles you face in your daily routine when running data centers and deploying software all the time – and how to tackle them using Ansible and its features. I forgot to take a photo, though…

Ansible & Windows

John Hawkesworth from M*Modal came to the stage next and delivered a brilliant talk about all the things you need to know when managing Windows with Ansible. After briefly covering the differences between Ansible 1.9 and 2.0, he went over lessons learned, like why the backslash should be escaped every time just to be sure (\t …), but also gave his favourite developments and modules quite some attention. Turns out the registration module can come in very handy!
Ansible and Windows

Writing modules, 101

Next up, James Cammarata introduced how to write modules for Ansible. Impressively, this was demonstrated live with a module he had written in the days before to control his Philips Hue lights – they were controlled via Ansible right on stage.
Ansible Modules 101
Besides the great live demo the major points of the presentation were:

  • It is quite easy to develop modules.
  • Start off simply, and get more complex the further you go down the road.
  • Write a module when your Playbook for a single task exceeds ten lines of code.
  • Write in Python/Powershell when you want it to be integrated with Ansible Core.
  • Write in any language you want if you won’t share it anyway.

While I am sure that other module developers might see some of these points differently, it gives a rather good idea of what to keep in mind when approaching the topic.

Of course, the code for the Philips Hue Ansible module is available on GitHub.

Continuous deployment

Continuous integration is a huge topic in DevOps, and thus also with Ansible. Steve Smith of Atlassian picked up the topic and discussed what needs to be taken into account when Ansible is used to enable continuous integration.
Continous Integration with Ansible
There were many memorable quotes during the talk, which made it simply fun to watch. I particularly liked this one:

Release features, not dumps.

It means: release when you have something worth releasing – not on arbitrary dates. It is a strong statement against release or maintenance windows, and it does make sense: after all, why should you release when it's not worth it? And you certainly will not want to wait if it is important!

Also, since many maintenance windows are implemented because doing maintenance is hard:

Everything which is hard should be done more often, not less.

Combined with the fact that very complex but successful enterprises do 300 releases an hour, it is clear that continuous deployment is possible – but what is often needed is the right culture, and probably at some point a great, simple to use tool able to cater for the needs of complex infrastructure.

Ansible accelerates deployment

The next talk focused on a vertical which might usually claim to be too regulated and “special” to adopt DevOps: finance. Fabrice Bernhard presented the journey of the bank Société Générale, which introduced DevOps principles with the help of Ansible to become more agile and flexible and to be able to respond more quickly to changes. The reason for that was summarized in a very good quote:

It’s not the big that eat the small. It’s the fast that eat the slow.

This is true for all enterprises: software enabled companies have attacked almost every given business (Amazon vs Walmart, Uber vs cabs, Airbnb vs hotels and hostels, etc.). And there are enough analysts right now who see the banking market as the next big thing to be seriously disrupted by mobile payment, blockchain technology and other IT based developments.
Ansible and the challenges for businesses

But that also shows what the actual change must be about: the new companies do not take over because they have better technology. They take over because they have a different culture and approach problems totally differently. Thus, to keep up with the development, change your culture. Or, as said on stage:

Automation is about cultural change. Move fast and break things!

DevOps discussion

After these two powerful talks the audience had a chance to catch their breath during the interactive DevOps discussion. It mainly picked up the topics from the previous talks, and it showed that everyone in the room was pretty sure that DevOps as such is more or less just a name for the underlying change that enterprises need to adopt – or they will fail in the long term, no matter how big they are.

Managing your Cisco data center – with Ansible

As already mentioned, Fabrizio Maccioni from Cisco gave the second talk about managing networks with Ansible.
Ansible and Cisco
Interestingly enough, he mentioned that the push to support Ansible came from customers who were already managing part of their infrastructure with Ansible. A key point is that Ansible does not require an agent. While Cisco does support some configuration management agents on their hardware, it seems that most customers would rather not use them.
Ansible is good because agentless

Immutable infrastructure

The last presentation was given by Vik Bhatti from Beamly. Their challenge is that sometimes they have to scale massively in seconds. Literally, in seconds. That requires them to have machine images up and running in no time. They do this with Ansible, on one hand having the playbooks right on the images, and on the other using Ansible to control their image build process. Actually, the image builder is Packer, and it uses Ansible to build parts of the image.

As a result, down the line they have images ready to deploy and can extend their environment very, very, very quickly. Since they are able to respond that fast, they were able to cut down hardware costs massively.

Final discussions, happy hour

The final panel mainly dealt with questions about an open source Tower (it will come eventually, but there is no fixed date) and similar topics. After that, everyone went to enjoy drinks and a beautiful skyline.
AnsibleFest skyline and happy hour

Conclusion

In conclusion, the #AnsibleFest was a great success, in terms of the people I met as well as the technical discussions. I can't wait to get my hands on the networking modules. I'd like to thank the people from Ansible for making this event possible, and of course my employer Red Hat for making it possible for me to attend.

[Howto] ownCloud auto setup including LDAP

The self-hosted file sharing solution ownCloud is becoming increasingly popular; even in companies you regularly come across installations. To make the automatic setup of ownCloud easier, the following howto shows the steps to automatically connect it to an LDAP server.

File exchange services like Dropbox or Google Drive offer a neat and quick way to exchange even large amounts of data. However, they only work because the data is uploaded to the servers of such corporations in the first place, which is at times questionable when you deal with sensitive data.

Here ownCloud comes into play: it offers the possibility to self-host a file sharing service on infrastructure you trust. Additionally it is open source, which provides at least a minimum level of trust. And it is no longer a solution used only by a few people on their private servers: these days ownCloud is used in the public sector, at universities and in companies of all sizes. For example, the sciebo project offers ownCloud based file exchange services for 300k students with 5 PB of storage.

It is thus no wonder that the interest in hosting ownCloud services is unbroken. Here at credativ we often see corresponding requests from customers who want support in setting up such installations.

Among the challenges of setting up ownCloud in a business environment, two of the biggest are the connection to a central authentication service like LDAP and the unattended installation. The first task is important to fully integrate ownCloud into the existing user space and make it a first class citizen of the existing infrastructure. The second task is especially relevant if you want to deploy the service easily and reproducibly: think of containers, Docker, VMs, etc. here.

Especially the combination of both tasks is challenging: usually ownCloud expects the admin to go through several steps manually, which involves a lot of clicking and entering data until the service is up, running and connected to LDAP. But it is possible to avoid this point-and-click adventure: configuration templates can help pre-configure the ownCloud service, and the setup of the LDAP connection can be automated using ownCloud's configuration command line tool occ.

So let's go through the process step by step. First, ownCloud has to be installed – that can usually be done with the usual package management tools like yum, apt, etc. After the installation, the ownCloud URL is normally opened in a browser to start the first run wizard. This can be automated by providing the configuration template $owncloud/config/autoconfig.php, which contains all the information usually queried in the first run wizard: admin user, password, database type, database user, database password, etc. ownCloud checks at startup if the file is present and, if so, skips the first run wizard. Here is an example of such an autoconfig template:

<?php
$AUTOCONFIG = array (
  'directory' => '/var/www/html/owncloud/data',
  'adminlogin'    => 'mmu',
  'adminpass'     => '123456',
  'dbtype'        => 'pgsql',
  'dbname'        => 'owncloud',
  'dbuser'        => 'postgres',
  'dbpass'        => '123456',
  'dbhost'        => '192.168.123.45',
  'dbtableprefix' => 'oc_',
);

Note that further configuration of your ownCloud can also be placed in the usual config.php file: the values of the autoconfig file will be merged into the existing configuration file. This way you can pre-configure most parts of your server. More details can be found in the admin documentation.

To actually trigger the processing of the autoconfig file, the ownCloud URL must be called at least once. This can be done from the server itself with the help of curl: curl -s -k 127.0.0.1/owncloud/ > /dev/null.

When the basic configuration is done, the next step is to connect the server to LDAP. This would usually be done by opening the ownCloud URL, activating the LDAP app and configuring it. Instead of clicking through the web page, these tasks can be accomplished with the help of the occ tool. It can be used to activate the app, write an empty configuration (thanks mark0n for this) and also set the basic LDAP data. Make sure to call all commands as the user the web server runs as – otherwise you might get all kinds of problems. The individual steps are:

php -f $ocpath/occ app:enable user_ldap
php -f $ocpath/occ ldap:create-empty-config
php -f $ocpath/occ ldap:set-config "" ldapHost 192.168.123.45
php -f $ocpath/occ ldap:set-config "" ldapPort 389
php -f $ocpath/occ ldap:set-config "" ldapBase \"dc=example,dc=net\"
php -f $ocpath/occ ldap:set-config "" ldapConfigurationActive 1

In case you are debugging problems, check the configuration of the ownCloud server via php -f $ocpath/occ ldap:show-config.

And that's it already – your ownCloud should now be connected to your LDAP server. If you script all these commands, for example in Ansible or a Puppet module, the setup even becomes easily reproducible.
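
Just as an illustration, here is a minimal sketch of how the occ steps from above could also be wrapped in a small Python script; the ownCloud path, the web server user (here apache) and the LDAP values are assumptions taken from the examples above and need to be adapted to your environment.

#!/usr/bin/env python
# Minimal sketch: run the occ commands from above unattended.
# Assumptions: ownCloud lives in /var/www/html/owncloud, the web
# server user is "apache", and the LDAP values match the examples.
import subprocess

OC_PATH = '/var/www/html/owncloud'
LDAP_SETTINGS = [
    ('ldapHost', '192.168.123.45'),
    ('ldapPort', '389'),
    ('ldapBase', 'dc=example,dc=net'),
    ('ldapConfigurationActive', '1'),
]

def occ(*args):
    # call occ as the web server user to avoid permission problems
    cmd = ['sudo', '-u', 'apache', 'php', '-f', OC_PATH + '/occ'] + list(args)
    subprocess.check_call(cmd)

occ('app:enable', 'user_ldap')
occ('ldap:create-empty-config')
for key, value in LDAP_SETTINGS:
    occ('ldap:set-config', '', key, value)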

In case you are interested, I also wrote a German blog article about the problem on credativ’s blog: Owncloud Auto-Setup mit LDAP-Anbindung.