[Howto] Launch traefik as a docker container in a secure way

Traefik is a great reverse proxy solution, and a perfect tool to direct traffic in container environments. However, to do that, it needs access to docker – and that is very dangerous and must be tightly secured!

The problem: access to the docker socket

Containers offer countless opportunities to improve the deployment and management of services. However, having multiple containers on one system, often re-deploying them on the fly, requires a dynamic way of routing traffic to them. Additionally, there might be reasons to have a front end reverse proxy to sort the traffic properly anyway.

In comes traefik – “the cloud native edge router”. Among many supported backends it knows how to listen to docker and create dynamic routes on the fly when new containers come up.

To do so traefik needs access to the docker socket. Many people decide to simply mount the socket as a volume into the traefik container. This usually does not work because SELinux prevents it, for good reason. The apparent workaround for many is to run traefik in a privileged container. But that is a really bad idea:

Docker currently does not have any Authorization controls. If you can talk to the docker socket or if docker is listening on a network port and you can talk to it, you are allowed to execute all docker commands. […]
At which point you, or any user that has these permissions, have total control on your system.

http://www.projectatomic.io/blog/2014/09/granting-rights-to-users-to-use-docker-in-fedora/

The solution: a docker socket proxy

But there are ways to securely provide traefik the access it needs without exposing too many permissions. One way is to provide limited access to the docker socket over TCP via another container which cannot easily be reached from the outside.

Meet Tecnativa’s docker-socket-proxy:

What?
This is a security-enhanced proxy for the Docker Socket.
Why?
Giving access to your Docker socket could mean giving root access to your host, or even to your whole swarm, but some services require hooking into that socket to react to events, etc. Using this proxy lets you block anything you consider those services should not do.

https://github.com/Tecnativa/docker-socket-proxy/blob/master/README.md

It is a container which connects to the docker socket and exposes the API in a secured and configurable way via TCP. At container startup, boolean environment variables define which API sections are accessible.

So basically you set up a docker proxy to support your proxy for docker containers. Well…

How to use it

The docker socket proxy is a container itself. Thus it needs to be launched as a privileged container with access to the docker socket. Also, it must not publish any ports to the outside. Instead it should run on a dedicated docker network shared with the traefik container. For example, the Ansible code to launch the container that way looks like this:

- name: ensure privileged docker socket container
  docker_container:
    name: dockersocket4traefik
    image: tecnativa/docker-socket-proxy
    log_driver: journald
    env:
      # grant access to the containers section of the API only
      CONTAINERS: "1"
    state: started
    privileged: yes
    # expose the port inside the docker network only, do not publish it
    exposed_ports:
      - 2375
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:z"
    networks:
      - name: dockersocket4traefik_nw

Note the env variable right in the middle: that is where the exported permissions are configured. CONTAINERS: 1 provides access to container-relevant information. There are also SERVICES: 1 and SWARM: 1 to manage access to docker services and swarm.
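
The dedicated docker network referenced above has to exist before the containers are attached to it. A minimal sketch to create it with Ansible's docker_network module, reusing the network name from the example above, could look like this:

- name: ensure dedicated network for the docker socket proxy
  docker_network:
    name: dockersocket4traefik_nw
    state: present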

Traefik needs to have access to the same network. Also, the traefik configuration needs to point to the docker socket proxy container via TCP:

[docker]
endpoint = "tcp://dockersocket4traefik:2375"
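
For completeness, here is a sketch of how the traefik container itself could be launched on the same network; the image tag, configuration path and published ports are examples and need to be adapted to your setup:

- name: ensure traefik container
  docker_container:
    name: traefik
    image: traefik:1.7
    state: started
    log_driver: journald
    published_ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/etc/traefik/traefik.toml:/etc/traefik/traefik.toml:z"
    networks:
      - name: dockersocket4traefik_nw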

Conclusion

This setup is surprisingly easy to get working. It allows traefik to access the docker socket for the things it needs without exposing permissions that would allow taking over the system. At the same time, full access to the docker socket is restricted to a non-public container, which makes it harder for attackers to exploit it.

If you have a simple container setup and use Ansible to start and stop the containers, I’ve written a role to get the above-mentioned setup running.


[Short Tip] Provide dictionaries as default in Ansible variables


Ansible uses the Jinja2 template engine to handle variables. This includes the default filter, which sets a default value if a referenced variable is not explicitly defined somewhere else.

With Ansible it might happen that instead of a scalar variable a key-value pair is needed: a dictionary. If you just paste the plain text in there, you might run into trouble:

fatal: [test.example.com]: FAILED! => {"changed": false, "msg": "argument env is of type and we were unable to convert to dict: dictionary requested, could not parse JSON or key=value"}

The key-value pair needs to be properly formatted:

"{{ my_variable|default({'key':'value'}) }}"

Thanks to @bcoca for his post about this.

[Short Tip] Identify supported platforms of Ansible Galaxy


Ansible Galaxy recently got a fresh update and now has many more features worth a look. Among those are automatic quality scorings.

In a recent role upload my scoring was only 4.5. One of the problems was an “invalid platform”. I wondered which platforms are supported and what the strings for those are, but the documentation is sparse in this regard.

However, Ansible Galaxy does feature an API to query those things. And in fact galaxy.ansible.com/api/v1/platforms/ shows the appropriate Fedora versions:

    {
        "id": 143,
        "url": "/api/v1/platforms/143/",
        "related": {},
        "summary_fields": {},
        "created": "2018-01-15T11:54:54.212531Z",
        "modified": "2018-01-15T11:54:54.212560Z",
        "name": "Fedora",
        "release": "27",
        "active": true
    },
    {
        "id": 162,
        "url": "/api/v1/platforms/162/",
        "related": {},
        "summary_fields": {},
        "created": "2018-04-30T16:35:24.066120Z",
        "modified": "2018-04-30T16:35:24.066153Z",
        "name": "Fedora",
        "release": "28",
        "active": true
    },
    {
        "id": 61,
        "url": "/api/v1/platforms/61/",
        "related": {},
        "summary_fields": {},
        "created": "2016-02-04T06:29:41.226911Z",
        "modified": "2016-02-04T06:29:41.226980Z",
        "name": "FreeBSD",
        "release": "10.0",
        "active": true
    }
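
The query can also be automated, for example with Ansible's uri module. A small sketch (note that the API paginates its output, so this only looks at the first page of results):

- name: query Ansible Galaxy for supported platforms
  uri:
    url: https://galaxy.ansible.com/api/v1/platforms/
    return_content: yes
  register: galaxy_platforms

- name: show platform names of the first result page
  debug:
    msg: "{{ galaxy_platforms.json.results | map(attribute='name') | list | unique }}"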

So Fedora 29 is not supported right now, but there is even a bug report already.

[Short Tip] Use Ansible with managed nodes running Python3


Python 3 is becoming the default Python version on more and more distributions. Fedora 28 ships Python 3, and RHEL 8 is expected to ship Python 3 as well.

With Ansible this can lead to trouble: some of these distributions do not ship a default /usr/bin/python but instead require explicitly picking either /usr/bin/python2 or /usr/bin/python3, which leads to errors when Ansible is called to manage such machines:

TASK [Gathering Facts] 
fatal: [116.116.116.202]: FAILED! => {"changed": false, "module_stderr": "Connection to 116.116.116.202 closed.\r\n", "module_stdout": "/bin/sh: /usr/bin/python: No such file or directory\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 127}

The fix is to define the Python interpreter via an additional variable, which can even be provided on the command line:

$ ansible-playbook -i 116.116.116.202, mybook.yml -e ansible_python_interpreter="/usr/bin/python3"
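
Instead of passing it on the command line each time, the interpreter can also be set as an inventory or group variable. A minimal sketch, with the file path just being an example:

# group_vars/all.yml
ansible_python_interpreter: /usr/bin/python3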

[Howto] Using Ansible to manage RHEL 5 systems


With the release of Ansible 2.4, Ansible requires that managed nodes have a Python version of at least 2.6. Most notably, this leaves RHEL 5 users wondering how to manage RHEL 5 systems in the future, given that RHEL 5 only provides Python 2.4.

I covered this topic in a recent blog post at ansible.com/blog, read more at “USING ANSIBLE TO MANAGE RHEL 5 YESTERDAY, TODAY AND TOMORROW”.

[HowTo] Combine Python methods with Jinja filters in Ansible


Ansible has a lot of ways to manipulate variables and their content. We shed some light on the different possibilities – and how to combine them.

Ansible inbuilt filters

One way to manipulate variables in Ansible is to use filters. Filters are connected to variables via pipes, |, and the result is the modified variable. Ansible offers a set of inbuilt filters. For example the ipaddr filter can be used to find IP addresses with certain properties in a list of given strings:

# Example list of values
test_list = ['192.24.2.1', 'host.fqdn', '::1', '192.168.32.0/24', 'fe80::100/10', True, '', '42540766412265424405338506004571095040/64']

# {{ test_list | ipaddr }}
['192.24.2.1', '::1', '192.168.32.0/24', 'fe80::100/10', '2001:db8:32c:faad::/64']
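
Used in a playbook this can be as simple as a debug task. A small sketch (the ipaddr filter requires the netaddr Python library on the control host, and the shortened test list is just an example):

- name: show only the valid IP addresses from the example list
  debug:
    msg: "{{ test_list | ipaddr }}"
  vars:
    test_list: ['192.24.2.1', 'host.fqdn', '::1', '192.168.32.0/24']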

Jinja2 filters

Another set of filters which can be utilized in Ansible are the filters of Jinja2, the default template engine in Ansible.

For example the map filter can be used to pick certain values from a given dictionary. Note the following code snippet where, from a list of names, only the first names are output as a list due to the map filter (and the list filter for the output).

vars:
  names:
    - first: Foo
      last: Bar
    - first: John
      last: Doe

tasks:
- debug:
    msg: "{{ names | map(attribute='first') | list }}"

Python methods

Besides filters, variables can also be modified by Python string methods: Python is the scripting language Ansible is written in, and it provides string manipulation methods Ansible can just use. In contrast to filters, methods are not attached to variables with a pipe, but with dot notation:

vars:
  mystring: foobar something

tasks:
- name: endswith method
  debug:
    msg: "{{ mystring.endswith('thing') }}"

...

TASK [endswith method] *****************************************************************
ok: [localhost] => {
 "msg": true
}

Due to the close relation between Python and Jinja2, many of the above mentioned Jinja2 filters are quite similar to the string methods in Python. As a result, some capabilities like capitalize are available as a filter as well as a method:

vars:
  mystring: foobar something

tasks:
- name: capitalize filter
  debug:
    msg: "{{ mystring|capitalize() }}"

- name: capitalize method
  debug:
    msg: "{{ mystring.capitalize() }}"

Connecting filters and methods

Due to the different ways of invoking filters and methods, it is sometimes difficult to bring both together. Caution needs to be applied if filters and methods are to be mixed.

For example, if a list of IP addresses is given and we want the last element of the included address of the range 10.0.0.0/8, we can first use the ipaddr filter to only output the IP within the appropriate range, and afterwards use the split method to break the address up into a list with four elements:

vars:
  myaddresses: ['192.24.2.1', '10.0.3.5', '171.17.32.1']

tasks:
- name: get last element of 10* IP
  debug:
    msg: "{{ (myaddresses|ipaddr('10.0.0.0/8'))[0].split('.')[-1] }}"

...

TASK [get last element of 10* IP] **************************************************************
ok: [localhost] => {
 "msg": "5"
}

As can be seen above, to attach a method to a filtered object, another set of brackets – ( ) – is needed. Also, since the result of this filter is a list, we need to pick a list element – in this case this is easy since there is only one result, so we take element 0. Afterwards, the split method is called on the result, which gives back a list of elements, and we take the last element (-1, but element 3 would have worked here as well).

 

Conclusion

There are many ways in Ansible to manipulate strings. However, since they come from various sources, it is sometimes a little bit tricky to find what is actually needed.

Ansible package moved from EPEL to extras

A few days ago the Ansible package was removed from EPEL, and many asked why that happened. The background is that Ansible is now provided in certain Red Hat channels.

What happened?

In the past (pre-October 2017) most people who were on RHEL, CentOS or similar RHEL based systems used to install Ansible from the EPEL repository. This way the package was updated regularly and it was ensured that it met the quite high packaging standards of the EPEL project.

However, a few days ago someone noticed that the EPEL repositories no longer contain an Ansible rpm package:

I'm running RHEL 7.3, and have installed the latest epel-release-latest-7.noarch.rpm. However, I'm unable to install ansible from this repo.

This caused some confusion and questions about the reasons behind that move.

EPEL repository policy

To better understand what happened it is important to understand EPEL’s package policy:

EPEL strives to never replace or interfere with packages shipped by Enterprise Linux.

While the idea of EPEL is to provide cool additional packages for RHEL, it will never replace anything that is shipped in RHEL itself.

Change at Red Hat Enterprise Linux

That philosophy regularly requires that the EPEL project removes packages: each time RHEL adds a package, EPEL needs to check whether it already provides it, and if so, remove its own copy.

And a few weeks ago exactly that happened: Ansible was included in RHEL's extras repository.

The reason behind that move is that the newest incarnation of RHEL now comes along with so-called system roles, which require Ansible to execute them.

But where to get it now?

Ansible is now directly available to RHEL users as mentioned above. Also, CentOS picked up Ansible in their extras repository, and there are plenty of other ways available.
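
For RHEL hosts this boils down to enabling the extras repository and installing the package from there. A minimal sketch as an Ansible task; the repository id is an example for RHEL 7 server and varies by release and variant:

- name: install Ansible from the RHEL extras repository
  yum:
    name: ansible
    enablerepo: rhel-7-server-extras-rpms
    state: present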

The only case where something actually changes for people is when the EPEL repository is activated – but the extras repository is not.