[Howto] Launch traefik as a docker container in a secure way

Traefik is a great reverse proxy solution, and a perfect tool to direct traffic in container environments. However, to do that, it needs access to docker – and that is very dangerous and must be tightly secured!

The problem: access to the docker socket

Containers offer countless opportunities to improve the deployment and management of services. However, having multiple containers on one system, often re-deploying them on the fly, requires a dynamic way of routing traffic to them. Additionally, there might be reasons to have a front end reverse proxy to sort the traffic properly anyway.

In comes traefik – “the cloud native edge router”. Among many supported backends it knows how to listen to docker and create dynamic routes on the fly when new containers come up.

To do so, traefik needs access to the docker socket. Many people decide to just provide it as a volume to traefik. On systems with SELinux enabled this usually does not work, because SELinux prevents it – for good reason. The apparent workaround for many is to run traefik in a privileged container. But that is a really bad idea:

Docker currently does not have any Authorization controls. If you can talk to the docker socket or if docker is listening on a network port and you can talk to it, you are allowed to execute all docker commands. […]
At which point you, or any user that has these permissions, have total control on your system.

http://www.projectatomic.io/blog/2014/09/granting-rights-to-users-to-use-docker-in-fedora/

The solution: a docker socket proxy

But there are ways to give traefik the access it needs securely – without handing over too many permissions. One way is to expose a limited subset of the docker socket API over TCP through another container, one which cannot be reached from the outside that easily.

Meet Tecnativa’s docker-socket-proxy:

What?
This is a security-enhanced proxy for the Docker Socket.
Why?
Giving access to your Docker socket could mean giving root access to your host, or even to your whole swarm, but some services require hooking into that socket to react to events, etc. Using this proxy lets you block anything you consider those services should not do.

https://github.com/Tecnativa/docker-socket-proxy/blob/master/README.md

It is a container which connects to the docker socket and exposes the API in a secured, configurable way via TCP. At container startup, environment variables (simple booleans) define which API sections are made accessible.

So basically you set up a docker proxy to support your proxy for docker containers. Well…

How to use it

The docker socket proxy is a container itself. Thus it needs to be launched as a privileged container with access to the docker socket. Also, it must not publish any ports to the outside. Instead it should run on a dedicated docker network shared with the traefik container. The Ansible code to launch the container that way is for example:

- name: ensure privileged docker socket container
  docker_container:
    name: dockersocket4traefik
    image: tecnativa/docker-socket-proxy
    log_driver: journald
    env:
      CONTAINERS: 1
    state: started
    privileged: yes
    exposed_ports:
      - 2375
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:z"
    networks:
      - name: dockersocket4traefik_nw

Note the env section right in the middle: that is where the exported permissions are configured. CONTAINERS: 1 grants access to container-relevant information. There are also SERVICES: 1 and SWARM: 1 to control access to docker services and swarm.
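
To convince yourself that the restriction actually works, you can query the proxy from another container on the shared network. The following is only a sketch – the curlimages/curl image and the chosen API endpoints are my assumptions, not part of the original setup – but requests to sections that were not granted should be rejected by the proxy with HTTP 403:

$ docker run --rm --network dockersocket4traefik_nw curlimages/curl \
    -s http://dockersocket4traefik:2375/containers/json    # granted via CONTAINERS=1
$ docker run --rm --network dockersocket4traefik_nw curlimages/curl \
    -si http://dockersocket4traefik:2375/secrets            # not granted, expect HTTP 403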

Traefik needs access to the same network. Also, the traefik configuration needs to point to the proxy container via TCP:

[docker]
endpoint = "tcp://dockersocket4traefik:2375"
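
For completeness, a matching Ansible task for the traefik container itself could look like the following sketch; the network name is the one from above, while the image tag, the published ports and the traefik.toml path on the host are assumptions to adapt to your environment:

- name: ensure traefik container
  docker_container:
    name: traefik
    image: traefik
    log_driver: journald
    state: started
    published_ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/etc/traefik/traefik.toml:/etc/traefik/traefik.toml:z"
    networks:
      - name: dockersocket4traefik_nw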

Conclusion

This setup is surprisingly easy to get working. It allows traefik to access the docker socket for the things it needs, without exposing permissions that would allow taking over the system. At the same time, full access to the docker socket is restricted to a non-public container, which makes it harder for attackers to exploit.

If you have a simple container setup and use Ansible to start and stop the containers, I’ve written a role to get the above mentioned setup running.


So you think offline systems need no updates?

Often customers run offline systems and claim that such machines do not need updates since they are offline. But this is a fallacy: updates do not only close security holes but also deliver bug fixes – and they can be crucial.

Background

Recently a customer approached me with questions regarding an upgrade of a server. During the discussion, the customer mentioned that the system never got upgrades:

“It is an offline system, there is no need.”

That’s a common misconception. And a dangerous one.

Many people think that updates are only important to fix security issues, and that bugfixes are not really worth considering – after all, the machine works, right?

Wrong!

Software is never perfect. Errors happen. And while security issues might be uncomfortable, bugs in the program code can be a much more serious issue than “mere” security problems.

Example one: Xerox

To pick an example, almost every company out there has one type of system which hardly ever gets updated: copy machines. These days they are connected to the internet and can e-mail scanned documents. Yet they practically never receive updates – after all, it just works, right?

In 2013 it was discovered that many Xerox WorkCentres had a serious software bug which caused them to alter scanned numbers. It took several weeks of analysis until a software update finally fixed the issue. During that time it turned out that the bug was at least 8 years old. So millions and millions of faulty scans have been produced over the years – and in some cases the originals were destroyed in the meantime. The impact can hardly be estimated, but it is certainly huge and will accompany us for a long time. And it was estimated that even today many of these scanners are still not patched – because it is simply not common to patch such systems. Offline, right?

So yes, a security issue might expose your data to the world. But it’s worse when the data is wrong to begin with.

Example two: Jails

Another example hit the news just recently: the Washington State Department of Corrections in the US released inmates too early – due to a software bug. Again the bug had been present for years and years, releasing inmates too early the entire time.

Example three: Valve

While Valve’s systems are, by definition, mostly online, the Steam for Linux bug showed that all kinds of software can contain, well, all kinds of bugs: if you moved the folder of your Steam client, it could actually delete your entire (home) directory. Just like that. And again: this bug did not strike all the time, but only in certain situations and after quite some time.

# if $STEAMROOT is empty – e.g. because the folder was moved – this expands
# to rm -rf "/"* and wipes everything the user is allowed to delete
rm -rf "$STEAMROOT/"*

Example four: Office software

Imagine you have a bug in your spreadsheet software – numbers are not processed or displayed correctly. The possible implications are endless. Two famous bugs which show that bugfixes are worth considering are the MS Office multiplication bug from 2007 and the MS Office sum bug from a year later.

Example five: health

Yet another example surfaced in 2000, when a treatment planning system at a radiotherapy department was found to calculate wrong treatment times for patients, exposing them to much more radiation than was good for them. It took quite some time until the bug was discovered – too late for some patients, whose

“deaths were probably radiation related”.

Conclusion

So, yes, security issues are harmful. They must be taken seriously, and a solid, well designed security concept should be applied: multiple layers, different zones, role-based access, frequent updates, etc.

But systems which are secured by air gaps need to be updated as well. The above-mentioned examples show bugs not only in highly specific applications, but also in software components used in thousands and millions of machines. So administrators should at least spend a few seconds reading into each update and check whether it is relevant. Otherwise you might be corrupting your data over years and years without realizing it – until it is too late.

[Howto] OpenSCAP – basics and how to use in Satellite

Security compliance policies are common in enterprise environments and must be evaluated regularly. This is best done automatically – especially when you talk about hundreds of machines. The Security Content Automation Protocol provides the necessary standards around compliance testing – and OpenSCAP implements these standards in Open Source tools which integrate with, for example, Satellite.

Background

Security can be ensured by various means. One of the processes in enterprise environments is to establish and enforce sets of default security policies to ensure that all systems at least follow the same set of IT baseline protection.

Part of such a process is to check the compliance of the affected systems regularly and document the outcome, positive or negative.

To avoid checking each system manually – repeating the same steps again and again – a defined method to describe policies and how to test them was developed: the Security Content Automation Protocol, SCAP. In simple words, SCAP is a protocol that describes how to write security compliance checklists. In the real world, the concept behind SCAP is a little bit more complicated, and it is worth reading through the home page to understand it.

OpenSCAP is a certified Open Source implementation of the Security Content Automation Protocol and enables users to run the mentioned checklists against Linux systems. It is developed in the broader ecosystem of the Fedora Project.

How to use OpenSCAP on Fedora, RHEL, etc.

Checking the security compliance of systems requires, first and foremost, a given set of compliance rules. In a real-world environment the requirements of the given business would be evaluated and the necessary rules derived. Many industries also have pre-defined rule sets.

For a start it is sufficient to utilize one of the existing rule sets. Luckily, the OpenSCAP packages in Fedora, Red Hat Enterprise Linux and related distributions ship with a predefined set of compliance checks.

So, first install the necessary software and compliance checks:

$ sudo dnf install scap-security-guide openscap-scanner

Check which profiles (checklists, more or less) are installed:

$ sudo oscap info /usr/share/xml/scap/ssg/content/ssg-fedora-ds.xml
Document type: Source Data Stream
Imported: 2015-10-20T09:01:27

Stream: scap_org.open-scap_datastream_from_xccdf_ssg-fedora-xccdf-1.2.xml
Generated: (null)
Version: 1.2
Checklists:
    Ref-Id: scap_org.open-scap_cref_ssg-fedora-xccdf-1.2.xml
        Profiles:
            xccdf_org.ssgproject.content_profile_common
        Referenced check files:
            ssg-fedora-oval.xml
                system: http://oval.mitre.org/XMLSchema/oval-definitions-5
Checks:
    Ref-Id: scap_org.open-scap_cref_ssg-fedora-oval.xml
No dictionaries.

Run a test with the available profile:

$ sudo oscap xccdf eval \
--profile xccdf_org.ssgproject.content_profile_common \
--report /tmp/report.html \
/usr/share/xml/scap/ssg/content/ssg-fedora-ds.xml

In this example, the result will be printed to /tmp/report.html and roughly looks like this:

[Screenshot: report overview]

If a report is clicked, more details are shown:

[Screenshot: report details]

The details are particularly interesting if a test fails: they contain rich information about the test itself – the rationale behind the compliance policy, which helps auditors understand the severity of the failing test, as well as detailed technical information about what was actually checked, so that sysadmins can verify the test on their own. Also, linked identifiers provide further information such as CVEs and other sources.
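
If you also want to keep the machine-readable results – for example to archive them for auditors, or to re-generate the HTML report later – oscap can write them out as well. A small sketch, with arbitrary paths:

$ sudo oscap xccdf eval \
--profile xccdf_org.ssgproject.content_profile_common \
--results /tmp/results.xml \
--report /tmp/report.html \
/usr/share/xml/scap/ssg/content/ssg-fedora-ds.xml
$ oscap xccdf generate report /tmp/results.xml > /tmp/report-regenerated.html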

Usage in Satellite

Red Hat Satellite, Red Hat’s system management solution to deploy and manage RHEL instances, has the ability to integrate OpenSCAP. The same is true for Foreman, one of the Open Source projects Satellite is based upon.

While the OpenSCAP packages need to be installed separately on a Satellite server, the procedure is fairly simple:

$ sudo yum install ruby193-rubygem-foreman_openscap puppet-foreman_scap_client -y
...
$ sudo systemctl restart httpd && sudo systemctl restart foreman-proxy

Afterwards, SCAP policies can be configured directly in the web interface, under Hosts -> Policies:

[Screenshot: Satellite SCAP policy configuration]

Beforehand you might want to check if proper SCAP content is provided already under Hosts -> SCAP Contents. If no content is shown, change the Organization to “Any Context” – there is currently a bug in Satellite making this step necessary.

When a policy has been created, hosts need to be assigned to the policy. Also, the given hosts should be supplied with the appropriate Puppet modules:

[Screenshot: SCAP Puppet class assignment]

Thanks to the Puppet class the given host will be configured automatically, including the SCAP content and all necessary packages. There is no need to perform any task on the host manually.

However, SCAP policies are usually checked only once a week, and shortly after installation the admin will probably want to test the new capabilities right away. Thus there is also a manual way to start a SCAP run on a host. First, Puppet must run at least once to download the new module, install the packages, etc. Afterwards, the configuration must be checked for the internal policy id, and the OpenSCAP client needs to be run with that id as argument.

$ sudo puppet agent -t
...
$ sudo cat /etc/foreman_scap_client/config.yaml
...
# policy (key is id as in Foreman)
2:
  :profile: 'xccdf_org.ssgproject.content_profile_stig-rhel7-server-upstream'
...
$ sudo foreman_scap_client 2
DEBUG: running: oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_stig-rhel7-server-upstream --results-arf /tmp/d20151211-2610-1h5ysfc/results.xml /var/lib/openscap/content/96c2a9d5278d5da905221bbb2dc61d0ace7ee3d97f021fccac994d26296d986d.xml
DEBUG: running: /usr/bin/bzip2 /tmp/d20151211-2610-1h5ysfc/results.xml
Uploading results to ...

If a Capsule is involved as well, the proper command to upload the report to the central server is smart-proxy-openscap-send.

After these steps Satellite provides a good overview of all reports, even on the dashboard:

[Screenshot: SCAP reports on the Satellite dashboard]

As you see: my demo system is certainly out of shape! =D

Conclusion

SCAP is a very convenient and widely accepted way to evaluate security compliance policies on given systems. The SCAP implementation OpenSCAP is not only compatible with the SCAP standards – it is even a certified implementation – and it provides appealing reports which can be used to document the compliance of systems while at the same time incorporating enough information to help sysadmins do their job.

Last but not least, the integration with Satellite is quite nice: proper checklists are already provided for RHEL and others, Puppet makes sure everything just falls into place, and there is a neat integration into the management interface which offers RBAC, for example to give auditors access to the reports.

So: if you are dealing with security compliance policies in your IT environment, definitely check out OpenSCAP. And if you have RHEL instances, take your Satellite and start using it – right away!!

Short Tip: Extract attachments from multipart messages


Sometimes e-mails are stored as plain text files. This might be due to backup solutions, or – as in my case – because the e-mail client has issues with more complicated S/MIME mails with attachments, so the mails have to be saved in text format.

In such cases the text file itself contains the multipart message body of the e-mail, so the attachments are provided as base64 streams:

--------------060903090608060502040600
Content-Type: application/x-gzip;
 name="liquidat.tar.gz"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="liquidat.tar.gz"

qrsIAstukk4AA+17Wa+jSpZuPfMrrK6Hvi1qJ4OxsfuodQU2YLDBUJDGrvqBeR7MaPj1N7D3
OEmSxO8Wq7+3Y48dTWvXi8XvvKj8od6vPf9vKjWIv1v7nt3G/d8rEX5D/FdrDIxj2IrUPeE/
j5Dv4g9+fPnTRcX006T++LdYYw7w+i...

These data can be extracted manually, but that’s a wearisome task. It’s easier to use munpack, which can extract the attachment(s) out of such text files and write them to properly named files.

$ munpack -f mail.txt
liquidat.tar.gz (application/x-gzip)

Short Tip: Convert PEM files to CRT/DER

Within the Linux ecosystem certificates are often exchanged in PEM format. However, when you have to deal with other, most often proprietary ecosystems, you must be able to export the certificates in other file formats. For example, if you have certificates stored as PEM files and need them in DER format – files which often end in .crt, .cer or .der – you can use openssl to rewrite them:

$ openssl x509 -outform der -in MyCertificate.pem -out MyCertificate.crt
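
The reverse direction works just the same: if you receive a DER encoded certificate and need it as PEM, openssl converts it back:

$ openssl x509 -inform der -in MyCertificate.crt -out MyCertificate.pem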

[Howto] Git history cleanup

Git is great. It stores everything you hand over to it. Unfortunately it also stores stuff you realize, later on, you should not have handed over, for example due to security concerns. Here are two short ways to remove stuff from Git and clean up the history.

Most people using Git in their daily routine sooner or later stumble into a situation where they realize they have committed files which should not be in the repository anymore. In my case (rather special, I admit) I was working in a Git repo and had created a new branch to add some further stuff in a new sub-directory. Later on, however, when I had to clone the content of the new branch to another remote location, I realized that there were some old files in the repo (and thus also in the new branch) which could not be exported to that location due to security concerns. They had to be removed beforehand!

So, I had to screw around with Git – but since Git is awesome, there are ways. One way I found under the marvelous title git rocks even when it sucks is to go through the entire commit history and rewrite each and every commit by deleting everything related to the given file name. This can be done by using git filter-branch.

For example, given that I had to remove a folder called “Calculation” the command is:

$ git filter-branch -f --index-filter 'git rm -rf --cached --ignore-unmatch Calculation' -- --all
Rewrite 5089fb36c64934c1b7a8301fe346a214a7cccdaa (360/365)rm 'Calculation'
Rewrite cc232788dfa60355dd6db6c672305700717835b4 (361/365)rm 'Calculation'
Rewrite 33d1782fdd6de5c75b7db994abfe228a028d7351 (362/365)rm 'Calculation'
Rewrite 7416d33cac120fd782f75c6eb91157ce8135590b (363/365)rm 'Calculation'
Rewrite 81e77acb22bd08c9de743b38a02341682ca369dd (364/365)rm 'Calculation'
Rewrite 2dce54592832f333f3ab947b020c0f98c94d1f51 (365/365)rm 'Calculation'

Ref 'refs/heads/documentation' was rewritten
Ref 'refs/remotes/origin/master' was rewritten
Ref 'refs/remotes/origin/documentation' was rewritten
WARNING: Ref 'refs/remotes/origin/master' is unchanged

The folder was removed entirely! However, the old commit messages are still there, so you had better not have any sensitive data in them! And as mentioned in the linked blog post, to really get rid of all traces of the files it is best to clone the repository again afterwards.
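
If re-cloning is not practical, a commonly used alternative is to drop the backup refs filter-branch leaves behind and let Git prune the old objects directly. Treat this as a sketch and only run it once you are sure the rewrite is what you want – it is destructive:

$ rm -rf .git/refs/original/
$ git reflog expire --expire=now --all
$ git gc --prune=now --aggressive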

In my case an even simpler way was to take the new subdirectory, make it the new root of the repository, and rewrite the entire history relative to that new root. All files not under the new root are discarded in that case. Here is the proper command, given that I had added my new content under the subdir “documentation”:

$ git filter-branch --subdirectory-filter documentation -- --all
Rewrite dd1d03f648e983208b1acd9a9db853ee820129b9 (34/34)
Ref 'refs/heads/documentation' was rewritten
WARNING: Ref 'refs/remotes/origin/master' is unchanged
Ref 'refs/remotes/origin/documentation' was rewritten
WARNING: Ref 'refs/remotes/origin/master' is unchanged

Please note that in both cases you have to be extra careful if you renamed directories somewhere along the way. If you are not sure, better check all files which have ever been in the repository:

$ for commit in `git log --all --pretty=format:%H`; do git ls-tree -r -l $commit; done |awk '{print $5}'
documentation/foobar.txt
documentation/barfoo.txt
Calculation/example.txt
Calculation/HiglySecureStuff.txt
...
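
Keep in mind that both filter-branch variants only rewrite your local repository. If the offending history also lives on a remote, the rewritten branches (and tags, if any) have to be force-pushed, and every other clone should be re-cloned afterwards – assuming the remote is called origin:

$ git push origin --force --all
$ git push origin --force --tags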

Skype is following your links – that’s proprietary for you

Yesterday it was reported that Skype, owned by Microsoft these days, seems to automatically follow every exchanged https link. Besides the fact that this is a huge security and privacy problem in its own right, it again shows how important it is not to trust a proprietary system.

The problem, skin deep

Heise reported yesterday that Skype follows https links which have been exchanged in chats, and does so on a regular basis. First and foremost, this is a privacy issue: it looks like Skype, and thus Microsoft, regularly scans your chat history and acts based on these findings. That cannot be explained by “security measures” or anything like it, and it is not acceptable. My personal data is mine, and Microsoft should have nothing to do with it as long as there is no need!

Second, there is the security problem: imagine you are exchanging private links, or even links containing passwords and usernames for direct access (you shouldn’t, but sometimes you have to). Microsoft does follow these links – and therefore gains full access to all data hidden there. If these are sensitive data (private or business), you have no idea what Microsoft is going to do with them.

Third, there is the disturbing part: Microsoft only follows the https links, only the encrypted URLs. If this were a security measure, they would surely follow the http links as well. So there must be another explanation – but which one? It is disturbing to know that Microsoft has a motivation to regularly follow links to specifically secured content.

The problem, profound

While this news is shocking, the root problem is not Skype or the behavior of Microsoft – I am pretty sure that their licence agreement covers such actions. And it is most likely that others like WhatsApp, Facebook Chat or whatnot behave in similar ways. So the actual problem is handing over all your data to a company you have no insight into. You have no idea what they are doing, you have no control over it, and you cannot even be sure that nothing bad is done with it. Also, most vendors try to lock you into their service, so that switching away from them is painful due to established workflows, tools and social networks.

The solution

From my point of view, the personally perfect solution is hosting such sensitive services on my own. However, that cannot be a solution for everyone, and I for one cannot provide, for example, the SLAs others need.

Thus I guess the best solution is to be conscious about what you do – and what the consequences are. Try to avoid proprietary solutions where possible. For chats, for example, try to use open protocols like XMPP. Google Talk is a good example here: company backed, but still using open protocols, and they even push the development forward (Jingle, …). If you upload files to web services, make sure you have a local backup. Also, try not to upload sensitive data – and if you have to, encrypt it beforehand. And if you use social networks, try not to depend on any one of them too much; cross-post to various services at the same time if possible.

And, last but not least: ask your service providers to establish transparency and rules for responsible and acceptable usage of your data. After all, they depend on the users’ trust, and if enough users request such changes, they will have to follow.