So you think offline systems need no updates?

Often customers run offline systems and claim that such machines do not need updates since they are offline. But this is a fallacy: updates do not only close security holes, they also deliver bug fixes – and those can be crucial.

Background

Recently a customer approached me with questions regarding a server upgrade. During the discussion the customer mentioned that the system had never received any updates:

“It is an offline system, there is no need.”

That’s a common misconception. And a dangerous one.

Many people think that updates are only important for fixing security issues, and that bug fixes are not really worth considering – after all, the machine works, right?

Wrong!

Software is never perfect. Errors happen. And while security issues might be uncomfortable, bugs in the program code can be a much more serious issue than “mere” security problems.

Example one: Xerox

To pick an example: almost every company out there has one type of system which hardly ever gets updated: copy machines. These days they are connected to the internet and can e-mail scanned documents. Still, they are usually never updated – after all, it just works, right?

In 2013 it was discovered that many Xerox WorkCentres had a serious software bug which caused them to alter scanned numbers. It took several weeks of analysis until a software update finally fixed the issue. During that time it turned out that the bug was at least eight years old – so millions and millions of faulty scans had been produced over the years, and in some cases the originals were destroyed in the meantime. The impact can hardly be estimated, but it is certainly huge and will accompany us for a long time. It was also estimated that even today many scanners are still not patched – because it is simply not common to patch such systems. Offline, right?

So yes, a security issue might expose your data to the world. But it’s worse when the data is wrong to begin with.

Example two: Jails

Another example hit the news just recently: the Washington State Department of Corrections in the US released inmates too early – due to a software bug. Again, the bug had been present for years and years, releasing inmates too early the entire time.

Example three: Valve

While Valve’s systems are by definition mostly online, the Steam for Linux bug showed that all kinds of software can contain, well, all kinds of bugs: if you moved the folder of your Steam client, it could actually delete your entire home directory. Just like that. And again: this bug did not strike every time, but only in certain situations and after quite some time.

# beware: if STEAMROOT is unset or empty, this expands to rm -rf "/"*
rm -rf "$STEAMROOT/"*
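
A simple guard would have made the script abort instead of silently falling back to the root directory. The following is just a minimal sketch of the usual bash idiom, not Valve’s actual fix:

# ${VAR:?message} makes the shell abort if VAR is unset or empty
rm -rf "${STEAMROOT:?STEAMROOT is unset or empty}/"*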

Example four: Office software

Imagine you have a bug in your spreadsheet software, so that numbers are not processed or displayed correctly. The possible implications are endless. Two famous bugs which show that bug fixes are worth considering are the MS Office multiplication bug from 2007 and the MS Office sum bug from a year later.

Example five: Health

Yet another example surfaced in 2000, when a treatment planning system at a radiotherapy department was found to calculate wrong treatment times, exposing patients to much more radiation than was good for them. It took quite some time until the bug was discovered – too late for some patients, whose

“deaths were probably radiation related”.

Conclusion

So, yes, security issues are harmful. They must be taken seriously, and a solid, well-designed security concept should be applied: multiple layers, different zones, role-based access, frequent updates, etc.

But systems which are secured by air gaps need to be updated as well. The examples mentioned above show bugs not only in highly specific applications, but also in software components used in thousands and millions of machines. So administrators should at least spend a few seconds reading into each update and check whether it is relevant. Otherwise you might be corrupting your data for years without realizing it – until it’s too late.

[Howto] OpenSCAP – basics and how to use in Satellite

Security compliance policies are common in enterprise environments and must be evaluated regularly. This is best done automatically – especially when hundreds of machines are involved. The Security Content Automation Protocol provides the necessary standards around compliance testing – and OpenSCAP implements these standards in Open Source tools which integrate with Satellite.

Background

Security can be ensured by various means. One common process in enterprise environments is to establish and enforce sets of default security policies to ensure that all systems follow at least the same IT baseline protection.

Part of such a process is to check the compliance of the affected systems regularly and document the outcome, positive or negative.

To avoid checking each system manually – repeating the same steps again and again – a defined method to describe policies and how to test them was developed: the Security Content Automation Protocol, SCAP. In simple words, SCAP is a protocol that describes how to write security compliance checklists. In the real world the concept behind SCAP is a little more complicated, and it is worth reading through the home page to understand it.

OpenSCAP is a certified Open Source implementation of the Security Content Automation Protocol and enables users to run the mentioned checklists against Linux systems. It is developed in the broader ecosystem of the Fedora Project.

How to use OpenSCAP on Fedora, RHEL, etc.

Checking the security compliance of systems requires, first and foremost, a given set of compliance rules. In a real world environment the requirements of the given business would be evaluated and the necessary rules derived from them. In some industries pre-defined rule sets exist as well.

For a start it is sufficient to utilize one of the existing rule sets. Luckily, the OpenSCAP packages in Fedora, Red Hat Enterprise Linux and related distributions ship with a predefined set of compliance checks.

So, first install the necessary software and compliance checks:

$ sudo dnf install scap-security-guide openscap-scanner

Check which profiles (checklists, more or less) are installed:

$ sudo oscap info /usr/share/xml/scap/ssg/content/ssg-fedora-ds.xml
Document type: Source Data Stream
Imported: 2015-10-20T09:01:27

Stream: scap_org.open-scap_datastream_from_xccdf_ssg-fedora-xccdf-1.2.xml
Generated: (null)
Version: 1.2
Checklists:
    Ref-Id: scap_org.open-scap_cref_ssg-fedora-xccdf-1.2.xml
        Profiles:
            xccdf_org.ssgproject.content_profile_common
        Referenced check files:
            ssg-fedora-oval.xml
                system: http://oval.mitre.org/XMLSchema/oval-definitions-5
Checks:
    Ref-Id: scap_org.open-scap_cref_ssg-fedora-oval.xml
No dictionaries.

Run a test with the available profile:

$ sudo oscap xccdf eval \
--profile xccdf_org.ssgproject.content_profile_common \
--report /tmp/report.html \
/usr/share/xml/scap/ssg/content/ssg-fedora-ds.xml

In this example the result will be written to /tmp/report.html and roughly looks like this:

[Screenshot: compliance report overview]

If a report is clicked, more details are shown:

[Screenshot: detailed test results]

The details are particularly interesting if a test fails: they contain rich information about the test: the rationale behind the compliance policy, which helps auditors to understand the severity of the failing test, as well as detailed technical information about what was actually checked, so that sysadmins can verify the test on their own. Also, linked identifiers provide further information like CVEs and other sources.
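
By the way, failed checks do not necessarily have to be fixed by hand: oscap can also generate a remediation script from a profile. The following is just a quick sketch – the exact options may differ between versions, so check the oscap documentation:

$ sudo oscap xccdf generate fix \
--profile xccdf_org.ssgproject.content_profile_common \
--output fix.sh \
/usr/share/xml/scap/ssg/content/ssg-fedora-ds.xml

The resulting fix.sh should of course be reviewed before it is run on any system.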

Usage in Satellite

Red Hat Satellite, Red Hat’s system management solution to deploy and manage RHEL instances, has the ability to integrate OpenSCAP. The same is true for Foreman, one of the Open Source projects Satellite is based upon.

While the OpenSCAP packages need to be installed separately on a Satellite server, the procedure is fairly simple:

$ sudo yum install ruby193-rubygem-foreman_openscap puppet-foreman_scap_client -y
...
$ sudo systemctl restart httpd && sudo systemctl restart foreman-proxy

Afterwards, SCAP policies can be configured directly in the web interface, under Hosts -> Policies:

[Screenshot: SCAP policy configuration in Satellite]

Beforehand you might want to check whether proper SCAP content is already provided under Hosts -> SCAP Contents. If no content is shown, change the Organization to “Any Context” – there is currently a bug in Satellite making this step necessary.

When a policy has been created, hosts need to be assigned to the policy. Also, the given hosts should be supplied with the appropriate Puppet modules:

[Screenshot: assigning the Puppet class for the SCAP client]

Thanks to the Puppet class the given host will be configured automatically, including the SCAP content and all necessary packages. There is no need to perform any task on the host itself.

However, SCAP policies are usually checked once a week, and shortly after the installation the admin probably wants to test the new capabilities right away. Thus there is also a manual way to start a SCAP run on a host. First, Puppet must be triggered to run at least once to download the new module, install the packages, etc. Afterwards, the configuration must be checked for the internal policy id, and the OpenSCAP client needs to be run with that id as argument:

$ sudo puppet agent -t
...
$ sudo cat /etc/foreman_scap_client/config.yaml
...
# policy (key is id as in Foreman)
2:
  :profile: 'xccdf_org.ssgproject.content_profile_stig-rhel7-server-upstream'
...
$ sudo foreman_scap_client 2
DEBUG: running: oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_stig-rhel7-server-upstream --results-arf /tmp/d20151211-2610-1h5ysfc/results.xml /var/lib/openscap/content/96c2a9d5278d5da905221bbb2dc61d0ace7ee3d97f021fccac994d26296d986d.xml
DEBUG: running: /usr/bin/bzip2 /tmp/d20151211-2610-1h5ysfc/results.xml
Uploading results to ...

If a Capsule is involved as well, the proper command to upload the report to the central server is smart-proxy-openscap-send.

After these steps Satellite provides a good overview of all reports, even on the dashboard:

[Screenshot: SCAP reports on the Satellite dashboard]

As you see: my demo system is certainly out of shape! =D

Conclusion

SCAP is a very convenient and widely accepted way to evaluate security compliance policies on given systems. The SCAP implementation OpenSCAP is not only compatible with the SCAP standards but even a certified implementation. It also provides appealing reports which can be used to document the compliance of systems, while at the same time incorporating enough information to help sysadmins do their job.

Last but not least, the integration with Satellite is quite nice: proper checklists are already provided for RHEL and others, Puppet makes sure everything falls into place, and there is a neat integration into the management interface which, for example, offers RBAC so that auditors can access the reports.

So: if you are dealing with security compliance policies in your IT environment, definitely check out OpenSCAP. And if you have RHEL instances, take your Satellite and start using it – right away!!

Short Tip: Extract attachments from multipart messages

Sometimes e-mails are stored as plain text files. This might be due to backup solutions or, as in my case, because the client has issues with more complicated S/MIME mails with attachments, so I have to save those e-mails in text format.

In such cases the text file itself contains the multipart message body of the e-mail, and the attachments are included as base64 streams:

--------------060903090608060502040600
Content-Type: application/x-gzip;
 name="liquidat.tar.gz"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="liquidat.tar.gz"

qrsIAstukk4AA+17Wa+jSpZuPfMrrK6Hvi1qJ4OxsfuodQU2YLDBUJDGrvqBeR7MaPj1N7D3
OEmSxO8Wq7+3Y48dTWvXi8XvvKj8od6vPf9vKjWIv1v7nt3G/d8rEX5D/FdrDIxj2IrUPeE/
j5Dv4g9+fPnTRcX006T++LdYYw7w+i...

This data can be extracted manually, but that is a wearisome task. It is easier to use munpack, which can easily extract the attachment(s) from such text files and write them into properly named files:

$ munpack -f mail.txt
liquidat.tar.gz (application/x-gzip)
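
By the way, if munpack is not at hand, a single attachment can also be decoded manually: copy the base64 block – everything between the Content-* headers and the next boundary line – into a file and decode it. The file names here are of course just examples:

$ base64 -d attachment.b64 > liquidat.tar.gz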

Short Tip: Convert PEM files to CRT/DER

Within the Linux ecosystem certificates are often exchanged in PEM format. However, when you have to deal with other, often proprietary ecosystems, you must be able to export the certificates in other file formats. For example, if you have certificates stored as PEM files and need to export them in DER format – which often ends in .crt, .cer or .der – you can use openssl to rewrite them:

$ openssl x509 -outform der -in MyCertificate.pem -out MyCertificate.crt
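
The same tool covers the opposite direction as well: since PEM is openssl’s default output format, a certificate received in DER format can be converted back for the Linux world like this:

$ openssl x509 -inform der -in MyCertificate.crt -out MyCertificate.pem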

[Howto] Git history cleanup

Git is great. It stores everything you hand over to it. Unfortunately it also stores stuff you later realize you should not have handed over, for example due to security concerns. Here are two short ways to remove stuff from Git and clean up the history.

Most people using Git in their daily routine sooner or later stumble into a situation where they realize that they have committed files which should not be in the repository anymore. In my (rather special, I admit) case I was working in a Git repo and created a new branch to add some further stuff in a new sub-directory. Later on, however, when I had to clone the content of the new branch to another remote location, I realized that there were some old files in the repo (and thus also in the new branch) which could not be exported to that location due to security concerns. They had to be removed beforehand!

So I had to screw around with Git – but since Git is awesome, there are ways. One way, which I found under the marvelous title git rocks even when it sucks, is to go through the entire commit history and rewrite each and every commit, deleting everything related to the given file name. This can be done using git filter-branch.

For example, given that I had to remove a folder called “Calculation”, the command is:

$ git filter-branch -f --index-filter 'git rm -rf --cached --ignore-unmatch Calculation' -- --all
Rewrite 5089fb36c64934c1b7a8301fe346a214a7cccdaa (360/365)rm 'Calculation'
Rewrite cc232788dfa60355dd6db6c672305700717835b4 (361/365)rm 'Calculation'
Rewrite 33d1782fdd6de5c75b7db994abfe228a028d7351 (362/365)rm 'Calculation'
Rewrite 7416d33cac120fd782f75c6eb91157ce8135590b (363/365)rm 'Calculation'
Rewrite 81e77acb22bd08c9de743b38a02341682ca369dd (364/365)rm 'Calculation'
Rewrite 2dce54592832f333f3ab947b020c0f98c94d1f51 (365/365)rm 'Calculation'

Ref 'refs/heads/documentation' was rewritten
Ref 'refs/remotes/origin/master' was rewritten
Ref 'refs/remotes/origin/documentation' was rewritten
WARNING: Ref 'refs/remotes/origin/master' is unchanged

The folder was removed entirely! However, the old commit messages are still there, so you had better not have any sensitive data in them! And as mentioned in the linked blog post, to really get rid of all traces of the files it is best to clone the repository once again afterwards.
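
If cloning again is not an option, the old objects can at least be expired locally before the repository is pushed anywhere else. A sketch of the usual sequence – note that filter-branch keeps backup refs under refs/original/ which have to go as well:

$ rm -rf .git/refs/original/
$ git reflog expire --expire=now --all
$ git gc --prune=now --aggressive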

In my case an even simpler way was to take the new subdirectory, make it the new root of the repository, and rewrite everything relative to that new root. All other files not under the new root are discarded in that case. Here is the proper command, given that I added my new content under the subdir “documentation”:

$ git filter-branch --subdirectory-filter documentation -- --all
Rewrite dd1d03f648e983208b1acd9a9db853ee820129b9 (34/34)
Ref 'refs/heads/documentation' was rewritten
WARNING: Ref 'refs/remotes/origin/master' is unchanged
Ref 'refs/remotes/origin/documentation' was rewritten
WARNING: Ref 'refs/remotes/origin/master' is unchanged

Please note that in both cases you have to be extra careful if you renamed directories in the meantime. If you are not sure, better check all files which have ever been in the repository:

$ for commit in `git log --all --pretty=format:%H`; do git ls-tree -r -l $commit; done |awk '{print $5}'
documentation/foobar.txt
documentation/barfoo.txt
Calculation/example.txt
Calculation/HiglySecureStuff.txt
...

Skype is following your links – that’s proprietary for you

Yesterday it was reported that Skype, owned by Microsoft these days, seems to automatically follow each exchanged https link. Besides the fact that this is a huge security and privacy problem in its own right, it again shows how important it is not to trust a proprietary system.

The problem, skin deep

Heise reported yesterday that Skype follows https links which have been exchanged in chats on a regular basis. First and foremost, this is a privacy issue: it looks like Skype, and thus Microsoft, scans your chat history and acts upon these findings regularly. That cannot be explained by “security measures” or anything like it and is not acceptable. My personal data is mine, and Microsoft should not have anything to do with it as long as there is no need!

Second, there is the security problem: imagine you are exchanging private links, or even links containing usernames and passwords for direct access (you shouldn’t, but sometimes you have to). Microsoft does follow these links – and therefore gains full access to all data hidden there. If these are sensitive data (private or business), you have no idea what Microsoft is going to do with them.

Third, there is the disturbing part: Microsoft only follows the https links – only the encrypted URLs. If this were a security measure, they would surely follow the http links as well. So there must be another explanation – but which one? It is disturbing to know that Microsoft has a motivation to regularly follow links to specifically secured content.

The problem, profound

While this news is shocking, the root problem is not Skype or the behavior of Microsoft – I am pretty sure that their license agreement covers such actions. And it is most likely that others like WhatsApp, Facebook Chat or whatnot behave in similar ways. The actual problem is handing over all your data to a company you have no insight into. You have no idea what they are doing, you have no control over it, and you cannot even be sure that nothing bad is done with it. Also, most vendors try to lock you into their service, so that switching away from them is painful due to established workflows, tools and social networks.

The solution

From my point of view the perfect solution is hosting such sensitive services on my own. However, that cannot be a solution for everyone, and I for myself cannot provide, for example, the SLAs others need.

Thus I guess the best solution is to be conscious about what you do – and what the consequences are. Try to avoid proprietary solutions where possible. For chats, for example, try to use open protocols like XMPP. Google Talk is a good example here: company backed, but still using open protocols – they even push the development forward (Jingle, …). If you upload files to web services, make sure you have a local backup. Also, try not to upload sensitive data at all – if you have to, encrypt it beforehand. And if you use social networks, try not to depend on one of them too much; use cross posts for various services at the same time if possible.
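
Encrypting a file before uploading it does not require an elaborate setup, by the way – symmetric encryption with gpg is sufficient for many cases (the file name here is just an example):

$ gpg --symmetric --cipher-algo AES256 sensitive-data.tar.gz

The result is a file sensitive-data.tar.gz.gpg which can later be decrypted with gpg --decrypt and the chosen passphrase.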

And, last but not least: ask your service providers to establish transparency and rules for a responsible and acceptable usage of your data. After all, they depend on the users’ trust, and if enough users request such changes, they will have to follow.

[Howto] Changing the expiry date of GPG keys

GnuPG keys can have an expiry date. When the key expires, it cannot be used to encrypt data anymore, which makes expiry dates a good way to enforce security measures. However, what most people do not seem to know is that this expiry date can be changed quite easily.

Setting an expiry date for a GPG key is usually a good thing: it makes sure that even if you forget the password and do not have a revocation certificate, the key will not be valid forever. Additionally it might force users to replace keys every so often to enforce specific security measures. Last but not least it forces the key owner to think about his or her own GPG infrastructure and whether changes are needed.

Still, there might be times where it makes sense to change the expiry date – if only because you realized that your GPG keys are all fine.

First, you need to know the key ID, in this example ABCDEF12:

$ gpg --list-keys liquidat@example.com
pub   2048R/ABCDEF12 2012-09-10 [expires: 2013-09-10]
uid                  liquidat <liquidat@example.com>
sub   2048R/BCDEF123 2012-09-10 [expires: 2013-09-10]

With that ID at hand you can now edit the key:

$ gpg --edit-key ABCDEF12
gpg (GnuPG) 1.4.12; Copyright (C) 2012 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub  2048R/ABCDEF12  created: 2012-09-10  expires: 2013-09-10  usage: SC  
                     trust: ultimate      validity: ultimate
sub  2048R/BCDEF123  created: 2012-09-10  expires: 2013-09-10  usage: E   
[ultimate] (1). liquidat <liquidat@example.com>

gpg>

As you see, this key is going to expire in fall 2013. The gpg> indicates a prompt, so you are basically in a gpg-specific shell. So, let’s actually change the expiry date:

gpg> expire
Changing expiration time for the primary key.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 3y
Key expires at Fri May  6 15:45:42 2016 CEST
Is this correct? (y/N) y

You need a passphrase to unlock the secret key for
user: "liquidat <liquidat@example.com>"
2048-bit RSA key, ID ABCDEF12, created 2012-09-10

The passphrase is usually queried by standard means, so on a desktop system a pop-up window should appear asking you for the passphrase.

Afterwards, list the key again to check the new expiry date:

gpg> list
pub  2048R/ABCDEF12  created: 2012-09-10  expires: 2016-05-06  usage: SC  
                     trust: ultimate      validity: ultimate
sub  2048R/BCDEF123  created: 2012-09-10  expires: 2013-09-10  usage: E   
[ultimate] (1). liquidat <liquidat@example.com>

gpg>

As you see, the expiry date has only changed for the primary (pub) key, but not for the subkey. The edit procedure always works on one key at a time. Thus, change the focus from the primary key, called “key 0”, to the subkey, “key 1”. An asterisk (*) will indicate the focus on the subkey:

gpg> key 1
pub  2048R/ABCDEF12  created: 2012-09-10  expires: 2016-05-06  usage: SC  
                     trust: ultimate      validity: ultimate
sub*  2048R/BCDEF123  created: 2012-09-10  expires: 2013-09-10  usage: E   
[ultimate] (1). liquidat <liquidat@example.com>

gpg> expire

Changing expiration time for a subkey.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 3y
Key expires at Fri May  6 15:45:42 2016 CEST
Is this correct? (y/N) y

You need a passphrase to unlock the secret key for
user: "liquidat <liquidat@example.com>"
2048-bit RSA key, ID BCDEF123, created 2012-09-10

gpg> list
pub  2048R/ABCDEF12  created: 2012-09-10  expires: 2016-05-06  usage: SC  
                     trust: ultimate      validity: ultimate
sub  2048R/BCDEF123  created: 2012-09-10  expires: 2016-05-06  usage: E   
[ultimate] (1). liquidat <liquidat@example.com>

As you see, both dates are now changed and you are done. The changes finally need to be saved:

gpg> save

And, last but not least, don’t forget to upload the updated public key to the key servers:

$ gpg --keyserver pgp.mit.edu --send-keys ABCDEF12
gpg: sending key ABCDEF12 to hkp server pgp.mit.edu
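
Besides the key servers it can be handy to keep an armored export of the updated public key around, for example to hand it over directly – the file name is again just an example:

$ gpg --armor --export ABCDEF12 > pubkey.asc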