Linux Software Installation, Part V: Arguments

After writing about possible solutions, this article of my series on Linux software installation comes to the tough stuff: the arguments for and against binary installers. I will discuss the most important arguments I have read and heard over time, including in the comments on the previous articles.

To summarize: in my previous post I stated that the perfect solution would be an API provided by the native package manager. This API would make it possible to register new packages and new files with the package manager. The tool using that API would be a binary installer.
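To make the idea more concrete, here is a minimal sketch of what such an API could look like. Every name in it (PackageDb, register_package, owns_file) is invented for illustration; no such API exists in rpm or dpkg today.

```python
class PackageDb:
    """Hypothetical stand-in for a native package database."""

    def __init__(self):
        self.packages = {}  # package name -> metadata and owned files

    def register_package(self, name, version, files):
        """Record a package and its files, refusing path conflicts."""
        for f in files:
            if self.owns_file(f):
                raise ValueError(f"file conflict: {f}")
        self.packages[name] = {"version": version, "files": list(files)}

    def owns_file(self, path):
        """Return the name of the package owning path, or None."""
        for name, meta in self.packages.items():
            if path in meta["files"]:
                return name
        return None


# A binary installer would drop its payload (e.g. under /opt/) and then
# register what it installed, so the native tools can see and remove it:
db = PackageDb()
db.register_package("foobar", "1.0",
                    ["/opt/foobar/bin/foobar", "/opt/foobar/README"])
print(db.owns_file("/opt/foobar/bin/foobar"))  # -> foobar
```

The point of the sketch is only the registration step: once the installer has told the native database which files it owns, queries, conflict checks, and clean removal work the same way as for native packages.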

Contra

There are several objections I have read against binary installers. I will cover quality, security, technical details, updates, some other arguments, and one dumb one.

Quality

Quality-wise, a general binary installer which is not fully part of the native package management is sub-optimal. That’s right. You can go even further: every package maintained by someone other than the original developer is most likely not optimal.
But on the other hand, it is not realistic to expect every software developer to learn several different package managers, even more packaging rule sets, and then keep track of several different distributions. Therefore it doesn’t make sense to wait for the perfect situation.
And although a native package is usually better than other approaches, a package which follows clear rules (think of the LSB and the FHS here) and registers with the native database would not wreck quality. In fact, if everything is done right, the quality could be acceptable. And compared to the set of different self-made binaries provided by different projects and developers (which already exists), such a solution would be much better, also quality-wise.

So, yes, you need strict rules for such installers. But that’s possible. You can set guidelines as well as provide testing tools like rpmlint.

Security

This one is raised most often: if you have a binary installer, it can do evil things. And I have to admit: this is the one argument I never understood at all. No question: binaries make it possible to do evil things like start services, install rootkits, and read your e-mail.
But that is possible already: you can provide binary installers which install on almost every Linux platform and even compile a kernel module. VirtualBox uses such an installer. Mono also provides an all-distribution binary.
Also, if you are experienced enough, you can already build a set of native packages for different distributions. Skype does that, for example.
A binary installer would introduce nothing new at all just because it registers with the local package management.

Also, if you trust someone to develop good source code, why shouldn’t you trust the binary he provides, given that it follows the strict necessary rules? If you do not trust that person, don’t install the program at all. A malicious developer wouldn’t write good source code and attach the bad stuff only to the binary; he would just include it in the source code, deeply hidden in obfuscated form.

And: integrating a package into the native repositories first adds one more layer of security problems. Take my case: I maintain a program like ktorrent for Fedora, which sends and receives a lot of network traffic. You have to trust that I do not suddenly turn evil and introduce bad scripts into the package (as post-install scripts or patches, for example), that my computer is not taken over and evil code introduced by some cracker, and that I’m not blackmailed.
There is not more security in the repository system, but less: with a repository like Fedora’s you add more than 200 extra people you have to trust. Trust that they don’t turn evil, trust that they keep a good eye on their computers, trust that they are not blackmailed in any way, and so on.

The only security advantage I see in native packages at the moment is that the packages are signed, so you can be sure they haven’t been changed on the way from the server to your machine. But binary installers could be signed as well.
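The integrity part of that signing argument is easy to illustrate. The sketch below only checks a published SHA-256 digest with Python’s standard hashlib; a real scheme would use a GPG signature so the digest itself cannot be forged, and all names here are made up for the example.

```python
import hashlib


def sha256_of(data: bytes) -> str:
    """Hex digest of the payload, as a project would publish it."""
    return hashlib.sha256(data).hexdigest()


def verify_download(payload: bytes, published_digest: str) -> bool:
    """Check that the downloaded installer matches the published digest.
    This only proves the bits were not changed in transit; trusting the
    publisher is a separate question, exactly as with repositories."""
    return sha256_of(payload) == published_digest


installer = b"#!/bin/sh\necho installing foobar\n"
digest = sha256_of(installer)                     # published by the project
print(verify_download(installer, digest))         # True
print(verify_download(installer + b"x", digest))  # False: tampered payload
```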

Technical details

One major technical concern against binary installers is that they only solve the problem to a certain degree: you would still have to provide different packages for different architectures. But that is plain wrong: Apple showed that it is indeed possible to provide cross-architecture binaries even for big and complex applications. Sure, they have their shortcomings as well (mainly size, as it looks), but it works, and if it is done right and the appropriate tools are available, developers start to adopt the technique. Firefox, for example, provides Universal Binaries for Mac OS X.

Another technical concern, this time from the other side, is the question why the binary installer would have to deal with the native package management at all: it could just drop its software into /opt/. If it followed the LSB specifications for installing software, this part would be even easier, since the chance of overwriting files would be almost zero; it would be enough to query the LSB version.
This is true, and I think some of the installers out there already work this way. However, in the long term more applications should be delivered this way, and these applications would have dependencies. In the long term such a package manager API could cover that as well: not necessarily calls for installing additional software, but at least a way to ask whether library X of package Y is already installed.
Additionally, history has proven that such package-manager-ignoring approaches are not liked by the native package manager developers. And we can’t fight them (let alone that I do not want to). Also, this option already exists, and it wasn’t picked up by the majority.
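The "/opt/ only" constraint mentioned above is also mechanically checkable. The following sketch (the function names and the example prefix /opt/foobar are my own, not part of any spec) shows how an installer or test tool could verify that a payload stays inside its own prefix and cannot overwrite anything else on the system:

```python
from pathlib import PurePosixPath


def within_prefix(path: str, prefix: str) -> bool:
    """True if path lies strictly inside the package's own prefix."""
    return PurePosixPath(prefix) in PurePosixPath(path).parents


def check_payload(files, prefix="/opt/foobar"):
    """Return the list of files that escape the package's prefix;
    an empty list means the payload is safe to unpack."""
    return [f for f in files if not within_prefix(f, prefix)]


payload = ["/opt/foobar/bin/foobar",
           "/opt/foobar/share/readme",
           "/etc/cron.d/evil"]
print(check_payload(payload))  # -> ['/etc/cron.d/evil']
```

An rpmlint-style test suite could run exactly this kind of check over an installer image before it is ever published.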

This argument also ignores that most of the software developers I am talking about are running a Linux distribution and like their package manager. This might be different for big companies, but as I said my focus is not on them.

Updates

I’ll give you that: a binary installer planned as described here would not have a proper way to update itself. But that is mostly due to the fact that the approach described here is evolutionary. There is no reason why future API versions could not also include ways to register update mechanisms with the native software management.
However, let’s start with small steps. Once the first version of such an API exists, we can think about more possibilities: real dependencies on specific software like databases, Apache, compilers, and so on, and also ways to update software.

Some other arguments

There were also some strange arguments I read and heard while discussing this topic.

Once I was faced with: “source code is the Unix way of delivering software”. Well, no. First of all, Linux (which is technically not a Unix, by the way) distributions normally provide a huge amount of pre-compiled software, and second, Apple’s Unix (probably the most used Unix around) also provides binaries. It is not source-code-only.
This argument is also a bit odd because “the Unix way” was also to never be attractive to normal people, just to software developers and system administrators. That has changed. And “the Unix way” for years meant no usability-focused GUI. Gnome and KDE changed that.

Another time I read “providing binaries is not the developer’s job”. That is very interesting, because: who says that? I personally never heard that from a software developer. On the contrary, I have come across far too many pages where binaries were provided for Windows, but not for Linux.
I would like to leave this up to the developer. One developer even decided to write his own binary installer out of the need to provide a binary!

One reader mentioned that the current situation teaches the users: if they are not provided with binary installers, they become educated computer users who can compile software themselves. Well, this might be true, but there is no need for that. Sure, we need computer users who are educated about security. One of my current real-life jobs is actually to educate computer users and explain why the internet can be dangerous. We need to tell computer users what it means to download binaries, what it means to open e-mail attachments, and so on.
But that has nothing to do with compiling software. My friends do not have to know about that: for them a computer is a tool to work with, not a tool to work on. It should help them solve problems, not become a new problem. Everyone should know how to use a computer in a secure way, but not everyone should know how to build OpenOffice.

The dumb one

Well, the dumb one is the one I hear when an opponent has run out of arguments: “you just want to duplicate Windows, go and use that!”.
No. I don’t want to duplicate Windows. And I don’t want to use it. I love the choice I have on Linux (you might want to remember the word “choice” for another argument later on…), and I want people I know to have the possibility to easily install software, provided at the usual places, on their Linux machines.
And, by the way: this is not a feature existing only on Windows. Mac OS has it. Most smaller operating systems have it as well.

And, for everyone using this argument because there is no real one left: feel free not to use such a binary installer if it ever comes into existence. Why object to a technical feature you don’t have to use? After all, the source code will not magically vanish when such a binary installer comes to life.

Pro

Well, I have already said quite a lot about the binary installer and the API, and why we need them. For that reason, this section will mainly describe the disadvantages of the current situation. I think most people do not really think about these reasons, so it is important to point them out.

The main argument

Just to have it in this article as well: the main argument for an API and a unified binary installer is that software installation on Linux would become easy enough for average computer users. It would empower users to install the software they like instead of just the software their distribution likes. On the other hand, developers would have an easy way to provide their software in an actually usable way (from an average user’s point of view).

If both problems are solved, more users will come to the Linux platform, which in return brings more power to the Linux platform.

Pure mainstream software

From an average computer user’s point of view, the only software available for Linux is pure mainstream software: Firefox, OpenOffice, etc. Other software, especially small tools and niche applications, is not, and it will not become available.

To put it in even more drastic words: currently Linux, the platform dedicated to free choice in every regard, brutally limits the software choice to the big, main applications. Small or specialized applications have little to no chance of getting a foot on the Linux platform because they cannot be installed by average users.
(Remember the word “choice”? :) )

Linux is hostile to small applications and niche software.

Commercial vendors

This is another wacky thing: you need quite some knowledge and time to provide different binaries for different distributions. Therefore, projects with a lot of money, especially proprietary software projects, are more likely to provide such binaries and thus have a much better standing against competitors.
Sure, Free Software can eventually find its way into the repositories. But to get there it has to be good enough and popular enough. But how do you become popular and widespread if you have a strong opponent? How do you attract good developers and usability experts if you don’t get popular? Also, it can take a long time to be integrated, and it is not guaranteed at all that the repositories will ever include your software. And I know of far too much software which is still not in all large repositories, although it has been around for a long time.

Against testers and new software

Currently, testing new software or new versions of software is possible only for people who know how to compile software. There is no way a broader audience could test, for example, an RC or beta release. Also, even if you release a totally new stable version, you only get feedback slowly because the repositories have slow response times: some repositories need only days to catch up, but others need weeks or months.
And every developer knows how crucial it can be to have a broader audience testing a new piece of software.

No one gets hurt

I know that some people will not pick up the idea even after reading through these arguments. But that is OK, because nothing would change for these people. Nothing at all. There would still be source code, and there would still be the big distribution repositories. The API for the binary installer would just add another possibility, not a necessity.

Other arguments

There are several arguments I haven’t picked up. For example, in the long term a unified binary could also make use of functions like triggers provided by the package management (re-registering plugins, kernel rebuilds, etc.). Also, I didn’t even look at proprietary software vendors, who would also benefit from such an installer. And non-internet distribution of software (additional software on a CD when you buy some piece of hardware, distribution of software to people with slow connections, etc.) would become possible.

Closing words

So much for the arguments. I’m pretty sure that not everyone agrees on every point; feel free to leave a comment. I will not dive into page-long discussions here, since a blog is the wrong place for that, but sharing arguments would be nice.
However, if you actually read the entire post, please keep one thing in mind: my focus is on the real world. I want a solution which works well in real life, not one which is absolutely perfect in the sense of philosophical truth. If you are looking for your personal religion here, there is no need to share arguments, because it would not make sense.

This series is almost at its end; I will just sum up everything in a last article. Well, at least that’s the plan. In the meantime, you can have a look at the extra page I created which lists these articles. It makes it easier to get an overview of this series.


8 thoughts on “Linux Software Installation, Part V: Arguments”

  1. You didn’t answer the criticism I made on the previous post: ISVs on Windows have proven that even with big-name, widely used software, they’ll still take serious liberties like making it autostart and live in the system tray.

    This isn’t really a security risk so much as a cultural problem; any installer needs to be locked down to send a clear message that doing this stuff on Linux is not an option. It won’t actually make it impossible for ISVs to do that sort of thing (they could just make the program rather than the installer do it), but it will help.

  2. Yes, you’re right: I actually simply forgot that part, sorry.
    However, I think that is part of Quality: if such actions were standardized, it would be possible to include a quality check which looks for them.
    Portland could do the standardization part in the case of icons and autostart, for example, and an rpmlint equivalent could check that it is included in the right way (for example with a user option to opt out, etc.).

  3. Opt-out? Why would a user EVER want a media player or similar to be added to the autostart?

    It should be completely forbidden; if a user actually wants to autostart it, just copy the .desktop file from the menu to ~/.kde/autostart

    Really, what should happen is that the installer is given a folder, say /opt/games/foobar or /home/games/foobar, and then bluntly refuses to install anything outside that folder. Portland can provide a user-controlled special case for menu items and MIME types.

    Dotfiles should be created by the program, not the installer, as that method provides better multi-user support.

  4. It should be completely forbidden; if a user actually wants to autostart it, just copy the .desktop file from the menu to ~/.kde/autostart

    No: self-copying just doesn’t make sense usability-wise. It would be better to let the program handle it by itself. Actually, almost all applications I use do it exactly like that, even the proprietary ones: I have an option where I can choose whether the program is started at login or not.

    Really, what should happen is that the installer is given a folder, say /opt/games/foobar or /home/games/foobar, and then bluntly refuses to install anything outside that folder.

    Well, that *is* an LSB rule, therefore that is nothing new. Again, a test tool could verify that without any problem at all.

    In general I think we agree that no installer should do anything to a home directory, don’t we? And of course you’re right that Portland tools also shouldn’t do that, forget that part.

  5. “Well, that *is* an LSB rule, therefore that is nothing new. Again, a test tool could verify that without any problem at all.”

    And since it’s a rule, it should be enforced. And that rule does make it impossible for a program to control its autostart status, since the rule forbids editing ~/.kde/autostart or /etc/rc.local or whatever.

    And what’s wrong with the usability of self-copying? It’s an easy trick to learn once you know about the folder, and it lets you have easy control over every program, not just the few that offer an option.

  6. What exactly do you mean by “enforced” in these circumstances? You can hardly create a binary installer which automatically forbids stuff like that, because this is the task of a test program; as I said, an rpmlint equivalent is exactly the program for stuff like that.
    But that’s minor stuff; first we need the API, then I would love to think about which program is the better place to enforce that stuff.

    And what’s wrong with the usability of self-copying? It’s an easy trick to learn once you know about the folder, and it lets you have easy control over every program, not just the few that offer an option.

    It is shell handling, and in my experience average computer users are afraid of the shell and you cannot teach them to use it. They will just not be willing to adopt it, regardless of how easy and helpful it might be.

  7. You don’t need an rpmlint-like tool; you could just install things relative to the base dir of the package. So if a package contains 3 files:

    /etc/foobar.txt
    /usr/bin/foobar
    /readme

    then it will end up as
    /opt/games/foobar/etc/foobar.txt
    /opt/games/foobar/usr/bin/foobar
    /opt/games/foobar/readme

    It won’t take long before people using this method of installation learn the rules and choose sensible file locations to work with it. Probably around the beta testing of the first ever package.

    As for manual linking, it’s not shell work (well, it can be, but then what can’t). To add a program I open the autostart folder in Konqueror, then drag the program from the K menu into the autostart folder. The only real change needed is a direct link from /home/user/ to the autostart folder.

  8. About rpmlint: if you want to enforce stuff you need a testing suite. It would have to check whether there are files outside of /opt/, to begin with. The rpmlint example was just an example of having a tool for testing, nothing more.

    And still, in my experience drag &amp; drop somewhere in the filesystem is more than many computer users can manage. Ever tried to explain computer basics to people over the age of 80?

    Anyway, let’s please focus on the topic: you want to make sure that there are no effects like the ones you’re used to from the Windows world, and stuff like that can be enforced and checked with appropriate tools and rules. The rest is hypothetical, because we have no installer/API like the one I favour.

    If you think the API is unnecessary there is no need to discuss anything because there are lots of installers already around.
