The Open Source Year 2009

Every year Open Source technology is improved and extended. This post sheds some light on new technologies which might arrive in 2009.

Btrfs

With ext4 a new file system has just left the developer corner. However, ext4 is an old-style file system and does not offer “hot” features like online snapshots, versioning and so on. ZFS does, but it is not an option for Linux for licence reasons. This is where Btrfs comes into play: it has been in development for quite some time now, and many kernel developers have already asked for Btrfs to be included in the kernel to speed up the development process. Additionally, several kernel developers have mentioned that they expect Btrfs to become the next-generation default file system for Linux in the mid-term.

In 2009 Btrfs will most likely stabilize its file system format and publish a beta version for testing purposes.

oVirt

oVirt is a small host image that provides libvirt services and hosts virtual machines. Additionally, it has a well-designed web-based management system. The aim is to provide an enterprise-ready VM management console capable of managing large server clusters hosting large numbers of virtual machines, but it is also meant for single users.
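For a feeling of the plumbing underneath, here is a minimal sketch (my own, not taken from oVirt) that uses libvirt’s C API – the same API the oVirt node builds on – to list the domains currently running on a host. The connection URI is an assumption and has to match your hypervisor; compile with something like `gcc list-vms.c -lvirt`.

```c
/* Minimal sketch: list running domains via libvirt's C API.
 * Assumes the libvirt development headers are installed and that
 * "qemu:///system" is the right URI for the local hypervisor. */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system"); /* assumed URI */
    if (conn == NULL) {
        fprintf(stderr, "could not connect to the hypervisor\n");
        return 1;
    }

    int ids[64];
    int n = virConnectListDomains(conn, ids, 64); /* IDs of running domains */
    for (int i = 0; i < n; i++) {
        virDomainPtr dom = virDomainLookupByID(conn, ids[i]);
        if (dom != NULL) {
            printf("running: %s\n", virDomainGetName(dom));
            virDomainFree(dom);
        }
    }

    virConnectClose(conn);
    return 0;
}
```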

In 2009 oVirt will hopefully see its first beta release, ready for the first real-world tests. Additionally, with some luck, it might be bundled with Openfiler to ease storage management. Last but not least, it could include support for Xen in a future version.

OpenGL 3.0

The release of OpenGL 3.0 this year was rather surprising: it had been delayed for almost a year without any notice at all, which is usually a clear sign that a project is dead. However, leaving aside the question of whether the OpenGL 3.0 release is the beginning of a new era or just the project’s last breath, OpenGL 3.0 is now out in the wild and the Free Software community will adopt it sooner or later.

While Nvidia has already released a first version of an OpenGL 3.0 capable driver, the FLOSS OpenGL implementation Mesa hasn’t released anything yet. But Mesa has been alive and vibrant again since 2007, and a new release can be expected in the near future. It is also likely that AMD/ATI will release a new version of their OpenGL stack featuring the newest OpenGL spec. I would like to see AMD/ATI team up with Mesa on that one, but that’s just a wish, I’m afraid.

So in 2009 we will see OpenGL 3.0 coming to the masses – in proprietary as well as in Free drivers. This way the newest graphics card technology will come to Linux, and application developers can build upon it.
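A quick way to see what your own stack offers today is to ask the driver for its version string – that is where an OpenGL 3.0 capable Mesa or proprietary driver will show up first. The sketch below is just my illustration, using GLUT purely to obtain a context; compile with something like `gcc glversion.c -lglut -lGL` (freeglut and GL headers assumed).

```c
/* Minimal sketch: print which OpenGL version and renderer the installed
 * driver exposes. GLUT is used only because glGetString() needs a context. */
#include <stdio.h>
#include <GL/glut.h>

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("GL version check");   /* creates a current GL context */

    printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    return 0;
}
```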

Gallium

Simply put, Gallium3D is an attempt to make graphics card driver development on Linux much easier: it decouples driver development from the implementation of the underlying graphics standard (for example OpenGL). Thanks to that abstraction, switching to another graphics standard should also be fairly easy, and it should become easier to write one single graphics card driver for different devices (which often need something other than OpenGL). And in case OpenGL really is dead, it could be a way to more or less painlessly replace it with something new. 😉
Right now Gallium3D is in heavy development and we have yet to see it in the wild. Only a few drivers have been ported to it, and I haven’t seen any distribution shipping it yet.
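To make the idea of that abstraction a bit more concrete, here is a deliberately toy sketch – not Gallium3D’s real interface, just the general pattern: a state tracker (for OpenGL or any other standard) is written against a hardware-facing ops table, and only the implementation behind that table changes per driver.

```c
/* Toy illustration of the Gallium3D idea (hypothetical names, not the
 * project's real headers): a state tracker talks to an ops table, and
 * each piece of hardware provides its own implementation of the ops. */
#include <stdio.h>

struct gpu_ops {
    void (*clear)(float r, float g, float b);
    void (*draw_triangles)(const float *verts, int count);
};

/* One made-up backend; a real driver would program the hardware here. */
static void dummy_clear(float r, float g, float b)
{
    printf("clear to %.1f %.1f %.1f\n", r, g, b);
}

static void dummy_draw(const float *verts, int count)
{
    (void)verts;
    printf("draw %d triangle(s)\n", count);
}

static const struct gpu_ops dummy_hw = { dummy_clear, dummy_draw };

/* The "state tracker" only knows the ops table, never the hardware. */
static void render_frame(const struct gpu_ops *hw)
{
    const float tri[] = { 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f };
    hw->clear(0.0f, 0.0f, 0.0f);
    hw->draw_triangles(tri, 1);
}

int main(void)
{
    render_frame(&dummy_hw);
    return 0;
}
```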

In 2009 this could change: a first testing release for the broader masses is likely, and that could speed up the development of drivers for Gallium3D.

GEM and KMS

Speaking of graphics, there are other things in development that are already surfacing here and there, such as the new graphics memory manager GEM. With GEM the graphics card does not have to be re-initialized as soon as you switch to another application. Also, everything is written to memory where the compositing manager can simply access it, avoiding some of the problems current drivers have when, for example, running videos on AIGLX.
Besides, Kernel Mode Setting (KMS) will move other tasks of the graphics subsystem away from X towards the kernel. As a result, switching from X to a tty console will be much faster and flicker-free, and the graphical system will be able to show kernel oopses. Linux will finally get its own blue screen capability!
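For the curious, the user-space side of KMS can already be poked at through libdrm. The rough sketch below – assuming a KMS-capable driver and /dev/dri/card0 – simply asks the kernel which connectors are attached and what their first advertised mode is, with no X involved; compile with something like `gcc kms-probe.c $(pkg-config --cflags --libs libdrm)`.

```c
/* Rough sketch: enumerate connected outputs through the kernel's KMS
 * interface (libdrm). Assumes /dev/dri/card0 and a driver with KMS support. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/dri/card0");
        return 1;
    }

    drmModeRes *res = drmModeGetResources(fd);
    if (res == NULL) {
        fprintf(stderr, "driver does not expose KMS resources\n");
        close(fd);
        return 1;
    }

    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
        if (conn == NULL)
            continue;
        if (conn->connection == DRM_MODE_CONNECTED && conn->count_modes > 0)
            printf("connector %u: %dx%d\n", conn->connector_id,
                   conn->modes[0].hdisplay, conn->modes[0].vdisplay);
        drmModeFreeConnector(conn);
    }

    drmModeFreeResources(res);
    close(fd);
    return 0;
}
```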

These features have partially found their way into newer Fedora releases, but only for specific hardware and under certain conditions. In 2009 it can be expected that the current FLOSS/Nvidia/AMD drivers will switch over to GEM and KMS to provide a much saner graphics experience to the user.

KDE 4.3: Pimp your PIM

Currently KDE’s PIM is in a difficult situation: Kontact is one of the best free groupware clients out there, but it was never designed to be one, and using it as such today can be an adventure. To fix that, Akonadi was created. It shipped with KDE 4.1 backing Mailody, and KDE 4.2 will see it together with Kontact for the first time. This will give the developers quite some time to sanitize and improve the Akonadi service, as well as to add new plugins, in order to deliver something revolutionary right in time for KDE 4.3.

In 2009 we will finally see a FLOSS groupware client that works with a broad range of groupware servers, has a maintainable code base – and is perfectly integrated on all major platforms.

Qt on the mobile mass market

This year practically started with the news that Nokia was acquiring Trolltech. Recently it was announced that Qt now runs on Symbian S60. Also, with the iPhone, Google’s G1 and even a new BlackBerry on the market, Nokia seriously needs a cool new device with fancy graphics and an appealing software platform.

Now put two and two together. With a bit of luck we will see the first Qt-based Nokia devices with multi-touch screens in 2009. With even more luck, they will ship in a way that lets Qt developers use the tools they are used to for developing software on the new platform. Think of running KDE on these devices.

Gnome 3.0 development

In the summer of this year the Gnome developers started planning their next big release – Gnome 3.0. Not much information has surfaced yet, but such breaks take their time. A status tracker for the Gtk+ changes is online and shows that some work is indeed already underway.

In 2009 a first alpha release could surface to show in which direction Gtk+ and Gnome are heading, and how the transition is working out. That will definitely be an interesting time – the transition was a major task for KDE, and the Gnome team had better take a close look at it to learn from KDE’s experience.

Conclusion

While I already called 2007 the Year of Open Source Graphics, 2009 is a good candidate for that title as well. Graphics got three sections in this post, and if everything comes true, 2009 will revolutionize the world of Linux graphics. This will, however, happen mostly under the hood: users will notice several fixes, but not the large underlying changes – which is different from 2007.

But in general 2009 will be exciting in almost all FLOSS areas. Keep in mind that this list is not and cannot be complete! So I ask every reader to drop a comment here containing his or her tip for revolutionary changes or news in the FLOSS world in 2009!


22 thoughts on “The Open Source Year 2009”

  1. You forgot to mention that the nouveau people could make a release in 2009, which would enable a whole bunch of people with Nvidia graphics cards to finally use them without needing proprietary drivers. They would “just work”.

  2. “Besides, Kernel Mode Setting (KMS) will move other tasks of the graphics subsystem away from X towards the kernel. As a result, switching from X to a tty console will be much faster and flicker-free, and the graphical system will be able to show kernel oopses. Linux will finally get its own blue screen capability!”

    Why do we need to move the graphics subsystem into the OS (monolithic kernel = operating system)? Are we really so wise as to repeat the same mistakes Microsoft made with Windows NT?

    To me it looks like the Linux OS keeps getting more and more stuff integrated into it that does not belong there. Because Linux is monolithic and not microkernel-structured, every bit of code can bring the whole system down if it breaks at the kernel level. I understand virtualisation (Xen) and such things being run at the OS level, but the graphical subsystem too? There has been a lot of talk about the “little sister” of Xorg, whose idea is to be part of the OS, and many people say it is just a crazy idea that can bring more bad than good.

    I do not mind a one-second blink when switching between a TTY and Xorg; I am happy that if Xorg crashes, I can switch to a TTY and fix things.

    On Fedora 10 the graphics boot faster, but I get lots of problems on multiple different machines with Nvidia, AMD and Intel chips… not so good a move, guys…

  3. Fri13:
    To get the basics straight: Linux is not a monolithic kernel, it is a hybrid kernel these days. The same is true for Windows. However, they started from different positions: the Windows kernel was designed as a microkernel and Linux as a monolithic kernel, and relics of those designs can still be seen everywhere.
    Also: currently X runs as root, so by crashing X you can bring down everything else as well. Actually, you can only seldom switch back to a tty if it was a real X *crash*.
    Last but not least: kernel development is moving *away* from “getting more and more stuff integrated”. Things like FUSE and the stable userspace driver API came up in recent development and show a trend towards cleaning out the kernel as much as possible.

    So – to make the entire system more bulletproof and less crash-prone, it is important to move everything that needs root away from X and into the kernel. Once that is done, X can run as a normal user.

    Having said all that, you are right that, while making this switch, the kernel developers need to make sure the inclusion is sane, stable, and able to deal with unstable drivers.

  4. I think you are giving graphics card manufacturers too much credit here. Xrandr was released over a year and a half ago and the nVidia drivers still do not support it. It has been almost a year since the release of KDE 4.0, and the nVidia beta drivers are only now getting decent performance with it. These things move extremely slowly and, although I don’t know about ATI, nVidia has a policy of not supporting any beta (not to mention alpha or pre-alpha) software on Linux. So it could be years before we see any of this stuff make it into some of these drivers. Not to mention nVidia constantly dropping support for any card more than a few years old.

  5. The important thing to note with the graphics drivers going into the kernel is that this is not about putting X into the kernel, just the hardware managers. This is not unlike having the ethernet driver in there, for instance. These drivers manage mode line settings, memory allocation and rather low-level hardware items like that.

    Doing it in the driver separates this code from the rest of the user interface stack, where it can be kept simple and shared by everything that needs access to the hardware.

    Imagine if every network-using application had to drive the ethernet card itself? 😉 Ok, it’s not quite *that* bad today, but it’s somewhat analogous.

  6. TheBlackCat, I don’t see that I am giving the graphics card manufacturers too much credit. Actually, I don’t give them any credit at all, apart from the positive words about Nvidia, who already support OpenGL 3.0.
    I do not expect AMD/ATI or Nvidia to support Gallium, KMS or GEM in the near future – these first have to become stable. But Intel and AMD/ATI cards will get pretty good open/free drivers for the new features and technologies, either because Intel pays the developers or because AMD/ATI released the specs.
    And Nvidia will have to react here sooner rather than later, so even there, there is hope for change.

    Aaron, thx for this clarification, quite a good example actually (good in the sense of a “good lie” 😉 ).

  7. Liquidat: Linux was first a macrokernel (monolithic), one big giant binary blob. Then with version 2.2 it became a modular monolithic kernel. Linux has not been a *pure* monolithic kernel since then, but it is still classified as one – not as a hybrid kernel, which does not actually even exist because that term is marketing. At least that is what it was a few months ago, when we had lessons about Linux history and structure in the same famous place where Linus studied when he started the Linux project 😉

    And even though we have FUSE-like technologies, that does not mean Linux is a hybrid. The FUSE module runs in kernel space like any other Linux module, not in some different way separated from the kernel and running in userspace. And although monolithic kernel modules are stored on disk and loaded into memory when needed, they end up in kernel space, not in userland as is possible on a “hybrid kernel”. On Windows Vista, for example, the 3D drivers are in userland and not in kernel space.

    But actually, you are the first person I have seen even hint that Linux is a “hybrid kernel”. Do you have any books to suggest for more information about it?

  8. @Aaron: “The important thing to note with the graphics drivers going into the kernel is that this is not about putting X into the kernel, just the hardware managers.”

    That is good to know. I mistook it for that other “minimal X” project, whose plan is to include as small an X as possible in Linux itself.

  9. liquidat: Very nice article, but it pretty much forgets important information – for example that Linux modules cannot exist in userland the way microkernel modules can (no, the FUSE kernel module exists only in kernel space like all other modules, even though it serves the filesystem to userland).

    On a microkernel-structured OS you can recompile any module or the kernel alone, without needing to recompile all the other modules or the kernel. So you can update a network module without needing to update the kernel or the filesystem. The modules are independent of the kernel and of each other. In this case the microkernel in kernel space plus the OS servers in userland make up the OS.

    On a monolithic-kernel OS, if you update the kernel you need to recompile all the modules too. If you want to add a new kind of module that needs a kernel upgrade, you have to compile all the modules again, not just the upgraded module and the kernel. All the modules are integrated very tightly into the kernel itself, so they cannot be separated at all. Even though some parts of the OS are loadable modules, they should not be mistaken for microkernel modules, which are built very differently.

    The conclusion of that article assumes that because Linux uses modules, they are the same as microkernel modules, and that because a hybrid kernel spans kernel space and userspace, Linux is therefore a hybrid. That is not true because 1) the “hybrid kernel” would sit on both sides, in kernel space and userland, which is impossible because the kernel is located in kernel space and no part of it is located in userland; and 2) the “hybrid kernel” uses a microkernel, so the hybrid kernel would include a microkernel itself. Then you would have two kernels and you would have to say which one you are talking about. And the conclusion just says that the hybrid _design_ is the most used. That does not mean the kernel is a so-called “hybrid”, only that the most used kernels are not pure versions of the designs from the time when the OS was invented.

    If an operating system uses a microkernel, it uses a microkernel. Even if it has its own weird way of being built, where some of the OS servers are brought back from userland into kernel space, they are not part of the kernel itself; they are still a different kind of module than a monolithic kernel has. On a “hybrid kernel” you can swap or upgrade OS servers (modules) just like normal microkernel modules; they do not affect each other if they are protected from each other, as on a pure microkernel structure.

    The correct conclusion is that we have two main classes of kernels, the microkernel and the monolithic kernel, and then different models derived from them. A pure monolithic kernel is a macrokernel. Linux was one, but then a modular version was designed, so it was no longer a macrokernel but a modular monolithic kernel – still monolithic, but not purely so. The Windows NT OS uses a microkernel, but some of the OS parts are located in kernel space as on a monolith; yet because all the OS servers are microkernel-style modules, the OS is not the same kind as a monolith, but a sliced version of a microkernel design.

    It is impossible to place Windows NT in either of the categories because it has elements of both, but it still cannot be called a new kind of kernel (a “hybrid kernel”) because its structure is still the same as a microkernel’s. That is why calling it a “hybrid kernel” is just a marketing agenda, rather than “an OS using a microkernel with some of the modules moved back into kernel space, where the microkernel used to exist alone”.

    And you can always compile Linux without any of the modules, so you can get an OS like a pure monolith would be – but it would still not be a pure monolith, because you can have some of the OS servers as loadable modules stored on disk and loaded into RAM only when needed. Still, in every situation a bug in a monolithic kernel module crashes the OS (the monolithic kernel), and so the whole system comes down. On a microkernel, the OS server module that crashes is isolated and the OS is protected from it. But if the microkernel module is not well protected, it might bring down the OS (the crashing module affects all other modules and the microkernel), which again brings the whole system down.

    This is one weak point of Windows NT: its modules are not well protected from each other, so a 3D driver can still bring the whole system down because the OS crashes. This is the case on Windows NT 6.0 (Vista), where the 3D drivers are located in userland and not in kernel space. This was done because the biggest cause of Windows NT crashes was third-party drivers. So Microsoft wanted to protect the rest of the OS by moving the drivers to userspace, but those processes were not protected well enough and BSODs still happen.

    The biggest malware problems came to Windows because Microsoft integrated the IE browser into the OS. Part of IE was part of Windows NT, and when you got through to those IE parts that were in Windows NT, you got past all the other protections because your code ran in kernel space, where it should not have been in the first place. That is why it is very important to check what functionality actually gets into the OS (into modules or the kernel itself); this way the OS can do its job and protect itself and all other processes from being affected when malware installs itself into the system through one of the applications running on it.

    Microsoft has had the graphical subsystem outside the OS, then integrated into the microkernel itself, then moved out of the microkernel as its own subsystem while still belonging to the OS; and now there are plans to remove the graphical subsystem from the OS entirely, to protect the OS and make the system more like Linux systems, where the OS (Linux) does not have its own graphical subsystem (although Linux has its own text mode before even init starts, and you have a framebuffer that draws graphics like Tux on the TTY before any other applications, for example the GNU tools, get started once the OS is loaded into memory). So this new idea of bringing more stuff into Linux, into an OS, can be a very stupid idea, because more code in a monolithic OS just brings more bugs. Some ideas should stay just ideas and never be implemented in reality. And even if Linux were made a microkernel, it would still be unwise to bring more stuff into the #1 piece of software on the computer, the one that sets the rules for how stable the whole system can be.

    Maybe the most important thing is that when talking about kernels, they cannot all be grouped together as if they were “just a kernel” that always needs other software to form an OS. Every OS is different and you need to stick to that when you discuss it. You can compare Linux to Hurd, but it is like comparing a truck to a main battle tank: yes, both are kernels, but one is an OS and the other is just a microkernel, which still needs other parts in userland and is therefore only the most important part of the OS. As for the “hybrid kernel”, it is just an OS that uses a microkernel and its modules in different places.

    🙂

  10. liquidat, you are welcome. I do not know for sure what makes the OS structure and the meaning of the kernel so difficult for the normal user, but I have spent a lot of time trying to find out. With the information I have gathered, I can draw only one conclusion… and it is the media. The media does not use correct terms, and uses incorrect marketing terms so much that even people who study information technology get misled into the politics and PR terms of the OS.

    And even the greatest OS books do not make things very clear for the reader; all the information is in those books, but because 99.9% of a book is technical detail, the simple information gets hidden underneath it. And because we have two different OS structures (monolithic kernel vs. microkernel) and multiple variants, we cannot refer to “operating system” as one simple concept. The closest simple explanation goes something like “the operating system is the bottom layer of the software system”, and the closest technical explanation goes like “the operating system is what runs in kernel space or in supervisor mode” – which is actually the simplest way, because every OS server (module) running in a microkernel-based OS runs as a process supervised by the kernel and the other OS servers (modules), and then there are lots of normal processes which are normal applications and so on.

    It can be very difficult to explain things to the normal user, because what they hear and see is only marketing. Because software “borders” are invisible, you cannot trust your own eyes or basic logic the way you can with physical objects. If you go to the store and buy a litre of milk, you can see that the milk is the thing you are buying, but to buy that milk you need a container to store it in, and you know that the price of the milk includes the cost of manufacturing the container, the cost of storing it in the shop, and so on. But with software, because you cannot see the border so well, you believe that everything you get on the operating system install media is part of the operating system. In reality you pay for the operating system, but with the same media you get the operating system plus lots of other applications and all kinds of software too, not just the operating system you are paying for (and you also pay for the packaging, printed manuals, support and so on).

    And some people really try to argue that even the printed manual or the 90-day telephone support belongs to the operating system, which is absurd, because the OS is software, digital ones and zeros, and a printed manual is a physical object that is not needed to run the computer. The 90-day telephone support is just support you can use if you need it; it has nothing to do with the operating system itself.

    The reason for this false explanation is that all these things together make one product. You can have multiple different products with the same parts in them, packaged in many different kinds of products to sell. And then some people really believe that the product is the same thing as the part of the product you actually need and which you are actually buying it for. It is the same as saying that the milk’s cardboard carton is part of the milk, because together they are one product.

  11. Fri13, interesting point of view on the term operating system. On Wikipedia we once had a very heated debate about the term OS, and it was indeed very difficult to pin down.
    It was of course made even more difficult because some people forgot what a Wikipedia article has to contain.
    But we also had to accept that there is no unified, accepted definition of the term OS, and that even the most respectable books on the topic have varying definitions – or simply settle on the fact that a single, simple definition cannot be given.
