System.NullReferenceException

Tag: security

The importance of Open drivers and openness in general

An interesting question was asked on the Ubuntu Forums about openness and why some people react the way they do to proprietary software. The example given was the driver Nvidia provides for their video cards. I wrote this as a response; instead of going into ideological definitions of freedom, I feel that new users might like to see the real-world, measurable advantages.

Looking at the proprietary, closed-source Nvidia driver, which is currently needed for 3D acceleration and many other features of this range of hardware, I would like to point specifically to these four arguments.

1) Security

It’s several megabytes of code running in your kernel with access to all kinds of things. You can’t see what it’s doing, and it has been subject to at least one major security issue. We can’t fix it; if Nvidia doesn’t find a problem worth the effort, then as a distribution we either have to remove the driver or leave users vulnerable to attack.
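
To make the “megabytes of code in your kernel” point concrete, here is a minimal sketch of my own (not from the original forum post) that lists the largest loaded kernel modules by parsing /proc/modules. The field layout it assumes is the usual one on Linux, but it can vary slightly between kernel versions, so treat it as a rough diagnostic rather than a definitive tool.

    # Minimal sketch (my illustration, Linux-only): list the largest loaded
    # kernel modules by parsing /proc/modules. Assumes the usual field order
    # of "name size refcount deps state address [(taint flags)]".
    def loaded_modules(path="/proc/modules"):
        modules = []
        with open(path) as proc:
            for line in proc:
                fields = line.split()
                name, size = fields[0], int(fields[1])
                # A trailing "(P...)" marks a module that taints the kernel
                # as proprietary.
                flags = line.rsplit("(", 1)[-1] if line.rstrip().endswith(")") else ""
                modules.append((size, name, "P" in flags))
        return sorted(modules, reverse=True)

    if __name__ == "__main__":
        for size, name, proprietary in loaded_modules()[:10]:
            note = "  [proprietary]" if proprietary else ""
            print(f"{name:24s} {size / 1024:8.0f} KiB{note}")

On a machine running the proprietary driver, the nvidia module typically sits at or near the top of that list.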

2) Portability

The Nvidia driver only runs on the platforms Nvidia deems it can support. This means, for example, that right now PS3 owners who wish to run Linux on their machines (a feature fully supported by Sony, by the way, though not on the Slim models) are left without such things as 3D acceleration and video codec acceleration.

3) Stability

Looking over the top kernel oopses, a clear trend is that kernels with the Nvidia driver (and the ATI proprietary driver) score high in these and related problem reports. Users can (and do) experience application crashes whose root cause lies in code in these modules. We can’t fix such problems, since we aren’t privy to the code; we depend on the vendor providing support in a timely fashion. As a Linux distribution you might also see users getting a poor experience and thus lose customers, meaning Nvidia could in theory hold distributions to ransom until an open alternative with the same functionality appears or we do as they tell us.

That scenario seems absurd, given the public backlash it would cause. What isn’t absurd is that Nvidia has its own development schedule, and if we want to develop our software stack we occasionally have to make changes that alter APIs and thus break the Nvidia driver (this has happened). This forces us either to break this piece of functionality for users when we import the new underlying stack, or to hold the stack back until Nvidia decides to release a compatible version. This effectively lets Nvidia dictate the development pace and release process of a large part of Linux.
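
As a side note on how such crashes are triaged: the kernel records a “taint” mask when a proprietary or out-of-tree module is loaded, and oops reports carry that mask, which is one reason upstream developers are often unwilling to debug tainted crashes. Below is a minimal sketch of my own for reading the mask; the two bits shown (bit 0 for a proprietary module, bit 12 for an out-of-tree module) match current kernels, but the full layout is kernel-version dependent, so take it as illustrative only.

    # Minimal sketch (my illustration, Linux-only): decode part of the kernel
    # taint mask that oops reports include. Only two bits are shown; the full
    # layout depends on the kernel version.
    TAINT_BITS = {
        0: "proprietary module loaded",
        12: "out-of-tree module loaded",
    }

    def taint_reasons(path="/proc/sys/kernel/tainted"):
        with open(path) as proc:
            mask = int(proc.read().strip())
        return [reason for bit, reason in TAINT_BITS.items() if mask & (1 << bit)]

    if __name__ == "__main__":
        reasons = taint_reasons()
        if reasons:
            print("Kernel is tainted:", "; ".join(reasons))
        else:
            print("Kernel is not tainted by these bits")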

4) Support for outdated or no-longer-sold hardware, and saving the environment

Nvidia regularly moves older devices into a subset of their driver called legacy. That driver is deliberately not well maintained, to lessen their support burden and, naturally, to sell new video cards. We thus can’t support users’ existing hardware, and therefore we (though in reality Nvidia) force them either to upgrade their machines or to stay on their existing platform. This prevents distributions from gaining users, and thus potential customers. It also undercuts the age-old benefit Linux was always known for: running on an old clunker and giving it new life.

For example, I participate in a project that sends old hardware to schools in Africa. When the machines coming in through the door contain Nvidia chips that aren’t supported, we give poor African children machines that do less than they could, are less fun, and will interest them less, making school a less exciting break in what must otherwise be a pretty bleak day.

Yes, I did just manage to invoke starving African kids while making an argument about software. Please do not see this as an emotional argument, but rather as a matter of making education as appealing as we can to everyone and thereby encouraging more people to get engaged. The positive effects of education are hard to deny, and pretty much any effort to increase the likelihood that people will enter such programs should be welcomed.

Every time you are forced to upgrade perfectly working hardware to get to a supported version of Linux (even Ubuntu’s Long Term Support releases are only supported for 3½ years on the desktop), you are left with spare hardware. Often this ends up getting thrown out; replacing it thus imposes on us, among others, the following problems:

– Needlessly depleting our natural resources further

– Needlessly generating more waste, which contains toxic chemicals

– Wasting production capacity

– Wasting money

With Open Source drivers we have the means to take these problems into our own hands.

I hope this is helpful in providing arguments for open drivers. This is a complicated area where we need to convince vendors to work with us, and we need to understand that we are asking them to change their culture: they are used to sharing only in exchange for large sums of money in the form of licensing agreements. We cannot expect them to change overnight, but we can inform users of the arguments for openness and then together do our best to work with vendors towards greater cooperation on terms that serve the user.

It’s not just a Linux issue; even Microsoft faces the downsides of not having access to driver code and not being able to update it at will. A study showed that 30% of Vista crashes were caused by Nvidia drivers. Vista was notoriously poorly received for many complex reasons; this is just one problem area.

This is not to pick on Nvidia specifically; I use them as an example because the situation is fairly well documented and many people use this driver.

Pushing kernels more aggressively to updates-testing

One thing struck me tonight about the recent fiasco of marking a kernel stable that just happened to also kill wifi for a great number of users. We did the correct thing, to a degree: the update was a security update, something Fedora takes very seriously. Our users should always feel safe knowing that we will push such updates fast, keeping their systems secure through multiple means, including proactive security and rapid updates.

However, the problem is that we don’t apply the fix to the existing stable kernel; the patch is always applied on top of the progressing kernel, meaning we also end up shipping a lot of other things, such as bugfixes, updates to the latest upstream stable tree and so on. The catch is that the kernels in between the current stable kernel and the next update are not all pushed to updates-testing – only selected kernel builds are. When we then have to release a security fix, we are forced to ship a pile of additional changes that are unlikely to have been tested extensively.

It occurs to me that catching these bugs before they become a problem for average users could be accomplished by making better use of updates-testing; testers are normally willing to tolerate a degree of breakage and are, for the most part, qualified to file bugs. Then, at least when an urgent update is required, we are unlikely to be surprised by massive unrelated breakage – it might still occur, but if avoiding it or reverting the offending patches before release proves impossible, we can at least warn people.
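
For testers who want to opt in, the mechanics are simple: pull kernel builds with the updates-testing repository temporarily enabled. The sketch below wraps that in a small script purely as my own illustration; it assumes a Fedora system with dnf or yum on the PATH (both accept --enablerepo), and actually running it will upgrade your kernel.

    # Minimal sketch (my illustration): update the kernel with the
    # updates-testing repository temporarily enabled. Assumes a Fedora
    # system with dnf or yum available; running this really upgrades
    # the kernel, so it is meant for testers.
    import shutil
    import subprocess

    def update_kernel_from_testing():
        pkg_mgr = shutil.which("dnf") or shutil.which("yum")
        if pkg_mgr is None:
            raise RuntimeError("neither dnf nor yum was found")
        subprocess.run(
            [pkg_mgr, "--enablerepo=updates-testing", "update", "kernel"],
            check=True,
        )

    if __name__ == "__main__":
        update_kernel_from_testing()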

An additional problem with the current practice is that when an urgent release contains bugs, we are pushed to ship another update straight afterwards. That opens us up to even more bugs from yet another untested delta (since other development is likely to have gone on alongside the bugfix) and has our users pull down a second kernel package shortly after the original update.

The other option would be to apply the security fix to the current stable kernel without carrying the accumulated delta in the update, but this is expensive in manpower and time, and it also runs counter to the rapidly developing nature of Fedora in general. This is the realm of the enterprise distros; if people want this approach, something like RHEL/CentOS is likely a better fit for them.