Chain of trust
There has been a lot of talk about secure boot recently – mostly because of the Windows 8 certification requirements. Secure boot makes it harder (but not impossible) for malware such as rootkits to start at boot time and then control the operating system. It creates a chain of trust from the hardware to the bootloader to the operating system, so criminals cannot break that chain.
However, if we want to protect open source software against criminals, there is a different chain of trust that we need to protect: the chain from upstream developers to the end user. Quite a few bits and pieces are in place already, but some essential parts are missing. In August 2009, for example, the web server of the SquirrelMail webmail project was hacked and two plugins were compromised with malware. Luckily the hack was discovered very quickly, so little harm was done, but it could have been much worse.
Most open source projects use a version control system for their source code that involves some form of cryptographic authentication. Some are very basic, such as a login over SSH to a Subversion or CVS server; others require every patch to be signed with a PGP key. Most distributions use some form of signing to protect the integrity of their packages. But the step from upstream development to the distribution is often not secured at all. Many upstream developers offer an MD5 or SHA-1 checksum of the downloads – but once criminals hack your web server, they can replace both the checksum and the source tarball!
So what should we do?
If all upstream developers signed their releases with PGP, and distributions checked that the source tarball is correctly signed with a trusted key, it would be much harder for criminals to interfere with this step. The level of trust could be very minimal (just check that the source tarball is signed with the same key as the previous time we downloaded the package) or very high (require a web of trust where keys are only signed after an official government-issued ID has been checked, as Debian requires), depending on the importance of the package.
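As a sketch of what both ends of this look like in practice – the key, user ID and file names below are made up for the demo; a real release key would live on the maintainer's machine, not be generated on the fly:

```shell
#!/bin/sh
# Sketch: sign a release tarball as an upstream maintainer would,
# then verify it as a distribution packager would. Requires GnuPG 2.x.
set -e
export GNUPGHOME="$(mktemp -d)"     # throwaway keyring just for this demo
chmod 700 "$GNUPGHOME"

# Maintainer side: create a key and a detached, ASCII-armoured signature
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Release Bot <bot@example.org>' default default never
echo 'pretend this is the source tarball' > project-1.0.tar.gz
gpg --batch --pinentry-mode loopback --passphrase '' \
    --armor --detach-sign project-1.0.tar.gz

# Packager side: verify the tarball against the detached signature
gpg --verify project-1.0.tar.gz.asc project-1.0.tar.gz && echo SIGNATURE-OK
```

The crucial part is how the packager obtains and trusts the public key: pinned from a previous release, exchanged out of band, or via a web of trust – never fetched from the same (possibly compromised) download server as the tarball.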
Bluefish source tarballs have been PGP-signed for a while already. Now it's time for the distributions to automatically check these signatures when building a package.
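Debian's tooling can already do this: uscan fetches and checks a detached upstream signature if the packager declares it in debian/watch and ships the upstream public key as debian/upstream/signing-key.asc. A sketch – the URL and tarball name are examples, not a real download location:

```
# debian/watch — uscan downloads foo-<version>.tar.bz2 plus its .asc
# signature and verifies it against debian/upstream/signing-key.asc
version=4
opts=pgpsigurlmangle=s/$/.asc/ \
  https://example.org/releases/ foo-([\d.]+)\.tar\.bz2
```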
For desktops it is generally considered a good idea to store all user-created data on a NAS that has backup and restore implemented.
For Linux desktops, NFS is commonly used. However, NFSv3 is usually not acceptable because in large organisations there is too little control over IP addresses. So NFSv4 with Kerberos authentication is the answer. Large organisations also tend to have large networks, so latency is another factor, and again NFSv4 (with its delegation feature) allows better client-side caching. There is also FS-Cache, which does a lot more caching on clients, but it does not improve performance in all situations: if bandwidth is not an issue, don't use it.
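A Kerberized NFSv4 mount is mostly a matter of the security flavour in the mount options; the server name and export path below are examples, and the client additionally needs rpc.gssd running and a host keytab:

```
# /etc/fstab — sec=krb5p gives authentication, integrity and encryption;
# krb5i (integrity only) or krb5 (authentication only) trade security for speed
nas.example.org:/home  /home  nfs4  sec=krb5p,rw,_netdev  0  0
```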
Now for laptops. What you would like for laptops is a situation where users work locally with their data, but whenever they have a network connection the data is synchronised to the enterprise NAS. That way they can disconnect from the network at any time and continue working. There is OFS (the offline file system), which works on top of SMB network file systems, but it does not seem completely mature yet. A second problem with laptops is authentication. A user may want to log on locally without a network, then connect the laptop to the network and expect it to start synchronising data. But that won't work unless we first get a Kerberos ticket. I wonder what Windows laptops do in this situation – do they cache the password and re-use it in the background to obtain a Kerberos ticket? Related to this: you need a feature sometimes called "cached credentials" to allow you to log on locally when your Kerberos/LDAP server is not available. There are some projects trying to address this, but it is still not well integrated.
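On Linux, sssd is one of the projects addressing this: it can cache credentials for offline logins and store the password to obtain a Kerberos ticket once the network comes back. A sketch of the relevant bits of sssd.conf – the domain name and server addresses are examples:

```
# /etc/sssd/sssd.conf
[sssd]
services = nss, pam
domains = example.org

[domain/example.org]
id_provider = ldap
auth_provider = krb5
ldap_uri = ldap://ldap.example.org
krb5_server = kdc.example.org
krb5_realm = EXAMPLE.ORG
cache_credentials = True                  # allow offline logins
krb5_store_password_if_offline = True     # get a ticket when back online
```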
Cyber-crime is rising. Open source software is slowly becoming more mainstream. And thus cyber-criminals will increasingly target open source software users.
One of the weak links in software security is the distribution path from developer to end user. This path is often quite different for open source users compared to proprietary software – with big advantages, but also some big disadvantages.
The author/maintainer creates a release and uploads it to the download server. Then… ??? And in the end an end user is running a binary on his or her system. Notice the question marks! What happens on the download server? There have been cases where open source software was compromised on the download server (SquirrelMail, for example, had a serious incident). And do you trust all of the mirrors? Can you trust the packager? Do you know who the packager is? Do you trust the packager's download server?
Several Linux distributions do good work here already. Debian and Ubuntu sign their package lists. So once the user trusts the distribution's key, and the process that keeps that key secure, the path from the Linux distribution to the user's own system is quite secure. This is a tremendous advantage compared to the situation on the average Windows machine. But is it good enough? The path from authors/maintainers to the Linux distributions is not always protected by signatures. Some developers do sign all their releases, but are those signatures checked by the distribution packagers?
Sharpen up before the cyber-criminals get to you!