Progress

Here's the fourth and final contribution in a series of four short pieces by the IPKat's friend Keith Braithwaite (read the prologues to pieces one, two and three for the background) on the delicate balance between proprietary and open source models for the development and adoption of computer software.


Progress

Prologue

Last time we looked at the impact on users of removing a low-level technology from an operating system. This time we examine the claim, sometimes made by advocates of Free and Open Source software (FOSS), that so little progress has been made in recent decades by proprietary development that there is nothing to be lost by avoiding patented technology.

Proprietary technology is a source of advances

The irony is that many FOSS products are intended as alternatives to established commercial products. OpenOffice is intended to be an alternative to the Microsoft Office suite and other commercial productivity tools; the GIMP is intended to be an alternative to Adobe Photoshop, and Inkscape to Adobe Illustrator. These programs are clearly inspired by the commercial products they might be used to replace. One could argue that the spreadsheet application within OpenOffice has inherited all the innovation that has been applied to spreadsheets since VisiCalc in 1979, Lotus 1-2-3 in 1983 and Microsoft Excel in 1985. Each had a history of innovation and enhancement; all were developed as closed-source programs by corporations. Several FOSS spreadsheets exist, though none is as feature-rich as the commercial closed-source tools they ape.

So much for user applications. The previous articles in this series looked at operating system features. Some in the FOSS community hold that, since there have been no significant innovations in operating systems since the late 1960s, proprietary operating system offerings present no advantage.

It is surprising how many of the features that we now take for granted in a modern operating system were available on machines predating, say, the 1970s. So-called “time-sharing”, the use of one computer to run multiple programs for multiple users, was developed in the 1950s and became available in commercial products in the mid 1960s. For the user (at least, a very small group of privileged users), systems with a mouse and an interactive graphical user interface were available in 1968. It can look as if all the features of a modern operating system were available, although perhaps not all on the same system at the same time, by 1970.

1970 is a key year because the development of Unix at AT&T began in 1969 and the product was released in 1971. Unix is the indirect ancestor both of commercial operating systems such as Solaris and Mac OS X and of the free Linux distributions. Although Linux does not share any code with the original, proprietary, Unix operating system, it does aim to conform to the various standards that define a "Unix-like" operating system. If it were really true that there has been little innovation in recent decades, we might expect that contemporary OSs, including Linux, would have no major features that were not in Unix. Is this true?

Not really. Here is some detail on three examples of innovation appearing in modern systems. Since it is true that many of the OS features familiar to users were developed a while ago, these newer features can be quite technical.

Preemptible kernel code

Modern operating systems are typically separated into a "kernel", which manages the hardware resources of the machine, and "userland", where other services and applications live. Applications access the disk drive, network and other system resources by executing code in the kernel. The actual hardware involved is controlled by a piece of code called a "driver". In earlier systems the kernel code might itself choose to defer a request in order to service one with higher priority, but there was no way to make the kernel preempt a request once it had started. This approach leads to latency problems, where a request that is actually very quick to service (such as pulling some data from the network) seems to take a relatively long time to complete because the network driver has to wait for another task running in the kernel to finish.

To avoid this problem the kernel can be made preemptible. In a preemptible kernel a running request can be interrupted in favour of a more urgent one. In our example, when data arrives from the network the driver interrupts the request currently running in the kernel and moves that data into memory or onto disk, at which point the interrupted task resumes. This greatly reduces the latency of activities, like network access, where the computer needs to respond to an event that will happen at some unknown time in the future but that can be dealt with very quickly.
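For readers who would like to see the idea made concrete, here is a minimal sketch in C (purely illustrative user-space code, not real kernel code; every name in it is invented for the example). A long-running "kernel task" does many small units of work; partway through, an urgent network packet arrives. Without preemption the packet waits until the long task has finished; with preemption the task is interrupted and the packet is handled almost immediately.

/* Toy sketch: contrast a non-preemptible and a preemptible "kernel task". */
#include <stdbool.h>
#include <stdio.h>

static bool urgent_work_pending = false;

static void handle_network_packet(void) {
    printf("  urgent packet handled\n");
    urgent_work_pending = false;
}

static void long_kernel_task(bool preemptible) {
    for (int step = 0; step < 1000; step++) {
        /* ...one small unit of work, e.g. flushing data to disk... */
        if (step == 3)
            urgent_work_pending = true;   /* pretend a packet arrives here */
        if (preemptible && urgent_work_pending) {
            printf("  step %d: task preempted\n", step);
            handle_network_packet();      /* serviced almost at once */
        }
    }
    if (urgent_work_pending)              /* packet had to wait: high latency */
        handle_network_packet();
    printf("  long task finished\n");
}

int main(void) {
    printf("Non-preemptible kernel:\n");
    long_kernel_task(false);
    printf("Preemptible kernel:\n");
    long_kernel_task(true);
    return 0;
}

In a real preemptible kernel the interruption is forced by the scheduler rather than politely checked for inside a loop, but the effect on latency is the same.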

Microsoft Windows NT was released in 1993 with a preemptible kernel, a feature inherited by its descendants XP, Vista and 7. Linux has featured a preemptible kernel since version 2.6, which was released in 2003.

RAID

Disk drives are marvels of engineering, but they are not perfect. They can suffer mechanical or electrical failures, and data can be corrupted or lost. Since reading and writing to a disk involves physically moving a read head from one location over the disk to another, there can also be delays in retrieving data. One approach to this problem is to build higher quality disks. This is expensive and offers diminishing returns. In 1987 an alternative was introduced: the Redundant Array of Inexpensive Disks, or "RAID".

In a RAID system data is recorded twice, so that if one copy has a fault the data is not lost: this is called "mirroring". Data can also be "striped" across multiple disks, with different parts of the data on different disks, so that reading successive parts of the data does not require any one disk to make many movements of the head. Instead, data is simply read from the disk holding the next stripe. Striping and mirroring are combined in various ways to give different combinations of speed and safety. Some RAID systems also include additional measures to identify and correct corrupted data.
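As a rough illustration of striping, here is a small C sketch (simplified, and assuming a four-disk array with one block per stripe) showing how consecutive logical blocks are spread across the drives, so that a long sequential read is shared between all of them rather than hammering one head back and forth.

#include <stdio.h>

#define NUM_DISKS 4   /* assumed size of the array for this example */

int main(void) {
    /* Map each logical block number to a disk and a position on that disk. */
    for (long block = 0; block < 8; block++) {
        int  disk     = block % NUM_DISKS;  /* which drive holds this block */
        long position = block / NUM_DISKS;  /* where on that drive it lives */
        printf("logical block %ld -> disk %d, position %ld\n",
               block, disk, position);
    }
    return 0;
}

Real implementations stripe in larger "chunks" rather than single blocks, and mirroring simply writes the same data to more than one drive, but the addressing arithmetic is essentially this modular scheme.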

Working with a RAID array is much more complicated than working with a single disk. While there are hardware-based solutions, in which a "RAID controller" manages the disks in the RAID array, modern operating systems can manage RAID themselves.

Microkernels

We saw before that the operating system kernel is responsible for managing system resources via drivers which talk to network connections, disk drives, graphics cards and other components.
The Linux kernel has a very traditional design and uses the so-called "monolithic" approach, in which the drivers and other kernel components are all hidden behind a single interface. Facilities such as the file system that we discussed in the first article can be varied by loading different drivers, but this is hidden from userland applications. The monolithic kernel has been a popular design choice since the 1960s and was used in the Microsoft Windows 95 family of operating systems. In the 1980s an alternative "microkernel" architecture was developed, in which the kernel itself is pared down to the barest minimum and facilities such as the file system are implemented as userland programs that user applications talk to (perhaps via the kernel, perhaps directly). This has the potential to make the operating system much more reliable. With a monolithic kernel a badly written driver or some other problem can crash the entire OS. With a microkernel the faulty driver would crash and need to be restarted, but the kernel itself and the other programs providing operating system services would continue.
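To make the structural difference concrete, here is a toy sketch in C (purely illustrative; it is an ordinary userland program and in no way a real kernel). In the "monolithic" half the file system is just a function inside the kernel program; in the "microkernel" half it is a separate process that the kernel talks to over a message channel, so a fault in the file-system process would not bring the kernel down.

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Monolithic style: the file system is a function linked into the kernel. */
static void fs_read_monolithic(const char *path) {
    printf("[kernel] reading %s inside the kernel itself\n", path);
}

/* Microkernel style: the file system is a separate "server" process that
 * receives requests over a message channel (here, a pipe). */
static void fs_server(int request_fd) {
    char path[64];
    ssize_t n = read(request_fd, path, sizeof(path) - 1);
    if (n > 0) {
        path[n] = '\0';
        printf("[file-system server, pid %d] reading %s in userland\n",
               (int)getpid(), path);
    }
}

int main(void) {
    fs_read_monolithic("/etc/hosts");

    int channel[2];
    if (pipe(channel) != 0)
        return 1;
    pid_t pid = fork();
    if (pid == 0) {              /* child process: the file-system server */
        close(channel[1]);
        fs_server(channel[0]);
        return 0;
    }
    close(channel[0]);           /* parent process: the "microkernel"     */
    const char *request = "/etc/hosts";
    write(channel[1], request, strlen(request));
    close(channel[1]);
    waitpid(pid, NULL, 0);       /* if the server died, it could simply be
                                    restarted without rebooting anything   */
    return 0;
}

The extra message-passing step is exactly the kind of overhead that made early microkernels slow, which is the problem the research mentioned below set out to solve.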

At first this technology had very poor performance, but intensive research overcame that problem and microkernel architectures have been used in mobile telephone handsets.
Between the two extremes is the "hybrid kernel" approach, in which some operating system services are provided by the kernel itself and some by userland processes. Both Mac OS X and Windows NT-derived OSs use this kind of approach.

Other features

Some of the operating system features discussed in previous articles are also more recent than the "1970" benchmark. For example, the power management features mentioned in the first article and the zero-copy IO mentioned in the previous article arose in the 1990s.

Summary

It is true that a surprising amount of the computing experience we take for granted today was available to someone forty or fifty years ago. As hardware has become more capable, technology choices that were once only available on gigantic research systems are available to all. We can now buy netbook-class machines for a couple of hundred pounds that provide the same kind of environment that Doug Engelbart demonstrated to an amazed audience in 1968. This can lead to the erroneous conclusion that all the interesting work has been done, but that is not the case. Innovation continues, although often in ways that enhance the computer user's experience: it's faster, smoother, more responsive, more reliable, without necessarily presenting a big new feature.

Series Summary

We’ve seen that, when a technology vendor decides to use a patented technology without taking a licence, it can be putting at risk features that users have come to rely upon. If the technology supporting these features has to be removed, or a less capable replacement used, the functionality of a device or system can be degraded. This applies to domestic end users and corporate IT departments alike. We’ve also seen that choosing not to use a technology because it is patented, either to avoid a licence fee or on principle, can rob users of the opportunity to take advantage of valuable advances. Some who advocate avoiding proprietary technology claim that so little real progress has been made in some fundamental technology areas, for so long, that patented technologies do not in fact represent progress. We’ve seen that this is not the case.
The arguments around software patents are complex and show no sign of resolving themselves soon. Increasingly often, the courts have to intervene to assist the legally recognised owners of technology in defending their property rights. The opponents of software patents are increasingly bold, both in their arguments against the principle and in their willingness to test the practice of intellectual property in software.

1 comment:

  1. "We have seen" is akin to the professorial technique known as proof by intimidation: "Thus, it is obvious..."

    From a purely technocratic perspective, and within narrow parameters, it is unarguable that patented technology is unavailable to those who choose not to use it.

    However, that the patent system seems to have been gamed into a horrible travesty of its intention (flawed or not) of promoting innovation is a societal issue not well addressed by technocratic arguments.

    On my favourite topic of software patents, it is not clear to me that the arguments are complex. May I invite you to read "Maths you can't use": a bit more boring, but better quality than the more entertaining "patently absurd" video available from the FSF.

    I thought the post by Amerikat regarding the NFL was particularly apposite as the Supremes are drawing attention to the possibility that collective exclusionary action (I'm thinking video codecs here...) might well be an issue for anti-trust.

    Then there is the question of over broad claims by the powerful and the $1m table stake with a further $3m if the defendant wants to stay in the game.

    An alternative software perspective (and, I would suggest, more correctly framed) is buried on pages 84/85 of the Conservative Party's energy security policy paper (alas no longer obviously available from their website, but luckily I have a copy):

    * Open networks - In making decisions on the renewal of transmission and distribution network infrastructure, our priority - as in the case of smart meter roll-out - will be to establish a flexible open platform for ongoing technological and business development.

    * Open standards - In establishing common technical and non-technical standards for the industry, our default position will be to play a supporting rather than a directing role. There may, however, be occasions on which direct involvement is necessary. For instance, where governments are involved in agreeing international standards; or if established industry players attempt to use proprietary standards to restrict competition and consumer choice.

    * Open markets - Wherever Government influences the shape of markets for smart grid technology and services, they are open to the widest range of providers. This means that the measures we enact to rebuild Britain's energy security [...] - will be structured to enable smart grid based solutions to compete on equal terms with other options.

    A supporting view was expressed on the news this morning on the subject of bio-engineering, when fears were raised that the group that has created artificial life (ish) will use the patent system to lock out other players. That seems like a really good idea to promote technical progress.

