Kill the Operating System!

When designing computers, companies could take a lesson from Hollywood.

You use Windows, I use a Mac, and we both know people who use GNU/Linux. But for all the differences between these three families of computer operating systems, they implement the same fundamental design; all are equally powerful, and equally limiting.


Virtually every operating system in use today is based on a single computer system architecture developed in the 1960s and '70s. This architecture divides code running on computers into a "kernel," responsible for controlling the computer's hardware, and so-called application programs, which are loaded into the computer's memory to perform individual tasks. Applications, in turn, operate on named files arranged in a tree of folders. True, there are a few niche operating systems that don't adhere to this tripartite structure, but they are mere bit players on the digital stage. Even PalmOS has a kernel, apps, and files (which PalmOS mistakenly calls "databases"). It's almost inconceivable that this approach won't be the dominant paradigm for many years to come. And that's a deep problem for the future of computing.
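To make that division concrete, here is a minimal sketch in Python. The file path is invented for illustration, and the example assumes a Unix-like system; the point is only that the application never touches the disk itself. Each call below is a request handed to the kernel, which drives the hardware and maintains the tree of named files.

    import os

    # An "application": ordinary user code loaded into memory to do one task.
    # It never drives the disk directly; each call below is a system call
    # asking the kernel to do the hardware work on its behalf.

    path = "/tmp/example.txt"   # a named file in the folder tree (illustrative)

    fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)  # kernel opens the file
    os.write(fd, b"hello from an application\n")                      # kernel writes to the device
    os.close(fd)                                                      # kernel releases the handle

    with open(path) as f:        # the same division, through Python's higher-level wrapper
        print(f.read())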


Hollywood, though, has a better idea. When computers show up in good science fiction movies, they rarely have interfaces with windows, icons, applications, and files. Instead, Hollywood's systems let people rapidly navigate through a sea of information and quickly address their needs. Some technical folks scoff at this representation as unrealistic. But why should it be?


Computing's standard model owes its success to the economics of the computer industry. The first computer programs were monolithic systems that talked to the hardware, communicated with users, and got the job done. But soon it became clear that organizations were spending far more money on software, custom software development, and training than they would ever spend on hardware alone. These businesses wanted guarantees that the programs they were creating would run on next year's computer. The only way to assure this was to take all of the hardware-specific code and put it into some kind of "supervisor" program, which we now call the kernel. The supervisor evolved into a kind of traffic cop that could allow multiple programs to run on the same computer at the same time without interfering with one another. That was vital back in the day when a single computer might have dozens of simultaneous users. It's equally important today for people who run dozens of programs simultaneously on their desktop systems.
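A small sketch of that traffic-cop role, again in Python and again purely illustrative (it assumes a Unix-like system, since os.fork is unavailable on Windows): the kernel gives the two processes below separate copies of memory and decides when each one runs, so neither can trample the other.

    import os

    counter = 0

    pid = os.fork()              # ask the kernel to create a second process
    if pid == 0:                 # child: gets its own private copy of memory
        counter += 1000
        print(f"child  pid={os.getpid()} counter={counter}")
        os._exit(0)
    else:                        # parent: its copy of counter is untouched by the child
        counter += 1
        os.waitpid(pid, 0)       # the kernel reports when the child has finished
        print(f"parent pid={os.getpid()} counter={counter}")

The child prints a counter of 1000, the parent a counter of 1: two programs, one machine, no interference.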


But you could imagine building computers differently. Movie directors have pointed the way, showing interfaces that appear to make all of the computer's data and power always instantly available. Achieving such flexibility, however, would require us to rethink operating-system dogma. For example, instead of isolating applications from each other (where transferring data between them requires cutting, pasting, and usually reformatting), a hypothetical computer might run all programs at the same time and in the same workspace. Programs might not display information in their own distinct windows, the way they do now; instead, they would work behind the scenes, contributing as needed to a common display.
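What might that look like in code? The sketch below is purely hypothetical: the Workspace class and the two "programs" are invented names, not any real system's API. The point is only the shape of the idea, namely that programs register themselves against one shared pool of data and one shared display instead of owning separate windows.

    class Workspace:
        """One shared pool of data and one shared display (hypothetical)."""
        def __init__(self):
            self.items = []          # everything every program has contributed
            self.contributors = []   # the "programs", all working on the same data

        def register(self, contributor):
            self.contributors.append(contributor)

        def post(self, item):
            self.items.append(item)
            for contribute in self.contributors:   # every program sees every new item
                contribute(item)

        def render(self):                          # the common display
            for item in self.items:
                print(f"[{item['kind']}] {item['content']}")

    # Two "programs": a spell checker and a shape labeler, contributing to the
    # same workspace rather than each painting its own window.
    def spell_checker(item):
        if item["kind"] == "text":
            item["content"] = item["content"].replace("teh", "the")

    def shape_labeler(item):
        if item["kind"] == "shape":
            item["content"] += " (unlabeled)"

    ws = Workspace()
    ws.register(spell_checker)
    ws.register(shape_labeler)
    ws.post({"kind": "text", "content": "teh quick brown fox"})
    ws.post({"kind": "shape", "content": "circle"})
    ws.render()

In this toy model, a drawing tool and a word processor would both be contributors, and the text inside a drawing would be just another item either of them could act on.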


Most people can't imagine how such a system would work. The idea of editing an Adobe Illustrator document with Microsoft Word seems nonsensical: one program is designed for drawings, the other for words. And besides, they're made by different companies! Yet many Illustrator documents contain blocks of text: why not use Word's superior text-editing capabilities? In our imagined new computer, the boundaries between applications would melt away.