From Alpha to Portability

The Alpha port started in 1993 and took about a year to complete. The port wasn't entirely finished after that first year, but the basics were there. While this first port was difficult, it established some design principles that Linux has followed since, and that have made other ports easier.

The Linux kernel isn't written to be portable to any architecture. I decided that if a target architecture is fundamentally sane enough and follows some basic rules, then Linux would support that kind of model. For example, memory management can be very different from one machine to another. I read up on the memory management documentation for the 68K, the SPARC, the Alpha, and the PowerPC, and found that while there were differences in the details, there was a lot in common in the use of paging, caching, and so on. The Linux kernel memory management could be written to a common denominator among these architectures, and then it would not be so hard to modify the memory management code for the details of a specific architecture.

A few assumptions simplify the porting problem a lot. For example, if you say that a CPU must have paging, then it must by extension have some kind of translation lookaside buffer (TLB), which tells the CPU how to map virtual memory to physical memory. Of course, you can't be sure what form the TLB will take. But really, the only thing you need to know is how to fill it and how to flush it when you decide it has to go away. So on this sane architecture you know you need a few machine-specific parts in the kernel, but most of the code is based on the general mechanisms by which something like the TLB works.
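
To make that concrete, here's a rough sketch, in plain user-space C rather than kernel code, of what "how to fill it and how to flush it" boils down to. Everything in it is made up for illustration -- the names tlb_fill and tlb_flush_all, the direct-mapped layout, the entry count -- but the idea is that in a real kernel these two functions would be the machine-specific part, and everything that calls them could stay generic.

    /* Toy, purely illustrative model of a TLB -- not kernel code.
     * The point: the portable part of the kernel only needs two hooks,
     * "fill an entry" and "flush", and everything else stays generic. */

    #include <stdint.h>
    #include <stdio.h>

    #define TLB_ENTRIES 8            /* tiny, just for illustration */
    #define PAGE_SHIFT  12           /* 4 KB pages */

    struct tlb_entry {
        uint64_t vpn;                /* virtual page number   */
        uint64_t pfn;                /* physical frame number */
        int      valid;
    };

    static struct tlb_entry tlb[TLB_ENTRIES];

    /* Machine-specific in a real kernel: install one translation. */
    static void tlb_fill(uint64_t vaddr, uint64_t paddr)
    {
        uint64_t vpn = vaddr >> PAGE_SHIFT;
        struct tlb_entry *e = &tlb[vpn % TLB_ENTRIES];  /* direct-mapped */

        e->vpn   = vpn;
        e->pfn   = paddr >> PAGE_SHIFT;
        e->valid = 1;
    }

    /* Machine-specific in a real kernel: forget all cached translations,
     * typically because the page tables just changed. */
    static void tlb_flush_all(void)
    {
        for (int i = 0; i < TLB_ENTRIES; i++)
            tlb[i].valid = 0;
    }

    int main(void)
    {
        tlb_fill(0x400000, 0x12000);     /* map one page...            */
        tlb_flush_all();                 /* ...then throw the TLB away */
        printf("valid after flush: %d\n",
               tlb[(0x400000 >> PAGE_SHIFT) % TLB_ENTRIES].valid);
        return 0;
    }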

Another rule of thumb that I follow is that it is always better to use a compile-time constant than a variable; often, by following this rule, the compiler will do a much better job at code optimization. This is obviously wise, because you can set up your code so as to be flexibly defined, yet easily optimized.
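
A small, hypothetical illustration of that rule of thumb: with the page size as a compile-time constant, the compiler can reduce the offset calculation to a single mask and inline it into the callers, while with a run-time variable it has to load the value and do the arithmetic for real. Change the #define and recompile, and the code is still flexibly defined.

    /* Sketch of the compile-time-constant rule of thumb (names made up). */

    #include <stdio.h>
    #include <stddef.h>

    #define PAGE_SIZE 4096u                    /* compile-time constant */

    static size_t runtime_page_size = 4096;    /* run-time variable */

    /* The compiler can turn this into "addr & 4095" and inline it away. */
    static inline size_t page_offset_const(size_t addr)
    {
        return addr % PAGE_SIZE;
    }

    /* Same expression, but the value isn't known until run time, so the
     * compiler must emit a load and genuine modulo arithmetic. */
    static size_t page_offset_var(size_t addr)
    {
        return addr % runtime_page_size;
    }

    int main(void)
    {
        printf("%zu %zu\n",
               page_offset_const(0x12345), page_offset_var(0x12345));
        return 0;
    }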

What's interesting about this approach -- the approach of trying to define a sane common architecture -- is that by doing this you can present a better architecture to the OS than is really available on the actual hardware platform. This sounds counter-intuitive, but it's important. The generalizations you're looking for when surveying systems are frequently the same as the optimizations you'd like to make to improve the kernel's performance.

You see, when you do a large enough survey of things like page table implementations and you make a decision based on your observations -- say, that the page tree should be only three levels deep -- you find later that you could only have done it that way if you were truly interested in having high performance. In other words, if you had not been thinking about portability as a design goal, but had just been thinking about optimizing the kernel on a particular architecture, you would frequently reach the same conclusion -- say, that three levels is the optimal depth for the kernel to represent the page tree.
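
As a much-simplified sketch of what a fixed three-level page tree means in code: the lookup is three plain array indexings, with no loop and no recursion, so the fixed depth is exactly the kind of thing a compiler can optimize well. The field widths and structure names below are invented for illustration; they are not the real Linux layout, only the same three-level idea.

    /* Simplified three-level page tree (illustrative layout, not Linux's).
     * Virtual address: | top 10 | middle 10 | bottom 10 | 12-bit offset | */

    #include <stdint.h>
    #include <stddef.h>

    #define LEVEL_BITS  10
    #define PAGE_SHIFT  12
    #define LEVEL_MASK  ((1u << LEVEL_BITS) - 1)

    typedef uint64_t pte_t;                       /* leaf: frame + flags */

    struct pte_table { pte_t             entry[1 << LEVEL_BITS]; };
    struct pmd       { struct pte_table *entry[1 << LEVEL_BITS]; };
    struct pgd       { struct pmd       *entry[1 << LEVEL_BITS]; };

    /* Walk the tree: three plain lookups, depth fixed at compile time. */
    static pte_t lookup(const struct pgd *pgd, uint64_t vaddr)
    {
        size_t top = (vaddr >> (PAGE_SHIFT + 2 * LEVEL_BITS)) & LEVEL_MASK;
        size_t mid = (vaddr >> (PAGE_SHIFT + LEVEL_BITS))     & LEVEL_MASK;
        size_t bot = (vaddr >> PAGE_SHIFT)                    & LEVEL_MASK;

        struct pmd *pmd = pgd->entry[top];
        if (!pmd)
            return 0;                             /* not mapped */
        struct pte_table *pt = pmd->entry[mid];
        if (!pt)
            return 0;
        return pt->entry[bot];
    }

    int main(void)
    {
        static struct pgd root;                    /* empty address space */
        return (int)lookup(&root, 0x123456789ULL); /* nothing mapped -> 0 */
    }

On hardware whose native page tables are shallower, a middle level like this can be folded away at compile time, which is roughly how a three-level design maps onto two-level hardware without any run-time cost.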

This isn't just luck. Often when an architecture deviates from a sane general design in some of its details, that's because it's a bad design. So the same principles that make you write around the design specifics to achieve portability also make you write around the bad design features and stick to a more optimized general design. Basically, I have tried to reach middle ground by mixing the best of theory into the realistic facts of life on today's computer architectures.

