Been reading more of the MINIX book—I am repeating myself, but it is well written.
I’ve learned a lot more C from reading it. However, I object to how hierarchical its process management is. User processes cannot send messages to each other directly; instead, every request filters down from layer 4, where user processes live, through the servers in layer 3 and the drivers in layer 2, to the kernel in layer 1, which finally executes the desired protected hardware operation.
I can’t help but wonder if it makes more sense to either (a) let userland (layer 4) processes communicate with one another; or (b) simply adopt a monolithic design, like traditional UNIX, which does not observe all these complex layer divisions.
My first impression of this so-called microkernel design is that it is far more complex than it has to be. Really, it reads like a very complex monolithic design.
Also, the third edition of the book, which I have, and which I believe is still the latest, only deals with 16- and 32-bit processors. Forgive me if I am wrong, but 32-bit support has been dropped from most modern operating systems, so I suspect many of the lessons may be lost in, or inapplicable to, contemporary UNIX practice.
MINIX interprocess communication largely uses the rendezvous method, in which a send blocks until the receiver performs a matching receive. This can lead to deadlocks, and it requires a bit of hackery and wink-wink-nudge-nudge conventions when writing software for MINIX. The old UNIX practice of just knowing the right conventions to get the system working efficiently still applies.
Still, I am fascinated by the book and the MINIX system overall. A close reading of the book, like I am doing, remains worthwhile, even though I really object to many of the design choices.
I honestly wonder if the model of operating system resource management has been distorted by the politics of MINIX’s engineers. Processes are modelled on resource-greedy agents, who want to use up and dominate as much of the CPU and memory as possible.
Why aren’t system processes and resources modelled on a politics of mutual aid? Why has capitalist productivism been the background against which personal computing resources are viewed?
I can imagine how this concept of devouring as much power as possible has become interlinked with the rush to develop ever smaller transistors and ever more powerful computer hardware. Gobble gobble guzzle guzzle.
Does this model of viewing systems as resources to be exploited and devoured apply to C? At this stage my gut feeling says ‘yes’—C is still the main tool used to manufacture Linux and the BSDs, so there must be some relationship between C and viewing processes as selfish, greedy, power-guzzling components.
The kernel is like a police officer, in some ways, standing guard between the riches of the ruling class (hardware resources) and the starved, hungry masses (user processes attempting to reach the hardware).
LISP does not observe this distinction between kernels and processes. LISP operating systems do not have kernels, and instead adopt a model of a computer system as a web of interlinked functions and programs. I think I much prefer this model.
In this connexion, I am keen to do more reading on Plan 9.