Also fake because zombie processes.
I once spent several angry hours researching zombie processes in a quest to kill them by any means necessary. Ended up rebooting, which was a sort of baby-with-the-bathwater solution.
Zombie processes still infuriate me. While I'm not a Rust developer, nor do I particularly care about the language, I'm eagerly watching Redox OS, as it looks like the microkernel OS with the best chance of making it to useful desktop status. A good microkernel would address so many of the worst aspects of Linux.
Zombie processes are already dead. They aren't executing; the kernel is just keeping a reference to them so their parent process can check their return code (via waitpid).
All processes become zombies briefly after they exit; it's just that usually their parents wait on them correctly. If the parent exits without waiting on the child, then the child gets reparented to init, which will wait on it. If the parent stays alive but doesn't wait on the child, then it will remain a zombie until the parent exits and triggers the reparenting.
It's not really Linux's fault if processes don't clean up their children correctly, and I'm 99% sure you can zombie a child on Redox given it's a POSIX OS.
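A minimal C sketch of that lifecycle (my own illustration, not from the thread; file name and build command are made up): the child exits immediately, sits in ps as a zombie while the parent dawdles, and disappears once the parent collects its exit status with waitpid.

    /* zombie_demo.c - illustration of the zombie lifecycle described above.
       Build: cc zombie_demo.c -o zombie_demo */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t child = fork();
        if (child == 0)
            _exit(42);              /* child: dead immediately, but unreaped */

        sleep(30);                  /* meanwhile `ps -o pid,stat,comm` shows the
                                       child as Z / <defunct> */

        int status;
        waitpid(child, &status, 0); /* reaping: the kernel can drop the entry */
        if (WIFEXITED(status))
            printf("child %d exited with %d\n", (int)child, WEXITSTATUS(status));
        return 0;
    }

While the parent is inside sleep(30), the child is a zombie; the moment waitpid returns, it is gone.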
I haven't tried this, but if you just need the parent to call waitpid on the child's pid, then you should be able to do that by attaching to the parent process via gdb, breaking, and then manually invoking waitpid and continuing.
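Roughly, the session would look like this (a hedged sketch, untested here; the PIDs are made up, and gdb needs permission to attach, e.g. via sudo or a permissive ptrace_scope):

    # 1234 = parent PID, 1235 = zombie child's PID (both hypothetical)
    $ sudo gdb -p 1234
    (gdb) call (int) waitpid(1235, 0, 0)
    $1 = 1235
    (gdb) detach
    (gdb) quit

The (int) cast is there because gdb may not have a prototype for waitpid in scope; passing 0 for the status pointer is fine if you don't care about the exit code.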
I think that should do it. I'll try later today and report back.
Of course, this risks getting into an even worse state, because if the parent later tries to correctly wait for its child, the call will hang.
Edit: Will clean up the orphan/defunct process.
If the parent ever tried to wait, it would either get ECHILD if there are no children left, or it would block until a child exited.
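From the parent's point of view, the first case is just an errno check; a tiny sketch (assuming its only child has already been reaped elsewhere, e.g. by the gdb trick above):

    #include <errno.h>
    #include <stdio.h>
    #include <sys/wait.h>

    /* Sketch: what wait() returns once there are no children left to reap. */
    int main(void) {
        int status;
        if (wait(&status) == -1 && errno == ECHILD)
            printf("no children left to wait for (ECHILD)\n");
        return 0;
    }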
Will likely cause follow-on issues - reaping someone else's children is generally frowned upon :D.
Zombie processes are hilarious. They are the unkillable package delivery person of the Linux system. They have some data that must be delivered before they can die. Before they are allowed to die.
Sometimes just listening to them is all they want. (Strace or redirect their output anywhere.)
Sometimes, the whole village has to burn. (Reboot)
Performance is the major flaw with microkernels, and it has prevented the half-dozen or more serious attempts at this from succeeding.
Incurring context switching for low-level operations is just too slow.
An alternative might be a safe/provable language for the kernel and drivers, where the compiler can guarantee properties of kernel modules instead of requiring hardware guarantees, so everything ends up in one address space/protection boundary. But then the compiler (and its output) becomes a trusted component.
Thank you. Came here to say this. Microkernels are great for limited scope devices like microcontrollers but really suffer in general computing.
Quite the opposite. Most firmware that microcontrollers run is one giant kernel. Some microcontrollers don't even have context switching at all. And I'm not even going to start on the MMU.
Redox OS will likely never become feature-complete enough to be a stable, useful, daily-drivable OS. It's currently a hobbyist OS that is mainly used as a testbed for OS programming in Rust.
If the Redox OS devs could port the Cosmic DE, they'd have one of the best toy OSes, and it might get used on some serious projects. That could bring in enough funding to make it a viable OS used by megacorps on infrastructure where security is critical, which might lead it to develop into a truly daily-drivable OS.
They are planning to port the Cosmic DE, and have already ported several applications from Cosmic, including the file manager and text editor, if I remember correctly.
What does this have to do with Rust? Or Redox, or microkernels, or Linux?
Ok, how would changing the kernel fix a userspace program not reading a return value? And if you just want to use a microkernel, then use either HURD or whatever DragonflyBSD uses.
But generally microkernels are not a solution to the problems most people claim they would solve, especially in the post-Meltdown era.
This particular issue could be solved in most cases in a monolithic kernel. That it isn't is by design. But it's a terrible design decision, because it can lead to situations where (for example) a zombie process locks a mount point and prevents unmounting, because the kernel insists it's still in use by the zombie process, which the kernel provides no mechanism for terminating.
It is provable by experiment in Linux using FUSE filesystems. Create a program that is guaranteed to become a zombie (see the sketch below). Run it within a filesystem mounted by an in-kernel module, like a remote NFS mount. You now have a permanently mounted NFS mount point. Now mount something using FUSE, say a remote WebDAV share, and run the same zombie process there. Again, the mount point is unmountable. Now kill the FUSE process itself. The mount point will be unmounted and disappear.
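The "guaranteed zombie" in the first step could be as small as this (a sketch of the setup only, not a claim about the mountpoint behaviour; file name is made up): the parent never waits, so the child stays a zombie for as long as the parent lives.

    /* make_zombie.c - the child exits immediately, the parent never waits,
       so the child remains a zombie until the parent itself is killed. */
    #include <unistd.h>

    int main(void) {
        if (fork() == 0)
            _exit(0);   /* child: exits at once, stays unreaped */
        pause();        /* parent: block forever without calling wait() */
        return 0;
    }

Run it with its working directory inside the mount under test, per the experiment described above.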
This is exactly how microkernels work. Every module is killable, crashable, upgradable - all without forcing a reboot or affecting any processes not using the module. And in a well-designed microkernel, even processes using the module can in many cases continue functioning as if the restarted kernel module never changed.
FUSE is really close to the capabilities of microkernels, except it's only filesystems. In a microkernel, nearly everything is like FUSE. A Linux kernel compiled such that everything is a loadable module, and not hard-linked into the kernel, is close to a microkernel, except without the benefits of actually being a microkernel.
Microkernels are better. Popularity does not prove superiority, except in the metric of popularity.
This particular issue could be solved in most cases in a monolithic kernel. That it isn't is by design.
It was (see CLONE_DETACHED here) and is (source).
Create a program that is guaranteed to become a zombie. Run it within a filesystem mounted by an in-kernel module, like a remote nfs mount. You now have a permanently mounted NFS mount point.
Ok, that's not really a good implementation. I'm not sure the standard requires zombie processes to keep mountpoints busy (unless the executable is located in that fs) until the return value is read. Unless there is a call to get the CWD of another process. Oh, wait. Can't ptrace issue a syscall on behalf of a zombie process, or something like that? Or use the VFS of that process? If so, then it makes sense to keep the mountpoint.
Every module is killable, crashable, upgradable - all without forcing a reboot or affecting any processes not using the module.
except without the benefits of actually being a microkernel.
Except Linux does this too. If the graphics module crashes, I can still SSH into the system. And when I developed a driver for the RK3328 TRNG, it crashed a lot. I replaced it without rebooting.
Microkernels are better. Popularity does not prove superiority, except in the metric of popularity.
As I said, we live in a post-Meltdown world. Microkernels are MUCH slower.
But generally microkernels are not a solution to the problems most people claim they would solve, especially in the post-Meltdown era.
Can you elaborate? I am not an OS design expert, and I thought microkernels had some advantages.
Can you elaborate? I am not an OS design expert, and I thought microkernels had some advantages.
Many people think that microkernels are the only way to run one program on multiple machines without modifying it. A counterexample to that claim is Plan 9, which had such a capability with a monolithic kernel.
nah, you can have microkernel features on Linux, but you can't have monolithic kernel features on a microkernel. there are zero arguments in favor of a microkernel, except being a novel project
ORLY.
Do explain how you can have microkernel features on Linux. Explain, please, how I can kill the filesystem module and restart it when it bugs out, and how I can prevent hard kernel crashes when a bug in a kernel module causes a lock-up. I'm really interested in hearing how I can upgrade a kernel module with a patch without forcing a reboot; that'd really help on Arch, where minor, patch-level kernel updates force reboots multiple times a week (without locking me into an -lts kernel that isn't getting security patches).
I'd love to hear how monolithic kernels have solved these.
I thought the point of LTS kernels is that they still get patches despite being old.
Other than that, though, you're right on the money. I think they don't know what the characteristics of a microkernel are. I think they mean that a microkernel can't have all the features of a monolithic kernel; what they fail to realise is that might actually be a good thing.
you don't need a microkernel to install modules, nor to keep a crash in a certain module from bringing the kernel down; you program it isolated. they don't do that now because it's unnecessary, but Android does that, and there's work being done in that direction: https://www.phoronix.com/news/Ubuntu-Rust-Scheduler-Micro
the thing is that it's harder to do that, that's why no one does it, but it's not impossible; you also need to give the kernel the foundation to support that