Neat.
I was not expecting Pascal, that's an interesting choice. One thing I do like is that Free Pascal has one of the better ways of making GUIs, while every other language has decided that just letting JavaScript build UIs is the way.
Now I write JavaScript and SQL.
:)
Plus these might be smaller and might run faster than containers too.
[0]: https://www.tritondatacenter.com/blog/unikernels-are-unfit-f...
Unikernels don't work for him; there are many of us who are very thankful for them.
Why? Can you explain, in light of the article, for those of us who may not be familiar with qubes-mirage-firewall?
I think unikernels potentially have their place, but as he points out, they remain mostly untried, so that's fair. We should analyze why that is.
On performance: I think the kernel makes many general assumptions that some specialized domains may want to short-circuit entirely. In particular, I'm thinking of the whole body of research on database buffer pool management, which is essentially a set of elaborate workarounds for the kernel virtual memory subsystem's page management techniques; I suspect there are wins to be had there in a unikernel world. The same likely goes for inference engines for LLMs.
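To make that concrete, here's a minimal sketch of the classic workaround (the file name and pool size are made up): open the data file with O_DIRECT so reads bypass the kernel page cache entirely and land in a buffer pool the database manages with its own replacement policy.

    /* Sketch: a database-style buffer pool that bypasses the kernel page
       cache with O_DIRECT, so the application, not the kernel, decides
       which pages stay resident. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define PAGE_SIZE  4096
    #define POOL_PAGES 1024   /* application-managed buffer pool */

    int main(void) {
        /* O_DIRECT: read straight into our buffers, skipping the page cache */
        int fd = open("table.db", O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        /* O_DIRECT requires aligned buffers */
        void *pool;
        if (posix_memalign(&pool, PAGE_SIZE, (size_t)POOL_PAGES * PAGE_SIZE) != 0)
            return 1;

        /* The database's own policy (LRU, clock, ...) decides what lives in
           the pool; the kernel's readahead and eviction heuristics never see
           these pages. */
        ssize_t n = pread(fd, pool, PAGE_SIZE, 0);  /* page 0 into slot 0 */
        printf("read %zd bytes into the user-managed pool\n", n);

        free(pool);
        close(fd);
        return 0;
    }

In a unikernel the same idea goes further: there's no second, kernel-side cache competing for memory in the first place.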
The Linux kernel is a general purpose utility optimizing for the entire range of "normal things" people do with their Linux machines. It naturally has to make compromises that might impact individual domains.
That, and startup times: a world of difference.
Is it going to help people better sling silly old web pages and whatever it is people do with computers conventionally? Yeah, I'd expect not.
On security, I don't think it's unreasonable, or pure "security theatre", to remove an attack surface entirely by simply not having it if you don't need it (no users, no passwords, no filesystem, whatever). I feel like he was a bit dismissive here? That is also the principle behind capability-passing security, to some degree.
I would hate to see people close the door on a whole world of potentials based on this kind of summary dismissal. I think people should be encouraged to explore this domain, at least in terms of research.
Docker was conceived to solve the problem of things "working on my machine" and nowhere else, which was generally caused by differences in configuration and in the versions of dependencies. Its approach was simple: bundle both of these together with the application in unified images, and deploy those images as atomic units.
Somewhere along the line, however, the problem mutated into "works on my container host". How is that possible? Turns out that with larger modular applications, the configuration and dependencies naturally demand separation. This results in them moving up a layer, in this case creating a network of inter-dependent containers that you now have to put together for the whole thing to start... and we're back to square one, with way more bloat in between.
Now hardware virtualization. I like how AArch64 generalizes this: there are 4 levels of privilege baked into the architecture. Each has control over the levels below it and can call up to the one immediately above to request a service. Simple. Let's narrow our focus to the lowest three: EL0 (classically the user space), EL1 (the kernel), EL2 (the hypervisor). EL0, in most operating systems, isn't capable of doing much on its own; its sole purpose is to do raw computation and request I/O from EL1. EL1, on the other hand, has the power to talk directly to the hardware.
Everyone is happy, until the complexity of EL1 grows out of control and becomes a huge attack surface, difficult to secure and easy to exploit from EL0. Not good. The naive solution? Go a level above and create a layer that constrains EL1, or rather runs multiple, per-application EL1s, punching some holes through for them to still be able to do their job: create a hypervisor. But then, as those vaguely defined "holes", also known as system calls and hypercalls, multiply, won't the attack surface grow with them?
Or in other words, with the user space shifting to EL1, will our hypervisor become the operating system, just like docker-compose became a dynamic linker?
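To make those "holes" concrete, here's what calling up one level looks like from EL0 under Linux on AArch64 (a sketch, using the standard arm64 syscall ABI): put the call number in x8, arguments in x0-x2, and execute svc #0, which traps into EL1. Kernel code at EL1 does the analogous thing with hvc to request a service from the hypervisor at EL2.

    /* EL0 -> EL1: a raw write(2) system call on arm64 Linux.
       svc #0 takes an exception to EL1, where the kernel's vector table
       dispatches it; an EL1 guest kernel would use hvc #0 to reach EL2. */
    long el0_write(int fd, const void *buf, unsigned long len) {
        register long x8 asm("x8") = 64;        /* __NR_write on arm64 */
        register long x0 asm("x0") = fd;
        register long x1 asm("x1") = (long)buf;
        register long x2 asm("x2") = len;
        asm volatile("svc #0"
                     : "+r"(x0)
                     : "r"(x1), "r"(x2), "r"(x8)
                     : "memory");
        return x0;                              /* result comes back in x0 */
    }

    int main(void) {
        el0_write(1, "hello from EL0\n", 15);
        return 0;
    }

Every such entry point is one more hole the layer above has to validate, which is exactly how the attack surface creeps back in.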
I think you're over-egging the pudding. In reality, you're unlikely to use more than two types of container host (local dev and normal deployment, maybe), so I think we've moved way beyond square one. Config is normally very similar, just expressed differently, and being able to encapsulate dependencies removes a ton of headaches.
Unfortunately, containers have always had an absolutely horrendous security story, and they degrade performance by quite a lot.
The hypervisor is not going away anytime soon - it is what the entire public cloud is built on.
While you are correct that containers add more layers, unikernels go in the opposite direction and actively remove layers. Also, IMO the "attack surface" is by far the smallest security benefit; other architectural properties, such as the complete lack of an interactive userland, are far more beneficial when you consider what an attacker actually wants to do after landing on your box (e.g., run their own software).
When you deploy to AWS you have two layers of Linux - one that AWS runs and one that you run - but you don't really need that second layer, and you can have much faster/safer software without it.
The story is quite different on HP-UX, AIX, Solaris, BSD, Windows, IBM i, z/OS, ...
Aren't there ways of overwriting or extending the existing kernel memory to contain a new application if an attacker is able to attack the running unikernel?
What protections are provided by the unikernel to prevent this?
Also, I'm always a little wary of projects that have bad typos or grammar problems in a README - in particular in one of the headings (though it's possible these are on purpose?). But that's just me :\
File sharing is complex too, it seems.
Would be good to see a benchmark or something showing where it shines.