Sorry, but I bought Proxmox 7 and it is not comparable. Incus does everything (and more) with a better interface, WAY better reliability, and it also doesn't cost like a hundred EUR or whatever. (100 EUR is fine with me if it's better, but not if it isn't...)
Incus looks nice, though it looks to be more API-driven, at least from the landing page. I can't attest to Proxmox in a production/cluster environment, but (barring GPU passthrough) it's very accessible for homelabs and small networks.
It's been forever, but to do passthrough you need proper BIOS support and configuration.
* https://forum.proxmox.com/threads/pci-passthrough-arc-a380-i...
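As a rough sanity check for that BIOS/kernel side (a minimal Python sketch, assuming a Linux host with sysfs; nothing below is specific to Proxmox), you can verify that the IOMMU is actually enabled and see how devices are grouped:

    # If /sys/kernel/iommu_groups is empty, VT-d/AMD-Vi is likely disabled in the
    # firmware or missing from the kernel command line (intel_iommu=on / amd_iommu=on).
    from pathlib import Path

    groups = Path("/sys/kernel/iommu_groups")
    if not groups.is_dir() or not any(groups.iterdir()):
        print("No IOMMU groups found: enable VT-d/AMD-Vi in firmware and on the kernel cmdline.")
    else:
        for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
            devices = [d.name for d in (group / "devices").iterdir()]
            print(f"IOMMU group {group.name}: {', '.join(devices)}")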
I'm not really sure what the difference is.
So, the software versions that go into the enterprise repo are considered stable by that point.
(If we're talking about Proxmox, that is.)
Nothing is stopping you from doing this with Proxmox, right?
The demo does take ~10m to get into a working instance.
Proxmox has a nice web GUI.
[1]: https://blog.simos.info/how-to-install-and-setup-the-incus-w...
Also, looking at the link you posted, it looks like Incus can only do a fraction of what Proxmox can do. Is that the case, or is the web UI the limiting factor?
When it matures I’ll look into switching from Proxmox.
But the moment you stop trying to do everything locally, Proxmox, as it is today, is a dream.
It's easy enough to spin up a VM, throw a client's docker/podman + other insanity onto it, and have a running dev instance in minutes. It's easy enough to work remotely in your favorite IDE/dev env. Do I need to "try something wild"? Clone it... build a new one... back it up and restore it if it doesn't work...
Do I need to emulate production at a more fine-grained level than what Docker can provide? It's easy enough to build something that looks like production on my Proxmox box.
And when I'm done with all that work... my daily driver laptop and desktop remain free of cruft and clutter.
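For what it's worth, that clone/snapshot/roll-back loop is also scriptable against the PVE REST API. A minimal sketch using the community proxmoxer Python library; the host, node, credentials, and VM IDs are placeholders:

    # Sketch of the "snapshot, try something wild, roll back or clone" workflow
    # against the Proxmox VE API via the 'proxmoxer' library (pip install proxmoxer).
    from proxmoxer import ProxmoxAPI

    pve = ProxmoxAPI("pve.example.lan", user="root@pam",
                     password="secret", verify_ssl=False)   # placeholder credentials
    node, vmid, clone_id = "pve", 100, 9100                 # placeholder IDs

    # Snapshot the dev VM before experimenting.
    pve.nodes(node).qemu(vmid).snapshot.post(snapname="before-experiment")

    # Or build a full clone to throw away afterwards.
    pve.nodes(node).qemu(vmid).clone.post(newid=clone_id, name="scratch", full=1)

    # Roll the original back if the experiment went sideways.
    pve.nodes(node).qemu(vmid).snapshot("before-experiment").rollback.post()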
The cool part is I needed a more powerful Linux shell than my regular servers (NUCs, etc.) for a one-off project, so I spun up a VM on it and instantly had more than enough compute.
I thought it might need GPU virtualization?
Do you do it with passthrough?
Another benefit is block-level backups of the VMs (either with qcow2 disk files or ZFS block storage, both of which support snapshots and easy incremental backups of only the changed block data).
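For the ZFS case, the incremental part boils down to snapshotting the zvol and sending only the blocks that changed since the previous snapshot. A minimal Python sketch of that idea (the dataset, snapshot names, and backup path are made up):

    # Incremental block-level backup of a ZFS-backed VM disk: snapshot, then
    # send only the delta between the previous and the new snapshot.
    import subprocess

    dataset = "rpool/data/vm-100-disk-0"          # placeholder zvol name
    prev, cur = "backup-2024-01-01", "backup-2024-01-02"

    subprocess.run(["zfs", "snapshot", f"{dataset}@{cur}"], check=True)

    # The stream could just as well be piped into 'zfs receive' on a backup host.
    with open(f"/backup/{cur}.zfs", "wb") as out:
        subprocess.run(["zfs", "send", "-i", f"{dataset}@{prev}", f"{dataset}@{cur}"],
                       stdout=out, check=True)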
Proxmox is great for this, although maybe not on a laptop unless you're ready to do a lot of tweaks for sleep, etc.
https://youtu.be/4-u4x9L6k1s?t=21
>no mature orchestration
Seems to borrow the LXC tooling... which has a decent command-line tool, at least. You could in theory automate against that.
Presumably it'll mature
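A minimal sketch of what automating against that CLI could look like, in Python: drive the incus client and parse its JSON output (the image alias and instance name are just examples, and it assumes an initialised incus install):

    import json
    import subprocess

    name = "dev-ct"  # example instance name

    # Launch a container from the public image server.
    subprocess.run(["incus", "launch", "images:debian/12", name], check=True)

    # 'incus list' can emit JSON, which is what makes scripting against it sane.
    out = subprocess.run(["incus", "list", name, "--format", "json"],
                         capture_output=True, text=True, check=True)
    for instance in json.loads(out.stdout):
        print(instance["name"], instance["status"])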
Then the opportunity to get rich by offering an open source product combined with closed source extras+support was invented. I don't like this new world.
Edit: Somewhere along the line, we also lost the concept of having a sysadmin/developer person working at, say, a municipality contributing like 20% of their time towards maintenance of such projects. Invaluable for keeping things running.
Remember: Not all commercial users are FAANG rich. Counties/local municipalities count as commercial users, as an example.
In general, I don't think this is a threat. I think the problems begin with proprietary offerings, as they so often do in the cloud. That's when vendor lock-in takes its toll. But even with AWS, if you stick to open interfaces, it's easy to leave.
Lots and lots of organizations already have SAN/storage fabric networks presenting block storage over the network, which were heavily used for VMware environments.
You could use NFS if your arrays support it, but MPIO block storage via iSCSI is ubiquitous in my experience.
You lose snapshots (but can have your SAN doing snaps, of course) and a few other small things I can't recall right now, but overall it works great. Have had zero troubles.
And how does Ceph/RBD work over Fibre Channel SANs? (Speaking as someone who is running Proxmox-Ceph (and at another gig did OpenStack-Ceph).)
There exist many OCI runtimes, and our container toolkit already provides a (ballparked) 90% feature overlap with them. Maintaining two stacks here is just needless extra work and asking for extra pain for us devs and our users, so no, thanks.
That said, PVE is not OCI-runtime compatible yet, which is why this is marked as a tech preview, but it can still be useful for many who control their OCI images themselves or have an existing automation stack that can drive the current implementation. We plan to work more on this in the future, but in the midterm it will not be that interesting for those who want a very simple hands-off approach (let's call it "casual hobby homelabber") or who want to replace some more complex stack with it; but I think we'll get there.
I understand sticking with compatibility at that layer from an "ideal goal" POV, but that is unlikely to see a lot of adoption precisely because applications don't target generic OCI runtimes.
We would have had to implement runtime integration anyway, and I hardly see any benefit in not releasing that lower-level integration earlier.
Adventures in upgrading Proxmox - https://news.ycombinator.com/item?id=45981666 - Nov 2025 (10 comments)
I learned stuff like this years ago with upgrades to debian/ubuntu/etc - upgrading a distribution is a mess, and I've learned not to trust it.
throw0101c•2mo ago
(Perhaps if you're a Microsoft shop you're looking at Hyper-V?)
luma•2mo ago
My personal dark horse favorite right now is HPE VM Essentials. HPE has a terrible track record of being awesome at enterprise software, but their support org is solid and the solution checks a heck of a lot of boxes, including broad support for non-HPE servers, storage, and networking. The solution is priced to move, and I expect HPE smells blood in these waters; they're clearly dumping a lot of development resources into the product this past year.
commandar•2mo ago
This wasn't my experience in over a decade in the industry.
It's Windows dominant, but our environment was typically around a 70/30 split of Windows/Linux servers.
Cerner shops in particular are going to have a larger Linux footprint. Radiology, biomed, interface engines, and med records also tended to have quite a bit of *nix infrastructure.
One thing that can be said is that containerization has basically zero penetration with any vendors in the space. Pretty much everyone is still doing a pets over cattle model in the industry.
nyrikki•2mo ago
You can grow out of either by just moving to self-hosted, or you can avoid both for the virtualization part, if you don't care about the VMware-like GUI and you are an automation-focused company.
If we could do it 20 years ago, once VT-x landed, for production Oracle EBS instances at a smaller but publicly traded company with an IT team of 4, almost any midmarket enterprise could do it today, especially with modern tools.
It is culture, web-UI requirements, and FUD that cause issues, not the underlying products, which are stable today but hidden from view.
nyrikki•2mo ago
Libvirt is the abstraction API that mostly hides the concrete implementation details.
I haven't tried oVirt or the other UIs on top of libvirt, but targeting libvirt directly seems less painful to me than digging through the Proxmox Perl modules when I hit a limitation of their system; most people may not feel the same way.
All of those UIs have to make sacrifices to be usable; I just miss the full power of libvirt/qemu/KVM for placement and reduced latency, especially in the era of P vs. E cores, dozens of NUMA nodes, etc...
I would argue that for long-lived machines, automation is the trick for dealing with 1000s of things, but I get that this is not always true for others' use cases.
I think some people may be surprised by just targeting libvirt vs. looking for some web UI.
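As a concrete example of that, a minimal sketch with the libvirt-python bindings (the connection URI and the commented-out XML file are placeholders); this is the same API the various UIs sit on top of:

    import libvirt

    conn = libvirt.open("qemu:///system")   # placeholder URI; could point at a remote host
    try:
        for dom in conn.listAllDomains():
            state, _reason = dom.state()
            print(dom.name(), "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running")

        # Defining a guest is just feeding domain XML to the same API; placement
        # knobs (<cputune>, <numatune>) live in that XML rather than behind a UI.
        # dom = conn.defineXML(open("guest.xml").read())
        # dom.create()
    finally:
        conn.close()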
tlamponi•2mo ago
The web UI part is actually one of our smaller code bases relative to the whole API and lower level backend code.
nyrikki•2mo ago
I would strongly suggest more work on your NUMA/cpuset limitations. I know people have been working on it slowly, but with the rise of E and P cores you can't stick to pinning for many use cases, and while I get that hyperconvergence has its costs and platforms have to choose simplicity, the kernel's cpuset system works pretty well there and dramatically reduces latency, especially for lakehouse-style DP.
I do have customers who would be better served by a Proxmox-type solution but need to isolate critical loads and/or avoid the problems with asymmetric cores and non-locality in the OLAP space.
IIRC lots of things that have worked for years in qemu-kvm are ignored when added to <VMID>.conf etc...
tlamponi•2mo ago
We already support CPU sets and pinning for containers and VMs, but this can definitely be improved, especially if you mean something more automated/guided by the PVE stack.
If you have something more specific, ideally somewhat actionable, it would be great if you could create an enhancement request at https://bugzilla.proxmox.com/ so that we can actually keep track of these requests.
nyrikki•2mo ago
While the input for QEMU is called a "pve-cpuset" for affinity[0], it explicitly uses the taskset[1][3] command.
This is different from cpuset[2], or from how libvirt allows the creation of partitions[3], which in your case would use systemd slices.
The huge advantage is that setting up basic slices can be done when provisioning the hypervisor, and you don't have to hard-code CPU pinning numbers as you would with taskset; plus, in theory, it could be dynamic.
From the libvirt page[4]
As cpusets are hierarchical, one could use various namespace schemes, which change per hypervisor, without exposing that implementation detail to the guest configuration. Think of migrating from an old 16-core CPU to something more modern, and how all those guests would be pinned to a fraction of the new cores without user interaction.
Unfortunately I am deep into podman right now and don't have a Proxmox box at the moment, or I would try to submit a bug.
This page[5] covers how even inter-CCD traffic on Ryzen is ~5x the cost of local access. That is something that would break the normal affinity if you moved to a chip with more cores per CCD, as an example. And you can't see CCD placement in the normal NUMA-ish tools.
To be honest, most of what I do wouldn't generalize, but you could use cpusets with a hierarchy and open up the option of improving latency without requiring each person launching a self-service VM to hard-code core IDs.
I do wish I had the time and resources to document this well, but hopefully that helps explain more about at least the cpuset part, without even getting into the hard partitioning you could do to ensure, say, Ceph is still running when you start to thrash, etc...
[0] https://git.proxmox.com/?p=qemu-server.git;a=blob;f=src/PVE/...
[1] https://git.proxmox.com/?p=qemu-server.git;a=blob;f=src/PVE/...
[2] https://docs.kernel.org/admin-guide/cgroup-v2.html#cpuset
[3] https://man7.org/linux/man-pages/man1/taskset.1.html
[4] https://libvirt.org/cgroups.html#using-custom-partitions
[5] https://kb.blockbridge.com/technote/proxmox-tuning-low-laten...
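To make the cpuset part concrete, a rough illustration of the cgroup v2 interface referenced as [2] above, in Python: carve out a partition once when provisioning the hypervisor so guests placed into it inherit the CPU set, instead of hard-coding per-VM pins. The group name and CPU range are examples, it needs root, and on a systemd host you'd normally express this as a slice unit rather than raw sysfs writes:

    from pathlib import Path

    cgroot = Path("/sys/fs/cgroup")

    # Make the cpuset controller available to child cgroups (it may already be
    # delegated, or be managed by systemd, depending on the host).
    (cgroot / "cgroup.subtree_control").write_text("+cpuset")

    part = cgroot / "latency-part"   # example group name
    part.mkdir(exist_ok=True)

    # Restrict the group to one CCD / NUMA node worth of cores and memory...
    (part / "cpuset.cpus").write_text("0-7")
    (part / "cpuset.mems").write_text("0")

    # ...and turn it into a real partition so those CPUs are carved out of the
    # parent's effective set ("root" or "isolated", per the cgroup v2 docs in [2]).
    (part / "cpuset.cpus.partition").write_text("root")

    print((part / "cpuset.cpus.effective").read_text().strip())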
throw0101c•2mo ago
> KKR & Co. Inc., also known as Kohlberg Kravis Roberts & Co., is an American global private equity and investment company.
* https://en.wikipedia.org/wiki/KKR_%26_Co.
You can have a public company that invests in private companies, as opposed to investing in publicly listed companies (like $BRK/Buffett does (in addition to PE stuff)).