I remember at a former company, we had a major migration away from Perl 12 years ago. The Perl code base was considered extremely ancient even back then.
New language features, and anything that might alter existing behavior, are usually locked behind a version declaration at the top of your script. You opt in with something like `use v5.36;` to unlock the behaviors of that version of Perl. Those behaviors may include breaking changes, but code that doesn't opt in generally keeps running as before.
But that's for the base language and standard library. Individual third-party packages may vary in their backward compatibility.
I've just consulted my first-edition copy of "Programming Perl" (printed in 1991) and it says:
> keys(ASSOC ARRAY): This function returns a normal array consisting of all the keys of the named associative array. The keys are returned in an apparently random order, but it is the same order as either the values() or each() function produces (given that the associative array has not been modified).
(Edited for clarity)
It was never a _defined_ order, but before version 5.17.6 (November 2012), each hash returned its list of keys and values in a _consistent_ order between runs, so some code ended up getting written that depended on this ordering (say in a unit test, or a list that would get assigned into a database). The change made the ordering random and unpredictable from run to run (and hash to hash), which as I recall did break some number of tests in CPAN modules and required new releases.
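Python later went through the same transition for the same reason (string hash randomization has been on by default since 3.3), and the defensive pattern is identical in both languages: never rely on hash iteration order; impose an explicit order before comparing or persisting. A minimal Python sketch (the set contents are made up for illustration):

```python
# Iteration order over a set of strings depends on hash values and, with
# per-process hash randomization, can differ from one run to the next.
seen = {"walrus", "camel", "gopher"}

# Fragile: a test asserting on list(seen) may pass on one run, fail the next.
# assert list(seen) == ["walrus", "camel", "gopher"]  # don't do this

# Robust: impose an explicit order first.
assert sorted(seen) == ["camel", "gopher", "walrus"]
print(sorted(seen))  # ['camel', 'gopher', 'walrus']
```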
Perl 6?
It was so breaking they don't even call it Perl anymore.
Perl 5 is still supported, Perl 6 (Raku) continues independently.
We thought that it would take a year or two, not decades.
Also, the intent was to have a Perl 5 compatibility mode.
Maybe not the one that was originally planned. But that was only a real possibility if Perl 5 would be able to get rid of its XS addiction.
Ask yourself: how many of the up-river Perl modules are Perl-only?
If you let your codebase get into an "ancient" state then that's a problem of your own creation rather than that of the language or system in which it is written.
Perl just hoovers them up and does whatever processing is needed. We did test Python, but it felt clunky in its habits and lacking in performance, such as stalling while building a dictionary of files to work with. Perl may be one of the older languages, but it still holds strong.
If Perl is supported on your $OS, your script is all but guaranteed to run. Sure, adjustments may have to be made if you're targeting the underside of the rainbow, such as Windows, but it's trivial for *nix hosts. Migrating from Ubuntu to Debian or BSD? 99.9% chance your script will run.
I am biased, in that there isn't anything majorly wrong with Perl. It was a go-to language back in the 90s for a reason, and Perl's ecosystem (CPAN) is still pretty comprehensive and still holding weight.
Since it's not taught anymore due to newer trends, its shine is dulling and its overall presence is dropping away. I wouldn't disagree that newer languages project optimism about the future of programming, but Perl has been battle-tested and just works.
The rest of your comment makes me think the "non-trivial" was a typo and meant to be something else.
The advantages are the same as they have been for years: cross-platform compatibility, one system to run all aspects of a large project, the flexibility to get the job done in the simplest, most efficient and most maintainable manner.
One caveat: we don't do any MSWin32 development at all. I'm vaguely aware that there are some extra considerations on that O/S but it isn't something which we have to deal with.
Edit: well, it occasionally had a problem where it would confuse things for Moose or Raku … but by and large it wasn't wrong; it's just that the syntax is new. I know Perl developers who have the same issues.
What I want is TITBWTDI (this is the best way to do it).
It would hiccup, rewriting the existing Perl codebase into a hallucinated Python-like syntax, but that was two years ago.
Keep in mind though that the current state of Perl includes being in the process of getting a native object model baked into the language core. So that’s still in some flux but it’ll be better than choosing among eight different external object model libraries. It’s also more performant. The docs for that I’m not sure are in a bound paper book anywhere yet, but I’d happily be corrected.
It's dear to me because it came along at a time when I needed short breaks from thesis writing.
[0] https://www.oreilly.com/library/view/perl-best-practices/0596001738/
[1] https://metacpan.org/dist/Perl-Tidy/view/bin/perltidy
[2] https://metacpan.org/pod/Perl::Critic

To avoid creating new Perl code from scratch, we created a REST API many years ago which new frontends and middleware use instead of interacting with the core itself. That has been successful to some extent, as we can have frontend teams coding in JS/TypeScript without them needing to interact with Perl. But rewriting the API's implementation is risky, and the company has shied away from that.
Fixing API bugs still requires diving into a Perl system. However, I found it easier to turn Python or JS devs into Perl devs than into DB engineers. So, usually, the DB subsystem bears the greater risk and requires more expensive personnel.
That's the point of PCRE though - you get Perl's excellent regex implementation in more places.
When writing a Perl script, there's literally zero friction to using regular expressions because they're baked in. I just type what I want to run.
When writing a Python script, I realize I want a regular expression, and I have to stop the code I'm writing, jump to the top of the file, import re, go back to where I was.
Now, maybe I'm writing a quick script and I don't care about tidiness, and I only have to stop what I'm doing, put "import re", and then write the code I want, but still, I'm stopping.
Or I'll wait until I've written the code I want, and go back and put it in. If I remember. If not, I'll find out when I run it that I forgot it.
Or my IDE will show me in red ugly squigglies that I need to right click or hover, and there'll be a recommended solution to import it.
This is all friction that isn't there with Perl.
If you're just shitting out a one-off script and that's really a problem for you, you can just "import re" where you are; nothing other than convention and scope forces you to import everything at the top. Given that's also a problem with sys and os, keeping a template that imports all the commonly used libraries (from pprint import pprint, anyone?) also seems reasonable, if that's really a problem for you.
So you can do `while ($foo =~ /bar/)` or even `($result) = $foo =~ /bar/`.
In Python, regexes are just (raw) strings handed to a library, and until recently you couldn't even test a match inline the way Perl's `if ($foo =~ /bar/) { ... }` does; you had to assign the match to a variable and then check it, causing lots of nested if/then blocks. It's just clunkier.
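For what it's worth, Python 3.8's assignment expressions (the walrus operator) narrow this particular gap. A small sketch; the strings here are made up for illustration:

```python
import re

line = "error: disk full on /var"

# Old style: assign, then test; two steps, and the nesting piles up.
m = re.search(r"error: (.+)", line)
if m:
    detail = m.group(1)

# Since Python 3.8 the test can be done inline, much closer to
# Perl's `if ($line =~ /error: (.+)/) { ... }`:
if m := re.search(r"error: (.+)", line):
    detail = m.group(1)

print(detail)  # disk full on /var
```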
Now, I don't actually believe this, because that puts Perl way ahead of Rust (currently at #18). So the big thing I'm taking away from this little research post is that I no longer trust the Tiobe index. Too bad - it felt pretty reliable for a long time.
a) there is probably several orders of magnitude more Perl code running out there in the wild than Rust?
b) the TIOBE index was ever meaningful?
I like their transparency about who actually supports them, and what the whole community gets for it. I wish other projects would do that, if for no other reason than to make it obvious that FOSS isn't just something that happens.
I use it with Cursor: I create VM templates and clone them with a Proxmox MCP server I've been adding features to, and it's been incredibly satisfying to just prompt "create template from vm 552 and clone a full VM with it".
I like having a local server I can carry with me and control using just Cursor to manage it.
So basically the freedom that comes with a homelab without using proxmox UI and ssh.
I found Turnkey Linux pretty nice. They provide ready-to-use Linux images for different services. Proxmox integrates with them, so for example to install Nextcloud, all I needed to do was click around a bunch in the Proxmox interface, and I was good to go. They have around 80-90 images to choose from.
I guess I do also sometimes use it for ephemeral things without having to worry about cleaning up after too. E.g. I can dork around with some project I saw on GitHub for an afternoon then hit "revert to snapshot" without having to worry about what that means for the permanent stuff running at the same time.
And we naturally try to contribute to every project we use however possible, be it in the form of patches, QA, detailed bug reports, ... and especially for QEMU et al. we're pretty content with our impact, at least compared to resources of other companies leveraging it, I think.
If all it'd take is being "just" a simple UI wrapper, it would make lots of things way easier for us :-)
This would be appealing in a world where Kubernetes doesn't exist as a mature option.
Don't the vast majority of Proxmox users use it for small VM labs, without all the bells and whistles?
In my little world this largely was true until Broadcom bought VMware and proceeded to blow it up.
I know of a handful of rather "large" (hundreds of physical servers at least) VMware-based deployments that are migrating to Proxmox as quickly as possible.
Given how universal this is for my little slice of IT, I imagine this is quite pervasive for the “mid tier” VMware organization.
The feature sets of Kubernetes and Proxmox VE do not really overlap completely, IMO; the Kubernetes route would need a much more hands-on approach and specialized in-house staff to set up and maintain. Why go for that if you can do everything you need with much less headache?
The former needs much more dedicated management and maintenance resources and often pulls in more complexity than most need. One also has to differentiate between those developing and releasing their own applications, who may be fine with what e.g. Kubernetes offers (quite possibly even wanting it), and those providing infrastructure for internal use or simply favoring more monolithic applications; it often boils down to taste and what one is comfortable with. Our enterprise customers are very diverse, from small shops with two- or three-node clusters hosting the office infra, to setups with five-digit host and six-digit VM counts spread out. We even know a few setups using Proxmox VE as the basis for their Kubernetes clusters.
Finally, PVE is quite a bit older than Kubernetes, and we still exist and see a lot of adoption (already before the Broadcom deal, albeit with an uptick since then). So even without a technical comparison of features or use cases, it seems clear that Kubernetes isn't an alternative for everyone, just as Proxmox VE certainly isn't.
> Don't the vast majority of Proxmox users use it for small VM labs, without all the bells and whistles?
One should not confuse being very popular in the home-lab scene, thanks to being very approachable, simple to set up, and most importantly 100% FLOSS (not just open core or the like), with home labs being the main or target audience. But we're really happy with the synergies it provides.
And actually, I wouldn't frame Proxmox VE being used that way as a bad thing; we do not want to be a club that is hard to access, neither cost-wise nor by hindering small VM labs from scaling to bigger setups, and certainly not through a complexity barrier.
Let's not pretend that Proxmox or any of these are silver bullets that kill the (inherent) complexity demon. Anything touching SDN or clustered storage, in any ecosystem, will need dedicated in-house experts that know networking, storage, Linux and how Proxmox (or Kubernetes) approaches those domains.
Unless you are just using Proxmox for small VM labs, in which case it ought to be compared with libvirt and standalone QEMU.
> Anything touching SDN or clustered storage, in any ecosystem, will need dedicated in-house experts that know networking, storage, Linux and how Proxmox (or Kubernetes) approaches those domains.
There are widely different levels of expertise needed, though, and setups managed by admins without in-depth expertise in clustering or SDN can still get things done with Proxmox VE; and if they run out of ideas, there's our enterprise support and, naturally, the very active and friendly community forum to help.
> Unless you are just using Proxmox for small VM labs, in which case it ought to be compared with libvirt and standalone QEMU.
Yeah, I really do not get that point; you basically invalidate 95% of Proxmox VE's feature set because it might not be fully leveraged by a specific user group, and because some other solutions also allow one to do similar things. That logic would also invalidate Kubernetes; it's not exactly unpopular in small labs either.
To be honest, it feels a bit like a justification attempt for the initial post in this chain, which brushed off Proxmox VE as just some small UI/UX wrapper around QEMU/KVM with all the Real Work™ being done by others, possibly because you never actually used it. But I might be reading it the wrong way, and I'm certainly not offended in any way; I just find it a bit odd.
And FWIW, I tried several times to point out that QEMU itself is only a small part of what we provide; even if it were not, just providing a good API abstraction around it is significant work, especially one that allows two decades (and counting!) of stable upgrade paths, _without_ libvirt. And we nowhere hide the underlying technologies; we're proudly building upon, and trying to give back to, all the projects we use, be it Debian, QEMU/KVM, LXC (which we co-maintain), the Linux kernel, FRR, Rust, or, like here, Perl...
But as you're rather dismissive and now even start to call people trolling I hardly see any need to take your writings as serious discussion, they do not seem to be done in good faith, and IMO doing it this way certainly won't help to promote FLOSS, that should be possible without being dismissive to others work.
Would have been cool of you to just say "oh neat, I didn't realize that you did all that too".
There could be tons! Or not many. It's completely irrelevant to my comment.
The funny thing is with Cursor I can just generate a new capability, like the clone and template actions were created after asking Sonnet 4.
For context, here's that classic thread: https://news.ycombinator.com/item?id=8863 :)
It's nice when companies contribute fixes and testing up-stream, even when it's not a monetary contribution.
I am really impressed by how the Raku developers keep their motivation to work on it.
And I am a bit curious to know whether Proxmox is interested in Raku at all, or whether they only use Perl.
...and when looking through the sources, I thought <the proxmox folks> "still use perl?"
I guess proxmox IS 17 years old...
Some companies immediately understand the value of this kind of support. Getting that news out will hopefully allow me to find more orgs who can/will donate in this range.
So, if anyone has any leads, please do contact me: olaf@perlfoundation.org If you take a close look at your stack, you'll probably find Perl in there somewhere.
Unpopular opinion that's gonna get downvoted/flagged soon based on my experience here: that just shows you how broke EU tech companies are, when even 10k is newsworthy for them.
For context, I only worked for average (not big-tech/unicorn) EU/French/Dutch/German tech companies my whole life, and I was shocked to see how much more the average US tech company spends on frivolities (never mind the wages) than the European companies I worked at.
From what I saw, for US tech companies big or small, buying fully decked-out MacBooks and Herman Miller chairs for their SW devs was the norm, while I only saw discount-bin chairs and base-spec HP/Dell/Lenovo laptops wherever I worked here.
Even where I work now, SW engineers get the same crappy HP notebooks that HR uses to read emails, since beancounters love fleets of the same cheap laptops for everyone, so my manager needs to jump through hoops for the beancounters to let him order more powerful notebooks for the SW engineers in his team, cause ya' know, Docker and VMs eat more resources than Outlook and Excel. This is relatively unheard of for US SW engineers where little expense is spared on equipment.
They even reduced costs related to facility management like AC runtime and cleaning services, so trash bins are emptied twice a week instead of daily, leading to some funny office aromas in summer, if you catch my drift. And of course, they blame Putin for this.
I always naively assumed (mostly due to anti-US propaganda we get here in EU) that due to how expensive US labor and healthcare is, it would be the opposite that US companies would have to cost cut to the bone, but no, despite that, US companies still have way more money to spend than European ones. Crazy.
My explanation is that European companies are run the same way EU governments are run: on austerity and cost cutting. Instead of trying to make money by innovating, investing, and splurging to get the best of everything, they try to increase profits by cutting every possible cost, from purchasing to wages to offshoring, since I assume that's also what's taught in European MBA schools.
If your anti-EU rant is right, open source projects should just naturally be receiving a whole bunch of donations this size from these rich US companies. Does that happen? Especially once we get past the big three?
How is it anti-EU? I was explaining what I observed, nothing what I said has to do with the EU political union, but with the way EU companies operate. Criticizing the way European companies choose to do business doesn't make you "anti-EU".
What's with this hyper-partisan George-Bush-style "You're either with us or against us" type of arguments, where you can't criticize something without someone accusing you of being a hater?
Can't we just debate things reasonably, assuming good faith as per HN rules, instead of resorting to attacks and accusations of partisanship?
Craigslist used to donate $100k per month until 2017.
Perl has been a member of Google Summer of Code for several years now
And DuckDuckGo (based in Pittsburgh) recently donated $25k [0] (imo they could have donated more, but something is better than nothing).
[0] - https://spreadprivacy.com/2024-duckduckgo-charitable-donatio...
It's actually almost the opposite here in the US. Because labor is expensive, especially engineering labor, it doesn't make sense to scrimp on expenses like computer equipment or office chairs that support that labor, as you want to reduce development costs. If a faster laptop costs $500/unit more but reduces toil/wasted time for a $300k/yr developer of 30 minutes a day, it is easily justified that it's worth the cost since labor cost is so much higher.
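The arithmetic behind that claim is easy to check. A back-of-envelope sketch, assuming the illustrative numbers above ($300k/yr, roughly 250 working days of 8 hours, 30 minutes saved per day, $500 hardware premium):

```python
salary = 300_000                              # USD per year
working_days = 250
hourly_rate = salary / (working_days * 8)     # 150.0 USD/hour
hours_saved = working_days * 0.5              # 30 min/day -> 125 hours/year
yearly_savings = hours_saved * hourly_rate    # 18750.0 USD/year
extra_hardware_cost = 500

print(yearly_savings)                         # 18750.0
print(yearly_savings / extra_hardware_cost)   # 37.5x return in the first year
```

Even if the time saved is a tenth of that estimate, the laptop still pays for itself.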
To a certain extent yes.
But that's because of margins. A large automotive parts vendor or a major biopharma company isn't generating revenue on software, so software engineering is treated as a cost center.
It's the same in the US, but at least you have tech-first companies that generate most of their revenue from tech.
Europe's tech industry died out because of a mixture of mismanagement and bad luck. Velti [0] used to outcompete AdSense, but their management was incompetent and borderline fraudulent, and that's why the AdTech industry boomed in the US and collapsed in Europe.
The same thing happened to Nokia, Ericsson, and others.
The good European startups like Datadog, Spotify, Databricks, and UiPath all ended up relocating to the US, so an entire generation of startups doesn't exist in the EU.
> they instead try to increase the profits by cutting all possible costs from purchasing to labors' wages and offshoring, since I assume that's also what's taught in European MBA schools
I've dealt with MBA grads from the EU and US - they aren't taught any of this. IMO, the issue is most leadership in European companies tend to be ex-accountants from KPMG/EY/PWC/Deloitte/Mazars-type companies, because manufacturing industries like pharma, automotive, etc have severely low margins.
[0] - https://www.businessinsider.com/how-velti-one-of-the-largest...
Was it really bad luck or just bad management refusing to accept the reality that they were bad so they blamed luck?
Nokia's CEO was famous for saying "We did everything right and still lost". Clearly he didn't do everything right if they lost.
I think EU leaders and CEOs have no accountability, hindsight or self reflection capacity to see how badly they're failing so then just blame bad luck because it always has to be someone else's fault.
Bit of column A and bit of column B. There was a LOT of bad management, but the Eurozone Crisis (2007-2014, 2018 for Greece) was also extremely severe and out of corporate leadership's hands.
If 70-80% of your customers across the Eurozone were delinquent in payments and in the midst of bankruptcies and rolling down operations, you wouldn't be generating the same amount of revenue needed.
In the US, enforcing vendor arrears and collecting dues from bankrupt customers is much easier. These are business law issues that business leadership within the 26+ EU/EFTA nations cannot solve, as this is a legislative issue.
> I think EU leaders and CEOs have no accountability, hindsight or self reflection capacity to see how badly they're failing so then just blame bad luck because it always has to be someone else's fault.
To a certain extent yes, but that's because they don't empower the private sector beyond a handful of politically connected firms to have an actual say or input. Lobbying happens in the EU as well, but it is much harder for startups and companies that aren't national champions to get a say.
Of course, this depends on the country as well as the EU as a whole, which is the crux of the issue - should national law or "EU law" be prioritized? In action, enforcement and regulatory capacity is devolved from the EU to individual member states, and logically, companies do jurisdiction shopping, hence why you see American companies over-represented in Ireland and a strong tech ecosystem in Czechia, Poland, and Romania.
What does Greece's economic crisis have to do with the way European companies are run?
>In the US, enforcing vendor arrears and collecting dues from bankrupt customers is much easier. These are business law issues that business leadership within the 26+ EU/EFTA nations cannot solve, as this is a legislative issue.
I think you'll find it's exactly the opposite.
In many EU countries, if you're behind on a payment, the government's debt collection agency comes after you and seizes your assets to pay off the debt. In the US, by contrast, a lot of people carry five-figure debts spread over 20 different credit cards, debt they'll never repay, because collection is up to the CC companies, not the federal government; the companies give up after a while, since the cost of chasing someone down to collect a $1000 debt isn't worth it.
This doesn't work in the EU. If you have 20 euros of unpaid debt, the state's debt collection agency will slap a 1000-euro collection fee on it and seize 1020 euros' worth of your assets, be it from your home, bank account, or pension fund.
That's how Swedish payment provider Klarna is going bust in the US from how many people are defaulting on debts from buying burritos and pizzas off DoorDash; probably because Klarna naively assumed it would be like Sweden, where the government goes after you to collect the 20 euros you owe.
Correct me if I'm wrong.
Velti and a number of other mobile AdTech and app vendors were clustered in Greece+Balkans. Greece was also in the process of spinning up a tech investment promotion policy comparable to what Israel and India did in the 1990s right before the Eurozone crisis happened.
Velti was the largest AdTech platform in the late 2000s and early 2010s - outcompeting Google AdSense and what became Google AdMob, but they went under due to their heavy European presence. Same with plenty of other European startups.
An entire generation of potential unicorns died out.
> Correct me if I'm wrong
I'm talking about corporate bankruptcy. It's much harder to shut down a business or go through bankruptcy and arrears when you have to deal with a state collections agency, due to the associated compliance and red-tape hurdles.
Consumer Insolvency tends to be pro-consumer in the US, and Business Insolvency tends to be pro-business in the US.
I don't know enough about these companies because I never heard about them so I can't contradict this but googling tells me they're Irish, not Balkan.
And on the other hand, best to look at how important and influential they are to the EU economy, because at the end of the day that's what matters and what drives the economy, politics and citizen votes.
The truth is, nobody cares about the failure of some random ad companies with a workforce of ~100 maybe, that barely pay any taxes locally.
So I don't see how some no-name ad companies are relevant to this. If they had been highly profitable for the EU, paid a lot in EU taxes, and hired tens of thousands of EU workers, we would have heard about them, and they would have had lobbying power and worker support like VW, Renault, or Airbus do. But they're most likely irrelevant in the grand scheme of things, as small companies go bust every day around the world, be it in the US, EU, Asia, etc.
> It's much harder to shut down a business or go through bankruptcy and arrears when you have to deal with a state collections agency due to the associated compliance and red tape hurdles.
It really isn't more difficult, since the government helps you with the shutting-down part the moment you are unable to pay workers' wages.
>Consumer Insolvency tends to be pro-consumer in the US, and Business Insolvency tends to be pro-business in the US.
Maybe true. Which is why very few Europeans want to start businesses and hire workers, when they face the wrath of the government the moment the finances of their business go south and they can't pay the workers' wages.
But from what I've seen, the defaulting-on-your-debt situation is bad in Europe whether you're a business or a consumer, while in the US it's not that bad, as you have a lot of opt-outs, legal and otherwise.
if you're motivated to do OSS work, the best bet is to figure out how to take VC money to do that and don't end up on some blacklist.
How important it is, how much it's used and the money they get.
For example, note how all the stories about institutional investors buying up housing stock mention BlackRock, even though REITs and private equity firms are doing much, much more of this business.
There are concerns because by default they usually cast votes by board recommendations and since they have such a huge stewardship of the stock market it means others can have more direct influence, but Vanguard itself is otherwise as neutral a stock ownership intermediary as can pretty much exist.
BlackRock != BlackStone, if that’s what you were thinking of.
also proxmox is German.
Austrian, please.
I had my Perl phase. I even wrote the first piece of code for my employer in Perl. Well, it was a CGI script, so that was kind of natural back then.
But really, since all the hollow Perl6 stuff I've seen, I've never really read or heard anything about the language in the past, what, 10 to 15 years?
There are tons of languages out there, all with their own merits. But everything beyond Perl 5 felt like someone was trying to piggyback on a legacy. If you invent a new language, that's fine; call it foobar and move on. But pretending to be the successor to Perl feels like a marketing stunt by now...
Raku has Perl DNA running through it: both languages were authored by Larry Wall, and the Raku (Perl 6 at the time) design process was to take RFCs from the Perl community and weave them together.
I do wonder why you consider Raku to be hollow? Sure, it has suffered from a general collapse of the user base and an exodus to e.g. Python. This has slowed the pace of development, but the language has been in full release since December 2015 and continues to have a very skilled core dev team developing and improving it.
There are several pretty unique features in Raku - built in Grammars, full on unicode grapheme support (in regexes), lazy evaluation, hyper operators, and so on that work well together.
Maybe unpopular, true. But hollow?
Maybe take a look at the search results from
https://duckduckgo.com/?q=perl%20conference%202025
and you'll learn that there are ongoing events related to Perl and Raku.
Perl isn't possible without C
Python isn't possible without C
Go isn't possible without C
With those languages you can't get any more raw than C.
Meanwhile, languages like Pascal, which is still modern today, and others based on ALGOL are forgotten about. Makes me wonder why such older languages were left behind. Just a ramble.
Nowadays the codebase of the Go toolchain and runtime contains almost no C; what little remains is just for OS and C-library interoperability.
https://github.com/search?q=repo%3Agolang%2Fgo+lang%3AC+NOT+...
No. V8 killed that.
Less than a month of compensation at FAANG is newsworthy.
"Purchasing silver sponsorship with [org] as a way to grow our brand awareness" is intrinsically understandable to pretty much any business manager.
"Giving away money for something we already have", which is what most technical managers will hear regardless of your actual pitch, is completely inexplicable to many.
It does require that sponsorship is even possible, and recurring sponsorship may be harder to sell than recurring license fees, of course; so it's not a sure thing, just an option to try.
This is their repo: https://git.proxmox.com/?p=pve-manager.git;a=tree
Working for a F500 company, I have tried to use budget surplus to reward open source projects we use. Management looked at me like I had grown an additional head.
this is my experience, too. they'll gladly take, take, take, take, take, and take some more, but when it comes time to give just a little bit, they balk.
"We've never heard of that happening, so no."
I'm almost willing to bet that you did more than good enough work, and if you still want to become a(n even) great(er) programmer, I'm sure there are quite a few OSS projects you could contribute to as well, if you wanted to :))
Re: matching it, you could also spread it amongst a couple of projects.
Anyway, have a great day.