Instead I find myself more concerned with which virtual machine or compiler toolchain the language operates against. Does it need to ship with a VM, or does it compile to a binary? Do I want garbage collection for this project?
Maybe in that way the decision moves up an abstraction layer, the same way we largely moved away from assembly languages and caring about specific processor features.
The tooling and ecosystem aren’t great compared to some of these languages, but Java itself can be pretty damn good.
I used to be a hater many years ago but I’ve since grown to love it.
One of my complaints with Gradle is that if you write a plugin (Java) it shares the classpath with other plugins. You might find some other plugin depending on some old version of a transitive dependency.
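A common workaround (my own sketch, not something the commenter described; the coordinates and names are hypothetical) is to shade and relocate the plugin's dependencies so they can't clash with whatever versions other plugins drag onto the shared classpath, e.g. with the Shadow plugin:

```groovy
// build.gradle for a hypothetical Gradle plugin project
plugins {
    id 'java-gradle-plugin'
    id 'com.github.johnrengelman.shadow' version '8.1.1'
}

dependencies {
    implementation 'com.google.code.gson:gson:2.10.1'
}

shadowJar {
    // Move Gson into the plugin's own namespace, so another plugin
    // pulling an older Gson onto the shared classpath can't clash with ours.
    relocate 'com.google.gson', 'myplugin.shaded.com.google.gson'
}
```

Relocation rewrites the bytecode references, so the plugin resolves its own bundled copy regardless of what else is on the build classpath.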
I too would like some illustration of why the tooling (IntelliJ, etc.) is insufficient. Maybe Gradle as a build system? Although I have to say, with LLMs, Gradle build scripting is a LOT easier to "build" out.
One thing is for sure: don't get tied down to one language because it is popular today. Go with whatever makes sense for you and your project.
LabView is a kick in the pants...
I'd wager it is the installed base keeping LabView on life support. =3
My favorite Julia also made the list this year... nonzero users means there is hope for fun languages yet.
With the new Intel+NVIDIA RTX SoC deal, we can expect Python and C++ to dominate that list in the next few years. =3
After working with Node and Ruby for a while, I really miss a static type system. TypeScript was limited by its option to allow non-strict mode.
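For illustration (my own example, not from the comment): strictness is opt-in per project via `tsconfig.json`, and the compiler's default without it is non-strict, so a codebase can stay loosely typed until someone flips the switch:

```jsonc
// tsconfig.json -- strict checking must be opted into
{
  "compilerOptions": {
    "strict": true  // enables noImplicitAny, strictNullChecks, etc.
  }
}
```

With `strict` off, unannotated parameters are silently inferred as `any`, which is the loophole the commenter is pointing at.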
Nothing catches my eye, as it's either Java/.NET and its enterprisey companies, or Go, which might not be old but feels like it is, by design. Rust sounds fun, but its use cases don't align much with my background.
Any advice?
Rust is also a general purpose language, there's no reason you can't use it for just about any problem space.
You might have trouble finding small companies using anything but JS/Ruby/Python. These companies align more with velocity and cost of engineering, and not so much with performance. That's probably why the volume of interpreted languages is greater than that of "enterprisey" or "performance" languages.
IEEE's methodology[2] is sensible given what's possible, but the data sources are all flawed in ways that don't necessarily cancel each other out. The number of search results reported by Google is the most volatile indirect proxy signal. Search results include everything mentioning the query, with no guarantee of being a fair representation of 2025. People using a language rarely refer to it literally as the "X programming language", and it's a stretch to count all publicity as "top language" publicity.
TIOBE uses this method too, and has the audacity to display the result as a popularity percentage with two decimal places, but their own historical data shows the "popularity" of C dropping by half over two years and then doubling the next year. Meanwhile, actual C usage didn't budge at all. This method has a +/- 50% error margin.
[1]: https://redmonk.com/rstephens/2023/12/14/language-rankings-u...
[2]: https://spectrum.ieee.org/top-programming-languages-methodol...
Right now, it's apparent to me that LLMs are mostly tuned in the programming space for what n-gate would call "webshit", but I think it is a clear (to me) evolutionary step towards getting much better "porting" ability in LLMs.
I don't think that is on the priority list of the LLM companies. But I think it would be a real economic boon: there is certainly a backlog of code/systems that needs to be "modernized" in enterprises, so there is a market.
Ultimately I wonder if an LLM can be engineered to represent code in an intermediate form that is language-independent to a large extent, and "render" it to the desired language/platform when requested.
In all of these, Python is artificially over-represented. Search hits and Stack Overflow questions represent beginners who are force-fed Python in university or in expensive Python consultancy sessions. Journal articles are full of "AI" topics, which use Python.
Python is not used in any application on my machine apart from OS package managers. Python is used in web back ends, but is being replaced by Go there. "AI" is the only real stronghold, due to inertia and marketing.
Like the TIOBE index, the results of this so-called survey are meaningless and of no relevance to the job market.
hackthemack•2h ago
In an alternate universe, if LLMs only had object-oriented code to train on, would anyone push programming forward in other styles?
fuzztester•1h ago
I had looked at it recently while checking out C-like languages. (Others included Odin and C3.) I read some of the Hare docs and examples, and had watched a video about it on Kris Jenkins' Developer Voices channel, which was where I got to know about it.
zenmac•1h ago
Not only that, they also tend to answer using the more popular languages or tools even when it is NOT necessary. And when you call it out, it will respond with something like:
"you are absolutely right, this is not necessary and potentially confusing. Let me provide you with a cleaner, more appropriate setup...."
Why doesn't it just respond that way the first time? The code it provided works, but it's very convoluted. If it weren't checked carefully by an experienced dev asking the right questions, one would never get the second answer, and that vibe code would just end up in a git repo and get deployed all over the place.
Got the feeling some big corp may have just paid money to have their plugin/code show up in the first answer even when it is NOT necessary.
This could be very problematic. I'm sure people in advertising are licking their chops over how they can capitalize on that. If you think the ad industry is bad now, wait until that is infused into all the models.
We really need ways to
1. Train our own models in the open, with the weights and the data they are trained on. Kind of like the reproducible build process that Nix uses for building repos.
2. Ways to debug the model at inference time. The <think> tag is great, but I suspect not everything is transparent in that process.
Is there something equivalent of formal verification for model inference?