And Oracle (well before the Sun acquisition - in fact, control of Java was basically the main cause of that move).
Any technology that could bag both IBM and Oracle is (or rather was) likely to dominate the enterprise space.
Microsoft had C#, and at one point IBM pushed Smalltalk. C++ for these environments is doable, but it slows development down a lot and is much harder to secure.
At that time the dynamic alternative was Perl, and that remained true basically until Rails came along.
I would say that many things in IT are not chosen on technical merits alone. You have people who do not want to accrue any blame. Back then, by choosing what IBM or Microsoft endorsed, you absolved yourself of the fallout if and when things went wrong.
Back in the 90s, it felt like IBM, Red Hat, and Sun kind of, sort of, got together and wanted to keep Microsoft from taking over the Enterprise space by offering Java solutions.
In the late 90s, I got stuck making a scheduling program in Java, but it had to run on the 16-bit Windows systems of the time. That was a huge pain, because the 16-bit version didn't have all the capabilities that management was expecting based on the hype. These days, I sometimes have to install enormous enterprise applications that tie up north of 32G of RAM even though they're just basic hardware management tools that would take a fraction of that if built in something like C++ with a standard GUI library. I manage to avoid Java most of the time, but it's been an occasional thorn in my side for 30 years.
Decent tooling. Been around for long enough that a lot of the quirks of it are well known and documented. Basically it's a blue collar programming language without too many gotchas. Modern(ish) day Cobol.
(I'm predominantly a Java dev still, even after diversions over the years to JavaScript, Python and C#).
Things tend to form fractal systems of systems for efficiency. A cleanly delineated org chart maps to a cleanly delineated codebase.
How did he build something adopted by so many enterprises?
It does some things at scale very well and has been afforded the performance improvements of very smart people for 30y.
That's not to say the language isn't verbose, but one of my favourite features was the ability to write code in other languages pretty well in-line inside a Java app, thanks to the JVM and JSR-223.
It was possible to write Ruby or Python code via Jruby or Jython and run it in the JVM.
Clojure also runs on the JVM.
https://docs.oracle.com/javase/8/docs/technotes/guides/scrip...
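For anyone who hasn't used it, this is roughly what JSR-223 looks like in practice - a minimal sketch, assuming a JDK that still bundles the Nashorn JavaScript engine (8 through 14); JRuby or Jython engines plug in the same way once their jars are on the classpath, and the class name here is just for illustration:

    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;
    import javax.script.ScriptException;

    public class Jsr223Demo {
        public static void main(String[] args) throws ScriptException {
            // Look up a script engine by name; "nashorn" ships with JDK 8-14.
            // Engines for Ruby (JRuby) or Python (Jython) register the same way
            // once their jars are on the classpath.
            ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");

            // Share Java objects with the embedded language via bindings.
            engine.put("greeting", "Hello from the JVM");

            // Evaluate in-line script code and get the result back as a Java object.
            Object result = engine.eval("greeting + ', length=' + greeting.length()");
            System.out.println(result);
        }
    }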
And it's still stable, fast and reliable with a massive ecosystem of stable, fast and reliable libraries and software. With good developer tooling, profilers and debuggers to go with it. And big enterprise support teams from RedHat, Oracle, IBM, etc. throwing in their (paid) support services.
It might not be the best language in any single category (speed - runtime and compile time, tooling, ecosystem, portability, employee pool), but there are almost no languages that are as good in all categories at once.
And to top it off, JVM can host other languages so it can easily interoperate with more modern takes on language design like Kotlin while still running on pretty much all major operating systems used in the wild and most CPU architectures as well. It'll run on your car's SoC, your phone and on your server. In many cases, using the same libraries and same code underneath.
Not. And certainly not semantically.
The key thing I think with Java is the programming model & structure scale well with team size and with codebase size, perhaps even in a way that tolerates junior developers, outsourced sub-teams, and even lower quality developers. All of those things end up becoming part of your reality on big Enterprise products, so if the language is somehow adding some tolerance for it that is a good thing.
The other things around syntax and such that people complain about? Those are often minor considerations once the team size and codebase size get large enough. Across my career there has always been the lone guy complaining that if we did everything in a Lisp-derived language everything would be perfect. But that guy has almost always been the guy who worked on a small tool off by himself, not on the main product.
Java has changed a tremendous amount as well. A modern Java system has very little in common with something written before Generics and before all the Functional code has been added. Where I work now we have heavily exploited the Functional java add-ons for years, it has been fantastic.
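To give a rough feel for how different modern Java reads from the pre-generics days, here's a minimal sketch using lambdas and the Streams API (Java 9+ for List.of; all names are illustrative):

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class FunctionalDemo {
        public static void main(String[] args) {
            List<String> words = List.of("enterprise", "java", "beans", "java", "stream");

            // Lambdas + Streams (Java 8+): count occurrences of each word.
            Map<String, Long> counts = words.stream()
                    .collect(Collectors.groupingBy(w -> w, Collectors.counting()));

            // Method references and a declarative pipeline instead of index loops.
            words.stream()
                    .filter(w -> w.length() > 4)
                    .map(String::toUpperCase)
                    .forEach(System.out::println);

            System.out.println(counts);
        }
    }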
Java syntax isn't perfect, but it is consistent, and predictable. And hey, if you're using an Idea or Eclipse (and not notepad, atom, etc), it's just pressing control-space all day and you're fine.
Java memory management seems weird from a Unix Philosophy POV, till you understand what's happening. Again, not perfect, but a good tradeoff.
What do you get for all of these tradeoffs? Speed and memory safety. But with that you still have dynamic invocation capabilities (making things like interception possible) and hotswap/live redefinition (things that C/C++ cannot do).
Perfect? No, but very practical for the real world use case.
(BTW Clojure, as a Lisp dialect, has almost no syntax. You can learn THAT in 5 minutes. The challenge is in training your programming mind to think in totally new concepts)
They're really amenable to the REPL.
The GC story is just great, however. Pretty much the best you can get in the entire ecosystem of managed-memory languages.
You have different GC algorithms implemented, and you can pick and tune the one that best fits your use-case.
The elephant in the room is of course ZGC, which has been delivering great improvements in lowering the Stop-the-world GC pauses. I've seen it consistently deliver sub-millisecond pauses whereas other algorithms would usually do 40-60 msec.
Needless to say, you can also write GC-free code, if you need that. It's not really advertised, but it's feasible.
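By "GC-free" I mean the coding style, not a JVM feature: preallocate primitive arrays and reuse them so the steady-state path never allocates. A minimal sketch of the idea (sizes and names are made up):

    public class AllocationFreeLoop {
        // Preallocate everything up front; the steady-state loop below never
        // allocates, so the GC has nothing to collect while it runs.
        private static final int N = 1_000_000;
        private static final double[] positions = new double[N];
        private static final double[] velocities = new double[N];

        public static void main(String[] args) {
            for (int i = 0; i < N; i++) {
                velocities[i] = i * 0.001;
            }
            long start = System.nanoTime();
            for (int frame = 0; frame < 60; frame++) {
                step(0.016); // primitive arguments only, no boxing, no iterators
            }
            System.out.println("60 frames in " + (System.nanoTime() - start) / 1_000_000 + " ms");
        }

        // Indexed loop over primitive arrays: no Iterator, no autoboxing,
        // no temporary objects.
        private static void step(double dt) {
            for (int i = 0; i < N; i++) {
                positions[i] += velocities[i] * dt;
            }
        }
    }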
As someone who's always been interested in gamedev, I genuinely wonder whether that would be good enough to implement combinations of cutting-edge modern acceleration structures and streaming systems (e.g. UE5's Nanite level-of-detail system).
I have the ability to understand these modern systems abstractly, and I have the ability to write some high-intensity, nearly stutter-free gamedev code that balances memory collection and allocation for predictable latency, but not both, at least not without mistakes.
The GC would be the least of your problems.
Java is neat, but the memory model (on which the GC relies) and the lack of operator overloading do mean that going for that level of performance in games would be incredibly tedious. You also have the warm-up time, and the various hacks to get around it that exist.
Back when J2ME was a thing there was a mini industry of people cranking out games with no object allocation, everything in primitive arrays and so on. I knew of several studios with C and even C++ to Java translators because it was easier to write such code in a subset of those and automatically translate and optimize than it was to write the Java of the same thing by hand.
It is not feasible under the JVM type system. Even once Valhalla gets released it will carry restrictions that will keep that highly impractical.
It's much less needed with ZGC, but even C# - the poster child of the GC-based language family when it comes to writing allocation-free and zero-cost-abstraction code - presents challenges the second you need to use code written by someone who does not care as much about performance.
Java the language eventually drove me away because the productivity was so poor until it started improving around 2006-2007.
Now I keep an eye on it for other languages that run on the JVM: JRuby, Clojure, Scala, Groovy, Kotlin, etc.
IMO JRuby is the most interesting since you gain access to 2 very mature ecosystems by using it. When Java introduced Project Loom and made it possible to use Ruby's Fibers on the JVM via Virtual Threads it was a win for both.
Charles Nutter really doesn't get anywhere close to enough credit for his work there.
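For anyone who hasn't tried Loom yet, the virtual-thread API itself is tiny - a minimal sketch, assuming Java 21+ (the class name is just for illustration):

    import java.time.Duration;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.stream.IntStream;

    public class VirtualThreadsDemo {
        public static void main(String[] args) {
            // One virtual thread per task (Java 21+): blocking calls park the
            // virtual thread, not the underlying carrier/OS thread.
            try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
                IntStream.range(0, 10_000).forEach(i -> executor.submit(() -> {
                    Thread.sleep(Duration.ofMillis(100)); // cheap to block here
                    return i;
                }));
            } // close() waits for submitted tasks to finish
            System.out.println("All tasks done");
        }
    }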
You can take pretty much any code written for Java 1.0 and you can still build and run it on Java 24. There are exceptions (sun.misc.Unsafe usage, for example) but they are few and far between. More so than with nearly any other language, backwards compatibility has been key to Java. Heck, there's a pretty good chance you can take a jar compiled for 1.0 and still use it to this day without recompiling it.
Both Ruby and Python, with pedigrees nearly as old as Java's, have made changes to their languages which make things look better, but ultimately break things. Heck, C++ tends to have so many undefined quirks and common compiler extensions that it's not uncommon to see code that only compiles with specific C++ compilers.
Although Python is pretty close, if you exclude Windows (and don't we all want to do that?).
I don't know if it is a me problem or if I'm missing the right incantations to set up the environment or whatever. I never had that many problems with Java.
But I’m a Java and Ruby person so it might really be missing knowledge.
Prior to that I would frequently have issues (and still have issues with one-off random scripts that use system python).
This takes nothing away from Java and the Java ecosystem though. The JVM allows around the same number of target systems to run not one language but dozens. There’s JRuby, Jython, Clojure, Scala, Kotlin, jgo, multiple COBOL compilers that target JVM, Armed Bear Common Lisp, Eta, Sulong, Oxygene (Object Pascal IIRC), Rakudo (the main compiler for Perl’s sister language Raku) can target JVM, JPHP, Renjin (R), multiple implementations of Scheme, Yeti, Open Source Simula, Redline (Smalltalk), Ballerina, Fantom, Haxe (which targets multiple VM backends), Ceylon, and more.
Perl has a way to inline other languages, but is only really targeted by Perl and by a really ancient version of PHP. The JVM is a bona fide target for so many. Even LLVM intermediate code has a tool to target the JVM, so basically any language with an LLVM frontend. I wouldn’t be surprised if there’s a PCode to JVM tool somewhere.
JavaScript has a few languages targeting it. WebAssembly has a bunch and growing, including C, Rust, and Go. That’s probably the closest thing to the JVM.
No, you really can't. Not anything significant anyway. There are too many deviations between some of those systems to allow you to run the same code.
There are differences, but they’re usually esoteric ( https://perldoc.perl.org/perlport#PLATFORMS ).
There's also Groovy.
I wonder what other languages run on the JVM. What about Perl, Icon, SNOBOL, Prolog, Forth, Rexx, Nim, MUMPS, Haskell, OCaml, Ada, Rust, BASIC, Rebol, Haxe, Red, etc.?
Partly facetious question, because I think there are some limitations in some cases that prevent it (not sure, but a language being too closely tied to Unix or hardware could be why), but also serious. Since the JVM platform has all that power and performance, some of these languages could benefit from that, I'm guessing.
#lazyweb
> I can run basically any Perl code back to Perl 4 (March 1991) on Perl 5.40.2 which is current.
Yes, but can you _read_ it? I'm only half joking. Perl has so many ways to do things, many of them obscure but preferable for specific cases. It's often a write-only language if you can't get ahold of the dev who wrote whatever script you're trying to debug.
I wonder if modern LLMs could actually help with that.
From experience, they can.
It always made me wonder why I hear about companies who are running very old versions of Java though. It always seemed like backwards compatibility would make keeping up to date with the latest an almost automatic thing.
Another problem is crashes. The Java runtime is highly reliable, but bugs still happen.
Which also points to another thing where Java compatibility shines. One can have a GUI application from the nineties and it still runs. It can be very ugly, especially on a high-DPI screen, but one can still use it.
(It's still a valid point. It's just not the point you labeled it as.)
This is not my experience with Java at all. I very often have to modify $JAVA_HOME.
The whole spec was great with the exception of entity beans.
It provided things that are still not available in anything else... why do we store configuration/credentials in git (encrypted, but still)?
And the contexts were easy to configure/enter.
Caucho's Resin was a highly underrated app server. Maybe not underrated, but at least not very well known.
I grew to be a big fan of JBoss and was really disappointed when the Torquebox project stopped keeping up (Rubyized version of JBoss).
and that's before you throw in real multithreading
I'd also throw in what was possibly their greatest idea that sped adoption and that's javadoc. I'm not sure it was a 100% original idea, but having inline docs baked into the compiler and generating HTML documentation automatically was a real godsend for building libraries and making them usable very quickly. Strong typing also lined up nicely with making the documents hyper-linkable.
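For the uninitiated, Javadoc is just structured comments that the JDK's javadoc tool turns into linked HTML - a minimal sketch (the Ranges class is purely illustrative):

    /**
     * Utility for clamping values into a range.
     *
     * <p>Running {@code javadoc} over this file generates linked HTML docs,
     * with {@link Math#min(int, int)} and friends cross-referenced.</p>
     */
    public final class Ranges {

        private Ranges() {
        }

        /**
         * Clamps {@code value} into the inclusive range {@code [min, max]}.
         *
         * @param value the value to clamp
         * @param min   the lower bound, inclusive
         * @param max   the upper bound, inclusive
         * @return {@code value} limited to the given range
         * @throws IllegalArgumentException if {@code min > max}
         */
        public static int clamp(int value, int min, int max) {
            if (min > max) {
                throw new IllegalArgumentException("min > max");
            }
            return Math.max(min, Math.min(max, value));
        }
    }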
Java was really made to solve problems for large engineering teams moreso than a single developer at a keyboard.
Fastest at what, exactly?
So you gave the entire system to the JVM, performed some warmup before a service considered itself “healthy”, and the JVM was reasonably fast. It devoured memory and you couldn't really do anything else with the host, but you got the Java ecosystem, for better or worse.
There was a lot of good tooling that is missing from other platforms, but also a ton of overhead that I am happy to not have to deal with at the moment.
With the JVM you basically outsource all the work you would need to do in C/C++ to optimize memory management, and a typical developer is going to have a hell of a time beating it for non-trivial, heterogeneous workloads. The main disadvantage (at least as I understand it) is the memory overhead that Java objects incur, which prevents it from being fully optimized the way you can with C/C++.
When Java got popular, around 1999-2001, it was not a close third behind C (or C++).
At that time, on those machines, the gap between programs written in C and programs written in Java was about the same as the gap right now between programs written in Java and programs written in pure Python.
This is something I greatly value in the recent changes to Java. They found a great way to include sealed classes, switch expressions, Project Loom, and records in a way that feels at home in the existing Java syntax.
The tooling with heap dump analyzers, thread dump analyzers, GC analyzers is also top notch.
Hearing the work he and others did to gradually introduce pattern matching without painting themselves into a corner was fascinating and inspiring.
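A minimal sketch of how those pieces fit together - sealed interfaces, records, and pattern matching in switch - assuming Java 21+ (the Shape example is mine, not from any particular talk):

    public class ShapeDemo {
        // A sealed hierarchy: only the listed records may implement Shape.
        sealed interface Shape permits Circle, Rectangle {}
        record Circle(double radius) implements Shape {}
        record Rectangle(double width, double height) implements Shape {}

        // Switch expression with record patterns (Java 21+); the compiler
        // checks exhaustiveness, so no default branch is needed.
        static double area(Shape shape) {
            return switch (shape) {
                case Circle(double r) -> Math.PI * r * r;
                case Rectangle(double w, double h) -> w * h;
            };
        }

        public static void main(String[] args) {
            System.out.println(area(new Circle(1.0)));
            System.out.println(area(new Rectangle(2.0, 3.0)));
        }
    }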
Not anymore, the competition is doing much better these days.
Java's tools are really top notch. Using IntelliJ for Java feels like a whole different world from using IDEs for other languages.
Speaking of Go, does anyone know why the Go community is not hot on developing containers for concurrent data structures? I see Mutex this and lock that scattered all over Go code, while in the Java community the #1 advice for writing concurrent code is to use Java's amazing containers. Sometimes, I do miss java.util.concurrent and JCTools.
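For reference, this is the kind of thing I mean - a minimal sketch using ConcurrentHashMap so the application code never touches a lock (names are illustrative):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ConcurrentCounter {
        public static void main(String[] args) throws InterruptedException {
            // The container handles synchronization internally; no explicit
            // mutexes in application code.
            ConcurrentMap<String, Long> hits = new ConcurrentHashMap<>();

            ExecutorService pool = Executors.newFixedThreadPool(8);
            for (int i = 0; i < 100_000; i++) {
                final String key = "page-" + (i % 10);
                // merge() is an atomic read-modify-write on a single key.
                pool.submit(() -> hits.merge(key, 1L, Long::sum));
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);

            System.out.println(hits);
        }
    }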
The patterns are available; it's up to the community to apply proper concurrency patterns.
You can use platform threads, user-space threads, language-provided "green" threads, goroutines, continuations or whatever you wish for concurrency management, but that's almost orthogonal to data safety.
Don't communicate by sharing memory; share memory by communicating.
The overuse of Mutex and Lock comes from developers bringing along patterns from other languages where they are used to communicating via shared memory. So this aspect of the language just doesn't click as well for many people at first. How long it takes you to get it depends on your experience. But then I have also encountered Rust people who will look down on Java but had no idea buffered I/O had higher throughput than unbuffered.
Edit: 1.4, not 1.7
My other issues with the JVM is how much of a black box it is from a platform perspective, which makes debugging a PITA with standard ops tools like strace, gdb, etc. The JVM's over allocation of memory robs the kernel of real insight as to how the workload is actually performing. When you use the JVM, you are completely locked in and god help you if there isn't a JVM expert to debug your thing and unravel how it translates to a platform implementation.
Then of course there's the weird licensing, its association with Oracle, managing JDK versions, its lack of "it" factor in 2025, and a huge boatload of legacy holding it back (which is not unique to Java).
I have successfully navigated my career with minimal exposure to Java, and nowadays there's a glut of highly performant languages with GC that support minimal runtimes, static compilation, and just look like regular binaries such that the problems solved by something like the Java or Python VMs just aren't as relevant anymore - they just add operational complexity.
To reiterate, I admire JG just like any tech person should. Java's success is clear and apparent, but I'm glad I don't have to use it.
Java has one of the greatest debugging capabilities ever: dynamic breakpoints, conditional breakpoints, hell you can even restart a stack frame after hot-deploying code without a restart. You can overwrite any variable in memory, set uncaught-exception breakpoints, and even have the JVM wait for a debugger to connect before starting. There is no equivalent in any other language that does _all_ of these things. And to top this off, there is no equivalent to IDEA or Eclipse for any other language.
For runtime dynamics, JMX/JConsole is good enough for daily use, and Java Flight Recorder gives you deep insight, even in a system you don't have direct access to. Hell, even running jstack on a JVM is a good debug tool. If those don't do the trick, there's plain old HPROF (similar to other languages) and Eclipse Memory Analyzer.
>Then of course there's the weird licensing,
The JVM is open source. There are no licensing issues. OpenJDK can be freely downloaded and run in production without restrictions on any use. If you really want to buy a JVM from Oracle... well, that's your prerogative.
> its lack of "it" factor in 2025,
sdkman
> a huge boatload of legacy holding it back
what legacy code?
Mentioning Java and Python in the same way in the context of performance is really odd. Python is nowhere near the JVM when it comes to performance
And I think there is some parallel with the kernel vs GC and mmap vs buffer pools - the GC simply has better context in the scope of the application. With other processes in the picture, though, yeah there is some provisioning complexity there.
I went to a Java school. I remember my operating systems class involved writing simulated OS code in Java (for example, round robin for context switching). The argument was that it would be easier to understand the algorithms if the hardware complexities were minimized. I understand that sentiment, but I don't think Java was the right choice. Python would have accomplished the same task even better (understanding algorithms). I think there was a huge influence from industry to teach college students Java from day one. I had taught myself BASIC and some C back in high school, so it was a bit of a step backwards to learn a high-level language just to do simulated low-level OS programming.
For instance, Java introduced the fork/join pool for work stealing and recommended it for short-lived tasks that decomposed into smaller tasks. .NET decided to simply add work-stealing to their global thread pool. The result: sync-over-async code, which is the only way to fold an asynchronous library into a synchronous codebase, frequently results in whole-application deadlocks on .NET, and this issue is well-documented: https://blog.stephencleary.com/2012/07/dont-block-on-async-c...
Notice the solution in this blog is "convert all your sync code to async", which can be infeasible for a large existing codebase.
There are so many other cases like this that I run into. While there have been many mistakes in the Java ecosystem they've mostly been in the library/framework level so it's easier to move on when people finally realize the dead end. However, when you mess up in the standard library, the runtime, or language, it's very hard to fix, and Java seems to have gotten it more right here than anywhere else.
Tell us you don't write any sort of .NET code without telling us so explicitly.
You should pick a platform you have better command of for back-handed comments.
Or at least you should try to do a better job than referencing a post from 13 years ago.
Here's an article from 5 years ago:
https://medium.com/criteo-engineering/net-threadpool-starvat...
But does citing a more-recent article matter to you? Probably not. A source being 13 years old only matters if something relevant has changed since then, and you certainly couldn't be bothered to point out any relevant change to support your otherwise fallacious and misleading comment.
What actually amazes me most about this is that people in .NET seem to want to blame the person writing sync-over-async code like they are doing something wrong, even going so far as to call it an "anti-pattern", when in reality it is the fault of poor decision-making from the .NET team to fold work-stealing into the global thread queue. The red-blue function coloring problem is real, and you can't make it go away by pretending everyone can just rewrite all their existing synchronous code and no other solution is needed.
If all you know is one ecosystem, then it seems you are susceptible to a form of Stockholm syndrome when that ecosystem abuses you.
It does. For example, starting with .NET 6 there is a pure C# threadpool implementation that acts differently under problematic scenarios.
You do not understand the performance profile and UX pros/cons of why the async model is better than opting into a stackful-coroutine-based design. You're probably of the opinion that the only way to use async is to always await immediately rather than idiomatically composing tasks/futures.
I'm certain you're basing this off of your personal experience from more than a decade ago of some forsaken codebase written in an especially sloppy way. Do you know why sloppy async code is possible in the first place? That's because even the initial threadpool implementation was so resilient.
Because that's where this kind of sentiment from supposed "seasoned experts" on .NET stack usually comes from.
Moreover, there isn't a single mention that the real way to get into actual deadlock situation is when dealing with applications enriched with synchronization context. You will only ever run into this in GUI scenarios, where .NET's implementation of async continues to be a better alternative to reactive frameworks or, worse, manual threading.
> If all you know is one ecosystem, then it seems you are susceptible to a form of Stockholm syndrome when that ecosystem abuses you.
Pathetic attempt at strawman.
My opinion of .NET kept improving since .NET Core 2.1/3.1 days up until today because of researching the details of and getting exposed to other languages (like Rust, Go, Java, Kotlin, Swift, C and C++, you name it).
Hell, I'm not even arguing that what we have today is the holy grail of writing concurrent and parallel programs, only that most other alternatives are worse.
We're seeing this issue entirely in .NET core. We started on .NET 6, are currently on .NET 8, and will likely migrate to 10 soon after it is released. It's again worth mentioning that you provide zero evidence that .NET 6 solves this problem in any way. Although, as we will see below, it seems like you don't even understand the problem!
> I'm certain you're basing this off of your personal experience from more than a decade ago of some forsaken codebase written in an especially sloppy way.
No, I'm referring to code written recently, at the job I work at now, at which I've been involved in discussions about, and implementations of, workarounds for the issue.
> Moreover, there isn't a single mention that the real way to get into actual deadlock situation is when dealing with applications enriched with synchronization context.
100% false. This deadlock issue has nothing to do with synchronization contexts. Please actually read the 2020 article I linked as it explains the issue much better.
> Pathetic attempt at strawman.
I realize responding to this is to just fight pettiness with more pettiness, but I can't resist. You should probably look up the definition of a strawman argument since you are using the word incorrectly.
As for async and tasks - have you ever considered just not writing the code that is so bad it managed to bypass cooperative blocking detection and starvation mitigations? It's certainly an impressive achievement if you managed to pull this off while starting with .NET 6.
Edit: I agree with the subsequent reply and you are right. Concurrency primitives are always a contentious topic.
"Your deadlock scenario is related to synchronization contexts and can be avoided by ..."
rather than:
"You clearly don't know what you're talking about (but I won't bother telling you why)"
Then we could have had a much more productive and pleasant conversation. I would have responded with:
"Sorry, that article wasn't the right one to share. Here is a better one. The issue I am talking about isn't synchronization context-related at all. It's actually much more insidious."
Enjoying or tolerating Hibernate is undiagnosed Stockholm Syndrome :D
The thread pool implementation has been tweaked over the years to reduce the impact of this problem. The latest tweak that will be in .NET 10:
https://github.com/dotnet/runtime/pull/112796
I'm not sure a thread pool implementation can be immune to misuse (many tasks that synchronously block on the completion of other tasks in the pool). All you can do is add more threads or try to be smarter about the order tasks are run. I'm not a thread pool expert, so I might have no idea what I'm talking about.
Zero money programmers >> Zero interest rate programmers :-)
For a simple IPv4 address, normally representable in 4 bytes / 32 bits, Java uses 56 bytes. The reason is that the Inet4Address object takes 24 B and the InetAddressHolder object takes another 32 B. The InetAddressHolder can contain not only the address but also the address family and the original hostname that was possibly resolved to the address.
For an IPv6 address, normally representable in 16 bytes / 128 bits, Java uses 120 bytes. An Inet6Address contains the InetAddressHolder inherited from InetAddress and adds an Inet6AddressHolder that has additional information such as the scope of the address and a byte array containing the actual address. This is an interesting approach, especially when compared to the implementation of UUID, which uses two longs for storing its 128 bits of data.
Java's approach causes 14x overhead for IPv4 and 7.5x overhead for IPv6, which seems excessive. What am I missing here? Can or should this be streamlined?
For my part, most of the Java code that I have written that needs to use IP addresses needs somewhere between 1 and 10 of them, so I'd never notice this overhead. If you want to write, like, a BGP server in Java I guess you should write your own class for handling IP addresses.
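If you did need millions of them, a hand-rolled class can get close to the minimum - a minimal sketch that packs an IPv4 address into a single int (purely illustrative, not a drop-in replacement for Inet4Address):

    // A compact IPv4 representation: the address itself fits in a single int
    // field (plus the usual object header), instead of the Inet4Address +
    // holder objects described above.
    public final class PackedIpv4 {
        private final int bits;

        public PackedIpv4(byte[] octets) {
            if (octets.length != 4) {
                throw new IllegalArgumentException("expected 4 octets");
            }
            this.bits = ((octets[0] & 0xFF) << 24)
                      | ((octets[1] & 0xFF) << 16)
                      | ((octets[2] & 0xFF) << 8)
                      |  (octets[3] & 0xFF);
        }

        @Override
        public String toString() {
            return ((bits >>> 24) & 0xFF) + "." + ((bits >>> 16) & 0xFF) + "."
                    + ((bits >>> 8) & 0xFF) + "." + (bits & 0xFF);
        }

        public static void main(String[] args) {
            System.out.println(new PackedIpv4(new byte[] {(byte) 192, (byte) 168, 1, 10}));
        }
    }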
GC. Single file modules. No "forward". The Collection suite. Fast compiles.
The magic of the ClassLoader. The ClassLoader, that was insightful. I don't know how much thought went into that when they came up with it, but, wow. That ClassLoader is behind a large swath of Java magic. It really hasn't changed much over time, but boy is it powerful.
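Even without writing a custom ClassLoader, you can see the delegation chain and the runtime loading that most of that magic is built on - a minimal sketch (the class name is illustrative):

    public class ClassLoaderDemo {
        public static void main(String[] args) throws Exception {
            // Every class knows the loader that defined it, and loaders form
            // a delegation chain up to the bootstrap loader (shown as null).
            ClassLoader appLoader = ClassLoaderDemo.class.getClassLoader();
            System.out.println("app loader:       " + appLoader);
            System.out.println("parent loader:    " + appLoader.getParent());
            System.out.println("String's loader:  " + String.class.getClassLoader()); // bootstrap -> null

            // Loading a class by name at runtime: the hook behind plugins,
            // app servers, and a lot of framework "magic".
            Class<?> listClass = Class.forName("java.util.ArrayList", true, appLoader);
            Object list = listClass.getDeclaredConstructor().newInstance();
            System.out.println("reflectively created: " + list.getClass());
        }
    }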
When I started Java, I started it because of the nascent Java web stack of the day: early servlets and JSP. I picked it because of two things. One, JSPs were just Servlets. A JSP was compiled down into a Servlet, and shazam, Servlets all the way down. Two, a single-language stack. Java in JSPs, Java in Servlets, Java in library code. Java everywhere. In contrast to the MS ASP (pre-.NET) world.
Mono-language meant my page building controller folks could talk to my backend folks and share expertise. Big win.
Servlets were a great model. Filters were easy and powerful. Free sessions. Free database connection pools in the server. I mean, we had that in '98, '99.
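For those who never saw it, the model is still recognizable today - a minimal sketch of a servlet plus a filter, written against the modern Jakarta Servlet API rather than the javax one we used back then (names are illustrative, and it needs a servlet container to actually run):

    import java.io.IOException;

    import jakarta.servlet.Filter;
    import jakarta.servlet.FilterChain;
    import jakarta.servlet.ServletException;
    import jakarta.servlet.ServletRequest;
    import jakarta.servlet.ServletResponse;
    import jakarta.servlet.annotation.WebFilter;
    import jakarta.servlet.annotation.WebServlet;
    import jakarta.servlet.http.HttpServlet;
    import jakarta.servlet.http.HttpServletRequest;
    import jakarta.servlet.http.HttpServletResponse;

    // A servlet: the container manages threads, sessions and the lifecycle.
    @WebServlet("/hello")
    public class HelloServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            req.getSession().setAttribute("visits", 1); // "free sessions"
            resp.setContentType("text/plain");
            resp.getWriter().println("Hello from a servlet");
        }
    }

    // A filter wraps every matching request: logging, auth, compression, etc.
    @WebFilter("/*")
    class TimingFilter implements Filter {
        @Override
        public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
                throws IOException, ServletException {
            long start = System.nanoTime();
            chain.doFilter(req, resp);
            System.out.println("request took " + (System.nanoTime() - start) / 1_000_000 + " ms");
        }
    }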
And, of course, portability. The first project was using Netscape's server, which was spitting up bits 2 weeks before we went live, so we switched to JRun in a day or two (yay standard-ish things...). Then, Management(tm) decided "No, Sun/Oracle, we're going NT/SQL Server". Oh no. But, yup, we transitioned to that in a week. A month later, the CTO was fired, and we went back to Sun/Oracle.
Java EE had a rough start, but it offered a single thing nobody else was offering. Not out of the box. Not "cheap", and that was a transaction manager, and declarative transactions on top of that. We're talking about legit "Enterprise grade" transaction manager. Before you had Tuxedo, or MS MTS. Not cheap, not "out of the box", not integrated. JBoss came out and gave all that tech away. Then Sun jumped on with early, free, Sun Java Enterprise 8 which begat Glassfish which was open source. Glassfish was amazing. Did I mention that the included message queues are part and parcel of the integrated, distributed transaction model for Java EE? Doesn't everyone get to rollback their message queue transactions when their DB commit fails? Message Driven Beans, sigh, warms my heart.
There were certainly some bad decisions in early Java EE. The component model was far too flexible for 95% of the applications and got in the way of the Happy Path. Early persistence (BMP, CMP) was just Not Good. We punted on those almost right away and just stuck with Session Beans for transaction management and JDBC. We were content with that.
The whole "everything is remote" over CORBA IIOP and such. But none of that really lasted. EJB 3 knocked it out of the park with local beans, annotations in lieu of XML, etc. Introduction of the JPA. Modern Jakarta EE is amazing, lightweight, stupid powerful (and I'm not even talking Spring, that whole Other Enterprise Stack). There's lots of baggage in there, you just don't have to use it. JAX-RS alone will take you VERY far. Just be gentle, Java Enterprise offers lots and lots of rope.
None of this speaks to the advances in the JVM. The early HotSpot JIT was amazing. "Don't mind me, I'm just going to seamlessly sneak in some binary compiled code where that stack machine stuff was a nano-second ago. I've been watching it, this is better. Off you go!" Like presents from Santa. Then there's the current rocket ship that is JDK development (this is good and bad; I still do not like the Java 9 JPMS module stuff, I think it's too intrusive for the vast majority of applications). But OpenJDK, the Graal stuff. Sheesh, I just get all light headed thinking about it.
Along with the JVM we have the JDK, with its trivial install. Pretty sure I have, like, 20 of them installed on my machine, swapped out with a PATH and JAVA_HOME change. The JVM is our VM, the Servlet container is our container. Maven is our dependency manager. Our WAR files are self-contained. And all that doesn't go stomping on our computer files like Paul Bunyan and Blue making lakes in Minnesota.
It's no wonder I was struggling to grok all the talk about VMs, Dockers, and containers and all that stuff folks mess with to install software. We never had to deal with that. It just was not an issue.
I can distribute source code, with a pom.xml, and a mvnw wrapper script, and anyone can build that project with pretty much zero drama. Without breaking everything on their system. And whatever IDE they're using can trivially import that project. It's also fast. My current little project, > 10K lines of code, < 3s to clean/build/package.
Obviously, there's always issues. The Stories folks hear are all true. The legacy stuff, the FactoryInterfaceFactoryImpl stuff. The Old Days. It's all real. It's imperfect.
But, boy, is it impressive. (And, hey, portable GUI folks, Java FX is pretty darn good...)
You can keep the Java, thanks.
Even as early as Java 1.1 and 1.2 he was not particularly involved in making runtime, library, or even language decisions, and later he wasn't the key to generics, etc.
Mark Reinhold has been the hands-on lead since 1.1, first integrating early JITs, HotSpot, and the 1.2 10X class explosion, and has been running the team all the way through Oracle's purchase, making the JVM suitable for dynamic languages like Kotlin and Clojure, open-sourcing it, moving to a faster release cadence, pushing JVM method and field handles that form the basis for modern language features, migrating between GCs, and on and on.
As far as I can tell, everything that makes Java great has come down to Mark Reinhold pushing and guiding.
I have no love for Oracle, the big bad company. But I am deeply grateful they've managed to keep that group moving forward.
Gosling, unsurprisingly, designed Java with the NeWS model in mind, where web pages were programs, not just static HTML documents. When I got him to sign my copy of "The Java Programming Language", I asked him if Java was the revenge of NeWS. He just smiled.
We could not depend on the printer to stay functional, though. Have you heard of a Winmodem? SPARCprinters were essentially that: they were configured as a "dumb display device" where all the imaging logic was contained in the software and run on the server. A page was written in PostScript, rendered on the print server, and dispatched to the printer as if it were a framebuffer/monitor.
Unfortunately, for whatever reason, the server software was not so reliable, or the printer hardware wasn't reliable, and because of this peculiar symbiotic parasitism, whenever our printer wedged, our server was also toast. Every process went into "D" for device wait; load averages spiked and all our work ground to a halt. We would need to pull the worker off the desktop, reboot the whole server, and start over with the printer.
That printer haunted my dreams, all through my transition from clerk, to network operator, to sysadmin, and it wasn't until 2011 that I was able to reconcile with printers in general. I still miss SunOS 4 and the whole SPARC ecosystem, but good riddance to Display PostScript.
It's a shame IMO that it's not seen as a "cool" option for startups, because at this point the productivity gap compared to other languages is small, if not nonexistent.
Of all the languages I've had to work with while trying to get to know unfamiliar codebases, it's the Go codebases I've been quickest to grok, and they've yielded the fewest surprises, since the code I'm looking for is almost always where I expect it to be.
Rust feels like walking on a minefield, praying to never meet any lifetime problem that's going to ruin your afternoon productivity (I recently lost an afternoon on something that could very well be a known compiler bug, but on a method with such a horrible signature that I can never be sure; in the end I recoded the thing with macros instead).
The feeling of type safety is satisfying, I agree. But calling the overall experience a "joy"?
> recently lost an afternoon on something that could very well be a known compiler bug
With respect, at two months, you're still in the throes of the learning curve, and it seems highly unlikely you've found a compiler bug. Most folks (myself included) struggled for a few months before we hit the 'joyful' part of Rust.
Simply using axum with code using multiple layers of async was enough.
But then again, it looked like this bug (the error message is the same); however, at this point I'm really unsure if it's exactly the same. The error message and the method signature were so atrocious that I just gave up and found a simpler design using macros that dodged the bullet.
Rust has a horrid learning curve
I've programmed for decades in many languages, and I felt the same as you
Persevere.
Surrender! to compile
Weather the ferocious storm
You will find, true bliss
Also, as open-source folks say, "rewrite is always better". It also serves as a good security review. But companies typically don't have the resources to do complete rewrites every so often; I saw it only at Google.
But nobody seems to talk about or care about C# except for Unity. Microsoft really missed the boat on getting mindshare for it back in the day.
Java kept growing and wound up everywhere. It played nice with Linux. Enterprise Mac developers didn't have trouble writing it with IntelliJ. It spread faster because it was open.
Satya Nadella fixed a lot of Microsoft's ills, but it was too late for C# to rise to prominence. It's now the Github / TypeScript / AI era, and Satya is killing it.
The one good thing to say about Ballmer is that he kicked off Azure. Microsoft grew from strength to strength after that.
MS does have an uphill PR battle though.
Finally managed to get a job offer (after being unemployed for a bit) doing Python. It's starting to look like demand for JVM experience is beginning to wane. Might be time to move on anyway :shrug:
I'm old... as long as there's a steady paycheck involved, I'll code in whatever language you say.
Though, currently working on a little personal project in Scala. :)
It may not be cool to use Java for startups, but we do and are immensely productive with it.
- Bad language and standard library design, all the inconsistencies that make you waste your time to this date unless perhaps you have very good static analysis: but history happened, nothing we can do about fate and circumstance
- Great implementation of the technology itself (surface design not really relevant): heroic deeds and feats triumphed over fate and circumstance, this is all to our own credit
The main problem is the legacy code and attitude out there; dependency injection, using Spring or Spring Boot, etc., SUCKS. Vert.x is/was good, but now with virtual threads you don't need all the async complexity.
As long as I am expressing gratitude, I would also like to call out the Clojure team for developing a wonderful ecosystem on top of Java and the JVM.
It must be wonderful to do work that positively affects the lives of millions of people.
I took the severance package when Taligent imploded, dropped everything I was doing at the time, and have been working with Java and its related software ever since.
It remains a shame that it didn't launch with generics though, and I still think operator overloading would have been good. Had it done so I think a lot more people would have stuck around for when the performance improved with HotSpot.