As is always the case with such rewrites, the big question is whether the improvements came from the choice of language or because they updated a crusty legacy codebase and fixed bugs/bottlenecks.
Given that they also experienced a 90% reduction in memory usage (presumably Java's GC vs. Swift's compile-time reference counting), it seems more likely the gains are in fact from the difference in languages.
As for performance and locality, Java's on-the-fly pointer reordering/compression can give it an edge over even some compiled languages in certain algorithms. Hard to say if that's relevant for whatever web framework Apple based their service on, but I wouldn't discount Java's locality optimisations just because it uses a GC.
You mean it does escape analysis and stack-allocates what it can? That would definitely help, but not eliminate the GC. Or are you thinking of something else?
Thinking about it more, I remember that Java also has some performance-hostile design decisions baked in (e.g. almost everything's an Object, arrays aren't packed, dynamic dispatch everywhere). Swift doesn't have that legacy to deal with.
I’ve replaced Java code with Python a few times, and each time, even though we did it for maintenance reasons (more Python devs available), we saw memory usage more than halved and performance at least doubled because the code used simpler functions and structures. Java has a far more advanced GC and JIT, but at some point the weight of code and indirection wins out.
I am old enough to have seen enterprise C and C++ developers.
Where do you think stuff like DCE, CORBA, and DCOM came from?
Also, many of the things people blame Java for were born as Smalltalk, Objective-C, and C++ frameworks before being rewritten in Java.
Since we are in an Apple discussion thread, here are some Objective-C identifiers from Apple frameworks:
https://github.com/Quotation/LongestCocoa
I also advise getting hold of the original WebObjects documentation in Objective-C, before its port to Java.
This is true to some extent, but the reason I focused on culture is that there are patterns which people learn and pass on differently in each language. For example, enterprise COBOL programmers didn’t duplicate data in memory to the same extent, not only due to hardware constraints but also because there wasn’t a culture telling every young programmer that this was the exemplary style to follow.
I totally agree about C++ having had the same problems, but most of the enterprise folks jumped to Java or C#, which felt like it improved the ratio of performance-sensitive developers in the community of people writing C++. Python had a bit of that, especially in the 2000s, but a lot of the Very Serious Architects didn’t like the language and so they didn’t influence the community anywhere near as much.
I’m not saying everyone involved is terrible, I just find it interesting how we like to talk about software engineering, but a lot of the major factors are basically things people want to believe are good.
> I’ve replaced Java code with Python a few times ... while performance at least doubled
Are you saying you made Python code run twice as fast as Java code? I have written lots of both. I really struggle to make Python go fast. What am I doing wrong?

This is not “Java slow, Python fast” – I expected it to be the reverse – but rather that the developers who cranked out a messy Spring app somehow managed to cancel out all of the work the JVM developers have done without doing anything obviously wrong. There wasn’t a single bottleneck, just death by a thousand cuts with data access patterns, indirection, very deep stack traces, etc.
I have no doubt that there are people here who could’ve rewritten it in better Java for significant wins but the goal with the rewrite was to align a project originally written by a departed team with a larger suite of Python code for the rest of the app, and to deal with various correctness issues. Using Pydantic for the data models not only reduced the amount of code significantly, it flushed out a bunch of inconsistency in the input validation and that’s what I’d been looking for along with reusing our common code libraries for consistency. The performance win was just gravy and, to be clear, I don’t think that’s saying anything about the JVM other than that it does not yet have an optimization to call an LLM to make code less enterprise-y.
As a counterpoint: Look at Crazy Bob's (Lee/R.I.P.) Google Guice or Norman Maurer's Netty.IO or Tim Fox's Vert.x: All of them are examples of how to write ultra-lean, low-level, high-performance modern Java apps... but are frequently overlooked to hire cheap, low-skill Java devs to write "yet another Spring app".
Yeah, that’s why I labeled it culture, since it was totally a business failure, with contracting companies basically doing the “why hire these expensive people when we get paid the same either way?” thing. No point in ranting about the language, it can’t fix the business, but unfortunately there’s a ton of inertia around that kind of development and a lot of people have been trained that way. I imagine this must be very frustrating for the Java team at Oracle, knowing that their hard work is going to be buried by half of the users.
Nowadays, to add to your comment, all the major free-beer implementations (OpenJDK, OpenJ9, GraalVM, and the ART cousin) do AOT and JIT caches.
Even without Valhalla, there are quite a few tricks possible with Panama; one can manually create C-like struct memory layouts.
Yes, it is a lot of boilerplate; however, one can get around the boilerplate with AI (maybe), or just write the C declarations and point jextract at them.
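For anyone curious, a minimal sketch of what that looks like with the java.lang.foreign API (finalized in JDK 22); the Point layout and field offsets here are just illustrative:

    import java.lang.foreign.*;

    public class StructDemo {
        public static void main(String[] args) {
            // Roughly the equivalent of: struct Point { int x; int y; };
            MemoryLayout point = MemoryLayout.structLayout(
                    ValueLayout.JAVA_INT.withName("x"),
                    ValueLayout.JAVA_INT.withName("y"));

            try (Arena arena = Arena.ofConfined()) {
                MemorySegment p = arena.allocate(point);   // flat, off-heap, no object header
                p.set(ValueLayout.JAVA_INT, 0, 10);        // x at offset 0
                p.set(ValueLayout.JAVA_INT, 4, 20);        // y at offset 4
                System.out.println(p.get(ValueLayout.JAVA_INT, 0) + ", "
                                 + p.get(ValueLayout.JAVA_INT, 4));
            }
        }
    }

jextract generates exactly this kind of layout and accessor code from a C header, which is where the boilerplate complaint comes from.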
Let's say you have an object graph that looks like A -> B -> C. Even if the allocations of A/B/C happened at very different times, with other allocations in between, the next time the GC runs and traverses the graph it will see and place in memory [A, B, C], assuming A is still live. That means even if the memory originally looks something like [A, D, B, Q, R, S, T, C], the act of collecting and compacting has a tendency to colocate them.
I don't know which of these is more important on a modern machine, and it probably depends upon the workload.
The downside is fragmentation and the CPU time required for memory management. If you have an A -> B -> C chain where A is the only owner of B and B is the only owner of C, then when A's count hits 0, it has to do 2 pointer hops to deallocate B and then deallocate C (plus arena management for the deallocations).
One of the big benefits of JVM moving style collectors is that when A dies, the collector does not need to visit B or C to deallocate them. The collector only visits and moves live memory.
I suspect this puts greater emphasis on functionality like value types and flexibility in compositionally creating objects. You can trend toward larger objects rather than nesting inner objects for functionality. For example, you can use tagged unions to represent optionality rather than pointers.
The cost of deep A->B->C relationships in Java comes during collections, which by default still involve stop-the-world pauses. The difference is that a reference-counting GC does its work while removing dead objects, whereas a tracing GC does its work by visiting live objects.
So garbage collection is expensive for reference counting if you are creating large transient datasets, and expensive for a tracing GC if you are retaining large datasets.
In my experience Java is a memory hog even compared to other garbage collected languages (that's my main gripe about the language).
I think a good part of the reason is that, if you exclude primitive types, almost everything in Java is a heap-allocated object, and Java objects are fairly "fat": every single instance has a header of between 96 and 128 bits on 64-bit architectures [1]. That's... a lot. Just by making the headers smaller (the topic of the above link) you can get a 20% decrease in heap usage and improvements in CPU and GC time [2].
My hope is that once value classes arrive [3][4], and libraries start to use them, we will see a substantial decrease in heap usage in the average Java app.
[1] https://openjdk.org/jeps/450
[2] https://openjdk.org/jeps/519
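If you want to see those headers for yourself, OpenJDK's JOL tool prints the actual layout; a small sketch assuming the org.openjdk.jol:jol-core dependency (exact output depends on JVM version and flags):

    import org.openjdk.jol.info.ClassLayout;

    public class LayoutDemo {
        static final class Point { int x = 1; int y = 2; }

        public static void main(String[] args) {
            // Prints the mark word, class pointer, fields, and any padding of one instance.
            System.out.println(ClassLayout.parseInstance(new Point()).toPrintable());
        }
    }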
Many other GCed languages, such as Swift, CPython, and Go, do not use a moving collector. Instead, they allocate and pin memory and free it when it is no longer in use.
The benefit of the JVM approach is that heap allocations are wicked fast on pretty much all its collectors. Generally, to allocate, it's a check to see if space is available and a pointer bump. In the other languages, you are bound to end up using a skiplist and/or arena allocator provided by your malloc implementation. Roughly O(log(n)) vs O(1) in performance terms.
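Very roughly, and not how HotSpot actually implements its TLABs, the fast path has this check-and-bump shape:

    // Illustrative sketch only: allocation in a pre-reserved region is a
    // bounds check plus a pointer increment.
    final class BumpArena {
        private final byte[] region;  // stand-in for the reserved heap region
        private int top;              // next free offset (the "bump pointer")

        BumpArena(int size) { this.region = new byte[size]; }

        /** Returns the offset of the new allocation, or -1 when the region is
         *  exhausted (the point where a real collector would trigger a GC). */
        int allocate(int bytes) {
            if (top + bytes > region.length) {
                return -1;            // out of space: collect and compact, then retry
            }
            int offset = top;
            top += bytes;             // the pointer bump
            return offset;
        }
    }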
Don't get me wrong, the object header does eat a fair chunk of memory. Roughly double what another language will take. However, a lot of people confuse the memory which the JVM has claimed from the OS (and is thus reported by the OS) with the memory the JVM is actively using. 2 different things.
It just so happens that for moving collectors like the ones the JVM typically uses, more reserved memory means fewer garbage collections and less time spent garbage collecting.
Swift is not garbage collected, it uses reference counting. So memory there is freed immediately when the last reference to it goes away.
Chapter 5, https://gchandbook.org/contents.html
I know about the trade-offs that a moving GC makes, but the rule of thumb is about double the memory usage, not ten times more as a 90% reduction would seem to imply.
If memory is an issue, you can set a limit and the JVM will probably still work fine
Frankly, it just takes some motivated senior devs and the tantalizing ability to put out the OP blog post and you've got something management will sign off on. Bonus points you get to talk about how amazing it was to use Apple tech to get the job done.
I don't think they seriously approached this because the article only mentioned tuning G1GC. The fact is, they should have been talking about ZGC, AppCDS, and probably Graal if pause times and startup times were really that big a problem for them. Heck, even CRaC should have been mentioned.
It is not hard to get a JVM to startup in sub second time. Here's one framework where that's literally the glossy print on the front page. [1]
Then the next annual report talked about improved scalability because of this amazing technology from Google.
Delving into Java arcana instead of getting first-hand experience developing in Swift would have wasted a great opportunity to improve Swift.
However, they chose to replace an existing system with Swift. The "arcana" I mentioned is startup options that are easily found and safe to apply. It's about as magical as "-O2" is to C++.
Sure, this may have been the right choice if the reason was to exercise Swift. However, we shouldn't pretend there was nothing that could be done to make Java better. The steps I described are like 1 or 2 days' worth of dev work. How much time do you think a rewrite took?
I’m sure you’re right, there must’ve been ways to improve the job of deployment. But if they wanted to reduce resource usage and doing it in Swift aligned with some other company goal it would make sense they might just go straight to this.
Saving a few weeks or months by learning a third-party technology instead of applying and improving first-party technology would be amateurish.
> However, we shouldn't pretend there was nothing that could be done to make Java better.
This seems like a constant refrain: that Apple, or anyone choosing their own tech over someone else's, owes an absolutely fair shot to the stuff they didn't choose. That is simply not the way the world works.
Yes, there are endless stories of companies spending enormous resources to optimize their Java stack, even up to working with the core Java team at Oracle to improve the JVM's innards. But those companies are just (albeit heavy) users of the core technology rather than developers of a competing one. Apple is not one of those users; they are a developer.
And not what I'm advocating for. Sometimes rewrites are necessary.
What I'm advocating is exercising a few well documented and fairly well known jvm flags that aren't particularly fiddly.
The jvm does have endless knobs, most of which you shouldn't touch and instead should let the heuristics do their work. These flags I'm mentioning are not that.
Swapping G1GC for ZGC, for example, would have resolved one of their major complaints about GC impact under load. If the live set isn't near the max heap size, pause times are sub-millisecond.
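For reference, the swap is essentially a flag change (generational ZGC needs JDK 21+; the jar name here is made up):

    # generational ZGC instead of the default G1
    java -XX:+UseZGC -XX:+ZGenerational -Xmx4g -jar service.jar

    # rough G1 equivalent for comparison
    java -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xmx4g -jar service.jar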
> This seems like a constant refrain: that Apple, or anyone choosing their own tech over someone else's, owes an absolutely fair shot to the stuff they didn't choose. That is simply not the way the world works.
The reason for this refrain is that Java is a very well known tech and easy to hire for (Amazon, which you cite, uses it heavily). And Apple had already adopted Java and written a product with it (I suspect they have several).
I would not be saying any of this if the article was a generic benchmark and comparison of Java with swift. I would not fault Apple for saying "we are rewriting in swift to minimize the number of languages used internally and improve the swift ecosystem".
I'm taking umbrage at them trying to sell this as an absolute necessity because of performance constraints while making questionable statements about the cause.
And, heck, the need to tweak some flags would be a valid thing to call out in the article "we got the performance we wanted with the default compiler options of Swift. To achieve the same thing with Java requires multiple changes from the default settings". I personally don't find it compelling, but it's honest and would sway someone that wants something that "just works" without fiddling.
Decades ago, I was working with three IBM employees on a client project. During a discussion about a backup solution, one of them suggested that we migrate all customer data into DB2 on a daily basis and then back up the DB2 database.
I asked why we couldn't just back up the client's existing database directly, skipping the migration step. The response? "Because we commercially want to sell DB2."
And yes, Apple is huge and rich, so they can get fast machines with less memory, but they likely have other tasks with different requirements they want to run on the same hardware.
You can actually witness this to some degree on Android vs iPhone. iPhone comfortably runs with 4GB RAM and Android would be slow as a dog.
[1]: I don't dispute the results, but I also like to note that as a researcher in Computer Science in that domain, you were probably looking to prove how great GC is, not the opposite.
> iPhone comfortably runs with 4GB RAM and Android would be slow as a dog.
This has nothing to do with RAM. Without load, Android wouldn’t even push 2GB; it would still be slower than iPhone because of different trade-offs they make in architecture.
Anyhow, that was just an anecdotal unscientific experiment to give you some idea--obviously they are two different codebases. The literature is there to quantify the matter as I noted.
Unless one knows exactly what ART version is installed on the device, what build options from AOSP were used on the firmware image, and what is the mainline version deployed via PlayStore (if on Android 12 or later), there are zero conclusions that one can take out of it.
Also, iOS applications tend to just die when there is no more memory to make use of, due to the lack of paging and to memory fragmentation.
It’s only Rust (or C++, but unsafe) that have mostly zero-cost abstractions.
Swift and Rust also allow their protocols to be erased and dispatched dynamically (dyn in Rust, any in Swift). But in both languages that's more of a "when you need it" thing, generics are the preferred tool.
This is not a bad thing, I was just pointing out that Go doesn't have a performance advantage over Swift.
This just isn't true. It's good marketing hype for Rust, but any language with an optimizing compiler (JIT or AOT) has plenty of "zero-cost abstractions."
The JVM also likes memory, but it can be tailored to look okay-ish, though still worse than its competitors.
Do you know where Java EE comes from?
It started as an Objective-C framework, a language which Swift has full interoperability with.
https://en.wikipedia.org/wiki/Distributed_Objects_Everywhere
Yes, having lived on the daily build bleeding edge of Swift for several years, including while macros were being developed, I have indeed heard of them.
> Do you know where Java EE comes from?
Fully aware of the history.
The point stands: it is substantially harder with Swift to make the kind of Spring-style mess that JVM apps typically become (of course there are exceptions: I typically suggest people write Java like Martin Thompson instead of Martin Fowler). Furthermore, _people just don’t do it_. I imagine you could count the number of Swift server apps using an IoC container on no hands.
Then I can start counting.
Any language can be a shit show when enough people beyond the adoption curve write code in it, from the leetcoders with MIT degrees to the six-week bootcamp learners shipping single functions as a package, and the architects designing future-proof architectures on whiteboards with SAFe.
When Swift finally crosses into this world, then we can compare how much of it has survived world-scale adoption exposure, beyond the cozy Apple ecosystem.
It’s the history, standard libs, and all the legacy tutorials which don’t get erased from the net
For user defined stuff we’ve recently gained records, which are a step in that direction, and a full solution is coming.
What java is getting in the future is immutable data values where the reference is the value.
When you have something like
class Foo {
int a;
int b;
}
var c = new Foo();
in Java, effectively the representation of `c` is a reference which ultimately points to the heap storage locations of `a, b`. In C++ terms, you could think of the interactions as being `c->b`. When values land, the representation of `c` can instead be (the JVM gets to decide; it could keep the old definition for various performance reasons) something like [type, a, b]. Or, in C++ terms, the memory layout can be analogous to the following:
struct Foo { int a; int b; };
struct Foo c;
c.a = 1;
c.b = 2;
https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...
Even records are not value-based types, but rather are classes limited to value-like semantics - e.g. they can't extend other classes, they are expected to have immutable behavior by default where modification creates a new record instance, and the like.
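A tiny illustration of that value-like behavior (the Money type here is made up):

    public class RecordDemo {
        // Records are shallowly immutable: "modifying" one means building a new instance.
        record Money(String currency, long cents) {
            Money add(long more) {
                return new Money(currency, cents + more);   // new record, original untouched
            }
        }

        public static void main(String[] args) {
            Money a = new Money("USD", 100);
            Money b = a.add(50);          // a is unchanged
            System.out.println(a);        // Money[currency=USD, cents=100]
            System.out.println(b);        // Money[currency=USD, cents=150]
        }
    }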
The JVM theoretically can perform escape analysis to see that a record behaves a certain way and can be stack allocated, or embedded within the storage of an aggregating object rather than having a separate heap allocation.
A C# struct gets boxed to adapt it to certain things like an Object state parameter on a call. The JVM theoretically would just notice this possibility and decide to make the record heap-allocated from the start.
I say theoretically because I have not tracked if this feature is implemented yet, or what the limitations are if it has been.
Valhalla is supposed to bring language-level support; the biggest issue is how to introduce value types without breaking the ABI of everything that is in Maven Central, kind of.
Similar to the whole async/await engineering effort in the .NET Framework: how to introduce it without adding new MSIL bytecodes or requiring new CLR capabilities.
Create C-like structs, in terms of memory layout segments, and access them via the Panama APIs.
I don’t think it’s meant to be a postmortem on figuring out what was going on and a solution, but more a mini white paper to point out Swift can be used on the server and has some nice benefits there.
So the exact problems with the Java implementation don’t matter past “it’s heavy and slow to start up, even though it does a good job”.
Reduced memory consumption for cloud applications was apparently also the primary reason IBM was interested in Swift for a while. Most cloud applications apparently sit mostly idle most of the time, so the number of clients you can multiplex on single physical host is limited by memory consumption, not CPU throughput.
And Java, with the JIT and the GC, has horrible memory consumption.
It has higher-level ergonomics that something like Rust lacks (as much as I like Rust myself), doesn’t have many of the pitfalls of Go (error handling is much better, for example), and is relatively easy to pick up. It’s also in the same performance ballpark as Rust or C++.
It’s not perfect by any means, it has several issues but it’s quickly becoming my preferred language as well for knocking out projects.
Rust and Swift are pretty much the only two choices, and Rust is arguably much more of a pain in the ass for the average joe enterprise coder.
You can share memory not only at the machine level, but between different applications.
Each extra GB per core you add to your shape actually costs something, hence every GB/core that can be saved results in actual cost savings. But even then, each extra GB/core is usually ~5% of the CPU cost. Hence, even going from 10 GB/core (sort of a lot) to 1 GB/core only translates to ballpark ~50% less HW cost. Since they did not mention how many cores these instances have, it's hard to know what GB/core were used before and after, and hence whether there were any real cost savings in memory at all, and if so what the relative memory cost savings might have been compared to CPU cost.
My expectation is that if you put the work in you can get actual hard numbers, which will promptly be ignored by every future person asking the same "question" with the same implied answer.
If the "just rewrite it and it'll be better" people were as right as they often seem to believe they are, a big mystery is JWZ's "Cascade of Attention-Deficit Teenagers" phenomenon. In this scenario the same software is rewritten, over, and over, and over, yet it doesn't get faster and doesn't even fix many serious bugs.
If the "just rewrite it and it'll be better" people were as right as they often seem to believe
Generally speaking, technological progress over thousands of years serves to validate this. Sure, in the short term we might see some slippage depending on talent/expertise, but with education and updated application of learnings, it's generally true.

I confess to having been part of the cascade at various parts of my career.
The macOS userland is based on BSD so you’d get a nice little fit there. And it’s not like some common BSD is bad at doing the job of a server. I know it can do great things.
Who knows if it was ever discussed. They wouldn’t want Windows (licensing, optics, etc) and macOS isn’t tuned to be a high performance server. Linux is incredibly performant and ubiquitous and a very very safe choice.
Fairly certain the iTunes store, their web store, etc. are all built upon enterprise Linux as well.
And there's nothing wrong with that. Use the best tool for the job. Most car owners have never looked in the engine compartment.
https://techcommunity.microsoft.com/blog/windowsosplatform/a...
Are you sure about this?
Was the Java Service in Spring (boot)?
What other technologies were considerd?
I'd assume Go was among them. Was it just the fact that Go's type system is too simplistic, or what were the other factors?
Writing a long-winded report/article for a fair technical evaluation of competing technologies would be an utter waste of time, and no one would believe it if the answer were still Swift.
> I'd assume Go was among them. ...
I don't see any reason to evaluate Go at all.
https://devblogs.microsoft.com/typescript/typescript-native-...
X years from now, another language will come along and then they can switch to that for whatever benefit it has. It is just the nature of these things in technology.
Rewriting it in assembly is the way to go, but that has other tradeoffs.
Of course it’s a trade off and their reasons are fine, but rewrites are expensive and disruptive. I would have picked something that can avoid a second rewrite later on.
> unavoidable costly abstractions in Go
Can you share some?

Rust is an excellent language for embedding in other languages and underpinning developer tools.
That said, someday the new typescript binary will compile to WebAssembly, and it won’t matter much anyway.
I suspect they wanted the compiler speed more than they wanted a WASM target, though.
I think they already use Go in places, but they’ve clearly stated their intention to use Swift as much as possible where it’s reasonable.
I suspect they didn’t evaluate C++, Rust, Go, Erlang, Node, and 12 other things.
They have the experience from other Swift services to know it will perform well. Their people already know and use it.
If Swift (and the libraries used) weren’t good enough they’d get their people to improve it and then wait to switch off Java.
If you go to a Java shop and say you want to use C# for something Java can do, they’ll probably say to use Java.
I don’t read this post as “Swift is the best thing out there” but simply “hey Swift works great here too where you might not expect, it’s an option you might not have known about”.
I’m not in the .NET ecosystem so I don’t know if native AOT compilation to machine code is an option.
But anyway, in this case Apple is making an internal service for themselves. I think a better comparison for MS would be if they chose to rewrite some Windows service’s server back end. Would they choose Go for that?
I don’t know.
They’d have never touched Go with a 10-foot pole.
The article is just marketing for a team looking for a promo; there’s no deep meaning or larger Apple scheme here.
Azure team has no issues using AI to convert from C++ to Rust, see RustNation UK 2025 talks.
Also, they mention the reason being a port, not a rewrite, yet they had to rewrite the whole AST data structures anyway due to the weaker type system in Go.
Finally, the WebAssembly tooling to support Blazor is much more mature than what Go has.
First of all, it is a missed opportunity for Microsoft to have another vector for people to learn C#.
Secondly, at the BUILD session, Anders ended up explaining that they needed to rewrite the AST data structures anyway, given that Go's type system is much inferior to TypeScript's.
And Go's story on WebAssembly is quite poor when compared with the Blazor toolchain; they are hoping Google will make the necessary improvements required for the TypeScript playground and for running VSCode in the browser.
Finally, some of the key developers involved in this effort were laid off during the latest round.
[0] https://devblogs.microsoft.com/typescript/typescript-native-...
It's one thing to say that we want to hire commonly available developers like in Java or C#, but if you have a long term plan and execution strategy, why not pick technology that may pay off larger dividends?
ITT: I get why they chose Swift, it's Apple's own in house technology. Makes total sense, not knocking that at all. Nice writeup.
There are not enough Rust experts in the world for a typical enterprise to hire and benefit from it.
Elixir/Phoenix are not order-of-magnitude improvements over Java like Rust is; they are marginal improvements. Enterprises don't care for that.
Also, it’s about the short-term balance sheet, not long-term product management, in the SME and small SaaS space. I don’t think anyone other than the developers gives a shit.
It really comes down to whether there's someone making the case that a particular application/subsystem has specialized needs that would warrant hiring experts, and whether they can make the case successfully that the system should use technologies that would require additional training and impose additional project risk.
Unless you are dealing not with enterprise applications but with actual services, it can be difficult to even maintain development teams for maintenance - if your one Ruby dev leaves, there may be nobody keeping the app running.
Even when you are producing services - if they are monolithic, you'll also be strongly encouraged to stick with a single technology stack.
Of course, every company and org has to see what's best and feasible for them. Valid points you brought up, no doubt.
but this seems to be a totally asynchronous service with extremely liberal latency requirements:
> On a regular interval, Password Monitoring checks a user’s passwords against a continuously updated and curated list of passwords that are known to have been exposed in a leak.
why not just run the checks at the backend's discretion?
Presumably it's a combination of needing to do it while the computer is awake and online, and also the Passwords app probably refreshes the data on launch if it hasn't updated recently.
Because the other side may not be listening when the compute is done, and you don't want to cache the result of the computation because of privacy.
The sequence of events is:
1. Phone fires off a request to the backend. 2. Phone waits for response from backend.
The gap between 1 and 2 cannot be long because the phone is burning battery the entire time while it's waiting, so there are limits to how long you can reasonably expect the device to wait before it hangs up.
In a less privacy-sensitive architecture you could:
1. Phone fires off request to the backend. Gets a token for response lookup later. 2. Phone checks for a response later with the token.
But that requires the backend to hold onto the response, which for privacy-sensitive applications you don't want!
If you forget to dump the key (or if the deletion is not clean) then you've got an absolute whopper of a privacy breach.
Also worth noting that you can't dump the key until the computation is complete, so you'd need to persist the key in some way which opens up another failure surface. Again, if it can't be avoided that's one thing, but if it can you'd rather not have the key persist at all.
Is it that hard?
Also I don’t think persisting a key generated per task is a big privacy issue.
Great read, thanks for sharing! To me this shows maturity: sharing stuff instead of making obscure secrets out of basic things that exist at many companies <3
The... G1GC that has existed since JDK 7 and been the default since JDK 9?
>managing garbage collection at scale remains a challenge due to issues like prolonged GC pauses under high loads, increased performance overhead, and the complexity of fine-tuning for diverse workloads.
Man if only we had invented other garbage collectors like ZGC (https://openjdk.org/jeps/377) since JDK15 or even Shenandoah (https://wiki.openjdk.org/display/shenandoah/Main) backported all the way to JDK8 that have goals of sub millisecond GC pauses and scale incredibly well to even terabytes of RAM. Without really a need to tune much of the GC.
> inability to quickly provision and decommission instances due to the overhead of the JVM
Man if only we invented things like AOT compilation in Java and even native builds. We could call it GraalVM or something (https://www.graalvm.org/)
>In Java, we relied heavily on inheritance, which can lead to complex class hierarchies and tight coupling.
Man if only you could literally use interfaces the same way you use protocols. Imagine even using a JVM language like Kotlin that provides interface delegation! I guess we'll have to keep shooting ourselves in the foot.
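For what it's worth, a rough sketch of that protocol-style composition in plain Java, using interfaces with default methods instead of a base class (all names here are made up):

    public class ProtocolStyleDemo {
        // Small capabilities as interfaces, mixed into concrete types as needed,
        // instead of piling behavior into a deep class hierarchy.
        interface Identifiable { String id(); }

        interface Auditable extends Identifiable {
            default String auditTag() { return "audit:" + id(); }   // behavior, no base class
        }

        record Account(String id) implements Auditable {}

        public static void main(String[] args) {
            System.out.println(new Account("abc-123").auditTag());  // audit:abc-123
        }
    }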
>Swift’s optional type and safe unwrapping mechanisms eliminate the need for null checks everywhere, reducing the risk of null pointer exceptions and enhancing code readability.
I'll grant them this one. But man, if only NullAway existed :(
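Plain java.util.Optional already covers a lot of the "safe unwrapping" pattern; NullAway or JSpecify annotations handle the rest at compile time. A small sketch (the environment variable is made up):

    import java.util.Optional;

    public class OptionalDemo {
        public static void main(String[] args) {
            // Safe unwrapping without scattering null checks everywhere.
            Optional<String> maybeHost = Optional.ofNullable(System.getenv("SERVICE_HOST"));

            String host = maybeHost
                    .map(String::trim)
                    .filter(h -> !h.isEmpty())
                    .orElse("localhost");   // explicit fallback instead of an NPE

            System.out.println("Connecting to " + host);
        }
    }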
>Swift’s async/await support is a nice addition, streamlining how we handle async tasks.[ ... ] We can now write async code that reads like sync code
Putting aside the fact that Swift's whole MainActor based async story is so shit that I'd enjoy writing async in Rust with tokio and that Swift 6 is the cause of so many headaches because their guidelines on async have been terrible: Man, if only the JDK included things like virtual threads that would make writing async code feel like sync code with a tiny wrapper around. We could call such a thing that collects many threads... A Loom! (https://openjdk.org/projects/loom/)
>Overall, our experience with Swift has been overwhelmingly positive and we were able to finish the rewrite much faster than initially estimated
You rewrote an existing service with existing documentation using a language and libraries you entirely control and it was fast ? Wow!
>In addition to an excellent support system and tooling, Swift’s inherent emphasis on modularity and extensibility
I would rather rewrite all my build scripts in Ant scripts that call `make` manually before calling SPM "excellent tooling", but okay.
Anyways, sorry for the sarcasm, but this is just an Apple ad for Apple's Swift language. Writing low-allocation Java code is possible, and while writing efficient arenas is not possible... Java's GC generations are arenas. In the same way, yes, it's more performant. Maybe because their indirection-heavy, inheritance-abusing code led to pointer chasing everywhere, and not having this opportunity in Swift means they can't waste that performance? Most of what's holding performance back is coding standards and bad habits written at a dark time when the Gang of Four had burrowed its way into every programmer's mind. Add in some reflection for every endpoint, because Java back then really loved that for some reason (mostly because writing source generation tasks or compiler plugins was absolute hell back then, compared to just the "meh" it is today). With a bit of luck, any network serialization also used reflection (thanks GSON & Jackson) and you know just where your throughput has gone.
They had extremely verbose, existing Java 8 code, and just decided to rewrite it using Swift because that's most of what happens at Apple these days. Anything outlined in this post is just post-hoc rationalization. Had it failed, this post would have never happened. Modern Java, while still a bit rough in parts (and I absolutely understand preference in language, I'd much rather maintain something I enjoy writing in), can and will absolutely compete with Swift in every aspect. It just requires getting rid of bad habits that you either cannot write in Swift or never learned to.
Also I've never had Java tell me it can't figure out the type of my thirty chained collector calls, so maybe Hindley-Milner was not the place to go.
Swift has some great attributes, and is almost a very pleasant systems language, give or take some sides that can be mostly attributed to Apple going "we need this feature for iOS apps".
An article that merely provides some unverifiable claims about how much better it is for them (and they represent a large percentage of the best knowledge about Swift on Earth) is about as useful as an AI-generated spam site. Anyone making decisions about using Swift on the backend based on this post would be a clown.
I'm gonna look into server-side Swift.
Looks like it'll take some fiddling to find the right non-Xcode tools approach for developing on Linux.
I prefer Jetbrains tools over VSCode, if anyone has any hints in that direction.
While Swift can use Objective-C's message sending to communicate with Objective-C libraries, that isn't its primary dispatch mechanism. In the case of the service described in the article, it isn't even available (since it runs on Linux, which does not have an Objective-C runtime implementation).
Instead, like Rust, Swift's primary dispatch is static by default (either directly on types, or via reified generics), with dynamic dispatch possible via any (which is similar to Rust's dyn). Swift also has vtable-based dispatch when using subclassing, but again this is opt-in like any/dyn is.
Google AppEngine has been doing this successfully for a decade. It is not easy, but it is possible.
You can link against jemalloc, and use google perftools to get heap and CPU profiles, but it's challenging to make good use of them especially with swifts method mangling and aggressive inlining.
Is swift web yet?
And I'm not defending Java by any means, more often than not Java is like an 80s Volvo: incredibly reliable, but you'll spend more time figuring out its strange noises than actually driving it at full speed.
I'd be surprised if anything Apple wrote would satisfy you. TFA makes it clear that they first optimized the Java version as much as it could be under Java's GC, that they evaluated several languages (not just Swift) once it became clear that a rewrite was necessary, that they "benchmarked performance throughout the process of development and deployment", and they shared before/after benchmarks.
For example, they mention G1GC as being better than what was originally there but not good enough. Yet the problems they mention, prolonged GC pauses, indicates that G1GC was not the right collector for them. Instead, they should have been using ZGC.
The call out of G1GC as being "new" is also pretty odd, as it has been the JVM's default collector since Java 9, released in 2017. They are talking about a collector that's nearly a decade old as if it were brand new. Did they JUST update to java 11?
And if they are complaining about startup time, then why no mention of AppCDS usage? Or more extreme, CRaC? What about doing an AOT compile with Graal?
The only mention they have is the garbage collector, which is simply just one aspect of the JVM.
And, not for nothing, the JVM has made pretty humongous strides in startup time and GC performance across versions. There are pretty large performance wins going from 11->17 and 17->21.
I'm sorry, but this really reads as Apple marketing looking for a reason to tout swift as being super great.
> Did they JUST update to java 11?
As an LTS release with "extended" support until 2032, that certainly seems possible.
Another near-certain factor in this decision was that Apple has an extreme, trauma-born abhorrence of critical external dependencies. With "premier" support for 11 LTS having ended last fall, it makes me wonder if a primary lever for this choice was the question of whether it was better to (1) spend time evaluating whether one of Oracle's subsequent Java LTS releases would solve their performance issues, or instead (2) use that time to dogfood server-side Swift (a homegrown, "more open" language that they understand better than anyone in the world) by applying it to a real-world problem at Apple scale, with the goal of eventually replacing all of their Java-based back-ends.
I would imagine many companies would not use anything newer than JDK 21, the latest LTS release.
JDK 11 LTS is from 2018, and after that Oracle pushed two LTS releases: JDK 17 in 2021 and JDK 21 in 2023. On top of that, Oracle is committed to releasing an LTS every 2 years, with the next one planned for later this year.
Using an LTS doesn't mean you have to create applications on the oldest available release, it means that if you target the latest LTS release your application is going to have a predictable and supported runtime for a very long time.
If they had to start a Java 11 project in 2024 that just points to a deeper organizational problem bigger than just GC.
No, it doesn’t.
This falls into the category of "we rewrote a thing written by people that didn't know what they were doing, and it was better"
Large class hierarchies (favour composition over inheritance, since 1990!), poorly written async code (not even needed in Java, due to virtual threads), poor startup times (probably due to Spring), huge heap sizes (likely memory leaks or other poor code, compounded by an inability to restart due to routing and poor startup times).
Yawn.
G1GC is a fine collector, but if pause time is really important they should have used ZGC.
And if startup is a problem, recent versions of Java have introduced AppCDS which has gotten quite good throughout the releases.
And if that wasn't good enough, Graal has for a long time offered AOT compilation which gives you both fast startup and lower memory utilization.
None of these things are particularly hard to add into a build pipeline or deployment, they simply require Apple to use the latest version of Java.
I’ll definitely take a look again, great to see it becoming mature enough to be another viable option
Somewhat surprised Apple doesn’t run their services on XNU on internal Xserve-like devices (or work with or contribute to Asahi to get Linux working great natively on all Mx CPUs).
A 1U Mac Studio would be killer. I doubt it’d even be a huge engineering effort given that they’ve already done most of the work for the Studio.
If they’re going to run in public clouds + on prem, Linux makes sense. And if you’re doing Linux the x86-64 currently makes a ton of sense too.
As you mentioned they’d have to contribute to Asahi, which would take up resources. Even ignoring that the price/performance on Apple hardware in server workloads may not be worth it.
Even if it’s slightly better (they don’t pay retail) the fact that it’s a different setup than they run in Azure plus special work to rack them compared to bog-standard 1U PCs, etc may mean it simply isn’t worth it.
The choices that already serve the market are just too numerous and the existing tooling is already far greater in features, functionality and capability than anything that Apple can provide. What's funny is that they actually have the money to dedicate resources to this space to compete with C#, Java, go or Rust, but they're not going to because it's just too far afield of their core business. Any backend service written in Swift is not going to be running on a Mac in the cloud, and probably won't be serving just iPhone/iPad clients exclusively, so why bother when we know Apple leadership will treat it as an afterthought.
If it does takeoff, I'm betting it will be because the open source community provides a solution, not Apple and even then it will be in a tiny niche. Indeed, this entire project is enabled by Vapor, an open source Swift project that I'm guessing the team only chose because Vapor as a project finally reached the requisite level of maturity for what they wanted. It's not like Apple went out on their own and built their own web framework in Swift, like Microsoft does with C# and ASP.NET. All of this makes me feel even more skeptical about Swift on the backend. Apple won't do anything specifically to give Swift a legup in the backend space, beyond the basics, like porting Swift to linux, but will avail themselves of open source stuff that other people built.
No, I don’t know why they aren’t publicly available (at least yet). But I do know they power a number of public facing services.
I mean I am asking as general purpose language, not just back end.
It has its issues in places, mostly the compiler, but I love getting to develop in it.
I suspect you hear less about it because it’s no longer new and the open source community doesn’t seem to care about it that much, unlike Rust.
It still seems to be viewed as “the iOS language” even though it can do more than that and is available on other platforms.
Swift has a few extra bits for Objective-C compatibility (when running on Apple platforms), but otherwise the biggest differences come to design and ergonomics of the language itself.
If I need complete control over memory for say a web server, or am writing code to run on an embedded device (e.g. without Linux), I'll use Rust - even if Swift technically has the features to meet my needs.
That's the reverse if I am creating mobile or desktop apps. I'd probably prefer using neither for web services like in this article, but would still probably pick Swift given a limited choice.
If one would rather be OS-agnostic, then not so much.
Apple car was farther?
Swift was a lovely language, up until recently where it should be renamed to Swift++.
It was first released 7 years ago.
Making it cross-platform would require either reimplementing it from scratch, or doing a Safari-on-Windows level of shenanigans of reimplementing AppKit on other platforms.
Or if they are, they're treating macOS/iOS as “blind targets” where those platforms are rarely if ever QA’d or dogfooded.
> The more critical thing to meeting developers where they are would be for the entire developer loop to be doable from a JetBrains IDE on MacOS.
I think most current Apple platform devs would be happiest if they were equipped with the tools to build their own IDEs/toolchains/etc, so e.g. Panic's Nova could feasibly have an iOS/macOS/etc dev plugin or someone could turn Sublime Text into a minimalistic Apple platform IDE. JetBrains IDEs certainly have a wide following but among the longtime Mac user devs in particular they’re not seen as quite the panacea they’re presented as in the larger dev community.
I was curious about this, so I downloaded it to take a look. It doesn't look like they actually shipped AppKit, at least as a separate DLL, but they did ship DLLs for Foundation, Core Graphics, and a few other core macOS frameworks.
What you really mean is people want the iOS development toolchain to be cross platform, and that would mean porting iOS to run in a hypervisor on linux/windows (to get the simulator to work). That is way too big a lift to make sense.
I’ve never wanted Xcode in more places. When I used to be a native mobile dev, I wanted to not have to use Xcode.
And it’s technically possible. But totally not smooth as of a few years ago.
And https://github.com/swiftlang/sourcekit-lsp can be used in any LSP-compatible editor like Neovim.