An example: https://vapor.codes/
The problem is that people only think it’s generally useful in the Apple ecosystem.
I think Swift is vastly underestimated due to its relation to Apple.
It reminds me a lot of what it was like to ship Node.js software 15 years ago – the ecosystem is still young, but there are enough resources out there to solve 95% of the problems you'll face.
https://news.ycombinator.com/item?id=9947193
https://news.ycombinator.com/item?id=42803489
Granted, my perception may be wrong, but trying it to know for sure costs time. Swift has not earned my time investment.
Basically, Vapor has to be rewritten as it is in order to work well with Swift 6+, which kinda kills whatever little momentum it had.
I was looking to use it for a new project, as it is a nice framework, but I'm going with GoLang on the server side due to all these in-flux changes.
But that said, it can be frustrating. A lot of the documentation and tutorials you find out there assume you're on an Apple platform where, e.g., Foundation APIs are available. And the direction of the language, the features being added, etc., very clearly serves Apple's goals.
(Side note: IBM was an early adopter of Swift off-platform but then stepped back: https://forums.swift.org/t/december-12th-2019/31735)
EDIT: Yes, ref. counting is garbage collection, before the replies blow up. haha
You can certainly make the case that reference counting is a form of garbage collection, but it is absolutely false to say Swift has "a garbage collector." There is no runtime process that is responsible for collecting garbage – allocated objects are freed deterministically when their reference count hits zero.
The same thing is true of `shared_ptr` instances in C++, and there's certainly no "garbage collector" there, either.
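A minimal sketch of the determinism in question (the class and names here are made up for illustration):

final class Resource {
    let name: String
    init(name: String) { self.name = name }
    deinit { print("\(name) freed") }
}

do {
    let r = Resource(name: "scoped")
    _ = r
} // "scoped freed" prints here, at the end of the scope,
  // not at some later collection cycle.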
What does "it" refer to? The function calls to _swift_release_()? Because if function calls are a "garbage collector," then free() is a garbage collector. And if free() is a garbage collector, then the term is too vague to have any useful meaning.
Swift is great. And reference counting is exactly the right kind of GC for UIs because there are no pauses. But GC it still is. And it wrecks throughput and is not appropriate for situations where you don’t want GC.
And in reference to `shared_ptr`, or Rc and Arc in Rust, that's manual memory management because you're doing it... manually. Swift is like C++ or Rust if you were never allowed to have a reference to anything that wasn't behind an Arc. Then it's no longer manual, it's automatic.
Ok, what is calling `free` here? Point to the garbage collector. Show me the thing that is collecting the garbage.
> And in reference to `shared_ptr`, or Rc and Arc in Rust, that's manual memory management because you're doing it... manually.
You're also doing it manually when you decide to make a type a class in Swift. You're opting in to reference counting when you write a class, or use a type that is backed by a class.
It also seems that our goalposts have gone missing. Before, "it" (whatever "it" is) was a garbage collector because it happened at runtime:
> That reference counting is done at runtime. It’s a runtime garbage collector.
shared_ptr, Rc, and Arc also manage their memory at runtime. But now, "it's" a garbage collector because the compiler generates the retain/release calls...
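A small sketch of that opt-in (types are illustrative): value types carry no reference count, while classes are ARC-managed:

struct Point { var x: Double, y: Double }  // copied by value, no ARC traffic
final class Node { var next: Node? }       // refcounted heap allocation

let a = Point(x: 0, y: 0)
var b = a               // plain bitwise copy
b.x = 1                 // 'a' is unaffected
print(a.x, b.x)         // 0.0 1.0

let n1 = Node()
let n2 = n1             // retain: both names share one refcounted object
print(n1 === n2)        // true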
But fine, no GC. I wonder why every language in the world doesn’t use reference counting, since it’s not GC AND you don’t have to clean up any memory you allocate. I guess everyone who ever designed a language is kinda dumb.
That's not the reason it uses reference counting. The overhead of scanning memory is too high, the overhead of precisely scanning it (avoiding false-positive pointers) is higher, and the entire concept of GC assumes memory can be read quickly which isn't true in the presence of swap.
That said, precise GC means you can have compaction, which can potentially be good for swap.
I thought Swift uses ARC just like Objective-C? The compiler elides much of the reference counting, even across inlined functions. It's not like Python or JavaScript where a variable binding is enough to cause an increment (although IIRC the V8 JIT can elide some of that too).
I don’t disagree that it’s a runtime GC but there’s a bit of nuance to its implementation that resists simple categorization like that.
[1] https://forums.swift.org/t/an-embedded-audio-product-built-w...
I used Vapor on the server as well, and now I doubt that Swift really has an advantage for cross-platform development. Just because of SwiftUI, the rest of the ecosystem has to adopt nonsense updates. IBM made the better decision with Kitura.
Vapor is still on Swift 5.9; let's see how it ends.
Maybe I'm just really new at programming, but this seems like a genuinely bad feature, and the example actually proves it perfectly:
You really want to name a function "Hello World!" instead of helloWorld, just so your stack traces can pass a 5th grade English class exam?
Regarding the int type, a better solution would be to provide the ability to define a restricted integer type, so that the compiler can help.
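Swift can't express integer ranges in the type system today, so a runtime-validated wrapper is about the closest sketch (HTTPStatus here is made up, not a real API):

struct HTTPStatus: Equatable {
    let rawValue: Int
    init?(_ rawValue: Int) {
        guard (100...599).contains(rawValue) else { return nil }
        self.rawValue = rawValue
    }
}

let ok = HTTPStatus(200)        // non-nil
let bogus = HTTPStatus(1337)    // nil: out-of-range values can't sneak in
print(ok != nil, bogus != nil)  // true false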
It’s built-in validation.
I like that most languages seem to have reached consensus on backticks or other similarly awkward characters for this feature. Typing these identifiers is like hitting a speed bump, which seems to be enough to make devs avoid them except for very specific use-cases.
And apparently I never figured this out even after 3 years of Rust lol, thanks!
Although I still stress that it has never been an issue in Kotlin.
It's an important feature for FFI, as well as passing operator functions around. (It seems bizarre to me that you can't do `+` in Swift, but I don't know Swift so maybe there's another way to name such functions.)
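For what it's worth, Swift does let you pass operator functions around by name, if I recall correctly:

let sum = [1, 2, 3].reduce(0, +)   // 6: `+` passed as a function value
let add: (Int, Int) -> Int = (+)
print(sum, add(2, 3))              // 6 5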
Also, the Zig library now uses @"..." for variables with the same name as keywords, rather than using ad hoc naming conventions in those cases.
Yes, that is precisely why I don't like Ruby: it's actually impossible for tools to reason about many of the things that would make finding bugs before shipping feasible. Big companies like Shopify have to impose a lot of restrictions on it to make it work at scale, which is silly. Just use a different language!
Now Swift may not be in this situation, because it's added yet more characters to wrap this nonsense so it is possible to reason about, but it's still just unnecessary, and I will be adding a lint rule against it at work. I don't expect a lot of pushback, if any.
> foo["hello world!"]()
I'm halfway glad I've never needed to write C++ professionally, but it seems to me like all my TypeScript would probably transliterate to very clean C++31.
Also I like the backticks better than what Zig came up with: @"404"
Common Lisp allows it as well, though I don't think I've ever seen it done outside a demonstration that it can be done.
test "strip html tags from string" {
...
}
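For comparison, a hedged sketch of the Swift 6.2 equivalent using raw identifiers (SE-0451); stripHTMLTags(from:) is a made-up helper just so the test has something to check:

import Foundation
import Testing

// Made-up helper, illustrative only.
func stripHTMLTags(from s: String) -> String {
    s.replacingOccurrences(of: "<[^>]+>", with: "", options: .regularExpression)
}

@Test func `strip html tags from string`() {
    #expect(stripHTMLTags(from: "<b>hi</b>") == "hi")
}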
For macro-generated code it is convenient to use identifiers that people won't accidentally use in their own code. The ability to use arbitrary identifiers solves that.
There are just so many ways to solve a problem now that it's more or less impossible for someone to be familiar with all of them.
I gave up on C++ for good reasons, after spending roughly 20 years trying to make sense of it.
That said, some of the, erm, "new ways" to solve problems have been significant advancements. E.g., async/await was a huge improvement over Combine for a wide variety of scenarios.
What is the alternative to Combine's CurrentValueSubject or combineLatest()?
> What is the alternative to Combine's CurrentValueSubject or combineLatest()?
combineLatest et al. can be found in Swift Async Algorithms from Apple*: https://github.com/apple/swift-async-algorithms
* Though CurrentValueSubject is not there, it's not hard to make one if you need it.
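A rough sketch of the replacement, assuming the swift-async-algorithms package is a dependency:

import AsyncAlgorithms

// `.async` (from AsyncAlgorithms) lifts a Sequence into an AsyncSequence.
let numbers = [1, 2, 3].async
let letters = ["a", "b"].async

Task {
    // Emits a tuple whenever either upstream produces a new value.
    for await (n, l) in combineLatest(numbers, letters) {
        print(n, l)
    }
}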
Coming back to a Swift codebase after a few months in different languages is surreal, I can't remember what half of it means.
I find you can apply Java- and JavaScript-style thinking to Swift, but those approaches are less preferred.
People aren't expected to really learn that there is a "feature" called global-actor isolated conformances, but at some point they'll try to make a UI type conform to `Equatable`, and instead of an error saying you can't do that, they'll get a fixit about needing to specify the conformance as `struct MyUIType: @MainActor Equatable` instead.
I bet 99% of codebases won't use "method and initializer key paths," but there's really no reason why you should be able to get a key path to a property, but not a method. It just hadn't been implemented yet, and the language is more consistent now.
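From my reading of the proposal, a zero-argument sketch looks something like this (Greeter is illustrative):

struct Greeter {
    let name: String
    func greeting() -> String { "Hello, \(name)!" }
}

let kp = \Greeter.greeting       // key path to the method itself
let g = Greeter(name: "world")
print(g[keyPath: kp]())          // "Hello, world!"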
Personally, I think raw identifiers are kinda dumb, but that just means I won't use them. IMO there isn't really more cognitive overhead when using the language because it's possible to use spaces in function names now.
If you want constrained numeric types in Swift, that's another problem to tackle. But HTTPStatus.`404` seems like the least ideal way to go about it. It lets you declare HTTPStatus.`404` with a raw value of 403, too.
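The pitfall in two lines: the raw identifier is only a name, so nothing ties it to the value.

enum HTTPStatus: Int {
    case `404` = 403   // compiles fine; the name lies about the value
}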
It's like when you see a venomous snake and can't remember if "red touches yellow" is paired with "deadly fellow" or "friendly fellow".
I'm not seeing why it's worth a whole language feature to avoid prefixing strange identifiers.
Not a fan personally, but Swift is littered with little "niceties" (complexities) like this.
If you can't figure out what stripHTMLTagsFromString() does, you have way bigger problems than a lack of spaces.
Have to disagree there: when I tried re-implementing a basic websocket server in multiple languages (https://news.ycombinator.com/item?id=43800784), I found it so frustrating when they'd insist on hiding the raw close-codes behind pretty names, because it meant having to stop what I was doing to jump into the documentation to figure out what pretty name they gave a particular close code.
All I wanted was to return 1003 (https://datatracker.ietf.org/doc/html/rfc6455#section-7.4.1) if the websocket sent a string message, but:
- Dart calls this "unsupportedData" (https://api.dart.dev/stable/latest/dart-io/WebSocketStatus/u...)
- Java-Websocket calls this "REFUSE" (https://javadoc.io/doc/org.java-websocket/Java-WebSocket/lat...)
- Ktor calls this "CANNOT_ACCEPT" (https://api.ktor.io/ktor-shared/ktor-websockets/io.ktor.webs...)
And some others:
- .NET calls this "InvalidMessageType" (https://learn.microsoft.com/en-us/dotnet/api/system.net.webs...)
- libwebsockets calls this "LWS_CLOSE_STATUS_UNACCEPTABLE_OPCODE" (https://libwebsockets.org/lws-api-doc-main/html/group__wsclo...)
Just... why? Just call the thing 1003 and link to the spec.
I have to say Paul Hudson has almost single-handedly taken over communicating the essentials of Swift to the world; he’s fantastically reliable, brief but enthusiastic, guiding people around the many pitfalls.
I really wish the entire Swift team would spend a quarter fixing bugs and improving the compiler speed, though. So many trivial SwiftUI cases trigger the "this is too complex for the compiler" errors which are so frustrating to fix.
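The usual workaround, for anyone hitting this: split the offending expression into explicitly typed steps (Item, tax, and discount are made up here):

struct Item { let price: Double; let qty: Int }
let items = [Item(price: 9.99, qty: 2)]
let tax = 1.50
let discount = 0.25

// Often too much for the type checker in one expression:
// let total = items.map { $0.price * Double($0.qty) }.reduce(0, +) + tax - discount
let subtotals: [Double] = items.map { $0.price * Double($0.qty) }
let subtotal: Double = subtotals.reduce(0, +)
let total: Double = subtotal + tax - discount
print(total)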
I found myself awestruck that I *HAD* to use Xcode or xcodebuild, and could not just run `swift build` and generate an iOS app.
In the end, I had to:
- compile the .app structure myself
- run codesign manually
- manage the simulator manually
- bundle xcAssets manually
- provide swift linker flags manually targeting the toolchain/sdk I needed
- manually copy an embedded.mobileprovision
It was a good learning experience, but what's the story here? Did Apple just give away Swift to the OSS community and then make no ongoing effort to integrate their platforms nicely into the non-Xcode build tooling? Looking into https://github.com/stackotter/swift-bundler
(I then separately complained about all the steps it took to get it running without Xcode, as I didn't want to be locked into using it)
If you ask me, those platform-specific things should never be an integrated part of the language.
As flawed as Dart and Flutter are in my eyes, their dev tooling quality is something I appreciate and wish I saw here. There are two CLIs, one for the language (Dart) and one for the framework (Flutter). Some would say that the CLI equivalent would be xcodebuild, but that depends on the complex .xcodeproj/.xcworkspace structure and still requires Xcode to be installed.
Swift had been my favorite programming language after C++/Java since 2014. I happily weathered major updates a few times. It was one of the easiest languages. But now,
I tried to update a project from Swift 5.x to 6.x: 150+ source files, no external libraries, written entirely by me, and using almost all of the Swift 5.x features. They've made Swift super hard. I decided not to use Swift 6 anymore. And yes, I don't need to reduce debugging time: even though I don't have a powerful computer, debugging time doesn't matter to me; development time is what actually matters to me.
The language itself is becoming Rust (the programming language used by aliens). I hope Swift is upgrading itself for aliens, or the Swift team is full of aliens. Now I feel Obj-C is a far easier language than Swift.
PS: I understand Swift is a corporate product, upgraded for the sake of SwiftUI rather than as a general-purpose programming language. I'm never going to use Swift 6; I'll use Obj-C for my projects instead.
I liked Swift when I tried it a couple of years ago, but it seems overloaded with features these days. I haven't tried SwiftUI yet, but I did think the Objective-C approach with XIBs, struts, and such worked fine.
For Mac dev, AppKit is still fairly heavily weighted towards use of XIBs, but it’s not nearly as much of an issue there, because on average each individual XIB isn’t as overloaded with controls, since the UI is more split up.
I definitely prefer Compose these days though.
In my experience, most XIB files were either so small and simple that it was easier to replicate them in ten lines of code, or so giant and impenetrable that it took thousands of lines of code to replicate them, and at that point most people prefer working with code over a dense XIB file anyway.
SwiftUI is a wreck that is still not good for advanced UI; you still have to use UIKit for some parts.
Taking Objective-C with DispatchQueue, modernizing it a bit, and adding some new data structures (which it needed) was all that was required to make a good new language.
It could have been Apple's rival to GoLang, but instead it ended up being a hydra/monster with too many paradigms, none of them good.
So true.
Skill issue. *ViewRepresentable exists.
(makes easy things super easy, but harder/complex things harder).
It has some ways to go...
The moment this language version is released, I will move to Swift 6. In our project's case it will happen with no source code changes.
N years later, it doesn’t feel like there has been a step change in Apple software quality; if anything, Apple software feels less solid and lacks the cool “look what I did” extension points. I mean, some of the things you could do with runtime categories and runtime prototypes were really cool. Now when I work on my 2 apps that I originally happily ported to Swift/UIKit, I’m just left confused about how to make things work. I’m happy when it finally works, and I don’t ever try to improve the thing; it’s too much work.
There are lots of different variables at play here; I’m not trying to stretch the inference too much. Heck, it could be that adding Swift to the mix made the forces that were already reducing the quality of Apple’s stuff even worse.
I’m just frustrated. When I work in Elixir, I’m like, this is cool. When I work in Kotlin, I don’t feel like “Apple’s got a language like this too, and it’s got that extra special zing that used to make stuff Apple touched cool.”
Half a decade later it seems like it should be better and Swift stuff should have stabilized. But nope, I’ve had more little glitches in both iOS and macOS. It’s hard to say it’s due to Swift and not management priorities. Still, it feels partially related to Swift.
Swift’s goals are great, I like the syntax, but the language implementation seems to just special case everything rather than having coherent language design.
That and Swift de-emphasizes Obj-C message passing. I have a pet theory that message passing produces more robust GUI software that’s easier to adapt to complex needs.
This could not be further from the truth. The entire process of proposing a new language feature, getting it implemented, and shipping it happens out in the open for everyone to participate in and see.
But my impression watching from the outside is that he had a finger in every pie.
https://youtu.be/Hu-jvAWTZ9o?si=PalSP6POofiRuj3a
I still feel like GUI programming hasn’t progressed in the years since this. Actually, it’s regressed in many ways.
I haven’t used Swift UI in a couple of years, but I always thought the basics of it were excellent.
They got the declarative API foundations right I thought.
Shame it’s still flakey.
The preview used to crash constantly last time I used it.
I have published a word game written entirely in SwiftUI [1], the effects and animations would have been much more difficult to do in UIKit, and the app itself would have been hairier to write and maintain [2]. I also track crashes, and this particular app has had four crashes in the past year, so I am very pleased with the stability
That said, there are definitely times, as you say, where you have to drop to UIKit. For the word game mentioned above, I had to drop down to UIKit to observe low-level keyboard events in order to support hardware keyboard input without explicitly using a control that accepts text input
SwiftUI is mature and pretty advanced — especially for graphics- and animation-heavy UI. It has limitations, particularly around advanced input event handling, as well as the application/scene lifecycle
I plan to continue to use both UIKit and SwiftUI where they make sense. It's easy enough to bridge between them with UIHostingController and UIViewRepresentable
[2] Specific examples include: image and alpha masking is trivial in SwiftUI, Metal Shaders can be applied with a one-line modifier, gradients are easy and automatic, SwiftUI's Timeline+Canvas is very performant and more powerful than custom drawing with UIKit. Creating glows, textured text and images, blurs and geometry-based transitions is much easier in SwiftUI
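As a minimal sketch of that UIViewRepresentable bridge (real UIKit/SwiftUI API, trivial example):

import SwiftUI
import UIKit

// Wraps a UIKit view so it can sit inside a SwiftUI hierarchy.
struct ActivityIndicator: UIViewRepresentable {
    func makeUIView(context: Context) -> UIActivityIndicatorView {
        let view = UIActivityIndicatorView(style: .medium)
        view.startAnimating()
        return view
    }
    func updateUIView(_ uiView: UIActivityIndicatorView, context: Context) {}
}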
She comes across bugs on the regular that I’ve never seen in 16 years of Mac use, but only when the other user account is logged in (i.e. quick user switching rather than a full log out).
Stuff that user accounts shouldn’t even make any difference to, like the menu bar disappearing or rendering too far up so it’s half off screen. Save dialogs stop appearing. Windows that are open but appear to be rendering off screen somewhere. It’s wild. This is on a < 1 year old MacBook Air running the latest OS. It’s an absolute shambles.
After over 20 years, I’m really unhappy with macOS. The last five years have been a huge productivity regression.
For example, I really really wish Kotlin would adopt Swift style if let/guard let statements. Kotlin smart casting doesn’t work just often enough to not be able to consistently rely on it and the foo?.let { } syntax is ugly.
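The Swift idiom being wished for, for anyone who hasn't seen it (User is made up):

struct User { let name: String }

func render(_ user: User?) {
    guard let user else { return }   // unwrap or bail out early
    print(user.name)                 // non-optional from here on
}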
Combined with the JVM warts of Gradle and the jankiness of code stripping and obfuscation, generally speaking, if I could opt to use Swift in place of Kotlin for Android dev I would do so in a heartbeat.
It’s a set of tools intended to do just that.
> Skip supports both compiling Swift natively for Android, and transpiling Swift into Kotlin. Read about Skip’s modes in the documentation.
I don’t mind Objective-C when I’m the one writing it and can ensure that the code is responsibly written, but it wasn’t unusual to have to work on existing Obj-C codebases littered with untyped data, casts, etc some amount of which was inevitably erroneous and only seemed to work correctly. Chasing down the origin points of data, fixing a laundry list of method selectors, and adding in checks always sucked with the compiler doing little to give you a hand. With Swift, even in codebases with poor hygiene fixing things is easier since the compiler will yell when something’s wrong.
Or was the issue more intrinsic to the design of the language itself?
You’d basically need to implement the Swift compiler in the Objective-C compiler to get similar type safety, but to make it work you would probably need to change various bits of syntax, drop the inline intermixing of C and C++, and remove a lot of Obj-C’s dynamism, basically making it a new language. That’s why Swift was created; and in the past, Obj-C’s bracketed Smalltalk-like syntax had proven unpopular among newcomers, so they chose a more mainstream syntax instead.
It seems Swift 6.2 is still unfinished and is acting more like Java: eternal evolution of the language. While it is popular among the tech and HN crowds to have a new language and framework to play around with and work in, it brings precious little, if not negative, benefit to the user experience of final products. I often wonder if Apple could have just made some light improvements to Objective-C over the past 10-12 years and instead focused on actual OS and app quality.
It is this lack of focus that has been with Apple since Steve Jobs left.
In a sense, MacRuby was trying something similar, but the dependency on a GC doomed it.
You can have both. Rust feels "mature" and "finished" if you stick to the stable featureset, but it's still bringing compelling new features over time in spite of that. But this can only be achieved by carefully managing complexity and not letting it get out-of-hand with lots of ad-hoc special cases and tweaks.
Trying to imagine using it without the stream API just shuts my entire brain down. Records arrived pretty recently and are already essential for me.
You will always have to pay me to program in Java, but you’d have to pay me 5x my current salary to do it in Java 8 or earlier.
A lean language reduces the surface area for beautiful expressiveness by clever people - making sure the dumb/junior guy who is maintaining your project in the future can actually fully understand what is written. And it can build and run fast - so you can iterate and test out software behaviors fast.
No one in this world is immortal. If it takes too much time to grok code, write/compile/run tests - a developer will be disincentivized to do so, no matter how amazing the language features are.
My guess is that Swift has adversely affected Apple's overall software quality. As more software moved from Objective-C to Swift, quality has dropped precipitously.
(People have tried to come up with simpler languages that still preserve safety and low-level power, like Austral - but that simplicity comes at the cost of existing intuition for most devs.)
Conversely, it's easier to debug an Objective-C app than a Swift app, simply because compiling and debugging is so much faster, and debugging so much more reliable
I don't know about a software quality drop being attributable to the migration to Swift. So many other things have also happened in that time — much more software that Apple produces is heavily reliant on network services, which they are not very good at. I find Apple's local-first software to be excellent (Final Cut Pro X, Logic, Keynote) and their network-first software is hit-or-miss
They have also saddled developers with a ton of APIs for working with their online services. Try to write correct and resilient application code that deals with files in iCloud? It's harder than it was to write an application that dealt with only local files a decade ago!
Swift is easy to blame, but I don't think it's responsible for poor software. Complexity is responsible for poor software, and we have so much more of that nowadays
Swift was never going to make Apple software great (nor Go or Rust or anything else for anyone else).
Though, honestly, if you're thinking about computer languages in terms of cool, you're going in the wrong direction.
Swift was put together by some great minds, and some, well, minds. Apple still attracts talent, but in far lower density. This isn't even a jab, just a consequence of the fact that they are far larger now, the talent pool is smaller, and there's far more competition.
What percentage of genius level developers want to work for a company where they can't talk about their work and generally get zero public credit?
Glad they are backtracking on this, and I hope they start removing features and simplifying things. Swift's enemy is itself and the people steering it into just an academic exercise, rather than a practical and performant language that can be used in many domains. Server-side Swift right now is dead due to all these insane changes.
Hopefully things get better on the server/other non ios domains, but the language needs to get simplified and become practical and fun to use.
> or are so enamored with the actor paradigm (erlang)
Swift actors are barely actors in the Erlang sense, not even close. Maybe you're referring to structured concurrency?
> "Due to these insane changes"

Which changes?
Why not? Does this mean I need to make a struct which wraps InlineArray and implements Collection? Why didn't they do that?
EDIT: found the answer (I still have concerns):
> While we could conform to these protocols when the element is copyable, InlineArray is unlike Array in that there are no copy-on-write semantics; it is eagerly copied. Conforming to these protocols would potentially open doors to lots of implicit copies of the underlying InlineArray instance which could be problematic given the prevalence of generic collection algorithms and slicing behavior. To avoid this potential performance pitfall, we're explicitly not opting into conforming this type to Sequence or Collection.
Tl;dr: Sequence and Collection are incompatible with noncopyable types, and conditionally conforming when elements are copyable would result in too many implicit copies. They’re working on protocols to cover noncopyable collections such as this, which will probably have a similar API shape.
The reason the protocols are designed the way they are is that, until very recently, all types were implicitly copyable, but most of the collection types like Array and Dictionary were copy-on-write, so the copies were cheap. In general, though, I think there are a lot of performance footguns in the design, mainly around when copies aren’t cheap. The future protocols will hopefully rectify these performance issues.
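So, as I understand it, today you iterate by index rather than with for-in (a sketch, Swift 6.2):

// InlineArray (SE-0453) has count and subscripting, but no
// Sequence/Collection conformance, so no for-in directly.
var buf: InlineArray<4, Int> = [1, 2, 3, 4]
for i in 0..<buf.count {
    buf[i] *= 2
}
print(buf[0]) // 2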
All of these “replace C++” projects have been quite disappointing. Where they tried to make big simplifications they often just didn’t understand the original problem and inherent complexity - or they made a good, but opinionated design choice which has been unable to survive bureaucratic demand for more features, etc.
Is this another asinine onChange()-style mechanism that actually means WILL change? In other words, it tells you BEFORE the value is set on the object, so you can't do jack squat with it much of the time.
That's the M.O. of onChange now, which is utterly brain-dead. Gee, I've been told that a value changed in this object, so let's recalculate some stuff. WHOOPS, nope, none of the other objects (or hell, even the affected object) can take action yet because they can't interrogate the object for its new contents.
Truly incredible that they not only defaulted, but LIMITED change-detection to WILL-change, the least useful of the two choices.
That's not strictly true. You get the new value inside the closure. This is very useful for observing changes on a data model that drives SwiftUI from outside of SwiftUI. Before, you had to write code like this to achieve that:
func startObservation() {
    withObservationTracking {
        print(store.state.toggle) // Here we have the new value (+ it is called once before any change)
    } onChange: {
        Task { startObservation() }
    }
}
— Chris Lattner, 2024
To be fair, every new language version usually includes things that eliminate those special cases, making writing code more straightforward, like the described support for functions in key paths or the ability to set a default global actor isolation.
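For instance, the default-isolation knob, as I understand the Swift 6.2 SwiftPM setting (a sketch):

// swift-tools-version: 6.2
import PackageDescription

let package = Package(
    name: "MyApp",
    targets: [
        .target(
            name: "MyApp",
            // Opts this target into MainActor-by-default isolation (SE-0466).
            swiftSettings: [.defaultIsolation(MainActor.self)]
        )
    ]
)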
Swift is becoming more Rust-like, but also safer.
But that may change, with Swift getting more and more into safety and reaching the limits of how many keywords a language can have and remain decent. I honestly don't know what the feeling of learning Swift would be like today.
And on the other hand, I don't see how the Rust language can really get nicer without sacrificing a few design decisions and goals (in order to keep the language extremely versatile).
As a developer it becomes so very hard to reason about code behaviour. This is especially true with concurrency, which was meant to simplify concurrent operations, but in actual fact is a nasty can of worms. In an effort to "hide the complexity" it just introduces a huge amount of unexpected & hard to grok behaviour. The new "immediate" Task behaviour & non-isolated async behaviours are good examples of this.
Out of curiosity, I put in more than 150 genuine hours in 2024, trying to get deeply into Swift - and eventually just abandoned the language.
In comparison - I got very far experimenting with Go in the same amount of time.
Unless one needs to get into the Mac ecosystem, I see no reason why learning Swift should be necessary at all.
They're just shoveling stuff into the language.
Individually, most items aren't so bad, but collectively they've got a mess and it's getting bigger fast.
None of the decision-makers seem to have the sense or incentive to say "no" to anything.
It's sad, because the language had such promise and there are some really nice things in there.
Well, at least it's relatively easy to avoid or ignore.
I enjoy "bloated" languages. Many languages are bloated nowadays, but the community agrees what set of features to use, what to avoid. Still, those rare features can be useful for stuff like making DSLs through clever use of the language.
It's much worse to have a minimal language that takes years to add new features, like how Go took over a decade to add generics.
How could you know that? Not all the 'no's show up as a proposal. The proposal template also has an "Impact on ABI" section which you can use to guide your "can I ignore it"-sense.
> It sure feels like Swift governance is broken.
What is the actual problem though? Not enough features that you would use? But I don't see how this is a governance problem
Java governance: slow at times but mostly sane. C++ governance: I won't even open this can of worms. Swift governance according to you: too many features I will ignore.
Even in Go, which has my favorite parallel concurrency model, there are many footguns and needless walking on eggshells for business logic. You can still offload IO and bespoke compute to other threads when it makes sense. This view isn't a panacea, but damn close for 99% of use cases.
Coincidentally, I also think the early success of JavaScript can largely be attributed to its single-threaded run loop and callback-oriented concurrency. Even on the server, JS is still holding strong with this limitation, and that's despite all the other limitations and insanity of JS.
It's not even clear the perf gains are great. Everything locking all the time has killed a lot of performance.
Before Swift 6, I worked on a lot of unique projects, both macOS and iOS, and never spent time on debugging. I don't even know what "debugging time" is, exactly.