> Even though Erlang’s asynchronous message-passing model allows it to handle network latency effectively (a process does not need to wait for a response after sending a message and can continue executing other tasks), it is still discouraged to use Erlang distribution in a geographically distributed system. The Erlang distribution was designed for communication within a data center, or preferably within the same rack in a data center. For geographically distributed systems, other asynchronous communication patterns are suggested.
Not clear why they make this claim, but I think it refers to how Erlang/OTP handles distribution out of the box. Tools like Partisan seem to provide better defaults: https://github.com/lasp-lang/partisan
It's pretty clear, IMHO, that dist was designed for local networking scenarios. Mnesia in particular was designed for a cluster of two nodes that live in the same chassis. The use case was a telephone switch that could recover from failures and have its software updated while in use.
That said, although OTP was designed for a small use case, it still works in use cases way outside of that. I've run dist clusters with thousands of nodes, spread across the US, with nodes on east coast, west coast and Texas. I've had net_adm:ping() response times measured in minutes ... not because the underlying latency was that high, but because there was congestion between data centers and the mnesia replication backlog was very long (but not beyond the dist and socket buffers) ... everything still worked, but it was pretty weird.
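(If you want to see that kind of latency for yourself, timing a dist ping from a connected node is easy; a minimal Elixir sketch, with a placeholder node name:)

```elixir
# :timer.tc returns {microseconds, result}; the node name below is a placeholder.
{micros, result} = :timer.tc(:net_adm, :ping, [:"other@host.example"])
IO.puts("ping -> #{inspect(result)} in #{micros / 1000} ms")
```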
Re Partisan, I don't know that I'd trust a tool that says things like this in their README:
> Due to this heartbeating and other issues in the way Erlang handles certain internal data structures, Erlang systems present a limit to the number of connected nodes that depending on the application goes between 60 and 200 nodes.
The amount of traffic used by heartbeats is small. If managing connections and heartbeats for connections to 200 other nodes is not small for your nodes, your nodes must be very small ... you might ease your operations burden by running fewer but larger nodes.
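For reference, the heartbeat ("tick") interval is controlled by the kernel application's net_ticktime, 60 seconds by default and checked in four sub-intervals per period; a minimal Elixir sketch for inspecting and raising it on a slow link:

```elixir
# Inspect the current dist tick time (in seconds).
IO.inspect(:net_kernel.get_net_ticktime())

# Raise it to 120s if tick traffic or spurious nodedowns are a concern;
# the cluster transitions to the new value gradually.
:net_kernel.set_net_ticktime(120)
```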
I had thought I favorited a comment, but I can't find it again; someone had linked to a presentation from WhatsApp after I left, and they have some absurd number of nodes in clusters now. I want to say on the order of hundreds of thousands. While I was at WhatsApp, we were having issues with things like pg2 that used the global module to do cluster-wide locking. If those locks weren't acquired very carefully, it was easy to get into livelock when you had a large cluster startup and every node was racing to take the same lock to do something. That sort of thing is dangerous, but after you hit it once, if you hit it again, you know what to hammer on, and it doesn't take too long to fix it.
Either way, someone who says you can't run a 200 node dist cluster is parroting old wives' tales, and I don't trust them to tell you about scalability. Head-of-line blocking can be an issue in dist, but one has to be very careful to avoid breaking causality if you process messages out of order. Personally, I would focus on making your TCP networking rock solid, and then you don't have to worry about head-of-line blocking very often.
That said, to answer this from earlier in the thread
> I have read the erlang/OTP doesn’t work well in high latency environments (for example on a mobile device), is that true? Are there special considerations for running OTP across a WAN?
OTP dist is built upon the expectation that a TCP connection between two nodes can be maintained as long as both nodes are running. If that expectation isn't realistic for your network, you'll probably need to use something else, whether that's a custom dist transport, or some other application protocol.
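If you do run dist over a shakier link, you at least want to observe connections dropping; the standard node-monitoring API gives you that. A minimal Elixir sketch:

```elixir
# Subscribe the calling process to node up/down events from the dist layer.
:ok = :net_kernel.monitor_nodes(true)

receive do
  {:nodeup, node} -> IO.puts("dist connection up: #{node}")
  {:nodedown, node} -> IO.puts("dist connection lost: #{node}")
end
```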
For mobile ... I've seen TCP connections from mobile devices stay connected upwards of 60 days, but it's not very common; iOS and Android aren't built for it. But that's not really the issue, because the bigger issue is that dist has no security barriers. If someone is on your dist, they control all of the nodes in your cluster. There is no way that's a good idea for a phone to be connected into, especially if it's a phone you don't control that's running an app you wrote to connect to your service: there's no way to prevent someone from taking your app, injecting dist messages, and spawning whatever they want on your server... that's what you're inviting if you use dist.
That said, I think Erlang is great, and if you wanted to run OTP on your phone, it could make sense. You'd need to tune runtime/startup, you'd need to figure out some way to do UX, and you'd need to be OK with figuring out everything yourself, because I don't think there are a lot of people with experience running BEAM on Android. And you'd need to be OK with hiring people and training them on your stack.
`<div>Hello, world!!</div>`
we can do:
`<Text>Hello, world!</Text>`
I want to be clear: this is not a web renderer. We are not rendering HTML. We're rendering actual native UI. So the above in SwiftUI becomes:
`Text("Hello, world!")`
And yes, we support modifiers via a stylesheet system, events, custom view registration, and really everything that you would normally be doing in Swift.
Where this library comes into play: the headless browser is being built in Elixir to run on device. We communicate with the SwiftUI renderer via disterl. We've built a virtual DOM where each node in the vDOM has its own Erlang process. (I can get into process limits for DOMs if people want.) The Document connects each node's process directly to the corresponding SwiftUI view.
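To make "a process per vDOM node" concrete, here's a rough sketch of what one node process could look like. This is illustrative only, not the actual GenDOM API:

```elixir
defmodule VDOMNode do
  @moduledoc "Illustrative sketch: one GenServer per vDOM node (not the real GenDOM API)."
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  # The Document (or the renderer) can address each node directly by pid.
  def set_attribute(pid, name, value) do
    GenServer.cast(pid, {:set_attribute, name, value})
  end

  @impl true
  def init(opts) do
    {:ok, %{tag: Keyword.get(opts, :tag), attrs: %{}, children: []}}
  end

  @impl true
  def handle_cast({:set_attribute, name, value}, state) do
    # In the real system this is roughly where a patch would be pushed
    # to the corresponding SwiftUI view over disterl.
    {:noreply, %{state | attrs: Map.put(state.attrs, name, value)}}
  end
end
```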
We've taken this a step further by actually compiling client-side JS libs to WASM and running them in our headless browser and bridging back to Elixir with WasmEx. If this works we'll be able to bring the development ergonomics of the Web to every native platform that has a composable UI framework. So think of actual native targets for Hotwire, Livewire, etc...
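For anyone unfamiliar with Wasmex, the basic call path looks roughly like this (the module file and export name here are made up):

```elixir
# Load a compiled WASM module and call one of its exports.
# "client_lib.wasm" and "render" are hypothetical names.
bytes = File.read!("client_lib.wasm")
{:ok, instance} = Wasmex.start_link(%{bytes: bytes})
{:ok, result} = Wasmex.call_function(instance, "render", [])
```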
We can currently build for nearly all SwiftUI targets: macOS, iPhone, iPad, Apple Vision Pro, Apple TV. Watch is the odd one out because it lacks the on-device networking that we require for this library.
This originally started as the LiveView Native project but due to some difficulties collaborating with the upstream project we've decided to broaden our scope.
Swift's portability means we should be able to bring this to other languages as well.
We're nearing the point of integration where we can benchmark and validate this effort.
Happy to answer any questions!
* our vDOM: https://github.com/liveview-native/gen_dom
* selector parsing: https://github.com/liveview-native/selector
* compile Elixir to iOS: https://github.com/otp-interop/elixir_pack
Years ago I worked at Xamarin, and our C# compiler compiled C# to native iOS code but there were some features that we could not support on iOS due to Apple's restrictions. Just curious if Apple still has those restrictions or if you're doing something different?
we compile without the JIT so we can satisfy the AppStore requirements
It's pretty well-established at this time that cross-platform development frameworks are hard for pretty much any team to accomplish... Is work winding down on the LiveView Native project, or do you expect to see an increase in development?
What is changing is how the client libraries are built. I mentioned in another comment that we're building a headless web browser; if you haven't read it, I'd recommend it as it gives a lot of detail on what we're attempting to do. Right now we've more or less validated every part with the exception of the overall render performance. This effort replaces LVN Core, which was built in Rust. The Rust effort used UniFFI to message pass to the SwiftUI client, and boot time was almost instant. With the Elixir browser we will have more overhead: boot time is slower, and I believe disterl could carry more overhead than UniFFI bindings. However, the question will come down to whether that overhead is significant or not. I know it will be slower, but if the overall render time is still performant then we're good.
The other issue we ran into was when we started implementing more complex LiveView things like Live Components. While LVN Core has worked very well, I believe its implementation was incorrect. It had passed through four developers and was originally only intended to be a template parser. It grew with how we were figuring out what the best path forward should be. And sometimes that path meant backing up and ditching some tech we built that was a dead end for us. I felt refactoring LVN Core into a browser was going to take more time than doing it in Elixir. I built the first implementation in about a week, but the past few months have been spent on building GenDOM. That may still take over a year, but we're prioritizing the DOM API that LiveView, Hotwire, and Livewire will require. Then the other 99% of the DOM API will be a grind.
But to your original point, going the route of the browser implementation means we are no longer locked into LiveView, as we should be able to support any web client that does similar server/client-side interactivity. This means our focus will no longer be on LiveView Native individually but on ensuring that the browser itself is stable and provides the API necessary for any JS-built client to run on it.
I don't think we'd get to 100% compatibility with LiveView itself without doing this.
Holy, this will be much bigger than I thought! Can't wait to see it out.
How is it different from Lynx? React Native? (It probably is, besides the XML-like syntax; again, state management?)
Quite interesting!
As far as the differentiator: backend. If you're sold on client-side development then I don't think our solution is for you. If, however, you value SSR and want a balance between frontend and backend, that's our market. So for a Hotwire app you could have a Rails app deployed that can accept an `Accept: application/swiftui` header, and we can send the proper template to the client. Just like the browser, we parse and build the DOM and instantiate the Views in the native client. There are already countless examples of SSR native apps in the AppStore. As long as we aren't shipping code it's OK, which we're not. Just markup that represents UI state. The state would be managed on the server.
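As a rough sketch of that content negotiation on the Elixir side (illustrative only, not our actual plumbing; the module and templates are made up):

```elixir
# Plug-style sketch: choose a template based on the Accept header.
defmodule ContentNegotiationPlug do
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    body =
      case get_req_header(conn, "accept") do
        ["application/swiftui" <> _] -> render_swiftui_template(conn)
        _ -> render_html_template(conn)
      end

    send_resp(conn, 200, body)
  end

  # These would load and render the right template for each client type.
  defp render_swiftui_template(_conn), do: ~s(<Text>Hello, world!</Text>)
  defp render_html_template(_conn), do: ~s(<div>Hello, world!</div>)
end
```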
Another area where we differ is that we target the native UI framework; we don't have a unified UI framework. So you will need to know HTML for the web, SwiftUI for iOS, and Jetpack Compose for Android. This is necessary to establish the primitives that we can hopefully build on top of to create a unified UI framework (or maybe someone solves that for us?).
With our wasm compilation, we may even be able to compile React itself and have it emit native templates. No idea if that would work or not. The limits come when the JS library itself is enforcing HTML constraints that we don't observe, like case sensitive tag names and attributes.
What about offline mode? Well, for use cases that don't require it you're all set. We have lifecycle templates that ship on device for different app states, like being offline. If you want offline, we have a concept that we haven't implemented yet: for Elixir we can just ship a version of the LV server on device that works locally and then does a data sync.
Didn’t Firefox build its UI in XAML long ago?
https://en.m.wikipedia.org/wiki/Extensible_Application_Marku...
damn
Can you elaborate on this?
The writing was more or less on the wall with WASM. I don't know if this project is really The Answer that will solve all of the problems but it sounds like a step in that direction and I like it a lot, despite using neither Swift nor Erlang.
My output looked exactly like an embedded WebKit UIView though, so then the problem became: what was I making that was appreciably better?
Could you please elaborate on the statement about Apple Watch? Apple Watch can connect to WiFi directly with Bluetooth off on its paired iPhone. Specific variants also support cellular networks directly without depending on the paired iPhone. So is it something more nuanced than the networking part that’s missing in Apple Watch?
cyberax•10h ago
Does this somehow allow sidestepping this? Since the data is all thread-local, it should be possible to use non-atomic counters?
dizlexic•10h ago
https://stackoverflow.com/questions/25542416/swift-with-no-a...
liuliu•10h ago
Swift introduced a bunch of ownership keywords to help you use value types for most needs, to sidestep reference counting and minimize copying.
Of course, to my understanding, "actor" in Swift is a "class"-like object, so it will be reference-counted. But I fail to see how that is different from other systems (as an actor itself has to be mutable, hence a reference object anyway).
brandonasuncion•9h ago
Example here: https://forums.swift.org/t/noncopyable-generics-in-swift-a-c...
An added plus is that the Swift compiler seems to stack-promote a lot more often, compared to class/ManagedBuffer implementations.
llm_nerd•10h ago
https://g.co/gemini/share/51670084cd0f - lame, but it references core concepts.
slavapestov•7h ago
In Swift 6 this is only true if the value’s type is Sendable.
llm_nerd•7h ago
Though the vast majority of cases where ARC would come into play are of the trivial variety.
Someone•9h ago
See https://dl.acm.org/doi/10.1145/3243176.3243195:
“BRC is based on the observation that most objects are only accessed by a single thread, which allows most RC operations to be performed non-atomically. BRC leverages this by biasing each object towards a specific thread, and keeping two counters for each object --- one updated by the owner thread and another updated by the other threads. This allows the owner thread to perform RC operations non-atomically, while the other threads update the second counter atomically.“
(I don’t know whether Swift uses this at the moment)