The future? I thought all apps were like this before this web2.0 thing ruined it.
'Offline-first' is trying to combine the benefits of both approaches.
While this may be true, the central issue is a different one: most users and/or developers are not very privacy-conscious, so they don't consider it worth the effort to solve the problems that go hand in hand with such distributed systems.
All this to say, it is not just the technical aspects that make it more difficult to provide local-first software.
Compared to what?
I.e., most people don’t care.
Local-first is optimal for creative and productivity apps. (Conversely, non-local-first are terrible for these.)
But most people are neither creative nor optimally productive (or care to be).
It's not that they "don't care", but that they don't know this is an issue that needs to be cared about. Like privacy: they don't think they need it until they do, but by then it's too late.
If you think this is only a problem for distributed systems, I have bad news for you.
Local-first and decentralized apps haven't become popular because SaaS has a vastly superior economic model, and more money means more to be invested in both polish (UI/UX) and marketing.
All the technical challenges of decentralized or local-first apps are solvable. They are no harder than the technical challenges of doing cloud at scale. If there was money in it, those problems would be solved at least as well.
Cloud SaaS is both unbreakable DRM (you don't even give the user the code, sometimes not even their data) and an impossible-to-evade subscription model. That's why it's the dominant model for software delivery, at least 90% of the time. The billing system is the tail that wags the dog.
There are some types of apps that have intrinsic benefits to being in the cloud, but they're the minority. These are apps that require huge data sets, large amounts of burstable compute, or that integrate tightly with real world services to the point that they're really just front-ends for something IRL. Even for these, it would be possible to have only certain parts of them live in the cloud.
There’s also an upcoming generation that doesn’t know what a filesystem is which also doesn’t help matters.
This is why I sometimes think it's hopeless. For a while there -- 90s into the 2000s -- we were building something called "computer literacy." Then the phones came out and that stopped completely. Now we seem to have inverted the old paradigm. In that era people made jokes about old people not being able to use tech. Today the older people (30s onward) are the ones who can use tech and the younger people can only use app centric mobile style interfaces.
The future is gonna be like: "Hey grandpa, can you help me figure out why my wifi is down?"
Local first tends to suck in practice. For example, Office 365 with documents in the cloud is so much better for collaborating than dealing with "conflicted copy" in Dropbox.
It sucks that you need an internet connection, but I think that drawback is worth it for never having to manually merge a sync conflict.
That has nothing to do with where the code lives and runs. There are unique technical challenges to doing it all at the edge, but there are already known solutions to these. If there was money in it, you'd have a lot of local first and decentralized apps. As I said, these technical challenges are not harder than, say, scaling a cloud app to millions of concurrent users. In some cases they're the same. Behind the scenes in the cloud you have all kinds of data sync and consistency enforcement systems that algorithmically resemble what you need for consistent fluid interaction peer to peer.
When multiple people work on a document at the same time, you will have conflicts that will become very hard to resolve. I have never seen a good UI for resolving non-trivial changes. There is no way to make this merging easy.
The only way to avoid the merge problem is to make sure that the state is synchronised before making changes. With cloud-based solutions this is trivial, since the processing happens on the server.
The local first variant of this would be that you have to somehow lock a document before you can work on it. I worked on a tool that worked like that in the early 2000s. Of course that always meant that records remained locked, and it was a bit cumbersome. You still needed to be online to work so you could lock the records you needed.
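Roughly, that workflow looks like the sketch below (the endpoints are hypothetical, just to illustrate). Note that a real system also needs lock expiry, or crashed clients leave records locked forever, which is exactly the failure mode described above:

```typescript
async function editWithLock(docId: string, edit: (text: string) => string) {
  // 1. Acquire the lock; fail if another user holds it. Requires being online.
  const lock = await fetch(`/api/docs/${docId}/lock`, { method: "POST" });
  if (!lock.ok) throw new Error("Document is locked by another user");
  try {
    // 2. Edit safely: nobody else can write while we hold the lock.
    const current = await (await fetch(`/api/docs/${docId}`)).text();
    await fetch(`/api/docs/${docId}`, { method: "PUT", body: edit(current) });
  } finally {
    // 3. Release -- the step that, in practice, is so easy to miss.
    await fetch(`/api/docs/${docId}/unlock`, { method: "POST" });
  }
}
```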
Now that people are used to having someone in a data center do their backing up and distributing for them, they don't want to do that work themselves again, privacy be damned.
I guess I should pare my devices back to exactly one device. Or just take out a subscription to a single service.
But also nowadays you want to have information from other computers. Everything from shared calendars to the weather, or a social media entry. There's so much more you can do with internet access, you need to be able to access remote data.
There's no easy way to keep things in sync, either. Look at the CAP theorem: you can decide which leg you can do without, but you can't solve the distributed-computing "problem". The best you can do is be aware of what tradeoff you're making.
Git has largely solved asynchronous decentralized collaboration, but it requires file formats that are ideally as human understandable as machine-readable, or at least diffable/mergable in a way where both humans and machines can understand the process and results.
Admittedly git's ergonomics aren't the best or most user friendly, but it at least shows a different approach to this that undeniably works.
People say git is too "complex" or "complicated" but I never saw end users succeeding with CVS or Mercurial or SVN or Visual Sourcesafe the way they do with Git.
"Enterprise" tools (such as business rules engines) frequently prove themselves "not ready for the enterprise" because they don't have proper answers to version control, something essential when you have more than one person working on something. People say "do you really need (the index)" or other things git has but git seemed to get over the Ashby's law threshold and have enough internal complexity to confront the essential complexity of enterprise version control.
Yes, but then you are not using a "local first" tool but a typical server based workflow.
Fortunately, a lot of what chafes with git are UX issues more than anything else. Its abstractions are leaky, and its default settings are outright bad. It's very much a tool built by and for kernel developers with all that entails.
The principle itself has a lot of redeeming qualities, and could be applied to other, similar syncing problems without most of the sharp edges that come with the particular implementation seen in git.
The merge workflow is not inherently complicated or convoluted. It's just that git is.
When DVCSes came out there were three contenders: darcs, mercurial and git.
I evaluated all three and found darcs was the most intuitive, but it was very slow. Git was a confused mess, and hg was a great compromise between speed and a simple, intuitive merge model.
I became a big hg advocate but I eventually lost that battle and had to become a git expert. I spent a few years being the guy who could untangle the mess when a junior messed up a rebase merge then did a push --force to upstream.
Now I think I'm too git-brained to think about the problem with a clear head anymore, but I think the fact that DVCS has never found any uptake outside of software development is a failure mostly attributable to git, and the fact that we as developers see DVCS as a "solved problem" beyond more tooling around git is a failure of imagination.
For local-first async collaboration on something that isn't software development, you'd likely want something that is a lot more polished, and has a much more streamlined feature set. I think ultimately very few of git's chafing points are due to its model of async decentralized collaboration.
What makes merging in git complicated? And what's better about darcs and mercurial?
(PS Not disagreeing just curious, I've worked in Mercurial and git and personally I've never noticed a difference, but that doesn't mean there isn't one.)
It was the first practical way to downsize mainframe applications.
Do I? What sort of information ...
> shared calendars
OK, yes, that would be a valid use. I can imagine some stressed executive with no signal in a tunnel wanting to change some planned event, but also to have the change superseded by an edit somebody else makes a few minutes later.
> the weather
But I don't usually edit the weather forecast.
> a social media entry
So ... OK ... because it's important that my selfie taken in a wilderness gets the timestamp of when I offline-pretend-posted it, instead of when I'm actually online and can see replies? Why is that? Or is the idea that I should reply to people offline while pretending that they can see, and then much later when my comments actually arrive they're backdated as if they'd been there all along?
It's a far, far more complicated mental model than simply posting it. It'd be a huge barrier for normal users (even tech-savvy users, I'd say). People want to post it online and that's it. No one wants an app that requires its users to be constantly aware of syncing state unless they really have no choice. We pretend we can just step on the gas, instead of mixing the gas with air and igniting it with a spark plug, until we need to change the damn plug.
At work: I write code, which is in version control. I write design documents (that nobody reads) and put them on a shared computer. I write presentations (you would be better off sleeping through them...) and put them on a shared computer. Often the above are edited by others.
Even at home, my grocery list is shared with my wife. I look up recipes online from a shared computer. My music (that I ripped from CDs) is shared with everyone else in the house. When I play a game I wish my saved games were shared with other game systems (I haven't had time since I had kids, more than 10 years ago). When I take notes about my kid's music lessons they are shared with my wife and kids...
It started with single computers, but they were so expensive nobody had them except labs. You wrote the program with your data, often toggling it in with switches.
From there we went to batch processing, then shared computers, then added networking, with file sharing and RPC. Then the personal computer came and it was back to toggling in your own programs, but soon we were running local apps, and now our computers are again mostly "smart terminals" (as opposed to dumb terminals), and the data is on shared computers again.
Sometimes we take data off the shared computer, but there is no perfect solution to distributed computing, and since networks are mostly reliable, nobody really wants that anyway. What we do want is control of our data, and that we don't get (mostly).
With the exception of messenger clients, desktop apps have mostly been "local-first" from day one.
At the time you're beginning to think about desktop behavior, it's also worth considering whether you should just build native.
To be precise, these apps were not local-_first_, they were local-_only_. Local-first implies that the app first and foremost works locally, but also that it, secondly, is capable of working online and non-locally (usually with some syncing mechanism).
There's no easy way to merge changes, but if you design around merging, then syncing becomes much less difficult to solve.
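For instance, here's a minimal sketch of what "designing around merging" can mean, with illustrative names (not from any particular library), assuming the data can be modeled as an append-only set of immutable entries; deletes would additionally need tombstones:

```typescript
interface Entry {
  id: string;        // globally unique (e.g. a UUID), so duplicates dedupe
  createdAt: number; // wall-clock ms, for display ordering only
  text: string;
}

type Replica = Map<string, Entry>;

function addEntry(replica: Replica, text: string): void {
  const id = crypto.randomUUID();
  replica.set(id, { id, createdAt: Date.now(), text });
}

// Union merge: commutative, associative, idempotent. Two replicas that
// have seen the same entries always end up identical, in any sync order.
function merge(a: Replica, b: Replica): Replica {
  return new Map([...a, ...b]);
}
```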
When you do server-side stuff, you control everything: what users can and cannot do.
This lets you reduce support costs, since it's easier to resolve issues even with an ad-hoc DB query, and, more importantly, it lets you retroactively lock more and more useful features behind a paywall. This is basically The DRM for your software, with an extra bonus: you don't even have to compete with previous versions of your own software!
I want my local programs back, but without regulatory change it will never happen.
Having built a sync product, it is dramatically simpler (from a technical standpoint) to require that clients are connected, send operations immediately to central location, and then succeed / fail there. Once things like offline sync are part of the picture, there's a whole set of infrequent corner cases that come in that are also very difficult to explain to non-technical people.
These are silly things like: If there's a network error after I sent the last byte to a server, what do I do? You (the client that made the request) don't know if the server actually processed the request. If you're completely reliant on the server for your state, this problem (cough) "doesn't exist", because when the user refreshes, they either see their change or they don't. But, if you have offline sync, you need to either have the server tolerate a duplicate submission, or you need some kind of way for the client to figure out that the server processed the submission.
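One common fix is sketched below (endpoint and header names are illustrative): the client attaches a unique idempotency key to each operation and retries, and the server deduplicates on that key, so a duplicate submission is harmless:

```typescript
async function submitOnce(op: unknown): Promise<void> {
  const key = crypto.randomUUID(); // generated once, reused across retries
  for (let attempt = 0; attempt < 5; attempt++) {
    let res: Response | undefined;
    try {
      res = await fetch("/api/ops", {
        method: "POST",
        headers: { "Content-Type": "application/json", "Idempotency-Key": key },
        body: JSON.stringify(op),
      });
    } catch {
      // Network error: we don't know whether the server processed the
      // request. Retrying with the SAME key is safe: the server dedupes.
    }
    if (res?.ok) return;           // processed now, or already processed before
    if (res && res.status < 500) { // a definite rejection: don't retry
      throw new Error(`rejected: ${res.status}`);
    }
    await new Promise((r) => setTimeout(r, 1000 * 2 ** attempt)); // backoff
  }
  throw new Error("giving up for now; queue the op for a later sync");
}
```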
[1] GitHub: https://github.com/hasanhaja/tasks-app/ [2] Deployed site: https://tasks.hasanhaja.com/
In a talk a few years ago [1], Martin Kleppmann (one of the authors of the paper that introduced the term "local-first") included this line:
> If it doesn't work if the app developer goes out of business and shuts down the servers, it's not local-first.
That is obviously not something most companies want! If the app works without the company, why are you even paying them? It's much more lucrative to make a company indispensable, where it's very painful to customers if the company goes away (i.e. they stop giving the company money).
[1] https://speakerdeck.com/ept/the-past-present-and-future-of-l...
My current thinking is that the only way we get substantial local-first software is if it's built by a passionate open-source community.
Look at single-player video games: you cannot get more ideal for local-first. Still, you need a launcher and an internet connection.
Apple comes close with CloudKit, in that it takes the backend service and makes it generic, basically making it an OS platform API, backed by Apple's own cloud. Basically cloud and app decoupled. But, the fundamental issue remains, in that it's proprietary and only available on Apple devices.
An open source Firebase/CloudKit-like storage API that requires no cloud service, works by p2p sync, with awesome DX that is friendly to regular developers, would be the holy grail for this one.
Dealing with eventually consistent data models is not so unusual these days, even for devs working on traditional cloud SaaS systems, since clouds are distributed systems themselves.
I would be very happy to see such a thing built on top of Iroh (a p2p network layer, with all the NAT hole punching, tunnelling and addressing solved for you) for example, with great mobile-first support. https://github.com/n0-computer/iroh
The main problem with any sync system that allows extensive offline use is in communicating how the reconciliation happens so users don't get frustrated or confused. When all reconciliation happens as a black box your app won't be able to do a good job at that.
It seems like most of those are apps where I'm creating or working on something by myself and then sharing it later. The online part is almost the nice-to-have. A lot of other apps are either near-real-time-to-real-time communication where I want sending to succeed or fail pretty much immediately and queueing a message for hours and delivering it later only creates confusion. Or the app is mostly for consuming and interacting with content from elsewhere (be that an endless stream of content a la most "social media", news, video, etc. or be it content like banking apps and things) and I really mostly care about the latest information if the information is really that important at all. The cases in those apps where I interact, I also want immediate confirmation of success or failure because it's really important or not important at all.
What are the cases where offline-first is really essential? Maybe things that update, but referencing older material can be really useful or important (which does get back to messaging and email in particular, but other than something that's designed to be async like email, queueing actions when offline is still just nice-to-have in the best cases).
Otherwise the utility of CRDTs, OT, et al. is mostly collaborative editing tools that still need to be mostly online for the best experience.
It is interesting. I've thought about the things I do in non-messaging apps (messaging apps are online-first for obvious reasons), and all of them create something which can be EXPORTED to an online presence, but doesn't require a connected app.
Code? I write it locally and use a separate app to share it: git. Yes, code is a collaborative creation (I'm working in a team), but it is still a separate tool and I like it that way, as I control what I publish for my colleagues.
Photos? Of course I want to share result, but I'm working on RAW files with non-destructive editing and I want to share final bitmap (as JPEG) and not RAW data and editing steps.
Same with music, if I create any (I don't).
Texts must be polished in solitude and presented as final result (maybe, as typographically set one, as PDF).
All my "heavy" applications are and should be offline-first!
I think most real-world applications fall under either "has to be done online", or "if there are conflicts, keep both files and let the user figure it out". Trying to automatically merge two independent edits can quickly turn into a massive mess, and I really don't want apps to do that automagically for me without giving me git-like tooling to fix the inevitable nightmare.
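The "keep both" strategy is also about the simplest thing to implement; a rough sketch (Dropbox-style conflicted-copy naming, purely illustrative):

```typescript
import { rename } from "node:fs/promises";

// When a sync pull finds that the local file and the server's version have
// diverged, keep both: the remote copy lands next to the local one under a
// conflicted-copy name, and the user resolves it manually.
async function keepBoth(path: string, remoteTmp: string, host: string) {
  await rename(remoteTmp, `${path} (conflicted copy from ${host})`);
  // The local version at `path` stays untouched; nothing is auto-merged.
}
```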
Sync problems are harder than just 'use a CRDT'.
What counts as 'consistent' depends on the domain and the exact thing that is being modelled.
Luckily this is a use case where conflict resolution is pretty straightforward (only you can update your workout data, and Last Write Wins)
[1] https://apps.apple.com/us/app/titan-workout-tracker/id644949...
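A rough sketch of what that can look like (illustrative types, not the app's actual code):

```typescript
interface Workout {
  id: string;
  reps: number;
  updatedAt: number; // ms since epoch; ideally a hybrid logical clock
  deviceId: string;  // tie-breaker when timestamps are equal
}

// Last-write-wins is sound here precisely because only one user (on
// several devices) ever edits their own workouts.
function lwwMerge(a: Workout, b: Workout): Workout {
  if (a.updatedAt !== b.updatedAt) return a.updatedAt > b.updatedAt ? a : b;
  return a.deviceId > b.deviceId ? a : b; // deterministic tie-break
}
```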
Or Federated apps, again Self-Hosted.
And I think network infrastructure has been holding us back horribly. I think with something like Tailscale, we can make local-only apps or federated apps way, way easier to write.
I've found it to be a fun way to build apps.
Anyone use IMAP email? It works just fine (save for IMAP's design, but that's another story).
Same with CalDAV.
For study I use Anki, and it has brilliant sync (it can even automagically merge study changes when I study some items on mobile and others on desktop).
Many seem to claim that it's impossible to sync correctly in a "collaborative environment", as if that always involved dozens of people constantly working on and editing the document (whose evolution would be utterly difficult to track)… Most of the time it's not that collaborative, and having the data locally makes it easier to work with.
OTOH not everything has to be (web-)app…
In fact, many "normal" phone apps are basically just a web-site inside a thin wrapper. So the difference is largely academic in many cases.
It's not giving an "I'm installing an application" vibe, it is giving "I am creating a shortcut to a website" vibes. Apps are installed via the app store, not as weird quasi-bookmarks in your browser.
1. In the beginning, there were mainframes and terminals. You saved resources by running apps on a server and connecting to them with cheap terminals.
2. Then, PCs happened. You could run reasonably complex programs on them, but communication capabilities were very limited: dialup modem connections or worse.
3. Then, the internet happened, and remote web apps overtook local apps in many areas (most local apps that survived required heavy use of graphics, like games, which is difficult even with the modern internet).
4. Then, smartphones happened. At the time of their appearance they didn't have ubiquitous network coverage, so many early apps for these platforms were local. This is eroding too, as network coverage improves.
So if you look at this, it is clear that the main share of computing has oscillated back and forth between server and local, moving local only when communication capabilities do not permit remote running; once comms catch up, the task of running apps moves back to servers.
How do you make self-hosting appealing to more than weird nerds?
Regular people don't like the Magic Box Which Makes Things Work. They'll begrudgingly shove it in a cupboard and plug it in, but even that is already asking a lot. If it needs any kind of regular maintenance or attention, it is too much effort. "Plug in a hard drive once a month for backups"? You'll have just as much luck asking them to fly to Mars and yodel the national anthem while doing a cartwheel.
2. People value convenience over privacy and security
3. Cloud is easy.
Business trumps perfect software engineering almost every time.
I can't believe so many replies are struggling with the easy answer: privacy, security, "local first", "open source", "distributed", "open format" etc etc etc are developer goals projected onto a majority cohort of people who have never, and will never, care and yet hold all the potential revenue you need.
In this case, you would need two accounts, a credit and a debit account. Device A would write +20 to the credit account and -20 to the debit account, and device B would write -20 to the credit account and +20 to the debit account. Then, using an HLC (or even not, depending on what your use-case is), you get back to the 100 that, from the description of the problem, seems to be the correct answer.
Obviously, if you are editing texts there are very different needs, but this as described is right in the wheelhouse of double-entry accounting.
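A sketch of how that plays out, with illustrative types: each device appends immutable, uniquely-keyed ledger entries instead of overwriting a balance, merge is set union, and the balance is derived by summation, so both devices converge on 100 regardless of sync order:

```typescript
interface LedgerEntry {
  id: string;      // unique, so the same entry merged twice counts once
  account: "credit" | "debit";
  amount: number;  // signed
}

// Device A records its transfer:
const a: LedgerEntry[] = [
  { id: "a1", account: "credit", amount: +20 },
  { id: "a2", account: "debit",  amount: -20 },
];
// Device B, offline, records the opposite transfer:
const b: LedgerEntry[] = [
  { id: "b1", account: "credit", amount: -20 },
  { id: "b2", account: "debit",  amount: +20 },
];

// Merge = dedupe by id; the balance is derived, never stored.
const merged = [...new Map([...a, ...b].map((e) => [e.id, e] as const)).values()];
const credit = 100 + merged
  .filter((e) => e.account === "credit")
  .reduce((sum, e) => sum + e.amount, 0);
console.log(credit); // 100 -- same on both devices, in any sync order
```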
...and here we go again. Time is a flat circle.
I think there should be way more local apps with sync capabilities. I haven't finished the sync feature in WithAudio, and you have some very nice ideas there, especially the eventual consistency. That's what will work.
But I must say that for sure the most difficult part of local apps is debugging customer issues. For someone who is used to logs and traces and metrics, and to most users running one version of the code in the backend, debugging an issue on a customer's computer, on an old version, without much insight (and without destroying all your premises of privacy), is very challenging.
If I develop it as a web application, then I can do all the work on my computer, test with various browsers, and deliver a working result. If the customer has issues, I can likely reproduce them on my machine.
But if it were a desktop application, my feeling was that testing would be a nightmare. I would have to set up a machine like my client's, or worse, visit the client and work with them directly, because my own machine is just too different from what the client uses. Not to forget distribution and updates, either.
In short: web -> easy, desktop -> difficult.
"Local-first apps" are the worst of everything - crappy, dumbed down web UI; phoning home, telemetry, and other privacy violations; forced upgrades; closed source, etc.
At work, I don't have a choice, so it's Google Docs or Office 365 or whatever. And in that context it actually makes sense to have data stored on some server somewhere because it's not really my data but the company's. But at home I'll always choose the strictly offline application and share my data and files some other way.
Which apps are you talking about here? That description doesn't make any sense to me.
What does any of this have to do with local first? Most online only apps have this stuff too.
Realistically the reason is probably that it's easier to make changes if you assume everything is phoning home to the mother ship for everything.
Also, an unrelated nit: "Why Local-First Apps Haven’t Become Popular?" is not a question. "Why Local-First Apps Haven’t Become Popular" is a noun phrase, and "Why Haven't Local-First Apps Become Popular?" is a question. You wouldn't say "How to ask question?" but instead "How do you ask a question?"
I think the truth of your statement is more that free software tends towards what you might call "offline" software (e.g., software that doesn't sync or offer real-time collaboration), because there's more friction for having a syncing backend with free software.
Our recent work on Loro CRDTs aims to bridge this gap by combining them with common UI state patterns. In React, developers can keep using `setState` as usual, while we automatically compute diffs and apply them to CRDTs; updates from CRDTs are then incrementally synced back into UI state [1]. This lets developers follow their existing habits without worrying about consistency between UI state and CRDT state. Paired with the synchronization protocol and hosted sync service, collaboration can feel as smooth as working with a purely local app. We’ve built a simple, account-free collaborative example app[2]. It only has a small amount of code related to synchronization; the rest looks almost the same as a purely local React app.
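Schematically (a simplified sketch with a hypothetical interface, not the actual Loro API), the pattern looks something like this:

```typescript
import { useEffect, useState } from "react";

// Assumed stand-in for a CRDT binding layer:
interface CrdtStore<T> {
  applyLocalDiff(prev: T, next: T): void;             // diff -> CRDT ops -> peers
  subscribe(onRemote: (next: T) => void): () => void; // remote ops -> new state
  snapshot(): T;
}

function useSyncedState<T>(store: CrdtStore<T>): [T, (next: T) => void] {
  const [state, setState] = useState<T>(() => store.snapshot());

  // Remote edits arrive like any other state update.
  useEffect(() => store.subscribe(setState), [store]);

  // Local edits look like plain setState; the diffing happens underneath.
  // (A real implementation must also handle rapid successive updates,
  // which this sketch glosses over.)
  const set = (next: T) => {
    store.applyLocalDiff(state, next);
    setState(next);
  };

  return [state, set];
}
```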
https://en.wikipedia.org/wiki/HCL_Notes
If that was deterministic, that was a very bad idea.