YMMV.
Of course, Doom 2 is full of Carmack shenanigans to squeeze every possible ounce of performance out of every byte, written in hand optimized C and assembly. Nextcloud is delivered in UTF-8 text, in a high level scripting language, entirely unoptimized with lots of low hanging fruit for improvement.
This is why I think there's another version for customers who are paying for it, with tuning, optimization, whatever.
Actually Carmack did squeeze every possible ounce of performance out of DOOM, however that does not always mean he was optimizing for size. If you want to see a project optimized for size you might check out ".kkrieger" from ".theprodukkt", which accomplishes a 3D shooter in 97,280 bytes.
You know how many characters 20MB of UTF-8 text is, right? If we are talking about javascript it's probably mostly ASCII, so quite close to 20 million characters. If we take a wild estimate of 80 characters per line, that would be 250,000 lines of code.
I personally think 20MB is outrageous for any website, webapp or similar. Especially if you want to offer a product to a wide range of devices on a lot of different networks. Reloading a huge chunk of that on every page load feels like bad design.
Developers usually take for granted the modern convenience of a good network connection; imagine using this on a slow connection, it would be horrid. Even in western "first world" countries there are still quite a few people connecting with outdated hardware or slow connections, and we often forget them.
If you are making any sort of webapp you ideally have to think about every byte you send to your customer.
This is like when people reminisce about the performance of windows 95 and its apps while forgetting about getting a blue screen of death every other hour.
Currently using Pop + Cosmic.
All said... I actually like TypeScript and React fine for teams of developers... I think NextCloud likely has coordination issues that go beyond the language or even libraries used.
[1]: https://www.youtube.com/watch?v=iXgseVYvhek
Nextcloud's client support is very good though and it has some great apps, I use PhoneTrack on road trips a lot
If every aspect of Nextcloud was as clean, quick and light-weight as PhoneTrack this world would be a different place. The interface is a little confusing but once I got the hang of it it's been awesome and there's just nothing like it. I use an old phone in my murse with PhoneTrack on it and that way if I leave it on the bus (again) I actually have a chance of finding it.
No $35/month subscription, and I'm not sharing my location data with some data aggregator (aside from Android of course).
I'm extremely tempted to write a lightweight alternative. I'm thinking sourcehut [1] vs GitHub.
Nextcloud is an old product that inherits from Owncloud, developed in PHP since 2010. It has extensibility at its core, with thousands of extensions available.
So yaaay compare it with source hut ...
> So yaaay compare it with source hut ...
I'm not saying that sourcehut is the same in any way, but I want the difference between GitHub and sourcehut to be the difference between NextCloud and alternative.
> Nextcloud is an old product that inherits from Owncloud, developed in PHP since 2010.
Tough situation to be in, I don't envy it.
> It has extensibility at its core through the thousands of extensions available.
Sure, but I think for some limited use cases, something better could be imagined.
So yes, not perfect, bloated JS, but it works and is maintained.
So I'd rather thank all the developers involved in Nextcloud than whine about bloated JS.
Good news! You can do both.
Do I need them for my home server? No. Do I need them for my company? Yes, but costs compared to MS 365 are negligible.
There are a lot of requests made in general; these can be good, bad or indifferent depending on the actual connection channels and configuration with the server itself. The pieces are too disconnected from each other... the NextCloud org has 350 repositories on GitHub. I'm frankly surprised it's more than 30 or so... it's literally 10x what would already be a generous expectation... I'd rather deal with a crazy mono-repo at that point.
> On a clean page load [of nextcloud], you will be downloading about 15-20 MB of Javascript, which does compress down to about 4-5 MB in transit, but that is still a huge amount of Javascript. For context, I consider 1 MB of Javascript to be on the heavy side for a web page/app.
> …Yes, that Javascript will be cached in the browser for a while, but you will still be executing all of that on each visit to your Nextcloud instance, and that will take a long time due to the sheer amount of code your browser now has to execute on the page.
While Nextcloud may have a ~60% bigger JS payload, sounds like perhaps that could have been a bit of a misdirection/misdiagnosis, and it's really about performance characteristics of the JS rather than strictly payload size or number of lines of code executed.
On a Google Doc load chosen by whatever my browser location bar autocompleted, I get around twenty JS files, the two biggest are 1MB and 2MB compressed.
1. Indiscriminate use of packages when a few lines of code would do.
2. Loading everything on every page.
3. Poor bundling strategy, if any.
4. No minification step.
5. Polyfilling for long dead, obsolete browsers
6. Having multiple libraries that accomplish the same thing
7. Using tools and then not doing any optimization at all (like using React and not enabling React Runtime)
Arguably things like an email client and file storage are apps and not pages so a SPA isn't unreasonable. The thing is, you don't end up with this much code by being diligent and following best practices. You get here by being lazy or uninformed.
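For illustration, here's a minimal build-script sketch of what fixing points 2-5 above can look like, assuming esbuild as the bundler (which is not necessarily what Nextcloud actually uses):

```typescript
// build.ts -- a sketch only, assuming esbuild as the bundler (not
// necessarily what Nextcloud uses). It addresses points 2-5 above:
// per-page entry points, real bundling with shared chunks, minification,
// and a modern browser target so no legacy polyfills get shipped.
import { build } from "esbuild";

await build({
  entryPoints: {
    // Hypothetical per-page entries: each page loads only its own bundle.
    files: "src/pages/files.ts",
    calendar: "src/pages/calendar.ts",
  },
  bundle: true,       // one bundle per entry instead of hundreds of small files
  splitting: true,    // shared dependencies land in common chunks, fetched once
  format: "esm",      // required for splitting
  minify: true,       // whitespace/identifier stripping is often a 2-3x size win
  target: ["es2020"], // modern baseline; drop polyfills for long-dead browsers
  outdir: "dist",
  metafile: true,     // lets you inspect what actually ends up in each bundle
});
```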
They also treat every "module"/"app", whatever you call it, as a completely distinct SPA without providing much of an SDK/framework. Which means each app adds its own deps, manages its own build, etc...
Also don't forget that an app can even be part of a screen, not the whole thing.
82 / 86 requests; 1,694 kB / 1,754 kB transferred; 6,220 kB / 6,281 kB resources. Finish: 11.73 s; DOMContentLoaded: 1.07 s; Load: 1.26 s
What frustrates me is that it looks like it works, but once in a while it breaks in a way that is pretty much irreparable (or at least not in a practical way).
I want to run an iOS/Android app that backs up images on my server. I tried the iOS app and when it works, it's cool. It's just that once in a while I get errors like "locked webdav" files and it never seems to recover, or sometimes it just stops synchronising and the only way to recover seems to be to restart the sync from zero. It will gladly upload 80GB of pictures "for nothing", discarding each one when it arrives on the server because it already exists (or so it seems, maybe it just overwrites everything).
The thing is that I want my family to use the app, so I can't access their phone for multiple hours every 2 weeks; it has to work reliably.
If it was just for backing up my photos... well I don't need Nextcloud for that.
Again, alternatives just don't seem to exist, where I can install an app on my parent's iOS and have it synchronise their photo gallery in the background. Except I guess iCloud, that is.
I have lots of txt files on my phone which are just not synced up to my server (the files on the server are 0 bytes long).
I'm using txt files to take notes because the Notes app never worked for me (I get sync errors on any Android phone while it works on iPhone).
I don't say this to diminish anyone else's contribution or criticize the software, just to call out the absolutely herculean feat this one person accomplished.
For us Nextcloud AIO is the best thing under the sun. It works reasonably well for our small company (about 10 ppl) and saves us from Microsoft. I'm very grateful to the developers.
Hopefully they are able to act upon such findings or rewrite it in Go :-). Mmh, if Berlin (Germany) didn't waste so much money on ill-advised, ideology-driven and long-term state-destroying actions and "NGOs", they would have enough money to fund 100s of such rewrites. Alas...
My biggest gripe with having used it for far longer than I should have was always that it expected far too much maintenance (4 month release cadence) to make sense for individual use.
Doing that kind of regular upkeep on a tool meant for a whole team of people is a far more reasonable cost-benefit analysis. Especially since it only needs one technically savvy person working behind the scenes, and is very intuitive and familiar on its front-end. Making for great savings overall.
The only downside is you can't use apps/plugins which require additional local tools (e.g. ocrmypdf) but others can be used just fine.
Calling remotely hosted services works (e.g. if you have Elasticsearch on a VPS and set up the Nextcloud full-text search app accordingly).
* or same, if excluding nextcloud talk, but then missing a chat feature
IIRC they also display a banner on the login screen to all users advertising the enterprise license, and start emailing enterprise ads to all admin users.
Their "fair use policy"[1] also includes some "and more" wording.
I find your "open-source-washed" remark deplaced and quite deragoraty. Nextcloud is, imo, totally right to (try to) monetize. They have to, they must further improve the technical backbone to stay competitive with the big boys.
> NOTE: full bidirectional sync, like what nextcloud and syncthing does, will never be supported! Only single-direction sync (server-to-client, or client-to-server) is possible with copyparty
Is sync not the primary use of nextcloud?
It makes a little more sense when you’re using their cloud version, because otherwise you’re storing the data twice.
It's what I want to try next. Written in go, it looks promising.
I had really good luck with Seafile[0]. It's not a full groupware solution, just primarily a really good file syncing/Dropbox solution.
Upsides are everything worked reliably for me, it was much faster, does chunk-level deduplication and some other things, has native apps for everything, is supported by rclone, has a fuse mount option, supports mounting as a "virtual drive" on Windows, supports publicly sharing files, shared "drives", end-to-end encryption, and practically everything else I'd want out of "file syncing solution".
The only thing I didn't like about it is that it stores all of your data as, essentially, opaque chunks on disk that are pieced together using the data in the database. This is how it achieves the performance, deduplication, and other things I _liked_. However it made me a little nervous that I would have a tough time extracting my data if anything went horribly wrong. I took backups. Nothing ever went horribly wrong over 4 or 5 years of running it. I only stopped because I shelved a lot of my self-hosting for a bit.
A good idea is to have it on an always-on server and add your share as an encrypted one (like you set the password on all your apps but not on the server); this pretty much results in a dropbox-like experience since you have a central place to sync even when your other devices are not online
I use it on my phone (configured to only sync on WiFi), laptop (connected 99% of the time), and server (up 100% of the time).
The always-up server/laptop as a "master node" are probably key.
Like if I backup photos from iOS, then remove a subset of those from iOS to make space on the phone (but obviously I want to keep them on the cloud), and later the mobile app gets out of sync, I don't want to end up in a situation where some photos are on iOS, some on the cloud, but none of the devices has everything, and I have no easy way to resync them.
I have had the server itself fail in strange ways where I had to restart it. I had to do a full fresh install once when it got hopelessly confused and I was getting database errors saying records either existed when they shouldn't or didn't exist when they should.
I think I am a pretty skilled sysadmin for these types of things, having both designed and administered very large distributed systems for two decades now, but maybe I am doing things wrong, but I think there are just some gotchas still with the project.
iCloud / Google Photos just don't have that, they really never lose a photo. It's very difficult for me to convince my family to move to something that may lose their data, when iCloud / Google Photos works and is really not that expensive.
For some reason the app disconnected from my account in the background from time to time (annoying but didn't think it was critical). Once I pasted data on Nextcloud through the Files app integration, it didn't sync because it was disconnected and didn't say anything, and it lost the data.
I know, it sucks that the official apps are buggy as hell, but the server side is real solid
Maybe paying customers are getting a different/updated/tuned version of it. Maybe not. But the only thing that keeps me using it is that there aren't any real self-hosted alternatives.
Why is it slow? If you just blink or take a breath, it touches the database. Years ago I tried to optimise it a bit and noticed that there is a horrible amount of DB transactions there without any apparent reason.
Also, the Android client is so broken...
But the feeling is that the outdated or simply bad decisions aren't fixed or redesigned.
It could be made 100 times better.
Eventually I ran into FileRun and loved it, even though it wasn't completely open source. FileRun is fast, worked on both desktop and mobile via browser nicely, and I never had an issue with it. It was free for personal use a few years ago, and unfortunately is not anymore. But it's worth the license if you have the money for it.
I tried setting up SeaFile but I had issues getting it working via a reverse proxy and gave up on it.
I like copyparty (https://github.com/9001/copyparty) - really dead simple to use and quick like FileRun - but the web interface is not geared towards casual users. I also miss FileRun's "Request a file" feature which worked very nicely if you just wanted someone to upload a file to you and then be done.
The only precaution I can think of is that copyparty's .hist folder should probably not be synced between devices. So if you intend to share an entire copyparty volume, or a folder which contains a copyparty volume, then you could use the `--hist` global-option or `hist` volflag to put it somewhere else.
As for high CPU usage, this would arise from copyparty deciding to reindex a file when it detects that the file has been modified. This shouldn't be a concern unless you point it at a folder which has continuously modifying files, such as a file that is currently being downloaded or otherwise slowly written to.
With the disclaimer that I've never used Filerun, I think this can be replicated with copyparty by means of the "shares" feature (--shr). That way, you can create a temporary link for other people to upload to, without granting access to browse or download existing files. It works like this: https://a.ocv.me/pub/demo/#gf-bb96d8ba&t=13:44
Here is a blog post I wrote at the time about the vulnerability (CVE-2020-8155): https://tripplyons.com/blog/nextcloud-bug-bounty
I could not find a way to do this without pdf.js.
What is the problem with that exactly in your case?
So I guess it is possible
Last time I heard a certain privacy community recommended against Nextcloud due to some issues with Nextcloud E2EE.
for me it's a family photo backup with calendars (private and shared ones) running in a VM on the net.
its webui is rarely used by anyone (except me), everyone is using their phones (calendars, files).
does it work? yes. does anyone other than me care about the bugs? no. but noone really _uses_ it as if it was deployed for a small office of 10-20-30 people. on the other hand, there are companies paying for it.
for this,
The Javascript performance trace shows over 50% of the work is in making the asynchronous calls to pull those calendars and other data one by one, and then in all the refresh updates it causes when putting them onto the page.
Supporting all these N calendar calls, it pulls individually for calendar rooms, calendar resources and "principals" for the user. All separate individual network calls, some of which must be gating the later individual calendar calls.
It's not just that, it also makes a call for notifications, groups, user status and multiple heartbeats to complete the page as well, all before it tries to get the calendar details.
This is why I think it feels slow: it's pulling down the page and then the javascript is pulling down all the bits of data for everything on the screen with individual calls, waiting for the responses before it can progress to make the further calls, of which there can be N many depending on what the user is doing.
So across the local network (2.5Gbps) that is a second, and most of it is waiting for the network. If I use the regular 4G level of throttling it takes 33.10 seconds! Really goes to show how badly this design copes with extra latency.
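To make that waterfall concrete, here's a sketch (the endpoint path is invented for illustration, not Nextcloud's real API) of the difference between awaiting each call in sequence and issuing them concurrently:

```typescript
// Sequential: N calendars => N back-to-back round trips. On a ~300 ms RTT
// mobile link, 20 calendars is already ~6 seconds of pure waiting.
async function loadCalendarsSequentially(ids: string[]) {
  const calendars = [];
  for (const id of ids) {
    calendars.push(await fetch(`/api/calendars/${id}`).then((r) => r.json()));
  }
  return calendars;
}

// Concurrent: all requests go out at once, so the cost is roughly one round
// trip (better still would be one batched endpoint returning everything).
async function loadCalendarsConcurrently(ids: string[]) {
  return Promise.all(
    ids.map((id) => fetch(`/api/calendars/${id}`).then((r) => r.json()))
  );
}
```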
When it comes to JS optimization in the browser there's usually a few great big smoking guns:
1. Tons of tiny files: Bundle them! Big bundle > zillions of lazy-loaded files.
2. Lots of AJAX requests: We have WebSockets for a reason!
3. Race conditions: Fix your bugs :shrug:
4. Too many JS-driven animations: Use CSS or JS that just manipulates CSS.
Nextcloud appears to be slow because of #2. Both #1 and #2 are dependent on round-trip times (HTTP request to server -> HTTP response to client) which are the biggest cause of slowness on mobile networks (e.g. 5G). Modern mobile network connections have plenty of bandwidth to deliver great big files/streams but they're still super slow when it comes to round-trip times. Knowing this, it makes perfect sense that Nextcloud would be slow AF on mobile networks because it follows the REST philosophy.
My controversial take: GIVE REST A REST already! WebSockets are vastly superior and they've been around for FIFTEEN YEARS now. Do I understand why they're so much lower latency than REST calls on mobile networks? Not really: In theory, it's still a round-trip but for some reason an open connection can pass data through an order of magnitude (or more) lower latency on something like a 5G connection.
If there's 15MB of JS that gets run immediately after page load, that's a different story. Especially if there's lots of nested calls. Ever drill down deep into a series of function calls inside the performance report for the JS on a web page? The more layers of nesting you have, the greater the overhead.
DRY as a concept is great from a code readability standpoint but it's not ideal performance when it comes to things like JS execution (haha). I'm actually disappointed that modern bundlers don't normally inline calls at the JS layer. IMHO, they rely too much on the JIT to optimize hot call sites when that could've been done by the bundler. Instead, bundlers tend to optimize for file size which is becoming less and less of a concern as bandwidth has far outpaced JS bundle sizes.
The entire JS ecosystem is a giant mess of "tiny package does one thing well" that is dependent on n layers of "other tiny package does one thing well." This results in LOADS of unnecessary nesting when the "tiny package that does one thing well" could've just written their own implementation of that simple thing it relies on.
Don't think of it from the perspective of, "tree shaking is supposed to take care of that." Think of it from the perspective of, "tree shaking is only going to remove dead/duplicated code to save file sizes." It's not going to take that 10-line function that handles <whatever> and put that logic right where it's used (in order to shorten the call tree).
Also, 15MB of JS is nothing on modern "low end devices". Even an old, $5 Raspberry Pi 2 won't flinch at that and anything slower than that... isn't my problem! Haha =)
There comes a point where supporting 10yo devices isn't worth it when what you're offering/"selling" is the latest & greatest technology.
It shouldn't be, "this is why we can't have nice things!" It should be, "this is why YOU can't have nice things!"
Please don't.
It's because a TLS handshake takes more than one roundtrip to complete. Keeping the connection open means the handshake needs to be done only once, instead of over and over again.
I was very curious so I asked AI to explain why websockets would have such lower latency than regular HTTP and it gave some (uncited, but logical) reasons:
Once a WebSocket is open, each message avoids several sources of delay that an HTTP request can hit—especially on mobile. The big wins are skipping connection setup and radio wakeups, not shaving a few header bytes.
Why WebSocket “ping/pong” often beats HTTP GET /ping on mobile
No connection setup on the hot path
HTTP (worst case): DNS + TCP 3‑way handshake + TLS handshake (HTTPS) before you can send the request. On mobile RTTs (60–200+ ms), that’s 1–3 extra RTTs, i.e., 100–500+ ms just to get started.
HTTP with keep‑alive/H2/H3: Better (no new TCP/TLS), but pools can be empty or closed by OS/radios/idle timers, so you still pay setup sometimes.
WebSocket: You pay the TCP+TLS+Upgrade once. After that, a ping is just one round trip on an already‑open connection.
Mobile radio state promotions
Cellular modems drop to low‑power states when idle. A fresh HTTP request can force an RRC “promotion” from idle to connected, adding tens to hundreds of ms.
A long‑lived WebSocket with periodic keepalives tends to keep the radio in a faster state or makes promotion more likely to already be done, so your message departs immediately.
Trade‑off: keeping the radio “warm” costs battery; most realtime apps tune keepalive intervals to balance latency vs power.
Fewer app/stack layers per message
HTTP request path: request line + headers (often cookies, auth), routing/middleware, logging, etc. Even with HTTP/2 header compression, the server still parses and runs more machinery.
WebSocket after upgrade: tiny frame parsing (client→server frames are 2‑byte header + 4‑byte mask + payload), often handled in a lightweight event loop. Much less per‑message work.
No extra round trips from CORS preflight
A simple GET usually avoids preflight, but if you add non‑safelisted headers (e.g., Authorization) the browser will first send an OPTIONS request. That’s an extra RTT before your GET.
WebSocket doesn’t use CORS preflights; the Upgrade carries an Origin header that servers can validate.
Warm path effects
Persistent connections retain congestion window and NAT/firewall state, reducing first‑packet delays and occasional SYN drops that new HTTP connections can encounter on mobile networks.
What about encryption (HTTPS/WSS)? Handshake cost: TLS adds 1–2 RTTs (TLS 1.3 is 1‑RTT; 0‑RTT is possible but niche). If you open and close HTTP connections frequently, you keep paying this. A WebSocket pays it once, then amortizes it over many messages.
After the connection is up, the per‑message crypto cost is small compared to network RTT; the latency advantage mainly comes from avoiding repeated handshakes.
How much do headers/bytes matter? For tiny messages, both HTTP and WS fit in one MTU. The few hundred extra bytes of HTTP headers rarely change latency meaningfully on mobile; the dominant factor is extra round trips (connection setup, preflight) and radio state.
When the gap narrows: If your HTTP requests reuse an existing HTTP/2 or HTTP/3 connection, have no preflight, and the radio is already in a connected state, a minimal GET /ping and a WS ping/pong both take roughly one network RTT. In that best case, latencies can be similar.
In real mobile conditions, the chances of hitting at least one of the slow paths above are high, so WebSocket usually looks faster and more consistent.

> Yes and no: There's still a rather large latency improvement even when you're using plain HTTP (not that you should go without encryption).
Of course. An unencrypted HTTP request takes a single roundtrip to complete. The client sends the request and receives the response. The only additional cost is to set up the connection, which is also saved when the connection is kept open with a websocket.
/s
* If the browser has an optimal path for it, use HTTP (e.g. images where it caches them automatically or file uploads where you get a "free" progress API).
* If I know my end users will be behind some shitty firewall that can't handle WebSockets (like we're still living in the early 2010s), use HTTP.
* Requests will be rare (per client): Use HTTP.
* For all else, use WebSockets.
WebSockets are just too awesome! You can use a simple event dispatcher for both the frontend and the backend to handle any given request/response and it makes the code sooooo much simpler than REST. Example: WSDispatcher.on("pong", pongFunc);
...and `WSDispatcher` would be the (singleton) object that holds the WebSocket connection and has `on()`, `off()`, and `dispatch()` functions. When the server sends a message like `{"type": "pong", "payload": "<some timestamp>"}`, the client calls `WSDispatcher.dispatch("pong", "<some timestamp>")` which results in `pongFunc("<some timestamp>")` being called.

It makes reasoning about your API so simple and human-readable! It's also highly performant and fully async. With a bit of Promise wrapping, you can even make it behave like a synchronous call in your code which keeps the logic nice and concise.
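A minimal sketch of what such a dispatcher can look like (the `on()`/`off()`/`dispatch()` names are from above; the wire format and everything else here are assumptions):

```typescript
type Handler = (payload: unknown) => void;

class Dispatcher {
  private ws: WebSocket;
  private handlers = new Map<string, Set<Handler>>();

  constructor(url: string) {
    this.ws = new WebSocket(url);
    // Server messages look like {"type": "pong", "payload": ...}
    this.ws.onmessage = (ev) => {
      const { type, payload } = JSON.parse(ev.data);
      this.dispatch(type, payload);
    };
  }

  on(type: string, handler: Handler) {
    if (!this.handlers.has(type)) this.handlers.set(type, new Set());
    this.handlers.get(type)!.add(handler);
  }

  off(type: string, handler: Handler) {
    this.handlers.get(type)?.delete(handler);
  }

  dispatch(type: string, payload: unknown) {
    this.handlers.get(type)?.forEach((h) => h(payload));
  }

  send(type: string, payload: unknown) {
    this.ws.send(JSON.stringify({ type, payload }));
  }
}

// The singleton the example above calls WSDispatcher (URL is a placeholder):
export const WSDispatcher = new Dispatcher("wss://example.invalid/ws");
```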
In my latest pet project (collaborative editor) I've got the WebSocket API using a strict "call"/"call:ok" structure. Here's an example from my WEBSOCKET_API.md:
### Create Resource
```javascript
// Create story
send('resources:create', {
  resource_type: 'story',
  title: 'My New Story',
  content: '',
  tags: {},
  policy: {}
});

// Create chapter (child of story)
send('resources:create', {
  resource_type: 'chapter',
  parent_id: 'story_abc123', // This would actually be a UUID
  title: 'Chapter 1'
});

// Response:
{
  type: 'resources:create:ok', // <- Note the ":ok"
  resource: { id: '...', resource_type: '...', ... }
}
```
I've got a `request()` helper that makes the async nature of the WebSocket feel more like a synchronous call. Here's what that looks like in action:

```typescript
const wsPromise = getWsService(); // Returns the WebSocket singleton

// Create resource (story, chapter, or file)
async function createResource(data: ResourcesCreateRequest) {
  loading.value = true;
  error.value = null;
  try {
    const ws = await wsPromise;
    const response = await ws.request<ResourcesCreateResponse>(
      "resources:create",
      data // <- The payload
    );
    // resources.value because it's a Vue 3 `ref()`:
    resources.value.push(response.resource);
    return response.resource;
  } catch (err: any) {
    error.value = err?.message || "Failed to create resource";
    throw err;
  } finally {
    loading.value = false;
  }
}
```
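The `request()` helper itself isn't shown; a rough sketch of how such a helper can be built on the "call"/"call:ok" convention (the wire format and error shape here are guesses, not the actual implementation):

```typescript
function request<T>(
  ws: WebSocket,
  type: string,
  data: Record<string, unknown>
): Promise<T> {
  return new Promise((resolve, reject) => {
    const okType = `${type}:ok`;
    const onMessage = (ev: MessageEvent) => {
      const msg = JSON.parse(ev.data);
      if (msg.type === okType) {
        ws.removeEventListener("message", onMessage);
        resolve(msg as T);
      } else if (msg.type === "error") {
        ws.removeEventListener("message", onMessage);
        reject(new Error(msg.message ?? "request failed"));
      }
      // Anything else is an unrelated event; leave it for other listeners.
      // A real implementation would also correlate concurrent requests of
      // the same type via a request id, and add a timeout.
    };
    ws.addEventListener("message", onMessage);
    ws.send(JSON.stringify({ type, ...data }));
  });
}
```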
For reference, errors are returned in a different, more verbose format where "type" is "error" in the object, which the `request()` function knows how to deal with. It used to be ":err" instead of ":ok" but I made it different for a good reason I can't remember right now (LOL).

Aside: There are still THREE firewalls that suck so bad they can't handle WebSockets: SophosXG Firewall, WatchGuard, and McAfee Web Gateway.
Having used WebSockets a lot, I’ve realised that it’s not the simple fact that WebSockets are duplex or that it’s more efficient than using HTTP long-polling or SSEs or something else… No, the real benefit is that once you have a “socket” object in your hands, and this object lives beyond the normal “request->response” lifecycle, you realise that your users DESERVE a persistent presence on your server.
You start letting your route handlers run longer, so that you can send the result of an action, rather than telling the user to “refresh the page” with a 5-second refresh timer.
You start connecting events/pubsub messages to your users and forwarding relevant updates over the socket you already hold. (Trying to build a delta update system for polling is complicated enough that the developers of most bespoke business software I’ve seen do not go to the effort of building such things… But with WebSockets it’s easy, as you just subscribe before starting the initial DB query and send all broadcasted updates events for your set of objects on the fly.)
You start wanting to output the progress of a route handler to the user as it happens (“Fetching payroll details…”, “Fetching timesheets…”, “Correlating timesheets and clock in/out data…”, “Making payments…”).
Suddenly, as a developer, you can get live debug log output IN THE UI as it happens. This is amazing.
AND THEN YOU WANT TO CANCEL SOMETHING because you realise you accidentally put in the actual payroll system API key. And that gets you thinking… can I add a cancel button in the UI?
Yes, you can! Just make a ‘ctx.progress()’ method. When called, if the user has cancelled the current RPC, then throw an RPCCancelled error that’s caught by the route handling system. There’s an optional first argument for a progress message to the end user. Maybe add a “no-cancel” flag too for critical sections.
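A rough sketch of that idea (the `progress()`/`RPCCancelled` names follow the description above; the surrounding framework, how cancellation gets flagged, and the payroll steps are all assumed):

```typescript
class RPCCancelled extends Error {}

interface RpcCtx {
  cancelled: boolean;                              // set when the user hits Cancel
  send: (type: string, payload: unknown) => void;  // pushes a frame down the socket
}

// Report a step to the UI and bail out if the RPC was cancelled.
function progress(ctx: RpcCtx, message?: string, noCancel = false) {
  if (message) ctx.send("progress", { message });
  if (ctx.cancelled && !noCancel) throw new RPCCancelled("cancelled by user");
}

// A route handler using it; the timeouts stand in for real work.
async function runPayroll(ctx: RpcCtx) {
  progress(ctx, "Fetching payroll details…");
  await new Promise((r) => setTimeout(r, 100));

  progress(ctx, "Fetching timesheets…");
  await new Promise((r) => setTimeout(r, 100));

  // Critical section: don't allow cancellation mid-payment.
  progress(ctx, "Making payments…", true);
  await new Promise((r) => setTimeout(r, 100));
}

// The route-handling layer catches RPCCancelled and simply stops,
// rather than treating it as a failure.
```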
And then you think about live collaboration for a bit… that’s a fun rabbit hole to dive down. I usually just do “this is locked for editing” or check the per-document incrementing version number and say “someone else edited this before you started editing, your changes will be lost — please reload”. Figma cracked live collaboration, but it was very difficult based on what they’ve shared on their blog.
And then… one day… the big one hits… where you have a multistep process and you want Y/N confirmation from the user or some other kind of selection. The sockets are duplex! You can send a message BACK to the RPC client, and have it handled by the initiating code! You just need to make it so devs can add event listeners on the RPC call handle on the client! Then, your server-side route handler can just “await” a response! No need to break up the handler into multiple functions. No need to pack state into the DB for resumability. Just await (and make sure the Promise is rejected if the RPC is cancelled).
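And a sketch of that "await an answer from the client" pattern (the `waitFor` helper and message shapes are assumptions, not a real API):

```typescript
interface DuplexCtx {
  send: (type: string, payload: unknown) => void;
  // Resolves when the client replies to this RPC with the given message type.
  waitFor: (type: string) => Promise<unknown>;
}

// Ask a Y/N question over the same socket and wait for the reply, right in
// the middle of the handler -- no need to split it up or persist state.
async function confirm(ctx: DuplexCtx, question: string): Promise<boolean> {
  ctx.send("confirm", { question });
  const reply = (await ctx.waitFor("confirm:answer")) as { yes: boolean };
  return reply.yes;
}

async function deleteEverything(ctx: DuplexCtx) {
  if (!(await confirm(ctx, "Really delete all 3,212 files?"))) return;
  // ...proceed with the deletion...
}
```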
If you have a very complex UI page with live-updating pieces, and you want parts of it to be filterable or searchable… This is when you add “nested RPCs”. And if the parent RPC is cancelled (because the user closes that tab, or navigates away, or such) then that RPC and all of its children RPCs are cancelled. The server-side route handler is a function closure, that holds a bunch of state that can be used by any of the sub-RPC handlers (they can be added with ‘ctx.addSubMethod’ or such).
The end result is: while building out any feature of any “non-web-scale” app, you can easily add levels of polish that are simply too annoying to obtain when stuck in a REST point of view. Sure, it’s possible to do the same thing there, but you’ll get frustrated (and so development of such features will not be prioritised). Also, perf-wise, REST is good for “web scale” / high-user-counts, but you will hit weird latency issues if you try to use for live, duplex comms.
WebSockets (and soon HTTP3 transport API) are game-changing. I highly recommend trying some of these things.
Efforts like Electric SQL to have APIs/protocols for bulk fetching all changes (to a "table") is where it's at. https://electric-sql.com/docs/api/http
It's so rare for teams to do data loading well, rarer still that we get effective caching, and often a product's footing here only degrades with time. The various sync ideas out there offer such an alluring potential: having a consistent way to get the client the updated live data it needs, in a consistent fashion.
Side note, I'm also hoping the JS / TC39 source phase imports proposal (aka import source) can help let large apps like NextCloud defer loading more of its JS until needed too. But the waterfall you call out here seems like the real bad side (of NextCloud's architecture)! https://github.com/tc39/proposal-source-phase-imports
Then at some point the Nextcloud calendar was "redesigned" and now it's completely terrible. Aesthetically, it looks like it was designed for toddlers. Functionally, adding and editing events is flat out painful. Trying to specify a time range for an event is weird and frustrating. It's better than not having a calendar, but only just.
There are plenty of open source calendar _servers_, but no good open source web-based calendars that I have been able to find.
The issue remains that the core itself feels like layers upon layers of encrusted code that instead of being fixed have just had another layer added ... "something fundamental wrong? Just add Redis as a dependency. Does it help? Unsure. Let's add something else. Don't like having the config in a db? Let's move some of it to ini files (or vice versa)..etc..etc." it feels like that's the cycle and it ain't pretty and I don't trust the result at all. Eventually abandoned the project.
Edit: at some point I reckon some part of the ecosystem recognised some of these issues and hence Owncloud remade a large part of the fundamentals in Golang. It remains unknown to me whether this sorted things or not. All of these projects feel like they suffer badly from "overbuild".
Edit-edit: another layer to add to the mix is that the "overbuild" situation is probably largely what allows the hosting economy around these open source solutions to thrive since Nextcloud and co. are so over-engineered and badly documented that they -require- a dedicated sys-admin team to run well.
For example the reason there's no cohesiveness with a common websocket bus for all those ajax calls is because they all started out as a separate plugin.
NC has gone full modularity and lost performance for it. What we need is a more focused and cohesive tool for document sharing.
Honestly I think today with IaC and containers, a better approach for selfhosting is to use many tools connected by SSO instead of one monstrosity. The old Unix philosophy, do one thing but do it well.
1. Did you open a backport request with these basic patches? If you have orders-of-magnitude speed improvements it would be awesome to share!
2. You definitely don't need an entire sysadmin team to run Nextcloud. At my work (a large organisation) there are three instances running (for different parts/purposes), of which only one is run by more than one person, and I run both my personal instance and one for a nonprofit with ~100 persons; it's really not much work after setup (and other systems are a lot more complicated to set up, trust me).
I would assume that the people for whom a slow web based calendar is a problem (among other slow things on the web interface) are people who want to be using it if it performed well.
They wouldn't just make a bad slow web interface on purpose to enlighten people as to how bad web interfaces are, as a complicated way of pushing them toward integrated apps.
But people rarely use the web apps. Instead, it's used more like a NAS with the desktop sync client being the primary interface. Nobody likes the web apps because they're slow. The Windows desktop sync client has a really annoying update process, but other than that is excellent.
I could replace it with a traditional NAS, but the main feature keeping me there is an IMAP authentication plugin. This allows users to sign in with their business email/password. It works so well and makes it so much easier to manage user accounts, revoke access, do password resets, etc.
Web apps don't have to be slow. I prefer web apps over system apps, as I don't have to install extra programs into my system and I have more control over those apps:
- a service decides it's a good idea to load some tracking stuff from 3rd-party? I just uMatrix block it;
- a page has an unwanted element? I just uBlock block it;
- a page could have a better look? I just userstyle style it;
- a page is missing something that could be added on client side? I just userscript script it
However my need for something like google drive has reduced massively, and nextcloud continues to be a massive maintenance pain due to its frustratingly fast release cadence.
I don't want to have to log into my admin account and baby it through a new release and migration every four months! Why aren't there any LTS branches? The amount of admin work that nextcloud requires only makes sense for when you legitimately have a whole group of people with accounts that are all utilizing it regularly.
This is honestly the kick in the pants I need to find a solution that actually fits my current use-case. (I just need to sync my fuckin keepass vault to my phone, man.) Syncthing looks promising with significantly less hassle...
As long as you only upgrade one major version at a time, it doesn't require putting the server in maintenance mode or using the occ cli.
Sure, some people might argue that there are specialized tools for each of these functions. And that’s true. But the tradeoff is that you'd need to manage a lot more with individual services. With Nextcloud, you get a unified platform that might be good enough to run a company, even if it’s not very fast and some features might have bugs.
The AIO has addressed issues like update management and reliability; it's been very good in my experience. You get a fully tested, ready-to-go package from Nextcloud.
That said, I wonder, if the platform were rewritten in a more performance-efficient language than PHP, with a simplified codebase and trimmed-down features, would it run faster? The UI could also be more polished (see Synology DSM web interface). The interface in Synology looks really nice!
That said there's an Owncloud version called Infinite Scale which is written in Go.[1] Honestly I tried to go that route but its requirements are pretty opinionated (Ubuntu LTS 22.04 or 24.04 and lots of Docker containers littering your system); still, it looks like it's getting a lot of development.
Hm?
> This guide describes an installation of Infinite Scale based on Ubuntu LTS and docker compose. The underlying hardware of the server can be anything as listed below as long it meets the OS requirements defined in the Software Stack
https://doc.owncloud.com/ocis/next/depl-examples/ubuntu-comp...
The Software Stack section goes on to say it just needs Docker, Docker Compose, shell access, and sudo.
Ubuntu and sudo are probably only mentioned because the guide walks you through installing docker and docker compose.
From my experience, this doesn't meaningfully impact performance. Performance problems come from "accidentally quadratic" logic in the frontend, poorly optimised UI updates, and too many API calls.
Overeager warming/precomputation of resources on page load (rather than on use) can be a culprit as well.
https://dev.to/dehemi_fabio/why-php-is-still-worth-learning-...
I literally explained why this is not the case.
And Nextcloud being slow in general is not a new complaint from users.
pass in on $lan_if inet proto tcp to (egress) port 12345 rdr-to 192.168.1.10
It basically says "pass packets from the LAN interface towards the WAN (egress) on the game port and redirect the traffic to the local game server". The local client doesn't know anything happened, it just worked.

If not, and you don't want to set up dnsmasq just for Nextcloud over LAN, then DNS-based adblock software like AdGuard Home would be a good option (as in, it would give you more benefit for the amount of time/effort required). With AdGuard, you just add a line under Filters -> DNS rewrites. PiHole can do this as well (it's been awhile since I've used it, but I believe there's a Local DNS settings page).
Otherwise, if you only have a small handful of devices, you could add an entry to /etc/hosts (or equivalent) on each device. Not pretty, but it works.
You could also upload directly to the filesystem and then run occ files:scan, or if the storage is mounted as external it just works.
Another method is to set your machines /etc/hosts (or equivalent) to the local IP of the instance (if the device is only on lan you can keep it, otherwise remove it after the large transfer).
Now your router should not send traffic addressed to itself out over the WAN; it just loops it internally, so it never has to go over your ISP's connection. So running over LAN only helps if your switch is faster than your router.
That’s an interesting way to describe a lack of configuration on your part.
Imagine me saying: "The major shortcoming of Google Drive, in my opinion, is that it's not able to sync files from my phone. There is some workaround involving an app called 'Google Drive' that I have to install on my phone, but I haven't gotten around to it. Other than that, Google Drive is absolutely fantastic."
I feel like > 2kb of Javascript is heavy. Literally not needed.
IMO, the worst offenders are when you bring in charting/graphing libraries into things when either you don't really need them, or otherwise not lazy loading where/when needed. If you're using something like React, then a little reading on SVG can do wonders without bloating an application. I've ripped multi-mb graphing libraries out to replace them with a couple components dynamically generating SVG for simple charting or overlays.
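As one example of that approach, here's the kind of tiny React + inline-SVG component (an illustrative sketch, not from any particular app) that can stand in for a multi-megabyte charting dependency in simple cases:

```tsx
import React from "react";

// A ~25-line sparkline: maps values onto an SVG polyline, no chart library.
export function Sparkline({
  data,
  width = 120,
  height = 32,
}: {
  data: number[];
  width?: number;
  height?: number;
}) {
  const min = Math.min(...data);
  const max = Math.max(...data);
  const range = max - min || 1;                      // avoid division by zero
  const step = width / Math.max(data.length - 1, 1); // x spacing between points

  // One "x,y" pair per data point, scaled into the viewBox.
  const points = data
    .map((v, i) => `${i * step},${height - ((v - min) / range) * height}`)
    .join(" ");

  return (
    <svg viewBox={`0 0 ${width} ${height}`} width={width} height={height}>
      <polyline points={points} fill="none" stroke="currentColor" strokeWidth={1.5} />
    </svg>
  );
}
```

Usage is just `<Sparkline data={[3, 5, 2, 8]} />`, and it weighs nothing compared to a charting package.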
Applications like linear and nextcloud aren't designed to be opened and closed constantly. You open them once and then work in that tab for the remainder of your session.
As others have pointed out in this thread, "feeling slow" is mostly due to the number of fetch requests and the backend serving those requests.
Some specific things I like about it:
* Basic todo app features are compatible with CalDAV clients like tasks.org
* Several ways of organizing tasks: subtasks, tags, projects, subprojects, and custom filters
* list, table, and kanban views
* A reasonably clean and performant frontend that isn't cluttered with stuff I don't need (i.e., not Jira)
And some other things that weren't hard requirements, but have been useful for me:
* A REST API, which I use to export task summaries and comments to markdown files (to make them searchable along with my other plaintext notes)
* A 3rd party CLI tool: https://gitlab.com/ce72/vja
* OIDC integration (currently using it with Keycloak)
* Easily deployable with docker compose

Either apps lack such an export, or it's very minimal, or it includes lots of things, except comments... Sometimes an app might have a REST API, and I'd need to build something non-trivial to start pulling out the comments, etc. I feel like it's silly in this day and age.
My desire for comments to be included in exports is for local search... but also because I use comments for sort of thinking aloud, sort of like inline task journaling... and when comments are lacking, it sucks!
In fact, when I hear folks suggest to simply stop using such apps and merely embrace the text-file todo approach, they cite their having full access to comments as a feature... and I can't dispute their claim! But barely any non-text-based apps highlight the inclusion of comments. So I have to ask: is it just me (who doesn't use a text-based todo workflow), and then all the other folks who *do use* a text-based todo flow, who actually care about access to comments!?!
<rant over>
Even on a modern browser on a brand new leading-edge computer, it was completely unusably slow.
Horrendous optimization aside, NC is also chasing the current fad of stripping out useful features and replacing them with oceans of padding. The stock photos app doesn't even have the ability to sort by date! That's been table stakes for a photo viewer since the 20th goddamn century.
When Windows Explorer offers a more performant and featureful experience, you've fucked up real bad.
I would feel incredibly bad and ashamed to publish software in the condition that NextCloud is in. It is IMO completely unacceptable.
The http upload is miserable, it's slow, it fails with no message, it fails to start, it hangs. When uploading duplicate files the popup is confusing. The UI is slow, the addons break on every update. The gallery is very bad, now we use immich.
You sadly can't just install nextcloud on your vanilla server and expect it to perform well.
We had a similar situation with some notebooks running in production, which were quite slow to load because it was loading a lot of JS files / WASM for the purposes of showing the UI. This was not part of our core logic, and using a CDN to load these, but still relying on private prod instance for business logic helped significantly.
I have a feeling this would be helpful here as well.
You really consider 1 MB of JS too heavy for an application with hundreds of features? How exactly are developers supposed to fit an entire web app into that? Why does this minimalism suddenly apply only to JavaScript? Should every desktop app be under 1 MB too? Is Windows Calculator 30 MB binary also an offense to your principles?
What year is it, 2002? Even low-band 5G gives you 30–250 Mbps down. At those speeds, 20 MB of JS downloads in well under a second. So what's the math behind the 5–10 second figure? What about the cache? Is it turned off for you, and do you redownload the whole of Nextcloud from scratch every time?
Nextcloud is undeniably slow, but the real reasons show up in the profiler, not the network tab.
Yes, I don't know, because it runs in the browser, yes, yes.
First and foremost, I agree with the meat of your comment.
But I wanted to point about your comment, that it DOES very much matter that apps meant to be transmitted over a remote connection are, indeed, as slim as possible.
You must be thinking about 5G on a city with good infrastructure, right?
I'm right now having a coffee on a road trip, with a 4G connection, and just loading this HN page took like 8~10 seconds. Imagine a bulky and bloated web app if I needed to quickly check a copy of my ID stored in NextCloud.
It's time we normalize testing network-bounded apps through low-bandwidth, high-latency network simulators.
What frustrates me about modern web development is that everyone is focused on making it work much more than on making sure it works fast. Then when you go to push back, the response is always something like "we need to not spend time over-optimizing."
Sent this straight to the team slack haha.
On the other hand, Nextcloud is so far from being something like Google Docs, and I would never recommend it as a general replacement to someone who can't tolerate "jank", for lack of a better word. There are so many small papercuts you'll notice when using it as a power user. Right off the top of my head, uploading large files is finicky, and no amount of web server config tinkering gets it to always work; thumbnail loading is always spotty, and it's significantly slower than it needs to be (I'm talking orders of magnitude).
With all that said, I'm so grateful for Nextcloud since I don't have a replacement, and I would prefer not having all our baby and vacation pictures feeding some big corporation's AI. We really ought to have a safe, private place to store files in 2025 that the average person can wrap their head around. I only wish my family took better advantage of it, since I'm essentially providing them with unlimited storage.
Radicale is a good calendar replacement. I'd rather have single-function apps at this point.
Does the AI run locally?
For anyone who might find it useful, here's a Reddit thread from 3 years ago on a few concerns about SeaFile I'd love to see revisited with some updated discussion: https://www.reddit.com/r/selfhosted/comments/wzdp2p/are_ther...
https://manual.seafile.com/13.0/extension/seafile-ai/
https://peergos.org
You can try it out easily here: https://peergos-demo.net
Our iOS app is still in the works, though.
Owncloud Infinite Scale might be the best option for a full-featured file sync setup, as that's all it does.
1. https://docs.syncthing.net/users/ignoring.html
2. https://mobiussync.com