
The Louvre's CCTV password was "Louvre"

https://twitter.com/trash_italiano/status/1985010735542591684
1•JustSkyfall•28s ago•0 comments

Practice Language and AI Roleplay = Best way to learn language that sticks

https://apps.apple.com/us/app/amiko-ai-language-practice/id6752839098
1•nickyfantasy•4m ago•0 comments

Coinbase Said Web3

https://www.bloomberg.com/opinion/newsletters/2025-11-03/coinbase-said-web3
1•ioblomov•9m ago•1 comment

Waymo to expand robotaxi service to Las Vegas, San Diego and Detroit next year

https://www.reuters.com/technology/waymo-expand-robotaxi-service-las-vegas-san-diego-detroit-next...
3•standardUser•11m ago•2 comments

Private messages reveal GOP leaders joking about gas chambers, slavery and rape

https://www.politico.com/news/2025/10/14/private-chat-among-young-gop-club-members-00592146
7•myaccountonhn•11m ago•0 comments

Writing under my real name

https://psychotechnology.substack.com/p/writing-under-my-real-name-230
1•eatitraw•12m ago•0 comments

Show HN: Extrai – An open-source tool to fight LLM randomness in data extraction

https://github.com/Telsho/Extrai
3•elias_t•14m ago•0 comments

No Cell Service, Can Meshtastic Save Us [video]

https://www.youtube.com/watch?v=r2cKsqjuMaM
1•teleforce•14m ago•0 comments

Python steering council accepts lazy imports

https://lwn.net/Articles/1044844/
2•henrikhorluck•15m ago•0 comments

Apple Launches App Store for the Web

https://apps.apple.com/us/iphone/today
3•thm•15m ago•0 comments

Rare 'mad honey' is only found in two places in the world

https://www.cnn.com/travel/mad-honey-deli-bal-turkey-black-sea
4•mooreds•16m ago•0 comments

In a First, AI Models Analyze Language as Well as a Human Expert

https://www.quantamagazine.org/in-a-first-ai-models-analyze-language-as-well-as-a-human-expert-20...
1•Terretta•16m ago•0 comments

Wikipedia row erupts as Jimmy Wales intervenes on 'Gaza genocide' page

https://www.thenational.scot/news/25591165.wikipedia-row-erupts-jimmy-wales-intervenes-gaza-genoc...
4•lehi•17m ago•0 comments

Snap benefits will restart, but will be half the normal payment

https://www.npr.org/2025/11/03/nx-s1-5596121/snap-food-benefits-trump-government-shutdown
2•geox•17m ago•0 comments

Software Development in the Time of New Angels

https://davegriffith.substack.com/p/software-development-in-the-time
2•calosa•18m ago•0 comments

Show HN: Minuta – track your work sessions, focus time, tag them, and more

https://github.com/kevinmahrous/minuta
2•nullkevin•18m ago•0 comments

I tried Elon Musk's Wikipedia clone and boy is it racist

https://www.sfgate.com/sf-culture/article/elon-musk-fake-wikipedia-grokipedia-21131512.php
8•turtlegrids•21m ago•1 comment

Datalyzer – AI Analysis Report Generator

https://dataanalyzer.pro/
2•sunshiney0992•21m ago•1 comment

AI Meeting Notes – Summarization Optimization

https://www.schneier.com/blog/archives/2025/11/ai-summarization-optimization.html
2•walterbell•21m ago•0 comments

Refueling a Nuclear Power Plant – Smarter Every Day

https://www.youtube.com/watch?v=v0afQ6w3Bjw
1•helsinkiandrew•22m ago•0 comments

VoidZero Raises $12.5M Series A

https://voidzero.dev/posts/announcing-series-a
1•dzogchen•24m ago•0 comments

Is it aliens? Why that's the least important question about interstellar objects

https://theconversation.com/is-it-aliens-why-thats-the-least-important-question-about-interstella...
1•bikenaga•24m ago•0 comments

Rateless Bloom Filters

https://arxiv.org/abs/2510.27614
3•CarlosBaquero•25m ago•1 comment

X Payments Money Transmitter Licenses

https://money.x.com/en/licenses
2•nomilk•29m ago•0 comments

Stop Vibe Coding – Start Writing Elegant Code [video]

https://www.youtube.com/watch?v=anL8caCUWl0
3•josephleomoreno•30m ago•0 comments

Soft Magnetic Artificial Muscles with High Work Density and Actuation Strain

https://advanced.onlinelibrary.wiley.com/doi/10.1002/adfm.202516218
2•PaulHoule•31m ago•0 comments

Principle of Least Power

https://www.lihaoyi.com/post/StrategicScalaStylePrincipleofLeastPower.html
3•dzonga•31m ago•0 comments

Ikey Doherty's Gone Missing Again

https://fossforce.com/2025/11/ikey-dohertys-gone-missing-again/
1•speckx•32m ago•0 comments

Stop Making Your Team Figure Out AI on Their Own

https://www.nngroup.com/articles/ai-research-ops/
1•ulrischa•32m ago•0 comments

Waist-to-height ratio outperforms BMI in predicting heart disease risk

https://medicalxpress.com/news/2025-11-waist-height-ratio-outperforms-bmi.html
3•bikenaga•32m ago•1 comment

Why Nextcloud feels slow to use

https://ounapuu.ee/posts/2025/11/03/nextcloud-slow/
295•rpgbr•6h ago

Comments

floundy•5h ago
I'm still setting up my own home server, adding one functionality at a time. I wanted to like Nextcloud but it's just too bloated.

Radicale is a good calendar replacement. I'd rather have single-function apps at this point.

servercobra•5h ago
Any good file syncing/drive replacements? My Synology exists pretty much because Synology Drives works so well syncing Mac and iOS.
FredFS456•5h ago
I think you could replace Nextcloud's syncing and file access use cases with Syncthing and Copyparty respectively. IMO the biggest downside is that Copyparty's UX is... somewhat obtuse. It's super fast and functional, though.
selectodude•5h ago
Seafile works pretty well. The iOS app is ass though. Everything else is rock solid.
rkagerer•4h ago
Where does it store metadata like the additional file properties you can add? Does it use Alternate Data Streams for anything?

Does the AI run locally?

For anyone who might find it useful, here's a Reddit thread from 3 years ago on a few concerns about SeaFile I'd love to see revisited with some updated discussion: https://www.reddit.com/r/selfhosted/comments/wzdp2p/are_ther...

selectodude•3h ago
Seems like the AI runs wherever you want it - you enter an API endpoint.

https://manual.seafile.com/13.0/extension/seafile-ai/

nickspacek•5h ago
I've read good things about Seafile and have considered setting it up on my Homelab... though when I looked at the documentation, it too seemed quite large and I worried it wouldn't be the lightweight solution I'm looking for.
ianopolous•5h ago
You might like Peergos, which is E2EE as well. (Disclosure: I work on it.)

https://peergos.org

You can try it out easily here: https://peergos-demo.net

Our iOS app is still in the works, though.

Saris•5h ago
Syncthing is great, but doesn't offer selective sync or virtual files if you need those features.

Owncloud Infinite Scale might be the best option for a full-featured file sync setup, as that's all it does.

danielcberman•4h ago
It’s not selective sync, but you can get something similar with Ignore Files [1] in Syncthing. This functionality can also be configured via the web GUI and within apps such as MobiusSync [2].

1. https://docs.syncthing.net/users/ignoring.html

2. https://mobiussync.com
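As an illustration, a tiny `.stignore` sketch (hypothetical folder and file names; per the Syncthing ignore docs linked above, `//` starts a comment and `!` marks an exception):

```
// .stignore lives at the folder root and applies to this device only,
// which is what approximates per-device selective sync.
// An "!" exception must precede the broader pattern it overrides.
!notes-important.tmp
*.tmp
Photos/RAW
```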

sira04•5h ago
Pretty happy with Resilio Sync. I use it on Mac, and on Linux in a Docker container.
imcritic•3h ago
It is proprietary: the words "license" and "price" appear on their page => crapware.
zeagle•5h ago
I went from cloud to local SMB shares to Nextcloud to Seafile. Really happy with the latter. Works, no bloat, versioning and some file sharing. The pro version is free for three or fewer users. I use the CLI client to mount the libraries into folders and share that with SMB + subst X: into the root directory on laptops for family. Borgbackup of that offsite for backup.
lompad•4h ago
Copyparty. Found that recently and absolutely love it.
thesuitonym•4h ago
rsync, ftp, and smb have all existed for decades and work very well on spotty, slow connections (maybe not smb) and are very, very small utilities.
imcritic•3h ago
Unison. Unfortunately it has no mobile apps, though.
mlok•5h ago
Could an installable PWA solve this?
ilumanty•5h ago
Could more diligence in the codebase solve this?
thesuitonym•4h ago
> Could ignoring the problem solve this?
andai•5h ago
For reference, 20 MB is three hundred and thirteen Commodores.
robin_reala•4h ago
The complete Doom 2, including all graphics, maps, music and sound effects, shipped on 4 floppies, totalling 5.76MB.
zdragnar•4h ago
The original Doom 2 rendered 64,000 pixels (320x200). 4K UHD monitors now show 8.3 million pixels.

YMMV.

Of course, Doom 2 is full of Carmack shenanigans to squeeze every possible ounce of performance out of every byte, written in hand optimized C and assembly. Nextcloud is delivered in UTF-8 text, in a high level scripting language, entirely unoptimized with lots of low hanging fruit for improvement.

hamburglar•4h ago
I mean, if you’re going to include carmack’s relentless optimizer mindset in the description, I feel like your description of the NextCloud situation should probably end with “and written by people who think shipping 15MB of JavaScript per page is reasonable.”
Yie1cho•3h ago
yes, but why isn't it optimised? not as extreme as doom had to be, but to be a bit better? especially the low hanging fruits.

this is why i think there's another version for customers who are paying for it, with tuning, optimization, whatever.

trashb•3h ago
Sure, but I doubt there is more image data in the delivered Nextcloud payload than in Doom 2; games famously need textures, where a website usually needs mostly vector and CSS-based graphics.

Actually, Carmack did squeeze every possible ounce of performance out of Doom, but that does not always mean he was optimizing for size. If you want to see a project optimized for size, check out ".kkrieger" by ".theprodukkt", which accomplishes a 3D shooter in 97,280 bytes.

You know how many characters 20 MB of UTF-8 text is, right? If we are talking about JavaScript, it's probably mostly ASCII, so quite close to 20 million characters. At a wild estimate of 80 characters per line, that would be 250,000 lines of code.

I personally think 20MB is outrageous for any website, webapp or similar. Especially if you want to offer a product to a wide range of devices on a lot of different networks. Reloading a huge chunk of that on every page load feels like bad design.

Developers usually take for granted the modern convenience of a good network connection; imagine using this on a slow connection, it would be horrid. Even in western "first world" countries there are still quite a few people connecting with outdated hardware or slow connections; we often forget them.

If you are making any sort of webapp you ideally have to think about every byte you send to your customer.
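The back-of-the-envelope math above as a quick sketch (the 20 MB figure is from the article; 80 characters per line is the commenter's wild estimate):

```javascript
// Rough estimate of lines of code in a 20 MB mostly-ASCII JS payload.
const payloadBytes = 20 * 1000 * 1000; // ~20 MB of JS, per the article
const charsPerLine = 80;               // wild estimate of average line length
const estimatedLines = payloadBytes / charsPerLine;
console.log(estimatedLines); // 250000
```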

ekjhgkejhgk•1h ago
You know apps don't store pixels, right? So why are you counting pixels?
zdragnar•1h ago
A single picture that looks decent on a modern screen, taken from a modern camera, can easily be larger than the original Doom 2 binary.
ekjhgkejhgk•52m ago
You don't need pictures for a CRUD app. It should all be vector graphics in any case.
chaostheory•4h ago
Sure, but what people leave out is that it's mostly C and assembly. That just isn't realistic anymore if you want a better developer experience that leads to faster feature rollout, better security, and better stability.

This is like when people reminisce about the performance of windows 95 and its apps while forgetting about getting a blue screen of death every other hour.

magicalhippo•3h ago
Windows 2000 was quite snappy on my Pentium 150, and pretty rock solid. It was when I stopped being good at fixing computers because it just worked, so I didn't get much practice.
tracker1•2h ago
I did get a BSOD from a few software packages in Win2k, but it was fewer and much farther between than Win9x/me... I didn't bump to XP until after SP3 came out... I also liked Win7 a lot. I haven't liked much of Windows since 7 though.

Currently using Pop + Cosmic.

chaostheory•1h ago
Win2000 is in the same class as Win95 despite being slightly more stable. It still locked up and crashed more frequently than modern software.
trashb•3h ago
Exactly. JavaScript is a higher-level language with a lot of required functionality built in. Compared to C, you would need to write way less actual code in JavaScript (for most tasks) to achieve the same result, for example graphics or math routines. Therefore it's crazy that it's that big.
tracker1•2h ago
I think it's a double-edged sword of open-source/FLOSS... some problems are hard and take a lot of effort. One example I consistently point to is core component libraries... React has MUI and Mantine, and I'm not familiar with any open-source alternatives that come close. As a developer, if there was one for Leptos/Yew/Dioxus, I'd have likely jumped ship to Rust+WASM. They're all fast enough, with different advantages and disadvantages.

All said... I actually like TypeScript and React fine for teams of developers... I think NextCloud likely has coordination issues that go beyond the language or even libraries used.

mrweasel•4h ago
The article suggests that it takes 14 MB of JavaScript to do just the calendar. I doubt that all of my calendar events for 2025 are 14 MB.
magicalhippo•3h ago
Or the same number of 64k intros[1][2][3]...

[1]: https://www.youtube.com/watch?v=iXgseVYvhek

[2]: https://www.youtube.com/watch?v=ZWCQfg2IuUE

[3]: https://www.youtube.com/watch?v=4lWbKcPEy_w

branon•5h ago
I have been considering https://bewcloud.com/ + Immich as an alternative

Nextcloud's client support is very good though and it has some great apps, I use PhoneTrack on road trips a lot

zeagle•5h ago
Immich is a night and day improvement for photos vs nextcloud. You could roll it in addition if you wanted to try.
glenstein•3h ago
Fantastic recommendation, it's like exactly what the doctor ordered given the premise of this thread. Does Bewcloud play nice with DAV or other open protocols or (dare I hope) nextcloud apps? I wouldn't mind using nextcloud apps paired with a better web front end.
troyvit•3h ago
> I use PhoneTrack on road trips a lot

If every aspect of Nextcloud was as clean, quick and light-weight as PhoneTrack this world would be a different place. The interface is a little confusing but once I got the hang of it it's been awesome and there's just nothing like it. I use an old phone in my murse with PhoneTrack on it and that way if I leave it on the bus (again) I actually have a chance of finding it.

No $35/month subscription, and I'm not sharing my location data with some data aggregator (aside from Android of course).

bArray•5h ago
NextCloud does feel slow. What I want is a cloud service that not only does lots of common tasks, but also does them lightly and simply.

I'm extremely tempted to write a lightweight alternative. I'm thinking sourcehut [1] vs GitHub.

[1] https://sourcehut.org/

mickael-kerjean•5h ago
I made one such lightweight alternative frontend: https://github.com/mickael-kerjean/filestash
tokarf•5h ago
Just compare comparable products.

Nextcloud is an old product that inherits from Owncloud, developed in PHP since 2010. It has extensibility at its core through the thousands of extensions available.

So yaaay compare it with source hut ...

alecsm•4h ago
Maybe that's the problem "old product that inherit from Owncloud".
bn-usd-mistake•4h ago
Aren't you just confirming the parent that Nextcloud is the big, feature-rich behemoth like Github?
bArray•4h ago
> Just compare comparable products.

> So yaaay compare it with source hut ...

I'm not saying that sourcehut is the same in any way, but I want the difference between NextCloud and the alternative to be like the difference between GitHub and sourcehut.

> Nextcloud is an old product that inherit from Owncloud developed in php since 2010.

Tough situation to be in, I don't envy it.

> It has extensibility at its core through the thousands of extensions available.

Sure, but I think for some limited use cases, something better could be imagined.

tokarf•5h ago
Nextcloud is not perfect, but it's still one of the few major projects that has not shifted to a business-oriented licence, and where all components are available rather than paywalled behind an enterprise edition.

So yes, not perfect, bloated JS, but it works and is maintained.

So I'd rather thanks all developers involved in nextcloud than whine about bloated js.

yupyupyups•4h ago
>So I'd rather thanks all developers involved in nextcloud than whine about bloated js.

Good news! You can do both.

Propelloni•3h ago
That's not quite right. There are features that are only available to enterprise customers, or require proprietary plug-ins like Sendent.

Do I need them for my home server? No. Do I need them for my company? Yes, but costs compared to MS 365 are negligible.

jrochkind1•5h ago
I'm curious how much Javascript eg gmail and google docs/drive give you, in comparison.
a3w•4h ago
Gmail should be server-side, with as much JS as you want to use. Unless they moved away from the philosophy they started with GWT (Google Web Toolkit) for Gmail, and perhaps even Inbox (RIP).
tracker1•3h ago
I just checked Google Calendar: it's under 3 MB of JS downloaded (around 8 MB uncompressed). It's also a lot more responsive than the Nextcloud web UI. Even then, it's not necessarily the size; I think that's mostly a symptom of larger issues likely at play.

There are a lot of requests made in general; these can be good, bad or indifferent depending on the actual connection channels and configuration with the server itself. The pieces are too disconnected from each other... the Nextcloud org has 350 repositories on GitHub. I'd frankly expect 30 or so at most... it's literally 10x a generous expectation... I'd rather deal with a crazy mono-repo at that point.

jrochkind1•2h ago
OP really focused on payload size, is why I was curious.

> On a clean page load [of nextcloud], you will be downloading about 15-20 MB of Javascript, which does compress down to about 4-5 MB in transit, but that is still a huge amount of Javascript. For context, I consider 1 MB of Javascript to be on the heavy side for a web page/app.

> …Yes, that Javascript will be cached in the browser for a while, but you will still be executing all of that on each visit to your Nextcloud instance, and that will take a long time due to the sheer amount of code your browser now has to execute on the page.

While Nextcloud may have a ~60% bigger JS payload, sounds like perhaps that could have been a bit of a misdirection/misdiagnosis, and it's really about performance characteristics of the JS rather than strictly payload size or number of lines of code executed.

On a Google Doc load chosen by whatever my browser location bar autocompleted, I get around twenty JS files, the two biggest are 1MB and 2MB compressed.

tracker1•2h ago
Yeah, without a deeper understanding it's really hard to say... just the surface level look, I'm not really at all interested in diving deeper myself. I'd like to like it... I tried out a test install a couple times but just felt it was clunky. Having a surface glance at the org and a couple of the projects, it doesn't surprise me that it felt that way.
esafak•5h ago
Does anyone know what they are doing wrong to create such large bundles? What is the lesson here?
bastawhiz•4h ago
Not paying attention.

1. Indiscriminate use of packages when a few lines of code would do.

2. Loading everything on every page.

3. Poor bundling strategy, if any.

4. No minification step.

5. Polyfilling for long-dead, obsolete browsers.

6. Having multiple libraries that accomplish the same thing.

7. Using tools and then not doing any optimization at all (like using React and not enabling React Runtime).

Arguably things like an email client and file storage are apps and not pages so a SPA isn't unreasonable. The thing is, you don't end up with this much code by being diligent and following best practices. You get here by being lazy or uninformed.
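As a sketch of point 1 above ("a few lines of code would do"): a debounce helper is about six lines, versus pulling in an entire utility library for it (a hypothetical example, not Nextcloud's actual code):

```javascript
// Minimal debounce: collapses a burst of calls into one call that fires
// after `ms` of quiet. Often the only thing a utility-library import is for.
function debounce(fn, ms) {
  let timer;
  return (...args) => {
    clearTimeout(timer);                     // cancel the pending call
    timer = setTimeout(() => fn(...args), ms); // reschedule with latest args
  };
}
```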

nullgeo•4h ago
What is React runtime? I looked it up and the closest thing I came across is the newly announced React compiler. I have a vested interest in this because I'm currently working on a micro-SaaS that uses React heavily and still suffers bundle bloat even after performing all the usual optimizations.
bastawhiz•3h ago
When you compile JSX to JavaScript, it produces a series of function calls representing the structure of the JSX. In a recent major version, React added a new set of functions which are more efficient at both runtime and during transport, and don't require an explicit import (which helps cut down on unnecessary dependencies).
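For anyone looking for the switch: in a TypeScript project the newer JSX transform described above is enabled with the `jsx` compiler option (a minimal tsconfig sketch; Babel users set `runtime: "automatic"` in `@babel/preset-react` instead):

```json
{
  "compilerOptions": {
    "jsx": "react-jsx"
  }
}
```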
adzm•2h ago
React compiler is awesome for minimizing unnecessary renders but doesn't help with bundle size; might even make it worse. But in my experience it really helps with runtime performance if your code was not already highly optimized.
eMerzh•2h ago
I think some of the issue here is that, first, Nextcloud tries to be compatible with any managed/shared hosting.

They also treat every "module"/"app", whatever you call it, as a completely distinct SPA without providing much of an SDK/framework. Which means each app adds its own deps, manages its own build, etc...

Also, don't forget that an app can even be a part of a screen, not the whole thing.

ivolimmen•4h ago
On the same note, a Jira ticket as configured where I work is 42 MB for the entire page. And I use ad blockers, so I already skip the page-tracking stuff.
freefaler•4h ago
Wow, that's a lot. Our local installation, with caching disabled (to not suffer their slooooow cloud):

82 / 86 requests
1,694 kB / 1,754 kB transferred
6,220 kB / 6,281 kB resources
Finish: 11.73 s
DOMContentLoaded: 1.07 s
Load: 1.26 s

palata•4h ago
I would love to like Nextcloud; it's pretty great that it exists at all. Just that makes it better than... well, all the alternatives I haven't found.

What frustrates me is that it looks like it works, but once in a while it breaks in a way that is pretty much irreparable (or at least not in a practical way).

I want to run an iOS/Android app that backs up images on my server. I tried the iOS app and when it works, it's cool. It's just that once in a while I get errors like "locked webdav" files and it never seems to recover, or sometimes it just stops synchronising and the only way to recover seems to be to restart the sync from zero. It will gladly upload 80GB of pictures "for nothing", discarding each one when it arrives on the server because it already exists (or so it seems, maybe it just overwrites everything).

The thing is that I want my family to use the app, so I can't access their phone for multiple hours every 2 weeks; it has to work reliably.

If it was just for backing up my photos... well I don't need Nextcloud for that.

Again, alternatives just don't seem to exist, where I can install an app on my parents' iOS devices and have it synchronise their photo gallery in the background. Except I guess iCloud, that is.

dade_•4h ago
The Nextcloud Android app is particularly bad if you use it to back up your camera's DCIM directory and then delete the photos on your phone. It overwrites the files on Nextcloud as new photos are taken. I get why this happened but it is terrible.
Yie1cho•4h ago
it's bad for everything.

i have lots of txt files on my phone which are just not synced up to my server (the files on the server are 0 byte long).

i'm using txt files to take notes because the Notes app never worked for me (I get sync errors on any android phone while it works on iphone).

lompad•4h ago
Recently people built a super-lightweight alternative, named copyparty[0]. To me that looks like it does everything people tend to need without all the bloat.

[0]: https://github.com/9001/copyparty

nucleardog•4h ago
I think "people" deserves clarification: Almost the entire thing was written by a single person and with a _seriously_ impressive feature set. The launch video is well worth a quick watch: https://www.youtube.com/watch?v=15_-hgsX2V0&pp=ygUJY29weXBhc...

I don't say this to diminish anyone else's contribution or criticize the software, just to call out the absolutely herculean feat this one person accomplished.

seemaze•4h ago
I found copyparty to be too busy on the UI/UX side of things. I've settled on dufs[0]: quick to deploy, fast to use, and cross-platform.

[0] https://github.com/sigoden/dufs

davidcollantes•3h ago
Do you have a systemd unit for it, run it with Docker, or simply run it manually as needed? I find its simplicity perfect!
seemaze•3h ago
I run it manually as needed. It's already packaged for both Alpine Linux and Homebrew which suits my ad-hoc needs wonderfully!
chappi42•3h ago
This is not an alternative as it only covers files. Mind what is in the article: "I like what Nextcloud offers with its feature set and how easily it replaces a bunch of services under one roof (files, calendar, contacts, notes, to-do lists, photos etc.), but ".

For us Nextcloud AIO is the best thing under the sun. It works reasonably well for our small company (about 10 ppl) and saves us from Microsoft. I'm very grateful to the developers.

Hopefully they are able to act upon such findings, or rewrite it in Go :-). Mmh, if Berlin (Germany) didn't waste so much money on ill-advised, ideology-driven and long-term state-destroying actions and "NGOs", they would have enough money to fund hundreds of such rewrites. Alas...

mynameisvlad•3h ago
There is no way it’s going to be completely rewritten from scratch in Go, and none of whatever Germany is or isn’t doing affects that in any way shape or form.
cbondurant•3h ago
It makes perfect sense to me that nextcloud is a good fit for a small company.

My biggest gripe with having used it for far longer than I should have was always that it expected far too much maintenance (4 month release cadence) to make sense for individual use.

Doing that kind of regular upkeep on a tool meant for a whole team of people is a far more reasonable cost-benefit analysis. Especially since it only needs one technically savvy person working behind the scenes, and is very intuitive and familiar on its front-end. Making for great savings overall.

TuningYourCode•1h ago
Hetzner's Storage Share product line offers a managed Nextcloud instance. I'm using them as I didn't want to care about updating it myself.

The only downside is you can't use apps/plugins which require additional local tools (e.g. ocrmypdf), but others can be used just fine.

Calling remotely hosted services works (e.g. if you have Elasticsearch on a VPS and set up the Nextcloud full-text search app accordingly).

lachiflippi•3h ago
Why should Germany be wasting public money on a private company who keeps shoveling more and more restrictions on their open-source-washed "community" offering, and whose "enterprise" pricing comes in at twice* the price MS365 does for fewer features, worse integration, and with added costs for hosting, storage, and maintenance?

* or same, if excluding nextcloud talk, but then missing a chat feature

redrblackr•3h ago
Could you expand on what restrictions they have placed on the community version?
lachiflippi•2h ago
At the very least their app store, which is pretty much required for OIDC, most 2FA methods, and some other features, stops working at 500 users. AFAIK you can still manually install addons, it's just the integration that's gone, though I'm not 100% sure. Same with their notification push service (which is apparently closed source?[0]), which wouldn't be as much of an issue if there were proper docs on how to stand up your own instance of that.

IIRC they also display a banner on the login screen to all users advertising the enterprise license, and start emailing enterprise ads to all admin users.

Their "fair use policy"[1] also includes some "and more" wording.

[0] https://github.com/nextcloud/notifications/issues/82

[1] https://nextcloud.com/fairusepolicy/

chappi42•2h ago
It makes a lot of sense for Germany to keep some independence from foreign proprietary cloud providers (Microsoft, Google); money very well invested imo. It helps the local industry, and data stays under German sovereignty.

I find your "open-source-washed" remark misplaced and quite derogatory. Nextcloud is, imo, totally right to (try to) monetize. They have to; they must further improve the technical backbone to stay competitive with the big boys.

upboundspiral•2h ago
I think what you described is basically ownCloud Infinite Scale (ocis). I haven't tested it myself but it's something I've been considering. I run normal owncloud right now over nextcloud as it avoided a few hiccups that I had.
scrollop•3h ago
Copyparty looks amazing, wow

https://www.youtube.com/watch?v=15_-hgsX2V0

Dylan16807•35m ago
> everything people tend to need

> NOTE: full bidirectional sync, like what nextcloud and syncthing does, will never be supported! Only single-direction sync (server-to-client, or client-to-server) is possible with copyparty

Is sync not the primary use of nextcloud?

Larrikin•4h ago
For your specific use case of photos, Immich is the front runner and a much better experience. Sadly for the general Dropbox replacement I haven't found anything either.
guilamu•4h ago
I'd say Ente-photo is at least as good if not better than Immich.

https://github.com/ente-io/ente

omnimus•2h ago
I would say the opposite. Ente has one huge advantage: it is E2EE, so it's a must if you are hosting someone else's photos. But if you are planning to run something on your server/NAS for yourself, then Immich has many advantages (that often relate to the E2EE). For example, your files are still files on the disk, so there's less worry about something unrecoverably breaking. And you can add external locations. With Ente it is just about backing up your phone photos. Immich works pretty well as a camera photo organizer.
dangus•2h ago
The Ente desktop app has a continuous export function that’ll just dump everything into plain file directories.

It makes a little more sense when you’re using their cloud version, because otherwise you’re storing the data twice.

fauigerzigerk•2h ago
I'm a very happy Ente Photos user as well.
palata•1h ago
Does it have a mobile app that backs up the photos while in the background and can essentially be "forgotten"? That's pretty much what I need for my family: their photos need to get to my server magically.
thuttinger•4h ago
For a general file sharing / storage solution there is also OpenCloud: https://opencloud.eu/de

It's what I want to try next. Written in Go, it looks promising.

karamanolev•2h ago
Too many Cloud things! OwnCloud, NextCloud, OpenCloud. There have* to be better names available...
Handy-Man•4h ago
Have you looked into https://filebrowser.org/? While it's not a drop-in replacement for Google Drive/Dropbox, it has been serving me well for similar quick use cases.
nucleardog•4h ago
> Sadly for the general Dropbox replacement I haven't found anything either.

I had really good luck with Seafile[0]. It's not a full groupware solution, just primarily a really good file syncing/Dropbox solution.

Upsides are everything worked reliably for me, it was much faster, does chunk-level deduplication and some other things, has native apps for everything, is supported by rclone, has a fuse mount option, supports mounting as a "virtual drive" on Windows, supports publicly sharing files, shared "drives", end-to-end encryption, and practically everything else I'd want out of "file syncing solution".

The only thing I didn't like about it is that it stores all of your data as, essentially, opaque chunks on disk that are pieced together using the data in the database. This is how it achieves the performance, deduplication, and other things I _liked_. However it made me a little nervous that I would have a tough time extracting my data if anything went horribly wrong. I took backups. Nothing ever went horribly wrong over 4 or 5 years of running it. I only stopped because I shelved a lot of my self-hosting for a bit.

[0]: https://www.seafile.com/en/home/

Semaphor•4h ago
Yeah, went with that as well. It’s blazingly fast compared to NC.
oompydoompy74•1h ago
Pretty sure that NextCloud uses Seafile behind the scenes unless I’m mistaken.
Semaphor•1h ago
You are mistaken.
justinparus•2h ago
thanks for sharing. been looking for something like this for a while
63stack•4h ago
Look into syncthing for a dropbox replacement, have been using it for years, very satisfied.
troyvit•3h ago
Syncthing is under my "want to like" list but I gave up on it. I'm a one person show who just wants to sync a few dozen markdown files across a few laptops and a phone. Every time I'd run it I'd invariably end up with conflict files. It got to the point where I was spending more time merging diffs than writing. How it could do that with just one person running it I have no idea.
Oxodao•3h ago
That should not happen. I use it a lot and have never had this issue; there may be something wrong with your setup.

A good idea is to have it on an always-on server and add your share as an encrypted one (i.e. you set the password on all your apps but not on the server). This pretty much gives you a Dropbox-like experience, since you have a central place to sync even when your other devices are offline.

Joeri•2h ago
I had this when I had a windows system in the mix. Windows handles case differently in filenames than linux and macOS, and it caused conflicts.
Brian_K_White•1h ago
Same. I don't know why so many people like syncthing.
the_pwner224•1h ago
My Syncthing experience matches Oxodao's. Over years with >10k files / 100 gb, I've only ever had conflicts when I actually made conflicting simultaneous changes.

I use it on my phone (configured to only sync on WiFi), laptop (connected 99% of the time), and server (up 100% of the time).

The always-up server/laptop as a "master node" are probably key.

layer8•2h ago
If you just need a Dropbox replacement for file syncing, Nextcloud is fine if you use the native file system integrations and ignore the web and WebDAV interfaces.
treve•4h ago
I replaced all my Dropbox uses with SyncThing (and love it). I run an instance on my server at all times and on every client.
redrblackr•2h ago
There is also "memories for nextcloud", which basically matches Immich in feature set (it was ahead until last month); Nextcloud + Memories makes a very strong replacement for gdrive or dropbox
palata•41m ago
Yeah I guess my issue is that if I can't trust the mobile app not to lose my photos (or stop syncing, or not sync everything), then I just can't use it at all. There is no point in having Nextcloud AND iCloud just because I don't trust Nextcloud :D.
cortesoft•2h ago
I love immich, too, but I have also run into a lot of issues with syncing large libraries. The iPhone app will just hang sometimes.
palata•38m ago
Does it recover though, or do you end up in situations where your setup is essentially broken?

Like if I backup photos from iOS, then remove a subset of those from iOS to make space on the phone (but obviously I want to keep them on the cloud), and later the mobile app gets out of sync, I don't want to end up in a situation where some photos are on iOS, some on the cloud, but none of the devices has everything, and I have no easy way to resync them.

cortesoft•28m ago
It won't recover unless I do something... sometimes just quitting the iPhone app and then toggling enabling backups works, but not always. I had to completely delete and reinstall the app once to get it to work, and had to resync all 45000 images/videos I had.

I have had the server itself fail in strange ways where I had to restart it. I had to do a full fresh install once when it got hopelessly confused and I was getting database errors saying records either existed when they shouldn't or didn't exist when they should.

I think I am a pretty skilled sysadmin for these types of things, having both designed and administered very large distributed systems for two decades now. Maybe I am doing things wrong, but I think there are still some gotchas with the project.

palata•15m ago
Right, that's the kind of issues I am concerned about.

iCloud / Google Photos just don't have that, they really never lose a photo. It's very difficult for me to convince my family to move to something that may lose their data, when iCloud / Google Photos works and is really not that expensive.

cortesoft•9m ago
It has gotten more stable as I have used it for a while. I think if you want to do it, just wait until it is stable and you have a good backup routine before relying on it.
conradev•2h ago
I use Syncthing as a Dropbox replacement, and I like it. I have a machine at home running it that is accessible over the net. Not the prettiest, but it works!
jaden•2h ago
I too have found Syncthing + Filebrowser to be a sufficient substitute for Dropbox.
palata•1h ago
Does its iOS/Android app automatically back up photos in the background? When I looked into Immich (didn't try it), it sounded like it was more of a server thing. I need the automation so that my family can forget about it.
pjs_•4h ago
I’ve tried every scheme under the sun and Immich is the only thing I’ve ever seen that actually works for this use case
exe34•4h ago
I use syncthing, I've got a folder shared between my phone, laptop and media center, and it just syncs everything easily.
kelvinjps10•4h ago
I do the same it's so convenient
dns_snek•19m ago
It works well for smaller folders but it slows down to a crawl with folders that contain thousands of files. If I add a file to an empty shared folder it will sync almost instantly but if I take a photo both sides become aware of the change rather quickly but then they just sit around for 5 minutes doing nothing before starting the transfer.
benhurmarcel•3h ago
I stopped using Nextcloud when the iOS app lost data.

For some reason the app disconnected from my account in the background from time to time (annoying, but I didn't think it was critical). Once I pasted data into Nextcloud through the Files app integration, it didn't sync because it was disconnected, didn't say anything, and lost the data.

pdntspa•2h ago
SyncThing
stavros•1h ago
For photos, you can't beat Immich.
nolan879•1h ago
This also happened to me with my nextcloud, thankfully I did not lose any photos. I transitioned to Immich for my photos and have not looked back.
jacomoRodriguez•16m ago
I switched to FolderSync for the upload from mobile. Works like a charm!

I know, it sucks that the official apps are buggy as hell, but the server side is really solid

Yie1cho•4h ago
nextcloud just feels abandoned, even if it isn't of course.

maybe paying customers are getting a different/updated/tuned version of it. maybe not. but the only thing that keeps me using it is that there aren't any real self-hosted alternatives.

why is it slow? if you just blink or take a breath, it touches the database. years ago i tried to optimise it a bit and noticed that there is a horrible number of DB transactions happening without any apparent reason.

also, the android client is so broken...

MrDresden•4h ago
I'm not sure why you feel like it is abandoned. There is a steady release cadence and the changelog[0] clearly shows that much is being worked on.

[0]: https://nextcloud.com/changelog/#latest32

Yie1cho•3h ago
yes of course there's progress and new features and it's not really abandoned per se.

but the feeling is that the outdated or simply bad decisions aren't fixed or redesigned.

it could be made 100 times better.

RiverCrochet•4h ago
I've played around with many self-hosted file manager apps. My first one was Ajaxplorer which then became Pydio. I really liked Pydio but didn't stick with it because it was too slow. I briefly played with Nextcloud but didn't stick with it either.

Eventually I ran into FileRun and loved it, even though it wasn't completely open source. FileRun is fast, worked on both desktop and mobile via browser nicely, and I never had an issue with it. It was free for personal use a few years ago, and unfortunately is not anymore. But it's worth the license if you have the money for it.

I tried setting up SeaFile but I had issues getting it working via a reverse proxy and gave up on it.

I like copyparty (https://github.com/9001/copyparty) - really dead simple to use and quick like FileRun - but the web interface is not geared towards casual users. I also miss FileRun's "Request a file" feature, which worked very nicely if you just wanted someone to upload a file to you and then be done.

accrual•3h ago
On the topic of self-hosted file manager apps, I've really liked "filebrowser". Pair it with Syncthing or another sync daemon and you've got a minimal self-hosted Dropbox clone.

* https://github.com/filebrowser/filebrowser

* https://github.com/hurlenko/filebrowser-docker

t_mann•2h ago
Copyparty can't (and doesn't want to) replace Nextcloud for many use cases because it supports one-way sync only. The readme is pretty clear about that. I'm toying with the idea of combining it with Syncthing (for all those devices where I don't want to do a full sync), does anybody have experience with that? I've seen some posts that it can lead to extreme CPU usage when combined with other tools that read/write/index the same folders, but nothing specifically about Syncthing.
tripflag•32m ago
Combining copyparty with Syncthing is not something I have tested extensively, but I know people are doing this, and I have yet to hear about any related issues. It's also a use case I want to support, so if you /do/ hit any issues, please give word! I've briefly checked how Syncthing handles the symlink-based file deduplication, and it seemed to work just fine.

The only precaution I can think of is that copyparty's .hist folder should probably not be synced between devices. So if you intend to share an entire copyparty volume, or a folder which contains a copyparty volume, then you could use the `--hist` global-option or `hist` volflag to put it somewhere else.

As for high CPU usage, this would arise from copyparty deciding to reindex a file when it detects that the file has been modified. This shouldn't be a concern unless you point it at a folder which has continuously modifying files, such as a file that is currently being downloaded or otherwise slowly written to.

tripflag•37m ago
> I also miss Filerun's "Request a file" feature which worked very nicely if you just wanted someone to upload a file to you and then be done.

With the disclaimer that I've never used Filerun, I think this can be replicated with copyparty by means of the "shares" feature (--shr). That way, you can create a temporary link for other people to upload to, without granting access to browse or download existing files. It works like this: https://a.ocv.me/pub/demo/#gf-bb96d8ba&t=13:44

buibuibui•4h ago
I find the Nextcloud client really buggy on the Mac, especially the VFS integration. The file syncing is also really slow. I switched back to P2P file syncing via Syncthing and Resilio Sync out of frustration.
tripplyons•4h ago
I once discovered and reported a vulnerability in Nextcloud's web client that was due to them including an outdated version of a JavaScript-based PDF viewer. I always wondered why they couldn't just use the browser's PDF viewer. I made $100, which was a large amount to me as a 16 year old at the time.

Here is a blog post I wrote at the time about the vulnerability (CVE-2020-8155): https://tripplyons.com/blog/nextcloud-bug-bounty

rahkiin•3h ago
I recently needed to show a PDF file inside a div in my app. All I wanted was to show it and make it scrollable. The file comes from a fetch() with authorization headers.

I could not find a way to do this without pdf.js.

moi2388•3h ago
The HTML object tag can just show a PDF file by default. Just fetch it and pass the source there.

What is the problem with that exactly in your case?

jrochkind1•2h ago
I think it can't do that on iOS? Don't know if that is the relevant thing in the choice being discussed though. Not sure about Android.
rahkiin•3h ago
This made me try it once more, and I got something to work with some Blobs, resource URLs, sanitization and iframes.

So I guess it is possible

tripplyons•3h ago
Yeah, blobs seem like the right way to do it.
rahkiin•55m ago
There does not seem to be a way to configure anything though. It looks quite bad with the default zoom level and the toolbar…
internet_points•4h ago
syncthing otoh barely even has a web ui, so it's really fast :-P
imcritic•3h ago
It felt unnecessarily complex for such a simple task as file synchronization. I prefer unison. Unfortunately, it is a blast from the past written in OCaml and there is no Android app :-(
accrual•3h ago
Syncthing has been very "set it and forget it" for me. It updates itself occasionally but I haven't had to fix anything yet.
8cvor6j844qw_d6•4h ago
Is Nextcloud reliable enough for "production" use?

Last time I heard a certain privacy community recommended against Nextcloud due to some issues with Nextcloud E2EE.

imcritic•3h ago
Kinda. In the long run you will definitely stumble upon a ton of bugs, but they mostly have some workarounds. Mostly.
Yie1cho•3h ago
the question is, what's your use case?

for me it's a family photo backup with calendars (private and shared ones) running in a VM on the net.

its webui is rarely used by anyone (except me), everyone is using their phones (calendars, files).

does it work? yes. does anyone other than me care about the bugs? no. but no one really _uses_ it as if it was deployed for a small office of 10-20-30 people. on the other hand, there are companies paying for it.

for this,

PaulKeeble•4h ago
I don't doubt that large amounts of javascript can often cause issues, but even when cached NextCloud feels sluggish. When I look at just the network tab of a refresh of the calendar page, it does 124 network calls, 31 of which aren't cached. It seems to be making a call per calendar, each of which takes over 30ms. So that stacks up the more calendars you have (and you have a number by default, like contact birthdays).

The Javascript performance trace shows over 50% of the work is in making the asynchronous calls to pull those calendars and other network calls one by one and then on all the refresh updates it causes putting them onto the page.

Supporting all these N calendar calls are individual pulls for calendar rooms, calendar resources, and "principals" for the user: all separate network calls, some of which must be gating the later per-calendar calls.

It's not just that: it also makes calls for notifications, groups, user status and multiple heartbeats to complete the page, all before it tries to get the calendar details.

This is why I think it feels slow: it pulls down the page, and then the javascript pulls down all the bits of data for everything on the screen with individual calls, waiting for the responses before it can make further calls, of which there can be N many depending on what the user is doing.

So across the local network (2.5Gbps) that is a second, and most of it is spent waiting on the network. If I use the regular 4G level of throttling it takes 33.10 seconds! Really goes to show how badly this design does with extra latency.
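The waterfall described above can be modeled in a few lines. This is a hedged toy sketch, not Nextcloud's actual code: `fakeFetch` simulates a fixed 30ms round trip, and the two loaders contrast awaiting N per-calendar calls one by one versus issuing them together.

```javascript
// Toy model of the request waterfall: each "request" costs one
// simulated round trip (30 ms here), and the page needs N calendars.
const RTT_MS = 30;
const fakeFetch = (url) =>
  new Promise((resolve) => setTimeout(() => resolve({ url }), RTT_MS));

// One call per calendar, awaited one by one: N round trips stack up.
async function loadSequential(calendarIds) {
  const results = [];
  for (const id of calendarIds) {
    results.push(await fakeFetch(`/calendars/${id}`));
  }
  return results;
}

// The same N calls issued together: roughly one round trip in total.
async function loadParallel(calendarIds) {
  return Promise.all(calendarIds.map((id) => fakeFetch(`/calendars/${id}`)));
}

async function main() {
  const ids = Array.from({ length: 10 }, (_, i) => i);

  let t0 = Date.now();
  await loadSequential(ids);
  const sequentialMs = Date.now() - t0;

  t0 = Date.now();
  await loadParallel(ids);
  const parallelMs = Date.now() - t0;

  console.log(`sequential: ~${sequentialMs} ms, parallel: ~${parallelMs} ms`);
}

main();
```

With a real 4G-level RTT in place of the 30 ms constant, the sequential version degrades linearly with N, which matches the 33-second throttled measurement above.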

riskable•3h ago
I was going to say... The size of the JS only matters the first time you download it, unless there are a lot of tiny files instead of a bundle or two. What the article is complaining about doesn't seem like the root cause of the slowness.

When it comes to JS optimization in the browser there's usually a few great big smoking guns:

    1. Tons of tiny files: Bundle them! Big bundle > zillions of lazy-loaded files.
    2. Lots of AJAX requests: We have WebSockets for a reason!
    3. Race conditions: Fix your bugs :shrug:
    4. Too many JS-driven animations: Use CSS or JS that just manipulates CSS.
Nextcloud appears to be slow because of #2. Both #1 and #2 are dependent on round-trip times (HTTP request to server -> HTTP response to client) which are the biggest cause of slowness on mobile networks (e.g. 5G).

Modern mobile network connections have plenty of bandwidth to deliver great big files/streams but they're still super slow when it comes to round-trip times. Knowing this, it makes perfect sense that Nextcloud would be slow AF on mobile networks because it follows the REST philosophy.

My controversial take: GIVE REST A REST already! WebSockets are vastly superior and they've been around for FIFTEEN YEARS now. Do I understand why they're so much lower latency than REST calls on mobile networks? Not really: In theory, it's still a round-trip but for some reason an open connection can pass data through an order of magnitude (or more) lower latency on something like a 5G connection.

fwlr•3h ago
15MB of JavaScript is 15MB of code that your browser is trying to execute. It’s the same principle as “compiling a million lines of code takes a lot longer than compiling a thousand lines”.
riskable•2h ago
It's a lot more complicated than that. If I have a 15MB .js file and it's just a collection of functions that get called on-demand (later), that's going to have a very, very low overhead because modern JS engines JIT compile on-the-fly (as functions get used) with optimization happening for "hot" stuff (even later).

If there's 15MB of JS that gets run immediately after page load, that's a different story. Especially if there's lots of nested calls. Ever drill down deep into a series of function calls inside the performance report for the JS on a web page? The more layers of nesting you have, the greater the overhead.

DRY as a concept is great from a code readability standpoint, but it's not ideal for performance when it comes to things like JS execution (haha). I'm actually disappointed that modern bundlers don't normally inline calls at the JS layer. IMHO, they rely too much on the JIT to optimize hot call sites when that could have been done by the bundler. Instead, bundlers tend to optimize for file size, which is becoming less and less of a concern as bandwidth has far outpaced JS bundle sizes.

The entire JS ecosystem is a giant mess of "tiny package does one thing well" that is dependent on n layers of "other tiny package does one thing well." This results in LOADS of unnecessary nesting when the "tiny package that does one thing well" could've just written their own implementation of that simple thing it relies on.

Don't think of it from the perspective of, "tree shaking is supposed to take care of that." Think of it from the perspective of, "tree shaking is only going to remove dead/duplicated code to save file size." It's not going to take that 10-line function that deals with <whatever> and put that logic right where it's used (in order to shorten the call tree).

Joeri•2h ago
That 15MB still needs to be parsed on every page load, even if it runs in interpreted mode. And on low-end devices there's very little cache, so the working set is likely to be far bigger than available cache, which causes performance to crater.
riskable•1h ago
Ah, that's the thing: "on page load". A one-time expense! If you're using modern page routing, "loading a new URL" isn't actually loading a new page... The client is just simulating it via your router/framework by updating the page URL and adding an entry to the history.

Also, 15MB of JS is nothing on modern "low end devices". Even an old, $5 Raspberry Pi 2 won't flinch at that and anything slower than that... isn't my problem! Haha =)

There comes a point where supporting 10yo devices isn't worth it when what you're offering/"selling" is the latest & greatest technology.

It shouldn't be, "this is why we can't have nice things!" It should be, "this is why YOU can't have nice things!"

snovv_crash•1h ago
When you write code with this mentality it makes my modern CPU with 16 cores at 4GHz and 64GB of RAM feel like a Pentium 3 running at 900MHz with 512MB of RAM.

Please don't.

binary132•37m ago
THANK YOU
fluoridation•3h ago
>Do I understand why they're so much lower latency than REST calls on mobile networks? Not really: In theory, it's still a round-trip but for some reason an open connection can pass data through an order of magnitude (or more) lower latency on something like a 5G connection.

It's because a TLS handshake takes more than one roundtrip to complete. Keeping the connection open means the handshake needs to be done only once, instead of over and over again.

riskable•2h ago
Yes and no: There's still a rather large latency improvement even when you're using plain HTTP (not that you should go without encryption).

I was very curious so I asked AI to explain why websockets would have such lower latency than regular HTTP and it gave some (uncited, but logical) reasons:

Once a WebSocket is open, each message avoids several sources of delay that an HTTP request can hit—especially on mobile. The big wins are skipping connection setup and radio wakeups, not shaving a few header bytes.

Why WebSocket “ping/pong” often beats HTTP GET /ping on mobile

    No connection setup on the hot path
        HTTP (worst case): DNS + TCP 3‑way handshake + TLS handshake (HTTPS) before you can send the request. On mobile RTTs (60–200+ ms), that’s 1–3 extra RTTs, i.e., 100–500+ ms just to get started.
        HTTP with keep‑alive/H2/H3: Better (no new TCP/TLS), but pools can be empty or closed by OS/radios/idle timers, so you still pay setup sometimes.
        WebSocket: You pay the TCP+TLS+Upgrade once. After that, a ping is just one round trip on an already‑open connection.


    Mobile radio state promotions
        Cellular modems drop to low‑power states when idle. A fresh HTTP request can force an RRC “promotion” from idle to connected, adding tens to hundreds of ms.
        A long‑lived WebSocket with periodic keepalives tends to keep the radio in a faster state or makes promotion more likely to already be done, so your message departs immediately.
        Trade‑off: keeping the radio “warm” costs battery; most realtime apps tune keepalive intervals to balance latency vs power.


    Fewer app/stack layers per message
        HTTP request path: request line + headers (often cookies, auth), routing/middleware, logging, etc. Even with HTTP/2 header compression, the server still parses and runs more machinery.
        WebSocket after upgrade: tiny frame parsing (client→server frames are 2‑byte header + 4‑byte mask + payload), often handled in a lightweight event loop. Much less per‑message work.
         

    No extra round trips from CORS preflight
        A simple GET usually avoids preflight, but if you add non‑safelisted headers (e.g., Authorization) the browser will first send an OPTIONS request. That’s an extra RTT before your GET.
        WebSocket doesn’t use CORS preflights; the Upgrade carries an Origin header that servers can validate.


    Warm path effects
        Persistent connections retain congestion window and NAT/firewall state, reducing first‑packet delays and occasional SYN drops that new HTTP connections can encounter on mobile networks.

What about encryption (HTTPS/WSS)?

    Handshake cost: TLS adds 1–2 RTTs (TLS 1.3 is 1‑RTT; 0‑RTT is possible but niche). If you open and close HTTP connections frequently, you keep paying this. A WebSocket pays it once, then amortizes it over many messages.
    After the connection is up, the per‑message crypto cost is small compared to network RTT; the latency advantage mainly comes from avoiding repeated handshakes.
     
How much do headers/bytes matter?

    For tiny messages, both HTTP and WS fit in one MTU. The few hundred extra bytes of HTTP headers rarely change latency meaningfully on mobile; the dominant factor is extra round trips (connection setup, preflight) and radio state.
     
When the gap narrows

    If your HTTP requests reuse an existing HTTP/2 or HTTP/3 connection, have no preflight, and the radio is already in a connected state, a minimal GET /ping and a WS ping/pong both take roughly one network RTT. In that best case, latencies can be similar.
    In real mobile conditions, the chances of hitting at least one of the slow paths above are high, so WebSocket usually looks faster and more consistent.
fluoridation•2h ago
Wow. Talk about inefficiency. It just said the same thing I did, but using twenty times as many characters.

>Yes and no: There's still a rather large latency improvement even when you're using plain HTTP (not that you should go without encryption).

Of course. An unencrypted HTTP request takes a single roundtrip to complete. The client sends the request and receives the response. The only additional cost is to set up the connection, which is also saved when the connection is kept open with a websocket.

cloudfudge•38m ago
Yes and no. Have you considered that the problem is that a TLS handshake takes more than one round trip to complete?

/s

binary132•43m ago
doesn’t HTTP keep connections open?
fluoridation•4m ago
It's up to the client to do that. I'm merely explaining why someone would see a latency improvement switching from HTTPS to websockets. If there's no latency improvement then yes, the client is keeping the connection alive between requests.
Yokolos•3h ago
I've never seen anybody recommend WebSockets instead of REST. I take it this isn't a widely recommended solution? Do you mean specifically for mobile clients only?
riskable•2h ago
After all my years of web development, my rules are thus:

    * If the browser has an optimal path for it, use HTTP (e.g. images where it caches them automatically or file uploads where you get a "free" progress API).
    * If I know my end users will be behind some shitty firewall that can't handle WebSockets (like we're still living in the early 2010s), use HTTP.
    * Requests will be rare (per client):  Use HTTP.
    * For all else, use WebSockets.
WebSockets are just too awesome! You can use a simple event dispatcher for both the frontend and the backend to handle any given request/response and it makes the code sooooo much simpler than REST. Example:

    WSDispatcher.on("pong", pongFunc);
...and `WSDispatcher` would be the (singleton) object that holds the WebSocket connection and has `on()`, `off()`, and `dispatch()` functions. When the server sends a message like `{"type": "pong", "payload": "<some timestamp>"}`, the client calls `WSDispatcher.dispatch("pong", "<some timestamp>")` which results in `pongFunc("<some timestamp>")` being called.

It makes reasoning about your API so simple and human-readable! It's also highly performant and fully async. With a bit of Promise wrapping, you can even make it behave like a synchronous call in your code which keeps the logic nice and concise.
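A minimal sketch of that dispatcher pattern, with the socket creation left out so the event logic stands alone (the class and method names follow the comment above, but this is an illustrative reconstruction, not the commenter's actual code):

```javascript
// Minimal event dispatcher for WebSocket messages shaped like
// {"type": "...", "payload": ...}. A real version would wrap an actual
// WebSocket and call handleMessage() from its onmessage handler.
class WSDispatcher {
  constructor() {
    this.listeners = new Map(); // type -> Set of callbacks
  }

  on(type, fn) {
    if (!this.listeners.has(type)) this.listeners.set(type, new Set());
    this.listeners.get(type).add(fn);
  }

  off(type, fn) {
    this.listeners.get(type)?.delete(fn);
  }

  dispatch(type, payload) {
    for (const fn of this.listeners.get(type) ?? []) fn(payload);
  }

  // Entry point where a real socket's onmessage would feed server frames:
  handleMessage(rawJson) {
    const { type, payload } = JSON.parse(rawJson);
    this.dispatch(type, payload);
  }
}

const dispatcher = new WSDispatcher();
dispatcher.on("pong", (ts) => console.log("pong at", ts));
dispatcher.handleMessage('{"type": "pong", "payload": "12:00:00"}');
```

From here, a promise-based `request()` helper is just a matter of registering a one-shot listener for the matching `"<type>:ok"` event before sending.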

In my latest pet project (collaborative editor) I've got the WebSocket API using a strict "call"/"call:ok" structure. Here's an example from my WEBSOCKET_API.md:

    ### Create Resource
    ```javascript
    // Create story
    send('resources:create', {
      resource_type: 'story',
      title: 'My New Story',
      content: '',
      tags: {},
      policy: {}
    });
    
    // Create chapter (child of story)
    send('resources:create', {
      resource_type: 'chapter',
      parent_id: 'story_abc123', // This would actually be a UUID
      title: 'Chapter 1'
    });
    
    // Response:
    {
      type: 'resources:create:ok', // <- Note the ":ok"
      resource: { id: '...', resource_type: '...', ... }
    }
    ```
I've got a `request()` helper that makes the async nature of the WebSocket feel more like a synchronous call. Here's what that looks like in action:

    const wsPromise = getWsService(); // Returns the WebSocket singleton
    
    // Create resource (story, chapter, or file)
    async function createResource(data: ResourcesCreateRequest) {
      loading.value = true;
      error.value = null;
      try {
        const ws = await wsPromise;
        const response = await ws.request<ResourcesCreateResponse>(
          "resources:create",
          data // <- The payload
        );
        // resources.value because it's a Vue 3 `ref()`:
        resources.value.push(response.resource); 
        return response.resource;
      } catch (err: any) {
        error.value = err?.message || "Failed to create resource";
        throw err;
      } finally {
        loading.value = false;
      }
    }
For reference, errors are returned in a different, more verbose format where "type" is "error" in the object that the `request()` function knows how to deal with. It used to be ":err" instead of ":ok" but I made it different for a good reason I can't remember right now (LOL).

Aside: There are still THREE firewalls that suck so bad they can't handle WebSockets: SophosXG Firewall, WatchGuard, and McAfee Web Gateway.

DecoPerson•1h ago
WebSockets are the secret ingredient to amazing low- to medium-user-count software. If you practice using them enough and build a few abstractions over them, you can produce incredible “live” features that REST-designs struggle with.

Having used WebSockets a lot, I’ve realised that it’s not the simple fact that WebSockets are duplex or that it’s more efficient than using HTTP long-polling or SSEs or something else… No, the real benefit is that once you have a “socket” object in your hands, and this object lives beyond the normal “request->response” lifecycle, you realise that your users DESERVE a persistent presence on your server.

You start letting your route handlers run longer, so that you can send the result of an action, rather than telling the user to “refresh the page” with a 5-second refresh timer.

You start connecting events/pubsub messages to your users and forwarding relevant updates over the socket you already hold. (Trying to build a delta update system for polling is complicated enough that the developers of most bespoke business software I’ve seen do not go to the effort of building such things… But with WebSockets it’s easy, as you just subscribe before starting the initial DB query and send all broadcasted updates events for your set of objects on the fly.)

You start wanting to output the progress of a route handler to the user as it happens (“Fetching payroll details…”, “Fetching timesheets…”, “Correlating timesheets and clock in/out data…”, “Making payments…”).

Suddenly, as a developer, you can get live debug log output IN THE UI as it happens. This is amazing.

AND THEN YOU WANT TO CANCEL SOMETHING because you realise you accidentally put in the actual payroll system API key. And that gets you thinking… can I add a cancel button in the UI?

Yes, you can! Just make a ‘ctx.progress()’ method. When called, if the user has cancelled the current RPC, then throw a RPCCancelled error that’s caught by the route handling system. There’s an optional first argument for a progress message to the end user. Maybe add a “no-cancel” flag too for critical sections.

And then you think about live collaboration for a bit… that’s a fun rabbit hole to dive down. I usually just do “this is locked for editing” or check the per-document incrementing version number and say “someone else edited this before you started editing, your changes will be lost — please reload”. Figma cracked live collaboration, but it was very difficult based on what they’ve shared on their blog.

And then… one day… the big one hits… where you have a multistep process and you want Y/N confirmation from the user or some other kind of selection. The sockets are duplex! You can send a message BACK to the RPC client, and have it handled by the initiating code! You just need to make it so devs can add event listeners on the RPC call handle on the client! Then, your server-side route handler can just “await” a response! No need to break up the handler into multiple functions. No need to pack state into the DB for resumability. Just await (and make sure the Promise is rejected if the RPC is cancelled).

If you have a very complex UI page with live-updating pieces, and you want parts of it to be filterable or searchable… This is when you add “nested RPCs”. And if the parent RPC is cancelled (because the user closes that tab, or navigates away, or such) then that RPC and all of its children RPCs are cancelled. The server-side route handler is a function closure, that holds a bunch of state that can be used by any of the sub-RPC handlers (they can be added with ‘ctx.addSubMethod’ or such).

The end result is: while building out any feature of any “non-web-scale” app, you can easily add levels of polish that are simply too annoying to obtain when stuck in a REST point of view. Sure, it’s possible to do the same thing there, but you’ll get frustrated (and so development of such features will not be prioritised). Also, perf-wise, REST is good for “web scale” / high user counts, but you will hit weird latency issues if you try to use it for live, duplex comms.

WebSockets (and soon HTTP3 transport API) are game-changing. I highly recommend trying some of these things.

jauntywundrkind•2h ago
Sync Conf is next week, and this sort of issue is exactly the kind of thing I hope can just go away. https://syncconf.dev/

Efforts like Electric SQL to have APIs/protocols for bulk fetching all changes (to a "table") is where it's at. https://electric-sql.com/docs/api/http

It's so rare for teams to do data loading well, rarer still that we get effective caching, and often a product's footing here only degrades with time. The various sync ideas out there offer such an alluring potential: a consistent way to get the client the updated live data it needs.

Side note, I'm also hoping the js / TC39 source phase imports proposal aka import source can help large apps like NextCloud defer loading more of its JS until needed. But the waterfall you call out here seems like the real bad side (of NextCloud's architecture)! https://github.com/tc39/proposal-source-phase-imports

bityard•23m ago
The thing that kills me is that Nextcloud had an _amazing_ calendar a few years ago. It was way better than anything else I have used. (And I tried a lot, even the calendar add-on for Thunderbird. Which may or may not be built in these days, I can't keep track.)

Then at some point the Nextcloud calendar was "redesigned" and now it's completely terrible. Aesthetically, it looks like it was designed for toddlers. Functionally, adding and editing events is flat out painful. Trying to specify a time range for an event is weird and frustrating. It's better than not having a calendar, but only just.

There are plenty of open source calendar _servers_, but no good open source web-based calendars that I have been able to find.

dingdingdang•4h ago
Having at some point maintained a soft fork / patch-set for Nextcloud.. yes, there is so much performance left on the table. With a few basic patches the file manager, for example, sped up by orders of magnitude in render speed.

The issue remains that the core itself feels like layers upon layers of encrusted code that instead of being fixed have just had another layer added ... "something fundamental wrong? Just add Redis as a dependency. Does it help? Unsure. Let's add something else. Don't like having the config in a db? Let's move some of it to ini files (or vice versa)..etc..etc." it feels like that's the cycle and it ain't pretty and I don't trust the result at all. Eventually abandoned the project.

Edit: at some point I reckon some part of the ecosystem recognised some of these issues and hence Owncloud remade a large part of the fundamentals in Golang. It remains unknown to me whether this sorted things or not. All of these projects feel like they suffer badly from "overbuild".

Edit-edit: another layer to add to the mix is that the "overbuild" situation is probably largely what allows the hosting economy around these open source solutions to thrive since Nextcloud and co. are so over-engineered and badly documented that they -require- a dedicated sys-admin team to run well.

INTPenis•3h ago
This is my theory as well. NC has grown gradually in silos almost, every piece of it is some plugin they've imported from contributions at some point.

For example the reason there's no cohesiveness with a common websocket bus for all those ajax calls is because they all started out as a separate plugin.

NC has gone full modularity and lost performance for it. What we need is a more focused and cohesive tool for document sharing.

Honestly I think today with IaC and containers, a better approach for selfhosting is to use many tools connected by SSO instead of one monstrosity. The old Unix philosophy, do one thing but do it well.

rahkiin•1h ago
This still needs cohesive authorization and central file sharing and access rules across apps. And some central concept of projects to move all content away from people and into the org and roles
redrblackr•2h ago
Two things:

1. Did you open a backport request with these basic patches? If you have orders-of-magnitude speed improvements it would be awesome to share!

2. You definitely don't need an entire sysadmin team to run nextcloud. At my work (a large organisation) there are three instances running for different purposes, of which only one is run by more than one person. I also run both my personal instance and one for a nonprofit with ~100 people; it's really not much work after setup (and plenty of other systems are a lot more complicated to set up, trust me)

bfkwlfkjf•4h ago
I've never used nextcloud, but I always imagined that the point is you can run the services but then plug in any calendar app etc. You don't have to be running Nextcloud's calendar, I thought. Did I misunderstand how it works?
imcritic•4h ago
Their calendar plugin provides CalDAV, so you could just use your local calendar app that syncs with the server over that protocol.
bfkwlfkjf•14m ago
Sooooo why not just host any caldav server instead? Like, why is nextcloud so popular compared to self hosting caldav?
glenstein•3h ago
If dav works best for you, you're using it right.

I would assume that the people for whom a slow web-based calendar is a problem (among other slow things on the web interface) are people who would want to use it if it performed well.

They wouldn't just make a bad slow web interface on purpose to enlighten people as to how bad web interfaces are, as a complicated way of pushing them toward integrated apps.

bogwog•4h ago
Nextcloud is bloated and slow, but it works and is reliable. I've been running a small instance in a business setting with around 8 daily users for many years. It is rock solid and requires zero maintenance.

But people rarely use the web apps. Instead, it's used more like a NAS with the desktop sync client being the primary interface. Nobody likes the web apps because they're slow. The Windows desktop sync client has a really annoying update process, but other than that is excellent.

I could replace it with a traditional NAS, but the main feature keeping me there is an IMAP authentication plugin. This allows users to sign in with their business email/password. It works so well and makes it so much easier to manage user accounts, revoke access, do password resets, etc.

imcritic•4h ago
> Nobody likes the web apps because they're slow.

Web apps don't have to be slow. I prefer web apps over system apps, as I don't have to install extra programs into my system and I have more control over those apps:

- a service decides it's a good idea to load some tracking stuff from 3rd-party? I just uMatrix block it;

- a page has an unwanted element? I just uBlock block it;

- a page could have a better look? I just userstyle style it;

- a page is missing something that could be added on client side? I just userscript script it

Jaxan•1h ago
Do you also prefer a web-based file browser? My main use for Nextcloud is files and a desktop sync is crucial and integrates with the OS.
catapart•4h ago
Just like any other modern app: first you make it work using frameworks. Then, as soon as the "Core" product is done - just a few more features - then we'll circle back around to ripping out those bloated frameworks for something more lithe. Shouldn't be more than two weeks, now. Most of the base stuff is done. Just another feature or two. I mean, a little longer, if we have some issues with those features, sure. But we'll get back around to a simpler UI right after! Just those features, their bugs and support, and then - well documentation. Just the minimum stuff. Enough to know what we did when we come back to it. But we'll whip up those docs and then it's right on to slimming down the frontend! Won't be long now...
cbondurant•4h ago
I've used nextcloud for close to I think 8 years now as a replacement for google drive.

However my need for something like google drive has reduced massively, and nextcloud continues to be a massive maintenance pain due to its frustratingly fast release cadence.

I don't want to have to log into my admin account and baby it through a new release and migration every four months! Why aren't there any LTS branches? The amount of admin work that nextcloud requires only makes sense for when you legitimately have a whole group of people with accounts that are all utilizing it regularly.

This is honestly the kick in the pants I need to find a solution that actually fits my current use-case. (I just need to sync my fuckin keepass vault to my phone, man.) Syncthing looks promising with significantly less hassle...

tracker1•3h ago
Might also consider Vaultwarden/Bitwarden as a self-host alternative. Yeah it's client-server... that said, been pretty happy as a user.
jw_cook•1h ago
The linuxserver.io image for Nextcloud requires considerably less babysitting for upgrades: https://docs.linuxserver.io/images/docker-nextcloud

As long as you only upgrade one major version at a time, it doesn't require putting the server in maintenance mode or using the occ cli.

aborsy•4h ago
A good thing about Nextcloud is that by learning one tool, you get a full suite of collaboration apps: sync, file sharing, calendar, notes, collectives, office (via Collabora or OnlyOffice), and more. These features are pretty good, plus, you get things like photo management and Talk, which are decent.

Sure, some people might argue that there are specialized tools for each of these functions. And that’s true. But the tradeoff is that you'd need to manage a lot more with individual services. With Nextcloud, you get a unified platform that might be good enough to run a company, even if it’s not very fast and some features might have bugs.

The AIO has addressed issues like update management and reliability; it's been very good in my experience. You get a fully tested, ready-to-go package from Nextcloud.

That said, I wonder: if the platform were rewritten in a more performance-efficient language than PHP, with a simplified codebase and trimmed-down features, would it run faster? The UI could also be more polished; the Synology DSM web interface, for example, looks really nice!

troyvit•3h ago
The thing I don't get is that based on the article the front-end is as bloated as the back-end.

That said there's an Owncloud version called Infinite Scale which is written in Go.[1] Honestly I tried to go that route but its requirements are pretty opinionated (Ubuntu LTS 22.04 or 24.04 and lots of docker containers littering your system) but it looks like it's getting a lot of development.

[1] https://doc.owncloud.com/

c-hendricks•2h ago
> it's requirements are pretty opinionated (Ubuntu LTS 22.04 or 24.04

Hm?

> This guide describes an installation of Infinite Scale based on Ubuntu LTS and docker compose. The underlying hardware of the server can be anything as listed below as long it meets the OS requirements defined in the Software Stack

https://doc.owncloud.com/ocis/next/depl-examples/ubuntu-comp...

The Software Stack section goes on to say it just needs Docker, Docker Compose, shell access, and sudo.

Ubuntu and sudo are probably only mentioned because the guide walks you through installing docker and docker compose.

s1mplicissimus•2h ago
rewriting in a lower-level language won't do too much for NC, because it's mostly slow due to inefficient IO organization - things like mountains of XHRs, inefficient fetching, db querying etc. - None of that will be implicitly fixed by a rewrite in any language and can be fixed in the PHP stack as well. I think one of the reasons that helped OC/NC get off the ground was precisely that the sysadmins running it can often do a little PHP, which is just enough to get it customized for the client. Raising the bar for contribution by using lower level languages might not be a desirable change of direction in that case.
xingped•3h ago
I gave up on using Nextcloud because every time it updated it accumulated more and more errors, and there was no way I was going to use software that I had to troubleshoot every single update. Also, the defaults for pictures are apparently quite stupid: instead of making and showing tiny thumbnails, the thumbnails are unnecessarily large, and loading them for a folder of pictures takes forever. You can apparently tell it to make smaller thumbnails, but again, why am I having to fix everything myself? These should be sane defaults. Unfortunately, I just can't trust Nextcloud.
paularmstrong•3h ago
I gave up updating Nextcloud. It works for what I use it for and I don't feel like I'm missing anything. I'd rather not spend 4+ hours updating and fixing confusing issues without any tangible benefit.
madeofpalk•3h ago
I don't think this article actually does a great job of explaining why Nextcloud feels slow. It shows lots of big numbers for MBs of Javascript being downloading, but how does that actually impact the user experience? Is the "slow" Nextcloud just sitting around waiting for these JS assets to load and parse?

From my experience, this doesn't meaningfully impact performance. Performance problems come from "accidentally quadratic" logic in the frontend, poorly optimised UI updates, and too many API calls.

shermantanktop•3h ago
Agreed. Plus if it truly downloads all of that every time, something has gone wrong with caching.

Overeager warming/precomputation of resources on page load (rather than on use) can be a culprit as well.

hamburglar•3h ago
Relying on cache to cover up a 15MB JavaScript load is a serious crutch.
hamburglar•3h ago
It downloads a lot of JavaScript, it decompresses a lot of JavaScript, it parses a lot of JavaScript, it runs a lot of JavaScript, it creates a gazillion onFoundMyNavel event callbacks which all run JavaScript, it does all manner of uncontrolled DOM-touching while its millions of script fragments do their thing, it xhr’s in response to xhrs in response to DOM content ready events, it throws and swallows untold exceptions, has several dozen slightly unoptimized (but not too terrible) page traversals, … the list goes on and on. The point is this all adds up, and having 15MB of code gives a LOT of opportunity for all this to happen. I used to work on a large site where we would break out the stopwatch and paring knife if the homepage got to more than 200KB of code, because it meant we were getting sloppy.
bob1029•3h ago
15+ megabytes of executable code begins to look quite insane when you start to take a gander at many AAA games. You can produce a non-trivial Unity WebGL build that fits in <10 megabytes.
hamburglar•3h ago
It’s the kind of code size where you analyze it and find 13 different versions of jquery and a hundred different bespoke console.log wrappers.
72deluxe•2h ago
Yes and Windows 3.11 came on 6 1.44MB floppy disks. Modern software is so offensive.
hamburglar•2h ago
Windows 3.11 also wasn’t shipped to you over a cellular connection when you clicked on it. If it were, 6x1.44MB would have been considered quite unacceptable.
kirito1337•3h ago
I don't think I will ever use something like that. I work in over 10 PCs everyday and my only synchronisation is a 16 GB USB stick. I keep all important work, apps and files there.
s_ting765•3h ago
Nextcloud server is written in PHP. Of course it is slow. It's also designed to be used as an office productivity suite meaning a lot of features you may not actually use are enabled by default and those services come with their own cronjobs and so on.
m-a-r-c-e-l•3h ago
PHP is super-fast today. I've built 2 customer facing web products with PHP which made each a million dollar business. And they were very fast!

https://dev.to/dehemi_fabio/why-php-is-still-worth-learning-...

s_ting765•3h ago
At the risk of stating the obvious: PHP is limited to single-threaded processes and has garbage collection. It's certainly not the fastest language one could use for handling multiple concurrent jobs.
rafark•2h ago
They didn’t say it was the fastest. Just that the language per se is fast enough.
s_ting765•1h ago
> the language per se is fast enough

I literally explained why this is not the case.

And Nextcloud being slow in general is not a new complaint from users.

zeppelin101•3h ago
The major shortcoming of NextCloud, in my opinion, is that it's not able to do sync over LAN. Imagine wanting to synchronize 1TB+ of data and not being able to do so over a 1 Gbps+ local connection, when another local device has all the necessary data. There is some workaround involving "split DNS", but I haven't gotten around to it. Other than that, I thought NC was absolutely fantastic.
accrual•3h ago
I had a similar issue with a public game server that required connecting through the WAN even if clients were local on the LAN. I considered split DNS (resolving the name differently depending on the source) but it was complicated for my setup. Instead I found a one-line solution on my OpenBSD router:

    pass in on $lan_if inet proto tcp to (egress) port 12345 rdr-to 192.168.1.10
It basically says "pass packets from the LAN interface towards the WAN (egress) on the game port and redirect the traffic to the local game server". The local client doesn't know anything happened, it just worked.
jw_cook•2h ago
Check if your router has an option to add custom DNS entries. If you're using OpenWRT, for example, it's already running dnsmasq, which can do split DNS relatively easily: https://blog.entek.org.uk/notes/2021/01/05/split-dns-with-dn...

If not, and you don't want to set up dnsmasq just for Nextcloud over LAN, then DNS-based adblock software like AdGuard Home would be a good option (as in, it would give you more benefit for the amount of time/effort required). With AdGuard, you just add a line under Filters -> DNS rewrites. PiHole can do this as well (it's been awhile since I've used it, but I believe there's a Local DNS settings page).

Otherwise, if you only have a small handful of devices, you could add an entry to /etc/hosts (or equivalent) on each device. Not pretty, but it works.
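For reference, the dnsmasq side of that is a single configuration line (hostname and LAN IP below are made-up examples):

```
# dnsmasq.conf: answer this name with the LAN address,
# overriding whatever the public DNS record says
address=/cloud.example.com/192.168.1.10
```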

redrblackr•1h ago
Or just use ipv6!

You could also upload directly to the filesystem and then run occ files:scan, or if the storage is mounted as external it just works.

Another method is to set your machines /etc/hosts (or equivalent) to the local IP of the instance (if the device is only on lan you can keep it, otherwise remove it after the large transfer).

Now your router shouldn't send traffic destined for itself out to the internet; it just loops it internally, so it never goes over your ISP's connection. That means running over the LAN only helps if your switch is faster than your router..

Jaxan•1h ago
I use it on LAN without a problem (using mDNS). Sure it runs with self signed certificates, but that’s ok with me.
DrammBA•43m ago
> The major shortcoming of NextCloud, in my opinion, is that that it's not able to do sync over LAN.

That’s an interesting way to describe a lack of configuration on your part.

Imagine me saying: "The major shortcoming of Google drive, in my opinion, is that that it's not able to sync files from my phone. There is some workaround involving an app called 'Google drive' that I have to install on my phone, but I haven't gotten around to it. Other than that, Google drive is absolutely fantastic."

exabrial•3h ago
>For context, I consider 1 MB of Javascript to be on the heavy side for a web page/app.

I feel like > 2kb of Javascript is heavy. Literally not needed.

tracker1•3h ago
While I tend to agree... I've been on enough relatively modern web apps that can hit 8mb pretty easily, usually because bundling and tree shaking are broken. You can save a lot by being judicious.

IMO, the worst offenders are when you bring in charting/graphing libraries into things when either you don't really need them, or otherwise not lazy loading where/when needed. If you're using something like React, then a little reading on SVG can do wonders without bloating an application. I've ripped multi-mb graphing libraries out to replace them with a couple components dynamically generating SVG for simple charting or overlays.
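A dependency-free sparkline is about this much code: a pure function building an SVG string, which in React would become a `<svg><polyline/></svg>` element instead (this is my own sketch, not from any library):

```typescript
// Map a series of values onto an SVG polyline, scaled to fit the viewBox.
// Replaces a charting library for the simple "line over time" case.
function sparkline(values: number[], width = 100, height = 20): string {
  const min = Math.min(...values);
  const max = Math.max(...values);
  const span = max - min || 1;               // avoid divide-by-zero on flat data
  const step = width / (values.length - 1 || 1);
  const points = values
    .map((v, i) => {
      const x = (i * step).toFixed(1);
      const y = (height - ((v - min) / span) * height).toFixed(1);
      return `${x},${y}`;
    })
    .join(" ");
  return `<svg viewBox="0 0 ${width} ${height}">` +
    `<polyline fill="none" stroke="currentColor" points="${points}"/></svg>`;
}
```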

dmit•2h ago
Preact has been fairly faithful to staying <10k (compressed)! (even though they haven't updated the original <3k claim in forever)
ndom91•3h ago
This post completely misses the point. Linear downloads ~6.1mb of JS over the network, which decompresses to ~31mb, and still feels snappy.

Applications like linear and nextcloud aren't designed to be opened and closed constantly. You open them once and then work in that tab for the remainder of your session.

As others have pointed out in this thread, "feeling slow" is mostly due to the number of fetch requests and the backend serving those requests.

ndom91•3h ago
Many have brought up more websockets instead of REST API calls. It looks like they're already working in that direction.. scroll down to "Developer tools and APIs": https://nextcloud.com/blog/nextcloud-hub25-autumn/
rpgbr•2h ago
I wonder how bewCloud[1] stacks up against NextCloud, since it's meant to be a “modern and simpler alternative” to it. Has anyone tested it?

[1] https://bewcloud.com/

PunchyHamster•2h ago
It is slow, and the code seems messy enough to be fragile. It's also in PHP, which doesn't help performance.
jw_cook•2h ago
The article mentions Vikunja as an alternative to Nextcloud Tasks, and I can give it a solid recommendation as well. I wanted a self-hosted task management app with some lightweight features for organizing tasks into projects, ideally with a kanban view, but without a full-blown PM feature set. I tried just about every task management app out there, and Vikunja was the only one that ticked all the boxes for me.

Some specific things I like about it:

  * Basic todo app features are compatible with CalDAV clients like tasks.org
  * Several ways of organizing tasks: subtasks, tags, projects, subprojects, and custom filters
  * list, table, and kanban views
  * A reasonably clean and performant frontend that isn't cluttered with stuff I don't need (i.e., not Jira)
And some other things that weren't hard requirements, but have been useful for me:

  * A REST API, which I use to export task summaries and comments to markdown files (to make them searchable along with my other plaintext notes)
  * A 3rd party CLI tool: https://gitlab.com/ce72/vja
  * OIDC integration (currently using it with Keycloak)
  * Easily deployable with docker compose
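As a sketch of that export step (the task shape and field names below are my own assumptions for illustration, not Vikunja's exact schema; the real script would first fetch the JSON from Vikunja's REST API):

```typescript
// Convert a fetched task plus its comments into a markdown note that a
// plaintext search tool can index. Field names are assumed, not Vikunja's.
interface TaskExport {
  title: string;
  description: string;
  comments: { author: string; comment: string }[];
}

function taskToMarkdown(task: TaskExport): string {
  const lines = [`# ${task.title}`, "", task.description];
  if (task.comments.length > 0) {
    lines.push("", "## Comments");
    for (const c of task.comments) {
      lines.push(`- **${c.author}**: ${c.comment}`);
    }
  }
  return lines.join("\n");
}
```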
mxuribe•27m ago
I know this post is more about nextcloud...but can i just say this one feature from Vikunja "...export task summaries and comments..." sounds great!!! One of the features i seek out when i look for task/project management software is the ability to easily and comprehensively provide nice exports, and that said exports *include comments*!!

Either apps lack such an export, or it's very minimal, or it includes lots of things except comments... Sometimes an app might have a REST api, and I'd need to build something non-trivial to start pulling out the comments, etc. I feel like it's silly in this day and age.

My desire for comments to be included in exports is for local search...but also because i use comments for sort of thinking aloud, sort of like an inline task journaling...and when comments are lacking, it sucks!

In fact, when i hear folks suggest to simply stop using such apps and merely embrace the text file todo approach, they cite having full access to comments as a feature...and i can't dispute their claim! But barely any non-text-based apps highlight the inclusion of comments. So i have to ask: is it just me (who doesn't use a text-based todo workflow), and the folks who *do use* a text-based todo flow, who actually care about access to comments!?!

<rant over>

estimator7292•2h ago
Like most of us I think, I really, really wanted to like nextcloud. I put it on an admittedly somewhat slow dual Xeon server, gave it all 32 threads and many, many gigabytes of ram.

Even on a modern browser on a brand new leading-edge computer, it was completely unusably slow.

Horrendous optimization aside, NC is also chasing the current fad of stripping out useful features and replacing them with oceans of padding. The stock photos app doesn't even have the ability to sort by date! That's been table stakes for a photo viewer since the 20th goddamn century.

When Windows Explorer offers a more performant and featureful experience, you've fucked up real bad.

I would feel incredibly bad and ashamed to publish software in the condition that NextCloud is in. It is IMO completely unacceptable.

macinjosh•2h ago
Javascript making PHP look bad.
dengolius•2h ago
Maybe it because of using PHP?
rafark•2h ago
Nope. Php is sufficiently fast.
jimangel2001•2h ago
Nextcloud is a mess. It tries to do everything. The only reason I keep it in production is because it's a hassle to transition my files and DAVx info elsewhere.

The http upload is miserable, it's slow, it fails with no message, it fails to start, it hangs. When uploading duplicate files the popup is confusing. The UI is slow, the addons break on every update. The gallery is very bad, now we use immich.

atoav•2h ago
As someone who has hosted a few Nextcloud instances for a few years: Nextcloud can be quick if you make it work. If you want to get a good feel for how fast it can be, rent a Hetzner storage box (1TB for below 5 euros a month).

You sadly can't just install nextcloud on your vanilla server and expect it to perform well.

elAhmo•1h ago
One thing that could help with this is to use a CDN for these static assets, while still hosting Nextcloud on your own.

We had a similar situation with some notebooks running in production, which were quite slow to load because they were loading a lot of JS files / WASM just to show the UI. This was not part of our core logic, and using a CDN to load these assets, while still relying on the private prod instance for business logic, helped significantly.

I have a feeling this would be helpful here as well.

gloosx•1h ago
I was expecting the author to open the profiler tab instead of just staring at the network panel. But it's yet another "heavy JavaScript bad" rant.

You really consider 1 MB of JS too heavy for an application with hundreds of features? How exactly are developers supposed to fit an entire web app into that? Why does this minimalism suddenly apply only to JavaScript? Should every desktop app be under 1 MB too? Is Windows Calculator 30 MB binary also an offense to your principles?

What year is it, 2002? Even low-band 5G gives you 30–250 Mbps down. At the top of that range, 20 MB of JS downloads in well under a second. So what's the math behind the 5–10 second figure? What about the cache? Is it turned off for you, so you redownload the whole Nextcloud from scratch every time?

Nextcloud is undeniably slow, but the real reasons show up in the profiler, not the network tab.

znpy•1h ago
> You really consider 1 MB of JS too heavy for an application with hundreds of features? How exactly are developers supposed to fit an entire web app into that? Why does this minimalism suddenly apply only to JavaScript? Should every desktop app be under 1 MB too? Is Windows Calculator 30 MB binary also an offense to your principles?

Yes, I don't know, because it runs in the browser, yes, yes.

j1elo•1h ago
> low-band 5G gives you 30–250

First and foremost, I agree with the meat of your comment.

But I wanted to point out, regarding your comment, that it DOES very much matter that apps meant to be transmitted over a remote connection are, indeed, as slim as possible.

You must be thinking about 5G on a city with good infrastructure, right?

I'm right now having a coffee on a road trip, with a 4G connection, and just loading this HN page took like 8~10 seconds. Imagine a bulky and bloated web app if I needed to quickly check a copy of my ID stored in NextCloud.

It's time we normalize testing network-bounded apps through low-bandwidth, high-latency network simulators.

lurker_jMckQT99•1h ago
(tangential) Reading the comments, several mentioned "copyparty". I'd never heard of it before, haven't used it, and haven't reviewed it, but their "feature showcase" video makes me want to give it a shot https://www.youtube.com/watch?v=15_-hgsX2V0 :)
skeptrune•52m ago
I know that this is supposed to be targeted at NextCloud in particular, but I think it's a good standalone "you should care about how much JavaScript you ship" post as well.

What frustrates me about modern web development is that everyone is focused on making it work much more than on making sure it works fast. Then when you go to push back, the response is always something like "we need to not spend time over-optimizing."

Sent this straight to the team slack haha.

nairboon•26m ago
Microsoft Teams says "hold my beer" and downloads more than 75 MB of JavaScript.
aeldidi•23m ago
Nextcloud is something I have a somewhat love-hate relationship with. On one hand, I've used Nextcloud for ~7 years to backup and provide access to all of my family's photos. We can look at our family pictures and memories from any computer, and it's all private and runs mostly without any headaches.

On the other hand, Nextcloud is so far from being something like Google Docs, and I would never recommend it as a general replacement to someone who can't tolerate "jank", for lack of a better word. There are so many small papercuts you'll notice when using it as a power user. Right off the top of my head, uploading large files is finicky, and no amount of web server config tinkering gets it to always work; thumbnail loading is always spotty, and it's significantly slower than it needs to be (I'm talking orders of magnitude).

With all that said, I'm so grateful for Nextcloud since I don't have a replacement, and I would prefer not having all our baby and vacation pictures feeding some big corporation's AI. We really ought to have a safe, private place to store files in 2025 that the average person can wrap their head around. I only wish my family took better advantage of it, since I'm essentially providing them with unlimited storage.