I do this all the time, right before open sourcing a project. Basically, while it's private, commit quality can be a bit rough, so if I want to open source it, I'll remove .git, make a fresh initial commit, and then open source it. No one needs to see what I do in my private abode :)
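If you ever wanted to script that reset, here is a minimal sketch driven from Node/TypeScript (since that's what the project under discussion uses); the path and commit message are made up:

  // Wipe the private history, then publish a single clean commit.
  import { execSync } from "node:child_process";
  import { rmSync } from "node:fs";

  const repo = "/path/to/project"; // hypothetical project directory
  const run = (cmd: string) => execSync(cmd, { cwd: repo, stdio: "inherit" });

  rmSync(`${repo}/.git`, { recursive: true, force: true }); // drop the old .git entirely
  run("git init");
  run("git add -A");
  run('git commit -m "Initial commit"');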
It's much better to refactor the messy commits, removing the personal or embarrassing stuff; although that might result in a "false" history, a series of smaller-sized commits will usually be much easier to follow than reading a whole code base all at once.
Really, I see a ton of open-source projects that do this, and it results in a lot more opacity and friction than necessary.
It means fewer people are able to check the code and contribute to the project.
Some of my clients have already made this bad decision; with my own personal files I get to avoid it.
And then I saw npm references and thought “in JavaScript?!” But at least it’s TypeScript.
Was that because of team expertise or particular aspects of TS you thought suited the domain?
Is there decade-old software that provides a UI or an API wrapper around these features for a "Google Drive" alternative? Maybe over the SMB (Samba) protocol?
Samba is a complicated piece of software built around protocols from the 90s. It’s designed around the old idea of physical network security, where it’s isolated on a LAN, and it has a long history of serious critical security vulnerabilities (e.g. here’s an RCE from this month: https://cybersecuritynews.com/critical-samba-rce-vulnerabili...).
put all kinds of versioned metadata on docs without coming up with strange encodings, and even though POSIX (and Node.js) offers a lot of FS-related features, it probably makes sense to keep things really simple (a sketch of one approach below),
and it's easy to hack on this even on Windows
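For example, one simple way to attach versioned metadata without inventing an encoding is a plain sidecar JSON file next to each document; a minimal sketch (the file-name suffix and fields are my own assumptions, not this project's format):

  // For every /docs/report.pdf, keep a /docs/report.pdf.meta.json with versioned fields.
  import { readFile, writeFile } from "node:fs/promises";

  interface DocMeta {
    version: number;    // bump on every metadata change
    tags?: string[];
    updatedAt: string;  // ISO timestamp
  }

  async function writeMeta(docPath: string, meta: DocMeta): Promise<void> {
    await writeFile(`${docPath}.meta.json`, JSON.stringify(meta, null, 2), "utf8");
  }

  async function readMeta(docPath: string): Promise<DocMeta | null> {
    try {
      return JSON.parse(await readFile(`${docPath}.meta.json`, "utf8")) as DocMeta;
    } catch {
      return null; // no metadata yet
    }
  }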
Would love to see your source code for your take on this product.
There is a database in most, if not all, useful cases, though the actual files may also be stored separately.
I have no idea how this project was designed, but a) it's reasonable to expect that disk operations can and should be cached, and b) syncing file shares across multiple nodes can easily involve storing metadata.
In either case, once you realize you need to persist data, you'd be hard pressed to justify not using a database.
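To make that concrete, here is a sketch of the kind of metadata a sync layer might persist, using the better-sqlite3 package (the schema and the package choice are my assumptions, not how this project actually does it):

  import Database from "better-sqlite3";

  // One row per tracked file: enough to detect changes and drive a sync.
  const db = new Database("metadata.db");
  db.exec(`CREATE TABLE IF NOT EXISTS files (
    path      TEXT PRIMARY KEY,
    mtime_ms  INTEGER NOT NULL,
    hash      TEXT NOT NULL,
    synced_at INTEGER
  )`);

  const upsert = db.prepare(`
    INSERT INTO files (path, mtime_ms, hash, synced_at) VALUES (?, ?, ?, ?)
    ON CONFLICT(path) DO UPDATE SET
      mtime_ms = excluded.mtime_ms,
      hash = excluded.hash,
      synced_at = excluded.synced_at
  `);
  upsert.run("/share/docs/report.pdf", Date.now(), "sha256:placeholder", Date.now());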
Broadly, for all the configuration HashiCorp Vault makes you do, you can achieve a much more useful set of permissions with a Samba file share and ACLs (it certainly makes it easy to grant targeted access to specific resources - and with IIS and Kerberos you even get an HTTP API).
Another issue would be permissions: if I wanted to restrict access to a file to a subset of users, I’d have to create a group for that subset. Linux caps a user’s supplementary group memberships at 65,536 (NGROUPS_MAX), and per-subset groups multiply quickly for a nontrivial number of users.
The file system is already a database.
For naming, just name the directory the same way on your file system.
Shareable URLs can be a hash of the path with some kind of HMAC to prevent scraping (see the sketch below).
Yes, if you move a file, you can create a symlink to preserve the old path.
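A minimal sketch of that HMAC idea in Node/TypeScript (the secret handling and URL shape are assumptions, just to show the general approach):

  import { createHmac, timingSafeEqual } from "node:crypto";

  const SECRET = process.env.SHARE_SECRET ?? "dev-only-secret"; // hypothetical secret

  // The token is an HMAC over the path, so URLs can't be enumerated by guessing paths.
  function shareToken(path: string): string {
    return createHmac("sha256", SECRET).update(path).digest("base64url");
  }

  function shareUrl(path: string): string {
    return `/share/${encodeURIComponent(path)}?sig=${shareToken(path)}`;
  }

  function verify(path: string, sig: string): boolean {
    const expected = Buffer.from(shareToken(path));
    const given = Buffer.from(sig);
    return expected.length === given.length && timingSafeEqual(expected, given);
  }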
https://cockpit-project.org/applications
--
With no command line use needed, you can:
Navigate the entire filesystem,
Create, delete, and rename files,
Edit file contents,
Edit file ownership and permissions,
Create symbolic links to files and directories,
Reorganize files through cut, copy, and paste,
Upload files by dragging and dropping,
Download files and directories.
FWIW, the people working on this project have Mission and Vision pages on their website: https://linagora.com/en/mission and https://linagora.com/en/vision
Took me a whopping 17 seconds to find those two.
edit: it looks even worse knowing that they have their own chat project, https://twake-chat.com, which is powered by the Matrix protocol