But private stuff runs on my own servers.
In 2025 it's mostly maintenance-free once the setup is running: Debian updates itself fully automatically using unattended-upgrades, and the hosted application gets updated manually when the need arises. Backups run automatically every night using Proxmox and its little brother, Proxmox Backup Server. Even with valid certificates, thanks to DNS-based validation (DNS-01) from Let's Encrypt.
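For the curious, the one-time pieces look roughly like this; a sketch, assuming the certbot Cloudflare DNS plugin (swap in whatever your DNS provider needs), and note that Proxmox backup jobs are normally scheduled in the datacenter UI rather than run by hand:

server$ apt install unattended-upgrades            # automatic Debian updates
server$ dpkg-reconfigure -plow unattended-upgrades # enable them
server$ certbot certonly --dns-cloudflare --dns-cloudflare-credentials ~/.secrets/cloudflare.ini -d git.example.org
server$ vzdump 100 --storage pbs --mode snapshot   # roughly what a nightly Proxmox backup job runs per guest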
Have people forgotten that email exists?
It's a joke, not a manifesto in a joke suit.
Do you not see how much easier something like GH is?
I don't know if you've ever used GitHub, GitLab, or Codeberg - but PRs just appear in a list there. I don't need to do any work. Very handy, especially if they're big changes.
I can also leave comments on specific bits of code, rather than emailing someone.
Instead of:

local$ git push
upstream$ git fetch && git merge

it becomes:

local$ git format-patch
local: open file picker for attachments.
upstream: save as ...
upstream$ git am
That's not that much different in time and effort.

Wouldn't 2. make transitioning them into 1. "impossible"?
I don't want it in a vault, I don't want you to do anything other than read it on my site. I don't want an archive. Most of my code is not licensed. All rights reserved.
It's there as a personal portfolio, that's it.
And these scanners don't respect the LICENSE file; they think that if it's on the web, they can not just index it but make full copies and reproduce it.
By virtue of uploading code to GitHub, you are granting them a license as per their terms of service.
In this article, under "Choosing the right license":

> We created choosealicense.com, to help you understand how to license your code. A software license tells others what they can and can't do with your source code, so it's important to make an informed decision.

> You're under no obligation to choose a license. However, without a license, the default copyright laws apply, meaning that you retain all rights to your source code and no one may reproduce, distribute, or create derivative works from your work.
via https://docs.github.com/en/repositories/managing-your-reposi...
https://docs.github.com/en/site-policy/github-terms/github-t...
> on my site

That means not uploaded to GitHub. That means self-hosted, which is the point of the main discussion.
> these scanners don't respect the LICENSE file

I don't think GitHub scans repos hosted outside its platform, but what is stated there certainly applies to OpenAI and others. They don't have a license to do what they are doing, but the US is not enforcing copyright law on them out of fear of losing the AI race.
TUI tools over SSH definitely aren't for everyone, but if that's your style and you want a place to dump all your non-public projects, it's a great choice.
Most non-private stuff goes on Sourcehut, and anyone can contribute via email (i.e., without any account), assuming they don't mind going through the arcana required to set up git-send-email.
But it does require people to be disciplined with their changes (no WIP commits). This may require learning about the flags for `git rebase` and `git commit`.
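For reference, the one-time setup plus a typical cleanup flow looks roughly like this (the SMTP settings and the list address are placeholders for your own):

local$ git config sendemail.smtpServer smtp.example.com
local$ git config sendemail.smtpUser you@example.com
local$ git config sendemail.smtpEncryption tls
local$ git config sendemail.smtpServerPort 587
local$ git commit --fixup=<sha>                 # mark a fix for an earlier commit
local$ git rebase -i --autosquash origin/main   # fold WIP/fixup commits into a clean series
local$ git send-email --to='~user/project-devel@lists.sr.ht' origin/main..HEAD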
I've really enjoyed using them but I guess I don't do much with the web interface.
> TS_DEST_IP
So you run Tailscale in your git server container so it gets a unique tailnet IP, which won't create a conflict because you don't need to SSH into the container?
I might give that a go. I run Tailscale on my host and use a custom port for git, which you set once in your ~/.ssh/config (along with the host/key config) on client machines, and then don't need to refer to it in repo URIs.
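The one-time client config is something like this (the host alias, IP, and port here are made up):

# ~/.ssh/config
Host mygit
    HostName 100.100.1.5
    Port 2222
    User git
    IdentityFile ~/.ssh/id_ed25519

After that, remotes are just mygit:myrepo.git, with no port in the URI.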
TBH, I think it's Tailscale I'd like a light/fast alternative to! I have growing concerns because I often find it inexplicably consuming a lot of CPU, pointlessly spamming syslog (years-old GitHub issues without a response), or otherwise getting fucked up.
They're plenty fast, but it's hard to match the speed of terminal tools if you're used to working that way. With Soft Serve, I'm maybe 10 keystrokes and two seconds away from whatever I want to access from a blank desktop. Even a really performant web application is always going to be a bit slower than that.
Normally that kind of micro-optimization isn't all that useful, but it's great for flitting back and forth between a bunch of projects without losing your place.
> So you run Tailscale in your git server container so it gets a unique tailnet IP, which won't create a conflict because you don't need to SSH into the container?
Pretty much. It's a separate container in the same pod, and shows up as its own device on the tailnet. I can still `kubectl exec` or port forward or whatever if I need to access Soft Serve directly, but practically I never do that.
> TBH, I think it's Tailscale I'd like a light/fast alternative to!
I've never noticed Tailscale's performance being an issue on any of my clients; it "just works" in my experience. I'm running self-hosted Headscale, but I wouldn't expect it to be all that different performance-wise.
I have hundreds of random tools and half-finished projects; having them all accessible and searchable from a single location is convenient.
But in all seriousness, I do think that there is a lot of merit in the LKML way of doing things. Otherwise all of our servers would be on fire by now!
Maybe, and the GitHubs and GitLabs of the world, with their insane capacity to sell things, have brainwashed us!
There is a native git request-pull command [1] that generates a summary of pending changes to pull into an upstream project, but it doesn't support all the features offered by GitHub pull requests or GitLab merge requests.
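Usage is roughly like this (the URL and refs are placeholders); it prints a plain-text summary with a shortlog and diffstat that you can paste into an email:

local$ git request-pull v1.0 https://example.com/me/project.git main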
Initiatives like ForgeFed are trying to define a more neutral way to share this information, but I don't think there's any estimated date for when there'll be an actual implementation of it. If that ever happens, it'd be possible to get vendor-neutral tooling for that kind of collaboration.
Why are people so keen on having that network graph of forks? It's not necessary for collaboration. Status symbol?
If I want to fork your code and contribute back, that means I need to be on the same system as you.
There are a bunch of GNOME projects that require me to sign up for their specific git hosting service before I can contribute.
On most git servers, I have to fork in order to send a PR, which often means I have to create a fork on their system - which means I need to set up something to replicate it to my local machine.
It's all friction.
I'd love to see a project on (for example) GitHub and then clone it to my GitLab, work on it there, and send a PR from GL to GH.
You really don't. You just clone, code, commit, and send a patch (which is just one or more text files). That's it. You may just code and do a diff, if it's a simple fix.
The project may have a more complex policy for accepting contributions, but a fork and a PR are not a requirement.
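In practice it's something like this, assuming the project's default branch is origin/main:

local$ git clone https://example.com/project.git
# ...edit, commit...
local$ git format-patch origin/main   # one .patch text file per commit
# or, for a simple fix, skip committing and just:
local$ git diff > fix.patch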
How do I submit the patch to the repo on GitHub / GitLab / Codeberg / whatever?
Presumably I need to hunt down the maintainer's email?
For example, I might want to host my code privately on GitHub and not have Microsoft use it to train their LLMs. That doesn't seem to be possible:
This is not the next billion-dollar business, but I don't want to share the code until I've written a couple of papers on it and anchored the provenance of the code and the idea correctly.
I had a softer stance on the issue, but since AI companies started to say "What's ours is ours, and what's yours is ours", I need to build thicker walls and deeper moats.
No, it won't be permissively licensed if I open source this.
I sort of punted on receiving patches and merge requests because most of the projects I'm distributing in this way aren't really open source but "published source", and there's a small community of people who use them. The people who use the public, read-only repos know how to generate patches and distribute them via email or (in one case) uploading them to an S3 bucket.
Anyway... your mileage may vary, but it's worked reasonably well for a small community. Not sure I would recommend it for a huge open source project.
Anyone have experience with LFS on other repos?
And I wouldn’t be that concerned about contributors. It’s only the very top projects that get any contributors. And even then most of the time contributors are not worth the hassle.
I think the thing that sets it apart from others would be that I run it on an M2 Mac mini? Very low power consumption, it never makes any noise, and it seemingly has plenty of power for whatever I need to get done.
I don't even need to rent a server for that. Everything runs on my router (OpenWrt is amazing).
Let me repeat this again: we didn't centralize Git when we started using GitHub/GitLab etc. We centralized the discoverability and reach of projects.
So far, everything else can be decentralized: issue tracking, pull requests, project planning, CI, release management, and more. But we still don't have a solution for searching projects across potentially thousands of servers, including self-hosted ones.
We do.
https://mvnrepository.com/repos/central
https://npmjs.com
https://packagist.org/
https://pypi.org/
https://www.debian.org/distrib/packages#search_packages
https://pkg.go.dev/
https://elpa.gnu.org/packages/
And many others.
And we still have forums like this one and Reddit where people can just announce their projects. GitHub is more of a refuge for bad code than a high-signal project discovery tool.
And most software has an actual website or is present in some distribution. I don't care that much about weekend projects.
That hardly sounds like an alternative to what's possible with GitHub now. The only alternative that came anywhere close to that ideal was Freshmeat - and even that didn't achieve its full potential. Check this discussion alone to see how many comments talk about 'network effects' or 'discoverability'.
As an example, I had to reverse engineer some kinda obscure piece of hardware, and after getting everything I needed for my project, I put everything on GitHub in case it was useful to anyone. A few months later, someone was in a similar situation and built on top of my work. Neither of us made a "package" or even "a piece of software", just some badly written scripts and scattered notes. But it was still useful to publish somewhere where others can find it, especially a place with very good SEO and code-optimized search.
Why do you need a search index on your self-hosted Git server? Doesn't Kagi solve that?
The search index doesn't have to be on your server, does it? What if there were an external (perhaps distributed/replicated) index that you could submit the relevant information to? Or an external crawler that could collect it on your behalf? (Some sort of verification would also be needed.)
There are two reasons why such a dedicated index is useful. The first is that the general index is too full of noise; that's why people search for projects directly on GitHub. The second is that generic crawlers aren't very good at extracting relevant structured information from source projects, especially the information that the project owner wants to advertise: for example, the readme, contribution guidelines, project status, installation and usage information, language(s), license(s), CoC, issue tracker location, bug and security reporting information, keywords, project type, etc. GitHub and Sourcegraph allow you to do precise searches based on those. Try using a regular search engine to locate an obscure project that you already know about.