One minor note: on mobile Safari there didn't seem to be any state update when pressing buttons, and it wasn't clear a submission had gone through until the backend responded. My internet connection is a little slow, so it was unclear whether tapping the button had worked. I'd expect a small loading state, or at least UI that shows the button as disabled during submission.
I self-host Gitea. It took maybe 5 minutes to set up on TrueNAS, and even that was only because I wanted to set up separate datasets so I could snapshot them independently. I love it. I have privacy. Integrating it into a backup strategy is quite easy: it goes along with the rest of my off-site NAS backup without me needing to retain local clones on my desktop. And my CI runners are substantially faster than what I get through GitHub Actions.
The complexity and maintenance burden of self-hosting is way overblown. The benefits are often understated, and the deficiencies of whatever hosted service is being compared go unmentioned.
LLMs don't store the code, only probability distributions over token sequences (roughly, words). AFAIK this is not plagiarism.
I remember the late 2000s, when a German company called "Rocket Internet" was copycatting companies like AirBnB, Zappos, and others. Many considered it lame and a kind of moral freeloading, but it wasn't prohibited.
I'm sure if you used that big, smug brain of yours you'd piece together exactly what I meant. Here's a search query to get the juices flowing:
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Whether you agree with why someone opts to self-host a git server is immaterial to the fact that they've done so. Likewise, I'm not going to rehash the debate over fair use vs. software licenses. Pretending you don't understand why someone who published code under a copyleft license is displeased to see it locked into a proprietary model that's used to build proprietary software is willful ignorance. But, again, it makes no difference whether you're right or they're right; no one is obligated to keep pushing open source code to GitHub or any other service.
If you're operating at small scale (we're talking self-hosted git here, after all), all of these are either a non-issue or a one-time issue.
Figuring out backups and the firewall is the latter. Once figured out, you don't worry about them at all. Figuring these out isn't rocket science either.
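For what it's worth, a minimal sketch of what "figure it out once" can look like; the subnet, paths, and backup host are made up, and ufw/restic are just one choice of tools:

    # firewall: deny inbound by default, allow SSH and HTTPS from the LAN only
    ufw default deny incoming
    ufw default allow outgoing
    ufw allow from 192.168.1.0/24 to any port 22 proto tcp
    ufw allow from 192.168.1.0/24 to any port 443 proto tcp
    ufw enable

    # backups: restic snapshot of the git data, pushed off-site over SFTP
    # (run nightly from cron; the repository path and backup host are hypothetical)
    restic -r sftp:backup-host:/srv/restic backup /var/lib/gitea
    restic -r sftp:backup-host:/srv/restic forget --keep-daily 7 --keep-weekly 8 --prune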
As for the former: for minimum maintenance, I often run services in Docker containers, one service (as in, one compose stack) per Debian VM. This makes OS upgrades very stable, and since Docker is the only "3rd-party" package, they are very unlikely to break the system. That lets me set unattended-upgrades to upgrade everything automatically.
With this approach, most of the maintenance comes from managing container versions. It's good practice to pin container versions, which does mean some labor when it comes to upgrading them, but you don't always have to pin the exact version. Many images have tags for major versions, and these are fairly safe to rely on for automatic upgrades. The manual part of an upgrade, when a new major release comes out, can be a genuinely rare occasion.
If your services' images don't do that kind of versioning (GitLab and YouTrack are examples), then you aren't as lucky, but bumping a version every few months or so shouldn't be too laborious either.
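To make the version-pinning point concrete, here is a minimal compose sketch; the tags and paths are illustrative (Gitea does publish major/minor tags, but check the registry for whatever image you actually run):

    # docker-compose.yml on one Debian VM (one stack per VM)
    services:
      gitea:
        # "1" tracks the latest 1.x release, so a cron'd
        # `docker compose pull && docker compose up -d` keeps it current;
        # pin an exact tag like "1.22.3" if you want fully manual upgrades
        image: gitea/gitea:1
        restart: unless-stopped
        volumes:
          - ./data:/data
        ports:
          - "3000:3000"
          - "222:22"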
Now, if DDoS is a concern, there is probably already staff in place to deal with it. DDoS is mostly something popular public services have to worry about, not a private Gitea instance. Such attacks are too costly to aim at random targets; they require some actual incentive.
But why keep a private instance out in the open anyway? Put it behind a VPN, and then you don't have to worry nearly as much about security and upgrades.
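A minimal sketch of that, assuming WireGuard and made-up addresses (key handling and the client config omitted):

    # /etc/wireguard/wg0.conf on the git host; bring up with `wg-quick up wg0`
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <host-private-key>

    [Peer]
    # your desktop/laptop; have Gitea listen only on 10.8.0.1
    PublicKey = <client-public-key>
    AllowedIPs = 10.8.0.2/32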
A tool like this is not fundamentally more complex than a browser or a full-fledged IDE.
The only maintenance I have had to do was when the "I don't care about cookies" extension got sold, so I had to switch to a fork [1]. That was 2-3 years ago.
[1] https://github.com/OhMyGuus/I-Still-Dont-Care-About-Cookies
> Code discussion anywhere anytime
> Select any code or diff to start discussion. Suggest and apply changes. Discussions stay with code to help code understanding.
How do the discussions stay with the code? Git notes? https://docs.onedev.io/tutorials/code/code-review#free-style...
...but I can't for the life of me recall why. Definitely wasn't a radioactive red flag issue...but some aspect around CI wasn't for me.
In general, though, with things like this that carry an open license and have a Docker image, you're better off trying it yourself than listening to randoms like me.
I can guess why you think you need it, but whatever the reason, it's not good enough. If you need job workers or some other kind of container, tell me how to run those with docker compose.
I get why you don't want to do that on a machine running other things, and I wouldn't either, but you're pretending this is some strange, unnecessary, and unexpected thing to require, when in reality basically everything does it this way, and there isn't really a good alternative without a ton of additional complexity that the vast majority of people won't need.
Wrong. A container with access to the socket can compromise any other container, and start new containers with privileged access to the host system.
It compromises everything. This is a risk worth flagging.
So, I’m not sure this is something I’d worry much about. Perhaps they should flag it in the documentation, but otherwise I’m not sure how else you’d get this functionality. Is there another way?
If there's an RCE vuln in the app, your users are just as unsafe whether it's running as root on the host or as nobody in a container. The valuable data is all inside.
Only the app or runner container has socket access, which it uses to create new containers without socket access, and it runs user code in there. If you get RCE in the app/runner, you get RCE on the host, yes, no shit. But if you get RCE in any other container on the system, you're properly contained.
> The code inside those containers is isolated, which is the whole point.
A container with socket access can replace code or binaries in any other container, read any container's volumes and environment variables, replace whole containers, etc. That does not meet any definition of "isolated".
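That's not theoretical. A standard illustration, assuming the docker CLI happens to be installed in the socket-mounted container (the raw API works just as well):

    # from inside any container that has /var/run/docker.sock mounted:
    # ask the host's dockerd to start a privileged container with the host
    # filesystem bind-mounted, then chroot into it: that's a root shell on the host
    docker run --rm -it --privileged -v /:/host alpine chroot /host /bin/sh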
It isn't for the reasons I stated in previous comments, which you are unable to refute. Your dogged insistence to the contrary is bizarre.
I hope you do not work in this area.
The runner (trusted code) is tasked with taking job specifications from the user (untrusted code) and running them in isolated environments. Correct?
The runner is in a container with a mounted docker socket. It sends a /containers/create request to the socket. It passes a base image, some resource limits and maybe a directory mount for the checked out repository (untrusted code). The code could alternatively be copied instead of mounted. Correct?
The new container is created by dockerd without the socket mounted, because that wasn't specified by the runner ("Volumes": [] or maybe ["/whatever/user/repo/:/repo/"]). Correct?
The untrusted code is now executed inside that container. Because the container was created with no special mounts or privileges, it is as isolated as if it was created manually with docker run. Correct?
The job finishes executing, the runner uses the socket to collect the logs and artifacts, then it destroys the container. Correct?
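For reference, that whole lifecycle is a handful of Engine API calls against the socket; a rough sketch with curl, where the API version, image, command, and repo path are placeholders:

    SOCK=/var/run/docker.sock

    # 1. create the job container: base image, a memory limit, the repo mount,
    #    and (crucially) no bind of the docker socket itself
    curl -s --unix-socket $SOCK -X POST http://localhost/v1.43/containers/create \
      -H 'Content-Type: application/json' \
      -d '{"Image": "node:20", "Cmd": ["sh", "-c", "cd /repo && npm test"],
           "HostConfig": {"Binds": ["/srv/jobs/repo123:/repo"], "Memory": 2147483648}}'
    # returns {"Id": "<container-id>", ...}

    # 2. start it, then wait for it to finish
    curl -s --unix-socket $SOCK -X POST http://localhost/v1.43/containers/<id>/start
    curl -s --unix-socket $SOCK -X POST http://localhost/v1.43/containers/<id>/wait

    # 3. collect the logs, then destroy the container
    curl -s --unix-socket $SOCK "http://localhost/v1.43/containers/<id>/logs?stdout=1&stderr=1"
    curl -s --unix-socket $SOCK -X DELETE http://localhost/v1.43/containers/<id>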
So please tell me how you think untrusted code could get access to the socket here?
On a related note, I believe Docker's design contributes to this issue. Its inflexible sandboxing model encourages such risky practices.
Of course it's an issue if you're using Docker to isolate OneDev from the rest of the apps running on your systems. But that's not everyone's use-case. Anything that intentionally spins up user-controlled containers should be isolated in a VM. That's how every sane person runs GitLab runners, for example.
I'll add it to my list of things to try out though, having more competition in the space is definitely a good thing.
I also appreciate that the default workflow for undoing bad merges is a whiteout rather than a true "delete".
To each his own, but having worked with CVS, SVN, Perforce, Git, and Fossil, the centralized model is much less work for release engineering and administration most of the time. If I were a maintainer of the Linux kernel or one of the many Linux distros, where you have potentially thousands of contributors to one codebase, I would use Git because it scales up better.
However, I wouldn't underestimate the value of scaling down well, especially for all the people around here building some startup out of a barndominium. A VCS that includes its own GUI-based admin tool and is simple enough to be used by a high school intro-to-web-design class is a good thing in my book.
It works exactly as advertised though; my gripes aren't technical.
Note: Before some third person pitches in their condescending two cents on the limitations of CVS, nobody here is recommending CVS.