By accident I landed on https://us.githubstatus.com/ and everything was green. At first I thought, yeah sure, just report green, then I noticed "GitHub Enterprise Cloud" in the title. There is also an EU mirror: https://eu.githubstatus.com
Edit:
The report just updated with the following interesting bit.
> We identified a faulty network component and have removed it from the infrastructure. Recovery has started and we expect full recovery shortly.
You can fix it through the API by generating an API token in your settings with the notifications permission enabled and running the request below (warning: it will mark all your notifications as read up to the last_read_at date):
curl -L \
-X PUT \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer <YOUR-TOKEN>" \
-H "X-GitHub-Api-Version: 2022-11-28" \
https://api.github.com/notifications \
-d '{"last_read_at":"2025-10-09T00:00:00Z","read":true}'
GitHub has been experiencing mass waves of crypto scam bots opening repos and mass-tagging tens of thousands of users on new issues, using the issue body to generate massive scam-marketing-style content.
Then you can click the checkbox at the top and then "select all", and it'll mark the phantom notifications as read.
This has been a known issue since at least 2021, which is ridiculous.
1. Why Self-Host?
2. GitHub Issues
Expecting more and more downtime and random issues in the future.
At least I'm pretty sure the runners are; our account rep keeps trying to get us to use their GPU runners, but they don't have a good GPU model selection, and it seems to match what Azure offers.
Change directory to your local git repository that you want to share with friends and colleagues and do a bare clone `git clone --bare . /tmp/repo.git`. You just created a copy of the .git folder without all the checked out files.
Upload /tmp/repo.git to your linux server over ssh. Don't have one? Just order a tiny cloud server from Hetzner. You can place your git repository anywhere, but the best way is to put it in a separate folder, e.g. /var/git. The command would look like `scp -r /tmp/repo.git me@server:/var/git/`.
To share the repository with others, create a group, e.g. `groupadd --users me git`. You will be able to add more users to the group later with groupmod.
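To add someone later, for example (the username is hypothetical; `usermod -aG` and `gpasswd -a` are shown as widely available alternatives to groupmod):
usermod -aG git alice     # append alice to the supplementary git group
# gpasswd -a alice git    # does the same thing
getent group git          # verify the membership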
Your git repository is now writable only by your user. To make it writable by the git group, you have to change the group on all files in the repository to git with `chgrp -R git /var/git/repo.git` and enable the group write bit on them with `chmod -R g+w /var/git/repo.git`.
This fixes the shared access for existing files. For new files, we have to make sure the group write bit is always on by changing UMASK from 022 to 002 in /etc/login.defs.
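If you want to script that edit, something like this works against a stock login.defs (the sed pattern assumes the usual `UMASK    022` line; check the file first):
sudo sed -i 's/^UMASK\([[:space:]]\+\)022/UMASK\1002/' /etc/login.defs
grep '^UMASK' /etc/login.defs    # should now read: UMASK 002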
There is one more trick. From now on, all new files and folders in /var/git will be created with the user's primary group. We could change users to have git as their primary group.
But we can also force all new files and folders to be created with the parent folder's group rather than the user's primary group. For that, set the setgid bit (sometimes called the group sticky bit) on all folders in /var/git with `find /var/git -type d -exec chmod g+s {} +`
You are done.
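A quick way to check the shared setup: another user in the git group clones over SSH and pushes (the username is hypothetical, host and paths as above):
git clone alice@server:/var/git/repo.git
cd repo
# edit something, commit, then:
git push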
Want to host your git repository online? Install caddy and point to /var/git with something like
example.com {
root * /var/git
file_server
}
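One caveat: a plain file server only speaks git's "dumb" HTTP protocol, so the repository's info files have to be kept up to date. Roughly, with the paths from above:
cd /var/git/repo.git
git update-server-info                            # generate info/refs etc. once now
mv hooks/post-update.sample hooks/post-update     # the stock hook reruns update-server-info after every push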
Your git repository will be instantly accessible via https://example.com/repo.git.
At the same time, self-hosting is great for privacy, cost, or customization. It is not great for uptime.
It's true that outages are probably less frequent, as a consequence of never making any changes. However, when something does break (e.g. security forces someone to actually upgrade the Ubuntu version that's been out of support for five years, and it breaks), it may take several days or weeks to fix, because nobody actually knows anything about the configuration: it was last touched 10 years ago by one guy who has long since left the company.
Having everything in one service definitely increases interoperability between those solutions, but it just as definitely decreases stability. In addition, each of the other systems is not the best in its class (I really detest GH Actions, for example).
Why do so many solutions grow so big? Is it done to increase enterprise adoption?
I do agree there are issues with a single provider for too many components, but I am not sure you get any decreased stability with that versus having a different provider for everything.
Getting the same level of interoperability with a separate tool takes significantly more work on both sides, so the monolithic approaches tend to thrive because they can get out the door faster and in better shape.
Forgejo is doing the same thing with its actions. Honestly, I'd prefer if something like Woodpecker became the blessed choice instead, and really good integration with diverse tools was the approach.
That said, I agree that the execution of many features in GitHub has been lacking for some time now. Bugs everywhere and abysmal performance. We're moving to Forgejo at $startup.
I felt like Actions were a time sink that tricks you into feeling productive - like you're pursuing 'best practice' - while stealing time that could otherwise be spent talking to users or working on your application.
For example, running tests before merge ensures you don't forget to. Running lints/formatters ensures you don't need to refactor later and waste time there.
For my website, it pushes main automatically, which means I can just do something else while it's all doing its thing.
Perhaps you should invest in simplifying your build process instead?
The day I forget to run tests before merging I'll set up CI/CD (hasn't happened before, unlikely, but not impossible).
My build process is `gp && gp heroku main`. Minor commits go straight to main. Major features get a branch. This is manual, simple and loveable. And it involves zero all-nighters commit-spamming the .github directory :)
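(A sketch of what that one-liner amounts to, assuming `gp` is a `git push` alias and the project has the usual origin plus a heroku remote; both are assumptions here:)
alias gp='git push'
gp && gp heroku main    # push the current branch to origin, then deploy by pushing main to the heroku remote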
If you want more complex functionality, that's why I suggested improving your build system, so the actions themselves are pretty simple.
Where things get more frustrating for me is when you try using more advanced parts of Actions, like releases and artifacts, which aren't as simple as running a script and checking its output/exit code.
Just refreshed my memory by looking at mine. 103 lines. Just glancing at it brought back painful memories. The worst areas were:
- Installing ruby/bundler/gems/postgres/js libraries, dealing with versioning issues, and every few months having them suddenly stop working for some reason that had to be addressed in order to deploy.
- Installing capybara and headless chrome and running system tests (system tests can be flakey enough locally, let alone remotely).
- Minor issue of me developing on a mac, deploying to heroku, so linux on GHA needs a few more things installed than I'm used to, creating more work (not the end of the world, and good to learn, but slow when it's done via a yaml file that has to be committed and run for a few minutes just to see the error).
my setup before was just build and scp
now it takes like 3 mins for a deploy: I haven't set up caching for builds etc., but that feels like a self-made problem
my proj is pretty simple so that's probably why
For the personal home hacking projects I do, I often don't even make an external repo. I definitely don't do external CI/CD. Often a waste of time.
For more enterprise kind of development, you bet the final gold artifacts are built only by validated CI/CD instances and deployed by audited, repeatable workflows. If I'm deploying something from a machine that's in my hands and that I have an active local login for, something is majorly on fire.
Are you saying that the act of setting up the CI pipeline is time consuming? Or the act of maintaining it?
The only time I think about my CI pipeline is when it fails. If it fails then it means I forgot to run tests locally.
I guess I can see getting in the weeds with maintaining it, but that always felt more likely when not deploying Dockerized containers since there was duplication in environmental configs that needed to be kept synchronized.
Or are you commenting on the fact that all cloud-provided services can go down and are thus a liability?
Or do you feel limited by the time it takes to recreate environments on each deployment? I haven't bumped into this scenario that often. Usually the dominating variable in my CI pipeline is running the tests themselves, and usually that's due to decisions that go against testing best practices and make the test runner execute far slower than desired. Those issues would also exist locally, though.
https://www.githubstatus.com/history
Seems like Microsoft can't keep this thing from crashing at least three times a month. At this rate it would probably be cheaper just to buy out GitLab.
Wondering when M$ will cut their losses and bail.
Just be warned if you try it out that if you don't specify which workflow to run, it will just run them all!
Who else?