
Incident with Webhooks

https://www.githubstatus.com/incidents/k7bhmjkblcwp
95•munksbeer•2h ago

Comments

munksbeer•2h ago
Getting failed pushes, failed PR creation, failed CI pipelines.

Who else?

digitalsushi•2h ago
Probably everyone, since it's been on their status page since before this was asked
hakube•2h ago
Can't merge PRs atm
madethemcry•2h ago
Yeah there are some issues. PR is stuck at "Checking for the ability to merge automatically..."

By accident I landed on https://us.githubstatus.com/ and everything was green. At first I thought, yeah sure, just report green, then I noticed "GitHub Enterprise Cloud" in the title. There is also an EU mirror: https://eu.githubstatus.com

Edit:

The report just updated with the following interesting bit.

> We identified a faulty network component and have removed it from the infrastructure. Recovery has started and we expect full recovery shortly.

nightpool•1h ago
That's interesting—my understanding is that Github Enterprise Cloud is part of the same infrastructure as Github.com, so this status page seems maybe incorrect? Probably some missing step in the runbook to update both of these pages at the same time.
christophilus•2h ago
The thing that's been annoying me for a few weeks is an always-on notification badge, even though my notifications page shows no unread notifications.
diath•1h ago
I had that happen when somebody tagged me in a private repository that was later deleted (?).

You can fix it through the API by generating an API token in your settings with the notifications permission enabled and using this (warning: it will mark all your notifications as read up to the last_read_at timestamp):

    curl -L \
      -X PUT \
      -H "Accept: application/vnd.github+json" \
      -H "Authorization: Bearer <YOUR-TOKEN>" \
      -H "X-GitHub-Api-Version: 2022-11-28" \
      https://api.github.com/notifications \
      -d '{"last_read_at":"2025-10-09T00:00:00Z","read":true}'
delfinom•1h ago
Yes private repos that are deleted leave notifications behind.

GitHub has been experiencing mass waves of crypto-scam bots opening repos and mass-tagging tens of thousands of users on new issues, using the issue body to generate massive scam-marketing content.

masklinn•1h ago
Yep, there was a huge spate of spam with hundreds of people pinged, and when they got reported / deleted, the notifications didn't go away...
MyOutfitIsVague•1h ago
I got hit with one of these, then commented on a GitHub issue about it, and ironically got a hundred notifications for everybody's comments, many complaining about phantom notifications.
RyJones•33m ago
I have been getting added to spam repos and orgs several times a day for weeks. It's annoying.
masklinn•1h ago
Another trick is to go into the "Done" tab and move at least 25 issues back to unread.

Then you can click the checkbox at the top and then "select all", and it'll mark the phantom notifications as read.

christophilus•1h ago
Oooooh. Snap! Thank you. This was driving me crazy.
brewmarche•1h ago
Interesting that they went with a custom MIME type and a custom version header. I would have expected the version to be in the MIME type, but I feel like there is a reason behind this.
IshKebab•1h ago
Yeah happens to me all the time. I haven't been able to find a pattern or a reliable way to fix it (without messing with curl).

This has been a known issue since at least 2021, which is ridiculous.

https://github.com/orgs/community/discussions/6874

levkk•1h ago
Switched to email notifications and disabled them in GitHub, mainly because of this. Huge quality of life improvement.
lbrito•1h ago
I experienced an outage (website and any push/pull commands) that lasted about 1-2 hours on Oct 7th but didn't see anything on their status page. There was definitely a spike on https://downdetector.ca/status/github/, so I know it wasn't just my ISP.
mparnisari•1h ago
I experienced that too! In Canada. It was a head-scratcher: none of my teammates had issues, and I could access it just fine on my phone, but I couldn't on my home wifi.
lbrito•1h ago
Same. Fraser Valley?
koolba•1h ago
It’s kind of funny that the top two posts right now are:

    1. Why Self-Host?
    2. GitHub Issues
chistev•1h ago
What's the joke?
dvmazur•1h ago
GitHub Pages are affected too
logicallee•1h ago
GitHub hosts git repos, so it's an example of a case where self-hosting git repos on one's own servers would remain operational despite a GitHub outage.
jsheard•1h ago
And yesterday we had "GitHub pausing feature development to prioritize moving infra to Azure", immediately followed by them breaking their infra.
cluckindan•1h ago
Well, of course the corporation wants to dogfood their platform.

Expecting more and more downtime and random issues in the future.

progbits•1h ago
I thought they were already on Azure?

At least I'm pretty sure the runners are; our account rep keeps trying to get us to use their GPU runners, but they don't have a good GPU model selection, and it seems to match what Azure offers.

thelastgallon•59m ago
It's better for GitHub to self-host.
sam_lowry_•51m ago
Here's the step-by-step guide to self-hosting git repositories:

Change directory to your local git repository that you want to share with friends and colleagues and do a bare clone `git clone --bare . /tmp/repo.git`. You just created a copy of the .git folder without all the checked out files.

Upload /tmp/repo.git to your linux server over ssh. Don't have one? Just order a tiny cloud server from Hetzner. You can place your git repository anywhere, but the best way is to put it in a separate folder, e.g. /var/git. The command would look like `scp -r /tmp/repo.git me@server:/var/git/`.

To share the repository with others, create a group, e.g. `groupadd --users me git`. You will be able to add more users to the group later with groupmod.

Your git repository is now writable only by your user. To make it writable by the git group, change the group on all files in the repository to git with `chgrp -R git /var/git/repo.git` and enable the group write bit on them with `chmod -R g+w /var/git/repo.git`.

This fixes the shared access for existing files. For new files, we have to make sure the group write bit is always on by changing UMASK from 022 to 002 in /etc/login.defs.

There is one more trick. From now on, all new files and folders in /var/git will be created with the user's primary group. We could change users to have git as their primary group.

But we can also force all new files and folders to be created with the parent folder's group rather than the user's primary group. For that, set the setgid bit on all folders in /var/git with `find /var/git -type d -exec chmod g+s \{\} +`

You are done.
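
For reference, here is the whole sequence collected into one sketch; the paths, the user me, and the group name git are just the examples used above, so adjust them to your setup:

    # On your machine: make a bare copy of the repo and ship it to the server
    git clone --bare . /tmp/repo.git
    scp -r /tmp/repo.git me@server:/var/git/

    # On the server: create the shared group and hand the repo over to it
    groupadd --users me git
    chgrp -R git /var/git/repo.git
    chmod -R g+w /var/git/repo.git

    # Keep the group write bit on new files (or edit /etc/login.defs by hand)
    sed -i 's/^UMASK.*/UMASK 002/' /etc/login.defs

    # Make new files and folders inherit the parent folder's group (setgid bit)
    find /var/git -type d -exec chmod g+s {} +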

Want to host your git repository online? Install Caddy and point it at /var/git with something like

    example.com {
        root * /var/git
        file_server
    }
Your git repository will be instantly accessible via https://example.com/repo.git.
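
One caveat: serving the repo as plain static files relies on git's "dumb" HTTP protocol, which needs `git update-server-info` to have been run in the bare repo (the stock post-update sample hook does this after each push). A quick sketch, reusing the example names from above:

    # Once on the server (or enable hooks/post-update in the bare repo)
    git -C /var/git/repo.git update-server-info

    # Anyone can then clone read-only over HTTPS
    git clone https://example.com/repo.git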
esafak•1h ago
3. QED
paulddraper•1h ago
It is funny.

At the same time, self-hosting is great for privacy, cost, or customization. It is not great for uptime.

danlugo92•1h ago
I self-host a server for a website, and I also compile executables on it. It's been running just fine for 2 years, and it's not even with a big provider; a very niche one actually (Mac servers).
kelvinjps10•1h ago
Why Mac servers?
guluarte•30m ago
Aren't Mac servers more expensive than traditional ones? The only reason I've used them is to compile Xcode projects.
anon7000•1h ago
How long before moderately sized companies start hosting their own git servers again? Surely it wouldn't be that difficult unless your repos are absolutely massive. GitHub outages are so common these days.
bntyhntr•1h ago
And then you need to add another server to the infra / netops / tools team's maintenance burden, and then they take it down for an upgrade and it doesn't come back up, etc. I don't think outages/downtime are necessarily a good reason to switch to self-hosting. I worked at a company that self-hosted the repo and code review tool, and it was great, but it still had the same issues.
mjr00•56m ago
Yeah, as someone old enough to have worked at mid-sized companies before cloud-everything became the norm, self-hosting is overly romanticized. People think you'll get a full infrastructure team dedicated to making sure your self-hosted Git/Artifactory/Jira/Grafana/whatever runs super smoothly and never goes down. In reality it ends up being installed by a dev or IT as sort of a side project, and once the config is hacked together (and of course it's a special pet server and not any kind of repeatable infrastructure setup with Ansible or Docker, so yes you're stuck on Ubuntu 12.04 for a decade) they let it "just run" forever because they don't want to touch it (because making changes is the #1 reason for outages) so you're constantly 2+ years behind the latest version of everything.

It's true that outages are probably less frequent, as a consequence of never making any changes. However, when something does break, e.g. security forces someone to actually upgrade the 5-years-past-end-of-support Ubuntu version and it breaks, it may take days or weeks to fix, because nobody actually knows anything about the configuration: it was last touched 10 years ago by one guy who has long since left the company.

hypeatei•1h ago
I don't think running your own git server is, on its own, what's preventing this. It's all the other things you're missing, like CI/CD pipelines, code review tools, user management, etc.
myrmidon•1h ago
Run your own GitLab server then?
import•1h ago
That's already what GitLab and Gitea are doing.
d_silin•1h ago
How is GitLab doing?
Kavelach•1h ago
I wish the most popular software forge didn't include a bunch of other software solutions like issue tracking or forums.

Having everything in one service definitely increases interoperability between those solutions, but it also decreases stability. In addition, each of the other systems is not the best in its class (I really detest GH Actions, for example).

Why do so many solutions grow so big? Is it done to increase enterprise adoption?

cortesoft•1h ago
If the alternative is each user has to patch together all of the different solutions into one, you are just increasing the number of parts that can go wrong, too. And when they do, it won't be immediately clear who the issue is with.

I do agree there are issues with a single provider for too many components, but I am not sure you get any decreased stability with that versus having a different provider for everything.

MyOutfitIsVague•1h ago
I agree to a degree, but issue tracking being able to directly work with branches and PRs is natural enough, and then discussions can share a lot of code with the issue tracker.

Getting the same level of interoperability with a separate tool takes significantly more work on both sides, so the monolithic approaches tend to thrive because they can get out the door faster and better.

Forgejo is doing the same thing with its actions. Honestly, I'd prefer if something like Woodpecker became the blessed choice instead, and really good integration with diverse tools was the approach.

poly2it•1h ago
Of everything potentially causing scope creep in GitHub, issue tracking and forums might be the least out of scope.

That said, I agree that the execution of many features in GitHub has been lacking for some time now. Bugs everywhere and abysmal performance. We're moving to Forgejo at $startup.

0-bad-sectors•1h ago
Oh it's that time of the week again.
nomilk•1h ago
I stopped using Actions for side projects a few months ago, things are simpler now (I run tests locally).

I felt like Actions were a time sink that tricks you into feeling productive - like you're pursuing 'best practice' - while stealing time that could otherwise be spent talking to users or working on your application.

raybb•1h ago
Makes sense for side projects. I think there's real value for open source projects so people can get feedback quickly and maintainers can know that the tests are passing quickly.
nixpulvis•1h ago
Um... maybe some actions setups are overly complex, but CI/CD is valuable if done well.

For example, running tests before merge ensures you don't forget to. Running lints/formatters ensures you don't need to refactor later and waste time there.

For my website, it pushes main automatically, which means I can just do something else while it's all doing its thing.

Perhaps you should invest in simplifying your build process instead?

nomilk•52m ago
It's valuable, but at what cost?

The day I forget to run tests before merging I'll set up CI/CD (hasn't happened before, unlikely, but not impossible).

My build process is gp && gp heroku main. Minor commits straight to main. Major features get a branch. This is manual, simple and loveable. And involves zero all-nighters commit-spamming the .github directory :)

nixpulvis•29m ago
I mean, I agree it would be nice to be able to test actions locally (maybe there's a tool for this). But I keep my actions very simple, so it rarely takes me a lot of time to get them right. See https://github.com/nixpulvis/grapl/blob/master/.github/workf...

If you want more complex functionality, that's why I suggested improving your build system, so the actions themselves are pretty simple.

Where things get more frustrating for me is when you try using more advanced parts of Actions, like releases and artifacts, which aren't as simple as running a script and checking its output/exit code.

nomilk•8m ago
< 20 lines is nice.

Just refreshed my memory by looking at mine. 103 lines. Just the glance brought back painful memories. The worst areas were:

- Installing ruby/bundler/gems/postgres/js libraries, dealing with versioning issues, and every few months having them suddenly stop working for some reason that had to be addressed in order to deploy.

- Installing capybara and headless chrome and running system tests (system tests can be flakey enough locally, let alone remotely).

- A minor issue: I develop on a Mac and deploy to Heroku, so Linux on GHA needs a few more things installed than I'm used to, creating more work (not the end of the world, and good to learn, but slow when it's done via a YAML file that has to be committed and run for a few minutes just to see the error).

wara23arish•47m ago
I recently started and kinda agree?

My setup before was just build and scp.

Now it takes like 3 mins for a deploy: I haven't set up caching for builds etc., but that feels like a self-made problem.

My project is pretty simple, so that's probably why.

vel0city•38m ago
Is it a project where it's pretty much just you doing things, or something with a team of people working on things? Are you working in a space with strong auditability concerns or building pretty much hobby software?

For the personal home hacking projects I do, I often don't even make an external repo. I definitely don't do external CI/CD. Often a waste of time.

For more enterprise kinds of development, you bet the final gold artifacts are built only by validated CI/CD instances and deployed by audited, repeatable workflows. If I'm deploying something from a machine I have in my hands with an active local login, something is majorly on fire.

SeanAnderson•22m ago
Can you explain your thoughts here more? I don't think I'm following.

Are you saying that the act of setting up the CI pipeline is time consuming? Or the act of maintaining it?

The only time I think about my CI pipeline is when it fails. If it fails then it means I forgot to run tests locally.

I guess I can see getting in the weeds with maintaining it, but that always felt more likely when not deploying Dockerized containers since there was duplication in environmental configs that needed to be kept synchronized.

Or are you commenting on the fact that all cloud-provided services can go down and are thus a liability?

Or do you feel limited by the time it takes to recreate environments on each deployment? I haven't bumped into this scenario that often. Usually the dominating variable in my CI pipeline is the act of running the tests themselves, usually due to poor decisions around testing best practices that cause the test runner to execute far slower than desired. Those issues would also exist locally, though.

nimbius•1h ago
For anyone who wants context, here is the entire history of github "Issues"

https://www.githubstatus.com/history

Seems like Microsoft can't keep this thing from crashing at least three times a month. At this rate it would probably be cheaper just to buy out GitLab.

Wondering when M$ will cut their losses and bail.

kelvinjps10•1h ago
So they buy a company, ruin it, and then start again, forever?
olao99•50m ago
I'm not looking forward to the Azure migration and potential for more issues in the coming year
montroser•43m ago
To run your github actions locally, we've had decent success with this tool: https://github.com/nektos/act

Just be warned if you try it out that if you don't specify which workflow to run, it will just run them all!
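
For example (flags from act's own help; the workflow and job names here are just placeholders):

    # See which workflows/jobs act has picked up
    act -l

    # Run a single workflow file, or a single job, instead of everything
    act -W .github/workflows/ci.yml
    act -j test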

Malicious NPM Packages Host Phishing Infrastructure Targeting 135

https://socket.dev/blog/175-malicious-npm-packages-host-phishing-infrastructure
1•feross•54s ago•0 comments

GPUI – Rust UI framework that powers Zed

https://www.gpui.rs/
1•skilled•1m ago•0 comments

Brick Game on Garmin Instinct 2

https://github.com/black-square/BrickGame
1•meken•1m ago•0 comments

Gemini at Work 2025

https://blog.google/products/google-cloud/gemini-at-work-2025/
1•push0ret•1m ago•0 comments

Google says 'likely over 100' affected by Oracle-linked hacking campaign

https://www.reuters.com/sustainability/boards-policy-regulation/google-says-dozens-organizations-...
1•dannyphantom•1m ago•0 comments

Scientists detect the lowest mass dark object currently measured

https://www.mpg.de/25518363/1007-asph-astronomers-image-a-mysterious-dark-object-in-the-distant-u...
1•giuliomagnifico•2m ago•0 comments

Enabling the First 100x Writer

https://rivereditor.com/
2•chandlersupple•4m ago•0 comments

A new approach to analyzing Robin Hood hashing. (2014)

https://arxiv.org/abs/1401.7616
1•fanf2•6m ago•0 comments

Will A.I. Trap You in the "Permanent Underclass"?

https://www.newyorker.com/culture/infinite-scroll/will-ai-trap-you-in-the-permanent-underclass
1•rbanffy•7m ago•0 comments

OpenAI Is a Consumer Company

https://frontierai.substack.com/p/openai-is-a-consumer-company
1•cgwu•9m ago•0 comments

Subway Builder: A Realistic Subway Simulation Game

https://www.subwaybuilder.com/
2•0xbeefcab•9m ago•0 comments

Hybrid Architectures for Language Models: Systematic Analysis & Design Insights

https://arxiv.org/abs/2510.04800
1•matt_d•10m ago•0 comments

Sony teases new GPU tech coming to its next Playstation

https://www.theverge.com/news/797640/sony-ps6-handheld-gpu-ray-path-tracing-amd-radiance-cores
3•thunderbong•11m ago•0 comments

Fly.io Turned a Security Breach into "BrAnD" Damage Control

https://vp.net/l/en-US/blog/Fly-io-Turned-a-Security-Breach-Into-%E2%80%9CBrAnD%E2%80%9D-Damage-C...
1•rasengan•11m ago•1 comments

Semantic Layers Are Bad for AI

https://bagofwords.com/blog/semantic-layers-are-bad-for-ai/
1•y14•12m ago•0 comments

Trick-or-Treat Protocol (TTP/1.0) API Reference

https://doc.holiday/blog/trick-or-treat-protocol-api
2•sandgardenhq•14m ago•0 comments

The Usual Suspects: low-level Roland JP-8000 emulator plugin from silicon [video]

https://www.youtube.com/watch?v=7VPrG5RHwGg
1•giulioz•15m ago•0 comments

Microscopic Geared Metamachines

https://www.nature.com/articles/s41467-025-62869-6
2•PaulHoule•15m ago•0 comments

Echolocating Through the AGI Reality Distortion Field

https://medium.com/kobalt-labs-tech-blog/echolocating-through-the-agi-reality-distortion-field-f4...
2•ashia•16m ago•0 comments

Python 3.14 Released with Template String Literals, Deferred Annotations, and

https://socket.dev/blog/python-3-14-released
1•feross•16m ago•0 comments

Ownable ideas that execute transparently through AI

1•clubanga•17m ago•0 comments

What climate skeptics taught me about global warming (2016)

https://perspicacity.xyz/2016/12/10/what-climate-skeptics-taught-me-about-global-warming/
1•FrancoisBosun•19m ago•0 comments

Will the explainer post go extinct?

https://dynomight.substack.com/p/explainers
1•crescit_eundo•19m ago•0 comments

Ask HN: Is AI-based debugging for robotics feasible?

1•Lazaruscv•20m ago•2 comments

Quantum Computing Breaks Encryption. This Contest Saved It. [video]

https://www.youtube.com/watch?v=aw6J1JV_5Ec
2•SirRuthven•21m ago•0 comments

HN: In AI era, code sharing app is still valuable?

2•SendSnippet•21m ago•0 comments

I analyzed 70 Data Breaches. Three controls would have stopped 65% of them

https://securityblueprints.io/posts/three-security-invariants-ciso-challenge/
4•nielsprovos•22m ago•0 comments

"t-strings" and `string.templatelib`: new in Python 3.14

https://docs.python.org/3/library/string.templatelib.html
2•12_throw_away•23m ago•1 comments

Bosses Are Cutting Costs, Just Not the Private Jet

https://www.wsj.com/business/bosses-are-cutting-costs-just-not-the-private-jet-b519ab6b
4•malshe•23m ago•2 comments

Show HN: I built a TikTok recipe extractor

https://ingrdnt.app
2•daniyalbhaila•23m ago•0 comments