Though I will be the first to say I don't fully trust it, based on the flaky git clone errors we see in CI.
The data is there, you just have to hover over each data point.
Looking at this now, you might as well self host and you would still get better uptime than GitHub.
"The Missing GitHub Status Page" with overall aggregate percentages. Currently at 90.84% over the last 90 days. It was at 90.00% a couple days ago.
Anecdotally, it seems believable that Actions barfed 1 in 50 times (2%) in Feb. Which is not very nice, but it wasn't 1 in 10 (10%).
That being said, even when looking at the split uptimes, you'd have to do a very skewed weighting to achieve a number with more than one 9.
It's definitely bad no matter how you slice the pie.
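To make the "skewed weighting" point concrete, here's a quick sketch with made-up per-service uptime numbers (not GitHub's actual figures): any weighted average of the split uptimes is a convex combination, so it can never exceed the best component, and getting past one 9 means piling nearly all the weight on the most reliable services.

```python
# Hypothetical per-service uptimes over 90 days (illustrative, not real data).
uptimes = {
    "git": 0.995,
    "actions": 0.97,
    "pages": 0.999,
    "api": 0.99,
}

def weighted_uptime(uptimes, weights):
    """Convex combination of per-service uptimes."""
    total = sum(weights.values())
    return sum(uptimes[s] * weights[s] for s in uptimes) / total

# Equal weighting lands between the worst and best components ...
equal = weighted_uptime(uptimes, {s: 1 for s in uptimes})

# ... and even putting 90% of the weight on the most reliable service
# only gets you so far: the result can never exceed max(uptimes).
skewed = weighted_uptime(uptimes, {"git": 1, "actions": 1, "pages": 27, "api": 1})

print(f"equal: {equal:.4f}, skewed: {skewed:.4f}")  # equal: 0.9885, skewed: 0.9976
```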
If GH pages is not serving content, my work is not blocked. (I don't use GH pages for anything personally)
The problem here is the specification of what the system is. It's a bit unfair to call GH a single service, but that's how Microsoft sells it.
That's not how I and many others calculate uptime. There is no uniformity, especially when you look at contracts.
When I was at IBM, they didn't meet their SLOs for Watson, and customers got a refund for that portion of their spend.
From the point of view of an individual developer, it may be "fraction of tasks affected by downtime" - which would lie between the average and the aggregate, as many tasks use multiple (but not all) features.
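A toy model of why the per-task number lands between the per-service average and the everything-up aggregate. The outage calendars below are invented for illustration; the only structural assumption is that a task fails if any service it touches is down.

```python
DAYS = 90
# Hypothetical outage calendars: day indices on which each service was down.
down = {
    "git":     {3},              # 1 bad day
    "actions": {3, 10, 20, 40},  # 4 bad days
    "api":     {10, 55},         # 2 bad days
    "pages":   {70},             # 1 bad day
}
SERVICES = list(down)

def uptime(services):
    """Fraction of days on which every service in `services` was up."""
    bad_days = set().union(*(down[s] for s in services))
    return 1 - len(bad_days) / DAYS

# Average per-service uptime (the optimistic view).
average = sum(uptime([s]) for s in SERVICES) / len(SERVICES)

# Aggregate "everything up at once" uptime (the status-page-style view).
aggregate = uptime(SERVICES)

# A deploy task that touches only git + actions sits in between.
task = uptime(["git", "actions"])

print(round(average, 4), round(task, 4), round(aggregate, 4))
# 0.9778 0.9556 0.9333
```

The ordering aggregate ≤ task ≤ average holds here because the task's bad days are a subset of the union of all services' bad days.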
But if you take the point of view of a customer, it might not matter as much 'which' part is broken. To use a bad analogy, if my car is in the shop 10% of the time, it's not much comfort if each individual component is only broken 0.1% of the time.
Not to go too far out of my way to defend GH's uptime, because it's obviously pretty patchy, but I think this is a bad analogy. Most customers won't have a hard dependency on every user-facing GH feature. Or to put it another way, only a tiny fraction of users will actually have experienced something like the 90% uptime reported by the site. Most people in practice are probably experiencing something like 97-98%.
But yeah, totally agree that at the individual level, the observed reliability is between 90% and 99%, and probably toward the upper end of that range.
GitHub is a different situation. There's one "thing" users interact with, github.com, and it does a bunch of related things. Git operations, web hooks, the GitHub API (and thus their CLI tool), issues, pull requests, Actions; it's all part of the one product users think of as "GitHub", even if they happen to be implemented as different services which can fail separately.
EDIT: To illustrate the analogy: Google Code, Google Search and Google Drive are to Google what Microsoft GitHub, Microsoft Bing and Microsoft SharePoint are to Microsoft.
When I merge to master I expect a deploy to follow. This goes through git, webhooks and actions. Especially the latter two can fail silently if you haven't invested time in observation tools.
If maps is down I notice it and immediately can pivot. No such option with Github.
The graph being all nice before the Microsoft acquisition is a fun narrative, until you realize that some products (like actions, announced on October 16th, 2018) didn't exist and therefore had no outages. Easy to correct for by setting up start dates, but not done here. For the rest that did exist (API requests, Git ops, pages, etc) I figured they could just as easily be explained with GitHub improving their observability.
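The start-date correction described above is simple enough to sketch. Everything here is hypothetical (made-up outage tallies and an arbitrary tracking window); only the Actions announcement date (2018-10-16) comes from the thread.

```python
from datetime import date

# Hypothetical tracking window and outage-day tallies (illustrative numbers,
# not GitHub's actual figures).
window_start, window_end = date(2016, 1, 1), date(2024, 1, 1)
products = {
    # name: (launch date, outage days recorded inside the window)
    "git_ops": (date(2008, 4, 10), 20),
    "actions": (date(2018, 10, 16), 15),  # Actions announced 2018-10-16
}

def uptime(launch, outage_days):
    """Uptime over only the part of the window in which the product existed."""
    start = max(launch, window_start)
    lifetime = (window_end - start).days
    return 1 - outage_days / lifetime

naive_window = (window_end - window_start).days
for name, (launch, outages) in products.items():
    # The naive number spreads the outages over the whole window, which
    # flatters products that launched mid-window.
    naive = 1 - outages / naive_window
    print(f"{name}: naive {naive:.4f}, corrected {uptime(launch, outages):.4f}")
```

For a product like Actions that existed for only part of the window, the corrected uptime is strictly lower than the naive one, since the same outages are divided over a shorter lifetime.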
The whole "just because we could doesn't mean we should" quote applies here.
Maybe that's just the date when they started tracking uptime using this system?
GitHub’s reliability could stand to be improved, but without narrowing down to products, these sorts of comparisons are meaningless.
Just the Git operations show way more instability post acquisition.
And even just that aspect of the service is now extremely unreliable. If outages in the LLM side can cause that to break, that would indicate some serious architectural problems.
It’s despicable to see everyone punching down on GitHub. Even under Microsoft they’ve continued to provide an invaluable and free service to open source developers.
And now, while vibe coders smother them to death, we ridicule them. Shameful, really.
If it brings them down, they’ve only themselves to blame. More likely it’ll just hasten the end of free public repos, which will be a shame, but we’ll find other ways to share code that aren’t reliant on one semi-benevolent megacorp.
I hope GitHub shuts down the free tier; maybe developers will finally be grateful.
They’re a big enough corporation that we can have nuanced feelings about them. Simultaneously grateful for one part of what they do, and unsympathetic for the consequences of a different part of what they do.
Actually, during the last 4-5 GitHub outages, our Azure environments have had issues (which they rarely post on the status page), and lo and behold, I'll notice that GitHub is having the same problem.
I can only assume most of this is from the Azure migration path. Such an abysmal platform to be on. I loathe it.
Looks like there's an internal service health bulletin:
Impact Statement: Starting at 19:53 UTC on 31 Mar 2026, some customers using the Key Vault service in the East US region may experience issues accessing Key Vaults. This may directly impact performing operations on the control plane or data plane for Key Vault or for supported scenarios where Key Vault is integrated with other Azure services.
Honestly all of the key vault functions are offline for us in that region. Just another day in paradise.
Also, the Azure status page remaining green is normal. Just assume it's statically green unless enough people notice.
If you started the y-axis at zero, you wouldn't see much of anything. Logarithmic scale would still be a bit much imo.
That's... kind of my point.
As a reliability engineer, I'm disappointed in GitHub's 99.5% availability periods, especially as they impact paying customers. On the other hand, most users are non-paying users, and a 99.5% availability for a free service seems to me to be a reasonable tradeoff relative to the potential cost of improving reliability for them.
The fact that we’re all talking about it, and not at all surprised, is a great example to cite when making the case for more 9s of reliability.
* well, very technical power users.