"Friendly reminder" is typically used for reminding people of common knowledge, especially beneficial things that some or most people neglect to do because they're annoying, inconvenient, or time consuming. Things for which busy people might need a "wink wink, nudge nudge".
Friendly reminder to floss. Friendly reminder to have your cancer screening. Friendly reminder to check your tires. Friendly reminder to file your taxes early. Friendly reminder to drink more water, eat fiber, etc.
By what?
> The future is decentralized and p2p
I wish it were, but that isn't how things are going to turn out, especially if it's only people like you pushing it.
I mean, that joke is as old as the universe (heck, in the brief period that I worked in an office, decades ago, I had a "# days since the last person asked a stupid question" sign to enact the exact same gag)...
However this means I'm now using the Github website and services 1000x more than I was previously, and they're trending towards having coin-flip uptime stats.
If Github sold a $5000 box I could plug into a corner in my house and just use that entire experience locally I'd seriously consider it. I'm guessing maybe I could get partway there by spending twice that on a Mac Pro but I have no idea what the software stack would look like today.
Is there a fully local equivalent out-of-the-box experience that anyone can vouch for? I've used local agents primarily through VSCode, but AFAIK that's limited to running a single active agent over your repo, and obviously limited by the constraints of running on a single M1 laptop I currently use. I know at least some people are managing local fleets of agents in some manner, but I really like how immensely easy Github has made it.
https://docs.github.com/en/enterprise-server@3.19/admin/over...
"GitHub Enterprise Server is a self-hosted version of the GitHub platform"
> If Github sold a $5000 box I could plug into a corner in my house and just use that entire experience locally I'd seriously consider it. I'm guessing maybe I could get partway there by spending twice that on a Mac Pro but I have no idea what the software stack would look like today.
Right now, the only reasons to host LLMs locally are if you want to do it as a hobby or you are sensitive about data leaving your local network. If you only want a substitute for Copilot when GitHub is down, any of the hosted LLMs will work right away with no up front investment and lower overall cost. Most IDEs and text editors have built-in support for connecting to other hosted models or installing plugins for it.
> I know at least some people are managing local fleets of agents in some manner,
If your goal is to run fleets of agents in parallel, local LLM hosting is going to be a bottleneck. Familiarize yourself with some of the different tool options out there (Claude Code, Cline, even the new Mistral Vibe) and sign up for their cloud APIs. You can also check OpenRouter for some more options. The cloud-hosted LLMs will absorb parallel requests without a problem.
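As a concrete sketch of that last point: many agents and editor plugins speak the OpenAI-compatible API, so pointing one at OpenRouter is often just two environment variables. The variable names below are the ones the OpenAI SDK reads; other tools may use their own, so check their docs. The key value is a placeholder.

```shell
# Point an OpenAI-SDK-compatible tool at OpenRouter's hosted models.
# OPENAI_BASE_URL / OPENAI_API_KEY are the OpenAI SDK's variables; other tools
# may read different names. The key below is a placeholder, not a real key.
export OPENAI_BASE_URL="https://openrouter.ai/api/v1"
export OPENAI_API_KEY="sk-or-v1-your-key-here"
```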
The local models are right on the edge of being really useful: there's a tipping point where accuracy is high enough that getting things done is easy, rather than the model getting continuously stuck. We're in the neighborhood.
Alternatively, just have local GitLab and use one of the many APIs, those are much more stable than github. Honestly just get yourself a Claude subscription.
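For reference, a self-hosted GitLab can be brought up with the official Community Edition image. This is a minimal deployment sketch, not a production config; the hostname, ports, and volume paths are placeholders you'd adjust for your setup.

```shell
# Minimal self-hosted GitLab via the official gitlab-ce Docker image.
# Hostname, ports, and volume paths below are placeholders.
docker run --detach \
  --hostname gitlab.example.com \
  --publish 443:443 --publish 80:80 --publish 2222:22 \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  --name gitlab \
  gitlab/gitlab-ce:latest
```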
Adding Claude to my rotation is starting to look like the option with the least amount of building the universe from scratch. I have to imagine it can be used in a similar or identical workflow to the Copilot one where it can create PRs and make adjustments in response to feedback etc.
A big part of my success using LLMs to build software is building the tools to use LLMs and the LLMs making that tool building easy (and possible).
From M1? Yes, absolutely. M3 is marginal now but M5 will probably make it definite.
I recently got mirror support upstreamed into Nixpkgs for fetchdarcs & fetchpijul which actually work on my just-alpha-released pinning tool, Nixtamal <https://darcs.toastal.in.th/nixtamal/trunk/README.rst>, for just this sort of thing.
The internal conversation about moving away from Actions or possibly GitHub has been triggered. I didn't like Zig's post about leaving GitHub because it felt immature, but they weren't wrong. It's decaying.
GH Packages is something we're extricating ourselves from after today too. One more outage in the next year and maybe we get the ammunition to move away from GH entirely.
It's still hard to believe that they couldn't even keep the lights on on this thing.
It may have been updated, but nobody is reading the update.
Anger is a communication tool. It should absolutely be used when boundaries are being violated. Otherwise you’ll get walked all over.
See the edit history here: https://news.ycombinator.com/item?id=46133179
Edit: 1. just to be clear, it's very good that they have accepted the feedback and removed that part, but there's no apology (as far as I know) and it still makes you wonder about the culture. On the other side, people make mistakes under stress. 2. s/not warranted/unwarranted/
We don't have that for developers. Maybe shame/offense is our next best bet. You are free to work for a terrible company accepting and/or encouraging terrible design decisions, but you need to take into account the potential of being laughed at for said decisions.
(Snarky way of saying: GitHub still has huge mindshare and networking effects, dealing with another forge is probably too much friction for a lot of projects)
Not that GitHub doesn’t suck…
I use both GitLab and GitHub and have yet to experience any downtime on any of my stuff. I do, however, work at a large corporation, and the latest npm bug that hit GitHub caused enough of a stir that it basically shut down development in all of our lower environments for about two weeks, so there's that.
But I do agree, and it seems like their market share increased after the Microsoft acquisition which is contrary to what I heard in all my dev circles because of how uncool MSFT is to many of my friends.
GitHub - Historically, GitHub reports uptime around 99.95% or higher, which translates to roughly 20–25 minutes of downtime per month. They have a large infrastructure and redundancy, so outages are rare but can happen during major incidents.
GitLab - GitLab also targets 99.95% uptime for its SaaS offering (GitLab.com). However, GitLab has had slightly more frequent service disruptions compared to GitHub in the past, especially during scaling events or major upgrades. For self-hosted GitLab instances, uptime depends heavily on your own infrastructure.
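As a sanity check on those figures, a 99.95% uptime target translates to a downtime budget you can compute directly (assuming a 30-day month):

```shell
# Downtime allowed by a 99.95% uptime SLA over a 30-day month.
minutes_per_month=$((30 * 24 * 60))   # 43200 minutes in a 30-day month
awk -v m="$minutes_per_month" 'BEGIN { printf "%.1f minutes/month\n", m * 0.0005 }'
# prints "21.6 minutes/month"
```

That matches the "roughly 20–25 minutes of downtime per month" quoted above.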
We had that last year, with the full premium stuff ("pay as much as we can" mindset)
Please see this: a basic feature, much needed by lots of people (those who are stuck on azure ..): https://gitlab.com/gitlab-org/gitlab/-/issues/360592
Please read the entire thread with a particular attention to the timeline
```
# Mark every unread notification as read, one thread at a time:
# list all notifications, filter to unread thread IDs with jq, PATCH each one.
gh api notifications\?all=true | jq -r 'map(select(.unread) | .id)[]' | xargs -L1 sh -c 'gh api -X PATCH notifications/threads/$0'
```
```
gh api notifications -X PUT -F last_read_at=2025-10-06T00:00:00Z
```
Just change the date to today. I also got that line from a gh issue somewhere - maybe it was the same issue that you’re referring to.
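If you'd rather not edit the date by hand each time, the same `PUT /notifications` call can take the current UTC timestamp (a small variation on the line above):

```shell
# Mark all notifications read up to "now" (UTC) instead of a hard-coded date.
now=$(date -u +%Y-%m-%dT%H:%M:%SZ)
gh api notifications -X PUT -F "last_read_at=$now"
```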
Are you still seeing it, would you mind checking? Our team will get on it if so.
https://github.com/orgs/community/discussions/174310#discuss...
I had the same issue too, and this was the only thing that fixed it for me.
I'm a big advocate for GitHub adding IPv6 support, but let's not pretend it's critical for their business.
Just now I found:
* a job that's > 1 month old, still running
* another job that started 2 hours ago that had 0 output
* a job that was marked as pending, yet I could rerun it
* auto-merges that don't happen
* pull requests show (1), click it, no pull requests visible
Makes me wonder in how many places state is stored, because there is some serious disconnect between them.