frontpage.

Anthropic Outage for Opus 4.5 and Sonnet 4/4.5 across all services

https://status.claude.com/incidents/9g6qpr72ttbr
165•pablo24602•2h ago•91 comments

2002: Last.fm and Audioscrobbler Herald the Social Web

https://cybercultural.com/p/lastfm-audioscrobbler-2002/
114•cdrnsf•2h ago•57 comments

JSDoc is TypeScript

https://culi.bearblog.dev/jsdoc-is-typescript/
78•culi•4h ago•103 comments

Hashcards: A plain-text spaced repetition system

https://borretti.me/article/hashcards-plain-text-spaced-repetition
218•thomascountz•7h ago•87 comments

History of Declarative Programming

https://shenlanguage.org/TBoS/tbos_15.html
7•measurablefunc•1h ago•0 comments

Ask HN: What Are You Working On? (December 2025)

118•david927•7h ago•416 comments

In the Beginning was the Command Line (1999)

https://web.stanford.edu/class/cs81n/command.txt
73•wseqyrku•6d ago•30 comments

Claude CLI deleted my home directory, wiped my whole Mac

https://old.reddit.com/r/ClaudeAI/comments/1pgxckk/claude_cli_deleted_my_entire_home_directory_wi...
42•tamnd•33m ago•25 comments

The Typeframe PX-88 Portable Computing System

https://www.typeframe.net/
84•birdculture•6h ago•21 comments

Developing a food-safe finish for my wooden spoons

https://alinpanaitiu.com/blog/developing-hardwax-oil/
123•alin23•4d ago•67 comments

Interview with Kent Overstreet (Bcachefs) [audio]

https://linuxunplugged.com/644
14•teekert•3d ago•9 comments

Advent of Swift

https://leahneukirchen.org/blog/archive/2025/12/advent-of-swift.html
40•chmaynard•3h ago•7 comments

AI and the ironies of automation – Part 2

https://www.ufried.com/blog/ironies_of_ai_2/
197•BinaryIgor•10h ago•84 comments

Shai-Hulud compromised a dev machine and raided GitHub org access: a post-mortem

https://trigger.dev/blog/shai-hulud-postmortem
172•nkko•13h ago•106 comments

GraphQL: The enterprise honeymoon is over

https://johnjames.blog/posts/graphql-the-enterprise-honeymoon-is-over
156•johnjames4214•6h ago•138 comments

Price of a bot army revealed across online platforms

https://www.cam.ac.uk/stories/price-bot-army-global-index
70•teleforce•7h ago•18 comments

Checkers Arcade

https://blog.fogus.me/games/checkers-arcade.html
10•fogus•2d ago•1 comment

GNU recutils: Plain text database

https://www.gnu.org/software/recutils/
92•polyrand•4h ago•29 comments

Linux Sandboxes and Fil-C

https://fil-c.org/seccomp
329•pizlonator•1d ago•131 comments

Standalone Meshtastic Command Center – One HTML File Offline

https://github.com/Jordan-Townsend/Standalone
42•Subtextofficial•5d ago•10 comments

Baumol's Cost Disease

https://en.wikipedia.org/wiki/Baumol_effect
68•drra•11h ago•76 comments

Illuminating the processor core with LLVM-mca

https://abseil.io/fast/99
54•ckennelly•8h ago•5 comments

From sci-fi to reality: Researchers realise quantum teleportation using tech

https://cordis.europa.eu/article/id/462587-from-sci-fi-to-reality-researchers-realise-quantum-tel...
11•donutloop•1h ago•4 comments

Compiler Engineering in Practice

https://chisophugis.github.io/2025/12/08/compiler-engineering-in-practice-part-1-what-is-a-compil...
99•dhruv3006•16h ago•18 comments

Efficient Basic Coding for the ZX Spectrum (2020)

https://blog.jafma.net/2020/02/24/efficient-basic-coding-for-the-zx-spectrum/
48•rcarmo•11h ago•11 comments

Ask HN: Is starting a personal blog still worth it in the age of AI?

6•nazarh•54m ago•5 comments

Do dyslexia fonts work? (2022)

https://www.edutopia.org/article/do-dyslexia-fonts-actually-work/
45•CharlesW•4h ago•41 comments

iOS 26.2 fixes 20 security vulnerabilities, 2 actively exploited

https://www.macrumors.com/2025/12/12/ios-26-2-security-vulnerabilities/
125•akyuu•7h ago•112 comments

I fed 24 years of my blog posts to a Markov model

https://susam.net/fed-24-years-of-posts-to-markov-model.html
286•zdw•1d ago•112 comments

Getting into Public Speaking

https://james.brooks.page/blog/getting-into-public-speaking
100•jbrooksuk•4d ago•35 comments

Anthropic Outage for Opus 4.5 and Sonnet 4/4.5 across all services

https://status.claude.com/incidents/9g6qpr72ttbr
161•pablo24602•2h ago

Comments

delaminator•2h ago
Weird, because I am using Sonnet right now. I guess my time is limited.
sgt•1h ago
Maybe you're just using the cached Sonnet.
onionisafruit•1h ago
I've been using it through this and it occasionally stops with an error message saying something like "repeated 529 responses". Kind of annoying but it's fine.
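
For context, 529 is the status code the Anthropic API documents for "overloaded" errors, and the usual client-side answer is to retry with exponential backoff rather than fail the whole run. A minimal sketch in Python; the endpoint, headers, and model name follow the public docs but are assumptions here, not something confirmed in this thread:

    import random
    import time

    import requests

    API_URL = "https://api.anthropic.com/v1/messages"  # per public docs; assumption here

    def call_with_backoff(api_key, payload, max_retries=5):
        """POST to the Messages API, retrying on 429/529 with exponential backoff."""
        for attempt in range(max_retries):
            resp = requests.post(
                API_URL,
                headers={
                    "x-api-key": api_key,
                    "anthropic-version": "2023-06-01",
                    "content-type": "application/json",
                },
                json=payload,
                timeout=60,
            )
            if resp.status_code not in (429, 529):
                resp.raise_for_status()  # other errors surface immediately
                return resp.json()
            # Overloaded or rate-limited: wait 1s, 2s, 4s, ... plus jitter, then retry.
            time.sleep(2 ** attempt + random.random())
        raise RuntimeError("gave up after repeated 429/529 responses")

    # Example (model name is a placeholder):
    # call_with_backoff(key, {"model": "claude-sonnet-4-5", "max_tokens": 256,
    #                         "messages": [{"role": "user", "content": "hello"}]})
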
asasidh•1h ago
"We have identified that the outage is related to Sonnet 4.0, Sonnet 4.5, and Opus 4.5."

What else are people using? Haiku 4.5?

nunodonato•1h ago
I do. It's quite a nice and fast model.
asasidh•1h ago
Me too... that's the reason I mentioned it there.
gunalx•1h ago
Haiku 4.5 is a pretty decent small-ish model. It conforms pretty well to my style guides when cleaning up text, for example.
epolanski•1h ago
You made me try Haiku since I can't get Opus, and it made me realize how much quicker feedback simplifies many tasks. I should be more dynamic in my model selection.
riwsky•58m ago
I heard that Google and OpenAI also make coding models, but I’ve never bothered to confirm.
termos•1h ago
https://canivibe.ai/

So we can maybe vibe, depending on what service we use.

giancarlostoro•1h ago
Nice website; sadly, it embeds poorly on Discord and other chat apps.
bonesss•1h ago
Vibedetector
ares623•1h ago
We need a service that rates vibe coding capabilities. A "vibe rater".
iLoveOncall•31m ago
Wow, 89% availability is a joke.
acedTrex•1h ago
An overall net positive event.
abigail95•1h ago
It's Monday morning; I'm going back to bed.
Tom1380•1h ago
Australia?
abigail95•1h ago
Yes, and for political reasons I'm also taking the day off; this is just another excuse.
tom_•1h ago
Perhaps related to https://news.ycombinator.com/item?id=46266655? It's just too powerful, and they had to shut it down before something bad happened.
michelsedgh•1h ago
If they shut down Opus 4.5 I'll cry.
agumonkey•1h ago
I already heard people asking for more API credits, embarrassed like drug addicts.
XCSme•1h ago
Just a few more credits and it will finally fix that bug without introducing new ones, exactly how I asked
baobabKoodaa•1h ago
I can stop any time I want, and in fact I am going to stop. Just one more (bug)fix.
michelsedgh•56m ago
This joke is kind of getting old. Opus 4.5 handles all the bugs in one go and doesn't introduce new ones, at least for me. Very rarely do I get stuck with it like I did with past generations of AI.
agumonkey•39m ago
How long is the usual self-debugging cycle? It seems to be around 10 minutes for me (untyped language).
teaearlgraycold•1h ago
I think we’re all very happy with the pricing on it.
tcdent•1h ago
I use it as much as my brain can handle and I never exceed my Max plan quota.
michelsedgh•58m ago
Exactly. I feel like my brain burns out after a few days. Like I'm the limit already (yet I'm also the maximizer); it's a very weird feeling.
AnotherGoodName•57m ago
Just a warning for those not on the Max plan: if you pay by the token or have the lower-tier plans, you can easily blow through hundreds of dollars or cap your plan in under an hour. The rates for paying by the token are insane, and the scaling from Pro to Max is also pretty crazy.

They made Pro many times better value than paying per token, and then made Max on the $200 plan another 25x more tokens than Pro.

It's a bit like being offered rice at $1 per grain (pay per token), a tiny bag of rice for $20 (Pro), or a truckload for $200. That's the pricing structure right now.

So while I agree you can't easily exceed the quota on the big plans, it's a little crazy how they've tiered the pricing. I hope no one out there is paying per token!
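
To make the tiering concrete, a rough back-of-the-envelope sketch; the per-token prices and usage figures below are placeholders for illustration, not Anthropic's actual rates:

    # All numbers here are illustrative placeholders, NOT real Anthropic prices.
    PRICE_PER_MTOK_IN = 5.00     # $ per million input tokens (placeholder)
    PRICE_PER_MTOK_OUT = 25.00   # $ per million output tokens (placeholder)

    def monthly_api_cost(mtok_in_per_day, mtok_out_per_day, workdays=22):
        """Cost of paying by the token for a month of workdays."""
        return workdays * (mtok_in_per_day * PRICE_PER_MTOK_IN
                           + mtok_out_per_day * PRICE_PER_MTOK_OUT)

    # Agentic coding re-sends context on every turn, so daily input tokens add up fast.
    for mtok_in, mtok_out in [(5, 0.5), (20, 2), (50, 5)]:
        print(f"{mtok_in} Mtok in / {mtok_out} Mtok out per day "
              f"-> ~${monthly_api_cost(mtok_in, mtok_out):,.0f}/mo on the API")
    # Compare against a flat $20-ish (Pro-style) or $200 (Max-style) subscription.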

square_usual•40m ago
> I hope no one out there’s paying per token!

Some companies are. Yes, for Claude Code. My company used to be like that, as it's an easy ramp-up instead of giving devs who might not use it that much a $150/mo seat; if you use it enough you can have a seat and save money, but if you're not touching $150 in credits a month, just use the API. Oxide also recommends using API pricing. [0]

0: https://gist.github.com/david-crespo/5c5eaf36a2d20be8a3013ba...

tcdent•19m ago
Oh yeah, totally. My bill used to be closer to $1000/mo when paying per token.
theropost•1h ago
Just came back online here
irishcoffee•1h ago
I’m imagining a steampunk dystopia in 50 years: “all world production stopped, LLM hosting went down. The market is in free-fall. Sam, are you there?”

Man that cracks me up.

michelsedgh•1h ago
Just like the internet, or Cloudflare going down?
irishcoffee•1h ago
No, not even close
bdangubic•1h ago
It is much worse; I forgot how to push to remote, so deploys are delayed :)
patcon•1h ago
Agreed. When cloudflare (ugh, aka the internet) goes down, we can't access information to think and work through. ("the fuel" in some metaphor)

But what about when LLMs go down and a good chunk of a whole generation won't even know how to think, when the remote system goes down? (Is the ability to think "the engine" of self and agency in this metaphor?)

We are building a wildly irresponsible context to exist in.

semi-extrinsic•1h ago
E. M. Forster would like a word.
teaearlgraycold•1h ago
The nice thing is that, unlike Cloudflare or AWS, you can actually host good LLMs locally. I see a future where a non-trivial percentage of devs have an expensive workstation that runs all of the AI locally.
breatheoften•1h ago
I'm more and more convinced of the importance of this.

There is a very interesting thing happening right now where the "LLM over-promisers" are incentivized to over-promise for all the normal reasons -- but ALSO to create the perception that the "next/soon" breakthrough will only be applicable when run on huge cloud infra, such that running locally will never be all that useful. I tend to think that will prove wildly wrong, and that we will very soon arrive at a world where state-of-the-art LLM workloads run massively more efficiently than they currently do -- to the point of not even being the bottleneck of the workflows that use these components. Additionally, these workloads will be viable to run locally on common current-year consumer-level hardware.

"llm is about to be general intelligence and sufficient llm can never run locally" is a highly highly temporary state that should soon be falsifiable imo. I don't think the llm part of the "ai computation" will be the perf bottleneck for long.

lwhi•48m ago
Is there any utility in thinking about LLM provision in terms of the electricity grid?

I've often thought that local power generation (via solar or wind) could be (or could have been) a viable alternative to national grid supply.

tablets•25m ago
I think you're onto something re: electricity - https://www.latitudemedia.com/news/in-africa-the-first-signs...
exe34•1h ago
I think it's possible, but the current trend is that by the time you can run level x at home, the frontier models are at 10-100x, so if you can run today's Claude.ai at home, software engineering as a career is already over.
pstuart•58m ago
My poorly informed hope is that we can have a mixture of experts with highly tuned models for areas of focus. If I'm coding in language Foo, I only care about a model that understands Foo and its ecosystem. I imagine that should be self-hostable now.
colordrops•1h ago
What's the best you can do hosting an LLM locally for under $X? Let's say $5,000. Is there a reference guide online for this? Is there a straight answer, or does it depend? I've looked at the Nvidia Spark and high-end professional GPUs, but they all seem to have serious drawbacks.
cft•56m ago
https://www.reddit.com/r/LocalLLaMA/
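
On the software side, serving a quantized open-weights model locally is only a few lines; the hard part is the hardware budget being asked about above. A minimal sketch using llama-cpp-python; the model file, context size, and GPU offload settings are placeholders that depend entirely on your RAM/VRAM:

    from llama_cpp import Llama   # pip install llama-cpp-python

    llm = Llama(
        model_path="models/some-coding-model-q4_k_m.gguf",  # placeholder GGUF file
        n_ctx=8192,        # context window to allocate
        n_gpu_layers=-1,   # offload all layers to the GPU if one is available
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Write a binary search in Python."}],
        max_tokens=512,
    )
    print(out["choices"][0]["message"]["content"])
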
lxgr•1h ago
I’m fairly sure you can also still run computers locally and connect them to the Internet.
irishcoffee•1h ago
Ah, you need to buy into this dystopia wholesale. The internet is also down because the LLMs fucked up the BGP routing table, which Congress agreed (at the time) should run through the LLM interface.

Imagination, either the first or last thing to die in 2075.

lxgr•44m ago
Congress administrating BGP? Now we’re talking dystopia!
cft•57m ago
That's the only future of open source that I can see.
lxgr•1h ago
Everybody using the same three centralized inference providers? That would be as absurd and unrealistic as everybody hosting in us-east-1 and behind Cloudflare today!
jsight•1h ago
I remember hearing Karpathy refer to these outages as a worldwide "intelligence brownout".

Crazy: https://www.youtube.com/shorts/SV4DMqAJ8RQ

sergiotapia•33m ago
Like if electricity went out, no? Same deal.
irishcoffee•25m ago
Imagination.

“So sorry, we can’t read the WiFi signal from your generator. We don’t know how much to tax you for the emissions. Sadly, we can’t allow it to start.”

The story writes itself.

cdelsolar•6m ago
Claude Code cut me off a few days ago and I _seriously_ had no idea what to do. I've been coding for 33 years and I suddenly felt like anything I did manually would be an order of magnitude slower than it had to be.
dinkleberg•1h ago
Props to them for actually updating their status page as issues are happening rather than hours later. I was working with Claude Code and hit an API error, checked the status page, and sure enough there was an outage.

This should be a given for any service that others rely on, but sadly this is seldom the case.

arach•1h ago
Same as you, and I was glad to see the status page; I hit subscribe on updates.

The Claude user base believes in Sunday PM work sessions.

exe34•1h ago
Hah, I ran out of tokens a bit before it hit, I reckon.
rnewme•44m ago
Same here, and I had just gotten started. Hm...
airstrike•30m ago
Sunday PM builder, reporting in.
gwd•18m ago
As a solo bootstrapped founder, I take my sabbath sundown on Saturday to sundown on Sunday. Sunday evening therefore is generally the start of my work week.
LanceH•1h ago
Confusingly, I was trying to debug something with a 529, and this outage really had me going for a minute.
fragmede•1h ago
Seldom? Most status pages I've seen do eventually get updated, just not within the first critical 3 minutes.
palcu•34m ago
Thank you! Opening an incident as soon as user impact begins is one of those instincts you develop after handling major incidents for years as an SRE at Google, and now at Anthropic.

I was also fortunate to be using Claude at that exact moment (for personal reasons), which meant I could immediately see the severity of the outage.

aduwah•17m ago
Take my condolences; Sunday outages are rough.
edverma2•1h ago
time to go outside
triwats•1h ago
I had an hour to vibe tonight and it looks like it may have gone.

Spent it in bloody Figma instead :(

russellthehippo•1h ago
Anthropic is very focused on AI safety. It makes LLMs safe by preventing anyone from using them.
victor9000•1h ago
It's the best way to ensure model wellness
bitwize•1h ago
And just like that, the brightest engineers in Silicon Valley were unable to get any programming done.
sebastiennight•1h ago
In the Claude.ai chat, this was announced to me as

    "You have reached the messages quota for your account. It will reset in 2 hours, or you can upgrade now"
Either I have perfect timing for reaching my quota limits, or some product monetization manager deserves a raise.
manquer•1h ago
More likely the error handling is not well implemented - i.e., either the backend is not throwing the equivalent of 429/402 errors, or the gateway is not handling the errors well and returns this message even though a 429 is being thrown.
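
A sketch of the kind of gateway-side mapping being described, so an upstream overload does not get reported to the user as a quota problem; the status codes follow common convention and the message strings are invented for illustration:

    # Map upstream status codes to user-facing messages; keep "quota" wording
    # strictly for 429 so an outage (529/5xx) never reads as a billing issue.
    USER_MESSAGES = {
        429: "You have reached the messages quota for your account. Try again later.",
        529: "The service is temporarily overloaded. Please retry in a few minutes.",
    }

    def user_facing_error(upstream_status: int) -> str:
        if upstream_status in USER_MESSAGES:
            return USER_MESSAGES[upstream_status]
        if 500 <= upstream_status < 600:
            return "Something went wrong on our side. Please retry."
        return "Unexpected error. Please retry or contact support."

    assert "quota" not in user_facing_error(529)  # an overload must not look like a quota hit
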
palcu•1h ago
Hello, I'm one of the engineers who worked on the incident. We have mitigated the incident as of 14:43 PT / 22:43 UTC. Sorry for the trouble.
dan_wood•58m ago
Can you divulge more on the issue?

Only curious as a developer and ops person. It's all quite interesting where and how things go wrong, especially with large deployments like Anthropic's.

binsquare•53m ago
I yearn for the nitty-gritty details too.
mulhoon•45m ago
They turned it off and on again.
cheschire•21m ago
Three times.

https://youtu.be/uRGljemfwUE?si=Sq0t-2ipXr_gDqao&t=69

Chance-Device•51m ago
Thank you for your service.
nickpeterson•49m ago
The one time you desperately need to ask Claude and it isn’t working…
dgellow•18m ago
Hope you have a good rest of your weekend
g-mork•18m ago
it's still down get back to work
l1n•11m ago
Also an engineer on this incident. This was a network routing misconfiguration - an overlapping route advertisement caused traffic to some of our inference backends to be blackholed. Detection took longer than we’d like (about 75 minutes from impact to identification), and some of our normal mitigation paths didn’t work as expected during the incident.

The bad route has been removed and service is restored. We’re doing a full review internally with a focus on synthetic monitoring and better visibility into high-impact infrastructure changes to catch these faster in the future.
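
The failure mode described above maps onto plain longest-prefix matching: a stray, more-specific prefix pointing at the wrong place wins over the legitimate covering route, and everything inside it disappears. A toy illustration with Python's ipaddress module; the prefixes and next hops are hypothetical, not Anthropic's actual network:

    import ipaddress

    routing_table = [
        (ipaddress.ip_network("10.20.0.0/16"), "edge-router-a"),  # intended route
        (ipaddress.ip_network("10.20.5.0/24"), "blackhole"),      # bad overlapping advertisement
    ]

    def next_hop(dst: str) -> str:
        matches = [(net, hop) for net, hop in routing_table
                   if ipaddress.ip_address(dst) in net]
        # Forwarding picks the most specific (longest-prefix) match.
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    print(next_hop("10.20.9.1"))   # -> edge-router-a (only the /16 matches)
    print(next_hop("10.20.5.17"))  # -> blackhole (the /24 shadows the /16)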

WhyOhWhyQ•56m ago
Didn't notice. Guess I'm legit.
__0x01•50m ago
Engineering Room, panning over a bunch of hot Blackwells

"I can't change the laws of physics!"

matt3210•46m ago
When vibe coders do the infra.
dorongrinstein•32m ago
You guys do an awesome job. I noticed the outage today and felt bad for your Sunday. If you guys need help with unbreakable compute, please reach out to https://controlplane.com

We let you run compute across any cloud, so if a region/cloud experiences an issue, users are served from surviving regions/clouds/on-prem.

Cheers, Doron

6r17•12m ago
It seems resolved now (per the status page) - I experienced a moment where the agent got stuck in the same error loop, only to pop the result this time. Makes me wonder if there has been some kind of rule applied to automatically detect such failures recurring - quite inspiring work.
m_ke•9m ago
Was it just me, or did Opus start producing incredibly long responses before the crash? I was asking basic questions and it wouldn't stop trying to spit out full codebases' worth of unrelated code. For some very simple questions about database schemas, it ended up compacting twice in a 3-message conversation.
rvz•8m ago
Anthropic is surpassing GitHub in unreliability.

Looking forward to the post-mortem.