
Formal methods only solve half my problems

https://brooker.co.za/blog/2022/06/02/formal.html
78•signa11•1mo ago

Comments

HPsquared•1mo ago
Maybe they solve the first 90%, but not the other 90%.
chrisaycock•1mo ago
The article points out that tools like TLA+ can prove that a system is correct, but can't demonstrate that a system is performant. The author asks for ways to assess latency et al., which is currently handled by simulation. While this has worked for one-off cases, OP requests more generalized tooling.

It's like the quote attributed to Don Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it."
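The one-off simulation approach OP mentions doesn't have to be heavyweight, for what it's worth. Here's a toy sketch (all rates hypothetical) of estimating tail latency for a single-server FIFO queue with exponential interarrival and service times — exactly the kind of property a TLA+ correctness proof says nothing about:

```python
import random

def p99_latency(arrival_rate, service_rate, n=50_000, seed=0):
    """Toy single-server FIFO queue simulation (rates are hypothetical):
    estimates the p99 sojourn time. Exponential interarrival and
    service times; a performance property, not a correctness one."""
    rng = random.Random(seed)
    t = 0.0        # current arrival time
    free_at = 0.0  # when the server next becomes idle
    latencies = []
    for _ in range(n):
        t += rng.expovariate(arrival_rate)
        start = max(t, free_at)
        free_at = start + rng.expovariate(service_rate)
        latencies.append(free_at - t)  # queueing delay + service time
    latencies.sort()
    return latencies[int(0.99 * n)]
```

Generalized tooling would presumably wrap something like this in a spec language, which is what OP is asking for.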

throw-qqqqq•1mo ago
There are methods of determining Worst-Case Execution Time (WCET). I’ve been involved in real-time embedded systems development, where that was a thing.

But one tool (like TLA+) can’t realistically support all formalisms for all types of analyses ¯\_(ツ)_/¯
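For contrast, measurement-based timing — which is what most non-certified projects actually do — only ever gives a lower bound on the true WCET. A sketch (function and inputs hypothetical):

```python
import time

def observed_wcet(fn, inputs, runs=20):
    """Measurement-based timing: the maximum execution time observed
    over the given inputs. NOTE: this is only a *lower bound* on the
    true WCET; static WCET analysis must bound all paths (and model
    the hardware), which measurement cannot do."""
    worst = 0.0
    for x in inputs:
        for _ in range(runs):
            t0 = time.perf_counter()
            fn(x)
            worst = max(worst, time.perf_counter() - t0)
    return worst
```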

pjmlp•1mo ago
From my point of view, they cannot even prove that, because in most cases there is no validation that the TLA+ model actually maps to, e.g., the C code that was written.

I only believe in formal methods where we always have a machine validated way from model to implementation.

jgalt212•1mo ago
preach
pdhborges•1mo ago
Well, Coq has program extraction built in.
Ericson2314•1mo ago
Yeah and that's why it's way better than the likes of TLA+.
ted_dunning•1mo ago
See Dafny
pjmlp•1mo ago
I know it, :)
NooneAtAll3•1mo ago
what is P?
aw1621107•1mo ago
Looks like it's this [0]:

> Distributed systems are notoriously hard to get right (i.e., guaranteeing correctness) as the programmer needs to reason about numerous control paths resulting from the myriad interleaving of events (or messages or failures). Unsurprisingly, programmers can easily introduce subtle errors when designing these systems. Moreover, it is extremely difficult to test distributed systems, as most control paths remain untested, and serious bugs lie dormant for months or even years after deployment.

> The P programming framework takes several steps towards addressing these challenges by providing a unified framework for modeling, specifying, implementing, testing, and verifying complex distributed systems.

It was last posted on HN about 2 years ago [1].

[0]: https://p-org.github.io/P/whatisP/

[1]: https://news.ycombinator.com/item?id=34273979
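The "myriad interleaving of events" problem is easy to demonstrate at toy scale. This sketch (plain Python, not P itself — just an illustration) enumerates every program-order-preserving interleaving of two non-atomic increments and surfaces the classic lost update:

```python
from itertools import permutations

def run(schedule):
    # Shared counter; each process does a non-atomic increment:
    # read into a local register, then write register + 1 back.
    shared = 0
    regs = {}
    for pid, op in schedule:
        if op == "read":
            regs[pid] = shared
        else:  # "write"
            shared = regs[pid] + 1
    return shared

# Two processes, each performing: read, then write.
ops = [("A", "read"), ("A", "write"), ("B", "read"), ("B", "write")]

def interleavings(ops):
    # Keep only schedules that preserve each process's program order.
    for p in permutations(ops):
        if all(
            [o for o in p if o[0] == pid] == [o for o in ops if o[0] == pid]
            for pid in ("A", "B")
        ):
            yield p

results = {run(s) for s in interleavings(ops)}
# results contains 2 (the intended total) and 1 (a lost update)
```

Tools like P (and TLA+'s TLC) do this exploration systematically over real-sized state spaces, which is the point of the framework.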

whinvik•1mo ago
Nice, I actually understood a lot of that post since I am trying to teach myself formal methods. Wrote up a bit here - https://vikramsg.github.io/introduction-to-formal-methods-pa...
jadbox•1mo ago
Are there any good formal method tools that work well with Node.js/Bun/Deno projects?
NovemberWhiskey•1mo ago
Outside of a very narrow range of safety- or otherwise ultra-critical systems, no-one is designing for actual guarantees of performance attributes like throughput or latency. The compromises involved in guarantees are just too high in terms of over-provisioning, cost to build and so on.

In large, distributed systems the best we're looking for is statistically acceptable. You can always tailor a workload that will break a guarantee in the real world.

So you engineer with techniques that ensure workloads you have characterized as realistic can be handled with headroom, and you worry about graceful degradation under oversubscription (i.e. maintaining "goodput"). In my experience, that usually comes down to good load-balancing, auto-scaling and load-shedding.

Virtually all of the truly bad incidents I've seen in large-scale distributed systems are caused by an inability to recover back to steady-state after some kind of unexpected perturbation.

If I had to characterize problem number one, it's bad subscriber-service request patterns that don't handle back pressure appropriately: subscribers that don't know how to back off properly, and services that don't provide back pressure. The classic example is a subscriber that retries requests on a static schedule and gives up on requests that have been in-flight "too long", coupled with services that continue to accept requests when oversubscribed.
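Both halves of that pairing can be sketched in a few lines (parameter values hypothetical): full-jitter exponential backoff on the subscriber side, and a bounded queue that rejects early on the service side:

```python
import random

def backoff_delays(max_attempts, base=0.1, cap=30.0):
    """Full-jitter exponential backoff (values hypothetical): each
    retry sleeps a random amount in [0, min(cap, base * 2**attempt)],
    so stampeding clients decorrelate instead of retrying in lockstep."""
    return [random.uniform(0, min(cap, base * (2 ** n)))
            for n in range(max_attempts)]

class BoundedService:
    """Service-side back pressure: reject immediately once the queue
    is full, rather than accepting work that will time out anyway."""
    def __init__(self, depth):
        self.depth = depth
        self.queue = []

    def submit(self, req):
        if len(self.queue) >= self.depth:
            return False          # explicit rejection = back pressure
        self.queue.append(req)
        return True
```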

amw-zero•1mo ago
I think this is less about guarantees and more about understanding behavioral characteristics in response to different loads.

I personally couldn't care less about proving that an endpoint always responds in less than 100ms, say, but I care very much about understanding where various saturation points are in my systems, what values I should set for limits like database connections, or what the effect of sporadic timeouts is, etc. I think that's more the point of this post (which you see him talk about in other posts on his blog).

NovemberWhiskey•1mo ago
I am not sure that static analysis is ever going to give answers to those questions. I think the best you can hope to do is surface knowledge about the tacit assumptions about dependencies in order to explore their behaviors through simulation or testing.

I think it often boils down to "know when you're going to start queuing, and how you will design the system to bound those queues". If you're not using that principle at design stage then I think you're already cooked.
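For the "know when you're going to start queuing" part, even the textbook M/M/1 closed forms are a useful back-of-envelope check at design stage (a modeling assumption, not a claim about any real system):

```python
def mm1_stats(arrival_rate, service_rate):
    """Closed-form M/M/1 results: utilization, mean number in system,
    and mean time in system. Assumes Poisson arrivals and exponential
    service times -- a back-of-envelope model only."""
    rho = arrival_rate / service_rate
    assert rho < 1, "unstable: the queue grows without bound"
    mean_in_system = rho / (1 - rho)
    mean_time = 1 / (service_rate - arrival_rate)  # consistent with Little's law
    return rho, mean_in_system, mean_time

# e.g. 80 req/s against a server that can do 100 req/s:
# utilization 0.8, ~4 requests in flight, 50 ms mean time in system
```

The nonlinearity near rho = 1 is the whole story: doubling load from 40% to 80% utilization quadruples the mean queue.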

amw-zero•1mo ago
Who brought up static analysis?

I think simulation is definitely a promising direction.

NovemberWhiskey•1mo ago
I mean, the fundamental premise of formal methods is that assurance of correctness is achieved through unambiguous specification/modeling and mathematical proof. The extent to which you're dependent on dynamic testing of actual code to achieve assurance does speak to the extent to which you're really relying on formal methods.
amw-zero•3w ago
That’s literally what the post is about. I don’t see your point. The post is saying that formal tools currently do not handle performance and reliability problems. No one said otherwise.
AlotOfReading•1mo ago
It's just realtime programming. I wouldn't say that realtime techniques are limited to a very narrow range of ultra critical systems, given that they encompass everything from the code on your SIM card to games in your steam library.

    In large, distributed systems the best we're looking for is statistically acceptable. You can always tailor a workload that will break a guarantee in the real world.
This is called "soft" realtime.
NovemberWhiskey•1mo ago
"Soft" realtime just means that you have a time-utility function that doesn't step-change to zero at an a priori deadline. Virtually everything in the real world is at least a soft realtime system.

I don't disagree with you that it's a realtime problem, I do however think that "just" is doing a lot of work there.
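The time-utility-function framing is easy to make concrete (shapes and values hypothetical): hard realtime steps to zero at the deadline, soft realtime decays over some grace period:

```python
def utility(latency, deadline, grace):
    """Illustrative time-utility functions (shapes are hypothetical).
    Hard realtime: utility steps to zero at the deadline.
    Soft realtime: utility decays linearly over a grace period."""
    hard = 1.0 if latency <= deadline else 0.0
    lateness = max(0.0, latency - deadline)
    soft = max(0.0, 1.0 - lateness / grace)
    return hard, soft
```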

AlotOfReading•1mo ago
There are multiple ways to deal with deadline misses for soft systems. Only some of them actually deliver the correct data, just late. A lot of systems will abort the execution and move on with zeros/last computed data instead, or drop the data entirely. A modern network AQM system like CAKE uses both delayed scheduling and intelligent dropping.

Agreed though, "just" is hiding quite a deep rabbit hole.

bluGill•1mo ago
While you don't need performance guarantees for most things, you still need performance. You can safely let "a small number" of requests "take too long", but if you let "too many" your users will start to complain and go elsewhere. Of course everything in quotes is fuzzy (though sometimes we have very accurate measures for specific things), but you need to meet those requirements even if they are not formal.
amw-zero•1mo ago
This is the single most impactful blog post I've read in the last 2-3 years. It's so obvious in retrospect, but it really drove the point home for me that functional correctness is only the beginning. I personally had been over-indexing on functional correctness, which is understandable since a reliable but incorrect system isn't valuable.

But, in practice, I've spent just as much time on issues introduced by perf / scalability limitations. And the post's thesis is correct: we don't have great tools for reasoning about this. This has been pretty much all I've been thinking about recently.

adamddev1•1mo ago
There could be more linear and "resource-aware" type systems coming down the pipes through research. These would allow the type checker to show performance / resource information. Check out Resource Aware ML.

https://www.raml.co/about/

https://arxiv.org/abs/2205.15211

amw-zero•1mo ago
Super interesting, but I think this will be very difficult in practice due to the gigantic effect of nondeterminism at the hardware level (caches, branch prediction, out of order execution, etc.)
Ericson2314•1mo ago
The author should try some more modern formal methods.

Tools like Lean and Rocq can do arbitrary math — the limit is your time and budget, not the tool.

These performance questions can be mathematically defined, so it is possible.
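As a trivial illustration of "the math is definable": one can state and prove a cost bound in Lean 4 for a hypothetical cost model (here, just counting recursive calls) — though scaling this to caches and schedulers is exactly where the time and budget go:

```lean
-- Hypothetical cost model: count the recursive calls of a linear scan.
def cost : Nat → Nat
  | 0 => 0
  | n + 1 => cost n + 1

-- Once cost is an ordinary function, bounds are ordinary theorems.
theorem cost_linear (n : Nat) : cost n = n := by
  induction n with
  | zero => rfl
  | succ n ih => simp [cost, ih]
```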

ted_dunning•1mo ago
Indeed.

And the seL4 kernel has latency guarantees based on similar proofs (at considerable cost).

adamddev1•1mo ago
There is a bunch of research happening around "Resource-Aware" type theory. This kind of type theory checks performance, not just correctness. Just like the compiler can show correctness errors, the compiler could show performance stats/requirements.

https://arxiv.org/abs/2205.15211

Already we have Resource Aware ML which

> automatically and statically computes resource-use bounds for OCaml programs

https://www.raml.co/about/
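The flavor of what such an analysis certifies can be mimicked dynamically (an illustration only — RaML derives the bound statically, without running anything): count an explicit cost metric and compare it against the bound the type system would attach:

```python
def insertion_sort_with_cost(xs):
    """Insertion sort instrumented with a comparison counter, checked
    against the quadratic bound a resource-aware analysis would derive
    statically: at most n*(n-1)/2 comparisons."""
    xs = list(xs)
    cost = 0
    for i in range(1, len(xs)):
        j = i
        while j > 0:
            cost += 1                      # one comparison
            if xs[j - 1] > xs[j]:
                xs[j - 1], xs[j] = xs[j], xs[j - 1]
                j -= 1
            else:
                break
    return xs, cost

n = 8
data = list(range(n, 0, -1))               # worst case: reverse-sorted
sorted_xs, cost = insertion_sort_with_cost(data)
assert sorted_xs == list(range(1, n + 1))
assert cost <= n * (n - 1) // 2            # the statically derivable bound
```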

deterministic•4w ago
It is unreasonable to expect that a tool built for proving correctness (a very hard problem) should somehow also be able to simulate performance and everything else somebody might need.

A hammer is great for certain things but I don't expect it to make good coffee. I use other tools for that. However, that doesn't make hammers deficient.