Encryption is not enough: Shifting the economics of surveillance

https://drive.proton.me/urls/ZTM6NE0D70#5o2bMEjLUMXx
5•justinjeff•1mo ago

Comments

justinjeff•1mo ago
Modern secure messaging focuses on encrypting content while leaving the act of communication itself observable. In high-risk environments, this metadata observability is the primary vulnerability.

I'm proposing an alternative: Low-Observability Context Synchronization. Instead of transmitting explicit symbols, participants synchronize context through ordinary, non-salient interactions.

The goal is to shift the cost of surveillance from decryption to large-scale semantic inference. It’s a trade-off: we sacrifice scalability for reduced observability. I’d love to hear your thoughts on the economic feasibility of this model.

schoen•1mo ago
There have been some asynchronous secure messenger projects in the past (Pond and Secure Scuttlebutt come to mind). High latency is really important for defeating traffic analysis, but people are so unaccustomed to it now because of all the engineering work that's gone into successfully reducing the latency of almost all of our communication systems. Accepting high-latency messaging as a defense against traffic analysis might involve psychology even more than engineering: cultivating patience.
aebtebeten•1mo ago
How "high" ought high latency to be? Days? Months? Years?
justinjeff•1mo ago
There’s no single “correct” latency. It’s not a fixed parameter but a variable tied to the threat model and the economics of surveillance.

For low-risk, everyday coordination, minutes might be sufficient. For high-value intelligence, latency needs to be long enough to break the temporal correlation between input and outcome.

If monitoring a 24-hour window costs an adversary $X, the goal is to stretch the window until the cost of semantic inference exceeds the value of the information being inferred. Beyond that point, surveillance becomes economically irrational.

In that sense, latency functions like a currency: users “spend” time to buy lower observability. How much they’re willing to spend depends entirely on what they’re protecting and from whom.
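The break-even argument above can be sketched as a toy calculation. This is purely illustrative: the function name and all dollar figures are assumptions, not measured costs from the post.

```python
# Toy model of the "latency as currency" trade-off described above.
# All numbers are illustrative assumptions, not measured costs.

def surveillance_is_rational(window_hours, cost_per_hour, info_value):
    """An adversary must monitor the whole correlation window; surveillance
    pays off only while the cost of semantic inference stays below the
    value of the information being inferred."""
    return window_hours * cost_per_hour < info_value

# Suppose inference costs $50/hour and the secret is worth $5,000.
# A 24-hour correlation window is worth attacking...
print(surveillance_is_rational(24, 50, 5000))       # True: $1,200 < $5,000
# ...but stretching context alignment over two weeks is not.
print(surveillance_is_rational(14 * 24, 50, 5000))  # False: $16,800 > $5,000
```

Spending more latency widens the window the adversary must pay to watch, which is exactly the sense in which users "spend" time to buy lower observability.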

justinjeff•1mo ago
Latency stops being a technical parameter and becomes a side effect of interaction. What matters is not delivery speed, but how meaning accumulates over time.
justinjeff•1mo ago
I think both points connect to the same underlying issue.

High latency isn’t a fixed number (days vs months), and it’s not something users are simply asked to tolerate. It’s a variable tied to both the threat model and how latency is experienced.

From a security perspective, latency only needs to be long enough to break meaningful temporal correlation. Once the cost of inferring “when coordination happened” exceeds the value of that inference, surveillance becomes economically irrational. In that sense, latency is a currency: time is spent to buy lower observability.

From a human perspective, the problem isn’t patience per se, but idle waiting. If latency is experienced as dead time, users reject it. If it’s embedded in ordinary interaction—play, participation, progression—then the wait stops feeling like a delay and starts feeling like part of the system’s normal operation.

So the model isn’t about slow messaging. It’s about replacing explicit message delivery with gradual context alignment. Latency becomes a side effect of interaction, not a parameter users stare at.
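One way to picture "gradual context alignment" is a sketch like the following. This is hypothetical — the post specifies no concrete protocol — but it shows the shape of the idea: no discrete message is ever delivered; both participants fold a stream of ordinary, observable interactions into a running digest, and shared meaning exists only once enough context has accrued.

```python
# Hypothetical sketch of gradual context alignment (no protocol is
# specified in the post; this only illustrates the concept).
import hashlib

def fold(context: bytes, interaction: str) -> bytes:
    """Accumulate one ordinary, non-salient interaction into the context."""
    return hashlib.sha256(context + interaction.encode()).digest()

# Both participants witness the same mundane event stream over days.
events = ["liked photo #311", "joined book club", "moved pawn e2-e4"]

ctx_a = ctx_b = b""
for e in events:
    ctx_a = fold(ctx_a, e)
    ctx_b = fold(ctx_b, e)

# Contexts align without any single "message" an observer could timestamp.
print(ctx_a == ctx_b)  # True
```

An observer sees only unremarkable interactions spread over time; the correlation between any one event and the eventual shared context is what the latency is spent to erode.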

At that point, the limiting factor really is psychological—but less about endurance, and more about whether people can operate without expecting immediacy as a signal of meaning.