frontpage.

Federated Credential Management (FedCM)

https://ciamweekly.substack.com/p/federated-credential-management-fedcm
1•mooreds•42s ago•0 comments

Token-to-Credit Conversion: Avoiding Floating-Point Errors in AI Billing Systems

https://app.writtte.com/read/kZ8Kj6R
1•lasgawe•1m ago•1 comment

The Story of Heroku (2022)

https://leerob.com/heroku
1•tosh•1m ago•0 comments

Obey the Testing Goat

https://www.obeythetestinggoat.com/
1•mkl95•1m ago•0 comments

Claude Opus 4.6 extends LLM Pareto frontier

https://michaelshi.me/pareto/
1•mikeshi42•2m ago•0 comments

Brute Force Colors (2022)

https://arnaud-carre.github.io/2022-12-30-amiga-ham/
1•erickhill•5m ago•0 comments

Google Translate apparently vulnerable to prompt injection

https://www.lesswrong.com/posts/tAh2keDNEEHMXvLvz/prompt-injection-in-google-translate-reveals-ba...
1•julkali•5m ago•0 comments

(Bsky thread) "This turns the maintainer into an unwitting vibe coder"

https://bsky.app/profile/fullmoon.id/post/3meadfaulhk2s
1•todsacerdoti•6m ago•0 comments

Software development is undergoing a Renaissance in front of our eyes

https://twitter.com/gdb/status/2019566641491963946
1•tosh•6m ago•0 comments

Can you beat ensloppification? I made a quiz for Wikipedia's Signs of AI Writing

https://tryward.app/aiquiz
1•bennydog224•8m ago•1 comment

Spec-Driven Design with Kiro: Lessons from Seddle

https://medium.com/@dustin_44710/spec-driven-design-with-kiro-lessons-from-seddle-9320ef18a61f
1•nslog•8m ago•0 comments

Agents need good developer experience too

https://modal.com/blog/agents-devex
1•birdculture•9m ago•0 comments

The Dark Factory

https://twitter.com/i/status/2020161285376082326
1•Ozzie_osman•9m ago•0 comments

Free data transfer out to internet when moving out of AWS (2024)

https://aws.amazon.com/blogs/aws/free-data-transfer-out-to-internet-when-moving-out-of-aws/
1•tosh•10m ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•alwillis•11m ago•0 comments

Prejudice Against Leprosy

https://text.npr.org/g-s1-108321
1•hi41•12m ago•0 comments

Slint: Cross Platform UI Library

https://slint.dev/
1•Palmik•16m ago•0 comments

AI and Education: Generative AI and the Future of Critical Thinking

https://www.youtube.com/watch?v=k7PvscqGD24
1•nyc111•16m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•17m ago•0 comments

Moltbook isn't real but it can still hurt you

https://12gramsofcarbon.com/p/tech-things-moltbook-isnt-real-but
1•theahura•21m ago•0 comments

Take Back the Em Dash–and Your Voice

https://spin.atomicobject.com/take-back-em-dash/
1•ingve•22m ago•0 comments

Show HN: 289x speedup over MLP using Spectral Graphs

https://zenodo.org/login/?next=%2Fme%2Fuploads%3Fq%3D%26f%3Dshared_with_me%25253Afalse%26l%3Dlist...
1•andrespi•22m ago•0 comments

Teaching Mathematics

https://www.karlin.mff.cuni.cz/~spurny/doc/articles/arnold.htm
2•samuel246•25m ago•0 comments

3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
2•downboots•25m ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
2•whack•26m ago•0 comments

Show HN: Routed Attention – 75-99% savings by routing between O(N) and O(N²)

https://zenodo.org/records/18518956
1•MikeBee•26m ago•0 comments

We didn't ask for this internet – Ezra Klein show [video]

https://www.youtube.com/shorts/ve02F0gyfjY
1•softwaredoug•27m ago•0 comments

The Real AI Talent War Is for Plumbers and Electricians

https://www.wired.com/story/why-there-arent-enough-electricians-and-plumbers-to-build-ai-data-cen...
2•geox•29m ago•0 comments

Show HN: MimiClaw, OpenClaw (Clawdbot) on $5 Chips

https://github.com/memovai/mimiclaw
1•ssslvky1•30m ago•0 comments

I Maintain My Blog in the Age of Agents

https://www.jerpint.io/blog/2026-02-07-how-i-maintain-my-blog-in-the-age-of-agents/
3•jerpint•30m ago•0 comments

Show HN: Ayder – HTTP-native durable event log written in C (curl as client)

https://github.com/A1darbek/ayder
56•Aydarbek•3w ago
Hi HN,

I built Ayder — a single-binary, HTTP-native durable event log written in C. The wedge is simple: curl is the client (no JVM, no ZooKeeper, no thick client libs).
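
To make the curl-as-client wedge concrete, here is a minimal sketch of producing and consuming over plain HTTP; the endpoint paths, port, and parameter names are illustrative placeholders rather than the actual API (the quick start in the repo has the real one).

```sh
# Endpoints, port, and topic/partition naming below are hypothetical
# placeholders; see the repo's quick start for the actual API.

# Produce: append an arbitrary payload to a topic/partition.
curl -X POST "http://localhost:8080/topics/orders/partitions/0/records" \
     --data-binary '{"id": 42, "amount": 9.99}'

# Consume: read a batch of records starting at an explicit offset.
curl "http://localhost:8080/topics/orders/partitions/0/records?offset=0&limit=100"
```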

There’s a 2-minute demo that starts with an unclean SIGKILL, then restarts and verifies offsets + data are still there.

Numbers (3-node Raft, real network, sync-majority writes, 64B payload): ~50K msg/s sustained (wrk2 @ 50K req/s), client P99 ~3.46ms. Crash recovery after SIGKILL is ~40–50s with ~8M offsets.
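
For reference, a constant-rate wrk2 invocation of that shape looks roughly like the sketch below; the target URL and the produce.lua script (which would set the POST method and body) are placeholders, not the exact harness.

```sh
# wrk2 holding a fixed 50K req/s arrival rate and reporting latency
# percentiles; the URL and produce.lua are hypothetical placeholders.
wrk -t8 -c256 -d120s -R50000 --latency \
    -s produce.lua \
    "http://node1:8080/topics/bench/partitions/0/records"
```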

Repo link has the video, benchmarks, and quick start. I’m looking for a few early design partners (any event ingestion/streaming workload).

Comments

Aydarbek•3w ago
The demo intentionally starts with SIGKILL to show crash recovery first.

For benchmarks: I used real network (not loopback) and sync-majority writes in a 3-node Raft cluster. Happy to answer questions about tradeoffs vs Kafka / Redis Streams and what’s still missing.

heipei•3w ago
Thank you for sharing; this looks really cool. The simplicity of setting this up and operating it reminds me a lot of nsq, which received a lot less publicity than it should have.
Aydarbek•3w ago
That’s a great comparison; nsq is a project I have a lot of respect for.

I think there’s a similar philosophy around simplicity and operator experience. Where Ayder diverges is in durability and recovery semantics; nsq intentionally trades some of that off to stay lightweight.

The goal here is to keep the “easy to run” feeling, but with stronger guarantees around crash recovery and replication.

tontinton•3w ago
Very cool, have you taken a look into what TigerBeetle does with VSR (and why they chose it instead of raft)?
Aydarbek•3w ago
Yes, I’ve read through TigerBeetle’s VSR design and their rationale for not using Raft.

VSR makes a lot of sense for their problem space: fixed schema, deterministic state machine, and very tight control over replication + execution order.

Ayder has a different set of constraints:
- append-only logs with streaming semantics
- dynamic topics / partitions
- external clients producing arbitrary payloads over HTTP

Raft here is a pragmatic choice: it’s well understood, easier to reason about for operators, and fits the “easy to try, easy to operate” goal of the system.

That said, I think VSR is a great example of what’s possible when you fully own the problem and can specialize aggressively. Definitely a project I’ve learned from.

BrouteMinou•3w ago
That's really interesting; I'm even more eager to get home and check it out.

Thank you for sharing this with us.

Aydarbek•3w ago
Thanks! If you hit any rough edges getting it running, tell me and I’ll fix the docs/scripts.
roywiggins•3w ago
> No manual intervention. No partition reassignment. No ISR drama.

> Numbers are real, not marketing.

I'm not questioning the actual benchmarks or anything, but this README is substantially AI generated, yeah?

Aydarbek•3w ago
Fair question.

The benchmarks, logs, scripts, and recovery scenarios are all real and hand-run; that’s the part I care most about being correct.

For the README text itself: I did iterate on wording and structure (including tooling), but the system, measurements, and tradeoffs are mine.

If any part reads unclear or misleading, I’m very open to tightening it up. Happy to clarify specifics.

tuhgdetzhh•3w ago
If I might ask without meaning to offend: what percentage of the actual code was written by AI?
roywiggins•3w ago
LLM tics like the bits I quoted feel more like marketingspeak by committee than an actual readme written by a human. I don't have any particular suggestions of what to write, but you just don't need to be this punchy in a readme. LLMs love this style though, for some reason.

When I read this type of prose it makes me feel like the author is more worried about trying to sell me something than just describing the project.

For instance, you don't need to tell me the numbers are "real". You just have to show me they're covering real-world use-cases, etc. LLMs love this sort of "telling not showing" where it's constantly saying "this is what I'm going to tell you, this is what I'm telling you, this is what I told you" structure. They do it within sections and then again at higher levels. They have, I think, been overindexed on "five-paragraph essays". They do it way more than most human writers do.

mgaunard•3w ago
Are those performance measurements meant to be impressive? Seems on par with something thrown together with Python in 5 minutes.
dang•3w ago
Please don't be a jerk or put down others' work on HN. That's not the kind of site we're trying to be.

You're welcome to make your substantive points thoughtfully, of course.

https://news.ycombinator.com/newsguidelines.html

https://news.ycombinator.com/showhn.html

mgaunard•3w ago
Pointing out facts is not being a jerk. If you don't want feedback, don't solicit it.

Also if you disapprove, modding down is enough, you don't need to start a meta-discussion thread, which is itself a discouraged practice.

dang•3w ago
It depends on the context. For example, imagine telling a teenager that their face is covered in acne, or (to use an old example of pg's) telling an old person that they will die soon (https://news.ycombinator.com/item?id=6539403). It's not hard to imagine contexts in which pointing out those facts would be being a jerk.

There are infinitely many facts. They don't select themselves—humans do that, and we do it for reasons which are not particularly factual (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...).

> If you don't want feedback, don't solicit it.

If you read the lower part of https://news.ycombinator.com/showhn.html you'll see that the site has specific rules around how to offer feedback.

> you don't need to start a meta-discussion thread, which is itself a discouraged practice

That's true in general. I'm a mod here (sorry if that wasn't clear) and part of my job is to post replies when people are breaking the site guidelines. You're right that such comments are off topic and tediously meta - but it's a form of out-of-band communication that is necessary for keeping the site on-kilter. If it helps at all, these comments are even more tedious to write than they are to read :)

Aydarbek•3w ago
Totally fair: if this were a “single-node HTTP handler on localhost”, then yeah, you can hit big numbers quickly in many stacks.

The point of these numbers is the envelope: 3-node consensus (Raft), real network (not loopback), and sync-majority writes (ACK after 2/3 replicas) plus the crash/recovery semantics (SIGKILL → restart → offsets/data still there).

If you have a quick Python setup that does majority-acked replication + fast crash recovery with similar measurements, I’d honestly love to compare apples-to-apples; happy to share the exact scripts/config and run under the same test conditions.
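
For anyone who wants to reproduce the comparison, the crash/recovery check has roughly this shape; the process name, endpoints, and offsets below are placeholders, not the actual demo scripts:

```sh
# Placeholder commands only; the repo's demo video/scripts show the real flow.

# 1. Kill a node uncleanly mid-ingest (assumes the binary is named "ayder").
kill -9 "$(pgrep -f ayder | head -n 1)"

# 2. Restart the node with the repo's own start command (omitted here),
#    pointing at the same data directory as before.

# 3. Verify that offsets and data survived by re-reading from a known offset.
curl "http://localhost:8080/topics/bench/partitions/0/records?offset=1000&limit=10"
```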

mgaunard•3w ago
Good NICs get data out in a microsecond or two. That's still off by orders of magnitude, but that could be down to the network topology in question.
hedgehog•3w ago
Durable consensus means this is waiting for confirmed write to disk on a majority of nodes, it will always be much slower than the time it takes a NIC to put bits on the wire. That's the price of durability until someone figures out a more efficient way.
mgaunard•3w ago
An NVMe disk write is 20 microseconds.
hedgehog•3w ago
I'm not sure if you're going out of your way to be a dick or just obtuse, but 1) that's not true on most SSDs, 2) there's overhead from all the indirection on a Digital Ocean droplet, and 3) this is obviously a straightforward user-space implementation that's going to have all kinds of scheduler overhead. I'm not sure who it's for, but it seems to make some reasonable trades for simplicity.
mgaunard•3w ago
If it's about making trade-offs for simplicity, then use Kafka.

Poor-quality software with bad performance, but an established piece of tech regardless.

apitman•3w ago
Love seeing this written in C with an organic, grass-fed Makefile. Any details on why you decided to go that route instead of using something with more hype?
eddd-ddde•3w ago
That Makefile could be made even simpler if it used the implicit rules that compile C files into object files!
ghxst•3w ago
If you go HTTP-native, could you leverage Range headers for offsets?
Aydarbek•3w ago
Yes, that maps quite naturally.

Classic HTTP Range is byte-oriented, but custom range units (e.g. `Range: offsets=…`) or using `Link` headers for pagination both fit log semantics well.

I kept the initial API explicit (`offset` / `limit`) to stay obvious for curl users, but offset-range via headers is something I want to experiment with, especially if it helps generic tooling.
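
Roughly, the two styles side by side; the paths are illustrative placeholders and the offsets range unit is an experiment, not something implemented today:

```sh
# Current explicit style: offset/limit as query parameters (paths are placeholders).
curl "http://localhost:8080/topics/orders/partitions/0/records?offset=1000&limit=500"

# Hypothetical custom range unit: HTTP allows non-byte range units, so a server
# could accept this and answer 206 Partial Content with a matching
# Content-Range: offsets 1000-1499/<log-end> header.
curl -H "Range: offsets=1000-1499" \
     "http://localhost:8080/topics/orders/partitions/0/records"
```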

dagss•3w ago
Nice to see HTTP API for consuming events.

I wish there was a standard protocol for consuming event logs, and that all the client side tooling for processing them didn't care what server was there.

I was part of making this:

https://github.com/vippsas/feedapi-spec

https://github.com/vippsas/feedapi-spec/blob/main/SPEC.md

I hope some day there will be a widespread standard that looks something like this.

An ecosystem building on Kafka client libraries with various non-Kafka servers would work fine too, but we didn't figure out how to easily do that.

Aydarbek•3w ago
This resonates a lot.

I’d love a world where “consume an event log” is a standard protocol and client-side tooling doesn’t care which broker is behind it.

Feed API is very close to the mental model I’d want: stable offsets, paging, resumability, and explicit semantics over HTTP. Ayder’s current wedge is keeping the surface area minimal and obvious (curl-first), but long-term I’d much rather converge toward a shared model than invent yet another bespoke API.

If you’re open to it, I’d be very curious what parts of Feed API were hardest to standardize in practice and where you felt the tradeoffs landed in real systems.

dagss•3w ago
I don't have that much to offer... we just implemented it for a few different backends sitting on top of SQL. The concept works (obviously, as there is not much there). The main challenge was getting safe export mechanisms from SQL, i.e. a column in the tables you can safely use as a cursor. The complexity in achieving that was really our only problem.

But because there wasn't any official spec, it was a topic of bikeshedding organizationally. That would have been avoided by having more mature client libs and a spec provided externally.

This spec is a bit complex, but it is complexity that is needed to support a wide range of backend/database technologies. Simpler specs are possible by making more assumptions/hardcoding of how the backend/DB works.

It has been a few years since I worked with this, but reading it again now I still like it in this version. (This spec was the 2nd iteration.)

The partition splitting etc was a nice idea that wasn't actually implemented/needed in the end. I just felt it was important that it was in the protocol at the time.

Aydarbek•3w ago
That makes a lot of sense the hard part isn’t “HTTP paging”, it’s defining a safe cursor (in SQL that becomes “which column is actually stable/monotonic”), and without an external spec/libs it turns into bikeshedding. In Ayder the cursor is an explicit per-partition log offset, so resumability/paging is inherent, which is why Feed API’s mental model resonates a lot. I’d love to see a minimal “event log profile” of that spec someday.