frontpage.

I replaced the front page with AI slop and honestly it's an improvement

https://slop-news.pages.dev/slop-news
1•keepamovin•1m ago•0 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•3m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
1•tosh•9m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
2•oxxoxoxooo•13m ago•1 comment

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•13m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•17m ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

3•throwaw12•18m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•20m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•22m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•25m ago•3 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•26m ago•1 comment

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
3•1vuio0pswjnm7•28m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•29m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•31m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•34m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•39m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•41m ago•1 comment

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•44m ago•1 comment

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•56m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•58m ago•1 comment

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•59m ago•1 comment

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
2•basilikum•1h ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•1h ago•1 comment

Sync vs. async vs. event-driven AI requests: what works in production

https://modelriver.com/how-modelriver-works/event-driven-async
4•akarshc•1w ago

Comments

akarshc•1w ago
I’m one of the builders. Once AI requests moved beyond simple sync calls, we kept running into the same problems in production: retries hiding failures, async flows that were hard to reason about, frontend state drifting, and providers timing out mid-request.

This page breaks down the three request patterns we see teams actually using in production (sync, async, and event-driven async), how data flows in each case, and why we ended up favoring an event-driven approach for interactive, streaming apps.
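
If it helps to see it as code rather than prose, here's a rough caller-side sketch of the three shapes in TypeScript. The endpoints, payloads, and the event channel are invented for illustration and aren't our actual API:

    // Hypothetical caller-side shapes; endpoints and payloads are made up.

    // 1. Sync: one request, one response. Simple, but the connection has to
    //    survive the full generation and a retry restarts from zero.
    async function syncRequest(prompt: string): Promise<string> {
      const res = await fetch("https://example.test/v1/generate", {
        method: "POST",
        body: JSON.stringify({ prompt }),
      });
      if (!res.ok) throw new Error(`generation failed: ${res.status}`);
      return (await res.json()).text;
    }

    // 2. Queue-based async: submit a job, poll until it finishes. Fine for
    //    batch work where nobody is watching partial output.
    async function asyncRequest(prompt: string): Promise<string> {
      const submit = await fetch("https://example.test/v1/jobs", {
        method: "POST",
        body: JSON.stringify({ prompt }),
      });
      const { jobId } = await submit.json();
      while (true) {
        const job = await (await fetch(`https://example.test/v1/jobs/${jobId}`)).json();
        if (job.status === "done") return job.text;
        if (job.status === "failed") throw new Error(job.error);
        await new Promise((r) => setTimeout(r, 1000)); // back off between polls
      }
    }

    // 3. Event-driven async: submit a request, then consume a stream of events
    //    correlated by request id. Partial output, progress, and failures all
    //    arrive as explicit events the frontend can react to.
    async function eventDrivenRequest(prompt: string, onEvent: (e: MessageEvent) => void) {
      const res = await fetch("https://example.test/v1/requests", {
        method: "POST",
        body: JSON.stringify({ prompt }),
      });
      const { requestId } = await res.json();
      const events = new EventSource(`https://example.test/v1/requests/${requestId}/events`);
      events.onmessage = onEvent; // e.g. token, progress, error, done events
      return events;
    }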

Happy to answer questions or go deeper on any part of the architecture.

vishaal_007•1w ago
I’m another founder on this. One thing that surprised us while building AI features was how often the hard problems weren’t about model choice, but about request lifecycle. Once you introduce streaming, retries, and multiple providers, a lot of implicit assumptions in typical request–response code stop holding.

We kept seeing teams reinvent similar patterns in slightly different ways, especially around correlating events, handling partial failures, and keeping the frontend in sync with what actually happened on the backend. The goal with this writeup was to make those tradeoffs explicit and show what’s actually happening on the wire in each approach.
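
To make the correlation point concrete, here's a minimal sketch of the kind of event envelope and frontend reducer we mean (TypeScript; the field names are illustrative, not a spec):

    // Illustrative event envelope, not a spec. The important parts are a stable
    // request id for correlation and a per-request sequence number, so the
    // frontend can tell duplicates from gaps instead of silently drifting.
    type AiEvent =
      | { requestId: string; seq: number; type: "token"; text: string }
      | { requestId: string; seq: number; type: "progress"; stage: string }
      | { requestId: string; seq: number; type: "error"; message: string; retryable: boolean }
      | { requestId: string; seq: number; type: "done" };

    interface RequestView {
      text: string;    // what the user has actually been shown
      lastSeq: number; // last event applied, for duplicate/gap detection
      status: "streaming" | "failed" | "done";
    }

    function applyEvent(view: RequestView, event: AiEvent): RequestView {
      if (event.seq <= view.lastSeq) return view; // duplicate: already applied, ignore
      if (event.seq > view.lastSeq + 1) {
        // A gap means events were missed; surface it rather than guessing.
        return { ...view, status: "failed" };
      }
      switch (event.type) {
        case "token":
          return { ...view, text: view.text + event.text, lastSeq: event.seq };
        case "progress":
          return { ...view, lastSeq: event.seq };
        case "error":
          return { ...view, lastSeq: event.seq, status: "failed" };
        case "done":
          return { ...view, lastSeq: event.seq, status: "done" };
      }
    }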

Curious to hear how others here are handling long-lived or streaming AI requests in production, especially once things start failing in non-obvious ways.

amalv•1w ago
If a team adopts this pattern and later decides to remove ModelRiver, how hard is it to unwind? Are the request and event models close to provider APIs or fairly opinionated?
akarshc•1w ago
This was something we were careful about. The request and event models are intentionally close to what most providers already expose, rather than introducing a completely new abstraction.

Teams usually integrate it incrementally in front of existing calls. If you remove it, you’re mostly deleting the orchestration layer and keeping your provider integrations and client logic. You lose centralized retries and observability, but you’re not stuck rewriting your entire request model.
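
As a rough sketch of what that adoption path looks like (TypeScript; this illustrates the idea only, it is not ModelRiver's actual interface, and the provider endpoint is a stand-in):

    // Illustration of the adoption path, not ModelRiver's actual interface.
    // The point is that the request shape stays close to the provider call,
    // so the wrapper can be added or removed without rewriting callers.

    interface ChatRequest {
      model: string;
      messages: { role: string; content: string }[];
    }

    // Existing provider integration (stand-in endpoint).
    async function callProvider(req: ChatRequest): Promise<unknown> {
      const res = await fetch("https://api.provider.example/v1/chat/completions", {
        method: "POST",
        headers: { Authorization: `Bearer ${process.env.PROVIDER_KEY}` },
        body: JSON.stringify(req),
      });
      if (!res.ok) throw new Error(`provider error: ${res.status}`);
      return res.json();
    }

    // Hypothetical orchestration layer: same request shape, plus retries and
    // request-scoped logging. Removing it means deleting this function and
    // calling callProvider directly again; the integration underneath survives.
    async function callWithOrchestration(req: ChatRequest, maxRetries = 2): Promise<unknown> {
      for (let attempt = 0; ; attempt++) {
        try {
          return await callProvider(req);
        } catch (err) {
          if (attempt >= maxRetries) throw err;
          console.warn(`attempt ${attempt + 1} failed, retrying`, err);
        }
      }
    }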

If adopting it requires a full rewrite, that’s usually a sign it’s being applied too broadly.

aparnavalsan43•1w ago
In practice, where does the event-driven approach break down? What kinds of workloads still fit better with simple sync or queue-based async?
vishaal_007•1w ago
In practice, event-driven starts to feel like overkill when requests are short-lived and failures are cheap. If a call is fast, idempotent, and the user isn’t waiting on partial output, a simple sync request is usually the clearest solution.

Queue-based async still works well for batch jobs, offline processing, or anything where latency and ordering aren’t user-visible. The event-driven approach mainly pays off once you have long-lived or interactive requests where failures can happen mid-response and you care about what the user actually sees.

aparnavalsan43•1w ago
That makes sense. How do you decide early on which requests are likely to “grow into” needing an event-driven approach, versus staying simple sync or queue-based long term?
vishaal_007•1w ago
In our experience, it usually comes down to whether the request has user-visible state over time. If the response is something you can treat as atomic and either succeed or fail cleanly, it tends to stay simple.

The requests that “grow” tend to share a few signals early on: they stream partial results, they take long enough that the frontend needs progress updates, or failures start happening after something has already been shown to the user. Another common signal is when retries stop being transparent and you start needing to explain to users what actually happened.

Once those patterns show up, teams usually end up reworking the flow anyway. The event-driven approach just makes that lifecycle explicit earlier, instead of letting it emerge implicitly and painfully over time.

GopikaDilip•1w ago
How do you reason about retries and correctness once a stream has already started? For example, how do you avoid duplicated or missing tokens if a provider fails mid-stream?
akarshc•1w ago
This is one of the harder problems, and there isn’t a perfect answer.

The main thing we try to avoid is pretending mid-stream retries are the same as pre-request retries. Once a stream has started, we treat it as a sequence of events with checkpoints rather than a single opaque response. Retries are scoped to known safe boundaries, and anything ambiguous is surfaced explicitly instead of silently re-emitting tokens.

In other words, correctness is prioritized over pretending the stream is seamless. If we can’t guarantee no duplication, we make that visible rather than hide it.
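
To make "retries scoped to known safe boundaries" slightly more concrete, here's a simplified sketch of the idea in TypeScript. The checkpoint granularity, the resumable-provider assumption, and the names are illustrative rather than our exact implementation:

    // Simplified sketch of checkpointed streaming with scoped retries. Names,
    // types, and the resumable-provider assumption are illustrative.

    type StreamOutcome =
      | { kind: "completed"; text: string }
      | { kind: "ambiguous"; delivered: string; reason: string }; // surfaced, never hidden

    async function streamWithCheckpoints(
      // Assumed: the provider stream can be restarted from a character offset.
      // Many providers can't, which is exactly when mid-stream retries are unsafe.
      startFrom: (offset: number) => AsyncIterable<string>,
      // Delivery to the client; a resolved promise counts as "the user saw it".
      emit: (chunk: string) => Promise<void>,
      maxAttempts = 3,
    ): Promise<StreamOutcome> {
      let delivered = ""; // checkpoint: everything up to here is acknowledged by the client
      let lastError: unknown;

      for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
          // Resume strictly after the checkpoint so nothing is generated twice.
          for await (const chunk of startFrom(delivered.length)) {
            try {
              await emit(chunk);
            } catch (err) {
              // Delivery failed after output was produced: we can't know what the
              // client saw, so surface the ambiguity instead of re-emitting tokens.
              return { kind: "ambiguous", delivered, reason: `delivery failed: ${err}` };
            }
            delivered += chunk; // advance the checkpoint only after acknowledged delivery
          }
          return { kind: "completed", text: delivered };
        } catch (err) {
          // The provider failed mid-stream: retry, scoped to the checkpoint.
          lastError = err;
        }
      }
      return {
        kind: "ambiguous",
        delivered,
        reason: `provider kept failing mid-stream: ${lastError}`,
      };
    }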