
From Offloading to Engagement (Study on Generative AI)

https://www.mdpi.com/2306-5729/10/11/172
1•boshomi•1m ago•1 comments

AI for People

https://justsitandgrin.im/posts/ai-for-people/
1•dive•2m ago•0 comments

Rome is studded with cannon balls (2022)

https://essenceofrome.com/rome-is-studded-with-cannon-balls
1•thomassmith65•7m ago•0 comments

8-piece tablebase development on Lichess (op1 partial)

https://lichess.org/@/Lichess/blog/op1-partial-8-piece-tablebase-available/1ptPBDpC
2•somethingp•9m ago•0 comments

US to bankroll far-right think tanks in Europe against digital laws

https://www.brusselstimes.com/1957195/us-to-fund-far-right-forces-in-europe-tbtb
2•saubeidl•10m ago•0 comments

Ask HN: Have AI companies replaced their own SaaS usage with agents?

1•tuxpenguine•13m ago•0 comments

pi-nes

https://twitter.com/thomasmustier/status/2018362041506132205
1•tosh•15m ago•0 comments

Show HN: Crew – Multi-agent orchestration tool for AI-assisted development

https://github.com/garnetliu/crew
1•gl2334•15m ago•0 comments

New hire fixed a problem so fast, their boss left to become a yoga instructor

https://www.theregister.com/2026/02/06/on_call/
1•Brajeshwar•17m ago•0 comments

Four horsemen of the AI-pocalypse line up capex bigger than Israel's GDP

https://www.theregister.com/2026/02/06/ai_capex_plans/
1•Brajeshwar•17m ago•0 comments

A free Dynamic QR Code generator (no expiring links)

https://free-dynamic-qr-generator.com/
1•nookeshkarri7•18m ago•1 comments

nextTick but for React.js

https://suhaotian.github.io/use-next-tick/
1•jeremy_su•19m ago•0 comments

Show HN: I Built an AI-Powered Pull Request Review Tool

https://github.com/HighGarden-Studio/HighReview
1•highgarden•20m ago•0 comments

Git-am applies commit message diffs

https://lore.kernel.org/git/bcqvh7ahjjgzpgxwnr4kh3hfkksfruf54refyry3ha7qk7dldf@fij5calmscvm/
1•rkta•22m ago•0 comments

ClawEmail: 1min setup for OpenClaw agents with Gmail, Docs

https://clawemail.com
1•aleks5678•29m ago•1 comments

UnAutomating the Economy: More Labor but at What Cost?

https://www.greshm.org/blog/unautomating-the-economy/
1•Suncho•36m ago•1 comments

Show HN: Gettorr – Stream magnet links in the browser via WebRTC (no install)

https://gettorr.com/
1•BenaouidateMed•37m ago•0 comments

Statin drugs safer than previously thought

https://www.semafor.com/article/02/06/2026/statin-drugs-safer-than-previously-thought
1•stareatgoats•39m ago•0 comments

Handy when you just want to distract yourself for a moment

https://d6.h5go.life/
1•TrendSpotterPro•40m ago•0 comments

More States Are Taking Aim at a Controversial Early Reading Method

https://www.edweek.org/teaching-learning/more-states-are-taking-aim-at-a-controversial-early-read...
2•lelanthran•42m ago•0 comments

AI will not save developer productivity

https://www.infoworld.com/article/4125409/ai-will-not-save-developer-productivity.html
1•indentit•47m ago•0 comments

How I do and don't use agents

https://twitter.com/jessfraz/status/2019975917863661760
1•tosh•53m ago•0 comments

BTDUex Safe? The Back End Withdrawal Anomalies

1•aoijfoqfw•55m ago•0 comments

Show HN: Compile-Time Vibe Coding

https://github.com/Michael-JB/vibecode
7•michaelchicory•58m ago•1 comments

Show HN: Ensemble – macOS App to Manage Claude Code Skills, MCPs, and Claude.md

https://github.com/O0000-code/Ensemble
1•IO0oI•1h ago•1 comments

PR to support XMPP channels in OpenClaw

https://github.com/openclaw/openclaw/pull/9741
1•mickael•1h ago•0 comments

Twenty: A Modern Alternative to Salesforce

https://github.com/twentyhq/twenty
1•tosh•1h ago•0 comments

Raspberry Pi: More memory-driven price rises

https://www.raspberrypi.com/news/more-memory-driven-price-rises/
2•calcifer•1h ago•0 comments

Level Up Your Gaming

https://d4.h5go.life/
1•LinkLens•1h ago•1 comments

Di.day is a movement to encourage people to ditch Big Tech

https://itsfoss.com/news/di-day-celebration/
4•MilnerRoute•1h ago•0 comments

Ask HN: Are we pretending RAG is ready, when it's barely out of demo phase?

11•TXTOS•6mo ago
Been watching the RAG (Retrieval-Augmented Generation) wave crash into production for over a year now.

But something keeps bugging me: Most setups still feel like glorified notebooks stitched together with hope and vector search.

Yeah, it "works" — until you actually need it to. Suddenly: irrelevant chunks, hallucinations, shallow query rewriting, no memory loop, and a retrieval stack that breaks if you breathe on it wrong.

We’ve got:

• pipelines that don’t align with what users actually want to ask,
• retrieval that acts more like a search engine than a reasoning aid,
• brittle evals (because "correct context" ≠ "correct answer"),
• and no one’s sure where grounding ends and illusion begins.

Sure, you can make it work — if you’re okay duct-taping every component and babysitting the system 24/7.

So I gotta ask: Is RAG just stuck in prototype land pretending to be production? Or has someone here actually built a setup that survives user chaos and edge cases?

Would love to hear what’s worked, what hasn't, and what you had to throw away.

Not pushing anything, just been knee-deep in this and looking to sanity check with folks who’ve actually shipped stuff.

Comments

kingkongjaffa•6mo ago
We have a RAG powered product in production right now used by thousands of users.

RAG is part of the solution: it provides the required style, formatting, and subject-matter idiosyncrasies of the domain.

But (prompt + RAG query on that prompt) alone isn't enough. We have a handwritten series of prompts, so the user input is just one step in a branching decision tree that decides which prompts to apply, both in sequence (prompt 1's output becomes prompt 2's input) and in composition (for example, combining prompts 3 and 5 but not prompt 4).
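
A minimal sketch of that kind of branching prompt tree, with sequencing and composition as described above (the prompt templates, the branching condition, and the call_llm helper are illustrative placeholders, not the commenter's actual code):

    from typing import Callable

    # Hypothetical prompt library; the real prompts are domain-specific.
    PROMPTS = {
        1: "Rewrite the user's request as a precise question: {input}",
        2: "Answer using only the retrieved context:\n{context}\n\nQuestion: {input}",
        3: "Rewrite in the domain's house style: {input}",
        4: "Expand with background detail: {input}",
        5: "Format as a client-facing report: {input}",
    }

    def call_llm(prompt: str) -> str:
        raise NotImplementedError  # plug in your model client here

    def run_chain(user_input: str, retrieve: Callable[[str], str]) -> str:
        # Sequence: prompt 1's output becomes prompt 2's input.
        question = call_llm(PROMPTS[1].format(input=user_input))
        context = retrieve(question)  # the RAG query on that prompt
        draft = call_llm(PROMPTS[2].format(context=context, input=question))

        # Composition: branch on the request to decide which prompts to combine,
        # e.g. 3 + 5 (but not 4) for short, report-style requests.
        if len(user_input.split()) < 30:
            draft = call_llm(PROMPTS[3].format(input=draft))
            draft = call_llm(PROMPTS[5].format(input=draft))
        else:
            draft = call_llm(PROMPTS[4].format(input=draft))
        return draft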

TXTOS•6mo ago
Totally agree, RAG by itself isn’t enough — especially when users don’t follow the script.

We’ve seen similar pain: one-shot retrieval works great in perfect lab settings, then collapses once you let in real humans asking weird follow-ups like “do that again but with grandma’s style”, and suddenly your context window looks like a Salvador Dalí painting.

That branching tree approach you mentioned — composing prompt→prompt→query in a structured cascade — is underrated genius. We ended up building something similar, but layered a semantic engine on top to decide which prompt chain deserves to exist in that moment, not just statically prewiring them.
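
A rough sketch of that "decide which prompt chain deserves to exist" layer: embed the incoming prompt, compare it against short descriptions of the available chains, and only build the winner. The chain registry, descriptions, and model choice below are assumptions for illustration, not the actual engine:

    # Hypothetical semantic router: pick a prompt chain by embedding similarity.
    from sentence_transformers import SentenceTransformer, util

    CHAINS = {
        "rewrite_then_answer": "questions that need retrieval over the knowledge base",
        "style_transfer": "requests to redo a previous answer in a different tone or style",
        "clarify_first": "vague or underspecified requests that need a follow-up question",
    }

    model = SentenceTransformer("all-MiniLM-L6-v2")
    chain_names = list(CHAINS)
    chain_vecs = model.encode(list(CHAINS.values()), convert_to_tensor=True)

    def route(user_prompt: str) -> str:
        """Return the name of the chain whose description best matches the prompt."""
        query_vec = model.encode(user_prompt, convert_to_tensor=True)
        scores = util.cos_sim(query_vec, chain_vecs)[0]
        return chain_names[int(scores.argmax())]

    # route("do that again but with grandma's style") -> likely "style_transfer"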

It’s duct tape + divination right now. But hey — the thing kinda works.

Appreciate your battle-tested insight — makes me feel slightly less insane.

mikert89•6mo ago
what planet are you on? RAG has been working everywhere for a while

TXTOS•6mo ago
haha fair — guess I’ve just been on the planet where the moment someone asks a follow-up like “can you explain that in simpler terms?”, the whole RAG stack folds like a house of cards.

if it’s been smooth for you, that’s awesome. I’ve just been chasing edge cases where users go off-script, or where prompt alignment + retrieval break in weird semantic corners.

so yeah, maybe it’s a timezone thing

moomoo11•6mo ago
You’re better off using OS/ES and using AI to help you craft the exact query to run on an index.
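
A minimal sketch of that pattern, assuming OS/ES (and the "Q/S" in the reply below) means OpenSearch/Elasticsearch: the model translates the user's question into query DSL, which is then run against the index. The index name, fields, prompt, and call_llm helper are all illustrative, not the commenter's setup:

    # Hypothetical "AI crafts the exact query" flow against an Elasticsearch-style index.
    import json
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    QUERY_PROMPT = """You translate user questions into Elasticsearch query DSL.
    Index fields: author, body, entity_type, created_at.
    Return only the JSON object that goes under the "query" key.
    Question: {question}"""

    def call_llm(prompt: str) -> str:
        raise NotImplementedError  # plug in your model client here

    def search(question: str):
        dsl = json.loads(call_llm(QUERY_PROMPT.format(question=question)))
        return es.search(index="comments", query=dsl)
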
TXTOS•6mo ago
agree — I’ve used Q/S with AI-assisted query shaping too, especially when domain vocab gets wild. the part I kept bumping into was: even with perfect-looking queries, the retrieved context still lacked semantic intent alignment.

so I started layering reasoning before retrieval — like a semantic router that decides not just what to fetch, but why that logic path even makes sense for this user prompt.

different stack, same headache. appreciate your insight though — it’s a solid route when retrieval infra is strong.

moomoo11•6mo ago
What sort of data are you working with?

In my case, users would be searching through either custom-defined data models (I have custom forms and stuff), a comment on a Task, or various other data attached to common entities.

For example, "When did Mark say that the field team ran into an issue with the groundwater swelling?"

That would return the Comment, tied to the Task.

In my system there are discussions and comments (common) tied to every data entity (and I'm using a graph DB, which makes things exponentially simpler). I index all of these anyway for OS, so the AI is able to construct the query to find this. So I can go from comment -> top-level entity or vice versa.
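
For the "When did Mark say..." example, the AI-constructed query might take roughly this shape (the field names and the comment-to-Task hop are guesses at such a schema, not the actual system):

    # Hypothetical shape of the generated query; field names are illustrative.
    generated_query = {
        "bool": {
            "must": [
                {"match": {"author": "Mark"}},
                {"match": {"body": "field team issue groundwater swelling"}},
            ],
            "filter": [{"term": {"entity_type": "comment"}}],
        }
    }
    # The top hit is the Comment; a second lookup along the graph edge
    # (comment -> parent Task) resolves the top-level entity, and the comment's
    # created_at field answers the "when" part of the question.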

I spent maybe 60-100 hours writing dozens, maybe a hundred, tests to get the prompt right, taking it from 95% success to 100%. In over two months it hasn't failed yet.

Sorry, I should mention maybe our use-cases are different. I am basically building an audit log.

TXTOS•6mo ago
ah yeah that makes sense — sounds like you're indexing for traceability first, which honestly makes your graph setup way more stable than most RAG stacks I’ve seen.

I’m more on the side of: “why is this even the logic path the system thinks makes sense for the user’s intent?” — like, how did we get from prompt to retrieval to that hallucination?

So I stopped treating retrieval as the answer. It’s just an echo. I started routing logic first — like a pre-retrieval dialectic, if you will. No index can help you if the question shouldn’t even be a question yet.

Your setup sounds tight though — we’re just solving different headaches. I’m more in the “why did the LLM go crazy” clinic. You’re in the “make the query land” ward.

Either way, I love that you built a graph audit log that hasn’t failed in two months. That's probably more production-ready than 90% of what people call “RAG solutions” now.

moomoo11•6mo ago
Thanks :)

And really cool stuff you’re doing too. Honestly I have not spent as much time as maybe I should really diving into all the LLM tooling and stuff like you have.

Good luck!!

TXTOS•6mo ago
hey — really appreciate that. honestly I’m still duct-taping this whole system together half the time, but glad it’s useful enough to sound like “tooling”

I think the whole LLM space is still missing a core idea: that logic routing before retrieval might be more important than retrieval itself. when the LLM “hallucinates,” it’s not always because it lacked facts — sometimes it just followed a bad question.

but yeah — if any part of this helps or sparks new stuff, then we’re already winning. appreciate the good vibes, and good luck on your own build too