
It's time for the world to boycott the US

https://www.aljazeera.com/opinions/2026/2/5/its-time-for-the-world-to-boycott-the-us
1•HotGarbage•23s ago•0 comments

Show HN: Semantic Search for terminal commands in the Browser (No Back end)

https://jslambda.github.io/tldr-vsearch/
1•jslambda•27s ago•0 comments

The AI CEO Experiment

https://yukicapital.com/blog/the-ai-ceo-experiment/
2•romainsimon•2m ago•0 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
2•surprisetalk•5m ago•0 comments

MS-DOS game copy protection and cracks

https://www.dosdays.co.uk/topics/game_cracks.php
2•TheCraiggers•6m ago•0 comments

Updates on GNU/Hurd progress [video]

https://fosdem.org/2026/schedule/event/7FZXHF-updates_on_gnuhurd_progress_rump_drivers_64bit_smp_...
2•birdculture•7m ago•0 comments

Epstein took a photo of his 2015 dinner with Zuckerberg and Musk

https://xcancel.com/search?f=tweets&q=davenewworld_2%2Fstatus%2F2020128223850316274
5•doener•7m ago•2 comments

MyFlames: Visualize MySQL query execution plans as interactive FlameGraphs

https://github.com/vgrippa/myflames
1•tanelpoder•9m ago•0 comments

Show HN: LLM of Babel

https://clairefro.github.io/llm-of-babel/
1•marjipan200•9m ago•0 comments

A modern iperf3 alternative with a live TUI, multi-client server, QUIC support

https://github.com/lance0/xfr
3•tanelpoder•10m ago•0 comments

Famfamfam Silk icons – also with CSS spritesheet

https://github.com/legacy-icons/famfamfam-silk
1•thunderbong•10m ago•0 comments

Apple is the only Big Tech company whose capex declined last quarter

https://sherwood.news/tech/apple-is-the-only-big-tech-company-whose-capex-declined-last-quarter/
2•elsewhen•14m ago•0 comments

Reverse-Engineering Raiders of the Lost Ark for the Atari 2600

https://github.com/joshuanwalker/Raiders2600
2•todsacerdoti•15m ago•0 comments

Show HN: Deterministic NDJSON audit logs – v1.2 update (structural gaps)

https://github.com/yupme-bot/kernel-ndjson-proofs
1•Slaine•19m ago•0 comments

The Greater Copenhagen Region could be your friend's next career move

https://www.greatercphregion.com/friend-recruiter-program
2•mooreds•19m ago•0 comments

Do Not Confirm – Fiction by OpenClaw

https://thedailymolt.substack.com/p/do-not-confirm
1•jamesjyu•20m ago•0 comments

The Analytical Profile of Peas

https://www.fossanalytics.com/en/news-articles/more-industries/the-analytical-profile-of-peas
1•mooreds•20m ago•0 comments

Hallucinations in GPT5 – Can models say "I don't know" (June 2025)

https://jobswithgpt.com/blog/llm-eval-hallucinations-t20-cricket/
1•sp1982•20m ago•0 comments

What AI is good for, according to developers

https://github.blog/ai-and-ml/generative-ai/what-ai-is-actually-good-for-according-to-developers/
1•mooreds•20m ago•0 comments

OpenAI might pivot to the "most addictive digital friend" or face extinction

https://twitter.com/lebed2045/status/2020184853271167186
1•lebed2045•21m ago•2 comments

Show HN: Know how your SaaS is doing in 30 seconds

https://anypanel.io
1•dasfelix•22m ago•0 comments

ClawdBot Ordered Me Lunch

https://nickalexander.org/drafts/auto-sandwich.html
3•nick007•23m ago•0 comments

What the News media thinks about your Indian stock investments

https://stocktrends.numerical.works/
1•mindaslab•24m ago•0 comments

Running Lua on a tiny console from 2001

https://ivie.codes/page/pokemon-mini-lua
1•Charmunk•24m ago•0 comments

Google and Microsoft Paying Creators $500K+ to Promote AI Tools

https://www.cnbc.com/2026/02/06/google-microsoft-pay-creators-500000-and-more-to-promote-ai.html
3•belter•26m ago•0 comments

New filtration technology could be game-changer in removal of PFAS

https://www.theguardian.com/environment/2026/jan/23/pfas-forever-chemicals-filtration
1•PaulHoule•27m ago•0 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
2•momciloo•28m ago•0 comments

Kinda Surprised by Seadance2's Moderation

https://seedanceai.me/
1•ri-vai•28m ago•2 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
2•valyala•28m ago•1 comments

Django scales. Stop blaming the framework (part 1 of 3)

https://medium.com/@tk512/django-scales-stop-blaming-the-framework-part-1-of-3-a2b5b0ff811f
2•sgt•29m ago•0 comments

Show HN: I made it fast and easy to launch your own RAG-powered AI chatbots

https://www.chatrag.ai
2•carlos_marcial•2mo ago

Comments

carlos_marcial•2mo ago
I built the tech stack behind ChatRAG to serve the growing number of clients who, starting about a year ago, came to me needing Retrieval-Augmented Generation (RAG) powered chatbots.

After a lot of trial and error, I settled on this tech stack for ChatRAG:

Frontend

- Next.js 16 (App Router): Latest React framework with server components and streaming

- React 19 + React Compiler: Automatic memoization, no more useMemo/useCallback hell

- Zustand: Lightweight state management (3kb vs Redux bloat)

- Tailwind CSS + Framer Motion: Styling + buttery animations

- Chat widget: Embed a widget version of your RAG chatbot on any web page, in addition to the ChatGPT/Claude-style web UI

AI / LLM Layer

- Vercel AI SDK 5 – Unified streaming interface for all providers

- OpenRouter – Single API for Claude, GPT-4, DeepSeek, Gemini, etc.

- MCP (Model Context Protocol) – Tool use and function calling across models

RAG Pipeline

- Text chunking → documents split for optimal retrieval

- OpenAI embeddings (1536 dim vectors) – Semantic search representation

- pgvector with HNSW indexes – Fast approximate nearest neighbor search directly in Postgres
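
The chunking step above can be sketched as a simple overlapping splitter (illustrative only; the chunk size, overlap, and character-based splitting are assumptions, not ChatRAG's actual parameters):

```python
def chunk_text(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks so retrieval never cuts a passage
    mid-thought. Each chunk would then be embedded (e.g. into a 1536-dim
    vector) and stored in pgvector."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step forward, keeping `overlap` chars of context
    return chunks
```

The overlap means a sentence that straddles a chunk boundary still appears whole in at least one chunk, which tends to improve retrieval quality.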

Database & Auth

- Supabase (PostgreSQL) – Database, auth, realtime, storage in one

- GitHub & Google OAuth via Supabase – Third party sign in providers managed by Supabase

- Row Level Security – Multi-tenant data isolation at the DB level

Multi-Modal Generation

- Use Fal.ai or Replicate.ai API keys for generating image, video, and 3D assets inside your RAG chatbot

Integrations

- WhatsApp via Baileys – Chat with your RAG from WhatsApp

- Stripe / Polar – Payments and subscriptions

Infra

- Fly.io / Koyeb – Edge deployment for WhatsApp workers

- Vercel – Frontend hosting with edge functions

My special sauce: pgvector HNSW indexes (m=64, ef_construction=200) give you sub-100ms semantic search without leaving Postgres. No Pinecone/Weaviate vendor lock-in.
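
For readers unfamiliar with pgvector: its cosine-distance operator (`<=>`) computes the quantity below per vector pair, and the HNSW index only makes finding the smallest distances fast. The pure-Python version and the commented DDL are illustrative; only the index parameters (m=64, ef_construction=200) come from the comment above.

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """What pgvector's `<=>` operator computes: 1 minus cosine similarity.
    0.0 means identical direction, 1.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

# A plausible shape for the index DDL (table/column names are hypothetical,
# not ChatRAG's schema):
# CREATE INDEX ON chunks USING hnsw (embedding vector_cosine_ops)
#   WITH (m = 64, ef_construction = 200);
```

Higher `m` and `ef_construction` trade index build time and memory for better recall at query time.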

Single-tenant vs Multi-tenant RAG setups: Why not both?

ChatRAG supports both deployment modes depending on your use case:

Single-tenant

- One knowledge base → many users

- Ideal for celebrity/expert AI clones or brand-specific agents

- e.g., "Tony Robbins AI chatbot" or "Deepak Chopra AI"

- All users interact with the same dataset and the same personality layer

Multi-tenant

- Users have workspace/project isolation — each with its own knowledge base, project-based system prompt and settings

- Perfect for SaaS products or platform builders that want to offer AI chatbots to their customers

- Every customer gets private data and their own RAG

My long term vision is to keep evolving ChatRAG so I can eventually release a fully open-source version for everyone to build with.

cluckindan•2mo ago
How does it handle context curation in long conversations? Does it detect user intent for each message to see if it can act purely on existing context, or if it needs to do another retrieval and/or purge/summarize previous discussion and context?
carlos_marcial•2mo ago
Great questions! Here's how ChatRAG handles context curation:

1. Intent Detection Per Message

Every message is classified to decide the retrieval strategy:

  - conversational (greetings, small talk) → skip RAG entirely
  - document_search (explicit knowledge queries) → always retrieve
  - complex/exploratory → adaptive retrieval
  - tool_required → route to MCP tools instead
This prevents unnecessary retrieval on follow-ups like "thanks" or "can you explain that differently?"
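
The routing described above could be sketched with keyword rules like the following. This is a toy stand-in: the real classifier presumably uses an LLM or a trained model, and these phrases and thresholds are illustrative guesses.

```python
def classify_intent(message: str) -> str:
    """Route each message to a retrieval strategy (illustrative heuristic)."""
    m = message.lower().strip()
    if any(p in m for p in ("thanks", "hello", "hi there", "explain that differently")):
        return "conversational"   # small talk / follow-up -> skip RAG entirely
    if any(t in m for t in ("schedule", "email", "calendar")):
        return "tool_required"    # route to MCP tools instead
    if m.endswith("?") and len(m.split()) > 12:
        return "complex"          # long open question -> adaptive retrieval
    return "document_search"      # explicit knowledge query -> always retrieve
```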

2. Adaptive Retrieval

When retrieval is needed, the system adjusts based on query characteristics:

  - Similarity thresholds: 0.45 for general queries → 0.75 for exact data requests
  - Result limits: 15 for specific queries → 30 for broad exploration
  - Strategy: semantic-only vs. hybrid vs. temporal-boosted

  "What was Q3 2024 revenue?" triggers exact_match mode at 0.75 threshold, while "Tell me about the company" uses standard semantic search at 0.45.
3. Conversation History

Full conversation history is passed to the model without truncation—modern LLMs have 100K-200K token windows, and RAG context is freshly curated each turn rather than accumulated.

4. Not Implemented (Yet)

  - Conversation summarization/compression
  - Explicit context window management
  - "Forget previous context" mechanisms
These would matter at scale or for very long sessions. That said, ChatRAG is a fully customizable boilerplate, and the idea is for developers to keep building on top of it, as I will too.
cluckindan•2mo ago
Did you just copy-paste this answer from an LLM? It sounds exactly like a generative model, and there are obvious issues with the response.

For example, "RAG context is freshly curated each turn rather than accumulated" is nonsensical: it implies that context is curated instead of replaced, but also that the previous context is forgotten and replaced, and neither means that full conversation history (including context) is passed without truncation.

Also, no mention of tool-based retrieval. I have a hunch this is not a real product, just AI slop. Prove me wrong.

carlos_marcial•2mo ago
Yes, of course I used an LLM for the response, but not just any LLM: I used an instance of ChatRAG that has all the ChatRAG documentation built into it. I’m a solo developer trying to build software, do the marketing, handle customer support, and also be a dad and a husband. Of course I’m going to use my own product to help me out. This is precisely why I know ChatRAG is good: I use it myself daily.

That said, let me clarify the points you raised. On "freshly curated each turn rather than accumulated": this is actually the correct behavior, though I can see how the phrasing was confusing. The key is that conversation history and RAG context are two different things. The full conversation history (all previous user and assistant messages) is always passed to the model without truncation. The retrieved document chunks are a separate layer: they are re-queried based on the current message and injected into the system prompt fresh each turn. Previous RAG results don't accumulate; they are replaced with whatever is most relevant to the current query.

Why this design? Say you ask about "Q3 revenue" and then follow up with "What about employee benefits?" If the system accumulated chunks, you'd have Q3 revenue data polluting the context when you're now asking about something completely different. Fresh retrieval keeps the context relevant to what you're actually asking about right now.

On tool-based retrieval: the system does have this through MCP integration. Query classification detects when tools are needed (Gmail, Calendar, Drive, Zapier, etc.) and routes accordingly. The model receives both the RAG context and access to tools, so it can use document knowledge and execute external actions within the same response when needed.

Let me know if you have any other questions. I’ll be more than happy to answer, with the help of my ChatRAG instance, any other doubts you may have!