
I got tired of hand-syncing AI coding rules across four tools

https://github.com/sampleXbro/agentsmesh
1•samplexBro•1m ago•0 comments

Show HN: I built a way to see if your SDK is AI-friendly

1•nguyenhu•6m ago•0 comments

Building a Threadiverse Community Platform

https://fedify.dev/tutorial/threadiverse
1•dahlia•7m ago•0 comments

Australia threatens tech companies with 2.25% tax if they don't pay publishers

https://www.theregister.com/2026/04/28/australia_news_bargaining_incentive/
2•defrost•12m ago•1 comment

How Do Perpetual Futures Differ from Spot Trading in Crypto?

https://www.bitdeal.net/cryptocurrency-exchange-development
1•harrisonrichrd•17m ago•0 comments

Meta prepares to undo acquisition of Singapore-based Manus after China ban

https://www.businesstimes.com.sg/international/global/meta-prepares-undo-acquisition-singapore-ba...
2•doppp•18m ago•0 comments

Freelancer for hire – full stack, ML, DevOps

1•Hopfield•19m ago•0 comments

Talos OS images are now bit-by-bit reproducible

https://github.com/siderolabs/talos/releases/tag/v1.13.0
1•matesz•21m ago•0 comments

How I Use AI in 2026

https://fedepaol.github.io/blog/2026/04/25/how-i-use-ai-in-2026/
1•fedepaol•22m ago•0 comments

Come From

https://wiki.c2.com/?ComeFrom
1•pramodbiligiri•23m ago•0 comments

Steal Claude Code Architecture

https://teamcal.ai/blog/claude-code-architecture
1•rajl•26m ago•0 comments

How to build advanced features for AI chatbots on SSE

https://zknill.io/posts/everyone-said-sse-token-streaming-was-easy/
1•zknill•30m ago•0 comments

Show HN: VibeBrowser – Give your AI agent your real logged-in browser via MCP

https://www.vibebrowser.app/mcp
1•denis4inet•30m ago•0 comments

Show HN: Financial Database API for Vibe Coders

https://xfinlink.com
1•lyonghee97•38m ago•1 comment

Hotta GameDriverX64.sys shipping in Neverness to Everness preload

https://github.com/LaggyTMD/nte-driver-analysis
1•LaggyTMD•39m ago•0 comments

Anthropic Claude Code HERMES.md billing flaw

https://consumerrights.wiki/w/Anthropic_Claude_Code_HERMES.md_billing_flaw
1•Palmik•40m ago•0 comments

Scraping 241 UK council planning portals – 2.6M decisions so far

28•mebkorea•45m ago•30 comments

Show HN: BeVisible.app - Blog that runs itself

https://www.bevisible.app
2•evanyang•48m ago•0 comments

Xiaomi MiMo Orbit: 100T Token Grant for Builders

https://100t.xiaomimimo.com/
1•whtsky•49m ago•0 comments

SwiftBash: Pure-Swift, sandboxed bash interpreter

https://github.com/cocoanetics/swiftbash
2•ingve•49m ago•0 comments

Text Is the New Binary

https://andreabaccega.com/blog/text-is-the-new-binary/
2•veke87•52m ago•0 comments

Bugs in the original 1977 Cave Adventure Fortran source

https://colossalcave.cc/bugs.php
2•ultra-nick•55m ago•1 comment

A case report of someone who self-managed Fatal Familial Insomnia

https://pmc.ncbi.nlm.nih.gov/articles/PMC1781276/
1•abinaryquibit•55m ago•1 comment

Asimov v1: Open-Source Humanoid Robot

https://github.com/asimovinc/asimov-v1
1•Philipp2398•56m ago•0 comments

I built a coach for people who are tired of being yelled at by Stockfish

https://chessmentorai.com/en
1•sepiropht•57m ago•0 comments

Set a Meeting Budget

https://alexhans.github.io/posts/meeting-budget.html
2•alexhans•1h ago•1 comment

Ask HN: When might we not have to do laundry or fold clothes or cook

2•samarthv•1h ago•0 comments

Google signs classified AI deal with Pentagon

https://www.reuters.com/technology/google-signs-classified-ai-deal-with-pentagon-information-repo...
5•afshinmeh•1h ago•3 comments

The 278k language running 20% of the Internet

https://www.ismatsamadov.com/blog/lua-278k-language-running-the-internet
1•ismats•1h ago•0 comments

Unitree G1 humanoid robot roller skating [video]

https://www.youtube.com/watch?v=srPz8TRpZ_8
1•nathanh4903•1h ago•0 comments

An Enterprise-Level Retrieval-Augmented Generation System

https://comfyai.app/article/llm-applications/enterprise-level-rag-hands-on-practice-II
6•zljdanceholic•11mo ago

Comments

zljdanceholic•11mo ago
How do you find the key information you need across 10,000+ pages of PDFs within 2.5 hours? And for fact-checking, how do you ensure every answer is backed by page-level references, minimizing hallucinations?

RAG-Challenge-2 is a great open-source project by Ilya Rice that ranked 1st at the Enterprise RAG Challenge. It contains 4,500+ lines of code implementing a high-performing RAG system, which can seem overwhelming to newcomers just beginning to learn this technology. So, to help you get started quickly (and to motivate myself to learn its ins and outs), I've created a complete tutorial on it.

The tutorial includes a complete diagram explaining the workflow and the tools involved: Docling for parsing PDFs, LangChain for chunking text, FAISS for vectorization and similarity search, and ChatGPT as the LLM.
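As a self-contained sketch of that pipeline (with toy stand-ins: a hash-based embedder instead of a real embedding model, brute-force cosine search instead of a FAISS index, and Docling/ChatGPT out of scope), chunking with page-level references plus retrieval might look like this — all names here are illustrative, not from the project:

```python
# Toy sketch of the retrieval pipeline: chunk pages (keeping page numbers
# so answers can carry page-level citations), embed, and search by cosine
# similarity. The embedder and search are deliberately simplistic stand-ins.
import hashlib
import math

def embed(text: str, dim: int = 128) -> list[float]:
    """Deterministic toy embedding: hash character trigrams into a vector."""
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def chunk_pages(pages: list[str], size: int = 200) -> list[dict]:
    """Split each page's text into fixed-size chunks, recording the page
    number so every retrieved chunk can be cited at page level."""
    chunks = []
    for page_no, text in enumerate(pages, start=1):
        for start in range(0, len(text), size):
            chunks.append({"page": page_no, "text": text[start:start + size]})
    return chunks

def retrieve(query: str, chunks: list[dict], top_k: int = 1) -> list[dict]:
    """Brute-force cosine search (what a FAISS index does, at scale)."""
    q = embed(query)
    scored = sorted(
        chunks,
        key=lambda c: -sum(a * b for a, b in zip(q, embed(c["text"]))),
    )
    return scored[:top_k]

pages = [
    "RoPE rotates query and key vectors by position-dependent angles.",
    "AdamW decouples weight decay from the gradient update.",
]
hit = retrieve("How does RoPE work?", chunk_pages(pages))[0]
print(hit["page"])  # the page reference to cite alongside the answer
```

In the real system the embedder is an actual embedding model and the search runs against FAISS; the point of the sketch is the `page` field carried through chunking, which is what makes page-level citations possible.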

I also outline the code flow, showing the execution logic across the multiple Python files where beginners easily get lost; each file is given its own color. The point is not to memorize all of these file relationships — read the source code yourself, and use the diagram as a reference whenever you lose your place.

Ilya Rice designed the original RAG system for answering questions about companies' annual reports, so he only supported three answer formats for that challenge: a name, a number, or a boolean. But when asking about technical material, we naturally pose open-ended questions like "How does RoPE work?" to learn about a concept. I therefore modified the system logic to fit this need: I added a custom AnswerWithRAGContextExplanationPrompt class and automatically match the most relevant chapter and its corresponding pages by searching across all the FAISS databases (retrieving only the top-1 hit). The final performance is demonstrated below (not cherry-picked; tested only once).
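As a rough sketch of that modification — only the class name AnswerWithRAGContextExplanationPrompt comes from the comment; every other name here is an assumption — the explanation-style answer schema and the top-1 routing across per-chapter indexes might look like:

```python
# Hypothetical sketch: an answer schema carrying a free-form explanation
# plus page-level references, and a routing step that queries one index
# per chapter and keeps only the single best hit across all of them.
from dataclasses import dataclass, field

@dataclass
class AnswerWithRAGContextExplanationPrompt:
    question: str
    explanation: str = ""  # free-form answer for "how/why" questions
    references: list = field(default_factory=list)  # page-level citations

def best_chapter(query_vec, chapter_indexes):
    """chapter_indexes maps chapter name -> search function, where each
    function stands in for one FAISS index and returns (score, page).
    Only the top-1 hit across all chapters is kept."""
    best = None
    for chapter, search in chapter_indexes.items():
        score, page = search(query_vec)
        if best is None or score > best[0]:
            best = (score, chapter, page)
    return best

indexes = {  # pretend per-chapter FAISS indexes with fixed scores
    "attention": lambda q: (0.91, 12),
    "embeddings": lambda q: (0.74, 3),
}
score, chapter, page = best_chapter([0.1, 0.2], indexes)
answer = AnswerWithRAGContextExplanationPrompt(
    question="How does RoPE work?",
    references=[{"chapter": chapter, "page": page}],
)
print(chapter, page)  # → attention 12
```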

- What are the steps to train nanoGPT from scratch? The answer:

Training nanoGPT from scratch involves several clearly defined steps. First, set up the environment by installing necessary libraries, using either Anaconda or Google Colab, and then download the dataset (e.g., tinyShakespeare). Next, tokenize the text into numerical representations and split the data into training and validation sets. Define the model architecture including token/positional embeddings, transformer blocks with multi-head self-attention and feed-forward networks, and layer normalization. Configure training hyperparameters and set up an optimizer (such as AdamW). Proceed with a training loop that performs forward passes, computes loss, backpropagates, and updates parameters, while periodically evaluating performance on both training and validation data. Finally, use the trained model to generate new text from a given context.
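The steps in that answer can be compressed into a stdlib-only toy — a character-bigram count model stands in for nanoGPT's transformer, and simple counting stands in for the AdamW training loop; everything here is illustrative, not from the tutorial:

```python
# Toy walkthrough of the training steps: load data, tokenize to integers,
# split train/val, "train" a bigram model (by counting transitions), and
# generate new text from a starting context.
import random

text = "hello world, hello there"          # stand-in for tinyShakespeare
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}  # tokenize: char -> int
itos = {i: c for c, i in stoi.items()}
data = [stoi[c] for c in text]
n = int(0.9 * len(data))
train, val = data[:n], data[n:]             # train/validation split

# "Model" = bigram transition counts; "training loop" = counting pairs,
# with +1 smoothing so every transition has nonzero probability.
counts = [[1] * len(chars) for _ in chars]
for a, b in zip(train, train[1:]):
    counts[a][b] += 1

def generate(start: str, length: int, seed: int = 0) -> str:
    """Generation step: sample next chars from learned bigram frequencies."""
    rng = random.Random(seed)
    out = [stoi[start]]
    for _ in range(length):
        row = counts[out[-1]]
        out.append(rng.choices(range(len(chars)), weights=row)[0])
    return "".join(itos[i] for i in out)

print(generate("h", 10))
```

The real recipe swaps the counting for a transformer (embeddings, multi-head self-attention, layer norm) trained with AdamW on a loss computed over forward passes, but the tokenize/split/train/generate skeleton is the same.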

All the code is provided on Colab, and the tutorial is referenced here. Hope this helps!