frontpage.

Made with ♥ by @iamnishanth

Open Source @Github

An Enterprise-Level Retrieval-Augmented Generation System

https://comfyai.app/article/llm-applications/enterprise-level-rag-hands-on-practice-II
6•zljdanceholic•11mo ago

Comments

zljdanceholic•11mo ago
How can we retrieve the key information we want from 10,000+ pages of PDFs within 2.5 hours? And for fact-checking, how do we implement it so that answers are backed by page-level references, minimizing hallucinations?

RAG-Challenge-2 is a great open-source project by Ilya Rice that ranked 1st in the Enterprise RAG Challenge, with 4,500+ lines of code implementing a high-performing RAG system. It might seem overwhelming to newcomers just beginning to learn this technology. So, to help you get started quickly (and to motivate myself to learn its ins and outs), I've created a complete tutorial on it.

The tutorial includes a complete diagram explaining the workflow and where each tool fits: Docling for parsing PDFs, LangChain for chunking text, FAISS for vectorization and similarity search, and ChatGPT as the LLM.
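As a rough sketch of that workflow, the pipeline below replaces each tool with a toy stand-in (plain-text "parsing", fixed-size chunking, bag-of-words "embeddings", brute-force cosine search). The function names and sample pages are illustrative assumptions, not the project's actual API:

```python
# Toy end-to-end sketch of the parse -> chunk -> embed -> search pipeline.
# The real project uses Docling, LangChain, FAISS, and an LLM; each stage
# here is a minimal stand-in so the flow is runnable on its own.

def parse_pdf(raw_pages):
    # Stand-in for Docling: assume pages are already plain text.
    return [(i + 1, text) for i, text in enumerate(raw_pages)]

def chunk(pages, size=40):
    # Stand-in for LangChain's text splitter: fixed-size character chunks,
    # each tagged with its source page to enable page-level references.
    chunks = []
    for page_no, text in pages:
        for start in range(0, len(text), size):
            chunks.append({"page": page_no, "text": text[start:start + size]})
    return chunks

def embed(text):
    # Stand-in for a real embedding model: bag-of-words term counts.
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0) for w in a)
    norm = lambda v: sum(x * x for x in v.values()) ** 0.5
    return dot / (norm(a) * norm(b)) if a and b else 0.0

def top1(query, chunks):
    # Stand-in for a FAISS similarity search: brute-force top-1.
    q = embed(query)
    return max(chunks, key=lambda c: cosine(q, embed(c["text"])))

pages = ["RoPE rotates query and key vectors by position-dependent angles.",
         "AdamW decouples weight decay from the gradient update."]
chunks = chunk(parse_pdf(pages))
best = top1("how does RoPE work", chunks)
print(best["page"])  # page number that backs the answer
```

In the real system the retrieved chunk (plus its page number) is then passed to the LLM as context.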

I also outline the code flow, demonstrating the execution logic across multiple Python files, where beginners can easily get lost. Different files are color-coded. The goal is not to have you memorize all of these file relationships; it works better to read the source code yourself and use the diagram as a reference whenever you get lost in the code.

Ilya Rice designed his original RAG system to answer questions about companies' annual reports, so he supported only three response formats for that challenge: a name, a number, or a boolean. But when asking about technical material, we naturally pose general questions such as "How does RoPE work?" to learn about a concept. I therefore modified the system logic to fit this need by adding a custom AnswerWithRAGContextExplanationPrompt class and by automatically matching the most relevant chapter and its corresponding pages, searching across all FAISS databases and retrieving only the top-1 hit. The final performance is demonstrated below (not cherry-picked, only tested once).
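The chapter-matching idea can be sketched roughly like this: keep one index per chapter, query all of them, and keep only the single best-scoring hit, which identifies both the chapter and the page. The word-overlap scorer, `search` helper, chapter names, and page numbers below are all illustrative stand-ins for real FAISS vector search:

```python
# Hedged sketch of routing a query to the most relevant chapter by
# querying one index per chapter and keeping the global top-1 hit.

def search(index, query):
    # Stand-in for faiss_index.search(query_vec, k=1): score each entry
    # by shared words and return the best (score, page) pair.
    q = set(query.lower().split())
    scored = [(len(q & set(text.lower().split())), page)
              for page, text in index]
    return max(scored)

def route_query(query, chapter_indexes):
    # Query every per-chapter index; the highest-scoring hit wins and
    # identifies both the chapter and the backing page.
    best = max(search(index, query) + (chapter,)
               for chapter, index in chapter_indexes.items())
    score, page, chapter = best
    return chapter, page

chapter_indexes = {
    "attention": [(12, "RoPE applies rotary position embeddings"),
                  (13, "multi-head attention splits queries")],
    "training":  [(40, "the AdamW optimizer decouples weight decay")],
}
chapter, page = route_query("how does RoPE work", chapter_indexes)
print(chapter, page)
```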

- What are the steps to train nanoGPT from scratch? The answer is:

Training nanoGPT from scratch involves several clearly defined steps. First, set up the environment by installing necessary libraries, using either Anaconda or Google Colab, and then download the dataset (e.g., tinyShakespeare). Next, tokenize the text into numerical representations and split the data into training and validation sets. Define the model architecture including token/positional embeddings, transformer blocks with multi-head self-attention and feed-forward networks, and layer normalization. Configure training hyperparameters and set up an optimizer (such as AdamW). Proceed with a training loop that performs forward passes, computes loss, backpropagates, and updates parameters, while periodically evaluating performance on both training and validation data. Finally, use the trained model to generate new text from a given context.
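The stages in that answer (tokenize, split, train, evaluate, generate) can be compressed into a runnable skeleton. To keep it dependency-free, the sketch below swaps the transformer and AdamW for a count-based character bigram table, so treat it as an outline of the stages, not real nanoGPT code:

```python
import math
import random

# Drastically simplified stand-in for the nanoGPT recipe: the "model"
# is a character bigram count table instead of a transformer.

text = "hello world, hello there"          # stand-in for tinyShakespeare

# 1. Tokenize: map each character to an integer id.
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = [stoi[c] for c in text]

# 2. Split into training and validation sets.
n = int(0.9 * len(data))
train, val = data[:n], data[n:]

# 3-5. "Train": count bigram transitions (standing in for the forward
# pass, loss, backprop, and optimizer updates of the real recipe).
counts = [[1] * len(chars) for _ in chars]   # +1 smoothing
for a, b in zip(train, train[1:]):
    counts[a][b] += 1

# 6. Evaluate: average negative log-likelihood on the validation split.
nll = sum(-math.log(counts[a][b] / sum(counts[a]))
          for a, b in zip(val, val[1:])) / max(1, len(val) - 1)

# 7. Generate new text from a starting context.
random.seed(0)
idx, out = stoi["h"], "h"
for _ in range(8):
    idx = random.choices(range(len(chars)), weights=counts[idx])[0]
    out += itos[idx]
print(round(nll, 3), out)
```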

All code is provided on Colab and the tutorial is referenced here. Hope this helps!

Superscript Asterisk in Unicode

https://blog.zgp.org/superscript-asterisk-in-unicode/
1•b6dybuyv•19s ago•0 comments

Spinel: Ruby AOT Native Compiler

https://github.com/matz/spinel
1•dluan•3m ago•0 comments

Stock markets are too high and set to fall, says Bank of England deputy

https://www.bbc.com/news/articles/c75kp1y43lgo
2•wood_spirit•4m ago•0 comments

TorchWebGPU: Running PyTorch Natively on WebGPU

https://github.com/jmaczan/torch-webgpu
1•yu3zhou4•5m ago•0 comments

I over-engineered my AI coding setup one justified upgrade at a time

https://machinethoughts.substack.com/p/every-upgrade-made-sense-how-i-over
1•jurreB•12m ago•0 comments

A red pixel in the snow: How AI found a lost climber

https://www.bbc.com/future/article/20260108-how-ai-solved-the-mystery-of-a-missing-mountaineer
1•tellarin•12m ago•0 comments

We Are Xbox

https://news.xbox.com/en-us/2026/04/23/we-are-xbox/
2•quyleanh•14m ago•0 comments

SSE token streaming is easy, they said

https://zknill.io/posts/everyone-said-sse-token-streaming-was-easy/
1•zknill•16m ago•0 comments

UK gaming icon Peter Molyneux on AI, his final creation and a changing industry

https://www.bbc.com/news/articles/c4glw5nyrggo
3•tellarin•16m ago•0 comments

Software engineering may no longer be a lifetime career

https://www.seangoedecke.com/software-engineering-may-no-longer-be-a-lifetime-career/
2•sarmike31•25m ago•0 comments

DroidVM – Run virtual machine on Android Phones with near-native performance

https://github.com/droid-vm/droidvm
1•shelfchair•25m ago•0 comments

Okren – Founding Engineering Operator – Europe /Remote – Pre-Seed – Equity-First

https://okrenai.com/
1•freddiebrown3rd•27m ago•0 comments

Show HN: Founder Decision Engine

https://github.com/michaelaz774/decision-engine
1•michael774•28m ago•0 comments

Tim Cook wrote a winning recipe for Apple

https://www.economist.com/leaders/2026/04/23/tim-cook-wrote-a-winning-recipe-for-apple
1•edward•29m ago•0 comments

Design.md: A format spec for describing a visual identity to coding agents

https://github.com/google-labs-code/design.md
4•rbanffy•31m ago•1 comments

Vision Banana | Google DeepMind

https://vision-banana.github.io
1•rldjbpin•33m ago•0 comments

Is Helium the Browser Brave Was Meant to Be?

https://itsfoss.com/helium-browser/
1•dotcoma•34m ago•0 comments

Self-Reference

https://en.wikipedia.org/wiki/Self-reference
1•nill0•34m ago•0 comments

Discouraging "the voice from nowhere" (~LLMs) in documentation

https://forum.djangoproject.com/t/discouraging-the-voice-from-nowhere-llms-in-documentation/44699
1•marbartolome•35m ago•0 comments

Vibe Coding Isn't the Problem – It's Your Approvals Process

https://kristopherleads.substack.com/p/vibe-coding-isnt-the-problem-its
1•kristopherleads•35m ago•2 comments

You're about to feel the AI money squeeze

https://www.theverge.com/ai-artificial-intelligence/917380/ai-monetization-anthropic-openai-token...
2•eternalreturn•36m ago•0 comments

DeepSeek V4 in vLLM: Efficient Long-Context Attention

https://vllm-website-pdzeaspbm-inferact-inc.vercel.app/blog/deepseek-v4
3•zagwdt•37m ago•0 comments

Open-Source in the Era of "Infinite" Compute

https://community.computer/infinite-compute.html
4•r3ason•40m ago•0 comments

Trailmark Turns Code into Graphs

https://blog.trailofbits.com/2026/04/23/trailmark-turns-code-into-graphs/
1•ingve•41m ago•0 comments

EU to warn against early nuclear exits in effort to address energy crisis

https://uk.news.yahoo.com/eu-warn-against-early-nuclear-103501522.html
1•mpweiher•41m ago•0 comments

Distinguishing between coroutines and fibers [pdf]

https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4024.pdf
2•dgan•44m ago•0 comments

I Built a Pente App for iPhone

https://apps.apple.com/us/app/pente-go/id6449915046
1•ElbertF•46m ago•1 comments

Claude can now connect to lifestyle apps like Spotify, Instacart and AllTrails

https://www.engadget.com/ai/claude-can-now-connect-to-lifestyle-apps-like-spotify-instacart-and-a...
1•anujbans•46m ago•0 comments

Eric Trump Brags About $24M Pentagon Deal His Company Landed

https://newrepublic.com/post/209419/eric-trump-brags-defense-department-contract
3•tcp_handshaker•47m ago•0 comments

QuickSWMS – AI generator for Australian construction safety docs ($100 startup)

https://quickswms.co
1•EzraDe•47m ago•0 comments