
New built-in interoperability between Google Meet and Microsoft Teams

https://workspaceupdates.googleblog.com/2026/02/meet-hardware-microsoft-teams-intero.html
1•ChrisArchitect•1m ago•0 comments

The Chinese planemaker taking on Boeing and Airbus

https://www.bbc.com/news/articles/ce3exl1k247o
1•belter•5m ago•0 comments

User stories as docs in the repo instead of tickets

https://docs.testchimp.io/test-planning/intro/
1•nuwansam_87•6m ago•1 comments

Google terminated my YouTube channel even though I made no videos or used it

3•paulpauper•7m ago•0 comments

Mekara: Workflows as Code Proof-of-Concept

https://meksys-dev.github.io/mekara/docs/
1•amosjyng•8m ago•0 comments

Making on a Manager's Schedule

https://zsuss.substack.com/p/making-on-a-managers-schedule
1•z-mach9•8m ago•0 comments

NEA Small Modular Reactor Digital Dashboard

https://www.oecd-nea.org/jcms/pl_107879/nea-small-modular-reactor-digital-dashboard
1•simonebrunozzi•8m ago•0 comments

Gary Goddard Interview (2013)

https://www.insideuniversal.net/2013/11/interview-gary-goddard-part-1/
1•exvi•10m ago•1 comments

John Bell Studio Concept Art

https://www.johnbell.studio
1•exvi•12m ago•0 comments

Simpler Java Project Setup with Mill

https://mill-build.org/blog/17-simpler-jvm-mill-110.html
1•lihaoyi•13m ago•0 comments

Modeling DeepSeek-R1's Instability as a Topological Limit

https://gist.github.com/eric2675-coder/3801106f24c03e43c2183766a377d958
2•eric2675•14m ago•1 comments

Immortal Now

https://mnvr.in/2026/immortal
1•vishnukvmd•15m ago•0 comments

Adrian Conejo Arias and child vs. Noem, Bondi, et al. [pdf]

https://storage.courtlistener.com/recap/gov.uscourts.txwd.1172886492/gov.uscourts.txwd.1172886492...
2•mizzao•17m ago•0 comments

Book Review: 'The Elements of Power'

https://www.nytimes.com/2026/01/16/books/review/the-elements-of-power-nicolas-niarchos.html
1•lxm•19m ago•0 comments

I miss thinking hard

https://www.jernesto.com/articles/thinking_hard
13•jernestomg•27m ago•8 comments

The Computer Chronicles – Artificial Intelligence (1984)

https://www.youtube.com/watch?v=_S3m0V_ZF_Q
2•belter•29m ago•0 comments

Show HN: AI-credit – measure AI contribution to a codebase

https://ai-credits.vercel.app
1•924412409•30m ago•0 comments

What Is Overfitting?

https://aws.amazon.com/what-is/overfitting/
1•teleforce•35m ago•0 comments

Show HN: OpenClaw Assistant – Replace Google Assistant with Any AI

https://github.com/yuga-hashimoto/OpenClawAssistant
1•YugaHashimoto•36m ago•0 comments

Stop overpaying for OpenClaw: Multi-model routing guide

https://velvetshark.com/openclaw-multi-model-routing
1•tinbucket•36m ago•0 comments

Data Agent Ready Database: Designing the Next-Gen Enterprise Data Warehouse

https://www.databend.com/blog/category-product/databend-agent-ready-database
1•river_wu•37m ago•1 comments

Show HN: Notepad++ Vulnerability Checker

https://github.com/nHunter0/Notepad-vulnerability-checker
1•10000000001•41m ago•1 comments

A Confession from Your Newest User

https://public.3.basecamp.com/p/njmKUBfBAJkfKuB8NHqV1qJ7
1•doppp•47m ago•0 comments

Tips for Using Claude Code from the Claude Code Team

https://twitter.com/bcherny/status/2017742741636321619
1•divbzero•48m ago•0 comments

JSBooks – a curated list of the best JavaScript books

https://github.com/minouou/JSBooks
2•mahsima•51m ago•0 comments

Anthropic Performance Team Take-Home for Dummies

https://www.ikot.blog/anthropic-take-home-for-dummies
1•ternaus•53m ago•0 comments

Updates on the Status of Adobe Animate

https://old.reddit.com/r/adobeanimate/comments/1qv5yju/updates_on_the_status_of_adobe_animate
2•crispinh•53m ago•1 comments

The hottest job in tech: Writing words

https://www.businessinsider.com/hottest-job-in-tech-writing-words-ai-hiring-2026-2
2•rfarley04•57m ago•1 comments

Deepfaking Orson Welles's Mangled Masterpiece

https://www.newyorker.com/magazine/2026/02/09/deepfaking-orson-welless-mangled-masterpiece
1•mitchbob•1h ago•1 comments

Why so little news from China?

2•wrqvrwvq•1h ago•3 comments

An Enterprise-Level Retrieval-Augmented Generation System

https://comfyai.app/article/llm-applications/enterprise-level-rag-hands-on-practice-II
6•zljdanceholic•9mo ago

Comments

zljdanceholic•9mo ago
How can we find the key information we need in 10,000+ pages of PDFs within 2.5 hours? And for fact-checking, how do we implement it so that answers are backed by page-level references, minimizing hallucinations?

RAG-Challenge-2 is a great open-source project by Ilya Rice that ranked 1st in the Enterprise RAG Challenge. It has 4,500+ lines of code implementing a high-performing RAG system, which can seem overwhelming to newcomers just beginning to learn this technology. To help you get started quickly, and to motivate myself to learn its ins and outs, I've created a complete tutorial on it.

The tutorial includes a complete diagram explaining the workflow and the tools involved: Docling for parsing PDFs, LangChain for chunking text, FAISS for vectorization and similarity search, and ChatGPT as the LLM.
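For orientation, here is a minimal sketch of that kind of pipeline. It is not the tutorial's actual code: the chunk sizes, model names, file path, and the embed/answer helper names are illustrative assumptions, and the PDF is assumed to have already been parsed to plain text (e.g., with Docling).

    # Minimal sketch of this kind of RAG pipeline (illustrative, not the tutorial's code).
    # Assumes the PDF has already been parsed to plain text (e.g., with Docling)
    # and that OPENAI_API_KEY is set in the environment.
    import numpy as np
    import faiss
    from openai import OpenAI
    from langchain_text_splitters import RecursiveCharacterTextSplitter

    client = OpenAI()

    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data], dtype="float32")

    # 1. Chunk the parsed document text (chunk sizes here are placeholders).
    parsed_pdf_text = open("report.txt").read()   # hypothetical path: PDF already parsed to text
    splitter = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=100)
    chunks = splitter.split_text(parsed_pdf_text)

    # 2. Embed the chunks and build a FAISS index (cosine similarity via inner product).
    vectors = embed(chunks)
    faiss.normalize_L2(vectors)
    index = faiss.IndexFlatIP(vectors.shape[1])
    index.add(vectors)

    # 3. Retrieve the top-k chunks for a question and let the LLM answer from them.
    def answer(question, k=5):
        q = embed([question])
        faiss.normalize_L2(q)
        _, ids = index.search(q, k)
        context = "\n\n".join(chunks[i] for i in ids[0])
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Answer using only the provided context."},
                {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return resp.choices[0].message.content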

I also outline the code flow, showing the running logic across the multiple Python files where beginners can easily get lost; each file is shown in a different color. The point is not to memorize all of these file relationships. It works better to read the source code yourself and use the diagram as a reference whenever you get lost in the code.

Ilya Rice designed his original RAG system to answer questions about companies' annual reports, so he only supported three response formats for that challenge: a name, a number, or a boolean. But when asking about technical material, we naturally ask open-ended questions like "How does RoPE work?" to learn about a concept. I therefore modified the system logic to fit this need by adding a custom AnswerWithRAGContextExplanationPrompt class and automatically matching the most relevant chapter and its corresponding pages by searching through all the FAISS databases (retrieving only the top-1); a sketch of that retrieval step follows. The final performance is demonstrated below (not cherry-picked, tested only once).
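A hedged sketch of that chapter-matching step, assuming one FAISS index per chapter and normalized embeddings; the best_chapter name and the dict layout are illustrative, not the modified project's actual code:

    import numpy as np
    import faiss

    def best_chapter(question_vec, chapter_indexes):
        """Return (chapter_name, chunk_id, score) for the single best hit
        across all per-chapter FAISS indexes (top-1 overall)."""
        q = question_vec.reshape(1, -1).astype("float32")
        faiss.normalize_L2(q)
        best = None
        for name, index in chapter_indexes.items():
            scores, ids = index.search(q, 1)      # top-1 within this chapter
            if best is None or scores[0][0] > best[2]:
                best = (name, int(ids[0][0]), float(scores[0][0]))
        return best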

- What are the steps to train a nanoGPT from scratch? The answer is:

Training nanoGPT from scratch involves several clearly defined steps. First, set up the environment by installing necessary libraries, using either Anaconda or Google Colab, and then download the dataset (e.g., tinyShakespeare). Next, tokenize the text into numerical representations and split the data into training and validation sets. Define the model architecture including token/positional embeddings, transformer blocks with multi-head self-attention and feed-forward networks, and layer normalization. Configure training hyperparameters and set up an optimizer (such as AdamW). Proceed with a training loop that performs forward passes, computes loss, backpropagates, and updates parameters, while periodically evaluating performance on both training and validation data. Finally, use the trained model to generate new text from a given context.
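To make those steps concrete, here is a minimal character-level training loop in PyTorch. It follows the outline above but is only an illustration, not nanoGPT's actual code: nanoGPT uses several decoder blocks with causal multi-head attention, and the file path and hyperparameters here are placeholders.

    # Illustrative sketch of the training steps described above (not nanoGPT's code).
    import torch
    import torch.nn as nn

    # Tokenize the dataset at character level and split into train/validation sets.
    text = open("tinyshakespeare.txt").read()     # placeholder path to the dataset
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}
    data = torch.tensor([stoi[c] for c in text])
    n = int(0.9 * len(data))
    train_data, val_data = data[:n], data[n:]

    block_size, batch_size = 64, 32

    def get_batch(split):
        d = train_data if split == "train" else val_data
        ix = torch.randint(len(d) - block_size - 1, (batch_size,))
        x = torch.stack([d[i:i + block_size] for i in ix])
        y = torch.stack([d[i + 1:i + block_size + 1] for i in ix])
        return x, y

    # A toy language model: token + positional embeddings, one Transformer
    # layer with a causal mask, and a linear head over the vocabulary.
    class TinyLM(nn.Module):
        def __init__(self, vocab, d=128):
            super().__init__()
            self.tok = nn.Embedding(vocab, d)
            self.pos = nn.Embedding(block_size, d)
            self.block = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
            self.head = nn.Linear(d, vocab)

        def forward(self, idx):
            h = self.tok(idx) + self.pos(torch.arange(idx.shape[1]))
            mask = nn.Transformer.generate_square_subsequent_mask(idx.shape[1])
            return self.head(self.block(h, src_mask=mask))

    model = TinyLM(len(chars))
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

    # Training loop: forward pass, loss, backprop, parameter update,
    # with an occasional check on validation loss.
    for step in range(1000):
        x, y = get_batch("train")
        loss = nn.functional.cross_entropy(
            model(x).view(-1, len(chars)), y.view(-1))
        opt.zero_grad(); loss.backward(); opt.step()
        if step % 200 == 0:
            with torch.no_grad():
                vx, vy = get_batch("val")
                vloss = nn.functional.cross_entropy(
                    model(vx).view(-1, len(chars)), vy.view(-1))
            print(step, loss.item(), vloss.item())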

All the code is provided on Colab, and the tutorial is referenced here. Hope this helps!