
Exodus of IPv4 from War-torn Ukraine (2025)

https://www.kentik.com/blog/exodus-of-ipv4-from-war-torn-ukraine/
1•throw0101c•27s ago•0 comments

I compared WinApps and WinBoat, which integrate a Windows VM to desktop Linux

https://www.theregister.com/2026/02/14/winapps_and_winboat/
1•lproven•55s ago•0 comments

Ask HN: LLMs helping you read papers and books

1•amelius•3m ago•0 comments

How to Solve the Tenor Shortage

https://www.economist.com/leaders/2026/02/12/how-to-solve-the-tenor-shortage
1•testdelacc1•3m ago•1 comments

A Brief History of Sega Enterprises

https://www.abortretry.fail/p/a-brief-history-of-sega-enterprises
1•rbanffy•6m ago•0 comments

Show HN: Pluma – Write professional articles for your ideas and projects

https://pluma.ink/
1•nader0913•7m ago•0 comments

Go and Versioning: Minimal Version Selection

https://research.swtch.com/vgo-mvs
1•ryangibb•8m ago•0 comments

On Tilt – America's new gambling epidemic

https://harpers.org/archive/2026/02/on-tilt-america-gambling-epidemic-jasper-craven/
1•pseudolus•9m ago•0 comments

TexGuardian – Claude Code, but for LaTeX academic papers

https://github.com/arcAman07/TexGuardian
1•amananytime07•11m ago•1 comments

My math theory to automata Hilbert, Fourier and integral orders

https://github.com/tambetvali/LaeMath/tree/main/MathFuncs/Docs
2•tvali•18m ago•1 comments

Charles Bonnet Syndrome

https://en.wikipedia.org/wiki/Visual_release_hallucinations
1•debarshri•20m ago•0 comments

Show HN: Eliza, a line-by-line remake of the original AI chatbot from 1966

https://marquisdegeek.github.io/Eliza-Origins/
1•marquisdegeek•23m ago•0 comments

Record Low Snow in the West Will Mean Less Water, More Fire, and Political Chaos

https://www.wired.com/story/record-low-snow-in-the-west-will-mean-less-water-more-fire-and-politi...
1•xbmcuser•26m ago•0 comments

Show HN: Custom illustrated kids' book, generated and printed (StoryStarling)

2•storystarling•27m ago•0 comments

Show HN: Boredom Challenge – Test and Improve Your Boredom Tolerance

https://jsattler.github.io/boredom-challenge/
1•jsattler•33m ago•0 comments

Study validates ability to influence dreams, aiding problem-solving during REM

https://news.northwestern.edu/stories/2026/02/dream-engineering-can-help-solve-puzzling-questions
1•giuliomagnifico•34m ago•0 comments

'It's over for us': release of AI video generator Seedance 2.0 spooks Hollywood

https://www.theguardian.com/film/2026/feb/13/new-ai-video-generator-seedance-tom-cruise-brad-pitt
2•mellosouls•37m ago•0 comments

Reverse-engineered game Starflight (1986)

https://github.com/s-macke/starflight-reverse
2•tosh•38m ago•0 comments

Show HN: Dw2md – Compile all DeepWiki pages into a single, LLM-friendly file

https://github.com/tnguyen21/dw2md
1•nwyin•39m ago•0 comments

Proof of Humanity

https://www.workingtheorys.com/p/proof-of-humanity
1•jger15•42m ago•0 comments

What to Expect when Using 5G DECT NR+ [pdf]

https://hal.science/hal-05287148v1
1•teleforce•45m ago•0 comments

Show HN: AIWriteBook – AI tool to write, design, and publish full-length books

https://aiwritebook.com
2•marakaci•48m ago•1 comments

No-doomscroll: Ad-block filter lists to hide social media feeds

https://github.com/ZenPrivacy/filter-lists/blob/master/no-doomscroll/readme.md
4•anfragment•50m ago•0 comments

I Fixed Windows Native Development

https://marler8997.github.io/blog/fixed-windows/
2•deevus•50m ago•0 comments

A Forth vocabulary for iteration (2023)

https://blog.information-superhighway.net/a-forth-vocabulary-for-iteration
1•tosh•51m ago•0 comments

AgentProbe – adversarial security testing for AI agents (134 attack patterns)

https://github.com/alexmelges/agentprobe
1•alexmelges•52m ago•0 comments

Sortie En Mer

https://drowningsimulator.wtf/
1•carlos-menezes•54m ago•0 comments

Show HN: Free mission statement generator – paste URL, get draft in 5s

https://champsignal.com/tools/mission-statement-generator
1•maximedupre•59m ago•0 comments

Rune – Open spec pattern for consistent AI code generation

https://github.com/vict00r99/Rune-stone
2•vict00r99•1h ago•2 comments

UltrafastSecp256k1 – Zero-dep C++20 secp256k1 with ASM, CUDA, 27 coins, MuSig2, FROST

https://github.com/shrec/UltrafastSecp256k1
2•shrecshrec•1h ago•1 comments

An Enterprise-Level Retrieval-Augmented Generation System

https://comfyai.app/article/llm-applications/enterprise-level-rag-hands-on-practice-II
6•zljdanceholic•9mo ago

Comments

zljdanceholic•9mo ago
How can we find the key information we need in 10,000+ pages of PDFs within 2.5 hours? And for fact-checking, how do we ensure answers are backed by page-level references, minimizing hallucinations?

RAG-Challenge-2 is a great open-source project by Ilya Rice that ranked 1st in the Enterprise RAG Challenge. It has 4,500+ lines of code implementing a high-performing RAG system, which can seem overwhelming to newcomers just beginning to learn this technology. So, to help you get started quickly (and to motivate myself to learn its ins and outs), I've created a complete tutorial on it.

The tutorial includes a complete diagram explaining the workflow and the tools involved: Docling for parsing PDFs, LangChain for chunking text, FAISS for vectorization and similarity search, and ChatGPT as the LLM.
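
As a rough illustration of that workflow, here is my own minimal sketch, not code from the tutorial or from RAG-Challenge-2; the file name, chunking parameters, embedding model (sentence-transformers here, as a stand-in), and LLM model name are all placeholder assumptions:

    # Minimal RAG pipeline sketch (illustrative only; not the tutorial's code).
    # Assumptions: docling, langchain-text-splitters, faiss, sentence-transformers
    # and openai are installed; "report.pdf" and the model names are placeholders.
    import faiss
    import numpy as np
    from docling.document_converter import DocumentConverter
    from langchain_text_splitters import RecursiveCharacterTextSplitter
    from sentence_transformers import SentenceTransformer
    from openai import OpenAI

    # 1. Parse the PDF into text with Docling.
    text = DocumentConverter().convert("report.pdf").document.export_to_markdown()

    # 2. Chunk the text with LangChain.
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
    chunks = splitter.split_text(text)

    # 3. Embed the chunks and index them in FAISS (inner-product index).
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    vectors = np.asarray(embedder.encode(chunks, normalize_embeddings=True), dtype="float32")
    index = faiss.IndexFlatIP(vectors.shape[1])
    index.add(vectors)

    # 4. Retrieve the top chunks for a question and ask the LLM to answer from them.
    question = "How does RoPE work?"
    q_vec = np.asarray(embedder.encode([question], normalize_embeddings=True), dtype="float32")
    _, ids = index.search(q_vec, 5)
    context = "\n\n".join(chunks[i] for i in ids[0])

    client = OpenAI()  # needs OPENAI_API_KEY set
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}"}],
    )
    print(reply.choices[0].message.content)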

I also outline the code flow, showing the execution logic across the multiple Python files in which beginners can easily get lost; each file is shown in a different color. The goal is not to have you memorize all of these file relationships; it works better to read the source code yourself and use the diagram as a reference whenever you lose your place in the code.

Ilya Rice designed his original RAG system to answer questions about companies' annual reports, so he only supported three response formats for that challenge: a name, a number, or a boolean. But for technical material we naturally ask open-ended questions like "How does RoPE work?" to understand a concept. I therefore modified the system logic to fit this need, customizing an AnswerWithRAGContextExplanationPrompt class and automatically matching the most relevant chapter and its corresponding pages by searching through all of the FAISS databases (retrieving only the top-1 result); a rough sketch of that step is below. The final performance is then demonstrated (not cherry-picked; tested only once).
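
The repository's actual classes and index layout may differ; the chapter-matching idea itself can be sketched roughly like this, assuming one FAISS index per chapter and a hypothetical embed() helper:

    # Sketch of picking the most relevant chapter: query every per-chapter FAISS
    # index and keep only the single best (top-1) hit overall. `chapter_indexes`,
    # `embed()` and the page mapping are hypothetical names, not the repo's API.
    import numpy as np

    def find_best_chapter(question, chapter_indexes, embed):
        """chapter_indexes: dict of chapter name -> (faiss index, list of page numbers)."""
        q = np.asarray(embed(question), dtype="float32").reshape(1, -1)
        best = None  # (score, chapter, page)
        for chapter, (index, pages) in chapter_indexes.items():
            scores, ids = index.search(q, 1)                 # top-1 per chapter
            score, chunk_id = float(scores[0][0]), int(ids[0][0])
            if best is None or score > best[0]:              # assumes higher = more similar
                best = (score, chapter, pages[chunk_id])
        return best  # highest-scoring chapter and page across all databases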

- What are the steps to train a nanoGPT from scratch? The answer is:

Training nanoGPT from scratch involves several clearly defined steps. First, set up the environment by installing necessary libraries, using either Anaconda or Google Colab, and then download the dataset (e.g., tinyShakespeare). Next, tokenize the text into numerical representations and split the data into training and validation sets. Define the model architecture including token/positional embeddings, transformer blocks with multi-head self-attention and feed-forward networks, and layer normalization. Configure training hyperparameters and set up an optimizer (such as AdamW). Proceed with a training loop that performs forward passes, computes loss, backpropagates, and updates parameters, while periodically evaluating performance on both training and validation data. Finally, use the trained model to generate new text from a given context.
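
The steps in that answer map onto a very small PyTorch loop. The sketch below is my own illustration of the outline, not code from nanoGPT or the tutorial; the dataset path, model shape, and hyperparameters are placeholders, and the single encoder block is a stand-in rather than nanoGPT's stacked, causally masked architecture:

    # Tiny character-level training loop illustrating the outlined steps
    # (tokenize, split, define a model, train with AdamW, evaluate periodically).
    # Placeholder values throughout; this is not nanoGPT's actual code.
    import torch
    import torch.nn as nn

    text = open("tinyshakespeare.txt").read()             # placeholder dataset path
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}
    data = torch.tensor([stoi[c] for c in text])
    split = int(0.9 * len(data))
    train_data, val_data = data[:split], data[split:]      # train/validation split

    block_size, batch_size = 64, 32

    def get_batch(d):
        ix = torch.randint(len(d) - block_size - 1, (batch_size,))
        x = torch.stack([d[i:i + block_size] for i in ix])
        y = torch.stack([d[i + 1:i + block_size + 1] for i in ix])
        return x, y

    class TinyLM(nn.Module):
        # Stand-in model: token + positional embeddings, one transformer block, LM head.
        # (A real GPT stacks several blocks with causal masked self-attention.)
        def __init__(self, vocab, dim=128):
            super().__init__()
            self.tok = nn.Embedding(vocab, dim)
            self.pos = nn.Embedding(block_size, dim)
            self.block = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
            self.head = nn.Linear(dim, vocab)

        def forward(self, x):
            h = self.tok(x) + self.pos(torch.arange(x.size(1)))
            return self.head(self.block(h))

    model = TinyLM(len(chars))
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)   # AdamW optimizer

    for step in range(1000):                               # training loop
        x, y = get_batch(train_data)
        logits = model(x)
        loss = nn.functional.cross_entropy(logits.view(-1, len(chars)), y.view(-1))
        opt.zero_grad(); loss.backward(); opt.step()
        if step % 200 == 0:                                # periodic evaluation
            with torch.no_grad():
                vx, vy = get_batch(val_data)
                vloss = nn.functional.cross_entropy(model(vx).view(-1, len(chars)), vy.view(-1))
            print(f"step {step}: train {loss.item():.3f}, val {vloss.item():.3f}")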

All of the code is provided on Colab, and the tutorial is linked here. Hope this helps!