frontpage.

TOML 1.1.0

https://github.com/toml-lang/toml/releases/tag/1.1.0
1•epage•37s ago•0 comments

Show HN: DIY E-Ink Home Dashboard Without Headless Chrome (Python/Pillow)

https://tjoskar.dev/posts/2025-11-02-eink-pi/
1•tjoskar•1m ago•0 comments

The x86 Inferno – A Descent into Advent of Code

https://hallofdreams.org/posts/x86-inferno/
1•TheCog•2m ago•1 comment

Show HN: I built an app to fix my ADHD using the FBI meme

https://github.com/mgiovani/vigilant
2•m_giovani•3m ago•0 comments

Dear ACM, you're doing AI wrong but you can still get it right

https://anil.recoil.org/notes/acm-ai-recs
1•todsacerdoti•3m ago•0 comments

Matter by JetBrains brings vibecoding to existing codebases

https://www.youtube.com/watch?v=nVk5yP7m12Y
1•berlena•4m ago•1 comment

Half of graduates 'would earn more as a higher-level apprentice'

https://www.thetimes.com/uk/education/article/graduates-earn-more-higher-level-apprentice-debt-rk...
1•bookofjoe•6m ago•1 comment

How many GPUs do you need to break SHA-1?

https://twitter.com/AnomalRoil/status/2001427733021180216
1•anomalroil•6m ago•0 comments

Executorch: On-device AI across mobile, embedded and edge for PyTorch

https://github.com/pytorch/executorch
1•klaussilveira•8m ago•0 comments

AI's trillion-dollar question just got louder

https://www.mindstream.news/p/ai-s-trillion-dollar-question-just-got-louder
2•Anon84•8m ago•0 comments

Lovable raises $330M to power the age of the builder

https://lovable.dev/blog/series-b
2•SirOibaf•9m ago•2 comments

Long Live the Aeonophiles

https://aeon.co/essays/the-discovery-of-aeonophiles-expands-our-definition-of-life
1•rifish•9m ago•0 comments

Truth Social Parent to Merge with Nuclear Fusion Firm in $6B Deal

https://www.nytimes.com/2025/12/18/business/trump-media-tae-technologies-fusion-power-deal.html
2•2OEH8eoCRo0•9m ago•0 comments

Black Inventors Who Changed the World

https://www.thecollector.com/black-inventors-who-changed-the-world/
1•Tomte•9m ago•0 comments

The World Needs a Space Cop

https://foreignpolicy.com/2025/12/18/outer-space-treaty-cop-satellites-congestion-orbit-debris/
1•voxleone•10m ago•0 comments

Boyd's Law of Iteration (2007)

https://blog.codinghorror.com/boyds-law-of-iteration/
1•thunderbong•10m ago•0 comments

AI Has a Communism Problem

https://gpt3experiments.substack.com/p/ai-has-a-communism-problem
1•nutanc•11m ago•2 comments

On Reading Proust's In Search of Lost Time

https://nabeelqu.substack.com/p/on-reading-prousts-in-search-of-lost
1•jger15•11m ago•0 comments

Ask HN: Show and tell your successful side projects based in the EU

1•gethly•12m ago•1 comment

Dynamicland Front Shelf

https://dynamicland.org/
2•eamonnsullivan•12m ago•0 comments

Universal Tower Defense Codes December 2025 – Free Gems and Rewards

https://universaltowerdefensecodes.org/
1•john_mayor•13m ago•0 comments

Heart Association Revives Theory That Light Drinking May Be Good for You

https://www.nytimes.com/2025/12/16/health/alcohol-heart-disease-cancer.html
1•brandonb•13m ago•0 comments

The Cuban Embargo Does Not Exist

https://www.journalofdemocracy.org/online-exclusive/the-cuban-embargo-does-not-exist/
1•prmph•15m ago•1 comment

Woodpecker CI

https://github.com/woodpecker-ci/woodpecker
1•klaussilveira•15m ago•0 comments

Tech Billionaires Are Creating Private Cities to Flee America

https://offthefrontpage.com/tech-billionaires-are-creating-private-cities-to-flee-america/
1•robtherobber•16m ago•0 comments

Apple Watch detects 89% of sleep apnea

https://www.empirical.health/blog/apple-watch-sleep-apnea/
2•brandonb•16m ago•0 comments

We've rewritten Claude Code's terminal rendering to reduce flickering by 85%

https://github.com/anthropics/claude-code/issues/769
1•bcherny•17m ago•3 comments

Rhetorical Demagoguery: An Exploration of Trump's and Hitler's Rise to Power

https://digitalcommons.gardner-webb.edu/undergrad-honors/62/
2•KnuthIsGod•18m ago•0 comments

Akunmlk618 Gmail.com

1•mhdfazri•18m ago•0 comments

Partial Inlining

https://xania.org/202512/18-partial-inlining
2•hasheddan•19m ago•0 comments

An Enterprise-Level Retrieval-Augmented Generation System

https://comfyai.app/article/llm-applications/enterprise-level-rag-hands-on-practice-II
6•zljdanceholic•7mo ago

Comments

zljdanceholic•7mo ago
How can we find the key information we want in 10,000+ pages of PDFs within 2.5 hours? And for fact-checking, how do we implement the system so that answers are backed by page-level references, minimizing hallucinations?

RAG-Challenge-2 is a great open-source project by Ilya Rice that ranked 1st in the Enterprise RAG Challenge; its 4,500+ lines of code implement a high-performing RAG system. That can feel overwhelming to newcomers who are just beginning to learn this technology. So, to help you get started quickly (and to motivate myself to learn its ins and outs), I've created a complete tutorial on it.

The tutorial includes a complete diagram explaining the workflow and the tools used at each stage: Docling for parsing PDFs, LangChain for chunking the text, FAISS for vectorization and similarity search, and ChatGPT as the LLM. A minimal sketch of that pipeline is shown below.
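To make the workflow concrete, here is a minimal pipeline sketch under a few assumptions of my own: the file name report.pdf, the chunk sizes, and the OpenAI model names are illustrative choices, not the tutorial's exact configuration.

    # Minimal RAG pipeline sketch: Docling -> LangChain splitter -> FAISS -> OpenAI.
    # File name, model names, and chunk sizes are illustrative assumptions.
    import faiss
    import numpy as np
    from docling.document_converter import DocumentConverter
    from langchain_text_splitters import RecursiveCharacterTextSplitter
    from openai import OpenAI

    client = OpenAI()

    # 1. Parse the PDF into markdown with Docling.
    doc = DocumentConverter().convert("report.pdf").document
    text = doc.export_to_markdown()

    # 2. Chunk the text with LangChain's splitter.
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
    chunks = splitter.split_text(text)

    # 3. Embed chunks and index them in FAISS (inner product on L2-normalized
    #    vectors is cosine similarity).
    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        vecs = np.array([d.embedding for d in resp.data], dtype="float32")
        faiss.normalize_L2(vecs)
        return vecs

    vectors = embed(chunks)
    index = faiss.IndexFlatIP(vectors.shape[1])
    index.add(vectors)

    # 4. Retrieve the top-5 chunks for a question and ask the LLM to answer
    #    strictly from that context.
    question = "How does RoPE work?"
    _, ids = index.search(embed([question]), 5)
    context = "\n\n".join(chunks[i] for i in ids[0])
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    print(reply.choices[0].message.content)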

I also outline the code flow, demonstrating the running logic across the multiple Python files where beginners can easily get lost; each file is colored differently. The goal is not for you to memorize all of these file relationships. It works better to read the source code yourself and use the diagram as a reference whenever you find yourself lost in the code.

Ilya Rice designed the original RAG system for answering questions about companies' annual reports, so he only built three response formats for that challenge: a name, a number, or a boolean. But when asking about technical material, we naturally pose general questions like "How does RoPE work?" to understand a concept. I therefore modified the system logic to fit this need: I added a custom AnswerWithRAGContextExplanationPrompt class and automatically match the most relevant chapter and its corresponding pages by searching across all of the FAISS databases (retrieving only the top-1 hit); see the sketch after this paragraph. The final performance is demonstrated below (not cherry-picked; tested only once).
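Roughly, the modification could look like the following. The schema's field names and the best_chapter helper are my own illustrative guesses at the shape of the change, not the tutorial's actual definitions:

    # Sketch of an explanation-style answer schema plus top-1 chapter matching.
    # Field names and this helper are hypothetical; see the tutorial for the real code.
    from pydantic import BaseModel, Field

    class AnswerWithRAGContextExplanationPrompt(BaseModel):
        """Structured response for explanation-style questions."""
        step_by_step_analysis: str = Field(..., description="Reasoning over the retrieved context")
        answer: str = Field(..., description="A general explanation, not just a name/number/boolean")
        relevant_pages: list[int] = Field(..., description="Pages backing the answer, for fact checking")

    def best_chapter(query_vec, chapter_indexes):
        """Query every per-chapter FAISS index with top-1 and keep the best hit.

        chapter_indexes maps chapter name -> FAISS index; returns (chapter, chunk_id).
        """
        best = (None, None, float("-inf"))
        for chapter, index in chapter_indexes.items():
            scores, ids = index.search(query_vec, 1)  # top-1 per chapter
            if scores[0][0] > best[2]:
                best = (chapter, int(ids[0][0]), float(scores[0][0]))
        return best[0], best[1]

Keeping only the top-1 hit per chapter is what makes the page-level references unambiguous: every answer can point back to one best-matching chapter and its pages.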

- What are the steps to train a nanoGPT from scratch? The answer is:

Training nanoGPT from scratch involves several clearly defined steps. First, set up the environment by installing necessary libraries, using either Anaconda or Google Colab, and then download the dataset (e.g., tinyShakespeare). Next, tokenize the text into numerical representations and split the data into training and validation sets. Define the model architecture including token/positional embeddings, transformer blocks with multi-head self-attention and feed-forward networks, and layer normalization. Configure training hyperparameters and set up an optimizer (such as AdamW). Proceed with a training loop that performs forward passes, computes loss, backpropagates, and updates parameters, while periodically evaluating performance on both training and validation data. Finally, use the trained model to generate new text from a given context.
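To make those steps concrete, here is a generic PyTorch sketch of that loop. TinyGPT is a simplified stand-in model, not nanoGPT's actual implementation, and the local tinyshakespeare.txt path is an assumption:

    # Generic GPT training-loop sketch following the steps in the answer above.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # 1. Load and tokenize (character-level); assumes tinyShakespeare is downloaded.
    text = open("tinyshakespeare.txt").read()
    chars = sorted(set(text))
    stoi = {ch: i for i, ch in enumerate(chars)}
    data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

    # 2. Train/validation split.
    n = int(0.9 * len(data))
    train_data, val_data = data[:n], data[n:]
    BLOCK, BATCH = 128, 32

    def get_batch(split):
        d = train_data if split == "train" else val_data
        ix = torch.randint(len(d) - BLOCK - 1, (BATCH,))
        x = torch.stack([d[i:i + BLOCK] for i in ix])
        y = torch.stack([d[i + 1:i + 1 + BLOCK] for i in ix])  # next-token targets
        return x, y

    # 3. Model: token/positional embeddings, transformer blocks, layer norm, LM head.
    class TinyGPT(nn.Module):
        def __init__(self, vocab_size, d=128):
            super().__init__()
            self.tok = nn.Embedding(vocab_size, d)
            self.pos = nn.Embedding(BLOCK, d)
            layer = nn.TransformerEncoderLayer(d, nhead=4, dim_feedforward=4 * d,
                                               batch_first=True)
            self.blocks = nn.TransformerEncoder(layer, num_layers=2)
            self.ln = nn.LayerNorm(d)
            self.head = nn.Linear(d, vocab_size)

        def forward(self, idx, targets=None):
            B, T = idx.shape
            x = self.tok(idx) + self.pos(torch.arange(T))
            mask = nn.Transformer.generate_square_subsequent_mask(T)  # causal attention
            x = self.blocks(x, mask=mask)
            logits = self.head(self.ln(x))
            loss = None if targets is None else F.cross_entropy(
                logits.reshape(B * T, -1), targets.reshape(B * T))
            return logits, loss

    # 4. AdamW optimizer and the loop: forward, loss, backprop, update, periodic eval.
    model = TinyGPT(len(chars))
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
    for step in range(2000):
        xb, yb = get_batch("train")
        _, loss = model(xb, yb)
        opt.zero_grad(set_to_none=True)
        loss.backward()
        opt.step()
        if step % 200 == 0:
            with torch.no_grad():
                _, vloss = model(*get_batch("val"))
            print(step, round(loss.item(), 3), round(vloss.item(), 3))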

All the code is provided on Colab, and the tutorial is referenced here. Hope this helps!