frontpage.


Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•2m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•5m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•5m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•5m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•5m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
1•juujian•7m ago•0 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•9m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•11m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
1•DEntisT_•13m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•14m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•14m ago•1 comments

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•17m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
4•sakanakana00•20m ago•0 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•22m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
3•Tehnix•23m ago•1 comments

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•24m ago•1 comments

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
4•Nive11•25m ago•6 comments

Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
2•hunglee2•28m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
2•chartscout•31m ago•0 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
3•AlexeyBrin•34m ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
2•machielrey•35m ago•1 comments

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
3•tablets•40m ago•1 comments

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•42m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•44m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
2•pastage•44m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
2•billiob•45m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
2•birdculture•51m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•56m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•58m ago•1 comments

Slop News - The Front Page right now but it's only Slop

https://slop-news.pages.dev/slop-news
1•keepamovin•1h ago•1 comments

Show HN: Pyversity – Fast Result Diversification for Retrieval and RAG

https://github.com/Pringled/pyversity
86•Tananon•3mo ago
Hey HN! I’ve recently open-sourced Pyversity, a lightweight library for diversifying retrieval results. Most retrieval systems optimize only for relevance, which can lead to top-k results that look almost identical. Pyversity efficiently re-ranks results to balance relevance and diversity, surfacing items that remain relevant but are less redundant. This helps improve retrieval, recommendation, and RAG pipelines without adding latency or complexity.

Main features:

- Unified API: one function (diversify) supporting several well-known strategies: MMR, MSD, DPP, and COVER (with more to come); see the usage sketch just below

- Lightweight: the only dependency is NumPy, keeping the package small and easy to install

- Fast: efficient implementations for all supported strategies; diversify results in milliseconds
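
Here's a rough usage sketch (illustrative only; see the docs on GitHub for the exact signature and parameter names):

    import numpy as np
    from pyversity import diversify

    # Embeddings and relevance scores for the top-k candidates from retrieval.
    embeddings = np.random.rand(100, 384)   # (n_items, dim)
    scores = np.random.rand(100)            # one relevance score per item

    # Re-rank: keep 10 items that stay relevant but aren't redundant.
    result = diversify(embeddings, scores, k=10, strategy="mmr")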

Re-ranking with cross-encoders is very popular right now, but also very expensive. In my experience, you can usually improve retrieval results with simpler and faster methods, such as the ones implemented in this package. This helps retrieval, recommendation, and RAG systems present richer, more informative results by ensuring each new item adds new information.
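
To give a flavor of how cheap these methods are, here's a minimal MMR re-ranker in plain NumPy (an illustrative sketch of the general technique, not Pyversity's actual implementation):

    import numpy as np

    def mmr(embeddings, scores, k, lam=0.5):
        """Greedy Maximal Marginal Relevance: trade off relevance
        against similarity to the items already selected."""
        emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        selected = [int(np.argmax(scores))]  # seed with the most relevant item
        candidates = [i for i in range(len(scores)) if i != selected[0]]
        while len(selected) < k and candidates:
            cand = np.array(candidates)
            # Each candidate's similarity to its closest already-selected item.
            redundancy = (emb[cand] @ emb[selected].T).max(axis=1)
            marginal = lam * scores[cand] - (1 - lam) * redundancy
            best = int(cand[np.argmax(marginal)])
            selected.append(best)
            candidates.remove(best)
        return selected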

Code and docs: github.com/pringled/pyversity

Let me know if you have any feedback, or suggestions for other diversification strategies to support!

Comments

leobg•3mo ago
Might also be useful for dataset curation, or even just prompt engineering. For example, picking a diverse set of examples for training or evaluation when building a classifier.
Tananon•3mo ago
True, I think that's also a great use case! These algorithms likely won't scale to very large datasets (e.g. millions of samples), but for smaller datasets, like fine-tuning sets, I think this would work very well. I've worked on something similar in the past that does work for larger datasets (semantic deduplication: https://github.com/MinishLab/semhash).
CarlosD•3mo ago
Fascinating!
pu_pu•3mo ago
The biggest problem with retrieval is actually semantic relevance itself. I think most embedding models don't really capture sentence-level semantic content and instead act more like bag-of-words models, averaging local word-level information.

Consider this simple test I’ve been running:

Anchor: “A background service listens to a task queue and processes incoming data payloads using a custom rules engine before persisting output to a local SQLite database.”

Option A (Lexical Match): “A background service listens to a message queue and processes outgoing authentication tokens using a custom hash function before transmitting output to a local SQLite database.”

Option B (Semantic Match): “An asynchronous worker fetches jobs from a scheduling channel, transforms each record according to a user-defined logic system, and saves the results to an embedded relational data store on disk.”

Any decent LLM (e.g., Gemini 2.5 Pro, GPT-4/5) immediately recognizes that the Anchor and Option B describe the same concept, just with different words. But when I test embedding models like gemini-embedding-001 (currently top of MTEB), they consistently rate Option A as more similar, as measured by cosine similarity. They’re getting tricked by surface-level word overlap.
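
For anyone who wants to reproduce this locally, here's a minimal sketch of the check (sentence-transformers and MiniLM are stand-ins I picked for a self-contained example; the results above are from gemini-embedding-001):

    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    anchor = ("A background service listens to a task queue and processes "
              "incoming data payloads using a custom rules engine before "
              "persisting output to a local SQLite database.")
    option_a = ("A background service listens to a message queue and processes "
                "outgoing authentication tokens using a custom hash function "
                "before transmitting output to a local SQLite database.")
    option_b = ("An asynchronous worker fetches jobs from a scheduling channel, "
                "transforms each record according to a user-defined logic "
                "system, and saves the results to an embedded relational data "
                "store on disk.")

    # Cosine similarity via normalized embeddings and dot products.
    emb = model.encode([anchor, option_a, option_b], normalize_embeddings=True)
    print("anchor vs A (lexical): ", emb[0] @ emb[1])
    print("anchor vs B (semantic):", emb[0] @ emb[2])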

I put together a small GitHub repo that uses ChatGPT to generate and test these “semantic triplets”:

https://github.com/semvec/embedstresstest

gemini-embedding-001 (currently #1 on the MTEB leaderboard) scored close to 0% on these adversarial examples.

The repo is unpolished at the moment but it gets the idea across and everything is reproducible.

Anyway, did anyone else notice this problem?

softwaredoug•3mo ago
I’m not sure what the “biggest” problem is, but I do think diversity is vastly underappreciated compared to relevance.

You can have maximally relevant search results that are horrible, because most users (and LLMs) want to understand the range of options, not just one type of relevant option.

Searching for “shoes” and only seeing athletic shoes is a bad experience. You’ll sell more shoes, and keep the user engaged, if you show a diverse range of shoes.

jimmySixDOF•3mo ago
I liked how Karpathy explained part of this problem as "silent collapse" on his recent Dwarkesh podcast appearance: the models tend to fall into a local minimum, reusing a few output wording templates for a large number of similar questions, and this lack of entropy becomes a tough, hard-to-detect problem when doing distillation or synthetic data generation in general. These algorithms, packaged as clean Python functions, are also useful when repurposed for labeling parts of ontologies, topic clusters, etc. [1]. Will definitely star and keep an eye on the repo!

[1] https://jina.ai/news/submodular-optimization-for-text-select...

Tananon•3mo ago
Nice, I actually read that Jina article when it was published, but forgot they use facility location as well! The saturated coverage algorithm looks pretty interesting, I'll have a look at how feasible it would be to add that to Pyversity.
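
For reference, greedy facility location (one of the submodular strategies from that article) fits in a few lines of NumPy; an illustrative sketch, not the Jina or Pyversity implementation:

    import numpy as np

    def facility_location(embeddings, k):
        """Greedily pick k items so that every item in the pool is well
        covered by its most similar selected item."""
        emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        sim = emb @ emb.T              # pairwise cosine similarities
        coverage = np.zeros(len(emb))  # best similarity to the selected set
        selected = []
        for _ in range(k):
            # Marginal gain: total coverage improvement if item i were added.
            gains = np.maximum(sim, coverage).sum(axis=1) - coverage.sum()
            if selected:
                gains[np.array(selected)] = -np.inf
            best = int(np.argmax(gains))
            selected.append(best)
            coverage = np.maximum(coverage, sim[best])
        return selected
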
ehsanu1•3mo ago
This seems like a good template to generate synthetic data, with positive/negative examples, allowing an embedding model to be aligned more semantically to underlying concepts.

Anyway, I'd hope reranking models do better; have you tried those?

paulfharrison•3mo ago
Producing a diverse list of results may still help in a couple of ways here.

* If there are a lot of lexical matches, real semantic matches may still be present, just far down the list. A diverse set of, say, 20 results may have a better chance of including a semantic match than the top 20 results by raw score.

* There might be a lot of semantic matches, but the vast majority of them follow a particular viewpoint. A diverse set of results has a better chance of including the viewpoint that solves the problem.

Yes, semantic matching is important, but this is solving an orthogonal and complementary problem. Both are important.

liqilin1567•3mo ago
It would be better if there were some real-world performance tests and comparisons with other embedding-only search methods.
Tananon•3mo ago
That's indeed something I plan to add in the near future. I'll probably add a tutorial as well to showcase how you can use this with e.g. sentence transformers. There are some pretty good benchmarks in the paper that inspired some of these algorithms (https://arxiv.org/pdf/1709.05135); I'll most likely try to reproduce some of those.