frontpage.
Aftermath of North Korean Frigate Launch Seen in Satellite Image

https://www.twz.com/air/aftermath-of-disastrous-north-korean-frigate-launch-seen-in-satellite-image
2•perihelions•5m ago•0 comments

Root. For. Your. Friends

https://josephthacker.com/personal/2025/05/13/root-for-your-friends.html
1•rez0123•5m ago•0 comments

Ultra-low power, miniature electrophysiological electronics

https://starfishneuroscience.com/blog/ultra-low-power-miniature-electrophysiological-electronics/
1•walterbell•5m ago•0 comments

BYD sells more electric vehicles in Europe than Tesla for first time

https://www.ft.com/content/53ec9a08-1112-4969-8982-2d9f06ee8ea4
2•zhengiszen•11m ago•0 comments

Judge blocks Trump admin's ban on Harvard accepting international students

https://www.theguardian.com/education/2025/may/23/harvard-university-sues-trump-administration-ban-foreign-students
8•teleforce•16m ago•0 comments

No Internet Access? SSH to the Rescue

https://isc.sans.edu/diary/31932
1•indigodaddy•19m ago•0 comments

FreeBSD 2025 Q1 Status Report

https://www.freebsd.org/status/report-2025-01-2025-03/
1•vermaden•19m ago•0 comments

A Popular College Major Has One of the Highest Unemployment Rates

https://www.newsweek.com/computer-science-popular-college-major-has-one-highest-unemployment-rates-2076514
1•hackernj•22m ago•0 comments

Evaluating AI Agents with Azure AI Evaluation

https://techcommunity.microsoft.com/blog/azure-ai-services-blog/evaluating-agentic-ai-systems-a-deep-dive-into-agentic-metrics/4403923
2•airylizard•26m ago•0 comments

KotlinConf 2025 Unpacked

https://blog.jetbrains.com/kotlin/2025/05/kotlinconf-2025-language-features-ai-powered-development-and-kotlin-multiplatform/
2•dustingetz•31m ago•0 comments

What Is a Pipeline?

https://inconsistentrecords.co.uk/blog/what-the-fook-is-a-pipeline/
1•circadian•32m ago•1 comments

Develop, Deploy, Operate

https://queue.acm.org/detail.cfm?id=3733703
2•gpi•34m ago•0 comments

Show HN: We added notes to our indie read-it-later app

https://apps.apple.com/us/app/eyeball-ai-bookmarks-notes/id6670705634
1•quinto_quarto•35m ago•0 comments

EU Cyber Resilience Act is about to tell us how to code

https://berthub.eu/articles/posts/eu-cra-secure-coding-solution/
5•dr_dshiv•36m ago•0 comments

Ask HN: Is anyone else burnt out on AI?

3•throwaway12345t•37m ago•1 comments

With REAL ID, America now has national ID cards and internal passports

https://reason.com/2025/05/23/with-real-id-america-now-has-national-id-cards-and-internal-passports/
8•cempaka•40m ago•0 comments

The Snowclones Database

https://snowclones.org/
1•layer8•44m ago•0 comments

Rork – Build any cross-platform mobile app, fast

https://rork.com
2•colesantiago•50m ago•2 comments

Circle K just opened a new spot exclusively for EV charging with no gas pumps

https://electrek.co/2025/05/23/circle-k-opens-first-ev-charging-only-spot-with-no-gas-pumps/
5•gnabgib•54m ago•0 comments

I don't want to use GitHub ... what would you recommend?

5•ColinWright•56m ago•5 comments

FlectoLine: Façades in motion

https://www.uni-stuttgart.de/en/university/news/all/FlectoLine-Facades-in-motion/
2•geox•59m ago•0 comments

Expert perspectives on 10-year moratorium of state AI laws

https://www.techpolicy.press/expert-perspectives-on-10-year-moratorium-on-enforcement-of-us-state-ai-laws/
1•anigbrowl•1h ago•0 comments

Cheating at Casinos [video]

https://www.youtube.com/watch?v=0QWP4IZOu0I
2•novaleaf•1h ago•0 comments

Ask HN: What's your LLM/AI development workflow?

2•wewewedxfgdf•1h ago•0 comments

Indian IT giant investigates M&S cyber attack link

https://www.bbc.co.uk/news/articles/c989le2p3lno
2•bodelecta•1h ago•0 comments

Building a Giant Catchers' Mitt on the Moon

https://www.universetoday.com/articles/building-a-giant-catchers-mitt-on-the-moon
3•consumer451•1h ago•0 comments

Amazon has canceled its Wheel of Time series, despite 97% Rotten Tomatoes rating

https://www.theverge.com/news/673899/amazon-wheel-of-time-canceled
11•rock57•1h ago•9 comments

WordPress 6.8 breaking update discovery for plugins not hosted on .org

https://old.reddit.com/r/Wordpress/comments/1ktpzv3/wordpress_68_seems_to_be_breaking_update/
4•ValentineC•1h ago•0 comments

Fast Flux Trainer: Fine-tune FLUX in ~2 minutes for ~$2

https://replicate.com/replicate/fast-flux-trainer/train
1•lucataco•1h ago•1 comments

Dyson PencilVac [video]

https://www.youtube.com/watch?v=ve6JuJV17FQ
3•tosh•1h ago•0 comments

You Don't Need Re-Ranking: Understanding the Superlinked Vector Layer

https://superlinked.com/vectorhub/articles/why-do-not-need-re-ranking
20•softwaredoug•6h ago

Comments

petesergeant•5h ago
> The key idea here is that with Superlinked, your search system can understand what you want and adjust accordingly.

I read as much of this article as I could be bothered to and still didn’t really understand how it removes the need for reranking. It starts talking about mixing vector and non-vector search, so ok fine. Is there any signal here or is it all marketing fluff?

dev_l1x_be•5h ago
I might not know enough about this subject, but I think the main idea is to make the initial search retrieval much smarter and more comprehensive, so the results are already good enough, lessening or removing the need for a second, often costly, re-ranking step.

They achieve this in a few different ways:

- Unified Multimodal Vectors (Mixing Data Types from the Start)

Instead of just creating a vector from the text description, Superlinked creates a single, richer vector for each item (e.g., a pair of headphones) right when it's indexed. This "multimodal vector" already encodes not just the text's meaning, but also its numerical attributes (like price, rating, battery life) and categorical attributes (like "electronics," "on-ear").

- Dynamic Query-Time Weighting (Telling the Search What Matters Now)

When you make a query, you can tell Superlinked how important each of those "baked-in" aspects of the multimodal vector is for that specific search. For example: "Find affordable wireless headphones under $200 and high ratings" – you can weight the "price" aspect heavily (to favor lower prices), the "rating" aspect heavily, and the "text similarity" to "wireless headphones" also significantly, all within the initial query to the unified vector.

- Hard Filtering Before Vector Search (Cutting Out Irrelevant Items Early)

You apply these hard filters (like price <= 200 or category == "electronics") before the vector similarity search even happens on the remaining items.

If these are implemented well, Superlinked could improve the quality of initial retrieval to a point where a separate re-ranking stage becomes less necessary.
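
The three ideas above can be sketched in a few lines of Python. This is a toy illustration only, not the actual Superlinked API: the pseudo-encoder, the item fields, and the weighting scheme are all invented stand-ins.

```python
import numpy as np

def embed_text(text, dim=4):
    # Stand-in for a real text encoder: deterministic pseudo-embedding.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def item_vector(item):
    # Unified multimodal vector: [text | price | rating] baked in at index time.
    text = embed_text(item["description"])
    price = np.array([1.0 / (1.0 + item["price"] / 100)])  # cheaper -> larger
    rating = np.array([item["rating"] / 5.0])
    return np.concatenate([text, price, rating])

def search(items, query, weights, price_cap):
    # Hard filter first, then weighted similarity over the unified vectors.
    candidates = [it for it in items if it["price"] <= price_cap]
    w = np.concatenate([np.full(4, weights["text"]),
                        [weights["price"]], [weights["rating"]]])
    q = np.concatenate([embed_text(query), [1.0], [1.0]])
    scored = [(float(np.dot(w * item_vector(it), q)), it["name"])
              for it in candidates]
    return sorted(scored, reverse=True)

items = [
    {"name": "AirMax", "description": "wireless headphones", "price": 150, "rating": 4.6},
    {"name": "BassPro", "description": "wired studio headphones", "price": 90, "rating": 3.9},
    {"name": "EliteX", "description": "wireless headphones", "price": 450, "rating": 4.9},
]
results = search(items, "wireless headphones",
                 weights={"text": 1.0, "price": 2.0, "rating": 2.0},
                 price_cap=200)
print(results[0][1])  # EliteX is hard-filtered out before scoring
```

The point of the sketch is the shape: one concatenated vector per item, per-query weights applied per segment, and the price filter applied before any similarity is computed.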

Does this answer your question?

janalsncm•5h ago
I don’t think the author understands the purpose of reranking.

During vector retrieval, we retrieve documents in sublinear time from a vector index. This allows us to reduce the number of documents from potentially billions to a much smaller number. The purpose of re-ranking is to allow high powered models to evaluate docs much more closely.

It is true that we can attempt to distill that reranking signal into a vector index. Most search engines already do this. But there is no replacement for using the high powered behavior based models in reranking.
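
The funnel being described is easy to sketch as a toy two-stage pipeline (all sizes and seeds invented; the "heavy" re-ranker here is just a stand-in for a cross-encoder or behavior-based model that is too slow to run over the whole corpus):

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_DOCS, K_CANDIDATES = 64, 10_000, 100

# Random unit-vector corpus, plus a query that sits near doc 42.
docs = rng.normal(size=(N_DOCS, DIM))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)
query = docs[42] + 0.05 * rng.normal(size=DIM)
query /= np.linalg.norm(query)

# Stage 1: cheap retrieval narrows N_DOCS to K_CANDIDATES.
# Brute-force dot products here; a real system uses an ANN index
# to get sublinear time.
candidates = np.argsort(docs @ query)[-K_CANDIDATES:]

# Stage 2: "heavy" model scores only the shortlist of K_CANDIDATES docs.
reranked = sorted(candidates, key=lambda i: float(docs[i] @ query), reverse=True)
print(reranked[0])  # doc 42 survives the funnel
```

Stage 2 never sees the other 9,900 documents, which is what makes an expensive per-document model affordable.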

_QrE•5h ago
I agree.

> "The real challenge in traditional vector search isn't just poor re-ranking; it's weak initial retrieval. If the first layer of results misses the right signals, no amount of re-sorting will fix it. That's where Superlinked changes the game."

Currently a lot of RAG pipelines use the BM25 algorithm for retrieval, which is very good. You then use an agent to rerank stuff only after you've got your top 5-25 results, which is not that slow or expensive, if you've done a good job with your chunking. Using metadata is also not really a 'new' approach (well, in LLM time at least) - it's more about what metadata you use and how you use them.
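
A minimal, self-contained BM25 scorer shows the first stage of that pipeline (toy documents; k1 and b are the conventional defaults):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    # Score every doc against the query with the standard BM25 formula.
    tokenized = [d.lower().split() for d in docs]
    N = len(docs)
    avgdl = sum(len(d) for d in tokenized) / N
    df = Counter()
    for d in tokenized:
        df.update(set(d))  # document frequency per term
    scores = []
    for d in tokenized:
        tf = Counter(d)
        s = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [
    "postgres connection pooling with pgbouncer",
    "how to tune postgres query planner statistics",
    "rust borrow checker error explained",
]
scores = bm25_scores("postgres query tuning", docs)
top = max(range(len(docs)), key=scores.__getitem__)
print(docs[top])
```

In the pipeline described above, only the top handful of BM25 hits would then be passed to an agent or cross-encoder for re-ranking.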

nostrebored•5h ago
If this were true, and initial candidate retrieval were a solved problem, companies where search is revenue-aligned wouldn't employ teams of very well-paid people looking for marginal improvements here.

Treating BM25 as a silver bullet is just as strange as treating vector search as the "true way" to solve retrieval.

_QrE•4h ago
I don't mean to imply that it's a solved problem; all I'm saying is that in a lot of cases, the "weak initial retrieval" assertion stated by the article is not true. And if you can get a long way using what has now become the industry standard, there's not really a case to be made that BM25 is bad/unsuited, unless the improvement you gain from something more complex is more than just marginal.

nostrebored•5h ago
That "much smaller number" is the tricky part. Most rerankers degrade substantially in quality over a few hundred candidates. No amount of powerful rerankers will make "high powered behavior based models" more effective. Those behavioral signals and intents have to be encoded in the query and the latent space.

janalsncm•2h ago
> Most rerankers degrade substantially in quality over a few hundred candidates.

The reason we don’t use the most powerful models on thousands/millions of candidates is because of latency, not quality. It’s the same reason we use ANN search rather than cosine sim for every doc in the index.

laszlo_cravens•3h ago
I agree as well. Especially in the context of recommendation systems, the decoupling of retrieval from a heavy ranker has a lot of benefits. It allows for 1) faster experimentation, and 2) the use of different retrieval sources. In reality, the retrieval might consist of a healthy mix of different algorithms (collaborative filtering, personalized page rank, word2vec/2tower embeddings, popular items near the user, etc.) and fallback heuristics.

AmazingTurtle•5h ago
At everfind.ai, we've found a middle ground that leverages both structured and unstructured data effectively in retrieval systems. We utilize a linear OpenSearch index for chunked information but complement this by capturing structured metadata during ingestion—either via integrations or through schema extraction using LLMs. This structured metadata allows us to take full advantage of OpenSearch's field-type capabilities.

At retrieval time, our approach involves a broad "prefetching" step: we quickly identify the most relevant schemas, perform targeted vector searches within these schemas, and then rerank the top results using the LLM before agentic reasoning and execution. The LLM is provided with carefully pre-selected tools and fields, empowering it to dive deeper into prefetched results or explore alternate queries dynamically. This method significantly boosts RAG pipeline performance, ensuring both speed and relevance.

Additionally, by limiting visibility of the "agentic execution context" to just the current operation span and collapsing it in subsequent interactions, we keep context sizes manageable, further enhancing responsiveness and scalability.
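
As a rough paraphrase of that flow (every name and scoring rule here is invented; everfind's actual stack is not public), the prefetch-then-targeted-search shape looks like:

```python
def prefetch_schemas(query, schemas, top_n=2):
    # Cheap pass: score each schema by keyword overlap with the query.
    q = set(query.lower().split())
    scored = sorted(schemas, key=lambda s: -len(q & set(s["keywords"])))
    return scored[:top_n]

def retrieve(query, schemas):
    # Targeted search only within the prefetched schemas, then a final
    # sort standing in for the LLM re-ranking step described above.
    q = set(query.lower().split())
    hits = []
    for schema in prefetch_schemas(query, schemas):
        for doc in schema["docs"]:
            overlap = len(q & set(doc.lower().split()))
            hits.append((overlap, doc))
    return [doc for _, doc in sorted(hits, reverse=True)]

schemas = [
    {"keywords": ["invoice", "billing"], "docs": ["invoice 1042 overdue billing"]},
    {"keywords": ["hr", "leave"], "docs": ["annual leave policy update"]},
    {"keywords": ["infra", "deploy"], "docs": ["deploy pipeline failed on infra node"]},
]
print(retrieve("overdue invoice billing", schemas)[0])
```

The schema prefetch keeps the expensive vector and LLM work scoped to a small slice of the index, which is where the speed/relevance trade-off described above comes from.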

ccleve•4h ago
Is there a paper or some other explanation of what they're doing under the hood?

rooftopzen•4h ago
>"When it comes to vector search, it's not just about matching words. Understanding the meaning behind them is equally important."

This statement ^ is clearly incorrect on its premise: semantic meaning is already vectorized, and the problems with that are old news and have little to do with indexing.

I went through the article though, and realized the company is probably on its last legs - an effort that was interesting 2 years ago for about a week, but funded by non-developers without any gauge of reality.