frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
143•theblazehen•2d ago•42 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
668•klaussilveira•14h ago•202 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
949•xnx•19h ago•551 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
122•matheusalmeida•2d ago•33 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
53•videotopia•4d ago•2 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
17•kaonwarb•3d ago•19 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
229•isitcontent•14h ago•25 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
28•jesperordrup•4h ago•16 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
223•dmpetrov•14h ago•118 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
331•vecti•16h ago•143 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
494•todsacerdoti•22h ago•243 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
381•ostacke•20h ago•95 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•20h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
288•eljojo•17h ago•169 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
412•lstoll•20h ago•278 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
63•kmm•5d ago•6 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
19•bikenaga•3d ago•4 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
90•quibono•4d ago•21 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
256•i5heu•17h ago•196 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
32•romes•4d ago•3 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
44•helloplanets•4d ago•42 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
12•speckx•3d ago•6 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
59•gfortaine•12h ago•25 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
33•gmays•9h ago•12 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1066•cdrnsf•23h ago•446 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
150•vmatsiiako•19h ago•67 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
288•surprisetalk•3d ago•43 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
150•SerCe•10h ago•138 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
183•limoce•3d ago•98 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
73•phreda4•13h ago•14 comments

Using Vectorize to build an unreasonably good search engine in 160 lines of code

https://blog.partykit.io/posts/using-vectorize-to-build-search/
143•ColinWright•1mo ago

Comments

Supermancho•1mo ago
The site has a neat feature where you can see other visitors' pointers, marked with what look like regional notations, scrolling through the content.
wormpilled•1mo ago
It's amazing! Got so distracted, gotta switch to reader mode haha. Never seen anything like that.
fnord77•1mo ago
that got annoying fast
wqaatwt•1mo ago
Seems fantastic for analytics. I wonder how many news sites do that
TheLNL•1mo ago
It doesn't look to be live, though; I didn't see anyone reacting to the weird cursor movements I was making.
mips_avatar•1mo ago
There are a lot of previously intractable problems getting solved with these new embedding models. I've been building a geocoder for the past few months, and it's been remarkable how close to Google Places I can get with just slightly enriched OpenStreetMap data plus embedding vectors.
occupant•1mo ago
That sounds really interesting. If you’re open to it, I’d be curious what the high-level architecture looks like (what gets embedded, how you rank results)?
isaachh•1mo ago
Id love to hear more about this
robrenaud•1mo ago
What are you embedding? Are you doing a geo-restricted area (a small universe)?
mips_avatar•1mo ago
Yeah, basically similar to the Gemini Google Maps grounding API, except with all open data.
repeekad•1mo ago
What about re-ranking? In my limited experience, adding fast, cheap re-ranking with something like Cohere to the query results took an okay vector-based search and made the top 1-5 results much stronger.
vjerancrnjak•1mo ago
Query expansion works better.
repeekad•1mo ago
Query expansion happens before the retrieval query, reranking is applied after the ranked results are returned, both are important
sa-code•1mo ago
Query expansion and re-ranking can, and often do, coexist.

Roughly: first there is the query analysis/manipulation phase, where you might have NER, spell check, query expansion/relaxation, etc.

Then there is the selection phase, where you retrieve all items that are relevant. Sometimes people will bring in results from both text- and vector-based indices. Perhaps an additional layer to group results.

Then, finally, you have the reranking layer, using a cross-encoder model, which might even have some personalisation in the mix.

Also, with vector search you might not necessarily need query expansion, since semantic similarity already does loose association. But every domain is unique and there's only one way to find out.
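A toy sketch of the three phases described above. All names, the synonym table, and the scoring heuristics are illustrative stand-ins, not any real library's API:

```python
def expand_query(query):
    # Query analysis phase: hand-written synonyms stand in for NER,
    # spell check, or an LLM-based expander.
    synonyms = {"car": ["automobile", "vehicle"]}
    terms = query.lower().split()
    expanded = list(terms)
    for t in terms:
        expanded += synonyms.get(t, [])
    return expanded

def select_candidates(terms, docs):
    # Selection phase: keep any document matching at least one term
    # (a stand-in for merging text- and vector-index results).
    return [d for d in docs if any(t in d.lower() for t in terms)]

def rerank(query, candidates):
    # Reranking phase: order by term overlap with the raw query
    # (a stand-in for a cross-encoder model).
    qterms = set(query.lower().split())
    return sorted(candidates,
                  key=lambda d: len(qterms & set(d.lower().split())),
                  reverse=True)

docs = ["a red car for sale", "vehicle maintenance tips", "cooking pasta"]
hits = rerank("car repair", select_candidates(expand_query("car repair"), docs))
```

Note how expansion widens recall (the "vehicle" doc is found) while reranking restores precision ordering against the original query.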

sgk284•1mo ago
Reranking is definitely the way to go. We personally found common reranker models to be a little too opaque (can't explain to the user why this result was picked) and not quite steerable enough, so we just use another LLM for reranking.

We open-sourced our impl just this week: https://github.com/with-logic/intent

We use Groq with gpt-oss-20b, which gives great results and only adds ~250ms to the processing pipeline.

If you use mini / flash models from OpenAI / Gemini, expect it to be 2.5s-3s of overhead.
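A rough sketch of the LLM-as-reranker idea: number the candidates in a prompt and parse a ranked JSON reply. The prompt format, response shape, and function names here are hypothetical, and the actual Groq/OpenAI call is replaced by a mocked reply:

```python
import json

def build_rerank_prompt(query, candidates):
    # Number the candidates so the model can refer to them by index.
    numbered = "\n".join(f"{i}: {c}" for i, c in enumerate(candidates))
    return (f"Rank these results for the query {query!r}:\n{numbered}\n"
            'Reply as JSON: [{"index": int, "reason": str}, ...]')

def apply_ranking(candidates, llm_json):
    # The "reason" field is what makes this approach explainable to
    # the end user, unlike an opaque reranker score.
    return [(candidates[r["index"]], r["reason"]) for r in json.loads(llm_json)]

candidates = ["intro to SQL", "SQL JOIN tutorial"]
prompt = build_rerank_prompt("how do joins work", candidates)
# Mocked model reply, standing in for the real API call:
mock_reply = ('[{"index": 1, "reason": "directly about joins"},'
              ' {"index": 0, "reason": "background only"}]')
ranked = apply_ranking(candidates, mock_reply)
```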

simonw•1mo ago
I was super-excited about vector search and embeddings in 2024 but my enthusiasm has faded somewhat in 2025 for a few reasons:

- LLMs with a grep or full-text search tool turn out to be great at fuzzy search already - they throw a bunch of OR conditions together and run further searches if they don't find what they want

- ChatGPT web search and Claude Code code search are my favorite AI-assisted search tools and neither bother with vectors

- Building and maintaining a large vector search index is a pain. The vectors are usually pretty big and you need to keep them in memory to get truly great performance. FTS and grep are way less hassle.

- Vector matches are weird. So you get back the top twenty results... those might be super relevant or they might be total garbage, it's on you to do a second pass to figure out if they're actually useful results or not.

I expected to spend much of 2025 building vector search engines, but ended up not finding them as valuable as I had thought.
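The "bunch of OR conditions" pattern above can be sketched in a few lines; the terms and file contents are made up for illustration:

```python
import re

def or_pattern(terms):
    # Broad OR query, as an agent might construct from a fuzzy request
    # before grepping the corpus.
    return re.compile("|".join(map(re.escape, terms)), re.IGNORECASE)

lines = ["def parse_config(path):", "# TODO: cleanup", "class ConfigParser:"]
pattern = or_pattern(["config", "settings"])
hits = [line for line in lines if pattern.search(line)]
```

If nothing matches, the agent simply widens the term list and tries again, which is what makes this loop surprisingly effective.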

markerz•1mo ago
The problem with LLMs using full-text search is that they're very slow compared to a vector search query. I will admit the results are impressive, but often that's because I kick off an agent query and step away for five minutes.

On the other hand, generating and regenerating embeddings for all your documents can be time consuming and costly, depending on how often you need to reindex

leobg•1mo ago
Not an apples-to-apples comparison. Vector search is only fast after you have built an index. The same is true for full-text search: that too will be blazing fast once you have built an index (like Google pre-transformer).
markerz•1mo ago
LLMs will always have the tool-call overhead, which I find to be quite expensive (seconds) on most models. Directly using vector databases without the LLM interface gets you a lot of the semantic search ability without the multi-second latency, which is pretty nice for querying documents on a website, e.g. finding relevant pages on a documentation site or showing related pages. It can be applied to GitHub Issues to deduplicate issues, or to surface existing issues matching what the user is about to report. There are plenty of places where "cheap and fast" is better and an LLM interface just gets in the way. I think this is a lot of the unsqueezed juice in our industry.
Someone•1mo ago
> The vectors are usually pretty big and you need to keep them in memory to get truly great performance. FTS and grep are way less hassle.

If you find disk I/O for grep acceptable, why would it matter for vectors? They aren’t much bigger, are they?

marginalia_nu•1mo ago
The ultimate bottleneck in any search application is IOPS: how much data you can get off disk to compare within a tolerable time span.

Embeddings are huge compared to what you need with FTS, which generally has good locality, compresses extremely well, and permits sub-linear intersection algorithms and other tricks to make the most of your IOPS.

Regardless of vector size, you are unlikely to get more than one embedding per I/O operation with a vector approach. Even if you can fit more vectors into a block, there is no good way of arranging them to ensure efficient locality like you can with e.g. a postings list.

Thus off a 500K IOPS drive, given a 100ms execution window, your theoretical upper bound is 50K embeddings ranked, assuming actual ranking takes no time and no other disk operations are performed and you have only a single user.

Given you are more than likely comparing multiple embeddings per document, this carriage turns to a pumpkin pretty rapidly.
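The back-of-envelope bound above, worked through (the 4 embeddings per document is an illustrative assumption, not a figure from the comment):

```python
iops = 500_000           # random reads per second for the drive
budget_s = 0.100         # 100 ms execution window
embeddings_per_io = 1    # vectors rarely share useful block locality
embedding_bound = int(iops * budget_s) * embeddings_per_io  # 50K embeddings

# If each document contributes, say, 4 embeddings, the number of
# documents you can rank in the window shrinks accordingly:
document_bound = embedding_bound // 4
```

This is the "carriage turns to a pumpkin" effect: every embedding per document divides the document budget, while FTS postings lists amortize many documents per I/O.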

croemer•1mo ago
Doesn't ChatGPT web search use a (vector) search engine under the hood, e.g. Bing? Do we know how it works exactly?
simonw•1mo ago
I've not heard about Bing using vector search, at least outside of their image search feature https://arxiv.org/abs/1802.04914

Information about how Bing text search works appears to be pretty sparse though.

One of the great mysteries to me right now is how ChatGPT search actually works.

It was Bing when they first launched it, but OpenAI have been investing a ton into their own search infrastructure since then. I can't figure out how much of it is Bing these days vs their own home-rolled system.

What's confusing is how secretive OpenAI are about it! I would personally value it a whole lot more if I understood how it works.

So maybe it's way more vector-based than I believe.

I'd expect any modern search engine to have aspects of vectors somewhere - some kind of hybrid BM25 + vectors thing, or using vectors for re-ranking after retrieving likely matches via FTS. That's different from being pure vectors though.
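A minimal sketch of that hybrid-scoring idea: blend a lexical (BM25-style) score with vector similarity. The weight and the raw scores are illustrative; real systems normalize BM25 before blending:

```python
def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def hybrid_score(bm25, query_vec, doc_vec, alpha=0.5):
    # alpha blends lexical evidence (bm25) with semantic evidence
    # (vector similarity); alpha=1.0 degenerates to pure FTS.
    return alpha * bm25 + (1 - alpha) * cosine(query_vec, doc_vec)
```

The same `cosine` helper can also serve as the reranking step: retrieve candidates via FTS, then reorder them by vector similarity.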

windexh8er•1mo ago
The fact that it's not documented also becomes a trust issue. OpenAI is clearly headed toward monetizing results, and if search is biased or injected with unlabeled ads or questionable sources, they become a new vector for untrustworthy results and potential misdirection or misinformation.
jdthedisciple•1mo ago
In my experience, vector search (top 50 results) combined with reranking (top 5-15 of those 50) not only yields great results but is also quite performant if done right (which is not hard!).
softwaredoug•1mo ago
The main problem isn’t embeddings, in my experience, it’s that “vector search” is the wrong conceptual framework to think about the problem

We need to think about query+content understanding before deciding a sub problem happens to be helped by embeddings. RAG naively looks like a question answering “passage retrieval” problem, when in reality it’s more structured retrieval than we first assume (and LLMs can learn how to use more structured approaches to explore data much better now than in 2022)

https://softwaredoug.com/blog/2025/12/09/rag-users-want-affo...

bonecrusher2102•1mo ago
Love seeing you in these threads! We use “AI Powered Search” as a bible on our team. Thanks for all your contributions to the community.
softwaredoug•1mo ago
Thank you. Trey gets the lion's share of the credit for most of that book :)
sa-code•1mo ago
Models like BGE are small, and quantized versions will fit in the browser or on a tiny machine. Not sure why everyone reaches for an API as their first choice.
yuzhun•1mo ago
While embeddings are generally not required in the context of code, I am interested in how they perform in the legal and regulatory domain, where documents are substantially longer. Specifically, how do embeddings compare with approaches such as ripgrep in terms of effectiveness?
RomanPushkin•1mo ago
You might be getting a good _recall_ rate, since Vectorize search is ANN, but _precision_ can be low because the reranker piece is missing. So I would improve it slightly by adding ~10 more lines of code and introducing a reranker after the search (while slightly increasing topK). Query expansion at the beginning can also be added to improve recall.
____tom____•1mo ago
You didn't build a search engine in 160 lines of code. You built a client for a search engine in 160 lines of code. The vector database is providing the search.
ivanjermakov•1mo ago
Look, I made a thing in two lines of code!

    from everything import thing
    thing()
fortyseven•1mo ago
Impressive! My company is willing to buy this from you for 200 million dollars.
croemer•1mo ago
Missing a (2024)
novoreorx•1mo ago
True. It's as if embeddings and vector search had only just come out.
daquisu•1mo ago
Now it's even easier: Cloudflare has a beta product called AI Search that implements most of what these 160 lines of code do.
abhinavb05•1mo ago
At my workplace we are using vector embeddings to build a recommendation system, and the results are amazing.
wg0•1mo ago
Could you elaborate on the storage engine and processing pipeline, if it's not confidential?
alansaber•1mo ago
Good to see the article touch on the performance impact of a niche vs. a general embedding model, and of aggressive subword tokenization.
jdthedisciple•1mo ago
So it's an ad for PartyKit, and they're just doing what anyone does by now?

Really dislike this type of content...