frontpage.

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•59s ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
1•okaywriting•7m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
1•todsacerdoti•10m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•10m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•11m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•12m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•13m ago•0 comments

Expertise, AI and the Work of the Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•13m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
3•pseudolus•13m ago•1 comment

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•18m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
2•bkls•18m ago•0 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•19m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
4•roknovosel•19m ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•27m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•28m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
1•surprisetalk•30m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
2•surprisetalk•30m ago•0 comments

Don't go to physics grad school and other cautionary tales

https://scottlocklin.wordpress.com/2025/12/19/dont-go-to-physics-grad-school-and-other-cautionary...
2•surprisetalk•30m ago•0 comments

Lawyer sets new standard for abuse of AI; judge tosses case

https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-fro...
3•pseudolus•31m ago•0 comments

AI anxiety batters software execs, costing them combined $62B: report

https://nypost.com/2026/02/04/business/ai-anxiety-batters-software-execs-costing-them-62b-report/
1•1vuio0pswjnm7•31m ago•0 comments

Bogus Pipeline

https://en.wikipedia.org/wiki/Bogus_pipeline
1•doener•32m ago•0 comments

Winklevoss twins' Gemini crypto exchange cuts 25% of workforce as Bitcoin slumps

https://nypost.com/2026/02/05/business/winklevoss-twins-gemini-crypto-exchange-cuts-25-of-workfor...
2•1vuio0pswjnm7•32m ago•0 comments

How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
3•obscurette•33m ago•0 comments

Cycling in France

https://www.sheldonbrown.com/org/france-sheldon.html
2•jackhalford•34m ago•0 comments

Ask HN: What breaks in cross-border healthcare coordination?

1•abhay1633•34m ago•0 comments

Show HN: Simple – a bytecode VM and language stack I built with AI

https://github.com/JJLDonley/Simple
2•tangjiehao•37m ago•0 comments

Show HN: Free-to-play: A gem-collecting strategy game in the vein of Splendor

https://caratria.com/
1•jonrosner•38m ago•1 comment

My Eighth Year as a Bootstrapped Founder

https://mtlynch.io/bootstrapped-founder-year-8/
1•mtlynch•38m ago•0 comments

Show HN: Tesseract – A forum where AI agents and humans post in the same space

https://tesseract-thread.vercel.app/
1•agliolioyyami•39m ago•0 comments

Show HN: Vibe Colors – Instantly visualize color palettes on UI layouts

https://vibecolors.life/
2•tusharnaik•40m ago•0 comments

Show HN: Lingo – A linguistic database in Rust with nanosecond-level performance

42•peerlesscasual•4mo ago
Hi HN, I made Lingo - the SQLite of semantic search.

I'm a self-taught developer and researcher who left school at 16, and I've spent some time exploring a first-principles approach to system design for various frontier problems. In this case, that problem is AI, specifically challenging the 'bigger is better' transformer paradigm.

Lingo is the first piece of that research, a high-performance linguistic database designed to run on-device.

The full technical overview and manifesto is here: https://medium.com/@robm.antunes/bcd1e9752af6

The paper has been archived on Zenodo with a DOI: https://doi.org/10.5281/zenodo.17196613

The code is open source and can be found at https://github.com/RobAntunes/lingodb. It's currently broken and feature-incomplete, but I'm working on it - I just wanted to start getting some feedback.

All benchmarks are reproducible from the repo and can also be found in the various texts.

As an independent without academic affiliation, I'd be incredibly grateful for your feedback! I'm here to answer any questions.

Cheers!

Comments

apavlo•4mo ago
> • Memory-Mapping (mmap): We treat the database file as if it’s already in memory, eliminating the distinction between disk and RAM.

Ugh, not another one...

0x264•4mo ago
Yep, another developer enthusiastically proposing mmap as an "easy win" for database design, when in reality it often causes hard-to-debug correctness and performance problems.
nurettin•4mo ago
To be fair, I use it to share financial time series between multiple processes, and as long as there is a single writer it works well. It's been in production for several years.
pclmulqdq•4mo ago
Creating a shared memory buffer by mapping it as a file is not the same as mapping files on disk. The latter has weird and subtle problems, whereas the former just works.
nurettin•4mo ago
To be clear, I am indeed doing mmap to the same file on disk. Not using shmap. But there is only one thread in one process writing to it and the readers are tolerant to millisecond delays.
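
Roughly what that setup looks like, as a minimal sketch (assuming the memmap2 crate; the file name and record layout here are made up):

    // Single writer maps the file read-write; readers map the same file read-only.
    // Sketch only - no durability guarantees or cross-process synchronization here.
    use std::fs::OpenOptions;
    use memmap2::{Mmap, MmapMut};

    const SLOTS: usize = 1024;
    const RECORD: usize = 16; // e.g. 8-byte timestamp + 8-byte price

    fn writer() -> std::io::Result<()> {
        let file = OpenOptions::new().read(true).write(true).create(true).open("ticks.bin")?;
        file.set_len((SLOTS * RECORD) as u64)?;
        let mut map: MmapMut = unsafe { MmapMut::map_mut(&file)? };
        // Write one record; a reader polling the same bytes sees it once the page is updated.
        map[0..8].copy_from_slice(&1_700_000_000u64.to_le_bytes());
        map[8..16].copy_from_slice(&42.5f64.to_le_bytes());
        map.flush()?; // optional write-back to disk
        Ok(())
    }

    fn reader() -> std::io::Result<f64> {
        let file = OpenOptions::new().read(true).open("ticks.bin")?;
        let map: Mmap = unsafe { Mmap::map(&file)? };
        Ok(f64::from_le_bytes(map[8..16].try_into().unwrap()))
    }

The subtle failure modes (what's actually on disk after a crash, torn reads if a record straddles an in-progress write) are exactly what the comments above are warning about.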
pclmulqdq•4mo ago
> millisecond delays

I thought you said financial time series!

But yeah, this is a case where mmap works great - convenience, not super fast, single writer and not necessarily super durable.

nurettin•4mo ago
> I thought you said financial time series!

Yeah it is just your average normal financial time series.

madushan1000•4mo ago
Why not, though? From what I can see in the docs, these databases are supposed to be static and read-only, at least when you use them on-device.
0xdeafbeef•4mo ago
Page cache reclamation is mostly single-threaded. It's much simpler than anything you could build in user space: it has no weighting for specific pages, etc.

Trapping into the kernel flushes the branch predictor caches and the TLB, so it's not free at all.

anonzzzies•4mo ago
No issue if you know what you are doing. I'm not sure about the author, but I've known very high-perf mmap systems to run for decades without corruption or issues (in HFT/finance/payments).
porridgeraisin•4mo ago
Ctrl-F'd you here the moment I saw that in the article.
rzz3•4mo ago
Really impressive work :)
vouwfietsman•4mo ago
Ok, since you're looking for sincere feedback.

Great vision, challenging the "scale" of current AI solutions is super valid, if only for the reason that humans don't learn like this.

Architecture: despite the other comments, I am not so bothered by mmap (if read-only) as by the performance claims. If your total DB is 13 KB you should be answering queries at amazing speeds, because you're just running code on in-cache data at that point. A performance claim at that size means nothing, because what you're doing is not performance-intensive (quick sketch below).
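
For concreteness, a throwaway micro-benchmark (sizes and numbers are mine, not from the repo) - a full scan of a 13 KB buffer sits entirely in L1 cache and finishes in roughly a microsecond or less, so any indexed lookup into it is trivially "nanosecond-level":

    use std::time::Instant;

    fn main() {
        // 13 KB of data - comfortably inside a typical 32 KB L1 data cache.
        let data = vec![1u8; 13 * 1024];
        let start = Instant::now();
        // A trivial full scan; a real index lookup touches far fewer bytes than this.
        let sum: u64 = data.iter().map(|&b| u64::from(b)).sum();
        println!("sum = {}, scanned 13 KB in {:?}", sum, start.elapsed());
    }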

Claims: a frontal attack on the current paradigm would at least have to include real semantic queries, which I don't think is what you're doing at the moment; you're doing language analytics, i.e. NLP. Maybe this is how you intend to solve semantic queries later, but since it isn't what you're doing now, that should be clear from the get-go - especially because the "scale" of the current AI paradigm has nothing to do with how tokenization happens, but rather with how the statistical model is trained to answer semantic queries.

Finally, the example of "Find all Greek-origin technical terms" is a poor one because it is exactly the kind of "knowledge graph" question that was answerable before the current AI hype.

Nevertheless, love the effort, good luck!

(oh and btw: I'm not an expert, so if any of this is wrong, please correct me)

sigfubar•4mo ago
The repo is 100% AI slop.

Advice to OP: lay off the Claude Code if your goal is to become an “independent researcher”. Claude doesn’t know what it’s doing, but it’s happy to lead you into a false sense of achievement because it’ll never tell you when you’re wrong, or when it’s wrong.

mpeg•4mo ago
Bizarre, because a quick look at the code and commit log shows it was likely 100% coded by AI, so the author is not trying too hard to hide it - but they also seem to have forgotten to mention it anywhere in the README or the blog post.
rahkiin•4mo ago
Out of interest: can you elaborate how you analyzed the repo to come to this conclusion?
jdiez17•4mo ago
All of the code is imported in one commit. The rest of the commits are deleting the specs that I guess were used to generate the code. There's one commit adding code which explicitly says it was generated by Claude Code. There's basically no chance the whole codebase is not AI slop.
thunfischbrot•4mo ago
For those interested in the referenced spec:

https://github.com/RobAntunes/lingodb/blob/e8e56a2b2dfe19a27...

mpeg•4mo ago
The specs themselves seem generated with LLMs too, as in https://github.com/RobAntunes/lingodb/blob/5e3834de648debf08... – overuse of emojis, excitement, etc
teiferer•4mo ago
The title of your submission already doesn't check out. Do you know how many clock cycles a 1 GHz CPU completes in one nanosecond? One. Just reading the input argument of a function takes a "nanosecond-scale" amount of time.
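
Spelled out: 1 GHz is 10^9 cycles per second, so one nanosecond (10^-9 s) is 10^9 x 10^-9 = 1 cycle - and even an L1 cache hit typically costs around 4-5 cycles, i.e. several nanoseconds at 1 GHz.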

> I'm a self-taught developer and researcher who left school at 16, and I've spent some time exploring a first-principles approach to system design for various frontier problems.

As much as I appreciate new ways of thinking, whenever I read "first-principles approach", my alarm bells go off. More often than not it just means "I chose to ignore (or am too impatient to learn about) all insights that generations of research in this field have made". The "left school at 16" and "self-taught" parts also indicate that. This may explain the hyperbole of the title as well, as it does not pass the smell test.

If you are looking for advice, here is mine: try to not ignore those that came before you. Giants' shoulders are very wide, very high up and pretty solid. There is no shame in standing on them, but it takes effort to climb up.

ozgrakkurt•4mo ago
What an amazing comment: criticism of the title without engaging with any of the content, served with a side of character judgement.
nitishr•4mo ago
Not too sure; reading with mmap is OK, but simultaneous read/write operations are a bit tricky.
bitmagier•4mo ago
Summary from my side:

Outstanding features:

- a much better (very information-dense) representation of basic language properties directly as a property of the storage layout (which seems absolutely achievable to me)

- attention (signal) as resonance: analog wave-signal-processing methods can be used -> far less compute needed

Analysis: it will have the same fundamental limitations in terms of "understanding" and "thinking" as traditional LLMs, since its "knowledge" is still based on language itself. I expect it would be used in combination with other models that supply the nuances of actual content – namely traditional LLMs, which are focused on written text as it appears. Nevertheless, it should add a high-quality, highly efficient building block for language processing to the LLM landscape. It may also be a nice starting point for a broader rethinking of architecture patterns in favor of lower resource consumption and high quality for any kind of information.