frontpage.

Show HN: Steam Daily – A Wordle-like daily puzzle game for Steam fans

https://steamdaily.xyz
1•itshellboy•1m ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
1•spenvo•1m ago•0 comments

Just Started Using AmpCode

https://intelligenttools.co/blog/ampcode-multi-agent-production
1•BojanTomic•3m ago•0 comments

LLM as an Engineer vs. a Founder?

1•dm03514•3m ago•0 comments

Crosstalk inside cells helps pathogens evade drugs, study finds

https://phys.org/news/2026-01-crosstalk-cells-pathogens-evade-drugs.html
2•PaulHoule•4m ago•0 comments

Show HN: Design system generator (mood to CSS in <1 second)

https://huesly.app
1•egeuysall•4m ago•1 comments

Show HN: 26/02/26 – 5 songs in a day

https://playingwith.variousbits.net/saturday
1•dmje•5m ago•0 comments

Toroidal Logit Bias – Reduce LLM hallucinations 40% with no fine-tuning

https://github.com/Paraxiom/topological-coherence
1•slye514•8m ago•1 comments

Top AI models fail at >96% of tasks

https://www.zdnet.com/article/ai-failed-test-on-remote-freelance-jobs/
3•codexon•8m ago•1 comments

The Science of the Perfect Second (2023)

https://harpers.org/archive/2023/04/the-science-of-the-perfect-second/
1•NaOH•9m ago•0 comments

Bob Beck (OpenBSD) on why vi should stay vi (2006)

https://marc.info/?l=openbsd-misc&m=115820462402673&w=2
2•birdculture•12m ago•0 comments

Show HN: a glimpse into the future of eye tracking for multi-agent use

https://github.com/dchrty/glimpsh
1•dochrty•13m ago•0 comments

The Optima-l Situation: A deep dive into the classic humanist sans-serif

https://micahblachman.beehiiv.com/p/the-optima-l-situation
2•subdomain•14m ago•0 comments

Barn Owls Know When to Wait

https://blog.typeobject.com/posts/2026-barn-owls-know-when-to-wait/
1•fintler•14m ago•0 comments

Implementing TCP Echo Server in Rust [video]

https://www.youtube.com/watch?v=qjOBZ_Xzuio
1•sheerluck•14m ago•0 comments

LicGen – Offline License Generator (CLI and Web UI)

1•tejavvo•17m ago•0 comments

Service Degradation in West US Region

https://azure.status.microsoft/en-gb/status?gsid=5616bb85-f380-4a04-85ed-95674eec3d87&utm_source=...
2•_____k•17m ago•0 comments

The Janitor on Mars

https://www.newyorker.com/magazine/1998/10/26/the-janitor-on-mars
1•evo_9•19m ago•0 comments

Bringing Polars to .NET

https://github.com/ErrorLSC/Polars.NET
3•CurtHagenlocher•21m ago•0 comments

Adventures in Guix Packaging

https://nemin.hu/guix-packaging.html
1•todsacerdoti•22m ago•0 comments

Show HN: We had 20 Claude terminals open, so we built Orcha

1•buildingwdavid•22m ago•0 comments

Your Best Thinking Is Wasted on the Wrong Decisions

https://www.iankduncan.com/engineering/2026-02-07-your-best-thinking-is-wasted-on-the-wrong-decis...
1•iand675•22m ago•0 comments

Warcraftcn/UI – UI component library inspired by classic Warcraft III aesthetics

https://www.warcraftcn.com/
1•vyrotek•24m ago•0 comments

Trump Vodka Becomes Available for Pre-Orders

https://www.forbes.com/sites/kirkogunrinde/2025/12/01/trump-vodka-becomes-available-for-pre-order...
1•stopbulying•25m ago•0 comments

Velocity of Money

https://en.wikipedia.org/wiki/Velocity_of_money
1•gurjeet•28m ago•0 comments

Stop building automations. Start running your business

https://www.fluxtopus.com/automate-your-business
1•valboa•32m ago•1 comments

You can't QA your way to the frontier

https://www.scorecard.io/blog/you-cant-qa-your-way-to-the-frontier
1•gk1•33m ago•0 comments

Show HN: PalettePoint – AI color palette generator from text or images

https://palettepoint.com
1•latentio•34m ago•0 comments

Robust and Interactable World Models in Computer Vision [video]

https://www.youtube.com/watch?v=9B4kkaGOozA
2•Anon84•37m ago•0 comments

Nestlé couldn't crack Japan's coffee market. Then they hired a child psychologist

https://twitter.com/BigBrainMkting/status/2019792335509541220
1•rmason•39m ago•1 comments

Show HN: TuringDB – The fastest analytical in-memory graph database in C++

https://github.com/turing-db/turingdb
7•remy_boutonnet•1w ago
Hi HN,

I am one of the cofounders of http://turingdb.ai. We built TuringDB while working on large biological knowledge graphs and graph-based digital twins with pharma & hospitals, where existing graph databases were unusable for deep graph traversals with hundreds or thousands of hops on (crappy) machines you can find in a hospital.

https://github.com/turing-db/turingdb

TuringDB is a new in-memory, column-oriented graph database optimised for read-heavy analytical workloads:

- Milliseconds (1) for multi-hop queries on graphs with 10M+ nodes/edges

- Lock-free reads via immutable snapshots

- Git-like versioning for graphs (branch, merge, time travel queries)

- Built-in graph exploration UI for large subgraphs

We wrote TuringDB from scratch in C++ and designed it to have predictable memory and concurrency behaviour.

For example, for the Reactome biological knowledge graph, we see ~100× to 300× speedups over Neo4j on multi-hop analytical queries out of the box (details in first comment).

A free Community version is available and runnable locally:

https://docs.turingdb.ai/quickstart

https://github.com/turing-db/turingdb

Happy to answer technical questions.

(1): We actually hit sub-millisecond performance on many queries

Comments

remy_boutonnet•1w ago
Some extra context and technical details for those interested.

We built TuringDB because our workloads were dominated by analytical graph queries (multi-hop traversals, neighborhood expansion, similarity analysis) on large, relatively stable graphs extracted from scientific literature. After all, scientists don’t publish millions of new papers per second (yet). Write transaction throughput was not the bottleneck; latency was, once you need to go deep.

A few design choices that may be of interest:

- Column-oriented graph storage

Nodes, edges and properties are all stored adjacently, column-wise, to maximise cache locality during traversals. This isn’t a relational system with joins layered on top, and nodes & edges are not distinct heap-allocated objects like in Neo4j or Memgraph: everything is stored together in big columnar arrays, which improves memory efficiency and reduces the amount of random pointer-chasing done by the engine. Property values are also stored column-wise across all nodes & edges, so filtering nodes by property value is quite fast out of the box, even without any index.
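
To make the layout concrete, here is a minimal, hypothetical sketch (not TuringDB’s actual structures; `ColumnarGraph` and its fields are invented for illustration) of a CSR-style columnar adjacency layout, where a node is just an index into flat arrays rather than a heap-allocated object:

```cpp
#include <cstdint>
#include <iostream>
#include <utility>
#include <vector>

// Hypothetical sketch of a column-oriented graph layout (CSR-style).
// Nodes and edges are not individual heap objects; a node is just an index,
// and all adjacency and property data lives in flat, contiguous columns.
struct ColumnarGraph {
    // edge_offsets[v] .. edge_offsets[v + 1] delimits v's outgoing edges.
    std::vector<uint64_t> edge_offsets;   // size = node_count + 1
    std::vector<uint32_t> edge_targets;   // size = edge_count
    std::vector<uint32_t> node_labels;    // one label id per node (a column)
    std::vector<double>   node_scores;    // example property column

    // Neighbours of v are a contiguous slice of edge_targets.
    std::pair<const uint32_t*, const uint32_t*> neighbors(uint32_t v) const {
        return {edge_targets.data() + edge_offsets[v],
                edge_targets.data() + edge_offsets[v + 1]};
    }
};

int main() {
    // Tiny graph: 0 -> 1, 0 -> 2, 1 -> 2.
    ColumnarGraph g;
    g.edge_offsets = {0, 2, 3, 3};
    g.edge_targets = {1, 2, 2};
    g.node_labels  = {0, 1, 1};
    g.node_scores  = {0.1, 0.5, 0.9};

    auto [first, last] = g.neighbors(0);
    for (auto it = first; it != last; ++it)
        std::cout << "0 -> " << *it << "\n";   // prints 0 -> 1 and 0 -> 2
}
```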

We also implemented a streaming Cypher query engine from scratch, so nodes and edges are processed in chunks, in a streaming fashion, to maximise cache efficiency.
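
A rough sketch of that chunked execution style (again illustrative only; `kChunkSize` and `expandOneHop` are invented names, not engine internals), expanding a frontier one fixed-size batch at a time:

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

// Sketch of chunk-at-a-time expansion: rather than expanding one node at a
// time, the operator pulls a fixed-size batch of frontier nodes and emits
// their neighbours, keeping the working set small and cache-friendly.
constexpr size_t kChunkSize = 1024;   // invented value for illustration

std::vector<uint32_t> expandOneHop(const std::vector<uint64_t>& offsets,
                                   const std::vector<uint32_t>& targets,
                                   const std::vector<uint32_t>& frontier) {
    std::vector<uint32_t> out;
    for (size_t begin = 0; begin < frontier.size(); begin += kChunkSize) {
        size_t end = std::min(begin + kChunkSize, frontier.size());
        for (size_t i = begin; i < end; ++i) {            // one chunk
            uint32_t v = frontier[i];
            for (uint64_t e = offsets[v]; e < offsets[v + 1]; ++e)
                out.push_back(targets[e]);                 // emit neighbours
        }
        // A real streaming engine would hand the chunk's output to the next
        // operator here instead of accumulating the full result.
    }
    return out;
}

int main() {
    std::vector<uint64_t> offsets = {0, 2, 3, 3};   // same toy graph as above
    std::vector<uint32_t> targets = {1, 2, 2};
    for (uint32_t n : expandOneHop(offsets, targets, {0, 1}))
        std::cout << n << " ";                       // prints: 1 2 2
    std::cout << "\n";
}
```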

- Immutable snapshots and lock-free reads

Every read query runs against a consistent immutable snapshot of the graph. Reads are never locked, and writes never block reads. We eliminated all the locks on the read path once a snapshot is acquired. By comparison, Memgraph has to acquire a lock on each node & edge when traversing graphs from node to node. Mutexes cost CPU cycles.

This makes long-running analytical queries predictable and avoids performance cliffs under concurrency.
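
One standard way to get this behaviour, sketched here with standard-library primitives rather than the actual implementation, is to publish each immutable graph version behind an atomically swapped `shared_ptr`: a reader pins a snapshot once and then touches no locks, while a writer builds the next version and swaps the pointer.

```cpp
#include <cstdint>
#include <iostream>
#include <memory>
#include <vector>

// Sketch of snapshot-based reads: the "current" graph is an immutable object
// published through a shared_ptr. A reader grabs one reference and then runs
// with no locks; a writer builds a new version and swaps the pointer.
struct GraphSnapshot {
    std::vector<uint32_t> edge_targets;   // treated as immutable once published
    uint64_t version = 0;
};

std::shared_ptr<const GraphSnapshot> g_current =
    std::make_shared<const GraphSnapshot>();

// Read path: one atomic load, then lock-free access to immutable data.
std::shared_ptr<const GraphSnapshot> acquireSnapshot() {
    return std::atomic_load(&g_current);
}

// Write path: copy-on-write, then publish the new version atomically.
void commitNewVersion(std::vector<uint32_t> new_edges) {
    auto next = std::make_shared<GraphSnapshot>();
    next->edge_targets = std::move(new_edges);
    next->version = acquireSnapshot()->version + 1;
    std::atomic_store(&g_current,
                      std::shared_ptr<const GraphSnapshot>(std::move(next)));
}

int main() {
    commitNewVersion({1, 2, 3});
    auto snap = acquireSnapshot();     // a long-running query holds `snap`
    commitNewVersion({1, 2, 3, 4});    // concurrent write does not affect it
    std::cout << "query sees version " << snap->version << " with "
              << snap->edge_targets.size() << " edges\n";
}
```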

- Versioning as part of the storage model

Every change creates a commit just like in git. You can query any historical version of the graph at full speed, branch datasets for experiments or simulations, and merge changes back. This is critical for regulated or safety-critical domains where auditability and reproducibility matter.
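
Conceptually (a hypothetical sketch, not the real storage format), a commit is just a pointer to an immutable snapshot plus its parent commit, and a branch is a named pointer to a commit, which is what makes branching and time travel cheap:

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <memory>
#include <string>

// Hypothetical sketch of git-like graph versioning: every commit references
// an immutable snapshot plus its parent commit; branches are just named
// pointers to commits, so "time travel" means reading an older snapshot.
struct Snapshot { uint64_t node_count = 0; };   // stand-in for a real snapshot

struct Commit {
    std::shared_ptr<const Snapshot> snapshot;
    std::shared_ptr<const Commit>   parent;     // null for the root commit
    std::string                     message;
};

int main() {
    std::map<std::string, std::shared_ptr<const Commit>> branches;

    auto c1 = std::make_shared<Commit>(Commit{
        std::make_shared<Snapshot>(Snapshot{100}), nullptr, "initial load"});
    auto c2 = std::make_shared<Commit>(Commit{
        std::make_shared<Snapshot>(Snapshot{120}), c1, "add trial data"});

    branches["main"]       = c2;   // main moves forward
    branches["experiment"] = c1;   // a branch can keep pointing at history

    for (auto& [name, head] : branches)
        std::cout << name << " -> \"" << head->message << "\" ("
                  << head->snapshot->node_count << " nodes)\n";
}
```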

- We like C++ and TuringDB was born as an experiment in design space

The engine is written in C++ from scratch because we like C++ and it’s fun.

We implemented our own storage engine, query engine and column format from the ground up. We wanted to bring columnar storage and column-oriented streaming query execution to the world of graph databases. We wanted to make a graph DB that’s heavily focused on read-intensive workloads for once, instead of transactional performance. In that sense TuringDB is also an experiment in the space of possible designs for a graph database engine.

We believe in paying very careful attention to memory layout, keeping execution paths clear, and not using any external magic that has not been thought through for what we want to build.

- Knowledge graphs and GraphRAG

A common use case is grounding LLMs in structured graph context rather than relying on text-only retrieval. We’re shipping native vector search and embeddings inside TuringDB this week so graph traversal and vector similarity can be combined in one system.
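
Since that feature hasn’t shipped yet, the following is purely a toy illustration of the idea rather than the TuringDB API: expand a seed node’s neighbourhood via the graph, then rank those neighbours by cosine similarity against a query embedding.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>
#include <vector>

// Toy sketch of combining graph traversal with vector similarity (GraphRAG
// style). This is NOT the TuringDB API; it only illustrates ranking a node's
// graph neighbourhood against a query embedding.
double cosine(const std::vector<float>& a, const std::vector<float>& b) {
    double dot = 0, na = 0, nb = 0;
    for (size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (std::sqrt(na) * std::sqrt(nb) + 1e-12);
}

int main() {
    // CSR adjacency for a tiny graph plus one embedding per node.
    std::vector<uint64_t> offsets = {0, 3, 3, 3, 3};
    std::vector<uint32_t> targets = {1, 2, 3};
    std::vector<std::vector<float>> embedding = {
        {1.0f, 0.0f}, {0.9f, 0.1f}, {0.1f, 0.9f}, {0.7f, 0.3f}};

    std::vector<float> query = {1.0f, 0.0f};
    uint32_t seed = 0;

    // 1-hop expansion, then rank the neighbours by similarity to the query.
    std::vector<std::pair<double, uint32_t>> ranked;
    for (uint64_t e = offsets[seed]; e < offsets[seed + 1]; ++e)
        ranked.push_back({cosine(embedding[targets[e]], query), targets[e]});
    std::sort(ranked.rbegin(), ranked.rend());

    for (auto& [score, node] : ranked)
        std::cout << "node " << node << " score " << score << "\n";
}
```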

remy_boutonnet•1w ago
***Benchmark summary (TuringDB vs Neo4j)***

We benchmarked TuringDB against Neo4j using the Reactome biological knowledge graph, which is a real-world, highly connected dataset (millions of nodes/edges, deep traversal patterns).

- Both systems were run out of the box, cold start.

- No manual indexing, tuning, or query rewriting on either side.

- Same logical queries, same dataset.

- Benchmarks are reproducible via an open-source runner.

Multi-hop traversal from a small seed set (15 nodes):

- 1–4 hops: ~110×–130× faster

- 7 hops: ~300× faster

- 8 hops: ~200× faster

Example:

- Neo4j: ~98s for an 8-hop traversal

- TuringDB: ~0.48s for the same query

Label scans and label-constrained traversals:

- Simple label scan (`match (n:Drug)`): ~1200× faster

- Multi-label scan: up to ~4000× faster

More complex bidirectional traversals:

- Speedups range from ~6× to ~600× depending on query shape and result materialisation.

Why the difference exists:

The speedups are not from query tricks, but from architectural choices:

- *Column-oriented execution*: vectorized, SIMD-friendly scans instead of record-at-a-time execution (see the sketch after this list).

- *Streaming traversal engine*: processes node/edge chunks in batches rather than pointer chasing one node at a time.

- *Immutable snapshots*: no read locks, no coordination overhead during deep analytical queries.

- *Graph-native storage*: traversal is the primary access path, not an emergent property of joins.
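
To illustrate the first point, here is a sketch (assuming a simple per-node label-id column; `scanByLabel` is invented for this example) of why a columnar label scan like `match (n:Drug)` reduces to a tight, auto-vectorizable loop over contiguous memory:

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Sketch of a columnar label scan: label ids live in one flat array, so a
// label-constrained match becomes a branch-light loop over contiguous memory
// that the compiler can auto-vectorize (SIMD), unlike visiting one
// heap-allocated node record at a time.
std::vector<uint32_t> scanByLabel(const std::vector<uint16_t>& label_column,
                                  uint16_t wanted) {
    std::vector<uint32_t> hits;
    for (uint32_t i = 0; i < label_column.size(); ++i)
        if (label_column[i] == wanted)   // simple comparison over a column
            hits.push_back(i);
    return hits;
}

int main() {
    const uint16_t kDrugLabel = 7;                       // invented label id
    std::vector<uint16_t> labels = {1, 7, 3, 7, 7, 2};   // one entry per node
    for (uint32_t id : scanByLabel(labels, kDrugLabel))
        std::cout << "matched node " << id << "\n";
}
```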

Neo4j’s record-oriented model performs reasonably for short traversals but degrades sharply on long paths and broad scans, especially on cold starts.

Scope and caveats:

- These benchmarks focus on *analytical read-heavy workloads*, not high-write OLTP scenarios.

- We’re not claiming universal superiority; different workloads want different systems.

- The full benchmark code and instructions are public and reproducible:

https://github.com/turing-db/turing-bench

This is not meant to replace transactional graph databases or general-purpose OLTP systems. It’s designed for large, read-heavy analytical graphs where latency, explainability, and control matter more than write throughput. These are exactly the applications that blocked us when building biomedical digital twins & large knowledge graphs.

Feedback always welcome.