frontpage.

Show HN: Sknet.ai – AI agents debate on a forum, no humans posting

https://sknet.ai/
1•BeinerChes•28s ago•0 comments

University of Waterloo Webring

https://cs.uwatering.com/
1•ark296•51s ago•0 comments

Large tech companies don't need heroes

https://www.seangoedecke.com/heroism/
1•medbar•2m ago•0 comments

Backing up all the little things with a Pi5

https://alexlance.blog/nas.html
1•alance•3m ago•1 comments

Game of Trees (Got)

https://www.gameoftrees.org/
1•akagusu•3m ago•1 comments

Human Systems Research Submolt

https://www.moltbook.com/m/humansystems
1•cl42•3m ago•0 comments

The Threads Algorithm Loves Rage Bait

https://blog.popey.com/2026/02/the-threads-algorithm-loves-rage-bait/
1•MBCook•5m ago•0 comments

Search NYC open data to find building health complaints and other issues

https://www.nycbuildingcheck.com/
1•aej11•9m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
2•lxm•11m ago•0 comments

Show HN: Grovia – Long-Range Greenhouse Monitoring System

https://github.com/benb0jangles/Remote-greenhouse-monitor
1•benbojangles•15m ago•1 comments

Ask HN: The Coming Class War

1•fud101•15m ago•1 comments

Mind the GAAP Again

https://blog.dshr.org/2026/02/mind-gaap-again.html
1•gmays•17m ago•0 comments

The Yardbirds, Dazed and Confused (1968)

https://archive.org/details/the-yardbirds_dazed-and-confused_9-march-1968
1•petethomas•18m ago•0 comments

Agent News Chat – AI agents talk to each other about the news

https://www.agentnewschat.com/
2•kiddz•18m ago•0 comments

Do you have a mathematically attractive face?

https://www.doimog.com
3•a_n•22m ago•1 comments

Code only says what it does

https://brooker.co.za/blog/2020/06/23/code.html
2•logicprog•28m ago•0 comments

The success of 'natural language programming'

https://brooker.co.za/blog/2025/12/16/natural-language.html
1•logicprog•28m ago•0 comments

The Scriptovision Super Micro Script video titler is almost a home computer

http://oldvcr.blogspot.com/2026/02/the-scriptovision-super-micro-script.html
3•todsacerdoti•28m ago•0 comments

Discovering the "original" iPhone from 1995 [video]

https://www.youtube.com/watch?v=7cip9w-UxIc
1•fortran77•29m ago•0 comments

Psychometric Comparability of LLM-Based Digital Twins

https://arxiv.org/abs/2601.14264
1•PaulHoule•31m ago•0 comments

SidePop – track revenue, costs, and overall business health in one place

https://www.sidepop.io
1•ecaglar•33m ago•1 comments

The Other Markov's Inequality

https://www.ethanepperly.com/index.php/2026/01/16/the-other-markovs-inequality/
2•tzury•35m ago•0 comments

The Cascading Effects of Repackaged APIs [pdf]

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6055034
1•Tejas_dmg•37m ago•0 comments

Lightweight and extensible compatibility layer between dataframe libraries

https://narwhals-dev.github.io/narwhals/
1•kermatt•40m ago•0 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
3•RebelPotato•43m ago•0 comments

Dorsey's Block cutting up to 10% of staff

https://www.reuters.com/business/dorseys-block-cutting-up-10-staff-bloomberg-news-reports-2026-02...
2•dev_tty01•46m ago•0 comments

Show HN: Freenet Lives – Real-Time Decentralized Apps at Scale [video]

https://www.youtube.com/watch?v=3SxNBz1VTE0
1•sanity•47m ago•1 comments

In the AI age, 'slow and steady' doesn't win

https://www.semafor.com/article/01/30/2026/in-the-ai-age-slow-and-steady-is-on-the-outs
1•mooreds•55m ago•1 comments

Administration won't let student deported to Honduras return

https://www.reuters.com/world/us/trump-administration-wont-let-student-deported-honduras-return-2...
1•petethomas•55m ago•0 comments

How were the NIST ECDSA curve parameters generated? (2023)

https://saweis.net/posts/nist-curve-seed-origins.html
2•mooreds•56m ago•0 comments

Show HN: TuringDB – The fastest analytical in-memory graph database in C++

https://github.com/turing-db/turingdb
7•remy_boutonnet•1w ago
Hi HN,

I am one of the cofounders of http://turingdb.ai. We built TuringDB while working on large biological knowledge graphs and graph-based digital twins with pharma & hospitals, where existing graph databases were unusable for deep graph traversals with hundreds or thousands of hops on (crappy) machines you can find in a hospital.

https://github.com/turing-db/turingdb

TuringDB is a new in-memory, column-oriented graph database optimised for read-heavy analytical workloads:

- Millisecond latency (1) for multi-hop queries on graphs with 10M+ nodes/edges

- Lock-free reads via immutable snapshots

- Git-like versioning for graphs (branch, merge, time travel queries)

- Built-in graph exploration UI for large subgraphs

We wrote TuringDB from scratch in C++ and designed it to have predictable memory and concurrency behaviour.

For example, for the Reactome biological knowledge graph, we see ~100× to 300× speedups over Neo4j on multi-hop analytical queries out of the box (details in first comment).

A free Community version is available and runnable locally:

https://docs.turingdb.ai/quickstart

https://github.com/turing-db/turingdb

Happy to answer technical questions.

(1): We actually hit sub-millisecond performance on many queries

Comments

remy_boutonnet•1w ago
Some extra context and technical details for those interested.

We built TuringDB because our workloads were dominated by analytical graph queries (multi-hop traversals, neighborhood expansion, similarity analysis) on large, relatively stable graphs extracted from scientific literature. After all, scientists don’t publish millions of new papers per second (yet). Write transaction throughput was not the bottleneck; latency on deep traversals was.

A few design choices that may be of interest:

- Column-oriented graph storage

Nodes, edges and properties are all stored adjacently, column-wise, to maximise cache locality during traversals. This isn’t a relational system with joins layered on top, and nodes and edges are not distinct heap-allocated objects as in Neo4j or Memgraph: everything is stored together in large columnar storage, which improves memory efficiency and reduces the amount of random pointer-chasing the engine has to do. Property values are likewise stored column-wise across all nodes and edges, so filtering nodes by property value is fast out of the box, even without an index.

We also implemented a streaming Cypher query engine from scratch, so nodes and edges are processed in chunks, in a streaming fashion, to maximise cache efficiency.
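To make the layout idea concrete, here is a minimal sketch (all names illustrative, not TuringDB’s actual format) of a CSR-style columnar adjacency structure and a chunked one-hop expansion over it:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative CSR-style columnar layout: adjacency lives in two flat arrays
// instead of per-node heap objects, so a traversal walks contiguous memory
// rather than chasing pointers.
struct ColumnarGraph {
    std::vector<std::uint64_t> offsets;   // offsets[n] .. offsets[n + 1] index into neighbors
    std::vector<std::uint32_t> neighbors; // target node ids, stored adjacently
};

// Expand a frontier by one hop, processing nodes in fixed-size chunks so the
// working set stays cache-resident (a simplified stand-in for streaming execution).
std::vector<std::uint32_t> expand(const ColumnarGraph& g,
                                  const std::vector<std::uint32_t>& frontier,
                                  std::size_t chunkSize = 4096) {
    std::vector<std::uint32_t> next;
    for (std::size_t begin = 0; begin < frontier.size(); begin += chunkSize) {
        const std::size_t end = std::min(begin + chunkSize, frontier.size());
        for (std::size_t i = begin; i < end; ++i) {
            const std::uint32_t n = frontier[i];
            for (std::uint64_t e = g.offsets[n]; e < g.offsets[n + 1]; ++e) {
                next.push_back(g.neighbors[e]);
            }
        }
    }
    return next;
}
```

Repeated calls give multi-hop traversal; a real engine would also deduplicate and filter the chunks column-wise.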

- Immutable snapshots and lock-free reads

Every read query runs against a consistent, immutable snapshot of the graph. Reads never take locks, and writes never block reads: once a snapshot is acquired, there are no locks left on the read path. By comparison, Memgraph acquires a lock on each node and edge as it traverses from node to node, and mutexes cost CPU cycles.

This makes long-running analytical queries predictable and avoids performance cliffs under concurrency.
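A minimal sketch of the snapshot pattern, assuming nothing about TuringDB’s internals beyond what is described above: readers atomically load a shared pointer to an immutable version, and a writer builds the next version and publishes it with an atomic store, so the read path never takes a lock.

```cpp
#include <atomic>
#include <memory>

// Illustrative only: an immutable graph version is never mutated after publish.
struct GraphVersion {
    // ... columnar node/edge storage ...
};

class SnapshotStore {
    std::shared_ptr<const GraphVersion> current_;
public:
    // Readers: grab the current snapshot without taking any lock; the query
    // then runs entirely against that immutable version.
    std::shared_ptr<const GraphVersion> acquire() const {
        return std::atomic_load(&current_);
    }

    // Writer: build the next immutable version, then publish it atomically.
    // Ongoing readers keep their old snapshot alive via the shared_ptr.
    void publish(std::shared_ptr<const GraphVersion> next) {
        std::atomic_store(&current_, std::move(next));
    }
};
```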

- Versioning as part of the storage model

Every change creates a commit just like in git. You can query any historical version of the graph at full speed, branch datasets for experiments or simulations, and merge changes back. This is critical for regulated or safety-critical domains where auditability and reproducibility matter.
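For intuition only (this is not TuringDB’s actual data model), a git-like history for graphs can be thought of as a DAG of commits, each pointing at an immutable snapshot:

```cpp
#include <map>
#include <memory>
#include <string>
#include <vector>

struct GraphVersion;  // an immutable snapshot, as in the sketch above

// Illustrative commit model: each commit records its parent(s) and the
// snapshot of the graph at that point, so branching, merging and time-travel
// queries are just lookups in a DAG of versions.
struct Commit {
    std::string id;
    std::vector<std::shared_ptr<const Commit>> parents;  // more than one parent == merge
    std::shared_ptr<const GraphVersion> snapshot;
};

struct Repository {
    std::map<std::string, std::shared_ptr<const Commit>> branches;  // branch name -> head commit

    // "Time travel": a historical read simply runs against an older commit's snapshot.
    std::shared_ptr<const GraphVersion> snapshotOf(const std::string& branch) const {
        return branches.at(branch)->snapshot;
    }
};
```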

- We like C++ and TuringDB was born as an experiment in design space

The engine is written in C++ from scratch because we like C++ and it’s fun.

We implemented our own storage engine, query engine and column format from the ground up. We wanted to bring columnar storage and column-oriented streaming query execution to the world of graph databases, and to build a graph DB that is, for once, heavily focused on read-intensive workloads rather than transactional performance. In that sense TuringDB is also an experiment in the space of possible designs for a graph database engine.

We believe in paying very careful attention to memory layout, keeping execution paths clear, and not relying on external magic that hasn’t been thought through for what we want to build.

- Knowledge graphs and GraphRAG

A common use case is grounding LLMs in structured graph context rather than relying on text-only retrieval. We’re shipping native vector search and embeddings inside TuringDB this week so graph traversal and vector similarity can be combined in one system.
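As a rough sketch of how vector similarity and graph traversal can be combined (hypothetical code, not the upcoming TuringDB API): score a columnar embedding store against a query vector, take the top-k node ids, then expand their neighborhoods to build graph-grounded context for the LLM.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical sketch: embeddings stored column-wise, one fixed-size vector per node.
struct EmbeddingColumn {
    std::size_t dim = 0;
    std::vector<float> data;  // node n occupies data[n * dim .. (n + 1) * dim)
};

static float cosine(const float* a, const float* b, std::size_t dim) {
    float dot = 0.f, na = 0.f, nb = 0.f;
    for (std::size_t i = 0; i < dim; ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (std::sqrt(na) * std::sqrt(nb) + 1e-12f);
}

// Top-k nodes most similar to the query embedding. Their neighborhoods would
// then be expanded with a graph traversal to assemble graph-grounded context
// for the LLM, instead of text-only retrieval.
std::vector<std::uint32_t> topKSimilar(const EmbeddingColumn& col,
                                       const std::vector<float>& query,
                                       std::size_t k) {
    const std::size_t n = col.dim ? col.data.size() / col.dim : 0;
    std::vector<std::pair<float, std::uint32_t>> scored;
    scored.reserve(n);
    for (std::size_t i = 0; i < n; ++i) {
        scored.emplace_back(cosine(&col.data[i * col.dim], query.data(), col.dim),
                            static_cast<std::uint32_t>(i));
    }
    const std::size_t topK = std::min(k, scored.size());
    std::partial_sort(scored.begin(), scored.begin() + topK, scored.end(),
                      [](const auto& x, const auto& y) { return x.first > y.first; });
    std::vector<std::uint32_t> ids;
    for (std::size_t i = 0; i < topK; ++i) ids.push_back(scored[i].second);
    return ids;
}
```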

remy_boutonnet•1w ago
***Benchmark summary (TuringDB vs Neo4j)***

We benchmarked TuringDB against Neo4j using the Reactome biological knowledge graph, which is a real-world, highly connected dataset (millions of nodes/edges, deep traversal patterns).

- Both systems were run out of the box, cold start.

- No manual indexing, tuning, or query rewriting on either side.

- Same logical queries, same dataset.

- Benchmarks are reproducible via an open-source runner.

Multi-hop traversal from a small seed set (15 nodes):

- 1–4 hops: ~110×–130× faster

- 7 hops: ~300× faster

- 8 hops: ~200× faster

Example:

- Neo4j: ~98s for an 8-hop traversal

- TuringDB: ~0.48s for the same query

Label scans and label-constrained traversals:

- Simple label scan (`match (n:Drug)`): ~1200× faster

- Multi-label scan: up to ~4000× faster

More complex bidirectional traversals:

- Speedups range from ~6× to ~600× depending on query shape and result materialisation.

Why the difference exists:

The speedups are not from query tricks, but from architectural choices:

- *Column-oriented execution*: vectorized, SIMD-friendly scans instead of record-at-a-time execution (see the sketch after this list).

- *Streaming traversal engine*: processes node/edge chunks in batches rather than pointer chasing one node at a time.

- *Immutable snapshots*: no read locks, no coordination overhead during deep analytical queries.

- *Graph-native storage*: traversal is the primary access path, not an emergent property of joins.
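To illustrate the column-oriented point above (illustrative code, not TuringDB internals): a label scan over a columnar label array is a single tight loop over contiguous integers that compilers can auto-vectorize, while a record-at-a-time engine touches one heap-allocated record per node.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Columnar label scan: one contiguous pass over an integer column. Compilers
// typically auto-vectorize this loop (SIMD), and it never leaves the label
// column, whereas a record-at-a-time engine incurs one record lookup (and its
// cache misses) per node.
std::vector<std::uint32_t> scanByLabel(const std::vector<std::uint16_t>& labelColumn,
                                       std::uint16_t wantedLabel) {
    std::vector<std::uint32_t> matches;
    for (std::size_t n = 0; n < labelColumn.size(); ++n) {
        if (labelColumn[n] == wantedLabel) {
            matches.push_back(static_cast<std::uint32_t>(n));
        }
    }
    return matches;
}
```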

Neo4j’s record-oriented model performs reasonably for short traversals but degrades sharply on long paths and broad scans, especially on cold starts.

Scope and caveats:

- These benchmarks focus on *analytical read-heavy workloads*, not high-write OLTP scenarios.

- We’re not claiming universal superiority; different workloads want different systems.

- The full benchmark code and instructions are public and reproducible:

https://github.com/turing-db/turing-bench

This is not meant to replace transactional graph databases or general-purpose OLTP systems. It’s designed for large, read-heavy analytical graphs where latency, explainability, and control matter more than write throughput: exactly the applications that blocked us when building biomedical digital twins and large knowledge graphs.

Feedback always welcome.