
Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
163•theblazehen•2d ago•47 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
674•klaussilveira•14h ago•202 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
950•xnx•20h ago•552 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
123•matheusalmeida•2d ago•33 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
22•kaonwarb•3d ago•19 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
58•videotopia•4d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
232•isitcontent•14h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
225•dmpetrov•15h ago•118 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
332•vecti•16h ago•144 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
495•todsacerdoti•22h ago•243 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
383•ostacke•20h ago•95 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
360•aktau•21h ago•182 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
289•eljojo•17h ago•175 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
413•lstoll•21h ago•279 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
32•jesperordrup•4h ago•16 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
20•bikenaga•3d ago•8 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
17•speckx•3d ago•7 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
63•kmm•5d ago•7 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
91•quibono•4d ago•21 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
258•i5heu•17h ago•196 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
32•romes•4d ago•3 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
44•helloplanets•4d ago•42 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
60•gfortaine•12h ago•26 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1070•cdrnsf•1d ago•446 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
36•gmays•9h ago•12 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
150•vmatsiiako•19h ago•70 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
288•surprisetalk•3d ago•43 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
150•SerCe•10h ago•142 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
186•limoce•3d ago•100 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
73•phreda4•14h ago•14 comments

Show HN: AnuDB – Built on RocksDB, 279x Faster Than SQLite in Parallel Workloads

https://github.com/hash-anu/AnuDB
22•hashmak_jsn•9mo ago
We recently benchmarked AnuDB, a lightweight embedded database built on top of RocksDB, against SQLite on a Raspberry Pi. The performance difference, especially for parallel operations, was dramatic.

GitHub Links:

AnuDBBenchmark: https://github.com/hash-anu/AnuDBBenchmark

AnuDB (Core): https://github.com/hash-anu/AnuDB

Why Compare AnuDB and SQLite?

SQLite is excellent for many embedded use cases — it’s simple, battle-tested, and extremely reliable. But it doesn’t scale well when parallelism or concurrent writes are required.

AnuDB, built over RocksDB, offers better concurrency out of the box. We wanted to measure the practical differences using real benchmarks on a Raspberry Pi.

Benchmark Setup

Platform: Raspberry Pi 2 (ARMv7)

Benchmarked operations: Insert, Query, Update, Delete, Parallel

AnuDB uses RocksDB and MsgPack serialization

SQLite uses raw data, with WAL mode enabled for fairness
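The actual AnuDBBenchmark harness is C++ and drives both engines; as a rough sketch of its shape, here is the SQLite side in Python. The table name, payloads, and timing approach are invented for illustration and are not the benchmark's real code:

```python
import os
import sqlite3
import tempfile
import time

def run_sqlite_benchmark(n=1000):
    """Time insert/query/update/delete against SQLite; return ops/sec each."""
    path = os.path.join(tempfile.mkdtemp(), "bench.db")
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")  # WAL enabled, as in the post
    conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")

    results = {}

    start = time.perf_counter()
    for i in range(n):
        conn.execute("INSERT INTO docs VALUES (?, ?)", (i, "payload-%d" % i))
    conn.commit()
    results["insert"] = n / (time.perf_counter() - start)

    start = time.perf_counter()
    for i in range(n):
        conn.execute("SELECT body FROM docs WHERE id = ?", (i,)).fetchone()
    results["query"] = n / (time.perf_counter() - start)

    start = time.perf_counter()
    for i in range(n):
        conn.execute("UPDATE docs SET body = ? WHERE id = ?", ("v2-%d" % i, i))
    conn.commit()
    results["update"] = n / (time.perf_counter() - start)

    start = time.perf_counter()
    for i in range(n):
        conn.execute("DELETE FROM docs WHERE id = ?", (i,))
    conn.commit()
    results["delete"] = n / (time.perf_counter() - start)

    conn.close()
    return results

print(sorted(run_sqlite_benchmark(200)))  # ['delete', 'insert', 'query', 'update']
```

The single-threaded loops above map to the four sequential operation classes below; the parallel case runs similar loops from multiple threads at once.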

Key Results

Insert:

AnuDB: 448 ops/sec

SQLite: 838 ops/sec

Query:

AnuDB: 54 ops/sec

SQLite: 30 ops/sec

Update:

AnuDB: 408 ops/sec

SQLite: 600 ops/sec

Delete:

AnuDB: 555 ops/sec

SQLite: 1942 ops/sec

Parallel (10 threads):

AnuDB: 412 ops/sec

SQLite: 1.4 ops/sec (!)

In the parallel case, AnuDB was over 279x faster than SQLite.

Why the Huge Parallel Difference?

SQLite, even in WAL mode, allows only one write transaction at a time: writes serialize on a database-level lock. It’s not designed for high-concurrency write scenarios.

RocksDB (used in AnuDB) supports:

Fine-grained locking

Concurrent readers/writers

Better parallelism using LSM-tree architecture

This explains why AnuDB significantly outperforms SQLite under threaded workloads.
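The SQLite side of this claim is easy to demonstrate with Python's built-in sqlite3 module. This is a minimal sketch, unrelated to the benchmark code: with a zero busy-timeout, a second write transaction fails immediately while the first one is still open, even in WAL mode.

```python
import os
import sqlite3
import tempfile

# Minimal sketch (not the benchmark code): even in WAL mode,
# SQLite allows only one write transaction at a time.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None lets us issue BEGIN/COMMIT ourselves;
# timeout=0 means "fail immediately instead of waiting for the lock".
w1 = sqlite3.connect(path, timeout=0, isolation_level=None)
w1.execute("PRAGMA journal_mode=WAL")
w1.execute("CREATE TABLE t (x INTEGER)")

w1.execute("BEGIN IMMEDIATE")           # first writer takes the write lock
w1.execute("INSERT INTO t VALUES (1)")

w2 = sqlite3.connect(path, timeout=0, isolation_level=None)
try:
    w2.execute("BEGIN IMMEDIATE")       # second writer wants the same lock
    blocked = False
except sqlite3.OperationalError:        # "database is locked"
    blocked = True

w1.execute("COMMIT")
print(blocked)  # True: concurrent writers serialize on one lock
```

Readers are unaffected in WAL mode; it is concurrent writers that serialize, which is exactly what a 10-thread mixed workload stresses.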

Try It Yourself

Clone and build the benchmark:

git clone https://github.com/hash-anu/AnuDBBenchmark
cd AnuDBBenchmark
./build.sh /path/to/AnuDB /path/to/sqlite
./benchmark

Results are saved to benchmark_results.csv.
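The post doesn't document the CSV's columns, so the layout below (engine, operation, ops_per_sec) is a guess for illustration only; a few lines of Python suffice to compute the headline ratio from such a file:

```python
import csv
import io

# Hypothetical layout for benchmark_results.csv; the real columns
# may differ, so adjust the field names to match your output.
sample = """engine,operation,ops_per_sec
AnuDB,parallel,412
SQLite,parallel,1.4
"""

rows = list(csv.DictReader(io.StringIO(sample)))
parallel = {r["engine"]: float(r["ops_per_sec"]) for r in rows
            if r["operation"] == "parallel"}
speedup = parallel["AnuDB"] / parallel["SQLite"]
print(f"parallel speedup: {speedup:.0f}x")  # ~294x with the figures above
```

For a real run, replace the inline sample with `open("benchmark_results.csv")`.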

When to Use AnuDB

Use AnuDB if:

You need embedded storage with high concurrency

You’re dealing with telemetry, sensor data, or parallel workloads

You want something lightweight and faster than SQLite under load

Stick with SQLite if:

You need SQL compatibility

You value mature ecosystem/tooling

Feedback Welcome

This is an early experiment. We’re actively developing AnuDB and would love feedback:

Is our benchmark fair?

Where could we optimize further?

Would this be useful in your embedded project?

Comments

winkeltripel•9mo ago
So it's only much faster at delete, which is the least used operation? Am I reading the results correctly?
hashmak_jsn•9mo ago
SQLite is faster at insert, update, and delete operations; AnuDB is faster at query and parallel workloads.
fidotron•9mo ago
You are talking a lot about performance but using JSON everywhere. You would be much better off using protobuf or flatbuffers for this.
hashmak_jsn•9mo ago
Sure, that would be the next item. Thanks for the suggestion!
nottorp•9mo ago
Are they comparing a nosql database with no search/filtering with a sql database that has those operations by any chance?

And the 279x number is for parallel deletes? If you have to do that many parallel deletes it's probably a maintenance operation and you might as well copy the remaining data out, drop the db and recreate it...

There goes any credibility.

hashmak_jsn•9mo ago
Good point — just to clarify, the "279x" number isn’t about parallel deletes. The parallel test runs a mix of operations (insert, query, update, delete) across multiple threads. Each thread works on its own document range to simulate a real-world concurrent workload (like telemetry ingestion).

SQLite (even in WAL mode) hits write lock contention under concurrency, while AnuDB (using RocksDB) handles concurrent writes better due to its design.

Also, AnuDB supports indexing via an API using RocksDB's prefix extractor, so it’s not just a key-value store — basic filtering is supported.

Appreciate the feedback — will revise the post to make this clearer!

whizzter•9mo ago
Is your benchmark fair? No, it's really an apples-to-oranges comparison in many cases.

First, RocksDB is built to function as a backend for high-throughput systems: it has a lot of complex tuning parameters and background threads, and it is designed for bigger workloads (with more threads, obviously).

SQLite is built to be an easy option in "smaller" scenarios; in "larger" ones, a common pattern is multiple SQLite databases (one per customer, for example).

Also, a dataset of 10,000 entries is too small to really matter for many more complicated scenarios (one could probably hold it all in memory and just use SQLite to store things).

Does your document system handle indexing (or is there support)? An SQL user will often tune indexes (and the SQLite query planner will use properly set up indexes). I'm evaluating RocksDB in a project, and from what I gathered it doesn't itself have a notion of indexes (but you can easily build them as separate column families).

The version of your Raspberry Pi is not specified. I've used RPis for benchmarking, but the evolution of CPUs (and, in later versions, of peripherals like NVMe disk support) makes each version behave differently, both from the others and from "real" machines. (I was able to use that to my advantage, since the benchmarking differences between versions gave information about the relative importance of code-generation strategies for newer vs. older CPUs.)

MOST importantly, if you want to gain traction for your project, you should _focus_ on the use case that motivated you to build it (the entire MQTT thing mentioned on the GitHub page seems to point in another direction) rather than doing a half-baked comparison to SQLite (which I guess you maybe used before, but which wasn't really suited to your use case).

hashmak_jsn•9mo ago
Thanks for the thoughtful and constructive feedback — you're absolutely right that this isn't a strict apples-to-apples comparison. Our aim was to evaluate practical performance in edge workloads, especially for MQTT-style use cases on constrained devices like Raspberry Pi.

A few clarifications:

Indexing: AnuDB supports indexing via an explicit API — the user needs to define indexes manually. Internally, it's backed by RocksDB and uses a prefix extractor to optimize lookups. While it's not a full SQL-style index planner, it's efficient for our document-store model.

Parallel Writes: SQLite does well in many embedded use cases, but it struggles with highly parallel writes — even in WAL mode. RocksDB (and thus AnuDB) is built for concurrency and handles write-heavy parallel loads much better. That shows in our "Parallel" test.

Dataset Size: Agreed, 10K entries is small. We kept it modest to demonstrate behavior under low-latency edge conditions, but we’re planning larger-scale tests in follow-ups.

Hardware: The test was done on a Raspberry Pi 2 with 1GB RAM and microSD storage. Thanks for pointing out that CPU/peripheral differences could affect results — that’s something we’ll document better in future benchmarks.

Use Case Focus: You're spot on about the importance of use-case-driven evaluation. AnuDB was motivated by the need for a lightweight document database for IoT and edge scenarios with MQTT support — not as a direct SQLite replacement, but as an alternative where document flexibility and concurrent ingestion matter.
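The prefix-extractor indexing described above follows a common pattern over ordered key-value stores: write extra "index" entries whose keys embed the field value, then answer lookups with a prefix scan. The key layout below is invented for illustration (AnuDB's actual on-disk format isn't documented in this thread); a minimal in-memory sketch of the pattern:

```python
import bisect

class PrefixIndexedStore:
    """Key-ordered store with a secondary index built from key prefixes.

    Emulates the RocksDB pattern: keys are kept sorted, and an index
    lookup is a scan over keys sharing the prefix "idx:<field>:<value>:".
    The layout is hypothetical, for illustration only.
    """

    def __init__(self):
        self.keys = []    # sorted keys, emulating RocksDB's key ordering
        self.values = {}

    def put(self, key, value):
        if key not in self.values:
            bisect.insort(self.keys, key)
        self.values[key] = value

    def put_doc(self, doc_id, doc, indexed_field):
        self.put(f"doc:{doc_id}", doc)
        # Secondary index entry: field value embedded in the key itself.
        self.put(f"idx:{indexed_field}:{doc[indexed_field]}:{doc_id}", "")

    def find_by(self, field, value):
        prefix = f"idx:{field}:{value}:"
        i = bisect.bisect_left(self.keys, prefix)  # seek to the prefix
        out = []
        while i < len(self.keys) and self.keys[i].startswith(prefix):
            doc_id = self.keys[i].rsplit(":", 1)[1]
            out.append(self.values[f"doc:{doc_id}"])
            i += 1
        return out

store = PrefixIndexedStore()
store.put_doc("1", {"sensor": "temp", "reading": 21}, "sensor")
store.put_doc("2", {"sensor": "hum", "reading": 55}, "sensor")
store.put_doc("3", {"sensor": "temp", "reading": 22}, "sensor")
print(len(store.find_by("sensor", "temp")))  # 2
```

In RocksDB the same shape is typically realized with a prefix extractor (or a separate column family) so the seek-then-scan stays cheap on disk.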

notpushkin•9mo ago
But is it web scale?
geodel•9mo ago
It's anuscale.
hashmak_jsn•9mo ago
yeah anu is everywhere
hashmak_jsn•9mo ago
You can run it. We validated it by inserting 1 million documents and didn't face any issues. Please feel free to raise an issue on GitHub.
graealex•9mo ago
The reason for using an embedded database is ease-of-deployment. Sometimes deployment size is relevant as well. Those use cases are usually oppositional - software that relies on a simple deployment usually doesn't require dozens of concurrent writers, and/or hundreds of transactions per second. Software that does usually needs a more in-depth installation anyway, or at least it's worthwhile to go the extra mile to optimize performance. Also the size of the dataset will often be orders-of-magnitude apart.

The reason for using an SQL database is that your data is structured in a way where using SQL to query it makes sense. Comparing SQL to No-SQL is pointless, unless your use case can be easily adapted to either of them, without suffering too much.

There are plenty of No-SQL embedded databases. A number of SQL embedded databases as well.

If speed, ease-of-deployment/embedding and SQL were important, I would use Firebird as an embedded database, which, like SQLite, gets loaded into process memory. Depending on the version and feature set you include, it is also only a few MB, and even more ANSI SQL compliant than SQLite.

If you want to do comparisons, compare against other document/No-SQL databases.

hashmak_jsn•8mo ago
Thanks for the suggestion, I will compare with other NoSQL databases.