
I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
45•valyala•2h ago•19 comments

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
228•ColinWright•1h ago•247 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
31•valyala•2h ago•4 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
128•AlexeyBrin•8h ago•25 comments

Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
8•gnufx•1h ago•1 comment

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
132•1vuio0pswjnm7•9h ago•161 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
71•vinhnx•5h ago•9 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
836•klaussilveira•22h ago•251 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
181•alephnerd•2h ago•124 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
57•thelok•4h ago•8 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1064•xnx•1d ago•613 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
85•onurkanbkrc•7h ago•5 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
493•theblazehen•3d ago•178 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
215•jesperordrup•12h ago•77 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
15•momciloo•2h ago•0 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
231•alainrk•7h ago•366 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
577•nar001•6h ago•261 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
9•languid-photic•3d ago•1 comment

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
41•rbanffy•4d ago•8 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
30•marklit•5d ago•3 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
19•brudgers•5d ago•4 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
114•videotopia•4d ago•35 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
80•speckx•4d ago•91 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
278•isitcontent•22h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
201•limoce•4d ago•112 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
289•dmpetrov•23h ago•156 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
558•todsacerdoti•1d ago•272 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
431•ostacke•1d ago•111 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
22•sandGorgon•2d ago•12 comments

Show HN: AnuDB – Built on RocksDB, 279x Faster Than SQLite in Parallel Workloads

https://github.com/hash-anu/AnuDB
22•hashmak_jsn•9mo ago
We recently benchmarked AnuDB, a lightweight embedded database built on top of RocksDB, against SQLite on a Raspberry Pi. The performance difference, especially for parallel operations, was dramatic.

GitHub Links:

AnuDBBenchmark: https://github.com/hash-anu/AnuDBBenchmark

AnuDB (Core): https://github.com/hash-anu/AnuDB

Why Compare AnuDB and SQLite?

SQLite is excellent for many embedded use cases: it's simple, battle-tested, and extremely reliable. But it doesn't scale well when parallelism or concurrent writes are required.

AnuDB, built over RocksDB, offers better concurrency out of the box. We wanted to measure the practical differences using real benchmarks on a Raspberry Pi.

Benchmark Setup

Platform: Raspberry Pi 2 (ARMv7)

Benchmarked operations: Insert, Query, Update, Delete, Parallel

AnuDB uses RocksDB and MsgPack serialization

SQLite uses raw data, with WAL mode enabled for fairness
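For reference, enabling WAL mode in SQLite is a single pragma. The benchmark itself is native code, so this is just an illustrative sketch using Python's stdlib sqlite3 binding:

```python
import os
import sqlite3
import tempfile

# WAL mode must be set on a file-backed database (not :memory:).
path = os.path.join(tempfile.mkdtemp(), "bench.db")
conn = sqlite3.connect(path)

# Switch the journal mode; the pragma returns the mode now in effect.
conn.execute("PRAGMA journal_mode=WAL")
mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
print(mode)  # -> wal
conn.close()
```

WAL mode is persistent: it is recorded in the database file, so later connections see it without re-running the pragma.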

Key Results

Operation              AnuDB (ops/sec)   SQLite (ops/sec)
Insert                 448               838
Query                  54                30
Update                 408               600
Delete                 555               1942
Parallel (10 threads)  412               1.4 (!)

In the parallel case, AnuDB was over 279x faster than SQLite.

Why the Huge Parallel Difference?

SQLite, even with WAL mode, allows only one writer at a time: WAL permits readers to proceed concurrently with a writer, but writes still serialize on a database-level lock. It's not designed for high-concurrency write scenarios.

RocksDB (used in AnuDB) supports:

Fine-grained locking

Concurrent readers/writers

Better parallelism using LSM-tree architecture

This explains why AnuDB significantly outperforms SQLite under threaded workloads.
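The single-writer behavior described above is easy to observe with stdlib sqlite3 alone (no AnuDB needed): ten threads, each with its own connection, all succeed in writing, but only by waiting on the database-level write lock via the busy timeout. A minimal sketch:

```python
import os
import sqlite3
import tempfile
import threading

def writer(path, thread_id, n_rows):
    # Each thread opens its own connection; SQLite still serializes writes,
    # so a generous timeout is needed to wait out "database is locked".
    conn = sqlite3.connect(path, timeout=30)
    for i in range(n_rows):
        conn.execute("INSERT INTO t(thread, seq) VALUES (?, ?)", (thread_id, i))
        conn.commit()  # one write transaction per row, maximizing contention
    conn.close()

path = os.path.join(tempfile.mkdtemp(), "bench.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE t(thread INTEGER, seq INTEGER)")
conn.commit()

threads = [threading.Thread(target=writer, args=(path, t, 50)) for t in range(10)]
for th in threads:
    th.start()
for th in threads:
    th.join()

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 10 threads x 50 rows = 500
conn.close()
```

All 500 rows land, but timing the run shows the commits queue up behind one another, which is the contention the "Parallel" test exercises.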

Try It Yourself

Clone the repo and run:

git clone https://github.com/hash-anu/AnuDBBenchmark
cd AnuDBBenchmark
./build.sh /path/to/AnuDB /path/to/sqlite
./benchmark

Results are saved to benchmark_results.csv.
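The per-operation speedup can be computed from the CSV with a few lines of Python. The column names below (operation, anudb_ops, sqlite_ops) are a guess at the layout; check the actual header of benchmark_results.csv before adapting this:

```python
import csv
import io

# Hypothetical layout; substitute the real benchmark_results.csv here.
sample = """operation,anudb_ops,sqlite_ops
insert,448,838
parallel,412,1.4
"""

ratios = {}
for row in csv.DictReader(io.StringIO(sample)):
    # ratio > 1 means AnuDB is faster for that operation
    ratio = float(row["anudb_ops"]) / float(row["sqlite_ops"])
    ratios[row["operation"]] = ratio
    print(f"{row['operation']}: AnuDB is {ratio:.1f}x SQLite")
```

On the numbers from the post, this reports AnuDB at roughly 0.5x for inserts and about 294x for the parallel case.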

When to Use AnuDB

Use AnuDB if:

You need embedded storage with high concurrency

You’re dealing with telemetry, sensor data, or parallel workloads

You want something lightweight and faster than SQLite under load

Stick with SQLite if:

You need SQL compatibility

You value mature ecosystem/tooling

Feedback Welcome

This is an early experiment. We're actively developing AnuDB and would love feedback:

Is our benchmark fair?

Where could we optimize further?

Would this be useful in your embedded project?

Comments

winkeltripel•9mo ago
So it's only much faster at delete, which is the least used operation? Am I reading the results correctly?
hashmak_jsn•9mo ago
SQLite is faster at insert/update/delete operations.
fidotron•9mo ago
You are talking a lot about performance but using JSON everywhere. You would be much better off using protobuf or flatbuffers for this.
hashmak_jsn•9mo ago
sure, that would be next item. thanks for suggestion
nottorp•9mo ago
Are they comparing a nosql database with no search/filtering with a sql database that has those operations by any chance?

And the 279x number is for parallel deletes? If you have to do that many parallel deletes it's probably a maintenance operation and you might as well copy the remaining data out, drop the db and recreate it...

There goes any credibility.

hashmak_jsn•9mo ago
Good point — just to clarify, the "279x" number isn’t about parallel deletes. The parallel test runs a mix of operations (insert, query, update, delete) across multiple threads. Each thread works on its own document range to simulate a real-world concurrent workload (like telemetry ingestion).

SQLite (even in WAL mode) hits write lock contention under concurrency, while AnuDB (using RocksDB) handles concurrent writes better due to its design.

Also, AnuDB supports indexing via an API using RocksDB's prefix extractor, so it’s not just a key-value store — basic filtering is supported.

Appreciate the feedback — will revise the post to make this clearer!

whizzter•9mo ago
Is your benchmark fair? No, it's really an apples-to-oranges comparison in many cases.

First, RocksDB is built to function as a backend for high-throughput systems: it has many high-complexity tuning parameters and background threads, and it is designed for bigger workloads (with more threads, obviously).

SQLite is built to be an easy option in "smaller" scenarios; in "larger" scenarios a common pattern is multiple SQLite databases (one per customer, for example).

Also, a dataset of 10,000 entries is too small to really matter for many more complicated scenarios (one can probably hold it all in memory and just use SQLite to persist things).

Does your document system handle indexing (or is there support)? An SQL user will often tune indexes (and the SQLite query planner will use properly set-up indexes). I'm evaluating RocksDB in a project, and from what I gathered it doesn't itself have a notion of secondary indexes (but you can easily build them as separate column families).
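The "indexes as separate column families" idea is simple to sketch: the primary keyspace maps doc id to document, and a second keyspace encodes the indexed value into its keys so a prefix scan finds matches. Plain dicts stand in for RocksDB column families here, and the key layout is purely hypothetical:

```python
# Two keyspaces, as you'd have with two RocksDB column families.
docs = {}      # primary CF: doc_id -> document
by_city = {}   # index CF: "city:<value>:<doc_id>" -> doc_id

def put(doc_id, doc):
    docs[doc_id] = doc
    # Index entry encodes the value in the key, enabling prefix scans.
    by_city[f"city:{doc['city']}:{doc_id}"] = doc_id

def find_by_city(city):
    # A real RocksDB lookup would be a prefix scan over the index CF;
    # sorting the keys here simulates RocksDB's ordered iteration.
    prefix = f"city:{city}:"
    return [docs[v] for k, v in sorted(by_city.items()) if k.startswith(prefix)]

put("d1", {"city": "Oslo", "temp": 4})
put("d2", {"city": "Pune", "temp": 31})
put("d3", {"city": "Oslo", "temp": 6})
print([d["temp"] for d in find_by_city("Oslo")])  # [4, 6]
```

Deletes and updates have to touch both keyspaces, which is exactly the bookkeeping a built-in index planner would otherwise do for you.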

The version of your Raspberry Pi is not specified. I've used RPis for benchmarking, but the evolution of CPUs (and, in later versions, of peripherals such as NVMe disk support) makes each version behave differently, both from the others and from "real" machines (I was able to use that to my advantage, since the differences in benchmark results between versions gave information about the relative importance of code-generation strategies for newer vs. older CPUs).

MOST importantly, if you want to gain traction for your project you should _focus_ on the use case that motivated you to build it (the entire MQTT thing mentioned on the GitHub page seems to point in some other direction) rather than doing a half-baked comparison to SQLite (which I guess you maybe used before, but which wasn't really suited to your use case).

hashmak_jsn•9mo ago
Thanks for the thoughtful and constructive feedback — you're absolutely right that this isn't a strict apples-to-apples comparison. Our aim was to evaluate practical performance in edge workloads, especially for MQTT-style use cases on constrained devices like Raspberry Pi.

A few clarifications:

Indexing: AnuDB supports indexing via an explicit API — the user needs to define indexes manually. Internally, it's backed by RocksDB and uses a prefix extractor to optimize lookups. While it's not a full SQL-style index planner, it's efficient for our document-store model.

Parallel Writes: SQLite does well in many embedded use cases, but it struggles with highly parallel writes — even in WAL mode. RocksDB (and thus AnuDB) is built for concurrency and handles write-heavy parallel loads much better. That shows in our "Parallel" test.

Dataset Size: Agreed, 10K entries is small. We kept it modest to demonstrate behavior under low-latency edge conditions, but we’re planning larger-scale tests in follow-ups.

Hardware: The test was done on a Raspberry Pi 2 with 1GB RAM and microSD storage. Thanks for pointing out that CPU/peripheral differences could affect results — that’s something we’ll document better in future benchmarks.

Use Case Focus: You're spot on about the importance of use-case-driven evaluation. AnuDB was motivated by the need for a lightweight document database for IoT and edge scenarios with MQTT support — not as a direct SQLite replacement, but as an alternative where document flexibility and concurrent ingestion matter.

notpushkin•9mo ago
But is it web scale?
geodel•9mo ago
It's anuscale.
hashmak_jsn•9mo ago
yeah anu is everywhere
hashmak_jsn•9mo ago
You can run it. We validated it by inserting 1 million documents and didn't face any issues. Please feel free to raise an issue on GitHub.
graealex•9mo ago
The reason for using an embedded database is ease of deployment; sometimes deployment size is relevant as well. Those use cases usually pull in opposite directions: software that relies on a simple deployment usually doesn't require dozens of concurrent writers and/or hundreds of transactions per second, while software that does usually needs a more in-depth installation anyway, or at least it's worthwhile to go the extra mile to optimize performance. The sizes of the datasets will also often be orders of magnitude apart.

The reason for using an SQL database is that your data is structured in a way where using SQL to query it makes sense. Comparing SQL to No-SQL is pointless, unless your use case can be easily adapted to either of them, without suffering too much.

There are plenty of No-SQL embedded databases. A number of SQL embedded databases as well.

If speed, ease of deployment/embedding, and SQL were all important, I would use Firebird as an embedded database; like SQLite, it gets loaded into process memory. Depending on the version and feature set you include, it is also only a few MB, and even more ANSI SQL compliant than SQLite.

If you want to do comparisons, compare against other document/No-SQL databases.

hashmak_jsn•8mo ago
Thanks for the suggestion, I will compare with other NoSQL databases.