Ask HN: How would you design for this scale today?

5•phs318u•2w ago
20 years ago I worked at a utility (electricity distribution). The government introduced a mandate to switch everyone from "basic" meters (4 data points per year, read quarterly by a human looking at a dial) to "remotely read interval meters" (48 data points per day, streamed periodically over the day). The company had around a million meters, so our data intake was going to jump by roughly 3.5 orders of magnitude. Key requirements: store 15 months of meter data hot/active, bill customers on time-of-use tariffs, estimate/substitute missing data (previous-period lookup or guess), and (the key requirement) send the meter data to the regulator and market participants within 24 hours of receipt.
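The estimate/substitute requirement (previous-period lookup or guess) could be sketched like this — a minimal illustration only, assuming a dict-backed stand-in for the reads table and half-hourly intervals; all names here are hypothetical, not the original system's:

```python
from datetime import datetime, timedelta

# In-memory stand-in for the reads table: (meter_id, interval_start) -> kWh
reads = {}

def record(meter_id, ts, kwh):
    reads[(meter_id, ts)] = kwh

def substitute(meter_id, ts):
    """Fill a missing half-hourly read with the same interval from the
    previous day, falling back to the previous week, else zero."""
    for lookback in (timedelta(days=1), timedelta(days=7)):
        prior = reads.get((meter_id, ts - lookback))
        if prior is not None:
            return prior
    return 0.0

record("M1", datetime(2005, 3, 2, 9, 30), 1.25)
missing = substitute("M1", datetime(2005, 3, 3, 9, 30))  # previous-day lookup
```

A real implementation would also flag substituted values so they can be replaced once the actual read arrives, since the regulator distinguishes actual from estimated data.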

The solution I came up with back in 2005 was a sharded Oracle 10g database storing 1 read per row (controversial!) and - the biggie - using Oracle Advanced Queuing to run worker jobs that could scale independently (number of processes per queue). The design was basically a queue per business action (e.g. send data, aggregate consumption, etc.), with worker processes designed to do 1 thing: read a message off the queue (1 msg = 1 unit of work) and do their thing (if successful, write messages to other queues for downstream activity).
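The queue-per-action pattern described above can be sketched in a few lines — a single-process toy illustration only (the original used Oracle Advanced Queuing with C++/PL/SQL workers; the queue names and message shape here are made up):

```python
import queue

# One queue per business action, as in the original design
queues = {"send_data": queue.Queue(), "aggregate": queue.Queue()}

def send_data_worker(msg):
    """Process one unit of work; on success, enqueue downstream work."""
    # ... send msg to the regulator / market participants here ...
    queues["aggregate"].put(msg)  # trigger downstream activity

def drain(name, handler):
    """Run a worker loop until the queue is empty. Real workers run
    forever, and you scale by changing the process count per queue."""
    processed = 0
    while not queues[name].empty():
        handler(queues[name].get())
        processed += 1
    return processed

queues["send_data"].put({"meter": "M1", "kwh": 1.25})
n = drain("send_data", send_data_worker)
```

The key property is that each worker is stateless between messages, so throughput on any one action scales by adding processes to that queue without touching the others.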

This all ran on a single piece of Sun hardware running Solaris, with the worker processes being either C++ applets or PL/SQL stored procs. It processed 48 million rows per day and stored about 18 billion rows in the active table. It ran for about 10 years before being decommissioned in favour of a COTS solution.

My question is: how would such a solution be architected today? What design, what technologies, etc.? I am interested for my own learning (I've been working away from the pointy end of an IDE for quite some time).

Addendum: probably the only thing I wish we had done differently was use Oracle RAC. Instead we opted for an active-passive topology (annoyingly wasting 50% of our compute).

Feel free to ask clarifying questions.

Comments

theamk•2w ago
Input data rate: 1e6 * 48 / (24*60*60) = ~560 TPS

Working data size (1 day): (8B meter id + 8B timestamp + 8B value) * 48 * 1e6 = 1.1 gigabyte

Archival: 1.1 GB/day * 15 months (~456 days) ≈ 500 GB .. but looks like mostly write-once data?

That is pretty small as far as modern servers go. My usual approach for IoT things is event sourcing-like architecture - you have logging servers (for disaster recovery) and processing server(s), which keep things in RAM. If processing servers crash, they restart, and logging servers re-send data from last checkpoint.

But I am sure that Postgres could be used as well, or some sort of time-series database. Based on your description, you really don't need much as far as databases go - basically range queries and that's it.

mtmail•2w ago
The solution back then makes sense. It was a couple of years too early for CouchDB, Cassandra or MySQL Cluster (https://en.wikipedia.org/wiki/MySQL_Cluster) which are more suited for write-heavy applications and clustering across servers.

Later, spinning hard drives were replaced by SSDs, then NVMe. More open-source NoSQL and columnar storage solutions appeared, and cloud services started offering hosted databases with effectively unlimited scale. The full 500 GB would all be 'hot', ready to be queried in realtime.

Today I'd see three options:

* multiple cloud servers which receive the data and put it into a managed cloud database, like Google BigQuery. They'll handle all the scaling, including region replication and backups. You might overpay, but likely still less than an Oracle software licence.

* a specialist analytics store, for example ClickHouse. It can handle 10,000+ incoming rows per second. The data store later does compaction, storing data by date and other optimizations which make recent data faster to query than older data.

* place it into JSON or CSV files, one per hour or one per day, and query with DuckDB.
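The file-per-hour option is the simplest to sketch. Below is a stdlib-only illustration that partitions reads into hourly CSV files; the directory layout, column names, and the DuckDB query shown in the comment are all assumptions, not a prescribed schema:

```python
import csv
import os
from datetime import datetime

def hourly_path(root, ts):
    """One CSV file per hour, e.g. root/2026/02/07/09.csv."""
    return os.path.join(root, ts.strftime("%Y/%m/%d"), ts.strftime("%H") + ".csv")

def append_read(root, meter_id, ts, kwh):
    """Append one interval read to its hourly partition file."""
    path = hourly_path(root, ts)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    is_new = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        w = csv.writer(f)
        if is_new:
            w.writerow(["meter_id", "ts", "kwh"])
        w.writerow([meter_id, ts.isoformat(), kwh])

# DuckDB can then query the whole tree in place, along the lines of:
#   SELECT meter_id, sum(kwh)
#   FROM read_csv_auto('root/*/*/*.csv')
#   GROUP BY meter_id;
```

Partitioning by hour keeps each file small and makes the 15-month retention rule a matter of deleting old directories.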

fouc•2w ago
Elixir would be a good fit I think.
aristofun•2w ago
Those numbers and all the features are easy-peasy for a single, not-very-big Flink cluster.