frontpage.

Show HN: I would like you to evaluate my profile

https://github.com/imshota1009/imshota1009
1•imshota1009•1m ago•0 comments

Oakland to silence police radios from public beginning Wednesday

https://www.mercurynews.com/2025/08/31/oakland-to-silence-police-radios-from-public-beginning-wed...
1•pfexec•3m ago•0 comments

Anthropic's surprise settlement adds new wrinkle in AI copyright war

https://www.reuters.com/legal/government/anthropics-surprise-settlement-adds-new-wrinkle-ai-copyr...
1•1vuio0pswjnm7•3m ago•0 comments

Vvvv Gamma 7.0 Release

https://vvvv.org/blog/2025/vvvv-gamma-7.0-release/
1•bj-rn•3m ago•0 comments

Triboluminescence

https://en.wikipedia.org/wiki/Triboluminescence
1•bookofjoe•6m ago•0 comments

Data-Driven Mechanism Design: Jointly Eliciting Preferences and Information [pdf]

https://cowles.yale.edu/sites/default/files/2025-03/d2418r1.pdf
1•sito42•6m ago•0 comments

The FTC Warns Big Tech Companies Not to Apply the Digital Services Act

https://www.wired.com/story/big-tech-companies-in-the-us-have-been-told-not-to-apply-the-digital-...
1•kurhan•6m ago•0 comments

Apple: 11-Inch MacBook Air and Two Other Macs Are Now Obsolete

https://www.macrumors.com/2025/08/31/11-inch-macbook-air-is-obsolete/
2•tosh•7m ago•0 comments

Inside The Box: Everything I Did with an Arduino Starter Kit [video]

https://www.youtube.com/watch?v=25vJvHLKvSE
1•lopespm•8m ago•0 comments

Inverting the Xorshift128 random number generator

https://littlemaninmyhead.wordpress.com/2025/08/31/inverting-the-xorshift128-random-number-genera...
1•rurban•10m ago•0 comments

Is It a Comet or Alien Technology? [video]

https://www.youtube.com/watch?v=FsyzVoIuUGU
1•breadwinner•12m ago•0 comments

Show HN: I built a tool to edit images with AI

https://pixeledit.ai
1•andreict•12m ago•0 comments

Super Micro shares dip after AI server maker flags financial control concerns

https://www.reuters.com/business/super-micro-shares-dip-after-ai-server-maker-flags-financial-con...
1•1vuio0pswjnm7•13m ago•0 comments

ChatGPT affirmed Greenwich man's fears before murder-suicide

https://www.ctinsider.com/news/article/chatgpt-greenwich-ct-murder-stein-erik-soelberg-21022277.php
1•healsdata•15m ago•0 comments

Ayfkm blog: Painful bureaucratic journeys of a multicultural family

https://ayfkm.blog/
1•lollobomb•16m ago•0 comments

Writing in Djot

https://pdx.su/blog/2025-06-28-writing-in-djot/
1•networked•17m ago•0 comments

Breakneck: China's Quest to Engineer the Future

https://danwang.co/breakneck/
2•naves•23m ago•0 comments

Media Influence and Spatial Voting: The Role of Perceived Party Positions

https://link.springer.com/article/10.1007/s11109-025-10031-9
1•PaulHoule•23m ago•0 comments

90% of European gaming revenue in 2024 was digital purchases with only 15% on PC [pdf]

https://www.videogameseurope.eu/wp-content/uploads/2025/08/VGE-2024-KF-2024.pdf
1•HelloUsername•25m ago•0 comments

What Is Algebra? (2011)

https://profkeithdevlin.org/2011/11/20/what-is-algebra/
1•FromTheArchives•26m ago•0 comments

Chicago has the most lead pipes in the nation. We mapped them all

https://grist.org/accountability/chicago-lead-pipe-replacement-map-health/
3•rntn•27m ago•1 comments

Vibe Security – Vibe-coding security scanner that works

https://vibesecurity.co/
1•benstirling•27m ago•0 comments

Apple Hints at iPhone 17 Models Lacking SIM Card Slot in More Countries

https://www.macrumors.com/2025/08/31/apple-hints-at-esim-only-iphone-17/
2•onesandofgrain•27m ago•0 comments

Double-tap strike kills 5 more journalists in Gaza hospital

https://www.reuters.com/business/media-telecom/obituary-hussam-al-masri-reuters-journalist-killed...
5•andrepd•34m ago•2 comments

Ocean current 'collapse' could trigger 'profound cooling' in northern Europe

https://www.carbonbrief.org/ocean-current-collapse-could-trigger-profound-cooling-in-northern-eur...
3•shinryuu•34m ago•0 comments

Firer

https://firer.io
1•robertsinc•37m ago•1 comments

Rome Podcast (2007)

https://thehistoryofrome.typepad.com/the_history_of_rome/2007/07/
2•sonicrocketman•39m ago•0 comments

Binary Inference Dictionaries for Electoral NLP

https://matthodges.com/posts/2023-10-01-BIDEN-binary-inference-dictionaries-for-electoral-nlp/
1•m-hodges•42m ago•0 comments

The 'self-inflicted injury' to US tourism making Americans angry, disappointed

https://www.cnn.com/2025/08/31/travel/international-tourist-decline-united-states
12•mikhael•43m ago•0 comments

How many HTTP requests/second can a Single Machine handle?

https://binaryigor.com/how-many-http-requests-can-a-single-machine-handle.html
16•BinaryIgor•49m ago•7 comments

Replacing a Cache Service with a Database

https://avi.im/blag/2025/db-cache/
42•avinassh•4h ago

Comments

cbsmith•3h ago
So close to getting push driven architecture...
hoppp•3h ago
The cache service is a database of sorts that usually stores key-value pairs.

The difference is in persistence, scaling, and read/write permissions

Supermancho•2h ago
i.e., a cache is a database. The difference is features and usage.
hinkley•11m ago
A database is usually a union of all of the questions that can be asked about a topic. A cache by definition is a subset of that. Subsets are not the sets. And if you treat them as if they are, which 90% of people do, you’re gonna have a bad time.
barrkel•2h ago
No, what makes a cache a cache is invalidation. A cache is stale data. It's a latent out of date calculation. It's misinformation that risks surviving until it lies to the user.
phoronixrly•3h ago
Rails also has a take on this https://github.com/rails/solid_cache
xixixao•3h ago
This is a good deep dive into the complexity around caching: https://stack.convex.dev/caching-in

Having caching by default (like in Convex) is a really neat simplification to app development.

simonw•2h ago
A friend of mine once argued that adding a cache to a system is almost always an indication that you have an architectural problem further down the stack, and you should try to address that instead.

The more software development experience I gain the more I agree with him on that!

jitl•2h ago
Yeah my architecture problem is that Postgres RDS EBS storage is slow as dog. Sure our data won’t go poof if we lose an instance but it’s so slow.

(It’s not really my architecture problem. My architecture problem is that we store pages as grains of sand in a db instead of in a bucket, and that we allow user defined schemas)

jmull•2h ago
That's true in my experience.

Caches have perfectly valid uses, but they are so often used in fundamentally poor ways, especially with databases.

AtheistOfFail•2h ago
I disagree. For large search pages where you're building payloads from multiple records that don't change often, it could be beneficial to use a cache. Your cache ends up helping the most common results to be fetched less often and return data faster.
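
A minimal sketch of that pattern in Python (all names illustrative, not from the article): cache the assembled payload with a TTL so the most common queries skip the expensive multi-record build.

```python
import time

class TTLCache:
    """Minimal TTL cache: entries expire ttl seconds after being stored."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None or entry[0] < time.monotonic():
            return None  # miss or expired
        return entry[1]

    def put(self, key, value):
        self.store[key] = (time.monotonic() + self.ttl, value)

def search(query, cache, build_payload):
    """Serve from cache when possible; otherwise assemble and store."""
    cached = cache.get(query)
    if cached is not None:
        return cached
    payload = build_payload(query)  # expensive multi-record assembly
    cache.put(query, payload)
    return payload
```

The trade-off is exactly the one debated above: results can be up to `ttl` seconds stale, in exchange for the common case never touching the slow path.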
DrBazza•2h ago
I'd argue the database falls into that category.

The two questions no one seems to ask are 'do I even need a database?', and 'where do I need my database?'

There are alternate data storage 'patterns' that aren't databases. Though ultimately some sort of (structured) query language gets invented to query them.

barrkel•2h ago
Caches suck because invalidation needs to be sprinkled all over the place in what is often an abstraction-violating way.

Then there's memoization, often a hack for an algorithm problem.

I once "solved" a huge performance problem with a couple of caches. The stain of it lies on my conscience. It was actually admitting defeat in reorganizing the logic to eliminate the need for the cache. I know that the invalidation logic will have caused bugs for years. I'm sure an engineer will curse my name for as long as that code lives.

IgorPartola•2h ago
If you think of it as a cache, yes. If you think of it as another data layer then no.

For example, let’s say that every web page your CMS produces is created using a computationally expensive compilation. But the final product is more or less static and only gets updated every so often. You can basically have your compilation process pull the data from your source of truth such as your RDBMS but then store the final page (or large fragments of it) in something like MongoDB. In other words the cache replacement happens at generation time and not on demand. This means there is always a cached version available (though possibly slightly stale), and it is always served out of a very fast data store without expensive computation. I prefer this style of caching to on-demand caching because it means you avoid cache invalidation issues AND the thundering herd problem.

Of course this doesn’t work for every workflow, but it can get you quite far. And yes, this example can also be sort of solved with a static site generator, but look beyond that at things like document fragments, etc. This works very well for dynamic content where the read-to-write ratio is high.
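
The generate-time pattern described here could be sketched like this (a toy in-memory stand-in for the fast store; all names are illustrative):

```python
class PublishTimeCache:
    """Pages are (re)compiled when content changes, never on demand.
    Readers always hit the fast store, so there is no cold miss and no
    thundering herd; staleness is bounded by publish frequency."""
    def __init__(self, compile_page):
        self.compile_page = compile_page  # expensive render from source of truth
        self.pages = {}  # stand-in for a fast store (MongoDB, files, etc.)

    def publish(self, page_id, source_data):
        # Runs in the write path or a background job, not the request path.
        self.pages[page_id] = self.compile_page(source_data)

    def serve(self, page_id):
        # Read path: a plain lookup, possibly slightly stale.
        return self.pages[page_id]
```

Note how invalidation disappears as a concept: publishing a new version simply overwrites the old one.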

cpursley•2h ago
Lost me at DumpsterFireDB as cache. But if the goal is to create an even worse architecture that's even harder to maintain, go for it.
IgorPartola•53m ago
Sorry you lack the imagination to substitute your preferred data store into what I wrote. Hope it gets easier.
cpursley•18m ago
I'll never have enough imagination to believe mongo is a good solution. Postgres has jsonb, redis is a fine-enough cache. Why use a junk database when these are superior solutions and truly open source?
chamomeal•1h ago
I already typed a longer comment elsewhere that I don’t feel like reiterating but I agree with you. Caching is a natural outcome of not having infinite time and memory for running programs. Sometimes it’s a bandaid over bad design, but often it’s a responsible decision to take load off of other important systems
lemmsjid•57m ago
Quite agree, this is how I explain it to people. When you think of cache as another derived dataset then you start to realize that the issues caches bring to architectures are often the result of not having an agreement between the business and engineering on acceptable data consistency tolerances. For example, outside the world of caching, if you email users a report, and the data is embedded in the email, then you are accepting that the user will see a snapshot of data at a particular time. In many cases this is fine, even preferred. Sometimes not, and instead you link the user to a realtime dashboard instead.

Pretty much every view the user sees of data should include an understanding as to how consistent that data is with the source of truth. Issues with caching (besides basic bugs) often come up when a performance issue comes up and people slap in a cache without renegotiating how the end user would expect the data to look relative to its upstream state.

hinkley•14m ago
The cache is an incomplete dataset by definition. It’s not a data set, it’s a cache of a data set. You can never ensure you get a clean read of the system state from the cache because it’s never in sync and has gaps.
hinkley•18m ago
No.

It’s not a data layer, it’s global shared state. Global shared state always has consequences. Sometimes the consequences are worth the trouble. But it is trouble.

If you think about Source of Truth and System of Record, a cache is neither of those, and sits between them. There are a lot of problems you can fix instead by improving the SoT or SoR situation in that area of the code.

hinkley•23m ago
When all else fails, use caches. If all else hasn’t failed, it will once you use caches.
tengbretson•2h ago
Maybe these distinctions are useful to people in some situations, but to me this reads like wondering whether we can replace houses with buildings.
jayd16•2h ago
More like they're stocking the fridge and wondering what living next to the market is like.
zeras•2h ago
I think a fundamental mistake I see many developers make is they use caching trying to solve problems rather than improve efficiency.

It's the equivalent of adding more RAM to fix poor memory management or adding more CPUs/servers to compensate for resource heavy and slow requests and complex queries.

If your application requires caching to function effectively then you have a core issue that needs to be resolved, and if you don't address that issue then caching will become the problem eventually as your application grows more complex and active.

chamomeal•1h ago
Idk I think caching is a crucial part of many well-designed systems. There’s a lot of very cache-able data out there. If invalidating events are well defined or the data is fine being stale (week/month level dashboards, for example), that’s a fantastic reason to use a cache. I’d much rather just stuff those values in a cache than figure out any other more complicated solution.

I also just think it’s a necessary evil of big systems. Sometimes you need derived data. You can even think about databases as a kind of cache: the “real” data is the stream of every event that ever updated data in the database! (Yes this stretching the meaning of cache lol)

However I agree that caching is often an easy bandaid for a bad architecture.

This talk on Apache Samza completely changed how I think about caching and derived data in general: https://youtu.be/fU9hR3kiOK0?si=t9IhfPtCsSyszscf

And this interview has some interesting insights on the problems that caching faces at super large scale systems (twitter specifically): https://softwareengineeringdaily.com/2023/01/12/caching-at-t...
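
The "well-defined invalidating events" case could be sketched as a cache where each entry declares exactly which events drop it, so invalidation lives in one place instead of being scattered (illustrative names, not from either link):

```python
class EventInvalidatedCache:
    """Cache whose entries are dropped only by well-defined events,
    not by invalidation logic sprinkled through the codebase."""
    def __init__(self):
        self.entries = {}
        self.deps = {}  # event name -> set of keys it invalidates

    def put(self, key, value, invalidated_by):
        self.entries[key] = value
        for event in invalidated_by:
            self.deps.setdefault(event, set()).add(key)

    def get(self, key):
        return self.entries.get(key)

    def on_event(self, event):
        # Single choke point: every invalidation rule is declared at put().
        for key in self.deps.pop(event, set()):
            self.entries.pop(key, None)
```

For stale-tolerant data like weekly dashboards, the event might fire as rarely as once a day, which is the whole appeal.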

hinkley•24m ago
There are a lot of things necessary to be a successful human but doing them without doing the fundamentals just makes you a monkey in a suit.

Caching belongs at the end of a long development arc. And it will be the end whether you want it to or not. Adding caching is the beginning of the end of large architectural improvements, because caches jam up the analysis and testing infrastructure. Everything about improving or adding features to the code slows down, eventually to a crawl.

hinkley•45m ago
> It's the equivalent of adding more RAM to fix poor memory management

No it’s ten times worse than that. Adding RAM doesn’t make the task of fixing the memory management problems intrinsically harder. It just makes the problem bigger when you do fix it.

Adding caching to your app makes all of the tools used for detecting and categorizing performance issues much harder to use. We already have too many developers and “engineers” who balk at learning more than the basics of using these tools. Caching is like stirring up sediment in a submarine cave. Now only the most disciplined can still function and often just barely.

When you don’t have caches, data has to flow along the call tree. So if you need a user’s data in three places, that data either flows to those three or you have to look it up three times, which can introduce concurrency issues if the user metadata changes in the middle of a request. But because it’s inefficient there is clear incentive to fix the data propagation issues. Fixing those issues will make testing easier because now the data is passed in instead of having to mock the lookup code.

Then you introduce caching. Now the incentive is mostly gone, since you will only improve cold start performance. And now there is a perverse incentive to never propagate the data again. You start moving backward. Soon there are eight places in the code that use that data, because looking it up was “free” and they are all detached from each other. And now you can’t even turn off the cache, and cache traffic doesn’t tell you what your costs are.

And because the lookup is “free” the user lookup code disappears from your perf data and flame graphs. Only a madman like me will still tackle such a mess, and even I have difficulty finding the motivation.

For these reasons I say with great confidence and no small authority: adding caching to your app is the last major performance improvement most teams will ever see. So if you reach for it prematurely, you’re stuck with what you’ve got. Now a more astute competitor can deliver a faster or cheaper product (or both) that eats your lunch, and your team will swear there is nothing they can do about it because the app is already as fast as they can make it, and here are the statistics that “prove” it.

Friends don’t let friends put caches on immature apps.
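
The "data flows along the call tree" point could be illustrated with a toy contrast (hypothetical functions, not from the article):

```python
# The anti-pattern: each helper re-fetches the user, so three call
# sites mean three lookups (or, later, a hidden cache), and tests
# must mock the lookup.
def greeting_refetch(user_id, fetch_user):
    return f"Hello {fetch_user(user_id)['name']}"

# Data flows along the call tree: fetched once at the boundary and
# passed down. Helpers become pure and trivially testable, and every
# helper sees the same consistent snapshot of the user.
def greeting(user):
    return f"Hello {user['name']}"

def sidebar(user):
    return f"{user['name']} ({user['plan']})"

def render_page(user_id, fetch_user):
    user = fetch_user(user_id)  # single, consistent read
    return greeting(user) + "\n" + sidebar(user)
```

A cache makes `greeting_refetch` look cheap, which is exactly the perverse incentive described above.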

lemmsjid•9m ago
I’d say a useful way of thinking about caching is through the lens of the CAP theorem. You are facing a situation where compute requirements exceed the bounds of a single process. There are a variety of things you can do here, all with consequences to the Consistency aspect of your data. Two strategies with consequences are caching and horizontal scaling. So look to vertical scaling or efficiencies in data modeling first.

I like your comment btw. I’d add Observability to CAP to incorporate what you’re saying.

cortesoft•20m ago
> If your application requires caching to function effectively then you have a core issue that needs to be resolved, and if you don't address that issue then caching will become the problem eventually as your application grows more complex and active.

I don’t think this is always true. Sometimes your app simply has data that takes a lot of computation to generate but doesn’t need to be generated often. Any way you solve this can be described as a ‘cache’, even if you are just storing calculations in your main database. That doesn’t mean your application has a fundamental design flaw; it could mean your use case has a fundamental cache requirement.

jayd16•2h ago
So I guess this guy wants Firestore (or the OSS equivalent)?
eatonphil•1h ago
Many of these points are not compelling to me when 1) you can filter both rows and columns (in postgres logical replication anyway [0]) and 2) SQL views.

[0] https://www.postgresql.org/docs/current/logical-replication-...

avinassh•1h ago
Is it possible to create a filter that can work over a complex join operation?

That's what IVM systems like Noria can do. With application + cache, the application stores the final result in the cache. So, with these new IVM systems, you get that precomputed data directly from the database.

Materialized views in Postgres are not incrementally maintained, right? So every small delta would require a refresh of the entire view.
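
The core idea of IVM over a join could be sketched in miniature: keep a precomputed join result up to date by applying deltas as base rows arrive, instead of re-running the join. This is a toy with illustrative names, roughly the kind of maintenance systems like Noria automate:

```python
from collections import defaultdict

class JoinView:
    """Toy incremental maintenance of: users JOIN orders ON user_id.
    Each insert updates the materialized result with just its delta."""
    def __init__(self):
        self.users = {}                  # user_id -> name
        self.orders = defaultdict(list)  # user_id -> [amount, ...]
        self.view = []                   # materialized (name, amount) rows

    def insert_user(self, user_id, name):
        self.users[user_id] = name
        for amount in self.orders[user_id]:  # join with already-seen orders
            self.view.append((name, amount))

    def insert_order(self, user_id, amount):
        self.orders[user_id].append(amount)
        if user_id in self.users:            # delta join, no full recompute
            self.view.append((self.users[user_id], amount))
```

Reads then hit `view` directly, which is the "precomputed data straight from the database" property described above.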

jamesblonde•1h ago
Some of these questions are informed by the Redis/DynamoDB or Postgres/MySQL world the author seems to inhabit.

Why would you want to do this? "I don’t know of any database built to handle hundreds of thousands of read replicas constantly pulling data."

If you want an open-source database with Redis latencies to handle millions of concurrent reads, you can use RonDB (disclaimer, I work on it).

"Since I’m only interested in a subset of the data, setting up a full read replica feels like overkill. It would be great to have a read replica with just partial data."

This is very unclear. Redis returns complete rows because it does not support pushdown projections or ordered indexes. RonDB supports these, as well as distribution-aware partition-pruned index scans (start the transaction on the node/partition that contains the rows found via the index).

Reference:

https://www.rondb.com/post/the-process-to-reach-100m-key-loo...

miggy•51m ago
We had a critical service that often got overwhelmed, not by one client app but by different apps over time. One week it was app A, the next week app B, each with its own buggy code suddenly spamming the service.

The quick fix suggested was caching, since a lot of requests were for the same query. But after debating, we went with rate limiting instead. Our reasoning: caching would just hide the bad behavior and keep the broken clients alive, only for them to cause failures in other downstream systems later. By rate limiting, we stopped abusive patterns across all apps and forced bugs to surface. In fact, we discovered multiple issues in different apps this way.

Takeaway: caching is good, but it is not a replacement for fixing buggy code or misuse. Sometimes the better fix is to protect the service and let the bugs show up where they belong.
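
The rate-limiting choice described here is commonly implemented as a per-client token bucket; a minimal sketch (illustrative, not the poster's actual system):

```python
class TokenBucket:
    """Per-client token bucket: each client gets `rate` requests per
    second with bursts up to `capacity`. Excess requests are rejected,
    so buggy callers surface instead of being absorbed by a cache."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.state = {}  # client -> (tokens, last_timestamp)

    def allow(self, client, now):
        tokens, last = self.state.get(client, (self.capacity, now))
        # Refill proportionally to elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.state[client] = (tokens - 1, now)
            return True
        self.state[client] = (tokens, now)
        return False
```

Because limits are per client, one app's bug throttles only that app, which is exactly how the misbehaving clients were isolated.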

gethly•48m ago
Event-sourcing is a powerful tool that helps with exactly this. Why spin up a cache server when you can spin up another read DB instance for the same price and get unlimited capabilities...
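
A tiny sketch of that idea (illustrative names): the append-only log is the source of truth, and any number of read models can be built, or rebuilt, by replaying it.

```python
class EventLog:
    """Toy event sourcing: an append-only log as the source of truth."""
    def __init__(self):
        self.events = []

    def append(self, event):
        self.events.append(event)

def build_balances(log):
    """A read model: fold the log into current account balances.
    Spinning up 'another read DB instance' is just replaying the
    log into a fresh fold like this one."""
    balances = {}
    for kind, account, amount in log.events:
        delta = amount if kind == "deposit" else -amount
        balances[account] = balances.get(account, 0) + delta
    return balances
```

Each read model is a derived dataset, so the consistency question becomes explicit: how far behind the log is this fold allowed to be?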