
Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
1•throwaw12•1m ago•0 comments

MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•1m ago•1 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•1m ago•0 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•3m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•7m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
1•andreabat•9m ago•0 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
1•mgh2•15m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•17m ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•22m ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•24m ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•24m ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•27m ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•28m ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•30m ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•32m ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•34m ago•0 comments

So whats the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•35m ago•0 comments

Ed Zitron: The Hater's Guide to Microsoft

https://bsky.app/profile/edzitron.com/post/3me7ibeym2c2n
2•vintagedave•38m ago•1 comments

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
1•__natty__•39m ago•0 comments

Show HN: Android-based audio player for seniors – Homer Audio Player

https://homeraudioplayer.app
3•cinusek•39m ago•1 comments

Starter Template for Ory Kratos

https://github.com/Samuelk0nrad/docker-ory
1•samuel_0xK•41m ago•0 comments

LLMs are powerful, but enterprises are deterministic by nature

2•prateekdalal•45m ago•0 comments

Make your iPad 3 a touchscreen for your computer

https://github.com/lemonjesus/ipad-touch-screen
2•0y•50m ago•1 comments

Internationalization and Localization in the Age of Agents

https://myblog.ru/internationalization-and-localization-in-the-age-of-agents
1•xenator•50m ago•0 comments

Building a Custom Clawdbot Workflow to Automate Website Creation

https://seedance2api.org/
1•pekingzcc•53m ago•1 comments

Why the "Taiwan Dome" won't survive a Chinese attack

https://www.lowyinstitute.org/the-interpreter/why-taiwan-dome-won-t-survive-chinese-attack
2•ryan_j_naughton•53m ago•0 comments

Xkcd: Game AIs

https://xkcd.com/1002/
2•ravenical•54m ago•0 comments

Windows 11 is finally killing off legacy printer drivers in 2026

https://www.windowscentral.com/microsoft/windows-11/windows-11-finally-pulls-the-plug-on-legacy-p...
1•ValdikSS•55m ago•0 comments

From Offloading to Engagement (Study on Generative AI)

https://www.mdpi.com/2306-5729/10/11/172
1•boshomi•57m ago•1 comments

AI for People

https://justsitandgrin.im/posts/ai-for-people/
1•dive•58m ago•0 comments

Replacing a cache service with a database

https://avi.im/blag/2025/db-cache/
81•avinassh•5mo ago

Comments

cbsmith•5mo ago
So close to getting push driven architecture...
hoppp•5mo ago
The cache service is a database of sorts that usually stores key value pairs.

The difference is in persistence and scaling and read/write permissions

Supermancho•5mo ago
i.e., a cache is a database. The difference is features and usage.
hinkley•5mo ago
A database is usually a union of all of the questions that can be asked about a topic. A cache by definition is a subset of that. Subsets are not the sets. And if you treat them as if they are, which 90% of people do, you’re gonna have a bad time.
Supermancho•5mo ago
> A database is usually a union of all of the questions that can be asked about a topic

That's some AI level sophism.

A database is a durable store of data that can be modified and read. Ostensibly, we're talking about computer databases. You can define the soft terms at your leisure and to suit your needs. There are many categories of discussion that will never intersect with this definition. Communication is not a database. Art is not a database. History is not a database. Medicine is not a database. et al.

A cache is a database. Differentiating a cache and database by label is a misnomer.

hinkley•5mo ago
> That's some AI level sophism.

Oh fuck off. Calling everything AI is so 2024.

A database is a system of record. It can also be a source of truth. A cache is neither. Treating it as one is dangerous. Insisting others should is idiocy.

Supermancho•5mo ago
> A database is a system of record. It can also be a source of truth.

This is meaningless. A cache is used in lieu of the value because it's considered equivalent.

> Insisting others should is idiocy.

I did no such thing. Good luck with whatever.

barrkel•5mo ago
No, what makes a cache a cache is invalidation. A cache is stale data. It's a latent out of date calculation. It's misinformation that risks surviving until it lies to the user.
jedberg•5mo ago
This is true but a lot of the trouble in invalidation can be avoided by using smarter cache keys.

For example, on reddit, fully rendered comments are cached, so that the renderer doesn't have to redo its work. But the cache key includes the date of the last edit on the comment, which is already known when requesting the value from the cache. In this way, you never have to invalidate that key, because editing the comment makes a new key. The old one will just get ejected eventually.
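A minimal sketch of the versioned-key idea described above (all names are illustrative, and a plain dict stands in for the cache service):

```python
import hashlib

# Toy in-process stand-in for a cache service (e.g. memcached);
# a real deployment would evict old entries under memory pressure.
cache = {}

def render_comment(comment):
    # Stand-in for the expensive rendering step.
    return f"<p>{comment['body']}</p>"

def comment_cache_key(comment):
    # The key embeds the last-edit timestamp, so an edit produces a
    # *new* key instead of requiring invalidation of the old one.
    raw = f"comment:{comment['id']}:{comment['edited_at']}"
    return hashlib.sha1(raw.encode()).hexdigest()

def get_rendered(comment):
    key = comment_cache_key(comment)
    if key not in cache:
        cache[key] = render_comment(comment)
    return cache[key]
```

The stale entry for the pre-edit key is never deleted explicitly; it simply stops being requested and ages out of the cache.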

phoronixrly•5mo ago
Rails also has a take on this https://github.com/rails/solid_cache
xixixao•5mo ago
This is a good deep dive into the complexity around caching: https://stack.convex.dev/caching-in

Having caching by default (like in Convex) is a really neat simplification to app development.

simonw•5mo ago
A friend of mine once argued that adding a cache to a system is almost always an indication that you have an architectural problem further down the stack, and you should try to address that instead.

The more software development experience I gain the more I agree with him on that!

jitl•5mo ago
Yeah my architecture problem is that Postgres RDS EBS storage is slow as dog. Sure our data won’t go poof if we lose an instance but it’s so slow.

(It’s not really my architecture problem. My architecture problem is that we store pages as grains of sand in a db instead of in a bucket, and that we allow user defined schemas)

jmull•5mo ago
That's true in my experience.

Caches have perfectly valid uses, but they are so often used in fundamentally poor ways, especially with databases.

AtheistOfFail•5mo ago
I disagree. For large search pages where you're building payloads from multiple records that don't change often, it could be beneficial to use a cache. Your cache ends up helping the most common results to be fetched less often and return data faster.
DrBazza•5mo ago
I'd argue the database falls into that category.

The two questions no one seems to ask are 'do I even need a database?', and 'where do I need my database?'

There are alternate data storage 'patterns' that aren't databases, though ultimately some sort of (structured) query language gets invented to query them.

barrkel•5mo ago
Caches suck because invalidation needs to be sprinkled all over the place in what is often an abstraction-violating way.

Then there's memoization, often a hack for an algorithm problem.

I once "solved" a huge performance problem with a couple of caches. The stain of it lies on my conscience. It was actually admitting defeat in reorganizing the logic to eliminate the need for the cache. I know that the invalidation logic will have caused bugs for years. I'm sure an engineer will curse my name for as long as that code lives.

IgorPartola•5mo ago
If you think of it as a cache, yes. If you think of it as another data layer then no.

For example, let’s say that every web page your CMS produces is created using a computationally expensive compilation. But the final product is more or less static and only gets updated every so often. You can basically have your compilation process pull the data from your source of truth such as your RSBMS but then store the final page (or large fragments of it) in something like MongoDB. In other words the cache replacement happens at generation time and not on demand. This means there is always a cached version available (though possibly slightly stale), and it is always served out of a very fast data store without expensive computation. I prefer this style of caching to on demand caching because it means you avoid cache invalidation issues AND the thundering herd problem.

Of course this doesn’t work for every workflow but I can get you quite far. And yes this example can also be sort of solved with a static site generator but look beyond that at things like document fragments, etc. This works very well for dynamic content where the read to write ratio is high.
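A sketch of the generation-time replacement scheme described above, with dicts standing in for the RDBMS and the fast document store (all names hypothetical):

```python
import time

# Stand-ins: `source_of_truth` is the RDBMS, `page_store` is the fast
# document store from the comment; both are dicts here.
source_of_truth = {"about": "raw content v1"}
page_store = {}

def compile_page(slug):
    # Stand-in for the computationally expensive compilation step.
    return f"<html>{source_of_truth[slug]}</html>"

def publish(slug):
    # Cache replacement happens at *generation* time: whenever the
    # source changes, the compiled page is pushed into the store.
    page_store[slug] = {"html": compile_page(slug), "built_at": time.time()}

def serve(slug):
    # Reads never trigger compilation, so there is no thundering herd
    # and no on-demand invalidation -- at the cost of slight staleness.
    return page_store[slug]["html"]
```

Because `serve` never computes anything, a burst of reads can never stampede the expensive path; the tradeoff is that readers see the last published version until the next `publish`.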

cpursley•5mo ago
Lost me at DumpsterFireDB as cache. But if the goal is to create an even worse architecture that's even harder to maintain, go for it.
IgorPartola•5mo ago
Sorry you lack the imagination to substitute your preferred data store into what I wrote. Hope it gets easier.
cpursley•5mo ago
I'll never have enough imagination to believe mongo is a good solution. Postgres has jsonb, vector type; redis is a fine-enough cache. Why use a known junk "database" when there are superior solutions and truly open source?
IgorPartola•5mo ago
I didn’t say you have to use it. I said you could. Or any other data store that fits your use case. I used a MongoDB instance back in 2012 in a serious production environment in this exact way and it worked flawlessly while Postgres was what gave us trouble (it had a bunch of features added since that would have made those issues disappear but back then it didn’t have built in replication for example.)

But again this is not an endorsement of MongoDB. I wouldn’t use it today but I did use it successfully and that company and tech stack sold for quite a bit of money and the software still runs, though I’m not sure on what stack. Again, if you are stuck on this one part of my comment… can’t help you.

chamomeal•5mo ago
I already typed a longer comment elsewhere that I don’t feel like reiterating but I agree with you. Caching is a natural outcome of not having infinite time and memory for running programs. Sometimes it’s a bandaid over bad design, but often it’s a responsible decision to take load off of other important systems
lemmsjid•5mo ago
Quite agree, this is how I explain it to people. When you think of cache as another derived dataset then you start to realize that the issues caches bring to architectures are often the result of not having an agreement between the business and engineering on acceptable data consistency tolerances. For example, outside the world of caching, if you email users a report, and the data is embedded in the email, then you are accepting that the user will see a snapshot of data at a particular time. In many cases this is fine, even preferred. Sometimes not, and instead you link the user to a realtime dashboard instead.

Pretty much every view the user sees of data should include an understanding as to how consistent that data is with the source of truth. Issues with caching (besides basic bugs) often come up when a performance issue comes up and people slap in a cache without renegotiating how the end user would expect the data to look relative to its upstream state.

hinkley•5mo ago
The cache is an incomplete dataset by definition. It’s not a data set, it’s a cache of a data set. You can never ensure you get a clean read of the system state from the cache because it’s never in sync and has gaps.
IgorPartola•5mo ago
What about materialized views? CPU cache? Only the Sith deal in absolutes :)
hinkley•5mo ago
CPU cache means the same address read twice returns the same value, with some exceptions for NUMA and multiple threads. But two reads of an application cache make no such guarantees.

There is a vast number of undiagnosed race conditions in modern code caused by cache eviction in the middle of 'transactions' under high system load.

hinkley•5mo ago
No.

It’s not a data layer, it’s global shared state. Global shared state always has consequences. Sometimes the consequences are worth the trouble. But it is trouble.

If you think about Source of Truth, System of Record, cache is neither of those, and sits between them. There are a lot of problems you can fix instead by improving the SoT or SoR situation in that area of the code.

convolvatron•5mo ago
in particular, the database already _has_ a cache. usually it's on the other side of the evaluation, at the block layer. which means you have to pay a cost to get to it (the network protocol, and the evaluation).

if you use materialized views, that surfaces exactly what you want in a cache, except here the view's consistency with the underlying data is maintained. that's hugely important.

that leaves us with the protocol. prepared statements might help. now we really should be about the same as the bump-on-the-wire cache. that doesn't get us the same performance as the in-process cache, but we didn't have to sacrifice any consistency or add any additional operational overhead to get it.

IgorPartola•5mo ago
Hard disagree. Having used the architecture I described in large practical deployments it works way better than what you are making it out to be. But I don’t know the domain you work in and your constraints so it is possible that for you it would not work.
hinkley•5mo ago
When all else fails, use caches. If all else hasn’t failed, it will once you use caches.
jedberg•5mo ago
If you have no cache, and your first thought is "this needs a cache", you're probably right. Chances are you need to optimize a query or storage pattern. But you're thinking like an engineer. It may be true that there is a "more correct" engineering solution, but adding a cache might be the most expedient solution.

But after you'd done all the optimizations, there is still a use case for caches. The main one being that a cache holds a hot set of data. Databases are getting better at this, and with AI in everything, latency of queries is getting swamped by waiting for the LLM, but I still see caches being important for decades to come.

tootie•5mo ago
Most of the time I use caching it's to cut down on network round trips. If I'm fetching data on every end user request that only updates daily or weekly caching that's a no-brainer. Edge caching for content sites is also a no-brainer. Caching something computationally expensive may be fishy but also may be useful. Even if you are just papering over some inefficient process, that's not necessarily a sin. Sometimes you have to be pragmatic.
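The round-trip-saving pattern described above is essentially a read-through cache with a TTL. A minimal sketch (names are illustrative; `now` is injectable only to make the behavior easy to test):

```python
import time

_cache = {}  # key -> (expires_at, value)

def cached_fetch(key, fetch, ttl_seconds, now=time.time):
    """Read-through cache with a TTL, for data that only changes
    daily or weekly: pay the network round trip at most once per TTL."""
    entry = _cache.get(key)
    if entry is not None and now() < entry[0]:
        return entry[1]
    value = fetch()          # the expensive round trip
    _cache[key] = (now() + ttl_seconds, value)
    return value
```

For daily-updating data, a 24-hour TTL turns N requests per day into one upstream fetch per day, and the worst-case staleness is bounded and known up front.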
tengbretson•5mo ago
Maybe these distinctions are useful to people in some situations, but to me this reads like wondering whether we can replace houses with buildings.
jayd16•5mo ago
More like they're stocking the fridge and wondering what living next to the market is like.
zeras•5mo ago
I think a fundamental mistake I see many developers make is they use caching trying to solve problems rather than improve efficiency.

It's the equivalent of adding more RAM to fix poor memory management or adding more CPUs/servers to compensate for resource heavy and slow requests and complex queries.

If your application requires caching to function effectively then you have a core issue that needs to be resolved, and if you don't address that issue then caching will become the problem eventually as your application grows more complex and active.

chamomeal•5mo ago
Idk I think caching is a crucial part of many well-designed systems. There’s a lot of very cache-able data out there. If invalidating events are well defined or the data is fine being stale (week/month level dashboards, for example), that’s a fantastic reason to use a cache. I’d much rather just stuff those values in a cache than figure out any other more complicated solution.

I also just think it’s a necessary evil of big systems. Sometimes you need derived data. You can even think about databases as a kind of cache: the “real” data is the stream of every event that ever updated data in the database! (Yes this stretching the meaning of cache lol)

However I agree that caching is often an easy bandaid for a bad architecture.

This talk on Apache Samza completely changed how I think about caching and derived data in general: https://youtu.be/fU9hR3kiOK0?si=t9IhfPtCsSyszscf

And this interview has some interesting insights on the problems that caching faces at super large scale systems (twitter specifically): https://softwareengineeringdaily.com/2023/01/12/caching-at-t...

hinkley•5mo ago
There are a lot of things necessary to be a successful human but doing them without doing the fundamentals just makes you a monkey in a suit.

Caching belongs at the end of a long development arc. And it will be the end whether you want it to or not. Adding caching is the beginning of the end of large architectural improvements, because caches jam up the analysis and testing infrastructure. Everything about improving or adding features to the code slows down, eventually to a crawl.

zeras•5mo ago
Caching is definitely a useful and even a key component in producing efficient and high-performance applications and services.

I think the mistake is not using caching, but rather using it too soon in the development process.

There are times when caching is a requirement because there is simply no way to provide efficient performance without it, but I think too many times developers jump straight to caching without thinking because it solves potential problems for them before they happen.

The real problem comes later, though, at scale, when caching can no longer compensate for the development inefficiencies.

Now the developers have to start rewriting core code which will take time to thoroughly complete and test and/or the engineers have to figure out a way to throw more resources at the problem.

hinkley•5mo ago
> It's the equivalent of adding more RAM to fix poor memory management

No it’s ten times worse than that. Adding RAM doesn’t make the task of fixing the memory management problems intrinsically harder. It just makes the problem bigger when you do fix it.

Adding caching to your app makes all of the tools used for detecting and categorizing performance issues much harder to use. We already have too many developers and “engineers” who balk at learning more than the basics of using these tools. Caching is like stirring up sediment in a submarine cave. Now only the most disciplined can still function and often just barely.

When you don’t have caches, data has to flow along the call tree. So if you need a user’s data in three places, that data either flows to those three or you have to look it up three times, which can introduce concurrency issues if the user metadata changes in the middle of a request. But because it’s inefficient there is clear incentive to fix the data propagation issues. Fixing those issues will make testing easier because now the data is passed in instead of having to mock the lookup code.

Then you introduce caching. Now the incentive is mostly gone, since you will only improve cold start performance. And now there is a perverse incentive to never propagate the data again. You start moving backward. Soon there are eight places in the code that use that data, because looking it up was “free” and they are all detached from each other. And now you can’t even turn off the cache, and cache traffic doesn’t tell you what your costs are.

And because the lookup is “free” the user lookup code disappears from your perf data and flame graphs. Only a madman like me will still tackle such a mess, and even I have difficulty finding the motivation.

For these reasons I say with great confidence and no small authority: adding caching to your app is the last major performance improvement most teams will ever see. So if you reach for it prematurely, you’re stuck with what you’ve got. Now a more astute competitor can deliver a faster, cheaper, or both product that eats your lunch and your team will swear there is nothing they can do about it because the app is already as fast as they can make it, and here are the statistics that “prove” it.

Friends don’t let friends put caches on immature apps.

lemmsjid•5mo ago
I’d say a useful way of thinking about caching is through the lens of the CAP theorem. You are facing a situation where compute requirements exceed the bounds of a single process. There are a variety of things you can do here, all with consequences to the Consistency aspect of your data. Two strategies with consequences are caching and horizontal scaling. So look to vertical scaling or efficiencies in data modeling first.

I like your comment btw. I’d add Observability to CAP to incorporate what you’re saying.

cortesoft•5mo ago
> If your application requires caching to function effectively then you have a core issue that needs to be resolved, and if you don't address that issue then caching will become the problem eventually as your application grows more complex and active.

I don’t think this is always true. Sometimes your app simply has data that takes a lot of computation to generate but doesn’t need to be generated often. Any way you solve this is going to be able to be described as a ‘cache’ even if you are just storing calculations in your main database. That doesn’t mean your application has a fundamental design flaw, it could mean your use case has a fundamental cache requirement.
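Storing expensive calculations in the main database, as suggested above, might be sketched like this (table and function names are hypothetical; SQLite stands in for the primary database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE computed_reports (
    report_id TEXT,
    inputs_version INTEGER,
    result TEXT,
    PRIMARY KEY (report_id, inputs_version))""")

def expensive_compute(report_id, inputs_version):
    # Stand-in for the computation that takes a long time to generate.
    return f"report {report_id} @ v{inputs_version}"

def get_report(report_id, inputs_version):
    # Recompute only when the inputs the result depends on have changed;
    # otherwise serve the stored row straight from the main database.
    row = conn.execute(
        "SELECT result FROM computed_reports "
        "WHERE report_id = ? AND inputs_version = ?",
        (report_id, inputs_version)).fetchone()
    if row:
        return row[0]
    result = expensive_compute(report_id, inputs_version)
    conn.execute(
        "INSERT OR REPLACE INTO computed_reports VALUES (?, ?, ?)",
        (report_id, inputs_version, result))
    return result
```

Keying the stored result by a version of its inputs is what makes this a durable derived dataset rather than a cache that needs explicit invalidation.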

jiggawatts•5mo ago
Not to mention latency! Caching does nothing to fix the latency of “misses”, which means any app that uses a caching layer to paper over a bad design will forever have a terrible P99 (or even P90) latency.

“But, but, when I reload the page now it’s fast! I fixed it!”

mannyv•5mo ago
If your database is slow because it's on spinning disks, then a cache will speed up access.

That's not a fundamental mistake, and there's very little you can do about that from an efficiency point of view.

It's easy to forget that there was a world without SSDs, high speed pipes, etc - but it actually did exist. And that wasn't so long ago either.

And of course sometimes putting data nearer to the user actually makes sense...like the Netflix movie boxes inside various POPs or CDNs. Bandwidth and latency are actual factors for many applications.

That said, most applications probably should investigate adding indexes to their databases (or noSQL databases) instead of adding a cache layer.

jayd16•5mo ago
So I guess this guy wants Firestore (or the OSS equivalent)?
eatonphil•5mo ago
Many of these points are not compelling to me when 1) you can filter both rows and columns (in postgres logical replication anyway [0]) and 2) SQL views.

[0] https://www.postgresql.org/docs/current/logical-replication-...

avinassh•5mo ago
Is it possible to create a filter that can work over a complex join operation?

That's what IVM systems like Noria can do. With application + cache, the application stores the final result in the cache. So, with these new IVM systems, you get that precomputed data directly from the database.

Views in Postgres are not materialized, right? So every small delta would require a refresh of the entire view.

jamesblonde•5mo ago
Some of these questions are informed by the Redis/DynamoDB or Postgres/MySQL world the author seems to inhabit.

Why would you want to do this? "I don’t know of any database built to handle hundreds of thousands of read replicas constantly pulling data."

If you want an open-source database with Redis latencies to handle millions of concurrent reads, you can use RonDB (disclaimer, I work on it).

"Since I’m only interested in a subset of the data, setting up a full read replica feels like overkill. It would be great to have a read replica with just partial data."

This is very unclear. Redis returns complete rows because it does not support pushdown projections or ordered indexes. RonDB supports these, plus distribution-aware partition-pruned index scans (start the transaction on the node/partition that contains the rows found via the index).

Reference:

https://www.rondb.com/post/the-process-to-reach-100m-key-loo...

miggy•5mo ago
We had a critical service that often got overwhelmed, not by one client app but by different apps over time. One week it was app A, the next week app B, each with its own buggy code suddenly spamming the service.

The quick fix suggested was caching, since a lot of requests were for the same query. But after debating, we went with rate limiting instead. Our reasoning: caching would just hide the bad behavior and keep the broken clients alive, only for them to cause failures in other downstream systems later. By rate limiting, we stopped abusive patterns across all apps and forced bugs to surface. In fact, we discovered multiple issues in different apps this way.

Takeaway: caching is good, but it is not a replacement for fixing buggy code or misuse. Sometimes the better fix is to protect the service and let the bugs show up where they belong.
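The rate-limiting approach described above is commonly implemented as a per-client token bucket. A minimal sketch (the injectable `now` is only there to make the behavior testable):

```python
import time

class TokenBucket:
    """Per-client token bucket: lets misbehaving callers fail fast and
    surface their bugs, instead of being silently absorbed by a cache."""
    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.now = now
        self.last = now()

    def allow(self):
        # Refill proportionally to elapsed time, then spend one token.
        t = self.now()
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller gets a 429
```

Keeping one bucket per client app is what made the buggy spammer visible in the story above: only the offending app hits the limit, and its 429s point straight at the bug.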

andersmurphy•5mo ago
I guess CPUs are pretty buggy with all their caches. If only the hardware people could fix their buggy systems.

In all seriousness sometimes a cache is what you need. Inline caching is a classic example.

WillDaSilva•5mo ago
There are times when a cache is appropriate, but I often find that it's more appropriate for the cache to be on the side of whoever is making all the requests. This isn't applicable when that is e.g. millions of different clients all making their own requests, but rather when we're talking about one internal service putting heavy load on another one.

The team with the demanding service can add a cache that's appropriate for their needs, and will be motivated to do so in order to avoid hitting the rate limit (or reduce costs, which should be attributed to them).

spyspy•5mo ago
You cannot trust your clients. Period. It doesn’t matter if they’re internal or external. If you design (and test!) with this assumption in mind, you’ll never have a bad day. I’ve really never understood why teams and companies have taken this defensive stance that their service is being “abused” despite having nothing even resembling an SLA. It seemed pretty inexcusable to not have a horizontally scaling service back in 2010 when I first started interning at tech companies, and I’m really confused why this is still an issue today.
WillDaSilva•5mo ago
I fully agree. The rate limits are how you control the behaviour of the clients. My suggestion is to leave caching to the clients, which they may want to do in order to avoid hitting the rate limit.
pixl97•5mo ago
>why teams and companies have taken this defensive stance that their service is being “abused” despite having nothing even resembling an SLA.

I mean because bad code on a fast client system can cause a load higher than all other users put together. This is why half the internet is behind something like cloudflare these days. Limiting, blocking, and banning has to be baked in.

Alex_L_Wood•5mo ago
It's funny how I encountered a problem which went exactly the opposite way! We initially introduced a rate limiter that was adequate at the time, but with the product scaling up it stopped being adequate, and any failures with 429 were either ignored or closed as client bugs. Only after some time did we realize that the rate of requests scaled up approximately with the rate of product growth. A quick fix was to simply remove the limiter, but after a couple of times when the DB decided to take a nap after being overwhelmed, we added a caching layer.

Just goes to show that there is no silver bullet - context, experience and good amount of gut feeling is paramount.

spyspy•5mo ago
Something that was drilled into me early in my career was that you cannot expect your cache to be up 100% of the time. The logical extension of that is your main DB needs to be able to handle 100% of your traffic at a moment’s notice. Not only has this kind of thinking saved my ass on several occasions, but it’s also actually kept my code much cleaner. I don’t want to say rate limiters and circuit breakers are the mark of bad engineering, butttt they’re usually just good engineering deferred.
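The "cache can be down at any moment" discipline described above amounts to treating the cache as an optimization rather than a dependency. A minimal read-through sketch (all names hypothetical; `CacheDown` stands in for whatever connection error your cache client raises):

```python
class CacheDown(Exception):
    """Stand-in for a cache connection/timeout error."""

def read_through(key, cache_get, cache_set, db_get):
    """Serve from cache when possible, but fall back to the primary
    database whenever the cache is unreachable."""
    try:
        value = cache_get(key)
        if value is not None:
            return value
    except CacheDown:
        return db_get(key)   # cache outage: DB takes 100% of traffic
    value = db_get(key)      # cache miss
    try:
        cache_set(key, value)
    except CacheDown:
        pass                 # write-back is best-effort only
    return value
```

The corollary in the comment still holds: this fallback only saves you if the database is actually provisioned to absorb the full read load when every request takes the `db_get` path.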
sellmesoap•5mo ago
Reminds me of gas plumbing: the indoor lines are only a few psi above ambient, but the lines themselves have to take line pressure up to 300 psi in case the regulator fails. It's good advice!
spyspy•5mo ago
You can never trust clients to behave. If your goal is to reduce infra cost, sure, rate limiting is an acceptable answer. But is it really that hard to throw on a cache and provision your service to be horizontally scalable?
miggy•5mo ago
Scaling matters, but why pay for abusive clients or bots? Adding a cache is easy; the hard part is invalidation, sync, and thundering herd. Use it if the product needs it, not as a band-aid.
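The thundering-herd problem mentioned above is often handled with a "single-flight" guard: on a miss, one caller computes the value while concurrent callers for the same key wait and reuse the result. A minimal sketch (names are illustrative):

```python
import threading

_results = {}
_locks = {}
_meta = threading.Lock()

def single_flight(key, compute):
    """On a cache miss, only one caller runs `compute` for a given key;
    concurrent callers block briefly and reuse that result."""
    with _meta:
        if key in _results:
            return _results[key]
        lock = _locks.setdefault(key, threading.Lock())
    with lock:
        with _meta:
            if key in _results:   # filled in while we waited
                return _results[key]
        value = compute()         # exactly one caller pays this cost
        with _meta:
            _results[key] = value
        return value
```

Without this guard, an expired hot key can send every concurrent request to the backend at once, which is exactly the stampede that makes "just add a cache" harder than it looks.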
gethly•5mo ago
Event-sourcing is a powerful tool that helps with exactly this. Why spin up a cache server when you can spin up another read DB instance for the same price and get unlimited capabilities...
mannyv•5mo ago
Instead of redis etc you could get away with static files served via a cdn.

Again, you should test. But the main reason imo for redis is connections and speed, not just speed.

chamomeal•5mo ago
Hey OP you may have seen this already, but in case you didn’t see my other comment, you should definitely check out this talk by Martin Kleppman.

https://youtu.be/fU9hR3kiOK0?si=t9IhfPtCsSyszscf

It details Apache samza, which I didn’t totally grasp but it seems similar to what you’re talking about here.

He talks about how if you could essentially use an event stream as your source of truth instead of a database, and you had a sufficiently powerful stream processor, you could define views on that data by consuming stream events.

The end result is kind of like an auto-updating cache with no invalidation issues or race conditions. Need a new view on the data? Just define it and run the entire event stream through it. Once the stream is processed, that source of data is perpetually accurate and up-to-date.

I’m not a database guy and most of this stuff is over my head, but I loved this talk and I think you should check it out! It’s the first thing I thought of when I read your post.
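The "views as folds over an event stream" idea from that talk can be sketched in a few lines (the per-user comment count is an invented example, not from the talk):

```python
def apply_event(view, event):
    """Fold one event into a derived view (here: per-user comment counts).
    Replaying the full log from an empty view rebuilds it exactly, so a
    new view is just a new fold over the stream -- no invalidation step."""
    kind, user = event
    counts = dict(view)  # pure function: return a new view
    if kind == "comment_added":
        counts[user] = counts.get(user, 0) + 1
    elif kind == "comment_deleted":
        counts[user] = counts.get(user, 0) - 1
    return counts

def build_view(events):
    view = {}
    for e in events:
        view = apply_event(view, e)
    return view
```

The "auto-updating cache" property follows from applying the same fold incrementally as new events arrive: the view can never drift from the log, because the log is the source of truth.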

ajcp•5mo ago
Thank you for sharing. I thoroughly enjoyed the talk and am as well not a "database guy".
stevoski•5mo ago
Something missing from the article:

For the type of cache usage described in the article, cache lookups are almost always O(1). This is because a cache value is retrieved for a specific key.

Whereas db queries are often more complicated and therefore take longer. Yes, plenty of db queries are fetching a row by a key, and therefore fast. But many queries use a join and a somewhat complicated WHERE clause.
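The contrast above, one key lookup versus a planned-and-executed join, can be made concrete (schema and names are invented; SQLite stands in for the database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
""")
conn.execute("INSERT INTO users VALUES (1, 'ada')")
conn.execute("INSERT INTO posts VALUES (10, 1, 'hello')")

# Cache-style access: one precomputed value behind one key, O(1).
cache = {"user:1:posts": [("hello",)]}

def from_cache(user_id):
    return cache[f"user:{user_id}:posts"]

def from_db(user_id):
    # The query the cache short-circuits: a join plus a WHERE clause
    # that the database has to plan and execute on every call.
    return conn.execute(
        "SELECT p.title FROM posts p JOIN users u ON p.user_id = u.id "
        "WHERE u.id = ?", (user_id,)).fetchall()
```

Both return the same rows; the difference is that the cache key already encodes the answer to the join, which is precisely the precomputation the article's IVM discussion is about.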

interstice•5mo ago
I've been thinking a lot recently about edge/client layer data sync, interesting to hear where others are at. Noria seems to have got as far as a smart way to store and manage tabular data, however this doesn't seem to help much when the frontend is built on blobs & if one isn't prepared to write the additional layer for read/write on top of the rest of the fetching system.

The dumb/MVP approach I'd like to try sometime is close-to-client read-only sqlite DBs that get managed in the background and neatly handled by wrapper functions around things like fetch. The part I've been slowly thinking about is Noria-style efficient handling of data structures while allowing for 'raw' queries; ideally I'd like to set this up so the frontend doesn't need an additional layer's worth of read/write functionality just to have CDN-like behaviour. Maybe something like plugins to [de/re]normalise different kinds of blob to tables (from gql, groqd, etc). I'd also like to include a realtime cache invalidation/update system to keep all clients in sync without cache clearing... If I ever get that far.

interstice•5mo ago
This got me thinking a bit more. Rest / GraphQL / Groq handled with adapters, flatten anything nested that references an ID to the row level. Opinionated queries (queries only fetch a superset/subset of the same structure). Fetched data 'fans out' the new content into the rows based on ID to fill out/update structure. Lives in a service worker or side by side with frontend. Drops oldest/least fetched data when limits are reached. Would something like that work?

Alternatively just ship an entire shallow copy of least changed / most used data as sqlite db's to the edge, push updates to those, and fetch from source anything that isn't in the DB. Might be simpler.