Also: HN needs to upgrade its bot upvoting detection tech. This is embarrassing. It was proper ownage of the HN #1 position for like 15 minutes straight. And then like #2-3 for an hour or so.
Personally, I agree that it's possible to detect this better, that it would drastically improve the quality of this site if that weren't the meta, and that it's something that should be openly discussed (in terms of practical suggestions).
But then who would be left to look at the recruitment ads if the quality of the content, comments, and community degrades enough that everyone stops coming?
All I know is that pretty much nobody here knows enough about the whole picture to have a meaningfully informed opinion. So a lot of these opinions are going to be driven by their imagination of the whole picture.
Indeed. On your schema. On your usage. On your app. On your users.
I've mostly been using the `pg_stat_statements` view (the second technique) to find out whether my workload is read- or write-heavy; it's plenty good in most situations.
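For anyone curious, this is roughly the shape of what I run; a minimal sketch, not the article's exact query, and it assumes the pg_stat_statements extension is installed (and loaded via shared_preload_libraries):

-- Bucket statements by leading verb and compare call counts and block activity.
-- Crude on purpose: CTEs, COPY, and anything else land in 'other'.
SELECT
  CASE
    WHEN query ILIKE 'select%' THEN 'read'
    WHEN query ILIKE 'insert%' OR query ILIKE 'update%' OR query ILIKE 'delete%' THEN 'write'
    ELSE 'other'
  END AS kind,
  sum(calls) AS total_calls,
  sum(shared_blks_hit + shared_blks_read) AS blocks_accessed,
  sum(shared_blks_dirtied) AS blocks_dirtied
FROM pg_stat_statements
GROUP BY 1
ORDER BY total_calls DESC;

Usually enough to see which side dominates, even if it misclassifies the odd writable CTE.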
WHERE
-- Filter to only show tables that have had some form of read or write activity
(s.n_tup_ins + s.n_tup_upd + s.n_tup_del) > 0
AND
(si.heap_blks_read + si.idx_blks_read) > 0
)
Should be OR; as written, the AND hides tables that have had only reads or only writes.

For a start, the article does not mention any other database. I don't know how you can say something is read- or write-heavy without comparing it to something else. It doesn't even compare different queries on the same database. Like, they just wrote a query and it does a lot of reads - so what? There's nothing here. Am I going mad? Why does this article exist?
But title aside, I found this post very useful for better understanding PG reads and writes under the hood, and how to actually measure your workload.
Curious if the tuning actions are any different if you're using a non-vanilla storage engine like AWS Aurora or GCP AlloyDB or Neon?
Queries that need to operate on more data than will fit in the allocated working memory will write to a temporary file on disk, in some cases perform an operation on that file (like sorting the whole thing), and finally, once done, delete it, which is even more disk write stress.
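If you want to see how often that's happening, the cumulative counters in pg_stat_database are a quick sanity check (a sketch; they only reset when the stats are reset):

-- Temp files spilled past work_mem per database, and how much was written to them.
SELECT datname,
  temp_files,
  pg_size_pretty(temp_bytes) AS temp_written
FROM pg_stat_database
WHERE datname IS NOT NULL
ORDER BY temp_bytes DESC;

Setting log_temp_files = 0 will also log every spill along with the statement that caused it.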
It's not really about whether it's read-heavy or write-heavy; it's about whether its usage creates disk I/O stress.
You can write millions of integer increments, and while technically that's "write heavy", there's no stress involved because you're just changing a value in a fixed-size space that's already been allocated. Updating something more dynamic, like a TEXT or JSON field that keeps growing, is a different story.
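You can get a rough feel for which case you're in from pg_stat_user_tables; a sketch, on the assumption that HOT (in-page) updates roughly correspond to the cheap "value in an already-allocated space" case:

-- Share of updates that were HOT, i.e. done in-page without touching indexes.
-- A low HOT share hints at row-growing updates or updates to indexed columns.
SELECT relname,
  n_tup_upd,
  n_tup_hot_upd,
  round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 1) AS hot_pct
FROM pg_stat_user_tables
ORDER BY n_tup_upd DESC
LIMIT 20;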
Also, in my experience the "Faster SSD Storage" point applies to both read- and write-heavy workloads.
alberth•7h ago
Postgres is an OLTP database, and those are designed for write-heavy workloads.
That being said, I agree most people have read-heavy needs.
da_chicken•7h ago
Most applications are used operationally or have a mix of reads and writes. Even in applications where the user can only consume the content presented there, there is often more than enough data capture just from tracking page history to make them relatively write-heavy.
withinboredom•6h ago
da_chicken•4h ago
I think a very small amount of data is read very often. However, until your DB gets very large, that data is going to end up as data pages cached in memory. So that data is extremely cheap to read.
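That's easy enough to check; a sketch against pg_stat_database, where a value near 100% means the hot pages really are being served from shared_buffers:

-- Buffer cache hit ratio per database since the last stats reset.
-- Note: blks_read that miss shared_buffers may still come out of the OS page cache,
-- so real disk reads can be even lower than this suggests.
SELECT datname,
  round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
FROM pg_stat_database
WHERE datname IS NOT NULL
ORDER BY blks_hit + blks_read DESC;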
I also think a significant amount of data that is generated in an OLTP system is written and never read, but you still had to pay the cost to write it. If you have an audit log, chances are you never need to look at it for any one piece of data. But you definitely had to write all the metadata for it.
But I'm also assuming that writes are at least 10 times as expensive as reads. More so if what you're modifying has indexes, since indexes are often functionally identical to a partial copy of the entire table. Indeed, I think that 10 times mark is conservative. Most RDBMSs use transaction logging and some kind of locking on writes. There's data validation and integrity checks on inserts and updates (and deletes if you have foreign keys).
I think 1 write to 10 reads is still write-heavy.
bigiain•1h ago
Pretty easy to tune the supplied SQL query to suit your opinion.
Pretty sure you just need to tweak the 2nd line:
ratio_target AS (SELECT 5 AS ratio),
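Something along these lines, assuming the article's query compares per-table tuple reads to tuple writes against that ratio (my own sketch, not the article's CTE verbatim):

-- Classify tables as read- or write-heavy against a tunable ratio.
WITH ratio_target AS (SELECT 10 AS ratio) -- bump this if you think 1 write per 10 reads is still write-heavy
SELECT s.relname,
  s.seq_tup_read + s.idx_tup_fetch AS tuples_read,
  s.n_tup_ins + s.n_tup_upd + s.n_tup_del AS tuples_written,
  CASE
    WHEN s.n_tup_ins + s.n_tup_upd + s.n_tup_del = 0 THEN 'read-only'
    WHEN s.seq_tup_read + s.idx_tup_fetch >= r.ratio * (s.n_tup_ins + s.n_tup_upd + s.n_tup_del) THEN 'read-heavy'
    ELSE 'write-heavy'
  END AS classification
FROM pg_stat_user_tables s CROSS JOIN ratio_target r
ORDER BY tuples_written DESC;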
hinkley•5h ago
The write traffic may be very heavy, but then you have many, many users who need to see that data. The question is whether the database, or a copy of the data from the database, is what serves that interest.
If you mediate all reads through a cache, then you have split the source of truth from the system of record. And then the read traffic on the system of record is a much lower ratio.