frontpage.
Volvelle, an early example of a paper analog computer

https://en.wikipedia.org/wiki/Volvelle
1•valzevul•5m ago•0 comments

The Launchpad macOS 26 deserves

https://www.launchie.app
1•nickfthedev•11m ago•0 comments

Dumper: CLI utility for creating database backups – PostgreSQL, MySQL and others

https://github.com/elkirrs/dumper
2•thunderbong•16m ago•0 comments

Scientists Discover How Leukemia Cells Evade Treatment

https://www.rutgers.edu/news/scientists-discover-how-leukemia-cells-evade-treatment
1•geox•18m ago•0 comments

The Inevitable Shift from Prompts to Answers

https://www.aivojournal.org/the-inevitable-shift-from-prompts-to-answers/
2•businessmate•25m ago•1 comments

BoE chief: Brexit impact on UK economy negative for foreseeable future

https://news.sky.com/story/brexit-impact-on-uk-economy-negative-for-foreseeable-future-bank-of-en...
3•teleforce•25m ago•0 comments

I wish SSDs gave you CPU performance style metrics about their activity

https://utcc.utoronto.ca/~cks/space/blog/tech/SSDWritePerfMetricsWish
1•zdw•31m ago•0 comments

Lightning Computational Graph Theory

https://www.youtube.com/watch?v=A-z2ZIMWbuY
3•_untra_•34m ago•0 comments

But AI companies grow so fast

https://99d.substack.com/p/but-ai-companies-grow-so-fast
2•airstrike•34m ago•0 comments

Ask HN: Are you a real human or an LLM?

1•whatever1•36m ago•2 comments

Researchers find adding simple sentence to prompts makes AI models more creative

https://venturebeat.com/ai/researchers-find-adding-this-one-simple-sentence-to-prompts-makes-ai-m...
3•jdnier•43m ago•0 comments

Mortality in the news vs. what we usually die from

https://flowingdata.com/2025/10/08/mortality-in-the-news-vs-what-we-usually-die-from/
2•paulpauper•48m ago•0 comments

What I Learned from Lifting

https://www.atvbt.com/what-i-learned-from-lifting/
2•paulpauper•48m ago•0 comments

Another axiom that Euclid missed

https://web.archive.org/web/20250821165148/https://mathenchant.wordpress.com/2025/01/17/the-real-...
2•gsf_emergency_4•53m ago•0 comments

Show HN: NoCloud Bulk Image Converter (Cross-Platform, Privacy-First)

https://github.com/goto-eof/noc-convert
1•cbrx31•54m ago•1 comments

Dive-computer evidence ignored after 12yr-old's death

https://divernet.com/scuba-news/health-safety/death/dive-computer-evidence-ignored-after-12yr-old...
3•pooyamehri•58m ago•1 comments

Show HN: Drag to AirDrop

https://sindresorhus.com/menu-drop
3•mofle•1h ago•0 comments

Kintsugi Love

https://asim.bearblog.dev/kintsugi-love/
4•asim-shrestha•1h ago•1 comments

The traffickers are winning the war on drugs

https://www.economist.com/briefing/2025/10/16/the-traffickers-are-winning-the-war-on-drugs
25•coloneltcb•1h ago•19 comments

'Girl Take Your Crazy Pills ': Antidepressants Recast as Hot Lifestyle Accessory

https://www.wsj.com/health/wellness/anti-depressants-lifestyle-accessory-3b66027d
3•clanky•1h ago•0 comments

Zeno – open-source AI assistant that turns ideas into tasks

https://zenoapp.site/
2•CrazyCompiler01•1h ago•0 comments

Progress on defeating lifetime-end pointer zapping

https://lwn.net/Articles/1038757/
1•pykello•1h ago•0 comments

Wealth AI – Your Personal AI CFO That Understands Every Rupee You Spend

https://www.sideprojectors.com/project/67099/wealthai
2•WoWSaaS•1h ago•0 comments

Nutrition Beliefs Are Just-So Stories

https://www.cremieux.xyz/p/nutrition-beliefs-are-just-so-stories
4•smnthermes•1h ago•1 comments

Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity

https://arxiv.org/abs/2510.01171
1•jdnier•1h ago•0 comments

Is the Serenibrain EEG headband the best alternative to the Muse headband?

https://ihnnk.tech/pages/mindfulness-meditation-system
1•lijunshi•1h ago•0 comments

Rotring NC-Scriber CS 100 (1990)

https://archive.org/details/rotring-nc-scriber-cs-100-1990
3•gregsadetsky•1h ago•0 comments

How I bootstrapped a platform with a team of LLMs

https://alyx.substack.com/p/how-i-bootstrapped-a-platform-with
2•larakerns•1h ago•0 comments

Show HN: We built the first comprehensive benchmark for legal retrieval

https://huggingface.co/blog/isaacus/introducing-mleb
1•ubutler•1h ago•0 comments

Should scientists be allowed to edit animals' genes? Yes say conservation groups

https://www.nbcnews.com/science/science-news/animals-genetic-engineering-iucn-conservation-groups...
1•jnord•1h ago•0 comments

Is Postgres read heavy or write heavy?

https://www.crunchydata.com/blog/is-postgres-read-heavy-or-write-heavy-and-why-should-you-care
130•soheilpro•1d ago

Comments

alberth•7h ago
Odd that OLTP wasn’t mentioned in the article.

Postgres is an OLTP database, and OLTP databases are designed for write-heavy workloads.

That being said, I agree most people have read-heavy needs.

da_chicken•7h ago
I disagree. I think the only people that have read-heavy needs are big data and data warehouses. AI being hot right now doesn't mean big data is the only game in town.

Most applications are used operationally or have a mix of reads and writes. Even in applications where users can only consume the content present there, just tracking page history often captures enough data to make the workload relatively write heavy.

withinboredom•6h ago
Hmmm. Not really. Yes, everything is a mix, but for applications, it very much is on the read-heavy side. Think about how many queries you have to do just to display an arbitrary page. You might, maybe, just maybe, net 2-3 writes vs. hundreds of reads. If that starts to balance out, or even flip, then you probably need to rethink your database as you start to exit traditional db usage patterns. But <30% writes is not write-heavy.
da_chicken•4h ago
I am thinking about that. I don't think most data is read that often in an OLTP system.

I think a very small amount of data is read very often. However, until your DB gets very large, that data is going to end up as data pages cached in memory. So that data is extremely cheap to read.

I also think a significant amount of data that is generated in an OLTP system is written and never read, but you still had to pay the cost to write it. If you have an audit log, chances are you never need to look at it for any one piece of data. But you definitely had to write all the metadata for it.

But I'm also assuming that writes are at least 10 times as expensive as reads. More so if what you're modifying has indexes, since indexes are often functionally identical to a partial copy of the entire table. Indeed, I think that 10 times mark is conservative. Most RDBMSs use transaction logging and some kind of locking on writes. There's data validation and integrity checks on inserts and updates (and deletes if you have foreign keys).

I think 1 write to 10 reads is still write-heavy.
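(Taking that 10x factor at face value — it is an assumption, not a measurement — the cost-weighted split is easy to check with plain SQL arithmetic:

    -- 1 write at ~10x read cost vs. 10 reads at 1x:
    -- writes account for half the total I/O cost
    SELECT 1 * 10.0 / (1 * 10.0 + 10 * 1.0) AS write_cost_share;

So under that assumption a 1:10 write-to-read ratio already means writes dominate half your I/O budget.)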

bigiain•1h ago
> I think 1 write to 10 reads is still write-heavy.

Pretty easy to tune the supplied SQL query to suit your opinion.

Pretty sure you just need to tweak the 2nd line

ratio_target AS (SELECT 5 AS ratio),

hinkley•5h ago
I think read replicas disagree with that pretty strongly.

The write traffic may be very write heavy, but then you have many, many users who need to see that data. The question is whether the database or a copy of the data from the database is what services that interest.

If you mediate all reads through a cache, then you have split the source of truth from the system of record. And then the read traffic on the system of record is a much lower ratio.

developper39•7h ago
Very useful, and it is clear that the author knows what he is talking about. Nice intro to Pg18 too.
gdulli•7h ago
Is a ball red or green? How long is a piece of string?
Rendello•7h ago
How thick is WAL?

https://youtu.be/PvDBGqEykvc?t=7

phalangion•6h ago
Did you read the article? It’s about how to tell if your database is read or write heavy.
johncolanduoni•5h ago
I think a large part of what people are responding to here is the title, which comes off as something someone who doesn't actually understand the nature of a database workload would write. It may be a simple typo, but "Is YOUR Postgres Read Heavy or Write Heavy?" is the question that can have an answer. "Is Postgres More Appropriate for Read Heavy or Write Heavy workloads?" would also be fine, but it would be a totally different article from the written one.
lysace•7h ago
Insipid text.

Also: HN needs to upgrade its bot upvoting detection tech. This is embarrassing. It was proper ownage of the HN #1 position for like 15 minutes straight. And then like #2-3 for an hour or so.

akerl_•6h ago
The rules are pretty clear that you should not do this.

https://news.ycombinator.com/newsguidelines.html

zug_zug•6h ago
Off topic, but I do feel like there is a significant number of things that mysteriously get to frontpage with 12-40 upvotes, zero comments, and then sit there getting no more upvotes / comments for like 20 minutes.

Personally, I agree that it's possible to detect this better, that doing so would drastically improve the quality of this site if that weren't the meta, and that it's something that should be openly discussed (in terms of practical suggestions).

lysace•6h ago
It is so incredibly obvious when you see it, yes.
add-sub-mul-div•6h ago
They don't care what gets ranked where other than their own recruitment and other submissions, for which this site exists.
Waterluvian•5h ago
I don’t think this holds up to higher order thinking. On the surface, sure that makes sense.

But then who is left to look at the recruitment ads if the quality of the content, comments, and community degrades enough that everyone stops coming?

All I know is that pretty much nobody here knows enough about the whole picture to have a meaningfully informed opinion. So a lot of these opinions are going to be driven by their imagination of the whole picture.

jagged-chisel•6h ago
> When someone asks about [database] tuning, I always say “it depends”.

Indeed. On your schema. On your usage. On your app. On your users.

mikepurvis•5h ago
If it didn’t depend they’d just make the “tuned” value the default.
acscott•4h ago
Exactly. The configurable parameters exist because nobody has automated them, since what you want to optimize for might be different from what an automaton would choose.
rednafi•6h ago
Despite using CTEs, I found the first query quite impenetrable. Could be because I don’t spend that much time reading non-trivial SQL queries.

I’ve been mostly using the `pg_stat_statements` table (the second technique) to find out whether my workload is read or write heavy, it’s plenty good in most situations.
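(For reference, a minimal sketch of that technique — it requires the `pg_stat_statements` extension to be installed and loaded, and the exact thresholds you care about are up to you:

    -- Rough shared-buffer read vs. write block counts across all statements
    SELECT
        sum(shared_blks_hit + shared_blks_read)        AS blocks_read,
        sum(shared_blks_dirtied + shared_blks_written) AS blocks_written
    FROM pg_stat_statements;

Comparing the two sums gives a quick first answer to "read heavy or write heavy?" without the article's longer query.)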

teej•5h ago
pg_ system tables aren’t built for direct consumption. You typically have to massage them quite a bit to measure whatever statistic you need.
Cupprum•6h ago
Surprising amount of downvoted comments under this article. I wonder why.
Normal_gaussian•5h ago
At the time of writing the query has a small error. The filter is checking for reads and writes, but it should be reads or writes.

    WHERE
     -- Filter to only show tables that have had some form of read or write activity
    (s.n_tup_ins + s.n_tup_upd + s.n_tup_del) > 0
    AND
     (si.heap_blks_read + si.idx_blks_read) > 0
 )
Should be OR
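Applied, the corrected filter reads:

    WHERE
     -- Filter to only show tables that have had some form of read or write activity
    (s.n_tup_ins + s.n_tup_upd + s.n_tup_del) > 0
    OR
     (si.heap_blks_read + si.idx_blks_read) > 0
 )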
J_McQuade•5h ago
This, as a few other commenters have mentioned, is a terrible article.

For a start, the article does not mention any other database. I don't know how you can say something is read or write heavy without comparing it to something else. It doesn't even compare different queries on the same database. Like, they just wrote a query and it does a lot of reads - so what? There's nothing here. Am I going mad? Why does this article exist?

acscott•5h ago
A little context may help. Maybe a better headline for the article would have been, "How Can You Determine if Your PostgreSQL Instance's Workload Is Read-Heavy or Write-Heavy?" It's useful to know in order to optimize settings and hardware for your workload, as well as to know whether an index might be useful. Most major DBMSs have some way to answer this question; the article is aimed at PostgreSQL only.
moomoo11•4h ago
This article quality makes me not trust the company.
wirelesspotat•4h ago
Agree with other commenters that the title is a bit confusing and should be renamed to something like "Is your Postgres workload read heavy or write heavy?"

But title aside, I found this post very useful for better understanding PG reads and writes (under the hood) and how to actually measure your workload

Curious if the tuning actions are any different if you're using a non-vanilla storage engine like AWS Aurora, GCP AlloyDB, or Neon?

spl757•3h ago
iotop?
tonyhart7•2h ago
ram heavy
SteveLauC•2h ago
Regarding write-heavy workloads, especially for Postgres, I think we really need to distinguish between INSERTs and UPDATEs, because every update to a tuple in Postgres duplicates the whole tuple due to its MVCC implementation (if you use the default heap storage engine).
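(One way to gauge how much of that duplication your workload triggers is the HOT-update counters in the standard statistics views — a sketch; HOT updates at least avoid the extra index-entry writes:

    -- Share of updates that were HOT (heap-only, no index-entry rewrite)
    SELECT relname, n_tup_upd, n_tup_hot_upd,
           round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 1) AS hot_pct
    FROM pg_stat_user_tables
    ORDER BY n_tup_upd DESC;

A low hot_pct on an UPDATE-heavy table means each update is paying the full tuple-copy-plus-index cost.)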
brightball•1h ago
One thing that catches people by surprise is that read heavy workloads can generate heavy writes.

Queries that need to operate on more data than fits in the allocated working memory will write to a temporary table on disk, in some cases perform an operation on that temporary table like sorting the whole thing, and finally, after it's done, delete it, which is even more disk-write stress.

It's not really about whether it's read heavy or write heavy; it's about whether its usage creates disk I/O stress.

You can write millions of integer increments, and while technically that's "write heavy", there's no stress involved because you're just changing a value in a defined space that's already been allocated. Updating space that is more dynamic, like growing a TEXT or JSON field frequently, is a different story.
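(Those spills are visible in the cumulative statistics — a sketch against the stock pg_stat_database view, which counts temporary files written when queries exceed work_mem:

    -- Temp files written by queries that spilled to disk
    SELECT datname, temp_files, temp_bytes
    FROM pg_stat_database
    WHERE datname = current_database();

A read-only workload with large temp_bytes is exactly the "reads generating writes" case described above.)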

scottcodie•1h ago
I've spent my entire career developing databases (oracle, cassandra, my own database startup). Knowing if your workload is read or write heavy is one of the first questions when evaluating database choice, and is critical for tuning options. I would give this article hate just because it feels partially written by AI and the title needs a possessive 'your' in it, but its core ideas are sound and frame the issue correctly.
spprashant•1h ago
The thing we really strive for with Postgres is to keep the UPDATE traffic as low as possible. Because of MVCC, table bloat and the subsequent vacuum jobs will kill your IO even further. This means designing the applications and data model so that most write traffic is INSERT, with occasional UPDATEs that cannot be avoided. If you know you are going to have an UPDATE-heavy table, be sure to set the fillfactor on the table ahead of time to optimize for it.

Also, in my experience "Faster SSD Storage" point applies to both read and write heavy workloads.
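(The fillfactor suggestion as a concrete sketch — the table name is hypothetical, and 70 is just an illustrative value that leaves roughly 30% of each heap page free so updated tuple versions can land on the same page:

    -- Reserve per-page free space so updates can stay on the same heap page
    ALTER TABLE orders SET (fillfactor = 70);

This only applies to pages written afterwards, so it's best set before the table fills up.)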

AtlasBarfed•11m ago
Does anyone disagree that if it isn't a Log Structured Merge Tree engine, you aren't write-heavy?