What is your local machine? Max parallelism is often a function of CPU count, so knowing how many cores the box has is useful, and might explain why more tests between 20 and 30 are needed.
There's an old rule of thumb that tasks which alternate between CPU and IO usage, such as compiler jobs, often benefit from running 2 tasks per core, as the second task can make about 50% more progress per second. Your results are awfully similar to that rule of thumb, so I'm wondering: is this an 8-core box? 10? 12?
> If the buffer has reached its maximum size, it blocks writes until the buffer has been flushed, to properly exert backpressure on the application.
I cannot think of a situation where I could convince my team to let 100k inserts pool up in memory, waiting for a process crash or a deployment to wipe them out, but having a single buffer is also a mistake. io_uring is a refinement of a very old idea of having a read and a write buffer that swap back and forth between two threads or tasks.
You write to one for a while, then hand it off to someone else to process and clear it, and write to the other while that is happening. You only block when both are full. I suspect you're seeing some of the fruits of this in the COPY example, if you're creating a new file for each batch.
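In Go terms, the swap looks roughly like this -- just a sketch of the general idea with invented names, not anything from the article:

package buffer

import "sync"

// Hypothetical double buffer: writers append to the "front" slice while a
// background flusher drains the "back" one; the two swap on each flush, and
// writers only block when the front is full and the back is still being flushed.
type doubleBuffer struct {
	mu       sync.Mutex
	notFull  *sync.Cond
	front    [][]any // currently being written to
	back     [][]any // currently being flushed
	maxRows  int
	flushing bool
}

func newDoubleBuffer(maxRows int) *doubleBuffer {
	d := &doubleBuffer{maxRows: maxRows}
	d.notFull = sync.NewCond(&d.mu)
	return d
}

func (d *doubleBuffer) write(row []any) {
	d.mu.Lock()
	for len(d.front) >= d.maxRows && d.flushing {
		d.notFull.Wait() // only block when both buffers are occupied
	}
	d.front = append(d.front, row)
	d.mu.Unlock()
}

// swapAndFlush is meant to be called by a single background goroutine,
// e.g. on a 10ms ticker or whenever the front buffer fills up.
func (d *doubleBuffer) swapAndFlush(flush func([][]any)) {
	d.mu.Lock()
	d.front, d.back = d.back[:0], d.front // writers continue on the drained buffer
	d.flushing = true
	rows := d.back
	d.mu.Unlock()

	flush(rows) // e.g. COPY the batch into Postgres

	d.mu.Lock()
	d.flushing = false
	d.notFull.Broadcast()
	d.mu.Unlock()
}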
Good point! I didn't mention this in the article, but when we use this strategy we either buffer messages consumed from a message queue, or, when writing from a handler, let the buffer's max capacity slow the rate of writes to exert backpressure on the client (a variation on the built-in flow control in tools like RabbitMQ), with a maximum context timeout.
Also, the buffers don't pool 100k rows at a time -- the 100k rows were from the single-connection benchmarks (before adding buffers). It's important that buffers stay small, and unnecessary for them to be larger, as I mention later on.
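Roughly, the handler-side backpressure looks something like this (a simplified sketch with illustrative names and timeouts, not our exact code):

package buffer

import (
	"context"
	"errors"
	"time"
)

// Hypothetical bounded write buffer: when the channel is full, callers block
// until a background flusher drains it or their context deadline expires, so
// slow flushes translate directly into slower (or rejected) client writes.
type boundedBuffer struct {
	ch chan []any
}

func newBoundedBuffer(capacity int) *boundedBuffer {
	return &boundedBuffer{ch: make(chan []any, capacity)}
}

// maxWait is the "maximum context timeout" on the write path (illustrative value).
const maxWait = 500 * time.Millisecond

func (b *boundedBuffer) write(ctx context.Context, row []any) error {
	ctx, cancel := context.WithTimeout(ctx, maxWait)
	defer cancel()
	select {
	case b.ch <- row: // room in the buffer: accept the write
		return nil
	case <-ctx.Done(): // buffer stayed full too long: surface backpressure to the client
		return errors.New("write buffer full: backpressure applied")
	}
}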
Excited to try out PG 18 with built-in support for async i/o!
> What is your local machine? Max parallelism is often a function of CPU count, so knowing how many cores the box has is useful, and might explain why more tests between 20 and 30 are needed.
2023 MacBook Pro, Apple M3 Max chip.
There's a great Laurenz Albe article linked in the blog which gives the following formula for connections:
connections < min(num_cores, parallel_io_limit) / (session_busy_ratio * avg_parallelism)
The article: https://www.cybertec-postgresql.com/en/estimating-connection...
It's slightly more interesting on real data: I just checked against one of our other databases and it evaluates to 0.8, so we'll land at about 18 connections.
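To make the formula concrete with made-up numbers: on a 16-core box with a parallel IO limit of, say, 64, a session_busy_ratio of 0.4 and avg_parallelism of 2 give

connections < min(16, 64) / (0.4 * 2) = 16 / 0.8 = 20

so the pool would be capped at roughly 20 connections.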
I tried to make the testing framework easily accessible and tunable: https://github.com/abelanger5/postgres-fast-inserts
For example, to run the benchmark for an hour:
pg-inserts continuous copyfrom --duration 1h --batch-size 100 --max-conns 20 --writers 20 --flush-interval 10ms
We run load tests on the order of hours and sometimes days at Hatchet -- which tends to squeeze our AWS bill but it helps us see behavior that we wouldn't otherwise see.

CREATE TABLE tasks (
id BIGINT GENERATED ALWAYS AS IDENTITY,
created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,
args JSONB,
PRIMARY KEY (id)
);
Then Postgres (or any relational DB) is the wrong choice.

Is this how developers view a database in 2025? A place to store a JSON blob?
If you already have Postgres in place for other use-cases, using it for this purpose is a completely valid trade-off.
If transactionality is important, it’s a valid trade off.
If I don’t need what other data store would give me (perf, HA writes), why would I bother setting up and operating a completely separate system.
And even for non-relational use cases, Postgres is still a better option in many cases.
These queries had many joins and sometimes aggregates on huge tables, which took hours. Surprisingly, for most queries, I got a nearly linear speedup wrt core count (on a 32-core server) by running a separate query in parallel for separate ranges of pkeys. It was best to shard on a table that the query planner picked as the innermost or outermost one when you EXPLAIN the original query, because that wouldn't change the query plan.
Postgres at the time had no parallelization at all for a single query. It later gained it, though at first limited to seq scans, with more operations added later. So this method might still be advantageous today. It's not too useful for typical app backends where you want to query your data at one consistent timestamp, but I wonder if Postgres could build some feature that runs parallel queries at the same MVCC timestamp.
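Sketched in Go with made-up table and column names (not my original code, just an illustration of the pattern, assuming a Postgres driver registered with database/sql that uses $n placeholders):

package shard

import (
	"context"
	"database/sql"
	"sync"
)

// Hypothetical example: run the same aggregate over disjoint primary-key ranges
// in parallel and combine the partial results. Assumes a table
// events(id BIGINT PRIMARY KEY, amount BIGINT); each worker's connection gets
// its own snapshot, which is the consistency caveat mentioned above.
func parallelSum(ctx context.Context, db *sql.DB, minID, maxID int64, workers int) (int64, error) {
	span := (maxID-minID)/int64(workers) + 1
	var (
		wg       sync.WaitGroup
		mu       sync.Mutex
		total    int64
		firstErr error
	)
	for i := 0; i < workers; i++ {
		lo := minID + int64(i)*span
		hi := lo + span - 1
		wg.Add(1)
		go func(lo, hi int64) {
			defer wg.Done()
			var partial sql.NullInt64
			err := db.QueryRowContext(ctx,
				`SELECT SUM(amount) FROM events WHERE id BETWEEN $1 AND $2`, lo, hi).Scan(&partial)
			mu.Lock()
			defer mu.Unlock()
			if err != nil && firstErr == nil {
				firstErr = err
			}
			total += partial.Int64
		}(lo, hi)
	}
	wg.Wait()
	return total, firstErr
}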
The final thing I was considering doing was to implement a dynamic batch/concurrency scheduler. Something like hill climbing or AIMD, where the system tries to increase the batch size and concurrency until a sweet spot is found. It seems crazily unscientific to me for a system to hard-code the concurrency and batch size when said system may be running under different load at any time, not to mention on different hardware in the future.
Has anyone here found a nice algorithm for this? There seem to be a number of options.
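A crude AIMD loop along these lines is roughly what I had in mind -- a generic Go sketch, with arbitrary step sizes and names:

package batcher

import "time"

// Crude AIMD controller for batch size: grow additively while the observed
// flush latency stays under a target, back off multiplicatively when it
// doesn't. Thresholds and step sizes here are arbitrary placeholders.
type aimdBatcher struct {
	batchSize     int
	minBatch      int
	maxBatch      int
	targetLatency time.Duration
}

// observe is called after every flush with the latency it took.
func (a *aimdBatcher) observe(flushLatency time.Duration) {
	if flushLatency <= a.targetLatency {
		a.batchSize += 10 // additive increase
		if a.batchSize > a.maxBatch {
			a.batchSize = a.maxBatch
		}
	} else {
		a.batchSize /= 2 // multiplicative decrease
		if a.batchSize < a.minBatch {
			a.batchSize = a.minBatch
		}
	}
}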
Something else not mentioned in the article is that you can significantly increase insert performance by dropping indexes first and creating them after. That's normally not possible, of course, unless you are creating a table/partition just for the one insert job. I wish Postgres had a batch mode where you could say "here are all my rows, insert them and only build indexes when I commit".
When we rewrote our inserts system, we opted to just ensure that the buffers flush immediately on the first write, which keeps writes fast for anything on the order of < 1k writes/s with constant load over the second (10 buffers, 10ms flush interval) -- a pretty good baseline for our system. This is simpler and has been working really well for us, so there hasn't been a need to reintroduce any sort of exponential scaling yet, but I'll be experimenting more with this.
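In sketch form, the flush-on-first-write behavior is roughly this (heavily simplified, with invented names -- not our actual implementation):

package buffer

import "sync"

// Simplified sketch: a write that lands in an empty buffer is flushed
// immediately, so at low write rates every insert goes out right away;
// under load, writes accumulate while a flush is in flight and go out
// together on the ticker (e.g. every 10ms) or when the buffer fills up.
type flushBuffer struct {
	mu       sync.Mutex
	rows     [][]any
	maxRows  int
	flushing bool
	flush    func(rows [][]any) // e.g. a COPY into Postgres
}

func (b *flushBuffer) write(row []any) {
	b.mu.Lock()
	wasEmpty := len(b.rows) == 0
	b.rows = append(b.rows, row)
	full := len(b.rows) >= b.maxRows
	busy := b.flushing
	b.mu.Unlock()
	if (wasEmpty || full) && !busy {
		b.flushNow() // flush right away on first write or when full
	}
}

// flushNow is also called by a ticker goroutine on the flush interval.
func (b *flushBuffer) flushNow() {
	b.mu.Lock()
	if b.flushing || len(b.rows) == 0 {
		b.mu.Unlock()
		return
	}
	b.flushing = true
	rows := b.rows
	b.rows = nil
	b.mu.Unlock()

	b.flush(rows)

	b.mu.Lock()
	b.flushing = false
	b.mu.Unlock()
}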
That's better than nothing, of course, but I would very much like to also be able to ramp down if the throughput suddenly drops (e.g. due to competing activity).
IIRC I got something like 70k rows/second on Reddit dump data for an extended (couple of hours) run using a similar approach with Scala on my i5-6600k (4 cores), with a JSON API that accepted 1 post at a time and batched internally. I think I used something like 2-4 workers, batch size 128, max flush of 5ms. No COPY (the library I was using didn't have support for that, though I think it's not too bad to do manually with JDBC? I don't remember).
If it is possible to take your table offline for the insert, then you'll get a 10-100x speedup by deleting all the indexes first and recreating them afterwards.
You should also make sure the relevant Postgres session variables give the job enough memory and workers.
WAL writes during inserts have a massive impact on insert performance too, so look at settings like synchronous_commit and checkpoint_timeout. Creating the table as UNLOGGED will also disable WAL writes for that table, giving a massive speedup.
Also, recreating your indexes sequentially can be faster than building them in parallel. And look out for CONCURRENTLY in index creation: the upside is that it allows DB operations during the build, but the downside is that it is much slower and risks failure, leaving an invalid index behind.
Probably something along the lines of this - which also shows how to set default parallel workers for a table so you don't need your queries to keep setting that Postgres session variable.
DO $$
DECLARE
table_name TEXT := 'your_table_name'; -- Replace with your table name
schema_name TEXT := 'public'; -- Replace with your schema
data_file TEXT := '/path/to/your/data.csv'; -- Replace with your data file path
index_info RECORD;
index_sql TEXT;
BEGIN
-- 1. Store definitions of the indexes we are about to drop. Skip indexes that
--    back constraints (e.g. the primary key); those cannot be dropped directly.
CREATE TEMP TABLE index_definitions AS
SELECT i.indexname, i.indexdef
FROM pg_indexes i
WHERE i.schemaname = schema_name AND i.tablename = table_name
AND NOT EXISTS (
SELECT 1 FROM pg_constraint c
WHERE c.conindid = format('%I.%I', i.schemaname, i.indexname)::regclass
);
-- 2. Drop those indexes
FOR index_info IN SELECT indexname FROM index_definitions LOOP
EXECUTE format('DROP INDEX %I.%I', schema_name, index_info.indexname);
RAISE NOTICE 'Dropped index: %', index_info.indexname;
END LOOP;
-- 3. Optimize PostgreSQL for bulk loading (non-sysadmin settings only)
-- Memory settings
SET maintenance_work_mem = '1GB'; -- Increase for faster index creation
SET work_mem = '256MB'; -- Increase for better sort performance
-- WAL settings
SET synchronous_commit = OFF; -- Delay WAL flushes; risks losing the most recent commits on a crash
-- Note: checkpoint_timeout cannot be changed per-session; set it in postgresql.conf
-- (or via ALTER SYSTEM) if you want less frequent checkpoints during the load
-- Worker/parallel settings
SET max_parallel_workers_per_gather = 8; -- Increase parallel workers
SET max_parallel_workers = 16; -- Maximum parallel workers
SET effective_io_concurrency = 200; -- Better IO performance for SSDs
SET random_page_cost = 1.1; -- Optimize for SSD storage
-- 4. Set parallel workers on the target table
EXECUTE 'ALTER TABLE ' || schema_name || '.' || table_name || ' SET (parallel_workers = 8)';
-- 5. Perform the COPY operation
EXECUTE 'COPY ' || schema_name || '.' || table_name || ' FROM ''' || data_file || ''' WITH (FORMAT CSV, HEADER true)';
-- 6. Rebuild all indexes (using the stored definitions)
FOR index_info IN SELECT * FROM index_definitions LOOP
index_sql := index_info.indexdef;
RAISE NOTICE 'Recreating index: %', index_info.indexname;
EXECUTE index_sql;
END LOOP;
-- 7. Drop temporary table
DROP TABLE index_definitions;
RAISE NOTICE 'Data loading completed successfully';
END $$;