
The Collapse of GPT

https://cacm.acm.org/news/the-collapse-of-gpt/
4•pseudolus•12m ago•0 comments

Pallene: A statically typed ahead-of-time compiled sister language to Lua, with

https://github.com/pallene-lang/pallene
3•todsacerdoti•13m ago•0 comments

The Connoisseur of Desire

https://www.nybooks.com/articles/2025/05/29/the-connoisseur-of-desire-the-annotated-great-gatsby/
4•samclemens•16m ago•0 comments

AI job alerts that match your skills

https://jobsphere.onboardai.net/
2•Gurnoorsb•19m ago•2 comments

They Were Identical 'Twinnies' Who Charmed Orwell, Camus and More

https://www.nytimes.com/2025/05/04/books/review/the-dazzling-paget-sisters-ariane-bankes.html
7•lermontov•20m ago•1 comments

AI Food Detection and Free Calorie Counter App by Recipe

https://whatthefood.io
1•OdehAhwal•23m ago•0 comments

Nord Stream 2 Enters Debt Restructuring Deal with Creditors

https://oilprice.com/Latest-Energy-News/World-News/Nord-Stream-2-Enters-Debt-Restructuring-Deal-with-Creditors.html
1•PaulHoule•26m ago•0 comments

Better air quality is the easiest way not to die

https://dynomight.net/air/
3•haltingproblem•26m ago•0 comments

Jane Street-Millennium Trade Secrets Fight Ends in Settlement (2024)

https://www.bnnbloomberg.ca/business/international/2024/12/05/jane-street-millennium-settle-india-options-trade-secrets-case/
1•walterbell•30m ago•0 comments

TalwarAI: A suite of autonomous security agents

https://test.pypi.org/project/talwarai-beta/
1•handfuloflight•37m ago•0 comments

Apple Says Fortnite for iOS Isn't Blocked Worldwide, Just the U.S.

https://www.macrumors.com/2025/05/16/apple-fortnite-ios-not-blocked-worldwide/
4•smileybarry•42m ago•1 comments

Typograph: Prompt to Font

https://typograph.studio/en
1•handfuloflight•45m ago•0 comments

Harvard bought a Magna Carta copy for $27. It turned out to be an original

https://www.usatoday.com/story/news/nation/2025/05/15/harvard-magna-carta-1300/83643266007/
4•rmason•51m ago•0 comments

Core War

https://en.wikipedia.org/wiki/Core_War
4•michalpleban•52m ago•0 comments

Reddit is down

2•tom1337•53m ago•2 comments

Yeast-Based LLM Research

1•daly•53m ago•0 comments

Berkshire Hathaway Inc Q4 2024 vs. Q1 2025 13F Holdings Comparison

https://13f.info/13f/000095012325005701-berkshire-hathaway-inc-q1-2025
1•kamaraju•54m ago•0 comments

How to Split Ranges in C++23 and C++26

https://www.cppstories.com/2025/ranges_split_chunk/
2•ibobev•59m ago•0 comments

Leica M10 Battery Teardown and Reverse Engineering

https://tokilabs.co/tech/
1•k2enemy•1h ago•0 comments

The Digital Panopticon Nightmare

https://www.thedissident.news/the-digital-panopticon-nightmare/
4•anigbrowl•1h ago•0 comments

Ask HN: Did anyone else see an avalanche of old email appear in Gmail?

1•DamnInteresting•1h ago•0 comments

Party Till the Break of 10 P.M

https://www.nytimes.com/2025/05/15/style/earlybirds-club-dance-party.html
2•whack•1h ago•0 comments

On-Demand: AI Agent Automation

https://on-demand.io/
1•handfuloflight•1h ago•0 comments

U.S. Loses Last Triple-A Credit Rating

https://www.wsj.com/economy/central-banking/u-s-loses-last-triple-a-credit-rating-bfcbae5d
9•mudil•1h ago•1 comments

FDA clears first blood test for diagnosing Alzheimer's

https://www.statnews.com/2025/05/16/alzheimers-fujirebio-fda-approval/
1•pseudolus•1h ago•0 comments

Really Really Simple "Pure CSS" Squircles

https://gist.github.com/pouyakary/136fafc75a14abd867e0100856add5a0
3•pmkary•1h ago•0 comments

After HTTPS: Indicating Risk Instead of Security (2019)

https://scholarsarchive.byu.edu/etd/7403/
2•transpute•1h ago•0 comments

Our Idea of Happiness Has Gotten Shallow

https://www.nytimes.com/2025/05/03/magazine/happiness-history-living-well.html
1•gmays•1h ago•0 comments

Dept Homeland Security in vetting process for immigrant reality TV show

https://www.cnn.com/2025/05/16/politics/dhs-vetting-immigrant-reality-tv-show
3•jeffwass•1h ago•0 comments

HMPL v3.0: Small template language for displaying UI from server to client

https://github.com/hmpl-language/hmpl/releases/tag/3.0.0
3•todsacerdoti•1h ago•0 comments

The fastest Postgres inserts

https://docs.hatchet.run/blog/fastest-postgres-inserts
39•abelanger•9h ago

Comments

pwmtr•8h ago
Really appreciated the author's persistence in sticking with PostgreSQL. There are many specialized solutions out there, but in the end they usually lack PostgreSQL's versatility and battle-testedness.
hinkley•6h ago
> On my local machine, we saturate the usefulness of more connections at around 20 connections

What is your local machine? Max parallelism is often a function of CPU count, so knowing how many cores the box has is useful, and might explain needing to do more tests between 20 and 30.

There's an old rule of thumb that tasks that alternate between CPU and IO usage, such as compiler jobs, often benefit from running 2 tasks per core, as the second task can make about 50% more progress per second. Your results are awfully similar to that rule of thumb, so I'm wondering: is this an 8-core box? 10? 12?

> If the buffer has reached its maximum size, it blocks writes until the buffer has been flushed, to properly exert backpressure on the application.

I cannot think of a situation where I could convince my team to allow 100k inserts to be pooled up waiting in memory for a process to crash or a deployment to wipe it out, but having a single buffer is also a mistake. io_uring is a refinement of a very old idea of having a read and a write buffer that swap back and forth between two threads or tasks.

You write to one for a while, and then you hand it off to someone to process it and clear it, and you write to another while that is happening. You only block when both are full. I suspect you’re seeing some of the fruits of this in the COPY example, if you’re creating a new file for each batch.
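
Roughly, that hand-off looks like this (a toy sketch with made-up types, not the article's code):

    package main

    import (
        "context"
        "sync"
    )

    // Row is a placeholder for whatever a buffered insert looks like.
    type Row []any

    // doubleBufferedWriter: the producer appends to the active buffer while a
    // background goroutine flushes the other one; writers only block when the
    // active buffer fills up before the previous flush has finished.
    type doubleBufferedWriter struct {
        mu       sync.Mutex
        active   []Row
        handoff  chan []Row // unbuffered: a send waits until the flusher is free
        capacity int
    }

    func newDoubleBufferedWriter(ctx context.Context, capacity int, flush func([]Row) error) *doubleBufferedWriter {
        w := &doubleBufferedWriter{
            active:   make([]Row, 0, capacity),
            handoff:  make(chan []Row),
            capacity: capacity,
        }
        go func() {
            for {
                select {
                case <-ctx.Done():
                    return
                case buf := <-w.handoff:
                    _ = flush(buf) // e.g. a COPY of the batch; error handling elided
                }
            }
        }()
        return w
    }

    func (w *doubleBufferedWriter) Write(r Row) {
        w.mu.Lock()
        defer w.mu.Unlock()
        w.active = append(w.active, r)
        if len(w.active) == w.capacity {
            w.handoff <- w.active // blocks only while the other buffer is still being flushed
            w.active = make([]Row, 0, w.capacity)
        }
    }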

abelanger•6h ago
> I cannot think of a situation where I could convince my team to allow 100k inserts to be pooled up waiting in memory for a process to crash or a deployment to wipe it out, but having a single buffer is also a mistake. io_uring is a refinement of a very old idea of having a read and a write buffer that swap back and forth between two threads or tasks.

Good point! I didn't mention this in the article, but when we use this strategy we either buffer messages consumed from a message queue, or, if writing from a handler, the buffer's max capacity slows the rate of writes to exert backpressure on the client (an iteration of the flow-control strategy built into tools like RabbitMQ), with a maximum context timeout.

Also, the buffers don't pool 100k rows at a time -- the 100k rows were single-connection benchmarks (before adding buffers). It's important that buffers stay small, and unnecessary for them to be larger, as I mention later on.
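
As a sketch, the handler-side backpressure amounts to something like this (simplified, not our actual code; the timeout is a placeholder):

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // enqueue blocks the request handler when the insert buffer is full, so a slow
    // database slows clients down instead of letting memory grow without bound.
    func enqueue(ctx context.Context, buf chan<- []any, row []any) error {
        // Cap how long a single request is willing to wait for buffer space.
        ctx, cancel := context.WithTimeout(ctx, 500*time.Millisecond)
        defer cancel()

        select {
        case buf <- row: // space available: the write is accepted
            return nil
        case <-ctx.Done(): // buffer stayed full: surface the error to the client
            return fmt.Errorf("insert buffer full: %w", ctx.Err())
        }
    }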

Excited to try out PG 18 with built-in support for async i/o!

> What is your local machine? Max parallelism is often a function of CPU count, so knowing how many core the box has is useful, and might explain needing to do more tests between 20 and 30.

2023 MacBook Pro, Apple M3 Max chip.

There's a great Laurenz Albe article linked in the blog which gives the following formula for connections:

    connections < min(num_cores, parallel_io_limit) / (session_busy_ratio * avg_parallelism)
The article: https://www.cybertec-postgresql.com/en/estimating-connection...
hinkley•5h ago
So 14 cores vs 20 or 30 connections. What did you get when running that query?
abelanger•5h ago
With these queries, `session_busy_ratio` is just going to be 1, so it's most likely going to trivially evaluate to 14 assuming we're CPU bound on writes.

It's slightly more interesting on real data: I just checked against one of our other databases and it evaluates to 0.8, so we'll land at about 18 connections.

hinkley•4h ago
Might be worth adding that to Further Reading.
natebrennand•6h ago
None of these tests seem to run for longer than a minute? Drawing conclusions from experiments like this is fraught.
abelanger•6h ago
OP here, I agree. The article is meant to be educational, but it also aims to set a lower bound on latency and an upper bound on throughput for Postgres writes, and to see how these trade off under different circumstances.

I tried to make the testing framework easily accessible and tunable: https://github.com/abelanger5/postgres-fast-inserts

For example, to run the benchmark for an hour:

    pg-inserts continuous copyfrom --duration 1h --batch-size 100 --max-conns 20 --writers 20 --flush-interval 10ms
We run load tests on the order of hours and sometimes days at Hatchet -- it squeezes our AWS bill, but it surfaces behavior we wouldn't otherwise see.
xzzx•5h ago
Would be interesting to see these comparisons with pipeline mode, since it seems like it would address a lot of the issues covered in this post
semiquaver•4h ago
In my experience pipelining inserts is not nearly as fast as COPY, although obviously it’s a lot more flexible.
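
For reference, the two paths look roughly like this with pgx v5 against the article's tasks table (a sketch; the payload shape is assumed):

    package main

    import (
        "context"

        "github.com/jackc/pgx/v5"
    )

    // insertBatched queues many INSERTs and sends them together, so the client
    // isn't paying one network round trip per row.
    func insertBatched(ctx context.Context, conn *pgx.Conn, payloads []string) error {
        batch := &pgx.Batch{}
        for _, args := range payloads {
            batch.Queue("INSERT INTO tasks (args) VALUES ($1)", args)
        }
        return conn.SendBatch(ctx, batch).Close()
    }

    // insertCopy streams the same rows with COPY, which also skips the
    // per-statement parse/bind/execute work on the server.
    func insertCopy(ctx context.Context, conn *pgx.Conn, payloads []string) error {
        rows := make([][]any, 0, len(payloads))
        for _, args := range payloads {
            rows = append(rows, []any{args})
        }
        _, err := conn.CopyFrom(ctx, pgx.Identifier{"tasks"}, []string{"args"}, pgx.CopyFromRows(rows))
        return err
    }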
SoftTalker•5h ago
If this is representative of your database:

    CREATE TABLE tasks (
        id BIGINT GENERATED ALWAYS AS IDENTITY,
        created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,
        args JSONB,
        PRIMARY KEY (id)
    );
Then Postgres (or any relational DB) is the wrong choice.

Is this how developers view a database in 2025? A place to store a JSON blob?

ceejayoz•5h ago
Postgres is a perfectly acceptable key/value store if you need it for that.
williamdclt•4h ago
It's trade-offs.

If you already have Postgres in place for other use-cases, using it for this purpose is a completely valid trade-off.

If transactionality is important, it’s a valid trade off.

If I don't need what another data store would give me (perf, HA writes), why would I bother setting up and operating a completely separate system?

And even for non-relational use cases, Postgres is still a better option in many cases.

VWWHFSfQ•4h ago
This is perfectly reasonable if you need document features in some tables. I would strongly prefer this over something like MongoDB.
tucnak•3h ago
Your example demonstrates how little of PostgreSQL syntax you actually know.
wewewedxfgdf•1h ago
Postgres has very sophisticated JSON querying and storage.
frollogaston•4h ago
A long time ago, I was working on a team that sorta misused Postgres as a data processing pipeline, so a lot of temp tables were involved. As the article said, COPY was by far the fastest way to get data in from outside. But the next problem after that was INSERT (SELECT ...) statements going from one stage to another.

These queries had many joins and sometimes aggregates on huge tables, which took hours. Surprisingly, for most queries, I got nearly a linear speedup wrt core count (on a 32-core server) by running separate queries in parallel for separate ranges of pkeys. It was best to shard on a table that the query planner picked as the innermost or outermost when you EXPLAIN the original query, because then the sharding wouldn't change the query plan.

Postgres at the time had no parallelization for a single query. It later got it, but limited at first to seq scans and a few other cases. So this method might still be advantageous today. It's not too useful for typical app backends where you want to query your data at one consistent timestamp, but I wonder if Postgres could build some feature that runs parallel queries at the same MVCC timestamp.
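
Something like this, roughly (a sketch with made-up table names; each shard runs the same INSERT ... SELECT over a disjoint pkey range on its own connection):

    package main

    import (
        "context"

        "github.com/jackc/pgx/v5/pgxpool"
        "golang.org/x/sync/errgroup"
    )

    // shardedInsertSelect splits one big INSERT ... SELECT into parallel queries
    // over disjoint pkey ranges, which parallelizes the stage-to-stage copy even
    // when the planner won't parallelize the single query.
    func shardedInsertSelect(ctx context.Context, pool *pgxpool.Pool, minID, maxID, shards int64) error {
        g, ctx := errgroup.WithContext(ctx)
        step := (maxID - minID + shards) / shards // round up so the last range covers maxID
        for lo := minID; lo <= maxID; lo += step {
            lo := lo // per-goroutine copy of the loop variable
            g.Go(func() error {
                _, err := pool.Exec(ctx, `
                    INSERT INTO stage_b (id, payload)
                    SELECT a.id, a.payload
                    FROM stage_a a
                    JOIN lookup l ON l.id = a.lookup_id
                    WHERE a.id >= $1 AND a.id < $2`,
                    lo, lo+step)
                return err
            })
        }
        return g.Wait()
    }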

atombender•3h ago
I've been optimizing inserts recently, and I've already reached the limit here (i.e. already batching in parallel etc.).

The final thing I was considering doing was to implement a dynamic batch/concurrency scheduler. Something like hill climbing or AIMD, where the system tries to increase the batch size and concurrency until a sweet spot is found. It seems crazily unscientific to me for a system to hard-code the concurrency and batch size when said system may be running under different load at any time, not to mention on different hardware in the future.

Has anyone here found a nice algorithm for this? There seem to be a number of options.
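
The rough AIMD shape I have in mind, for what it's worth (placeholder numbers, nothing tuned):

    package main

    import "time"

    // aimdBatcher grows the batch size additively while flushes stay under a
    // latency target, and cuts it multiplicatively as soon as they don't, so the
    // batch size keeps re-probing for the current sweet spot instead of being
    // hard-coded for one machine and one load level.
    type aimdBatcher struct {
        size, minSize, maxSize int
        target                 time.Duration // acceptable per-flush latency
    }

    func (b *aimdBatcher) adjust(flushTook time.Duration) {
        if flushTook <= b.target {
            b.size += 100 // additive increase
            if b.size > b.maxSize {
                b.size = b.maxSize
            }
        } else {
            b.size /= 2 // multiplicative decrease
            if b.size < b.minSize {
                b.size = b.minSize
            }
        }
    }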

Something else not mentioned in the article is that you can significantly increase insert performance by dropping indexes first and creating them after. That's normally not possible, of course, unless you are creating a table/partition just for the one insert job. I wish Postgres had a batch mode where you could say "here are all my rows, insert them and only build indexes when I commit".

abelanger•3h ago
While we didn't look too closely into this, our buffers used to scale capacity based on a Fibonacci sequence if we hit capacity within the flush interval, up to a max capacity. I'm sure there's much better prior work on the ideal scaling laws here but this worked really well for us, and I'm sure any exponential would do fine.

When we rewrote our inserts system, we opted to just ensure that the buffers flush immediately on the first write, which makes writes fast for anything on the order of < 1k writes/s with constant load over the second (10 buffers, 10ms flush interval) -- a pretty good baseline for our system. This is simpler and has been working really well for us, so there hasn't been a need to reintroduce any sort of exponential scaling, but I'll be experimenting more with this.

atombender•2h ago
So you measure the throughput and slowly increase the batch size until the throughput stabilizes?

That's better than nothing, of course, but I would very much like to also be able to ramp down if the throughput suddenly drops (e.g. due to competing activity).

ndriscoll•3h ago
There's no need to wait for the flush interval to flush a full batch. Just do it. You'll get backpressure when your workers are all busy. Also, set your flush interval closer to your throughput for your batch size: e.g. if you can do 100k/s, plan to flush a batch of 100 every ~2 ms. Then when you're not at peak load, you don't really pay any extra latency for batching. As you approach peak load, you'll end up with full batches.
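
Sketched out, the loop is just (sizes made up):

    package main

    import (
        "context"
        "time"
    )

    // batchLoop flushes as soon as a batch fills; the ticker only matters under
    // light load, where it bounds how long a partial batch can sit around.
    func batchLoop(ctx context.Context, in <-chan []any, flush func([][]any)) {
        const batchSize = 100
        const flushEvery = 2 * time.Millisecond

        batch := make([][]any, 0, batchSize)
        ticker := time.NewTicker(flushEvery)
        defer ticker.Stop()

        for {
            select {
            case <-ctx.Done():
                if len(batch) > 0 {
                    flush(batch) // drain whatever is left on shutdown
                }
                return
            case row := <-in:
                batch = append(batch, row)
                if len(batch) == batchSize { // full batch: flush now, don't wait for the ticker
                    flush(batch)
                    batch = make([][]any, 0, batchSize)
                }
            case <-ticker.C:
                if len(batch) > 0 { // light load: flush the partial batch
                    flush(batch)
                    batch = make([][]any, 0, batchSize)
                }
            }
        }
    }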

IIRC I got something like 70k rows/second on reddit dump data for an extended (couple hours) run using a similar approach with Scala on my i5-6600k (4 cores) with a JSON API that accepted 1 post at a time and batched internally. I think I used something like 2-4 workers, batch size 128, max flush of 5 ms. No COPY (the library I was using didn't have support for that, though I think it's not too bad to do manually with jdbc? I don't remember)

wewewedxfgdf•1h ago
Nothing impacts Postgres insert performance more than index updates during insert.

If it is possible to take your table offline for the insert, then you'll get a 10 to 100X speedup by deleting all the indexes first and then recreating them.

You should also ensure you set all the Postgres session variables to have enough memory and workers on the job.

WAL writes during inserts have a massive impact on insert performance too, so you can look at settings like synchronous_commit and checkpoint_timeout. Creating the table as UNLOGGED will also disable WAL writes for that table, giving a massive speedup.

Also, recreating your indexes sequentially can be faster than doing them in parallel, and look out for CONCURRENTLY in index creation: its upside is that it allows DB operations during index creation, but it is much slower and risks failure.

Probably something along the lines of this, which also shows how to set default parallel workers for a table so your queries don't need to keep setting that Postgres session variable.

    DO $$
    DECLARE
        table_name TEXT := 'your_table_name';  -- Replace with your table name
        schema_name TEXT := 'public';          -- Replace with your schema
        data_file TEXT := '/path/to/your/data.csv'; -- Replace with your data file path
        index_info RECORD;
        index_sql TEXT;
    BEGIN
        -- 1. Store existing non-primary-key indexes for later recreation
        -- (pg_indexes.indexdef never contains the text 'PRIMARY KEY', so use
        --  pg_index.indisprimary to exclude the primary key index)
        CREATE TEMP TABLE index_definitions AS
        SELECT ci.relname AS indexname, pg_get_indexdef(ci.oid) AS indexdef
        FROM pg_index ix
        JOIN pg_class ci ON ci.oid = ix.indexrelid
        JOIN pg_class ct ON ct.oid = ix.indrelid
        JOIN pg_namespace ns ON ns.oid = ct.relnamespace
        WHERE ns.nspname = schema_name
              AND ct.relname = table_name
              AND NOT ix.indisprimary;

        -- 2. Drop all existing indexes (except the primary key)
        FOR index_info IN
            SELECT indexname FROM index_definitions
        LOOP
            EXECUTE 'DROP INDEX ' || quote_ident(schema_name) || '.' || quote_ident(index_info.indexname);
            RAISE NOTICE 'Dropped index: %', index_info.indexname;
        END LOOP;
        
        -- 3. Optimize PostgreSQL for bulk loading (non-sysadmin settings only)
        -- Memory settings
        SET maintenance_work_mem = '1GB';        -- Increase for faster index creation
        SET work_mem = '256MB';                  -- Increase for better sort performance
        
        -- WAL and checkpoint settings
        SET synchronous_commit = OFF;            -- Delay WAL flushes for this session
        -- (checkpoint_timeout is a server-level postgresql.conf setting and cannot be SET per session)
        
        -- Worker/parallel settings
        SET max_parallel_workers_per_gather = 8; -- Increase parallel workers
        SET max_parallel_workers = 16;           -- Maximum parallel workers
        SET effective_io_concurrency = 200;      -- Better IO performance for SSDs
        SET random_page_cost = 1.1;              -- Optimize for SSD storage
        
        -- 4. Set parallel workers on the target table
        EXECUTE 'ALTER TABLE ' || schema_name || '.' || table_name || ' SET (parallel_workers = 8)';
        
        -- 5. Perform the COPY operation
        EXECUTE 'COPY ' || schema_name || '.' || table_name || ' FROM ''' || data_file || ''' WITH (FORMAT CSV, HEADER true)';
        
        -- 6. Rebuild all indexes (using the stored definitions)
        FOR index_info IN SELECT * FROM index_definitions LOOP
            index_sql := index_info.indexdef;
            RAISE NOTICE 'Recreating index: %', index_info.indexname;
            EXECUTE index_sql;
        END LOOP;
        
        -- 7. Drop temporary table
        DROP TABLE index_definitions;
        
        RAISE NOTICE 'Data loading completed successfully';
    END $$;